Another Chinese quant fund joins DeepSeek in AI race with model rivalling GPT-5.1, Claude
Beijing-based Ubiquant launches code-focused systems claiming benchmark wins over US peers despite using far fewer parameters

Beijing-based Ubiquant said it released a series of open-source code-focused LLMs last week that outperformed leading closed-source models on multiple benchmarks despite using far fewer parameters. The IQuest-Coder-V1 family is designed for code intelligence, excelling at tasks such as automated programming, debugging and code explanation.
Despite their comparatively small size, Ubiquant's models have demonstrated elite-level performance across major programming benchmarks, according to the company's self-reported data.

The model scored 49.9 per cent on BigCodeBench, a benchmark that evaluates LLMs on practical and challenging programming tasks while controlling for data contamination, ahead of Gemini 3 Pro Preview's 47.1 per cent and GPT-5.1's 46.8 per cent.