
Chinese AI lab DeepSeek has launched two new reasoning-first AI models, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, expanding its suite of systems for agents, tool-use and complex inference.
Both the models and the accompanying technical report have been released as open source on Hugging Face.
The company announced on X that V3.2 is the official successor to V3.2-Exp and is now available across its app, web interface and API. The Speciale variant is offered only through a temporary API endpoint until December 15, 2025.
DeepSeek said V3.2 aims to balance inference efficiency with long-context performance, calling it “your daily driver at GPT-5 level performance.”
The V3.2-Speciale model, positioned for high-end reasoning tasks, “rivals Gemini-3.0-Pro,” the company said. According to DeepSeek, Speciale delivers gold-medal-level (expert human proficiency) results on competition benchmarks such as the IMO, CMO and ICPC World Finals.
The models expand DeepSeek’s agent-training approach, supported by a new synthetic dataset spanning more than 1,800 environments and 85,000 complex instructions. The company stated that V3.2 is its first model to integrate thinking directly into tool use, allowing structured reasoning to operate both within and alongside external tool calls.
Alongside the release, DeepSeek updated its API, noting that V3.2 maintains the same usage pattern as its predecessor. The Speciale model is priced the same as V3.2 but does not support tool calls. The company also highlighted a new capability in V3.2 described as “Thinking in Tool-Use,” with additional details provided in its developer documentation.
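Since DeepSeek exposes an OpenAI-compatible API, a request to V3.2 with a tool definition might look like the sketch below. The base URL and request shape follow DeepSeek's published developer documentation; the model identifier and the example tool schema are assumptions for illustration, so check the current docs for the exact V3.2 and Speciale names.

```python
# Minimal sketch of a tool-calling request against DeepSeek's
# OpenAI-compatible API. The model name "deepseek-reasoner" and the
# get_weather tool are illustrative assumptions, not confirmed V3.2 details.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # replace with your own key
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

# Hypothetical tool definition to exercise the tool-calling path.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed identifier; consult the docs for V3.2
    messages=[{"role": "user", "content": "What's the weather in Mumbai?"}],
    tools=tools,
)

# The message may contain interleaved reasoning and a tool call.
print(response.choices[0].message)
```

Because Speciale does not support tool calls, a request to that temporary endpoint would omit the `tools` parameter and send plain chat messages only.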
The company also recently released a new open-weight model, DeepSeekMath-V2. According to the AI lab, the model demonstrates strong theorem-proving capabilities in mathematics and achieved gold-level scores on the International Mathematical Olympiad (IMO) 2025.