DeepSeek Releases Open-Source V4 Models
Chinese AI firm DeepSeek has launched its V4 series of open-source models, including a 1.6-trillion-parameter version that performs competitively with leading closed-source systems at a fraction of the inference cost.
The News
On April 24, 2026, Chinese AI company DeepSeek released preview versions of its new V4 models, featuring the V4-Pro-Max (1.6 trillion parameters) and the V4 Flash (284 billion parameters). These mixture-of-experts (MoE) models are open-source and support a 1 million token context window. DeepSeek claims the models rival or surpass the performance of proprietary models like OpenAI's GPT-5.2 and Google's Gemini 3.0 Pro on certain reasoning and coding benchmarks. The models are priced aggressively to reduce inference costs and are compatible with Huawei's Ascend AI accelerators, reducing reliance on U.S. chipmakers.
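The low inference cost follows from the mixture-of-experts design: a gating network routes each token to only a few experts, so only a small fraction of the total parameters is active per token. The sketch below illustrates the general top-k MoE routing idea in Python; the expert count, k value, and gate scores are illustrative assumptions, not DeepSeek's actual configuration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_route(gate_logits, k=2):
    """Return the indices of the k highest-scoring experts and their
    renormalized mixing weights (the essence of sparse MoE routing)."""
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    z = sum(probs[i] for i in top)
    return top, [probs[i] / z for i in top]

# Toy example: 64 experts, top-2 routing. Each token then activates only
# ~2/64 of the expert parameters, which is why a 1.6T-parameter MoE model
# can be far cheaper to serve than a dense model of the same size.
num_experts = 64
logits = [0.0] * num_experts
logits[5], logits[17] = 3.0, 2.0  # pretend the gate prefers experts 5 and 17
chosen, weights = top_k_route(logits, k=2)
```

Here `chosen` is `[5, 17]` and `weights` sums to 1, so the token's output is a weighted blend of just two expert sub-networks while the other 62 stay idle.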
The OPTYX Analysis
The DeepSeek V4 release is a significant market signal against the thesis that state-of-the-art AI performance is exclusive to closed, proprietary systems. By open-sourcing models with massive parameter counts and efficient MoE architectures, DeepSeek is putting downward price pressure on the entire AI API market. The explicit support for Huawei's Ascend chips is a geopolitical and supply-chain event: it demonstrates a technically viable path to decoupling advanced AI development from the Nvidia-dominated hardware ecosystem. The move commoditizes access to high-performance AI, enabling new applications while threatening the margins of incumbent model providers.
Enterprise AI Impact
The availability of powerful, low-cost open-source models exposes a vendor-concentration risk for enterprises that rely exclusively on proprietary APIs from US-based providers. The immediate strategic action is to begin evaluating high-performing open-source alternatives such as DeepSeek V4 for non-sensitive workloads. CIOs and CMOs must develop a multi-provider model strategy to mitigate geopolitical risk and optimize for both performance and cost. This includes building internal expertise to fine-tune and securely deploy open-weight models, turning AI from a pure operational expense into a controllable, adaptable asset.
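A multi-provider strategy of the kind described above can start as a thin routing layer in front of the model catalog. The Python sketch below is a minimal illustration: the provider names, model identifiers, and per-token prices are hypothetical placeholders, not published figures, and a real deployment would add latency, quality, and compliance criteria.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelOption:
    provider: str
    model: str
    usd_per_mtok: float   # blended price per million tokens (illustrative)
    open_weights: bool    # can be self-hosted inside the enterprise boundary

# Hypothetical catalog -- names and prices are placeholders for illustration.
CATALOG = [
    ModelOption("deepseek", "deepseek-v4-flash", 0.30, open_weights=True),
    ModelOption("deepseek", "deepseek-v4-pro-max", 1.20, open_weights=True),
    ModelOption("openai", "gpt-5.2", 6.00, open_weights=False),
]

def route(sensitive: bool, catalog=CATALOG) -> ModelOption:
    """Pick the cheapest eligible model for a workload.

    Sensitive workloads are restricted to open-weight models that can be
    deployed behind the enterprise boundary; non-sensitive workloads may
    use any provider in the catalog.
    """
    eligible = [m for m in catalog if m.open_weights] if sensitive else catalog
    return min(eligible, key=lambda m: m.usd_per_mtok)
```

Calling `route(sensitive=True)` keeps regulated data on self-hostable open-weight models, while `route(sensitive=False)` simply optimizes cost across all providers, which is the core of the "non-sensitive workloads first" evaluation path described above.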