
DeepSeek V4 Launches as Biggest Open-Weight AI Model

Apr 24, 2026, 11:30 PM

Chinese AI lab DeepSeek has previewed DeepSeek V4, its newest large language model that the company says has nearly closed the gap with leading frontier models from OpenAI, Google, and Anthropic. The V4 Pro model has 1.6 trillion parameters — making it the largest open-weight model available anywhere — while costing a fraction of what competing closed-source models charge.

Two Models, One Architecture

DeepSeek released two preview versions: V4 Flash, a smaller model with 284 billion parameters, and V4 Pro, the full-scale model with 1.6 trillion parameters. Both use a mixture-of-experts architecture, which activates only a subset of parameters for each query, dramatically reducing inference costs: V4 Pro activates 49 billion parameters per query, while V4 Flash uses just 13 billion.
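The mixture-of-experts idea can be sketched in a few lines. The toy below is purely illustrative — the expert count, dimensions, and router weights are invented for the example and are not details of DeepSeek's actual architecture. A learned router scores the experts for each token and only the top-k experts run, so most of the model's parameters stay inactive on any given query:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes, chosen only for the demo.
n_experts, d_model, top_k = 8, 16, 2

# Each "expert" here is just a small weight matrix.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route a single token vector x through its top-k experts."""
    scores = x @ router_w                # one router score per expert
    top = np.argsort(scores)[-top_k:]    # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()             # softmax over the selected experts only
    # Only the selected experts' parameters participate in the computation.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape, f"active experts: {top_k}/{n_experts}")
```

In this sketch only 2 of 8 experts touch each token; scaled up, that same routing trick is how a 1.6-trillion-parameter model can serve a query while activating only 49 billion parameters.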

Both models support context windows of one million tokens — enough to process entire codebases or massive documents in a single prompt. However, they currently support text only, unlike many competing models that offer audio, video, and image capabilities.

The V4 Pro model is more than double the size of DeepSeek's previous V3.2, which had 671 billion parameters, and significantly larger than other open-weight competitors including Moonshot AI's Kimi K2.6 at 1.1 trillion and MiniMax's M1 at 456 billion.

Benchmark Performance

DeepSeek claims V4 Pro outperforms all open-source peers across reasoning benchmarks and beats OpenAI's GPT-5.2 and Google's Gemini 3.0 Pro on certain tasks. In coding-competition benchmarks, both V4 models perform comparably to GPT-5.4, according to the company's data.

However, the models fall slightly behind frontier models on knowledge tests, specifically GPT-5.4 and Google's Gemini 3.1 Pro. DeepSeek acknowledged the gap, estimating that it trails state-of-the-art frontier models by roughly three to six months, a remarkably small margin for an open-weight model that costs orders of magnitude less to use.

The Price War

The pricing is where DeepSeek makes its strongest statement. V4 Flash costs $0.14 per million input tokens and $0.28 per million output tokens. V4 Pro costs $0.145 per million input tokens and $3.48 per million output tokens. Both undercut every major frontier model on the market — including GPT-5.5, Claude Opus 4.7, and Gemini 3.1 Pro.

For developers and enterprise customers evaluating AI costs, the math is dramatic. A workload that costs hundreds of dollars per day on OpenAI or Anthropic could potentially cost single-digit dollars on DeepSeek — a difference that matters enormously at scale.
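Plugging the published rates into a back-of-the-envelope calculation makes the gap concrete. In the sketch below, the daily workload size and the frontier-model rates are hypothetical round numbers chosen for illustration — only the V4 Flash and V4 Pro rates come from DeepSeek's announcement:

```python
def llm_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Cost in dollars, where rates are $ per million tokens."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Hypothetical daily workload: 500M input tokens, 50M output tokens.
daily_in, daily_out = 500_000_000, 50_000_000

# V4 Flash: $0.14/M input, $0.28/M output (announced rates).
flash = llm_cost(daily_in, daily_out, 0.14, 0.28)

# V4 Pro: $0.145/M input, $3.48/M output (announced rates).
pro = llm_cost(daily_in, daily_out, 0.145, 3.48)

# A hypothetical frontier-priced model at $10/M input, $30/M output.
frontier = llm_cost(daily_in, daily_out, 10.0, 30.0)

print(f"V4 Flash: ${flash:,.2f}/day")   # $84.00/day
print(f"V4 Pro:   ${pro:,.2f}/day")     # $246.50/day
print(f"Frontier: ${frontier:,.2f}/day")  # $6,500.00/day
```

Under these assumed volumes the same workload costs tens of dollars on V4 Flash versus thousands on a frontier-priced API — the kind of multiple that reshapes procurement decisions at scale.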

The aggressive pricing reinforces a pattern DeepSeek established with its R1 reasoning model, which sparked a global sell-off in AI infrastructure stocks when it demonstrated that frontier-competitive capabilities could be achieved at a fraction of the cost that Western labs were spending.

The IP Theft Shadow

The launch arrives amid escalating tensions between the US and China over AI intellectual property. Just one day before DeepSeek's announcement, the US accused China of stealing American AI labs' IP on an industrial scale using thousands of proxy accounts. Both Anthropic and OpenAI have previously accused DeepSeek of distilling — essentially copying — their models by extracting knowledge through systematic API queries.

DeepSeek has denied these allegations, but the timing underscores the increasingly adversarial dynamic between Chinese and American AI companies. For US policymakers already debating AI export controls and chip restrictions, DeepSeek's continued rapid advancement raises uncomfortable questions about whether current restrictions are working.

Why Open-Weight Matters

DeepSeek V4's open-weight release means developers can download and run the models on their own infrastructure — a significant advantage for companies in regulated industries, government agencies, and organizations that cannot send data to third-party APIs for privacy or compliance reasons.

The combination of near-frontier performance, dramatically lower costs, and open-weight availability makes DeepSeek V4 a serious option for AI adoption in markets and use cases where Western frontier models are too expensive, too restrictive, or legally complicated to deploy.

The Bigger Picture

DeepSeek V4 represents a continuation of the trend that has defined the AI industry's most important story: the rapid narrowing of the gap between Chinese open-weight models and Western closed-source frontier models. If DeepSeek is genuinely only three to six months behind the leading models while charging a tiny fraction of their price, the business models of companies like OpenAI and Anthropic face fundamental pressure.

The question is no longer whether Chinese labs can compete with American frontier models. It is whether the performance gap is closing faster than Western labs can justify their pricing — and whether the hundreds of billions being invested in Western AI infrastructure can generate returns when a competitor offers near-equivalent capabilities at a fraction of the cost.

Amit Kumar

About Amit Kumar

Amit Biwaal is a full-stack AI strategist, SEO entrepreneur, and digital growth builder running a successful SEO agency, an eCommerce business, and an AI tools directory. As the founder of Tech Savy Crew, he helps businesses grow through SEO, AI-led content strategy, and performance-driven digital marketing, with strong expertise in competitive and restricted niches. He has also been featured in live podcast conversations on YouTube and has received industry recognition, further strengthening his profile as a modern growth-focused digital leader.
