How DeepSeek's Innovation and Nvidia's Stock Drop Impact AI Builders
Teams building with AI should understand why Nvidia's stock dropped on the launch of DeepSeek and what it means for their own products and solutions.
The day DeepSeek launched a chat technology competitive with OpenAI’s ChatGPT, Nvidia’s stock plunged 17%, erasing roughly $600 billion in market value overnight, in what Forbes called “The Biggest Market Loss in History.”
AI users and builders should pay close attention to this development. Why did Nvidia’s stock drop? Why is this news significant beyond GPU makers and investors? And what exactly is a GPU?
What is a GPU?
The Graphics Processing Unit (GPU) has been around for over 25 years. The term, credited to Nvidia, originally referred to hardware designed to accelerate complex mathematical computations required for graphical displays. Over time, GPUs have expanded beyond display purposes to become the backbone of AI computing, powering large-scale models like OpenAI’s ChatGPT. This critical role in AI infrastructure has propelled Nvidia to the forefront of AI-related valuations.
How DeepSeek Changed the Game
When DeepSeek was officially announced, it was revealed that the company spent only $6 million on its development. While many analysts believe this figure is a significant understatement, one point is widely accepted: DeepSeek achieved highly competitive accuracy benchmarks, rivaling or even surpassing GPT-4o and Claude 3.5, all while using significantly less computing power. The efficiency gains in AI are advancing so rapidly that costs are decreasing by roughly 75% year over year.
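To make that trend concrete, here is a small illustrative calculation. The 75% annual decline is the rough figure cited above, not a precise forecast, and the $100 starting cost is an arbitrary example:

```python
# If costs fall roughly 75% year over year, a unit of AI compute that
# costs $100 today would cost about this much in coming years:
cost = 100.0
for year in range(1, 4):
    cost *= 0.25  # a 75% reduction retains 25% of the prior year's cost
    print(f"year {year}: ${cost:.2f}")
# year 1: $25.00
# year 2: $6.25
# year 3: $1.56
```

Compounding at that rate, today’s cost structure is nearly unrecognizable within three years, which is exactly why long-range capacity plans built on current GPU prices are fragile.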
It didn’t take long for analysts to recognize the implications: Nvidia, which dominates the GPU market for AI workloads, may have been significantly overvalued.
The Good News for Humans and Product Companies
This leap in AI efficiency is a win for the planet. The enormous computing demands of AI pose a major challenge to reducing our carbon footprint. Large-scale data centers, essential for running AI models, have led tech giants like Google and Microsoft to miss or scale back their environmental goals. The pace of AI advancements has outstripped our ability to model their ecological impact, but the trajectory is concerning.
Reducing computing power while maintaining high-quality AI output is crucial in mitigating these risks. DeepSeek’s approach demonstrates that it’s possible. (For a technical deep dive into how DeepSeek achieved this, see the article referencing Apple’s analysis of DeepSeek’s reduced-compute methods, or the more detailed analysis by Khmaïess Al Jannadi.)
Another benefit is cost reduction. Lower compute demands translate into significantly cheaper AI processing. For instance, DeepSeek’s initial price was $0.14 per million tokens, compared to Claude’s $3.00 per million tokens. While some of this pricing reflects aggressive competition, it also highlights the lower computational expenses involved.
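A quick back-of-the-envelope sketch shows what that gap means at product scale. The prices are the per-million-token figures cited above, and the 500M-token monthly volume is a hypothetical workload chosen for illustration; real provider pricing varies by model, input vs. output tokens, and over time:

```python
def monthly_cost(tokens_per_month: int, price_per_million: float) -> float:
    """Dollar cost of a monthly token volume at a per-million-token price."""
    return tokens_per_month / 1_000_000 * price_per_million

volume = 500_000_000  # hypothetical: 500M tokens processed per month
print(f"DeepSeek: ${monthly_cost(volume, 0.14):,.2f}")  # $70.00
print(f"Claude:   ${monthly_cost(volume, 3.00):,.2f}")  # $1,500.00
```

At this volume the same workload differs by more than 20x in cost, which is the kind of delta that reshapes a product’s unit economics.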
What Product Teams Should Consider
At a high level, this event challenges our valuation models for AI-driven companies. Traditional business planning relies on year-over-year predictability, but the rapid pace of AI advancements raises an existential question: Can companies successfully plan investments when their core technology’s cost structure changes so drastically and unpredictably? The answer remains uncertain.
On a tactical level, product teams should ask themselves:
What are our environmental considerations when using AI, and how can we minimize our carbon footprint?
How will we architect our solution to prioritize lower-cost operations before consuming high-cost compute models?
How will we design our AI infrastructure to remain adaptable in a fast-moving market?
What should we consider in our contract commitments to third-party providers?
How will we ensure our product’s value proposition remains resilient against faster, cheaper competitors?
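The second question above, prioritizing low-cost operations before expensive compute, is often implemented as a model "cascade": try a cheap model first and escalate only when its answer fails a quality gate. This is a minimal sketch of that pattern; the tier functions and the quality gate here are hypothetical stand-ins for whatever providers and checks a real product would use:

```python
from typing import Callable

def cascade(prompt: str,
            tiers: list[Callable[[str], str]],
            is_good_enough: Callable[[str], bool]) -> str:
    """Return the first tier's answer that passes the quality gate;
    fall back to the last (most capable, most expensive) tier."""
    for model in tiers[:-1]:
        answer = model(prompt)
        if is_good_enough(answer):
            return answer
    return tiers[-1](prompt)

# Toy usage: the "cheap model" punts on hard questions, so only those
# escalate to the "premium model". Real gates might check length,
# self-reported confidence, or a verifier model's score.
cheap = lambda p: "" if "hard" in p else f"cheap answer to: {p}"
premium = lambda p: f"premium answer to: {p}"
gate = lambda a: len(a) > 0

print(cascade("easy question", [cheap, premium], gate))
print(cascade("hard question", [cheap, premium], gate))
```

Because the routing logic is isolated behind plain callables, swapping in a newer or cheaper provider later is a one-line change, which is one practical answer to the adaptability question as well.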
A Final Thought: The Geopolitical Risk of AI Dependence
While companies have long outsourced manufacturing to China for cost efficiency, the implications of shifting AI compute power to foreign-built models deserve careful consideration. AI models shape bias, truth, and perception—often in ways we’re only beginning to understand.
In my own tests with DeepSeek, I noticed some unsettling behavior. When I asked a general question about Uyghurs in China, the AI initially generated a response but then immediately deleted it, replacing it with: “I can’t talk about that.” It felt eerily like censorship.
As AI continues to evolve, companies must weigh not just cost and performance but also the broader ethical and geopolitical risks of their AI dependencies.