Nvidia (NVDA) has seen its stock fall 12% from its peak, driven largely by a sell-off triggered by Chinese start-up DeepSeek's claim that it trained a competitive artificial intelligence (AI) model with a fraction of the computing resources typically used by major U.S. developers such as OpenAI.
Investors worried that DeepSeek's methods could influence other AI developers and reduce demand for Nvidia's high-performance graphics processing units (GPUs), widely regarded as the best hardware for developing AI models. However, those fears may be less warranted than they first appeared.
On February 4, Sundar Pichai, CEO of Alphabet (GOOG) (GOOGL), one of the biggest buyers of Nvidia's AI data center chips, made comments that may restore confidence among Nvidia's investors.
Image source: Nvidia.
The DeepSeek Development
Founded in 2023 by High-Flyer, a successful Chinese hedge fund with a history of developing AI trading algorithms, DeepSeek released its V3 large language model (LLM) in December 2024, followed by its R1 reasoning model in January. The two models' competitive performance has generated considerable interest within the tech community.
DeepSeek's models are open source, giving the industry insight into how it achieved such efficiency. The start-up says it trained V3 for approximately $5.6 million, a figure that excludes an estimated $500 million spent on chips and infrastructure; even combined, those sums are minuscule compared with the staggering amounts companies such as OpenAI have invested.
DeepSeek also worked within U.S. restrictions on exporting advanced chips to China – rules implemented to safeguard American AI leadership – by using less capable Nvidia GPUs, reportedly the H800, a slowed-down variant of the H100 designed for the Chinese market.
DeepSeek's achievements can be attributed to innovative software approaches, such as highly efficient algorithms and novel methods for feeding data into its models. It also employed a technique known as distillation, which uses the knowledge embedded in established large AI models to train smaller ones.
OpenAI has even accused DeepSeek of using its GPT-4o models to help train R1, by prompting ChatGPT at scale and learning from its outputs. Distillation can significantly accelerate training because it reduces the need for extensive data collection and processing, which in turn lowers the requirements for computing resources and GPUs.
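To make the idea concrete, here is a minimal sketch of knowledge distillation, assuming PyTorch is available. The teacher and student are toy stand-in networks, not anything resembling DeepSeek's or OpenAI's actual models: the small student learns to match the large teacher's output distribution rather than training from scratch on labeled data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a large "teacher" and a much smaller "student".
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's outputs into richer "soft labels"

for step in range(100):
    x = torch.randn(64, 32)  # stand-in for real training inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)  # soft labels
    student_log_probs = F.log_softmax(student(x) / T, dim=-1)
    # Train the student to match the teacher's distribution (KL divergence),
    # scaled by T^2 per the standard distillation recipe.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the teacher's soft labels carry far more signal per example than raw training data, the student can reach competitive quality with much less data and compute – which is precisely why the technique worried Nvidia investors.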
Considering these developments, investors remain wary that widespread adoption of such techniques by other AI developers could drastically diminish demand for Nvidia’s chips.
Nvidia’s Prospects for GPU Sales
Nvidia is set to announce its financial results for fiscal year 2025 on February 26, with Wall Street projecting total revenue of $128.6 billion, an astonishing 112% increase year over year. The data center segment is anticipated to account for about 88% of this revenue, driven by a surge in GPU sales.
According to Wall Street's consensus forecast (as reported by Yahoo), Nvidia could deliver a record $196 billion in total revenue in fiscal 2026. Sustaining that growth hinges on continued demand from AI developers, which is why investors are on edge following the DeepSeek announcement.
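For readers checking the math, the growth implied by those two consensus figures works out to roughly 52%; a quick back-of-the-envelope calculation:

```python
fy2025_revenue = 128.6  # billions of dollars, analyst estimate
fy2026_revenue = 196.0  # billions of dollars, analyst estimate
implied_growth = (fy2026_revenue / fy2025_revenue - 1) * 100
print(f"Implied fiscal 2026 growth: {implied_growth:.0f}%")  # about 52%
```

That would be a marked slowdown from 112%, but still exceptional growth for a company of Nvidia's size.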
While the H100 remains in high demand, Nvidia's newest GB200 GPU, built on its Blackwell architecture, offers AI inference up to 30 times faster than the H100. Inference is the phase in which a trained AI model processes live data – such as a user query – to generate an output.
The GB200 is currently regarded as the leading product in AI data centers, with demand outpacing supply upon its release to customers at the end of 2024.
Image source: Alphabet.
Insight from Sundar Pichai
Pichai spoke with Wall Street analysts on February 4 to discuss Alphabet's Q4 2024 results. In his remarks, he highlighted a considerable shift in how computing power has been allocated over the past three years, with an increasing share devoted to inference rather than training.
He noted that emerging reasoning models, such as DeepSeek's R1 and Alphabet's latest Flash Thinking models, will accelerate this trend. Because these models work through an extended "thinking" phase before generating responses, they require significantly more computing power at inference time than their predecessors. This approach, known as test-time scaling, lets models produce more accurate outputs without further pre-training scaling, which involves feeding ever-larger volumes of new data into models during training.
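One simple way to picture test-time scaling is self-consistency sampling: spend more compute on each query by sampling several independent answers and taking a majority vote. The toy simulation below assumes each sampled "reasoning path" is correct with 60% probability – an illustration of the compute-versus-accuracy trade-off, not how any particular model actually reasons.

```python
import random
from collections import Counter

def sample_answer(p_correct: float = 0.6) -> str:
    """Stand-in for one sampled reasoning path from a model."""
    return "correct" if random.random() < p_correct else random.choice(["wrong_a", "wrong_b"])

def majority_vote(n_samples: int) -> str:
    """Spend more inference compute: sample n paths, keep the most common answer."""
    votes = Counter(sample_answer() for _ in range(n_samples))
    return votes.most_common(1)[0][0]

random.seed(0)
for n in (1, 5, 25):
    accuracy = sum(majority_vote(n) == "correct" for _ in range(1000)) / 1000
    print(f"{n:>2} samples per query -> accuracy ~{accuracy:.2f}")
```

The key point for Nvidia: the extra accuracy is bought with extra inference-time computation – every additional sampled path is another pass through the model on a GPU.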
Meta Platforms CEO Mark Zuckerberg echoed these sentiments, suggesting that a decline in training-related workloads does not imply a reduced need for chips, as the capacity is simply transitioning towards inference.
Furthermore, Alphabet announced plans for substantial capital expenditures (capex) of $75 billion in 2025, primarily directed toward data center infrastructure and chips. That marks a significant rise from its 2024 capex of $52 billion, signaling no intention to cut back on investment.
In summary, the outlook for Nvidia's GPU demand appears robust, and with the stock trading below its peak, the recent downturn may present an attractive buying opportunity for investors.