TL;DR:
- DeepSeek postpones R2 model release due to technical setbacks with Huawei Ascend processors.
- Company reverts to Nvidia chips for training after repeated failures on Huawei hardware.
- Experts cite stability and software gaps as key hurdles for Chinese-made AI chips.
- Delay gives competitors more time to advance in China’s fast-moving AI race.
Chinese AI startup DeepSeek has pushed back the release of its next-generation R2 language model, citing unresolved performance issues with Huawei’s Ascend AI chips.
The launch, initially expected in May 2025, is now delayed after weeks of failed training runs on Huawei’s processors.
The decision comes despite government guidance urging AI firms to prioritize domestically produced chips over foreign alternatives. DeepSeek previously unveiled its R1 model in January 2025 and was encouraged to transition to Huawei’s hardware for R2 training as part of Beijing’s push for semiconductor self-reliance.
DeepSeek switches chips
According to sources familiar with the matter, persistent stability and connectivity problems emerged during R2 training on Huawei’s Ascend chips. Even with Huawei engineers stationed on-site to troubleshoot, DeepSeek was unable to complete a successful training cycle.
As a result, the company reverted to Nvidia GPUs for the heavy computational training phase, while continuing to use Huawei chips for inference, the less demanding process of running the model after training.
The setback highlights a broader industry challenge: despite years of government investment, Chinese chipmakers still trail foreign competitors. Analysts estimate China is two years behind in chip design and up to five generations behind in semiconductor manufacturing equipment.
Nvidia still leads
Market data underscores the scale of the gap. In 2024, Chinese firms purchased roughly 1 million Nvidia H20 chips, compared to 450,000 Huawei Ascend 910B units.
The performance gap between the two chip lines, especially in memory bandwidth, software compatibility, and processing speed, remains a key factor in purchasing decisions.
Huawei’s production capacity also poses a challenge. The company is projected to manufacture only 200,000 AI chips in 2025, limiting availability for large-scale training projects like DeepSeek’s R2.
Efficiency keeps pace
While hardware constraints have slowed progress, Chinese AI developers have made notable efficiency gains. The benchmark performance gap between leading US and Chinese AI models has shrunk dramatically, from 103 points in January 2024 to just 23 points by February 2025, despite the US holding roughly ten times more total computing power.
DeepSeek’s R1 model was an early example of this resource-efficient approach, delivering competitive results at a fraction of the cost of comparable US projects. Analysts suggest such algorithmic innovations could help Chinese firms remain competitive in global AI development even under ongoing chip shortages.
However, for now, the R2 delay gives rival AI companies a window to capture market share. DeepSeek has not announced a new release date but confirmed ongoing work to optimize the model for Huawei hardware.