Nvidia’s advanced GPUs remain essential for large-scale Chinese AI training outside China.
Over the past few months, Chinese tech giants such as Alibaba and ByteDance have begun training their large language models outside China, primarily in data centers across Southeast Asia. The main reason is access to Nvidia's flagship AI chips, which the US now bans from export to China. By leasing capacity from foreign data-center operators where Nvidia chips are still available, these companies can train their models abroad. There are challenges and alternative efforts underway, however. DeepSeek, for example, stockpiled Nvidia chips before the restrictions and continues to run its models inside China while shifting toward domestic chipmakers such as Huawei, whose chips, according to reports, have not yet proven reliable enough for training large models.
US Chip Export Restrictions
In April 2025, the US tightened its export restrictions, notably covering Nvidia chips such as the H20 that Chinese companies had relied on for large AI models. The supply of these chips to China-based companies was subsequently limited or cut off. The simplest and legally cleanest way around the ban is to train AI models abroad, where the GPU power needed to build and train large language models (LLMs) remains accessible. Training demands intensive compute: many GPU hours, large datasets, fast networks, and more. Nvidia's AI GPUs are considered the industry standard for this task, while Chinese chips, such as Huawei's, have not yet fully reached that level. Some testing has been conducted, but issues with performance, network speed, and software tool support have been reported.
The Race to Maintain Rapid Growth
Chinese companies do not want to fall behind in the AI race, and building frontier models requires cutting-edge hardware at every step. This has made sourcing GPU capacity abroad a viable option. Alibaba has moved training for its new LLMs to Southeast Asian data centers, and ByteDance has reportedly shifted some of its model training abroad as well. DeepSeek is a slightly different case: because it stockpiled Nvidia chips before the restrictions, it currently runs its models domestically, and it is reportedly planning to rely on indigenous chips in the future.
Challenges and Limitations
Indigenous chips are weak: Domestic chipmakers like Huawei have not yet demonstrated the same reliable performance for training large, heavy-duty models as Nvidia's GPUs. Reported limitations include raw performance, networking speed, and software support.
Legal and ethical complications: When data and model training move abroad, issues such as data confidentiality, user privacy, and China's data-export rules arise. Some models are trained abroad while inference (serving clients) is done in China.
Sustainability and self-reliance remain elusive: Developing indigenous chips and indigenous AI infrastructure is essential if China is to fully break free from external dependence, but that path is not yet fully established.
Technological Momentum Maintained
With this move, Chinese companies can maintain the pace of AI development without being held back by a chip shortage. They are meeting their immediate needs by going abroad, which buys time to keep working on their own chips and technology and to transition toward a homegrown AI ecosystem. There is also a growing view that China wants to gradually develop its own chips and AI platforms and eliminate foreign dependence over time. This strategy could make the country self-reliant in the long run, but it does not by itself solve the harder challenges facing AI, such as understanding, general intelligence, emotions, and context.
Progress on those fronts does not come from GPU power alone; it requires data, algorithms, software, research, ethics, and extensive resources. China's chip-shift strategy has temporarily addressed the GPU shortage, but it does not ensure that "human-like" AI will arrive anytime soon. That will require global curiosity, research, open collaboration, ethics, and talent building.
Conclusion: Chinese AI Training Abroad
The move by Chinese tech giants to offshore AI model training is a strategic, immediate response to US chip restrictions. While it lets them keep using modern chips for the time being, it is a band-aid solution. If China wants to become self-reliant in AI over the long term and achieve deeply intelligent AI, akin to the human mind, it will need indigenous chips, research, data infrastructure, AI education, ethics, and open collaboration. Global AI competition will intensify, but the limitations will remain. This move shows that AI is no longer just a US-China conflict but a global political, economic, and technological contest. Export restrictions, data control, and chip ownership are constraints that will not be resolved quickly.