The rise of DeepSeek’s artificial intelligence (AI) models is seen as providing some Chinese chipmakers, such as Huawei, a better chance to compete in the domestic market against more powerful U.S. processors.
Huawei and other Chinese firms have struggled for years to develop high-end chips that can compete with Nvidia’s offerings for training, the compute-intensive process of feeding data to algorithms so they learn to make accurate decisions.
However, DeepSeek’s models focus on “inference”, the stage at which a trained AI model produces conclusions, and optimise for computational efficiency rather than relying solely on raw processing power. That is one reason analysts expect the models to partly close the gap between what Chinese-made AI processors and their more powerful U.S. counterparts can do.
Integrating DeepSeek Further
Huawei and several Chinese AI chipmakers, including Hygon, Tencent-backed EnFlame, Tsingmicro and Moore Threads, have recently announced that their products will support DeepSeek models, though few details have been released. Huawei declined to comment. Moore Threads, Hygon, EnFlame and Tsingmicro did not respond to Reuters queries seeking further comment.
Industry executives are now predicting that DeepSeek’s open-source nature and low fees could boost the adoption of AI and the development of real-life applications for the technology, helping Chinese firms cope with U.S. export curbs that block their access to the most powerful chips.
Even before DeepSeek made headlines this year, products such as Huawei’s Ascend 910B were seen by customers, such as ByteDance, as better suited for less computationally intensive “inference” tasks.
In China, dozens of companies, from automakers to telecoms providers, have announced plans to integrate DeepSeek’s models with their products and operations.
“This development is very much aligned with the capability of Chinese AI chipset vendors,” said Lian Jye Su, a chief analyst at tech research firm Omdia.
“Chinese AI chipsets struggle to compete with Nvidia’s GPU (graphics processing unit) in AI training, but AI inference workloads are much more forgiving and require a lot more local and industry-specific understanding,” he said.