DeepSeek has not only boosted China’s confidence in its quest for leadership in the global artificial intelligence (AI) industry, but also rapidly transformed the domestic AI model landscape into a solar system, where the start-up becomes the sun around which a number of major planets revolve.
A year ago, China’s large language model (LLM) sector was so fragmented and crowded that the number of players was nearly uncountable. Big Tech firms and new start-ups raced to unveil their answers to ChatGPT in a dizzying competition involving over a hundred models.
DeepSeek, based in Hangzhou and owned by a low-profile hedge fund, has emerged as the unlikely winner. Shortly after releasing its low-cost, open-source V3 and R1 models, DeepSeek became the undisputed Chinese leader in fundamental AI model development. It has received endorsements from cloud service providers and chip developers, as well as government, corporate and individual users.
To some extent, DeepSeek faces no real competition, as it does not engage in traditional rivalry. It has made its products openly accessible to everyone. While many Chinese LLM developers benchmark their models against DeepSeek and claim superior capabilities, few consider themselves direct competitors.
In the DeepSeek era, China’s well-resourced tech giants have adapted by aligning their offerings with DeepSeek and developing their own models. Tencent Holdings on Friday unveiled its new AI reasoning model, Hunyuan T1, joining the ranks of Alibaba Group Holding’s QwQ, ByteDance’s Doubao and Baidu’s X1. Alibaba owns the South China Morning Post.
Meanwhile, many Chinese AI start-ups and their investors are rethinking their business models, and even their core purposes, in the wake of DeepSeek's rise. Lee Kai-fu, founder and CEO of 01.AI, told the Post that his company was leveraging DeepSeek's popularity to sell AI solutions to corporate clients rather than developing in-house pre-trained models, because the expenditure on training proprietary models had become increasingly difficult to justify.