Chinese artificial intelligence start-up DeepSeek has conducted internal evaluations on the “frontier risks” of its AI models, according to a person familiar with the matter.
The development, not previously reported, comes as Beijing seeks to promote awareness of such risks within China's AI industry. Frontier risks refer to the potential for advanced AI systems to pose significant threats to public safety and social stability.
The Hangzhou-based company, which has been China's poster child for AI development since it released its R1 reasoning model in January, evaluated the models' self-replication and cyber-offensive capabilities in particular, according to the person, who requested anonymity.
The results were not publicised, and it was not clear when the evaluations were completed or which of the company's models were involved. DeepSeek did not respond to a request for comment on Tuesday.

Unlike US AI firms such as Anthropic and OpenAI, which regularly publish the findings of their frontier risk evaluations, Chinese companies have not disclosed such details.