DeepSeek paper offers new details on how it used 2,048 Nvidia chips to take on OpenAI

Chinese artificial intelligence (AI) research lab DeepSeek has released a new research paper revealing in detail for the first time how it built one of the world’s most powerful open-source AI systems at a fraction of the cost of its competitors.


“Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures”, co-authored by DeepSeek founder Liang Wenfeng and released on Wednesday, attributes the start-up’s breakthrough in training high-performance, cost-efficient AI systems to a hardware-software co-design approach.

“DeepSeek-V3, trained on 2,048 Nvidia H800 GPUs, demonstrates how hardware-aware model co-design can effectively address these challenges, enabling cost-efficient training and inference at scale,” the researchers wrote. DeepSeek and its hedge fund owner High-Flyer had previously stockpiled the H800, a chip Nvidia originally designed for the China market to comply with US export restrictions but which was banned from export to the country in 2023.

The start-up’s training approach stemmed from the team’s awareness of hardware constraints and the “exorbitant costs” of training large language models (LLMs) – the technology behind AI chatbots such as OpenAI’s ChatGPT – according to the paper.

The paper details technical optimisations that boost memory efficiency, streamline inter-chip communication, and enhance overall AI infrastructure performance – key advancements for reducing operational costs while scaling capabilities. These offer a “practical blueprint for innovation in next-generation AI systems”, the researchers said.

DeepSeek also highlighted its use of a mixture-of-experts (MoE) model architecture, a machine-learning approach that divides an AI model into separate sub-networks, or experts, each focused on a subset of the input data while working collaboratively.
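To illustrate the general idea, the sketch below shows a minimal mixture-of-experts layer: a small router scores each token and sends it to only a few expert sub-networks, so compute per token stays modest even as total parameters grow. This is a generic, illustrative example only; the expert count, top-k routing, and dimensions are assumptions for demonstration and do not reflect DeepSeek-V3's actual implementation.

```python
# Minimal illustrative mixture-of-experts (MoE) layer.
# A router scores each token and dispatches it to the top-k experts;
# only those experts run for that token. Generic sketch, not DeepSeek's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is a small feed-forward sub-network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        # The router (gate) decides which experts see which tokens.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        scores = self.router(x)                         # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

tokens = torch.randn(16, 512)        # 16 tokens, hypothetical model width 512
print(MoELayer(512)(tokens).shape)   # torch.Size([16, 512])
```

The design trade-off this captures is the one the paper emphasises: total parameter count can be large (all experts combined) while the cost of any single forward pass stays close to that of a much smaller dense model, since each token touches only its top-k experts.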
