Shanghai-based artificial intelligence (AI) start-up MiniMax has launched an open-source reasoning model that it said requires just half the computing resources of rival DeepSeek’s models for some tasks.
On Tuesday, the company announced the release of MiniMax-M1, its first reasoning model, on its official WeChat account. M1 consumes less than half the computing power of DeepSeek-R1 for reasoning tasks with a generation length of 64,000 tokens or fewer, according to a technical paper released alongside the product.
“Compared with DeepSeek … this substantial reduction in computational cost makes M1 significantly more efficient during both inference and large-scale [model] training,” MiniMax researchers wrote in the report.
The new model comes as Chinese tech giants and start-ups race to develop advanced reasoning models – designed to “think” through a problem before responding – in a bid to catch up with DeepSeek, whose affordable R1 model drew global attention earlier this year. MiniMax referenced DeepSeek 24 times in its technical paper, underscoring the company’s ambition to challenge its Hangzhou-based rival, which has become the darling of China’s AI industry.
MiniMax cited third-party benchmarks showing that M1 matches the performance of leading global models from Google, Microsoft-backed OpenAI and Amazon.com-backed Anthropic in maths, coding and domain knowledge.
M1 is built on the 456-billion-parameter MiniMax-Text-01 foundational model and employs a hybrid mixture-of-experts architecture – a design that activates only a subset of the model’s parameters for each input, cutting computing costs – which is also used by DeepSeek. M1 also uses Lightning Attention, a technique that speeds up training, reduces memory usage and enables the model to handle longer texts.
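The mixture-of-experts idea can be sketched as a learned router that sends each token to only a few "expert" sub-networks, so most of the model's parameters sit idle on any given input. The following is a minimal illustrative sketch, not MiniMax's or DeepSeek's actual implementation; all names, shapes and the top-k value are hypothetical:

```python
import numpy as np

def top_k_gating(x, experts, gate, k=2):
    """Route one token through its top-k experts (toy example).

    x:       (d,) token embedding
    gate:    (d, n_experts) router projection
    experts: list of n_experts (d, d) expert weight matrices
    """
    logits = x @ gate                              # router score per expert
    top = np.argsort(logits)[-k:]                  # indices of the k best experts
    scores = np.exp(logits[top] - logits[top].max())
    scores /= scores.sum()                         # softmax over the selected experts
    # Only the chosen experts run, so per-token compute scales with k,
    # not with the total number of experts.
    return sum(s * (experts[i] @ x) for s, i in zip(scores, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate = rng.normal(size=(d, n_experts))
y = top_k_gating(x, experts, gate, k=2)
print(y.shape)
```

With four experts and k=2, half the expert parameters are untouched for this token; at the scale of a 456-billion-parameter model, the same routing trick is what keeps inference cost far below that of a dense model of equal size.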