Huawei Technologies on Friday introduced open-source software that it claimed can significantly improve the utilisation rate of artificial intelligence chips, marking the latest effort by Chinese companies to achieve world-class AI training capabilities even without access to the latest Nvidia processors.
Flex:ai, which pools and orchestrates processors to squeeze out more performance, is part of Huawei’s effort to build a self-sufficient compute ecosystem that compensates for China’s shortcomings in individual chips.
Built on Kubernetes, an open-source platform, the software is an orchestration system for graphics processing units (GPUs), neural processing units (NPUs) and other accelerators from different chipmakers. It can slice a single card into multiple virtualised computing units, allowing several AI workloads to run in parallel, according to Huawei.
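Huawei has not published Flex:ai’s interfaces, but the minimal sketch below, written with the official Kubernetes Python client, illustrates how a workload might request one virtualised slice of an accelerator rather than a whole card. The extended resource name flexai.huawei.com/vnpu and the container image are assumptions made purely for illustration.

```python
# Hypothetical sketch: requesting a virtualised accelerator slice on Kubernetes.
# The resource name "flexai.huawei.com/vnpu" is an assumption, not Huawei's
# published API; real Flex:ai naming may differ.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="small-inference-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="worker",
                image="registry.example.com/llm-inference:latest",  # placeholder image
                # Ask for one virtual slice of a physical card rather than a whole
                # GPU/NPU, so several such pods can share one accelerator in parallel.
                resources=client.V1ResourceRequirements(
                    limits={"flexai.huawei.com/vnpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```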
The tool can raise processor utilisation by 30 per cent on average, the company said. A smart scheduler called Hi Scheduler handles GPU and NPU allocation across the cluster, and the system can also pool idle processors from different nodes and redistribute their compute capacity to AI tasks.
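Again as a sketch rather than documented Flex:ai behaviour, the snippet below shows the two standard Kubernetes hooks such a system could build on: reading each node’s allocatable accelerators, the raw information a cluster-wide scheduler needs in order to pool idle capacity, and routing a pod to a non-default scheduler. The scheduler name "hi-scheduler" is assumed; Huawei has not disclosed how Hi Scheduler registers itself.

```python
# Hypothetical sketch: surveying idle accelerator capacity across nodes and
# handing a pod to a non-default scheduler. "hi-scheduler" is an assumed name.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Walk every node and report how many accelerators it advertises as allocatable,
# the data a cluster-level scheduler can use to pool and redistribute idle capacity.
for node in core.list_node().items:
    allocatable = node.status.allocatable or {}
    accelerators = {name: qty for name, qty in allocatable.items()
                    if "gpu" in name or "npu" in name}
    print(node.metadata.name, accelerators)

# Routing a workload to a custom scheduler only requires naming it in the pod spec.
pod_spec = client.V1PodSpec(
    scheduler_name="hi-scheduler",  # assumed registration name
    containers=[client.V1Container(name="trainer",
                                   image="registry.example.com/train:latest")],
)
```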
Kubernetes – now a backbone of AI infrastructure – is an open-source system that coordinates large fleets of containerised applications. Containerisation packages an application together with its dependencies, making it easy to deploy and scale AI models across different servers and GPU clusters.
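For readers unfamiliar with the pattern, the generic example below (plain Kubernetes, not Flex:ai-specific) scales a containerised model server to three replicas and leaves the control plane to place them on whichever nodes have capacity; the image name is a placeholder.

```python
# Generic Kubernetes example (not Flex:ai-specific): scale a containerised
# model server to three replicas spread across the cluster's nodes.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="model-server"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "model-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "model-server"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="server",
                    image="registry.example.com/model-server:latest",  # placeholder
                )]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```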
“Smaller tasks rarely use up a single card’s capability, while bigger ones can’t be handled by just one, and parallel tasks make compute management harder,” said Zhou Yuefeng, vice-president and head of Huawei’s data storage product line, at a launch event on Friday at the company’s Lianqiuhu campus in Shanghai.
Flex:ai was designed to improve the usage of compute resources and is part of Huawei’s effort to speed up the “democratisation of AI” by unleashing the potential of AI infrastructure through open sourcing, he said.

