A team of researchers in China has questioned hospitals’ rapid adoption of DeepSeek, warning that the rush to deploy the artificial intelligence (AI) start-up’s cost-efficient open-source models creates clinical safety and privacy risks.
As of early March, at least 300 hospitals in China had started using DeepSeek’s large language models (LLMs) in clinical diagnostics and medical decision support.
The researchers warned that, despite the models’ strong reasoning capabilities, DeepSeek’s tendency to generate “plausible but factually incorrect outputs” could pose “substantial clinical risk”, according to a paper published last month in the medical journal JAMA. The team includes Wong Tien Yin, founding head of Tsinghua Medicine, a group of medical research schools at Tsinghua University in Beijing.
The paper was a rare voice of caution in China against the overzealous use of DeepSeek. The start-up has become the nation’s poster child for AI after its low-cost, high-performance V3 and R1 models captured global attention this year. DeepSeek did not immediately respond to a request for comment.
According to Wong, an ophthalmology professor and former medical director at the Singapore National Eye Centre, and his co-authors, healthcare professionals could become overreliant on or uncritical of DeepSeek’s output. This could result in diagnostic errors or treatment biases, while more cautious clinicians could face the burden of verifying AI output in time-sensitive clinical settings, they said.
While hospitals often choose private, on-site deployment of DeepSeek models over cloud-based solutions to mitigate security and privacy risks, this approach presents its own challenges. It “shifts security responsibilities to individual healthcare facilities”, many of which lack comprehensive cybersecurity infrastructure, according to the researchers.