As militaries around the world race to make their forces more efficient with artificial intelligence – an area of fierce rivalry between Beijing and Washington – AI experts say open-source models could be safer and more sustainable for defence use.
Panellists at the Singapore Defence Technology Summit on Wednesday argued against the common assumption that closed-source AI models were safer because they could prevent adversaries from knowing their true capabilities.
“When you talk about bad actors, I think it is an incorrect assumption – just because the model is closed-source – that those bad actors don’t have access to it,” said Rodrigo Liang, co-founder and CEO of SambaNova Systems, a California-based AI solutions company.
“I do think that it is important for us to distinguish the fact about commercial licensing of models versus the access of the model,” he said, adding that a different set of private data used to train a model would also make it unique and sovereign.
Pascale Fung, chair professor in the Hong Kong University of Science and Technology’s department of electronic and computer engineering, said that “we need open source – especially for different kinds of national security and similar applications … [as] they are inspectable”.
She said that when source codes of large language models were shared, “we can engage the entire AI community, including academic research, to make this model safer and more robust”.