Hong Kong firms must take initiative on safe AI practices

As artificial intelligence (AI) develops rapidly, an increasing number of organisations are leveraging this technology to streamline operations, improve quality and enhance competitiveness.

However, AI also poses security risks that cannot be ignored, not least to personal data privacy. Organisations developing or using AI systems often collect, use and process personal data, creating risks such as excessive collection, unauthorised use and data breaches.

The importance of AI security has become a common theme in international declarations and resolutions adopted in recent years. In 2023, 28 countries, including China and the United States, signed the Bletchley Declaration at the AI Safety Summit in the UK. The declaration stated that misuse of advanced AI models could lead to catastrophic harm and emphasised the urgent need to address these risks.

In 2024, the United Nations General Assembly adopted an international resolution on AI, promoting “safe, secure and trustworthy” AI systems. At the AI Action Summit in Paris in February, more than 60 countries, including China, signed a statement emphasising that leveraging the benefits of AI for economic and societal growth depends on advancing AI safety and trust.

On technological and industrial innovation, China has emphasised both development and security. In 2023, the Chinese mainland launched the Global AI Governance Initiative, proposing principles such as taking a people-centred approach and developing AI for good.

More recently, in April, while presiding over a group study session of the Political Bureau, President Xi Jinping remarked that while AI presents unprecedented development opportunities, it also brings risks and challenges not seen before.
