Nearly all Hong Kong companies vulnerable to identity, deepfake attacks, survey finds

Nearly every Hong Kong company polled in a new survey has fallen victim to identity-related breaches such as phishing and deepfake attacks in the past year.

The 2024 Identity Security Threat Landscape Report, from Nasdaq-listed information security company CyberArk, showed that 98 per cent of the 100 Hong Kong companies surveyed admitted to facing such breaches in their systems, compared with 93 per cent of companies globally, underscoring the city’s ongoing vulnerability to such scams. CyberArk surveyed 2,400 companies in total, each with at least 500 employees.

The risk of identity-related breaches, in which attackers use someone else’s identity or authentication credentials to gain access to a system, is rising along with the adoption of cloud services and artificial intelligence (AI), according to CyberArk.

Phishing attacks – including those that use deepfake technology to imitate a person’s voice, known as vishing – remained the most prevalent, with 96 per cent of surveyed Hong Kong companies saying they were victims of such attacks.

“Organisations in Hong Kong need to adopt a holistic cybersecurity strategy to secure both human and machine identities to effectively defend themselves against cyberattacks,” said Sandy Lau, CyberArk’s district manager for Hong Kong and Macau.

CyberArk solution engineering director Billy Chuang (left) and district manager for Hong Kong and Macau Sandy Lau present the company’s latest survey findings on identity-related breaches, July 16, 2024. Photo: CyberArk

Lau said “inadequate security controls for machine identities” were one reason identity-related threats are so prevalent. Machine identities cover the software, services and automated processes that authenticate to systems without human involvement. Their use is expected to expand as companies adopt multiple cloud services that require third and fourth parties to have access to sensitive data, according to Lau.

System breaches can be costly and hard to measure, as some companies may not want to report attacks, particularly those that involve ransomware payments, to avoid public embarrassment or scrutiny.

The official Hong Kong Police Force tally for losses from scams last year came to HK$9 billion (US$1.16 billion) across nearly 39,000 cases.

Deepfakes, which use generative AI to digitally create a person’s likeness, are becoming increasingly hard to tell apart from real people as the technology advances and scammers become more sophisticated.

In January, London-based design and engineering firm Arup lost HK$200 million after one of its employees in Hong Kong transferred the money at the request of a deepfake of the company’s chief financial officer during a video meeting.

Still, 60 per cent of the surveyed Hong Kong companies told CyberArk they were confident their employees could identify deepfakes of their organisational leadership, while 97 per cent expressed concern that AI could have a negative impact on cybersecurity.

The growing adoption of AI also brings other challenges. Compromised AI models can lead to data leaks, and generative AI is fuelling malware and phishing attacks, according to Billy Chuang, CyberArk’s solution engineering director.

Chuang said that by pouring resources into chasing the latest tech trends such as AI, companies risk accruing “cyber debt” – a term referring to the compounding risks that result from neglecting security updates such as software patches.

“It’s about raising awareness of identity security,” Chuang said. While Hong Kong companies have done well in the past, there can never be too much education on identity security, he added.
