Only one in four people in Singapore can distinguish between deepfake and legitimate videos, even though a majority say they are confident they can do so. This is one of the key findings of a survey released on July 2 by the Cyber Security Agency of Singapore (CSA).
Questions related to deepfakes were included for the first time in the 2024 edition of the Cybersecurity Public Awareness Survey, given the prevalence of generative artificial intelligence (AI) tools that make it easier to create fake content to scam unsuspecting victims.
A total of 1,050 respondents aged 15 and above were polled in October 2024 on their attitudes towards issues such as cyber incidents and mobile security, and their adoption of cyber-hygiene practices. Nearly 80 per cent said they were confident about identifying deepfakes, citing telltale signs such as suspicious content and unsynchronised lip movements.
But only a quarter of respondents could correctly distinguish between deepfake and legitimate videos when they were put to the test. “With cyber criminals constantly devising new scam tactics, we need to be vigilant, and make it harder for them to scam us,” said CSA’s chief executive David Koh. “Always stop and check with trusted sources before taking any action, so that we can protect what is precious to us.”
Compared with the previous iteration of the survey, conducted in 2022, more people knew what phishing was. When respondents were presented with a mix of phishing and legitimate content, 66 per cent were able to identify all the phishing content, up from 38 per cent in 2022. But only 13 per cent were able to correctly distinguish between all phishing and legitimate content, a drop from 24 per cent in 2022.
While such trends are concerning, they are not unexpected, said Mr Vladimir Kalugin, operational director of cyber-security firm Group-IB’s unified products. “This reflects the growing sophistication of scam tactics – for example, attackers now use AI to spoof well-known brands better and faster, adopt perfect grammar and mimic multi-factor prompts,” he said.
Fake phone numbers, stolen accounts of real individuals, and deepfakes of celebrities and politicians are often used, enhancing the trustworthiness of malicious links, he added.
“As the fake looks more like the real, even a more aware public faces greater difficulty making that final call.”
Mr Kalugin added that the growing inability to tell genuine messages from fake ones is eroding digital trust, causing small everyday decisions such as clicking links and paying bills to slow down or stop entirely. This threatens the efficiency of online services and digital economy goals.
According to the 2024 survey, there has been an increase in the installation of cyber-security apps and the adoption of two-factor authentication (2FA) over the years.
More respondents had installed security apps in 2024, with 63 per cent having at least one app installed, up from 50 per cent in 2022. The adoption of 2FA across all online accounts and apps also increased from 35 per cent in 2022 to 41 per cent in 2024.
In 2024, 36 per cent of respondents installed their mobile devices’ updates immediately, while 32 per cent preferred to update later. The proportion who chose not to update their devices remained low at 3 per cent, down from 4 per cent in 2022.