42.5% of fraud attempts are now AI-driven

As cybercriminals increasingly turn to artificial intelligence to execute complex fraud schemes, the financial sector finds itself in a high-stakes battle to protect its customers and assets. New data from the 2024 report Battle Against AI-Driven Identity Fraud by Signicat reveals that AI-driven fraud now constitutes 42.5% of all detected fraud attempts in the financial and payments sector, marking a critical turning point for cybersecurity in the industry. Furthermore, an estimated 29% of those attempts are successful.

This report sheds light on the rapidly escalating threat posed by AI-enhanced fraud tactics, which include the use of deepfakes, synthetic identities, and sophisticated phishing campaigns. These advanced techniques allow fraudsters to operate at an unprecedented scale and level of sophistication.

Current trends in AI-driven fraud

  • 42.5% of detected fraud involves AI: more than two in five fraud attempts are now AI-driven, showcasing the growing sophistication and prevalence of these attacks.
  • 80% surge in overall fraud attempts: the financial sector has experienced an 80% increase in fraud attempts over the last three years, driven in part by the adoption of AI by fraudsters.
  • Only 22% of firms have implemented AI defences: despite the escalating risk, less than a quarter of financial institutions have taken action to deploy AI-driven fraud prevention measures, exposing a significant vulnerability.

A weak response: financial institutions behind on AI defences

Facing this evolving threat landscape, financial institutions are increasingly aware that traditional defences are proving insufficient against AI-powered attacks. The report, developed in collaboration with Consult Hyperion, urges companies to adopt AI-based detection systems, enhance cybersecurity frameworks, and foster greater industry collaboration to stay ahead of evolving fraud techniques.

According to Kasada’s 2024 State of Bot Mitigation report, 87% of respondents say their executive team is concerned about bot attacks and AI-driven fraud. Nevertheless, Signicat’s report reveals that three-quarters of respondents say they lack the expertise, resources and budget to tackle AI-driven identity fraud, suggesting that much of the financial sector remains unprepared for this threat.

The expertise gap: financial institutions struggling to keep pace

“Companies are of course putting in place defence mechanisms against AI-driven identity fraud, but the threat is growing,” states Pinar Alpay, Chief Product & Marketing Officer at Signicat.

“The acceleration of digitalisation we have seen in recent years has also made attacks more sophisticated and executed at scale. Mechanisms that worked a few years ago are no longer sufficient, and it is urgent that companies consider a multi-layered approach, combining, for example, electronic identities with risk analysis and, if required, step-ups. Only in this way can they strike the right balance between letting legitimate users through with less friction and introducing additional security measures when there is a risk.”

“With account takeover being one of the most common forms of identity fraud, secure and robust digital identity solutions also protect end-users and their accounts when logging in or accepting documents,” Alpay added.

The report emphasises the need for a proactive, multi-layered cybersecurity approach that integrates AI with traditional security measures. It further highlights the importance of educating employees and customers on the new threats AI poses in the evolving landscape of cybercrime.
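To make the multi-layered, risk-based approach described by Alpay and the report more concrete, the sketch below shows how a handful of login signals might feed a simple risk score that decides between frictionless access, a step-up challenge, and manual review. It is a minimal illustration only: the signal names, weights and thresholds are invented for the example and are not drawn from the report or from any Signicat product.

```python
# Illustrative only: a minimal risk-based step-up check. Signal names,
# weights and thresholds are hypothetical, chosen for readability.
from dataclasses import dataclass

@dataclass
class LoginSignals:
    device_known: bool        # device previously seen for this account
    geo_velocity_kmh: float   # implied travel speed since the last login
    liveness_score: float     # 0.0-1.0 result from a biometric/liveness check

def risk_score(s: LoginSignals) -> float:
    """Combine a few signals into a crude 0.0-1.0 risk score."""
    score = 0.0
    if not s.device_known:
        score += 0.4
    if s.geo_velocity_kmh > 900:          # faster than a commercial flight
        score += 0.4
    score += (1.0 - s.liveness_score) * 0.2
    return min(score, 1.0)

def required_action(s: LoginSignals) -> str:
    """Low risk passes with minimal friction; higher risk triggers a step-up."""
    r = risk_score(s)
    if r < 0.3:
        return "allow"                    # verified eID session is enough
    if r < 0.7:
        return "step_up"                  # e.g. re-authenticate with eID or OTP
    return "block_and_review"             # hand off to manual fraud review

# Example: an unknown device with an otherwise normal login triggers a step-up.
print(required_action(LoginSignals(device_known=False,
                                   geo_velocity_kmh=40.0,
                                   liveness_score=0.8)))
```

The point of such a design is the trade-off Alpay describes: most legitimate users see no extra friction, while additional checks are reserved for sessions whose combined signals look anomalous.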