AI-generated fraud surges with 90% of U.S. businesses targeted in 2024

Data from Trustpair and Basware reveal that 90% of U.S. businesses faced payment fraud attempts in 2024, with deepfake-driven business email compromise (BEC) scams accounting for 32% of incidents. Accounts payable (AP) departments are under siege: 47% of firms report losses of $10 million or more, and human error fuels 50% of breaches.

AI-generated fraud techniques, including deepfakes and BEC attacks, have reached alarming levels: Trustpair and Basware report that 90% of U.S. businesses encountered payment fraud attempts in 2024, and the FBI’s 2024 Internet Crime Report finds that deepfake-driven BEC scams now make up 32% of incidents, signaling a critical shift in the fraud landscape. AP departments face unprecedented risk, with 47% of firms suffering losses exceeding $10 million, while human error remains a significant vulnerability, contributing to 50% of breaches.

The Surge in AI-Generated Fraud

According to the FBI’s 2024 Internet Crime Report released on July 10, BEC scams using AI voice cloning increased by 135% year-over-year, costing businesses $2.9 billion in the first half of 2024 alone. The trend underscores the growing sophistication of fraudsters, who are leveraging generative AI tools to create highly convincing deepfakes and phishing content.

Routable, a leading AP automation platform, announced a $14 million Series B funding round on July 11 to expand its AI-powered solutions. The company is integrating real-time dark web monitoring to detect compromised vendor credentials, a critical feature as fraudsters increasingly exploit stolen data.

The AI vs. AI Battlefield

The UK’s National Cyber Security Centre (NCSC) issued a stark warning on July 12, revealing that state-aligned groups are using advanced AI models such as GPT-4o to create polymorphic phishing content that bypasses traditional email filters. This development marks a new frontier in the ‘AI vs. AI’ arms race, where defensive algorithms must constantly evolve to counter increasingly sophisticated attacks.

JPMorgan Chase reported during its Q2 earnings call on July 8 that it had blocked $150 million in deepfake-based CEO fraud attempts, thanks to improved multimodal AI detection systems. This highlights the critical role of AI in both perpetrating and preventing fraud.

Historical Context

The current wave of AI-driven fraud builds on a long history of technological advancements being exploited for criminal gain. In the early 2010s, the rise of mobile banking and digital payments led to a surge in phishing and identity theft. Similarly, the adoption of cloud computing in the late 2000s opened new vulnerabilities that fraudsters quickly exploited.

What sets the current trend apart is the speed and scale at which AI can generate convincing fraudulent content. Unlike previous fraud methods that required significant manual effort, AI allows criminals to automate and personalize attacks at an unprecedented level. This shift mirrors the broader transformation of cybersecurity from a reactive to a proactive discipline, where AI plays a central role in both offense and defense.

