
AI Is Changing the Face of Fraud - and Fraud Fighting

Banks Using AI to Spot Fraud, Create Synthetic Data for Better Predictive Analytics

Criminals are harnessing the power of artificial intelligence to change the face of fraud. In addition to horror stories such as that of the finance worker who was tricked into paying out $25 million by an AI-based voice clone of a company director, fraudsters are using AI to scale up campaigns and create convincing synthetic identities with fake social media profiles and manipulated images and audio.


Deloitte estimated that U.S. banks could lose $40 billion in the next three years to AI-powered scams.

While the criminals may have an advantage in the AI race, banks and other financial services firms are responding with heightened awareness and vigilance, and a growing number of organizations are exploring AI tools to improve fraud detection and response to AI-driven scams.

Harnessing AI for Fraud Programs

The banking sector is now on the lookout for a variety of targeted attacks using phishing and vishing tactics with realistic, personalized messages enhanced by generative AI and large language models. AI tools can replicate a person's voice with just a few seconds of sample audio and enable caller ID spoofing, which makes fraudulent calls appear to come from legitimate sources.

To improve detection of these scams, software company SAS is working on a pilot project to help financial services institutions use generative AI to analyze call center records of fraud claims to detect potential scammers. David Stewart, director of financial crimes and compliance at SAS, said a recent survey of over 1,000 fraud-fighting professionals shows that nearly 90% of respondents plan to add generative AI to their tool set by next year.

"Larger institutions are inundated due to spiraling scam volumes and need better intelligence to differentiate legitimate claims from false ones, so they can stem losses and help victims," Stewart told Information Security Media Group.

Most financial services organizations use historical and internal data to detect fraud patterns. This can limit controls to the types of events an institution has already experienced, meaning machine learning models catch fraudsters' constantly evolving methods only after customers have been scammed. Organizations also can miss broader fraud patterns occurring across different institutions or sectors, leading to potential blind spots in fraud programs.

Traditional machine learning models evaluate transactions as they happen based on historical events, but they struggle with predictive analytics - and with adapting quickly to new fraud trends. A big part of the problem is that static models require recalibration and tuning when new data sources are introduced, said David Barnhardt, strategic adviser to the fraud and anti-money laundering practice group at financial services technology research company Datos Insights, formerly Aite-Novarica Group.

Also, using only internal data can create a siloed view that omits broader fraud trends and patterns across products within the financial institution, he told ISMG.

To address the limitations of ML-based fraud models, financial services companies are beginning to use generative AI to create synthetic data that can help institutions conduct "what-if" analyses of emerging fraud risks.
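As a rough illustration of the idea, not of any vendor's actual tooling, the sketch below fits a simple statistical profile to a handful of real transaction amounts and then samples labeled synthetic transactions from it, injecting an assumed fraud pattern. Production systems would use far richer generative models and many more fields; every name, rate and multiplier here is hypothetical.

```python
import random
import statistics

def fit_profile(transactions):
    """Learn simple per-field statistics from real transactions (amounts only here)."""
    amounts = [t["amount"] for t in transactions]
    return {"mean": statistics.mean(amounts), "stdev": statistics.stdev(amounts)}

def generate_synthetic(profile, n, fraud_rate=0.05, seed=42):
    """Sample synthetic transactions; the fraudulent ones get inflated amounts
    to emulate an emerging high-value scam pattern (assumed, for illustration)."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        is_fraud = rng.random() < fraud_rate
        amount = rng.gauss(profile["mean"], profile["stdev"])
        if is_fraud:
            amount *= rng.uniform(5, 20)
        rows.append({"amount": round(max(amount, 0.01), 2), "is_fraud": is_fraud})
    return rows

# Hypothetical seed data standing in for real customer transactions
real = [{"amount": a} for a in (25.0, 40.0, 31.5, 60.0, 45.25, 38.0)]
profile = fit_profile(real)
synthetic = generate_synthetic(profile, 1000)
```

Because the synthetic rows carry labels but contain no real customer data, they can be used to stress-test detection strategies without the PII concerns the article describes.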

"The use of digital twins is an enhanced form of simulation that can measure the impact of different scenarios to determine how they would affect banks' operational fraud controls. The combination of synthetic data and simulation is a useful tool to help firms prepare for the unexpected," Stewart said.
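A minimal sketch of that "what-if" simulation idea: a Monte Carlo run measures how a fixed amount-threshold control would perform under two hypothetical scam waves of different severity. The prevalence, spend profile and threshold are all assumptions for illustration, not figures from the article.

```python
import random

def simulate_scenario(threshold, scam_multiplier, n=10_000, seed=7):
    """Monte Carlo 'what-if': measure a fixed amount-threshold control
    against a simulated scam wave with inflated transaction values."""
    rng = random.Random(seed)
    tp = fp = fn = tn = 0
    for _ in range(n):
        is_fraud = rng.random() < 0.02      # assumed 2% scam prevalence
        amount = rng.gauss(40, 15)          # assumed legitimate spend profile
        if is_fraud:
            amount *= scam_multiplier
        flagged = amount > threshold
        if flagged and is_fraud:
            tp += 1
        elif flagged:
            fp += 1
        elif is_fraud:
            fn += 1
        else:
            tn += 1
    detection_rate = tp / max(tp + fn, 1)
    false_alarm_rate = fp / max(fp + tn, 1)
    return detection_rate, false_alarm_rate

# Same control, two scenarios: a modest scam wave vs. an aggressive one
mild = simulate_scenario(threshold=120, scam_multiplier=3)
severe = simulate_scenario(threshold=120, scam_multiplier=10)
```

Comparing the two runs shows how the same operational control degrades or holds up as the simulated scenario changes, which is the point of pairing synthetic data with simulation.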

Stewart said one organization is using synthetic data to simulate certain types of high-profile risks. "It enables us to test strategies without the PII concerns involved with real customer data," he said. One of the organization's customers is also considering using the synthetic data generator as an alternative to staging a mirror image of its production environment. This approach addresses data privacy concerns and the costs associated with building a development environment with real data, he said.

"Ultimately, synthetic data allows us to test more challenging strategies, which results in improved detection rates and better coverage," Stewart added.

But even if a company is using synthetic data to train models, data governance cannot take a back seat, he said. Banks will need data stewards who understand the applicability of certain features, especially with respect to bias that might influence onboarding, credit decisioning and account closure decisions, Stewart said.

Stewart advocates using hybrid machine learning, which includes supervised and unsupervised techniques, to identify anomalous behaviors not previously monitored. "Anyone who isn't using device, geolocation and biometric markers as part of their digital fraud strategy is vulnerable to synthetic identities and first-party fraudsters," he said.
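The hybrid approach Stewart describes can be sketched in miniature: blend a supervised score (weights assumed to have been fit on labeled fraud outcomes) with an unsupervised anomaly signal that needs no labels, here a z-score of a geolocation feature against the customer's own history. All feature names, weights and the blend ratio are hypothetical.

```python
import math
import statistics

def zscore_anomaly(value, history):
    """Unsupervised signal: how far a session feature sits from the
    customer's own history (no fraud labels required)."""
    mu = statistics.mean(history)
    sd = statistics.stdev(history) or 1.0
    return abs(value - mu) / sd

def supervised_score(features, weights, bias):
    """Supervised signal: a logistic score whose weights are assumed to
    have been trained on historical, labeled fraud cases."""
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1 / (1 + math.exp(-z))

def hybrid_score(features, history, weights, bias, blend=0.5):
    """Blend both signals so novel behavior can surface even when the
    supervised model has never seen that fraud pattern."""
    anomaly = min(zscore_anomaly(features["login_distance_km"], history) / 4, 1.0)
    learned = supervised_score(features, weights, bias)
    return blend * learned + (1 - blend) * anomaly

# Hypothetical customer: past login distances (km) and two candidate sessions
history = [5, 12, 8, 6, 10]
weights = {"login_distance_km": 0.005, "new_device": 1.5}
risky = hybrid_score({"login_distance_km": 900, "new_device": 1},
                     history, weights, bias=-3.0)
routine = hybrid_score({"login_distance_km": 9, "new_device": 0},
                       history, weights, bias=-3.0)
```

The distant login on a new device scores far higher than the routine session, even though the unsupervised component never saw a fraud label, which is what makes the hybrid useful against previously unmonitored behaviors.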


About the Author

Suparna Goswami

Associate Editor, ISMG

Goswami has more than 10 years of experience in the field of journalism. She has covered a variety of beats, including the global macro economy, fintech, startups and other business trends. Before joining ISMG, she contributed to Forbes Asia, where she wrote about the Indian startup ecosystem. She has also worked with UK-based International Finance Magazine and leading Indian newspapers, such as DNA and Times of India.

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.
