Aaron Painter
Contributor

When voice deepfakes come calling

Opinion
Dec 18, 2024 | 7 mins
Cybercrime | Security

What call center agents don’t know can hurt you.

Credit: PeopleImages.com - Yuri A / Shutterstock

Time was, a call center agent could be relatively secure in knowing who was at the other end of the line. And if they weren’t, multi-factor authentication (MFA), answers to security questions, and verbal passwords would solve the issue.

Those days are behind us, as deepfake audio and video are no longer just for spoofing celebrities. Voice deepfakes, in which a real person’s voice is cloned from recorded snippets of their speech, are among the biggest risks facing modern businesses and their call centers.

Deepfake attacks surged last year, and unlike email phishing, audio and video deepfakes don’t come with red flags like spelling errors or strange links. A recent survey showed that 86% of call centers are concerned about the risk of deepfakes, with 66% lacking confidence that their organization could identify them.

How fraudsters use audio deepfakes

1. Navigating IVR

According to analyses of call center deepfake attacks, a primary method favored by fraudsters is using voice deepfakes to move successfully through IVR-based authentication.

Fraudsters also had the answers to security questions and, in one instance, knew the account holder’s one-time password. Often, bots are involved in this process. Once the bot has achieved IVR authentication, it can obtain basic information like the bank balance to determine which accounts to mark for further targeting.

2. Changing account or profile details

By cloning customers’ voices, scammers are able to dupe call center agents into changing the emails, home addresses, and phone numbers associated with accounts, which then enables them to do everything from accessing customers’ one-time passwords to ordering new checks or cards.

This method of account takeover (ATO) is becoming more common as attackers attempt to bypass existing security measures. In a recent survey of call center organizations, nearly two-thirds of financial industry respondents said the majority of ATOs originate in the call center.

3. Social engineering attacks

Deepfakes significantly enhance the effectiveness of social engineering by making it much harder to distinguish bad actors from legitimate customers. One recent analysis found that fraudsters are not always trying to bypass authentication. Instead, they use a “basic synthetic voice to figure out IVR navigation and gather basic account information.” Once this is achieved, the threat actor calls back using their own voice to social engineer the agent.

Contact centers that use video verification calls might assume they’re safe, but fraudsters can now stream live deepfake video feeds that are indistinguishable from the real thing. These gangs operate globally: case in point, the Central Investigation Bureau (CIB) of Thailand issued a warning about call center gangs that are using AI to create deepfake videos, cloning faces and voices with alarming accuracy.

Why are contact centers vulnerable?

Today’s deepfakes are so good that they’re virtually indistinguishable from reality. Generative AI advancements have made it shockingly simple to quickly and realistically emulate the tone and likeness of someone’s voice, often for free.

A quick Google search turns up multiple sites offering “free AI voice cloning” in 60 seconds or less. All you need is a short recording of the person’s voice. In the case of OpenAI’s Voice Engine, just 15 seconds of audio is sufficient. And with so many homemade videos on social media, it’s not difficult to find those few seconds of someone’s voice online.

Contact center agents tend to believe they’re not a target, yet research indicates that call center attacks are rising. According to one recent industry survey, 90% of financial industry respondents reported an increase in call center fraud attacks, with one in five claiming attacks are up more than 80%.

Yet most contact centers lack effective tools to differentiate between fraudsters and real customers. Just as important, agents are often unaware of how realistic deepfakes can be.

Double jeopardy: fraudsters impersonating agents

Car dealership software provider CDK Global recently suffered two cyberattacks that caused a shutdown of its systems and disrupted car dealerships, which rely on CDK’s software for everything from inventory to financing. In the wake of this security breach, threat actors called CDK customers posing as CDK support agents to try to gain system access.

This sort of attack is a novel evolution of the traditional vishing attack, or the classic “tech support scam” in which threat actors claiming to be from Microsoft support call customers and offer to “fix” nonexistent issues with their device, often gaining access to the customer’s computer and personal data in the process.

How to protect against deepfakes

1. Education

There’s nothing like a human voice on the other end of the line: not only can an agent empathize with and calm anxious callers, they can also do a better job than bots at telling the difference between live, authentic human voices and deepfakes.

But to be effective, agents need to learn how to spot signs of social engineering, such as creating a false sense of urgency, and how to identify synthetic voices.

2. Process

When people hear about a deepfake attack, they sometimes call it a failure of process. Yet at the end of the day, processes are only as effective as the tools they use. Contact centers must implement strong caller verification processes built on tools that mitigate the risks of deepfake attacks and social engineering.

3. Going beyond status-quo approaches

Contact center agents aren’t cybersecurity experts, and they shouldn’t have to be. Education is important, but agents shouldn’t have to rely on their own ears to detect voice deepfakes. Contact centers need to equip agents with the best tools to do their job.

While AI-powered identity verification technologies can detect AI-generated voices, images, and videos in real time, companies cannot rely solely on AI to detect AI. That’s because deepfakes are now so good that many identity verification (IDV) tools are falling victim to them.

Over-reliance on MFA is also a mistake, as sending a passcode doesn’t tell you who’s on the other end of the phone. Calls can be intercepted, or the fraudster could be talking to the actual customer and the call center agent simultaneously, tricking the victim into handing over the one-time passcode.
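
To see why, consider a minimal sketch, written in Python and purely hypothetical rather than modeled on any real call center system: the server-side check below passes for whoever reads the code back to the agent, with no way to tell whether that person is the enrolled customer or a fraudster relaying a code tricked out of the victim.

```python
# Hypothetical illustration: an OTP check proves possession of the code,
# not the identity of the person speaking to the agent.
import hmac
import secrets

issued_codes = {}  # in-memory stand-in for the server's OTP store

def send_otp(customer_phone: str) -> None:
    """Server texts a fresh 6-digit code to the customer's registered number."""
    issued_codes[customer_phone] = f"{secrets.randbelow(10**6):06d}"

def verify_otp(customer_phone: str, code_read_back: str) -> bool:
    """Passes for *whoever* reads the code back -- customer or fraudster."""
    expected = issued_codes.get(customer_phone, "")
    return hmac.compare_digest(expected, code_read_back)

# Relay scenario: the fraudster is on the line with the agent while also
# talking to the real customer, who receives the text and is socially
# engineered into reading the code aloud. The fraudster repeats it and passes.
send_otp("+1-555-0100")
code_tricked_out_of_victim = issued_codes["+1-555-0100"]
print(verify_otp("+1-555-0100", code_tricked_out_of_victim))  # True, yet a fraudster is speaking
```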

Similarly, placing too much trust in voice biometrics (VB) can leave you vulnerable. While VB providers are working hard to add liveness checks and deepfake detection into their products, the fight against deepfakes is an “AI arms race” that, in many cases, the attackers are winning.

Instead, organizations should look for an approach to IDV that stops deepfakes before they can even be used. TransUnion’s research emphasized the importance of stopping bad actors before they reach the call center or IVR system, with 70% of all survey respondents and nearly 67% of financial industry respondents agreeing that caller authentication should start prior to any contact with the call center agent.

What’s needed is advanced cybersecurity technology that incorporates mobile cryptography, machine learning, and advanced biometric recognition alongside AI. This combination of tools can serve as a “surround sound” approach to call center security, strengthening agents’ guard against deepfakes by preventing impersonators from authenticating in the first place.
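
As a rough illustration of the mobile-cryptography piece of that combination, here is a minimal, hypothetical sketch in Python of a device-bound challenge-response check performed before a call ever reaches an agent. The function names, key handling, and enrollment flow are assumptions for illustration, not any vendor’s actual implementation; in practice the private key would live in the phone’s secure hardware and never leave the device.

```python
# Hypothetical sketch: verify that the caller controls a private key enrolled
# on their phone before routing the call to an agent.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# --- enrollment (done once, e.g. when the customer sets up the mobile app) ---
device_key = ec.generate_private_key(ec.SECP256R1())   # would live in the phone's secure enclave
registered_public_key = device_key.public_key()        # stored server-side against the account

# --- at call time, before the caller reaches an agent ---
def issue_challenge() -> bytes:
    """Server generates a fresh nonce so old signatures can't be replayed."""
    return os.urandom(32)

def sign_challenge(nonce: bytes) -> bytes:
    """Device-side: sign the nonce with the enrolled private key."""
    return device_key.sign(nonce, ec.ECDSA(hashes.SHA256()))

def verify_caller(nonce: bytes, signature: bytes) -> bool:
    """Server-side: only someone holding the enrolled device key passes."""
    try:
        registered_public_key.verify(signature, nonce, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

nonce = issue_challenge()
assert verify_caller(nonce, sign_challenge(nonce))                   # legitimate enrolled device
assert not verify_caller(issue_challenge(), sign_challenge(nonce))   # replayed signature fails
```

The point of such a check is that possession of an enrolled cryptographic key is something a cloned voice cannot fake, which is why it complements, rather than replaces, biometric and machine-learning defenses.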

Given the reliance on call centers for so much of today’s customer service, it is imperative that companies prioritize the adoption of advanced cybersecurity tools and technologies sooner rather than later to protect consumers, their business, and their reputation.

Aaron Painter
Contributor

Aaron Painter is the CEO of Nametag Inc., the world's first identity verification platform designed to safeguard accounts against impersonators and AI-generated deepfakes. Prior to his tenure at Nametag, Aaron served as CEO of London-based Cloudreach, a Blackstone portfolio company and the world's leading independent multi-cloud solutions provider. He also spent nearly 14 years at Microsoft, where he held various leadership roles, including VP and GM of Business Solutions in Beijing, China, GM of Corporate Accounts and Partner groups in Hong Kong, Chief of Staff to the President of Microsoft International based in Paris, France, and GM of the Windows Business Group while stationed in Sao Paulo, Brazil. Aaron is a Fellow at the Royal Society of Arts, Founder Fellow at OnDeck, a member of Forbes Business Council and a senior External Advisor to Bain & Company. He was named the AWS 2019 Consulting Partner of the Year for his work at Cloudreach. As a frequent media commentator, Aaron has appeared on Bloomberg and Cheddar News. He is also an active speaker, advisor and investor to companies that are pursuing business transformation.
