Fraudsters have become adept at using deepfakes and have the potential to cause significant fraud losses with this terrifying technology.
Learn how deepfakes are being used to defraud customers with effective impersonations of real people and what banks can do to keep their customers safe.
How Deepfake Tech Enables Fraud Losses
Deepfake technology has been used to impersonate numerous public figures, including celebrities like Tom Cruise, business leaders like Elon Musk, and Ukrainian president Volodymyr Zelenskyy. Deepfakes have a wide range of uses, including fun experiments like reimagining movies with different actors (e.g., casting Nicolas Cage as Superman).
But we’ve also seen deepfakes used for more sinister purposes. Fraud losses from individual deepfake scams have ranged from $243,000 to $35 million. The Musk deepfake was part of a crypto scam that cost US consumers roughly $2 million over six months. The technology has also been used to insert famous actors, and sometimes everyday people, into adult films.
What’s truly scary about deepfakes is not just their effectiveness. It’s their newness. This technology is still in its developmental stages and is already capable of producing highly effective illusions. In time, like all other technology, it will only
get more effective. That’s why banks and financial institutions must understand the most frightening types of deepfake fraud to monitor.
4 Terrifying Deepfake Scams to Watch
Deepfake attacks take many different forms. But each deepfake approach can result in significant fraud losses. As you’ll see, each tactic is frightening for different reasons.
Ghost Fraud Deepfakes. A ghost fraud deepfake occurs when a fraudster steals the identity of a recently deceased person. For example, the fraudster can breach the deceased person’s checking or savings account, apply for loans in their name, or hijack their credit score. Deepfake technology has (ironically) given this type of fraud new life: the deepfake creates a convincing illusion that a real, living person is accessing the account, making the scam far more believable.
Undead Claims. This type of fraud has been around for a long time. In some cases, a family member collects their late relative’s benefits (such as Social Security, life insurance, or pension payouts) before anyone learns of the death. Once again, deepfake
technology provides cover for fraudsters and can keep fraud losses hidden for a long time.
‘Phantom’ or New Account Fraud. In this type of fraud, fraudsters use deepfake technology to create a fake identity and take advantage of one of banking’s most vulnerable stages: account opening. Criminals use fake or stolen credentials to open new bank
accounts while the deepfake convinces the bank that the applicant is real. Fraudsters can bypass many security checks – including two-factor authentication (2FA) requirements – with this tactic. Once the account is created, bad actors can use it for money
laundering or to accrue debt. According to recent figures, this type of deepfake fraud has already resulted in roughly $3.4 billion in losses.
‘Frankenstein’ or Synthetic Identities. The fictional Dr. Frankenstein built a monster from the remains of different bodies. Fraudsters take a similar approach to synthetic identity fraud by using a combination of real, stolen, or fake credentials to create
an artificial identity. With the aid of deepfakes, fraudsters convince banks that the invented person is real and open credit or debit cards to build up the fake user’s credit score.
How Banks Can Protect Customers from Deepfakes
Deepfakes are likely to become a central component of criminals’ fraud strategies. As they become more effective, it will only get more challenging for banks and FIs to spot them and prevent fraud losses. That’s a truly terrifying vision. But all is not
lost for banks. Here’s what banks can do to prevent deepfake fraud threats:
1. Complement the Account Opening Process with Digital Trust
The account opening stage is one of the most vulnerable points in a bank’s workflow. If a fraudster uses a convincing deepfake during the proof-of-life stage, banks could unknowingly onboard a very risky actor. Using digital trust – which has behavioral biometrics as a central pillar – banks can analyze more than just the image or video provided during onboarding. Biometric solutions on their own (including facial recognition) will not be enough to detect a deepfake. But the behavioral biometrics component of digital trust can measure how a customer normally behaves.
For example, let’s say a new customer claims to be 75 years old. Digital trust solutions can assess whether the customer is really as old as they claim from how they handle their device. This includes looking at the way they touch the screen, the angle at which they hold the phone, and whether they type at the speed typical of an elderly customer. These insights can help determine whether a fake or synthetic identity is being used.
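As a rough illustration of the idea, the check above can be sketched as comparing a session’s behavioral signals against plausible ranges for the claimed age bracket. The signal names, brackets, and threshold values below are all hypothetical; a real behavioral biometrics engine would use far richer models than simple range checks.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Behavioral signals captured during onboarding (all values hypothetical)."""
    typing_speed_cpm: float   # characters typed per minute
    swipe_speed_px_s: float   # average swipe velocity in pixels/second

# Hypothetical per-age-bracket baselines: (min, max) plausible ranges.
AGE_BASELINES = {
    "65+":   {"typing_speed_cpm": (40, 160),  "swipe_speed_px_s": (100, 600)},
    "18-64": {"typing_speed_cpm": (120, 400), "swipe_speed_px_s": (300, 1500)},
}

def behavior_matches_claimed_age(signals: SessionSignals, bracket: str) -> bool:
    """Return True when every measured signal falls inside the bracket's range."""
    baseline = AGE_BASELINES[bracket]
    lo_t, hi_t = baseline["typing_speed_cpm"]
    lo_s, hi_s = baseline["swipe_speed_px_s"]
    return (lo_t <= signals.typing_speed_cpm <= hi_t
            and lo_s <= signals.swipe_speed_px_s <= hi_s)
```

Under these made-up baselines, a supposed 75-year-old who types at 380 characters per minute with fast swipes would fail the "65+" check and be flagged for further review.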
2. Review Customers’ Device Hygiene
Digital trust solutions can also be used to assess whether the device used by a customer is trustworthy. Banks should look at whether a recording provided for proof of life was captured in real time, and whether the device submitting the identity check is the same device that made the recording. Digital trust solutions can also assess whether a device may have been hacked or compromised by malware. Banks should weigh these factors carefully when judging whether a submitted video is genuine.
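The hygiene checks listed above amount to a simple checklist over device signals. The sketch below is a minimal illustration of that checklist; the field names and flag wording are assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class DeviceReport:
    """Hypothetical device-hygiene signals collected at identity check."""
    video_recorded_live: bool        # proof-of-life clip captured in-session?
    same_device_as_submission: bool  # recording device == submitting device?
    malware_detected: bool
    jailbroken_or_rooted: bool

def device_risk_flags(report: DeviceReport) -> list[str]:
    """Turn the hygiene checks into a list of human-readable risk flags."""
    flags = []
    if not report.video_recorded_live:
        flags.append("pre-recorded or injected video")
    if not report.same_device_as_submission:
        flags.append("video captured on a different device")
    if report.malware_detected:
        flags.append("malware present on device")
    if report.jailbroken_or_rooted:
        flags.append("jailbroken or rooted device")
    return flags
```

An empty flag list means the device passed every check; any non-empty result would route the application to manual review rather than automatic approval.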
3. Consult with ID Providers
In the age of deepfakes, banks can’t shoulder the responsibility of detecting fake images alone. That’s why banks that work with outside vendors for onboarding and digital authentication must understand how those firms conduct their services. Ask identity verification providers how the proof-of-life video was captured and whether it was recorded on the submitting device itself. ID providers should also perform their own malware and device hygiene checks to ensure a device used for account opening is trustworthy.
4. Teach Customers to Protect Their Data
Consumers have a crucial role to play in protecting themselves against deepfake fraud losses. This is no easy task given how much personal data is publicly available. But banks should still caution their customers about how their data can be manipulated
and urge customers to protect themselves. Some core tips for customers include:
control who sees your information on social media
avoid giving data to untrustworthy third-party websites or downloading untrustworthy applications
don’t use devices that have a history of being compromised or jailbroken
The threat of deepfake fraud should scare banks all year long. Fortunately, using digital trust solutions gives banks a strong chance to catch fraud before it’s too late.