AI Has Rewired Fraud. Here’s What Works.
A good question about fraud came up in one of my Telegram groups: “Generative AI has changed fraud, so what controls actually work now?” I thought it might make for a good article, because it’s the right question to ask. However, the answer requires recognizing that we’re dealing with an evolved version of a persistent threat, not an entirely new category of risk.
A better way to say that might be that generative AI hasn’t invented new cons so much as it has industrialized the old ones. Voice clones make “CEO calls” sound legitimate. Video deepfakes turn ad-hoc verification on live meetings into a trap. Text generators churn out native-sounding spear-phishing in any language. The scale shows up in the numbers: the FBI’s Internet Crime Complaint Center (IC3) logged 859,532 complaints and $16.6 billion in reported losses for 2024, with losses up 33% year over year.
A single case illustrates the shift. In early 2024, an employee at Arup’s Hong Kong office joined a group video call. The “CFO” and colleagues on screen were AI forgeries; the employee wired roughly £20 million before the deception came to light. Can you imagine? The lesson is simple: if your controls depend on faces and voices, you’re already outmatched, for at least four reasons.
What AI actually changed
Impersonation is cheap, fast, and multi-modal. Where criminals once needed language skills, audio engineering, or video editing, off-the-shelf models now do most of that work. The FCC has responded by clarifying that AI-generated voices in robocalls are illegal under the TCPA; enforcement actions and proposed fines followed. That closes a loophole, but it doesn’t neutralize bespoke, one-to-one “vishing” that never touches an autodialer.
Verification rituals are easier to defeat. Financial crime regulators warned that deepfakes are already being used to bypass onboarding and KYC checks. Think forged documents, “liveness” spoofs, and synthetic personas that look good enough to pass a superficial review.
Fraud funnels scale on consumer platforms. “Task” or “product-boosting” job scams (complete micro-tasks, watch your “earnings” rise, then front your own money to unlock them) exploded across messaging apps. FTC data and advisories show steep growth through 2024–2025 and heavy use of crypto rails for victim deposits.
Payments teams remain prime targets. Treasury and AP functions still see concentrated pressure via business email compromise (BEC) and vendor-impersonation changes. The Association for Financial Professionals reported that 79% of organizations encountered attempted payment fraud in 2024; BEC remained the top vector.
Why “spot the fake” is not a strategy
Frankly, if your hope is to “spot the fake,” you’ve already lost. Sure, some fakes are sloppy and easy to identify, but others are very good. In fact, human judgment on deepfakes hovers near coin-flip in controlled studies, and automated detectors degrade outside lab conditions. Research in 2024–2025 documents modest human accuracy (often ~58%) and demographic bias in machine detectors. You can use tools to flag risk, but you cannot build your payment or identity program on the promise that people (or models) will consistently tell real from fake. Hence, we have to design controls that assume realistic forgeries exist.
That probably sounds daunting. It is, but it’s not all bad news either. While these tools make it easier to commit fraud, they also reward laziness in the criminals who use them. Criminals want easy targets. They also want easy tools. This combo produces mass-manufactured scams with predictable fingerprints: reused prompts and scripts, identical phrasing across messages, look-alike domains, recycled wallets or bank routes, and the same pressure tactics (“urgent,” “confidential,” “can’t talk”). Because speed is the point, they avoid friction: out-of-band callbacks, dual approvals, hold periods, and identity step-ups. Force those steps, and most campaigns collapse or move on. In short, AI raises the ceiling on what a scam can look like, but boring, enforced processes still lower the odds it works.
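To make the “predictable fingerprints” point concrete, here is a minimal sketch of a look-alike domain check using Python’s standard difflib. The trusted-domain list and threshold are illustrative assumptions; a real program would pair this with header authentication and content checks.

```python
import difflib

# Domains you actually control or pay; in practice, pull these from your vendor
# master or email gateway configuration (illustrative list here).
TRUSTED_DOMAINS = {"example.com", "example-payments.com"}

def lookalike_score(candidate: str, trusted: str) -> float:
    """Similarity ratio between two domain strings (1.0 means identical)."""
    return difflib.SequenceMatcher(None, candidate.lower(), trusted.lower()).ratio()

def flag_lookalike(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag a sender domain that is close to, but not exactly, a trusted domain."""
    if sender_domain.lower() in TRUSTED_DOMAINS:
        return False  # exact match: authenticate it with SPF/DKIM/DMARC instead
    return any(lookalike_score(sender_domain, t) >= threshold for t in TRUSTED_DOMAINS)

print(flag_lookalike("examp1e.com"))    # True: one character swapped
print(flag_lookalike("unrelated.org"))  # False: not similar to anything trusted
```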
Security management controls that still work
1) Redefine trust as a workflow, not a perception.
Treat visual or auditory realism as non-evidence. Money movement, vendor banking changes, payroll edits, and sensitive data pulls should be completed only through pre-defined, authenticated workflows and independent callback verification using known coordinates from your system of record, not from the request itself. Industry guidance for ACH and corporate payments has emphasized out-of-band verification and dual control for years; AI merely makes it non-negotiable.
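As a minimal sketch of that control logic (not a definitive implementation), the gate below releases a vendor banking change only when the callback used contact details from the system of record; VENDOR_MASTER and the field names are hypothetical stand-ins for your vendor master or CRM.

```python
from dataclasses import dataclass

# Hypothetical system of record; in practice this is your vendor master / CRM.
VENDOR_MASTER = {"V-1001": {"contact_phone": "+1-555-0100"}}

@dataclass
class ChangeRequest:
    vendor_id: str
    new_bank_account: str
    phone_in_request: str  # number the requester supplied; never used for verification

def approve_banking_change(req: ChangeRequest,
                           callback_completed: bool,
                           callback_number_used: str) -> bool:
    """Approve only if verification went out-of-band to the number on file."""
    on_file = VENDOR_MASTER.get(req.vendor_id, {}).get("contact_phone")
    if on_file is None or callback_number_used != on_file:
        return False  # vendor unknown, or callback used request-supplied coordinates
    return callback_completed
```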
2) Raise assurance for high-risk actions.
Adopt phishing-resistant MFA (WebAuthn/FIDO2) for staff who can move funds, change payees, alter identity records, or export crown-jewel data. Map requirements to NIST SP 800-63: AAL2 as a baseline for privileged finance and admin users, and AAL3 where hardware-backed keys and verifier-impersonation resistance are warranted. CISA’s guidance and federal implementations point in the same direction.
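A small policy table makes the mapping auditable. The sketch below is illustrative only, with role names and authenticator labels of my own choosing; your identity provider’s policy engine would enforce the equivalent.

```python
# Illustrative mapping of high-risk roles to minimum NIST SP 800-63 assurance
# levels and acceptable authenticator types (labels are placeholders).
MFA_POLICY = {
    "ap_clerk":       {"min_aal": 2, "authenticators": {"fido2", "piv"}},
    "treasury_admin": {"min_aal": 3, "authenticators": {"fido2_hw_key", "piv"}},
    "identity_admin": {"min_aal": 3, "authenticators": {"fido2_hw_key", "piv"}},
}

def meets_policy(role: str, session_aal: int, authenticator: str) -> bool:
    """True only if the session's assurance level and authenticator meet the role's floor."""
    policy = MFA_POLICY.get(role)
    if policy is None:
        return False  # unknown role: deny by default
    return session_aal >= policy["min_aal"] and authenticator in policy["authenticators"]

print(meets_policy("treasury_admin", session_aal=2, authenticator="fido2_hw_key"))  # False
```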
3) Close email impersonation gaps.
Publish and enforce DMARC at p=reject for all second-level domains (including “no-send” domains that criminals spoof). Align SPF/DKIM for every sender—especially third-party platforms—and enforce inbound authentication before your users ever see a message. Federal directives and CISA guidance are explicit about the target posture.
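If you want to spot gaps across your domain inventory, a short script can read each domain’s published DMARC record and flag anything short of p=reject. This sketch assumes the dnspython package and a hand-typed domain list; it checks only the published policy, not SPF/DKIM alignment.

```python
import dns.resolver  # requires the dnspython package (pip install dnspython)

def dmarc_policy(domain: str) -> str | None:
    """Return the p= tag from a domain's DMARC record, or None if no record exists."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.lower() == "p":
                    return value.strip().lower()
    return None

# Replace with your actual domain inventory, including "no-send" domains.
for domain in ["example.com", "example.org"]:
    policy = dmarc_policy(domain)
    if policy != "reject":
        print(f"{domain}: DMARC policy is {policy!r}; target posture is 'reject'")
```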
4) Harden voice channels.
NEVER approve changes or payments on inbound calls, no matter how convincing the voice. Require scheduled callbacks on numbers sourced from your CRM/AP master. If you originate or intermediate significant call traffic, confirm your providers’ STIR/SHAKEN implementation and Robocall Mitigation Database filings; this reduces spoofability and raises traceability.
5) Fix vendor and payee-change exposure.
Impose cooling-off periods for first-time or changed beneficiaries; require second-person approval in a separate session; and verify banking changes via an independent channel before the first payment. AFP and Nacha materials reinforce these practices; they are the clearest way to blunt AI-amplified BEC.
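Expressed as code, the release check is just a few conditions. A minimal sketch, assuming a three-day hold and session identifiers from your ERP (both illustrative):

```python
from datetime import datetime, timedelta, timezone

COOLING_OFF = timedelta(days=3)  # illustrative hold period; set per your own policy

def first_payment_allowed(beneficiary_changed_at: datetime,
                          requester_id: str, requester_session: str,
                          approver_id: str, approver_session: str,
                          independently_verified: bool,
                          now: datetime | None = None) -> bool:
    """Release a first payment to a new or changed beneficiary only after the hold,
    with dual control in separate sessions and independent verification on file."""
    now = now or datetime.now(timezone.utc)
    if now - beneficiary_changed_at < COOLING_OFF:
        return False  # still inside the cooling-off period
    if approver_id == requester_id or approver_session == requester_session:
        return False  # dual control: different person, different session
    return independently_verified
```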
6) Strengthen onboarding and KYC.
Where you accept new customers, contractors, or partners, pair document forensics and strong liveness with secondary possession proofs (e.g., micro-deposits and return-code checks). Train reviewers on FinCEN’s deepfake red flags, and document escalation paths when media looks “too clean.”
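For the possession-proof piece, a micro-deposit check is easy to sketch. The amounts, ranges, and return handling below are illustrative assumptions, not any processor’s API:

```python
import secrets

def make_micro_deposits() -> tuple[int, int]:
    """Two small random amounts in cents (illustrative 10-99 cent range)."""
    return secrets.randbelow(90) + 10, secrets.randbelow(90) + 10

def verify_micro_deposits(sent: tuple[int, int], claimed: tuple[int, int],
                          ach_returned: bool) -> bool:
    """Verified only if the deposits were not returned (check the ACH return code)
    and the applicant can state both amounts, demonstrating access to the account."""
    if ach_returned:
        return False  # deposits bounced: the account is not theirs or not open
    return sorted(sent) == sorted(claimed)
```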
7) Add media provenance where it matters.
If your business relies on photos, video, or audio as evidence (insurance, journalism, brand protection), pilot C2PA Content Credentials in capture and editing tools. Treat provenance as a signal, not as a universal truth—adoption is uneven across devices and platforms, but it gives investigators and downstream platforms something to verify.
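If you pilot this, keep the “signal, not truth” framing in the code itself. In the sketch below, read_content_credentials is a hypothetical placeholder for whatever Content Credentials verifier or SDK you adopt (it is not a real API), and the risk signals are illustrative.

```python
def read_content_credentials(path: str) -> dict | None:
    """Hypothetical placeholder for a real C2PA verification step; returns manifest
    claims if a valid manifest is present, else None. Wire in your chosen verifier."""
    return None  # default: behave as if no Content Credentials were found

def media_risk_signals(path: str, claimed_device: str) -> list[str]:
    """Provenance is one input to a review, never proof of authenticity or forgery."""
    signals = []
    claims = read_content_credentials(path)
    if claims is None:
        signals.append("no Content Credentials present (common today; not proof of forgery)")
    elif claims.get("device") and claims["device"] != claimed_device:
        signals.append("provenance metadata does not match the submitted account")
    return signals
```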
8) Prepare the workforce for AI-scaled job scams.
Publish your real recruiting process and domains, centralize candidate communication inside corporate systems, and warn explicitly that you never ask applicants to prepay or “boost” anything. Point employees and applicants to current FTC advisories on task scams.
9) Plan for incident response and funds recovery.
When a suspicious change or wire is discovered, treat it as an incident, preserve artifacts (emails, headers, call recordings, meeting logs), notify your bank immediately, and file at ic3.gov. The IC3 program exists for exactly this purpose; in some cases, rapid reporting enables freezes through the FBI’s Recovery Asset Team via your banks.
Compliance and policy environment you can leverage
Regulators are tightening the screws on impersonation and calling ecosystems. The FTC’s Government and Business Impersonation Rule took effect on April 1, 2024, expanding enforcement options against impersonation scams. The FCC’s February 2024 ruling confirmed that AI-generated voices are “artificial” under the TCPA, reinforcing liability for voice-clone robocalls. Use these anchors in contracts, takedown requests, and provider due diligence.
Metrics that keep you honest
Executives should see a short set of leading indicators every quarter: the percentage of privileged and payments users on phishing-resistant MFA; the percentage of enterprise domains at DMARC p=reject; time-to-escalation for suspicious payment or identity requests; and the share of vendor/payroll changes that were independently verified before execution. If these numbers aren’t trending up, AI will keep turning small control gaps into big losses. We are already seeing it.
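The roll-up is trivial to automate. A minimal sketch, assuming simple exports from your IAM, DNS, and AP systems (field names are illustrative, and time-to-escalation would come from ticket timestamps in the same way):

```python
def pct(numerator: int, denominator: int) -> float:
    """Percentage, guarded against empty inputs."""
    return round(100 * numerator / denominator, 1) if denominator else 0.0

# Illustrative exports; in practice these come from IAM, DNS tooling, and AP systems.
payments_users = [{"id": "u1", "phishing_resistant_mfa": True},
                  {"id": "u2", "phishing_resistant_mfa": False}]
domains = [{"name": "example.com", "dmarc_policy": "reject"},
           {"name": "example.org", "dmarc_policy": "none"}]
vendor_changes = [{"id": "c1", "independently_verified": True},
                  {"id": "c2", "independently_verified": True}]

quarterly_report = {
    "pct_payment_users_phishing_resistant_mfa":
        pct(sum(u["phishing_resistant_mfa"] for u in payments_users), len(payments_users)),
    "pct_domains_at_dmarc_reject":
        pct(sum(d["dmarc_policy"] == "reject" for d in domains), len(domains)),
    "pct_vendor_changes_independently_verified":
        pct(sum(c["independently_verified"] for c in vendor_changes), len(vendor_changes)),
}
print(quarterly_report)  # e.g. {'pct_payment_users_phishing_resistant_mfa': 50.0, ...}
```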
Bottom line
AI removed the friction and skill once required to impersonate people convincingly, and that breaks “verify by look and sound” as a control. The durable response is process, identity assurance, and channel security: all measurable, all enforceable, and all resilient to better fakes. Build around authenticated workflows, strong MFA mapped to NIST assurance levels, DMARC-enforced email, callback-only approvals, and disciplined payment operations. When the media can lie, the mechanism is what you trust.
Once again, we must change our cybersecurity viewpoint. Stay vigilant.