FinCEN Warning on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions

  • Source: fincen.gov

Takeaway

 On November 13, 2024, the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) issued an alert to help financial institutions identify and mitigate fraud schemes involving deepfake media generated by AI tools. The alert outlines fraud typologies and red flags to address the increasing misuse of generative AI (GenAI) and provides examples of relevant fraud schemes that can assist in identifying and reporting suspicious activity.

Treliant has the expertise to advise financial service firms on strategies to identify and mitigate risks related to deepfake fraud schemes and assist with the implementation of robust anti-fraud and customer due diligence measures, ensuring adherence to Bank Secrecy Act (BSA) reporting requirements.

Highlights

FinCEN issued an alert on November 13, 2024 to aid financial institutions in detecting fraud involving deepfake media created through generative AI tools. FinCEN highlighted that, while GenAI offers significant potential, it can also be exploited by bad actors to commit fraud against businesses and consumers. It is worth noting that FinCEN based its alert on suspicious activity reporting filed by banks in 2023 and 2024 that described the use of deepfake media in fraud schemes targeting financial institutions and their customers.

FinCEN’s alert details the emerging BSA/AML risks posed by generative AI tools, particularly deepfake media, which enable criminals to alter or create fraudulent identification documents (e.g., driver’s licenses and passports) to circumvent verification and authentication checks in the CDD process. These schemes have also been used to conduct fraud, including check fraud, credit card fraud, and loan fraud, as well as phishing attacks and scams.

The alert details red flags that financial institutions can consider in their processes to detect, prevent, and report on potentially suspicious activity related to the use of GenAI for illicit purposes. These red flags are as follows:

  • A customer’s photo is internally inconsistent (e.g., shows visual signs of being altered) or is inconsistent with their other identifying information (e.g., a customer’s date of birth indicates they are much older or younger than the photo suggests).
  • A customer presents multiple identity documents that are inconsistent with each other.
  • In Business Email Compromise (BEC) schemes, criminals use compromised or spoofed accounts, often those actually or purportedly belonging to company leadership, vendors, or lawyers, to target employees with access to a company’s finances to induce them to transfer funds to bank accounts thought to belong to trusted partners.
  • A customer uses a third-party webcam plugin during a live verification check. Alternatively, a customer attempts to change communication methods during a live verification check due to excessive or suspicious technological glitches during remote verification of their identity.
  • A customer declines to use multifactor authentication to verify their identity.
  • A reverse-image lookup or open-source search of an identity photo matches an image in an online gallery of GenAI-produced faces.
  • A customer’s photo or video is flagged by commercial or open source deepfake detection software.
  • GenAI-detection software flags the potential use of GenAI text in a customer’s profile or responses to prompts.
  • A customer’s geographic or device data is inconsistent with the customer’s identity documents.
  • A newly opened account or an account with little prior transaction history has a pattern of rapid transactions; high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges; or high volumes of chargebacks or rejected payments.
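The last red flag above is essentially a rules-based screen on account behavior. As a purely illustrative sketch (the data model, field names, and thresholds are assumptions for this example, not FinCEN guidance or any vendor's API), a simple check might look like:

```python
from dataclasses import dataclass

# Hypothetical account snapshot for illustration only; in practice a firm
# would draw these values from its own transaction-monitoring system.
@dataclass
class AccountActivity:
    days_since_open: int
    transaction_count: int
    risky_payee_volume: float  # e.g., payments to gambling sites or digital asset exchanges
    chargeback_count: int
    rejected_payment_count: int

def red_flags(a: AccountActivity) -> list[str]:
    """Flag patterns echoing FinCEN's account-activity red flag.
    Thresholds here are placeholders a firm would calibrate to its own risk model."""
    flags = []
    if a.days_since_open <= 30 and a.transaction_count > 100:
        flags.append("rapid transactions on newly opened account")
    if a.risky_payee_volume > 10_000:
        flags.append("high payment volume to potentially risky payees")
    if a.chargeback_count + a.rejected_payment_count > 10:
        flags.append("high volume of chargebacks or rejected payments")
    return flags
```

In production, rules like these would feed a transaction-monitoring or case-management workflow rather than stand alone, and any hits would be evaluated alongside the identity-verification red flags above before a SAR decision is made.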

In conclusion, financial institutions should be vigilant in identifying deepfakes, especially fraudulent identity documents used to bypass identity verification processes. Best practices highlighted by FinCEN include multifactor authentication (MFA), particularly phishing-resistant MFA, and live verification checks in which a customer is prompted to confirm their identity through audio or video. Treliant can advise clients on exploring these technologies and best practices to help implement anti-fraud solutions and ensure suspicious activity is reported per BSA requirements.

Ready to Talk?

We work with you to understand your needs, so we can tailor our approach to your engagement. Learn more when you connect with our team.

Author

Richard Lee

Richard Lee is an Analyst at Treliant. At Treliant, Richard brings his expertise in transaction monitoring and client due diligence to bear in helping financial institutions comply with regulatory requirements and prevent financial crime.