
Deepfakes & Anti-Spoofing: Prevent Identity Theft

Nov 04, 2025

What is a deepfake?

With the rapid development of neural network AI technology, the accuracy of many computer vision applications has greatly improved. Since Google introduced the Transformer architecture in 2017, generative AI (GenAI) applications have surged, enabling AI to create content from text prompts and opening up new creative possibilities.

However, along with its benefits, generative AI has introduced controversial applications, most notably deepfakes. The term "deepfake" combines "deep learning" and "fake," referring to AI-generated images, videos, or audio that simulate someone’s appearance or voice, creating content they never actually produced.

Deepfakes can be used responsibly in media, entertainment, and marketing. For example:

  • In The Mandalorian (2020), deepfake technology was used to digitally recreate a young Luke Skywalker.
  • In 2022, Samsung showcased deepfake avatars of athletes delivering personalized advertisements at the Winter Olympics.
  • Hereafter AI allows people to “converse” with digital representations of deceased loved ones.

NVIDIA used deepfake technology in 2021 to generate a video segment of NVIDIA CEO Jensen Huang's speech.
Source: CNET

The dangers of deepfakes

Fake News

Deepfakes can mislead audiences when used maliciously. In 2023, a series of AI-generated videos falsely depicted major global influencers endorsing cryptocurrency scams, misleading thousands of viewers and impacting markets.


Ukrainian YouTuber Olga Loiek exposes how she was subjected to AI face-swapping for pro-China propaganda.
Source: Olga Loiek YouTube

Political and Election Interference

During the 2024 U.S. elections, deepfake videos circulated online showing candidates making statements they never actually said, including doctored footage of public figures appearing to endorse or criticize policies, creating confusion and influencing voter perception.


Former President Trump shared this image created using deepfake technology that shows mega-popstar Taylor Swift expressing support for him.
Source: TMZ

Fraud & Crime

Deepfakes have been used to superimpose the faces of celebrities and other public figures (usually women) onto explicit, pornographic videos. This can not only cause severe damage to the victim’s reputation but also lead to significant psychological trauma. The UK combats this issue with a law that makes the creation of nonconsensual sexually explicit deepfake images a criminal offense punishable by fines and possible jail time.

Additionally, deepfake technology has been used in commercial fraud. In one significant fraud case in early 2024, deepfake technology was used to generate fake video conferences, simulating the voices and appearance of company executives. A duped employee thought they were participating in a video conference with the company's CFO, when in reality, the video and audio were deepfakes generated in real-time. Believing the meeting to be real, the employee was tricked into transferring $25 million USD.

Although face-swapping, video editing, and post-production techniques have existed for a long time, deepfake technology has taken these operations to a new level, making it difficult for the human eye to discern between what is real and what is fake, and even allowing for real-time generation of false content. Because deepfake tools can be easily downloaded and used from the internet, their potential harm is nearly impossible to measure.

Regulations related to deepfakes

European Union (EU AI Act): The Artificial Intelligence Act requires AI-generated content to carry digital watermarks and clear AI-generated labeling.

United States: Laws like California AB 602/730 and Texas statutes restrict election-related and pornographic deepfakes.

Social media platforms: TikTok, Instagram, and YouTube now enforce stricter detection and labeling of AI-generated content.

Taiwan Financial Supervisory Commission (FSC): Under the "Core Principles and Policies for the Use of AI in the Financial Industry", banks must verify live identities in video interactions to prevent deepfake-based fraud.

Challenges deepfakes pose to eKYC verification

eKYC (Electronic Know Your Customer) is the process of remotely verifying customer identity through online or digital technologies, helping service providers quickly and reliably verify and authenticate their customers. eKYC technology is widely used in industries such as finance and banking, increasing business efficiency and reducing operational costs.

Further Reading: KYC Becomes eKYC with the Addition of Facial Recognition in the BFSI Industry

However, during the process of remotely collecting customer facial images, deepfake technology poses potential security risks. Hackers may exploit the following methods to attack eKYC systems:

  • File selection attack
  • Photo and video reproduction attack
  • Camera signal injection attack

File selection attack:

If the eKYC digital identity verification process allows customers to upload their own photos, hackers may tamper with identity documents and photos, or even use deepfake technology to create fake photos for identity impersonation. In this case, service providers find it difficult to effectively detect these forged images, not only requiring significant manpower to verify the authenticity of the data but also facing a higher risk of financial crime.
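As a rough illustration of why self-uploaded files are hard to trust, the hedged sketch below uses Pillow to flag uploads whose EXIF "Software" tag reveals an editing tool. The file name and software list are illustrative assumptions, not part of any FaceMe workflow, and such metadata is trivial to strip or forge, which is exactly why live capture is preferred over file uploads.

```python
from PIL import Image

# Illustrative heuristic only: flag uploaded images whose EXIF metadata
# indicates they were processed by editing software. Metadata is easy to
# remove, so this cannot substitute for live capture or real forensics.
SUSPICIOUS_SOFTWARE = ("photoshop", "gimp", "faceapp")  # assumed example list

def looks_edited(path: str) -> bool:
    exif = Image.open(path).getexif()
    software = str(exif.get(0x0131, "")).lower()  # 0x0131 = EXIF "Software" tag
    return any(name in software for name in SUSPICIOUS_SOFTWARE)

if __name__ == "__main__":
    print(looks_edited("uploaded_id_photo.jpg"))  # hypothetical file name
```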

Photo and video reproduction attack:

To enhance security, some eKYC processes now require users to take real-time selfies using their cameras to reduce the risk of hackers impersonating others with stolen documents. However, hackers can still use computers or tablets to display fake facial photos or videos, or even generate corresponding dynamic videos with deepfake technology to deceive a mobile phone camera. This type of impersonation is referred to as a "Presentation Attack" (PA). Although service providers can rely on real-time manual reviews to block such attacks, this significantly increases the workload for reviewers and poses efficiency challenges.
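One common mitigation, independent of any particular vendor, is a randomized challenge-response liveness check: the user must perform unpredictable actions within a time limit, which pre-recorded photos and videos cannot do. The sketch below outlines only the control flow; the capture and action-classification callables are assumed placeholders for whatever camera stack and model an implementer uses, not FaceMe's API.

```python
import random
from typing import Callable, Sequence

# Minimal sketch of a challenge-response liveness check. Random, time-limited
# prompts make replayed photos and pre-recorded videos much harder to pass off.
CHALLENGES = ["blink twice", "turn head left", "turn head right", "smile"]

def run_liveness_check(
    capture: Callable[[str], Sequence],   # records frames while showing a prompt (assumed)
    classify: Callable[[Sequence], str],  # labels the user's action from frames (assumed)
    num_challenges: int = 3,
) -> bool:
    """Return True only if every randomly chosen challenge is performed correctly."""
    for expected in random.sample(CHALLENGES, num_challenges):
        frames = capture(f"Please {expected} within 3 seconds")
        if classify(frames) != expected:  # wrong or missing reaction -> reject
            return False
    return True
```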

Camera signal injection attack:

Finally, hackers may also infiltrate the devices running the eKYC application or website and inject streams of fake images generated by deepfake technology. This means that applications or websites executing eKYC digital identity verification may mistakenly identify these images as real individuals in a live setting, while in reality, all these images are fabricated. This type of attack has a higher technical threshold, significantly increasing the difficulty of identity impersonation, but it also makes prevention more complex. Although this attack method has not yet emerged on a large scale, financial institutions and other companies that require eKYC verification should proactively recognize these risks and take appropriate protective measures.
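A conceptual way to reason about defending against injected signals is frame authentication: if a trusted capture component signs each frame, the verification backend can reject streams injected further down the pipeline. The sketch below uses a simple HMAC purely for illustration; secure key provisioning and a hardware-backed trusted capture component would be required in practice and are not shown, and this is not a description of FaceMe's internal mechanism.

```python
import hmac
import hashlib

# Conceptual sketch: a trusted capture component signs every frame with a
# shared secret, and the backend rejects frames whose signature does not match.
SECRET_KEY = b"provisioned-per-device-secret"  # hypothetical; never hard-code in production

def sign_frame(frame_bytes: bytes) -> str:
    return hmac.new(SECRET_KEY, frame_bytes, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, signature: str) -> bool:
    expected = sign_frame(frame_bytes)
    return hmac.compare_digest(expected, signature)  # unsigned/injected frames fail here

if __name__ == "__main__":
    frame = b"\x00" * 1024                        # stand-in for raw frame data
    sig = sign_frame(frame)
    print(verify_frame(frame, sig))               # True
    print(verify_frame(b"injected bytes", sig))   # False
```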

How does FaceMe block deepfakes?

In the FaceMe eKYC digital identity verification process, FaceMe employs multiple strategies to effectively prevent deepfake and other potential attacks, ensuring that the results of digital identity verification are highly credible, reliable, and secure. FaceMe's solution includes:

  • Real-time processing of camera streams:
    FaceMe has powerful AI analytical capabilities that can process camera stream images in real time, ensuring that all computer vision processing occurs in a live environment. This method effectively prevents users from misusing others' photos, videos, or content generated by deepfake technology, ensuring the authenticity of the verification.
  • High-accuracy facial recognition engine:
    FaceMe is equipped with high-accuracy facial recognition technology that can accurately compare the similarity between users and their identity documents (for example, Taiwan ID cards), effectively preventing identity impersonation attacks; a simplified comparison sketch follows this list. According to tests by the U.S. National Institute of Standards and Technology (NIST), FaceMe's facial recognition false acceptance rate is less than one in a million, with a correct identification rate of up to 99.83%, demonstrating the precision and reliability of its facial recognition algorithms on a global scale.
  • Further Reading: How Does Facial Recognition Work?
  • Highly reliable anti-spoofing technology:
    FaceMe supports standard cameras (2D), 3D depth cameras (3D structured light), and infrared dual-camera modules for facial anti-spoofing, effectively preventing attacks from printed photos or videos played on device screens. Whether it’s a pre-recorded video or a deepfake-generated video, FaceMe's anti-spoofing technology can intercept it. FaceMe's anti-spoofing technology has been verified by the iBeta biometric test lab, which is accredited by the U.S. NIST. The test was conducted in compliance with the ISO/IEC 30107-3 PAD standard at Level 2, and FaceMe achieved a 100% spoof prevention rate. FaceMe also ranks first among 82 facial recognition vendors worldwide in the NIST PAD test, achieving a 100% spoof detection rate.
  • Further Reading: Can Facial Recognition Anti-Spoofing Technology be Easily Breached?
  • Ultra-reliable document anti-fraud technology:
    FaceMe uses AI computer vision technology to identify visual anti-fraud features on identity cards (for example, Taiwan ID cards), including laser labels, laser perforations, color-changing ink, and other characteristics. It can determine whether the identity document held by the user is genuine and can also detect signs of tampering or alterations on the document, effectively defending against deepfake and other identity impersonation attacks.
  • Deepfake detection:
    When hackers attempt to bypass common anti-spoofing technologies using deepfakes combined with camera signal injection attacks, FaceMe's deepfake detection feature comes into play. This specialized detection technology for deepfakes utilizes an independently designed, developed, and trained model that can accurately identify whether the image signals are generated by deepfake technology, further ensuring the security of digital identity verification.
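To make the 1:1 comparison step mentioned in the list above concrete, the sketch below compares a selfie embedding against an ID-document embedding using cosine similarity and a fixed acceptance threshold. The embedding source, vector size, and threshold value are illustrative assumptions and do not reflect FaceMe's internal implementation.

```python
import numpy as np

# Minimal sketch of 1:1 face comparison: two face embeddings (e.g., one from the
# live selfie and one from the ID-document photo) are compared with cosine
# similarity and accepted only above a threshold. Values below are assumptions.
MATCH_THRESHOLD = 0.6

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(selfie_embedding: np.ndarray, document_embedding: np.ndarray) -> bool:
    return cosine_similarity(selfie_embedding, document_embedding) >= MATCH_THRESHOLD

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    selfie = rng.normal(size=512)                          # stand-in for a 512-d embedding
    document = selfie + rng.normal(scale=0.1, size=512)    # slightly perturbed copy
    print(is_same_person(selfie, document))                # True for near-identical embeddings
```

In a real deployment the threshold would be tuned to balance false acceptance against false rejection rates, which is what the NIST figures cited above measure.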

FaceMe: the best choice for comprehensive anti-spoofing and deepfake detection

In summary, during the eKYC authentication process, it is essential to prevent identity impersonation attacks using deepfake technology, as well as to address other forms of attacks, such as presentation attacks (PA), forged identity documents, and counterfeit facial masks. FaceMe provides a comprehensive anti-fraud solution that can withstand various identity impersonation attacks, offering reliable protection for the financial industry and all service providers that require digital identity verification.

FaceMe not only helps these institutions comply with regulatory requirements but also further ensures the high security of accounts and transactions, thereby establishing a safer and more trustworthy digital identity verification system. Choosing FaceMe means choosing reliability and security, ensuring that your business can operate steadily in the face of various identity impersonation risks.

Our comprehensive anti-spoofing solutions will provide secure protection for your digital identity verification.
