Featured Article : 3000% Increase in Deepfake Frauds

A new report from ID verification company Onfido shows that the availability of cheap generative AI tools led to deepfake fraud attempts increasing by 3,000 per cent (that is, to 31 times the previous year's volume) in 2023.

Free And Cheap AI Tools 

Although deepfakes have now been around for several years, as the report points out, deepfake fraud has become significantly easier and more accessible due to the widespread availability of free and cheap generative AI tools. In simple terms, these tools have democratised the ability to create hyper-realistic fake images and videos, which were once only possible for those with advanced technical skills and access to expensive software.

Prior to the public availability of AI tools, for example, creating a convincing fake video or image required a deep understanding of computer graphics and access to high-end, often costly, software (a barrier to entry for would-be deep-fakers).

Document and Biometric Fraud – The New Frontier 

The Onfido data reveals a worrying trend: while physical counterfeits are still prevalent, there’s a notable shift towards digital manipulation of documents and biometrics, facilitated by the availability and sophistication of AI tools. Fraudsters are not only altering documents digitally but also exploiting biometric verification systems through deepfakes and other AI-assisted methods. The Onfido report highlights a dramatic rise in the rate of biometric fraud, which doubled from 2022 to 2023.

Deepfakes – A Growing Threat 

As reinforced by the findings of the report, deepfakes pose an emerging and significant threat, particularly in biometric verification. The accessibility of generative AI and face-swap apps has made the creation of deepfakes easier and highly scalable, evidenced by the 31-fold increase in deepfake attempts in 2023 compared to the previous year.

Minimum Effort (And Cost) For Maximum Return

As the Onfido report points out, simple ‘face swapping’ apps (i.e. apps which leverage advanced AI algorithms to seamlessly superimpose one person’s face onto another in photos or videos) offer ease of use and effectiveness in creating convincing fake identities. They are part of an influx of readily available online AI-assisted tools that are providing fraudsters with a new avenue into biometric fraud. For example, the Onfido data shows that biometric fraud attempts are clearly higher this year than in previous years, with fraudsters favouring tools like face-swapping apps to target selfie biometric checks and create fake identities.

The fakes these cheap, easy apps create have been dubbed “cheapfakes”, and this conforms with something that’s long been known about online fraudsters and cyber criminals: they seek methods that require minimum effort, minimum expense and minimum personal risk, yet deliver maximum effect.

Sector-Specific Impact of Deepfakes 

The Identity Fraud Report shows that (perhaps obviously) the gambling and financial sectors in particular are facing the brunt of these sophisticated fraud attempts. The lure of cash rewards and high-value transactions in these sectors makes them attractive targets for deepfake-driven frauds. In the gambling industry, for example, fraudsters may be particularly attracted to the sign-up and referral bonuses. In the financial industry, where frauds tend to be based around money laundering and loan theft, Onfido reports that digital attacks are easy to scale, especially when incorporating AI tools.

Implications For UK Businesses In The Age of (AI) Deepfake-Driven Fraud 

The surge in deepfake-driven fraud highlighted by the somewhat startling statistics in Onfido’s 2024 Identity Fraud Report suggests that UK businesses navigating this new landscape may require a multifaceted approach, balancing the implementation of cutting-edge technologies with heightened awareness and strategic planning. In more detail, this could involve:

– UK businesses prioritising the reinforcement of their identity verification processes. Traditional methods may no longer suffice against the sophistication of deepfakes, so adopting AI-powered solutions specifically designed to detect and counter deepfake attempts could be the way forward. This could work as long as such systems can keep up with advancements in fraudulent techniques (more advanced techniques may emerge as more sophisticated AI tools become available).

– The training of staff, i.e. educating them about the nature of deepfakes and how they can be used to perpetrate fraud. This could empower employees to better recognise potential threats and respond appropriately, particularly in sectors like customer service and security, where human judgment plays a key role.

– Maintaining customer trust. UK businesses must navigate the fine line between implementing robust security measures and ensuring a frictionless customer experience. Transparent communication about the security measures in place and how they protect customer data can help in maintaining and even enhancing customer trust.

– As the use of deepfakes in fraud rises, regulatory bodies may introduce new compliance requirements and UK businesses will need to ensure that they stay abreast of these changes both to protect customers and remain compliant with legal standards. This in turn could require more rigorous data protection protocols or mandatory reporting of deepfake-related breaches.

– Collaboration with industry peers and participation in broader discussions about combating deepfake fraud may also be a way to gain valuable insights. Sharing knowledge and strategies, for example, could help in developing industry-wide best practices. Also, partnerships with technology providers specialising in AI and fraud detection could offer access to the latest tools and expertise.

– Since deepfake fraud may be an ongoing threat, long-term strategic planning may be essential. This perspective could be integrated into long-term business strategies, thereby (hopefully) making sure that resources are available and allocated not just for immediate solutions but also for future-proofing against evolving digital threats.

What Else Can Businesses Do To Combat Threats Like AI-Generated Deepfakes? 

Other ways that businesses can contribute to the necessary comprehensive approach to tackling the AI-generated deepfake threat may also include:

– Implementing biometric verification technologies that require live interactions (so-called ‘liveness solutions’), such as head movements, which are difficult for deepfakes to replicate.

– The use of SDKs (software development kits, i.e. platform-specific building tools for developers) over APIs. SDKs can provide better protection against fraudulent submissions because they incorporate live capture and device integrity checks, whereas a bare API accepts whatever file is submitted to it.
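To illustrate why live capture and device integrity checks matter, here is a minimal, purely hypothetical sketch of a server-side acceptance gate. All names (`Submission`, `live_captured`, `device_attested`) are invented for illustration and do not correspond to any particular vendor's SDK or API:

```python
from dataclasses import dataclass

# Hypothetical submission record. An SDK-based flow can attach evidence
# that a plain API file upload cannot: a flag confirming the frame came
# from a live camera capture, and a device integrity attestation
# (e.g. the device is not rooted or an emulator).
@dataclass
class Submission:
    image_bytes: bytes
    live_captured: bool = False    # frame captured live by the SDK, not uploaded from disk
    device_attested: bool = False  # device integrity check passed

def accept_for_verification(sub: Submission) -> bool:
    """Server-side gate: reject submissions lacking SDK-provided evidence,
    since a pre-made deepfake file carries neither signal."""
    return sub.live_captured and sub.device_attested
```

In this sketch, a deepfake video file pushed straight at an API endpoint would arrive with both flags false and be rejected before any biometric comparison is even attempted.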

The Dual Nature Of Generative AI 

Although, as you’d expect an ‘Identity Fraud Report’ to do, the Onfido report focuses solely on the threats posed by AI, it’s important to remember that AI tools can be used by all businesses to add value, save time, improve productivity, get more creative, and to defend against the AI threats. AI-driven verification tools, for example, are becoming more adept at detecting and preventing fraud, underscoring the technology’s dual nature as both a tool for fraudsters and a shield for businesses.

What Does This Mean For Your Business? 

Even tempering the startling stats with the knowledge that Onfido sells its own deepfake (liveness) detection solution and SDKs, the report still paints a rather worrying picture for businesses. That said, the Onfido 2024 Identity Fraud Report’s findings, highlighting a 3,000 per cent increase in deepfake fraud attempts due to readily available generative AI tools, signal a pivotal shift in the landscape of online fraud. This shift could pose new challenges for UK businesses but also open avenues for innovative solutions.

For businesses, the immediate response may involve upgrading identity verification processes with AI-powered solutions tailored to detect and counter deepfakes. However, it’s not just about deploying advanced technology. It’s also about ensuring these systems evolve with the fraudsters’ tactics. Equally crucial is the role of employee training in recognising and responding to these sophisticated fraud attempts.

As regulatory landscapes adjust to these emerging threats, staying informed and compliant is also likely to become essential. The goal is not only to counter current threats but to build resilience and innovation for future challenges.

Tech Insight : What Are ‘Deepfake Testers’?

Here we look at what deepfake videos are, why they’re made, and how they might be quickly and easily detected using a variety of deepfake detection and testing tools.

What Are Deepfake Videos? 

Deepfake videos are a kind of synthetic media created using deep learning and artificial intelligence techniques. Making a deepfake video involves manipulating or superimposing existing images, videos, or audio onto other people or objects to create highly realistic (but typically fake) content. The term “deepfake” comes from the combination of “deep learning” and “fake.”

Why Make Deepfake Videos? 

People create deepfake videos for various reasons, driven by both benign and malicious intentions. Here are some of the main motivations behind the creation of deepfake videos:

– Entertainment and art. Deepfakes can be used as a form of artistic expression or for entertainment purposes. AI may be used, for example, to create humorous videos, mimic famous movie scenes with different actors, or explore creative possibilities.

– Special effects and visual media. In the film and visual effects industry, deepfake technology is often used to achieve realistic visual effects, such as de-aging actors or bringing deceased actors back to the screen (a contentious point at the moment, given the actors’ strike over AI fears). That said, some sportspeople, actors and celebrities have embraced the technology and are allowing their deepfake identities to be used by companies. Examples include Lay’s crisps using Lionel Messi’s likeness, and Singapore celebrity Jamie Yeo agreeing a deal with financial technology firm Hugosave.

– Education and research. Deepfakes can be used for research and educational purposes, helping researchers, academics, and institutions study and understand the capabilities and limitations of AI technology.

– Memes and internet culture. In recent times, deepfakes have become part of internet culture and meme communities, where users create and share entertaining or humorous content featuring manipulated faces and voices.

– Face swapping and avatar creation. Some people use deepfakes to swap faces in videos, such as putting their face on a character in a movie or video game or creating avatars for online platforms.

– Satire and social commentary. Deepfake videos are also made to satirise public figures or politicians, creating humorous or critical content to comment on current events and societal issues.

– Privacy and anonymity. In some cases, people may use deepfakes to protect their privacy or identity by concealing their face and voice in videos.

– Spreading misinformation and disinformation. Unfortunately, deepfake technology has been misused to spread misinformation, fake news, and malicious content. Deepfakes can be used to create convincing videos of individuals saying or doing things they never did, including political figures, leading to potential harm, defamation, and the spread of falsehoods.

– Fraud and scams. This is a very worrying area as criminals can now use deepfakes for fraudulent activities, e.g. impersonating someone in video calls to deceive or extort others. For example, deepfake testing company Deepware says: “We expect destructive use of deepfakes, particularly as phishing attacks, to materialise very soon”.

What Are Deepfake Testers? 

With AI deepfakes becoming more convincing and easier to produce thanks to rapidly advancing AI developments and many good AI video, image, and voice services widely available online (many for free), tools that can quickly detect deepfakes have become important. In short, deepfake testers are online tools that can be used to scan any suspicious video to discover if it’s synthetically manipulated. In the case of deepfakes made to spread misinformation and disinformation or for fraud and scams, these can be particularly valuable tools.

How Do They Work? 

For the user, deepfake testers typically involve copying and pasting the URL of a suspected deepfake into the online deepfake testing tool and hitting the ‘scan’ button to get a quick opinion about whether it’s likely to be a deepfake video.

Behind the scenes, there are a number of technologies used by deepfake testers, such as:

– Photoplethysmography (PPG), which detects the subtle colour changes caused by blood flow in a real face; deepfake faces don’t give out these signals. This type of detection is more difficult if the deepfake video is pixelated.

– Eye movement analysis. This is because deepfake eyes tend to be divergent, i.e. they don’t look at a central point like real human eyes do.

– Lip Sync Analysis can help highlight a lack of audio and visual synchronisation, something which is a feature of deepfakes.

– Facial landmark detection and tracking algorithms to assess whether the facial movements and expressions align realistically with the audio and overall context of the video.

– Testing for visual irregularities, e.g. unnatural facial movements, inconsistent lighting, or strange artifacts around the face.
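The signals above are typically combined into an overall verdict. The following is an illustrative sketch only: real testers derive each score from trained models, and the weights and threshold here are invented for the example, not taken from any actual product:

```python
# Each signal is scored 0.0 (consistent with a real video) to 1.0
# (strongly deepfake-like). Weights are illustrative assumptions.
SIGNAL_WEIGHTS = {
    "ppg": 0.30,        # blood-flow (PPG) signal absent or inconsistent
    "gaze": 0.15,       # divergent eye lines, no common focal point
    "lip_sync": 0.20,   # audio/visual desynchronisation
    "landmarks": 0.20,  # implausible facial movements or expressions
    "artifacts": 0.15,  # lighting/texture irregularities around the face
}

def deepfake_score(signals: dict) -> float:
    """Weighted average of the supplied signal scores; signals the
    analysis couldn't produce (e.g. PPG on pixelated video) are skipped
    and the remaining weights are renormalised."""
    present = {k: v for k, v in signals.items() if k in SIGNAL_WEIGHTS}
    if not present:
        raise ValueError("no recognised detection signals supplied")
    total_weight = sum(SIGNAL_WEIGHTS[k] for k in present)
    return sum(SIGNAL_WEIGHTS[k] * v for k, v in present.items()) / total_weight

def verdict(signals: dict, threshold: float = 0.5) -> str:
    """Map the combined score to the kind of quick opinion a tester returns."""
    return "likely deepfake" if deepfake_score(signals) >= threshold else "likely authentic"
```

For example, a clip scoring high on the PPG, lip-sync and artifact checks would cross the threshold and be flagged, while one with low scores across the board would not.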


Some examples of deepfake testers include:


This is only for detecting AI-generated face manipulations and can be used via the website, an API key, or in an offline environment via an SDK. There is also an Android app. There is a maximum limit of 10 minutes for each video.

Intel’s FakeCatcher

With a reported 96 per cent accuracy rate, Intel’s deepfake detection platform, introduced last year, was billed as “the world’s first real-time deepfake detector that returns results in milliseconds.” Using Intel hardware and software, it runs on a server and interfaces through a web-based platform.

Microsoft’s Video Authenticator Tool

Announced 3 years ago, this deepfake detecting tool uses advanced AI algorithms to detect signs of manipulation in media and provides users with a real-time confidence score. The tool was created using a public dataset from FaceForensics++ and was tested on the DeepFake Detection Challenge Dataset, both of which are key datasets for training and testing deepfake detection technologies.


This AI-based protection platform is used by governments, defence agencies, and enterprises. Users upload their digital media through the website or API, whereupon Sentinel uses advanced AI algorithms to automatically analyse the media. Users are given a detailed report of its findings.


This is an open platform allowing users to upload a video (.wav video, maximum size is 50MB), input their email address, and get an assessment of whether a video is fake.

DuckDuckGoose DeepDetector Software 

This is fully automated deepfake detection software for videos and images. It detects deepfakes in real time and provides explainable, AI-powered output to help users understand how each detection was made.


This project aims to develop intelligent human-in-the-loop content verification and disinformation analysis methods and tools whereby social media and web content is analysed and contextualised within the broader online ecosystem. The project offers, for example, a chatbot to guide users through the verification process, an open-source browser plugin, and other open source AI tools, as well as proprietary tools owned by the consortium partners.

What Does This Mean For Your Business? 

Deepfake videos can be fun and satirical; however, there are real concerns that, with AI advancements, deepfake videos are being made to spread misinformation and disinformation. Furthermore, fraud and scam deepfakes can be incredibly convincing and, therefore, dangerous.

Political interference, such as spreading videos of world leaders and politicians saying things they didn’t say, plus using videos to impersonate someone to deceive or extort, are now very real problems. With it being so difficult to tell for sure just by watching a video whether it’s fake, these deepfake testing tools can have real value, both as a safety measure for businesses and for anyone who needs a fast way to check their suspicions.

Deepfake testers, therefore, can contribute to cybercrime prevention and to countering fake news. The threat posed by deepfakes is only going to grow, so the hope is that as deepfake videos become ever more sophisticated, the detection tools keep pace in their ability to tell with certainty whether a video is fake.