Featured Article : 3000% Increase in Deepfake Frauds

A new report from ID verification company Onfido shows that the availability of cheap generative AI tools led to deepfake fraud attempts increasing by 3,000 per cent (i.e. a factor of 31) in 2023.

Free And Cheap AI Tools 

Although deepfakes have now been around for several years, as the report points out, deepfake fraud has become significantly easier and more accessible due to the widespread availability of free and cheap generative AI tools. In simple terms, these tools have democratised the ability to create hyper-realistic fake images and videos, which were once only possible for those with advanced technical skills and access to expensive software.

Prior to the public availability of AI tools, for example, creating a convincing fake video or image required a deep understanding of computer graphics and access to high-end, often costly, software (a barrier to entry for would-be deep-fakers).

Document and Biometric Fraud – The New Frontier 

The Onfido data reveals a worrying trend: while physical counterfeits are still prevalent, there’s a notable shift towards digital manipulation of documents and biometrics, facilitated by the availability and sophistication of AI tools. Fraudsters are not only altering documents digitally but also exploiting biometric verification systems through deepfakes and other AI-assisted methods. The Onfido report highlights a dramatic rise in the rate of biometric fraud, which doubled from 2022 to 2023.

Deepfakes – A Growing Threat 

As reinforced by the findings of the report, deepfakes pose an emerging and significant threat, particularly in biometric verification. The accessibility of generative AI and face-swap apps has made the creation of deepfakes easier and highly scalable, as evidenced by a 31-fold increase in deepfake attempts in 2023 compared to the previous year.

Minimum Effort (And Cost) For Maximum Return

As the Onfido report points out, simple ‘face swapping’ apps (i.e. apps which leverage advanced AI algorithms to seamlessly superimpose one person’s face onto another in photos or videos) offer ease of use and effectiveness in creating convincing fake identities. They are part of an influx of readily available online AI-assisted tools that are providing fraudsters with a new avenue into biometric fraud. For example, the Onfido data shows that biometric fraud attempts are clearly higher this year than in previous years, with fraudsters favouring tools like face-swapping apps to target selfie biometric checks and create fake identities.

The kind of fakes these cheap, easy apps create has been dubbed “cheapfakes”, and this conforms with something that’s long been known about online fraudsters and cyber criminals – they seek methods that require minimum effort, minimum expense and minimum personal risk, yet deliver maximum effect.

Sector-Specific Impact of Deepfakes 

The Identity Fraud Report shows that (perhaps obviously) the gambling and financial sectors in particular are facing the brunt of these sophisticated fraud attempts. The lure of cash rewards and high-value transactions in these sectors makes them attractive targets for deepfake-driven frauds. In the gambling industry, for example, fraudsters may be particularly attracted to the sign-up and referral bonuses. In the financial industry, where frauds tend to be based around money laundering and loan theft, Onfido reports that digital attacks are easy to scale, especially when incorporating AI tools.

Implications For UK Businesses In The Age of (AI) Deepfake-Driven Fraud 

The surge in deepfake-driven fraud, highlighted by the somewhat startling statistics in Onfido’s 2024 Identity Fraud Report, suggests that UK businesses navigating this new landscape may require a multifaceted approach, balancing the implementation of cutting-edge technologies with heightened awareness and strategic planning. In more detail, this could involve:

– UK businesses prioritising the reinforcement of their identity verification processes. The traditional methods may no longer suffice against the sophistication of deepfakes. Therefore, adopting AI-powered solutions that are specifically designed to detect and counter deepfake attempts could be the way forward. This could work as long as such systems can keep up with advancements in fraudulent techniques (more advanced techniques may emerge as more sophisticated AI tools become available).

– The training of staff, i.e. educating them about the nature of deepfakes and how they can be used to perpetrate fraud. This could empower employees to better recognise potential threats and respond appropriately, particularly in sectors like customer service and security, where human judgment plays a key role.

– Maintaining customer trust. UK businesses must navigate the fine line between implementing robust security measures and ensuring a frictionless customer experience. Transparent communication about the security measures in place and how they protect customer data can help in maintaining and even enhancing customer trust.

– As the use of deepfakes in fraud rises, regulatory bodies may introduce new compliance requirements and UK businesses will need to ensure that they stay abreast of these changes both to protect customers and remain compliant with legal standards. This in turn could require more rigorous data protection protocols or mandatory reporting of deepfake-related breaches.

– Collaboration with industry peers and participation in broader discussions about combating deepfake fraud may also be a way to gain valuable insights. Sharing knowledge and strategies, for example, could help in developing industry-wide best practices. Also, partnerships with technology providers specialising in AI and fraud detection could offer access to the latest tools and expertise.

– Since deepfake fraud may be an ongoing threat, long-term strategic planning may be essential. This perspective could be integrated into long-term business strategies, thereby (hopefully) making sure that resources are available and allocated not just for immediate solutions but also for future-proofing against evolving digital threats.

What Else Can Businesses Do To Combat Threats Like AI-Generated Deepfakes? 

Other ways that businesses can contribute to the necessary comprehensive approach to tackling the AI-generated deepfake threat may also include:

– Implementing biometric verification technologies that require live interactions (so-called ‘liveness solutions’), such as head movements, which are difficult for deepfakes to replicate.

– The use of SDKs (platform-specific building tools for developers) rather than APIs. SDKs can provide better protection against fraudulent submissions because they incorporate live capture and device integrity checks.
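To illustrate the idea behind a liveness check, here is a minimal, hypothetical sketch: a server issues a random sequence of head movements and only accepts a response that matches the sequence within a short expiry window, which a pre-recorded deepfake video is unlikely to satisfy. All names, parameters, and the symbolic movement labels are illustrative assumptions, not any vendor’s actual API; real liveness solutions analyse live video rather than labels.

```python
import secrets
import time

MOVEMENTS = ["turn_left", "turn_right", "look_up", "nod"]
CHALLENGE_TTL_SECONDS = 30  # challenges expire quickly to block replayed recordings

def issue_challenge(length: int = 3) -> dict:
    """Issue a random sequence of head movements the user must perform live."""
    return {
        "sequence": [secrets.choice(MOVEMENTS) for _ in range(length)],
        "issued_at": time.time(),
    }

def verify_response(challenge: dict, observed: list) -> bool:
    """Accept only if the observed movements match the challenge, in order,
    and the response arrived before the challenge expired."""
    if time.time() - challenge["issued_at"] > CHALLENGE_TTL_SECONDS:
        return False  # too old: could be a replayed recording
    return observed == challenge["sequence"]

challenge = issue_challenge()
print(verify_response(challenge, challenge["sequence"]))            # True: live match
print(verify_response(challenge, challenge["sequence"] + ["nod"]))  # False: wrong movement count
```

The randomness is the point of the design: because the challenge is unpredictable, an attacker cannot prepare a matching deepfake video in advance.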

The Dual Nature Of Generative AI 

Although, as you’d expect an ‘Identity Fraud Report’ to do, the Onfido report focuses solely on the threats posed by AI, it’s important to remember that AI tools can be used by all businesses to add value, save time, improve productivity, get more creative, and to defend against the AI threats. AI-driven verification tools, for example, are becoming more adept at detecting and preventing fraud, underscoring the technology’s dual nature as both a tool for fraudsters and a shield for businesses.

What Does This Mean For Your Business? 

Tempering the reading of the startling stats in the report with the knowledge that Onfido is selling its own deepfake (liveness) detection solution and SDKs, it still paints a rather worrying picture for businesses. That said, the Onfido 2024 Identity Fraud Report’s findings, highlighting a 3,000 per cent increase in deepfake fraud attempts due to readily available generative AI tools, signal a pivotal shift in the landscape of online fraud. This shift could pose new challenges for UK businesses but also open avenues for innovative solutions.

For businesses, the immediate response may involve upgrading identity verification processes with AI-powered solutions tailored to detect and counter deepfakes. However, it’s not just about deploying advanced technology. It’s also about ensuring these systems evolve with the fraudsters’ tactics. Equally crucial is the role of employee training in recognising and responding to these sophisticated fraud attempts.

As regulatory landscapes adjust to these emerging threats, staying informed and compliant is also likely to become essential. The goal is not only to counter current threats but to build resilience and innovation for future challenges.

Tech News : 21-Fold Increase in AI-Assisted Jobs

Demand for human content writers has surged recently, as part of a broader trend in which ‘prompt engineers’ have commanded salaries of up to $300,000.

LinkedIn’s figures show that job posts in English with references to “GPT” or “ChatGPT” have increased 21-fold since November 2022. Interestingly, after almost a year of generative AI and predictions that it could largely replace human creative and content writers, Freelancer.com’s quarterly report shows an increase in demand for ‘human’ freelance writing jobs.


Since ChatGPT was quietly released in November last year and became the fastest-growing consumer app in history by February this year, generative AI products have been integrated into search engines and major platforms, e.g. Microsoft’s Copilot, Google’s Bard, and Duet AI. Multiple AI image generators and other AI products have also been introduced, with businesses discovering and quickly adopting generative AI to boost productivity, meet their content and creative needs, and link apps together like never before.

With chatbots like ChatGPT seemingly able to produce quality content at scale, on-demand, in a fraction of the time and for a fraction of the cost of human writers, many thought that freelancers would struggle to find work with their skills effectively being replaced by AI.

Strong Growth

However, according to Australian online job marketplace Freelancer.com’s quarterly report on jobs posted in its marketplace, jobs related to writing, content creation and marketing have been the fastest growing freelance jobs by percentage growth in Q3 of this year. For example, it reports that compared to Q2, copy typing jobs rose by 28.7 per cent, Microsoft Word projects rose by 24.7 per cent, search engine marketing was up by 24.1 per cent, and copywriting and ghostwriting both rose by more than 23 per cent.

This trend has also been echoed in data from US-based, worldwide employment website ‘Indeed,’ which reports that generative AI-related jobs posted on its platform increased by almost 250 per cent from July 2021 to July 2023.


Tech and employment commentators are suggesting that the main reasons for this trend include:

– Small businesses may have realised the power and potential of AI, but small business owners are time-poor and need skilled freelancers to carry out the AI work for their projects.

– With businesses looking for ways to integrate AI into their business platforms, and with many freelancers being quick to learn and utilise AI tools to boost their productivity and skill base, businesses are seeking support and help from these skilled freelance developers.

Opportunity To Become “Superskilled” 

Far from the automation of AI taking away their work, tech and employment commentators have noted how many freelancers have been able to learn, harness, and leverage generative AI tools to the point where it has effectively made them ‘superskilled.’ For example, leveraging generative AI tools (e.g. chatbots and AI image generators) has enabled freelancers to become expert-level in copywriting and creativity (images and videos), dramatically broadening their skill base and capabilities, increasing their value in the market, and elevating them to having some of the most in-demand skills. In July, for instance, jobs for ‘Prompt Engineers’ were reported to have salaries of up to $300,000 attached.

It’s worth noting here, however, that to some extent, freelancers finding their AI skills in high demand may be at the expense of some in creative professions, such as artists, who are currently involved in legal battles to protect their skills and work over copyright issues relating to AI tools like image generators.

What Does This Mean For Your Business? 

The considerable increase in demand for human freelance AI skills reflects how AI is changing the tech job market. With time-poor business owners looking for help and support in leveraging AI in a value-adding way, freelancers who have up-skilled themselves and boosted their productivity by learning how to use AI tools now find themselves able to meet that demand and be well positioned for the future growth of AI.

For example, a recent (US) LinkedIn survey of executives found that 44 per cent intend to expand their use of AI technologies in the next year, with 47 per cent believing it will improve their productivity. Even though many tech freelancers may already have related degrees or experience, learning the new AI concepts and tools is now an important way that tech professionals can advance their careers.

That said, although freelancers can learn how to use AI tools, they will still need to know and to demonstrate how to use the technology in the right way in order to get work on a particular project. It’s also important to look at the sheer speed of developments in generative AI and how rapidly the market and tech jobs are changing to realise that we’re still really at the beginning, and that there’s a lot more to learn and more changes to come as AI alters the employment landscape all the way up the value chain.

Tech News : Watermark Trial To Spot AI Images

Google’s AI research lab DeepMind has announced that in partnership with Google Cloud, it’s launching a beta version of SynthID, a tool for watermarking and identifying AI-generated images.

The AI Image Challenge

Generative AI technologies are rapidly evolving, and AI-generated imagery, also known as ‘synthetic imagery,’ is becoming much harder to distinguish from images not created by an AI system. Many AI-generated images are now so good that they can easily fool people, and there are now so many (often free) AI image generators being widely used that misuse is becoming more common.

This raises a host of ethical, legal, economic, technological, and psychological concerns ranging from the proliferation of deepfakes that can be used for misinformation and identity theft, to legal ambiguities around intellectual property rights for AI-generated content. Also, there’s potential for job displacement in creative fields as well as the risk of perpetuating social and algorithmic biases. The technology also poses challenges to our perception of reality and could erode public trust in digital media. Although the synthetic imagery challenge calls for a multi-disciplinary approach to tackle it, many believe a system such as ‘watermarking’ may help in terms of issues like ownership, misuse, and accountability.

What Is Watermarking?  

Creating a special kind of watermark for images to identify them as being AI-produced is a relatively new idea, but adding visible watermarks to images is a method that’s been used for many years (to show copyright and ownership) on sites including Getty Images, Shutterstock, iStock Photo, Adobe Stock and many more. Watermarks are designs that can be layered on images to identify them. Images can have visible or invisible, reversible or irreversible watermarks added to them. Adding a watermark can make it more difficult for an image to be copied and used without permission.

What’s The Challenge With AI Image Watermarking? 

AI-generated images can be produced on the fly, customised, and highly complex, making it challenging to apply a one-size-fits-all watermarking technique. Also, AI can generate a large number of images in a short period of time, making traditional watermarking impractical. Furthermore, simply adding a visible watermark to one area of an image (e.g. the extremities) means it can be cropped out, and images can be edited to remove it.

Google’s SynthID Watermarking 

Google’s SynthID tool works with Google Cloud’s ‘Imagen’ text-to-image diffusion model (an AI text-to-image generator) and uses a combined approach of adding and detecting watermarks. For example, the SynthID tool can add an imperceptible watermark to synthetic images produced by Imagen without compromising image quality, and the watermark remains detectable even after modifications (e.g. the addition of filters, changing colours, and saving with various lossy compression schemes – most commonly used for JPEGs). SynthID can also scan an image for its digital watermark, assess the likelihood of the image having been created by Imagen, and provide the user with three confidence levels for interpreting the results.

Based On Metadata 

Adding metadata to an image file (e.g. who created it and when), plus adding digital signatures to that metadata, can show if an image has been changed. Where the metadata is intact, users can easily identify an image, but metadata can be manually removed when files are edited.
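As a simple sketch of how a digital signature over metadata can reveal tampering, the illustrative Python below signs a metadata dictionary with an HMAC and then detects any subsequent change. This is a deliberately simplified toy under stated assumptions: the secret key and field names are hypothetical, and real provenance schemes typically use public-key signatures embedded in the file rather than a shared-secret HMAC.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"issuer-signing-key"  # hypothetical key held by the image issuer

def sign_metadata(metadata: dict) -> str:
    """Produce a signature over the canonical JSON form of the metadata."""
    canonical = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()

def is_untampered(metadata: dict, signature: str) -> bool:
    """Re-compute the signature and compare in constant time."""
    return hmac.compare_digest(sign_metadata(metadata), signature)

meta = {"creator": "Imagen", "created": "2023-11-01", "ai_generated": True}
sig = sign_metadata(meta)
print(is_untampered(meta, sig))   # True: metadata intact

meta["ai_generated"] = False      # someone edits the metadata
print(is_untampered(meta, sig))   # False: change detected
```

Note the limitation the article goes on to describe: a signature can prove the metadata was altered, but it cannot help if the metadata is stripped out entirely.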

Google says the SynthID watermark is embedded in the pixels of an image and is compatible with other image identification approaches that are based on metadata and, most importantly, the watermark remains detectable even when metadata is lost.
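Google has not published SynthID’s internals, and its learned, edit-robust watermark is far more sophisticated than anything shown here, but the general idea of hiding a mark in pixel values (so that it survives metadata stripping) can be illustrated with a deliberately simple least-significant-bit sketch. Unlike SynthID’s watermark, this toy mark would not survive cropping, resizing, or compression; the bit pattern and pixel values are hypothetical.

```python
WATERMARK_BITS = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical 8-bit mark

def embed(pixels: list, bits: list) -> list:
    """Hide each watermark bit in the least-significant bit of a pixel value."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # imperceptible change of at most 1 per pixel
    return out

def detect(pixels: list, bits: list) -> bool:
    """Check whether the expected bit pattern is present in the pixel LSBs."""
    return [p & 1 for p in pixels[: len(bits)]] == bits

image = [200, 131, 54, 77, 90, 18, 243, 66, 12, 99]  # grayscale pixel values 0-255
marked = embed(image, WATERMARK_BITS)
print(detect(marked, WATERMARK_BITS))  # True: mark found in the pixels themselves
print(detect(image, WATERMARK_BITS))   # False: unmarked image
```

Because the mark lives in the pixel data rather than in a metadata block, deleting the file’s metadata leaves it untouched, which is the property the SynthID approach relies on.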

Other Advantages 

Some of the other advantages of the SynthID watermark addition and detection tool are:

– Images are modified so as to be imperceptible to the human eye.

– Even if an image has been heavily edited and the colour, contrast and size changed, the DeepMind technology behind the tool will still be able to tell if an image is AI-generated.

Part Of The Voluntary Commitment

The idea of watermarking to expose and filter AI-generated images falls within the commitment of seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) who recently committed to developing AI safeguards. Part of the commitments under the ‘Earning the Public’s Trust’ heading was to develop robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system, thereby enabling creative uses of AI while reducing the dangers of fraud and deception.

What Does This Mean For Your Business?

It’s now very easy for people to generate AI images with any of the AI image generating tools available, with many of these images able to fool viewers, possibly resulting in ethical, legal, economic, political, technological, and psychological consequences. Having a system that can reliably identify AI-generated images (even if they’ve been heavily edited) is therefore of value to businesses, citizens, and governments.

Although Google admits its SynthID system is still experimental and not foolproof, it at least means something fairly reliable will be available soon at a time when AI seems to be running ahead of regulation and protection. One challenge, however, is that although there is a general commitment by the big tech companies to watermarking, the SynthID tool is heavily tied to Google’s DeepMind, Cloud, and Imagen, and other companies may be pursuing different methods, i.e. there may be a lack of standardisation.

That said, it’s a timely development and it remains to be seen how successful it can be and how watermarking and/or other methods develop going forward.

Tech Insight : 70% Of Companies Using Generative AI

A new VentureBeat survey has revealed that 70 per cent of companies are experimenting with generative AI.

Most Experimenting and Some Implementing 

The (ongoing) survey, which was started ahead of the tech news and events company’s recently concluded VB Transform 2023 Conference in San Francisco, gathered the opinions of global executives in data, IT, AI, security, and marketing.

The results revealed that more than half (54.6 per cent) of organisations are experimenting with generative AI, with 18.2 per cent already implementing it into their operations. That said, only a relatively small percentage (18.2 per cent) expect to spend more on the technology in the year ahead.

A Third Not Deploying Gen AI 

One perhaps surprising (for those within tech) statistic from the VentureBeat survey is that quite a substantial proportion of respondents (32 per cent) said they weren’t deploying gen AI for other use cases, or weren’t using it at all yet.

More Than A Quarter In The UK Have Used Gen AI 

The general popularity of generative AI is highlighted by a recent Deloitte survey which showed that more than a quarter of UK adults have used gen AI tools like chatbots, while 4 million people have used it for work.

Popular Among Younger People

Deloitte’s figures also show that more than a quarter (26 per cent) of 16-to-75 year-olds have used a generative AI tool (13 million people) with one in 10 of those respondents using it at least once a day.

Adoption Rate of Gen AI Higher Than Smart Speakers 

The Deloitte survey also highlights how the rate of adoption of generative AI exceeds that of voice-assisted speakers like Amazon’s Alexa. For example, it took voice-assisted speakers five years to reach the adoption levels that generative AI has achieved since its uptake began in earnest last November with ChatGPT’s introduction.

How Are Companies Experimenting With AI? 

Returning to the VentureBeat survey, unsurprisingly, it shows that most companies currently use AI for tasks like chat and messaging (46 per cent) as well as content creation (32 per cent), e.g. ChatGPT.

A Spending Mismatch 

However, many companies are experimenting, yet few envisage spending more on AI tools in the year ahead, which reveals a mismatch that could challenge the implementation of AI. VentureBeat has suggested that possible reasons for this include constrained company budgets and a lack of budget prioritisation for generative AI.

A Cautious Approach 

It is thought that an apparently cautious approach to generative AI adoption by businesses, highlighted by the VentureBeat survey, may be down to reasons like:

– A shortage of talent and/or resources for generative AI (36.4 per cent).

– Insufficient support from leaders or stakeholders (18.2 per cent).

– Being overwhelmed by too many options and possible uses – not sure how best to deploy the new technology.

– The rapid pace of change in generative AI, meaning that some prefer to wait rather than commit now.

What Does This Mean For Your Business? 

Although revolutionary, generative AI is a new technology to businesses and, as the surveys show, while many people have tried it and businesses are using it, there are some challenges to its wider adoption and implementation. For example, its novelty and uncertainty about how best to use it (given the breadth of possibilities), an AI skills gap and talent shortage in the market, a lack of budget for it, and its stratospheric growth rate (prompting caution, or waiting for new and better versions or tools that can be tailored to their needs) all need to be overcome to bring about wider adoption by businesses.

These challenges may also mean that generative AI vendors in the marketplace at the moment need to make very clear, compelling, targeted use-cases for the sectors and problem areas of prospective clients in order to convince them to take the plunge. The rapid growth of generative AI is continuing, with a wide variety of text, image, and voice tools being released and the big tech companies all releasing their own versions (e.g. Microsoft’s Copilot and Google’s Bard), so we’re still very much in the early stages of generative AI’s growth, with a great deal of rapid change to come.