Security Stop Press : $6 Million Fine For Deepfake Robocalls

A political consultant who paid a local street magician $150 to create a deepfake anti-Biden robocall urging people not to vote in the New Hampshire Democratic primary is now facing a $6 million fine.

It’s been alleged that Steven Kramer, 54, of New Orleans, commissioned and paid for the bogus Biden AI deepfake voice call, used caller ID spoofing to hide its source, and hired a telemarketing firm to play the fake recording to more than 5,000 voters over the phone.

Mr Kramer now faces felony charges of voter suppression and misdemeanor charges of impersonating a candidate, as well as the multi-million dollar fine from the US Federal Communications Commission (FCC) for the bogus call. This is likely to send a powerful message to those looking to misuse AI deepfakes in this year’s US presidential election.

Featured Article : ‘AI Washing’ – Crackdown

The US investment regulator, the Securities and Exchange Commission (SEC), has dished out penalties totalling $400,000 to two investment companies that made misleading claims about how they used AI, a practice dubbed ‘AI washing’.

What Is AI Washing? 

The term ‘AI washing’ (as used by the investment regulator in this case) refers to the practice of making unsubstantiated or misleading claims about the intelligence or capabilities of a technology product, system, or service in order to give it the appearance of being more advanced (or artificially intelligent) than it actually is.

For example, this can involve overstating the role of AI in products or exaggerating the sophistication of the technology, with the goal often being to attract attention, investment, or market-share by capitalising on the hype and interest surrounding AI technologies.

What Happened? 

In this case, two investment advice companies, Delphia (USA) Inc. and Global Predictions Inc., were judged by the SEC to have made false and misleading statements about their purported use of artificial intelligence (AI).


For example, in the case of Toronto-based Delphia (USA) Inc, the SEC said that from 2019 to 2023 the firm made “false and misleading statements in its SEC filings, in a press release, and on its website regarding its purported use of AI and machine learning that incorporated client data in its investment process”. Delphia claimed that it “put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else.” Following its investigation, the SEC concluded that Delphia’s statements were false and misleading because it didn’t have the AI and machine-learning capabilities it claimed. The SEC also charged Delphia with violating the Marketing Rule, which (among other things) prohibits a registered investment adviser from disseminating any advertisement that includes any untrue statement of material fact.

Delphia neither admitted nor denied the SEC’s findings but agreed to pay a substantial civil penalty of $225,000.

Global Predictions

In the case of San Francisco-based Global Predictions, the SEC says it made false and misleading claims in 2023 on its website and on social media about its purported use of AI. An example cited by the SEC is that Global Predictions falsely claimed to be the “first regulated AI financial advisor” and misrepresented that its platform provided “expert AI-driven forecasts.” Like Delphia, Global Predictions was also found to have violated the Marketing Rule; it falsely claimed to offer tax-loss harvesting services and included an impermissible liability hedge clause in its advisory contract, among other securities law violations.

Following the SEC’s judgement, Global Predictions also neither admitted nor denied the findings and agreed to pay a civil penalty of $175,000.

Investor Alert Issued

The cases of the two investment firms prompted the SEC’s Office of Investor Education and Advocacy to issue a joint ‘Investor Alert’ with the North American Securities Administrators Association (NASAA) and the Financial Industry Regulatory Authority (FINRA) about artificial intelligence and investment fraud.

In the alert, the regulators highlighted the need to “make investors aware of the increase of investment frauds involving the purported use of artificial intelligence (AI) and other emerging technologies.”   

The alert flagged up how “scammers are running investment schemes that seek to leverage the popularity of AI. Be wary of claims — even from registered firms and professionals — that AI can guarantee amazing investment returns” using “unrealistic claims like, ‘Our proprietary AI trading system can’t lose!’ or ‘Use AI to Pick Guaranteed Stock Winners!’”

Beware ‘Pump-and-Dump’ Schemes 

In the alert, the regulators also warned about how “bad actors might use catchy AI-related buzzwords and make claims that their companies or business strategies guarantee huge gains” and how claims about a public company’s products and services relating to AI also might be part of a pump-and-dump scheme. This is a scheme where scammers falsely present an exaggerated view of a company’s stock through misleading positive information online, causing its price to rise as investors rush to buy. The scammers then sell their shares at this inflated price. Once they’ve made their profit and stop promoting the stock, its price crashes, leaving other investors with significant losses.

AI Deepfake Warning 

The regulators also warned of how AI-enabled technology is being used to scam investors using “deepfake” video and audio. Examples of this highlighted by the regulators include:

– Using audio to try to lure older investors into thinking a grandchild is in financial distress and in need of money.

– Scammers using deepfake videos to imitate the CEO of a company announcing false news in an attempt to manipulate the price of a stock.

– Scammers using AI technology to produce realistic-looking websites or marketing materials to promote fake investments or fraudulent schemes.

– Bad actors even impersonating SEC staff and other government officials.

The regulators also highlighted how scammers now often use celebrity endorsements (as they have in the UK, using Martin Lewis’s name and image without consent). The SEC in the US says that making an investment decision just because someone famous says a product or service is a good investment is never a good idea.

Don’t Just Rely On AI-Generated Information For Investments 

In the alert, the US regulators also warn against relying solely on AI-generated information in making investment decisions, e.g. to predict changes in the stock market’s direction or the price of a security. They highlight how AI-generated information might rely on data that is inaccurate, incomplete, or misleading, or be based on false or outdated information about financial, political, or other news events.


The alert offers plenty of advice on how to avoid falling victim to AI-based financial and investment scams with the overriding message being that “Investment claims that sound too good to be true usually are.” The regulators stress the importance of checking credentials and claims, working with registered professionals, and making use of the regulators.

What Does This Mean For Your Business? 

Just as a lack of knowledge about cryptocurrencies has been exploited by fraudsters in Bitcoin scams, regulators are now keen to highlight how a lack of knowledge about AI and its capabilities is being exploited by bad actors in a similar way.

AI may have many obvious benefits, but the message here, as highlighted by the much-publicised fines given to the two investment companies and the alert issued by regulators, is to beware ‘too good to be true’ AI claims. The regulators have highlighted how AI is now being exploited for bad purposes in a number of different ways, including deepfakes and pump-and-dump schemes, via different channels, all designed to exploit the emotions and aspirations of investors and to build trust to the point where they suspend critical analysis of what they’re seeing and reading and react impulsively.

With generative AI (e.g. AI images, videos, and audio cloning) now becoming so realistic and advanced that governments in a key election year are issuing warnings and AI models are being limited in what they can respond to (see, for example, Gemini’s restrictions on election questions), the warning signs are there for financial investors. This story also serves as an example to companies to be very careful about how they represent their usage of AI, what message this gives to customers, and whether claims can be substantiated. It’s likely that we’ll see much more ‘AI washing’ exposed in the near future.

Tech News : OpenAI’s Video Gamechanger

OpenAI’s new ‘Sora’ AI-powered text-to-video tool is so good that its outputs could easily be mistaken for real videos, prompting deepfake fears in a year of important global elections.


OpenAI says that its new Sora text-to-video model can generate realistic videos up to a minute long while maintaining visual quality and adherence to the user’s prompt. Sora can generate entire videos “all at once” or extend generated videos to make them longer.

According to OpenAI, Sora can “generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background”. 


Although Sora is based on OpenAI’s existing technologies, such as its DALL·E image generator and the GPT large language models (LLMs), what makes its outputs so realistic is the combination of Sora being a diffusion model and its use of “transformer architecture”. As a diffusion model, Sora’s video-making process starts with something resembling “static noise”, which is gradually transformed by removing that noise over many steps.
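The noise-removal idea behind diffusion models can be sketched in a few lines of illustrative Python. Everything here is a stand-in: the “denoiser” simply nudges values toward a target pattern, whereas a real model like Sora uses a trained neural network to predict, at each step, which noise to remove.

```python
import numpy as np

def toy_denoise_step(frame: np.ndarray, target: np.ndarray,
                     strength: float = 0.1) -> np.ndarray:
    """One denoising step: move the noisy frame slightly toward the target.
    (A stand-in for a learned denoising network.)"""
    return frame + strength * (target - frame)

def generate(target: np.ndarray, steps: int = 50, seed: int = 0) -> np.ndarray:
    """Start from pure 'static noise' and denoise over many steps."""
    rng = np.random.default_rng(seed)
    frame = rng.standard_normal(target.shape)  # pure noise to begin with
    for _ in range(steps):
        frame = toy_denoise_step(frame, target)
    return frame

target = np.full((8, 8), 0.5)       # pretend this is the "clean" frame
out = generate(target)
print(np.abs(out - target).mean())  # small residual after many steps
```

The key point the sketch illustrates is that each step removes only a little noise, so the recognisable output emerges gradually rather than in one pass.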

Also, transformer architecture means the “model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world”, i.e. it contextualises and pieces together sequential data.

Other aspects that make Sora so special are how its “deep understanding of language” enables it to accurately interpret prompts and “generate compelling characters that express vibrant emotions”, and the fact that Sora can “create multiple shots within a single generated video that accurately persist characters and visual style”. 


OpenAI admits, however, that Sora has its weaknesses, including:

– Not always accurately simulating the “physics of a complex scene” or understanding cause and effect. OpenAI gives the example of a person taking a bite out of a cookie, after which the cookie may not have a bite mark.

– Confusing spatial details of a prompt, e.g. mixing up left and right.

– Struggling with precise descriptions of events that take place over time, e.g. following a specific camera trajectory.

Testing & Safety 

The potential and the power of Sora (for both good and bad) mean that OpenAI appears to be making sure it’s thoroughly tested before releasing it to the public. For example, it’s currently only available to ‘red teamers’, who are assessing critical areas for potential harms and risks, and to a number of visual artists, designers, and filmmakers, whose feedback will be used to advance the model to be most helpful for creative professionals.

Other measures that OpenAI says it’s taking to make sure Sora is safe include:

– Building tools to help detect misleading content, including a detection classifier that can tell when a video was generated by Sora and including C2PA metadata (data that verifies a video’s origin and related information). Both of these could help combat Sora being used for malicious/misleading deepfakes.

– Leveraging the existing safety methods used for DALL·E such as using a text classifier to check and reject text input prompts that violate OpenAI’s usage policies such as requests for extreme violence, sexual content, hateful imagery, celebrity likeness, or intellectual property of others.

– The use of image classifiers that can review each video frame to ensure adherence to OpenAI’s usage policies before a video is shown to the user.


Following the announcement of how realistic Sora’s videos can be, concerns have been expressed online about its potential to be used by bad actors to effectively spread misinformation and disinformation using convincing Sora-produced deepfake videos (if Sora is publicly released in time). The ability of convincing deepfake videos to influence opinion is of particular concern with major elections coming up this year, e.g. in the US, Russia, Taiwan, the UK, and many more countries, and with major high-profile conflicts still ongoing (e.g. in Ukraine and Gaza).

More than 50 countries, which collectively account for half the planet’s population, will hold national elections during 2024, and if Sora’s videos are as convincing as has been reported, and/or the security measures and tools are not as effective as hoped, the consequences for countries, economies, and world peace could be dire.

What Does This Mean For Your Business? 

For businesses, the ability to create amazingly professional and imaginative videos from simple text prompts whenever they want and as often as they want could significantly strengthen their marketing. For example, it could enable them to add value, reduce cost and complications in video making, improve and bolster their image and the quality of their communications, and develop an extra competitive advantage without needing any special video training, skills, or hires.

Sora could, however, also be a negative, disruptive threat to video-producing businesses and those whose value is their video-making skills. Also, as mentioned above, there is the very real threat of political damage or criminal damage (fraud) being caused by the convincing quality of Sora’s videos being used as deepfakes, and the difficulty of trying to control such a powerful tool. Some tech commentators have suggested that AI companies may need to collaborate with social media networks and governments to help tackle the potential risks, e.g. the spreading of misinformation and disinformation once Sora is released for public use.

That said, it will be interesting to see just how good the finished product’s outputs will be. Competitors of OpenAI (and its partner Microsoft) are also working on getting their own text-to-video products out there, including Google’s Lumiere model, so it will also be exciting to see how these compare and how much choice businesses will have.

Featured Article : 3000% Increase in Deepfake Frauds

A new report from ID verification company Onfido shows that the availability of cheap generative AI tools led to deepfake fraud attempts increasing by 3,000 per cent (specifically, by a factor of 31) in 2023.

Free And Cheap AI Tools 

Although deepfakes have now been around for several years, as the report points out, deepfake fraud has become significantly easier and more accessible due to the widespread availability of free and cheap generative AI tools. In simple terms, these tools have democratised the ability to create hyper-realistic fake images and videos, which were once only possible for those with advanced technical skills and access to expensive software.

Prior to the public availability of AI tools, for example, creating a convincing fake video or image required a deep understanding of computer graphics and access to high-end, often costly, software (a barrier to entry for would-be deep-fakers).

Document and Biometric Fraud – The New Frontier 

The Onfido data reveals a worrying trend in that while physical counterfeits are still prevalent, there’s a notable shift towards digital manipulation of documents and biometrics, facilitated by the availability and sophistication of AI tools. Fraudsters are not only altering documents digitally but also exploiting biometric verification systems through deepfakes and other AI-assisted methods. The Onfido report highlights a dramatic rise in the rate of biometric fraud, which doubled from 2022 to 2023.

Deepfakes – A Growing Threat 

As reinforced by the findings of the report, deepfakes pose an emerging and significant threat, particularly in biometric verification. The accessibility of generative AI and face-swap apps has made the creation of deepfakes easier and highly scalable, which is evidenced by a 31X increase in deepfake attempts in 2023 compared to the previous year!

Minimum Effort (And Cost) For Maximum Return

As the Onfido report points out, simple ‘face swapping’ apps (i.e. apps which leverage advanced AI algorithms to seamlessly superimpose one person’s face onto another in photos or videos) offer ease of use and effectiveness in creating convincing fake identities. They are part of an influx of readily available online AI-assisted tools that are providing fraudsters with a new avenue into biometric fraud. For example, the Onfido data shows that biometric fraud attempts are clearly higher this year than in previous years, with fraudsters favouring tools like face-swapping apps to target selfie biometric checks and create fake identities.

The kind of fakes these cheap, easy apps create has been dubbed “cheapfakes”, and this conforms with something that’s long been known about online fraudsters and cyber criminals: they seek methods that require minimum effort, minimum expense, and minimum personal risk, yet deliver maximum effect.

Sector-Specific Impact of Deepfakes 

The Identity Fraud Report shows that (perhaps obviously) the gambling and financial sectors in particular are facing the brunt of these sophisticated fraud attempts. The lure of cash rewards and high-value transactions in these sectors makes them attractive targets for deepfake-driven frauds. In the gambling industry, for example, fraudsters may be particularly attracted to the sign-up and referral bonuses. In the financial industry, where frauds tend to be based around money laundering and loan theft, Onfido reports that digital attacks are easy to scale, especially when incorporating AI tools.

Implications For UK Businesses In The Age of (AI) Deepfake-Driven Fraud 

The surge in deepfake-driven fraud highlighted by the somewhat startling statistics in Onfido’s 2024 Identity Fraud Report suggests that UK businesses navigating this new landscape may require a multifaceted approach, balancing the implementation of cutting-edge technologies with heightened awareness and strategic planning. In more detail, this could involve:

– UK businesses prioritising the reinforcement of their identity verification processes. Traditional methods may no longer suffice against the sophistication of deepfakes. Therefore, adopting AI-powered solutions specifically designed to detect and counter deepfake attempts could be the way forward. This could work as long as such systems can keep up with advancements in fraudulent techniques (more advanced techniques may emerge as more sophisticated AI tools become available).

– The training of staff, i.e. educating them about the nature of deepfakes and how they can be used to perpetrate fraud. This could empower employees to better recognise potential threats and respond appropriately, particularly in sectors like customer service and security, where human judgment plays a key role.

– Maintaining customer trust. UK businesses must navigate the fine line between implementing robust security measures and ensuring a frictionless customer experience. Transparent communication about the security measures in place and how they protect customer data can help in maintaining and even enhancing customer trust.

– As the use of deepfakes in fraud rises, regulatory bodies may introduce new compliance requirements and UK businesses will need to ensure that they stay abreast of these changes both to protect customers and remain compliant with legal standards. This in turn could require more rigorous data protection protocols or mandatory reporting of deepfake-related breaches.

– Collaboration with industry peers and participation in broader discussions about combating deepfake fraud may also be a way to gain valuable insights. Sharing knowledge and strategies, for example, could help in developing industry-wide best practices. Also, partnerships with technology providers specialising in AI and fraud detection could offer access to the latest tools and expertise.

– Since deepfake fraud may be an ongoing threat, long-term strategic planning may be essential. This perspective could be integrated into long-term business strategies, thereby (hopefully) making sure that resources are available and allocated not just for immediate solutions but also for future-proofing against evolving digital threats.

What Else Can Businesses Do To Combat Threats Like AI-Generated Deepfakes? 

Other ways that businesses can contribute to the necessary comprehensive approach to tackling the AI-generated deepfake threat may also include:

– Implementing biometric verification technologies that require live interactions (so-called ‘liveness solutions’), such as head movements, which are difficult for deepfakes to replicate.

– The use of SDKs (platform-specific building tools for developers) over APIs. For example, SDKs provide better protection against fraudulent submissions as they incorporate live capture and device integrity checks.

The Dual Nature Of Generative AI 

Although, as you’d expect an ‘Identity Fraud Report’ to do, the Onfido report focuses solely on the threats posed by AI, it’s important to remember that AI tools can be used by all businesses to add value, save time, improve productivity, get more creative, and to defend against the AI threats. AI-driven verification tools, for example, are becoming more adept at detecting and preventing fraud, underscoring the technology’s dual nature as both a tool for fraudsters and a shield for businesses.

What Does This Mean For Your Business? 

Tempering the reading of the startling stats in the report with the knowledge that Onfido is selling its own deepfake (liveness) detection solution and SDKs, it still paints a rather worrying picture for businesses. That said, the Onfido 2024 Identity Fraud Report’s findings, highlighting a 3,000 per cent increase in deepfake fraud attempts due to readily available generative AI tools, signal a pivotal shift in the landscape of online fraud. This shift could pose new challenges for UK businesses but also open avenues for innovative solutions.

For businesses, the immediate response may involve upgrading identity verification processes with AI-powered solutions tailored to detect and counter deepfakes. However, it’s not just about deploying advanced technology. It’s also about ensuring these systems evolve with the fraudsters’ tactics. Equally crucial is the role of employee training in recognising and responding to these sophisticated fraud attempts.

As regulatory landscapes adjust to these emerging threats, staying informed and compliant is also likely to become essential. The goal is not only to counter current threats but to build resilience and innovation for future challenges.

Tech Insight : What Are ‘Deepfake Testers’?

Here we look at what deepfake videos are, why they’re made, and how they might be quickly and easily detected using a variety of deepfake detection and testing tools.

What Are Deepfake Videos? 

Deepfake videos are a kind of synthetic media created using deep learning and artificial intelligence techniques. Making a deepfake video involves manipulating or superimposing existing images, videos, or audio onto other people or objects to create highly realistic (but typically fake) content. The term “deepfake” comes from the combination of “deep learning” and “fake.”

Why Make Deepfake Videos? 

People create deepfake videos for various reasons, driven by both benign and malicious intentions. Here are some of the main motivations behind the creation of deepfake videos:

– Entertainment and art. Deepfakes can be used as a form of artistic expression or for entertainment purposes. AI may be used, for example, to create humorous videos, mimic famous movie scenes with different actors, or explore creative possibilities.

– Special effects and visual media. In the film and visual effects industry, deepfake technology is often used to achieve realistic visual effects, such as de-aging actors or bringing deceased actors back to the screen (a contentious point at the moment, given the actors’ strike over AI fears). That said, some sportspeople, actors, and celebrities have embraced the technology and are allowing their deepfake identities to be used by companies. Examples include Lionel Messi’s likeness being used by Lay’s crisps and Singapore celebrity Jamie Yeo agreeing a deal with financial technology firm Hugosave.

– Education and research. Deepfakes can be used for research and educational purposes, helping researchers, academics, and institutions study and understand the capabilities and limitations of AI technology.

– Memes and internet culture. In recent times, deepfakes have become part of internet culture and meme communities, where users create and share entertaining or humorous content featuring manipulated faces and voices.

– Face swapping and avatar creation. Some people use deepfakes to swap faces in videos, such as putting their face on a character in a movie or video game or creating avatars for online platforms.

– Satire and social commentary. Deepfake videos are also made to satirise public figures or politicians, creating humorous or critical content to comment on current events and societal issues.

– Privacy and anonymity. In some cases, people may use deepfakes to protect their privacy or identity by concealing their face and voice in videos.

– Spreading misinformation and disinformation. Unfortunately, deepfake technology has been misused to spread misinformation, fake news, and malicious content. Deepfakes can be used to create convincing videos of individuals saying or doing things they never did, including political figures, leading to potential harm, defamation, and the spread of falsehoods.

– Fraud and scams. This is a very worrying area as criminals can now use deepfakes for fraudulent activities, e.g. impersonating someone in video calls to deceive or extort others. For example, deepfake testing company Deepware says: “We expect destructive use of deepfakes, particularly as phishing attacks, to materialise very soon”.

What Are Deepfake Testers? 

With AI deepfakes becoming more convincing and easier to produce thanks to rapidly advancing AI developments and many good AI video, image, and voice services widely available online (many for free), tools that can quickly detect deepfakes have become important. In short, deepfake testers are online tools that can be used to scan any suspicious video to discover if it’s synthetically manipulated. In the case of deepfakes made to spread misinformation and disinformation or for fraud and scams, these can be particularly valuable tools.

How Do They Work? 

For the user, deepfake testers typically involve copying and pasting the URL of a suspected deepfake into the online deepfake testing tool and hitting the ‘scan’ button to get a quick opinion about whether it’s likely to be a deepfake video.

Behind the scenes, there are a number of technologies used by deepfake testers, such as:

– Photoplethysmography (PPG), for detecting changes in blood flow, because deepfake faces don’t give out these signals. This type of detection is more difficult if the deepfake video is pixelated.

– Eye movement analysis. This is because deepfake eyes tend to be divergent, i.e. they don’t look at a central point like real human eyes do.

– Lip Sync Analysis can help highlight a lack of audio and visual synchronisation, something which is a feature of deepfakes.

– Facial landmark detection and tracking algorithms to assess whether the facial movements and expressions align realistically with the audio and overall context of the video.

– Testing for visual irregularities, e.g. unnatural facial movements, inconsistent lighting, or strange artifacts around the face.
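As a hedged illustration of the lip-sync technique above, the sketch below correlates a per-frame audio loudness envelope with a per-frame mouth-opening measurement. Both series here are synthetic stand-ins; a real tester would extract them with speech-processing and face-tracking models, and a weak correlation would be just one deepfake signal weighed among several.

```python
import numpy as np

def lip_sync_score(audio_env: np.ndarray, mouth_open: np.ndarray) -> float:
    """Pearson correlation between audio energy and mouth opening.
    Well-synchronised (genuine) footage should score close to 1."""
    a = (audio_env - audio_env.mean()) / audio_env.std()
    m = (mouth_open - mouth_open.mean()) / mouth_open.std()
    return float(np.mean(a * m))

rng = np.random.default_rng(1)
speech = np.abs(np.sin(np.linspace(0, 12, 200)))      # toy loudness envelope
genuine = speech + 0.1 * rng.standard_normal(200)     # mouth tracks the audio
fake = rng.random(200)                                # mouth unrelated to audio

print(lip_sync_score(speech, genuine))  # high: audio and lips move together
print(lip_sync_score(speech, fake))     # low: a possible deepfake signal
```

The same score-and-threshold pattern applies to the other checks in the list (PPG signals, eye divergence, landmark consistency): each produces a per-video score, and the tester combines them into an overall verdict.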


Some examples of deepfake testers include:


This tool is only for detecting AI-generated face manipulations and can be used via the website, via an API key, or in an offline environment via an SDK. There is also an Android app, and a maximum limit of 10 minutes for each video.

Intel’s FakeCatcher

With a reported 96 per cent accuracy rate, Intel’s deepfake detection platform, introduced last year, was billed as “the world’s first real-time deepfake detector that returns results in milliseconds.” Using Intel hardware and software, it runs on a server and interfaces through a web-based platform.

Microsoft’s Video Authenticator Tool

Announced three years ago, this deepfake detecting tool uses advanced AI algorithms to detect signs of manipulation in media and provides users with a real-time confidence score. It was created using a public dataset from FaceForensics++ and was tested on the DeepFake Detection Challenge Dataset, both of which are key datasets for training and testing deepfake detection technologies.


Sentinel

This AI-based protection platform is used by governments, defence agencies, and enterprises. Users upload their digital media through the website or API, whereupon Sentinel uses advanced AI algorithms to automatically analyse the media. Users are given a detailed report of its findings.


This is an open platform allowing users to upload a video (maximum size 50MB), input their email address, and get an assessment of whether the video is fake.

DuckDuckGoose DeepDetector Software 

This is fully automated deepfake detection software for videos and images, which detects deepfakes in real time and provides explainable, AI-powered outputs to help users understand how each detection was made.


This project aims to develop intelligent human-in-the-loop content verification and disinformation analysis methods and tools whereby social media and web content is analysed and contextualised within the broader online ecosystem. The project offers, for example, a chatbot to guide users through the verification process, an open-source browser plugin, and other open source AI tools, as well as proprietary tools owned by the consortium partners.

What Does This Mean For Your Business? 

Deepfake videos can be fun and satirical; however, there are real concerns that, with AI advancements, deepfake videos are being made to spread misinformation and disinformation. Furthermore, fraud and scam deepfakes can be incredibly convincing and, therefore, dangerous.

Political interference, such as spreading videos of world leaders and politicians saying things they didn’t say, plus using videos to impersonate someone to deceive or extort others, are now very real problems. With it being so difficult to tell for sure just by watching a video whether it’s fake or not, deepfake testing tools can have real value both as a safety measure for businesses and as a fast way for anyone to check out their suspicions.

Deepfake testers, therefore, can contribute to cybercrime prevention and the countering of fake news. The threat posed by deepfakes is only going to grow, so the hope is that as deepfake videos become ever more sophisticated, the detection tools are able to keep up and tell with certainty whether a video is fake.