Tech News : UK Company Scammed Out Of $25 Million Via Deepfakes

It’s been reported that an employee at the London-based design and engineering multinational Arup was duped by a deepfake video call into paying a staggering $25.6 million to fraudsters.

What Happened? 

According to reports published by CNN, back in January, a finance employee in Arup’s Hong Kong office received an email, purporting to be from the company’s UK office, which they initially suspected was a phishing attempt because it requested a secret transaction.

The employee then reportedly took part in a video call with people who looked and sounded like senior staff members (including the CFO) but who were in fact deepfakes! This deepfake video call reportedly led the employee to put aside their earlier doubts and agree to transfer 200 million Hong Kong dollars (around $25.6 million) via 15 separate transactions.

The fraud was reportedly only discovered after the employee made an official inquiry with the company’s headquarters, which resulted in a police investigation.

Confirmed 

A spokesperson from Arup (the company behind world-famous buildings such as Australia’s iconic Sydney Opera House and the Bird’s Nest Stadium in Beijing) has been reported as saying that whilst they can’t go into details, they “can confirm that fake voices and images were used”.

Financial Stability Not Affected 

Despite $25 million going astray and the initial suspicions of a phishing email, Arup reportedly said in an email statement: “Our financial stability and business operations were not affected and none of our internal systems were compromised.”

Many Deepfake Scams 

There have been many high-profile and large-scale deepfake scams in recent years, including:

– In 2023, a deepfake video of consumer champion Martin Lewis was circulated on social media to trick people into investing in an app called ‘Quantum AI’, which scammers claimed was Elon Musk’s new project.

– In 2022, the chief communications officer at the world’s largest crypto exchange, Binance, claimed that a deepfake AI hologram of him (made from video footage of interviews and TV appearances) had been used on a Zoom call to scam another business, leading to significant financial losses.

– In 2020, a branch manager of a Japanese company in Hong Kong received an AI deepfake call that sounded like the company’s director but was actually from fraudsters. The call used AI to mimic the director’s voice to instruct the manager to engage with a fictional lawyer, which then led to the authorisation and transfer of $35 million to fraudulent accounts.

– In 2019, an energy company in the UK was defrauded of €220,000 ($243,000) through a deepfake audio scam. The fraudsters used AI-generated voice technology to impersonate the CEO of the firm’s parent company, instructing a senior executive to transfer funds to a Hungarian supplier.

More Sophisticated Attacks 

Following the recent scamming of Arup, Rob Greig (Arup’s global chief information officer) has been reported as saying: “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.” He noted that “the number and sophistication of these attacks has been rising sharply in recent months”.

What Does This Mean For Your Business? 

This massive $25 million deepfake scam involving Arup is a reminder of the growing sophistication and severity of digital fraud. Sadly, this incident is not an isolated case but part of a broader trend of increasingly advanced scams leveraging AI. The rapid advancements in AI technology and its wide availability have made it easier for fraudsters to create highly convincing deepfake videos and audio, posing significant risks to businesses of all sizes.

For UK businesses, this incident is a reminder of the urgent need to enhance security measures and verification processes. Traditional methods of authentication, such as emails and video calls, can no longer be solely relied upon. Instead, businesses may want to adopt multi-layered security strategies that include advanced AI-based detection tools, biometric verification, and identity verification protocols. Regular training and awareness programmes for employees may also now be essential to help them recognise and respond to potential threats.
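As a purely illustrative example of what an identity-verification protocol for payments might look like in practice, the sketch below encodes one simple rule: any high-value request, any request to a new beneficiary, or any request arriving over an easily spoofed channel (email or video call) must be confirmed out-of-band by at least two approvers before funds move. All names here (TransferRequest, KNOWN_BENEFICIARIES, the threshold) are hypothetical assumptions for the example, not Arup’s actual processes or any specific product.

# Illustrative sketch only: a simple out-of-band verification rule for
# high-value transfer requests. All names and thresholds are hypothetical.

from dataclasses import dataclass

HIGH_VALUE_THRESHOLD = 10_000                      # verify anything above this amount
KNOWN_BENEFICIARIES = {"ACME-SUPPLIES-GB29XXXX"}   # previously vetted accounts

@dataclass
class TransferRequest:
    amount: float
    beneficiary_id: str
    requested_via: str            # e.g. "email", "video_call", "internal_system"
    approver_confirmations: int   # confirmations gathered out-of-band

def requires_out_of_band_check(req: TransferRequest) -> bool:
    """A request is 'risky' if it is high value, goes to a new beneficiary,
    or arrived over an easily spoofed channel (email / video call)."""
    return (
        req.amount >= HIGH_VALUE_THRESHOLD
        or req.beneficiary_id not in KNOWN_BENEFICIARIES
        or req.requested_via in {"email", "video_call"}
    )

def may_execute(req: TransferRequest) -> bool:
    """Only execute if risky requests have been independently confirmed by at
    least two approvers over a pre-registered channel (e.g. a phone number held
    on file, never a contact detail supplied in the request itself)."""
    if requires_out_of_band_check(req):
        return req.approver_confirmations >= 2
    return True

# Example: a $25.6M request made over a video call would be blocked until
# two independent confirmations are recorded.
print(may_execute(TransferRequest(25_600_000, "NEW-ACCOUNT-HK", "video_call", 0)))  # False

The key design choice in this kind of protocol is that confirmation must travel over a channel held on file in advance, never over the channel (email, call, or video) that the request itself arrived on.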

This incident also highlights the critical role of law enforcement and regulatory bodies in combating digital fraud. Enhanced cooperation and information sharing between businesses, cybersecurity experts, and law enforcement agencies are vital to staying ahead of these sophisticated attacks. Implementing stricter regulations on the use and dissemination of AI technology and ensuring that companies have access to the latest detection and prevention tools will be crucial steps in this battle.

The Arup scam demonstrates that even technologically savvy industries are not immune to the threats posed by deepfakes.

Featured Article : Real-Time Deepfake Dating Scams

Here we look at how scammers are now reportedly using face-swapping technology to change their appearance in real-time to conduct video-based romance scams.

Yahoo Boys 

Recently, tech news site ‘Wired’ featured a story about the ‘Yahoo Boys’, a slang term for a loose, Nigeria-based collective of romance scammers who are now using deepfakes and real-time face-swapping technology to take on any appearance in their video feed to the targets of their scams. They are also known to be involved in phishing and other cybercrimes.

Romance Scams 

A romance scam is a type of fraud where someone creates a fake identity to form a relationship with their target, often online, to deceive them into sending money or revealing personal or financial information.

How Big Is the Problem?  

According to the US FBI’s 2023 ‘Internet Crime Report’, the ‘confidence fraud/romance’ category led to the theft of $652,544,805 from victims (actually down by a little over $83 million on the previous year). This is clearly a significant problem, and the real-time video element will doubtless be a factor in making such scams more convincing and more prevalent.

How? What Tech Have They Been Using? 

According to the research of David Maimon, Head of Fraud Insights at SentiLink and a professor at Georgia State University, who has been monitoring the ‘Yahoo Boys’ on Telegram for more than four years, the scammers use phones, laptops, and several different types of popular face-swapping software and apps to create their deepfakes.

Wired has also noted that the so-called Yahoo Boys post videos of themselves at work online, often showing their own faces, and that videos and photos of their activities and recruitment are posted across many popular social media channels, including TikTok and Facebook.

Professor Maimon has also noted that the Yahoo Boys started using deepfakes for their scams as far back as 2022, meaning they have gained considerable experience with these tools and tactics.

Deepfake Call Types 

It’s also been observed (and highlighted by Wired) that the Yahoo Boys scammers use two different types of live deepfake calls to trick their targets. For example:

The first method uses two phones and a face-swapping app. One phone is used to call the target (for example, via Zoom), with its rear camera pointed at the screen of a second phone, which is aimed at the scammer’s face and runs the face-swapping app. In this way, the face the target sees on the real-time video call is completely different from the scammer’s real face.

The second method swaps a laptop for the phone, using a webcam and face-swapping software on the laptop to change the scammer’s face. Videos the scammers have made of themselves using this method reportedly show that they can see their real face displayed alongside the deepfake face, although only the deepfake face is shown to the target on the video call.

Realistic … and Getting Better

In a LinkedIn post showing an example of one of the scammers’ videos, Professor Maimon notes how “Yahoo boys are getting better using AI tools to bring stolen images of social dating users to live” and that the video example he posted “has piqued my interest due to its remarkably natural head movements, overshadowing the only noticeable flaw—the voice, which could be rectified with relative ease.”

How To Spot Deepfake (Video Calls) 

On her X feed, Rachel Tobac, who describes herself as a ‘Hacker & CEO at SocialProof Security,’ offers some tips on how to help spot a deepfake video call, based on the latest deepfake calls available.  These are:

– Get the person to stick out their tongue and move it around (tongue will look odd).

– Have the person move their head to the right & left or up & down to a large degree (it will look angular and boxy).

– Ask the person to get close to the camera and turn their head through a wide-angle (see angular boxy side of head).

– Ask the person to add another person next to them in the call and have the original person walk away and come back to see if a deepfake ‘flops-over’ to a second face.

– Look for discolouration around the scalp or circumference of the face (it may look like unblended makeup).

– Look for light flickering in their hair when they move.

Meeting In Person

As noted by contributor ‘Ally A’ on the LinkedIn post about the Yahoo Boys from Matt Burgess of Wired, a key piece of advice for people who may be involved in these kinds of romantic video calls is: “You can’t trust your eyes and ears anymore. If you can’t meet the person you are talking to online IN PERSON within 2-3 weeks of meeting, you have to assume that they are a scammer.”

AI Advances Helping Scammers

The proliferation of AI technologies and their integration into everyday applications has inadvertently made life easier for online scammers, including those running romance scams. AI-driven tools can now generate realistic and engaging text and images, enabling scammers to create convincing fake profiles and carry out sustained, personalised interactions with little effort, just as the Yahoo Boys have been doing. These sophisticated (but now widely available) tools help scammers tailor their messages and responses to the victim’s preferences and replies, making the deceit more believable. As a result, the barrier to entry is lowered, allowing even those with minimal technical skills to execute complex and convincing scams, and increasing the potential for exploitation and harm to unsuspecting individuals.

How To Protect Yourself 

In addition to Rachel Tobac’s tips for spotting deepfakes (such as those used by the Yahoo Boys), some of the key ways people can protect themselves from falling victim to romance scammers include:

– Verify profiles. Conduct reverse image searches of profile pictures to check if they appear elsewhere on the internet, which can indicate a stolen image.

– Slow down. Be cautious with individuals who escalate the relationship too quickly or profess love unusually early!

– Keep personal information private. Avoid sharing sensitive personal information such as your address, financial details, or social security number.

– Be very sceptical of requests for money. Be highly suspicious if the person you are communicating with requests money, especially if it is for an emergency or a seemingly urgent matter.

– Use secure communication channels. Stick to the platform’s messaging services and avoid switching to less secure or private communication methods too soon.

– Seek second opinions. Discuss your online relationship with friends or family to gain outside perspectives, especially if something feels off.

– Report suspicious behaviour. Report any suspicious profiles or messages to the dating platform and consider filing a complaint with relevant authorities if you suspect a scam.

What Does This Mean For Your Business?

For businesses, understanding the dynamics of the evolving scam landscape, as demonstrated by the techniques employed by the “Yahoo Boys”, is crucial. These scammers, using readily available AI technologies such as deepfakes and real-time face-swapping, underscore a growing trend in cybercrime that leverages cutting-edge technology to exploit vulnerabilities in human psychology, particularly through emotional engagement.

The decentralised nature of these scam networks (where individuals or small groups operate in loose associations while sharing tactics and tools) presents a significant challenge to traditional cybersecurity measures. They operate with a brazen openness, often flaunting their capabilities on social media, which shows a troubling confidence in their ability to evade detection.

The ease of access to AI tools means that the sophistication of scams can evolve as quickly as the technology develops. For businesses, this represents a clear and present danger not just in the form of romance scams targeted at individuals, but as a harbinger of more advanced AI-driven threats that could target companies directly. Phishing scams, impersonation, and business email compromise are just a few examples where similar technologies could be used to deceive employees or manipulate systems for fraudulent purposes.

To safeguard against these threats, businesses need to enhance their defensive strategies by incorporating advanced detection systems that can identify anomalies in communication patterns, authenticate digital identities more robustly, and monitor for signs of emerging threats such as deepfakes. Training employees to recognise and report potential scams is also vital. Creating a culture of security awareness and providing tools to verify information independently can act as a crucial barrier against deception.
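As a rough, hypothetical sketch of what ‘identifying anomalies in communication patterns’ could mean at its simplest, the example below applies a few plain heuristics to an inbound payment-related message and returns reasons to escalate it for manual, out-of-band verification. The field names, thresholds, and example domain are assumptions made for illustration only, not a description of any real detection product.

# Illustrative heuristics only: flag payment-related messages that deviate from
# expected patterns so a human reviews them before any money moves.

URGENCY_TERMS = {"urgent", "immediately", "confidential", "secret"}

def anomaly_flags(message: dict) -> list[str]:
    """Return the reasons (if any) a message should be escalated for manual,
    out-of-band verification rather than acted on directly."""
    flags = []
    if message.get("sender_domain") != message.get("claimed_org_domain"):
        flags.append("sender domain does not match the claimed organisation")
    if message.get("requests_new_bank_details"):
        flags.append("asks to change or add bank details")
    text = message.get("body", "").lower()
    if any(term in text for term in URGENCY_TERMS):
        flags.append("uses urgency/secrecy language typical of payment fraud")
    if message.get("amount", 0) > 5 * message.get("typical_amount", float("inf")):
        flags.append("amount far exceeds the sender's usual request size")
    return flags

# Hypothetical example of a lookalike-domain request that would be escalated.
example = {
    "sender_domain": "arup-payments.example",
    "claimed_org_domain": "arup.com",
    "requests_new_bank_details": True,
    "body": "Please process this confidential transfer immediately.",
    "amount": 1_700_000,
    "typical_amount": 20_000,
}
print(anomaly_flags(example))

In practice, such simple rules would sit alongside (not replace) the AI-based detection, identity verification, and staff training measures described above.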

Tech News : Microsoft Deepfakes Too Dangerous For Release

Microsoft says its new VASA-1 AI framework for generating lifelike talking faces of virtual characters is so good that it could easily be misused to impersonate humans, and that it therefore has “no plans” to release any aspect of it until it can be sure the technology will be used responsibly.

What’s The Problem? 

2024 is an election year in at least 64 countries (including the US, UK, India, and South Africa) and the risk of AI being misused to spread misinformation has grown dramatically.  In the US, for example, the Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law has held a hearing titled “Oversight of AI: Election Deepfakes”. There is also now widespread recognition of the threats posed by deepfakes and proactive measures are being taken by governments and private sectors to safeguard electoral integrity. AI companies are keenly aware of the risks and have been taking their own measures. For example, Google’s Gemini has been restricted in the kinds of election-related questions that its AI chatbot will return responses to.

Google has also recently addressed (in a blog post) India’s concerns about AI’s potential impact, through deepfakes and misinformation, on what is the world’s largest election. None of the main AI companies have, therefore, wanted to simply release their latest generative AI models without being seen to test them and build in what safeguards they can against misuse. Nor are any of them keen to be publicly singled out as enabling electoral interference.

VASA-1 

Microsoft says its VASA-1 AI can produce lifelike audio-driven talking faces, generated in real-time, all from a single static portrait photo and a speech audio clip.

How Good Is It? 

Microsoft says that its premier model, VASA-1, is “capable of not only producing lip movements that are exquisitely synchronised with the audio, but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness.” 

The “core innovations” of VASA-1 include “a holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos”. 

See some demos of VASA-1 in action here: https://www.microsoft.com/en-us/research/project/vasa-1/

Key Benefits

Microsoft says some of the key benefits of the VASA-1 model that set it apart are:

– Realism and liveliness. The model can produce convincing lip-audio synchronisation, and a large spectrum of expressive facial nuances and natural head motions. It can also handle arbitrary-length audio and stably output seamless talking face videos.

– Controllability of generation. Microsoft says its diffusion model accepts optional signals as conditions, such as main eye gaze direction and head distance, and emotion offsets.

– Out-of-distribution generalisation. In other words, the model can handle photo and audio inputs that weren’t present in its training set, e.g., artistic photos, singing audios, and non-English speech.

– Power of disentanglement. VASA-1’s latent representation disentangles appearance, 3D head pose, and facial dynamics, enabling separate attribute control and editing of the generated content.

– Real-time efficiency. Microsoft says VASA-1 generates video frames of 512×512 size at 45fps in the offline batch processing mode and can support up to 40fps in the online streaming mode with a preceding latency of only 170ms, evaluated on a desktop PC with a single NVIDIA RTX 4090 GPU.
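As a rough illustration of what those quoted figures imply (our arithmetic, not Microsoft’s), the snippet below works out the per-frame time budget: roughly 22ms per 512×512 frame in offline mode and 25ms per frame when streaming, on top of the one-off 170ms start-up latency.

# Back-of-envelope check of the per-frame budget implied by the quoted figures.
offline_fps = 45          # 512x512 frames, offline batch mode (as quoted)
online_fps = 40           # online streaming mode (as quoted)
startup_latency_ms = 170  # preceding latency in streaming mode (as quoted)

print(f"Offline budget per frame: {1000 / offline_fps:.1f} ms")    # ~22.2 ms
print(f"Streaming budget per frame: {1000 / online_fps:.1f} ms")   # 25.0 ms
print(f"Plus a one-off ~{startup_latency_ms} ms delay before the first streamed frame")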

Not Yet 

However, Microsoft says it is holding back the release of VASA-1 pending the addressing of privacy and usage issues, stating that: “we have no plans to release an online demo, API, product, additional implementation details, or any related offerings until we are certain that the technology will be used responsibly and in accordance with proper regulations”. 

What Does This Mean For Your Business?

Given what VASA-1 can do, you’d think Microsoft would be itching to get it out there, monetised, and competing with the likes of Google’s Gemini family of models. However, as with Gemini and other generative AI, it may not be fully ready and may have some issues – as Gemini did when it received widespread criticism and had to be worked on to correct ‘historical inaccuracies’ and overly ‘woke’ outputs.

This is also, crucially, an important and busy electoral year globally, with governments nervous, trying to introduce legislation and safeguards, and keeping a close eye on AI companies and their products’ potential to cause damaging deepfake, misinformation/disinformation, and electoral-interference issues, as well as their potential for use in cybercrime. As such, AI companies are queuing up to be seen to act as responsibly and ethically as possible, claiming to hold back and test every aspect of their products that could be misused, thereby also avoiding the scrutiny of governments and regulators, along with potential bad publicity and penalties.

As some have pointed out, however, it would be difficult for anyone to regulate who uses certain AI models, or for what purposes, and some very sophisticated open-source models can already be built from source code found on GitHub by those who are determined. All that said, it shouldn’t be forgotten that VASA-1 appears to be very advanced and could offer many benefits and useful, value-adding applications, e.g. personalising emails and other business mass-communications. It remains to be seen how long Microsoft is prepared to wait before making VASA-1 generally available.