Tech News : Watermark Trial To Spot AI Images

Google’s AI research lab DeepMind has announced that in partnership with Google Cloud, it’s launching a beta version of SynthID, a tool for watermarking and identifying AI-generated images.

The AI Image Challenge

Generative AI technologies are rapidly evolving, and AI-generated imagery, also known as ‘synthetic imagery,’ is becoming much harder to distinguish from images not created by an AI system. Many AI-generated images are now good enough to fool people easily, and with so many (often free) AI image generators in wide use, misuse is becoming more common.

This raises a host of ethical, legal, economic, technological, and psychological concerns ranging from the proliferation of deepfakes that can be used for misinformation and identity theft, to legal ambiguities around intellectual property rights for AI-generated content. Also, there’s potential for job displacement in creative fields as well as the risk of perpetuating social and algorithmic biases. The technology also poses challenges to our perception of reality and could erode public trust in digital media. Although the synthetic imagery challenge calls for a multi-disciplinary approach to tackle it, many believe a system such as ‘watermarking’ may help in terms of issues like ownership, misuse, and accountability.

What Is Watermarking?  

Creating a special kind of watermark to identify images as AI-produced is a relatively new idea, but adding visible watermarks to images (to show copyright and ownership) is a method that has been used for many years on sites including Getty Images, Shutterstock, iStock Photo, Adobe Stock and many more. Watermarks are designs layered onto images to identify them. Images can have visible or invisible, reversible or irreversible watermarks added to them. Adding a watermark can make it more difficult for an image to be copied and used without permission.
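As an illustration of the invisible-watermarking idea (and not of SynthID’s actual technique), here is a minimal sketch that hides watermark bits in the least-significant bits of pixel values, assuming the image is represented as a flat list of 8-bit values:

```python
# Minimal sketch of an invisible (LSB) watermark. The image is assumed
# to be a flat list of 8-bit pixel values; this illustrates the general
# idea only and is not how Google's SynthID works.

def embed_watermark(pixels, bits):
    """Hide each watermark bit in the least-significant bit of a pixel."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read the hidden bits back out of the first `length` pixels."""
    return [p & 1 for p in pixels[:length]]

# Each pixel changes by at most 1 out of 255 -- invisible to the eye.
image = [200, 201, 202, 203, 204, 205, 206, 207]
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, len(mark)) == mark
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

A simple LSB scheme like this is fragile (cropping or recompression destroys it), which is exactly the weakness more robust approaches such as SynthID aim to overcome.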

What’s The Challenge With AI Image Watermarking? 

AI-generated images can be produced on-the-fly, customised, and highly complex, making it challenging to apply a one-size-fits-all watermarking technique. AI can also generate a large number of images in a short period of time, making traditional watermarking impractical. Furthermore, a visible watermark applied to one area of an image (e.g. the extremities) can simply be cropped out, or the image can be edited to remove it.

Google’s SynthID Watermarking 

Google’s SynthID tool works with Google Cloud’s ‘Imagen’ text-to-image diffusion model (an AI text-to-image generator) and takes a combined approach of both adding and detecting watermarks. For example, SynthID can add an imperceptible watermark to synthetic images produced by Imagen without compromising image quality, and the watermark remains detectable even after modifications (e.g. the addition of filters, changing colours, and saving with various lossy compression schemes, most commonly used for JPEGs). SynthID can also scan an image for its digital watermark, assess the likelihood of the image having been created by Imagen, and provide the user with one of three confidence levels for interpreting the result.
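A three-level result like the one described could be interpreted along the following lines; the score range, thresholds, and labels below are illustrative assumptions, not SynthID’s actual internals:

```python
# Hypothetical sketch of mapping a detector score to the three
# confidence levels the article describes. The 0.9/0.1 thresholds
# are made-up illustrative values, not SynthID's real behaviour.

def interpret_score(score):
    """Map a detector score in [0, 1] to one of three confidence levels."""
    if score >= 0.9:
        return "watermark detected"           # likely generated by Imagen
    if score <= 0.1:
        return "watermark not detected"       # unlikely to be Imagen output
    return "watermark possibly detected"      # inconclusive

assert interpret_score(0.95) == "watermark detected"
assert interpret_score(0.50) == "watermark possibly detected"
assert interpret_score(0.02) == "watermark not detected"
```

Presenting three levels rather than a bare yes/no reflects that watermark detection after heavy editing is probabilistic, not certain.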

Based On Metadata 

Adding metadata to an image file (e.g. who created it and when), plus adding digital signatures to that metadata, can show whether an image has been changed. Where the metadata is intact, users can easily identify an image, but metadata can be manually removed when files are edited.
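A brief sketch of how signing metadata can reveal tampering, using Python’s standard hmac library (the key and metadata fields here are invented for illustration and don’t reflect any particular image format):

```python
import hashlib
import hmac
import json

# Illustrative sketch: sign image metadata so that later edits to it
# can be detected. The key and fields are made-up example values.
SECRET_KEY = b"publisher-signing-key"

def sign_metadata(metadata):
    """Produce a keyed signature over the metadata dictionary."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def is_untampered(metadata, signature):
    """Check that the metadata still matches its original signature."""
    return hmac.compare_digest(sign_metadata(metadata), signature)

meta = {"creator": "Example Studio", "created": "2023-08-29"}
sig = sign_metadata(meta)
assert is_untampered(meta, sig)

meta["creator"] = "Someone Else"       # editing the metadata...
assert not is_untampered(meta, sig)    # ...invalidates the signature
```

The limitation the article notes still applies: if the metadata (and its signature) are stripped from the file entirely, there is nothing left to verify, which is why a pixel-embedded watermark is a useful complement.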

Google says the SynthID watermark is embedded in the pixels of an image and is compatible with other image identification approaches that are based on metadata and, most importantly, the watermark remains detectable even when metadata is lost.

Other Advantages 

Some of the other advantages of the SynthID watermark addition and detection tool are:

– The modifications made to images are imperceptible to the human eye.

– Even if an image has been heavily edited and its colour, contrast, and size changed, the DeepMind technology behind the tool will still be able to tell if an image is AI-generated.

Part Of The Voluntary Commitment

The idea of watermarking to expose and filter AI-generated images falls within the commitment of seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) which recently committed to developing AI safeguards. Part of the commitments under the ‘Earning the Public’s Trust’ heading was to develop robust technical mechanisms, such as a watermarking system, to ensure that users know when content is AI-generated, thereby enabling creativity with AI while reducing the dangers of fraud and deception.

What Does This Mean For Your Business?

It’s now very easy for people to generate AI images with any of the many available AI image-generating tools, and many of these images can fool the viewer, with potential ethical, legal, economic, political, technological, and psychological consequences. Having a system that can reliably identify AI-generated images (even if they’ve been heavily edited) is therefore of value to businesses, citizens, and governments.

Although Google admits its SynthID system is still experimental and not foolproof, it at least means something fairly reliable will be available soon at a time when AI seems to be running ahead of regulation and protection. One challenge, however, is that although there is a general commitment by the big tech companies to watermarking, the SynthID tool is heavily tied to Google’s DeepMind, Cloud, and Imagen, and other companies may be pursuing different methods. In other words, there may be a lack of standardisation.

That said, it’s a timely development and it remains to be seen how successful it can be and how watermarking and/or other methods develop going forward.

Tech Insight : 70% Of Companies Using Generative AI

A new VentureBeat survey has revealed that 70 per cent of companies are experimenting with generative AI.

Most Experimenting and Some Implementing 

The (ongoing) survey, which was started ahead of the tech news and events company’s recently concluded VB Transform 2023 Conference in San Francisco, gathered the opinions of global executives in data, IT, AI, security, and marketing.

The results revealed that more than half (54.6 per cent) of organisations are experimenting with generative AI, with 18.2 per cent already implementing it into their operations. That said, only a relatively small percentage (18.2 per cent) expect to spend more on the technology in the year ahead.

A Third Not Deploying Gen AI 

One perhaps surprising statistic (for those within tech) from the VentureBeat survey is that a substantial proportion of respondents (32 per cent) said they weren’t deploying generative AI for other use cases, or weren’t using it at all yet.

More Than A Quarter In The UK Have Used Gen AI 

The general popularity of generative AI is highlighted by a recent Deloitte survey which showed that more than a quarter of UK adults have used gen AI tools like chatbots, while 4 million people have used it for work.

Popular Among Younger People

Deloitte’s figures also show that more than a quarter (26 per cent) of 16-to-75 year-olds have used a generative AI tool (13 million people) with one in 10 of those respondents using it at least once a day.

Adoption Rate of Gen AI Higher Than Smart Speakers 

The Deloitte survey also highlights how the rate of adoption of generative AI exceeds that of voice-assisted speakers like Amazon’s Alexa. For example, it took voice-assisted speakers five years to reach the adoption levels that generative AI has achieved since its uptake began in earnest last November with ChatGPT’s introduction.

How Are Companies Experimenting With AI? 

Returning to the VentureBeat survey, it unsurprisingly shows that most companies currently use AI for tasks like chat and messaging (46 per cent) as well as content creation (32 per cent), e.g. with ChatGPT.

A Spending Mismatch 

However, while many companies are experimenting, few envisage spending more on AI tools in the year ahead, a mismatch that could hold back implementation of AI. VentureBeat has suggested that possible reasons for this include constrained company budgets and a lack of budget prioritisation for generative AI.

A Cautious Approach 

It is thought that an apparently cautious approach to generative AI adoption by businesses, highlighted by the VentureBeat survey, may be down to reasons like:

– A shortage of talent and/or resources for generative AI (36.4 per cent).

– Insufficient support from leaders or stakeholders (18.2 per cent).

– Being overwhelmed by too many options and possible uses – not sure how best to deploy the new technology.

– The rapid pace of change in generative AI, meaning that some prefer to wait rather than commit now.

What Does This Mean For Your Business? 

Although revolutionary, generative AI is a new technology for businesses and, as the surveys show, while many people have tried it and businesses are using it, there are some challenges to its wider adoption and implementation. These include its novelty and uncertainty about how best to use it (given the breadth of possibilities), an AI skills gap and talent shortage in the market, a lack of budget for it, and its stratospheric growth rate (prompting caution, or waiting for new and better versions or tools that can be tailored to their needs).

These challenges may also mean that generative AI vendors currently in the marketplace need to make very clear, compelling, targeted use-cases for the sectors and problem areas of prospective clients in order to convince them to take the plunge. The rapid growth of generative AI is continuing, with a wide variety of text, image, and voice tools being released and the big tech companies all releasing their own versions (e.g. Microsoft’s Copilot and Google’s Bard), so we’re still very much in the early stages of generative AI’s growth, with a great deal of rapid change to come.