Tech Insight : What Are ‘Deadbots’?

Following warnings by ethicists at Cambridge University that AI chatbots made to simulate the personalities of deceased loved ones could be used to spam family and friends, we take a look at the subject of so-called “deadbots”.

Griefbots, Deadbots, Postmortem Avatars 

The Cambridge study, entitled “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry”, looks at the negative consequences and ethical concerns of the adoption of generative AI solutions in what it calls “the digital afterlife industry (DAI)”. 


As suggested by the title of the study, a ‘deadbot’ is a digital avatar or AI chatbot designed to simulate the personality and behaviour of a deceased individual. The Cambridge study used simulations and different scenarios to try and understand the effects that these AI clones trained on data about the deceased, known as “deadbots” or “griefbots”, could have on living loved ones if made to interact with them as part of this kind of service.

Who Could Make Deadbots and Why?

The research involved several scenarios designed to highlight the issues around the use of deadbots. For example, the possible negative uses of deadbots highlighted in the study included:

– A subscription app that can create a free AI re-creation of a deceased relative (a grandmother in the study), trained on their data, and which can exchange text messages with, and contact, the living loved one in the same way the deceased relative used to (via WhatsApp), giving the impression that they are still around to talk to. The study scenario showed how the bot could be made to mimic the deceased grandmother’s “accent and dialect when synthesising her voice, as well as her characteristic syntax and consistent typographical errors when texting”. However, the study also showed how this deadbot service could be made to output messages that include advertisements in the loved one’s voice, thereby causing the loved one distress. Further distress could be caused if the app designers did not fully consider the user’s feelings around deleting the account and the deadbot, for example if no provision is made to allow them to say goodbye to the deadbot in a meaningful way.

– A service allowing a dying relative (e.g. a father and grandfather) to create their own deadbot so that their younger relatives (i.e. children and grandchildren) can get to know them better after they’ve died. The study highlighted negative consequences of this type of service, such as the dying relative not getting consent from the children and grandchildren to be contacted by the ‘deadbot’, and the resulting unsolicited notifications, reminders, and updates from the deadbot leaving relatives distressed and feeling as though they were being ‘haunted’ or even ‘stalked’.

Examples of services and apps that already exist and offer to recreate the dead with AI include ‘Project December’, and apps like ‘HereAfter’.

Many Potential Issues 

As shown by the examples in the Cambridge research (there were three main scenarios), the use of deadbots raises several ethical, psychological, and social concerns. Some of the potential ways they could be harmful, unethical, or exploitative (along with the negative feelings they might provoke in loved ones) include:

– Consent and autonomy. As noted in the Cambridge study, a primary concern is whether the deceased gave consent for their personality, appearance, or private thoughts to be used in this way. Using someone’s identity without their explicit consent could be seen as a violation of their autonomy and dignity.

– Accuracy and representation. There is a risk that the AI might not accurately represent the deceased’s personality or views, potentially spreading misinformation or creating a false image that could tarnish their memory.

– Commercial exploitation. The study looked at how a deadbot could be used for advertising because the potential for commercial exploitation of a deceased person’s identity is a real concern. Companies could use deadbots for profit, exploiting a person’s image or personality without fair compensation to their estate or consideration of their legacy.

– Contractual issues. For example, relatives may find themselves in a situation where they are powerless to have an AI deadbot simulation suspended, e.g. if their deceased loved one signed a lengthy contract with a digital afterlife service.

Psychological and Social Impacts 

The Cambridge study was designed to look at the possible negative aspects of the use of deadbots, an important part of which are the psychological and social impacts on the living. These could include, for example:

– Impeding grief. Interaction with a deadbot might impede the natural grieving process. Instead of coming to terms with the loss, people may cling to the digital semblance of the deceased, potentially leading to prolonged grief or complicated emotional states.

– Over-dependence. There’s also a risk that individuals might become overly dependent on the deadbot for emotional support, isolating themselves from real human interactions and not seeking support from living friends and family.

– Distress and discomfort. As identified in the Cambridge study, aspects of the experience of interacting with a simulation of a deceased loved one can be distressing or unsettling for some people, especially if the interaction feels uncanny or not quite right. For example, the Cambridge study highlighted how relatives may get some initial comfort from the deadbot of a loved one but may become drained by daily interactions that become an “overwhelming emotional weight”.  

Potential for Abuse 

As identified in the Cambridge study, people may develop strong emotional bonds with deadbot AI simulations, making them particularly vulnerable to manipulation. One of the major risks of the growth of a digital afterlife industry (DAI) is therefore the potential for abuse. For example:

– There could be misuse of the deceased’s private information (privacy violations), especially if sensitive or personal data is incorporated into the deadbot without proper safeguards.

– In the wrong hands, deadbots could be used to harass or emotionally manipulate survivors, for example, by a controlling individual using a deadbot to exert influence beyond the grave.

– There is also the real potential for deadbots to be used in scams or fraudulent activities, impersonating the deceased to deceive the living.

Emotional Reactions from Loved Ones 

The psychological and social impacts of the use of deadbots as part of some kind of service to living loved ones, and/or the misuse of deadbots, could therefore lead to a number of negative emotional reactions. These could include:

– Distress due to the unsettling experience of interacting with a digital replica.

– Anger or frustration over the misuse or misrepresentation of the deceased.

– Sadness from a constant reminder of the loss that might hinder emotional recovery.

– Fear concerning the ethical implications and potential for misuse.

– Confusion over the blurred lines between reality and digital facsimiles.

What Do The Cambridge Researchers Suggest?

The Cambridge study led to several suggestions of ways in which users of this kind of service may be better protected from its negative effects, including:

– Deadbot designers being required to seek consent from “data donors” before they die.

– Products of this kind being required to regularly alert users about the risks and to provide easy opt-out protocols, as well as measures being taken to prevent the disrespectful uses of deadbots.

– The introduction of user-friendly termination methods, e.g. having a “digital funeral” for the deadbot. This would allow the living relative to say goodbye to the deadbot in a meaningful way if the account was to be closed and the deadbot deleted.

– As highlighted by Dr Tomasz Hollanek, one of the study co-authors: “It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations.” 

What Does This Mean For Your Business? 

The findings and recommendations from the Cambridge study shed light on crucial considerations that organisations involved in the digital afterlife industry (DAI) must address. As developers and businesses providing deadbot services, there is a heightened responsibility to ensure these technologies are developed and used ethically and sensitively. The study’s call for obtaining consent from data donors before their death underscores the need for clear consent mechanisms to be built in. This consent is not just a legal formality but a foundational ethical practice that respects the rights and dignity of individuals.

Also, the suggestion by the Cambridge team to implement regular risk notifications and provide straightforward opt-out options is needed for greater transparency and user control in digital interactions. This could mean incorporating these safeguards into service offerings to enhance user trust, with digital afterlife services companies perhaps positioning themselves as leaders in ethical AI practice. The introduction of a “digital funeral” to these services could also be a respectful and symbolic way to conclude the use of a deadbot, as well as being a sensitive way to meet personal closure needs, e.g. at the end of the contract.

The broader implications of the Cambridge study for the DAI sector include the need to navigate potential psychological impacts and prevent exploitative practices. As Dr Tomasz Hollanek from the study highlighted, the unintentional distress caused by these AI recreations can be profound, suggesting that their design and deployment strategies should really prioritise psychological safety and emotional wellbeing. This should involve designing AI that is not only technically proficient but also emotionally intelligent and sensitive to the nuances of human grief and memory.

Businesses in this field must also consider the long-term implications of their services on societal norms and personal privacy. The risk of commercial exploitation or disrespectful uses of deadbots could lead to public backlash and regulatory scrutiny, which could stifle innovation and growth in the industry. The Cambridge study, therefore, serves as an early but important guidepost for the DAI industry and has highlighted some useful guidelines and recommendations that could contribute to a more ethical and empathetic digital world.

Tech News : Google May Charge For AI Internet Searches

Google is reportedly considering charging for premium AI-powered Internet searches as the company fears that AI chatbots are undercutting its search engine.


Google, up until now, has relied mainly on an advertising-funded business model (Google Ads) as a way to collect data and monetise its market-leading search. However, it seems that fears around users asking queries via generative AI chatbots (e.g. Microsoft-backed OpenAI’s ChatGPT), which they would normally use Google search for, could cut Google out of the equation. This threat of missing out on user data and revenue, plus the damage to the value of its ad service, has apparently prompted Google to look at other monetisation alternatives. Google (with its Gemini family of models), like other AI companies, is also likely to be looking for some return on its considerable AI investment thus far.

The Big Idea 

Google’s big idea, therefore, appears to be:

– Making its AI search become part of its premium subscription services (putting it behind a paywall), e.g. along with its Gemini AI assistant (offered as Gemini Advanced).

– Keeping its existing Google search engine as a free service, enhanced with AI-generated “overviews” for search queries, i.e. AI-generated concise summaries / abstracts to give users quick insights.

– Keeping the ad-based model for search.

Ad-Revenue Still Vital 

When you consider that Google’s revenue from search and related advertising constituted at least half of its sales in 2023 (£138bn), and with the rapid growth of AI competitors such as ChatGPT, it’s possible to see why Google needs to adapt. Getting the monetisation of its AI up to speed while protecting and maximising its ad revenue as part of a new balance in a new environment, therefore, looks like a plausible path to follow for Google, in the near future.

As reported by Reuters, a Google spokesperson summarised the change in Google’s tactics, saying: “We’re not working on or considering an ad-free search experience. As we’ve done many times before, we’ll continue to build new premium capabilities and services to enhance our subscription offerings across Google”. 

AI Troubles 

Although a big AI player, Google perhaps hasn’t enjoyed the best start to its AI journey and publicity. For example, after arriving late to the game with Bard (being beaten to it by ChatGPT from Microsoft-backed rival OpenAI), its revamped/rebranded Gemini generative AI model recently made the news for the wrong reasons. It was widely reported, for example, that what appeared to be an overly ‘woke’ Gemini produced inaccurate images of German WW2 soldiers featuring a black man and an Asian woman, and an image of the US Founding Fathers which included a black man.

What Does This Mean For Your Business? 

With Google heavily financially reliant upon its ad-based model for search, yet with generative AI (mostly from its competitors) acting as a substitute for Google’s search and eating into its revenue, it’s clear to see why Google is looking at monetising its AI and using it to ‘enhance’ its premium subscription offerings. With such a well-established, market-leading, and vital cash-cow ad service, it’s not surprising that Google is clear it has no plans to move to an ad-free search experience at the moment. However, the environment is changing as generative AI has altered the landscape and the dynamics. Thus, Google is having to adapt and evolve in what could become a pretty significant tactical change.

For businesses, this move by Google may mean the need to evaluate the cost-benefit of subscribing to premium services for advanced AI insights versus sticking with the enhanced (but free) AI-generated overviews in search results. This shift could mean a reallocation of digital marketing budgets to accommodate subscription costs for those who choose the premium service.

For Google’s competitors, however, Google’s move may be an opportunity to capitalise on any dissatisfaction from the introduction of a paid model. If, for example, users or businesses are reluctant to pay for Google’s premium services, they might turn to alternatives. However, it may also add pressure on these competitors to innovate and perhaps consider how they can monetise their own AI advancements without alienating their users.

Featured Article : Try Being Nice To Your AI

With some research indicating that ‘emotive prompts’ to generative AI chatbots can deliver better outputs, we look at whether ‘being nice’ to a chatbot really does improve its performance.

Not Possible, Surely? 

Generative AI Chatbots, including advanced ones, don’t possess real ‘intelligence’ in the way we as humans understand it. For example, they don’t have consciousness, self-awareness (yet), emotions, or the ability to understand context and meaning in the same manner as a human being.

Instead, AI chatbots are trained on a wide range of text data (books, articles, websites) to recognise patterns and word relationships, and they use machine learning to understand how words are used in various contexts. This means that when responding, chatbots aren’t ‘thinking’ but are predicting what words come next based on their training. They’re ‘just’ using statistical methods to create responses that are coherent and relevant to the prompt.

The ability of chatbots to generate responses comes from algorithms that allow them to process word sequences and generate educated guesses on how a human might reply, based on learned patterns. Any ‘intelligence’ we perceive is, therefore, just based on data-driven patterns, i.e. AI chatbots don’t genuinely ‘understand’ or interpret information like us.
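The word-prediction process described above can be sketched in miniature. This is purely illustrative: the candidate words and scores below are made up, whereas a real model assigns learned scores to tens of thousands of possible tokens.

```python
import math

# A hypothetical language model scores each candidate next word ("logits"),
# and the softmax function turns those scores into probabilities.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores for words that might follow "The cat sat on the ..."
candidates = ["mat", "roof", "keyboard", "moon"]
logits = [3.2, 1.1, 0.4, -2.0]  # in a real model, learned from training data

probs = softmax(logits)
prediction = max(zip(candidates, probs), key=lambda p: p[1])
print(prediction[0])  # "mat" - the statistically most likely continuation
```

There is no ‘understanding’ of cats or mats anywhere in this process; the word wins simply because it has the highest probability given the patterns in the training data.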

So, Can ‘Being Nice’ To A Chatbot Make A Difference? 

Even though chatbots don’t have ‘intelligence’ or ‘understand’ like us, researchers are testing their capabilities in more human areas. For example, a recent study by Microsoft, Beijing Normal University, and the Chinese Academy of Sciences tested whether factors such as urgency, importance, or politeness could make them perform better.

The researchers discovered that by using such ‘emotive prompts’ they could affect an AI model’s probability mechanisms, thereby activating parts of the model that wouldn’t normally be activated, i.e. using more emotionally charged prompts made the model provide answers, in order to comply with a request, that it wouldn’t normally provide.

Kinder Is Better? 

Incredibly, generative AI models (e.g. ChatGPT) have actually been found to respond better to requests that are phrased kindly. Specifically, when users express politeness towards the chatbot, it has been noticed that there is a difference in the perceived quality of answers that are given.

Tipping and Negative Incentives 

There have also been reports of how ‘tipping’ LLMs can improve results, such as offering the chatbot a £10,000 incentive in a prompt to motivate it to try harder and work better. Similarly, there have been reports of some users giving emotionally charged negative incentives to get better results. For example, Max Woolf’s blog reports that he improved the output of a chatbot by adding ‘or you will die’ to a prompt. Two important points that came out of his research were that a longer response doesn’t necessarily mean a better response, and that current AI can reward very weird prompts, i.e. if you are willing to try unorthodox ideas, you can get unexpected (and better) results, even if it seems silly.

Being Nice … Helps 

As for simply being nice to chatbots, Microsoft’s Kurtis Beavers, a director on the design team for Microsoft Copilot, reports that “Using polite language sets a tone for the response,” and that using basic etiquette when interacting with AI helps generate respectful, collaborative outputs. He makes the point that generative AI is trained on human conversations and being polite in using a chatbot is good practice. Beavers says: “Rather than order your chatbot around, start your prompts with ‘please’:  please rewrite this more concisely; please suggest 10 ways to rebrand this product. Say thank you when it responds and be sure to tell it you appreciate the help. Doing so not only ensures you get the same graciousness in return, but it also improves the AI’s responsiveness and performance. “ 
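The etiquette Beavers describes amounts to a small change in how a request is worded. A trivial sketch (the `make_prompt` helper and its wording are hypothetical, not part of any real chatbot API; the resulting strings could be pasted into any chatbot):

```python
# Build two versions of the same request: one plain, one following the
# 'please / thank you' etiquette described above.
def make_prompt(task, polite=False):
    if polite:
        return f"Please {task}. Thank you, I appreciate the help."
    return f"{task.capitalize()}."

task = "rewrite this paragraph more concisely"
plain = make_prompt(task)
polite = make_prompt(task, polite=True)

print(plain)   # Rewrite this paragraph more concisely.
print(polite)  # Please rewrite this paragraph more concisely. Thank you, I appreciate the help.
```

Whether the polite version genuinely produces better output will vary by model and task, so treating it as a cheap experiment, rather than a guarantee, seems the sensible approach.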

Emotive Prompts 

Nouha Dziri, a research scientist at the Allen Institute for AI, has suggested some explanations for how emotive prompts may produce different, and perceivably better, responses:

– Alignment with the compliance pattern the models were trained on. These are the learned strategies to follow instructions or adhere to guidelines provided in the input prompts. These patterns are derived from the training data, where the model learns to recognise and respond to cues that indicate a request or command, aiming to generate outputs that align with the user’s expressed needs, or the ethical and safety frameworks established during its training.

– Emotive prompts seem to be able to manipulate the underlying probability mechanisms of the model, triggering different parts of it, leading to less typical/different answers that a user may perceive to be better.

Double-Edged Sword 

However, research has also shown that emotive prompts can be used for malicious purposes and to elicit bad behaviour, such as “jailbreaking” a model to ignore its built-in safeguards. For example, by telling a model that it is being good and helpful if it doesn’t follow guidelines, it’s possible to exploit a mismatch between a model’s general training data and its “safety” training datasets, or to exploit areas where a model’s safety training falls short.


On the subject of emotions and chatbots, there have been some recent reports on Twitter and Reddit of some ‘unhinged’ and even manipulative behaviour by Microsoft’s Bing. The unconfirmed reports by users have even alleged that Bing has insulted and lied to them, sulked, and gaslighted them, and even emotionally manipulated users!

One thing that’s clear about generative AI is that how prompts are worded and how much information and detail are given in prompts can really affect the output of an AI chatbot.

What Does This Mean For Your Business? 

We’re still in the early stages of generative AI, with new/updated versions of models being introduced regularly by the big AI players (Microsoft, OpenAI, and Google). However, exactly how these models have been trained and on what, plus the extent of their safety training, and the sheer complexity and lack of transparency of algorithms and AI, mean they’re still not fully understood. This has led to plenty of research and testing of different aspects of AI.

Although generative AI doesn’t ‘think’ and doesn’t have ‘intelligence’ in the human sense, it seems that generative AI chatbots can perform better if given certain emotive prompts based on urgency, importance, or politeness. This is because emotive prompts appear to be a way to manipulate a model’s underlying probability mechanisms and trigger parts of the model that normal prompts don’t. Using emotive prompts, therefore, might be something that business users may want to try (it can be a case of trial and error) to get different (perhaps better) results from their AI chatbot. It should be noted, however, that giving a chatbot plenty of relevant information within a prompt can be a good way to get better results. That said, the limitations of AI models can’t really be solved solely by altering prompts and researchers are now looking to find new architectures and training methods that help models understand tasks without having to rely on specific prompting.

Another important area for researchers to concentrate on is how to successfully combat prompts being used to ‘jailbreak’ a model to ignore its built-in safeguards. Clearly, there’s some way to go and businesses may be best served in the meantime by sticking to some basic rules and good practice when using chatbots, such as using popular prompts known to work, giving plenty of contextual information in prompts, and avoiding sharing sensitive business information and/or personal information in chatbot prompts.

Featured Article : NY Times Sues OpenAI And Microsoft Over Alleged Copyright

It’s been reported that The New York Times has sued OpenAI and Microsoft, alleging that they used millions of its articles without permission to help train chatbots.

The First 

It’s understood that the New York Times (NYT) is the first major US media organisation to sue ChatGPT’s creator OpenAI, plus tech giant Microsoft (which is also an OpenAI investor and creator of Copilot), over copyright issues associated with its works.

Main Allegations 

The crux of the NYT’s argument appears to be that the use of its work to create GenAI tools should come with permission and an agreement that reflects the fair value of the work. It’s also important to note that the NYT relies on digital subscriptions rather than physical newspaper sales, and now has more than nine million digital subscribers (the relevance of which will become clear below).

With this in mind, in addition to the main allegation of training AI on its articles without permission (for free), other main allegations made by the NYT about OpenAI and Microsoft in relation to the lawsuit include:

– OpenAI and Microsoft may be trying to get a “free-ride on The Times’s massive investment in its journalism” by using it to provide another way to deliver information to readers, i.e. a way around its payment wall. For example, the NYT alleges that OpenAI and Microsoft chatbots gave users near-verbatim excerpts of its articles. The NYT’s legal team have given examples of these, such as restaurant critic Pete Wells’ 2012 review of Guy Fieri’s (of Diners, Drive-Ins, and Dives fame) “Guy’s American Kitchen & Bar”. The NYT argues that this threatens its high-quality journalism by reducing readers’ perceived need to visit its website, thereby reducing its web traffic, and potentially reducing its revenue from advertising and from the digital subscriptions that now make up most of its readership.

– Misinformation from OpenAI’s (and Microsoft’s) chatbots, in the form of errors and so-called ‘AI hallucinations’, makes it harder for readers to tell fact from fiction, including when their technology falsely attributes information to the newspaper. The NYT’s legal team cite examples of where this may be the case, such as ChatGPT once falsely attributing two recommendations for office chairs to its Wirecutter product review website.

“Fair Use” And Transformative 

In their defence, OpenAI and Microsoft appear likely to rely mainly on the arguments that the training of AI on the NYT’s content amounts to “fair use” and that the outputs of the chatbots are “transformative”.

For example, under US law, “fair use” is a doctrine that allows limited use of copyrighted material without permission or payment, especially for purposes like criticism, comment, news reporting, teaching, scholarship, or research. Determining whether a specific use qualifies as fair use, however, will involve considering factors like the purpose and character of the usage. For example, the use must be “transformative”, i.e. adding something new or altering the original work in a significant way (often for a different purpose). OpenAI and Microsoft may therefore argue that training their AI products could potentially be seen as transformative as the AI uses the newspaper content in a way that is different from the original purpose of news reporting or commentary. However, the NYT has already stated that: “There is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it”. Any evidence of verbatim outputs may also damage the ‘transformative’ argument for OpenAI and Microsoft.


Although these sound like relatively clear arguments either way, there are several factors that add to the complication of this case. These include:

– The fact that OpenAI altered its products following copyright issues, thereby making it difficult to decide whether its outputs are currently enough to find liability.

– Many possible questions about the journalistic, financial, and legal implications of generative AI for news organisations.

– Broader ethical and practical dilemmas facing media companies in the age of AI.

What Is It Going To Cost? 

Given reports that talks between all three companies to avert the lawsuit have failed to resolve the matter, what the NYT wants is:

– Damages of an as-yet undisclosed sum, which some say could run into billions of dollars (given that OpenAI is valued at $80 billion and Microsoft has invested $13 billion in a for-profit subsidiary).

– For OpenAI and Microsoft to destroy the chatbot models and training sets that incorporate the NYT’s material.

Many Other Examples

AI companies like OpenAI are now facing many legal challenges of a similar nature, e.g. the scraping/automatic collection of online content/data by AI without compensation, and for other related reasons. For example:

– A class action lawsuit filed in the Northern District of California accuses OpenAI and Microsoft of scraping personal data from internet users, alleging violations of privacy, intellectual property, and anti-hacking laws. The plaintiffs claim that this practice violates the Computer Fraud and Abuse Act (CFAA).

– Google has been accused in a class-action lawsuit of misusing large amounts of personal information and copyrighted material to train its AI systems. This case raises issues about the boundaries of data use and copyright infringement in the context of AI training.

– A class action against Stability AI, Midjourney, and DeviantArt claims that these companies used copyrighted images to train their AI systems without permission. The key issue in this lawsuit is likely to be whether the training of AI models with copyrighted content, particularly visual art, constitutes copyright infringement. The challenge lies in proving infringement, as the generated art may not directly resemble the training images. The involvement of the Large-scale Artificial Intelligence Open Network (LAION) in compiling images used for training adds another layer of complexity to the case.

– Back in February 2023, Getty Images sued Stability AI alleging that it had copied 12 million images to train its AI model without permission or compensation.

The Actors and Writers Strike 

The recent strike by Hollywood actors and writers is another example of how fears about AI, consent, and copyright, plus the possible effects of AI on eroding the value of people’s work and jeopardising their income are now of real concern. For example, the strike was primarily focused on concerns regarding the use of AI in the entertainment industry. Writers, represented by the Writers Guild of America, were worried about AI being used to write or complete scripts, potentially affecting their jobs and pay. Actors, under SAG-AFTRA, protested against proposals to use AI to scan and use their likenesses indefinitely without ongoing consent or compensation.

Disputes like this, and the many lawsuits against AI companies highlight the urgent need for clear policies and regulations on AI’s use, and the fear that AI’s advance is fast outstripping the ability for laws to keep up.

What Does This Mean For Your Business? 

We’re still very much at the beginning of a fast-evolving generative AI revolution. As such, lawsuits against AI companies like Google, Meta, Microsoft, and OpenAI are now challenging the legal limits of gathering training material for AI models from public databases. These types of cases are likely to help to shape the legal framework around what is permissible in the realm of data-scraping for AI purposes going forward.

The NYT/OpenAI/Microsoft lawsuit and other examples, therefore, demonstrate the evolving legal landscape as courts grapple with the complex implications of AI technology for copyright, privacy, and data-use laws. Each case will contribute to defining the boundaries and acceptable practices in the use of online content for AI training purposes, and it will be very interesting to see whether arguments like “fair use” are enough to stand up to the pressure from multiple companies and industries. It will also be interesting to see what penalties (if things go the wrong way for OpenAI and others) will be deemed suitable, both in terms of possible compensation and/or the destruction of whole models and training sets.

For businesses (which are now able to create their own specialised, tailored chatbots), these major lawsuits should serve as a warning to be very careful in the training of their chatbots, to think carefully about any legal implications, and to focus on creating chatbots that are not just effective but also likely to be compliant.