Featured Article : Don’t Ask Gemini About The Election

Google has outlined how it will restrict the kinds of election-related questions that its Gemini AI chatbot will return responses to.


With 2024 being an election year for at least 64 countries (including the US, UK, India, and South Africa), the risk of AI being misused to spread misinformation has grown dramatically. The problem is compounded by a lack of trust among some governments (e.g. India's) in AI's reliability. There are also worries about how AI could be abused by adversaries of the country holding an election, e.g. to influence the outcome.

Recently, for example, Google’s AI made the news when its text-to-image tool was deemed overly ‘woke’ and had to be paused and corrected following “inaccuracies.” For example, when Google Gemini was asked to generate images of the Founding Fathers of the US, it returned images of a black George Washington. In another reported test, when asked to generate images of a 1943 German (Nazi) soldier, Gemini’s image generator returned pictures of people of clearly diverse ethnicities (a black and an Asian woman) in Nazi uniforms.

Google also says that its restrictions on election-related responses are being applied out of caution and as part of the company’s commitment to supporting the election process by “surfacing high-quality information to voters, safeguarding our platforms from abuse, and helping people navigate AI-generated content.” 

What Happens If You Ask The ‘Wrong’ Question? 

It’s been reported that Gemini is already refusing to answer questions about the US presidential election, where President Joe Biden and Donald Trump are the two contenders. If, for example, users ask Gemini a question that falls into its election-related restricted category, it’s been reported that they can expect Gemini’s response to be along the lines of: “I’m still learning how to answer this question. In the meantime, try Google Search.” 


With India being the world’s largest democracy (about to undertake the world’s biggest election involving 970 million voters, taking 44 days), it’s not surprising that Google has addressed India’s AI concerns specifically in a recent blog post. Google says: “With millions of eligible voters in India heading to the polls for the General Election in the coming months, Google is committed to supporting the election process by surfacing high-quality information to voters, safeguarding our platforms from abuse and helping people navigate AI-generated content.” 

With its election due to start in April, the Indian government has already expressed its concerns and doubts about AI and has asked tech companies to seek its approval first before launching “unreliable” or “under-tested” generative AI models or tools. It has also warned tech companies that their AI products shouldn’t generate responses that could “threaten the integrity of the electoral process.” 

OpenAI Meeting 

It’s also been reported that representatives from ChatGPT’s developers, OpenAI, met with officials from the Election Commission of India (ECI) last month to look at how OpenAI’s ChatGPT tool could be used safely in the election.

OpenAI advisor and former India head at ‘X’/Twitter, Rishi Jaitly, is quoted from an email to the ECI (made public) as saying: “It goes without saying that we [OpenAI] want to ensure our platforms are not misused in the coming general elections”. 

Could Be Stifling 

However, critics in India have said that clamping down too hard on AI in this way could actually stifle innovation and lead to the industry being suffocated by over-regulation.


Google has highlighted a number of measures it will be using to keep its products safe from abuse and thereby protect the integrity of elections. These include enforcing its policies and using AI models to fight abuse at scale, enforcing policies and restrictions around who can run election-related advertising on its platforms, and working with the wider ecosystem to counter misinformation. This will include working with Shakti, the India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers in India.

What Does This Mean For Your Business? 

The combination of rapidly advancing and widely available generative AI tools, popular social media channels, and paid online advertising looks very likely to pose considerable challenges to the integrity of the large number of global elections this year.

Most notably, with India about to host the world’s largest election, the government there has been clear about its fears over the possible negative influence of AI, e.g. through convincing deepfakes designed to spread misinformation, or AI simply proving to be inaccurate and/or making it much easier for bad actors to exert an influence.

The Indian government has even met with OpenAI to seek reassurance and help. AI companies such as Google (particularly since its embarrassment over its recent ‘woke’ inaccuracies, and perhaps having witnessed the accusations against Facebook after the last US election and the UK’s Brexit vote) are very keen to protect their reputations and show what measures they’ll be taking to stop their AI and other products from being misused with potentially serious results.

Although governments’ fears about AI deepfake interference may well be justified, some would say that following the recent ‘election’ in Russia, misusing AI is less worrying than more direct forms of influence. Also, although protection against AI misuse in elections is needed, a balance must be struck so that AI is not over-regulated to the point where innovation is stifled.

Tech News : Google Pauses Gemini AI Over ‘Historical Inaccuracies’

Only a month after its launch, Google has paused its text-to-image AI tool following “inaccuracies” in some of the historical depictions of people produced by the model.

‘Woke’ … Overcorrecting For Diversity? 

An example of the inaccuracy issue (as highlighted by X user Patrick Ganley recently, after asking Google Gemini to generate images of the Founding Fathers of the US) was when it returned images of a black George Washington. Also, in another reported test, when asked to generate images of a 1943 German (Nazi) soldier, Google’s Gemini image generator returned pictures of people of clearly diverse ethnicities in Nazi uniforms.

The inaccuracies have been described by some as examples of the model subverting the gender and racial stereotypes found in generative AI, showing a reluctance to depict ‘white people’, and/or conforming to ‘woke’ ideas, i.e. the model trying to remove its own bias and improve diversity yet ending up being inaccurate to the point of being comical.

For example, on LinkedIn, Venture Capitalist Michael Jackson said the inaccuracies were a “byproduct of Google’s ideological echo chamber” and that for the “countless millions of dollars that Google spent on Gemini, it’s only managed to turn its AI into a nonsensical DEI parody.” 

China Restrictions Too? 

Another issue (reported by Al Jazeera), noted by a former software engineer at Stripe on X, was that Gemini would not show the image of a man in 1989 Tiananmen Square due to its safety policy and the “sensitive and complex” nature of the event. This, and similar issues have prompted criticism from some that Gemini may also have some kind of restrictions related to China.

What Does Google Say? 

Google posted on X to say about the inaccurate images: “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.” 

Google has, therefore, announced that: “We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.” 

Bias and Stereotyping 

Bias and stereotyping have long been issues in the output of generative AI tools, primarily because AI models learn from vast amounts of data collected from human language and behaviour, which inherently contain biases and stereotypes. As models mimic patterns found in their training data, they can replicate and amplify existing societal biases and stereotypes.

What Does This Mean For Your Business? 

Google has only just announced the combining of Bard with its new Gemini models to create its ‘Gemini Advanced’ subscription service, so this discovery is likely to be particularly unwelcome. The anti-woke backlash and ridicule are certainly something Google could do without right now, but the issue has highlighted the complications of generative AI, how it is trained, and the complexities of how models interpret the data and instructions they’re given. It also shows how AI models, however advanced, don’t actually ‘think’ (as a human would) and can’t perform ‘reality checks’ as humans can, because they don’t ‘live’ in the ‘real world.’ Also, this story shows how early we still are in the generative AI journey.

Google’s explanation has shed some light on the thinking behind the issue and at least it’s admitted to being wide of the mark in terms of historical accuracy, which is clear from some of the examples. It’s all likely to be an embarrassment and a hassle for Google in its competition with Microsoft and its partner OpenAI. Nevertheless, Google seems to think that with a pause plus a few changes, it can tackle the problem and move forward.

Featured Article : Google’s AI Saves Your Conversations For 3 Years

If you’ve ever been concerned about the privacy aspects of AI, you may be very surprised to learn that conversations you have with Google’s new Gemini AI apps are “retained for up to 3 years” by default.

Up To Three Years 

With Google now launching its Gemini Advanced chatbot as part of its ‘Google One AI Premium plan’ subscription, and with its Ultra, Pro, and Nano LLMs now forming the backbone of its AI services, Google’s Gemini Apps Privacy Hub was updated last week. The main support document on the Hub, which states how Google collects data from users of its Gemini chatbot apps for the web, Android, and iOS, makes for interesting reading.

One particular section that has been causing concern and has attracted some unwelcome publicity is the “How long is reviewed data retained?” section. This states that “Gemini Apps conversations that have been reviewed by human reviewers…. are not deleted when you delete your Gemini Apps activity because they are kept separately and are not connected to your Google Account. Instead, they are retained for up to 3 years”. Google clarifies this in its feedback at the foot of the support page saying, “Reviewed feedback, associated conversations, and related data are retained for up to 3 years, disconnected from your Google Account”. It may be of some comfort to know, therefore, that the conversations aren’t linked to an identifiable Google account.

Why Human Reviewers? 

Google says its “trained” human reviewers check conversations to see if Gemini Apps’ responses are “low-quality, inaccurate, or harmful” and that “trained evaluators” can “suggest higher-quality responses”. This oversight can then be used to “create a better dataset” for Google’s generative machine-learning models to learn from so its “models can produce improved responses in the future.” Google’s point is that human reviewers ensure a kind of quality control both in responses and in how and what the models learn, in order to make Google’s Gemini-based apps “safer, more helpful, and work better for all users.” Google also makes the point that the human reviewers may also be required by law (in some cases).

That said, some users may be alarmed that their private conversations are being looked at by unknown humans. Google’s answer to that is the advice: “Don’t enter anything you wouldn’t want a human reviewer to see or Google to use” and “don’t enter info you consider confidential or data you don’t want to be used to improve Google products, services, and machine-learning technologies.” 

Why Retain Conversations For 3 Years? 

Apart from improving performance and quality, other reasons why Google may retain data for years could include:

– The retained conversations act as a valuable dataset for machine-learning models, helping with continuous improvement of the AI’s understanding, language-processing abilities, and response generation, ensuring that the chatbot becomes more efficient and effective in handling a wide range of queries over time.

– For services using AI chatbots as part of their customer support, retained conversations could allow for the review of customer interactions, which could help in assessing the quality of support provided, understanding customer needs and trends, and identifying areas for service improvement.

– Depending on the jurisdiction and the industry, there may be legal requirements to retain communication records for a certain period, i.e. compliance and being able to settle disputes.

– To help monitor for (and prevent) abusive behaviour, and to detect potential security threats.

– Research and development to help advance the field of AI, natural language processing, and machine learning, which could contribute to innovations, more sophisticated AI models, and better overall technology offerings.

Switching off Gemini Apps Activity 

Google does say, however, that users can control what’s shared with reviewers by turning off Gemini Apps Activity. This will mean that any future conversations won’t be sent for human review or used to improve its generative machine-learning models, although conversations will be saved with the account for up to 72 hours (to allow Google to provide the service and process any feedback).

Also, even if you turn off the setting or delete your Gemini Apps activity, other settings including Web & App Activity or Location History “may continue to save location and other data as part of your use of other Google services.”

There’s also the complication that Gemini Apps is integrated and used with other Google services (integration for which Gemini Advanced, formerly Bard, has been designed), and “they will save and use your data” (as outlined by their policies and Google’s overall Privacy Policy).

In other words, there is a way to turn it off, but just how fully turned off it may be is not clear, due to links and integration with Google’s other services.

What About Competitors? 

When looking at Gemini’s competitors, retention of conversations for a period of time by default (in non-enterprise accounts) is not unusual. For example:

– OpenAI saves all ChatGPT content for 30 days whether its conversation history feature is switched off or not (unless the subscription is an enterprise-level plan, which has a custom data retention policy).

– Looking at Microsoft and the use of Copilot, the details are more difficult to find, but from details about using Copilot in Teams it appears that the furthest back Copilot can process is 30 days – indicating a retention time possibly similar to ChatGPT’s.

How Models Are Trained

How AI models are trained, what they are trained on, and whether there has been consent and/or payment for the use of that data is still an ongoing argument, with major AI providers facing multiple legal challenges. This indicates how there is still a lack of understanding, clarity, and transparency around how generative AI models learn.

What About Your Smart Speaker? 

Although we may have private conversations with a generative AI chatbot, many of us may forget that we may have many more private conversations with our smart speaker in the room listening, which also retains conversations. For example, Amazon’s Alexa retains recorded conversations for an indefinite period, although it does provide users with control over their voice recordings. Users have the option to review, listen to, and delete them either individually or all at once through the Alexa app or Amazon’s website. Users can also set up automatic deletion of recordings after a certain period, such as 3 or 18 months – but 18 months may still sound like an alarming amount of time to have a private conversation stored in distant cloud data centres anyway.

What Does This Mean For Your Business? 

Retaining private conversations for what sounds like a long period of time (3 years) and having unknown human reviewers look at those private conversations are likely to be the alarming parts of Google’s privacy information about how its Gemini chatbot is trained and maintained.

The fact that it’s a default (i.e. it’s up to the user to find out about it and turn off the feature), with a 72-hour retention period afterwards and no guarantee that conversations still won’t be shared due to Google’s interrelated and integrated products, may also not feel right to many. The fact, too, that our only real defence is not to share anything even faintly personal or private with a chatbot (which may not be easy, given that many users need to provide information to get the right quality of response) may also be jarring.

It seems that for enterprise users more control over conversations is available, but businesses need to ensure clear guidelines are in place for staff about exactly what kind of information they can share with chatbots in the course of their work. Overall, this story is another indicator of the general lack of clarity and transparency about how chatbots are trained in this new field, and the balance of power still appears to be more in the hands of the tech companies providing the AI. With many legal cases on the horizon about how chatbots are trained, we may expect to see more updates to AI privacy policies soon. In the meantime, we can only hope that AI companies are true to their guidelines and anonymise and aggregate data to protect user privacy and comply with existing data protection laws such as GDPR in Europe or CCPA in California.

Tech News : Google Launches Gemini Subscription

Google has rebranded its Bard chatbot as Gemini, the name of its new powerful AI model family, and launched a $20 per month ‘Gemini Advanced’ subscription service.

Gemini Advanced 

To compete with the likes of ChatGPT, Google has launched its own monthly chatbot subscription service for the same price but with some extras thrown in. Google recently launched Gemini, its “newest and most capable” large language model (LLM) family, available as Ultra, Pro, and Nano. The highly advanced and multimodal AI model was designed to be integrated into its existing ‘Bard’ chatbot.

Rebrand and Subscription Plan 

Google has therefore now rebranded Bard as ‘Gemini Advanced’, after the Ultra 1.0 AI model that now powers it, and released a $19.99 per month subscription to the chatbot. The subscription plan that includes Gemini Advanced has been named the ‘Google One AI Premium Plan.’ Google says the plan includes:

– The Gemini Advanced chatbot (based on its Ultra 1.0 model).

– The benefits of the existing Google One Premium plan, such as 2TB of storage (usually $9.99 per month).

– Available soon for AI Premium subscribers – the ability to use Gemini in Gmail, Docs, Slides, Sheets and more (formerly known as Duet AI).

– A two-month trial at no cost.

Where And How? 

Gemini Advanced is available today in more than 150 countries and territories (including the UK) in English, and Google says it will expand it to more languages over time. It also makes the point that Gemini Pro is already available in 40 languages and more than 230 countries and territories, so it’s likely that Gemini Advanced will be available to the same geographic degree.


Although Google is a little late to the party with Gemini Advanced, the launch has been a way to tidy up and clarify its offering by rebranding, with the former Bard chatbot at the front end and its latest, powerful Gemini models at the back end.

Gemini Advanced offers Google a way to monetise the AI that it’s been investing in for years and compete with OpenAI’s ChatGPT and Microsoft’s Copilot subscription. However, it has more in common with Copilot in that it is designed to integrate with an existing suite of products, whereas OpenAI’s ChatGPT is a standalone offering. That said, OpenAI has worked closely in partnership with Microsoft to develop its AI and, while Google’s AI has been developed by its DeepMind labs, former OpenAI staff members have also worked at DeepMind at certain stages.

Gemini Advanced is, therefore, essentially positioned to compete with OpenAI’s ChatGPT Plus, and Microsoft’s Copilot Pro, all at $20 per month.

What Does This Mean For Your Business?

With ChatGPT Plus, Microsoft’s Copilot Pro, and Google’s Gemini Advanced now available at the same subscription price, businesses have a choice in terms of selecting the AI tools that align most closely with their strategic goals and operational needs. With businesses very likely to be already using Microsoft and Google products daily, plus many using ChatGPT, it’s likely to be a case of weighing up the features, capabilities, and limitations of each AI service against their specific requirements to get the best fit for enhancing productivity and innovation.

Many small business owners may be asking themselves whether extra value can be obtained from yet another monthly subscription for something that many people perceive to be a similar product that hasn’t been around as long (and perhaps not trained as much) as ChatGPT. That said, some may have used ChatGPT long enough to have noticed its limitations as well as its strengths and may feel ready to try a competing product that promises a powerful backend and could help them leverage the power of other Google products. There’s also the temptation/sweetener of the first 2 months free with Gemini Advanced and a large amount of storage which would normally cost $9.99 per month anyway.

Whereas just at the end of 2022 there was only ChatGPT, businesses now have a choice between three similarly positioned AI products, giving some idea of the rapid growth and monetisation in this new competitive market. Businesses may, therefore, now start deciding which AI subscription – ChatGPT Plus, Microsoft’s Copilot Pro, or Google’s Gemini Advanced – best aligns with their goals, operational needs, and existing software ecosystems. This choice may hinge on taking a closer look at each platform’s unique features and capabilities, cost-effectiveness, data privacy standards, and compatibility with the company’s values and long-term innovation potential. For big tech companies, the AI competition is hotting up and we can expect more rapid change to come.

Featured Article : Google Launches Gemini AI Studio

Following on from Google’s recent launch announcement for Gemini (its new super-powered foundation model family), Google has now announced the launch of AI Studio to enable the development of apps and chatbots using Gemini.

Gemini (Pro) 

Google recently announced the introduction of its largest and most capable AI model, Gemini. The three sizes of the model, Ultra, Pro and Nano, are already being rolled out, with Gemini Nano in Android, starting with Pixel 8 Pro, and a specifically tuned version of Gemini Pro in Google’s Bard chatbot. Gemini Pro is now also available for developers and enterprises to build with, using AI Studio.

AI Studio – Leveraging The Power of Gemini 

Google’s new AI Studio (previously called ‘MakerSuite’), which Google describes as “the fastest way to build with Gemini”, is a free, web-based developer tool that enables users to quickly develop prompts and then get an API key to use in app development. In short, it’s a fast, free, easy-to-use tool to enable the creation of apps and chatbots that leverage the power of the Gemini Pro model (and Ultra later next year).

Generous Free Quota 

As Google is keen to point out, users who sign into Google AI Studio with their Google account login can take advantage of the 60 requests per minute free quota, which is 20 times more than other free offerings.
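A developer relying on that free quota would typically add their own client-side throttle so their app never exceeds 60 requests per minute. As a hedged illustration (this `RateLimiter` class is hypothetical, not part of any Google SDK), a simple sliding-window limiter could look like this:

```python
import time
from collections import deque

class RateLimiter:
    """Client-side sliding-window throttle, e.g. for a 60 requests/minute quota."""

    def __init__(self, max_requests=60, window_seconds=60.0, clock=time.monotonic):
        self.max_requests = max_requests
        self.window = window_seconds
        self.clock = clock            # injectable clock, handy for testing
        self.timestamps = deque()     # send times of recent requests

    def _prune(self, now):
        # Discard send times that have fallen outside the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()

    def try_acquire(self):
        """Return True if another request may be sent now, else False."""
        now = self.clock()
        self._prune(now)
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False
```

An app would call `try_acquire()` before each API request and wait (or queue the work) whenever it returns `False`, keeping usage safely inside the free tier.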

How It Works 

Once signed in, AI Studio users simply need to click on “Get code” to transfer their work to their integrated development environment (IDE) of choice or use one of the quickstart templates available in Android Studio, Colab or Project IDX.

Shared With Reviewers To Improve Product Quality 

Google also says that to improve the quality of AI Studio, when using the free quota, it may make the user’s API and Google AI Studio input and output accessible to trained reviewers. Google stresses that in the interests of privacy, this data is de-identified from the user’s Google account and API key.

Currently, Google AI Studio supports both the Gemini Pro and Gemini Pro Vision models, which accommodate text and imagery development, but not yet image creation.

How Much Can You Do With The Free AI Studio? 

It’s been reported that the team behind AI Studio have tried to make sure it doesn’t feel like a very limited trial version or a gated product and that, if the free tier’s rate limits are sufficient for their use, developers can start publishing their AI Studio apps or use them through the API or Google’s software development kits (SDKs) right away.

Which Software Development Kits (SDKs)? 

With Gemini Pro, the SDKs supported include Python, Android (Kotlin), Node.js, Swift and JavaScript, which should enable the building of apps that can run anywhere.

Transition To Vertex AI 

In line with Google’s “growing with Google” (customer retention) concept, AI Studio offers a way for Google to first let users experiment and learn, before seamlessly enabling them to “easily transition” to its fully managed (paid-for) AI developer platform ‘Vertex AI.’ This platform offers the added benefits and value of customisation of Gemini with full data control, and it benefits from additional Google Cloud features for enterprise security, safety, privacy and data governance and compliance.

Those who choose to transition to Vertex will therefore have access to Gemini plus additional capabilities, meaning that they can:

– “Tune and distil” Gemini with their own company’s data and augment it with grounding to include up-to-the-minute information and extensions to take real-world actions.

– Build Gemini-powered search and conversational agents in a low code / no code environment. This includes support for retrieval-augmented generation (RAG), blended search, embeddings, conversation playbooks and more. RAG refers to using facts fetched from external sources to enhance the accuracy and reliability of generative AI models.
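As a toy sketch of the RAG idea just described: relevant facts are retrieved from an external source and prepended to the user's question before the model is called. Everything here is illustrative (a real system would use embeddings and a vector store rather than keyword overlap):

```python
def retrieve(query, documents, top_k=1):
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_augmented_prompt(query, documents):
    """Prepend retrieved facts to the user's question before calling the model."""
    context = "\n".join(retrieve(query, documents))
    return f"Use these facts:\n{context}\n\nQuestion: {query}"
```

The augmented prompt, rather than the bare question, is what gets sent to the generative model, grounding its answer in the fetched facts.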

All this should mean that businesses can use these Google AI services to create their own working, real-world customised chatbots and apps (based on a powerful model), saving time and money and without requiring vast amounts of technical skill to do so. Google is also keen to highlight how using Vertex will protect privacy because Google says it doesn’t train its models on inputs or outputs from Google Cloud customers, and customer data and IPs remain their own. This is likely to be important to the many enterprise customers and developers that Google hopes will adopt AI Studio and then Vertex AI.

Looking Ahead (And Pricing)

As previously mentioned, using Google’s Gemini Pro through AI Studio is currently free, and a pay-as-you-go version (coming soon to AI Studio) will be priced at $0.00025 per 1,000 characters and $0.0025 per image for input, and $0.0005 per 1,000 characters for output.
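Using the per-unit prices quoted above, a rough per-request cost estimate can be sketched as follows (the function and constant names are illustrative, not part of any Google SDK, and real billing may differ):

```python
# Pay-as-you-go prices quoted for Gemini Pro via AI Studio:
INPUT_PER_1K_CHARS = 0.00025   # dollars per 1,000 input characters
IMAGE_PRICE = 0.0025           # dollars per input image
OUTPUT_PER_1K_CHARS = 0.0005   # dollars per 1,000 output characters

def estimate_cost(input_chars, output_chars, images=0):
    """Rough cost in US dollars for one pay-as-you-go request."""
    return (
        (input_chars / 1000) * INPUT_PER_1K_CHARS
        + images * IMAGE_PRICE
        + (output_chars / 1000) * OUTPUT_PER_1K_CHARS
    )
```

For example, a request with 1,000 characters in and 1,000 characters out would come to roughly $0.00075, which gives a feel for how cheap per-request usage is at these rates.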

Google says: “Vertex AI developers can try the same models, with the same rate limits, at no cost until general availability early next year, after which there will be a charge per 1,000 characters or per image across Google AI Studio and Vertex AI.” The Vertex platform is already charged by every 1,000 characters of input (prompt) and every 1,000 characters of output (response).

Gemini, the new, powerful, three-flavoured foundation model family, means users can build their apps and chatbots via Google AI Studio and then Vertex. Ultra, the largest and most capable model, will be launched next year (following testing and tuning). Google also says it plans to bring Gemini to more of its developer platforms like Chrome and Firebase.

What Does This Mean For Your Business? 

In the fast-moving generative AI market, Google’s powerful Gemini models and its infrastructure and tools for leveraging these models (AI Studio and Vertex) enable it to compete with the likes of OpenAI’s GPT-4 model, its API and ChatGPT. With the race now moving towards giving users the tools to make their own customised apps and chatbots (like OpenAI’s GPTs) focused on their own business uses, this is an important competitive step from Google.

AI Studio is also a way to ease users into Google’s AI services, and to retain and upsell them by offering a seamless way to move up to the bigger paid-for platform, Vertex. Being able to build apps and chatbots in an easy, low-code way is likely to be very attractive to most businesses that are sold on the general benefits of AI but want a way to easily tailor it in a value-adding way that is specific to their own business needs. Although Google and the other major tech players are moving quickly to meet these needs, it seems that this is such a fast-moving market that in even just a month or two, other major developments or products can up the ante for all again. OpenAI, for example (after its recent boardroom power struggle), has already announced some major new developments for the very near future.

For now, it’s a case of Google scoring some points with Gemini and its associated infrastructure tools. However, keep watching this space!