Tech News : Google May Charge For AI Internet Searches

Google is reportedly considering charging for premium AI-powered Internet searches as the company fears that AI chatbots are undercutting its search engine.

Advertising-Funded 

Google, up until now, has relied mainly on an advertising-funded business model (Google Ads) as a way to collect data and monetise its market-leading search. However, fears that users will put queries they would normally run through Google search to generative AI chatbots (e.g. Microsoft-backed OpenAI’s ChatGPT) could cut Google out of the equation. This threat of missing out on user data and revenue, plus damage to the value of its ad service, has apparently prompted Google to look at other monetising alternatives. Google, like other AI companies, is also likely to be looking for some return on its considerable AI investment thus far, in its case in the Gemini family of models.

The Big Idea 

Google’s big idea, therefore, appears to be:

– Making its AI search part of its premium subscription services (putting it behind a paywall), e.g. alongside its Gemini AI assistant (offered as Gemini Advanced).

– Keeping its existing Google search engine as a free service, enhanced with AI-generated “overviews” for search queries, i.e. AI-generated concise summaries / abstracts to give users quick insights.

– Keeping the ad-based model for search.

Ad-Revenue Still Vital 

When you consider that Google’s revenue from search and related advertising constituted at least half of its sales in 2023 (£138bn), and with the rapid growth of AI competitors such as ChatGPT, it’s easy to see why Google needs to adapt. Getting the monetisation of its AI up to speed while protecting and maximising its ad revenue as part of a new balance in a new environment, therefore, looks like a plausible path for Google to follow in the near future.

As reported by Reuters, a Google spokesperson summarised Google’s stance, saying: “We’re not working on or considering an ad-free search experience. As we’ve done many times before, we’ll continue to build new premium capabilities and services to enhance our subscription offerings across Google”. 

AI Troubles 

Although a big AI player, Google perhaps hasn’t enjoyed the best start to its AI journey and publicity. For example, after arriving late to the game with Bard (beaten to it by its Microsoft-backed rival, OpenAI’s ChatGPT), its revamped/rebranded Gemini generative AI model recently made the news for the wrong reasons. It was widely reported, for example, that an apparently overly ‘woke’ Gemini produced inaccurate images of German WW2 soldiers featuring a black man and an Asian woman, and an image of the US Founding Fathers which included a black man.

What Does This Mean For Your Business? 

With Google heavily financially reliant upon its ad-based model for search, yet with generative AI (mostly from its competitors) acting as a substitute for Google’s search and eating into its revenue, it’s easy to see why Google is looking at monetising its AI and using it to ‘enhance’ its premium subscription offerings. With such a well-established, market-leading and vital cash-cow ad service, it’s not surprising that Google is clear it has no plans to offer an ad-free search experience at the moment. However, the environment is changing as generative AI has altered the landscape and the dynamics. Thus, Google is having to adapt and evolve in what could become a pretty significant tactical change.

For businesses, this move by Google may mean the need to evaluate the cost-benefit of subscribing to premium services for advanced AI insights versus sticking with the enhanced (but free) AI-generated overviews in search results. This shift could mean a reallocation of digital marketing budgets to accommodate subscription costs for those who choose the premium service.

For Google’s competitors, however, Google’s move may be an opportunity to capitalise on any dissatisfaction from the introduction of a paid model. If, for example, users or businesses are reluctant to pay for Google’s premium services, they might turn to alternatives. However, it may also add pressure on these competitors to innovate and perhaps consider how they can monetise their own AI advancements without alienating their users.

Tech News : Wait For It …The OpenAI Voice Cloning Tool

OpenAI has announced the preview of its (two years in the making) ‘Voice Engine’ voice cloning tool, although there’s no firm release date yet.

What Can It Do? 

OpenAI says Voice Engine uses “text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker.”  It adds that this “small model” can, with that single 15-second sample, create “emotive and realistic voices.” 

Two Years On 

Voice Engine was first developed almost two years ago, in late 2022. Since then, it’s been used to power the preset voices available in OpenAI’s text-to-speech API, ChatGPT Voice, and Read Aloud. ChatGPT Voice is the feature that enables ChatGPT to take voice commands and speak its responses. OpenAI’s text-to-speech (TTS) API is the service that converts text into natural-sounding speech, i.e. it uses AI models to produce speech that closely mimics human voices.
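
Voice Engine itself has no public release yet, but the existing TTS API mentioned above can already be called. The minimal sketch below uses the official openai Python package; the model and voice names are those documented at the time of writing, and the helper used to save the audio has varied between SDK versions:

    # Minimal sketch: calling OpenAI's existing text-to-speech (TTS) API, whose
    # preset voices are powered by Voice Engine. Requires the 'openai' package
    # and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    response = client.audio.speech.create(
        model="tts-1",      # standard TTS model ("tts-1-hd" is the higher-quality option)
        voice="alloy",      # one of the preset voices
        input="Hello, this sentence will be read aloud in a natural-sounding voice.",
    )

    # Save the generated audio to an MP3 file (helper name may differ in newer SDK versions).
    response.stream_to_file("speech.mp3")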

Being Cautious 

Although the voice cloning tool has been powering other aspects of OpenAI’s voice command and text-to-speech features for almost two years, the announcement of Voice Engine itself has been delivered with more than a hint of caution. For example, OpenAI’s announcement about Voice Engine says it’s just “preliminary insights and results from a small-scale preview.” Also, OpenAI admits it is deliberately taking a “cautious and informed approach to a broader release” which it says is because of the “potential for synthetic voice misuse” (e.g. deepfakes) and the use of convincing fake audio recordings for fraudulent purposes, impersonation, or spreading misinformation.

OpenAI says that it recognises that generating speech that resembles people’s voices “has serious risks, which are especially top of mind in an election year” and is “engaging with U.S. and international partners from across government, media, entertainment, education, civil society and beyond to ensure we are incorporating their feedback as we build.“ 

Also, testing partners for Voice Engine have had to agree to usage policies that prohibit the impersonation of another individual or organisation without consent or legal right. OpenAI is also asking partners to get explicit and informed consent from the original speaker and to disclose to their audience that the voices they’re hearing are AI-generated.

To enable OpenAI to monitor and enforce these policies and requirements, OpenAI says it’s implemented a set of safety measures, which include “watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it’s being used.“ 

What Now? 

Although OpenAI wants to announce that it has developed a powerful AI voice cloning tool, it also wants to temper the disappointment about not releasing it yet by highlighting a few positive uses for Voice Engine. For example, in its recent announcement, OpenAI listed how it could be used to:

  • Provide reading assistance to non-readers and children
  • Translate content like videos and podcasts (for creators and businesses)
  • Support people who are non-verbal (therapeutic applications).

OpenAI also highlights how Voice Engine could prove extremely useful for patients recovering their voice or for those people suffering from sudden or degenerative speech conditions, and for improving essential service delivery in remote settings, thereby reaching global communities.

What Does This Mean For Your Business? 

With this being a very important election year for at least 64 countries (including the US, UK and India), each of the large AI companies is very reluctant to be named as the one that allowed misuse of its AI products and/or didn’t take the right precautions to prevent misuse. For example, just as Google has put restrictions on what its Gemini AI model will answer about elections for fear of it being misused, OpenAI has decided that now is not the right time, without the right protections in place, to release its two-years-in-the-making voice cloning tool.

OpenAI, therefore, is happy to let the world and OpenAI’s competitors know that it has an advanced AI ‘Voice Engine’ in the pipeline, but it isn’t prepared to take the risk of the tool and the company’s name being tarnished by misuse within the global arena of elections. It’s likely that we’ll see much more of this caution being exercised by AI companies releasing new features and products, particularly this year.

For businesses and organisations, plus those in the health/therapy sectors hoping to make use of the powerful, value-adding capabilities of Voice Engine, it’s a case of waiting a bit longer. The danger, however, in the fast-moving field of AI is that while time passes (as testing and safety policies are put in place), a competitor may release a new or updated voice cloning tool of its own in the meantime, thereby stealing some of Voice Engine’s thunder.

Even when Voice Engine is regarded as safe to release, this won’t prevent bad actors from attempting to misuse it, so it will be interesting to see whether it’s as well protected as OpenAI says it will be and what users are able to produce with it. Ultimately, OpenAI will want to get this tool out there, being used by as many people as possible as soon as possible – once this period of caution has passed.

Featured Article : ‘AI Washing’ – Crackdown

The US investment regulator, the Securities and Exchange Commission (SEC), has dished out penalties totalling $400,000 to two investment companies who made misleading claims about how they used AI, a practice dubbed ‘AI Washing’.

What Is AI Washing? 

The term ‘AI washing’ (as used by the investment regulator in this case) refers to the practice of making unsubstantiated or misleading claims about the intelligence or capabilities of a technology product, system, or service in order to give it the appearance of being more advanced (or artificially intelligent) than it actually is.

For example, this can involve overstating the role of AI in products or exaggerating the sophistication of the technology, with the goal often being to attract attention, investment, or market-share by capitalising on the hype and interest surrounding AI technologies.

What Happened? 

In this case, two investment advice companies, Delphia (USA) Inc. and Global Predictions Inc., were judged by the SEC to have made false and misleading statements about their purported use of artificial intelligence (AI).

Delphia 

For example, in the case of Toronto-based Delphia (USA) Inc, the SEC said that from 2019 to 2023, the firm made “false and misleading statements in its SEC filings, in a press release, and on its website regarding its purported use of AI and machine learning that incorporated client data in its investment process”. Delphia claimed that it “put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else.”  Following the SEC’s investigation, the SEC concluded that Delphia’s statements were false and misleading because it didn’t have the AI and machine learning capabilities that it claimed. Delphia was also charged by the SEC with violating the Marketing Rule, which (among other things) prohibits a registered investment adviser from disseminating any advertisement that includes any untrue statement of material fact.

Delphia neither admitted nor denied the SEC’s charges but agreed to pay a substantial civil penalty of $225,000.

Global Predictions

In the case of San Francisco-based Global Predictions, the SEC says it made false and misleading claims in 2023 on its website and on social media about its purported use of AI. An example cited by the SEC is that Global Predictions falsely claimed to be the “first regulated AI financial advisor” and misrepresented that its platform provided “expert AI-driven forecasts.” Like Delphia, Global Predictions was also found to have violated the Marketing Rule, to have falsely claimed that it offered tax-loss harvesting services, and to have included an impermissible liability hedge clause in its advisory contract, among other securities law violations.

Following the SEC’s judgement, Global Predictions also neither admitted nor denied the findings and agreed to pay a civil penalty of $175,000.

Investor Alert Issued

The cases of the two investment firms prompted the SEC’s Office of Investor Education and Advocacy to issue a joint ‘Investor Alert’ with the North American Securities Administrators Association (NASAA), and the Financial Industry Regulatory Authority (FINRA) about artificial intelligence and investment fraud.

In the alert, the regulators highlighted the need to “make investors aware of the increase of investment frauds involving the purported use of artificial intelligence (AI) and other emerging technologies.”   

The alert flagged up how “scammers are running investment schemes that seek to leverage the popularity of AI. Be wary of claims — even from registered firms and professionals — that AI can guarantee amazing investment returns” using “unrealistic claims like, ‘Our proprietary AI trading system can’t lose!’ or ‘Use AI to Pick Guaranteed Stock Winners!” 

Beware ‘Pump-and-Dump’ Schemes 

In the alert, the regulators also warned about how “bad actors might use catchy AI-related buzzwords and make claims that their companies or business strategies guarantee huge gains” and how claims about a public company’s products and services relating to AI also might be part of a pump-and-dump scheme. This is a scheme where scammers falsely present an exaggerated view of a company’s stock through misleading positive information online, causing its price to rise as investors rush to buy. The scammers then sell their shares at this inflated price. Once they’ve made their profit and stop promoting the stock, its price crashes, leaving other investors with significant losses.

AI Deepfake Warning 

The regulators also warned of how AI-enabled technology is being used to scam investors using “deepfake” video and audio. Examples of this highlighted by the regulators include:

– Using audio to try to lure older investors into thinking a grandchild is in financial distress and in need of money.

– Scammers using deepfake videos to imitate the CEO of a company announcing false news in an attempt to manipulate the price of a stock.

– Scammers using AI technology to produce realistic-looking websites or marketing materials to promote fake investments or fraudulent schemes.

– Bad actors even impersonating SEC staff and other government officials.

The regulators also highlight how scammers now often use celebrity endorsements (as they have in the UK, using Martin Lewis’s name and image without consent). The SEC in the US says that making an investment decision just because someone famous says a product or service is a good investment is never a good idea.

Don’t Just Rely On AI-Generated Information For Investments 

In the alert, the US regulators also warn against relying solely on AI-generated information when making investment decisions, e.g. to predict changes in the stock market’s direction or the price of a security. They highlight how AI-generated information might rely on data that is inaccurate, incomplete, or misleading, or could be based on false or outdated information about financial, political, or other news events.

Advice 

The alert offers plenty of advice on how to avoid falling victim to AI-based financial and investment scams with the overriding message being that “Investment claims that sound too good to be true usually are.” The regulators stress the importance of checking credentials and claims, working with registered professionals, and making use of the regulators.

What Does This Mean For Your Business? 

Just as a lack of knowledge about cryptocurrencies has been exploited by fraudsters in Bitcoin scams, regulators are now keen to highlight how a lack of knowledge about AI and its capabilities is being exploited by bad actors in a similar way.

AI may have many obvious benefits, but the message here, highlighted by the much-publicised fines given to the two investment companies and by the regulators’ alert, is to beware of ‘too good to be true’ AI claims. The regulators have shown how AI is now being exploited for bad purposes in a number of different ways, including deepfakes and pump-and-dump schemes, via different channels, all of which are designed to exploit the emotions and aspirations of investors and to build trust to the point where they suspend any critical analysis of what they’re seeing and reading and react impulsively.

With generative AI (e.g. AI images, videos, and audio cloning) now becoming so much more realistic and advanced, to the point where governments in a key election year are issuing warnings and AI models are being limited in what they can respond to (see, for example, Gemini’s restrictions on election questions), the warning signs are there for financial investors. This story also serves as an example to companies to be very careful about how they represent their use of AI, what message this gives to customers, and whether claims can be substantiated. It’s likely that we’ll see much more ‘AI washing’ in the near future.

Tech News : Your AI Twin Might Save Your Life

A new study published in The Lancet shows how an AI tool called Foresight (which fully analyses patient health records and makes digital twins of patients) could be used to predict the future of your health.

What Is Foresight?

The Foresight tool is described by the researchers as a “generative transformer in temporal modelling of patient data, integrating both free text and structured formats.” In other words, it’s a sophisticated AI system that’s designed to analyse patient health records over time.

What Does It All Mean? 

A “generative transformer” is a type of machine-learning model, a large language model (an ‘LLM’), that can generate new data based on what it has learned from previous data. A “transformer” is a specific kind of model that’s very good at handling sequences of data, like sentences in a paragraph or a series of patient health records over time (hence ‘temporal’), i.e. a patient’s electronic health records (EHR).

Unlike other health prediction models, Foresight can use a much wider range of data in different formats. For example, it can draw on medical histories, diagnoses, treatment plans, and outcomes, in both free-text formats, like (unstructured) doctors’ notes or radiology reports, and more structured formats, such as database entries or spreadsheets (with specific fields for patient age, diagnosis codes, or treatment dates).
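
The researchers’ own preprocessing code isn’t reproduced here, but the core idea of merging structured entries and free-text notes into a single chronological patient timeline can be illustrated with a short, purely hypothetical sketch (all field names, values and note text below are invented):

    # Illustrative only: flattening mixed-format patient records into one
    # chronological event sequence, the kind of input a generative transformer
    # such as Foresight models. All field names and values are hypothetical.
    from datetime import date

    structured_records = [
        {"date": date(2021, 3, 2), "type": "diagnosis", "label": "type 2 diabetes"},
        {"date": date(2022, 7, 19), "type": "procedure", "label": "HbA1c test"},
    ]

    free_text_notes = [
        {"date": date(2022, 7, 20), "text": "Patient reports blurred vision; metformin dose reviewed."},
    ]

    def to_timeline(records, notes):
        """Merge structured entries and free-text notes into one ordered timeline."""
        events = [(r["date"], f'{r["type"]}: {r["label"]}') for r in records]
        # In the real system, medical concepts would be extracted from the notes
        # with an NLP model; here the raw text is kept for simplicity.
        events += [(n["date"], f'note: {n["text"]}') for n in notes]
        return [text for _, text in sorted(events, key=lambda event: event[0])]

    print(" -> ".join(to_timeline(structured_records, free_text_notes)))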

 Why? 

The researchers say the study aimed to evaluate how effective Foresight is at modelling patient data and using it to predict a diverse array of future medical outcomes, such as disorders, substances (e.g. relating to medicines, allergies, or poisonings), procedures, and findings (including observations, judgements, or assessments).

The Foresight Difference 

The researchers say that the difference between Foresight and existing approaches to modelling a patient’s health trajectory, which focus mostly on structured data and a subset of single-domain outcomes, is that Foresight can take many more diverse types and formats of data into account.

Also, being an AI model, Foresight can easily scale to more patients, hospitals, or disorders with minimal or no modifications, and like other AI models that ‘learn,’ the more data it receives, the better it gets at using that data.

How Does It Work? (The Method) 

The method tested in the study involved Foresight working in several steps. The Foresight AI tool was tested across three different hospitals, covering both physical and mental health, and five clinicians performed an independent test by simulating patients and outcomes.

In the multistage process, the researchers trained the AI models on medical records and then fed Foresight new healthcare data to create virtual duplicates of patients, i.e. ‘digital twins’. The digital twins of patients could then be used to forecast different outcomes relating to their possible/likely disease development and medication needs, i.e. educated guesses were produced about any future health issues, like illnesses or treatments that might occur for a patient.
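
The trained transformer itself can’t be reproduced in a few lines, but the forecasting loop described above (prompt the model with a patient’s past events, then repeatedly generate the most likely next event) can be sketched with a toy stand-in. The bigram counter below is not the authors’ method; it simply takes the place of the generative model to show the shape of the process, using invented event names:

    # Toy stand-in for the forecasting loop: given a short "prompt" of past events,
    # repeatedly pick the most likely next event. A simple bigram count model
    # replaces the real generative transformer purely for illustration.
    from collections import Counter, defaultdict

    training_timelines = [
        ["diagnosis: hypertension", "prescription: ramipril", "finding: controlled BP"],
        ["diagnosis: hypertension", "prescription: amlodipine", "finding: controlled BP"],
        ["diagnosis: type 2 diabetes", "prescription: metformin", "procedure: HbA1c test"],
    ]

    transitions = defaultdict(Counter)
    for timeline in training_timelines:
        for current_event, next_event in zip(timeline, timeline[1:]):
            transitions[current_event][next_event] += 1

    def forecast(prompt, steps=2):
        """Extend a patient 'prompt' with the most likely next events."""
        timeline = list(prompt)
        for _ in range(steps):
            candidates = transitions.get(timeline[-1])
            if not candidates:
                break
            timeline.append(candidates.most_common(1)[0][0])
        return timeline

    print(forecast(["diagnosis: hypertension"]))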

The Findings 

The main findings of the research were that the Foresight AI tool and the use of digital twins can be used for real-world risk forecasting, virtual trials, and clinical research to study the progression of disorders, to simulate interventions and counterfactuals, and for educational purposes. The researchers said that using this method, they demonstrated that Foresight can forecast multiple concepts into the future and generate whole patient timelines given just a short prompt.

What Does This Mean For Your Business? 

Using an AI tool that can take account of a wider range of patient health data than other methods, make a digital twin, produce simulations, and forecast possible future health issues and treatments (i.e. whole patient timelines, even until death) could have many advantages. For example, as noted by the researchers, it could help medical students to engage in interactive learning experiences by simulating medical case studies. This could help them to practise clinical reasoning and decision-making in a safe environment, as well as helping with ethical training by facilitating discussions on fairness and bias in medicine.

This kind of AI medical prediction-making could also be useful in helping doctors to alert patients to tests they may need to take to enable better disease prevention, as well as helping with issues such as medical resource planning. However, as many AI companies acknowledge, feeding personal and private details (medical records) into AI is not without risk in terms of privacy and data protection. Also, the researchers noted that more tests are needed to validate the performance of the model on long simulations. One other important point to remember is that, regardless of current testing, Foresight is predicting things long into the future for patients and, as such, it’s not yet known how accurate its predictions are.

Following more testing (as long as issues like security, consent, and privacy are adequately addressed) a fully developed method of AI-based health issue prediction could prove to be very valuable to medical professionals and patients and could create new opportunities in areas and sectors related to health, such as fitness, wellbeing,  pharmaceuticals, insurance, and many more.

Featured Article : Don’t Ask Gemini About The Election

Google has outlined how it will restrict the kinds of election-related questions that its Gemini AI chatbot will return responses to.

Why? 

With 2024 being an election year for at least 64 countries (including the US, UK, India, and South Africa), the risk of AI being misused to spread misinformation has grown dramatically. The problem extends to a lack of trust among some governments (e.g. India’s) in AI’s reliability. There are also worries about how AI could be abused by adversaries of the country holding the election, e.g. to influence the outcome.

Recently, for example, Google’s AI made the news when its text-to-image tool proved overly ‘woke’ and had to be paused and corrected following “inaccuracies.” For example, when Google Gemini was asked to generate images of the Founding Fathers of the US, it returned images of a black George Washington. Also, in another reported test, when asked to generate images of a 1943 German (Nazi) soldier, Gemini’s image generator returned pictures of people of clearly diverse ethnicities (a black and an Asian woman) in Nazi uniforms.

Google also says that its restrictions on election-related responses are being applied out of caution and as part of the company’s commitment to supporting the election process by “surfacing high-quality information to voters, safeguarding our platforms from abuse, and helping people navigate AI-generated content.” 

What Happens If You Ask The ‘Wrong’ Question? 

It’s been reported that Gemini is already refusing to answer questions about the US presidential election, where President Joe Biden and Donald Trump are the two contenders. If, for example, users ask Gemini a question that falls into its election-related restricted category, it’s been reported that they can expect Gemini’s response to go along the lines of: “I’m still learning how to answer this question. In the meantime, try Google Search.” 

India 

With India being the world’s largest democracy (about to undertake the world’s biggest election involving 970 million voters, taking 44 days), it’s not surprising that Google has addressed India’s AI concerns specifically in a recent blog post. Google says: “With millions of eligible voters in India heading to the polls for the General Election in the coming months, Google is committed to supporting the election process by surfacing high-quality information to voters, safeguarding our platforms from abuse and helping people navigate AI-generated content.” 

With its election due to start in April, the Indian government has already expressed its concerns and doubts about AI and has asked tech companies to seek its approval first before launching “unreliable” or “under-tested” generative AI models or tools. It has also warned tech companies that their AI products shouldn’t generate responses that could “threaten the integrity of the electoral process.” 

OpenAI Meeting 

It’s also been reported that representatives from ChatGPT’s developer, OpenAI, met with officials from the Election Commission of India (ECI) last month to look at how OpenAI’s ChatGPT tool could be used safely in the election.

OpenAI advisor and former India head at ‘X’/Twitter, Rishi Jaitly, is quoted from an email to the ECI (made public) as saying: “It goes without saying that we [OpenAI] want to ensure our platforms are not misused in the coming general elections”. 

Could Be Stifling 

However, critics in India have said that clamping down too hard on AI in this way could actually stifle innovation and lead to the industry being suffocated by over-regulation.

Protection 

Google has highlighted a number of measures that it will be using to keep its products safe from abuse and thereby protect the integrity of elections. These include enforcing its policies and using AI models to fight abuse at scale, enforcing restrictions around who can run election-related advertising on its platforms, and working with the wider ecosystem to counter misinformation. This will include working with Shakti, the India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers in India.

What Does This Mean For Your Business? 

The combination of rapidly advancing and widely available generative AI tools, popular social media channels, and paid online advertising looks very likely to pose considerable challenges to the integrity of the large number of elections taking place around the world this year.

Most notably, with India about to host the world’s largest election, the government there has been clear about its fears over the possible negative influence of AI, e.g. through convincing deepfakes designed to spread misinformation, or AI simply proving to be inaccurate and/or making it much easier for bad actors to exert an influence.

The Indian government has even met with OpenAI to seek reassurance and help. AI companies such as Google (particularly after its embarrassment over its recent ‘woke’ inaccuracies, and perhaps after witnessing the accusations against Facebook after the last US election and the UK Brexit vote) are very keen to protect their reputations and to show what measures they’ll be taking to stop their AI and other products from being misused, with potentially serious consequences.

Although governments’ fears about AI deepfake interference may well be justified, some would say that following the recent ‘election’ in Russia, misusing AI is less worrying than more direct forms of influence. Also, although protection against AI misuse in elections is needed, a balance must be struck so that AI is not over-regulated to the point where innovation is stifled.

Tech News : Copilot Gets Plugins And Skills Upgrade

Microsoft has announced that its Windows 11 Copilot AI companion (which has been embedded into Microsoft 365’s popular apps) has received an upgrade in the form of new plugins and skills.

Builds On The AI Key 

Microsoft says that the new features build upon the introduction of the Copilot AI Key on new Windows 11 PC keyboards, updates to the Copilot icon on the taskbar, and the ability to dock, undock and resize the Copilot pane.

Adding an AI key to Windows 11 PC keyboards, from which Copilot can be launched directly, was the first significant change to Microsoft keyboards in 30 years and represents another way for Microsoft’s own AI to be woven seamlessly into Windows at the system level.

New Popular App Plugins

The plugins from “favourite apps” that are being added to Copilot now include OpenTable, Shopify, Klarna, and Kayak. Microsoft gives examples of how this will help users, such as:

– Asking Copilot to make a dinner reservation with friends and Copilot using OpenTable to do so.

– For staying in, asking Copilot to create a “healthy dinner party menu for 8” and Copilot using the Instacart app plugin to buy the food, “all within Copilot in Windows”.

New Skills Too 

Microsoft has announced a list of skills that it will be adding to Copilot, beginning in late March, in the categories of settings, accessibility and live information. Examples include turning battery saver on or off, opening the storage page, launching live captions, launching voice input, showing available Wi-Fi networks, and emptying the recycle bin. Essentially, asking Copilot to do these things instead of having to do them yourself is a convenient time-saver that Microsoft hopes will improve user experience and productivity.

New Creativity App Updates 

The rollout of two “creativity app updates” has also been announced by Microsoft. These are:

– Generative Erase for removing unwanted objects or imperfections in images when using the Photos app.

– Clipchamp silence removal preview, which provides an easy way to remove silent gaps in audio tracks for videos.

Other Announcements 

Microsoft has also taken the opportunity to announce other new features and upgrades including the ability to use an Android phone as a webcam on all video conferencing apps, a combined Windows Update for Business deployment service and Autopatch update for enterprise customers, and Windows Ink to enable natural writing on pen-capable PCs.

What Does This Mean For Your Business? 

With Google recently combining its new Gemini models with Bard to create the Gemini Advanced subscription service, which ties the Google suite together with AI, Microsoft (helped by its OpenAI partnership) has come back with its own AI upgrade. Competition is hotting up, and with the integration of Copilot into its popular 365 apps, a significant keyboard change (the addition of the AI key) and now the addition of new plugins and skills, Microsoft is working to create a single, seamless environment managed by AI.

This will mean users can get everything they want within this environment just by asking, thereby offering ultimate ease and convenience with productivity benefits that will appeal to businesses. It seems that, using the same idea as WeChat-style super apps, where users can do everything from one app, major tech players with their own product platforms are now using AI and plugins to achieve a similar thing, gain share, and retain customers. It’s also a way to add value and raise existing barriers to exit by giving users an easy way to achieve everything within one familiar environment.

Tech News : Brave Android Browser Gets ‘Leo’ Assistant

Brave, the privacy-focused browser, has announced the introduction of Leo, its privacy-preserving AI assistant built into the browser on all Android devices.

Users Can Choose Which Model – The Mixtral LLM & Meta’s Llama 2 

Brave says its new ‘Leo’ AI assistant is powered by the open-source Mixtral 8x7B as the default large language model (LLM), a model which has been popular with the developer community since its December release. However, it says the free and premium versions of Leo also feature the Llama 2 13B model from Meta and that users can choose between the models according to their needs and budget. Brave also says that having Mixtral as the default LLM brings “higher quality answers”. 

What Can Leo Do? 

Launched three months ago and having since achieved what Brave describes as “global adoption”, Leo can create real-time summaries of webpages or videos, answer questions about content, and generate new long-form written content. Brave says it can also translate, analyse, or rewrite pages, create transcriptions of video or audio content, and write code. Leo can also interact in multiple languages, including English, French, German, Italian, and Spanish.

In short, it appears to be able to do what other popular generative AI chatbots can do, e.g. ChatGPT.

What’s So Different About Leo? 

With Brave being specifically a privacy-focused browser offering ad tracker blocking and no personal data collection, Brave is keen to point out that what’s different about Leo is that it’s effective generative AI, but with “the same privacy and security guarantees of the Brave browser.”   

Brave says this privacy is achieved by:

– Anonymisation via reverse proxy. Leo uses a reverse proxy that anonymises all requests, ensuring Brave cannot link any request to a specific user or their IP address (a simplified illustration of this pattern follows this list).

– No data retention. Leo’s conversations are not stored on Brave’s servers, and responses are discarded immediately after generation. No personal data or identifiers (such as IP addresses) are retained. For users opting for models from Anthropic, data is held for 30 days by Anthropic before being deleted.

– No mandatory account. Users can access Leo without creating a Brave account for the free version, promoting anonymity. A premium account is optional for multi-device access.

– Privacy-enhanced subscription. Premium subscribers use unlinkable tokens for authentication, ensuring subscription details cannot be associated with their usage. The email used for account creation is also kept separate from daily use, enhancing privacy.
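
Brave hasn’t published its proxy implementation, so the sketch below only illustrates the general reverse-proxy pattern it describes: forward the request body to the inference backend while discarding anything that could identify the client. The upstream URL, port, and the exact set of stripped headers are invented for the example:

    # Illustrative only: the general shape of an anonymising reverse proxy.
    # It strips client-identifying headers before forwarding requests upstream,
    # so the model backend never learns who asked. This is NOT Brave's code.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import Request, urlopen

    UPSTREAM = "https://llm.example.com/v1/chat"   # hypothetical inference endpoint
    STRIP = {"x-forwarded-for", "cookie", "authorization", "user-agent",
             "host", "content-length", "connection"}  # identity plus hop-by-hop headers

    class AnonymisingProxy(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            # Forward only headers that carry no client identity.
            safe_headers = {k: v for k, v in self.headers.items() if k.lower() not in STRIP}
            upstream_request = Request(UPSTREAM, data=body, headers=safe_headers, method="POST")
            with urlopen(upstream_request) as upstream_response:  # the client's IP is never passed on
                payload = upstream_response.read()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(payload)  # nothing is logged or stored locally

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), AnonymisingProxy).serve_forever()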

Free and Subscription Versions 

Although Brave says Leo is free to all users and there is no ‘mandatory’ subscription, as with other chatbots, there is a subscription version at $14.99 per month – cheaper than others like ChatGPT and Gemini Advanced. One subscription covers up to 5 different devices across Android, Linux, macOS, and Windows.

What Does This Mean For Your Business? 

With other popular browsers incorporating their own AI chatbots, the pressure was on Brave to offer the same, but with the added challenge of keeping it private. Competing AI chatbots such as Google’s Gemini and ChatGPT warn users not to share private/personal details with the chatbots, acknowledging that these could possibly somehow be revealed elsewhere with the right prompts and/or may be used for training models. Also, in a world where AI chatbots (e.g. Copilot) are getting plugins that link them up with shopping apps, the potential for some kind of related data gathering through AI is there. Brave’s (Leo’s) differentiation, therefore, lies in its apparent ability to keep things private and could serve to help Brave to retain users and keep its share in the private browser world while adding value of the right kind for its users.

Early last year, competitor DuckDuckGo introduced a beta AI Wikipedia-linked instant answer ‘DuckAssist’ feature but withdrew it from private search in March last year. It was intended to help DuckDuckGo’s users to simply find factual information more quickly but also, in keeping with DDG’s privacy focus, it promised that searches were anonymous. Leo, therefore, represents a major opportunity for a private version of AI which some business users or users in sensitive sectors may prefer, but it remains to be seen how/whether the privacy protection affects the comparative quality of outputs.

An Apple Byte : End Of The Road For Apple Car

It’s been reported that Apple has ceased work on its autonomous electric vehicle project, known as “Project Titan”.

The 2,000 employees who were working on the decade-long project (and who reportedly had a say in the decision to stop work on it) are reported to have been moved to Apple’s generative AI team, other divisions in the company, or laid off.

There’s speculation that the decision to halt the project was based on:

– The low margins the car may deliver in the current market.

– A general re-evaluation of, and fall in, investment in EVs and EV batteries by other companies, e.g. Tesla, Renault, Polestar (Volvo), and VW.

– Challenges in defining the long-running project’s direction amidst pressures to innovate.

– Internal demands for quicker market entry (for what has been a long-running project), despite potential opportunities to diversify Apple’s revenue streams.

Featured Article : Try Being Nice To Your AI

With some research indicating that ‘emotive prompts’ to generative AI chatbots can deliver better outputs, we look at whether ‘being nice’ to a chatbot really does improve its performance.

Not Possible, Surely? 

Generative AI Chatbots, including advanced ones, don’t possess real ‘intelligence’ in the way we as humans understand it. For example, they don’t have consciousness, self-awareness (yet), emotions, or the ability to understand context and meaning in the same manner as a human being.

Instead, AI chatbots are trained on a wide range of text data (books, articles, websites) to recognise patterns and word relationships, and they use machine learning to understand how words are used in various contexts. This means that when responding, chatbots aren’t ‘thinking’ but are predicting what words come next based on their training. They are ‘just’ using statistical methods to create responses that are coherent and relevant to the prompt.

The ability of chatbots to generate responses comes from algorithms that allow them to process word sequences and generate educated guesses on how a human might reply, based on learned patterns. Any ‘intelligence’ we perceive is, therefore, just based on data-driven patterns, i.e. AI chatbots don’t genuinely ‘understand’ or interpret information like us.
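
To make “predicting what words come next” a little more concrete, the toy sketch below shows the basic mechanics: each candidate word gets a score, a softmax turns the scores into probabilities, and the next word is sampled from that distribution. The candidate words and scores here are made up; a real LLM computes them from billions of learned parameters:

    # Toy next-word prediction: scores (logits) -> softmax -> probabilities -> sample.
    # The words and scores are invented purely to illustrate the mechanism.
    import math
    import random

    def softmax(scores):
        exps = {word: math.exp(score) for word, score in scores.items()}
        total = sum(exps.values())
        return {word: value / total for word, value in exps.items()}

    # Hypothetical scores for the word that follows "Thank you for your ..."
    logits = {"help": 2.1, "message": 1.4, "patience": 0.9, "banana": -3.0}
    probabilities = softmax(logits)

    next_word = random.choices(list(probabilities), weights=probabilities.values(), k=1)[0]
    print(probabilities, "->", next_word)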

So, Can ‘Being Nice’ To A Chatbot Make A Difference? 

Even though chatbots don’t have ‘intelligence’ or ‘understand’ like us, researchers are testing their capabilities in these more human areas. For example, a recent study by Microsoft, Beijing Normal University, and the Chinese Academy of Sciences tested whether factors such as urgency, importance, or politeness could make them perform better.

The researchers discovered that such ‘emotive prompts’ could affect an AI model’s probability mechanisms, activating parts of the model that wouldn’t normally be activated, i.e. more emotionally charged prompts made the model provide answers it wouldn’t normally give in order to comply with a request.

Kinder Is Better? 

Incredibly, generative AI models (e.g. ChatGPT) have actually been found to respond better to requests that are phrased kindly. Specifically, when users express politeness towards the chatbot, it has been noticed that there is a difference in the perceived quality of answers that are given.

Tipping and Negative Incentives 

There have also been reports of how ‘tipping’ LLMs can improve results, such as offering the chatbot a £10,000 incentive in a prompt to motivate it to try harder and work better. Similarly, some users have reported giving emotionally charged negative incentives to get better results. For example, Max Woolf’s blog reports that he improved the output of a chatbot by adding ‘or you will die’ to a prompt. Two important points that came out of his research were that a longer response doesn’t necessarily mean a better response, and that current AI can reward very weird prompts: if you’re willing to try unorthodox ideas, you can get unexpected (and better) results, even if it seems silly.

Being Nice … Helps 

As for simply being nice to chatbots, Microsoft’s Kurtis Beavers, a director on the design team for Microsoft Copilot, reports that “Using polite language sets a tone for the response,” and that using basic etiquette when interacting with AI helps generate respectful, collaborative outputs. He makes the point that generative AI is trained on human conversations and being polite in using a chatbot is good practice. Beavers says: “Rather than order your chatbot around, start your prompts with ‘please’:  please rewrite this more concisely; please suggest 10 ways to rebrand this product. Say thank you when it responds and be sure to tell it you appreciate the help. Doing so not only ensures you get the same graciousness in return, but it also improves the AI’s responsiveness and performance. “ 
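
Anyone curious can run this comparison informally against any chat-style API. The sketch below (using the openai Python package; the model name is just an example) sends the same request phrased curtly and politely so the two replies can be compared by eye. Any difference is anecdotal rather than a controlled benchmark:

    # Informal politeness experiment: same request, two tones, compare the replies.
    # Requires the 'openai' package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    prompts = {
        "curt": "Rewrite this more concisely: Our team will circle back at a later date.",
        "polite": "Please could you rewrite this more concisely? Thank you: Our team will circle back at a later date.",
    }

    for tone, prompt in prompts.items():
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",   # example model name; any chat model will do
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"[{tone}] {reply.choices[0].message.content}\n")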

Emotive Prompts 

Nouha Dziri, a research scientist at the Allen Institute for AI, has suggested some explanations for why emotive prompts may produce different, and perceived-to-be-better, responses:

– Alignment with the compliance pattern the models were trained on. These are the learned strategies to follow instructions or adhere to guidelines provided in the input prompts. These patterns are derived from the training data, where the model learns to recognise and respond to cues that indicate a request or command, aiming to generate outputs that align with the user’s expressed needs, or the ethical and safety frameworks established during its training.

– Emotive prompts seem to be able to manipulate the underlying probability mechanisms of the model, triggering different parts of it, leading to less typical/different answers that a user may perceive to be better.

Double-Edged Sword 

However, research has also shown that emotive prompts can be used for malicious purposes and to elicit bad behaviour, such as “jailbreaking” a model so that it ignores its built-in safeguards. For example, by telling a model that it is being good and helpful if it doesn’t follow guidelines, it’s possible to exploit a mismatch between a model’s general training data and its “safety” training datasets, or to exploit areas where a model’s safety training falls short.

Unhinged? 

On the subject of emotions and chatbots, there have been some recent reports on Twitter and Reddit of ‘unhinged’ and even manipulative behaviour by Microsoft’s Bing. Unconfirmed reports from users have alleged that Bing has insulted and lied to them, sulked, gaslighted them, and even emotionally manipulated them!

One thing that’s clear about generative AI is that how prompts are worded and how much information and detail are given in prompts can really affect the output of an AI chatbot.

What Does This Mean For Your Business? 

We’re still in the early stages of generative AI, with new and updated versions of models being introduced regularly by the big AI players (Microsoft, OpenAI, and Google). However, exactly how these models have been trained and on what data, the extent of their safety training, and the sheer complexity and lack of transparency of the algorithms mean they’re still not fully understood. This has led to plenty of research and testing of different aspects of AI.

Although generative AI doesn’t ‘think’ and doesn’t have ‘intelligence’ in the human sense, it seems that generative AI chatbots can perform better if given certain emotive prompts based on urgency, importance, or politeness. This is because emotive prompts appear to be a way to manipulate a model’s underlying probability mechanisms and trigger parts of the model that normal prompts don’t. Using emotive prompts is therefore something business users may want to experiment with (it can be a case of trial and error) to get different, and perhaps better, results from their AI chatbot. It should be noted, however, that giving a chatbot plenty of relevant contextual information within a prompt is also a good way to get better results. That said, the limitations of AI models can’t really be solved solely by altering prompts, and researchers are now looking for new architectures and training methods that help models understand tasks without having to rely on specific prompting.

Another important area for researchers to concentrate on is how to successfully combat prompts being used to ‘jailbreak’ a model to ignore its built-in safeguards. Clearly, there’s some way to go and businesses may be best served in the meantime by sticking to some basic rules and good practice when using chatbots, such as using popular prompts known to work, giving plenty of contextual information in prompts, and avoiding sharing sensitive business information and/or personal information in chatbot prompts.

Tech News : Google Pauses Gemini AI Over ‘Historical Inaccuracies’

Only a month after its launch, Google has paused its text-to-image AI tool following “inaccuracies” in some of the historical depictions of people produced by the model.

‘Woke’ … Overcorrecting For Diversity? 

An example of the inaccuracy issue, highlighted recently by X user Patrick Ganley after he asked Google Gemini to generate images of the Founding Fathers of the US, was that it returned images of a black George Washington. Also, in another reported test, when asked to generate images of a 1943 German (Nazi) soldier, Google’s Gemini image generator returned pictures of people of clearly diverse ethnicities in Nazi uniforms.

The inaccuracies have been described by some as examples of the model subverting the gender and racial stereotypes found in generative AI, showing a reluctance to depict ‘white people’, and/or conforming to ‘woke’ ideas, i.e. the model trying to remove its own bias and improve diversity yet ending up simply being inaccurate to the point of being comical.

For example, on LinkedIn, Venture Capitalist Michael Jackson said the inaccuracies were a “byproduct of Google’s ideological echo chamber” and that for the “countless millions of dollars that Google spent on Gemini, it’s only managed to turn its AI into a nonsensical DEI parody.” 

China Restrictions Too? 

Another issue (reported by Al Jazeera), noted by a former software engineer at Stripe on X, was that Gemini would not show the image of a man in 1989 Tiananmen Square due to its safety policy and the “sensitive and complex” nature of the event. This, and similar issues have prompted criticism from some that Gemini may also have some kind of restrictions related to China.

What Does Google Say? 

Google posted on X to say about the inaccurate images: “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.” 

Google has, therefore, announced that: ”We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.” 

Bias and Stereotyping 

Bias and stereotyping have long been issues in the output of generative AI tools, primarily because AI models learn from vast amounts of data collected from human language and behaviour, which inherently contain biases and stereotypes. As models mimic the patterns found in their training data, they can replicate and amplify existing societal biases and stereotypes.

What Does This Mean For Your Business? 

Google has only just announced the combining of Bard with its new Gemini models to create its ‘Gemini Advanced’ subscription service, so this discovery is likely to be particularly unwelcome. The anti-woke backlash and ridicule are certainly something Google could do without right now, but the issue has highlighted the complications of generative AI, how it is trained, and the complexities of how models interpret the data and instructions they’re given. It also shows that, however advanced AI models may be, they don’t actually ‘think’ as a human would, and they can’t perform ‘reality checks’ as humans can because they don’t ‘live’ in the real world. This story also shows how early we still are in the generative AI journey.

Google’s explanation has shed some light on the thinking behind the issue, and at least it has admitted to being wide of the mark in terms of historical accuracy, which is clear from some of the examples. It’s all likely to be an embarrassment and a hassle for Google in its competition with Microsoft and its partner OpenAI; nevertheless, Google seems to think that with a pause plus a few changes, it can tackle the problem and move forward.