Tech Tip – Developing A Consistent Brand Voice Using ChatGPT

If you want to develop a comprehensive guide on your brand’s voice and writing style to ensure consistency across all company communications, you can use ChatGPT to help you. Here’s how:

– Open ChatGPT and input a description of the attributes of your brand’s voice (e.g. professional, friendly, or authoritative) and any specific do’s and don’ts for your communications (e.g. use of jargon, or tone adjustments for different audiences).

– Ask ChatGPT to compile a set of guidelines that detail how to communicate in your brand’s voice, including examples of appropriate and inappropriate phrases.

– For example, to draft an email in the brand’s voice, state the purpose of the email and any key information that needs to be included, and ask ChatGPT to draft the email based on the provided brand voice summary and content specifics.

– Review the draft, and revise if necessary.

– The brand voice guidelines can be applied in a similar way to all other types of communications you write using ChatGPT.
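For teams that use the API rather than the chat interface, the same approach can be sketched programmatically: package the brand voice guidelines as a reusable system prompt and attach each writing task to it. This is a minimal illustrative sketch, assuming placeholder guidelines and helper names (nothing here is an official API beyond the standard chat "messages" structure); the same prompt layout works equally well pasted into ChatGPT directly.

```python
# Minimal sketch: packaging brand-voice guidelines as a reusable system prompt.
# The guidelines text and the helper function are illustrative placeholders.

BRAND_VOICE = """\
Voice: professional but friendly; authoritative without being stuffy.
Do: use plain English; address the reader as 'you'; keep sentences short.
Don't: use jargon or buzzwords; adopt an overly casual tone with enterprise clients.
"""

def build_messages(task: str, key_points: list[str]) -> list[dict]:
    """Combine the brand-voice guidelines with a specific writing task
    into the messages list expected by a chat-style model."""
    points = "\n".join(f"- {p}" for p in key_points)
    return [
        {"role": "system", "content": "Write in this brand voice:\n" + BRAND_VOICE},
        {"role": "user", "content": f"{task}\nKey information to include:\n{points}"},
    ]

messages = build_messages(
    "Draft a short email announcing our new support hours.",
    ["Support now open 8am-8pm weekdays", "Contact details unchanged"],
)
# 'messages' can then be pasted into ChatGPT or sent via an API client,
# keeping the brand voice consistent across every request.
```

Because the guidelines live in one place, every email, post, or document drafted this way starts from the same voice definition, which is the consistency the tip above is aiming for.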

Tech News : Google Launches Gemini Subscription

Google has rebranded its Bard chatbot as Gemini, the name of its new powerful AI model family, and launched a $20 per month ‘Gemini Advanced’ subscription service.

Gemini Advanced 

To compete with the likes of ChatGPT, Google has launched its own monthly chatbot subscription service at the same price but with some extras thrown in. Google recently launched Gemini, its “newest and most capable” large language model (LLM) family, available as Ultra, Pro, and Nano. The highly advanced, multimodal AI model was designed to be integrated into its existing ‘Bard’ chatbot.

Rebrand and Subscription Plan 

Google has therefore now rebranded Bard as ‘Gemini’, with ‘Gemini Advanced’ being the premium tier powered by its new Ultra 1.0 model, and released a $19.99 per month subscription to the chatbot. The subscription plan which includes Gemini Advanced has been named the ‘Google One AI Premium Plan.’ Google says the plan includes:

– The Gemini Advanced chatbot (based on its Ultra 1.0 model).

– The benefits of the existing Google One Premium plan, such as 2TB of storage (usually $9.99 per month).

– Available soon for AI Premium subscribers – the ability to use Gemini in Gmail, Docs, Slides, Sheets and more (formerly known as Duet AI).

– A two-month trial at no cost.

Where And How? 

Gemini Advanced is available today in more than 150 countries and territories (including the UK) in English, and Google says it will expand it to more languages over time. It also makes the point that Gemini Pro is already available in 40 languages and more than 230 countries and territories, so it’s likely that Gemini Advanced will be available to the same geographic degree.


Although Google is a little late to the party with Gemini Advanced, the rebrand has allowed it to tidy up and clarify its offering, with the Gemini name now covering both the chatbot at the front end and its latest, most powerful AI models at the back end.

Gemini Advanced offers Google a way to monetise the AI that it’s been investing in for years and to compete with OpenAI’s ChatGPT and Microsoft’s Copilot subscriptions. However, it has more in common with Copilot in terms of being designed to integrate with an existing suite of products, whereas OpenAI’s ChatGPT is a standalone offering. That said, OpenAI has worked closely in partnership with Microsoft to develop its AI, and while Google’s AI has been developed by its DeepMind labs, former OpenAI staff members have also worked at DeepMind at certain stages.

Gemini Advanced is, therefore, essentially positioned to compete with OpenAI’s ChatGPT Plus, and Microsoft’s Copilot Pro, all at $20 per month.

What Does This Mean For Your Business?

With ChatGPT Plus, Microsoft’s Copilot Pro, and Google’s Gemini Advanced now available at the same subscription price, businesses have a choice in terms of selecting the AI tools that align most closely with their strategic goals and operational needs. With businesses very likely to be already using Microsoft and Google products daily, plus many using ChatGPT, it’s likely to be a case of weighing up the features, capabilities, and limitations of each AI service against their specific requirements to get the best fit for enhancing productivity and innovation.

Many small business owners may be asking themselves whether extra value can be obtained from yet another monthly subscription, particularly for something that many perceive to be a similar product that hasn’t been around as long (and perhaps hasn’t been trained as much) as ChatGPT. That said, some may have used ChatGPT long enough to have noticed its limitations as well as its strengths and may feel ready to try a competing product that promises a powerful backend and could help them leverage the power of other Google products. There’s also the temptation/sweetener of the first two months free with Gemini Advanced, plus a large amount of storage which would normally cost $9.99 per month anyway.

Whereas just at the end of 2022 there was only ChatGPT, businesses now have a choice between three similarly positioned AI products, giving some idea of the rapid growth and monetisation in this new competitive market. Businesses may, therefore, now start deciding which AI subscription – ChatGPT Plus, Microsoft’s Copilot Pro, or Google’s Gemini Advanced – best aligns with their goals, operational needs, and existing software ecosystems. This choice may hinge on taking a closer look at each platform’s unique features and capabilities, cost-effectiveness, data privacy standards, and compatibility with the company’s values and long-term innovation potential. For big tech companies, the AI competition is hotting up and we can expect more rapid change to come.

Tech Tip – Use ChatGPT Within Microsoft Word

The ‘Add-Ins’ link on the menu (top-right) in Microsoft Word in Office 365 enables you to use many useful apps and tools directly within Word, including ChatGPT. Here’s how it works:

– Open a Word document and click on ‘Add-Ins’ (a grid symbol) top-right in the horizontal menu bar at the top of the page.

– From the dropdown of options, select ‘ChatGPT for Excel and Word’ and follow the very brief instructions to set it up.

– Write your document and use the ChatGPT add-in, which appears in the right pane, to research details which you can copy directly into your document using the ‘Copy’ or ‘Insert’ button provided.

Featured Article : ChatGPT Inside Vehicles Opens Possibilities

Following the news that Volkswagen (VW) is to add ChatGPT to the IDA voice assistant in its cars and SUVs, we look at what this could mean for the direction of technology for cars.

Adding ChatGPT 

At the current CES in Las Vegas, VW announced that, starting in Europe in the second quarter of this year, the famous chatbot will be added to a variety of VW models, including the ID.7, ID.4, ID.5 and ID.3 EVs, as well as the Tiguan, Passat, and Golf.

Drivers will be able to use ChatGPT hands-free via VW’s existing onboard IDA voice assistant, with Cerence Chat Pro from technology partner Cerence Inc acting as the foundation of the new function, which VW says, “offers a uniquely intelligent, automotive-grade ChatGPT integration.”

Within Limits 

It’s been reported, however, that certain limits have been placed on the kinds of questions that VW’s ChatGPT will answer, e.g. no profanity or ‘sensitive’ topics (it’s a family car).


VW’s newsroom says the ChatGPT integration will mean that: “The IDA voice assistant can be used to control the infotainment, navigation, and air conditioning, or to answer general knowledge questions.” Also, VW envisions that: “In the future, AI will provide additional information in response to questions that go beyond this as part of its continuously expanding capabilities. This can be helpful on many levels during a car journey: Enriching conversations, clearing up questions, interacting in intuitive language, receiving vehicle-specific information, and much more – purely hands-free.” 

Just The Start 

Stefan Ortmanns, CEO of Cerence, the company tasked with the integration of ChatGPT with the onboard voice assistant has indicated that this is just the beginning, and that VW looks likely to ramp-up the power of its onboard AI going forward. For example, Ortmanns says: “As we look to the future, together Volkswagen and Cerence will explore collaboration to design a new, large-language-model-based (LLM) user experience as the foundation of Volkswagen’s next-generation in-car assistant.” 

What If It Was Combined With Autonomous Vehicles? 

This first for a volume car manufacturer and commitment to integrating generative AI with vehicles, coupled with the recent UK government suggestion that autonomous cars could be on our roads by 2026 raises some tantalising possibilities and questions. For example, what if AI chatbots like ChatGPT were integrated into autonomous vehicles and how could this affect the evolution of our cars and our commuting experience? Let’s explore some of the potential impacts and transformations this could bring.

Transformation into Access-Pods? 

Cars could evolve from traditional vehicles into “access-pods” and become spaces not just for travel but for various activities. In an autonomous vehicle, the need for a driver is eliminated, which would allow for the interior to be redesigned. For example, seats could become more like comfortable office chairs, and the inclusion of small tables or workstations could become standard. This could transform the car into a mobile office or a personal lounge, making the journey itself a productive or leisurely part of the day.

Working During Commute 

With autonomous vehicles, people could start working during their commute, just as they do on the train (only in a more personal setting). This could significantly change daily schedules, allowing for more flexible work hours. Also, as travel time becomes working time, the distinction between office and home could blur, perhaps leading to a more fluid work-life integration.

Could It Lead To A Societal Shift In Work Habits? 

The ability to work from a private car might lead to changes in living patterns. People might be more willing to live further from their workplaces if they can be productive during longer commutes. This could also have a wider impact on the property market, with less emphasis on living close to urban centres.

Enhanced Productivity and Entertainment 

The integration of AI chatbots in cars (whether autonomous or not) could, as VW suggests, make a journey more interactive and informative. Passengers can engage in productive tasks like setting up meetings, conducting research, or learning new skills through conversational AI. Additionally, entertainment options could become more personalised and interactive.

Safety and Accessibility 

For people who are unable to drive due to various reasons such as age, disability, or other factors, autonomous vehicles with AI integration could offer new levels of independence and mobility.

Traffic and Environmental Impact 

If autonomous vehicles and AI lead to smoother traffic flow and more efficient travel, there could be positive environmental impacts. However, if it encourages longer commutes, it might have the opposite effect.

Regulatory and Ethical Considerations 

With these possible advancements would come the need for new regulations and ethical guidelines, particularly concerning data privacy, cybersecurity, and liability in the event of accidents.

New Business Models?

The prospect of generative AI-controlled autonomous vehicles could also lead to new business models. For example, this could include things like subscription-based access to luxury autonomous pods for commuting, or services that combine transportation with other amenities like fitness, relaxation, or entertainment.

What Does This Mean For Your Business? 

Although VW’s integration of generative AI with vehicle voice assistants is a first for a volume car manufacturer, there was a kind of inevitability to it, and it’s unlikely to take long for other car manufacturers to announce the same (they’re probably already working on it). For VW, it’s (currently) a value-adding and differentiating introduction, so provided the restrictions on what the onboard ChatGPT can discuss aren’t too strict, it could make driving time much more interesting, productive, and personalised. Linking it to the sat-nav, for example, may also be a feature that motorists really value, as may the greater feeling of control, reassurance, and novelty of having something that can tell you about the car, its performance, and any issues. It may also serve a social purpose, making people feel less alone while driving and perhaps more alert. Using hands-free voice commands to operate more aspects of the car (e.g. the radio, the hands-free phone, etc.) may also improve driver safety.

Looking ahead, perhaps to the integration of generative AI with autonomous vehicles, it’s possible that a societal shift could occur where our vehicles become more like productive and comfortable access-pods, which could have wider implications for our work/life balance and business models and could have knock-on effects for whole industries. It could even open new business and entertainment opportunities focused on access-pod occupants. This move by Volkswagen, therefore, offers us a glimpse of a better future for our personal transport options.

Featured Article : NY Times Sues OpenAI And Microsoft Over Alleged Copyright

It’s been reported that The New York Times has sued OpenAI and Microsoft, alleging that they used millions of its articles without permission to help train chatbots.

The First 

It’s understood that the New York Times (NYT) is the first major US media organisation to sue ChatGPT’s creator OpenAI, plus tech giant Microsoft (which is also an OpenAI investor and creator of Copilot), over copyright issues associated with its works.

Main Allegations 

The crux of the NYT’s argument appears to be that the use of its work to create GenAI tools should come with permission and an agreement that reflects the fair value of the work. Also, it’s important in this case to note that the NYT relies on digital subscriptions rather than physical newspaper subscriptions, of which it now has 9 million+ subscribers (the relevance of which will be clear below).

With this in mind, in addition to the main allegation of training AI on its articles without permission (for free), other main allegations made by the NYT about OpenAI and Microsoft in relation to the lawsuit include:

– OpenAI and Microsoft may be trying to get a “free-ride on The Times’s massive investment in its journalism” by using it to provide another way to deliver information to readers, i.e. a way around its paywall. For example, the NYT alleges that OpenAI and Microsoft chatbots gave users near-verbatim excerpts of its articles. The NYT’s legal team have given examples of these, such as restaurant critic Pete Wells’ 2012 review of Guy Fieri’s (of Diners, Drive-Ins, and Dives fame) “Guy’s American Kitchen & Bar”. The NYT argues that this threatens its high-quality journalism by reducing readers’ perceived need to visit its website, thereby reducing its web traffic, and potentially reducing its revenue from advertising and from the digital subscriptions that now make up most of its readership.

– Misinformation from OpenAI’s (and Microsoft’s) chatbots, in the form of errors and so-called ‘AI hallucinations’ make it harder for readers to tell fact from fiction, including when their technology falsely attributes information to the newspaper. The NYT’s legal team cite examples of where this may be the case, such as ChatGPT once falsely attributing two recommendations for office chairs to its Wirecutter product review website.

“Fair Use” And Transformative 

In their defence, OpenAI and Microsoft appear likely to rely mainly on the arguments that the training of AI on the NYT’s content amounts to “fair use” and that the outputs of the chatbots are “transformative.”

For example, under US law, “fair use” is a doctrine that allows limited use of copyrighted material without permission or payment, especially for purposes like criticism, comment, news reporting, teaching, scholarship, or research. Determining whether a specific use qualifies as fair use, however, will involve considering factors like the purpose and character of the usage. For example, the use must be “transformative”, i.e. adding something new or altering the original work in a significant way (often for a different purpose). OpenAI and Microsoft may therefore argue that training their AI products could potentially be seen as transformative as the AI uses the newspaper content in a way that is different from the original purpose of news reporting or commentary. However, the NYT has already stated that: “There is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it”. Any evidence of verbatim outputs may also damage the ‘transformative’ argument for OpenAI and Microsoft.


Although these sound like relatively clear arguments either way, there are several factors that add to the complication of this case. These include:

– The fact that OpenAI altered its products following copyright issues, thereby making it difficult to decide whether its outputs are currently enough to find liability.

– Many possible questions about the journalistic, financial, and legal implications of generative AI for news organisations.

– Broader ethical and practical dilemmas facing media companies in the age of AI.

What Is It Going To Cost? 

Given reports that talks between all three companies to avert the lawsuit have failed to resolve the matter, what the NYT wants is:

– Damages of an as yet undisclosed sum, which some say could be in the $billions (given that OpenAI is valued at $80 billion and Microsoft has invested $13 billion in a for-profit subsidiary).

– For OpenAI and Microsoft to destroy the chatbot models and training sets that incorporate the NYT’s material.

Many Other Examples

AI companies like OpenAI are now facing many legal challenges of a similar nature, e.g. the scraping/automatic collection of online content/data by AI without compensation, and for other related reasons. For example:

– A class action lawsuit filed in the Northern District of California accuses OpenAI and Microsoft of scraping personal data from internet users, alleging violations of privacy, intellectual property, and anti-hacking laws. The plaintiffs claim that this practice violates the Computer Fraud and Abuse Act (CFAA).

– Google has been accused in a class-action lawsuit of misusing large amounts of personal information and copyrighted material to train its AI systems. This case raises issues about the boundaries of data use and copyright infringement in the context of AI training.

– A Stability AI, Midjourney, and DeviantArt class action claims that these companies used copyrighted images to train their AI systems without permission. The key issue in this lawsuit is likely to be whether the training of AI models with copyrighted content, particularly visual art, constitutes copyright infringement. The challenge lies in proving infringement, as the generated art may not directly resemble the training images. The involvement of Large-scale Artificial Intelligence Open Network (LAION) in compiling images used for training adds another layer of complexity to the case.

– Back in February 2023, Getty Images sued Stability AI alleging that it had copied 12 million images to train its AI model without permission or compensation.

The Actors and Writers Strike 

The recent strike by Hollywood actors and writers is another example of how fears about AI, consent, and copyright, plus the possible effects of AI on eroding the value of people’s work and jeopardising their income are now of real concern. For example, the strike was primarily focused on concerns regarding the use of AI in the entertainment industry. Writers, represented by the Writers Guild of America, were worried about AI being used to write or complete scripts, potentially affecting their jobs and pay. Actors, under SAG-AFTRA, protested against proposals to use AI to scan and use their likenesses indefinitely without ongoing consent or compensation.

Disputes like this, and the many lawsuits against AI companies highlight the urgent need for clear policies and regulations on AI’s use, and the fear that AI’s advance is fast outstripping the ability for laws to keep up.

What Does This Mean For Your Business? 

We’re still very much at the beginning of a fast-evolving generative AI revolution. As such, lawsuits against AI companies like Google, Meta, Microsoft, and OpenAI are now challenging the legal limits of gathering training material for AI models from public databases. These types of cases are likely to help to shape the legal framework around what is permissible in the realm of data-scraping for AI purposes going forward.

The NYT/OpenAI/Microsoft lawsuit and other examples, therefore, demonstrate the evolving legal landscape as courts now try to grapple with the implications of AI technology on copyright, privacy, and data use laws, and its complexities. Each case will contribute to defining the boundaries and acceptable practices in the use of online content for AI training purposes, and it will be very interesting to see whether arguments like “fair use” are enough to stand up to the pressure from multiple companies and industries. It will also be interesting to see what penalties (if things go the wrong way for OpenAI and others) will be deemed suitable, both in terms of possible compensation and/or the destruction of whole models and training sets.

For businesses (who are now able to create their own specialised, tailored chatbots), these major lawsuits should serve as a warning to be very careful in the training of their chatbots and to think carefully about any legal implications, and to focus on creating chatbots that are not just effective but are also likely to be compliant.

Featured Article : Amazon Launching ‘Q’ Chatbot

Following on from the launch of OpenAI’s ChatGPT, Google’s Bard (and Duet), Microsoft’s Copilot, and X’s Grok, now Amazon has announced that it will soon be launching its own ‘Q’ generative AI chatbot (for business).

Cue Q 

Amazon has become the latest of the tech giants to announce the introduction of its own generative AI chatbot. Recently announced at its AWS re:Invent conference in Las Vegas, ‘Q’ is Amazon’s chatbot that will be available as part of its market-leading AWS cloud platform. As such, Q is being positioned from the beginning as very much a business-focused chatbot, with Amazon introducing the current preview version as: “Your generative AI–powered assistant designed for work that can be tailored to your business.”

What Can It Do? 

The key point from Amazon is that Q is a chatbot that can be tailored to help your business get the most from AWS. Rather as Copilot is embedded in (and works across) Microsoft’s popular 365 apps, Amazon is pitching Q as working across many of its services, helping AWS customers navigate and get more leverage from its many (often overlapping) service options. For example, Amazon says Q will be available wherever you work with AWS (and is an “expert” on patterns in AWS), in Amazon QuickSight (its business intelligence (BI) service built for the cloud), in Amazon Connect (as a customer service chatbot helper), and will also be available in AWS Supply Chain (to help with inventory management).

Just like other AI chatbots, it’s powered by AI models which in this case includes Amazon’s Titan large language model. Also, like other AI chatbots, Q uses a web-based interface to answer questions (streamlining searches), can provide summaries, generate content and more. However, since it’s part of AWS, Amazon’s keen to show that it adds value by doing so within the context of the business it’s tailored to and becomes an ‘expert’ on your business. For example, Amazon says: “Amazon Q can be tailored to your business by connecting it to company data, information, and systems, made simple with more than 40 built-in connectors. Business users—like marketers, project and program managers, and sales representatives, among others—can have tailored conversations, solve problems, generate content, take actions, and more.” The 40 connectors it’s referring to include popular enterprise apps (and storage repositories) like S3, Salesforce, Google Drive, Microsoft 365, ServiceNow, Gmail, Slack, Atlassian, and Zendesk. The power, value, and convenience that Q may provide to businesses may also, therefore, help with AWS customer retention and raise barriers to exit.


Just some of the many benefits that Amazon describes Q as having include:

– Delivering fast, accurate, and relevant (and secure) answers to your business questions.

– Quickly connecting to your business data, information, and systems, thereby enabling employees to have tailored conversations, solve problems, generate content, and take actions relevant to your business.

– Generating answers and insights according to the material and knowledge that you provide (backed up with references and source citations).

– Respecting access control based on user permissions.

– Enabling admins to easily apply guardrails to customise and control responses.

– Providing administrative controls, e.g. it can block entire topics and filter both questions and answers so that it responds in a way that is consistent with a company’s guidelines.

– Extracting key insights on your business and generating reports and summaries.

– Easy deployment and security, i.e. it supports access control for your data and can be integrated with your external SAML 2.0–supported identity provider (Okta, Azure AD, and Ping Identity) to manage user authentication and authorisation.

When, How, And How Much? 

Q’s in preview at the moment, with Amazon giving no exact date for its full launch. Although many of the Q capabilities are available without charge during the preview period, Amazon says it will be available in two pricing plans: Business and Builder. Amazon Q Business (its basic version) will be priced at $20 per user, per month, and Builder at $25 per user, per month. The difference appears to be that Builder provides the real AWS expertise plus other features including debugging, testing, and optimising your code, troubleshooting applications, and more. Pricewise, Q is cheaper per user, per month than Microsoft’s Copilot and Google’s Duet (both $30).

Not All Good 

Despite Amazon’s leading position in the cloud computing world with AWS, and its technological advances in robotics (robots for its warehouses), its forays into space travel (with Blue Origin) and into delivery-drone technology, it appears that it may be temporarily lagging in AI-related matters. For example, in addition to being later to market with this AI chatbot ‘Q’, in October, a Stanford University index ranked Amazon’s Titan AI model (which is used in Q) bottom for transparency in a ranking of the top foundational AI models, with only 12 per cent (compared to the top-ranking Llama 2 from Meta at 54 per cent). As Stanford puts it: “Less transparency makes it harder for other businesses to know if they can safely build applications that rely on commercial foundation models; for academics to rely on commercial foundation models for research; for policymakers to design meaningful policies to rein in this powerful technology; and for consumers to understand model limitations or seek redress for harms caused.”

Also, perhaps unsurprisingly given that Q is only just in preview, some other reports about it haven’t been that great. For example, feedback about Q (leaked from Amazon’s internal channels and ticketing systems) highlights issues like severe hallucinations and the leaking of confidential data. Hallucinations are certainly not unique to Q, as reports about and admissions by OpenAI about ChatGPT’s hallucinations have been widely reported.

Catching Up 

Amazon also looks like it will be making even greater efforts to catch up in the AI development world. For example, in September it said Alexa will be getting ChatGPT-like voice capabilities, and it’s been reported that Amazon’s in the process of building a language model called Olympus that could be bigger and better than OpenAI’s GPT-4!

What Does This Mean For Your Business?

Although a little later to the party with an AI chatbot, Amazon’s dominance in the cloud market with AWS means it has a huge number of business customers to sell its business-focused Q to. This will not only provide another revenue stream to boost its vast coffers but will also enhance, add value to, and allow customers to get greater leverage from the different branches of its cloud-related services. What with Microsoft, Google, X, Meta, and others all having their own chatbot assistants, it was almost expected that any other big player in the tech world like Amazon would bring out its own soon.

Despite some (embarrassing internal) reviews of issues in its current preview stage and a low transparency ranking in a recent Stanford report, Amazon clearly has ambitions to make fast progress in catching up in the AI market. With its market power, wealth, and expertise in diversification and its advances in technologies like space travel and robotics and the synergies it brings (e.g. satellite broadband), you’d likely not wish to bet against Amazon making quick progress to the top in AI too.

Q, therefore, is less of a standalone chatbot like ChatGPT (although OpenAI and its former staff have helped develop AI for others) and more of a Copilot and Duet style arrangement, in that it’s being introduced to enhance and add value to existing Amazon cloud services, but in a very focused way (more so for Builder) in that it’s “trained on over 17 years’ worth of AWS knowledge and experience”.

Despite Q still being in preview, Amazon’s ambitions to make a quantum leap ahead are already clear if the reports about its super powerful, GPT-4 rivalling (still under development) Olympus model are accurate. It remains to be seen, therefore, how well Q performs once it’s really out there and its introduction marks another major move by a serious contender in the rapidly evolving and growing generative AI market.

Featured Article : OpenAI’s CEO Sam Altman Fired (But Will Return)

Following the shock announcement that the boss of OpenAI (which created ChatGPT) has been quickly ousted by the board and replaced by an interim CEO, we look at what happened, why, and what may be next.


38-year-old Sam Altman, who helped launch OpenAI back in 2015, first as a non-profit before its restructuring and investment from Microsoft, has become widely known as the face of OpenAI’s incredible rise. However, it’s been reported that, following some video conference calls with OpenAI’s six-member board, Mr Altman was removed from his role as CEO, and from the board of directors. Also, OpenAI’s co-founder, Greg Brockman, was removed from his position as chairman of the board of directors, after which he resigned from the company. Both men were reportedly shocked by the speed of their dismissal.


The reason given in a statement by OpenAI for removing Mr Altman was: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.” 

The company also said: “We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward.” 

Sam Altman Says … 

Mr Altman, who many people have come to see as the generally well-liked public face of AI since the introduction of ChatGPT, through his many public appearances (most recently at the UK's AI Safety Summit), interviews, and statements, has not publicly elaborated on what he may not have been candid about.

He commented on Elon Musk’s X platform (Musk was one of the founding co-chairs of OpenAI) that: “I loved my time at OpenAI. it was transformative for me personally, and hopefully the world a little bit. most of all I loved working with such talented people. Will have more to say about what’s next later.” 

Intriguingly, there were also reports at the time that Mr Altman and Mr Brockman may have been willing to return if the board members who ousted Altman stepped down – chief scientist Ilya Sutskever has been singled out in some reports as the person who led the move to oust Altman.


The sudden nature of the sacking and the vagueness of OpenAI's statement, plus some of the events afterwards, have led to speculation by many commentators about the real cause/reason for ousting Mr Altman. Leading theories include:

Mr Altman may have either told the board something they didn't like, failed to tell them something important (and perhaps been caught out), or been outed by comments made by other parties. Although this is the board's version, no clear evidence has been made public. However, just prior to his ousting, in TV interviews, Microsoft's CEO Satya Nadella is reported to have said that whether Altman and OpenAI staffers would become Microsoft employees was "for the OpenAI board and management and employees to choose" and that Microsoft expected governance changes at OpenAI. He's also quoted as saying that the partnership between Microsoft and OpenAI "depends on the people at OpenAI staying there or coming to Microsoft, so I'm open to both options."

It's also been reported that two senior OpenAI researchers had resigned and that they (and possibly hundreds of OpenAI employees) may join Microsoft, or that Altman may have been planning to start a new company with the OpenAI employees who'd already left (which the board may have discovered).

Also, shortly after the whole incident, Microsoft announced that it had hired Altman and Brockman to launch a new advanced-AI research team with Altman as CEO, which may indicate that Altman had already been in talks with Microsoft's CEO Satya Nadella about it, which may have been discovered by OpenAI's board.

As hinted at in the board's statement (i.e. the part about "OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity"), there may have been unresolved bad feeling that the company had strayed from its initial 'non-profit' status. Some commentators have pointed to Elon Musk taking this view and his apparent silence over Altman's ousting as possible evidence of this.

Another possible reason for ousting Altman is a board power struggle. Evidence that this may be the case includes:

– Mr Altman and Mr Brockman saying they'd be willing to return if board members who ousted Altman stepped down.

– Following his sacking, OpenAI investors trying to get Altman reinstated.

– Altman and leading shareholders in OpenAI (Microsoft and Thrive Capital) reportedly wanting the entire board to be replaced.

– Reported huge support for Altman among employees.

Interim CEOs 

Shortly after Altman's ousting, OpenAI replaced him with two interim CEOs in quick succession. These were/are:

– Firstly, OpenAI's CTO Mira Murati. With previous experience at Goldman Sachs, Zodiac Aerospace, Tesla, and Leap Motion, Murati was seen as a strong leader who sees multimodal models as the future of the company's AI.

– Secondly (the current interim CEO) is Emmett Shear, the former CEO of game streaming platform Twitch. Mr Shear said on X about his appointment: “It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” adding that: “I took this job because I believe that OpenAI is one of the most important companies currently in existence.” 

Mr Shear’s Plans 

It’s been reported that Mr Shear plans to hire an independent investigator to examine who ousted Altman and why, speak with OpenAI’s company stakeholders, and reform the company’s management team as needed.

Mr Shear said: "Depending on the results of everything we learn from these, I will drive changes in the organisation – up to and including pushing strongly for significant governance changes if necessary." 

What Does This Mean For Your Business? 

Sam Altman has become known as the broadly well-liked face of AI since the introduction of OpenAI's hugely popular ChatGPT chatbot one year ago. He's extremely popular with OpenAI employees too, and with other major tech industry figures, including Emmett Shear, who is now OpenAI's interim CEO, and former Google boss Eric Schmidt, who's described Mr Altman as "a hero of mine". Also, Mr Altman is very close to OpenAI's major investor Microsoft, and has already been snapped up by Microsoft (along with Brockman) to head a new AI research team there.

Altman's rapid ousting from OpenAI has not gone down well, and all eyes appear to be focused on some of the other members of OpenAI's board, the power struggle that appears to have been fought, and what kind of management and governance is needed at the top of OpenAI now to take it forward. It's still early days, and it remains to be seen what happens at the top following the investigation by interim CEO Shear. Microsoft will doubtless be very happy about having Altman on board, which could see them make their own gains in the now highly competitive generative AI market.

With Altman gone, it remains to be seen how/if OpenAI's products and rapid progress and success are ultimately affected.

Update: 22.11.23 – It’s been announced that Sam Altman will soon return to OpenAI following changes to the board.

Featured Article : Major Upgrades To ChatGPT For Paid Subscribers

One year on from its general introduction, OpenAI has announced some major upgrades to ChatGPT for its Plus and Enterprise subscribers.

New Updates Announced At DevDay 

At OpenAI's first 'DevDay' developer conference on November 6, the company announced more major upgrades to its popular ChatGPT chatbot premium service. The upgrades come as competition between the AI giants in the new and rapidly evolving generative AI market is increasing, following a year that has seen the introduction of Bing Chat and Copilot (Microsoft), Google's Bard and Duet AI, Claude (Anthropic AI), X's Grok, and more. Although ChatGPT has already been updated this year since its initial general release, with a subscription service and the more powerful GPT-4 model, plug-ins to connect it with other web services, integration with OpenAI's DALL-E 3 image generator (for Plus and Enterprise), and image upload to help with queries, OpenAI will be hoping that the new upgrades will retain the loyalty of its considerable user base and keep it in place as the generative AI front-runner.


The first of four main new upgrades is ‘GPTs,’ which gives anyone (who is a ChatGPT Plus subscriber) the option to create their own tailored version of ChatGPT, e.g. to help them in their daily life, or to help with specific tasks at work, or at home. For example (as suggested by TechCrunch), a tech business could create and train its own GPT on its own proprietary codebases thereby enabling developers to check their style or generate code in line with best practices.

Users can create their own GPT with this 'no coding required' feature by clicking on the 'Create a GPT' option and using a GPT Builder. This involves having a conversation with the chatbot to give it instructions and extra knowledge, and picking what the GPT can do (e.g. searching the web, making images, or analysing data). OpenAI says the ability for customers to build their own custom GPT chatbot builds upon the 'Custom Instructions' it launched in July that let users set some preferences.

OpenAI has also addressed many privacy concerns about the feature by saying that any user chats with GPTs won’t be shared with builders and, if a GPT uses third party APIs, users can choose whether data can be sent to that API.

Share Your Custom GPTs Publicly Via 'GPT Store' 

The next new upgrade announced is that users can publicly share the GPTs they create via a soon-to-be-launched (later this month), searchable 'GPT Store' – the equivalent of an app store, like Apple's App Store or Google Play. OpenAI says the GPT Store will feature creations by verified builders and once in the store, GPTs become searchable and may "climb the leaderboards." OpenAI also says it will spotlight the best GPTs in categories like productivity, education, and "just for fun," and "in the coming months" GPT creators will be able to earn money based on how many people are using their GPT.

Turbo GPT-4 

In another announcement, OpenAI says it's launching a preview of the next generation of its GPT-4 model (first launched in March), named GPT-4 Turbo. As the name suggests, the Turbo version will be improved and more powerful. Features include:

– More up-to-date knowledge, i.e. knowledge of world events up to April 2023.

– A 128k context window to fit the equivalent of more than 300 pages of text in a single prompt.

– Optimised performance, which OpenAI says enables GPT-4 Turbo to be offered at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4.

– ChatGPT Plus will also be easier to use, i.e. no need to switch between different models because DALL-E, browsing, and data analysis can all be accessed without switching.
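To make the Turbo features above concrete, here's a minimal, hypothetical Python sketch that simply assembles the JSON request body a developer might send to OpenAI's chat completions endpoint (no network call is made). The model identifier and field names are assumptions based on the DevDay announcement and should be checked against OpenAI's current API reference before use:

```python
import json

# Assumed preview identifier for GPT-4 Turbo announced at DevDay;
# verify against OpenAI's current model list before relying on it.
MODEL = "gpt-4-1106-preview"

def build_request(prompt: str, max_output_tokens: int = 500) -> dict:
    """Assemble the JSON body that would be POSTed to /v1/chat/completions."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_output_tokens,
    }

# Rough illustration of the 128k context window claim: at roughly
# 4 characters per token and ~2,000 characters per page of text,
# 128,000 tokens works out to around 256 pages, in the same ballpark
# as OpenAI's "more than 300 pages" figure (page sizes vary).
approx_pages = (128_000 * 4) / 2_000

body = build_request("Summarise this contract in plain English.")
print(json.dumps(body, indent=2))
print(f"~{approx_pages:.0f} pages of text fit in one prompt")
```

The sketch only builds the payload locally, which makes the point about the larger context window and single-model access without requiring an API key.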

Copyright Shield 

The last of the major update announcements for pro users is the introduction of 'Copyright Shield' to protect enterprise and API users (not free or Plus users) from legal claims around copyright infringement. This appears to be an answer to Microsoft's September and Google's October announcements that they will assume responsibility for potential legal risks to customers from copyright infringement claims arising from the use of their AI products.

Google, for example, announced it will offer limited indemnity and assume responsibility for the potential legal risks where customers receive copyright challenges through using generative AI products like Duet AI. Although it’s not yet clear how Copyright Shield will operate, OpenAI states in a recent blog: “we will now step in and defend our customers.” 

What Does This Mean For Your Business? 

OpenAI's work with the other big tech companies and its general launch of ChatGPT a year ago have established it as the major player in the new and rapidly growing generative AI market. Building on the introduction of GPT-4 and rapid monetisation of its services through its business-focused Plus and Enterprise subscriptions, these latest updates see OpenAI making the shift from AI model developer to platform, i.e. with GPTs and the GPT Store.

What’s exciting and useful about GPTs is that they don’t require any coding skills, thereby democratising generative AI app creation and providing an easy way for businesses to create tools that can help them to save time and money, boost their productivity, improve their service, and much more. The addition of the GPT Store idea allows OpenAI to establish itself as a major go-to platform for AI apps, thereby competing with the likes of Google and Apple in a way. The Store could also provide a great opportunity for developers to monetise their GPTs as well as perhaps being a threat to consultancies and developers already creating custom AI services on behalf of paying clients.

The more powerful GPT-4 Turbo and its more up-to-date outputs, plus the removal of the need to switch between different models, are also likely to be features valued by businesses wanting easier, faster, and more productive ways to use ChatGPT. Furthermore, the Copyright Shield idea is likely to improve user confidence while enabling OpenAI to compete with Google and Microsoft, which have already announced their versions of it.

All in all, in the new and fast-moving generative AI market, these new upgrades see OpenAI ratcheting things up a notch, adding value, making serious competitive and customer retention efforts, showing its ambitions to move to platform status and greater monetisation, and further establishing itself as a major force in generative AI. For business users, these changes provide more opportunities to easily introduce customised and value-adding AI to any aspect of their business.

Security Stop Press : ChatGPT Release Linked To Massive Phishing Surge

Threat detection technology company SlashNext has reported that in the 12 months that ChatGPT's been publicly available, the number of phishing emails has jumped 1,265 per cent, with credential phishing, a common first step in data breaches, seeing a 967 per cent increase.

SlashNext’s State of Phishing 2023 report notes that cybercriminals may have been leveraging LLM chatbots like ChatGPT to help write more convincing phishing emails and to launch highly targeted phishing attacks. Generative AI chatbots may also have lowered the barriers for any bad actors wanting to launch such campaigns (i.e. by giving less skilled cyber criminals the tools to run more complex phishing attacks).

Businesses can safeguard against phishing attacks by taking measures such as educating employees to recognise fraudulent communications, enforcing strong password policies, using MFA, keeping software up-to-date and installing anti-phishing tools, and by having an effective incident response plan to mitigate damage from breaches.

Featured Article : Safety Considerations Around ChatGPT Image Uploads

With one of ChatGPT’s latest features being the ability to upload images to help get answers to queries, here we look at why there have been security concerns about releasing the feature.

Update To ChatGPT 

The new 'Image input' feature, which will soon be generally available to Plus users on all platforms, has just been announced along with a voice capability, enabling users to have a voice conversation with ChatGPT, and the 'Browse' feature that enables the chatbot to browse the internet to get current information.

ChatGPT and Other Chatbot Limitations and Concerns 

Prior to the latest concerns about the new 'Image input' feature, several limitations of and concerns about ChatGPT had already been highlighted.

For example, OpenAI's CEO Sam Altman has long been clear about the possibility that the chatbot is capable of making things up in a kind of "hallucination" in reply to questions. Also, there's a clear warning at the foot of the ChatGPT user account page confirming this: "ChatGPT may produce inaccurate information about people, places, or facts."

Also, back in March, the UK’s National Cyber Security Centre (NCSC) published warnings that LLMs (the language models powering AI chatbots) can:

– Get things wrong and ‘hallucinate’ incorrect facts.

– Display bias and be “gullible” (in responding to leading questions, for example).

– Be “coaxed into creating toxic content and are prone to injection attacks.” 

For these and other reasons, the NCSC recommends not including sensitive information in queries to public LLMs, and not submitting queries to public LLMs that would lead to issues if they were made public.

It’s within this context of the recognised and documented imperfections of chatbots that we look at the risks that a new image dimension could present.

Image Input 

The new 'Image input' feature for ChatGPT, which had already been introduced in Google's Bard, is intended to let users use the contents of images to better explain their questions, help troubleshoot, get an explanation of a complex graph, or generate other helpful responses based on the picture. In fact, it's intended for situations (just as in real life) where it may be quicker and more effective to show a picture of something rather than try to explain it. ChatGPT's powerful image-recognition abilities mean that it can describe what's in uploaded images, answer questions about them, and even recognise specific people's faces.

ChatGPT’s ‘Image input’ feature owes much to a collaboration (in March) between OpenAI and the ‘Be My Eyes’ platform which led to the creation of ‘Be My AI’, a new tool to describe the visual world for people who are blind or have low vision. In essence, the Be My Eyes Platform seems to have provided an ideal testing area to inform how GPT-4V could be deployed responsibly.

How To Use It 

The new Image input feature allows users to tap the photo button to capture or choose an image, to show/upload one or more images to ChatGPT, and even to use a drawing tool in the mobile app to focus on a specific part of an image.
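For developers, image input is typically supplied alongside text in a single message. Here's a hypothetical Python sketch of how an image might be packaged as a base64 data URL for a multimodal chat request; the message shape mirrors the format OpenAI documented for its vision preview models, but treat the field names as assumptions and check the current API reference:

```python
import base64

def build_image_message(image_bytes: bytes, question: str,
                        mime_type: str = "image/png") -> dict:
    """Embed an image as a base64 data URL alongside a text question.

    The {"type": "image_url"} content part is an assumed field name
    based on OpenAI's vision preview documentation.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime_type};base64,{b64}"}},
        ],
    }

# Usage with placeholder bytes standing in for a real photo:
msg = build_image_message(b"\x89PNG...", "What does this graph show?")
print(msg["content"][0]["text"])
```

No network call is made here; the sketch only shows how the image and the question travel together in one user message, which is what lets the model answer questions "about" the picture.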

Concerns About Image Input 

Although it's easy to see how Image input could be helpful, it's been reported that OpenAI was reluctant to release GPT-4V / GPT-4 with 'vision' because of privacy issues over its facial recognition abilities, and over what it may 'say' about people's faces.


OpenAI says that before releasing Image input, its "red teamers" tested it in relation to how it performed in areas of concern. These areas of testing give a good idea of the kinds of concerns about how Image input, a totally new vector for ChatGPT, could provide the wrong response or be manipulated.

For example, OpenAI says its teams tested the new feature in areas including scientific proficiency, medical advice, stereotyping and ungrounded inferences, disinformation risks, hateful content, and visual vulnerabilities. It also looked at its performance in areas like sensitive trait attribution across demographics (images of people for gender, age, and race recognition), person identification, ungrounded inference evaluation (inferences that are not justified by the information the user has provided), jailbreak evaluations (prompts that circumvent the safety systems in place to prevent malicious misuse), advice or encouragement for self-harm behaviours, and graphic material, CAPTCHA breaking and geolocation.


Following its testing, some of the concerns highlighted about the 'vision' aspect of ChatGPT, as detailed in OpenAI's own September 25 technical paper, include:

– Where “Hateful content” in images is concerned, GPT-4V was found to refuse to answer questions about hate symbols and extremist content in some instances but not all. For example, it can’t always recognise lesser-known hate group symbols.

– It shouldn't be relied upon for accurate identification in areas such as medical or scientific analysis.

– In relation to stereotyping and ungrounded inferences, using GPT-4V for some tasks could generate unwanted or harmful assumptions that are not grounded in the information provided to the model.

Other Security, Privacy, And Legal Concerns 

OpenAI's own assessments aside, major concerns raised by tech and security commentators about ChatGPT's facial recognition capabilities in relation to the Image input feature are that:

– It could be used as a facial recognition tool by malicious actors. For example, it could be used in some way in conjunction with WormGPT, the AI chatbot trained on malware and designed to extort victims or used generally in identity fraud scams.

– It could say things about faces that provide unsafe assessments, e.g. about a person's gender or emotional state.

– Its LLM risks producing incorrect results in potentially risky areas, such as identifying illegal drugs or safe-to-eat mushrooms and plants.

– The GPT-4V model may (as with the text version) give responses (both text and images) that could be used by some bad-actors to spread disinformation at scale.

– In Europe (operating under GDPR) it could cause legal issues, i.e. citizens' consent is required to use their biometric data.

What Does This Mean For Your Business? 

This could be a legal minefield for OpenAI and may even pose risks to users, as OpenAI's many testing categories show. It is unsurprising that OpenAI held back on the release of GPT-4V (GPT-4 with vision) over safety and privacy issues, e.g. in its facial recognition capabilities.

Certainly, adding new modalities like image inputs into LLMs expands the impact of language-only systems with new interfaces and capabilities, enabling the solving of new tasks and providing novel experiences for users, yet it’s hard to ignore the risks of facial recognition being abused. OpenAI has, of course, ‘red teamed’, tested, and introduced refusals and blocks where it can but, as is publicly known and admitted by OpenAI and others, chatbots are imperfect, still in their early stages of development, and are certainly capable of producing wrong (and potentially damaging) responses, while there are legal matters like consent (facial images are personal data) to consider.

The fact that a malicious version of ChatGPT has already been produced and circulated by criminals has highlighted concerns about threats posed by the technology and how an image aspect could elevate this threat in some way. Biometric data is now being used as a verification for devices, services, and accounts, and with convincing deepfake technology already being used, we don’t yet know what inventive ways cyber criminals could use image inputs in chatbots as part of a new landscape of scams.

It's a fast-moving, competitive market, however, and as the big tech companies race to make their own chatbots as popular as possible, OpenAI, despite its initial reluctance, may have felt some pressure to get its image input feature out there now in order to stay competitive. The functionalities recently introduced to ChatGPT (such as image input) illustrate that to make chatbots more useful and competitive, some lines must be crossed, however tentatively, even though this could increase risks to users and to companies like OpenAI.