Featured Article : New Certification For Copyright-Compliant AI

Following many legal challenges against AI companies over copyrighted content being scraped and used to train their AI models (without consent or payment), a new certification for copyright-compliant AI has been launched.

The Issue 

As highlighted by the recent case of The New York Times suing OpenAI over the alleged training of its AI on New York Times articles without permission or payment (with a ‘fair use’ claim likely in defence), how AI companies train their models is now a big issue.

The organisation ‘Fairly Trained’ says that its new Licensed Model certification is intended to highlight the difference between AI companies who scrape data (and claim fair usage) and AI companies who license it, thereby getting permission and paying for training data (i.e. choosing to do so for ethical and legal reasons). As Fairly Trained’s CEO, Ed Newton-Rex, says: “You’ve got a bunch of people who want to use licenced models and you’ve got a bunch of people who are providing those. I didn’t see any way of being able to tell them apart.”

Fairly Trained says it hopes its certification will “reinforce the principle that rights-holder consent is needed for generative AI training.” 

Fairly Trained – The Certification Initiative

The non-profit ‘Fairly Trained’ initiative has introduced a Licensed Model (L) certification for AI providers, which can be awarded to any generative AI model that doesn’t use any copyrighted work without a licence.

Who? 

Fairly Trained says the certification can go to “any company, organisation, or product that makes generative AI models or services available” and meets certain criteria.

The Criteria  

The main criteria for the certification include:

– The data used for the model(s) must be explicitly provided to the model developer for the purposes of being used as training data, or available under an open license appropriate to the use-case, or in the public domain globally, or fully owned by the model developer.

– There must be a “robust process for conducting due diligence into the training data,” including checks into the rights position of the training data provider.

– There must also be a robust process for keeping records of the training data that was used for each model training.

The Price 

In addition to meeting the criteria, AI companies will also have to pay for their certification. The price is based on an organisation’s annual revenue, ranging from a $150 submission fee and a $500 annual certification fee for an organisation with $100k annual revenue, to a $500 submission fee and a $6,000 annual certification fee for an organisation with $10M annual revenue.

What If The Company Changes Its Training Data Practices? 

If an organisation acquires the certification and then changes its data practices afterwards (i.e. it no longer meets the criteria), Fairly Trained says it is up to that organisation to inform Fairly Trained of the change, which suggests that there’s no pro-active checking in place. Fairly Trained does, however, say it reserves the right to withdraw certification without reimbursement if “new information comes to light” that shows an organisation no longer meets the criteria.

None Would Meet The Criteria For Text 

Although Fairly Trained accepts that its certification scheme is not an end to the debate over what creator consent should look like, the scheme does appear to have one significant flaw at the moment.

As Fairly Trained’s CEO, Ed Newton-Rex, has acknowledged, it’s unlikely that any of the major text-generation models could currently get certified because they have been trained on large amounts of copyrighted work, i.e. even ChatGPT is unlikely to meet the criteria.

The AI companies argue, however, that they have had little choice but to do so because copyright protection seems to cover so many different things including blog and forum posts, photos, code, government documents, and more.

Alternative? 

Mr Newton-Rex has been reported as saying he’s hopeful that, in future, there will be models trained on smaller amounts of data that end up being licensed, and that there may also be other alternatives. Examples of ways AI models could be trained without using copyrighted material (but probably not without consent) include the following (see the illustrative sketch after this list):

– Using open datasets that are explicitly marked for free use, modification, and distribution. These can include government datasets, datasets released by academic institutions, or datasets available through platforms like Kaggle (provided their licenses permit such use).

– Using works that have entered the public domain, meaning copyright no longer applies. This includes many classic literary works, historical documents, and artworks.

– Generating synthetic data using algorithms. This could include text, images, and other media. Generative models can create new, original images based on certain parameters or styles (but could arguably still allow copyrighted styles to creep in).

– Using crowdsourcing and user contribution, i.e. contributions from users under an open license.

– Using data from sources that have been released under Creative Commons or other licenses that allow for reuse, sometimes with certain conditions (like attribution or non-commercial use).

– Partnering/collaborating with artists, musicians, and other creators to generate original content specifically for training the AI. This can also involve contractual agreements where the rights for AI training are clearly defined.

– Using web scraping but with strict filters to only collect data from pages that explicitly indicate the content is freely available or licensed for reuse.
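To illustrate the last approach, here’s a minimal, hypothetical sketch (in Python) of licence-filtered collection: it fetches a page, looks for a licence declaration in the page’s markup, and keeps the text only if a recognised permissive licence is found. The URL, the set of accepted licences, and the matching rules are illustrative assumptions, not a production crawler.

```python
# Minimal, illustrative sketch of licence-filtered data collection.
# Assumes the `requests` and `beautifulsoup4` packages; the URL and the
# set of acceptable licences below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

# Licences treated here as permitting reuse for training (an assumption of this sketch).
ALLOWED_LICENCES = (
    "creativecommons.org/licenses/by/",
    "creativecommons.org/publicdomain/",
)

def fetch_if_licensed(url: str):
    """Return the page text only if it declares an allowed licence, else None."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # A common convention is a <link rel="license"> tag or a rel="license" anchor.
    licence_refs = [tag.get("href", "") for tag in soup.find_all(rel="license")]

    if any(allowed in ref for ref in licence_refs for allowed in ALLOWED_LICENCES):
        return soup.get_text(separator=" ", strip=True)
    return None  # no recognised permissive licence, so skip this page

if __name__ == "__main__":
    text = fetch_if_licensed("https://example.org/some-article")  # placeholder URL
    print("kept" if text else "skipped")
```

In practice, a real pipeline would also need to respect robots.txt, verify that the declared licence actually covers the page content, and keep the kind of provenance records Fairly Trained’s criteria describe.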

Collaboration and Agreements 

Alternatively, as noted above, AI companies could choose to partner with artists, musicians, and other creators (using contractual agreements) to generate original content specifically for training the AI. They could also choose to enter into agreements with organisations or individuals to use private or proprietary data, ensuring that the terms of use permit AI training.

What Does This Mean For Your Business? 

It’s possible to see both sides of the argument to a degree. For example, so much content is copyrighted that AI companies such as OpenAI would arguably not have been able to create and release a reasonable generative AI chatbot like ChatGPT if they had had to get consent from everyone for everything and pay for all the licences needed.

On the other hand, it’s understandable that creatives such as artists, or journalistic sources such as the New York Times, are angry that their output may have been used for free (with no permission) to train an LLM, thereby creating the source of the value that it may then charge users for. Although providing a way to differentiate between AI companies that have paid for and acquired permission to use their training content (i.e. acted ethically) sounds like a fair idea, the fact that the LLMs from the main AI companies (including OpenAI’s) may not even meet the criteria does make the scheme sound a little self-defeating and potentially not that useful for the time being.

Also, some would say that relying upon companies to admit when they’ve changed their AI training practices, and thereby risk losing the certification they’ve paid for (when Fairly Trained isn’t proactively checking anyway), suggests the scheme may not work in practice. All that said, there are other possible alternatives (as mentioned above), requiring consent and organisations working together, that could result in useful, trained LLMs without copyright headaches.

Although the Fairly Trained scheme sounds reasonable, Fairly Trained admits that it’s not a definitive answer to the problem. It’s probably more likely that the outcomes of the many lawsuits will help shape how AI companies act as regards training their LLMs in the near future.

Featured Article : NY Times Sues OpenAI And Microsoft Over Alleged Copyright Infringement

It’s been reported that The New York Times has sued OpenAI and Microsoft, alleging that they used millions of its articles without permission to help train chatbots.

The First 

It’s understood that the New York Times (NYT) is the first major US media organisation to sue ChatGPT’s creator OpenAI, plus tech giant Microsoft (which is also an OpenAI investor and creator of Copilot), over copyright issues associated with its works.

Main Allegations 

The crux of the NYT’s argument appears to be that the use of its work to create GenAI tools should come with permission and an agreement that reflects the fair value of the work. It’s also important in this case to note that the NYT relies on digital rather than physical newspaper subscriptions, of which it now has more than 9 million (the relevance of which will become clear below).

With this in mind, in addition to the main allegation of training AI on its articles without permission (for free), the other main allegations made by the NYT about OpenAI and Microsoft in the lawsuit include:

– OpenAI and Microsoft may be trying to get a “free-ride on The Times’s massive investment in its journalism” by using it to provide another way to deliver information to readers, i.e. a way around its paywall. For example, the NYT alleges that OpenAI and Microsoft chatbots gave users near-verbatim excerpts of its articles. The NYT’s legal team have given examples of these, such as restaurant critic Pete Wells’ 2012 review of Guy Fieri’s (of Diners, Drive-Ins, and Dives fame) “Guy’s American Kitchen & Bar”. The NYT argues that this threatens its high-quality journalism by reducing readers’ perceived need to visit its website, thereby reducing its web traffic and, potentially, its revenue from advertising and from the digital subscriptions that now make up most of its readership.

– Misinformation from OpenAI’s (and Microsoft’s) chatbots, in the form of errors and so-called ‘AI hallucinations’, makes it harder for readers to tell fact from fiction, including when their technology falsely attributes information to the newspaper. The NYT’s legal team cite examples of where this may be the case, such as ChatGPT once falsely attributing two recommendations for office chairs to its Wirecutter product review website.

“Fair Use” And Transformative 

In their defence, OpenAI and Microsoft appear likely to rely mainly on the arguments that the training of AI on the NYT’s content amounts to “fair use” and that the outputs of the chatbots are “transformative.”

For example, under US law, “fair use” is a doctrine that allows limited use of copyrighted material without permission or payment, especially for purposes like criticism, comment, news reporting, teaching, scholarship, or research. Determining whether a specific use qualifies as fair use, however, involves considering factors like the purpose and character of the use. For example, the use must be “transformative”, i.e. adding something new or altering the original work in a significant way (often for a different purpose). OpenAI and Microsoft may therefore argue that the training of their AI products could be seen as transformative because the AI uses the newspaper content in a way that differs from the original purpose of news reporting or commentary. However, the NYT has already stated that: “There is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it”. Any evidence of verbatim outputs may also damage the ‘transformative’ argument for OpenAI and Microsoft.

Complicated 

Although these sound like relatively clear arguments either way, there are several factors that add to the complication of this case. These include:

– The fact that OpenAI altered its products following earlier copyright complaints, making it difficult to decide whether its current outputs are sufficient grounds for liability.

– Many possible questions about the journalistic, financial, and legal implications of generative AI for news organisations.

– Broader ethical and practical dilemmas facing media companies in the age of AI.

What Is It Going To Cost? 

Given reports that talks between all three companies to avert the lawsuit have failed to resolve the matter, what the NYT wants is:

– Damages of an as-yet undisclosed sum, which some say could run into the billions of dollars (given that OpenAI is valued at $80 billion and Microsoft has invested $13 billion in a for-profit subsidiary).

– For OpenAI and Microsoft to destroy the chatbot models and training sets that incorporate the NYT’s material.

Many Other Examples

AI companies like OpenAI are now facing many legal challenges of a similar nature, e.g. over the scraping/automatic collection of online content and data by AI companies without compensation, and for other related reasons. For example:

– A class action lawsuit filed in the Northern District of California accuses OpenAI and Microsoft of scraping personal data from internet users, alleging violations of privacy, intellectual property, and anti-hacking laws. The plaintiffs claim that this practice violates the Computer Fraud and Abuse Act (CFAA).

– Google has been accused in a class-action lawsuit of misusing large amounts of personal information and copyrighted material to train its AI systems. This case raises issues about the boundaries of data use and copyright infringement in the context of AI training.

– A class action against Stability AI, Midjourney, and DeviantArt claims that these companies used copyrighted images to train their AI systems without permission. The key issue in this lawsuit is likely to be whether the training of AI models with copyrighted content, particularly visual art, constitutes copyright infringement. The challenge lies in proving infringement, as the generated art may not directly resemble the training images. The involvement of the Large-scale Artificial Intelligence Open Network (LAION) in compiling images used for training adds another layer of complexity to the case.

– Back in February 2023, Getty Images sued Stability AI alleging that it had copied 12 million images to train its AI model without permission or compensation.

The Actors and Writers Strike 

The recent strike by Hollywood actors and writers is another example of how fears about AI, consent, and copyright, plus the possible effects of AI on eroding the value of people’s work and jeopardising their income are now of real concern. For example, the strike was primarily focused on concerns regarding the use of AI in the entertainment industry. Writers, represented by the Writers Guild of America, were worried about AI being used to write or complete scripts, potentially affecting their jobs and pay. Actors, under SAG-AFTRA, protested against proposals to use AI to scan and use their likenesses indefinitely without ongoing consent or compensation.

Disputes like this, and the many lawsuits against AI companies highlight the urgent need for clear policies and regulations on AI’s use, and the fear that AI’s advance is fast outstripping the ability for laws to keep up.

What Does This Mean For Your Business? 

We’re still very much at the beginning of a fast-evolving generative AI revolution. As such, lawsuits against AI companies like Google, Meta, Microsoft, and OpenAI are now challenging the legal limits of gathering training material for AI models from public databases. These types of cases are likely to help to shape the legal framework around what is permissible in the realm of data-scraping for AI purposes going forward.

The NYT/OpenAI/Microsoft lawsuit and other examples, therefore, demonstrate the evolving legal landscape as courts grapple with the implications and complexities of AI technology for copyright, privacy, and data-use laws. Each case will contribute to defining the boundaries and acceptable practices in the use of online content for AI training purposes, and it will be very interesting to see whether arguments like “fair use” are enough to stand up to the pressure from multiple companies and industries. It will also be interesting to see what penalties (if things go the wrong way for OpenAI and others) will be deemed suitable, both in terms of possible compensation and/or the destruction of whole models and training sets.

For businesses (who are now able to create their own specialised, tailored chatbots), these major lawsuits should serve as a warning to be very careful about how they train their chatbots, to think through the legal implications, and to focus on creating chatbots that are not just effective but also compliant.

Featured Article : Anti-Trust : OpenAI And Microsoft

Following the recent boardroom power struggle that led to the sacking and reinstatement of OpenAI boss Sam Altman, Microsoft’s relationship with OpenAI is now under US and UK antitrust scrutiny.

What Happened? 

A recent boardroom battle at OpenAI (ChatGPT’s creator and working partner of Microsoft) led to the rapid ousting of OpenAI’s boss Sam Altman and the resignation of OpenAI’s co-founder Greg Brockman. Both men were reported to have been immediately hired by Microsoft to launch a new advanced AI research team, with Altman as CEO. Then, just days later, and following the board being replaced (apart from Adam D’Angelo) with a new initial board, Sam Altman returned and was reinstated as OpenAI’s CEO.

What’s The Issue? 

The factors that appear to have attracted US and UK regulators over antitrust concerns are:

– Microsoft has long been a significant supporter and backer of OpenAI, investing in the company and also integrating OpenAI’s technologies within Microsoft’s own products and cloud services. This collaboration has helped in scaling OpenAI’s research and the implementation of AI technologies, particularly in areas like large language models, cloud computing, and AI ethics and safety. It could also, however, be a kind of background evidence of a close relationship between the two companies.

– As mentioned earlier, when Sam Altman was ousted, Microsoft reportedly immediately hired him as CEO of a new research team there (further evidence of a very close relationship).

– Microsoft has been granted a non-voting, observer position at OpenAI by a new three-member initial board. This means that Microsoft’s representative can attend OpenAI’s board meetings and access confidential information (but can’t vote on matters including electing or choosing directors). However, it’s not yet been reported who from Microsoft will take the non-voting position and what a final (rather than the initial) OpenAI board would look like.

– More specifically, the main concern of regulators appears to be whether the partnership between OpenAI and Microsoft has resulted in an “acquisition of control”, i.e. whether one party has material influence, de facto control, or more than 50 per cent of the voting rights over another entity. Such control, for example, could negatively impact market competition. The UK’s Competition and Markets Authority (CMA) is particularly looking into whether there have been changes in the governance of OpenAI and in the nature of Microsoft’s influence over its affairs.

– The CMA recently stated that it’s considering whether it is (or may be) the case that Microsoft’s partnership with OpenAI (or any changes thereto) has resulted in the creation of a relevant merger situation under the merger provisions of the Enterprise Act 2002 and, if so, whether the creation of that situation may be expected to result in a substantial lessening of competition within any market or markets in the United Kingdom for goods or services. The CMA has opened an investigation into the partnership between Microsoft and OpenAI, which is currently at the comments and information-gathering stage, closing on 3 January 2024.

– Although OpenAI’s parent is a non-profit company (a type of entity that is rarely subject to antitrust scrutiny), in 2019 it set up a for-profit subsidiary, in which Microsoft is reported to own a 49 per cent stake. It’s also been reported that Microsoft is prepared to invest more than $10 billion into the startup.

In The US? 

Although the above points relate to the UK, the US Federal Trade Commission (FTC) is also reported to be examining the nature of Microsoft’s investment in ChatGPT maker OpenAI in relation to whether it may violate antitrust laws but hasn’t yet opened a formal investigation.

What Does Microsoft Say? 

Microsoft has stated publicly that it doesn’t own any part of OpenAI. Company spokesman, Frank Shaw, said: “While details of our agreement remain confidential, it is important to note that Microsoft does not own any portion of OpenAI and is simply entitled to share of profit distributions”. 

Meaning? 

Microsoft’s statement that it doesn’t own any part of OpenAI and is merely entitled to a share of profit distributions addresses only one facet of potential antitrust concerns, i.e. mainly the question of ownership. However, antitrust issues often encompass more than just ownership stakes. They can involve questions of influence, control, or exclusive agreements that might affect market competition.

Regulators may still be interested in the broader implications of the Microsoft-OpenAI relationship. This could include the extent of influence that Microsoft might have over OpenAI’s decisions, the potential for their partnership to impact market dynamics in the AI sector, or any exclusive benefits Microsoft might gain. The focus of antitrust authorities, therefore, often extends to how such partnerships influence market fairness, innovation, and consumer choice.

What Does This Mean For Your Business?

In the aftermath of the boardroom changes at OpenAI, including the dramatic sacking and reinstatement of CEO Sam Altman, the antitrust spotlight has turned to the intricate relationship between Microsoft and OpenAI. This scrutiny, in both the US and UK, may go beyond just speculation of a merger and is likely to look at broader concerns of influence and control within the fast-evolving AI sector. The investigations are, therefore, part of a regulatory interest in ensuring competitive fairness in the fast-growing and evolving AI industry.

For businesses, this could translate into an era of increased oversight of AI collaborations and investments, and regulators’ concern over the concentration of power in the AI industry signals a need for businesses to be cautious. The focus is not just on maintaining competitive markets but also on preventing any monopolistic control over emerging and critical technologies like AI. This evolving regulatory landscape indicates that businesses need to consider the broader implications of their strategic partnerships beyond mere ownership stakes.

Microsoft’s assertion that it doesn’t own any part of OpenAI and is only entitled to profit distributions addresses direct ownership concerns but doesn’t fully alleviate antitrust concerns. The nature of their collaboration, potential influence on business decisions, and any exclusive benefits or access could still be under scrutiny.

The parallel inquiries by the FTC in the US and the CMA in the UK also appear to suggest a harmonised approach towards regulating major AI partnerships, and mean that companies operating transnationally in the AI space must be aware of regulatory developments in multiple jurisdictions. The CMA’s investigation into whether the Microsoft-OpenAI partnership has created a “relevant merger situation” under the Enterprise Act 2002, and its potential impact on market competition, could also set precedents affecting future tech collaborations.

Featured Article : Amazon Launching ‘Q’ Chatbot

Following on from the launch of OpenAI’s ChatGPT, Google’s Bard (and Duet), Microsoft’s Copilot, and X’s Grok, now Amazon has announced that it will soon be launching its own ‘Q’ generative AI chatbot (for business).

Cue Q 

Amazon has become the latest of the tech giants to announce the introduction of its own generative AI chatbot. Recently announced at its AWS re:Invent conference in Las Vegas, ‘Q’ is Amazon’s chatbot that will be available as part of its market-leading AWS cloud platform. As such, Q is being positioned from the beginning as very much a business-focused chatbot, with Amazon introducing the current preview version as: “Your generative AI–powered assistant designed for work that can be tailored to your business.”

What Can It Do? 

The key point from Amazon is that Q is a chatbot that can be tailored to help your business get the most from AWS. Rather like Copilot is embedded in (and works across) Microsoft’s popular 365 apps, Amazon is pitching Q as working across many of its services, helping AWS customers navigate, and get more leverage from, its many (often overlapping) service options. For example, Amazon says Q will be available wherever you work with AWS (and is an “expert” on patterns in AWS), in Amazon QuickSight (its business intelligence (BI) service built for the cloud), in Amazon Connect (as a customer service chatbot helper), and in AWS Supply Chain (to help with inventory management).

Just like other AI chatbots, it’s powered by AI models, which in this case include Amazon’s Titan large language model. Also, like other AI chatbots, Q uses a web-based interface to answer questions (streamlining searches), can provide summaries, generate content, and more. However, since it’s part of AWS, Amazon’s keen to show that it adds value by doing so within the context of the business it’s tailored to, becoming an ‘expert’ on your business. For example, Amazon says: “Amazon Q can be tailored to your business by connecting it to company data, information, and systems, made simple with more than 40 built-in connectors. Business users—like marketers, project and program managers, and sales representatives, among others—can have tailored conversations, solve problems, generate content, take actions, and more.” The 40 connectors it’s referring to include popular enterprise apps (and storage repositories) like S3, Salesforce, Google Drive, Microsoft 365, ServiceNow, Gmail, Slack, Atlassian, and Zendesk. The power, value, and convenience that Q may provide to businesses may also, therefore, help with AWS customer retention by raising barriers to exit.

Benefits 

Just some of the many benefits that Amazon describes Q as having include:

– Delivering fast, accurate, and relevant (and secure) answers to your business questions.

– Quickly connecting to your business data, information, and systems, thereby enabling employees to have tailored conversations, solve problems, generate content, and take actions relevant to your business.

– Generating answers and insights according to the material and knowledge that you provide (backed up with references and source citations).

– Respecting access control based on user permissions.

– Enabling admins to easily apply guardrails to customise and control responses.

– Providing administrative controls, e.g. admins can block entire topics and filter both questions and answers so that it responds in a way that is consistent with a company’s guidelines.

– Extracting key insights on your business and generating reports and summaries.

– Easy deployment and security, i.e. it supports access control for your data and can be integrated with your external SAML 2.0–supported identity provider (Okta, Azure AD, and Ping Identity) to manage user authentication and authorisation.

When, How, And How Much? 

Q’s in preview at the moment, with Amazon giving no exact date for its full launch. Although many of the Q capabilities are available without charge during the preview period, Amazon says it will be available in two pricing plans: Business and Builder. Amazon Q Business (its basic version) will be priced at $20 per user, per month, and Builder at $25 per user, per month. The difference appears to be that Builder provides the real AWS expertise plus other features, including debugging, testing, and optimising your code, and troubleshooting applications. Pricewise, Q is cheaper per user, per month than Microsoft’s Copilot and Google’s Duet (both $30).

Not All Good 

Despite Amazon’s leading position in the cloud computing world with AWS, and its technological advances in robotics (robots for its warehouses), its forays into space travel (via Jeff Bezos’ Blue Origin), and its delivery-drone technology, it appears that it may be temporarily lagging in AI-related matters. For example, in addition to being later to market with its AI chatbot ‘Q’, in October a Stanford University index ranked Amazon’s Titan AI model (which is used in Q) bottom for transparency among the top foundation AI models, with a score of only 12 per cent (compared to the top-ranking Llama 2 from Meta at 54 per cent). As Stanford puts it: “Less transparency makes it harder for other businesses to know if they can safely build applications that rely on commercial foundation models; for academics to rely on commercial foundation models for research; for policymakers to design meaningful policies to rein in this powerful technology; and for consumers to understand model limitations or seek redress for harms caused.”

Also, perhaps unsurprisingly given that Q is only just in preview, some other reports about it haven’t been that great. For example, feedback about Q (leaked from Amazon’s internal channels and ticketing systems) highlights issues like severe hallucinations and the leaking of confidential data. Hallucinations are certainly not unique to Q, as reports about (and admissions by OpenAI of) ChatGPT’s hallucinations have been widely reported.

Catching Up 

Amazon also looks like it will be making even greater efforts to catch up in the AI development world. For example, in September it said Alexa will be getting ChatGPT-like voice capabilities, and it’s been reported that Amazon’s in the process of building a language model called Olympus that could be bigger and better than OpenAI’s GPT-4!

What Does This Mean For Your Business?

Although a little later to the party with its AI chatbot, Amazon’s dominance in the cloud market with AWS means it has a huge number of business customers to sell its business-focused Q to. This will not only provide another revenue stream to boost its vast coffers but will also enhance, add value to, and allow customers to get greater leverage from, the different branches of its cloud-related services. With Microsoft, Google, X, Meta, and others all having their own chatbot assistants, it was almost expected that any other big player in the tech world, like Amazon, would bring out its own soon.

Despite some (embarrassing internal) reviews of issues in its current preview stage and a low transparency ranking in a recent Stanford report, Amazon clearly has ambitions to make fast progress in catching up in the AI market. With its market power, wealth, and expertise in diversification and its advances in technologies like space travel and robotics and the synergies it brings (e.g. satellite broadband), you’d likely not wish to bet against Amazon making quick progress to the top in AI too.

Q, therefore, is less a standalone chatbot like ChatGPT (although OpenAI and former OpenAI staff have helped develop AI for others) and more a Copilot- or Duet-style arrangement, in that it’s being introduced to enhance and add value to existing Amazon cloud services, but in a very focused way (more so for Builder), being “trained on over 17 years’ worth of AWS knowledge and experience”.

Despite Q still being in preview, Amazon’s ambitions to make a quantum leap ahead are already clear if the reports about its super-powerful, GPT-4-rivalling (still under development) Olympus model are accurate. It remains to be seen, therefore, how well Q performs once it’s fully out there, but its introduction marks another major move by a serious contender in the rapidly evolving and growing generative AI market.

An Apple Byte : ChatGPT Voice Free To All iOS Users

OpenAI’s president and co-founder, Greg Brockman, has announced that ‘ChatGPT Voice’ in its ChatGPT app, previously only available to Plus and Enterprise subscribers, is now available free to all iOS and Android users.

ChatGPT Voice (originally introduced in September) integrates voice capabilities with the existing ChatGPT text-based model. This allows users to have a conversation with it and ask the ChatGPT chatbot questions and be given answers, all by voice, i.e. talking to the app on your device. Greg Brockman said on X that the feature “totally changes the ChatGPT experience.”

iOS users who want to try ChatGPT Voice can access it in their ChatGPT app now. An example video of what ChatGPT Voice can do has been posted by Greg Brockman on X.

Featured Article : OpenAI’s CEO Sam Altman Fired (But Will Return)

Following the shock announcement that the boss of OpenAI (which created ChatGPT) has been quickly ousted by the board and replaced by an interim CEO, we look at what happened, why, and what may be next.

Ousted 

38-year-old Sam Altman, who helped launch OpenAI back in 2015 (first as a non-profit, before its restructuring and investment from Microsoft), has become widely known as the face of OpenAI’s incredible rise. However, it’s been reported that, following some video conference calls with OpenAI’s six-member board, Mr Altman was removed from his role as CEO and from the board of directors. Also, OpenAI’s co-founder, Greg Brockman, was removed from his position as chairman of the board of directors, after which he resigned from the company. Both men were reportedly shocked by the speed of their dismissal.

Why? 

The reason given in a statement by OpenAI for removing Mr Altman was: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.” 

The company also said: “We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward.” 

Sam Altman Says … 

Mr Altman, whom many people see as the generally well-liked public face of AI following the introduction of ChatGPT and his many public appearances (most recently at the UK’s AI Safety Summit), interviews, and statements, has not publicly elaborated on what he may not have been candid about.

He commented on Elon Musk’s X platform (Musk was one of the founding co-chairs of OpenAI) that: “I loved my time at OpenAI. it was transformative for me personally, and hopefully the world a little bit. most of all I loved working with such talented people. Will have more to say about what’s next later.” 

Intriguingly, there were also reports at the time that Mr Altman and Mr Brockman may have been willing to return if the board members who ousted Altman stepped down – chief scientist Ilya Sutskever has been singled out in some reports as the person who led the move to oust Altman.

Theories  

The sudden nature of the sacking and the vagueness of OpenAI’s statement, plus some of the events afterwards, have led to speculation by many commentators about the real reason for ousting Mr Altman. Leading theories include:

Mr Altman may have either told the board about something they didn’t like, not told them something important (and perhaps been caught out), or been outed about something in comments made by other parties. Although this is the board’s version, no clear evidence has been made public. However, just prior to his ousting, in TV interviews, Microsoft’s CEO Satya Nadella is reported to have said that whether Altman and OpenAI staffers would become Microsoft employees was “for the OpenAI board and management and employees to choose” and that Microsoft expected governance changes at OpenAI. He’s also quoted as saying that the partnership between Microsoft and OpenAI “depends on the people at OpenAI staying there or coming to Microsoft, so I’m open to both options.”

It’s also been reported that two senior OpenAI researchers had resigned and that they (and possibly hundreds of OpenAI employees) may join Microsoft, or that Altman may have been planning to start a new company with the OpenAI employees who’d already left (which the board may have discovered).

Also, shortly after the whole incident, Microsoft announced that it had hired Altman and Brockman to launch a new advanced-AI research team with Altman as CEO, which may indicate that Altman had already been in talks with Microsoft’s CEO Satya Nadella about it – talks OpenAI’s board may have discovered.

Another theory, hinted at in the board’s statement (i.e. the part about how “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity”), is that there was unresolved bad feeling that the company had strayed from its initial ‘non-profit’ status. Some commentators have pointed to Elon Musk taking this view, and to his apparent silence over Altman’s ousting, as possible evidence of this.

Another possible reason for ousting Altman is a board power struggle. Evidence that this may be the case includes:

– Mr Altman and Mr Brockman saying they’d be willing to return if the board members who ousted Altman stepped down.

– Following his sacking, OpenAI investors trying to get Altman reinstated.

– Altman and leading shareholders in OpenAI (Microsoft and Thrive Capital) reportedly wanting the entire board to be replaced.

– Reported huge support for Altman among employees.

Interim CEOs 

Shortly after Altman’s ousting, OpenAI replaced him with two interim CEOs within a short space of time. These were/are:

– First, OpenAI’s CTO Mira Murati. With previous experience at Goldman Sachs, Zodiac Aerospace, Tesla, and Leap Motion, Murati was seen as a strong leader who sees multimodal models as the future of the company’s AI.

– Second (and the current interim CEO) is Emmett Shear, the former CEO of game streaming platform Twitch. Mr Shear said on X about his appointment: “It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” adding that: “I took this job because I believe that OpenAI is one of the most important companies currently in existence.”

Mr Shear’s Plans 

It’s been reported that Mr Shear plans to hire an independent investigator to examine who ousted Altman and why, speak with OpenAI’s company stakeholders, and reform the company’s management team as needed.

Mr Shear said: “Depending on the results of everything we learn from these, I will drive changes in the organisation – up to and including pushing strongly for significant governance changes if necessary.”

What Does This Mean For Your Business? 

Sam Altman has become known as the broadly well-liked face of AI since the introduction of OpenAI’s hugely popular ChatGPT chatbot one year ago. He’s extremely popular too with OpenAI employees and other major tech industry figures, including Emmett Shear (now OpenAI’s interim CEO) and former Google CEO Eric Schmidt, who has described Mr Altman as “a hero of mine”. Also, Mr Altman is very close to OpenAI’s major investor Microsoft, and has already been snapped up by Microsoft (along with Brockman) as head of a new AI research team there.

Altman’s rapid ousting from OpenAI has not gone down well, and all eyes appear to be focused on some of the other members of OpenAI’s board, the power struggle that appears to have been fought, and what kind of management and governance is needed at the top of OpenAI now to take it forward. It’s still early, and it remains to be seen what happens at the top following the investigation by interim CEO Shear. Microsoft will doubtless be very happy about having Altman on board, which could see them make their own gains in the now highly competitive generative AI market.

With Altman gone, it remains to be seen how/if OpenAI’s products and rapid progress and success is ultimately affected.

Update: 22.11.23 – It’s been announced that Sam Altman will soon return to OpenAI following changes to the board.

Featured Article : Major Upgrades To ChatGPT For Paid Subscribers

One year on from its general introduction, OpenAI has announced some major upgrades to ChatGPT for its Plus and Enterprise subscribers.

New Updates Announced At DevDay 

At OpenAI’s first ‘DevDay’ developer conference on November 6, the company announced more major upgrades to its popular ChatGPT chatbot premium service. The upgrades come as competition between the AI giants in the new and rapidly evolving generative AI market is increasing, following a year that has seen the introduction of Bing Chat and Copilot (Microsoft), Google’s Bard and Duet AI, Claude (Anthropic AI), X’s Grok, and more. Although this year, ChatGPT has already been updated since its general basic release with a subscription service and its more powerful GPT-4 model, plug-ins to connect it with other web services, and integration with OpenAI’s Dall-E 3 image generator (for Plus and Enterprise) and image upload to help with queries, OpenAI will be hoping that the new upgrades will retain the loyalty of its considerable user base and retain its place as the generative AI front-runner.

GPTs

The first of four main new upgrades is ‘GPTs,’ which gives anyone (who is a ChatGPT Plus subscriber) the option to create their own tailored version of ChatGPT, e.g. to help them in their daily life, or with specific tasks at work or at home. For example (as suggested by TechCrunch), a tech business could create and train its own GPT on its proprietary codebases, thereby enabling developers to check their style or generate code in line with best practices.

Users can create their own GPT with this ‘no coding required’ feature by clicking on the ‘Create a GPT’ option and using the GPT Builder. This involves having a conversation with the chatbot to give it instructions and extra knowledge, and to pick what the GPT can do (e.g. searching the web, making images, or analysing data). OpenAI says the ability for customers to build their own custom GPT chatbot builds upon the ‘Custom Instructions’ it launched in July, which let users set some preferences.

OpenAI has also addressed many privacy concerns about the feature by saying that any user chats with GPTs won’t be shared with builders and, if a GPT uses third party APIs, users can choose whether data can be sent to that API.

Share Your Custom GPTs Publicly Via ‘GPT Store’

The next new upgrade announced is that users can publicly share the GPTs they create via a soon-to-be-launched (later this month), searchable ‘GPT Store’ – the equivalent of an app store, like Apple’s App Store or Google Play. OpenAI says the GPT Store will feature creations by verified builders, and once in the store, GPTs become searchable and may “climb the leaderboards.” OpenAI also says it will spotlight the best GPTs in categories like productivity, education, and “just for fun,” and that “in the coming months” GPT creators will be able to earn money based on how many people are using their GPT.

GPT-4 Turbo

In another announcement, OpenAI says it’s launching a preview of the next generation of its GPT-4 model (first launched in March), named GPT-4 Turbo. As the name suggests, the Turbo version will be improved and more powerful. Features include the following (see the brief API sketch after this list):

– More up-to-date knowledge, i.e. knowledge of world events up to April 2023.

– A 128k context window to fit the equivalent of more than 300 pages of text in a single prompt.

– Optimised performance, which OpenAI says enables GPT-4 Turbo to be offered at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4.

– ChatGPT Plus will also be easier to use, with DALL-E, browsing, and data analysis all accessible without switching between different models.
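For developers, GPT-4 Turbo is reachable through OpenAI’s standard Chat Completions API. Below is a minimal sketch using OpenAI’s Python SDK; it assumes the preview model identifier announced at DevDay (“gpt-4-1106-preview”) and an API key set in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch: calling the GPT-4 Turbo preview via OpenAI's Python SDK (v1.x).
# Assumes the preview model identifier "gpt-4-1106-preview" and an API key
# set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the key features of GPT-4 Turbo."},
    ],
    max_tokens=300,  # cap the length of the reply
)

print(response.choices[0].message.content)
```

The 128k context window matters here because the `messages` list (plus the reply) must fit within the model’s context; with GPT-4 Turbo, that equates to over 300 pages of text in a single prompt.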

Copyright Shield 

The last of the major update announcements for pro users is the introduction of ‘Copyright Shield’ to protect enterprise and API users (not free or Plus users) from legal claims around copyright infringement. This appears to be an answer to announcements by Microsoft (in September) and Google (in October) that they will assume responsibility for potential legal risks to customers from copyright infringement claims arising from the use of their AI products.

Google, for example, announced it will offer limited indemnity and assume responsibility for the potential legal risks where customers receive copyright challenges through using generative AI products like Duet AI. Although it’s not yet clear how Copyright Shield will operate, OpenAI states in a recent blog: “we will now step in and defend our customers.” 

What Does This Mean For Your Business? 

OpenAI’s work with the other big tech companies and its general launch of ChatGPT a year ago have established it as the major player in the new and rapidly growing generative AI market. Building on the introduction of GPT-4 and rapid monetisation of its services through its business-focused Plus and Enterprise subscriptions, these latest updates see OpenAI making the shift from AI model developer to platform, i.e. with GPTs and the GPT Store.

What’s exciting and useful about GPTs is that they don’t require any coding skills, thereby democratising generative AI app creation and providing an easy way for businesses to create tools that can help them to save time and money, boost their productivity, improve their service, and much more. The addition of the GPT Store idea allows OpenAI to establish itself as a major go-to platform for AI apps, thereby competing with the likes of Google and Apple in a way. The Store could also provide a great opportunity for developers to monetise their GPTs as well as perhaps being a threat to consultancies and developers already creating custom AI services on behalf of paying clients.

The more powerful GPT-4 Turbo and its more up-to-date outputs, plus the lack of any requirement to switch between different models, are also likely to be features valued by businesses wanting easier, faster, and more productive ways to use ChatGPT. Furthermore, the Copyright Shield idea is likely to improve user confidence while enabling OpenAI to compete with Google and Microsoft, which have already announced their versions of it.

All in all, in the new and fast-moving generative AI market, these new upgrades see OpenAI ratcheting things up a notch, adding value, making serious competitive and customer retention efforts, showing its ambitions to move to platform status and greater monetisation, and further establishing itself as a major force in generative AI. For business users, these changes provide more opportunities to easily introduce customised and value-adding AI to any aspect of their business.

Featured Article : Live Information From ChatGPT

OpenAI has announced that, as one of three big changes, ChatGPT can now access current information by browsing the internet.

Previously  

Prior to the new (Beta) change, ChatGPT had only been trained on information up until September 2021, although ChatGPT’s newer GPT-4 architecture was trained up until January 2022. This has meant that unless using a plugin, accessing current information hasn’t been possible, which has been seen by many users as one of the main weaknesses of the chatbot.

Now 

OpenAI has announced that ChatGPT can now browse the internet to provide Plus users first (with all users to follow later) with: “current and authoritative information, complete with direct links to sources.” 

This effectively means that some ChatGPT users will soon be able to ask questions and receive up to date answers about current affairs and access current news and topics.

How? 

OpenAI says that the ‘Browse’ feature is rolling out to all Plus users. Users may notice a “ChatGPT September 25 Version” link at the foot of the page; going to ‘Settings & Beta > Beta features’, they can move the toggle to ‘on’ for ‘Browse with Bing’ (in the selector under GPT-4).

The Implications

In addition to making ChatGPT a more attractive tool to many users, this could mean that ChatGPT will take queries away from search engines and other online news sources, thereby seeing the chatbot acting as a competitor (to a degree). This will, of course, be less of a worry to Microsoft because of its close partnership with OpenAI and the fact that its Bing search will be used to enable ChatGPT to access current information.

Two Other Changes To ChatGPT 

OpenAI has also announced two other new capabilities for ChatGPT. As of September 25, OpenAI says it’s rolling out voice and image capabilities to Plus and Enterprise users over the next two weeks. The capabilities will enable users to ask questions and “have a voice conversation” with ChatGPT (as users of smart speakers such as Amazon Echo can do) or “show ChatGPT what you’re talking about” (something Google’s Bard can currently do).

Voice 

The Voice (Beta) capability, which is being rolled out to Plus users on iOS and Android, enables users to do a number of things such as have a conversation or “speak with it on the go, request a bedtime story, or settle a dinner table debate.”  It’s interesting that in its announcement, OpenAI describes ChatGPT in this context as “your assistant,” perhaps positioning it alongside digital assistants, e.g. Amazon Alexa, Google Assistant, and Apple’s Siri.

How? 

OpenAI says to activate it, users should head to ‘Settings → New Features’ on the mobile app and opt into voice conversations. Then, it’s a case of tapping the headphone button (top-right corner of the home screen) and choosing the preferred voice out of five different voice options.

Images

OpenAI also says that ‘Image input’ will soon be generally available to Plus users on all platforms. This will allow users to tap the photo button to capture or choose an image, and show/upload one or more images to ChatGPT to help get answers to queries. For example, OpenAI says users can “troubleshoot why your grill won’t start, explore the contents of your fridge to plan a meal, or analyse a complex graph for work-related data.” Image input will also enable users to focus on a specific part of the image by using a drawing tool in the mobile app.

Challenges 

Despite ChatGPT becoming the fastest growing consumer app in history (UBS research, 2023), and OpenAI introducing these new value-adding features to the app, ChatGPT and its new features are not without their widely acknowledged challenges. For example:

– As ChatGPT states clearly at the foot of its search page, “ChatGPT may produce inaccurate information about people, places, or facts,” and its CEO, Sam Altman, has spoken freely about the chatbot’s ability to have ‘hallucinations’, i.e. produce content that looks plausible but is simply made up. For example, back in July, the Federal Trade Commission (FTC) sent a letter to the Microsoft-backed business requesting information on how it addresses risks to people’s reputations caused by ChatGPT’s potential to “generate statements about real individuals that are false, misleading, or disparaging.”

– As some technology commentators have noted, in addition to potentially helping to bring more creative and accessibility-focused applications, the new voice technology feature could potentially be open to misuse, e.g. malicious actors using it to impersonate public figures or commit fraud.

– Some commentators have also noted how the new image input feature could create safety issues for users. This could include situations where people rely on the model when it hallucinates – perhaps misreading a safety diagram, for example. That said, OpenAI has said that the model has been tested with red teamers for risk domains (e.g. extremism and scientific proficiency) and with a diverse set of alpha testers. OpenAI is also reported to have worked with the ‘Be My Eyes’ (free) mobile app for blind and low-vision people. Measures have also reportedly been taken to limit ChatGPT’s ability to make direct statements about people in its analysis of images (because it’s widely accepted that these aren’t always accurate).

Amazon and Anthropic – Challenging Microsoft 

Just as Microsoft and OpenAI’s partnership and Microsoft’s investment have given Microsoft Copilot, and these new capabilities in ChatGPT, and Google has Bard and Duet, Amazon is now teaming up with Anthropic (maker of the ‘Claude’ chatbot) to enter the generative AI world and take on Microsoft. It’s been reported that Amazon is to invest up to £3.3bn in San Francisco-based AI firm Anthropic to get Claude 2, create new apps, and improve its existing ones for its customers. As part of the deal, Anthropic will be able to leverage Amazon’s huge computing power (Amazon has the AWS cloud computing service). Chatbots typically need large amounts of computing power for their LLMs and to handle the numbers and variations of customer queries. OpenAI, for example, is able to leverage Microsoft’s Azure.

Another Perspective 

Whereas many commentators see deals like Amazon’s with Anthropic as part of the fight-back from other tech companies against Microsoft and OpenAI (which is to be expected as companies race to offer their own value-adding version of the relatively new generative AI technology), not all agree. For example, some tech commentators have suggested that the Anthropic deal is also a sign that companies like Amazon and Google are looking to challenge Nvidia’s dominance in the market for specialist AI chips.

What Does This Mean For Your Business?

For UK businesses navigating the rapidly evolving digital landscape, these advancements in generative AI signal an era of unparalleled access to real-time information and enhanced user engagement. OpenAI’s groundbreaking features in ChatGPT come at a time when tech giants are all recognising the commercial potential of AI-driven chatbots, a fact underscored by Amazon’s timely announcement to supercharge Alexa’s AI capabilities. Such competitive moves are not just coincidences, but they mark the onset of a race where big tech firms are vying to seamlessly integrate generative AI into their product ecosystems, a shift that will inevitably reshape how businesses and consumers interact.

In the case of ChatGPT’s competitors, these new features could have a negative effect on them, likely by taking queries away from search engines and other online news sources.

For most UK enterprises, big tech firms vying to seamlessly integrate generative AI presents a dual-edged sword. On one hand, the ability to pull current data and have more interactive user experiences could elevate customer service, streamline operations, and drive innovation. On the other, the challenges posed by ‘hallucinations’ in AI outputs, potential misuses, and concerns over data integrity may necessitate a cautious approach. Companies, therefore, must be discerning in their adoption, weighing the transformative potential against the risks. Also, with Amazon’s massive investment in Anthropic and the resultant potential synergies with AWS, businesses may soon be faced with a broader array of AI-driven solutions, further intensifying the competitive landscape.

As the dust begins to settle in this technological race, some would say that UK businesses stand at a crossroads, e.g. to embrace these advancements as pivotal tools for future growth, or to tread cautiously, ever mindful of the evolving implications of AI in the business realm. Others would say that, on balance, using a common-sense approach and being careful to check ChatGPT’s outputs for any obvious errors, these new features and others will provide further time and cost savings, and efficiency and productivity benefits, to businesses as they learn the many ways they can leverage advances in generative AI and its wide-scale adoption.

Tech News : Copyright Conundrum: OpenAI Sued

It’s been reported that a trade group for U.S. authors (including John Grisham) has sued OpenAI, accusing it of unlawfully training its chatbot ChatGPT on their work.

Which Authors? 

The Authors Guild trade group has filed the lawsuit (in Manhattan federal court) on behalf of a number of prominent authors including John Grisham, Jonathan Franzen, George Saunders, Jodi Picoult, “Game of Thrones” novelist George R.R. Martin, “The Lincoln Lawyer” writer Michael Connelly and lawyer-novelists David Baldacci and Scott Turow.

Why? 

The Guild’s lawsuit alleges that the datasets that have been used to train OpenAI’s large language model (LLM) to respond to human prompts include text from the authors’ books, which may have been taken from illegal online “pirate” book repositories.

As proof, the Guild alleges that ChatGPT can generate accurate summaries of the authors’ books when prompted (including details not available in reviews anywhere else online), which indicates that their text must have been included in its training data.

Also, the Authors Guild has expressed concerns that ChatGPT could be used to replace authors, being used instead to simply “generate low-quality eBooks, impersonating authors and displacing human-authored books.”

Threat 

The Authors Guild said it organised the lawsuit after witnessing first-hand “the harm and existential threat to the author profession wrought by the unlicensed use of books to create large language models that generate texts.”

The Guild cites its latest author income survey as an example of how the income of authors could be adversely affected by LLMs. According to the survey, the median author income in 2022 was just over $20,000 (including book and other author-related activities); while the top 10 percent of authors earn far above that figure, half earn even less.

The Authors Guild says, “Generative AI threatens to decimate the author profession.”  

The Point 

To illustrate the main point of the Guild’s allegations, Scott Sholder, a partner with Cowan, DeBaets, Abrahams & Sheppard and co-counsel for the Plaintiffs and the Proposed Class, is reported on the Guild’s website as saying: “Plaintiffs don’t object to the development of generative AI, but Defendants had no right to develop their AI technologies with unpermitted use of the authors’ copyrighted works. Defendants could have ‘trained’ their large language models on works in the public domain or paid a reasonable licensing fee to use copyrighted works.”

Open Letter With 10,000 Signatures 

The lawsuit may have been the inevitable next step considering that, back in July, the Authors Guild submitted a 10,000-signature open letter to the CEOs of prominent AI companies (OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft) complaining about the building of lucrative generative AI technologies using copyrighted works and asking that AI developers get consent from, credit, and fairly compensate authors.

What Does OpenAI Say? 

As expected in a case where so much may be at stake, no direct comment has been made public by OpenAI (so far), although one source (Forbes) has reported that an OpenAI spokesperson told it the company was involved in “productive conversations” with many creators (including the Authors Guild) to discuss their AI concerns.

Where previous copyright lawsuits have been filed against it, OpenAI is reported to have pointed, in its defence, to the idea that ‘fair use’ could be applied to LLMs.

Others 

Other generative AI providers are also facing similar lawsuits, e.g. Meta Platforms and Stability AI.

What Does This Mean For Your Business? 

Ever since ChatGPT’s disruptive introduction last November, with its amazing generative abilities (e.g. with text and code, plus the abilities of image generators), creators (artists, authors, coders etc.) have felt AI’s negative effects, expressed their fears about it, and felt the need to protest. For example, the Hollywood actors’ and writers’ strikes, complaints from artists that AI image generators have copied their styles, and now the Authors Guild’s lawsuit are all part of a growing opposition from those who feel threatened and exploited.

We are still in the very early stages of generative AI, where it appears to many that the technology is running way ahead of regulation, and where AI providers may appear able to bypass areas of consent, copyright, and crediting, using the work of others to generate profits for themselves. This has led to authors, writers, actors, and other creatives fearing a reduction or loss of income, fearing that their skills and professions could be devalued and that they can and will be replaced by AI, and fearing that generative AI could be preferred by studios and other content providers as a way to reduce costs and complication. The result is the inevitable, multiple legal fights we’re seeing now, as creatives seek to clarify boundaries and protect themselves and their livelihoods. In the case of the very powerful Authors Guild, OpenAI will need to bring its ‘A’ game to the dispute, as the Authors Guild points out it’s “here to fight” and has “a formidable legal team” with “expertise in copyright law.”

This is not the only lawsuit against an AI provider, and there are likely to be many more (and many similar protests) until legal outcomes provide more clarity about the boundaries in the altered environment created by generative AI.

Tech News : Seven Safeguarding SamurAI?

Following warnings about threats posed by the rapid growth of AI, the US White House has reported that seven leading AI companies have committed to developing safeguards.

Voluntary Commitments Made 

A recent White House fact sheet has highlighted how, in a bid to manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety, President Biden met with and secured voluntary commitments from seven leading AI companies “to help move toward safe, secure, and transparent development of AI technology”. 

The companies who have made the voluntary commitments are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

What Commitments? 

In order to improve safety, security, and trust, and to help develop responsible AI, the voluntary commitments from the companies are:

Ensuring Products are Safe Before Introducing Them to the Public

– Internal and external security testing of their AI systems before release, carried out in part by independent experts, to guard against some of the most significant sources of AI risk, such as biosecurity and cybersecurity.

– Sharing information across the industry and with governments, civil society, and academia on managing AI risks, e.g. best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.

Building Systems that Put Security First 

– Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights (regarded as the most essential part of an AI system). The model weights will be released only when intended and when security risks are considered.

– Facilitating third-party discovery and reporting of vulnerabilities in their AI systems, e.g. putting a robust reporting mechanism in place to enable vulnerabilities to be found and fixed quickly.

Earning the Public’s Trust 

– Developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system, thereby enabling creativity with AI to flourish while reducing the dangers of fraud and deception.

– Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security risks and societal risks (e.g. the effects on fairness and bias).

– Prioritising research on the societal risks that AI systems can pose, including those on avoiding harmful bias and discrimination, and protecting privacy.

– Developing and deploying advanced AI systems to help address society’s greatest challenges, e.g. cancer prevention and mitigating climate change, thereby (hopefully) contributing to the prosperity, equality, and security of all.

To Be Able To Spot AI-Generated Content Easily 

One of the more obvious risks associated with AI is that people need to be able to tell, definitively, the difference between real content and AI-generated content. Being able to do so could help mitigate the risk of people falling victim to fraud and scams involving deepfakes, or believing misinformation and disinformation spread using AI deepfakes, which could have wider political and societal consequences.

One example of how this may be achieved, with the help of the AI companies, is the use of watermarks. This refers to embedding a digital marking in images and videos which is not visible to the human eye but can be read by certain software and algorithms, revealing whether the content was produced by AI. Watermarks could help in tackling all kinds of issues, including passing-off, plagiarism, the spread of false information, cybercrime (scams and fraud), and more.
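
To make the watermarking idea more concrete, below is a minimal sketch of one of the simplest invisible-marking techniques: hiding a short payload in the least significant bits of an image’s pixel values, a change far too small for the human eye to see but trivial for software to read back. This is purely illustrative, not how any of the seven companies’ systems are known to work, and a mark this simple would not survive compression or editing; the Pillow imaging library, the payload, and the function names are assumptions made for the example.

# Minimal least-significant-bit (LSB) watermark sketch, for illustration only.
# Real AI-content watermarks are designed to be far more robust than this.
# Assumes the Pillow imaging library is installed (pip install Pillow).
from PIL import Image

MARK = "AI-GENERATED"  # hypothetical payload identifying AI output

def embed_watermark(in_path: str, out_path: str, mark: str = MARK) -> None:
    img = Image.open(in_path).convert("RGB")
    # Turn the payload into a stream of bits.
    bits = "".join(f"{byte:08b}" for byte in mark.encode("utf-8"))
    stamped = []
    for i, (r, g, b) in enumerate(img.getdata()):
        if i < len(bits):
            # Overwrite the red channel's lowest bit with one payload bit:
            # a change of at most 1/255 per pixel, invisible to the eye.
            r = (r & ~1) | int(bits[i])
        stamped.append((r, g, b))
    img.putdata(stamped)
    img.save(out_path, "PNG")  # lossless format, so the hidden bits survive

def read_watermark(path: str, length: int = len(MARK)) -> str:
    img = Image.open(path).convert("RGB")
    # Collect the lowest red-channel bit from the first length*8 pixels.
    bits = [str(r & 1) for (r, g, b) in list(img.getdata())[: length * 8]]
    data = bytes(int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")

The point of the sketch is simply to show that information can sit inside an image without being visible. The mechanisms the AI companies have committed to developing would need to survive cropping, re-compression, and editing, which is a much harder problem.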

What Does This Mean For Your Business? 

Although AI is a useful business tool, its rapid growth has outstripped the pace of regulation. This has led to fears about the risks of AI when used to deceive, spread falsehoods, and commit crime (scams and fraud), as well as bigger threats such as political manipulation, societal destabilisation, and even an existential threat to humanity. This, in turn, has led to this first stage of action. Governments, particularly, need to feel that they can get the genie at least partially back in the bottle, so that they can ensure safeguards are built in early on to mitigate risks and threats.

The Biden administration getting at least some wide-ranging voluntary commitments from the big AI companies is, therefore, a start. Given that many of the signatories to the open letter calling for a six-month moratorium on systems more powerful than GPT-4 were engineers from those big tech companies, it’s also a sign that more action may not be far behind. Ideas like watermarking look a likely option, and no doubt there’ll be more.

AI is transforming businesses in a positive way, although many also fear that the automation it offers could result in big job losses, thereby affecting economies. This early stage is, therefore, the best time to make a real start on building in the right controls and regulations that allow the best aspects of AI to flourish while keeping the negative aspects in check, but this complex subject clearly has a long way to run.