Tech News : AI Job Risks – Finance & Insurance

Analysis by the Department for Education’s Unit for Future Skills to quantify the impact of AI on the UK jobs market found that the finance and insurance sector was more exposed than any other.

The Analysis 

“The impact of AI on UK jobs and training” report published online by the government highlights the results of a study that used US methodology to look at the abilities needed to perform different job roles, and the extent to which these can be aided by a selection of 10 common AI applications.

These applications are:

  1. Abstract Strategy Games: The ability to play abstract games involving sometimes complex strategy and reasoning ability, such as chess, go, or checkers, at a high level.
  2. Real-time Video Games: The ability to play a variety of real-time video games of increasing complexity at a high level.
  3. Image Recognition: The determination of what objects are present in a still image.
  4. Visual Question Answering: The recognition of events, relationships, and context from a still image.
  5. Image Generation: The creation of complex images.
  6. Reading Comprehension: The ability to answer simple reasoning questions based on an understanding of text.
  7. Language Modelling: The ability to model, predict, or mimic human language.
  8. Translation: The translation of words or text from one language into another.
  9. Speech Recognition: The recognition of spoken language into text.
  10. Instrumental Track Recognition: The recognition of instrumental musical tracks.

These AI applications were selected based on their relevance and the progress in technology from 2010 onwards, as recorded by the Electronic Frontier Foundation (EFF). They represent fundamental applications of AI that are likely to have implications for the workforce and cover the most likely and most common uses of AI.

The study also focuses on which occupations, sectors and areas within the UK labour market are expected to be most impacted by AI and large language models, and how this could impact workers in different UK geographic areas.
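The exposure-scoring approach described above can be sketched roughly as follows. This is a hypothetical illustration only: the ability names, weights, and relatedness scores are invented for the example and are not the study’s actual data.

```python
# Illustrative sketch of an occupational AI-exposure score: each occupation's
# abilities are weighted by importance, and each ability is scored by how
# strongly it relates to the selected AI applications. All figures invented.
AI_APPLICATIONS = ["image_recognition", "language_modelling", "translation"]

# How strongly each AI application relates to each workplace ability (0-1).
relatedness = {
    "written_comprehension": {"image_recognition": 0.1, "language_modelling": 0.9, "translation": 0.7},
    "manual_dexterity": {"image_recognition": 0.2, "language_modelling": 0.0, "translation": 0.0},
}

# How important each ability is to a given occupation (0-1).
occupation_abilities = {
    "solicitor": {"written_comprehension": 0.9, "manual_dexterity": 0.1},
    "roofer": {"written_comprehension": 0.2, "manual_dexterity": 0.9},
}

def exposure_score(occupation: str) -> float:
    """Importance-weighted average relatedness of an occupation's abilities to AI."""
    abilities = occupation_abilities[occupation]
    total_weight = sum(abilities.values())
    score = 0.0
    for ability, weight in abilities.items():
        # Average this ability's relatedness across all AI applications.
        mean_rel = sum(relatedness[ability][app] for app in AI_APPLICATIONS) / len(AI_APPLICATIONS)
        score += weight * mean_rel
    return score / total_weight

# A text-heavy professional role scores higher than a manual one.
print({occ: round(exposure_score(occ), 3) for occ in occupation_abilities})
```

On this toy data the solicitor scores roughly three times higher than the roofer, mirroring the study’s finding that professional, text-centred occupations are the most exposed.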

The Findings 

The key findings of the study show that:

– Professional occupations are more exposed to AI, especially those involving clerical work and roles across finance, law, and business management.

– The industries least exposed to AI and to LLMs are accommodation and food services; motor trades; agriculture, forestry, and fishing; transport and storage; and construction.

– The finance and insurance sector is more exposed to AI than any other sector.

– The occupations most exposed to all AI applications are management consultants and business analysts.

– The occupations most exposed to large language modelling are telephone salespersons, followed by solicitors and psychologists.

– Workers in London and the South East have the highest exposure to AI (five times as exposed as the North-East of England), reflecting the greater concentration of professional occupations in those areas.

These findings led to some press reports that AI’s incursion into our working lives would most affect ‘city highflyers.’

Qualifications and Training

The study also exposes the qualifications and training routes that most commonly lead to these highly impacted jobs, concluding that:

– Employees with more advanced qualifications are typically in jobs more exposed to AI, e.g. those with a level 6 qualification (equivalent to a degree).

– Employees with qualifications in accounting and finance through Further Education or apprenticeships, and economics and mathematics through Higher Education are typically in jobs more exposed to AI.

Other Studies 

Other studies highlighting levels of exposure to AI (AI taking jobs) include:

– A Pew Research Center study (2022) which found that 19 per cent of US workers were in jobs highly exposed to AI, where key activities might be replaced or assisted by AI.

– A Goldman Sachs Report (2023) suggesting that AI could replace the equivalent of 300 million full-time jobs globally. It indicates that about a quarter of work tasks in the US and Europe could be replaced by AI, impacting two-thirds of jobs in these regions to some degree.


A recent (October 2023) paper also highlights the dual nature of AI in advanced economies – AI’s potential as either a complement or a substitute for labour. The paper also highlights the important point that women and highly educated workers face greater occupational exposure to AI.

It’s worth noting that the Goldman Sachs Report (shown above) also highlighted this dual effect of AI, showing that AI also has the potential to create new jobs and boost productivity, potentially increasing the total annual value of goods and services produced globally by 7 per cent.

What Does This Mean For Your Business? 

As highlighted in the report for this study (and as supported by the findings of other studies), 10-30 per cent of jobs are automatable, with fast-evolving AI putting many of those jobs at risk. This government study largely confirms what many people may have expected: that those in more clerical work and in finance, law, and business management roles (where generative AI’s outputs are particularly effective) are most at risk of AI diminishing their value as workers. There are, of course, many other areas (some highlighted by this report) where generative AI is clearly able to replace or reproduce human efforts to an acceptable degree, from customer service roles to creative work (artists). Some people may find it disconcerting that jobs and professions which take years of study and carry a specialist element and high social value (e.g. solicitors and psychologists) are shown in the report to be suddenly and significantly at risk from what are, basically, algorithms.

The report’s findings also lead to what seems a logical conclusion: since there’s a greater concentration of professional occupations in London and the South East, those areas are more likely to be negatively affected by AI.

The report of the study also makes the valid point about the dual nature of AI’s effects, i.e., that in addition to threatening many jobs, AI also has the potential to increase productivity and create new high value jobs in the UK economy. However, the main focus of this and other studies may appear to confirm the fears of many, that fast-advancing AI is likely to have a profound and widespread effect on the UK economy and society, and not necessarily in a good way for many peoples’ jobs, skills, and value.

As highlighted in the report, the UK education system and employers will now need to adapt to ensure that individuals in the workforce have the skills they need to make the most of the potential benefits advances in AI will bring. As individual workers, many may now want to look at the ways they can maximise their value and be in a position where they can use and orchestrate what are essentially tools more effectively than others, and in a way that adds value to themselves and their own positions, and/or in a way that creates new opportunities.

Featured Article : 3000% Increase in Deepfake Frauds

A new report from ID verification company Onfido shows that the availability of cheap generative AI tools has led to deepfake fraud attempts increasing by 3,000 per cent (specifically, a factor of 31) in 2023.

Free And Cheap AI Tools 

Although deepfakes have now been around for several years, as the report points out, deepfake fraud has become significantly easier and more accessible due to the widespread availability of free and cheap generative AI tools. In simple terms, these tools have democratised the ability to create hyper-realistic fake images and videos, which were once only possible for those with advanced technical skills and access to expensive software.

Prior to the public availability of AI tools, for example, creating a convincing fake video or image required a deep understanding of computer graphics and access to high-end, often costly, software (a barrier to entry for would-be deep-fakers).

Document and Biometric Fraud – The New Frontier 

The Onfido data reveals a worrying trend in that while physical counterfeits are still prevalent, there’s a notable shift towards digital manipulation of documents and biometrics, facilitated by the availability and sophistication of AI tools. Fraudsters are not only altering documents digitally but also exploiting biometric verification systems through deepfakes and other AI-assisted methods. The Onfido report highlights a dramatic rise in the rate of biometric fraud, which doubled from 2022 to 2023.

Deepfakes – A Growing Threat 

As reinforced by the findings of the report, deepfakes pose an emerging and significant threat, particularly in biometric verification. The accessibility of generative AI and face-swap apps has made the creation of deepfakes easier and highly scalable, which is evidenced by a 31X increase in deepfake attempts in 2023 compared to the previous year!

Minimum Effort (And Cost) For Maximum Return

As the Onfido report points out, simple ‘face swapping’ apps (i.e. apps which leverage advanced AI algorithms to seamlessly superimpose one person’s face onto another in photos or videos) offer ease of use and effectiveness in creating convincing fake identities. They are part of an influx of readily available online AI-assisted tools that are providing fraudsters with a new avenue into biometric fraud. For example, the Onfido data shows that biometric fraud attempts are clearly higher this year than in previous years, with fraudsters favouring tools like face-swapping apps to target selfie biometric checks and create fake identities.

The kind of fakes these cheap, easy apps create has been dubbed “cheapfakes”, and this confirms something that’s long been known about online fraudsters and cyber criminals: they seek methods that require minimum effort, minimum expense and minimum personal risk, yet deliver maximum effect.

Sector-Specific Impact of Deepfakes 

The Identity Fraud Report shows that (perhaps obviously) the gambling and financial sectors in particular are facing the brunt of these sophisticated fraud attempts. The lure of cash rewards and high-value transactions in these sectors makes them attractive targets for deepfake-driven frauds. In the gambling industry, for example, fraudsters may be particularly attracted to the sign-up and referral bonuses. In the financial industry, where frauds tend to be based around money laundering and loan theft, Onfido reports that digital attacks are easy to scale, especially when incorporating AI tools.

Implications For UK Businesses In The Age of (AI) Deepfake-Driven Fraud 

The surge in deepfake-driven fraud highlighted by the somewhat startling statistics in Onfido’s 2024 Identity Fraud Report suggests that UK businesses navigating this new landscape may require a multifaceted approach, balancing the implementation of cutting-edge technologies with heightened awareness and strategic planning. In more detail, this could involve:

– UK businesses prioritising the reinforcement of their identity verification processes. Traditional methods may no longer suffice against the sophistication of deepfakes, so adopting AI-powered solutions specifically designed to detect and counter deepfake attempts could be the way forward. This could work as long as such systems can keep up with advancements in fraudulent techniques (more advanced techniques may emerge as more sophisticated AI tools become available).

– The training of staff, i.e. educating them about the nature of deepfakes and how they can be used to perpetrate fraud. This could empower employees to better recognise potential threats and respond appropriately, particularly in sectors like customer service and security, where human judgment plays a key role.

– Maintaining customer trust. UK businesses must navigate the fine line between implementing robust security measures and ensuring a frictionless customer experience. Transparent communication about the security measures in place and how they protect customer data can help in maintaining and even enhancing customer trust.

– As the use of deepfakes in fraud rises, regulatory bodies may introduce new compliance requirements and UK businesses will need to ensure that they stay abreast of these changes both to protect customers and remain compliant with legal standards. This in turn could require more rigorous data protection protocols or mandatory reporting of deepfake-related breaches.

– Collaboration with industry peers and participation in broader discussions about combating deepfake fraud may also be a way to gain valuable insights. Sharing knowledge and strategies, for example, could help in developing industry-wide best practices. Also, partnerships with technology providers specialising in AI and fraud detection could offer access to the latest tools and expertise.

– Since deepfake fraud may be an ongoing threat, long-term strategic planning may be essential. This perspective could be integrated into long-term business strategies, thereby (hopefully) making sure that resources are available and allocated not just for immediate solutions but also for future-proofing against evolving digital threats.

What Else Can Businesses Do To Combat Threats Like AI-Generated Deepfakes? 

Other ways that businesses can contribute to the necessary comprehensive approach to tackling the AI-generated deepfake threat may also include:

– Implementing biometric verification technologies that require live interactions (so-called ‘liveness solutions’), such as head movements, which are difficult for deepfakes to replicate.

– The use of SDKs (platform-specific building tools for developers) over APIs. For example, SDKs provide better protection against fraudulent submissions as they incorporate live capture and device integrity checks.
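The combined effect of the two measures above — a liveness challenge plus SDK-side live capture and device integrity checks — can be sketched as a simple server-side acceptance gate. This is an invented illustration: the field names and threshold are hypothetical, not any vendor’s actual API.

```python
# Hypothetical sketch of a biometric verification gate. A submission is only
# accepted if it was captured live in-app, passed a liveness challenge (e.g.
# a head movement), came from an uncompromised device, and matched the
# document photo. Field names and the 0.85 threshold are invented.
from dataclasses import dataclass

@dataclass
class BiometricSubmission:
    captured_live: bool        # media captured in-app, not uploaded via a raw API call
    liveness_passed: bool      # user completed a live challenge (head turn, etc.)
    device_integrity_ok: bool  # SDK device checks passed (no emulator/rooting)
    face_match_score: float    # selfie-to-document similarity, 0-1

def accept_submission(sub: BiometricSubmission, match_threshold: float = 0.85) -> bool:
    """Reject anything that skipped live capture, failed the liveness
    challenge, or came from a compromised device, regardless of match score."""
    if not (sub.captured_live and sub.liveness_passed and sub.device_integrity_ok):
        return False
    return sub.face_match_score >= match_threshold

# A pre-recorded deepfake video injected through an API typically fails the
# live-capture and liveness checks even with a very high match score.
deepfake_attempt = BiometricSubmission(False, False, True, 0.97)
genuine_attempt = BiometricSubmission(True, True, True, 0.91)
print(accept_submission(deepfake_attempt), accept_submission(genuine_attempt))  # False True
```

The design point is that the liveness and capture checks act as hard gates ahead of the similarity score, so a convincing deepfake face alone is not enough.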

The Dual Nature Of Generative AI 

Although, as you’d expect an ‘Identity Fraud Report’ to do, the Onfido report focuses solely on the threats posed by AI, it’s important to remember that AI tools can be used by all businesses to add value, save time, improve productivity, get more creative, and to defend against the AI threats. AI-driven verification tools, for example, are becoming more adept at detecting and preventing fraud, underscoring the technology’s dual nature as both a tool for fraudsters and a shield for businesses.

What Does This Mean For Your Business? 

Tempering the reading of the startling stats in the report with the knowledge that Onfido is selling its own deepfake (liveness) detection solution and SDKs, it still paints a rather worrying picture for businesses. That said, the Onfido 2024 Identity Fraud Report’s findings, highlighting a 3,000 per cent increase in deepfake fraud attempts due to readily available generative AI tools, signal a pivotal shift in the landscape of online fraud. This shift could pose new challenges for UK businesses but also open avenues for innovative solutions.

For businesses, the immediate response may involve upgrading identity verification processes with AI-powered solutions tailored to detect and counter deepfakes. However, it’s not just about deploying advanced technology. It’s also about ensuring these systems evolve with the fraudsters’ tactics. Equally crucial is the role of employee training in recognising and responding to these sophisticated fraud attempts.

As regulatory landscapes adjust to these emerging threats, staying informed and compliant is also likely to become essential. The goal is not only to counter current threats but to build resilience and innovation for future challenges.

Tech News : Autumn Statement Suggests IT Spending Boost

The announcement of measures intended to boost investment in innovation and technology in UK Chancellor Jeremy Hunt’s Autumn Statement could mean increased spending on IT and AI.

Measures To Boost The Tech Sector

The UK Chancellor’s Autumn Statement introduced a range of measures aimed at boosting the tech sector, with potentially significant implications for tech spending and investment in innovation. Some tech commentators have suggested that this could mean that private-sector IT buyers will see a long-term boost. Here we take a look at how the measures announced could affect tech spending, their potential overall impact, and any negative effects they might have.

Positive Impacts on Tech Spending 

Some of the key announcements in the Autumn Statement that could have a positive effect on tech spending include:

– A permanent full expensing policy. Mr Hunt’s decision to make the full expensing policy permanent allows private sector IT buyers to write off the cost of IT equipment against tax. This policy, therefore, looks likely to encourage more investment in IT infrastructure, as companies can deduct these expenses from taxable profits.

– Enhanced R&D tax credits. The merger of the R&D Expenditure Credit and SME schemes from April 2024 will make more companies eligible for claims supporting innovation. It’s thought that around 5,000 additional small businesses may benefit, thereby helping to foster a more innovative environment in the UK tech sector.

– Investment in AI and quantum technologies. The government’s commitment of £500 million over two years to establish additional ‘compute innovation centres’, part of a larger £1.5 billion investment, is intended to enhance the UK’s capabilities in AI. The Statement also outlined five quantum missions as part of the ‘National Quantum Strategy.’ These missions focus on establishing advanced quantum computing capabilities and networks and incorporating quantum technologies in various sectors such as healthcare, transportation, and defence by 2030 and 2035. One key benefit of quantum computing being made available to healthcare could, of course, be breakthroughs in areas like drug discovery. Thinking back to the pandemic, many people may remember how quantum computing was explored as a way to help speed the development of effective vaccines.

– Skills development initiatives. It’s long been known that the UK has a tech skills gap, something that threatens to hamper its ambition to become an international technology superpower. Therefore, a £50 million investment to pilot ways to increase apprenticeships in key growth sectors (including engineering) aligns with the need for a skilled workforce to sustain tech advancements. Mr Hunt also announced three more investment zones (on top of the 12 announced in March) in order to boost advanced manufacturing in the West Midlands, East Midlands, and Greater Manchester.

– Support for clean energy and infrastructure. The outlined efforts to cut grid access delays and provide financial incentives for clean energy businesses will likely accelerate the UK’s transition to a low-carbon economy, benefiting green tech initiatives. For example, a £960m Green Industries Growth Accelerator fund may help to support emerging technologies in clean energy and the transition to net-zero.
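The full expensing measure above is easiest to see with a worked example. The arithmetic below is illustrative only: the figures are invented, and the 25 per cent rate is the current UK main corporation tax rate, which will not apply to every company.

```python
# Illustrative arithmetic: how full expensing reduces a year-one tax bill.
# Under full expensing, 100% of qualifying IT/plant spend is deducted from
# taxable profit in the year of purchase. Figures are invented examples.
CORPORATION_TAX_RATE = 0.25  # UK main rate; smaller companies may pay less

def tax_saving_from_full_expensing(it_spend: float, rate: float = CORPORATION_TAX_RATE) -> float:
    """Cash tax saving in year one: the full spend times the tax rate."""
    return it_spend * rate

# A company spending £100,000 on qualifying IT equipment cuts its year-one
# corporation tax bill by £25,000.
print(tax_saving_from_full_expensing(100_000))  # 25000.0
```

In effect, the policy lets the deduction arrive immediately rather than being spread over years of capital allowances, which is why commentators expect it to encourage IT investment.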

Not All Positive 

Some of the Autumn Statement announcements, however, may not be such good news for the UK’s tech sector. For example, some of the potential challenges and negative effects include:

– Negative economic forecasts and tight public spending. Despite some of the ambitious measures announced, their success is essentially contingent on economic forecasts, which have recently been revised downwards. A significant squeeze on public spending due to inflation may also hamper plans for digital transformation, especially if budget constraints affect public sector investments.

– Implementation and collaboration needs. The effectiveness of the many potentially positive measures announced depends on the government’s ability to implement them quickly and efficiently. Also, government collaboration with the tech sector is crucial to ensure these policies translate into tangible growth and innovation. For example, the government will need to work alongside tech companies, startups, and industry experts to understand their needs, address potential challenges, and ensure that the policies are actually practical and beneficial.

– A reliance on estimates and uncertainties. Some reforms, like those to the energy grid and pension and capital market reforms, are unfortunately based on estimates that may not actually materialise as expected. If these projections fall short, it could limit the overall impact of the statement’s measures on tech investment and growth.

What Does This Mean For Your Business? 

The Autumn Statement’s initiatives offer a promising landscape for businesses beyond just IT buyers, possibly signalling a transformative shift in the UK’s approach to technological advancement and innovation. The decision to make full expensing permanent, coupled with enhanced R&D tax credits, may help present a financially viable path for businesses across various sectors to invest more boldly in new technology and innovation projects. This change not only eases the financial burden of such investments but may also go some way to encouraging a culture of continuous innovation.

The substantial investments in AI, quantum computing, and compute infrastructure could open up new avenues for businesses to access and leverage advanced technologies. These technologies, for example, have the potential to revolutionise product development and operational efficiency across a wide range of industries. As a result, organisations can look forward to not only improved business processes but also the possibility of developing groundbreaking new products and services.

The focus on developing much-needed tech skills in the UK workforce through apprenticeships and training initiatives is another critical aspect. This approach could help give UK businesses access to employees equipped with the necessary skills to navigate and contribute to an increasingly complex technological landscape. This is particularly crucial at a time when technology is evolving rapidly, and the demand for skilled professionals is at an all-time high.

Businesses with a focus on green technologies or those looking to transition to more sustainable practices may get support through initiatives aimed at reducing grid access delays and promoting clean energy. This not only aligns with global trends toward sustainability but also offers a competitive edge to businesses that prioritise environmental responsibility.

However, businesses in what are challenging economic times are likely to see the announcements in the broader economic context. The success of these measures is not guaranteed and is contingent upon effective implementation amidst economic uncertainties and potential public spending constraints. Therefore, businesses need to stay informed and agile, ready to adapt to changing regulations and economic conditions.

Looking on the bright side, this year’s Autumn Statement generally appears to present a multifaceted opportunity for businesses to grow, innovate, and adapt in a rapidly evolving technological environment. If UK businesses can capitalise on the initiatives announced and navigate the associated challenges, they may be better positioned to make the most of new technologies like AI.

Featured Article : OpenAI’s CEO Sam Altman Fired (But Will Return)

Following the shock announcement that the boss of OpenAI (which created ChatGPT) has been quickly ousted by the board and replaced by an interim CEO, we look at what happened, why, and what may be next.


38-year-old Sam Altman, who helped launch OpenAI back in 2015, first as a non-profit before its restructuring and investment from Microsoft, has become widely known as the face of OpenAI’s incredible rise. However, it’s been reported that following some video conference calls with OpenAI’s six-member board, Mr Altman was removed from his role as CEO and from the board of directors. Also, OpenAI’s co-founder, Greg Brockman, was removed from his position as chairman of the board of directors, after which he resigned from the company. Both men were reportedly shocked by the speed of their dismissal.


The reason given in a statement by OpenAI for removing Mr Altman was: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.” 

The company also said: “We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward.” 

Sam Altman Says … 

Mr Altman – whom many people see as the generally well-liked public face of AI, thanks to the introduction of ChatGPT and his many public appearances (most recently at the UK’s AI Safety Summit), interviews, and statements – has not publicly elaborated on what he may not have been candid about.

He commented on Elon Musk’s X platform (Musk was one of the founding co-chairs of OpenAI) that: “I loved my time at OpenAI. it was transformative for me personally, and hopefully the world a little bit. most of all I loved working with such talented people. Will have more to say about what’s next later.” 

Intriguingly, there were also reports at the time that Mr Altman and Mr Brockman may have been willing to return if the board members who ousted Altman stepped down – chief scientist Ilya Sutskever has been singled out in some reports as the person who led the move to oust Altman.


The sudden nature of the sacking and the vagueness of OpenAI’s statement, plus some of the events afterwards, have led to speculation by many commentators about the real cause of Mr Altman’s ousting. Leading theories include:

Mr Altman may have told the board something they didn’t like, withheld something important (and perhaps been caught out), or been outed by comments made by other parties. Although this is the board’s version, no clear evidence has been made public. However, in TV interviews, Microsoft’s CEO Satya Nadella is reported to have said that whether Altman and OpenAI staffers would become Microsoft employees was “for the OpenAI board and management and employees to choose” and that Microsoft expected governance changes at OpenAI. He’s also quoted as saying that the partnership between Microsoft and OpenAI “depends on the people at OpenAI staying there or coming to Microsoft, so I’m open to both options.”

It’s also been reported that two senior OpenAI researchers had resigned and that they (and possibly hundreds of OpenAI employees) may join Microsoft, or that Altman may have been planning to start a new company with OpenAI employees who’d already left (which the board may have discovered).

Also, shortly after the whole incident, Microsoft announced that it had hired Altman and Brockman to launch a new advanced-AI research team with Altman as CEO, which may indicate that Altman had already been in talks with Microsoft’s CEO Satya Nadella about it, which may have been discovered by OpenAI’s board.

The board’s statement also hinted – in the part about “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity” – at unresolved bad feeling that the company had strayed from its initial ‘non-profit’ status. Some commentators have pointed to Elon Musk taking this view and his apparent silence over Altman’s ousting as possible evidence of this.

Another possible reason for ousting Altman is a board power struggle. Evidence that this may be the case includes:

– Mr Altman and Mr Brockman saying they’d be willing to return if the board members who ousted Altman stepped down.

– Following his sacking, OpenAI investors trying to get Altman reinstated.

– Altman and leading shareholders in OpenAI (Microsoft and Thrive Capital) reportedly wanting the entire board to be replaced.

– Reported huge support for Altman among employees.

Interim CEOs 

Shortly after Altman’s ousting, OpenAI replaced him with two interim CEOs within a short space of time. These were/are:

– Firstly, OpenAI’s CTO Mira Murati. With previous experience at Goldman Sachs, Zodiac Aerospace, Tesla, and Leap Motion, Murati was seen as a strong leader who sees multimodal models as the future of the company’s AI.

– Secondly (the current interim CEO) is Emmett Shear, the former CEO of game streaming platform Twitch. Mr Shear said on X about his appointment: “It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” adding that: “I took this job because I believe that OpenAI is one of the most important companies currently in existence.” 

Mr Shear’s Plans 

It’s been reported that Mr Shear plans to hire an independent investigator to examine who ousted Altman and why, speak with OpenAI’s company stakeholders, and reform the company’s management team as needed.

Mr Shear said: “Depending on the results of everything we learn from these, I will drive changes in the organisation – up to and including pushing strongly for significant governance changes if necessary.” 

What Does This Mean For Your Business? 

Sam Altman has become known as the broadly well-liked face of AI since the introduction of OpenAI’s hugely popular ChatGPT chatbot one year ago. He’s extremely popular too with OpenAI employees and other major tech industry figures, including Emmett Shear, who is now OpenAI’s interim CEO, and former Google CEO Eric Schmidt, who has described Mr Altman as “a hero of mine”. Also, Mr Altman is very close to OpenAI’s major investor Microsoft, and has already been snapped up by Microsoft (along with Brockman) as head of a new AI research team there.

Altman’s rapid ousting from OpenAI has not gone down well, and all eyes appear to be focused on some of the other members of OpenAI’s board, the power struggle that appears to have been fought, and what kind of management and governance is needed at the top of OpenAI now to take it forward. It’s still early days, and it remains to be seen what happens at the top following the investigation by interim CEO Shear. Microsoft will doubtless be very happy about having Altman on board, which could see it make its own gains in the now highly competitive generative AI market.

With Altman gone, it remains to be seen how, or if, OpenAI’s products and rapid progress are ultimately affected.

Update: 22.11.23 – It’s been announced that Sam Altman will soon return to OpenAI following changes to the board.

Tech News : Warning Over Lessening Of AI Facial Recognition Supervision

Computer Weekly recently reported that, in an interview, the outgoing England and Wales biometrics and surveillance camera commissioner, Professor Fraser Sampson, warned of the declining state of oversight of UK police deployment of AI facial recognition surveillance.


Professor Fraser Sampson emailed his resignation letter to (then) Home Secretary Suella Braverman in August, stating his intention to resign by October 31. The reason given was that the Data Protection and Digital Information Bill will essentially make his role redundant by removing the responsibilities of the Biometrics Commissioner’s position and giving these powers to the Investigatory Powers Commissioner.

Professor Sampson, who was only appointed to the role in March 2021, said: “Having explored a number of alternatives with officials, I am unable to find a practical way in which I can continue to discharge the functions of these two roles beyond 1st November.” 

Professor Sampson’s responsibilities in the role had included overseeing how the police collect, retain and use biometric material (such as digital facial images), and encouraging their compliance with the surveillance camera code of practice.

Past Concerns and Criticisms 

In addition to acknowledging the many benefits of AI facial recognition’s deployment in the UK (e.g. catching known criminals, including those involved in child sexual abuse material, finding missing or vulnerable people, locating terror suspects, and helping to protect citizens from inhumane or degrading treatment), Professor Sampson has also previously criticised and raised concerns about aspects of its deployment. For example, in February, he noted:

– The absence of a clear set of legal rules or a framework to regulate the police’s use of AI and biometric material.

– A lack of clarity about the scale and extent of public space surveillance, particularly in relation to the proliferation of Chinese surveillance technology across the public sector.

Professor Sampson has also been vocal about a number of other related issues and concerns, such as:

– Issues related to the questionable legality of using public cloud infrastructure to store and process law enforcement data and the police’s general culture of retaining biometric data.

– Concerns about the unlawful retention of millions of custody images of people who were never convicted of a crime. Despite Professor Sampson raising the issue, and the High Court ruling in 2012 that they should be deleted, it’s been reported that the Home Office, which owns UK police biometric databases, hasn’t done so because it has no bulk deletion capability.

– The dangers of the UK slipping into becoming an “all-encompassing” surveillance state if concerns about these technologies (facial recognition) are not addressed. He has expressed his surprise at the disconnected approach of the UK government and his shock at how little the police and local authorities know about the capabilities and implications of the surveillance equipment they were using.

– Concerns about the possible misuse of facial recognition and AI technologies in controversial settings (i.e. that the deployment methods used by UK police in such settings could negate any benefits of the technologies). Controversial settings could include mass surveillance at public events, targeting of specific communities, routine public surveillance, and use in schools, other educational institutions, and workplaces, all of which raise concerns about privacy, discrimination, and infringement of individuals’ rights.

– Rejection of the “nothing to worry about” defence, i.e. he challenged the common justification for surveillance that people who have done nothing wrong have nothing to worry about, stating this misses the point entirely.

– The government’s data reform proposals. For example, he criticised the government’s Data Protection and Digital Information (DPDI) Bill, arguing that it would lead to weaker oversight by subsuming biometric oversight under the Investigatory Powers Commissioner and removing the obligation to publish a Surveillance Camera Code of Practice.

– Efficacy and ethical concerns. Professor Sampson questioned the effectiveness of facial recognition in preventing serious crimes and highlighted the risk of pervasive facial-recognition surveillance. He also noted the chilling effect of such surveillance, where people might alter their behaviour due to the knowledge of being watched and warned against the abuse of these powers.

– He also advocated for a robust, clear, and intuitive oversight accountability framework for facial-recognition and biometric technologies, expressing concern about the fragmentation of the existing regulatory framework.

– The government’s lack of understanding and direction. For example, Professor Sampson commented on the lack of understanding and rationale in the government’s direction with police technology oversight and emphasised the need for public trust and confidence as a prerequisite, not just a desired outcome, for the rollout of new technologies.

– Predictive policing concerns. He warned against the dangers of using algorithms or AI for predictive policing, arguing that such approaches rely heavily on assumptions and create a baseline level of suspicion around the public.

Wider Concerns About Police Surveillance Using Facial Recognition 

Professor Sampson’s concerns about the police using Live Facial Recognition (LFR) surveillance at special assignments and high-profile events echo many of those expressed by others over the last few years. For example:

– Back in 2018, Elizabeth Denham, the then UK Information Commissioner, launched a formal investigation into how police forces used facial recognition technology (FRT) after high failure rates, misidentifications, and worries about legality, bias, and privacy. In the same year, a letter written by privacy campaign group Big Brother Watch, and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers, highlighted concerns that facial recognition was being adopted in the UK before it had been properly scrutinised.

– In the EU, in January 2020, the European Commission considered a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use were put in place. In June this year, the European Parliament voted to back a ban on AI-powered facial recognition in public spaces.

What Does This Mean For Your Business? 

The evolving landscape of the Data Protection and Digital Information Bill, particularly in the context of Professor Fraser Sampson’s resignation, could hold significant implications for UK businesses. This shift indicates a potential realignment of regulatory focus from physical biometric surveillance to digital data protection. For businesses, this underscores the need to adapt to a framework that prioritises digital data security and privacy.

The possible consolidation of regulatory bodies, such as merging the role of the Biometrics Commissioner into that of the Investigatory Powers Commissioner, may not necessarily mean a decline in oversight, as Professor Sampson suggests, but could instead produce a more streamlined oversight process. On the upside, this could mean simpler compliance procedures for businesses, though it may also demand a broader understanding of a wider set of regulations. On the downside, companies (especially those dealing with biometric data) may need to track these changes very closely to ensure they remain compliant.

As the bill is likely to address the complexities of digital data, businesses will need to be proactive in understanding how these complexities are regulated. This is crucial for those handling large volumes of customer data or relying heavily on digital platforms. Adapting to evolving technologies and staying abreast of technological advancements will, therefore, be key.

All in all, in light of the changes (and possible decline in oversight) highlighted by Professor Sampson, businesses will now need to be mindful of shifting political and public sentiments around privacy and surveillance, as these can influence consumer behaviour. While the changing regulatory landscape presents challenges, it also offers opportunities for businesses to align with contemporary data protection standards. Staying informed and adaptable may therefore be essential for navigating these changes successfully.

Featured Article : Major Upgrades To ChatGPT For Paid Subscribers

One year on from its general introduction, OpenAI has announced some major upgrades to ChatGPT for its Plus and Enterprise subscribers.

New Updates Announced At DevDay 

At OpenAI’s first ‘DevDay’ developer conference on November 6, the company announced more major upgrades to its popular ChatGPT chatbot premium service. The upgrades come as competition between the AI giants in the new and rapidly evolving generative AI market is increasing, following a year that has seen the introduction of Bing Chat and Copilot (Microsoft), Google’s Bard and Duet AI, Anthropic’s Claude, X’s Grok, and more. This year, ChatGPT has already gained a subscription service built on the more powerful GPT-4 model, plug-ins to connect it with other web services, integration with OpenAI’s DALL-E 3 image generator (for Plus and Enterprise), and image upload to help with queries. OpenAI will be hoping that the new upgrades retain the loyalty of its considerable user base and keep it in place as the generative AI front-runner.


The first of four main new upgrades is ‘GPTs’, which gives any ChatGPT Plus subscriber the option to create their own tailored version of ChatGPT, e.g. to help them in daily life, or with specific tasks at work or at home. For example (as suggested by TechCrunch), a tech business could create and train its own GPT on its proprietary codebases, enabling developers to check their style or generate code in line with best practices.

Users can create their own GPT with this ‘no coding required’ feature by clicking on the ‘Create a GPT’ option and using the GPT Builder. This involves having a conversation with the chatbot to give it instructions and extra knowledge, and picking what the GPT can do (e.g. searching the web, making images, or analysing data). OpenAI says the ability for customers to build their own custom GPT chatbot builds upon the ‘Custom Instructions’ feature it launched in July, which let users set some preferences.

OpenAI has also addressed many privacy concerns about the feature by saying that any user chats with GPTs won’t be shared with builders and, if a GPT uses third party APIs, users can choose whether data can be sent to that API.

Share Your Custom GPTs Publicly Via ‘GPT Store’ 

The next new upgrade announced is that users will be able to share the GPTs they create publicly via a searchable ‘GPT Store’, due to launch later this month – the equivalent of an app store, like Apple’s App Store or Google Play. OpenAI says the GPT Store will feature creations by verified builders and that, once in the store, GPTs become searchable and may “climb the leaderboards.” OpenAI also says it will spotlight the best GPTs in categories like productivity, education, and “just for fun,” and that “in the coming months” GPT creators will be able to earn money based on how many people are using their GPT.

Turbo GPT-4 

In another announcement, OpenAI says it’s launching a preview of the next generation of its GPT-4 model (first launched in March), named GPT-4 Turbo. As the name suggests, the Turbo version will be improved and more powerful. Features include:

– More up-to-date knowledge, i.e. knowledge of world events up to April 2023.

– A 128k context window to fit the equivalent of more than 300 pages of text in a single prompt.

– Optimised performance, which OpenAI says enables GPT-4 Turbo to be offered at a 3x lower price for input tokens and a 2x lower price for output tokens compared to GPT-4.

– ChatGPT Plus will also be easier to use, i.e. no need to switch between different models because DALL-E, browsing, and data analysis can all be accessed without switching.
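The pricing claims above can be sanity-checked with simple arithmetic. The sketch below uses the per-1,000-token US-dollar prices reported at DevDay – treat the exact figures as assumptions that may since have changed:

```python
# Per-1K-token prices as reported at DevDay (assumed; check current OpenAI pricing).
GPT4_INPUT, GPT4_OUTPUT = 0.03, 0.06      # GPT-4
TURBO_INPUT, TURBO_OUTPUT = 0.01, 0.03    # GPT-4 Turbo preview

def cost(prompt_tokens: int, completion_tokens: int,
         in_price: float, out_price: float) -> float:
    """Dollar cost of one request at the given per-1K-token prices."""
    return prompt_tokens / 1000 * in_price + completion_tokens / 1000 * out_price

# Example: a 10,000-token prompt with a 1,000-token reply.
gpt4_cost = cost(10_000, 1_000, GPT4_INPUT, GPT4_OUTPUT)
turbo_cost = cost(10_000, 1_000, TURBO_INPUT, TURBO_OUTPUT)

print(f"GPT-4: ${gpt4_cost:.2f}, Turbo: ${turbo_cost:.2f}")
print(f"Input price ratio: {GPT4_INPUT / TURBO_INPUT:.0f}x, "
      f"output: {GPT4_OUTPUT / TURBO_OUTPUT:.0f}x")
```

At these assumed prices the same request drops from $0.36 to $0.13, consistent with OpenAI’s stated 3x (input) and 2x (output) reductions.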

Copyright Shield 

The last of the major update announcements for pro users is the introduction of ‘Copyright Shield’ to protect enterprise and API users (not free or Plus users) from legal claims around copyright infringement. This appears to be an answer to Microsoft’s September and Google’s October announcements that they will assume responsibility for potential legal risks to customers from copyright infringement claims arising from the use of their AI products.

Google, for example, announced it will offer limited indemnity and assume responsibility for the potential legal risks where customers receive copyright challenges through using generative AI products like Duet AI. Although it’s not yet clear how Copyright Shield will operate, OpenAI states in a recent blog: “we will now step in and defend our customers.” 

What Does This Mean For Your Business? 

OpenAI’s work with the other big tech companies and its general launch of ChatGPT a year ago have established it as the major player in the new and rapidly growing generative AI market. Building on the introduction of GPT-4 and the rapid monetisation of its services through its business-focused Plus and Enterprise subscriptions, these latest updates see OpenAI making the shift from AI model developer to platform, i.e. with GPTs and the GPT Store.

What’s exciting and useful about GPTs is that they don’t require any coding skills, thereby democratising generative AI app creation and providing an easy way for businesses to create tools that can help them to save time and money, boost their productivity, improve their service, and much more. The addition of the GPT Store idea allows OpenAI to establish itself as a major go-to platform for AI apps, thereby competing with the likes of Google and Apple in a way. The Store could also provide a great opportunity for developers to monetise their GPTs as well as perhaps being a threat to consultancies and developers already creating custom AI services on behalf of paying clients.

The more powerful GPT-4 Turbo with its more up-to-date outputs, plus the removal of the need to switch between different models, are also likely to be features valued by businesses wanting easier, faster, and more productive ways to use ChatGPT. Furthermore, the Copyright Shield idea is likely to improve user confidence while enabling OpenAI to compete with Google and Microsoft, which have already announced their own versions of it.

All in all, in the new and fast-moving generative AI market, these new upgrades see OpenAI ratcheting things up a notch, adding value, making serious competitive and customer retention efforts, showing its ambitions to move to platform status and greater monetisation, and further establishing itself as a major force in generative AI. For business users, these changes provide more opportunities to easily introduce customised and value-adding AI to any aspect of their business.

Tech News : Musk Launches Preview of Grok AI Chatbot

Elon Musk’s ‘xAI’ company has launched the preview of ‘Grok’, a new and rebellious AI chatbot that’s modelled after the Hitchhiker’s Guide to the Galaxy.


Grok is Musk’s answer to OpenAI’s ChatGPT and Google’s Bard. Musk was a co-founder of OpenAI, the company behind ChatGPT, before stepping down from its board in 2018, and his new xAI company is staffed with former Google DeepMind, Microsoft, and other top AI research personnel.

Truth Seeking

Back in July, Musk said that his new xAI company would “understand the true nature of the universe” and would be an alternative to other popular AI companies that are “biased.” Musk said that xAI’s product would, therefore, serve as a “maximum truth-seeking AI that tries to understand the nature of the universe”, would be “maximally curious” rather than having morality programmed into it, and in an earlier Tweet he warned of the “danger of training AI to be woke – in other words, lie”. This stance ties in with Musk’s vision for X (Twitter) as a platform of free speech. For example, there has been some criticism of X recently re-instating the accounts of far-right influencers Katie Hopkins and Tommy Robinson.

The Grok Difference 

It is against this backdrop that Grok’s introduction has been announced. The key differences between Grok and competing AI chatbots such as ChatGPT and Bard are:

Grok has real-time knowledge of the world via its training on the X platform (and probably on Oracle’s cloud). Other chatbots have only been trained on information up to certain points in the past (GPT-3.5 up to September 2021, and GPT-4 up to April 2023) and (until very recently) needed plugins to access more current information. Back in April, Musk angrily accused Microsoft of training its AI programs through the ‘illegal’ use of Twitter data.

Also, in keeping with Musk’s ‘free speech’ stance and focus on ‘truth’ rather than ‘woke,’ X says that Grok will answer “spicy questions that are rejected by most other AI systems.”

Musk says “Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humour!”

Some commentators have described its ability to use ‘sarcasm’ in its answers.


Despite Musk’s early involvement with OpenAI, X was essentially beaten to the generative AI chatbot market by OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Copilot, and Meta’s AI chatbot announcements. Musk has been busy fighting to create revenue streams for X other than advertising, trying to counter criticism of X under his leadership, advancing his new company xAI, and planning to turn X into a super app, and after venting his frustration with competitors he is keen to get his own differentiated generative AI product out there in order to compete. In fact, Grok will first be included as part of the X Premium+ membership, as a way to add value and help justify the subscription fee (all adding to the much-needed new X revenue stream).

Also, Musk’s concern over the threats posed by unchecked and unregulated AI, which led him to be one of the famous open letter signatories calling for a six-month moratorium on the development of AI systems more capable than OpenAI’s GPT-4 (which may also have bought him more development time of his own), may have been a motivator for him to create a more positive and less threatening version of AI.

For example, Musk suggests his broader reasons for creating Grok include “building AI tools that maximally benefit all of humanity,” and empowering users with “AI tools, subject to the law.” Also, Musk says the ultimate goal for AI tools like Grok is to “assist in the pursuit of understanding.”


Musk’s xAI says that the prototype release of Grok is just the first step for a chatbot that is initially being offered to a limited number of users in the United States to try out. Those wanting to try it following testing can join a Grok waitlist. Grok is ultimately intended to be offered as part of the X Premium+ membership.

When and How Much? 

When Grok is out of beta, and since it will be incorporated into the X Premium+ subscription, the price will be $16 per month (less than ChatGPT Plus’s $20 per month).

What Does This Mean For Your Business? 

Grok is the latest in a line of generative AI chatbot/AI assistant releases from the big tech companies. It’s the first big release from Elon Musk’s new xAI company, which is predominantly staffed by people from competing AI companies.

For Musk, therefore, it’s a way to compete in the new and rapidly evolving generative AI market, establish xAI as a significant player, benefit from synergies and add value to the troubled X while making the much-needed X Premium+ subscription more attractive (a new, non-advertising revenue stream). It’s also a way to put an AI assistant in place, ready for the proposed expansion of X to becoming a ‘super app’.

Crucially, Grok is a way to differentiate Musk’s late offering in the generative AI marketplace – a chatbot that’s a representation of the X brand and of Musk’s own public persona, and of his vision for AI. For other AI companies, it’s a threat but may not yet be seen as a major one (i.e. only being in beta, needing more training, and being confined within X membership).  For users, who may be already familiar and relatively happy with ChatGPT, and who may not be tempted by X Premium+ membership and yet another subscription for the same thing (in tight economic times), it remains to be seen how much of a lure a more novel, (possibly right-wing) chatbot style will be.

Tech News : New Wearable Smartphone AND Virtual Assistant

Humane has announced the release of its Apple-style AI-powered badge/pin wearable that combines a smartphone with virtual assistant capabilities for when you’re on the move.


Humane, a startup founded by ex-Apple design and engineering leaders Imran Chaudhri and Bethany Bongiorno, has announced the launch of its $699 (and five years in the making) AI-powered pin (a badge-style wearable), which it developed using $230 million of funding from investors including Salesforce CEO Marc Benioff.

AI Pin

The “first-of-its-kind” and very simply named ‘Ai Pin’, available to order in the US from November 16th, is a magnetically attached, small, round-cornered square wearable, with more than a hint of Apple styling, and a camera (like a phone camera or bodycam) in the top corner. It comes in colours described as Eclipse, Equinox, and Lunar (black, silver, or white to us) and has a two-piece design, consisting of the main computer and a battery booster.

Created Through Unique Collaborations 

Humane says AI Pin’s development is the result of “unique collaborations with Microsoft and OpenAI,” which give it access to some of the world’s most powerful AI models and platforms, thereby providing the foundation for new capabilities to be added as the technology evolves.

Cosmos & AI Bus 

Humane says its ‘Cosmos’ operating system blends “intelligent technologies with intuitive interaction and advanced security” and that its new “AI Bus” software framework is what “brings AI Pin to life.” The AI Bus software is used to connect the user to the right AI experience or service instantly, thereby removing the need to download, manage, or launch apps.

What Can The AI Pin Do? 

The main difference between this and other wearables isn’t just how it’s worn (on the body, as a badge), but the fact that it incorporates a dedicated Qualcomm® AI Engine (AI assistant) and uses a “laser projection system” instead of a display, which projects an interface onto your hand, enabling you to use the projection as a touchscreen! It can also identify objects in the real world and apply digital imagery to them. Other key features and capabilities include:

– It operates by natural speech (via an “AI Mic”) or by using the intuitive touchpad, by holding up objects to it, by using gestures, or by interacting with the “Laser Ink Display projected onto your palm.” 

– AI-powered messaging enables you to craft messages in your tone of voice.

– Its ‘Catch Me Up’ function sorts through inbox noise.

– Humane’s partnership with TIDAL* enables the AI Pin to deliver AI-driven music experiences.

– The AI-powered photographer helps you capture and recall important memories.

– It can translate foreign languages and can support your nutrition goals by identifying food using computer vision.

– Its “perpetual power system” means you can hot-swap the battery booster on-the-go, thereby ensuring uninterrupted usage and all-day battery life.

– The ultra-wide RGB camera, depth sensor and motion sensors, allow AI Pin to “see the world as you see it.” 

– Humane says the Personic speaker creates a “bubble of sound,” offering both intimacy and volume as needed.

– AI Pin can also pair with headphones via Bluetooth although it is a standalone device that doesn’t need to be paired with a smartphone or other companion device. Interestingly, Humane plans to provide its own MVNO (mobile virtual network operator) wireless service for AI Pin, connected by its exclusive U.S. partner, T-Mobile.

– Its own cloud-based central hub (which AI Pin instantly connects to), allowing access to your data, including photos, videos, and notes.

Humane says that as the device and platform evolves with future updates, so will the possibilities it unlocks.

Privacy Features

Humane has highlighted the main privacy features of AI Pin as being:

– The AI Pin only activates upon user engagement and doesn’t use ‘wake words’ (like other AI assistants), thereby ensuring it is not always listening or recording.

– The AI Pin has a “Trust Light” which indicates when any sensors are active and is managed via a dedicated privacy chip.

– If compromised, AI Pin will shut down completely and require professional service from Humane.

– Upon purchasing the AI Pin, users are invited to onboard via a privacy-protected portal, allowing the device to tailor its services to individual preferences.

– It has a phone number (it connects to a mobile network), plus it supports international roaming, GPS, Wi-Fi, and Bluetooth.

Concerns and Criticism

While the device sounds like a step forward for wearables, some of the concerns and criticism around the AI Pin include:

– Just because Humane has the faith of investors and the expertise of leading AI specialists, it doesn’t mean that the AI Pin will be a success. The (still relatively new) AI wearables field is littered with companies and products that have been wide of the mark, such as Magic Leap’s AR headset (which failed despite having AT&T, Google, and Alibaba Group as investors) and Google Clips, a body-worn, hands-free smart camera that also failed.

– The hype about and huge investment in Humane with no real product to speak of until now (after five years).

– Rumours and allegations that Humane’s founders, Bongiorno and Chaudhri, didn’t leave Apple on good terms due to (allegedly) taking most of the credit for work done by a larger team.

– Concerns that, in the near future, Humane could charge additional fees for “capacity” (as indicated by Chaudhri) although services like unlimited web searches via Ai Mic and unlimited media storage on Microsoft’s cloud are free.

– Concerns that the camera on the front of the badge and how it is used may not comply with data privacy laws in other countries (e.g. like smart glasses). In the case of AI Pin, although the microphone and camera aren’t always on, the camera is still visible and could prompt objections and problems if worn publicly, e.g. issues over consent.

– Other much more powerful and more well-known tech companies and brands such as Apple (where Humane’s founders came from) are already a long way ahead in the wearables market.

– Concerns that, as with smart glasses, the high price tag and limited understanding of consumer use cases could make the device most popular with businesses, potentially limiting the scope of the market for it.

– Reported criticisms about the quality of the photos taken with its camera, and the lack of partnerships with social media companies to enable instant uploading of photos to favourite social networks, e.g. Instagram.

What Does This Mean For Your Business?

Humane’s new AI Pin’s styling displays the Apple origins of the company’s founders, which may have some positive rub-off value, and the AI Pin can rightly claim to be a new kind of wearable. It clearly has the considerable backing of investors and was developed through collaborations with Microsoft and OpenAI, both of which suggest and inspire confidence in its ground-breaking potential.

The AI Pin’s projected touchscreen, AI incorporation, multiple ways to operate it, and its multiple functions and potential uses also make it a promising and different product and alternative to smart glasses and other wearables. That said, Humane is up against some tough competition in the wearables market from major tech competitors that have their own considerable AI investments and products and already lead the wearables market. The relatively high price, coupled with understandable concerns that people may not like the idea of appearing to be filmed by what looks like a bodycam (without their consent), doubts over whether the impressive projected display is enough of a draw for anyone beyond certain businesses and tech early adopters, and the many examples of failed wearables, all mean it remains to be seen how much long-term interest there will be in the AI Pin.

Tech Insight : UK’s AI Safety Summit : Some Key Takeaways

Following the UK government hosting the first major global summit on AI safety at historic Bletchley Park, we look at some of the key outcomes and comments.


The UK hosted the first major global AI Safety Summit on 1 and 2 November at Bletchley Park, the historic centre of the UK’s WWII code-breaking operation, where the father of modern computer science, Alan Turing, worked. The summit brought together international governments (of 28 countries plus the EU), leading AI companies, civil society groups, and research experts.

The aims of the summit were to develop a shared understanding of the risks of AI, especially at the frontier of development, to discuss how these can be mitigated through internationally coordinated action, and to explore the opportunities that safe AI may bring.

Some notable attendees included Elon Musk, OpenAI CEO Sam Altman, UK Prime Minister Rishi Sunak, US vice president Kamala Harris, EU Commission president Ursula von der Leyen, and Wu Zhaohui, Chinese vice minister of science and tech.

Key Points 

The two-day summit, which involved individual speakers, panel discussions, and group meetings covered many aspects of AI safety. Some of the key points to take away include:

– UK Prime Minister Rishi Sunak announced in his opening speech that he and US Vice President Kamala Harris had already decided that the US and UK will establish world-leading AI Safety Institutes to test the most advanced frontier AI models. Mr Sunak said the UK’s Safety Institute will develop its evaluations process in time to assess the next generation of models before they are deployed next year.

– Days before the summit (thereby setting part of its agenda), US President Joe Biden issued an executive order requiring tech firms to submit test results for powerful AI systems to the US government prior to their public release. At the summit, UK tech secretary Michelle Donelan made the point that this may not be surprising, since most of the main AI companies are based in the US.

– The U.S. and China, two countries often in opposition, agreed to find global consensus on how to tackle some of the complex questions about AI, such as how to develop it safely and regulate it.

– In a much-publicised interview with UK Prime Minister Rishi Sunak, 𝕏’s Elon Musk described AI as “the most disruptive force in history” and said that “there will come a point where no job is needed”. Mr Musk added: “You can have a job if you wanted to have a job for personal satisfaction. But the AI would be able to do everything.” As a result, he said: “One of the challenges in the future will be how do we find meaning in life.” It was also noted by some that Mr Musk had been using his X platform to mock politicians at the AI summit ahead of his headlining interview with the UK Prime Minister. Mr Musk’s comments were perhaps not surprising given that he was one of the many signatories to an open letter earlier in the year calling for a moratorium on the development of AI more advanced than OpenAI’s GPT-4 software. That said, Mr Musk has just announced the launch of a new AI chatbot called ‘Grok’, a rival to ChatGPT and Bard, which has real-time knowledge of the world via the 𝕏 platform, and which Mr Musk says has been “designed to answer questions with a bit of wit and has a rebellious streak.” 

– As highlighted by Ian Hogarth, chair of the UK government’s £100m Frontier AI Taskforce, “there’s a wide range of beliefs” about the severity of the most serious risks posed by AI, such as the catastrophic risks of technology outstripping human ability to safeguard society (the existential risk). As such, despite the summit, the idea that AI could wipe out humanity remains a divisive issue. For example, on the first day of the summit, Meta’s president of global affairs and former UK Deputy Prime Minister, Nick Clegg, said that AI was caught in a “great hype cycle” with fears about it being overplayed.

– Different countries are moving at different speeds with regards to the regulatory process around AI. For example, the EU started talking about AI four years ago and is now close to passing an AI act, whereas other countries are still some way from this point.


Although the narrative around the summit was that it was a great global opportunity and a step in the right direction, some commentators have criticised it as a missed opportunity for excluding workers and trade unions, and for simply being an event dominated by the already dominant big tech companies.

What Does This Mean For Your Business? 

The speed at which AI technology is moving (mostly ahead of regulation), its opportunities and threats (which some believe to be potentially catastrophic), and the fact that no real global framework for co-operation in exploring and controlling AI yet exists made this summit (and future ones) inevitable and necessary.

Although it involved representatives from many countries, to some extent it was overshadowed in the media by the dominant personality representing the technology companies, namely Elon Musk. The summit highlighted divided opinions on the extent of the risks posed by AI but did appear to achieve some potentially important results, such as establishing AI Safety Institutes, plus the US agreeing with China on something for a change. That said, although much focus has been put on the risks posed by AI, it’s worth noting that for the big tech companies, many of whose representatives were there, AI is something they’re heavily invested in as the next major source of revenue and as a way to compete with each other, and that governments also have commercial, as well as political, interests in AI.

It’s also worth noting critics’ concerns that the summit was really a meeting of the already dominant tech companies and governments and not workers, many of whom may be most directly affected by continuing AI developments. With each week, it seems, there’s a new AI development, and whether concerns are over-hyped (as Nick Clegg suggests) or fully justified, nobody really knows as yet.

Many would agree, however, that countries getting together to focus on the issues, understand the subject and its implications, and agree on measures that could mitigate risks and maximise the opportunities and benefits of AI going forward is positive and to be expected at this point.

Security Stop Press : ChatGPT Release Linked To Massive Phishing Surge

Threat detection technology company SlashNext has reported that in the 12 months that ChatGPT’s been publicly available, the number of phishing emails has jumped 1,265 per cent, with credential phishing, a common first step in data breaches, seeing a 967 per cent increase.

SlashNext’s State of Phishing 2023 report notes that cybercriminals may have been leveraging LLM chatbots like ChatGPT to help write more convincing phishing emails and to launch highly targeted phishing attacks. Generative AI chatbots may also have lowered the barriers for any bad actors wanting to launch such campaigns (i.e. by giving less skilled cyber criminals the tools to run more complex phishing attacks).

Businesses can safeguard against phishing attacks by taking measures such as educating employees to recognise fraudulent communications, enforcing strong password policies, using MFA, keeping software up-to-date and installing anti-phishing tools, and by having an effective incident response plan to mitigate damage from breaches.
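To make the "anti-phishing tools" point concrete, below is a minimal, illustrative sketch of the kind of rule-based indicator scoring such tools build on. The phrase list, scoring weights, and `phishing_score` function are all hypothetical examples invented here; production tools (including SlashNext's) use far richer signals such as ML classifiers, URL reputation, and sender history.

```python
import re

# Hypothetical urgency / credential-harvesting phrases often seen in phishing.
URGENT_PHRASES = ["verify your account", "urgent action required",
                  "password will expire", "click here immediately"]

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Return a crude risk score: the more indicators matched, the higher."""
    score = 0
    text = (subject + " " + body).lower()

    # 1. Urgency language designed to rush the recipient into clicking.
    score += sum(phrase in text for phrase in URGENT_PHRASES)

    # 2. Display name claims a brand that doesn't appear in the sender domain,
    #    e.g. "PayPal Support" <alerts@evil-domain.com>.
    m = re.match(r'"?([^"<]+?)"?\s*<[^@]+@([^>]+)>', sender)
    if m:
        display, domain = m.group(1).strip().lower(), m.group(2).lower()
        if display and display.split()[0] not in domain:
            score += 2

    # 3. A raw IP address used in a link instead of a hostname.
    if re.search(r'https?://\d{1,3}(\.\d{1,3}){3}', body):
        score += 2

    return score
```

A mail gateway would typically quarantine or flag messages whose score exceeds a tuned threshold; the point is that each cheap heuristic adds a little signal, and real products layer many of them.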