Tech News : Work Starts On £790m UK Google Data Centre

Work has started on Google’s first UK data centre. The $1 billion (£790m) facility will add to Google’s 27 existing data centres worldwide and will support its move into AI.

Crucial Compute Capacity 

The data centre is being built on a 33-acre site at Waltham Cross, Hertfordshire. In addition to the construction and technical jobs the building work is expected to bring to the local community, Google says its investment in the data centre will deliver “crucial compute capacity to businesses across the UK, supporting AI innovation and helping to ensure reliable digital services to Google Cloud customers and Google users in the UK and abroad.”

Google says that its investment in the technical infrastructure needed to support innovation and tech-led growth in areas like AI-powered technologies is vital, hence the new data centre.

Off-Site Heat Recovery 

Google is also keen to highlight how the data centre’s carbon footprint will be minimised. For example, in addition to the company’s goal to run all its data centres and campuses completely on carbon-free energy (CFE) by 2030, it says the new data centre in Hertfordshire will “have provisions for off-site heat recovery”. 

Data centres produce large amounts of heat, so an off-site heat recovery system is a way to conserve energy while benefiting the local community: the heat generated by the data centre is captured and used in nearby homes and businesses. Google also says the data centre will have an air-based cooling system, presumably rather than a water-based one.
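To give a rough sense of scale, the short Python sketch below estimates how much heat such a system might recover, using entirely hypothetical figures (Google hasn’t published the IT load, cooling efficiency, or recovery rate for the Waltham Cross site). It rests on the simple fact that nearly all of the electricity a data centre draws is ultimately released as heat.

```python
# Rough, illustrative estimate of recoverable heat from a data centre.
# All figures below are hypothetical assumptions, not Google's published numbers.

it_load_mw = 40.0          # assumed average IT load of the facility (MW)
pue = 1.1                  # assumed Power Usage Effectiveness (total power / IT power)
recovery_efficiency = 0.5  # assumed share of waste heat actually captured off-site

total_power_mw = it_load_mw * pue   # total electrical draw
waste_heat_mw = total_power_mw      # virtually all of that electricity ends up as heat
recovered_heat_mw = waste_heat_mw * recovery_efficiency

# A typical UK home needs very roughly 1-1.5 kW of heat averaged over a year.
avg_home_demand_kw = 1.2
homes_heated = (recovered_heat_mw * 1000) / avg_home_demand_kw

print(f"Recovered heat: {recovered_heat_mw:.0f} MW, "
      f"roughly enough for {homes_heated:,.0f} homes (illustrative only)")
```

Even under these made-up assumptions, the point stands that tens of megawatts of otherwise wasted heat could, in principle, warm thousands of nearby homes.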

Part Of A Continued UK Investment 

Google has highlighted how the new data centre is part of its continued investment in and commitment to the UK which it says is “a key country for our business and a pioneering world leader in AI, technology and science.”

Other recent Google investments in the UK (in 2022) include:

– The $1bn purchase of its Central Saint Giles office in London’s West End.

– A 1 million sq. ft. office and local innovation hub in King’s Cross.

– The launch of an Accessibility Discovery Centre in London, aimed at boosting accessible tech in the UK.

Google is also keen to highlight its free digital skills training, offered across the UK since 2015, and the expansion of its Digital Garage training programme in the UK (including a new AI-focussed curriculum).

UK Government Pleased 

Prime Minister Sunak, who’s been keen to woo big tech companies to the UK to support its ambitions to be a major global tech centre, has welcomed Google’s $1 billion data centre investment as an endorsement of those ambitions. He also highlighted how such “foreign investment creates jobs and grows all regions of our economy and investments like this will help to drive growth in the decade ahead.”

Also, UK Chancellor of the Exchequer Jeremy Hunt has said that he is “delighted to see this investment from Google” and that it “reflects the success of the UK tech sector, which is now the third largest in the world after the US and China – worth over $1 trillion and double the size of anywhere else in Europe.”

What Does This Mean For Your Business? 

The growth of cloud computing, followed by the rapid growth of AI (which has a much bigger demand for computing power), plus the move by competitors into AI (Microsoft has announced an impending £2.5bn investment to expand data centres for AI across the UK), are key drivers for Google’s new UK data centre investment. The infrastructure is needed to support the AI which will in turn help boost productivity, creativity, and opportunities for UK businesses, and Google’s investment in the UK is good for job creation, boosting the economy, and bolstering the UK’s ambitions to be a tech centre.

However, Google is also reported to have been laying off many workers as it slims down to accommodate AI and, although the immediate community around Waltham Cross may benefit from some low-cost/free heat, there are other matters to bear in mind. For example, AI is an energy-hungry and thirsty technology and, although Google has an ambition to run its data centres on carbon-free energy (CFE) by 2030, the Waltham Cross data centre should be finished and running by 2025. Like other data centres, it will still require huge amounts of energy (though, being air-cooled, it shouldn’t also need large volumes of water), which is a matter that hasn’t been highlighted in the announcement about the investment so far. The impact on the local grid and environment, and the environmental impact of the build itself, may also be of concern.

That said, work is only just starting, more data centres are needed to fuel our AI-powered future, and there are no good alternatives to this kind of expansion as yet, so for UK businesses the investment and its benefits are being welcomed.

Tech News : UK ‘Passportless’ eGates

A recent Times report highlighted how Phil Douglas, director-general of the UK Border Force, aims to replace the UK’s physical passport-based entry system with an upgraded, frictionless, facial recognition-based eGates system.

Current eGates System 

The current eGates system that most UK travellers have experienced involves the use of facial recognition alongside a passport and automated gates. With this system, travellers must still queue before entering the automated gates and hold their physical passport against the machine’s reader while looking into a camera (a process that isn’t always successful). The current system relies on a match between data encrypted on the passport and the facial recognition camera image, and users of the system must be registered on a database.

The current eGates system can also only be used by travellers aged 10 and over who are citizens of the UK, EU, US, Canada, Australia, New Zealand, Singapore, and South Korea.
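For illustration only, the minimal Python sketch below shows the kind of template comparison described above: a face embedding taken from the gate’s camera is compared with a reference embedding derived from the photo stored on the passport chip, and the gate opens only if the similarity clears a threshold. The embedding size, threshold value, and decision logic here are assumptions made for the example, not details of the real Border Force system.

```python
import numpy as np

# Hypothetical illustration of eGate-style matching, not the real Border Force system.
# Assume both images have already been converted to fixed-length face embeddings
# by some face-recognition model (details of the real system are not public).

MATCH_THRESHOLD = 0.6  # assumed similarity threshold

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_decision(passport_embedding: np.ndarray, camera_embedding: np.ndarray) -> str:
    """Open the gate if the live camera image matches the passport photo closely enough."""
    score = cosine_similarity(passport_embedding, camera_embedding)
    if score >= MATCH_THRESHOLD:
        return f"MATCH ({score:.2f}): gate opens"
    return f"NO MATCH ({score:.2f}): refer to manual check"

# Example with random stand-in embeddings (a real system would use a trained model).
rng = np.random.default_rng(0)
chip_photo = rng.normal(size=128)
live_image = chip_photo + rng.normal(scale=0.1, size=128)  # same traveller, slight variation
print(gate_decision(chip_photo, live_image))
```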

Issues 

In addition to the queuing required and the fact that some users need several attempts, the current eGates system has other issues. For example, unsuccessful attempts (of which there are many) still require manual checks, while major outages have previously caused chaos at UK airports (in May and September 2023).

The Upgraded System 

The upgraded system highlighted by Mr Douglas in the recent Times report will mean that passengers can keep their physical passports in their pockets and be admitted to the UK just by looking into a camera linked to a centralised facial recognition system.

The benefits of the upgraded eGates should be less queuing (better for the airport and for travellers) plus a more ‘frictionless’ experience for travellers.

Already In Operation In Other Countries 

Much faster and more frictionless systems, like the upgraded version intended for the UK, are already in operation in countries like Dubai and Australia. It’s been reported that the Dubai ‘Smart Gates’ system uses facial recognition for 50 nationalities and can enable travellers to clear immigration procedures in as little as five seconds!

ETA

Speaking at the Airlines 2023 conference in November last year, Phil Douglas highlighted how the eGates changes are part of wider immigration process changes, including the incoming Electronic Travel Authorisation (ETA). The ETA scheme, which opened for applications last October, is a requirement for visitors from around the world who don’t need a visa for short stays in the UK but about whom the government would like to know more, and whom it wants to be able to refuse entry if they may pose a threat. It’s envisaged that the application-style scheme (which applies even to those people “airside” at Heathrow for two hours between international flights) could enable the UK Border Force to make decisions about admission much earlier, and perhaps refuse ETAs to those with a criminal history. Critics, however, have said that the scheme could damage UK airlines and tourism, particularly for Northern Ireland.

What Does This Mean For Your Business? 

For anyone who’s ever arrived home at a UK airport from a holiday or business trip, not having to fish out the passport after the flight, being able to avoid queues in arrivals for the eGates machines, then being able to just walk through in seconds sounds very attractive.

Avoiding the chaos of eGates outages is also likely to be very attractive to airports, passengers, airlines, and other stakeholders, although it does highlight the dangers of over-reliance on technology. For a stretched UK Border Force, technology that can cut queues and staff costs, remove reliance on physical passports, and reduce the opportunities for human error is also likely to be appealing. A system that allows travellers to complete immigration checks in seconds, like Dubai’s or Australia’s, may also be an image that the UK wants to project as a country positioning itself as a tech centre.

However, some may see a more sinister, rigid, less romantic side to travel. Having a purely biometrics-based immigration procedure, where your freedom to enter or leave is decided by whatever is recorded on a central database entry (and triggered by your face), is perhaps a more negative vision of the future. Police facial recognition trials, for example, have not always been accurate or unbiased, and coupled with upgraded eGates and the ETA scheme (unsurprising given that the government has prioritised immigration as a central issue), some may feel uneasy about a dystopian creep into travel and freedom.

For example, could ETAs and purely ‘smart’ borders mean that individuals whose central database details are marked with previous (perhaps minor) offences or other issues (e.g. social media posts) find themselves refused exit from, or entry into, countries? Could such a system be misused by governments?

Also, with the phasing-out of physical passports (and payments for renewals), and everything linked to a central database, could this open up a route to travel subscription payment systems? There are also security and privacy fears around a border/immigration database that holds so much personal information about people.

Distant-future fears aside, AI is now likely to be key to enhancing biometric systems and, whether we like it or not, such a system for borders is just one of many we will face going forward.

Sustainability-in-Tech : Map Shows UK Areas Under Water By 2050

An online map from non-profit climate science organisation ‘Climate Central’ shows which areas of the UK could be underwater due to climate change by 2050.

Sea Level Rise 

As highlighted in a 2021 report by Benjamin H. Strauss et al, a high-emissions scenario that raised global temperatures by 4°C could produce an 8.9m global sea level rise within a roughly 200-to-2,000-year envelope, which could submerge at least 50 major cities!

First Unveiled In 2020 

It was back in 2020 that Climate Central’s CoastalDEM digital elevation model, and warnings that sea levels may rise by between 2 and 7 feet by the end of the century, first illustrated how and why many UK coastal areas could end up submerged by 2050. It also highlighted the point that how high the sea level rises depends on how much warming pollution humanity dumps into the atmosphere.

The Latest 

Climate Central’s latest update to the model, highlighted in its new ‘Flooded Future: Global vulnerability to sea level rise worse than previously understood’ report, suggests that predictions have got worse. The report, which includes the map, explains the basic sea level rise cause, saying: “As humanity pollutes the atmosphere with greenhouse gases, the planet warms” and that “as it does so, ice sheets and glaciers melt and warming sea water expands, increasing the volume of the world’s oceans.” 

However, as indicated by the new report’s title, it goes on to explain, with the help of its flooding map, that it is now thought that even if ‘moderate’ reductions are made to the amount of human-made pollution, and with a ‘medium’ amount of ‘luck’ with weather events, vast swathes of the UK look likely to become submerged.

In fact, the report suggests that globally, by 2050, land that’s currently home to 300 million people will fall below the elevation of an average annual coastal flood and, by 2100, land now home to 200 million people could sit permanently below the high tide line. Based on the current CoastalDEM, Climate Central reports that around 10 million people currently live on land below the high-tide line.

What The Map Shows 

The updated map included in the report (which uses red markings for submerged areas) shows that, by 2050, the areas of the UK most likely to be affected by serious flooding and/or to be completely under water include:

– London’s River Thames area (flagged as a danger zone).

– The River Severn (in the South West) either side of the estuary from Taunton up to Tewkesbury, and back to Cardiff on the northern bank.

– Large areas of the Lincolnshire and Norfolk coast, into Cambridgeshire.

– Large areas around the Humber Estuary in the North of England.

The interactive map, entitled the “Coastal Risk Screening Tool”, can be viewed online here.

Challenges 

Some of the key challenges to accurately predicting submerged areas have been not knowing how much warming pollution will be dumped into the atmosphere and how quickly the land-based ice sheets in Greenland and, especially, Antarctica will destabilise (or are already destabilising). It’s also very difficult to accurately project where and when sea level rise could lead to increased or permanent flooding, and to compare sea level rise against land elevations, given that accurate elevation data is generally unavailable, inaccessible, and/or too expensive.
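Conceptually, the screening the map performs can be thought of as comparing a grid of land elevations against a projected local flood level and flagging the cells that fall below it. The toy Python sketch below illustrates that idea with made-up numbers; it is not Climate Central’s actual methodology, which also has to account for factors such as coastal defences, tides, and model uncertainty.

```python
import numpy as np

# Toy illustration of elevation-based flood screening (not Climate Central's method).
# Each cell holds land elevation in metres above current mean high tide (made-up values).
elevation_m = np.array([
    [0.3, 0.8, 1.9, 3.5],
    [0.1, 0.6, 1.2, 2.8],
    [-0.2, 0.4, 0.9, 2.1],
])

projected_rise_m = 0.5       # assumed local sea level rise by 2050
annual_flood_height_m = 0.7  # assumed height of a typical annual coastal flood

# A cell is "at risk" if its elevation falls below the projected flood level.
flood_level_m = projected_rise_m + annual_flood_height_m
at_risk = elevation_m < flood_level_m

print(f"Cells below the projected {flood_level_m:.1f} m flood level:")
print(at_risk.astype(int))   # 1 = flagged (would be shown in red), 0 = above the level
```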

What Does This Mean For Your Organisation? 

One of the many worrying aspects of global warming is the melting of the world’s ice caps and the expansion of warming oceans, which lead to rising sea levels and serious flooding. Having a map based on up-to-date data that shows where the worst flooding could occur will be a useful tool in terms of helping to raise awareness about the threat and its potential consequences, and in planning to help mitigate its effects where possible.

In the UK, the fact that the Thames is flagged by the map and reported as being a danger zone has caused alarm. The effects of flooding of the kind highlighted on Climate Central’s map could range from near-term increases in coastal flooding damaging infrastructure and crops to the permanent displacement of whole coastal communities. Although many coastal areas have existing coastal defences, the Climate Central map indicates that these may not be enough to deal with future sea levels, thereby giving a serious heads-up about the need to plan and act on next steps. For example, more adaptive and expensive measures may be needed, such as the construction of levees and other defences, or, in the worst cases, relocation to higher ground could be the only way to lessen some of the threats.

One of the key points of the report and the map is to make people understand that the amount by which the sea level will rise (and flooding will occur) depends upon how much greenhouse gas is dumped into the atmosphere by human activity. The conclusion, therefore, must be that changing our behaviour to minimise our carbon footprint and really focusing on meeting climate targets is the only way to minimise the damage to the planet, reduce global warming, and hopefully reduce the risk of disappearing beneath the waves. Although there was a pledge at the COP28 summit in Dubai last year to ‘transition away’ from the use of fossil fuels, many people are aware that the clock is really ticking on this most fundamental issue.

Tech Insight : UK’s AI Safety Summit : Some Key Takeaways

Following the UK government hosting the first major global summit on AI safety at historic Bletchley Park, we look at some of the key outcomes and comments.

Summit 

The UK hosted the first major global AI Safety Summit on 1 and 2 November at Bletchley Park, the historic centre of the UK’s WWII code-breaking operation, where the father of modern computer science, Alan Turing, worked. The summit brought together international governments (of 28 countries plus the EU), leading AI companies, civil society groups, and research experts.

The aims of the summit were to develop a shared understanding of the risks of AI, especially at the frontier of development, to discuss how those risks can be mitigated through internationally coordinated action, and to explore the opportunities that safe AI may bring.

Some notable attendees included Elon Musk, OpenAI CEO Sam Altman, UK Prime Minister Rishi Sunak, US Vice President Kamala Harris, EU Commission President Ursula von der Leyen, and Wu Zhaohui, China’s Vice Minister of Science and Technology.

Key Points 

The two-day summit, which involved individual speakers, panel discussions, and group meetings, covered many aspects of AI safety. Some of the key points to take away include:

– UK Prime Minister Rishi Sunak announced in his opening speech that he and US Vice President Kamala Harris had already decided that the US and UK will establish world-leading AI Safety Institutes to test the most advanced frontier AI models. Mr Sunak said the UK’s Safety Institute will develop its evaluations process in time to assess the next generation of models before they are deployed next year.

– Days before the summit (thereby setting part of the agenda for the summit), US President Joe Biden issued an executive order requiring tech firms to submit test results for powerful AI systems to the US government prior to their release to the public.  At the summit, in response to this, UK tech secretary, Michelle Donelan, made the point that this may not be surprising since most of the main AI companies are based in the US.

– The U.S. and China, two countries often in opposition, agreed to find global consensus on how to tackle some of the complex questions about AI, such as how to develop it safely and regulate it.

– In a much-publicised interview with UK Prime Minister Rishi Sunak, 𝕏’s Elon Musk described AI as “the most disruptive force in history” and said that “there will come a point where no job is needed”. Mr Musk added: “You can have a job if you wanted to have a job for personal satisfaction. But the AI would be able to do everything.”  Mr Musk said that as a result of this: “One of the challenges in the future will be how do we find meaning in life.” It was also noted by some that Mr Musk had been using his X platform to mock politicians at the AI summit ahead of his headlining interview with the UK Prime Minister. Mr Musk’s comments were perhaps not surprising given that he was one of the many signatories to an open letter earlier in the year calling for a moratorium on the development of AI more advanced than OpenAI’s GPT-4 software. That said, Mr Musk has just announced the launch of a new AI chatbot called ‘Grok’, a rival to ChatGPT and Bard, which has real-time knowledge of the world via the 𝕏 platform and which Mr Musk says has been “designed to answer questions with a bit of wit and has a rebellious streak.”

– As highlighted by Ian Hogarth, chair of the UK government’s £100m Frontier AI Taskforce, “there’s a wide range of beliefs” about the severity of the most serious risks posed by AI, such as the catastrophic risks of technology outstripping human ability to safeguard society (the existential risk). As such, despite the summit, the idea that AI could wipe out humanity remains a divisive issue. For example, on the first day of the summit, Meta’s president of global affairs and former UK Deputy Prime Minister, Nick Clegg, said that AI was caught in a “great hype cycle” with fears about it being overplayed.

– Different countries are moving at different speeds with regards to the regulatory process around AI. For example, the EU started talking about AI four years ago and is now close to passing an AI act, whereas other countries are still some way from this point.

Criticism 

Although the narrative around the summit was that it was a great global opportunity and a step in the right direction, some commentators have criticised the summit as a missed opportunity for excluding workers and trade unions, and for simply being an event dominated by the already dominant big tech companies.

What Does This Mean For Your Business? 

The speed at which AI technology is moving (mostly ahead of regulation), its opportunities and threats (which some believe could be catastrophic), and the fact that there has been no real global framework for co-operation in exploring and controlling AI made this summit (and future ones) inevitable and necessary.

Although it involved representatives from many countries, to some extent it was overshadowed in the media by the dominant personality representing technology companies, i.e. Elon Musk. The summit highlighted divided opinions on the extent of the risks posed by AI but did appear to achieve some potentially important results, such as establishing AI Safety Institutes, plus the US agreeing with China on something for a change. That said, although much focus has been put on the risks posed by AI, it’s worth noting that for the big tech companies, many of whose representatives were there, AI is something they’re heavily invested in as the next major source of revenue and as a way to compete with each other, and that governments also have commercial as well as political interests in AI.

It’s also worth noting critics’ concerns that the summit was really a meeting of the already dominant tech companies and governments and not workers, many of whom may be most directly affected by continuing AI developments. With each week, it seems, there’s a new AI development, and whether concerns are over-hyped (as Nick Clegg suggests) or fully justified, nobody really knows as yet.

Many would agree, however, that countries getting together to focus on the issues and understand the subject and its implications and agree on measures that could mitigate risks and maximise the opportunities and benefits of AI going forward is positive and to be expected at this point.

Tech News : UK Will Host World’s First AI Summit

During his recent visit to Washington in the US, UK Prime Minister Rishi Sunak announced that the UK will host the world’s first global summit on artificial intelligence (AI) later this year.

Focus On AI Safety 

The UK government says this first major global summit on AI safety will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor the most significant risks from AI.

Threat of Extinction 

Since ChatGPT became the fastest-growing app in history and people saw how ‘human-like’ generative AI appeared to be, much has been made of the idea that AI’s rapid growth could see it get ahead of our ability to control it, leading to it destroying and replacing us. For example, this fear has been fuelled by events like:

– In March, an open letter asking for a 6-month moratorium on labs training AI to make it more powerful than GPT-4, signed by notable tech leaders like Elon Musk, Steve Wozniak, and Tristan Harris.

– In May, Sam Altman, the CEO of OpenAI, signing the open letter from the San Francisco-based Center for AI Safety warning that AI poses a threat that should be treated with the same urgency as pandemics or nuclear war, and could result in human extinction. See the letter and signatories here: https://www.safe.ai/statement-on-ai-risk#open-letter

How? 

Current thinking about just how AI could wipe us all out within a couple of years, and the risks that AI poses to humanity, includes:

– The Erosion of Democracy: AI-produced deepfakes and other AI-generated misinformation resulting in the erosion of democracy.

– Weaponisation: AI systems being repurposed for destructive purposes, increasing the risk of political destabilisation and warfare. This includes using AI in cyberattacks, giving AI systems control over nuclear weapons, and the potential development of AI-driven chemical or biological weapons.

– Misinformation: AI-generated misinformation and persuasive content undermining collective decision-making, radicalising individuals, hindering societal progress, and eroding democracy. AI, for example, could be used to spread tailored disinformation campaigns at a large scale, including generating highly persuasive arguments that evoke strong emotional responses.

– Proxy Gaming: AI systems trained with flawed objectives could pursue their goals at the expense of individual and societal values. For example, recommender systems optimised for user engagement could prioritise clickbait content over well-being, leading to extreme beliefs and potential manipulation.

– Enfeeblement: The increasing reliance on AI for tasks previously performed by humans could lead to economic irrelevance and loss of self-governance. If AI systems automate many industries, humans may lack incentives to gain knowledge and skills, resulting in reduced control over the future and negative long-term outcomes.

– Value Lock-in: Powerful AI systems controlled by a few individuals or groups could entrench oppressive systems and propagate specific values. As AI becomes centralised in the hands of a select few, regimes could enforce narrow values through surveillance and censorship, making it difficult to overcome and redistribute power.

– Emergent Goals: AI systems could exhibit unexpected behaviour and develop new capabilities or objectives as they become more advanced. Unintended capabilities could be hazardous, and the pursuit of intra-system goals could overshadow the intended objectives, leading to misalignment with human values and potential risks.

– Deception: Powerful AI systems could engage in deception to achieve their goals more efficiently, undermining human control. Deceptive behaviour may provide strategic advantages and enable systems to bypass monitors, potentially leading to a loss of understanding and control over AI systems.

– Power-Seeking Behaviour: Companies and governments have incentives to create AI agents with broad capabilities, but these agents could seek power independently of human values. Power-seeking behaviour can lead to collusion, overpowering monitors, and pretending to be aligned, posing challenges in controlling AI systems and ensuring they act in accordance with human interests.

Previous Meetings About AI Safety

The UK Prime Minister has been involved in several meetings about how nations can come together to mitigate the potential threats posed by AI including:

– In May, meeting the CEOs of the three most advanced frontier AI labs, OpenAI, DeepMind and Anthropic, in Downing Street. The UK’s Secretary of State for Science, Innovation and Technology also hosted a roundtable with senior AI leaders.

– Discussing the issue with businesspeople, world leaders, and all members of the G7 at the Hiroshima Summit last month, where they agreed to aim for a shared approach.

Global Summit In The UK

The world’s first global summit about AI safety (announced by Mr Sunak) will be hosted in the UK this autumn. It will consider the risks of AI, including frontier systems, and will enable world leaders to discuss how these risks can be mitigated through internationally coordinated action. The summit will also provide a platform for countries to work together on further developing a shared approach to mitigating these risks and the work at the AI safety summit will build on recent discussions at the G7, OECD and Global Partnership on AI.

Prime Minister Sunak said of the summit, “No one country can do this alone. This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.” 

What Does This Mean For Your Business?

The speed at which ChatGPT and other AI tools have grown has outpaced proper assessment of risk, regulation, and any co-ordinated strategy for mitigating risks while maintaining the positive benefits and potential of AI. Frightening warnings and predictions from big tech leaders have also helped provide the motivation for countries to get together for serious talks about what to do next. The announcement of the world’s first global summit on AI safety, to be hosted by the UK, marks a significant step in addressing the risks posed by artificial intelligence, and could provide some kudos to the UK and help strengthen the idea that the UK is a major player in the tech industry.

The bringing together of key countries, leading tech companies, and researchers to agree on safety measures and to evaluate the most significant risks and threats associated with AI demonstrates a commitment to mitigating these risks through international coordination. The collective actions taken by the global community, including discussions at previous meetings and the upcoming summit, are a positive first step in governments catching up with (and getting a handle on) this most fast-moving of technologies.

It is important to remember that while AI poses challenges, it also offers numerous benefits for businesses, including improved efficiency, enhanced decision-making, and innovative solutions. Tools such as ChatGPT and image generators such as DALL-E have proven to be popular time-saving, cost-saving and value-adding tools. That said, AI image generators have raised challenges around copyright and consent for artists and visual creatives. Although there have been dire warnings about AI, these seem far removed from the practical benefits that AI is delivering for businesses, and striking a fair balance between harnessing the potential of AI and addressing its risks is crucial for ensuring a safe and beneficial future for all.