Tech News : AI Improving Coding & Data Sorting

Google DeepMind’s AlphaDev artificial intelligence (AI) system has discovered new algorithms that outperform those written by humans and could lead to faster, better programs.

Discovering Enhanced Algorithms 

In a recent research paper, Google DeepMind outlined how AlphaDev uses ‘reinforcement learning’ to discover enhanced computer science algorithms that surpass those honed by scientists and engineers over decades.

Why? 

Google’s AI research organisation, DeepMind, says that with a digital society driving increasing demand for computation and energy use, and with improvements in hardware alone having kept pace so far, microchips are approaching their physical limits. This means it’s now critical to improve the code that runs on them to make computing more powerful and sustainable going forward.

What Is Reinforcement Learning?

Reinforcement learning is a subfield of AI where an agent learns by interacting with an environment to maximise cumulative rewards. The agent observes the current state, takes actions, and receives feedback to improve its decision-making over time. A ‘policy’ (a mapping from states to actions) guides the agent’s action selection. Learning is iterative: the agent observes the state, takes an action, receives a reward, and updates the policy using algorithms like Q-learning or policy gradients.

Reinforcement learning is applied in game playing, robotics, recommendation systems, finance, and healthcare, achieving impressive results such as training agents to play games at superhuman levels and enabling autonomous systems to learn complex tasks independently.
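The observe-act-reward-update loop described above can be sketched with tabular Q-learning on a toy problem. The five-state ‘corridor’ environment below is purely illustrative (it is not the environment used by AlphaDev):

```python
import random

# Toy 'corridor' environment: states 0..4, start at state 0, reward 1.0 for
# reaching state 4. Actions move left (-1) or right (+1), clamped to the grid.
def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    n_states, actions = 5, (-1, +1)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)   # take the action
            r = 1.0 if s2 == n_states - 1 else 0.0  # observe the reward
            # Q-learning update: nudge Q towards reward + discounted future value.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2
    return Q
```

After training, the greedy policy (picking the action with the highest Q-value in each state) consistently moves right, towards the reward.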

A Faster Algorithm For ‘Sorting’ Discovered 

In computing, ‘sorting’ is a method for ordering data, with sorting algorithms underpinning everything from ranking online search results and social posts to how data is processed on computers and phones.

The recent research paper revealed that, after being tasked with developing a new way to sort short sequences of numbers in the popular coding language C++, AlphaDev used reinforcement learning to uncover a faster sorting algorithm, beating well-established sorting algorithms.

In essence, AlphaDev improved how the structure of a working assembly program is represented to the AI, allowing its reward system to narrow down the possibilities more effectively and making the search better and faster.

C++ Running Faster, & More 

This discovery has led to C++ running faster and to improved algorithms for sorting and other basic tasks like hashing. For example, AlphaDev’s new C++ sorting algorithms are 1.7 per cent more efficient than previous methods for sorting long sequences of numbers, and up to 70 per cent faster for five-item sequences.
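Routines for sorting a small, fixed number of items typically use a fixed sequence of compare-and-swap steps (a ‘sorting network’), and it was this kind of short, branchless code that AlphaDev optimised at the assembly level. The sketch below is an illustrative three-element sorting network in Python, not AlphaDev’s actual output:

```python
def sort3(a, b, c):
    # Fixed sequence of compare-and-swap steps (a 3-element sorting network);
    # min/max stand in for the branchless instructions used at assembly level.
    a, b = min(a, b), max(a, b)  # compare-exchange positions 0 and 1
    a, c = min(a, c), max(a, c)  # compare-exchange positions 0 and 2
    b, c = min(b, c), max(b, c)  # compare-exchange positions 1 and 2
    return a, b, c
```

Because the sequence of comparisons is fixed regardless of the input, such routines can be made very fast; AlphaDev’s contribution was finding shorter instruction sequences for exactly these small cases.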

When scaled up, this could transform how we program computers and could impact all aspects of our increasingly digital society.

Added To The Main C++ Library 

The improved algorithms have been open sourced and added to the main C++ library, which is used by millions of developers and companies in many industries around the world. DeepMind says it’s the first change to this part of the sorting library in over a decade, and the first time an algorithm designed through reinforcement learning has been added to it.

What Does This Mean For Your Business? 

With so many AI applications in so many industries using sorting algorithms, and with AlphaDev’s improved algorithms being open sourced, the discovery is already making an impact by speeding up many apps and programs. DeepMind believes that many improvements exist at a lower level that may be difficult to discover in a higher-level coding language, and sees this as just the beginning and an important stepping stone for using AI to optimise the world’s code, one algorithm at a time. This means that AlphaDev’s reinforcement learning approach could point to many more small algorithm improvements to come, quickly put into the hands of developers through open sourcing, thereby benefitting businesses, industries, and users around the world.

Tech Insight : Google Deleting Dormant Data

In this insight we look at Google’s updated inactive account policy whereby Google accounts not used for 2 years could be deleted, meaning the loss of important emails, photos, data and more.

Gmail, YouTube, & Google Photos Accounts

The inactive account deletions are part of a policy change for Google’s products and will apply to personal accounts (not business accounts) and their contents, i.e. content within the workspace including Gmail, Docs, Drive, Meet, Calendar, YouTube, and Google Photos.

Why?

The main reason for deleting inactive accounts is to improve security. For example, older dormant accounts tend to rely on old or re-used passwords that may have been compromised and, Google says, are ten times less likely to have two-factor authentication (2FA) set up on them. This means they’re more vulnerable to hacking and, when compromised, could be used for other malicious activity, e.g. identity theft or sending spam. Also, the move is a step towards aligning Google’s own policies with “industry standards” around retention and account deletion, and a way to limit how long Google retains users’ personal information.

When? 

Google says it will begin a slow rollout from 1 December and will give “plenty of notice”, i.e. multiple notifications over months to the account email address and the recovery email (if there is one linked to the account).

How Can You Stop Your Old Account From Being Deleted?

To keep a Google account ‘active’ so it doesn’t end up being deleted under the policy, users should sign in at least once every 2 years and take certain actions such as:

– Reading or sending an email.

– Using Google Drive.

– Watching a YouTube video.

– Downloading an app on the Google Play Store.

– Using Google Search.

– Using Sign in with Google to sign in to a third-party app or service.

Google Photos, Subscriptions & YouTube Videos

Users who have subscriptions set up through their Google account (e.g. to Google One, a news publication, or an app) won’t have their accounts deleted, as this constitutes activity. As for YouTube videos, Google says it currently has no plans to delete accounts with YouTube videos.

Google says that to retain Google Photos, users should sign in every 2 years to show activity and avoid any deletion.

Take A Backup

To avoid any issues, in addition to providing a recovery email address (to receive notifications), Google is encouraging users to take a backup of their account anyway, e.g. using its ‘Takeout’ feature. Users can also use Google’s ‘Inactive Account Manager’ to tell Google in advance what should happen to their account if it’s inactive for a chosen period of up to 18 months.

What Does This Mean For Your Business? 

Although the policy doesn’t apply to business accounts, many businesses use free personal Google accounts for business, and may have valued and important photos, archived emails, data and more stored in one or several Google accounts. The good news is that the policy doesn’t come into force until December, so there’s time to revisit old accounts and show that they’re active, e.g. by simply sending an email from them. Google has also made it clear that there’ll be many reminders along the way, although these will only really be useful if a recovery email address has been set up. Google account users can, of course, choose to make a backup of their important data and files. It makes sense, and is understandable, that Google would want to pursue this policy from both a security and a privacy (how long it holds on to user data) standpoint.

Tech Insight : The Impact of Generative AI On Datacentres

Generative AI tools like ChatGPT plus the rapid and revolutionary growth of AI are changing the face of most industries and generating dire warnings about the future, but what about the effects on datacentres?

Data Centres And Their Importance 

Data centres are the specialised facilities that house a large number of computer servers and networking equipment, serving as centralised locations where businesses and organisations store, manage, process, and distribute their digital data. These facilities are designed to provide a secure, controlled environment for storing and managing vast amounts of data in the cloud.

In our digital, cloud computing business world, data centres therefore play a crucial role in supporting the many industries and services that rely on large-scale data processing and storage, and are utilised by organisations ranging from small businesses to multinational corporations, cloud service providers, internet companies, government agencies, and research institutions.

The Impacts of Generative AI

There are a number of ways that generative AI is impacting data centres. These include:

– The need for more data centres. Generative AI applications require significant computational resources, including servers, GPUs, and data storage devices. As the adoption of generative AI grows, data centres will need to invest in and expand their infrastructure to accommodate the increased demand for processing power and storage capacity, and this will change the data centre landscape. For example, greater investment in (and greater numbers of) data centres will be needed. It’s been noted that AI platforms with massive data-crunching requirements, like ChatGPT, couldn’t continue to operate without Microsoft’s (soon-to-be-updated) Azure cloud platform. This has led to Microsoft building a new 750K SF hyperscale data centre campus near Quincy, WA, to house three 250K SF server farms on land costing $9.2M. Presumably, with more data centres there will also need to be greater efforts and investment to reduce and offset their carbon emissions.

– Greater power consumption and more cooling needed. Generative AI models are computationally intensive and consume substantial amounts of power. Data centres also have backup power sources to ensure a smooth supply, such as uninterruptible power supplies (UPS) and generators. With more use of generative AI, data centres will need to ensure they have sufficient power supply and cooling infrastructure to support the energy demands of generative AI applications. This could mean that data centres will now need to improve power supplies to cope with the demands of generative AI by conducting power capacity planning, upgrading infrastructure, implementing redundancy and backup systems, optimising power distribution efficiency, integrating renewable energy sources, implementing power monitoring and management systems, and collaborating with power suppliers. These measures could enhance power capacity, reliability, efficiency, and sustainability. More data centres may also need to be built with their own power plants (like Microsoft did in Dublin in 2017).

In terms of the greater need for cooling, i.e. to improve cooling capacity for generative AI in data centres, strategies include optimising airflow management, adopting advanced cooling technologies like liquid cooling, implementing intelligent monitoring systems, utilising computational fluid dynamics simulations, exploring innovative architectural designs, and leveraging AI algorithms for cooling control optimisation. These measures could all enhance airflow efficiency, prevent hotspots, improve heat dissipation, proactively adjust cooling parameters, inform cooling infrastructure design, and dynamically adapt to workload demands to meet the cooling challenges posed by generative AI.

– The need for scalability and flexibility. Generative AI models often require distributed computing and parallel processing to handle the complexity of training and inference tasks. Data centres therefore need to provide scalable and flexible infrastructure that can efficiently handle the workload and accommodate the growth of generative AI applications. Data centres will, therefore, need to be able to support generative AI applications through means such as:

– Virtualisation for dynamic resource allocation.
– High-performance Computing (HPC) clusters for computational power.
– Distributed storage systems for large datasets.
– Enhanced network infrastructure for increased data transfer.
– Edge computing for reduced latency and real-time processing.
– Containerisation platforms for flexible deployment and resource management.

– Data storage and retrieval. Generative AI models require extensive amounts of training data, which must be stored and accessed efficiently. Data centres now need to be able to optimise their data storage and retrieval systems to handle large datasets and enable high-throughput training of AI models.

– Security and privacy. Generative AI introduces new security and privacy challenges. Data centres must now be able to ensure the protection of sensitive data used in training and inferencing processes. Additionally, they need to address potential vulnerabilities associated with generative AI, such as the generation of realistic but malicious content or the potential for data leakage. Generative AI also poses cybersecurity challenges, as it can be used to create vast quantities of believable phishing emails or generate code with security vulnerabilities. Rather than relying on manual verification alone, greater dependency on skilled workers and smart software may be necessary to address these security risks effectively.

– Customisation and integration. Generative AI models often require customisation and integration into existing workflows and applications. This means that data centres need to provide the necessary tools and support for organisations to effectively integrate generative AI into their systems and leverage its capabilities.

– Skillset requirements. Managing and maintaining generative AI infrastructure requires specialised skills and data centres will need to invest in training their personnel and/or attracting professionals with expertise in AI technologies to effectively operate and optimise the infrastructure supporting generative AI.

– Optimisation for AI workloads. The rise of generative AI also means that data centres need to find ways to optimise their operations and infrastructure to cater to the specific requirements of AI workloads. This includes considerations for power efficiency, cooling systems, network bandwidth, and storage architectures that are tailored to the demands of generative AI applications.

– Uncertain infrastructure requirements. The power consumption and hardware requirements of the increasing use of generative AI applications are yet to be fully understood and this means that the impact on software and hardware remains uncertain, and the scale of infrastructure needed to support generative AI is still not clear. The implications of this for data centres are, for example:

– The specific power and hardware requirements of generative AI applications are not fully understood, which makes it challenging for data centres to accurately plan and allocate resources.
– The impact of generative AI on software and hardware is still unclear which makes it difficult for data centres to determine the necessary upgrades or modifications to support these applications.
– Without a clear understanding of the demands of generative AI, data centres cannot accurately estimate the scale of infrastructure required, potentially leading to under-provisioning or over-provisioning of resources.

– The need for flexibility and adaptability. Data centres must now be prepared to adjust their infrastructure dynamically to accommodate the evolving requirements of generative AI applications as more information becomes available.

AI Helping AI 

Ironically, data centres could use AI itself to help optimise their operations and infrastructure. For example, through:

– Predictive maintenance. AI analysing sensor data to detect equipment failures, minimising downtime.

– Energy efficiency. AI optimising power usage, cooling, and workload placement, reducing energy waste.

– Workload Optimisation. AI maximising performance by analysing workload patterns and allocating resources efficiently.

– Anomaly Detection. AI monitoring system metrics, identifying abnormal patterns, and flagging security or performance issues.

– Capacity Planning. AI analysing data to predict resource demands, optimising infrastructure expansion.

– Dynamic Resource Allocation. AI dynamically scaling computing resources, storage, and network capacity based on workload demands.
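To give a flavour of the anomaly-detection idea above, even a simple statistical baseline can flag unusual server metrics. The z-score sketch below is purely illustrative; real data centre deployments use far more sophisticated learned models:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag indices of metric readings (e.g. CPU temperature samples) that
    deviate from the mean by more than `threshold` standard deviations."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    if sd == 0:
        return []  # a perfectly flat signal has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / sd > threshold]
```

For example, twenty readings around 50 followed by a spike to 99 would flag only the spike, which could then be surfaced to operators for investigation.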

What Does This Mean For Your Business? 

Overall, while generative AI offers opportunities for increased efficiency and productivity for businesses, it also poses several challenges related to infrastructure, trust, security, and compliance. In our digital society and cloud-based business world, data centres play a crucial role in supporting industries, businesses, services and, as such, whole economies, so how quickly and effectively data centres adapt to the challenges posed by AI (or not) could affect all businesses going forward.

As a data centre operator or a business relying on data centres for smooth operations, the impact of generative AI on data centres presents both opportunities and challenges. On the one hand, the increased demand for processing power and storage capacity necessitates investments in infrastructure expansion and upgrades, providing potential business opportunities for data centre operators. It may lead to the establishment of more data centres and the need for greater efforts to reduce their carbon footprint.

However, this growth in generative AI also brings challenges that need to be addressed. Data centres must ensure sufficient power supply and cooling infrastructure to support the energy demands and heat dissipation requirements of generative AI applications. This may require capacity planning, infrastructure upgrades, integration of renewable energy sources, and the adoption of advanced cooling technologies. It also presents huge challenges in terms of providing the necessary capacity in a way that minimises carbon emissions and meets environmental targets.

Additionally, with the rise of generative AI, data centres now need to consider scalability, flexibility, security, and privacy implications. They must provide the necessary infrastructure and tools for businesses to integrate generative AI into their workflows and applications securely. Skillset requirements also come into play, as personnel need to be trained in AI technologies to effectively operate and optimise the data centre infrastructure.

Overall, understanding and addressing the implications of generative AI on data centres is crucial for both data centre operators and businesses relying on these facilities. By adapting to the evolving demands of generative AI and investing in optimised infrastructure and pursuing innovation, data centre operators can provide reliable and efficient services to businesses, ensuring seamless operations and unlocking the potential of generative AI for various industries.

Tech News : 20 NHS Trusts Shared Personal Data With Facebook

An Observer investigation has reported uncovering evidence that 20 NHS Trusts have been collecting data about patients’ medical conditions and sharing it with Facebook.

Using A Covert Tracking Tool 

The newspaper’s investigation found that over several years, the trusts have been using the Meta Pixel analytics tool to collect patient browsing data on their websites. The kind of data collected includes page views, buttons clicked, and keywords searched. This data can be matched with IP addresses and Facebook accounts to identify individuals and reveal their personal medical details.

Sharing this collected personal data with Facebook’s parent company, Meta, without the consent of NHS Trust website users (albeit unknowingly) is a breach of privacy rights and potentially unlawful under data protection law (UK GDPR).

Meta Pixel 

The Meta Pixel analytics tool is a piece of code that enables website owners to track visitor activity on their website, identify Facebook and Instagram users, and see how they interacted with the site’s content. This information can then be used for targeted advertising.
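As a rough illustration, a site owner or auditor can check whether a page embeds the pixel by searching its HTML source for the pixel’s well-known loader script and init call. The sketch below is a simple heuristic, not an official detection method, and the marker strings are assumptions based on the publicly documented pixel snippet:

```python
def has_meta_pixel(html: str) -> bool:
    """Heuristic check for the Meta Pixel in a page's HTML source."""
    markers = (
        "connect.facebook.net/en_US/fbevents.js",  # the pixel's loader script
        "fbq('init'",                              # the standard init call
        'fbq("init"',
    )
    return any(marker in html for marker in markers)
```

Running a check like this across an organisation’s own pages is one low-effort way to audit which sites are sharing visitor data with third-party trackers.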

17 Have Now Removed It 

It’s been reported that since the details of the newspaper’s investigation were made public, 17 of the 20 NHS trusts identified as using the Meta Pixel tool have now removed it from their websites, with 8 of those trusts issuing an apology.

The UK’s Information Commissioner’s Office (ICO) is now reported to have begun an investigation into the activities of the trusts.

UK GDPR 

Under the UK Data Protection Act 2018 and the EU General Data Protection Regulation (GDPR), organisations processing personal data must obtain lawful grounds for processing, which typically includes obtaining user consent. Personal data is any information that can directly or indirectly identify an individual.

An NHS trust using an analytics tool like Meta Pixel on its website to collect and share personal data without obtaining user consent could potentially be acting unlawfully, and both the NHS trust and the analytics tool provider (Meta) have responsibilities under data protection laws.

The GDPR and the UK Data Protection Act require organisations to provide transparent information to individuals about the collection and use of their personal data, including the purposes of processing and any third parties with whom the data is shared. Individuals must be given the opportunity to provide informed consent before their personal data is collected, unless another lawful basis for processing applies.

What Does This Mean For Your Business? 

The recent revelation that 20 NHS Trusts have been collecting and sharing personal data with Facebook through the use of the Meta Pixel analytics tool raises important lessons for businesses regarding their data protection practices. The Trusts’ actions, conducted without user consent, appear to represent a breach of privacy rights and potentially violate data protection laws, including the UK Data Protection Act 2018 and GDPR.

The Meta Pixel analytics tool, although widely used to measure advertising effectiveness, can have unintended consequences when it comes to personal data, such as medical data, and data privacy. The amount of information shared through the tool is often underestimated, and the implications for the NHS trusts could be severe. As several online commentators have pointed out, the trusts may have known little about how the Meta Pixel tool works and therefore collected and shared user data unwittingly; however, ignorance is unlikely to stand up as an excuse.

It is, of course, encouraging that in response to the investigation, 17 out of the 20 identified NHS Trusts have at least removed the Meta Pixel tool from their websites, with some going on to issue apologies. To avoid similar privacy breaches and maintain the trust of customers, businesses should take immediate action.

Examples of how businesses could ensure their data protection compliance as regards their website and any tools used include establishing a cross-functional data protection team with members from legal, technology, and marketing, backed by senior management. They could also conduct a thorough analysis of all data collected and transferred by their websites and apps, identify the data necessary for their operations, and ensure that legal grounds (such as consent) are in place for collecting and processing that data. For most smaller businesses, it’s a case of remembering to stay on top of data protection matters, checking what any tools are collecting, and keeping the importance of consent top-of-mind.

The implications for Meta of the newspaper’s report and the impending ICO investigation are significant as well. The incident highlights the need for greater transparency and understanding of the tools and services offered by companies like Meta, especially when it comes to sensitive topics and personal data. Privacy concerns arise when information from browsing habits is shared with social media platforms. Meta must address these concerns and ensure that the data collected through its tools is handled in accordance with data protection laws and user consent.

Overall, this case emphasises the importance of data protection compliance, informed consent, and transparency in the handling of personal data. Businesses must prioritise privacy and data security to maintain customer trust and avoid legal consequences.