Tech News : Glassdoor Site Shows Real Users’ Names

It’s been reported that Glassdoor (the website that allows current employees to anonymously review their employer) posted users’ real names to their profiles without their consent.

What Is Glassdoor? 

Glassdoor is a website that allows current and former employees to review their companies and management anonymously. Founded in 2007 in Mill Valley, California, the platform is used for obtaining insights into company cultures, salaries, and interview processes. Its aim is to foster workplace transparency, enabling job seekers to make better-informed decisions about their careers by learning from the experiences of others.


Unfortunately for Glassdoor, one user’s account (taken from her personal blog) of a recent negative experience with the platform, following her contacting Glassdoor’s customer support, has been widely reported in the press.

Added Name To Profile 

After the user (reportedly named Monica) sent an email to Glassdoor’s customer support that showed her full name in the ‘From’ line, Monica alleges that she discovered Glassdoor had updated her profile by adding her real name (pulled from the email) and location, without her consent.

Users Leaving Glassdoor 

It’s been reported that the experience of Monica, identified as a Midwest-based software professional who joined Glassdoor 10 years ago, has now led to other members leaving the platform over fears they could also be ‘outed’. Not only could this be regarded as a breach of trust in the anonymity and privacy that users signed up with, but it could also have adverse employment consequences through employer retaliation.

Following reports of Monica’s experience in the media, it’s been reported that another user, identified as Josh Simmons, has also said Glassdoor added information about him to his personal profile, again without his consent.

Had To Delete Account 

It’s been reported that although Glassdoor’s privacy policy states “If we have collected and processed your personal information with your consent, then you can withdraw your consent at any time,” Monica claims that she was not given this option, that Glassdoor stored her name, and that her only recommended option to remove her details was to delete her account altogether. Deleting her account also meant deleting her reviews.

Shared With Fishbowl

One of the complications of the case appears to be the fact that Glassdoor was integrated with Fishbowl (an app for work-related discussions) three years ago. This has led to:

– Glassdoor now saying that it “may update your Profile with information we obtain from third parties. We may also use personal data you provide to us via your resume(s) or our other services.”

– Glassdoor staff reportedly consulting publicly-available sources to verify information that is then used to update users’ Glassdoor accounts, in order to improve the accuracy of information for Fishbowl users.

– Glassdoor updating users’ profiles without notifying the user, e.g. if inaccuracies are found, because of its commitment to keeping Fishbowl’s information accurate.

What Does Glassdoor Say? 

Glassdoor has issued a statement saying: “Glassdoor is committed to providing a platform for people to share their opinions and experiences about their jobs and companies, anonymously – without fear of intimidation or retaliation. User reviews on Glassdoor have always and will always be anonymous.” 

What Does This Mean For Your Business? 

A large part of the value of Glassdoor is the fact that users are willing to share their ‘honest’ views about their employers and managers. One of the key reasons they feel able to do so is the anonymity they had during registration, and the assumption that this would remain and that their privacy would be protected. However, if reports are to be believed, integration between Fishbowl and Glassdoor has led to policy changes and a new approach whereby a user’s details can be updated, allegedly without consent, using information obtained from other sources, potentially meaning that users could be unmasked to their employers.

The widely publicised stories of this allegedly happening appear likely to have damaged a key source of Glassdoor’s value – the trust that users have that their anonymity will be protected. This may explain why users are reportedly leaving the platform. This story illustrates how important matters of data protection are to businesses and individuals, particularly around privacy and consent, plus how risks can increase for users if aspects of data protection are damaged and changed.

The consequences of putting users in what could be described as a difficult and risky position could potentially be severe and/or long-lasting damage for Glassdoor’s business and reputation.

Tech News : Google Files Fiasco

An issue with Google’s cloud that locked some Google Drive for desktop users out of some of their files from the last six months has led to some angry comments being left on Google’s community support site.

Reported Over A Week Ago 

The issue was first reported by a user (known as ‘Yeonjoong’) back on 22 November. The user describes the issue (on the Google Drive Help page) as:

“The Drive literally went back to condition in May 2023. Data from May until today disappeared, and the folder structure went back to status in May. Google Drive activity doesn’t show any changes (only show activity that was in May). No files were deleted manually, so no files in Trash. I never sync or shared my files and drive to anyone, I used the drive locally.”

Still Investigating

At the time of writing this article, Google says it is still investigating the issue. Also, the user who first reported it claims that none of the fixes suggested by Google so far have worked.

What Are Other Users Saying? 

Posts from other affected users have highlighted issues such as:

– Losing access to important files from recent months.

– Questioning the dependability of the Drive service.

– Asking for a full explanation of what had happened and to be told when their data will be restored.

– Fears that important data may have been altered or permanently lost, e.g. by clicking on the disconnect button (which users have now been told not to do, but which some claim they were originally advised to do).

– Reports of stress and worry, with some users threatening legal action.

What Does Google Advise? 

At present, Google’s advice to affected Drive for desktop users in relation to what Google is calling the “Drive for desktop (v84.0.0.0) – Sync Issue” is:

– Not to click “Disconnect account” within Drive for desktop.

– Not to delete or move the app data folder: Windows: %USERPROFILE%\AppData\Local\Google\DriveFS or macOS: ~/Library/Application Support/Google/DriveFS.

– (Optional) If users have room on their hard drive, to make a copy of the app data folder.
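Google’s optional suggestion to copy the app data folder could be done by hand or scripted. Below is a minimal Python sketch (illustrative only; the folder paths are the ones quoted in the advice above, and the destination is whatever the user chooses):

```python
import os
import platform
import shutil
from pathlib import Path

def drivefs_app_data_dir() -> Path:
    """Return the Drive for desktop app data folder for the current OS,
    using the paths quoted in Google's advice above."""
    if platform.system() == "Windows":
        return Path(os.environ["USERPROFILE"]) / "AppData" / "Local" / "Google" / "DriveFS"
    # macOS path (also used as the fallback here)
    return Path.home() / "Library" / "Application Support" / "Google" / "DriveFS"

def backup_drivefs(dest: str) -> None:
    """Copy the app data folder to `dest`, leaving the original untouched."""
    shutil.copytree(drivefs_app_data_dir(), dest, dirs_exist_ok=True)
```

The key point, as per Google’s advice, is that the copy is made without moving or deleting the original folder.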

Google Cloud Vulnerabilities 

This latest story comes hot on the heels of Bitdefender researchers reporting recently that they’d discovered vulnerabilities in Google Workspace and Google Cloud Platform which, after first compromising the local machine, could allow threat actors to extend their activities to a “chain reaction” network-wide breach, potentially leading to ransomware attacks or data exfiltration.

What Does This Mean For Your Business? 

The main concern for businesses directly impacted by this issue revolves around data integrity and reliability. Losing access to recent files can disrupt ongoing projects, delay deadlines, potentially lead to financial losses, and make users very angry and frustrated (as the comments on Google’s help page show). This incident highlights the importance of having a robust backup strategy that doesn’t rely solely on cloud services. Those businesses that have been directly affected, or those that may have been spooked by this story, may now want to reassess their data management policies, considering additional local or multi-cloud backups for critical data.

For the wider base of Google Cloud users, this incident could be seen as a kind of cautionary tale that underscores the need for vigilance in cloud data management, and the importance of understanding the potential risks associated with cloud storage solutions. Really (time and resources permitting), users should try to stay informed about best practices for data safety and be proactive in implementing them. This could include regular audits of data access, backup strategies, and staying informed about service updates and potential vulnerabilities. That said, it seems fair for most businesses that are paying Google for aspects of its cloud service to at least expect to be able to access their files when they need them and, if there is a problem, to expect Google to sort it out quite quickly (not a week or so later). Also, many users may have been even more frustrated by a possible lack of communication on Google’s side about the issue, e.g. at least an estimate of when they could expect it to be fixed and regular updates on the situation.

For Google, this lockout issue could obviously be damaging to its reputation as a reliable cloud service provider. In the competitive cloud market, reliability and trust are paramount. Google will need to not only address the current issue swiftly and transparently but also take proactive steps to prevent similar occurrences in the future. This could involve investing more in their infrastructure, enhancing their communication protocols during crises, and possibly reviewing their update and deployment strategies to ensure minimal disruption to users. The way Google handles this situation (which many affected users haven’t been too impressed with so far) could have lasting effects on its market position and user trust.

Although this issue has posed some challenges to affected businesses, it could also be seen on reflection as providing valuable lessons for all stakeholders in the cloud services arena. Lessons include understanding more fully what their customers value the most and making more of a commitment to matters of reliability, transparency, and communication.

Tech News : AI Improving Coding & Data Sorting

Google DeepMind’s AlphaDev artificial intelligence (AI) system has produced new algorithms that can write code better than humans and could create better programs.

Discovering Enhanced Algorithms 

In a recent research paper, Google’s DeepMind has outlined how AlphaDev is able to use ‘reinforcement learning’ to discover enhanced computer science algorithms which surpass those honed by scientists and engineers over decades.


Google’s AI research organisation, DeepMind, says that with a digital society driving increasing demand for computation and energy use, and with improvements in hardware having been relied on so far to keep pace, microchips are approaching their physical limits. This means it’s now critical to improve the code that runs on them to make computing more powerful and sustainable going forward.

What Is Reinforcement Learning?

Reinforcement learning is a subfield of AI in which an agent learns by interacting with an environment to maximise cumulative rewards. The agent observes the current state, takes actions, and receives feedback to improve its decision-making over time. A policy maps observed states to the agent’s choice of action. The iterative process involves observing the state, taking actions, receiving rewards, and updating the policy using algorithms like Q-learning or policy gradients.
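To make that loop concrete, here is a minimal sketch of tabular Q-learning in Python on a toy five-state “corridor” environment. The environment, hyperparameters, and names are invented for illustration and are not taken from DeepMind’s work:

```python
import random

# Tiny "corridor" environment: states 0..4; reaching state 4 earns reward 1.
# Actions: 0 = left, 1 = right. Purely illustrative hyperparameters.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move left/right within the corridor; reward 1 on reaching the goal."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=2000, seed=0):
    """Q-learning update: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection (explore vs exploit).
            a = random.randrange(2) if random.random() < EPSILON \
                else max((0, 1), key=lambda act: q[s][act])
            s2, r, done = step(s, a)
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After training, "right" should score higher than "left" in every non-goal state.
```

Here the learned table itself is the policy; AlphaDev applies the same principle at vastly larger scale, with a neural network in place of the table and assembly instructions as the actions.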

Reinforcement learning is applied in game playing, robotics, recommendation systems, finance, and healthcare, achieving impressive results such as training agents to play games at superhuman levels and enabling autonomous systems to learn complex tasks independently.

A Faster Algorithm For ‘Sorting’ Discovered 

In computing, ‘sorting’ is a method for ordering data, with sorting algorithms underpinning everything from ranking online search results and social posts to how data is processed on computers and phones.

The recent research paper revealed that, after being tasked with developing a new way to sort short sequences of numbers in the popular coding language C++, AlphaDev used reinforcement learning to uncover a faster algorithm for sorting, beating well-established sorting algorithms.

In essence, AlphaDev has been able to improve how the structure of a working assembly program is represented in the AI’s code, thereby allowing its reward system to better narrow down the possibilities, making the AI better and faster.

C++ Running Faster, & More 

This discovery has led to C++ running faster and improved algorithms for sorting and other basic tasks like hashing. For example, AlphaDev’s new C++ sorting algorithms are 1.7 per cent more efficient than previous methods for sorting long sequences of numbers, and up to 70 per cent faster for five-item sequences.
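The gains for very short, fixed-length sequences come from optimising small ‘sorting networks’: fixed series of compare-exchange operations with no loops or data-dependent branches. As a rough illustration of the idea only (in Python, far above the assembly level at which AlphaDev actually works), a sorting network for exactly three items looks like this:

```python
def compare_exchange(a, i, j):
    """Put the smaller of a[i], a[j] at position i and the larger at j."""
    lo, hi = min(a[i], a[j]), max(a[i], a[j])
    a[i], a[j] = lo, hi

def sort3(a):
    """Fixed sorting network for exactly three items: three
    compare-exchanges, no loops or data-dependent branches."""
    compare_exchange(a, 0, 1)
    compare_exchange(a, 1, 2)
    compare_exchange(a, 0, 1)
    return a

print(sort3([3, 1, 2]))  # [1, 2, 3]
```

Because the sequence of operations is fixed regardless of the input, networks like this are easy for compilers and CPUs to optimise, which is why shaving even a single instruction off them matters at the scale at which these library routines are called.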

When scaled up, this could transform how we program computers and could impact all aspects of our increasingly digital society.

Added To The Main C++ Library 

The improved algorithms have been open sourced and added to the main C++ library, used by millions of developers and companies in many industries around the world, including on AI applications. DeepMind says it’s the first change to this part of the sorting library in over a decade, and the first time an algorithm designed through reinforcement learning has been added to this library.

What Does This Mean For Your Business? 

With so many AI applications in so many industries using sorting algorithms, and with AlphaDev’s improved algorithms being open sourced, AlphaDev’s discovery is already making an impact by speeding up many apps and programs. DeepMind believes that many improvements exist at a lower level that may be difficult to discover in a higher-level coding language, and sees this as just the beginning and an important stepping stone for using AI to optimise the world’s code, one algorithm at a time. This means that AlphaDev’s reinforcement learning approach could point to many more small algorithm improvements to come, which could be quickly put into the hands of developers through open sourcing, thereby benefitting businesses, industries, and users around the world.

Tech Insight : Google Deleting Dormant Data

In this insight we look at Google’s updated inactive account policy whereby Google accounts not used for 2 years could be deleted, meaning the loss of important emails, photos, data and more.

Gmail, YouTube, & Google Photos Accounts

The inactive account deletions are part of a policy change for Google’s products and will apply to personal accounts (not business accounts) and their contents, i.e. content within the workspace including Gmail, Docs, Drive, Meet, Calendar, YouTube, and Google Photos.


The main reason for deletions of inactive accounts is to improve security. For example, older dormant accounts tend to rely on old or re-used passwords that may have been compromised and, Google says, are ten times less likely to have two-factor authentication (2FA) set up on them. This means they’re more vulnerable to hacking and, when compromised, could be used for other malicious activity, e.g. identity theft or sending spam. Also, the move by Google is a step towards aligning its own policies with “industry standards” around retention and account deletion, and is a way to help limit the amount of time Google retains users’ personal information.


Google says it will begin a slow rollout from 1 December and will give “plenty of notice”, i.e. multiple notifications over months to the account email address and the recovery email (if there is one linked to the account).

How Can You Stop Your Old Account From Being Deleted?

To keep a Google account ‘active’ so it doesn’t end up being deleted as part of the policy, users should sign in at least once every 2 years and take certain actions such as:

– Reading or sending an email.

– Using Google Drive.

– Watching a YouTube video.

– Downloading an app on the Google Play Store.

– Using Google Search.

– Using Sign in with Google to sign in to a third-party app or service.

Google Photos, Subscriptions & YouTube Videos

Users who have subscriptions set up through their Google account (e.g. to Google One, a news publication, or an app) won’t have their account deleted, as this constitutes activity. As for YouTube videos, Google says it currently has no plans to delete accounts with YouTube videos.

Google says that to retain Google Photos, users should sign in every 2 years to show activity and avoid any deletion.

Take A Backup

To avoid any issues, in addition to providing a recovery email address (to receive notifications), Google is encouraging users to take a backup of their account anyway, e.g. using its ‘Takeout’ feature. Users can also try using Google’s ‘Inactive Account Manager’ to tell Google in advance what should happen to their account if it’s inactive for up to 18 months.

What Does This Mean For Your Business? 

Although the policy doesn’t apply to business accounts, many businesses use free personal Google accounts for business, and may have valuable and important photos, archived emails, data and more stored in one or several Google accounts. The good news is that the policy doesn’t come into force until December, so there’s time to revisit old accounts and indicate that they’re active, e.g. by simply sending an email from them. Google’s also made it clear that there’ll be many reminders along the way, which will only really be useful if a recovery email address has been set up. Google account users can, of course, choose to make a backup of their important data and files. It makes sense and is understandable that Google would want to pursue this policy from both a security and a privacy (how long they hold on to user data) standpoint.

Tech Insight : The Impact of Generative AI On Datacentres

Generative AI tools like ChatGPT plus the rapid and revolutionary growth of AI are changing the face of most industries and generating dire warnings about the future, but what about the effects on datacentres?

Data Centres And Their Importance 

Data centres are the specialised facilities that house a large number of computer servers and networking equipment, serving as centralised locations where businesses and organisations store, manage, process, and distribute their digital data. These facilities are designed to provide a secure, controlled environment for storing and managing vast amounts of data in the cloud.

In our digital, cloud computing business world, data centres therefore play a crucial role in supporting various industries and services that rely on large-scale data processing and storage, and are utilised by organisations ranging from small businesses to multinational corporations, cloud service providers, internet companies, government agencies, and research institutions.

The Impacts of Generative AI

There are a number of ways that generative AI is impacting data centres. These include:

– The need for more data centres. Generative AI applications require significant computational resources, including servers, GPUs, and data storage devices. As the adoption of generative AI grows, data centres will need to invest in and expand their infrastructure to accommodate the increased demand for processing power and storage capacity, and this will change the data centre landscape. For example, greater investment in (and greater numbers of) data centres will be needed. It’s been noted, for example, that AI platforms like ChatGPT, with their massive data-crunching requirements, couldn’t continue to operate without using Microsoft’s (soon-to-be-updated) Azure cloud platform. This has led to Microsoft now building a new 750K SF hyperscale data centre campus near Quincy, WA, to house three 250K SF server farms on land costing $9.2M. Presumably, with more data centres there will also need to be greater efforts and investment to reduce and offset their carbon emissions.

– Greater power consumption and more cooling needed. Generative AI models are computationally intensive and consume substantial amounts of power. Data centres also have backup power sources to ensure a smooth supply, such as uninterruptible power supplies (UPS) and generators. With more use of generative AI, data centres will need to ensure they have sufficient power supply and cooling infrastructure to support the energy demands of generative AI applications. This could mean that data centres will now need to improve power supplies to cope with the demands of generative AI by conducting power capacity planning, upgrading infrastructure, implementing redundancy and backup systems, optimising power distribution efficiency, integrating renewable energy sources, implementing power monitoring and management systems, and collaborating with power suppliers. These measures could enhance power capacity, reliability, efficiency, and sustainability. More data centres may also need to be built with their own power plants (like Microsoft did in Dublin in 2017).

In terms of the greater need for cooling, i.e. to improve cooling capacity for generative AI in data centres, strategies include optimising airflow management, adopting advanced cooling technologies like liquid cooling, implementing intelligent monitoring systems, utilising computational fluid dynamics simulations, exploring innovative architectural designs, and leveraging AI algorithms for cooling control optimisation. These measures could all enhance airflow efficiency, prevent hotspots, improve heat dissipation, proactively adjust cooling parameters, inform cooling infrastructure design, and dynamically adapt to workload demands to meet the cooling challenges posed by generative AI.

– The need for scalability and flexibility. Generative AI models often require distributed computing and parallel processing to handle the complexity of training and inference tasks. Data centres therefore need to provide scalable and flexible infrastructure that can efficiently handle the workload and accommodate the growth of generative AI applications. Data centres will, therefore, need to be able to support generative AI applications through means such as:

– Virtualisation for dynamic resource allocation.
– High-performance Computing (HPC) clusters for computational power.
– Distributed storage systems for large datasets.
– Enhanced network infrastructure for increased data transfer.
– Edge computing for reduced latency and real-time processing.
– Containerisation platforms for flexible deployment and resource management.

– Data storage and retrieval. Generative AI models require extensive amounts of training data, which must be stored and accessed efficiently. Data centres now need to be able to optimise their data storage and retrieval systems to handle large datasets and enable high-throughput training of AI models.

– Security and privacy. Generative AI introduces new security and privacy challenges. Data centres must now be able to ensure the protection of sensitive data used in training and inferencing processes. Additionally, they need to address potential vulnerabilities associated with generative AI, such as the generation of realistic but malicious content or the potential for data leakage. Generative AI also poses cybersecurity challenges, as it can be used to create vast quantities of believable phishing emails or generate code with security vulnerabilities. Rather than relying on verification alone, increased dependency on skilled workers and smart software may be necessary to address these security risks effectively.

– Customisation and integration. Generative AI models often require customisation and integration into existing workflows and applications. This means that data centres need to provide the necessary tools and support for organisations to effectively integrate generative AI into their systems and leverage its capabilities.

– Skillset requirements. Managing and maintaining generative AI infrastructure requires specialised skills and data centres will need to invest in training their personnel and/or attracting professionals with expertise in AI technologies to effectively operate and optimise the infrastructure supporting generative AI.

– Optimisation for AI workloads. The rise of generative AI also means that data centres need to find ways to optimise their operations and infrastructure to cater to the specific requirements of AI workloads. This includes considerations for power efficiency, cooling systems, network bandwidth, and storage architectures that are tailored to the demands of generative AI applications.

– Uncertain infrastructure requirements. The power consumption and hardware requirements of the increasing use of generative AI applications are yet to be fully understood and this means that the impact on software and hardware remains uncertain, and the scale of infrastructure needed to support generative AI is still not clear. The implications of this for data centres are, for example:

– The specific power and hardware requirements of generative AI applications are not fully understood, which makes it challenging for data centres to accurately plan and allocate resources.
– The impact of generative AI on software and hardware is still unclear, which makes it difficult for data centres to determine the necessary upgrades or modifications to support these applications.
– Without a clear understanding of the demands of generative AI, data centres cannot accurately estimate the scale of infrastructure required, potentially leading to under-provisioning or over-provisioning of resources.

– The need for flexibility and adaptability. Data centres must now be prepared to adjust their infrastructure dynamically to accommodate the evolving requirements of generative AI applications as more information becomes available.

AI Helping AI 

Ironically, data centres could use AI itself to help optimise their operations and infrastructure. For example, through:

– Predictive maintenance. AI analysing sensor data to detect equipment failures, minimising downtime.

– Energy efficiency. AI optimising power usage, cooling, and workload placement, reducing energy waste.

– Workload Optimisation. AI maximising performance by analysing workload patterns and allocating resources efficiently.

– Anomaly Detection. AI monitoring system metrics, identifying abnormal patterns, and flagging security or performance issues.

– Capacity Planning. AI analysing data to predict resource demands, optimising infrastructure expansion.

– Dynamic Resource Allocation. AI dynamically scaling computing resources, storage, and network capacity based on workload demands.

What Does This Mean For Your Business? 

Overall, while generative AI offers opportunities for increased efficiency and productivity for businesses, it also poses several challenges related to infrastructure, trust, security, and compliance. Data centres in our digital society and cloud-based business world play a crucial role in supporting industries, businesses, services and, as such, whole economies, so how well data centres adapt (or not) to the challenges posed by AI is something that could potentially affect all businesses going forward.

As a data centre operator or a business relying on data centres for smooth operations, the impact of generative AI on data centres presents both opportunities and challenges. On the one hand, the increased demand for processing power and storage capacity necessitates investments in infrastructure expansion and upgrades, providing potential business opportunities for data centre operators. It may lead to the establishment of more data centres and the need for greater efforts to reduce their carbon footprint.

However, this growth in generative AI also brings challenges that need to be addressed. Data centres must ensure sufficient power supply and cooling infrastructure to support the energy demands and heat dissipation requirements of generative AI applications. This may require capacity planning, infrastructure upgrades, integration of renewable energy sources, and the adoption of advanced cooling technologies. It also presents huge challenges in terms of trying to provide the necessary capacity in a way that minimises carbon emissions and meeting environmental targets.

Additionally, with the rise of generative AI, data centres now need to consider scalability, flexibility, security, and privacy implications. They must provide the necessary infrastructure and tools for businesses to integrate generative AI into their workflows and applications securely. Skillset requirements also come into play, as personnel need to be trained in AI technologies to effectively operate and optimise the data centre infrastructure.

Overall, understanding and addressing the implications of generative AI on data centres is crucial for both data centre operators and businesses relying on these facilities. By adapting to the evolving demands of generative AI and investing in optimised infrastructure and pursuing innovation, data centre operators can provide reliable and efficient services to businesses, ensuring seamless operations and unlocking the potential of generative AI for various industries.

Tech News : 20 NHS Trusts Shared Personal Data With Facebook

An Observer investigation has reported uncovering evidence that 20 NHS Trusts have been collecting data about patients’ medical conditions and sharing it with Facebook.

Using A Covert Tracking Tool 

The newspaper’s investigation found that over several years, the trusts have been using the Meta Pixel analytics tool to collect patient browsing data on their websites. The kind of data collected includes page views, buttons clicked, and keywords searched. This data can be matched with IP addresses and Facebook accounts to identify individuals and reveal their personal medical details.

Sharing this collected personal data with Facebook’s parent company, albeit unknowingly, without the consent of NHS Trust website users is a breach of privacy rights and therefore potentially illegal (data protection/GDPR).

Meta Pixel 

The Meta Pixel analytics tool is a piece of code which enables website owners to track visitor activities on their website, helps identify Facebook and Instagram users, and shows how they interacted with the content on the site. This information can then be used for targeted advertising.
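As a simplified, hypothetical illustration of how such a tool works (this is not Meta’s actual code, endpoint, or parameter names), a tracking pixel typically fires a beacon request whose query string carries the address of the page being viewed, so a visit to a condition-specific page can end up in a third party’s logs alongside cookie and IP data:

```python
from urllib.parse import urlencode

def pixel_request_url(pixel_id: str, page_url: str, event: str = "PageView") -> str:
    """Build the kind of beacon URL a tracking pixel might request.
    The endpoint and parameter names here are invented for illustration."""
    params = {"id": pixel_id, "ev": event, "dl": page_url}  # dl = document location
    return "https://tracker.example.com/tr?" + urlencode(params)

# Even without a name attached, the page path itself can be sensitive.
url = pixel_request_url("123456", "https://trust.example.nhs.uk/services/sexual-health")
```

The privacy issue arises because the receiving server, not the website owner, decides how that page address is stored and matched against its own user records.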

17 Have Now Removed It 

It’s been reported that since the details of the newspaper’s investigation were made public, 17 of the 20 NHS trusts identified as using the Meta Pixel tool have now removed it from their website, with 8 of those trusts issuing an apology.

The UK’s Information Commissioner’s Office (ICO) is now reported to have begun an investigation into the activities of the trusts.


Under the UK Data Protection Act 2018 and the EU General Data Protection Regulation (GDPR), organisations processing personal data must obtain lawful grounds for processing, which typically includes obtaining user consent. Personal data is any information that can directly or indirectly identify an individual.

An NHS trust using an analytics tool like Meta Pixel on their website to collect and share personal data without obtaining user consent, could potentially be illegal and both the NHS trust and the analytics tool provider (Meta) have responsibilities under data protection laws.

The GDPR and the UK Data Protection Act require organisations to provide transparent information to individuals about the collection and use of their personal data, including the purposes of processing and any third parties with whom the data is shared. Individuals must be given the opportunity to provide informed consent before their personal data is collected, unless another lawful basis for processing applies.

What Does This Mean For Your Business? 

The recent revelation that 20 NHS Trusts have been collecting and sharing personal data with Facebook through the use of the Meta Pixel analytics tool raises important lessons for businesses regarding their data protection practices. The Trusts’ actions, conducted without user consent, appear to represent a breach of privacy rights and potentially violate data protection laws, including the UK Data Protection Act 2018 and GDPR.

The Meta Pixel analytics tool, although widely used as an advertising effectiveness measurement tool, can have unintended consequences when it comes to personal data, such as medical data, and data privacy. The amount of information shared through this tool is often underestimated, and the implications for the NHS trusts could be severe. As several online commentators have pointed out, the trusts may have known little about how the Meta Pixel tool works and, therefore, collected and shared user data unwittingly; however, ignorance is unlikely to stand up as an excuse.

It is, of course, encouraging that in response to the investigation, 17 out of the 20 identified NHS Trusts have at least removed the Meta Pixel tool from their websites, with some going on to issue apologies. To avoid similar privacy breaches and maintain the trust of customers, businesses should take immediate action.

Examples of how businesses could ensure their data protection compliance as regards their website and any tools used could include establishing a cross-functional data protection team with members from legal, technology, and marketing, and with the support of senior management. They could also conduct a thorough analysis of all data collected and transferred by websites and apps and identify the data necessary for their operations and ensure that legal grounds (such as consent) are in place for collecting and processing that data. For most smaller businesses it’s a case of remembering to stay on top of data protection matters, check what any tools are collecting and keep the importance of consent top-of-mind.

The implications for Meta of the newspaper’s report and the impending ICO investigation are significant as well. The incident highlights the need for greater transparency and understanding of the tools and services offered by companies like Meta, especially when it comes to sensitive topics and personal data. Privacy concerns arise when information from browsing habits is shared with social media platforms. Meta must address these concerns and ensure that the data collected through its tools is handled in accordance with data protection laws and user consent.

Overall, this case emphasises the importance of data protection compliance, informed consent, and transparency in the handling of personal data. Businesses must prioritise privacy and data security to maintain customer trust and avoid legal consequences.