Featured Article : Google Deleting Millions Of Users’ Incognito Data

As part of a deal to resolve a class action lawsuit in the US dating back to 2020, Google has said it will delete the incognito mode search data of millions of users.

What Lawsuit? 

In June 2020 in the US, three Californians named Chasom Brown, Christopher Castillo, and Monique Trujillo (along with William Byatt of Florida and Jeremy Davis of Arkansas) brought a lawsuit against Google over its Incognito mode. They filed the class-action lawsuit on behalf of themselves and potentially millions of other Google users who believed their data was being collected by Google despite using Incognito mode for private browsing.

The plaintiffs accused Google of capturing data despite assurances that it would not, thereby misleading users about the privacy level provided by Incognito mode. For example, internal Google emails highlighted by the lawsuit appeared to show that people using Incognito mode were actually being tracked by Google to measure web traffic and sell ads.

The original lawsuit was seeking at least $5 billion in damages from Google.

What’s Been Happening? 

Since the lawsuit was originally filed, some of the main events of note between the plaintiffs and Google have included:

– Google attempting to have the lawsuit dismissed, arguing that it never promised complete privacy or non-collection of data in Incognito mode. At the time, Google pointed to the disclaimers presented to users when opening an Incognito tab, which stated that activity might still be visible to websites, web services, and employers or schools.

– A judge then rejected Google’s request to dismiss the case. The judge emphasised that Google didn’t explicitly inform users that it would collect data in the manner alleged by the plaintiffs. This decision meant that the lawsuit could again move forward.

– Finally, back in December last year, with the scheduled trial due to begin in February 2024, the lawyers for Google and the plaintiffs announced that a preliminary settlement had been reached, i.e. Google had agreed to settle the class-action lawsuit. In doing so, Google acknowledged that it needed to address the plaintiffs’ concerns (but without admitting wrongdoing).

– In January, however, following the preliminary settlement announcement, Google updated its disclosures, clarifying that it still tracked user data even when users opted to search privately or used its “Incognito” setting.

– Google also said it was trialling a new feature that automatically blocks third-party cookies (to prevent user activity being tracked) for all Google Chrome users, and that it had made the block automatic in Incognito mode shortly after the lawsuit was filed. It’s also understood that, as part of the settlement deal, this automatic block will stay in place for five years.

Mass Deletions 

Under the terms of the final settlement, the full details of which are not publicly known, Google has agreed to delete hundreds of billions of the private browsing data records that it collected from users browsing in Incognito mode.

Google Says…

A Google spokesperson has been quoted as saying that the company was pleased to settle the lawsuit which it “always believed was meritless” and that it is “happy to delete old technical data that was never associated with an individual and was never used for any form of personalisation”. 

What Does This Mean For Your Business? 

This agreement came after extensive legal battles and discussions, which in themselves highlight the complexities surrounding user privacy and data collection practices in the digital age. Part of the complexity of the case lay in deciding whether, as the plaintiffs’ lawyers argued, Google was misleading users and violating privacy and wiretapping laws or, as Google’s lawyers said, Incognito mode was designed to let users browse without saving activity to their local device, not to entirely prevent Google or other services from tracking user activity online.

Google has consistently denied wrongdoing and maintained its stance. However, Google (and its parent company Alphabet) is already facing two other potentially painful monopoly cases brought by the US federal government, and in 2022 it had to pay £318m to settle claims brought by US states over allegedly tracking the location of users who had opted out of location services on their devices. It’s not surprising, therefore, that Google opted to settle this most recently concluded case although, beyond having to delete hundreds of billions of browsing records, there are no public details yet of what else the settlement has cost it.

The settlement, therefore, will be seen by many as a victory in terms of forcing dominant technology companies to be more honest in their representations to users about how they collect and employ user data. For big tech companies such as Google, privacy and tracking have become a difficult area. Google had already moved to free itself from other volatile privacy matters around browsing by announcing back in 2020 that it would look to eliminate third-party cookies within two years (a plan that has since been delayed), and cookies have been subject to greater regulation in recent years.

This latest settlement is bad news for Google (and advertisers); however, it is likely to be good news for the many millions of Google Chrome users whose interests were represented in the class-action lawsuit.

Tech News : Glassdoor Site Shows Real Users’ Names

It’s been reported that Glassdoor (the website that allows current employees to anonymously review their employer) posted users’ real names to their profiles without their consent.

What Is Glassdoor? 

Glassdoor is a website that allows current and former employees to register anonymously and review their companies and management. Founded in 2007 in Mill Valley, California, the platform is used for obtaining insights into company cultures, salaries, and interview processes. Its aim is to foster workplace transparency, enabling job seekers to make better-informed decisions about their careers by learning from the experiences of others.

Reported 

Unfortunately for Glassdoor, a user’s account (taken from her personal blog) of her recent negative experience after contacting Glassdoor’s customer support has been widely reported in the press.

Added Name To Profile 

After the user (reportedly named Monica) sent an email to Glassdoor’s customer support that showed her full name in the ‘From’ line, she alleges that she discovered Glassdoor had updated her profile, without her consent, by adding her real name (pulled from the email) and location.

Users Leaving Glassdoor 

It’s been reported that the experience of Monica, identified as a Midwest-based software professional who joined Glassdoor 10 years ago, has now led to other members leaving the platform over fears they could also be ‘outed’. Not only could this be regarded as a breach of the anonymity and privacy that users signed up for, but it could also have adverse employment consequences in the form of employer retaliation.

Following reports of Monica’s experience in the media, it’s been reported that another user, identified as Josh Simmons, has also said Glassdoor added information about him to his personal profile, again without his consent.

Had To Delete Account 

It’s been reported that although Glassdoor’s privacy policy states “If we have collected and processed your personal information with your consent, then you can withdraw your consent at any time,” Monica claims that she was not given this option, that Glassdoor stored her name, and that her only recommended option for removing her details was to delete her account altogether, which would also mean deleting her reviews.

Shared With Fishbowl

One of the complications of the case appears to be the fact that Glassdoor was integrated with Fishbowl (an app for work-related discussions) three years ago. This led to:

– Glassdoor now saying that it “may update your Profile with information we obtain from third parties. We may also use personal data you provide to us via your resume(s) or our other services.” 

– Glassdoor staff reportedly consulting publicly available sources of information to verify details that are then used to update users’ Glassdoor accounts, in order to improve the accuracy of information for Fishbowl users.

– Glassdoor updating users’ profiles without notifying the user, e.g. if inaccuracies are found, because of its commitment to keeping Fishbowl’s information accurate.

What Does Glassdoor Say? 

Glassdoor has issued a statement saying: “Glassdoor is committed to providing a platform for people to share their opinions and experiences about their jobs and companies, anonymously – without fear of intimidation or retaliation. User reviews on Glassdoor have always and will always be anonymous.” 

What Does This Mean For Your Business? 

A large part of the value of Glassdoor is the fact that users are willing to share their ‘honest’ views about their employers and managers. One of the key reasons they feel able to do so is the anonymity they were given at registration and the assumption that this, and their privacy, would be protected. However, if reports are to be believed, the integration and cross-pollination between Fishbowl and Glassdoor has led to policy changes and a new approach whereby a user’s details can be updated, allegedly without consent, using information obtained from other sources, potentially meaning that users could be unmasked to employers.

The widely publicised stories of this allegedly happening appear likely to have damaged a key source of Glassdoor’s value – the trust that users have that their anonymity will be protected. This may explain why users are reportedly leaving the platform. This story illustrates how important matters of data protection are to businesses and individuals, particularly around privacy and consent, and how risks to users can increase if those protections are weakened or changed.

The consequences of putting users in what could be described as a difficult and risky position could be severe and/or long-lasting damage to Glassdoor’s business and reputation.

Tech News : €345m Children’s Data Privacy Fine For TikTok

Video-focused social media platform TikTok has been fined €345m by Ireland’s Data Protection Commission (DPC) over the privacy of child users.

The Processing of Personal Data 

The fine, along with a reprimand and an order requiring TikTok to bring its data processing into compliance within three months, was issued in relation to how the company processed personal data relating to child users, in terms of:

– Some of the TikTok platform settings, such as public-by-default settings as well as the settings associated with the ‘Family Pairing’ feature.

– Age verification in the registration process.

During its investigation into TikTok, the DPC also looked at transparency information for children. The DPC’s investigation focused on the period from 31 July 2020 to 31 December 2020.

Explained 

Explained in basic terms, TikTok was fined because (according to the DPC’s findings):

– The profile settings for child users’ accounts being set to public-by-default meant that anyone (on or off TikTok) could view the content posted by the child user. The DPC said this also posed risks to children under 13 who had gained access to TikTok.

– The ‘Family Pairing’ setting allowed a non-child user (who couldn’t be verified as the parent or guardian) to pair their account to the child user’s account. The DPC says this allowed non-child users to turn on Direct Messages for child users over 16, thereby posing a risk to child users.

– Child users hadn’t been provided with sufficient information transparency.

– The DPC said that TikTok had implemented “dark patterns” by “nudging users towards choosing more privacy-intrusive options during the registration process, and when posting videos.” 

TikTok Says…

TikTok has been reported as saying that it disagrees with the findings and the level of the fine. TikTok also said: “The criticisms are focused on features and settings that were in place three years ago, and that we made changes to well before the investigation even began, such as setting all under 16 accounts to private by default”.

Fines

This isn’t the first fine for TikTok in relation to this subject. For example, back in February 2019, the company was fined $5.7 million by the U.S. Federal Trade Commission (FTC) for collecting data from minors without parental consent. Also, in April this year, TikTok was fined £12.7m by the ICO for allowing children under 13 to use the platform (in 2020).

The level of TikTok’s most recent fine, however, is not as high as the £1bn fine issued to Meta in May for mishandling people’s data in transfers between Europe and the US.

Banned In Many Countries

In addition to facing fines in some of the countries where the TikTok app is allowed, TikTok has been banned in Somalia, Norway, New Zealand, the Netherlands, India, Denmark, Canada, Belgium, Australia, and Afghanistan, for a mixture of reasons including worries about data privacy for young users, possible links to the Chinese state, incompatibility with some religious laws, and certain political situations.

What Does This Mean For Your Business?

Back in 2020, TikTok was experiencing massive growth as the most downloaded app in the world. It was also the year when former U.S. President Donald Trump issued an executive order aiming to ban TikTok in the United States, and it came shortly after the platform had picked up its first big fine ($5.7 million) from the FTC (in the US) over collecting data from minors without parental consent.

As pointed out by TikTok, this latest, much larger European fine dates back to issues from around the same time, which TikTok argues it had already addressed before the DPC’s investigation began. This story highlights how important it is to create a safe online environment for children and young people, who are frequent users of the web and particularly social media platforms. It also highlights how important it is for businesses to pay particular attention to data regulations relating to children and young users and to review systems and processes with this in mind, to ensure maximum efforts are made to maintain privacy and safety.

Furthermore, it is also an example of the importance of having regulators with ‘teeth’ that can impose substantial fines and generate bad publicity for non-compliance, which can help provide the motivation for the big tech companies to take privacy matters more seriously. TikTok’s worries, however, aren’t just related to data privacy issues. Ongoing frosty political relations between China and the West mean that its relationship with the Chinese government is still in question and this, together with the bans of the app in many countries, means it remains under scrutiny, perhaps more than other (US-based) social media platforms.

Tech News : Fitbit Data Transfer Complaints

Vienna-based advocacy group ‘Noyb’ has filed complaints against Google-owned Fitbit, alleging that it has violated the EU’s GDPR over illegal exporting of user data.

Complaints In Three Countries 

Noyb, which stands for ‘None Of Your Business’ and was founded by privacy activist Max Schrems, filed three complaints against Fitbit: in Austria, the Netherlands, and Italy.

Why? 

Noyb alleges that Fitbit forces users to consent to data transfers outside the EU, to the US and other countries (with different data protection laws), without providing users with the possibility to withdraw their consent, thereby potentially violating GDPR’s requirements. Noyb says that the only option users have to stop the “illegal processing” is to completely delete their Fitbit account.

How Would This Go Against GDPR? 

There are several ways that this (alleged) practice by Google’s Fitbit could violate GDPR. For example:

– GDPR mandates that consent must be freely given. If users are forced to agree to data transfers with no ability to withdraw, the consent is not freely given.

– Under GDPR, users must be informed about how their data will be used and processed. If the data transfer is a condition that users cannot opt out of, then the consent cannot be considered specific or informed.

In relation to these points, Noyb says that because Fitbit (allegedly) forces users to consent to sharing sensitive data without providing them with clear information about possible implications or the specific countries their data goes to, the consent is neither free, informed, nor specific (as GDPR requires).

Sensitive Data 

GDPR also emphasises that only the data that is necessary for the intended purpose should be collected and processed. Fitbit forcing data transfers may violate this principle if the data being transferred is broader than what is strictly necessary for the service provided.

In relation to this, Noyb alleges that Fitbit’s privacy policy says that the shared data not only includes things like a user’s email address, date of birth and gender, but can also include “data like logs for food, weight, sleep, water, or female health tracking; an alarm; and messages on discussion boards or to your friends on the Services”.  This has raised concerns that, for example, the sharing of menstrual tracking data could be used in court cases where abortion care is criminalised, especially considering that sharing this kind of data is not common practice even in specialised menstrual tracking apps.

Also, Noyb alleges that the collected Fitbit data can even be shared for processing with third-party companies, the locations of which are unknown, and that it’s “impossible” for users to find out which specific data is affected.

‘Take It Or Leave It’ Approach? 

One other aspect of GDPR is that, to ensure users can change their mind, every person has the right to withdraw their consent. Noyb says that Fitbit’s privacy policy states that the only way to withdraw consent is to delete an account, which would mean losing all previously tracked workouts and health data, even for those on a premium subscription for 79.99 euros per year. Noyb argues that this means that although people may buy a Fitbit for its features, there appears to be no realistic way to regain control of their data without making the product useless.

Maartje de Graaf, Data Protection Lawyer at Noyb says: “First, you buy a Fitbit watch for at least 100 euros. Then you sign up for a paid subscription, only to find that you are forced to “freely” agree to the sharing of your data with recipients around the world. Five years into the GDPR, Fitbit is still trying to enforce a ‘take it or leave it’ approach.” 

Blank Cheque? 

Bernardo Armentano, Data Protection Lawyer at Noyb, says: “Fitbit wants you to write a blank check, allowing them to send your data anywhere in the world. Given that the company collects the most sensitive health data, it’s astonishing that it doesn’t even try to explain its use of such data, as required by law.” 

Fine Could Be £ Billions 

According to Noyb, based on the turnover last year of Alphabet (Google’s parent company), if the complaints are upheld by data regulators, Google could face fines of up to 11.28 billion euros over Fitbit’s alleged data protection violations.
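That headline figure is consistent with the GDPR maximum penalty of 4 per cent of annual worldwide turnover (Article 83(5)). The rough, illustrative calculation below assumes Alphabet’s 2022 turnover of roughly $283 billion converted at approximately one euro to the dollar; these are our own indicative figures rather than Noyb’s published workings:

\[ 4\% \times \text{€}282\,\text{bn} \;=\; 0.04 \times \text{€}282\,\text{bn} \;\approx\; \text{€}11.28\,\text{bn} \]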

There appears to be no publicly available comment from Google about Noyb’s allegations at the time of writing this article.

What Does This Mean For Your Business? 

Google acquired Fitbit in 2021 and, at the time, some commentators noted that in addition to expanding its move into wearables, it may also have been motivated by the lure of the health data of millions of Fitbit customers (potentially for profiling and advertising) and the ability to improve its competitive position in the lucrative healthcare tech space. It was also noted at the time that Fitbit’s corporate partnerships with insurance companies and corporate wellness programmes may have been attractive to Google.

Now, just a couple of years down the line, it’s the data aspect of the deal that appears to have landed Google in some hot water. Noyb’s complaints against Google-owned Fitbit could have a ripple effect that goes well beyond just a potentially hefty fine. With a penalty that could be up to 11.28 billion euros, the situation would have serious financial repercussions, and the case could set a precedent for how Google and other tech giants handle user data (especially sensitive health information), forcing them to change their global data policies.

It’s been noted, for example, in analyst GlobalData’s recent tech regulation report that data protection regulators look likely to continue closer scrutiny of companies in 2023, so there could be more trouble to come for other tech companies relating to which data they collect, how they share it, and around matters of consent.

Some may argue that Google may, several years down the line from GDPR’s introduction, need to invest more resources in compliance to avoid facing similar allegations related to other products or services.

For businesses that similarly rely on user-data, this case is a wake-up call to thoroughly review their data collection and transfer policies to ensure they align with GDPR requirements. Businesses must offer clear, informed choices to users about how their data is used, especially if it crosses borders. The situation with Fitbit highlights the reputational damage and legal risks involved in “take it or leave it” approaches to data consent. If Fitbit’s alleged actions are deemed a violation of GDPR, it could trigger a domino effect, prompting closer scrutiny of other businesses that have similar policies.

For users of Fitbit and similar devices, this case could lead to more transparent data practices, potentially providing them with greater control over their personal information. Reading about what may be happening to their extremely sensitive data may mean that users may become more cautious and discerning about the permissions they grant to these apps. Given the sensitive nature of health data involved, ranging from sleep patterns to menstrual cycles, users may start to demand more robust privacy protections, and this case could also encourage users to seek alternatives that offer better data protection guarantees.

Featured Article : Zoom Data Concerns

In this article, we look at why Zoom found itself as the subject of a backlash over an online update to its terms related to AI, what its response has been, plus what this says about how businesses feel about AI.

What Happened? 

Communications app Zoom updated its terms of service in March but, with the change only being widely publicised on a popular forum in recent weeks, Zoom has faced criticism, with many tech commentators expressing alarm that the change appeared to go against its policy of not using customer data to train AI.

The Update In Question 

The update to Section 10 of its terms of service, which Zoom says was to explain “how we use and who owns the various forms of content across our platform”, gave Zoom a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights” to use Customer Content, i.e. data, content, communications, messages, files, documents and more, for “machine learning, artificial intelligence, training, testing” (and other product development purposes).

The Reaction 

Following the details of the update being posted and discussed on the ‘Hacker News’ forum, there was a backlash against Zoom, with many commentators unhappy with the prospect of AI (e.g. generative AI chatbots, AI image generators, Zoom’s own AI models such as Zoom IQ, and more) being given access to what should be private Zoom calls and other communications.

What’s The Problem? 

There are several concerns that individuals, businesses and other organisations may have over their “Customer Content” being used to train AI. For example:

– Privacy Concerns – worries that personal or sensitive information in video calls could be used in ways the participants never intended.

– Potential security risks. For example, if Zoom stores video and audio data for AI training, it increases the chance of that data being exposed in a hack or breach. Also, it’s possible with generative AI models that private information could be revealed if a user of an AI chatbot asked the right questions.

– Ethical questions. This is because some users may simply not have given clear permission for their data to be used for AI training, raising issues of consent and fairness.

– Legal Issues. For example, depending on the country, using customer data in this manner might violate data protection laws like GDPR, which could get both the company and users into legal trouble. Also, Zoom users or admins for business accounts could click “OK” to the terms of service without fully realising what they’re agreeing to, and employees who use the business Zoom account may be unaware of the choice their employer has made on their behalf. It’s also been noted by some online commentators that Zoom’s terms of service still permit it to collect a lot of data without consent, e.g. what’s grouped under the term ‘Service Generated Data.’

Another Update Prompted 

The backlash, the criticism of Zoom, and the doubtless fear of some users leaving the platform over this controversy appear to have prompted another update to the company’s terms of service, which Zoom says was made “to reorganise Section 10 and make it easier to understand”.

The second update was a sentence, in bold, added on the end of Section 10.2 saying: “Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models.” 

On the company’s blog, Chief Product Officer, Smita Hashim, re-iterated that: “Following feedback received regarding Zoom’s recently updated terms of service Zoom has updated our terms of service and the below blog post to make it clear that Zoom does not use any of your audio, video, chat, screen sharing, attachments, or other communications like customer content (such as poll results, whiteboard, and reactions) to train Zoom’s or third-party artificial intelligence models.” 

The Online Terms of Service Don’t Affect Large Paying Customers 

Smita Hashim explains in the blog post that the terms of service typically cover online customers, but “different contracts exist for customers that buy directly from us” such as “enterprises and customers in regulated verticals like education and healthcare.” Hashim states, therefore, that “updates to the online terms of service do not impact these customers.” 

What Zoom AI? 

Zoom has recently introduced two generative AI features to its platform – Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose, available on free trial and offering automated meeting summaries and AI-powered chat composition.

To customers worried that these tools may be trained using ‘Customer Content’, Zoom says, “We inform you and your meeting participants when Zoom’s generative AI services are in use” and has specifically assured customers that it does not use customer content (e.g. poll results, whiteboard content, or user reactions) to train Zoom’s own (or third-party) AI models.

Criticism 

In 2020, Zoom faced criticism over plans to offer end-to-end encryption only as a paid extra feature. Also, with Zoom being the company whose product enabled (and is all about) remote working, it was criticised for asking staff living within a “commutable distance” (i.e. 50 miles / 80km) of the company’s offices to come into the office twice a week, having reportedly said (at one time) that all staff could work remotely indefinitely.

What Does This Mean For Your Business? 

This story shows how, at a time when data is needed in vast quantities to train AI, a technology that’s growing at a frightening rate (and has been the subject of dire warnings about the threats it could pose), clear data protections in this area are lagging or are missing altogether.

Yes, there are data protection laws. Arguably, however, with the lack of understanding of how AI models work and what they need, service terms may not give a clear picture of what’s being consented to (or not) when using AI. There’s a worry, therefore, that the boundaries of data protection, privacy, security, ethics, legality, and other constraints may be overstepped without users knowing it, in the rush for more data, as clear regulation is left behind.

Zoom’s extra assurances may have gone some way toward calming the backlash down and assuring users, but the fact that there was such a backlash over the contents of an old update shows the level of confusion and mistrust around this relatively new technological development and how it could affect everyone.

Snooper’s Charter Updated. (Poorly)

Amendments to the UK Online Safety Bill mean a report must be written before powers can be used by the regulator to force tech firms to scan encrypted messages for child abuse images.

What Is The Online Safety Bill? 

The Online Safety Bill is the way the UK government plans to establish a new regulatory regime to address illegal and harmful content online and to impose legal requirements on search engine and internet service providers, including those providing pornographic content. The bill will also give new powers to the Office of Communications (Ofcom), enabling it to act as the online safety regulator.

The Latest Amendments 

The government says the latest amendments to the (highly controversial) Online Safety Bill have been made to address concerns about the privacy implications and technical feasibility of the powers proposed in the bill. The new House of Lords amendments to the bill are:

– A report must be written for Ofcom by a “skilled person” (appointed by Ofcom) before the new powers are used to force a firm, such as an encrypted app like WhatsApp or Signal, to scan messages. Previously, the report was optional. The purpose of the report will be to assess the impact of scanning on freedom of expression or privacy, and to explore whether other, less intrusive, alternative technologies could be used instead. The report’s findings will be used to help decide whether to force a tech firm, e.g. an encrypted messages app, to scan messages, and a summary of those findings must be shared with the tech firm concerned.

– An amendment to the bill requiring Ofcom to look at the possible impact of the use of technology on journalism and the protection of journalistic sources. Under the amendment, Ofcom would be able to force tech companies to use what’s been termed “accredited technology” to scan messages for child sexual abuse material.

The Response 

The response from privacy campaigners and digital rights groups has focused on the idea that the oversight of an Ofcom-appointed “skilled person” is not likely to be as effective as judicial oversight (for example), and may not give the right level of consideration to users’ rights. For example, the Open Rights Group described the House of Lords debate on the amendments as a “disappointing experience” and said that, since this “skilled person” could be a political appointee overseeing decisions about free speech and privacy rights, this would not be “effective oversight”.

Apple’s Threats In Response To ‘Snooper’s Charter’ Proposals 

In the same week, Apple said it would simply remove services like FaceTime and iMessage from the UK rather than weaken its security under the new proposals for updating the UK’s Investigatory Powers Act (IPA) 2016. The proposed updates to the act would mean tech companies like Apple and end-to-end encrypted messaging apps having to clear security features with the Home Office before releasing them to customers, and would allow the Home Office to demand that security features be immediately disabled, without telling the public. Apple has submitted a nine-page statement to the government’s consultation on amendments to the IPA outlining its objections and opposition. For example, Apple says the proposals “constitute a serious and direct threat to data security and information privacy” that would affect people outside the UK.

What Does This Mean For Your Business? 

What the government says are measures to help in the fight against child sex abuse are seen by some rights groups as a route to monitoring and surveillance, and by tech companies as a way to weaken their products and the privacy of their users. The idea of a “skilled person” (e.g. a consultant or political appointee) rather than a judge compiling the report used to justify the forced scanning of encrypted messaging apps has not gone down well with the tech companies and rights groups.

Given that the House of Lords debate was the final session of the Report Stage and the last chance for the Online Safety Bill to be amended before it becomes law, and that so many major objections from tech companies are still being made, it looks unlikely that the big tech companies will comply with the new laws and changes. WhatsApp (owned by Meta), for example, has simply said it would pull out of the UK market over how the new laws would force it to compromise security, which would be a considerable blow to the many people who use the app for business daily. Signal has also threatened to pull out of the UK, and some critics think the UK government may be naïve to believe that simply pushing ahead with new laws and amendments will result in the big tech companies backing down and complying any time soon. It looks likely that the UK government will have a big fight on its hands going forward.