Security Stop Press : Google’s Cookie Replacement Plans Fall Short Says Regulator

It’s been reported (WSJ) that an internal report by the UK’s privacy regulator, the Information Commissioner’s Office (ICO), has said that Google’s proposed replacements for cookies fall short in terms of protecting consumer privacy.

The ICO’s draft report reportedly says that Google’s proposed technology, known as the ‘Privacy Sandbox,’ leaves gaps that could be exploited by advertisers, potentially undermining privacy and identifying users who should be kept anonymous.

The WSJ reports that the ICO now wants Google to make changes and will share its concerns with the UK’s competition regulator, the Competition and Markets Authority (CMA).

Tech News : Oxford’s Secure Quantum Computing Breakthrough

Researchers at Oxford University’s UK Quantum Computing and Simulation Hub claim to have made what could be an important breakthrough in quantum computing security.

The Issue 

As things stand, businesses wanting to use cloud-based quantum computing services face privacy and security issues when doing so over a network, similar to those in traditional cloud computing. For example, users can’t keep their work secret from the server, or check their results independently once tasks become too complex for classical simulation, i.e. they risk disclosing sensitive information such as the results of the computation, or even the algorithm itself.

The Breakthrough – ‘Blind Quantum Computing’ 

However, Oxford researchers have now developed “blind quantum computing”, a method that enables users to access remote quantum computers to process confidential data with secret algorithms, and even to verify the results are correct, without revealing any useful information (thereby retaining security and privacy). In short, the researchers have developed a system for connecting two totally separate quantum computing entities (potentially an individual user accessing a cloud server) in a completely secure way.

How? 

The researchers achieved the breakthrough by building a system in which a fibre-network link connects a quantum computing server to a simple photon-detecting device (photons being particles of light) on an independent computer remotely accessing the server’s cloud services.

This system was found to allow ‘blind quantum computing’ over a network because every computation incurs a correction that must be applied to all subsequent ones, which requires real-time information to keep in step with the algorithm. The researchers say the unique combination of quantum memory and photons is the secret to the system.
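To make the blinding idea more concrete, below is a toy, purely classical sketch of the ‘angle-blinding’ trick used in blind quantum computing protocols of this general family (such as UBQC). To be clear, this illustrates the principle only, not the Oxford system itself: the client hides each true measurement angle behind secret randomness, so the measurement instructions the server sees are statistically uniform and reveal nothing about the client’s algorithm.

```python
import random
from collections import Counter

# Measurement angles are multiples of pi/4, represented as indices 0..7.
def blind(phi_idx: int):
    """Client-side blinding: hide the true angle phi behind secret randomness."""
    theta_idx = random.randrange(8)  # secret random rotation
    r = random.randrange(2)          # secret bit; adding pi (4 steps) flips the outcome
    delta_idx = (phi_idx + theta_idx + 4 * r) % 8  # the angle actually sent to the server
    return delta_idx, theta_idx, r

# Whatever the secret angle phi is, the blinded angle delta that the server
# sees is uniformly random, so it leaks nothing about the computation.
tally = Counter(blind(phi_idx=3)[0] for _ in range(80_000))
print(sorted(tally.items()))  # roughly 10,000 hits on each of the 8 angles

# The client then un-flips each returned outcome bit b with its secret r
# (b ^ r) and uses the result to update the angles of later measurements --
# the real-time correction step described above.
```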

What Will It Mean? 

As the study’s lead scientist, Dr Peter Drmota, pointed out: “Realising this concept is a big step forward in both quantum computing and keeping our information safe online.” Also, as Professor David Lucas, the Hub’s Principal Investigator, observed: “We have shown for the first time that quantum computing in the cloud can be accessed in a scalable, practical way which will also give people complete security and privacy of data, plus the ability to verify its authenticity”.

What Does This Mean For Your Business? 

Quantum computers can dramatically accelerate tasks that have traditionally taken a long time, with astounding results, e.g. a calculation that would take a classical computer a week could take a quantum computer less than a second. As such, quantum computers are capable of solving some of the toughest challenges faced by many different industries, and some of the biggest challenges facing us all, such as how to successfully treat some of our most serious diseases and how to tackle the climate crisis.

However, they are very expensive and, for most businesses and organisations, the only realistic route is access to quantum computers via the cloud as part of ‘Quantum-as-a-Service’, which at least a dozen companies already offer. The opportunities to innovate, create competitive advantages, and achieve industry/sector breakthroughs or medical advances using the power of quantum computing are very attractive to many organisations. However, the security and privacy challenges of connecting with a quantum computer over a network have presented a considerable risk – up until now.

This breakthrough from the Oxford researchers therefore appears to be an important step in tackling a key challenge, potentially opening up secure, private access to quantum computing at scale for many businesses and organisations. The result could be a boost to value-adding innovations and valuable new discoveries that change the landscape in some sectors. It represents another important step towards a future in which the power of quantum computing is within reach of many more ordinary people.

Featured Article : Google Deleting Millions Of Users’ Incognito Data

As part of a deal to resolve a class action lawsuit in the US dating back to 2020, Google has said it will delete the incognito mode search data of millions of users.

What Lawsuit? 

In June 2020 in the US, three Californians, Chasom Brown, Christopher Castillo, and Monique Trujillo (along with William Byatt of Florida and Jeremy Davis of Arkansas), brought a lawsuit against Google over its Incognito mode. They filed the class-action lawsuit on behalf of themselves and potentially millions of other Google users who believed their data was being collected by Google despite using Incognito mode for private browsing.

The plaintiffs accused Google of capturing data despite assurances that it would not, thereby misleading users about the level of privacy provided by Incognito mode. For example, internal Google emails highlighted by the lawsuit appeared to show that users browsing in Incognito mode were actually being tracked by Google to measure web traffic and sell ads.

The original lawsuit was seeking at least $5 billion in damages from Google.

What’s Been Happening? 

Since the lawsuit was originally filed, some of the main events of note between the plaintiffs and Google have included:

– Google attempting to have the lawsuit dismissed, arguing that it never promised complete privacy or non-collection of data in Incognito mode. At the time, Google pointed to the disclaimers presented to users when opening an Incognito tab, which stated that activity might still be visible to websites, web services, and employers or schools.

– A judge then rejected Google’s request to dismiss the case. The judge emphasised that Google didn’t explicitly inform users that it would collect data in the manner alleged by the plaintiffs. This decision meant that the lawsuit could again move forward.

– Finally, back in December last year, with the scheduled trial due to begin in February 2024, the lawyers for Google and the plaintiffs announced that a preliminary settlement had been reached, i.e. Google had agreed to settle the class-action lawsuit. In doing so, Google acknowledged that it needed to address the plaintiffs’ concerns (but without admitting wrongdoing).

– In January, however, following the preliminary settlement announcement, Google updated its disclosures, clarifying that it still tracked user data even when users opted to search privately or used its “Incognito” setting.

– Google also said it was trialling a new feature that could automatically block third-party cookies (to prevent user activity being tracked) for all Google Chrome users and had made the block automatic for Incognito just after the lawsuit was filed. It’s also understood that as part of the settlement deal, this automatic block feature will stay in place for 5 years.

Mass Deletions 

Under the terms of the final settlement, the full details of which are not publicly known, Google has agreed to delete hundreds of billions of the private browsing data records it collected from users of Incognito mode.

Google Says…

A Google spokesperson has been quoted as saying that the company was pleased to settle the lawsuit which it “always believed was meritless” and that it is “happy to delete old technical data that was never associated with an individual and was never used for any form of personalisation”. 

What Does This Mean For Your Business? 

This agreement came after extensive legal battles and discussions, which in themselves highlight the complexities surrounding user privacy and data collection practices in the digital age. Part of the complexity in the case appeared to lie in deciding whether, as the plaintiffs’ lawyers argued, Google was misleading users and violating privacy and wiretapping laws or whether, as Google’s lawyers said, Incognito mode was designed to allow users to browse without saving activity to their local device, not to entirely prevent Google or other services from tracking user activities online.

Google has consistently denied wrongdoing and maintained its stance. However, Google (and its parent company Alphabet) are already facing two other potentially painful monopoly cases brought by the US federal government, and had to pay £318m in 2022 to settle claims brought by US states over it allegedly tracking the location of users who’d opted out of location services on their devices. It’s not surprising, therefore, that Google has opted to settle in this most recently concluded case although, beyond the requirement to delete hundreds of billions of browsing records, there are no public details yet of what else it has cost.

The settlement, therefore, will be seen by many as a victory in terms of forcing dominant technology companies to be more honest in their representations to users about how they collect and employ user data. For big tech companies such as Google, privacy and tracking have become a difficult area. Google had already moved to free itself from other volatile privacy matters around browsing by announcing, back in 2020, that it would look to eliminate third-party cookies within two years (a deadline that has since been delayed), and cookies have been subject to greater regulation in recent years.

This latest settlement is bad news for Google (and advertisers); however, it is likely to be good news for the many millions of Google Chrome users whose interests were represented in the class-action lawsuit.

Tech News : Chrome’s Real-Time Safe Browsing Change

Google has announced the introduction of real-time, privacy-preserving URL protection to Google Safe Browsing for those using Chrome on desktop or iOS (and Android later this month).

Why? 

Google says that, with attacks constantly evolving and the difference between successfully detecting a threat or not now perhaps being just a “matter of minutes,” this new measure has been introduced “to keep up with the increasing pace of hackers.” 

Not Even Google Will Know Which Websites You’re Visiting 

Google says because this new capability uses encryption and other privacy-enhancing techniques, the level of privacy and security is such that no one, including Google, will know what website you’re visiting.

What Was Happening Before? 

Prior to the addition of the new real-time protection, Google’s Standard protection mode of Safe Browsing relied upon a list stored on the user’s device to check if a site or file was known to be potentially dangerous. The list was updated every 30 to 60 minutes. However, as Google now admits, the average malicious site only actually exists for less than 10 minutes – hence the need for a real-time, server-side list solution.

Another challenge that has necessitated the introduction of a server-side real-time solution is the fact that Safe Browsing’s list of harmful websites continues to grow rapidly and not all devices have the resources necessary to maintain this growing list, nor to receive and apply the required updates to the list.
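The general technique behind checks like this is hash-prefix matching, sketched below in simplified form. This illustrates the approach rather than Google’s exact protocol (the real system also canonicalises URLs and, as described below, routes queries through a privacy-preserving relay so the server can’t link them to the user); the lookup function, prefix length, and threat list here are all hypothetical stand-ins.

```python
import hashlib

PREFIX_LEN = 4  # bytes of the hash sent off-device (illustrative value)

def url_hash(url: str) -> bytes:
    """Full SHA-256 digest of the URL (real clients canonicalise the URL first)."""
    return hashlib.sha256(url.encode()).digest()

def check_url(url: str, server_lookup) -> bool:
    """Return True if the URL is on the threat list.

    Only a short hash prefix leaves the device. The server replies with all
    full hashes sharing that prefix, and the final comparison happens
    locally, so the server never learns the exact URL being visited.
    """
    full = url_hash(url)
    candidates = server_lookup(full[:PREFIX_LEN])  # server-side, real-time list
    return full in candidates

# Hypothetical stand-in for the server's always-current threat list.
THREAT_LIST = {url_hash("http://malicious.example/")}

def fake_server_lookup(prefix: bytes) -> set:
    return {h for h in THREAT_LIST if h.startswith(prefix)}

print(check_url("http://malicious.example/", fake_server_lookup))  # True
print(check_url("https://safe.example/", fake_server_lookup))      # False
```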

Extra Phishing Protection 

Google says it expects this new real-time protection capability to be able to block 25 per cent more phishing attempts.

Partnership With Fastly 

Google says that the new enhanced level of privacy between Chrome and Safe Browsing has been achieved through a partnership with edge computing and security company Fastly.

Like Enhanced Mode 

In its announcement of the new capability, Google also highlighted the similarity between the new feature and Google’s existing ‘Enhanced Protection Mode’ (in Safe Browsing), which also compares the URLs customers visit against a real-time list. However, the opt-in Enhanced Protection also uses “AI to block attacks, provides deep file scans and offers extra protection from malicious Chrome extensions.” 

What Does This Mean For Your Business? 

As noted by Google, the evolving and increasing number of cyber threats, the fact that malicious sites are only around for a few minutes, and the fact that many devices don’t have the resources on board to handle a growing security list (and its updates) have necessitated a better security solution. Having the list of suspect sites server-side and offering improved real-time protection kills a few birds with one stone, allowing Google a more efficient (and hopefully more effective) way to increase its level of security and privacy. It’s also a way for Google to plug a security gap for those who have not taken the opportunity to opt in to its Enhanced Protection Mode since its introduction last year.

For business users and other users of Chrome, the chance to get a massive (estimated) 25 per cent increase in phishing protection without having to do much or pay extra must be attractive. For example, with phishing accounting for 60 per cent of social engineering attacks and, according to a recent Zscaler report, phishing attacks growing by a massive 47 per cent last year, businesses are likely to welcome any fast, easy, extra phishing protection they can get.

Tech Insight : New Privacy Features For Facebook and Instagram

Meta has announced the start of a roll-out of default end-to-end encryption for all personal chats and calls via Messenger and Facebook, with a view to making them more private and secure.

Extra Layer Of Security and Privacy 

Meta says that despite it being an optional feature since 2016, making it the default has “taken years to deliver” but will provide an extra layer of security. Meta highlights the benefits of default end-to-end encryption saying that “messages and calls with friends and family are protected from the moment they leave your device to the moment they reach the receiver’s device” and that “nobody, including Meta, can see what’s sent or said, unless you choose to report a message to us.“  
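For readers who want a feel for what “protected from the moment they leave your device” means in practice, below is a minimal sketch of the core end-to-end idea: the two devices agree a shared key that never travels over the network, then encrypt with it. It uses an X25519 key agreement and AES-GCM via Python’s cryptography package purely for illustration; Messenger’s actual implementation is built on the far more sophisticated Signal protocol (with per-message key ratcheting) and Meta’s Labyrinth protocol.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each device generates its own key pair; private keys never leave the device.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

def shared_key(my_priv, their_pub) -> bytes:
    """Derive a symmetric key both devices can compute but the server cannot."""
    raw = my_priv.exchange(their_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo-chat").derive(raw)

k_alice = shared_key(alice_priv, bob_priv.public_key())
k_bob = shared_key(bob_priv, alice_priv.public_key())
assert k_alice == k_bob  # same key at both ends, never sent over the wire

# Encrypted on the sender's device; only the receiver's device can decrypt.
nonce = os.urandom(12)
ciphertext = AESGCM(k_alice).encrypt(nonce, b"See you at 6?", None)
print(AESGCM(k_bob).decrypt(nonce, ciphertext, None))  # b'See you at 6?'
```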

Default end-to-end encryption will roll out to Facebook first, and then to Instagram after the Messenger upgrade is completed.

Not Just Security and Privacy 

Meta is also keen to highlight the other benefits of its new default version of end-to-end encryption for users which include additional functionality, such as the ability to edit messages, higher media quality, and disappearing messages. For example:

– Users can edit messages that may have been sent too soon, or that they’d simply like to change, for up to 15 minutes after the messages have been sent.

– Disappearing messages on Messenger will now last for 24 hours after being sent, and Meta says it’s improving the interface to make it easier to tell when ‘disappearing messages’ is turned on.

– To retain privacy, and to reduce the pressure users can feel to respond to messages immediately, Meta’s new read receipt control allows users to decide whether they want others to see when they’ve read their messages.

When? 

Considering that Facebook Messenger has approximately 1 billion users worldwide, the roll-out could take months.

Why Has It Taken So Long To Introduce? 

Meta says it’s taken so long (7 years) to introduce because its engineers, cryptographers, designers, policy experts and product managers have had to rebuild Messenger features from the ground up using the Signal protocol and Meta’s own Labyrinth protocol.

Also, Meta had intended to introduce default end-to-end encryption back in 2022 but had to delay its launch over concerns that it could prevent Meta detecting child abuse on its platform.

Other Messaging Apps Already Have It 

Other messaging apps that have already introduced default end-to-end encryption include Meta-owned WhatsApp (in 2016), and Signal Foundation’s Signal messaging service, which has also been upgraded to guard (as much as you realistically can) against future encryption-breaking attacks, e.g. quantum-computer encryption cracking.

Issues 

There are several issues involved with the introduction of end-to-end encryption in messaging apps. For example:

– Governments have long wanted to force tech companies to introduce ‘back doors’ to their apps using the argument that they need to monitor content for criminal activity and dangerous behaviour, including terrorism, child sexual abuse and grooming, hate speech, criminal gang communications, and more. Unfortunately, creating a ‘back door’ destroys privacy, leaves users open to other risks (e.g. hackers) and reduces trust between users and the app owners.

– Attempted legal pressure has been applied to apps like WhatsApp and Facebook Messenger, such as the UK’s Online Safety Act. The UK government wanted to have the ability to securely scan encrypted messages sent on Signal and WhatsApp as part of the law but has admitted that this can’t happen because the technology to do so doesn’t exist (yet).

There are many compelling arguments for having (default) end-to-end encryption in messaging apps, such as:

– Consumer protection, i.e. it safeguards financial information during online banking and shopping, preventing unauthorised access and misuse.

– Business security, e.g. when used in WhatsApp and VPNs, encryption protects sensitive corporate data, ensuring data privacy and reducing cybercrime risks.

– Safe Communication in conflict zones (as highlighted by Ukraine). For example, encryption can facilitate secure, reliable communication in war-torn areas, aiding in broadcasting appeals, organising relief, combating disinformation, and protecting individuals from surveillance and tracking by hostile forces.

– Ensuring the safety of journalists and activists, particularly in environments with censorship or oppressive regimes, by keeping information channels secure and private.

– However, for most people using Facebook’s Messenger app, encryption is simply more of a general reassurance.

What Does This Mean For Your Business?

For Meta, the roll-out of default end-to-end encryption for Facebook and Instagram has been a bit of a slog and a long time coming. However, its introduction to bring FB Messenger in line with Meta’s popular WhatsApp essentially enhances user privacy and security and helps Facebook to claw its way back a little towards positioning itself as a company that’s a strong(er) advocate for digital safety.

For UK businesses, this move offers enhanced protection for sensitive data and communication, aligning with growing demands for cyber security and providing some peace of mind. However, the move presents further challenges and frustration for law enforcement and the UK government, potentially complicating efforts to monitor criminal activities and enforce regulations like the Online Safety Act. Overall, the initiative could be said to underscore a broader trend towards prioritising user privacy and security in the digital landscape, as well as being another way for tech giants like Meta to compete with other apps like Signal. It’s also a way for Meta to demonstrate that it won’t be forced into bowing to government pressure that could destroy the integrity and competitiveness of its products and negatively affect user trust in its brand (which has taken a battering in recent years).

Tech Insight : Cameras In Airbnb Properties – What Are The Rules?

Following a recent Metro article highlighting the issue of undisclosed cameras being used by a small number of Airbnb hosts, we take a look at what the rules say, at reports in the news of this happening, and at what you can do to protect yourself.

Do Airbnb Hosts Have The Right To Film Guests? 

You may be surprised to know that the answer to this question is yes, hosts do have the right to install surveillance devices in certain areas of their properties (which may result in guests being filmed) but this is heavily regulated and restricted for privacy reasons.

When/Where/Why/How Is It OK For Hosts To Film Guests? 

The primary legitimate reason for hosts to install surveillance devices is for security purposes. They are not allowed to use them for any invasive or unethical purposes. Airbnb’s community standards, for example, emphasise respect for the privacy of guests and any violation of these standards can lead to the removal of the host from the platform.

Clear Disclosure 

Airbnb’s company rules say that monitoring devices (e.g. cameras) may be used, but only if they are in common spaces (such as living rooms, hallways, and kitchens), and then only if Airbnb hosts disclose them in their listings. In short, if a host has any kind of surveillance device, they must clearly mention it in their house rules or property listing so that guests are made aware of these devices before they book the property.

What About Local Laws? 

It is also the case that although disclosed cameras in common spaces on a property may be OK by the company’s rules, Airbnb hosts must also adhere to local laws and regulations regarding surveillance. This can vary widely from place to place and, in some regions, recording audio without consent is illegal, whereas video might be permissible if disclosed.

Hidden Cameras 

Even though Airbnb rules are relatively clear, there appears to be anecdotal and news evidence that some Airbnb guests have discovered undisclosed surveillance devices in areas of Airbnb properties where they should not be installed. Examples that have made the news include:

– Back in 2019, it was reported that a couple staying for one night at an Airbnb property in Garden Grove, California discovered a camera hidden in a smoke detector directly above the bed.

– In July 2023, a Texas couple were widely reported to have filed a lawsuit against an Airbnb owner, claiming he had put up ‘hidden cameras’ in the Maryland property they had rented for 2 nights in August 2022. According to the Court documents of Kayelee Gates and Christian Capraro, the couple became suspicious after Capraro discovered multiple hidden cameras disguised as smoke detectors in the bedroom and bathroom.

– Last month, a man (calling himself Ian Timbrell) alleged in a post on X that he had found a camera tucked between two sofa cushions at his Aberystwyth Airbnb.

Wouldn’t It Be Better To Disallow Any Cameras Inside An Airbnb Rental Property? 

Banning all cameras at Airbnb rental properties might initially seem like a straightforward solution to privacy concerns, yet there are important factors to consider. Some hosts may legitimately need to monitor common areas, such as entrances, for security purposes (perhaps the property is in an area where crime has been a problem), to deter theft and vandalism and to provide evidence if a crime occurs. On the other hand, a complete ban on cameras would address the privacy concerns of guests, ensuring they feel comfortable and secure during their stay.

Airbnb’s current policy attempts to balance security and privacy by allowing cameras in certain areas while requiring full disclosure and banning them in private spaces like bedrooms and bathrooms. However, enforcing a complete ban on cameras would be very challenging, as hidden cameras are, by nature, difficult to detect and even if there was a ban, some owners may simply not comply. The Airbnb model is built on trust between hosts and guests, and clear communication and transparency about security measures, including camera usage, are crucial for maintaining this trust. While a total ban on cameras might seem like a simple solution to privacy concerns, it overlooks the legitimate security needs of hosts. A balanced approach with clear guidelines and strict enforcement might be more effective in protecting both guest privacy and host security.

How To Check 

If you’re worried about possibly being filmed/recorded by hidden and undisclosed surveillance devices in a rented Airbnb property, here are some ways you can search the property and potentially reveal such devices:

– Inspect any gadgets. Check smoke detectors and alarm clocks, as these are well-known hiding places for cameras, and examine any other tech that seems out of place. You may also want to check the shower head.

– Search for lenses. With the room dark, use a torch (such as your phone’s torch) to spot reflective camera lenses in objects like decor or appliances.

– Use phone apps like Glint Finder for Android or Hidden Camera Detector for iOS to find hidden cameras.

– Check storage areas, e.g. examine drawers, vents, and any openings in walls/ceilings.

– Check mirrors. Many people worry about two-way mirrors with cameras behind them. Ways to check include lifting any mirrors to see the wall behind, or turning off the room light and shining a torch into the mirror to see if an area behind it is visible.

– Check for infrared lights (which can be used in movement-sensitive cameras). Again, these may be spotted by using your phone’s camera in the dark and looking out for any small, purple or pink lights that may be flashing or steady.

– Scan the property’s Wi-Fi network and smart home devices for unknown devices (a minimal example of how to do this is sketched after this list).

– Unplug the Airbnb property’s router. Stopping the Wi-Fi at source should disable surveillance devices and may reveal whether the owner is monitoring the property, e.g. it may prompt the host to ask about the router being unplugged.

– If you’re particularly concerned, buy and bring an RF signal detector with you. Widely available online, this is a device that can find any devices emitting Bluetooth or Wi-Fi signals, e.g. wireless surveillance cameras, tracking devices and power supplies.
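For the network-scanning suggestion above, here is a minimal sketch of the kind of sweep a technically minded guest could run from a laptop on the property’s Wi-Fi, using the third-party scapy package. The subnet shown is an assumption (check your device’s network settings for the real one), it must be run with administrator/root privileges, and it will only find devices connected to the same Wi-Fi network.

```python
# pip install scapy  (must be run with administrator/root privileges)
from scapy.all import ARP, Ether, srp

SUBNET = "192.168.1.0/24"  # assumption: adjust to the property's actual subnet

# Broadcast an ARP "who-has" request to every address on the subnet and
# collect the replies; each reply is a live device on the network.
answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=SUBNET),
                  timeout=3, verbose=0)

print(f"{len(answered)} device(s) responded:")
for _, reply in answered:
    # Entries beyond your own gadgets and the router are worth a closer
    # look -- the MAC address prefix can be checked against vendor lists.
    print(f"  IP {reply.psrc:<15}  MAC {reply.hwsrc}")
```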

What Does This Mean For Your Business? 

The issue of undisclosed cameras in Airbnb properties raises important considerations for Airbnb as a company, its hosts, and travellers. For Airbnb, the challenge lies in upholding and enforcing privacy standards to maintain user trust. This could involve enhancing their policies, perhaps even investing in technology or an inspection process for better detection of undisclosed devices, and/or providing more reassuring information about the issue, thereby safeguarding guest security, ensuring host accountability, and helping to protect their brand reputation.

It should be said that most Airbnb hosts abide by the company’s rules but are caught in a delicate balancing act between providing security and respecting the privacy of their guests. Any misuse of surveillance devices can, of course, have serious legal consequences and potentially harm a host’s reputation and standing on the platform. However, even just a few stories in the news about the actions of one or two hosts can have a much wider negative effect on consumer trust in Airbnb and can be damaging for all hosts. It could even deter people from using the platform altogether.

For some travellers, this situation may make them feel they must proactively take the responsibility for their own privacy (which may not reflect so well on Airbnb). They may feel as though they need to be informed about their rights, familiarise themselves with detection methods and remain vigilant during their stays.

This whole scenario emphasises the need for a continuous update of policies and practices by Airbnb to keep pace with technological advancements and the varying legal frameworks in different regions. It also highlights the importance of clear communication and transparency between the company, its hosts, and guests to maintain a trustworthy and secure environment.

Featured Article : Temporary Climb-Down By UK Government

In an apparent admission of defeat, the UK government has conceded that requiring scanning of platforms like WhatsApp for messages with harmful content, as required in the Online Safety Bill, is not (currently) feasible.

The ‘Spy Clause’ 

Under what’s been dubbed the ‘spy clause’ (Clause 122) in the UK’s Online Safety Bill, the government had stated Ofcom could issue notices to messaging apps like WhatsApp and Signal (which use end-to-end encryption) that would allow the deployment of scanning software. The reason given was to scan for child sex abuse images on the platforms. However, the messaging apps argued that this would effectively destroy the end-to-end encryption, an important privacy feature valued by customers. This led to both WhatsApp and Signal threatening to pull their services out of the UK if the Bill went through with the clause in it.

Also, some privacy groups, like the Open Rights Group, argued that forcing the scanning of private messages on apps amounted to an expansion of mass surveillance.

Climbdown 

However, in a recent statement to the House of Lords, junior arts and heritage minister Lord Stephen Parkinson announced that the government would be backing down on the issue. Lord Parkinson said: “When deciding whether to issue a notice, Ofcom will work closely with the service to help identify reasonable, technically feasible solutions to address child sexual exploitation and abuse risk, including drawing on evidence from a skilled persons report. If appropriate technology which meets these requirements does not exist, Ofcom cannot require its use.” 

In other words, the technology that enables scanning of messages without violating encryption doesn’t currently exist and, therefore, under the amended version of the bill, WhatsApp and Signal will not be required to have their messages scanned (until such technology does exist).

This is a significant climbdown for the government which has been pushing for ‘back doors’ and scanning of encrypted apps for many years, particularly since it was revealed that the London Bridge terror attack appeared to have been planned via WhatsApp.

Victory – Signal & WhatsApp 

Writing on ‘X’ (formerly Twitter), Meredith Whittaker, the president of Signal, said the government’s apparent climbdown was “a victory, not a defeat” for the tech companies. She also admitted, however, that it wasn’t a total victory, saying “we would have loved to see this in the text of the law itself.”

Also posting on ‘X,’ Will Cathcart, head of WhatsApp said that WhatsApp “remains vigilant against threats” to its end-to-end encryption service, adding that “scanning everyone’s messages would destroy privacy as we know it. That was as true last year as it is today.” 

Omnishambles 

Following the news of the government’s ‘spy clause’ climbdown, privacy advocates the Open Rights Group (ORG) highlighted the fact that, on the one hand, the government had conceded that the technology that would have been needed to scan messages didn’t exist, while on the other hand it appeared to say it hadn’t conceded. Describing the matter as an “omnishambles,” the ORG highlighted how, during an appearance on Times Radio, Michelle Donelan MP said that, “We haven’t changed the bill at all” and that “further work to develop the technology was needed.” 

What Does This Mean For Your Business? 

For apps like WhatsApp and Signal, this is not only a victory against government pressure but is also good news for business as, presumably, they will continue to operate in the UK market.

This is also good news for many UK businesses that routinely use WhatsApp as part of their business communications and won’t need to worry (for the time being) about having their commercially (and personally) sensitive messages scanned, thereby posing a risk to privacy and security, and perhaps increasing the risk of hacks and data breaches. It appears that the UK government has been forced to admit the technology does not yet exist that can scan messages on end-to-end encrypted services and maintain the integrity of that end-to-end encryption at the same time. It also appears that it may realistically take quite some time (years) before this technology exists, thereby making the victory all the sweeter for the encrypted apps.

The government’s climbdown on ‘clause 122’ (the ‘spy clause’), is also being celebrated by the many privacy groups that have long argued against it on the grounds of it enabling mass surveillance.

Featured Article : UK Gov Pushing To Spy On WhatsApp (& Others)

The recent amendment to the Online Safety Bill, which means a compulsory report must be written for Ofcom by a “skilled person” before encrypted app companies can be forced to scan messages, has led to even more criticism of this controversial bill, which seeks to bypass security in apps and give the government (and therefore any number of people) more access to sensitive and personal information.

What Amendment? 

In the House of Lords debate, which was the final session of the Report Stage and the last chance for the Online Safety Bill to be amended before it becomes law, Government minister Lord Parkinson amended the bill to require that a report be written for Ofcom by a “skilled person” (appointed by Ofcom) before powers can be used to force a provider (e.g. WhatsApp or Signal) to scan its messages. The stated purpose of scanning messages using the powers of the Online Safety Bill is (ostensibly) to uncover child abuse images.

The amendment states that “OFCOM may give a notice under section 111(1) to a provider only after obtaining a report from a skilled person appointed by OFCOM under section 94(3).” 

Prior to the amendment, the report had been optional.

Why Is A Compulsory Report Stage So Important? 

The amendment says that the report is needed before companies can be forced to scan messages “to assist OFCOM in deciding whether to give a notice…. and to advise about the requirements that might be imposed by such a notice if it were to be given”. In other words, the report will assess the impact of scanning on freedom of expression and privacy, and explore whether other, less intrusive technologies could be used instead.

It is understood, therefore, that the report’s findings will be used to help decide whether to force a tech firm to scan messages. Under the detail of the amendment, a summary of the report’s findings must be shared with the tech firm concerned.

Reaction 

Tech companies may broadly agree with the aims of the bill. However, the detail that operators of encrypted messaging services (e.g. WhatsApp and Signal) have always opposed is being forced to scan user messages before they are encrypted (client-side scanning). Operators say this completely undermines the privacy and security of encrypted messaging, and they object to the idea of having to run government-mandated scanning services on their devices. They also argue that this could leave their apps more vulnerable to attack.

The latest amendment, therefore, has not changed this situation for the tech companies and has led to more criticism and more objections. Many objections have also been aired by campaign and rights groups such as Index on Censorship and The Open Rights Group, who have always opposed what they call the “spy clause” in the bill. For example:

– The Ofcom-appointed “skilled person” could simply be a consultant or political appointee, and having these people oversee decisions about free speech and privacy rights would not amount to effective oversight.

– Judicial oversight should be a bare minimum and a report written by just a “skilled person” wouldn’t be binding and would lack legal authority.

Other groups, however, such as the NSPCC, have broadly backed the bill in terms of finding ways to make tech firms mitigate the risks of child sexual abuse when designing their apps or adding features, e.g. end-to-end encryption.

Another Amendment 

Another House of Lords amendment to the bill requires Ofcom to look at the possible impact of the use of technology on journalism and the protection of journalistic sources. Under the amendment, Ofcom would be able to force tech companies to use what’s been termed as “accredited technology” to scan messages for child sexual abuse material.

This has also been met with similar criticisms over the idea of government-mandated scanning technology’s effects on privacy, freedom of speech, and potentially being used as a kind of monitoring and surveillance. WhatsApp, Signal, and Apple have all opposed the scanning idea, with WhatsApp and Signal reportedly indicating that they would not comply.

Breach Of International Law? 

Clause 9(2) of the Online Safety Bill, which requires platforms to prevent users from “encountering” certain “illegal content”, has also been soundly criticised recently. The clause means that platforms hosting user-generated content will need to remove any such content immediately (and ‘illegal content’ covers a broad range) or face considerable fines, blocked services, or even jail for executives. Quite apart from the technical and practical challenges of achieving this effectively at scale, criticisms of the clause include that it threatens free speech in the UK and lacks the detail expected of legislation.

Advice provided by The Open Rights Group suggests that the clause may even be a breach of international law, in that there could be “interference with freedom of expression that is unforeseeable”, and that it goes against the current legal order on platforms.

It’s also been reported that Wikipedia could withdraw from the UK over the rules in the bill.

Investigatory Powers Act Objections (The Snooper’s Charter) 

Suggested new updates to the Investigatory Powers Act (IPA) 2016 (sometimes called the ‘Snooper’s Charter’) have also come under attack from tech firms, not least Apple. For example, the government wants messaging services, e.g. WhatsApp, to clear security features with the Home Office before releasing them to customers. The update to the IPA would mean that the UK’s Home Office could demand, with immediate effect, that security features are disabled, without telling the users/the public. Currently, a review process with independent oversight (with the option of appeal by the tech company) is needed before any such action could happen.

The Response 

The response from tech companies has been swift and negative, with Apple threatening to remove FaceTime and iMessage from the UK if the planned update to the Act goes ahead.

Concerns about granting the government the power to secretly remove security features from messaging app services include:

– It could allow government surveillance of users’ devices by default.

– It could reduce security for users, seriously affect their privacy and freedom of speech, and could be exploited by adversaries, whether they are criminal or political.

– Building backdoors into encrypted apps essentially means there is no longer end-to-end encryption.

Apple 

Apple’s specific response to the proposed updates/amendments (which will be subject to an eight-week consultation anyway) is that:

– It refuses to make changes to security features specifically for one country that would weaken a product for all users globally.

– Some of the changes would require issuing a software update, which users would have to be told about, thereby stopping changes from being made secretly.

– The proposed amendments threaten security and information privacy and would affect people outside the UK.

What Does This Mean For Your Business? 

There’s broad agreement about the aims of the UK’s Online Safety Bill and the IPA in terms of wanting to tackle child abuse, keep people safe, and even make tech companies take more responsibility and measures to improve safety. However, these are global tech companies whose UK users represent only a small part of their total user base, and ideas like building back doors into secure apps, running government-approved scanning of user content, and using reports written by consultants/political appointees to support scanning all go against ideas of privacy, one of the key features of apps like WhatsApp.

Allowing governments access into apps, and granting them powers to turn off security ‘as and when’, raises issues and suspicions about free speech, government monitoring and surveillance, legal difficulties, and more. In short, even though the UK government wants to press ahead with the new laws and amendments, there is still a long way to go before there is any real agreement with the tech companies. In fact, it looks likely that they won’t comply, and some, like WhatsApp, have simply said they’ll pull out of the UK market, which could be very troublesome for UK businesses, charities, groups and individuals.

The tech companies also have a point in that it seems unreasonable to expect them to alter their services just for one country in a way that could negatively affect their users in other countries. As some critics have pointed out, if the UK wants to be a leading player on the global tech stage, alienating the big tech companies may not be the best way to go about it. It seems that a lot more talking and time will be needed to get anywhere near real-world workable laws and, at the moment, with the UK government being seen by many as straying into areas that are alarming rights groups, some tech companies are suggesting the government ditch their new laws and start again.

Expect continued strong resistance from tech companies going forward if the UK government doesn’t slow down or re-think many aspects of these new laws – watch this space.