Tech Insight : New Privacy Features For Facebook and Instagram

Meta has announced the start of a roll-out of default end-to-end encryption for all personal chats and calls via Messenger and Facebook, with a view to making them more private and secure.

Extra Layer Of Security and Privacy 

Meta says that although end-to-end encryption has been an optional feature on Messenger since 2016, making it the default has “taken years to deliver” but will provide an extra layer of security. Meta highlights the benefits of default end-to-end encryption, saying that “messages and calls with friends and family are protected from the moment they leave your device to the moment they reach the receiver’s device” and that “nobody, including Meta, can see what’s sent or said, unless you choose to report a message to us.”
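For readers unfamiliar with the mechanics, the core idea behind end-to-end encryption can be illustrated with a short sketch. The code below is a deliberately simplified, insecure toy (a basic Diffie-Hellman key exchange plus a hash-based stream cipher), not the Signal or Labyrinth protocols Meta actually uses; it only shows why a relay server that sees the ciphertext, but never the parties’ private keys, cannot read the messages.

```python
import hashlib
import secrets

# Toy parameters: a small Mersenne prime. Real systems use vetted groups or
# elliptic curves -- this is for illustration only and is NOT secure.
P = 2**127 - 1
G = 3

def keypair():
    """Generate a private exponent and the public value derived from it."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    """Both parties derive the same key; it never crosses the network."""
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_crypt(key, data):
    """Toy stream cipher: SHA-256 keystream in counter mode, XORed with data."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# Alice and Bob exchange only public values; the server relaying their
# messages sees ciphertext it has no key to decrypt.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
assert shared_key(a_priv, b_pub) == shared_key(b_priv, a_pub)

ciphertext = xor_crypt(shared_key(a_priv, b_pub), b"meet at noon")
print(xor_crypt(shared_key(b_priv, a_pub), ciphertext))  # b'meet at noon'
```

Real protocols add authentication of the public keys (to stop man-in-the-middle attacks) and “ratcheting” so that each message uses a fresh key, which is part of why Meta says the rebuild took years.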

Default end-to-end encryption will roll out across Messenger and Facebook first, and then to Instagram after the Messenger upgrade is completed.

Not Just Security and Privacy 

Meta is also keen to highlight the other benefits of its new default end-to-end encryption for users, which include additional functionality such as the ability to edit messages, higher media quality, and disappearing messages. For example:

– Users can edit messages that may have been sent too soon, or that they’d simply like to change, for up to 15 minutes after the messages have been sent.

– Disappearing messages on Messenger will now last for 24 hours after being sent, and Meta says it’s improving the interface to make it easier to tell when ‘disappearing messages’ is turned on.

– To protect privacy and reduce the pressure on users to respond to messages immediately, Meta’s new read receipt control allows users to decide whether they want others to see when they’ve read their messages.

When? 

Considering that Facebook Messenger has approximately 1 billion users worldwide, the roll-out could take months.

Why Has It Taken So Long To Introduce? 

Meta says it’s taken so long (7 years) to introduce because its engineers, cryptographers, designers, policy experts and product managers have had to rebuild Messenger features from the ground up using the Signal protocol and Meta’s own Labyrinth protocol.

Also, Meta had intended to introduce default end-to-end encryption back in 2022 but had to delay the launch over concerns that it could prevent Meta from detecting child abuse on its platform.

Other Messaging Apps Already Have It 

Other messaging apps that have already introduced default end-to-end encryption include Meta-owned WhatsApp (in 2016), and Signal Foundation’s Signal messaging service, which has also been upgraded to guard against future encryption-breaking attacks (as much as is realistically possible), e.g. encryption-cracking by quantum computers.

Issues 

There are several issues involved with the introduction of end-to-end encryption in messaging apps. For example:

– Governments have long wanted to force tech companies to introduce ‘back doors’ to their apps using the argument that they need to monitor content for criminal activity and dangerous behaviour, including terrorism, child sexual abuse and grooming, hate speech, criminal gang communications, and more. Unfortunately, creating a ‘back door’ destroys privacy, leaves users open to other risks (e.g. hackers) and reduces trust between users and the app owners.

– Attempted legal pressure has been applied to apps like WhatsApp and Facebook Messenger, such as the UK’s Online Safety Act. The UK government wanted to have the ability to securely scan encrypted messages sent on Signal and WhatsApp as part of the law but has admitted that this can’t happen because the technology to do so doesn’t exist (yet).

There are many compelling arguments for having (default) end-to-end encryption in messaging apps, such as:

– Consumer protection, i.e. it safeguards financial information during online banking and shopping, preventing unauthorised access and misuse.

– Business security, e.g. when used in WhatsApp and VPNs, encryption protects sensitive corporate data, ensuring data privacy and reducing cybercrime risks.

– Safe Communication in conflict zones (as highlighted by Ukraine). For example, encryption can facilitate secure, reliable communication in war-torn areas, aiding in broadcasting appeals, organising relief, combating disinformation, and protecting individuals from surveillance and tracking by hostile forces.

– Ensuring the safety of journalists and activists, particularly in environments with censorship or oppressive regimes, by keeping information channels secure and private.

However, for most people using Facebook’s Messenger app, encryption is simply more of a general reassurance.

What Does This Mean For Your Business?

For Meta, the roll-out of default end-to-end encryption for Facebook and Instagram has been a long slog. However, its introduction brings Messenger in line with Meta’s popular WhatsApp, enhances user privacy and security, and helps Facebook claw back some ground in positioning itself as a stronger advocate for digital safety.

For UK businesses, this move offers enhanced protection for sensitive data and communication, aligning with growing demands for cyber security and providing some peace of mind. However, the move presents further challenges and frustration for law enforcement and the UK government, potentially complicating efforts to monitor criminal activities and enforce regulations like the Online Safety Act. Overall, the initiative could be said to underscore a broader trend towards prioritising user privacy and security in the digital landscape, as well as being another way for tech giants like Meta to compete with other apps like Signal. It’s also a way for Meta to demonstrate that it won’t be forced into bowing to government pressure that could destroy the integrity and competitiveness of its products and negatively affect user trust in its brand (which has taken a battering in recent years).

Tech Insight : Cameras In Airbnb Properties – What Are The Rules?

Following the Metro recently highlighting the issue of undisclosed cameras being used by a small number of Airbnb hosts, we take a look at what the rules say, reports in the news of this happening, and what you can do to protect yourself.

Do Airbnb Hosts Have The Right To Film Guests? 

You may be surprised to learn that the answer to this question is yes: hosts do have the right to install surveillance devices in certain areas of their properties (which may result in guests being filmed), but the practice is heavily regulated and restricted for privacy reasons.

When/Where/Why/How Is It OK For Hosts To Film Guests? 

The primary legitimate reason for hosts to install surveillance devices is for security purposes. They are not allowed to use them for any invasive or unethical purposes. Airbnb’s community standards, for example, emphasise respect for the privacy of guests and any violation of these standards can lead to the removal of the host from the platform.

Clear Disclosure 

Airbnb’s company rules say that monitoring devices (e.g. cameras) may be used, but only in common spaces (such as living rooms, hallways, and kitchens), and only if Airbnb hosts disclose them in their listings. In short, if a host has any kind of surveillance device, they must clearly mention it in their house rules or property listing so that guests are aware of these devices before booking the property.

What About Local Laws? 

Although disclosed cameras in common spaces may be acceptable under the company’s rules, Airbnb hosts must also adhere to local laws and regulations regarding surveillance. These can vary widely from place to place: in some regions, recording audio without consent is illegal, whereas video might be permissible if disclosed.

Hidden Cameras 

Even though Airbnb’s rules are relatively clear, there is anecdotal and news evidence that some Airbnb guests have discovered undisclosed surveillance devices in areas of Airbnb properties where they should not be installed. Examples that have made the news include:

– Back in 2019, it was reported that a couple staying for one night at an Airbnb property in Garden Grove, California discovered a camera hidden in a smoke detector directly above the bed.

– In July 2023, it was widely reported that a Texas couple had filed a lawsuit against an Airbnb owner, claiming he had put up hidden cameras in the Maryland property they had rented for two nights in August 2022. According to the court documents of Kayelee Gates and Christian Capraro, the couple became suspicious after Capraro discovered multiple hidden cameras disguised as smoke detectors in the bedroom and bathroom.

– Last month, a man (calling himself Ian Timbrell) alleged in a post on X that he had found a camera tucked between two sofa cushions at his Aberystwyth Airbnb.

Wouldn’t It Be Better To Disallow Any Cameras Inside An Airbnb Rental Property? 

Banning all cameras at Airbnb rental properties might initially seem like a straightforward solution to privacy concerns, yet there are important factors to consider. Some hosts may have a legitimate need to monitor common areas such as entrances for security purposes (perhaps the property is in an area where crime has been a problem), both to deter theft and vandalism and to provide evidence if a crime occurs. On the other hand, a complete ban on cameras would address the privacy concerns of guests, ensuring they feel comfortable and secure during their stay.

Airbnb’s current policy attempts to balance security and privacy by allowing cameras in certain areas while requiring full disclosure and banning them in private spaces like bedrooms and bathrooms. However, enforcing a complete ban on cameras would be very challenging, as hidden cameras are, by nature, difficult to detect and even if there was a ban, some owners may simply not comply. The Airbnb model is built on trust between hosts and guests, and clear communication and transparency about security measures, including camera usage, are crucial for maintaining this trust. While a total ban on cameras might seem like a simple solution to privacy concerns, it overlooks the legitimate security needs of hosts. A balanced approach with clear guidelines and strict enforcement might be more effective in protecting both guest privacy and host security.

How To Check 

If you’re worried about possibly being filmed/recorded by hidden and undisclosed surveillance devices in a rented Airbnb property, here are some ways you can search the property and potentially reveal such devices:

– Inspect any gadgets. Check smoke detectors or alarm clocks as they are known as places to hide cameras. Examine any other tech that seems out of place. You may also want to check the shower head.

– Search for lenses. For example, after making sure the room is dark, use a torch (such as your phone’s torch) to spot reflective camera lenses in objects like decor or appliances.

– Use phone apps like Glint Finder for Android or Hidden Camera Detector for iOS to find hidden cameras.

– Check storage areas, e.g. examine drawers, vents, and any openings in walls/ceilings.

– Check mirrors. Many people worry about the two-way mirrors with cameras behind them. Ways to check include lifting any mirrors to see the wall behind, turning off the room light and then shining a torch into the mirror to see if an area behind is visible.

– Check for infrared lights (which can be used in movement-sensitive cameras). Again, these may be spotted by using your phone’s camera in the dark and looking out for any small purple or pink lights that may be flashing or steady.

– Scan the property’s Wi-Fi network and smart home devices for unknown devices.

– Unplug the Airbnb property’s router. Stopping the Wi-Fi at source should disable surveillance devices and may reveal whether the owner is monitoring the property, e.g. it may prompt the host to ask about the router being unplugged.

– If you’re particularly concerned, buy and bring an RF signal detector with you. Widely available online, this is a device that can find any devices emitting Bluetooth or Wi-Fi signals, e.g. wireless surveillance cameras, tracking devices and power supplies.
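The Wi-Fi scanning tip above can be roughly automated. The sketch below is a hypothetical starting point, not a robust detector: it parses the output of the standard `arp -a` command (whose format varies by operating system; the parser here assumes the common macOS/Linux “name (ip) at mac” shape) and flags any MAC addresses that aren’t on your own list of known devices. Cameras on a separate network, or wired ones, won’t appear.

```python
import re
import subprocess

# Matches ARP entries of the form: "router.lan (192.168.1.1) at a4:2b:b0:11:22:33 on en0"
ARP_LINE = re.compile(r"\((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-fA-F:]{11,17})")

def parse_arp(output):
    """Return (ip, mac) pairs found in `arp -a`-style output."""
    return [(m.group("ip"), m.group("mac").lower())
            for m in ARP_LINE.finditer(output)]

def unknown_devices(output, known_macs):
    """Filter the ARP entries down to MAC addresses you don't recognise."""
    known = {m.lower() for m in known_macs}
    return [(ip, mac) for ip, mac in parse_arp(output) if mac not in known]

# Live usage (requires the `arp` command; output format varies by OS):
#   raw = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
#   for ip, mac in unknown_devices(raw, {"aa:bb:cc:dd:ee:ff"}):
#       print(f"Unrecognised device {mac} at {ip}")

# Demonstration on sample output:
sample = ("router.lan (192.168.1.1) at a4:2b:b0:11:22:33 on en0\n"
          "? (192.168.1.50) at de:ad:be:ef:00:01 on en0\n")
print(unknown_devices(sample, {"a4:2b:b0:11:22:33"}))
# [('192.168.1.50', 'de:ad:be:ef:00:01')]
```

An unrecognised address isn’t proof of a camera (it could be a smart TV or a neighbour’s device), but it tells you what to investigate, e.g. by looking up the MAC’s manufacturer prefix.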

What Does This Mean For Your Business? 

The issue of undisclosed cameras in Airbnb properties raises important considerations for Airbnb as a company, its hosts, and travellers. For Airbnb, the challenge lies in upholding and enforcing privacy standards to maintain user trust. This could involve enhancing their policies, perhaps even investing in technology or an inspection process for better detection of undisclosed devices, and/or providing more reassuring information about the issue, thereby safeguarding guest security, ensuring host accountability, and helping to protect their brand reputation.

It should be said that most Airbnb hosts abide by the company’s rules but are caught in a delicate balancing act between providing security and respecting the privacy of their guests. Any misuse of surveillance devices can, of course, have serious legal consequences and potentially harm a host’s reputation and standing on the platform. However, even just a few stories in the news about the actions of one or two hosts can have a much wider negative effect on consumer trust in Airbnb and can be damaging for all hosts. It could even simply deter people from using the platform altogether.

For some travellers, this situation may make them feel they must proactively take the responsibility for their own privacy (which may not reflect so well on Airbnb). They may feel as though they need to be informed about their rights, familiarise themselves with detection methods and remain vigilant during their stays.

This whole scenario emphasises the need for a continuous update of policies and practices by Airbnb to keep pace with technological advancements and the varying legal frameworks in different regions. It also highlights the importance of clear communication and transparency between the company, its hosts, and guests to maintain a trustworthy and secure environment.

Featured Article : Temporary Climb-Down By UK Government

In an apparent admission of defeat, the UK government has conceded that requiring scanning of platforms like WhatsApp for messages with harmful content, as required in the Online Safety Bill, is not (currently) feasible.

The ‘Spy Clause’ 

Under what’s been dubbed the ‘spy clause’ (Clause 122) in the UK’s Online Safety Bill, the government had stated Ofcom could issue notices to messaging apps like WhatsApp and Signal (which use end-to-end encryption) that would allow the deployment of scanning software. The reason given was to scan for child sex abuse images on the platforms. However, the messaging apps argued that this would effectively destroy the end-to-end encryption, an important privacy feature valued by customers. This led to both WhatsApp and Signal threatening to pull their services out of the UK if the Bill went through with the clause in it.

Also, some privacy groups, like the Open Rights Group, argued that forcing the scanning of private messages on apps amounted to an expansion of mass surveillance.

Climbdown 

However, in a recent statement to the House of Lords, junior arts and heritage minister Lord Stephen Parkinson announced that the government would be backing down on the issue. Lord Parkinson said: “When deciding whether to issue a notice, Ofcom will work closely with the service to help identify reasonable, technically feasible solutions to address child sexual exploitation and abuse risk, including drawing on evidence from a skilled persons report. If appropriate technology which meets these requirements does not exist, Ofcom cannot require its use.” 

In other words, the technology that enables scanning of messages without violating encryption doesn’t currently exist and, therefore, under the amended version of the bill, WhatsApp and Signal will not be required to have their messages scanned (until such technology does exist).

This is a significant climbdown for the government which has been pushing for ‘back doors’ and scanning of encrypted apps for many years, particularly since it was revealed that the London Bridge terror attack appeared to have been planned via WhatsApp.

Victory – Signal & WhatsApp 

Writing on ‘X’ (formerly Twitter), Meredith Whittaker, the president of Signal, said the government’s apparent climbdown was “a victory, not a defeat” for the tech companies. She also admitted, however, that it wasn’t a total victory, saying “we would have loved to see this in the text of the law itself.”

Also posting on ‘X,’ Will Cathcart, head of WhatsApp said that WhatsApp “remains vigilant against threats” to its end-to-end encryption service, adding that “scanning everyone’s messages would destroy privacy as we know it. That was as true last year as it is today.” 

Omnishambles 

Following the news of the government’s ‘spy clause’ climbdown, privacy advocates the Open Rights Group (ORG) highlighted the fact that, on the one hand, the government had conceded that the technology that would have been needed to scan messages didn’t exist, while on the other hand it appeared to say it hadn’t conceded. Describing the matter as an “omnishambles,” the ORG highlighted how, during an appearance on Times Radio, Michelle Donelan MP said that “We haven’t changed the bill at all” and that “further work to develop the technology was needed.” 

What Does This Mean For Your Business? 

For apps like WhatsApp and Signal, this is not only a victory against government pressure but is also good news for business as, presumably, they will continue to operate in the UK market.

This is also good news for many UK businesses that routinely use WhatsApp as part of their business communications and won’t need to worry (for the time being) about having their commercially (and personally) sensitive messages scanned, thereby posing a risk to privacy and security, and perhaps increasing the risk of hacks and data breaches. It appears that the UK government has been forced to admit the technology does not yet exist that can scan messages on end-to-end encrypted services and maintain the integrity of that end-to-end encryption at the same time. It also appears that it may realistically take quite some time (years) before this technology exists, thereby making the victory all the sweeter for the encrypted apps.

The government’s climbdown on ‘clause 122’ (the ‘spy clause’), is also being celebrated by the many privacy groups that have long argued against it on the grounds of it enabling mass surveillance.

Featured Article : UK Gov Pushing To Spy On WhatsApp (& Others)

The recent amendment to the Online Safety Bill, which requires that a compulsory report be written for Ofcom by a “skilled person” before encrypted-app companies can be forced to scan messages, has drawn even more criticism of this controversial bill, which would bypass security in apps and give the government (and therefore potentially many others) access to sensitive and personal information.

What Amendment? 

In the House of Lords debate, which was the final session of the Report Stage and the last chance for the Online Safety Bill to be amended before it becomes law, government minister Lord Parkinson amended the bill to require that a report be written for Ofcom by a “skilled person” (appointed by Ofcom) before powers can be used to force a provider / tech company (e.g. WhatsApp or Signal) to scan its messages. The stated purpose of scanning messages using the powers of the Online Safety Bill is (ostensibly) to uncover child abuse images.

The amendment states that “OFCOM may give a notice under section 111(1) to a provider only after obtaining a report from a skilled person appointed by OFCOM under section 94(3).” 

Prior to the amendment, the report had been optional.

Why Is A Compulsory Report Stage So Important? 

The amendment says that the report is needed before companies can be forced to scan messages “to assist OFCOM in deciding whether to give a notice…. and to advise about the requirements that might be imposed by such a notice if it were to be given”. In other words, the report will be used to assess the impact of scanning on freedom of expression or privacy, and to explore whether other, less intrusive alternative technologies could be used instead.

It is understood, therefore, that the report’s findings will be used to help decide whether to force a tech firm to scan messages. Under the detail of the amendment, a summary of the report’s findings must be shared with the tech firm concerned.

Reaction 

Tech companies may be broadly in agreement with the aims of the bill. However, operators of encrypted messaging services (e.g. WhatsApp, Signal and others) have always opposed the detail of the bill that would force them to scan user messages before they are encrypted (client-side scanning). Operators say that this completely undermines the privacy and security of encrypted messaging, and they object to the idea of having to run government-mandated scanning services on their devices. Also, they argue that this could leave their apps more vulnerable to attack.

The latest amendment, therefore, has not changed this situation for the tech companies and has led to more criticism and more objections. Many objections have also been aired by campaign and rights groups such as Index on Censorship and The Open Rights Group, who have always opposed what they call the “spy clause” in the bill. For example:

– The Ofcom appointed “skilled person” could simply be a consultant or political appointee, and having these people oversee decisions about free speech and privacy rights would not amount to effective oversight.

– Judicial oversight should be a bare minimum and a report written by just a “skilled person” wouldn’t be binding and would lack legal authority.

Other groups, however, such as the NSPCC, have broadly backed the bill in terms of finding ways to make tech firms mitigate the risks of child sexual abuse when designing their apps or adding features, e.g. end-to-end encryption.

Another Amendment 

Another House of Lords amendment to the bill requires Ofcom to look at the possible impact of the use of technology on journalism and the protection of journalistic sources. Under the bill, Ofcom would be able to force tech companies to use what’s been termed “accredited technology” to scan messages for child sexual abuse material.

This has also been met with similar criticisms over the idea of government-mandated scanning technology’s effects on privacy, freedom of speech, and potentially being used as a kind of monitoring and surveillance. WhatsApp, Signal, and Apple have all opposed the scanning idea, with WhatsApp and Signal reportedly indicating that they would not comply.

Breach Of International Law? 

Clause 9(2) of the Online Safety Bill, which requires platforms to prevent users from “encountering” certain “illegal content”, has also been soundly criticised recently. The clause means that platforms which host user-generated content will need to immediately remove any such content, which covers a broad range, or face considerable fines, blocked services, or even jail for executives. Quite apart from the technical and practical challenges of achieving this effectively at scale, critics argue that the clause threatens free speech in the UK and lacks the detail needed for legislation.

Advice provided by The Open Rights Group suggests that the clause may even be a breach of international law in that there could be “interference with freedom of expression that is unforeseeable” and goes against the current legal order on platforms.

It’s also been reported that Wikipedia could withdraw from the UK over the rules in the bill.

Investigatory Powers Act Objections (The Snooper’s Charter) 

Suggested new updates to the Investigatory Powers Act (IPA) 2016 (sometimes called the ‘Snooper’s Charter’) have also come under attack from tech firms, not least Apple. For example, the government wants messaging services, e.g. WhatsApp, to clear security features with the Home Office before releasing them to customers. The update to the IPA would mean that the UK’s Home Office could demand, with immediate effect, that security features are disabled, without telling the users/the public. Currently, a review process with independent oversight (with the option of appeal by the tech company) is needed before any such action could happen.

The Response 

The response from tech companies has been swift and negative, with Apple threatening to remove FaceTime and iMessage from the UK if the planned update to the Act goes ahead.

Concerns about granting the government the power to secretly remove security features from messaging app services include:

– It could allow government surveillance of users’ devices by default.

– It could reduce security for users, seriously affect their privacy and freedom of speech, and could be exploited by adversaries, whether they are criminal or political.

– Building backdoors into encrypted apps essentially means there is no longer end-to-end encryption.

Apple 

Apple’s specific response to the proposed updates/amendments (which will be subject to an eight-week consultation anyway) is that:

– It refuses to make changes to security features specifically for one country that would weaken a product for all users globally.

– Some of the changes would require issuing a software update, which users would have to be told about, thereby stopping changes from being made secretly.

– The proposed amendments threaten security and information privacy and would affect people outside the UK.

What Does This Mean For Your Business? 

There’s broad agreement about the aims of the UK’s Online Safety Bill and IPA in terms of wanting to tackle child abuse, keep people safe, and make tech companies take more responsibility and measures to improve safety. However, these are global tech companies for which UK users represent only a small part of the total user base, and ideas like building back doors into secure apps, running government-approved scanning of user content, and using reports written by consultants/political appointees to justify scanning all go against principles of privacy, one of the key features of apps like WhatsApp.

Allowing governments access into apps and granting them powers to turn off security ‘as and when’ raises issues and suspicions about free speech, government monitoring and surveillance, legal difficulties, and more. In short, even though the UK government wants to press ahead with the new laws and amendments, there is still a long way to go before there is any real agreement with the tech companies. In fact, it looks likely that they won’t comply, and some, like WhatsApp, have simply said they’ll pull out of the UK market, which could be very troublesome for UK businesses, charities, groups and individuals.

The tech companies also have a point in that it seems unreasonable to expect them to alter their services just for one country in a way that could negatively affect their users in other countries. As some critics have pointed out, if the UK wants to be a leading player on the global tech stage, alienating the big tech companies may not be the best way to go about it. It seems that a lot more talking and time will be needed to get anywhere near real-world workable laws and, at the moment, with the UK government being seen by many as straying into areas that are alarming rights groups, some tech companies are suggesting the government ditch their new laws and start again.

Expect continued strong resistance from tech companies going forward if the UK government doesn’t slow down or re-think many aspects of these new laws – watch this space.