Tech News : Fitbit Data Transfer Complaints

Vienna-based advocacy group ‘Noyb’ has filed complaints against Google-owned Fitbit, alleging that it has violated the EU’s GDPR by illegally exporting user data.

Complaints In Three Countries 

Noyb, which stands for ‘None Of Your Business’ and was founded by privacy activist Max Schrems, has filed three complaints against Fitbit – in Austria, the Netherlands and Italy.

Why? 

Noyb alleges that Fitbit forces users to consent to data transfers outside the EU, to the US and other countries (with different data protection laws), without providing users with the possibility to withdraw their consent, thereby potentially violating GDPR’s requirements. Noyb says that the only option users have to stop the “illegal processing” is to completely delete their Fitbit account.

How Would This Go Against GDPR? 

There are several ways that this (alleged) practice by Google’s Fitbit could violate GDPR. For example:

– GDPR mandates that consent must be freely given. If users are forced to agree to data transfers with no ability to withdraw, the consent is not freely given.

– Under GDPR, users must be informed about how their data will be used and processed. If the data transfer is a condition that users cannot opt-out of, then the consent cannot be considered specific or informed.

In relation to these points, Noyb says that because Fitbit (allegedly) forces users to consent to sharing sensitive data without providing them with clear information about the possible implications or the specific countries their data goes to, the consent is neither free, informed, nor specific (as GDPR requires).

Sensitive Data 

GDPR also emphasises that only the data that is necessary for the intended purpose should be collected and processed. Fitbit forcing data transfers may violate this principle if the data being transferred is broader than what is strictly necessary for the service provided.

In relation to this, Noyb alleges that Fitbit’s privacy policy says that the shared data not only includes things like a user’s email address, date of birth and gender, but can also include “data like logs for food, weight, sleep, water, or female health tracking; an alarm; and messages on discussion boards or to your friends on the Services”.  This has raised concerns that, for example, the sharing of menstrual tracking data could be used in court cases where abortion care is criminalised, especially considering that sharing this kind of data is not common practice even in specialised menstrual tracking apps.

Also, Noyb alleges that the collected Fitbit data can even be shared for processing with third-party companies, the locations of which are unknown, and that it’s “impossible” for users to find out which specific data is affected.

‘Take It Or Leave It’ Approach? 

One other aspect of GDPR is that, to ensure users can change their mind, every person has the right to withdraw their consent. Noyb says that Fitbit’s privacy policy states that the only way to withdraw consent is to delete an account, which would mean losing all previously tracked workouts and health data, even for those on a premium subscription costing 79.99 euros per year. Noyb argues that, as a result, although people may buy a Fitbit for its features, there appears to be no realistic way to regain control of their data without making the product useless.

Maartje de Graaf, Data Protection Lawyer at Noyb says: “First, you buy a Fitbit watch for at least 100 euros. Then you sign up for a paid subscription, only to find that you are forced to “freely” agree to the sharing of your data with recipients around the world. Five years into the GDPR, Fitbit is still trying to enforce a ‘take it or leave it’ approach.” 

Blank Cheque? 

Bernardo Armentano, Data Protection Lawyer at Noyb, says: “Fitbit wants you to write a blank check, allowing them to send your data anywhere in the world. Given that the company collects the most sensitive health data, it’s astonishing that it doesn’t even try to explain its use of such data, as required by law.” 

Fine Could Be € Billions 

According to Noyb, if the complaints are upheld by data regulators, Google could face fines of up to 11.28 billion euros over Fitbit’s alleged data protection violations, based on the turnover last year of Alphabet (Google’s parent company).
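
For context, the sketch below (not part of Noyb’s complaint) shows how such a ceiling is typically calculated: GDPR Article 83(5) caps fines at the higher of 20 million euros or 4 per cent of an undertaking’s worldwide annual turnover for the preceding financial year. The turnover figure used here is an assumption, simply back-calculated from the 11.28-billion-euro ceiling quoted above.

```python
# Hedged sketch: estimating the maximum GDPR fine under Article 83(5).
# The cap is the higher of EUR 20 million or 4% of worldwide annual turnover.

def max_gdpr_fine_eur(annual_turnover_eur: float) -> float:
    """Return the Article 83(5) fine ceiling for a given annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Assumed turnover: roughly EUR 282 billion, the value implied by the
# 11.28-billion-euro figure cited by Noyb (11.28bn / 4%).
alphabet_turnover_eur = 282_000_000_000

print(f"Fine ceiling: EUR {max_gdpr_fine_eur(alphabet_turnover_eur):,.0f}")
# -> Fine ceiling: EUR 11,280,000,000
```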

There appears to be no publicly available comment from Google about Noyb’s allegations at the time of writing this article.

What Does This Mean For Your Business? 

Google acquired Fitbit in 2021 and, at the time, some commentators noted that in addition to expanding its move into wearables, it may also have been motivated by the lure of the health data of millions of Fitbit customers (potentially for profiling and advertising) and by the chance to improve its competitive position in the lucrative healthcare tech space. It was also noted at the time that Fitbit’s corporate partnerships with insurance companies and corporate wellness programmes may have been attractive to Google.

Now, just a couple of years down the line, it’s the data aspect of the deal that appears to have landed Google in some hot water. Noyb’s complaints against Google-owned Fitbit could have a ripple effect that goes well beyond just a potentially hefty fine. With a penalty that could be up to 11.28 billion euros, the situation could have serious financial repercussions, and the case could set a precedent for how Google and other tech giants handle user data (especially sensitive health information), forcing them to change their global data policies.

It’s been noted, for example, in analyst GlobalData’s recent tech regulation report that data protection regulators look likely to continue closer scrutiny of companies in 2023, so there could be more trouble to come for other tech companies relating to which data they collect, how they share it, and around matters of consent.

Some may argue that Google, several years down the line from GDPR’s introduction, needs to invest more resources in compliance to avoid facing similar allegations related to other products or services.

For businesses that similarly rely on user data, this case is a wake-up call to thoroughly review their data collection and transfer policies to ensure they align with GDPR requirements. Businesses must offer clear, informed choices to users about how their data is used, especially if it crosses borders. The situation with Fitbit highlights the reputational damage and legal risks involved in “take it or leave it” approaches to data consent. If Fitbit’s alleged actions are deemed a violation of GDPR, it could trigger a domino effect, prompting closer scrutiny of other businesses that have similar policies.

For users of Fitbit and similar devices, this case could lead to more transparent data practices, potentially providing them with greater control over their personal information. Reading about what may be happening to their extremely sensitive data could make users more cautious and discerning about the permissions they grant to these apps. Given the sensitive nature of the health data involved, ranging from sleep patterns to menstrual cycles, users may start to demand more robust privacy protections, and this case could also encourage users to seek alternatives that offer better data protection guarantees.

Featured Article : Zoom Data Concerns

In this article, we look at why Zoom found itself as the subject of a backlash over an online update to its terms related to AI, what its response has been, plus what this says about how businesses feel about AI.

What Happened? 

Communications app Zoom updated its terms of service in March, but the change only came to wider attention after being publicised on a popular forum in recent weeks. Zoom has since faced criticism, with many tech commentators expressing alarm that the change appeared to go against its stated policy of not using customer data to train AI.

The Update In Question 

The update to Section 10 of its terms of service, which Zoom says was to explain “how we use and who owns the various forms of content across our platform”, gave Zoom a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights” to use Customer Content, i.e. data, content, communications, messages, files, documents and more, for “machine learning, artificial intelligence, training, testing” (and other product development purposes).

The Reaction 

Following the details of the update being posted and discussed on the ‘Hacker News’ forum, there was a backlash against Zoom, with many commentators unhappy at the prospect of AI (e.g. generative AI chatbots, AI image generators and Zoom’s own AI models, namely Zoom IQ, and more) being given access to what should be private Zoom calls and other communications.

What’s The Problem? 

There are several concerns that individuals, businesses and other organisations may have over their “Customer Content” being used to train AI. For example:

– Privacy concerns. For example, worries that personal or sensitive information in video calls could be used in ways the participants never intended.

– Potential security risks. For example, if Zoom stores video and audio data for AI training, it increases the chance of that data being exposed in a hack or breach. Also, it’s possible with generative AI models that private information could be revealed if a user of an AI chatbot asked the right questions.

– Ethical questions. Some users may simply not have given clear permission for their data to be used for AI training, raising issues of consent and fairness.

– Legal issues. For example, depending on the country, using customer data in this manner might violate data protection laws like GDPR, which could get both the company and its users into legal trouble. Also, Zoom users or admins for business accounts could click “OK” to the terms of service without fully realising what they’re agreeing to, and employees who use the business Zoom account may be unaware of the choice their employer has made on their behalf. It’s also been noted by some online commentators that Zoom’s terms of service still permit it to collect a lot of data without consent, e.g. what’s grouped under the term ‘Service Generated Data.’

Another Update Prompted 

The backlash, the criticism of Zoom and the doubtless fear of some users leaving the platform over this controversy appear to have prompted another update to the company’s terms of service, which Zoom says was “to reorganise Section 10 and make it easier to understand”. 

The second update was a sentence, in bold, added on the end of Section 10.2 saying: “Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models.” 

On the company’s blog, Chief Product Officer Smita Hashim reiterated: “Following feedback received regarding Zoom’s recently updated terms of service Zoom has updated our terms of service and the below blog post to make it clear that Zoom does not use any of your audio, video, chat, screen sharing, attachments, or other communications like customer content (such as poll results, whiteboard, and reactions) to train Zoom’s or third-party artificial intelligence models.” 

The Online Terms of Service Don’t Affect Large Paying Customers 

Smita Hashim explains in the blog post that the terms of service typically cover online customers, but “different contracts exist for customers that buy directly from us” such as “enterprises and customers in regulated verticals like education and healthcare.” Hashim states, therefore, that “updates to the online terms of service do not impact these customers.” 

What AI Does Zoom Offer? 

Zoom has recently introduced two generative AI features to its platform – Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose – available on a free trial and offering automated meeting summaries and AI-powered chat composition.

To customers worried that these tools may be trained using ‘Customer Content’, Zoom says, “We inform you and your meeting participants when Zoom’s generative AI services are in use”, and has specifically assured customers that it does not use customer content (e.g. poll results, whiteboard content, or user reactions) to train Zoom’s own (or third-party) AI models.

Criticism 

In 2020, Zoom faced criticism for initially planning to offer end-to-end encryption only to paying users. Also, given that Zoom is the company whose product enabled (and is all about) remote working, it was criticised for asking staff living within a “commutable distance” (i.e. 50 miles / 80km) of the company’s offices to come into the office twice a week, having reportedly said (at one time) that all staff could work remotely indefinitely.

What Does This Mean For Your Business? 

This story shows how, at a time when data is needed in vast quantities to train AI (a technology that’s growing at a frightening rate and has been the subject of dire warnings about the threats it could pose), clear data protections in this area are lagging or missing altogether.

Yes, there are data protection laws. Arguably, however, given the lack of understanding of how AI models work and what they need, service terms may not give a clear picture of what’s being consented to (or not) when using AI. There’s a worry, therefore, that the boundaries of data protection, privacy, security, ethics, legality, and other constraints may be overstepped without users knowing it, in the rush for more data as clear regulation is left behind.

Zoom’s extra assurances may have gone some way toward calming the backlash and reassuring users, but the fact that there was such a backlash over the contents of an old update shows the level of confusion and mistrust around this relatively new technological development and how it could affect everyone.

Tech News : Opting Out Of AI-Targeting

The EU’s new Digital Services Act allows social media users to opt out of AI-personalised content feeds, i.e. those that recommend content based on profiling.

What Is The DSA? 

The Digital Services Act is a new EU law designed to protect users. It applies to any digital company operating in and serving the EU, with “very large online platforms” (those with over 45 million EU users) and very large search engines subject to the toughest rules.

The DSA focuses on five key areas of user protection which are:

1. Illegal products. I.e. platforms will need to stop the sale of illegal products.

2. Illegal content. This means that platforms (e.g. social media platforms) need to take measures to stop hate speech, child abuse and harassment, electoral interference and more, whilst safeguarding free speech and data protection.

3. Protection of children. This includes large online platforms and search engines having to take a wide range of measures to protect children, such as protection from being targeted with advertising based on their personal data or cookies, protecting their privacy, redesigning content “recommender systems” to reduce risks to children, and much more.

4. Racial and gender diversity. This means that companies (e.g. the large social media platforms) can’t target users with adverts based on personal data such as race, gender, and religion.

5. Banning so-called “dark patterns.” This means protecting consumers from manipulative practices designed to exploit their vulnerabilities or trick/manipulate them into buying things they don’t need or want and making it difficult for them to cancel. For example, this includes fake timers on deals, hiding information about signing up to a subscription and making subscription cancellation steps too complicated for users.

User Empowerment 

On the matter of user empowerment, the DSA means that users (e.g. users of social media platforms) must now be given clear information on why they are recommended certain content and must have the right (and a clear way) to opt out of recommendation systems based on profiling (tracking). This has led to the large social media platforms making changes. For example:

– Meta’s Facebook launching a chronological news Feeds tab (last July) whereby users can see posts from their friends, groups, pages and more in chronological order, with no “Suggested For You” posts shown. Also, since February, Meta’s apps, including Facebook, have stopped showing ads to users aged 13-17 based on their activity in the apps.

– Google’s YouTube stopping next video recommendations based on profiling for logged in users with the ‘watch history’ feature turned off.

– Instagram introducing a “Not Personalised” option instead of just an ‘Explore’ tab based on algorithmic content selections (personalised – “For you”).

– TikTok rolling out the option for users in Europe to opt out of its personalised algorithm-based feed, i.e. as TikTok says, if users opt out of “For You” and “LIVE” feeds, it will instead show “popular videos from both the places where they live and around the world, rather than recommending content to them based on their personal interests”. Also, from July, TikTok stopped showing personalised ads based on online activity to users in Europe aged 13-17.

– Snapchat has announced four new measures that it’s taking in the EU to comply with the DSA, including giving users “the ability to better understand why content is being shown to them and have the ability to opt out of a personalised Discover and Spotlight content experience.”

Amazon and Google

With the DSA also affecting very large search engines and companies like Amazon, a couple of examples of how they are complying include:

– Amazon creating a new channel for submitting notices against suspected illegal products and content.

– Google promising to expand data access in order to increase transparency, helping users to understand more about how Google Search, YouTube, Google Maps, Google Play, and Shopping work.

What Does This Mean For Your Business? 

Tech companies have known about the basic requirements of the DSA for three years and have had four months to comply with the act’s rules. Given the size of the “very large” social media companies and search engines, however, compliance has required considerable work (with some claiming thousands of staff have been involved), cost, rethinking and re-organising. The DSA’s rules are far-reaching, and compliance means increased operational costs, e.g. due to necessary investment in technical infrastructure, legal fees, human resources for content moderation, and data governance systems. Also, the stricter regulations on data collection, content, and restrictions on targeting could limit ad revenues and user engagement. There’s also the added challenge of a greater workload for social media companies – e.g. the need for more effective and continuous monitoring, user outreach, and updates.

That said, users may welcome the chance to essentially opt out of being targeted, and many may say that giving greater protection to users, especially children, is long overdue and that legislation appears to have been necessary to make change happen. For the very large tech companies, although they may not be happy with parts of the DSA, they have recognised that compliance is now crucial for sustained market access and legal operation within the EU, and that the fines for non-compliance are very steep (up to 6 per cent of turnover, plus potential costly suspension of the service) and, along with the bad publicity, something they’d like to avoid.

The new rules have only just come into force, so it remains to be seen how the large tech companies fare going forward in a fast-evolving tech landscape that now has the added complications of AI.

Featured Article : UK Gov Pushing To Spy On WhatsApp (& Others)

The recent amendment to the Online Safety Bill, which means a compulsory report must be written for Ofcom by a “skilled person” before encrypted app companies can be forced to scan messages, has led to even more criticism of this rather controversial bill, which would bypass security in apps and give the government (and therefore any number of people) more access to sensitive and personal information.

What Amendment? 

In the House of Lords debate, which was the final session of the Report Stage and the last chance for the Online Safety Bill to be amended before it becomes law, Government minister Lord Parkinson amended the bill to require that a report be written for Ofcom by a “skilled person” (appointed by Ofcom) before powers can be used to force a provider / tech company (e.g. WhatsApp or Signal) to scan its messages. The stated purpose of scanning messages using the powers of the Online Safety Bill is (ostensibly) to uncover child abuse images.

The amendment states that “OFCOM may give a notice under section 111(1) to a provider only after obtaining a report from a skilled person appointed by OFCOM under section 94(3).” 

Prior to the amendment, the report had been optional.

Why Is A Compulsory Report So Important? 

The amendment says that the report is needed before companies can be forced to scan messages “to assist OFCOM in deciding whether to give a notice…. and to advise about the requirements that might be imposed by such a notice if it were to be given”. In other words, the report will assess the impact of scanning on freedom of expression or privacy, and explore whether other, less intrusive technologies could be used instead.

It is understood, therefore, that the report’s findings will be used to help decide whether to force a tech firm to scan messages. Under the detail of the amendment, a summary of the report’s findings must be shared with the tech firm concerned.

Reaction 

Tech companies may be broadly in agreement with the aims of the bill. However, operators of encrypted messaging services (e.g. WhatsApp and Signal, among others) have always opposed the detail of the bill that would force them to scan user messages before they are encrypted (client-side scanning). Operators say that this completely undermines the privacy and security of encrypted messaging, and they object to the idea of having to run government-mandated scanning services on their devices. They also argue that this could leave their apps more vulnerable to attack.

The latest amendment, therefore, has not changed this situation for the tech companies and has led to more criticism and more objections. Many objections have also been aired by campaign and rights groups such as Index on Censorship and The Open Rights Group, who have always opposed what they call the “spy clause” in the bill. For example:

– The Ofcom appointed “skilled person” could simply be a consultant or political appointee, and having these people oversee decisions about free speech and privacy rights would not amount to effective oversight.

– Judicial oversight should be a bare minimum and a report written by just a “skilled person” wouldn’t be binding and would lack legal authority.

Other groups, however, such as the NSPCC, have broadly backed the bill in terms of finding ways to make tech firms mitigate the risks of child sexual abuse when designing their apps or adding features, e.g. end-to-end encryption.

Another Amendment 

Another House of Lords amendment to the bill requires Ofcom to look at the possible impact of the use of technology on journalism and the protection of journalistic sources. Under the amendment, Ofcom would be able to force tech companies to use what’s been termed “accredited technology” to scan messages for child sexual abuse material.

This has also been met with similar criticisms over the idea of government-mandated scanning technology’s effects on privacy, freedom of speech, and potentially being used as a kind of monitoring and surveillance. WhatsApp, Signal, and Apple have all opposed the scanning idea, with WhatsApp and Signal reportedly indicating that they would not comply.

Breach Of International Law? 

Clause 9(2) of the Online Safety Bill, which requires platforms to prevent users from “encountering” certain “illegal content”, has also been soundly criticised recently. This clause means that platforms which host user-generated content will need to immediately remove any such content, which covers a broad range of material, or face considerable fines, blocked services, or even jail for executives. Quite apart from the technical and practical challenges of achieving this effectively at scale, criticisms of the clause include that it threatens free speech in the UK and lacks the detail required of legislation.

Advice provided by The Open Rights Group suggests that the clause may even be a breach of international law, in that there could be “interference with freedom of expression that is unforeseeable”, and that it goes against the current legal order on platforms.

It’s also been reported that Wikipedia could withdraw from the UK over the rules in the bill.

Investigatory Powers Act Objections (The Snooper’s Charter) 

Suggested new updates to the Investigatory Powers Act (IPA) 2016 (sometimes called the ‘Snooper’s Charter’) have also come under attack from tech firms, not least Apple. For example, the government wants messaging services, e.g. WhatsApp, to clear security features with the Home Office before releasing them to customers. The update to the IPA would mean that the UK’s Home Office could demand, with immediate effect, that security features are disabled, without telling the users/the public. Currently, a review process with independent oversight (with the option of appeal by the tech company) is needed before any such action could happen.

The Response 

The response from tech companies has been swift and negative, with Apple threatening to remove FaceTime and iMessage from the UK if the planned update to the Act goes ahead.

Concerns about granting the government the power to secretly remove security features from messaging app services include:

– It could allow government surveillance of users’ devices by default.

– It could reduce security for users, seriously affect their privacy and freedom of speech, and could be exploited by adversaries, whether they are criminal or political.

– Building backdoors into encrypted apps essentially means there is no longer end-to-end encryption.

Apple 

Apple’s specific response to the proposed updates/amendments (which will be subject to an eight-week consultation anyway) is that:

– It refuses to make changes to security features specifically for one country that would weaken a product for all users globally.

– Some of the changes would require issuing a software update, which users would have to be told about, thereby stopping changes from being made secretly.

– The proposed amendments threaten security and information privacy and would affect people outside the UK.

What Does This Mean For Your Business? 

There’s broad agreement about the aims of the UK’s Online Safety Bill and IPA in terms of wanting to tackle child abuse, keep people safe, and make tech companies take more responsibility and measures to improve safety. However, these are global tech companies for which UK users represent only a small part of the total user base, and ideas like building back doors into secure apps, running government-approved scanning of user content, and using reports written by consultants/political appointees to justify scanning all go against ideas of privacy, one of the key features of apps like WhatsApp.

Allowing governments access into apps and granting them powers to turn off security ‘as and when’ raises issues and suspicions about free speech, government monitoring and surveillance, legal difficulties, and more. In short, even though the UK government wants to press ahead with the new laws and amendments, there is still a long way to go before there is any real agreement with the tech companies. In fact, it looks likely that they won’t comply and some, like WhatsApp, have simply said they’ll pull out of the UK market, which could be very troublesome for UK businesses, charities, groups and individuals.

The tech companies also have a point in that it seems unreasonable to expect them to alter their services just for one country in a way that could negatively affect their users in other countries. As some critics have pointed out, if the UK wants to be a leading player on the global tech stage, alienating the big tech companies may not be the best way to go about it. It seems that a lot more talking and time will be needed to get anywhere near workable, real-world laws and, at the moment, with the UK government being seen by many as straying into areas that alarm rights groups, some tech companies are suggesting the government ditch its new laws and start again.

Expect continued strong resistance from tech companies going forward if the UK government doesn’t slow down or re-think many aspects of these new laws – watch this space.