Tech News : €345m Children’s Data Privacy Fine For TikTok

Video-focused social media platform TikTok has been fined €345m by Ireland’s Data Protection Commission (DPC) over its handling of child users’ personal data.

The Processing of Personal Data 

The fine, along with a reprimand and an order requiring the company to bring its data processing into compliance within three months, was issued in relation to how the company processed personal data relating to child users in terms of:

– Certain platform settings, such as the public-by-default settings and the settings associated with the ‘Family Pairing’ feature.

– Age verification in the registration process.

During its investigation into TikTok, the DPC also looked at the transparency information provided to children. The investigation focused on the period from 31 July 2020 to 31 December 2020.

Explained 

Explained in basic terms, TikTok was fined because, according to the DPC’s findings:

– The profile settings for child users’ accounts being set to public-by-default meant that anyone (on or off TikTok) could view the content posted by the child user. The DPC said this also posed risks to children under 13 who had gained access to TikTok.

– The ‘Family Pairing’ setting allowed a non-child user (who could not be verified as the parent or guardian) to pair their account to the child user’s account. The DPC says this made it possible for non-child users to enable Direct Messages for child users over 16, thereby posing a risk to those users.

– Child users hadn’t been provided with sufficient transparency information.

– The DPC said that TikTok had implemented “dark patterns” by “nudging users towards choosing more privacy-intrusive options during the registration process, and when posting videos.” 

TikTok Says…

TikTok has been reported as saying that it disagrees with the findings and the level of the fine. TikTok also said: “The criticisms are focused on features and settings that were in place three years ago, and that we made changes to well before the investigation even began, such as setting all under 16 accounts to private by default”.

Fines

This isn’t the first fine for TikTok in relation to this subject. For example, in February 2019, the company was fined $5.7 million by the U.S. Federal Trade Commission (FTC) for collecting data from minors without parental consent. Also, in April this year, TikTok was fined £12.7m by the ICO for allowing children under 13 to use the platform (in 2020).

The level of TikTok’s most recent fine, however, is not as much as the €1.2bn fine issued to Meta in May for mishandling people’s data in transfers between Europe and the US.

Banned In Many Countries

In addition to fines in some of the countries where the TikTok app is allowed, a mixture of reasons, including worries about data privacy for young users, possible links to the Chinese state, incompatibility with some religious laws, and various political situations, has resulted in TikTok being banned (either fully or on government devices) in Somalia, Norway, New Zealand, The Netherlands, India, Denmark, Canada, Belgium, Australia, and Afghanistan.

What Does This Mean For Your Business?

Back in 2020, TikTok was experiencing massive growth as the most downloaded app in the world. It was also the year when former U.S. President Donald Trump issued an executive order aiming to ban TikTok in the United States; by then, the platform had already picked up its first big fine ($5.7 million) from the FTC (in the US) over collecting data from minors without parental consent.

As pointed out by TikTok, this latest, much larger European fine dates back to issues from around the same time, which TikTok argues it had already addressed before the DPC’s investigation began. This story highlights how important it is to create a safe digital environment for children and young people, who are frequent users of the web and particularly of social media platforms. It also highlights how important it is for businesses to pay particular attention to data regulations relating to children and young users, and to review systems and processes with this in mind to ensure maximum efforts are made to maintain privacy and safety.

Furthermore, it is also an example of the importance of having regulators with ‘teeth’ that can impose substantial fines and generate bad publicity for non-compliance, which can help motivate the big tech companies to take privacy matters more seriously. TikTok’s worries, however, aren’t just related to data privacy issues. Ongoing frosty political relations between China and the West mean that its relationship with the Chinese government is still in question, and this, together with the bans of the app in many countries, means it remains under scrutiny, perhaps more than other (US-based) social media platforms.

Tech News : Fitbit Data Transfer Complaints

Vienna-based advocacy group ‘Noyb’ has filed complaints against Google-owned Fitbit, alleging that it has violated the EU’s GDPR by illegally exporting user data.

Complaints In Three Countries 

Noyb, which stands for ‘None Of Your Business’ and was founded by privacy activist Max Schrems, has filed three complaints against Fitbit – in Austria, the Netherlands, and Italy.

Why? 

Noyb alleges that Fitbit forces users to consent to data transfers outside the EU, to the US and other countries (with different data protection laws), without providing users with the possibility to withdraw their consent, thereby potentially violating GDPR’s requirements. Noyb says that the only option users have to stop the “illegal processing” is to completely delete their Fitbit account.

How Would This Go Against GDPR? 

There are several ways that this (alleged) practice by Google’s Fitbit could violate GDPR. For example:

– GDPR mandates that consent must be freely given. If users are forced to agree to data transfers with no ability to withdraw, the consent is not freely given.

– Under GDPR, users must be informed about how their data will be used and processed. If the data transfer is a condition that users cannot opt out of, then the consent cannot be considered specific or informed.

In relation to these points, Noyb says that because Fitbit (allegedly) forces users to consent to sharing sensitive data without providing them with clear information about the possible implications or the specific countries their data goes to, the consent is neither free, informed, nor specific (as GDPR requires).

Sensitive Data 

GDPR also emphasises that only the data that is necessary for the intended purpose should be collected and processed. Fitbit forcing data transfers may violate this principle if the data being transferred is broader than what is strictly necessary for the service provided.

In relation to this, Noyb alleges that Fitbit’s privacy policy says that the shared data not only includes things like a user’s email address, date of birth and gender, but can also include “data like logs for food, weight, sleep, water, or female health tracking; an alarm; and messages on discussion boards or to your friends on the Services”.  This has raised concerns that, for example, the sharing of menstrual tracking data could be used in court cases where abortion care is criminalised, especially considering that sharing this kind of data is not common practice even in specialised menstrual tracking apps.

Also, Noyb alleges that the collected Fitbit data can even be shared for processing with third-party companies, the locations of which are unknown, and that it’s “impossible” for users to find out which specific data is affected.

‘Take It Or Leave It’ Approach? 

One other aspect of GDPR is that, to ensure users can change their mind, every person has the right to withdraw their consent. Noyb says that Fitbit’s privacy policy states that the only way to withdraw consent is to delete an account, which would mean losing all previously tracked workouts and health data, even for those on a premium subscription costing 79.99 euros per year. Noyb argues that this means that although people may buy a Fitbit for its features, there appears to be no realistic way to regain control of their data without making the product useless.

Maartje de Graaf, Data Protection Lawyer at Noyb, says: “First, you buy a Fitbit watch for at least 100 euros. Then you sign up for a paid subscription, only to find that you are forced to “freely” agree to the sharing of your data with recipients around the world. Five years into the GDPR, Fitbit is still trying to enforce a ‘take it or leave it’ approach.” 

Blank Cheque? 

Bernardo Armentano, Data Protection Lawyer at Noyb, says: “Fitbit wants you to write a blank check, allowing them to send your data anywhere in the world. Given that the company collects the most sensitive health data, it’s astonishing that it doesn’t even try to explain its use of such data, as required by law.” 

Fine Could Be Billions Of Euros 

According to Noyb, based on the last-year turnover of Alphabet (Google’s parent company), Google could face fines of up to 11.28 billion euros over Fitbit’s alleged data protection violations if the complaints are upheld by data regulators – a figure reflecting GDPR’s maximum penalty of up to 4 per cent of a company’s annual worldwide turnover.

There appears to be no publicly available comment from Google about Noyb’s allegations at the time of writing this article.

What Does This Mean For Your Business? 

Google acquired Fitbit in 2021 and, at the time, in addition to expanding Google’s move into wearables, some commentators noted that the deal may also have been motivated by the lure of the health data of millions of Fitbit customers (potentially for profiling and advertising) and by the chance to improve its competitive position in the lucrative healthcare tech space. Also, at the time, it was noted that Fitbit’s corporate partnerships with insurance companies and corporate wellness programmes may have been attractive to Google.

Now, just a couple of years down the line, it’s the data aspect of the deal that appears to have landed Google in some hot water. Noyb’s complaints against Google-owned Fitbit could have a ripple effect that goes well beyond just a potentially hefty fine. With a penalty that could be up to 11.28 billion euros, the situation would have serious financial repercussions, and the case could set a precedent for how Google and other tech giants handle user data (especially sensitive health information), forcing them to change their global data policies.

It’s been noted, for example, in analyst GlobalData’s recent tech regulation report that data protection regulators look likely to continue closer scrutiny of companies in 2023, so there could be more trouble to come for other tech companies relating to which data they collect, how they share it, and around matters of consent.

Some may argue that Google may, several years down the line from GDPR’s introduction, need to invest more resources in compliance to avoid facing similar allegations related to other products or services.

For businesses that similarly rely on user data, this case is a wake-up call to thoroughly review their data collection and transfer policies to ensure they align with GDPR requirements. Businesses must offer clear, informed choices to users about how their data is used, especially if it crosses borders. The situation with Fitbit highlights the reputational damage and legal risks involved in “take it or leave it” approaches to data consent. If Fitbit’s alleged actions are deemed a violation of GDPR, it could trigger a domino effect, prompting closer scrutiny of other businesses that have similar policies.

For users of Fitbit and similar devices, this case could lead to more transparent data practices, potentially providing them with greater control over their personal information. Reading about what may be happening to their extremely sensitive data could make users more cautious and discerning about the permissions they grant to these apps. Given the sensitive nature of the health data involved, ranging from sleep patterns to menstrual cycles, users may start to demand more robust privacy protections, and this case could also encourage them to seek alternatives that offer better data protection guarantees.

Featured Article : Zoom Data Concerns

In this article, we look at why Zoom found itself as the subject of a backlash over an online update to its terms related to AI, what its response has been, plus what this says about how businesses feel about AI.

What Happened? 

Communications app Zoom updated its terms of service in March but, with the change only being publicised on a popular forum in recent weeks, it has since faced criticism from tech commentators alarmed that the change appeared to go against its policy of not using customer data to train AI.

The Update In Question 

The update to Section 10 of its terms of service, which Zoom says was to explain “how we use and who owns the various forms of content across our platform”, gave Zoom a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights” to use Customer Content, i.e. data, content, communications, messages, files, documents and more, for “machine learning, artificial intelligence, training, testing” (and other product development purposes).

The Reaction 

Following the details of the update being posted and discussed on the ‘Hacker News’ forum, there was a backlash against Zoom, with many commentators unhappy at the prospect of AI (e.g. generative AI chatbots, AI image generators and Zoom’s own AI models, namely Zoom IQ) being given access to what should be private Zoom calls and other communications.

What’s The Problem? 

There are several concerns that individuals, businesses and other organisations may have over their “Customer Content” being used to train AI. For example:

– Privacy Concerns – worries that personal or sensitive information in video calls could be used in ways the participants never intended.

– Potential security risks. For example, if Zoom stores video and audio data for AI training, it increases the chance of that data being exposed in a hack or breach. Also, it’s possible with generative AI models that private information could be revealed if a user of an AI chatbot asked the right questions.

– Ethical questions. This is because some users may simply not have given clear permission for their data to be used for AI training, raising issues of consent and fairness.

– Legal Issues. For example, depending on the country, using customer data in this manner might violate data protection laws like GDPR, which could get both the company and users into legal trouble. Also, Zoom users or admins for business accounts could click “OK” to the terms of service without fully realising what they’re agreeing to, and employees who use the business Zoom account may be unaware of the choice their employer has made on their behalf. It’s also been noted by some online commentators that Zoom’s terms of service still permit it to collect a lot of data without consent, e.g. what’s grouped under the term ‘Service Generated Data.’

Another Update Prompted 

The backlash, the criticism of Zoom, and the doubtless fear of some users leaving the platform over this controversy appear to have prompted another update to the company’s terms of service, which Zoom says was “to reorganise Section 10 and make it easier to understand”. 

The second update was a sentence, in bold, added on the end of Section 10.2 saying: “Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models.” 

On the company’s blog, Chief Product Officer Smita Hashim reiterated that: “Following feedback received regarding Zoom’s recently updated terms of service Zoom has updated our terms of service and the below blog post to make it clear that Zoom does not use any of your audio, video, chat, screen sharing, attachments, or other communications like customer content (such as poll results, whiteboard, and reactions) to train Zoom’s or third-party artificial intelligence models.” 

The Online Terms of Service Don’t Affect Large Paying Customers 

Smita Hashim explains in the blog post that the terms of service typically cover online customers, but “different contracts exist for customers that buy directly from us” such as “enterprises and customers in regulated verticals like education and healthcare.” Hashim states, therefore, that “updates to the online terms of service do not impact these customers.” 

What Zoom AI? 

Zoom has recently introduced two generative AI features to its platform – Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose, available on free trial and offering automated meeting summaries and AI-powered chat composition.

To customers worried that these tools may be trained using ‘Customer Content’, Zoom says, “We inform you and your meeting participants when Zoom’s generative AI services are in use” and has specifically assured customers that it does not use customer content (e.g. poll results, whiteboard content, or user reactions) to train Zoom’s own (or third-party) AI models.

Criticism 

In 2020, Zoom faced criticism over plans to offer end-to-end encryption only as a paid extra feature. Also, with Zoom being the company whose product enabled (and is all about) remote working, it was criticised after asking staff living within a “commutable distance” (i.e. 50 miles / 80km) of the company’s offices to come to the office twice a week, having reportedly said (at one time) that all staff could work remotely indefinitely.

What Does This Mean For Your Business? 

This story shows how, at a time when data is needed in vast quantities to train AI, a technology that’s growing at a frightening rate (and has been the subject of dire warnings about the threats it could pose), clear data protections in this area are lagging or missing altogether.

Yes, there are data protection laws. Arguably, however, with the lack of understanding of how AI models work and what they need, service terms may not give a clear picture of what’s being consented to (or not) when using AI. There’s a worry, therefore, that the boundaries of data protection, privacy, security, ethics, legality, and other constraints may be overstepped without users knowing it, in the rush for more data as clear regulation is left behind.

Zoom’s extra assurances may have gone some way toward calming the backlash and reassuring users, but the fact that there was such a backlash over the contents of an old update shows the level of confusion and mistrust around this relatively new technological development and how it could affect everyone.

Snooper’s Charter Updated (Poorly)

Amendments to the UK Online Safety Bill mean a report must be written before powers can be used by the regulator to force tech firms to scan encrypted messages for child abuse images.

What Is The Online Safety Bill? 

The Online Safety Bill is the way the UK government plans to establish a new regulatory regime to address illegal and harmful content online and to impose legal requirements on search engines and internet service providers, including those providing pornographic content. The bill will also give new powers to the Office of Communications (Ofcom), enabling it to act as the online safety regulator.

The Latest Amendments 

The government says the latest amendments to the (highly controversial) Online Safety Bill have been made to address concerns about the privacy implications and technical feasibility of the powers proposed in the bill. The new House of Lords amendments to the bill are:

– A report must be written for Ofcom by a “skilled person” (appointed by Ofcom) before the new powers are used to force a firm, such as an encrypted app like WhatsApp or Signal, to scan messages. Previously, the report was optional. The purpose of the report will be to assess the impact of scanning on freedom of expression or privacy, and to explore whether other, less intrusive technologies could be used instead. The report’s findings will be used to help decide whether to force a tech firm, e.g. an encrypted messaging app, to scan messages, and a summary of those findings must be shared with the tech firm concerned.

– An amendment to the bill requiring Ofcom to look at the possible impact of the use of the technology on journalism and the protection of journalistic sources. Under the amendment, Ofcom would be able to force tech companies to use what’s been termed “accredited technology” to scan messages for child sexual abuse material.

The Response 

The response from privacy campaigners and digital rights groups has focused on the idea that the oversight of an Ofcom-appointed “skilled person” is not likely to be as effective as judicial oversight (for example), and may not give the right level of consideration to users’ rights. For example, the Open Rights Group described the House of Lords debate on the amendments as a “disappointing experience” and said that, because this “skilled person” could be a political appointee overseeing decisions about free speech and privacy rights, this would not amount to “effective oversight”.

Apple’s Threats In Response To ‘Snooper’s Charter’ Proposals 

In the same week, Apple said it would simply remove services like FaceTime and iMessage from the UK rather than weaken their security under the new proposals for updating the UK’s Investigatory Powers Act (IPA) 2016. The proposed updates to the act would mean tech companies like Apple and end-to-end encrypted messaging apps having to clear new security features with the Home Office before releasing them to customers, and would allow the Home Office to demand that security features be immediately disabled without telling the public. Apple has submitted a nine-page statement to the government’s consultation on amendments to the IPA outlining its objections and opposition. For example, Apple says the proposals “constitute a serious and direct threat to data security and information privacy” that would affect people outside the UK.

What Does This Mean For Your Business? 

What the government says are measures to help in the fight against child sex abuse are seen by some rights groups as a route to monitoring and surveillance, and by tech companies as a way to weaken their products and the privacy of their users. The idea of a “skilled person” (e.g. a consultant or political appointee), rather than a judge, compiling a report to justify the forced scanning of encrypted messaging apps has not gone down well with the tech companies and rights groups. Given that the House of Lords debate was the final session of the Report Stage and the last chance for the Online Safety Bill to be amended before it becomes law, and that so many major objections from tech companies remain, it looks unlikely that the big tech companies will comply with the new laws and changes.

WhatsApp (owned by Meta), for example, has simply said it would pull out of the UK market over how the new UK laws would force it to compromise security, which would be a considerable blow to the many people who use the app for business daily. Signal has also threatened to pull out of the UK, and some critics think that the UK government may be naïve to believe that simply pushing ahead with new laws and amendments will result in the big tech companies backing down and complying any time soon. It looks likely that the UK government will have a big fight on its hands going forward.