Tech Insight : Blockchain Bill

In this insight, we look at the introduction of the Electronic Trade Documents Act 2023 (ETDA), what it means, why it’s so significant, and its implications.

The ETDA 

The Electronic Trade Documents Act 2023 (ETDA), which was based on a draft Bill published by the Law Commission in March 2022, came into force in UK law on 20 September 2023. The Act gives legal recognition to trade documents in electronic form and, crucially, allows an electronic document to be used and recognised in the same way as its paper equivalent. The types of trade documents it applies to include a bill of lading (a legal document issued by a carrier, or their agent, to a shipper, acknowledging the receipt of goods for transport), a bill of exchange, a promissory note, a ship’s delivery order, a warehouse receipt, and more.

The Aims 

The aims of the ETDA, which gives electronic equivalents of paper trade documents the same legal treatment (subject to criteria), are to:

– Help to rectify deficiencies in the treatment of electronic trade documents under English law and modernise the law to reflect and embrace the benefits of new technologies.

– Help the move towards the benefits of paperless trade and to boost the UK’s international trade.

– Help in the longer-term goal to harmonise and digitise global commerce and its underlying legal frameworks, thereby advancing legal globalisation.

– Complement the 2017 UNCITRAL Model Law on Electronic Transferable Records (MLETR). This is the legal framework for the use of electronic transferable records that are functionally equivalent to transferable documents and instruments, e.g. bills of lading or promissory notes.

Why The Reference To Blockchain In The Title (‘Blockchain Bill’)? 

The development of technologies like blockchain (a distributed ledger that is extremely difficult to tamper with) has made trade based on electronic documents possible and attractive. Blockchain allows multiple parties to transfer value and keep tamper-evident records of steps in supply chains and provenance in a secure and transparent way.
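As a purely illustrative, minimal sketch (not any real trading platform’s code), the Python below shows the core property that makes a blockchain-style ledger so hard to tamper with: each block’s hash commits to the previous block’s hash, so altering any earlier record invalidates every record after it. The supply-chain events and function names here are our own assumptions for the example:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list) -> list:
    """Chain records so each block commits to everything before it."""
    chain, prev = [], "0" * 64  # placeholder 'genesis' hash
    for record in records:
        h = block_hash(record, prev)
        chain.append({"record": record, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Hypothetical supply-chain events.
events = [
    {"step": "goods received", "port": "Felixstowe"},
    {"step": "loaded", "vessel": "MV Example"},
]
chain = build_chain(events)
print(verify_chain(chain))                # True
chain[0]["record"]["port"] = "Rotterdam"  # tamper with history...
print(verify_chain(chain))                # False - tampering detected
```

In a real distributed ledger, many independent parties hold copies of the chain, so a tampered copy is also immediately inconsistent with everyone else’s.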

What’s The Problem With A Paper-Based Trade Document System? 

Moving goods across borders involves a wide range of different actors, e.g. in transportation, insurance, finance, and logistics, all of which require (paper) documentation. It’s been estimated that global container shipping generates billions of paper documents per year. A single international shipment, for example, can involve multiple documents, many of which are issued with duplicates, and, considering that two-thirds of the total value of global trade is carried by container ship, the volume of paper documents is immense.

The need for so much paper can therefore slow things down (adding costs and inefficiencies), create complications, and have a negative environmental impact.

Based On Old Practices 

Also, existing laws relating to trade documents are based on centuries-old merchants’ practices. One key example: prior to the ETDA, the “holder” of a document was significant because an electronic document couldn’t be “possessed” (in England and Wales), hence the reliance on a paper system. Under the ETDA, an electronic document can be possessed, thereby updating the law.

How Does It Benefit Trade? 

Giving electronic equivalents of paper trade documents the same legal treatment offers multiple benefits for businesses, governments and other stakeholders involved in trade. Some of the notable benefits include:

– Efficiency and speed. Electronic documents can be generated, sent, received, and processed much faster than their paper counterparts. This can significantly reduce the time taken for trade transactions and the associated administrative procedures.

– Cost savings. Transitioning to electronic trade documentation can save businesses considerable amounts of money by reducing costs related to printing, storage, and transportation of paper documents. For example, the Digital Container Shipping Association (DCSA) estimates that global savings could be as much as £3bn if half of the container shipping industry adopted electronic bills of lading.

– Environmental benefits. As mentioned above, the shift from paper to electronic documentation could reduce the environmental impact associated with paper production, printing, and disposal. Also, as highlighted by the World Economic Forum, moving to digital trade documents could reduce global logistics carbon emissions by 10 to 12 per cent.

– Accuracy and transparency. Electronic documentation systems often come with features that reduce manual data entry, thereby decreasing errors. Additionally, digital platforms can provide more transparency in the trade process with easy-to-access logs and history.

– Security and fraud reduction. Advanced digital platforms come with encryption, authentication, and other security measures that can reduce the chances of document tampering and fraud. Blockchain records, for example, are designed to be tamper-evident. It’s also easier to track the origin of, and changes to, electronic documents (see the short signing sketch after this list).

– Accessibility and storage. The ETDA doesn’t specify any one technology, only the criteria that a trade document must meet to qualify as an “electronic trade document” (see the Act for the exact criteria). That said, electronic documents can generally be easily stored, retrieved, and accessed from anywhere with the appropriate security clearances, making it easier for businesses to manage and maintain records.

– Interoperability. Digital documents can be more easily integrated with other IT systems, such as customs and regulatory databases, enterprise resource planning (ERP) systems, or financial platforms, providing more seamless trade operations.

– Flexibility and adaptability. Electronic systems can be more easily updated or modified to reflect changes in regulations, business practices, or market conditions.

– Harmonisation of standards. The adoption of electronic documents can pave the way for international standards/global standards, simplifying cross-border trade and making processes more predictable and harmonised across countries.

– Enhanced market access. For smaller enterprises that might not have the resources to deal with cumbersome paper-based processes, the digitisation of trade documentation could make it much easier to access global markets.

– Dispute resolution. Having a digital (secure) record with a clear audit trail, could make it easier to resolve disputes when discrepancies occur.
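On the security and fraud point above, here is a minimal, hedged sketch of how an electronic trade document might be digitally signed and verified, using the widely used third-party Python cryptography package (pip install cryptography). The document contents and key handling are illustrative assumptions only; real platforms add identity verification, document registries, and much more on top:

```python
# pip install cryptography  (third-party package; illustrative sketch only)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical electronic bill of lading, serialised to bytes.
document = b'{"type": "bill_of_lading", "goods": "20 containers", "carrier": "Example Line"}'

# The carrier signs the document with its private key...
carrier_key = Ed25519PrivateKey.generate()
signature = carrier_key.sign(document)

# ...and anyone holding the carrier's public key can check that the
# document has not been altered since it was signed.
public_key = carrier_key.public_key()
try:
    public_key.verify(signature, document)
    print("Document is authentic and unaltered")
except InvalidSignature:
    print("Document has been tampered with or forged")

# A single changed byte invalidates the signature.
try:
    public_key.verify(signature, document.replace(b"20 containers", b"200 containers"))
except InvalidSignature:
    print("Tampered copy rejected")
```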

What Does This Mean For Your Business? 

The technologies now exist to enable reliable, secure, and workable systems that use digital rather than paper documents, and this UK Act, in combination with similar legal changes in other countries, could help modernise and standardise global trade. Accepting digital documents as legal equivalents of their paper counterparts will bring a range of benefits to global trade, including cost and time savings, greater efficiency, reduced complication (making it easier for more businesses to get involved in international trade), environmental benefits, the advancement of global standardisation of trade, and many more.

For the UK, the Act not only updates existing laws but could also bring a significant trade boost. For example, the government estimates it could bring benefits to UK businesses of £1.1 billion over the next 10 years. It’s easy to see, therefore, why the introduction of the ETDA is being seen by some as one of the most significant trade laws passed in over 140 years.

Featured Article : UK Gov Pushing To Spy On WhatsApp (& Others)

The recent amendment to the Online Safety Bill, which means a compulsory report must be written for Ofcom by a “skilled person” before encrypted app companies can be forced to scan messages, has led to even more criticism of this controversial bill, which would bypass security in apps and give the government (and therefore any number of people) more access to sensitive and personal information.

What Amendment? 

In the House of Lords debate, which was the final session of the Report Stage and the last chance for the Online Safety Bill to be amended before it becomes law, government minister Lord Parkinson amended the bill to require that a report be written for Ofcom by a “skilled person” (appointed by Ofcom) before powers can be used to force a provider / tech company (e.g. WhatsApp or Signal) to scan its messages. The stated purpose of scanning messages using the powers of the Online Safety Bill is (ostensibly) to uncover child abuse images.

The amendment states that “OFCOM may give a notice under section 111(1) to a provider only after obtaining a report from a skilled person appointed by OFCOM under section 94(3).” 

Prior to the amendment, the report had been optional.

Why Is A Compulsory Report Stage So Important? 

The amendment says that the report is needed before companies can be forced to scan messages “to assist OFCOM in deciding whether to give a notice…. and to advise about the requirements that might be imposed by such a notice if it were to be given”. In other words, the report will assess the impact of scanning on freedom of expression and privacy, and explore whether other, less intrusive technologies could be used instead.

It is understood, therefore, that the report’s findings will be used to help decide whether to force a tech firm to scan messages. Under the detail of the amendment, a summary of the report’s findings must be shared with the tech firm concerned.

Reaction 

Tech companies may be broadly in agreement with the aims of the bill. However, operators of encrypted messaging services (e.g. WhatsApp, Signal, and others) have always opposed the detail of the bill that would force them to scan user messages before they are encrypted (known as client-side scanning). Operators say that this completely undermines the privacy and security of encrypted messaging, and they object to the idea of having to run government-mandated scanning services on their users’ devices. They also argue that this could leave their apps more vulnerable to attack.
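To make the objection concrete, here is a minimal, hedged sketch of the distinction, using Python’s third-party cryptography package with a single pre-shared key as a stand-in for the key-agreement protocols real messaging apps actually use. The looks_like_prohibited_content and report_to_authority functions are purely hypothetical placeholders. The point it illustrates: with end-to-end encryption the relay server only ever sees ciphertext, so any mandated scanning has to happen on the user’s device, on the plaintext, before encryption:

```python
# pip install cryptography  (illustrative sketch, not any real app's design)
from cryptography.fernet import Fernet

# Stand-in for the key agreement real messaging apps perform:
# sender and recipient share a key; the relay server does not have it.
shared_key = Fernet.generate_key()
sender = Fernet(shared_key)
recipient = Fernet(shared_key)

def looks_like_prohibited_content(plaintext: bytes) -> bool:
    """Purely hypothetical placeholder for a mandated scanner."""
    return b"forbidden" in plaintext

def report_to_authority(plaintext: bytes) -> None:
    """Hypothetical reporting hook for flagged content."""
    print("flagged before encryption:", plaintext)

def send_with_client_side_scanning(plaintext: bytes) -> bytes:
    # The contested step: the scan runs on the sender's device, on the
    # plaintext, BEFORE encryption - which is why operators argue it
    # undermines the end-to-end guarantee.
    if looks_like_prohibited_content(plaintext):
        report_to_authority(plaintext)
    return sender.encrypt(plaintext)

ciphertext = send_with_client_side_scanning(b"hello")
print(recipient.decrypt(ciphertext))  # b'hello' - the server only ever saw ciphertext
```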

The latest amendment, therefore, has not changed this situation for the tech companies and has led to more criticism and more objections. Many objections have also been aired by campaign and rights groups, such as Index on Censorship and the Open Rights Group, who have always opposed what they call the “spy clause” in the bill. For example:

– The Ofcom appointed “skilled person” could simply be a consultant or political appointee, and having these people oversee decisions about free speech and privacy rights would not amount to effective oversight.

– Judicial oversight should be a bare minimum and a report written by just a “skilled person” wouldn’t be binding and would lack legal authority.

Other groups, however, such as the NSPCC, have broadly backed the bill in terms of finding ways to make tech firms mitigate the risks of child sexual abuse when designing their apps or adding features, e.g. end-to-end encryption.

Another Amendment 

Another House of Lords amendment to the bill requires Ofcom to look at the possible impact of the use of technology on journalism and the protection of journalistic sources. Under the amendment, Ofcom would be able to force tech companies to use what’s been termed “accredited technology” to scan messages for child sexual abuse material.

This has also been met with similar criticisms over government-mandated scanning technology’s effects on privacy and freedom of speech, and its potential use as a tool for monitoring and surveillance. WhatsApp, Signal, and Apple have all opposed the scanning idea, with WhatsApp and Signal reportedly indicating that they would not comply.

Breach Of International Law? 

Clause 9(2) of the Online Safety Bill, which requires platforms to prevent users from “encountering” certain “illegal content”, has also been soundly criticised recently. This clause means that platforms which host user-generated content will need to immediately remove any such content, which covers a broad range of material, or face considerable fines, blocked services, or even jail for executives. Quite apart from the technical and practical challenges of achieving this effectively at scale, criticisms of the clause include that it threatens free speech in the UK and that it lacks the detail expected of legislation.

Advice provided to the Open Rights Group suggests that the clause may even be a breach of international law, in that there could be “interference with freedom of expression that is unforeseeable”, and that it goes against the current legal order on platforms.

It’s also been reported that Wikipedia could withdraw from the UK over the rules in the bill.

Investigatory Powers Act Objections (The Snooper’s Charter) 

Proposed updates to the Investigatory Powers Act (IPA) 2016 (sometimes called the ‘Snooper’s Charter’) have also come under attack from tech firms, not least Apple. For example, the government wants messaging services, e.g. WhatsApp, to clear security features with the Home Office before releasing them to customers. The update to the IPA would mean that the UK’s Home Office could demand, with immediate effect, that security features be disabled, without telling users or the public. Currently, a review process with independent oversight (with the option of appeal by the tech company) is needed before any such action can happen.

The Response 

The response from tech companies has been swift and negative, with Apple threatening to remove FaceTime and iMessage from the UK if the planned update to the Act goes ahead.

Concerns about granting the government the power to secretly remove security features from messaging app services include:

– It could allow government surveillance of users’ devices by default.

– It could reduce security for users, seriously affect their privacy and freedom of speech, and could be exploited by adversaries, whether they are criminal or political.

– Building backdoors into encrypted apps essentially means there is no longer end-to-end encryption.

Apple 

Apple’s specific response to the proposed updates/amendments (which will be subject to an eight-week consultation anyway) is that:

– It refuses to make changes to security features specifically for one country that would weaken a product for all users globally.

– Some of the changes would require issuing a software update, which users would have to be told about, thereby stopping changes from being made secretly.

– The proposed amendments threaten security and information privacy and would affect people outside the UK.

What Does This Mean For Your Business? 

There’s broad agreement about the aims of the UK’s Online Safety Bill and IPA in terms of wanting to tackle child abuse, keep people safe, and even make tech companies take more responsibility and measures to improve safety. However, these are global tech companies for which UK users represent only a small part of the total user base, and ideas like building back doors into secure apps, running government-approved scanning of user content, and using reports written by consultants/political appointees to justify scanning all go against ideas of privacy, one of the key features of apps like WhatsApp.

Allowing governments access into apps and granting them powers to turn off security ‘as and when’ raise issues and suspicions about free speech, government monitoring and surveillance, legal difficulties, and more. In short, even though the UK government wants to press ahead with the new laws and amendments, there is still a long way to go before there is any real agreement with the tech companies. In fact, it looks likely that they won’t comply, and some, like WhatsApp, have simply said they’ll pull out of the UK market, which could be very troublesome for UK businesses, charities, groups, and individuals.

The tech companies also have a point in that it seems unreasonable to expect them to alter their services just for one country in a way that could negatively affect their users in other countries. As some critics have pointed out, if the UK wants to be a leading player on the global tech stage, alienating the big tech companies may not be the best way to go about it. It seems that a lot more talking and time will be needed to get anywhere near workable real-world laws and, with the UK government seen by many as straying into areas that alarm rights groups, some tech companies are suggesting the government ditch the new laws and start again.

Expect continued strong resistance from tech companies going forward if the UK government doesn’t slow down or re-think many aspects of these new laws – watch this space.

Snooper’s Charter Updated. (Poorly)

Amendments to the UK Online Safety Bill mean a report must be written before powers can be used by the regulator to force tech firms to scan encrypted messages for child abuse images.

What Is The Online Safety Bill? 

The Online Safety Bill is the way the UK government plans to establish a new regulatory regime to address illegal and harmful content online and to impose legal requirements on search engine and internet service providers, including those providing pornographic content. The bill will also give new powers to the Office of Communications (Ofcom), enabling them to act as the online safety regulator.

The Latest Amendments 

The government says the latest amendments to the (highly controversial) Online Safety Bill have been made to address concerns about the privacy implications and technical feasibility of the powers proposed in the bill. The new House of Lords amendments to the bill are:

– A report must be written for Ofcom by a “skilled person” (appointed by Ofcom) before the new powers are used to force a firm, such as an encrypted messaging app like WhatsApp or Signal, to scan messages. Previously, the report was optional. The purpose of the report will be to assess the impact of scanning on freedom of expression and privacy, and to explore whether other, less intrusive technologies could be used instead. The report’s findings will be used to help decide whether to force a tech firm, e.g. an encrypted messaging app, to scan messages, and a summary of those findings must be shared with the tech firm concerned.

– An amendment to the bill requiring Ofcom to look at the possible impact of the use of technology on journalism and the protection of journalistic sources. Under this amendment, Ofcom would be able to force tech companies to use what’s been termed “accredited technology” to scan messages for child sexual abuse material.

The Response 

The response from privacy campaigners and digital rights groups has focused on the idea that the oversight of an Ofcom-appointed “skilled person” is not likely to be as effective as, say, judicial oversight, and may not give the right level of consideration to users’ rights. The Open Rights Group, for example, described the House of Lords debate on the amendments as a “disappointing experience” and said that, since this “skilled person” could be a political appointee overseeing decisions about free speech and privacy rights, this would not be “effective oversight”.

Apple’s Threats In Response To ‘Snooper’s Charter’ Proposals 

In the same week, Apple said it would simply remove services like FaceTime and iMessage from the UK rather than weaken its security under the new proposals for updating the UK’s Investigatory Powers Act (IPA) 2016. The proposed updates to the Act would mean tech companies like Apple, and end-to-end encrypted messaging apps, having to clear security features with the Home Office before releasing them to customers, and would allow the Home Office to demand that security features be immediately disabled, without telling the public. Apple has submitted a nine-page statement to the government’s consultation on amendments to the IPA outlining its objections and opposition. For example, Apple says the proposals “constitute a serious and direct threat to data security and information privacy” that would affect people outside the UK.

What Does This Mean For Your Business? 

What the government says are measures to help in the fight against child sex abuse are seen by some rights groups as a route to monitoring and surveillance, and by tech companies as a way to weaken products and the privacy of their users. The idea of a “skilled person” (e.g. a consultant or political appointee) rather than a judge compiling a report to justify the forced scanning of encrypted messaging apps has not gone down well with the tech companies and rights groups.

Given that the House of Lords debate was the final session of the Report Stage and the last chance for the Online Safety Bill to be amended before it becomes law, and that so many major objections from tech companies remain, it looks unlikely that the big tech companies will comply with the new laws and changes. WhatsApp, for example (owned by Meta), has simply said it would pull out of the UK market over how the new UK laws would force it to compromise security, which would be a considerable blow to the many people who use the app for business daily. Signal has also threatened to pull out of the UK, and some critics think the UK government may be naïve to believe that simply pushing ahead with new laws and amendments will result in the big tech companies backing down and complying any time soon. It looks likely that the UK government will have a big fight on its hands going forward.

Tech News : EU Wants AI-Generated Content Labelled

In a recent press conference, the European Union said that, to help tackle disinformation, it wants the major online platforms to label AI-generated content.

The Challenge – AI Can Be Used To Generate And Spread Disinformation

In the press conference, Věra Jourová (the European Commission’s vice-president in charge of values and transparency) outlined the challenge by saying, “Advanced chatbots like ChatGPT are capable of creating complex, seemingly well-substantiated content and visuals in a matter of seconds,” and that “image generators can create authentic-looking pictures of events that never occurred,” as well as “voice generation software” being able to “imitate the voice of a person based on a sample of a few seconds.”

Jourová warned of widespread Russian disinformation in Central and Eastern Europe and said, “we have the main task to protect the freedom of speech, but when it comes to the AI production, I don’t see any right for the machines to have the freedom of speech.”

Labelling Needed Now 

To help address this challenge, Jourová called for all 44 signatories of the European Union’s code of practice against disinformation to help users better identify AI-generated content. One key method she identified was for big tech platforms such as Google, Facebook (Meta), and Twitter to apply labels to any AI-generated content to identify it as such. She suggested that this change should take place “immediately.” 

Jourová said she had already spoken with Google’s CEO Sundar Pichai about how the technologies exist, and are being worked on, to enable the immediate detection and labelling of AI-produced content for public awareness.

Twitter, Under Musk 

Jourová also highlighted how, by withdrawing from the EU’s voluntary Code of Practice against disinformation back in May, Elon Musk’s Twitter had chosen confrontation and “the hard way”, warning that, by leaving the code, Twitter had attracted a lot of attention, and that “its actions and compliance with EU law will be scrutinised vigorously and urgently.” 

At the time, referring to the EU’s new and impending Digital Services Act, the EU’s Internal Market Commissioner, Thierry Breton, wrote on Twitter: “You can run but you can’t hide. Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25. Our teams will be ready for enforcement”.

The DSA & The EU’s AI Act 

Legislation, such as that referred to by Thierry Breton, is being introduced so the EU can tackle the challenges posed by AI in its own way rather than relying on Californian laws. Impending AI legislation includes:

The Digital Services Act (DSA), which includes new rules requiring Big Tech platforms like Meta’s Facebook, Instagram and YouTube to assess and manage risks posed by their services, e.g. the advocacy of hatred and the spread of disinformation. The DSA also has algorithmic transparency and accountability requirements to complement other EU AI regulatory efforts, which are driving legislative proposals like the AI Act (see below) and the AI Liability Directive. The DSA directs companies, large online platforms and search engines to label manipulated images, audio, and video (a minimal sketch of what such a label could look like in practice follows below).

The EU’s proposed ‘AI Act’, described as the “first law on AI by a major regulator anywhere”, which assigns applications of AI to three risk categories. These are ‘unacceptable risk’, e.g. government-run social scoring of the type used in China (banned under the Act); ‘high-risk’ applications, e.g. a CV-scanning tool to rank job applicants (which will be subject to legal requirements); and applications not explicitly banned or listed as high-risk, which are largely left unregulated.
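As a purely illustrative sketch of what machine-readable labelling might look like in practice, the Python below embeds an “AI-generated” marker in an image’s metadata using the third-party Pillow library’s PNG text chunks. The tag names are our own assumptions, not any mandated standard (real provenance schemes, such as the C2PA “content credentials” some platforms are exploring, are far more elaborate and cryptographically signed):

```python
# pip install Pillow  (illustrative only; the tag names below are not a standard)
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A stand-in for an AI-generated image.
image = Image.new("RGB", (64, 64), color="gray")

# Attach a hypothetical machine-readable provenance label.
metadata = PngInfo()
metadata.add_text("ai-generated", "true")
metadata.add_text("generator", "example-model-v1")  # hypothetical field
image.save("labelled.png", pnginfo=metadata)

# A platform (or a browser extension) could then read the label back
# and display an "AI-generated" badge to the user:
with Image.open("labelled.png") as img:
    print(img.text.get("ai-generated"))  # prints: true
```

Of course, metadata like this is trivially stripped, which is one reason cryptographically signed provenance schemes are attracting more attention.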

What Does This Mean For Your Business? 

Among the many emerging concerns about AI are fears that the unregulated publishing of AI-generated content could spread misinformation and disinformation (via deepfake videos, photos, and voices) and, in doing so, erode truth and even threaten democracy. One method for enabling people to spot AI-generated content is to have it labelled (which the DSA seeks to do anyway); however, the EC’s vice-president in charge of values and transparency sees this as being needed urgently, hence asking all 44 signatories of the European Union’s code of practice against disinformation to start labelling AI-produced content now.

Arguably, it’s unlike big tech companies to act voluntarily before regulations and legislation force them to, and Twitter seems to have opted out already. The spread of Russian disinformation in Central and Eastern Europe is a good example of why labelling may be needed so urgently. That said, as Věra Jourová acknowledged herself, free speech needs to be protected too.

With AI-generated content being so difficult to spot in many cases, published so quickly and in such vast amounts, and with AI tools freely available to all, it’s difficult to see how the idea of labelling could be achieved or monitored/policed.

The requirement for big tech platforms like Google and Facebook to label AI-generated content could have significant implications for businesses and tech platforms alike. Primarily, labelling AI-generated content could be a way to foster more trust and transparency between businesses and consumers. By clearly distinguishing between content created by humans and that generated by AI, users would be empowered to make informed decisions. This labelling could help combat the spread of misinformation and enable individuals to navigate the digital realm with greater confidence.

However, businesses relying on AI-generated content must consider the impact of labelling on their brand reputation. If customers perceive AI-generated content as less reliable or less authentic, it could erode trust in the brand and deter engagement. Striking a balance between AI-generated and human-generated content would become crucial, potentially necessitating increased investments in human-generated content to maintain authenticity and credibility.

Also, labelling AI-generated content would bring attention to the issue of algorithmic bias. Bias in AI systems, if present, could become more noticeable when content is labelled as AI-generated. To address this concern, businesses would need to be proactive in mitigating biases and ensuring fairness in the AI systems used to generate content.

Looking at the implications for tech platforms, there may be considerable compliance costs associated with implementing and maintaining systems to accurately label AI-generated content. Such endeavours (if possible to do successfully) would demand significant investments, including the development of algorithms or manual processes to effectively identify and label AI-generated content.

Labelling AI-generated content could also impact the user experience on tech platforms. Users might need to adjust to the presence of labels and potentially navigate through a blend of AI-generated and human-generated content in a different manner. This change could require tech platforms to rethink their user interface and design to accommodate these new labelling requirements.

Tech platforms would also need to ensure compliance with specific laws and regulations related to labelling AI-generated content. Failure to comply could result in legal consequences and reputational damage. Adhering to the guidelines set forth by governing bodies would be essential for tech platforms to maintain trust and credibility.

Finally, the introduction of labelling requirements could influence the innovation and development of AI technologies on tech platforms. Companies might find themselves investing more in AI systems that can generate content in ways that align with the labelling requirements. This, in turn, could steer the direction of AI research and development and shape the future trajectory of the technology.

The implications of labelling AI-generated content for businesses and tech platforms are, therefore, multifaceted. Businesses would need to adapt their content strategies, manage their brand reputation, and address algorithmic bias concerns. Tech platforms, on the other hand, would face compliance costs, the challenge of balancing user experience, and the need for innovation in line with labelling requirements. Navigating these implications would require adjustments, investments, and a careful consideration of user expectations and experiences in the evolving landscape of AI-generated content.