Tech Insight : Jobs Threatened By ChatGPT

In this insight, we look at the kinds of industries and jobs that research has identified as being most exposed to the disruptive threat of generative AI, but we also look at how AI has created some new job roles.

Research 

Research from Felten, Raj, and Seamans ("How will Language Modelers like ChatGPT Affect Occupations and Industries?"), together with a working paper from OpenAI and the University of Pennsylvania, offered some reasonably in-depth analysis of how advances in AI language modeling, such as ChatGPT, impact various occupations and industries. Among the key findings, the research identified some of the jobs most exposed to ChatGPT. These findings were:

Telemarketers 

The research indicated a high exposure level. This is because the nature of telemarketing involves repetitive tasks that could be easily automated by language models. ChatGPT can, for example, handle customer inquiries, provide information, and even persuade potential customers, thereby reducing the need for human telemarketers.

Post-secondary Teachers
(e.g. English Language and Literature, Foreign Language and Literature, History)

According to the research, there is a significant exposure level for these jobs because they often require the creation of educational content, as well as grading and answering student queries, all of which are tasks that ChatGPT can perform efficiently. However, the interactive and mentoring aspects of teaching are much less likely to be fully replaced by AI.

Legal Services 

Famously, ChatGPT (specifically GPT-4) passed the legal bar exam back in March 2023, and exposure to ChatGPT in legal jobs is thought to be considerable. For example, many tasks within legal services, such as document review, contract analysis, and basic legal advice, can be automated using language models. ChatGPT’s ability to process and understand large volumes of text makes it suitable for these tasks.

Securities, Commodities, and Investments 

Financial analysis, report generation, and market trend analysis are all areas where ChatGPT can assist significantly. Specifically, its data processing capabilities can enhance efficiency and reduce the reliance on human analysts for routine tasks.

In fact, the researchers were able to compile a list of the top 20 professions most exposed to ChatGPT, which are:

1. Telemarketers
2. English language (and literature) teachers
3. Foreign language (and literature) teachers
4. History teachers
5. Law teachers
6. Philosophy and religion teachers
7. Sociology teachers
8. Political science teachers
9. Criminal justice and law enforcement teachers
10. Sociologists
11. Social work teachers
12. Psychology teachers
13. Communications teachers
14. Political scientists
15. Cultural studies teachers
16. Arbitrators, mediators, and conciliators
17. Judges, magistrate judges and magistrates
18. Geography teachers
19. Library science teachers
20. Clinical, counseling and school psychologists

Accountants Exposed 

The OpenAI / University of Pennsylvania research (a working paper) also found that a significant portion of the US workforce, including accountants, mathematicians, interpreters, and writers, is highly exposed to the capabilities of generative AI technologies like ChatGPT. For instance, the research revealed that at least half of the tasks performed by accountants could be completed much faster using AI, demonstrating the substantial impact of these technologies on various professions.

Creative & Management Jobs Less Exposed 

Conversely, the paper noted that professions requiring human judgment, creativity, and complex decision-making are much less likely to be replaced by AI. These include jobs in fields like:

– Creative arts – artists, writers, and designers, where the emphasis is on originality and human creativity.

– Management – roles that require strategic decision-making and interpersonal skills.

– Healthcare – professions that involve direct patient care and complex medical decision-making.

The findings of the OpenAI research suggest that while AI like ChatGPT can significantly impact certain job sectors by automating routine tasks, roles requiring nuanced human skills and judgment appear to remain less vulnerable to automation.

Also, researchers at Northwestern University’s Kellogg School of Management (in the US) examined the historical impact of disruptive technologies on jobs and projected the effects of ChatGPT. Not surprisingly, their findings indicated that jobs involving data analysis and information retrieval are most at risk from ChatGPT.

What Can Workers Do? 

To protect themselves from the threat posed by AI technologies like ChatGPT, workers can focus on developing skills that are less likely to be automated. These include critical thinking, creativity, and complex decision-making abilities. Professions that require nuanced human judgment, such as those in creative arts, management, and healthcare, are less vulnerable to AI automation. It’s possible, therefore, that by enhancing skills in these areas, workers may remain more relevant in an AI-augmented job market.

Also, reskilling and upskilling are possible strategies for workers to stay competitive. Learning new technologies and understanding how to leverage AI tools can turn potential threats into opportunities. Workers could take advantage of AI to increase their productivity and efficiency rather than being replaced by it, suggesting that training programs focusing on AI literacy, data analysis, and digital transformation could also prove essential for workers adapting to the changing landscape.

Integrating AI into their workflow in a way that complements their unique human capabilities may also be a way that workers can mitigate the threat posed to their jobs by AI such as ChatGPT. This could involve understanding how to use AI to augment tasks that require speed and accuracy while focusing on aspects of their jobs that necessitate empathy, interpersonal skills, and complex problem-solving. Embracing a collaborative approach with AI could therefore help workers enhance their roles and provide added value to their employers, thus securing their positions in the evolving job market.

What About AI Creating Jobs? 

It’s worth remembering that as well as posing a risk to certain jobs/roles, ChatGPT, and other generative AI could also create new jobs and opportunities. For example:

AI specialists and engineers. The rise of generative AI has led to an increased demand for AI and machine learning specialists. These professionals are responsible for developing, maintaining, and improving AI systems. According to the World Economic Forum’s Future of Jobs Report, there is a projected 40 per cent increase in the number of AI and machine learning specialists by 2027, highlighting the growing need for expertise in this field.

Prompt Engineers. As AI models like ChatGPT become more prevalent, the role of prompt engineers has emerged. These specialists create and refine the prompts used to train AI systems, ensuring they generate accurate and relevant outputs. This role requires a deep understanding of both the technology and the specific application domains, making it a unique and valuable (as well as a high salary) position in the AI ecosystem.

AI Trainers and data annotators. Generative AI models require vast amounts of data to learn and improve. AI trainers and data annotators play a crucial role in preparing and curating this data. For example, they label datasets, review AI outputs, and provide feedback to enhance the models’ accuracy and performance. This job is critical for maintaining the quality of AI-generated content and ensuring that the models operate within ethical and practical boundaries.

Digital transformation specialists. Organisations are now increasingly integrating AI into their workflows, which is feeding the demand for professionals who can manage and lead these transformations. Digital transformation specialists can help companies adopt and leverage AI technologies effectively, optimising processes and driving innovation. The Future of Jobs Report (World Economic Forum) indicates a significant rise in demand for digital transformation specialists, underlining their importance in the modern workplace.

AI ethics consultants. With the growing influence of AI, ethical considerations are important. AI ethics consultants work to ensure that AI applications comply with legal standards and ethical guidelines. They help organisations navigate the complexities of AI implementation, addressing issues like bias, transparency, and accountability. This emerging role is proving to be important for building public trust and promoting responsible AI use.

What Does This Mean For Your Business? 

The findings from the research on AI technologies like ChatGPT appear to show a real shift in the landscape of various industries. For UK businesses, this translates into a need for proactive adaptation to harness the benefits of AI while mitigating its disruptive potential. Integrating AI into business operations could significantly enhance efficiency, particularly in roles that involve routine cognitive tasks and data processing. For example, automating customer service, financial analysis, and legal documentation could free up valuable human resources to focus on more strategic, creative, and interpersonal tasks. Embracing AI can therefore lead to a more streamlined and productive business environment, reducing operational costs and improving service delivery.

Also, the evolution of AI presents an opportunity for businesses to invest in the reskilling and upskilling of their workforce. Note that there is an argument that genAI like ChatGPT can also have a deskilling effect. By providing training programs focused on AI literacy, data analysis, and digital transformation, businesses can equip their employees with the necessary skills to thrive in an AI-augmented job market. Encouraging a culture of continuous learning and adaptability will not only help in retaining talent but also foster innovation and resilience within the organisation. Workers who are adept at leveraging AI tools can hopefully transform potential threats into opportunities using AI to augment their roles and increase their productivity and value to the company.

Businesses also need to consider the ethical implications of AI deployment. Establishing roles such as AI ethics consultants could ensure that the integration of AI is conducted responsibly, addressing issues like bias, transparency, and accountability. This may not only build public trust but also help safeguard the company against potential legal and ethical pitfalls.

Featured Article : Apple Avalanche!

Following Apple’s 5-day Worldwide Developers Conference (WWDC24) last week at Apple Park in Cupertino, California, we take a look at the many new products announced and their key features.

Showcasing New Products 

At Apple’s WWDC24 from June 10 to June 14, Apple showcased a variety of updates and advancements across its software platforms, including iOS, iPadOS, macOS, watchOS, tvOS, and visionOS. Key announcements included significant updates for iOS 18 and macOS 15, as well as new AI integrations and improvements to built-in apps like Photos and Apple Music. Crucially, the conference also highlighted Apple’s commitment to AI technologies and its plans to integrate generative AI capabilities into its devices.

Let’s take a look at the key product and other announcements from WWDC24:

iOS 18 

iOS 18, Apple’s latest iOS for iPhones, introduces several significant updates, including a more customisable home screen, a redesigned Photos app with AI-powered editing tools, RCS support in Messages for improved cross-platform communication, and enhancements to the Mail, Calendar, and Maps apps. All these improvements are aimed at making the iPhone more intuitive and powerful for users. Also, the Control Centre has been revamped to feature a multipage layout with third-party widgets.

One other fun new feature in iOS 18, around user personalisation, will be the ability for iPhone users to make their conversations more enjoyable by creating AI images of the people they’re messaging, in a way that’s similar to an AI-upgraded Bitmoji.

iPhones To Use Satellites 

There was also the announcement at WWDC 24 that with iOS 18, iPhone users will be able to send messages via satellite. This feature, available on iPhone 14 models and later, expands upon the existing Emergency SOS via satellite capability. It allows users to send and receive iMessages and SMS texts, including emoji and Tapbacks, even when they are out of range of cellular and Wi-Fi networks.

macOS Sequoia 

Apple’s macOS Sequoia, the latest version of its OS for Macs, has been given a range of new features, including a new Passwords app, a redesigned Reader view in Safari with machine learning integration, upgrades to Messages and Notes, and improved window management. The update also includes enhancements to Continuity, such as iPhone Mirroring. With iPhone Mirroring (through macOS’ Continuity feature), users can mirror their iPhone’s screen and control it from their Mac laptop or desktop.

All this should mean enhanced user productivity and convenience (better password management), a smarter browsing experience, more efficient multitasking, and improved messaging and note-taking capabilities.

iPadOS 18 

iPadOS 18 brings updates to the Notes app, including support for Math Notes and a new Calculator app that supports Apple Pencil. It also introduces a floating tab bar for better navigation and similar home screen customisation options to iOS 18.

watchOS 11 

watchOS 11, the latest version of Apple’s operating system for Apple Watch, adds a redesigned Photos face, a new Translate app, and enhancements to the Fitness app, including a Training Load feature and a customisable Summary mode. It also introduces the Vitals app for health monitoring. The hope is that these new features will provide users with a more personalised and comprehensive fitness and health tracking experience, and a more intuitive and visually engaging interface.

tvOS 18 

tvOS 18, the latest version of Apple’s OS for Apple TV, includes AI-enhanced subtitles, Amazon X-Ray-style information while watching, and clearer dialogue options, improving the viewing experience on Apple TV 4K.

‘InSight’ For Apple TV+ 

Those who use Apple TV+ may be pleased with the new InSight feature that displays actors’ names and song titles as they appear on the screen and is similar to Amazon’s X-Ray technology. Also, like Shazam, it highlights the song playing in the TV show or film and, as you may expect, then gives the user the option to add it to their Apple Music playlist.

visionOS 2 

Apple’s OS for the Vision Pro headset, visionOS 2, has received upgrades to enhance the Vision Pro experience with new developer frameworks, an international launch schedule, and improved virtual display features. It also introduces new gestures and SharePlay support in the Photos app. For example, it will allow photos to be transformed into interactive experiences using AI. Notably, users will be able to turn existing images into spatial photos (including photos captured on older devices).

New navigation gestures are also being introduced, and it supports higher resolution and larger virtual displays for connected Macs.

Improvements also include new developer tools such as volumetric APIs and TabletopKit for games, train support in travel mode, and expanded content with new 180-degree 8K video formats through partnerships with content creators.

New Markets For Vision Pro Headsets Announced

Accompanying the news of the upgraded features in visionOS 2, Apple has also announced that it will be making its Vision Pro headset available in eight new countries – China, Japan, Singapore, Australia, Canada, France, Germany and the UK, and that the first release of the headset will be in China, Japan, and Singapore on June 28.

Apple Intelligence 

The most significant announcement from WWDC24 was the introduction of Apple Intelligence, a new AI initiative aimed at integrating personal and private AI capabilities across Apple’s ecosystem. There had been some concern that Apple had fallen behind in AI, and its announcement that it is partnering with OpenAI to include its technology and ChatGPT marks a significant strategic shift for the company. The partnership prompted an angry reaction on X from Elon Musk, citing privacy concerns (although these were possibly more about competition).

Apple Intelligence includes, for example, significant upgrades to Siri (as outlined below), making interactions more natural and advanced, and other functionalities with advanced, personal, and private AI capabilities. Apple CEO, Tim Cook, described Apple Intelligence as “the next frontier” in personal AI and explained that the reason why it is so effective is that it will be able to “understand you and be grounded in your personal context, like your routine, your relationships, your communications and more”. 

Siri Upgrade 

The new Apple Intelligence AI initiative has meant that Siri, Apple’s virtual assistant, has received a substantial upgrade. The AI enhancements make Siri more conversational and contextually aware, so it can handle more complex tasks and understand a wider variety of requests. This should include being able to summarise incoming messages, executing commands across multiple apps, and integrating more naturally with users’ daily activities. Apple has also emphasised how most processing will be done ‘on-device’ to help user privacy.

One significant announcement is, of course, that Siri will be one of the apps able to use OpenAI’s ChatGPT for “expertise”. Tapping into ChatGPT will also mean that users can include photos with questions for ChatGPT (via Siri) and even ask questions related to documents or PDFs.

Developers And Siri 

It’s also worth noting here that Siri’s new capabilities will also allow developers to enable voice command access to any app menu items and displayed text without additional coding. This means users can issue commands like “show my presenter notes” in a slide deck or “FaceTime him” from a reminder, enhancing app functionality through natural language interactions and improving user experience.

Next-Generation CarPlay 

Apple provided an updated look at the next-generation CarPlay system, the in-car system that allows users to integrate their iPhone with their vehicle’s infotainment system. The improvements include new Vehicle, Media, and Climate apps, designed to offer a more integrated and enhanced user experience.

What Does This Mean For Your Business? 

The WWDC24 announcements appear to signify a transformative phase for Apple, with its belated yet determined and substantial adoption of AI across its entire product estate standing out as a strategic pivot. This initiative, which includes a partnership with OpenAI’s ChatGPT, enhances Siri’s capabilities, making it more contextually aware and conversational. For business users, this means more efficient and natural interactions with their devices, potentially improving productivity and streamlining workflows.

The upgrades across iOS, macOS, iPadOS, watchOS, and tvOS collectively may create a more cohesive and powerful Apple ecosystem. For instance, iOS 18’s customisation options and AI-powered tools should make iPhones more versatile and user-friendly, while macOS Sequoia’s new features may enhance productivity through smarter password management, improved multitasking, and seamless integration with iPhones. These improvements could help businesses better manage their digital environments, ensuring that employees can work more efficiently and securely.

The introduction of Messages via satellite with iOS 18 is significant for businesses operating in remote areas or in sectors where connectivity is often an issue, such as logistics, construction, and outdoor events, i.e. ensuring continuous communication, which is crucial for operational efficiency and safety.

Apple’s Vision Pro headset and the enhanced visionOS 2 signals a move towards more immersive and interactive experiences. For industries such as design, training, and presentations, the ability to turn photos into interactive experiences and use spatial navigation may offer new ways to engage and educate. The expanded international availability of the Vision Pro headset may also open up new markets and opportunities for businesses worldwide.

The updates to watchOS 11, with enhanced fitness and health tracking capabilities, emphasise Apple’s commitment to health and wellness, which may be particularly beneficial for businesses focusing on employee well-being and productivity. The new features in tvOS 18, such as AI-enhanced subtitles and detailed information while watching, enhance the user experience for both personal and professional usage, perhaps making Apple TV a more compelling option, e.g. for business presentations and entertainment.

Overall, Apple’s latest announcements reflect a strategic effort to integrate advanced AI and machine learning technologies across its product range. This not only addresses fears of Apple lagging behind in AI but could even position the company as a leader in the AI space. It also offers business users innovative tools to enhance productivity, connectivity, and user engagement. By leveraging the advancements outlined by Apple at WWDC24, businesses could improve their operational efficiency, employee satisfaction, and customer interactions, which may ultimately give Apple a stronger foothold in the competitive tech marketplace.

Tech Insight : What Are ‘Deadbots’?

Following warnings by ethicists at Cambridge University that AI chatbots made to simulate the personalities of deceased loved ones could be used to spam family and friends, we take a look at the subject of so-called “deadbots”.

Griefbots, Deadbots, Postmortem Avatars 

The Cambridge study, entitled “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry” looks at the negative consequences and ethical concerns of adoption of generative AI solutions in what it calls “the digital afterlife industry (DAI)”. 

Scenarios 

As suggested by the title of the study, a ‘deadbot’ is a digital avatar or AI chatbot designed to simulate the personality and behaviour of a deceased individual. The Cambridge study used simulations and different scenarios to try and understand the effects that these AI clones trained on data about the deceased, known as “deadbots” or “griefbots”, could have on living loved ones if made to interact with them as part of this kind of service.

Who Could Make Deadbots and Why?

The research involved several scenarios designed to highlight the issues around the use of deadbots. For example, the possible negative uses of deadbots highlighted in the study included:

– A subscription app that can create a free AI re-creation of a deceased relative (a grandmother in the study), trained on their data, which can exchange text messages with and contact the living loved one in a similar way to how the deceased relative used to (via WhatsApp), giving the impression that they are still around to talk to. The study scenario showed how the bot could be made to mimic the deceased grandmother’s “accent and dialect when synthesising her voice, as well as her characteristic syntax and consistent typographical errors when texting”. However, the study also showed how this deadbot service could be made to output messages that include advertisements in the loved one’s voice, thereby causing the loved one distress. Further distress could be caused if the app designers did not fully consider the user’s feelings around deleting the account and the deadbot, for example if no provision is made to allow them to say goodbye to the deadbot in a meaningful way.

– A service allowing a dying relative (e.g. a father and grandfather) to create their own deadbot that will allow their younger relatives (i.e. children and grandchildren) to get to know them better after they’ve died. The study highlighted the negative consequences of this type of service, such as the dying relative not getting consent from the children and grandchildren to be contacted by the deadbot, and the resulting unsolicited notifications, reminders, and updates leaving relatives distressed and feeling as though they were being ‘haunted’ or even ‘stalked’.

Examples of services and apps that already exist and offer to recreate the dead with AI include ‘Project December’, and apps like ‘HereAfter’.

Many Potential Issues 

As shown by the examples in the Cambridge research (there were three main scenarios), the use of deadbots raises several ethical, psychological and social concerns. Some of the potential ways they could be harmful, unethical, or exploitative (along with the negative feelings they might provoke in loved ones) include:

– Consent and autonomy. As noted in the Cambridge study, a primary concern is whether the deceased gave consent for their personality, appearance, or private thoughts to be used in this way. Using someone’s identity without their explicit consent could be seen as a violation of their autonomy and dignity.

– Accuracy and representation. There is a risk that the AI might not accurately represent the deceased’s personality or views, potentially spreading misinformation or creating a false image that could tarnish their memory.

– Commercial exploitation. The study looked at how a deadbot could be used for advertising because the potential for commercial exploitation of a deceased person’s identity is a real concern. Companies could use deadbots for profit, exploiting a person’s image or personality without fair compensation to their estate or consideration of their legacy.

– Contractual issues. For example, relatives may find themselves in a situation where they are powerless to have an AI deadbot simulation suspended, e.g. if their deceased loved one signed a lengthy contract with a digital afterlife service.

Psychological and Social Impacts 

The Cambridge study was designed to look at the possible negative aspects of the use of deadbots, an important part of which are the psychological and social impacts on the living. These could include, for example:

– Impeding grief. Interaction with a deadbot might impede the natural grieving process. Instead of coming to terms with the loss, people may cling to the digital semblance of the deceased, potentially leading to prolonged grief or complicated emotional states.

– Dependency. There’s also a risk that individuals might become overly dependent on the deadbot for emotional support, isolating themselves from real human interactions and not seeking support from living friends and family.

– Distress and discomfort. As identified in the Cambridge study, aspects of the experience of interacting with a simulation of a deceased loved one can be distressing or unsettling for some people, especially if the interaction feels uncanny or not quite right. For example, the Cambridge study highlighted how relatives may get some initial comfort from the deadbot of a loved one but may become drained by daily interactions that become an “overwhelming emotional weight”.  

Potential for Abuse 

Given that, as identified in the Cambridge study, people may develop strong emotional bonds with deadbot AI simulations, making them particularly vulnerable to manipulation, one of the major risks of the growth of a digital afterlife industry (DAI) is the potential for abuse. For example:

– There could be misuse of the deceased’s private information (privacy violations), especially if sensitive or personal data is incorporated into the deadbot without proper safeguards.

– In the wrong hands, deadbots could be used to harass or emotionally manipulate survivors, for example, by a controlling individual using a deadbot to exert influence beyond the grave.

– There is also the real potential for deadbots to be used in scams or fraudulent activities, impersonating the deceased to deceive the living.

Emotional Reactions from Loved Ones 

The psychological and social impacts of the use of deadbots as part of some kind of service to living loved ones, and/or the misuse of deadbots, could therefore lead to a number of negative emotional reactions. These could include:

– Distress due to the unsettling experience of interacting with a digital replica.

– Anger or frustration over the misuse or misrepresentation of the deceased.

– Sadness from a constant reminder of the loss that might hinder emotional recovery.

– Fear concerning the ethical implications and potential for misuse.

– Confusion over the blurred lines between reality and digital facsimiles.

What Do The Cambridge Researchers Suggest?

The Cambridge study led to several suggestions of ways in which users of this kind of service may be better protected from its negative effects, including:

– Deadbot designers being required to seek consent from “data donors” before they die.

– Products of this kind being required to regularly alert users about the risks and to provide easy opt-out protocols, as well as measures being taken to prevent the disrespectful uses of deadbots.

– The introduction of user-friendly termination methods, e.g. having a “digital funeral” for the deadbot. This would allow the living relative to say goodbye to the deadbot in a meaningful way if the account was to be closed and the deadbot deleted.

– As highlighted by Dr Tomasz Hollanek, one of the study co-authors: “It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations.” 

What Does This Mean For Your Business? 

The findings and recommendations from the Cambridge study shed light on crucial considerations that organisations involved in the digital afterlife industry (DAI) must address. As developers and businesses providing deadbot services, there is a heightened responsibility to ensure these technologies are developed and used ethically and sensitively. The study’s call for obtaining consent from data donors before their death underscores the need for clear consent mechanisms to be built in. This consent is not just a legal formality but a foundational ethical practice that respects the rights and dignity of individuals.

Also, the Cambridge team’s suggestion to implement regular risk notifications and provide straightforward opt-out options points to the need for greater transparency and user control in digital interactions. This could mean incorporating these safeguards into service offerings to enhance user trust, with digital afterlife services companies perhaps positioning themselves as leaders in ethical AI practice. The introduction of a “digital funeral” to these services could also be a respectful and symbolic way to conclude the use of a deadbot, as well as being a sensitive way to meet personal closure needs, e.g. at the end of the contract.

The broader implications of the Cambridge study for the DAI sector include the need to navigate potential psychological impacts and prevent exploitative practices. As Dr Tomasz Hollanek from the study highlighted, the unintentional distress caused by these AI recreations can be profound, suggesting that their design and deployment strategies should really prioritise psychological safety and emotional wellbeing. This should involve designing AI that is not only technically proficient but also emotionally intelligent and sensitive to the nuances of human grief and memory.

Businesses in this field must also consider the long-term implications of their services on societal norms and personal privacy. The risk of commercial exploitation or disrespectful uses of deadbots could lead to public backlash and regulatory scrutiny, which could stifle innovation and growth in the industry. The Cambridge study therefore serves as an early but important guidepost for the DAI industry and has highlighted some useful guidelines and recommendations that could contribute to a more ethical and empathetic digital world.

Tech Insight : New UK Law To Eradicate Weak Passwords

Here we look at the new UK cybersecurity law that will ban device manufacturers from having weak, easily guessable default passwords, thereby providing extra protection against hacking and cyber-attacks.

The Problem 

With 99 per cent of UK adults owning at least one smart device and UK households owning an average of nine connected devices, but with a home’s smart devices potentially being exposed to more than 12,000 hacking attacks in a single week (Which?), the UK government has decided that protective, proactive action is needed. It’s long been known that easy-to-guess default passwords (like ‘admin’ or ‘12345’) in new devices and IoT devices have provided access for cybercriminals. An example (from the US) is the 2016 Mirai attack, in which 300,000 smart products were compromised due to weak security features, major internet platforms and services were attacked, and much of the US East Coast was left without internet.

The New Laws 

The UK government has introduced the new laws as part of the Product Security and Telecommunications Infrastructure (PSTI) regime. This regime is part of a £2.6 billion National Cyber Strategy, which has been designed to improve the UK’s resilience to cyber-attacks and ensure malign interference does not impact the wider UK and global economy.

The key security aspects of these new laws are that:

– Common or easily guessable passwords (e.g. ‘admin’ or ‘12345’) will be banned to prevent vulnerabilities and hacking.

– Device manufacturers will be required to publish contact details so bugs and issues can be reported and dealt with.

– Manufacturers and retailers must be open with consumers on the minimum time they can expect to receive important security updates.

The government hopes that taking this action will increase consumers’ confidence in the security of the products they buy and use, and help it to deliver on one of its five priorities: growing the economy.

The UK’s Data and Digital Infrastructure Minister, Julia Lopez, said of these new laws: “Today marks a new era where consumers can have greater confidence that their smart devices, such as phones and broadband routers, are shielded from cyber threats, and the integrity of personal privacy, data and finances better protected.” 
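To illustrate the banned-password rule, here is a minimal sketch of the kind of check a device manufacturer might run against candidate default passwords before shipping firmware. The blocklist here is a tiny invented sample, not the regulator’s actual list, and the acceptance criteria are illustrative only:

```python
# Minimal sketch of a default-password check a device maker might run.
# The blocklist is a tiny illustrative sample, not an official list.
COMMON_DEFAULTS = {"admin", "password", "12345", "123456", "guest", "root"}

def is_acceptable_default(password: str) -> bool:
    """Reject passwords that are blocklisted, too short, or trivially repetitive."""
    pw = password.lower()
    if pw in COMMON_DEFAULTS:
        return False
    if len(pw) < 8:
        return False
    if len(set(pw)) == 1:  # e.g. "aaaaaaaa"
        return False
    return True

print(is_acceptable_default("admin"))           # False
print(is_acceptable_default("Tr33-Fr0g-Lamp"))  # True
```

In practice, manufacturers will need to follow the specific requirements set out in the PSTI regime rather than ad-hoc rules like these, but the principle is the same: easily guessable defaults never leave the factory.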

The Major Role of Businesses 

NCSC Deputy Director for Economy and Society, Sarah Lyons, has highlighted the important role that businesses have to play in protecting the public by “ensuring the smart products they manufacture, import or distribute provide ongoing protection against cyber-attacks”. She has also advised all businesses and consumers that they can read the NCSC’s point of sale leaflet for an explanation of how the new Product Security and Telecommunications Infrastructure (PSTI) regulation affects them and how smart devices can be used securely.

What Does This Mean For Your Business? 

The issue of weak default passwords in devices enabling cybercrime is not new, and the news that the government is finally doing something about it via legislation is likely to be well-received. The new laws will have implications for businesses, consumers, and the overall UK economy.

For example, for device makers (and importers), the requirement to eliminate default password vulnerabilities and to provide clear avenues for reporting security issues places a significant onus on manufacturers to enhance their security protocols. This may not only involve revising the initial security features but also maintaining transparency about the duration of support for security updates. Such changes could, however, require these businesses to invest in better security frameworks, thereby potentially increasing operational costs. That said, it should also improve the marketability and trustworthiness of their products.

UK businesses stand to gain considerably from these heightened security measures. By bolstering the security standards of connected devices, the new laws may ensure that businesses that rely heavily on such technology, from retail to critical infrastructure, are less susceptible to the disruptions and financial losses associated with cyber-attacks. This enhanced security environment should help maintain business continuity and safeguard sensitive data, thereby helping to foster a more resilient economic landscape.

The new laws may also mean that consumers, who are increasingly concerned about their digital privacy and the security of their data, may be able to make more informed choices about and experience greater confidence in the products they choose to integrate into their daily lives. With manufacturers required to adhere to stricter security measures and provide ongoing updates, consumers can expect a new level of protection for their connected devices, which translates into safer personal and financial data.

Economically, by setting a new cybersecurity standard, the UK appears to be positioning itself as a leader in the safe expansion of digital infrastructure. This leadership could boost innovation in cybersecurity measures, potentially leading to growth in the tech sector and creating new opportunities for employment and development. Also, by fostering a safer digital environment, the UK may attract more digital businesses and investments, further stimulating economic growth.

Tech Insight : Exploring E-Signatures

In this tech-insight, we look at what e-signatures are, their benefits plus some of the main e-signature providers, as well as what to consider when choosing an e-signature service.

Popularity of E-signatures 

The initial growth of e-signatures happened in the early 2000s, due to the passage of laws such as the US Electronic Signatures in Global and National Commerce Act (ESIGN) in 2000 and the European Union’s Directive 1999/93/EC on a Community framework for electronic signatures. These laws established the legal framework for the use of e-signatures, providing them with the same legal status as handwritten signatures under certain conditions.

Further Growth During COVID-19 Pandemic 

The adoption of e-signatures accelerated in recent years, fuelled by the digital transformation of businesses and the need for remote work solutions during the COVID-19 pandemic. The physical distancing measures and the shift to online operations made digital processes essential for continuity in business, legal, and educational activities.

To give an idea of the size of the growing e-signature market: in 2020, Deloitte reported estimates that the market had reached between USD 2.3 billion and USD 2.8 billion, and was projected to grow into a USD 4.5-5 billion market by 2023 and to over USD 14 billion by 2026.
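As a quick sanity check on those projections, the implied compound annual growth rate (CAGR) from the upper 2020 estimate (USD 2.8 billion) to the 2026 figure (USD 14 billion) can be worked out directly:

```python
# Compound annual growth rate implied by the reported estimates:
# USD 2.8bn in 2020 growing to USD 14bn in 2026 (6 years).
start, end, years = 2.8, 14.0, 6
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 31% per year
```

A growth rate of roughly 31 per cent a year underlines why so many providers have entered this market.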

The Difference Between Electronic Signatures and Digital Signatures 

Electronic signatures (eSignatures) and digital signatures are both methods used to sign documents electronically, but they serve different purposes and operate based on different technical frameworks.

E-signatures

An e-signature is simply a broad term referring to any electronic process that indicates acceptance of an agreement or a record. This can be as simple as typing your name into a contract online, checking a box on a web form, or using a stylus or finger to sign on a touchscreen. E-signatures are meant to replace handwritten signatures in virtually any process, thereby providing a convenient and legally recognised way to obtain consent or approval on electronic documents.

Digital Signatures 

Digital signatures, on the other hand, are a specific subset of e-signatures that use cryptographic techniques to secure and verify the identity of the signer and ensure the integrity of the signed document. A digital signature is created using a digital certificate issued by a trusted Certificate Authority (CA) and provides a higher level of security than a basic e-signature. It doesn’t just verify the identity of the signer but also ensures that the document has not been altered / tampered with after signing.

The key differences between electronic and digital signatures are:

– The level of security. Digital signatures provide a higher level of security through encryption and authentication, ensuring the integrity and non-repudiation of the document. E-signatures, while secure, do not inherently include these cryptographic measures.

– Verification. Digital signatures verify the signer’s identity through digital certificates, whereas e-signatures may not have a robust mechanism for identity verification.

– Legal and regulatory compliance. Both e-signatures and digital signatures are legally binding in many jurisdictions around the world. However, certain documents or transactions may specifically require the use of digital signatures for added security and for compliance with regulatory standards.
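The integrity and verification points above can be sketched using only Python’s standard library. Note the simplification: real digital signatures use public-key cryptography with CA-issued certificates, whereas this sketch uses an HMAC (a shared-secret construction) purely to show how any post-signing alteration of a document is detected:

```python
import hashlib
import hmac

# Simplified sketch: real digital signatures use public-key cryptography and
# CA-issued certificates; an HMAC is used here only to illustrate tamper detection.
SECRET_KEY = b"demo-signing-key"  # placeholder key for illustration

def sign(document: bytes) -> str:
    """Produce a signature (hex digest) binding the key to the document's content."""
    return hmac.new(SECRET_KEY, document, hashlib.sha256).hexdigest()

def verify(document: bytes, signature: str) -> bool:
    """Check that the document has not been altered since signing."""
    return hmac.compare_digest(sign(document), signature)

contract = b"Party A agrees to pay Party B 1,000 GBP."
sig = sign(contract)

print(verify(contract, sig))                                     # True
print(verify(b"Party A agrees to pay Party B 9,000 GBP.", sig))  # False
```

Changing even one character of the signed document produces a completely different digest, which is what gives digital signatures their tamper-evidence property.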

The Benefits 

E-signatures offer a number of benefits compared to traditional paper signatures. These benefits include:

– Reduced turnaround times, because there is no need to physically meet up or to rely on the postal service (thereby streamlining the process).

– Lower costs associated with paper-based transactions, i.e. no need for printing, paper, ink, postage, shipping, and storage of physical documents, or for office space dedicated to storing paper documents.

– Convenience and enhanced customer experience by facilitating easier, faster, and more secure transactions. For example, e-signatures can be obtained from anywhere, at any time, using a computer or personal mobile device thereby eliminating the need to physically meet or to scan or mail documents for a signature (significantly speeding up the process of signing agreements or forms).

– Speed. For example, documents using e-signatures can be sent, signed, and returned in minutes rather than days or weeks, reducing turnaround times for contracts, approvals, and other processes.

– Security. This is because e-signature solutions often come with security features like encryption, audit trails, and tamper-evident seals, making them more secure than paper documents, which can be easily lost, damaged, or tampered with.

– Accuracy and compliance. E-signature platforms can enforce the completion of all required fields in a document before it can be signed, reducing errors and omissions. They also help in maintaining compliance with laws and regulations by ensuring that the process of obtaining signatures follows legal requirements.

– Environmental benefits from the reduced reliance on paper, such as conserving resources and reducing the environmental impact associated with paper production, printing, and waste.

– Legality and acceptance. The adoption of laws and regulations globally means that the legal validity of e-signatures is now recognised, and they have become widely accepted as a legal means of obtaining consent or approval on documents.

– Global reach. E-signatures make it easier to conduct business internationally by allowing documents to be signed across borders without the need for physical presence, overcoming geographical and time-zone barriers.

– E-signature solutions also typically provide detailed audit trails, recording each step of the signing process, including who signed the document, when, and where. This enhances transparency and can be crucial for legal and compliance purposes.
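The audit-trail benefit above can be sketched as a simple append-only log of signing events. All field names here are hypothetical rather than any particular provider’s schema; the key idea is that each event records who acted, when, and a hash of the document as it stood:

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of an e-signature audit trail: an append-only list of
# events, each recording who acted, when, and a hash of the document state.
@dataclass
class AuditEvent:
    signer: str
    action: str       # e.g. "viewed", "signed"
    doc_sha256: str
    timestamp: str

def record_event(trail: list, signer: str, action: str, document: bytes) -> None:
    """Append an event; the document hash makes later tampering evident."""
    trail.append(AuditEvent(
        signer=signer,
        action=action,
        doc_sha256=hashlib.sha256(document).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))

trail: list = []
doc = b"Consulting agreement v1"
record_event(trail, "alice@example.com", "viewed", doc)
record_event(trail, "alice@example.com", "signed", doc)
print(len(trail), trail[-1].action)  # 2 signed
```

Because each entry hashes the document content, a dispute later on can be settled by checking whether the document presented matches the hash recorded at signing time.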

Examples of e-signature Providers 

Some examples of popular e-signature providers include:

– DocuSign. The global market leader in electronic signatures (with a reported 75% share), DocuSign offers a comprehensive e-signature solution that is widely used in the UK. It provides a secure and easy way to sign documents online and complies with UK and EU regulations.

– Adobe Sign. Part of the Adobe Document Cloud, Adobe Sign is another popular choice for e-signatures. Being Adobe, it integrates with other Adobe products and Microsoft Office and offers e-signature services that are compliant with EU eIDAS and other regulations.

– HelloSign. A Dropbox company, HelloSign provides a simple and intuitive e-signature service and is designed for businesses of all sizes.

– Signable. This e-signature provider is tailored to meet the needs of UK businesses, offering compliance with UK and EU laws (including GDPR).

– eSign Genie is a user-friendly e-signature solution that offers features like document management, custom templates, and bulk sending. It complies with legal standards in the UK, EU, and beyond.

– PandaDoc is document automation software. As such, it offers more than just e-signatures but includes them as part of its suite of features.

– Zoho Sign (part of the Zoho suite of online productivity tools) offers secure digital signature capabilities and (of course) it integrates with other Zoho apps. It also integrates with third-party platforms.

Challenges / Issues 

There are, however, some challenges and considerations that UK businesses should weigh up before choosing an e-signature service. For example:

– Choose a provider that you know offers compliance with UK, EU (eIDAS regulation), as well as international e-signature laws to ensure legal validity.

– Make sure the service you choose has robust security that can protect against unauthorised access, tampering, and fraud, and which includes encryption and authentication methods.

– It’s easier to use e-signature tools that seamlessly integrate with existing business systems to avoid workflow disruptions.

– Make sure adequate training and support are available for employees (and clients) on the use of your chosen service.

– Take into account the ‘total’ costs (especially for SMEs), i.e. include subscriptions and potential additional fees.

– Make sure that the e-signatures provided by your chosen service are recognised and legally binding in all jurisdictions where your business operates.

– Ensure GDPR and UK data protection law compliance, focusing on personal data safeguarding.

– Assess the provider’s platform reliability, support services, and uptime guarantees.

– Choose a solution that can grow with your business (i.e. make sure it’s scalable), to accommodate increasing demands.

– Ensure the tool/service you choose offers comprehensive audit trails and secure document storage for legal and compliance purposes.

What Does This Mean For Your Business? 

In the evolving digital landscape, UK businesses are finding that adopting e-signature technology is not just an option but a necessity for staying competitive and efficient. The transition from paper-based to digital processes, accelerated by global events like the COVID-19 pandemic, has underscored the importance of e-signatures in ensuring business continuity, enhancing operational efficiency, and reducing costs. E-signatures streamline transactions, ensure security, and offer the flexibility to conduct business remotely, making them a practically indispensable tool in today’s digital economy.

For UK businesses contemplating e-signature solutions, the choice of provider is crucial. It’s essential to select a service that not only complies with UK and EU laws, including the eIDAS regulation and GDPR, but also offers robust security measures to safeguard against fraud and tampering. The integration capabilities of the e-signature solution with existing business systems should also be a consideration to help minimise disruption and to enhance user adoption.

While e-signatures have many benefits, businesses should be aware of the need to navigate challenges such as legal compliance, data privacy, and the cost of implementation. These challenges, however, are surmountable with a bit of research, planning, and the selection of the right e-signature provider. The importance of e-signatures is set to grow, driven by their legal acceptance, global reach, and the continued digital transformation of industries.

Tech Insight : Cameras In Airbnb Properties – What Are The Rules?

Following the Metro recently highlighting the issue of undisclosed cameras being used by a small number of Airbnb hosts, we take a look at what the rules say, reports in the news of this happening, and what you can do to protect yourself.

Do Airbnb Hosts Have The Right To Film Guests? 

You may be surprised to learn that the answer to this question is yes: hosts do have the right to install surveillance devices in certain areas of their properties (which may result in guests being filmed), but this is heavily regulated and restricted for privacy reasons.

When/Where/Why/How Is It OK For Hosts To Film Guests? 

The primary legitimate reason for hosts to install surveillance devices is for security purposes. They are not allowed to use them for any invasive or unethical purposes. Airbnb’s community standards, for example, emphasise respect for the privacy of guests and any violation of these standards can lead to the removal of the host from the platform.

Clear Disclosure 

Airbnb’s company rules say that monitoring devices (e.g. cameras) may be used, but only in common spaces (such as living rooms, hallways, and kitchens), and then only if Airbnb hosts disclose them in their listings. In short, if a host has any kind of surveillance device, they must clearly mention it in their house rules or property listing so that guests are made aware of these devices before they book the property.

What About Local Laws? 

It is also the case that although disclosed cameras in common spaces on a property may be OK by the company’s rules, Airbnb hosts must also adhere to local laws and regulations regarding surveillance. This can vary widely from place to place and, in some regions, recording audio without consent is illegal, whereas video might be permissible if disclosed.

Hidden Cameras 

Even though Airbnb rules are relatively clear, there appears to be anecdotal and news evidence that some Airbnb guests have discovered undisclosed surveillance devices in areas of Airbnb properties where they should not be installed. Examples that have made the news include:

– Back in 2019, it was reported that a couple staying for one night at an Airbnb property in Garden Grove, California discovered a camera hidden in a smoke detector directly above the bed.

– In July 2023, a Texas couple were widely reported to have filed a lawsuit against an Airbnb owner, claiming he had put up ‘hidden cameras’ in the Maryland property they had rented for 2 nights in August 2022. According to the Court documents of Kayelee Gates and Christian Capraro, the couple became suspicious after Capraro discovered multiple hidden cameras disguised as smoke detectors in the bedroom and bathroom.

– Last month, a man (calling himself Ian Timbrell) alleged in a post on X that he had found a camera tucked between two sofa cushions at his Aberystwyth Airbnb.

Wouldn’t It Be Better To Disallow Any Cameras Inside An Airbnb Rental Property? 

Banning all cameras at Airbnb rental properties might initially seem like a straightforward solution to privacy concerns, yet there are important factors to consider. Some hosts may legitimately need to monitor common areas, such as entrances, for security purposes (perhaps the property is in an area where crime has been a problem), to deter theft and vandalism and to provide evidence if a crime occurs. On the other hand, a complete ban on cameras would address the privacy concerns of guests, ensuring they feel comfortable and secure during their stay.

Airbnb’s current policy attempts to balance security and privacy by allowing cameras in certain areas while requiring full disclosure and banning them in private spaces like bedrooms and bathrooms. However, enforcing a complete ban on cameras would be very challenging, as hidden cameras are, by nature, difficult to detect and even if there was a ban, some owners may simply not comply. The Airbnb model is built on trust between hosts and guests, and clear communication and transparency about security measures, including camera usage, are crucial for maintaining this trust. While a total ban on cameras might seem like a simple solution to privacy concerns, it overlooks the legitimate security needs of hosts. A balanced approach with clear guidelines and strict enforcement might be more effective in protecting both guest privacy and host security.

How To Check 

If you’re worried about possibly being filmed/recorded by hidden and undisclosed surveillance devices in a rented Airbnb property, here are some ways you can search the property and potentially reveal such devices:

– Inspect any gadgets. Check smoke detectors and alarm clocks, as they are known hiding places for cameras, and examine any other tech that seems out of place. You may also want to check the shower head.

– Search for lenses. Making sure the room is dark, use a torch (such as your phone’s torch) to spot reflective camera lenses in objects like decor or appliances.

– Use phone apps like Glint Finder for Android or Hidden Camera Detector for iOS to find hidden cameras.

– Check storage areas, e.g. examine drawers, vents, and any openings in walls/ceilings.

– Check mirrors. Many people worry about two-way mirrors with cameras behind them. Ways to check include lifting any mirrors to see the wall behind them, or turning off the room light and shining a torch at the mirror to see if an area behind it is visible.

– Check for infrared lights (which can be used in movement-sensitive cameras). These may be spotted by using your phone’s camera in the dark and looking out for any small purple or pink lights, whether flashing or steady.

– Scan the property’s Wi-Fi network and smart home devices for unknown devices.

– Unplug the Airbnb property’s router. Stopping the Wi-Fi at source should disable surveillance devices and may reveal whether the owner is monitoring the property, e.g. it may prompt the host to ask about the router being unplugged.

– If you’re particularly concerned, buy and bring an RF signal detector with you. Widely available online, this is a device that can find any devices emitting Bluetooth or Wi-Fi signals, e.g. wireless surveillance cameras, tracking devices and power supplies.
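The network-scan tip above boils down to a simple comparison. Assuming you have obtained a list of devices on the property’s Wi-Fi (e.g. from a network scanner app), you can compare it against what the listing disclosed; the device names below are invented purely for illustration:

```python
# Sketch of the network-scan check: compare devices found on the property's
# Wi-Fi against those disclosed in the listing. All device names are invented.
disclosed_devices = {"router", "smart-tv", "doorbell-cam"}  # from the listing
found_devices = {"router", "smart-tv", "doorbell-cam", "ipcam-bedroom-01"}  # from a scan

undisclosed = found_devices - disclosed_devices
for name in sorted(undisclosed):
    print(f"Undisclosed device worth investigating: {name}")
```

A device appearing on the network but not in the listing is not proof of a hidden camera (it could be a neighbour’s gadget or an innocuous appliance), but it is a reasonable prompt to investigate further or ask the host.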

What Does This Mean For Your Business? 

The issue of undisclosed cameras in Airbnb properties raises important considerations for Airbnb as a company, its hosts, and travellers. For Airbnb, the challenge lies in upholding and enforcing privacy standards to maintain user trust. This could involve enhancing their policies, perhaps even investing in technology or an inspection process for better detection of undisclosed devices, and/or providing more reassuring information about the issue, thereby safeguarding guest security, ensuring host accountability, and helping to protect their brand reputation.

It should be said that most Airbnb hosts abide by the company’s rules but are caught in a delicate balancing act between providing security and respecting the privacy of their guests. Any misuse of surveillance devices can, of course, have serious legal consequences and potentially harm a host’s reputation and standing on the platform. However, even just a few stories in the news about the actions of one or two hosts can have a much wider negative effect on consumer trust in Airbnb and can be damaging for all hosts. It could even simply deter people from using the platform altogether.

For some travellers, this situation may make them feel they must proactively take the responsibility for their own privacy (which may not reflect so well on Airbnb). They may feel as though they need to be informed about their rights, familiarise themselves with detection methods and remain vigilant during their stays.

This whole scenario emphasises the need for a continuous update of policies and practices by Airbnb to keep pace with technological advancements and the varying legal frameworks in different regions. It also highlights the importance of clear communication and transparency between the company, its hosts, and guests to maintain a trustworthy and secure environment.

Tech Insight : A Dozen Ways Copilot Can Help Your Business

With Microsoft’s Copilot AI assistant now embedded within the Microsoft 365 apps and services to help users save time and increase productivity, we look at a dozen things you can do with Copilot to help your business.

Microsoft 365 Copilot 

Copilot fuses GPT-4 and Microsoft Graph. More specifically, Copilot is designed to integrate the capabilities of GPT-4 (a sophisticated large language model developed by OpenAI) with the extensive data and connectivity provided by Microsoft Graph.

Microsoft Graph is an API platform that enables developers to access and integrate data and insights from various Microsoft services and applications, such as Office 365, Windows 10, and Enterprise Mobility + Security, facilitating the creation of rich, interconnected applications within the Microsoft ecosystem.

This integration allows Copilot to leverage the conversational AI capabilities of ChatGPT in conjunction with the rich data ecosystem of Microsoft 365, enhancing productivity and offering more advanced features within Microsoft’s suite of applications.
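For developers curious what Microsoft Graph access looks like in practice, here is a minimal sketch of preparing a Graph request for the signed-in user’s recent mail. The request is only constructed, not sent, and the bearer token is a placeholder; a real application would first obtain a token via the Microsoft identity platform (OAuth 2.0):

```python
import urllib.parse
import urllib.request

# Sketch only: builds (but does not send) a Microsoft Graph request for the
# signed-in user's most recent messages. The bearer token is a placeholder.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"
params = urllib.parse.urlencode({"$top": "5", "$select": "subject,from"})
url = f"{GRAPH_BASE}/me/messages?{params}"

request = urllib.request.Request(
    url,
    headers={"Authorization": "Bearer <access-token-placeholder>"},
)
print(request.full_url)
```

It is this kind of programmatic access to mail, calendars, files, and organisational data that Copilot draws on when grounding its answers in a user’s own Microsoft 365 content.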

Microsoft says Copilot can increase an employee’s productivity by as much as 50 per cent and that it can unlock the other 90 per cent of its apps’ features that most users never try.

A Dozen Ways Copilot Can Help 

With this in mind, here are a dozen ways that you can use Copilot to help with your business:

1. Automating Customer Service Responses 

Copilot can manage routine customer service queries by providing instant, accurate responses to FAQs. This helps by reducing wait times and improving customer satisfaction. It can also act as a way to identify and escalate the more complex issues to human representatives, ensuring a balance between efficiency and having a personal touch.

2. Generating Reports and Summaries 

Microsoft 365 Copilot can also analyse large sets of data to generate detailed reports and executive summaries. This can be really helpful in identifying key metrics and trends, which are essential for strategic planning and decision-making, without the need for manual data crunching. This is an important way that Copilot can save time and effort and add more transparency to a business.

3. Drafting and Editing All Manner Of Business Documents 

Copilot assists in creating professional business documents, emails, and presentations. It offers suggestions on content, structure, and style, ensuring that the documents are not only well-written but also tailored to their intended audience. Again, this can save time and improve productivity but also improve the quality of business communications.

4. Data Analysis and Insights 

By analysing complex datasets, Copilot can uncover valuable insights, helping businesses understand customer behaviour, market trends, and operational efficiency. This leads to more informed decision-making and strategy development.

5. Scheduling and Calendar Management 

It streamlines calendar management by scheduling meetings, appointments, and events based on your availability. It can also send automated reminders and updates, ensuring efficient time management and reducing scheduling conflicts.

6. Training and Educational Resources 

Copilot can create custom training materials and educational content that are specifically tailored to a company’s processes and systems. This can help in onboarding new employees more efficiently and keeping the workforce updated on new tools and practices. This can, of course, also save money on training and potentially improve the efficiency of training (because it can be more targeted and customised).

7. Automating Routine Tasks 

For tasks like data entry, inventory management, and basic accounting, Copilot can automate these processes, thereby reducing the risk of human error and allowing employees to focus on more strategic and creative tasks.

8. Language Translation and Localisation 

Microsoft Copilot can also be used to facilitate global business operations by translating documents and communications into various languages, ensuring that businesses can effectively communicate with international clients and partners.

9. Market Research and Analysis 

Copilot can scour the internet and various databases to conduct market research, analyse industry trends, and provide actionable insights, helping businesses stay ahead in their market.

10. Social Media Management 

Copilot can also help with creating, scheduling, and analysing social media posts. Copilot can also track engagement metrics, thereby helping businesses understand their audience better and refine their social media strategies.

11. Project Management Assistance  

Microsoft 365 Copilot can also help with tracking project milestones, resource allocation, and progress updates. This can ensure that projects stay on track, resources are efficiently used, and stakeholders are kept informed.

12. Legal and Compliance Documentation 

One other really helpful aspect of Copilot is that it can assist in drafting legal documents and ensure that business operations comply with relevant laws and regulations. This is crucial for mitigating legal risks and maintaining a company’s reputation.

What Does This Mean For Your Business? 

The integration of Microsoft’s Copilot AI into the Microsoft 365 suite is a significant advancement for 365 and for business technology generally. With Copilot embedded in popular 365 apps, businesses now have a powerful ‘always on’ tool at their disposal to help with productivity, efficiency, creativity, adding value, and more. As such, this integration goes beyond mere convenience, and it taps into the unrealised potential of Microsoft 365, unlocking functionalities that many users have yet to explore, i.e. it can help businesses to leverage (and get more out of) what they’re already paying for from Microsoft.

By being able to quickly and easily automate tasks, e.g. from customer service to complex data analysis, Copilot not only saves time but also enhances creativity and leaves employees free to focus on more strategic and innovative tasks, thereby elevating the quality of work and driving business growth. Also, Copilot’s intuitive, natural language capabilities, akin to those of GPT-4, make it a user-friendly assistant that can simplify complex tasks and make technology more accessible to everyone in the organisation.

Copilot, therefore, serves as a tool for upskilling employees. It exposes them to a broader range of Microsoft 365 capabilities, fostering a deeper understanding and more efficient use of the software. This aspect of Copilot is particularly valuable as it achieves upskilling organically, without the need for additional training resources. It could be said that Copilot is not just enhancing productivity, but it’s also expanding the technological proficiency of the entire workforce.

For businesses, in addition to streamlining operations, Copilot can also help deliver a competitive edge: the insights gleaned from Copilot’s data analysis and market research capabilities can inform strategic decisions, offering a clearer view of market trends and customer behaviours. Its ability to handle language translation and support compliance with legal standards, positioning businesses for global reach and operational safety, may also be of real use to many businesses.

Microsoft 365 Copilot, therefore, is more than an incremental update to business software: it could prove to be a transformative tool that significantly enhances how businesses operate (if businesses make sure they use it). The reward for using what is a comprehensive and relatively easy-to-use solution that unlocks the power of the 365 apps could be to propel your business into a new era of efficiency and innovation.

Tech Insight : How A Norwegian Company Is Tackling ‘AI Hallucinations’

Oslo-based startup Iris.ai has developed an AI Chat feature for its Researcher Workspace platform which it says can reduce ‘AI hallucinations’ to single-figure percentages.

What Are AI Hallucinations? 

AI hallucinations (sometimes called ‘AI-generated illusions’) occur when AI systems generate or disseminate information that is inaccurate, misleading, or simply false. The fact that the information appears convincing and authoritative despite lacking any factual basis means that it can create problems for companies that use the information without verifying it.

Examples 

A couple of high-profile examples of when AI hallucinations have occurred are:

– When Facebook/Meta demonstrated its Galactica LLM (designed for science researchers and students), it was asked to draft a paper about creating avatars, and the model cited a fake paper from a genuine author working on that subject.

– Back in February, when Google demonstrated its Bard chatbot in a promotional video, Bard gave incorrect information about which telescope first took pictures of a planet outside the Earth’s solar system. Although the error occurred before a presentation by Google, it was widely reported, resulting in Alphabet Inc losing $100 billion in market value on its shares.

Why Do AI Hallucinations Occur? 

There are a number of reasons why chatbots (e.g. ChatGPT) generate AI hallucinations, including:

– Generalisation issues. AI models generalise from their training data, and this can sometimes result in inaccuracies, such as predicting incorrect years due to over-generalisation.

– No ground truth. LLMs don’t have a set “correct” output during training, differing from supervised learning. As a result, they might produce answers that seem right but aren’t.

– Model limitations and optimisation targets. Despite advances, no model is perfect. They’re trained to predict likely next words based on statistics, not always ensuring factual accuracy. Also, there has to be a trade-off between a model’s size, the amount of data it’s been trained on, its speed, and its accuracy.

What Problems Can AI Hallucinations Cause? 

Using the information from AI hallucinations can have many negative consequences for individuals and businesses. For example:

– Reputational damage and financial consequences (as in the case of Google and Bard’s mistake in the video).

– Potential harm to individuals or businesses, e.g. through taking and using incorrect medical, business, or legal advice (although ChatGPT passed the Bar Examination and business school exams early this year).

– Legal consequences, e.g. through publishing incorrect information obtained from an AI chatbot.

– Adding to time and workloads in research, i.e. through trying to verify information.

– Hampering trust in AI and AI’s value in research. For example, an Iris.ai survey of 500 corporate R&D workers showed that although 84 per cent of workers use ChatGPT as their primary AI research support tool, only 22 per cent of them said they trust it and systems like it.

Iris.ai’s Answer 

Iris.ai has therefore attempted to address these factuality concerns by creating a new system with an AI engine for understanding scientific text. The company developed it primarily for use in its Researcher Workspace platform (where it has been added as a chat feature), so that its mainly large clients, such as the Finnish Food Authority, can use it confidently in research.

Iris.ai has reported that the system accelerated research into a potential avian flu crisis and can save around 75 per cent of a researcher’s time (by removing the need to verify whether information is correct or made up).

How Does The Iris.ai System Reduce AI Hallucinations? 

Iris.ai says its system is able to address the factuality concerns of AI using a “multi-pronged approach that intertwines technological innovation, ethical considerations, and ongoing learning.” This means using:

– Robust training data. Iris.ai says that it has meticulously curated training data from diverse, reputable sources to ensure accuracy and reduce the risk of spreading misinformation.

– Transparency and explainability. Iris.ai says using advanced NLP techniques, it can provide explainability for model outputs. Tools like the ‘Extract’ feature, for example, show confidence scores, allowing researchers to cross-check uncertain data points.

– The use of knowledge graphs. Iris.ai says it incorporates knowledge graphs from scientific texts, directing language models towards factual information and reducing the chance of hallucinations. The company says this is because this kind of guidance is more precise than merely predicting the next word based on probabilities.

Improving Factual Accuracy 

Iris.ai’s techniques for improving factual accuracy in AI outputs, therefore, hinge upon using:

– Knowledge mapping, i.e. Iris.ai maps key knowledge concepts expected in a correct answer, ensuring the AI’s response contains those facts from trustworthy sources.

– Comparison to ground truth. The AI outputs are compared to a verified “ground truth.” Using the WISDM metric, semantic similarity is assessed, including checks on topics, structure, and vital information.

– Coherence examination. Iris.ai’s new system reviews the output’s coherence, ensuring it includes relevant subjects, data, and sources pertinent to the question.
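Iris.ai’s WISDM metric is proprietary and its details aren’t public, but the general idea of comparing an AI output against a verified “ground truth” can be illustrated with a much simpler stand-in: a bag-of-words cosine similarity check. The function names, the word-count approach, and the 0.7 threshold below are all illustrative assumptions, not Iris.ai’s actual method:

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Crude bag-of-words cosine similarity between two texts (0.0 to 1.0)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product over shared words, divided by the product of vector magnitudes.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    mag = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / mag if mag else 0.0

def check_against_ground_truth(answer: str, ground_truth: str, threshold: float = 0.7) -> bool:
    """Flag an AI answer as suspect if it drifts too far from the verified text."""
    return cosine_similarity(answer, ground_truth) >= threshold
```

A production system would use semantic embeddings rather than raw word counts (so that paraphrases score highly), but the pattern is the same: score the output against trusted material and reject or flag anything below a threshold.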

These combined techniques set a standard for factual accuracy and the company says its aim has been to create a system that generates responses that align closely with what a human expert would provide.

What Does This Mean For Your Business? 

It’s widely accepted (and publicly admitted by AI companies themselves) that AI hallucinations are an issue that can be a threat for companies (and individuals) who use the output of generative AI chatbots without verification. Giving false but convincing information highlights both one of the strengths of AI chatbots, i.e. how convincingly they can present information, and one of their key weaknesses.

As Iris.ai’s own research shows, although most companies are now likely to be using AI chatbots in their R&D, they are aware that they may not be able to fully trust all outputs. This means losing some of the potential time savings through having to verify outputs, as well as facing many potentially costly risks. Although Iris.ai’s new system was developed specifically for understanding scientific text, with a view to offering it as a useful tool for researchers who use its own platform, the fact that it can reduce AI hallucinations to single-figure percentages is impressive. Its methodology may, therefore, have gone a long way toward solving one of the big drawbacks of generative AI chatbots and, were it not so difficult to scale up for popular LLMs, it may already have been more widely adopted.

As good as it appears to be, Iris.ai’s new system still cannot solve the issue of people simply misinterpreting the results they receive.

Looking ahead, some tech commentators have suggested that methods like using coding language rather than the diverse range of data sources and collaborations with LLM-makers to build larger datasets may bring further reductions in AI hallucinations. For most businesses now, it’s a case of finding the balance of using generative AI outputs to save time and increase productivity while being aware that those results can’t always be fully trusted and conducting verification checks where appropriate and possible.

Tech Insight : No Email Backup For Microsoft 365?

In this insight, we look at what many users think to be a surprising fact in that Microsoft 365 doesn’t provide a traditional email backup solution, and we look at what businesses can do about this.

Did You Know?…. 

Contrary to popular belief, Microsoft 365 (previously known as Office 365) is not designed as a traditional “backup” solution in the way many businesses might think of backups. Most importantly, email isn’t properly “backed up” by Microsoft. Instead, the onus is on the business owner to find their own email backup solution. In fact, Microsoft 365’s default backup and recovery settings only really protect your data for 30-90 days on average.

So, How Does It Handle Email and Other Data? 

Although Microsoft 365 doesn’t automatically provide a traditional email backup, it does provide some email and data handling protections that can include aspects of email. For example:

– Microsoft has multiple copies of your data as part of its ‘data resilience.’  For example, if there’s an issue with one data centre or a disk fails, they can recover data from their copies. Although this can help, it’s not the same as a backup that can be used to recover from accidental deletions, malicious activity, etc.

– Microsoft 365 provides retention policies that allow you to specify how long data (like emails) are kept in user mailboxes. Even if a user deletes an email, it can, therefore, be retained in a hidden part of their mailbox for a period you specify.

– For legal purposes, it is possible to put an entire mailbox (or just specific emails) on “Litigation Hold”, which basically ensures that the emails can’t be deleted or modified. Also, eDiscovery tools / document review software can be used by legal professionals for searching across the environment for specific data, e.g. to find emails, documents, CAD/CAM files, databases, image files, and more.

– Microsoft’s archiving, i.e. where older emails can be automatically moved to an archive mailbox, can be one way to help businesses ensure that critical data is retained without cluttering the primary mailbox.

– When users delete emails, they go to the ‘Deleted Items’ folder. If emails are deleted from there, they go to the ‘Recoverable Items’ folder, where they remain for another 14 days (by default, but this can be extended) and can, therefore, be recovered.

Limitations 

Although these features help with retaining some important business data and emails, they’re not a substitute for a dedicated and complete email backup solution, and they have their limitations, which are:

– They may not protect against all types of data loss, especially if data gets deleted before a retention policy is set or if the retention period expires. For example, with email archiving, when an item reaches the end of its aging period, it is automatically deleted from Microsoft 365.

– They may not facilitate easy recovery if a user accidentally (or maliciously) deletes a vast amount of critical data.

– They don’t offer a separate, offsite backup in case of catastrophic issues or targeted attacks.

Third-Party Backup Solutions

Given these limitations and given that most businesses would feel more secure knowing that they have a proper email backup solution in place (such as for the sake of business continuity and disaster recovery following a cyber-attack or other serious incident), many businesses opt for third-party backup solutions specifically designed for Microsoft 365 to provide another layer of protection.

These solutions can offer more traditional backup and valued recovery capabilities, such as ‘point-in-time restoration’.

Backup Solutions

There are many examples of third-party Office 365 and email backup solutions and for most businesses, their managed support provider is able to provide an email backup solution that meets their specific needs.

Does Google Backup Your Gmail Emails? 

As with Microsoft 365, Google provides a range of data retention and resilience features for Gmail (especially for its business-oriented services like Google Workspace) but these aren’t traditional backup solutions. The retention and resilience features Google’s Gmail does provide include:

– For data resilience, Google has multiple data copies. If one fails, another ensures data availability.

– Deleted Gmail emails stay in ‘Trash’ for 30 days, allowing user recovery.

– ‘Google Vault’ for Google Workspace sets email retention rules, which can be used to preserve emails even if they are deleted in Gmail.

– “Google Takeout” (data export) is probably the closest thing to backup that Gmail offers its users. Takeout lets users export/download their Gmail data for offline storage. Also, the exported MBOX file can be imported into various email clients or platforms. However, this isn’t necessarily the automatic, ongoing backup solution that many businesses feel they need.

Like 365, Google Workspace offers archiving to retain critical emails beyond Gmail’s regular duration.
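Once a Takeout export has been downloaded, the MBOX file it contains can be read programmatically with Python’s standard-library `mailbox` module, e.g. to check what a backup actually contains. This is a minimal sketch; the function name and the idea of summarising sender/subject pairs are illustrative, not part of Takeout itself:

```python
import mailbox

def summarise_mbox(path: str) -> list[tuple[str, str]]:
    """Return (sender, subject) pairs for each message in an MBOX export."""
    box = mailbox.mbox(path)
    # Each message exposes its headers dict-style; missing headers default to "".
    return [(msg.get("From", ""), msg.get("Subject", "")) for msg in box]
```

The same module can also write messages back out, which makes it a simple basis for scripted, repeatable offline archiving of a Takeout export.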

Limitations

As with Microsoft 365’s data retaining features, these also have their limitations, such as:

– They might not protect against all types of data loss, especially if emails are deleted before retention policies are set or if the retention period expires.

– They might not offer an easy recovery process for large-scale data losses.

– They don’t provide a separate, offsite backup.

What Can Gmail Users Do To Back Up Their Email?

In addition to simply using Google Takeout for backups, other options that Gmail users could consider for email backup include:

– Third-party backup tools, such as UpSafe, Spinbackup, and others.

– Using an email client, e.g. Microsoft Outlook. For example, once set up, the client will download and store a local copy of the emails, and regularly backing up the local machine or the email client’s data will include these emails.

– Setting up email forwarding to another account, although this may be a bit rudimentary for many businesses, and it won’t back up existing emails.

– While a bit tedious, businesses could choose to manually forward important emails to another email address or save emails as PDFs.

– Google Workspace Vault can technically enable Workspace admins to set retention rules, ensuring certain emails are kept even if they’re deleted in the main Gmail interface.

What Does This Mean For Your Business? 

You may (perhaps rightly) be surprised that Microsoft 365 and Google’s Gmail don’t specifically provide email backup as a matter of course.

Considering we operate in a business environment where data is now a critical asset of businesses and organisations, email is still a core business communications tool, and cybercrime such as phishing and malware (including ransomware) attacks are common threats, having an effective, regular, and automatic business backup solution in place is now essential, at least for business continuity and disaster recovery. Although Microsoft and Google offer a variety of data retention features, these have clear limitations and are not really a substitute for the peace of mind and confidence of knowing that the emails that are the lifeblood of the business (and contain sensitive and important data) are being backed up regularly, securely, and reliably.

For many businesses and organisations, therefore, their IT support company (or MSP – ‘managed service provider’) is the obvious and sensible first stop for getting a reliable backup solution for their Microsoft 365 emails.

This is because their IT support company is likely to already have a suitable solution that they know well, along with an in-depth understanding of the business’s infrastructure, requirements, and unique challenges. This means that they can tailor their backup solution to fit specific client needs, ensuring seamless integration with existing systems. Also, their first-hand knowledge of a business’s operations positions them better for rapid response and effective resolution in case of data restoration requirements or backup issues. For businesses, lowering risk by entrusting email backup to a known entity can also streamline communication and support processes, making the overall backup and recovery experience more efficient and reliable.

Tech Insight : How To Make a QR Code

In this tech insight, we look at QR codes, the many different methods to generate them, the benefits of doing so, and the future for QR codes as the successor to barcodes.

What Is A QR code? 

A QR (Quick Response) code, first designed in 1994 by Japanese company ‘Denso Wave’, is a type of two-dimensional barcode. It looks like a square grid made up of smaller black and white squares (modules) and typically features three larger square patterns in three of its corners, which help scanners identify and orient the code. The black and white squares within the grid encode the data. Unlike a one-dimensional barcode, which represents data in a series of vertical lines (based on the dots and dashes in Morse code), a QR code stores data in both vertical and horizontal arrangements. This means that a lot more data can be encoded in a QR code than in a bar code, and a QR code can contain complex information, e.g. text, URLs, and other data types.

Making A QR Code 

There are several ways you can make your own QR code. If you want to quickly share a URL of interest with others, it’s possible to make a QR code in Microsoft Edge that can be shared, and which directs them to that web page. This could be particularly useful if you want to open the same web page on a mobile device or share it with someone else without having to type or text the entire URL. Here’s how to make a QR code for a URL in Edge:

– Open Edge and go to the web page you want to make a QR code for.

– Right-click on a blank area of the web page and select ‘Create QR code for this page’ and choose either the option to ‘Copy’ (to paste and share it) or ‘Download’ (to get a png image download of the QR code).

– A QR code symbol also appears at the right-hand side of the address bar, enabling you to re-use the code by clicking on it (which launches another QR code copy/download window).

Making A QR Code For A URL In Google Chrome 

To make a QR code for a URL using Google Chrome, the process is the same, but a QR code symbol doesn’t appear in the address bar.

Safari? 

For the Safari browser, a QR code can’t be generated unless a Safari QR code generator extension or an online QR code generator is used.

Online QR Code Generators 

You can also use online QR code generators. Examples include https://www.qr-code-generator.com/, https://www.the-qrcode-generator.com/, and many more.

Other Options 

Other options for making a QR code include:

– Using free open-source software, e.g. LibreOffice: open the ‘Insert’ menu, hover over ‘OLE,’ click ‘QR and Barcode,’ and paste in the URL to be converted to a QR code.

– Mobile apps for Android or iOS. These apps often have the function to generate QR codes in addition to reading them. Examples include: QR Code Reader and Scanner, QR TIGER, QR & Barcode Scanner, QR Code Reader, NeoReader, and many more.

– Web browser extensions or add-ons.

– QR Code APIs e.g., QRServer’s free API or Goqr.me’s API.
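The API route can be automated with a few lines of code. The sketch below builds a request URL for the QRServer API (the endpoint shown is the one commonly documented for goqr.me’s free service; the function name and default size are illustrative assumptions, and the provider’s documentation should be checked before relying on it). Fetching that URL returns a PNG image of the QR code:

```python
from urllib.parse import urlencode

def qr_image_url(data: str, size: int = 200) -> str:
    """Build a QRServer 'create-qr-code' request URL for the given data.

    Assumes goqr.me's commonly documented endpoint; no network call is made here.
    """
    base = "https://api.qrserver.com/v1/create-qr-code/"
    # urlencode percent-encodes the payload so URLs, spaces, etc. are safe.
    return base + "?" + urlencode({"data": data, "size": f"{size}x{size}"})
```

For example, `qr_image_url("https://example.com", 150)` yields a URL that can be dropped into an `<img>` tag or downloaded with any HTTP client.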

QR Codes Will Replace Bar Codes 

QR codes are already set to replace bar codes. This will of course mean lower costs for retailers, will have implications for package design (less on-packaging information but more information available to customers), and the positive environmental impact of less packaging. For retailers, this could also mean improvements to inventory management, and it is likely to give greater flexibility to manufacturers and retailers in terms of updating product information.

What Does This Mean For Your Business? 

QR codes provide businesses with a streamlined and interactive method to connect with their audience, offering a bridge between the physical and digital realms. By generating and sharing QR codes for URLs, businesses can quickly direct customers to specific online content, whether it’s a promotional deal, a digital menu, or an informational page, without requiring users to manually type in web addresses. This eliminates potential errors, speeds up access, and is easy and convenient for customers in a world where most of us now use our mobiles for everything.

Having QR code generation features built into browsers is also very convenient for users, as the creation process is fast, seamless, and integrated, and creates something that’s easy to share, which helps the business whose URL is being shared. Also, not having to rely on external tools or platforms to generate QR codes means that businesses can instantly create, share, and update QR codes directly from their browser, thereby enhancing efficiency and ensuring they can adapt to changing digital needs swiftly.

Being able to generate and share QR codes will soon be more important than ever for businesses, with QR codes set to replace the now 50-year-old bar code. It should be noted, however, that QR codes can send users to web pages containing malicious code, and therefore care should be taken when scanning them to check for authenticity, which could be something as simple as ensuring a sticker hasn’t been put over the original code.