Featured Article : WhatsApp Updates

Here we look at some of the latest WhatsApp updates and the value and benefits they deliver to users.

Search Conversations By Date For Android 

The first of three new updates of significance for WhatsApp is the “search by date” function for individual and group chats on Android devices. Previously, this function had been available on other platforms (iOS, Mac desktop and WhatsApp Web).

As featured on Mark Zuckerberg’s WhatsApp channel (Meta owns WhatsApp), WhatsApp users on Android can now search for a chat on a particular date (not just within a range). For example, one-on-one or group chats can be searched by date by tapping on the contact or group name, tapping the search button, tapping the calendar icon (on the right-hand side of the search box), and then selecting the individual date. This feature is likely to deliver a better user experience by giving greater precision and control, and potentially saving time in locating specific messages.

Privacy Boost From User Profile Change 

Another potentially beneficial boost to the privacy of what is already an end-to-end encrypted messaging app is a change (currently in the beta version) that prevents users from taking screenshots of profile pictures within the app, closing a loophole that allowed profile pictures to be shared without consent and used for impersonation and harassment. If users try to screenshot a profile picture, for example, WhatsApp now displays a warning message. Although the ability to download profile pictures was removed 5 years ago, it was still possible to take screenshots. Closing this loophole in the latest update should, therefore, contribute to greater user privacy and safety.

Minimum Age Lowered To 13 

One slightly more controversial change to WhatsApp’s terms and conditions (T&Cs), however, is the lowering of the minimum age of users in Europe (and the UK) from 16 to 13. This brings the service in line with its minimum age rules in the US and Australia, and the move by WhatsApp was taken in response to new EU regulations, namely the Digital Services Act (DSA) and the Digital Markets Act (DMA), and to ensure a consistent minimum age requirement globally. The two new regulations have been introduced both to tackle illegal and harmful activities online and the spread of disinformation, and to help steer large online platforms toward behaving more fairly.

In addition to the minimum age change, WhatsApp is also updating its Terms of Service and Privacy Policies to add more details about what is or is not allowed on the messaging service and to inform users about the EU-US Data Privacy Framework. The framework is designed to provide reliable mechanisms for personal data transfers between the EU and the US in a way that’s compliant and consistent with both EU and US law, thereby ensuring data protection.

Criticism 

Although the minimum age change (13 may sound quite young to many parents) will be good for WhatsApp by expanding its user base, and good for users by expanding digital inclusion and family connectivity, it has also attracted some criticism.

For example, the fact that there’s no checking/verification of how old users say they are (i.e. it relies on self-declaration of age and parental monitoring) has led to concerns that more reliable methods are needed. The concern, of course, also extends to children younger than 13 accessing online platforms (e.g. social media) despite the set age limits.

In Meta’s (WhatsApp’s) defence, however, it already protects privacy with end-to-end encryption and has resisted calls and pressure for government ‘back doors’. It has also taken other measures to protect young users. These include, for example, the ability to block contacts (and report problematic behaviour), control over group additions, the option to customise privacy settings, and more.

Competitors 

Regarding compliance with new EU regulations, the European Commission has been actively engaging with large online platforms and search engines, including Snapchat, under the Digital Services Act (DSA). Also, given the widespread impact of these regulations on digital platforms and their emphasis on data privacy and security, it is likely that Signal (a competitor), and other messaging and social media platforms, are taking steps to align with these new requirements.

Some people may also remember that Snapchat came under scrutiny last summer from the UK’s data regulator, which sought to determine whether it was effectively preventing underage users from accessing its platform. The investigation was in response to concerns about Snapchat’s measures to remove children under 13, as UK law requires parental consent for processing the data of children under this age.

What Does This Mean For Your Business? 

The latest WhatsApp updates, alongside the broader implications of new EU and UK regulations, herald potentially significant shifts for businesses, messaging app users, and the industry at large. These changes, encompassing enhanced search functionalities, privacy safeguards, and adjustments to user age limits, will reshape some user experiences and offer both challenges and opportunities.

The “search by date” function for Android users should enhance user convenience and accessibility, save time, facilitate precise and efficient message retrieval, and improve user engagement and satisfaction. Businesses leveraging WhatsApp for customer service or internal communications, for example, could find this feature particularly beneficial, i.e. by enabling quicker access to pertinent information and more streamlined interactions.

The extra privacy enhancements essentially reflect a growing industry-wide focus on user security and digital safety and will strengthen individual privacy (always welcome). They also emphasise the importance of user consent and control over personal information, and should remind businesses of the need to prioritise and manage user data in line with both (evolving) regulatory standards and today’s consumer expectations.

The adjustment of WhatsApp’s minimum user age in Europe and the UK presents a bit more of a nuanced landscape. While aiming to broaden digital inclusion and connectivity, this change also highlights the complexities of age verification and online safety. Messaging and other platforms, however, must find ways to navigate these complexities, ensuring compliance while fostering a safe and inclusive digital environment for younger users.

The broader context of the DSA and DMA, along with similar regulatory efforts in the UK, signals the transformative period that digital platforms are now in. Although we can all see the benefit of curtailing harmful online activities, there’s also an argument for resisting pressure to go as far as giving governments back doors (thereby destroying privacy and exposing users to other risks). Messaging apps and social media platforms, including WhatsApp and its competitors (e.g. Snapchat, Signal, and others), have known regulations were coming, probably expect more in future, and are now having to adapt to enable compliance and retain trust while also introducing features that users value.

Businesses using apps like WhatsApp (which also has a specific business version) are likely to already value its privacy features, e.g. its end-to-end encryption, for data protection. As such, they are unlikely to oppose further helpful privacy-focused changes or user-experience improvements, as long as these don’t interfere with the ease of use of the app (or result in extra costs).

An Apple Byte : Instagram and Facebook Ads ‘Apple Tax’

Meta has announced that it will be passing on Apple’s 30 per cent service charge (often referred to as the “Apple tax”) to advertisers who pay to boost posts on Facebook and Instagram through the iOS app.

This move is a response to Apple’s in-app purchase fees, which apply to digital transactions within apps available on the iOS platform (announced in the updated App Store guidelines back in 2022). Advertisers wanting to avoid the additional 30 per cent fee can do so by opting to boost their posts from the web, using either Facebook.com or Instagram.com via desktop and mobile browsers.

Meta says it is “required to either comply with Apple’s guidelines, or remove boosted posts from our apps” and that, “we do not want to remove the ability to boost posts, as this would hurt small businesses by making the feature less discoverable and potentially deprive them of a valuable way to promote their business.” 

Apple has reportedly responded (in a statement to MacRumors), saying that it has “always required that purchases of digital goods and services within apps must use In-App Purchase,” and that boosting a post “is a digital service — so of course In-App Purchase is required”.

Meta’s introduction of the Apple tax for advertisers on iOS apps highlights its conflict with Apple over the control and monetisation of digital ad space. The move, aimed at challenging Apple’s App Store policies, could make advertising more costly and complicated for small businesses.

Tech Insight : New Privacy Features For Facebook and Instagram

Meta has announced the start of a roll-out of default end-to-end encryption for all personal chats and calls via Messenger and Facebook, with a view to making them more private and secure.

Extra Layer Of Security and Privacy 

Meta says that despite it being an optional feature since 2016, making it the default has “taken years to deliver” but will provide an extra layer of security. Meta highlights the benefits of default end-to-end encryption, saying that “messages and calls with friends and family are protected from the moment they leave your device to the moment they reach the receiver’s device” and that “nobody, including Meta, can see what’s sent or said, unless you choose to report a message to us.”
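
For illustration only, the core end-to-end idea can be sketched with public-key cryptography: the sender encrypts with the recipient’s public key, only the recipient’s private key can decrypt, and any server relaying the message sees just ciphertext. The minimal Python sketch below uses the PyNaCl library and is a heavily simplified assumption, not the Signal/Labyrinth protocols Messenger actually uses (which add protections such as forward secrecy).

```python
# Minimal sketch of the end-to-end idea using PyNaCl (pip install pynacl).
# Illustrative only: NOT Messenger's actual Signal/Labyrinth implementation.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"Meet at 7pm?")

# A relaying server only ever sees 'ciphertext', which it cannot read.
# Only Bob, holding his private key, can decrypt it.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'Meet at 7pm?'
```

The point of making encryption the “default” is simply that this kind of protection now happens automatically, rather than being a setting users have to switch on.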

Default end-to-end encryption will roll out to Facebook first and then to Instagram later, after the Messenger upgrade is completed.

Not Just Security and Privacy 

Meta is also keen to highlight the other benefits of its new default version of end-to-end encryption for users which include additional functionality, such as the ability to edit messages, higher media quality, and disappearing messages. For example:

– Users can edit messages that may have been sent too soon, or that they’d simply like to change, for up to 15 minutes after the messages have been sent.

– Disappearing messages on Messenger will now last for 24 hours after being sent, and Meta says it’s improving the interface to make it easier to tell when ‘disappearing messages’ is turned on.

– To retain privacy and reduce the pressure users may feel to respond to messages immediately, Meta’s new read receipt control allows users to decide whether they want others to see when they’ve read their messages.

When? 

Considering that Facebook Messenger has approximately 1 billion users worldwide, the roll-out could take months.

Why Has It Taken So Long To Introduce? 

Meta says it’s taken so long (7 years) to introduce because its engineers, cryptographers, designers, policy experts and product managers have had to rebuild Messenger features from the ground up using the Signal protocol and Meta’s own Labyrinth protocol.

Also, Meta had intended to introduce default end-to-end encryption back in 2022 but had to delay its launch over concerns that it could prevent Meta detecting child abuse on its platform.

Other Messaging Apps Already Have It 

Other messaging apps that have already introduced default end-to-end encryption include Meta-owned WhatsApp (in 2016), and Signal Foundation’s Signal messaging service, which has also been upgraded to guard (as much as is realistically possible) against future encryption-breaking attacks, e.g. quantum computer encryption cracking.

Issues 

There are several issues involved with the introduction of end-to-end encryption in messaging apps. For example:

– Governments have long wanted to force tech companies to introduce ‘back doors’ to their apps using the argument that they need to monitor content for criminal activity and dangerous behaviour, including terrorism, child sexual abuse and grooming, hate speech, criminal gang communications, and more. Unfortunately, creating a ‘back door’ destroys privacy, leaves users open to other risks (e.g. hackers) and reduces trust between users and the app owners.

– Legal pressure has been attempted against apps like WhatsApp and Facebook Messenger, for example via the UK’s Online Safety Act. The UK government wanted the ability to securely scan encrypted messages sent on Signal and WhatsApp as part of the law, but has admitted that this can’t happen because the technology to do so doesn’t exist (yet).

There are many compelling arguments for having (default) end-to-end encryption in messaging apps, such as:

– Consumer protection, i.e. it safeguards financial information during online banking and shopping, preventing unauthorised access and misuse.

– Business security, e.g. when used in WhatsApp and VPNs, encryption protects sensitive corporate data, ensuring data privacy and reducing cybercrime risks.

– Safe Communication in conflict zones (as highlighted by Ukraine). For example, encryption can facilitate secure, reliable communication in war-torn areas, aiding in broadcasting appeals, organising relief, combating disinformation, and protecting individuals from surveillance and tracking by hostile forces.

– Ensuring the safety of journalists and activists, particularly in environments with censorship or oppressive regimes, by keeping information channels secure and private.

– However, for most people using Facebook’s Messenger app, encryption is simply more of a general reassurance.

What Does This Mean For Your Business?

For Meta, the roll-out of default end-to-end encryption for Facebook and Instagram has been a bit of a slog and a long time coming. However, its introduction to bring FB Messenger in line with Meta’s popular WhatsApp essentially enhances user privacy and security and helps Facebook to claw its way back a little towards positioning itself as a company that’s a strong(er) advocate for digital safety.

For UK businesses, this move offers enhanced protection for sensitive data and communication, aligning with growing demands for cyber security and providing some peace of mind. However, the move presents further challenges and frustration for law enforcement and the UK government, potentially complicating efforts to monitor criminal activities and enforce regulations like the Online Safety Act. Overall, the initiative could be said to underscore a broader trend towards prioritising user privacy and security in the digital landscape, as well as being another way for tech giants like Meta to compete with other apps like Signal. It’s also a way for Meta to demonstrate that it won’t be forced into bowing to government pressure that could destroy the integrity and competitiveness of its products and negatively affect user trust in its brand (which has taken a battering in recent years).

Tech News : Seven Safeguarding SamurAI?

Following warnings about threats posed by the rapid growth of AI, the US White House has reported that seven leading AI companies have committed to developing safeguards.

Voluntary Commitments Made 

A recent White House fact sheet has highlighted how, in a bid to manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety, President Biden met with and secured voluntary commitments from seven leading AI companies “to help move toward safe, secure, and transparent development of AI technology”. 

The companies who have made the voluntary commitments are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

What Commitments? 

In order to improve safety, security, and trust, and to help develop responsible AI, the voluntary commitments from the companies are:

Ensuring Products are Safe Before Introducing Them to the Public

– Internal and external security testing of their AI systems before their release, carried out in part by independent experts, to guard against significant AI risks such as those relating to biosecurity and cybersecurity.

– Sharing information across the industry and with governments, civil society, and academia on managing AI risks, e.g. best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.

Building Systems that Put Security First 

– Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights (regarded as the most essential part of an AI system). The model weights will be released only when intended and when security risks are considered.

– Facilitating third-party discovery and reporting of vulnerabilities in their AI systems, e.g. putting a robust reporting mechanism in place to enable vulnerabilities to be found and fixed quickly.

Earning the Public’s Trust 

– Developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system, thereby enabling creativity with AI while reducing the dangers of fraud and deception.

– Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security risks and societal risks (e.g. the effects on fairness and bias).

– Prioritising research on the societal risks that AI systems can pose, including those on avoiding harmful bias and discrimination, and protecting privacy.

– Developing and deploying advanced AI systems to help address society’s greatest challenges, e.g. cancer prevention, mitigating climate change, thereby (hopefully) contributing to the prosperity, equality, and security of all.

To Be Able To Spot AI-Generated Content Easily 

One of the more obvious risks associated with AI is that people need to be able to tell the difference definitively between real content and AI-generated content. Being able to do so could help mitigate the risk of people falling victim to fraud and scams involving deepfakes, or believing misinformation and disinformation spread using AI deepfakes, which could have wider political and societal consequences.

One example of how this may be achieved, with the help of the AI companies, is the use of watermarks. This refers to embedding a digital marking in images and videos which is not visible to the human eye but can be read by certain software and algorithms, giving information about whether the content has been produced by AI. Watermarks could help in tackling all kinds of issues, including passing-off, plagiarism, stopping the spread of false information, tackling cybercrime (scams and fraud), and more.
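
As a purely illustrative sketch of the general idea (real AI-provenance watermarks are far more sophisticated and robust), the toy Python example below hides a machine-readable tag in the least significant bits of an image’s pixels, where it is invisible to the eye but recoverable by software. The function names and the “AI” tag are invented for illustration.

```python
# Illustrative only: a naive least-significant-bit (LSB) watermark showing
# how machine-readable bits can be hidden in pixel values.
import numpy as np

def embed(pixels: np.ndarray, bits: str) -> np.ndarray:
    flat = pixels.flatten()                      # flatten() returns a copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)      # overwrite the lowest bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> str:
    flat = pixels.flatten()
    return "".join(str(flat[i] & 1) for i in range(n_bits))

# Example: mark an 8x8 greyscale image with the tag "AI" encoded as bits.
image = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
tag = "".join(f"{byte:08b}" for byte in b"AI")
marked = embed(image, tag)
assert extract(marked, len(tag)) == tag          # software can read it back
```

A scheme this simple is trivially removed by cropping or re-encoding, which is why production watermarking systems spread the signal across the whole image in ways designed to survive editing.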

What Does This Mean For Your Business? 

Although AI is a useful business tool, the rapid growth-rate of AI has outstripped the pace of regulation. This has led to fears about the risks of AI when used to deceive, spread falsehoods, and commit crime (scams and fraud), as well as the bigger threats such as political manipulation, societal destabilisation, and even the existential threat to humanity. This, in turn, has led to this first stage of action. Governments, particularly, need to feel that they can get the lid partially back on the “genie’s bottle” so that they can at least ensure safeguards are built in early on to mitigate risks and threats.

The Biden administration getting at least some wide-ranging voluntary commitments from the big AI companies is, therefore, a start. Given that many of the signatories to the open letter calling for a 6-month moratorium on systems more powerful than GPT-4 were engineers from those big tech companies, it’s also a sign that more action may not be too far behind. Ideas like watermarking look a likely option, and no doubt there’ll be more ideas.

AI is transforming businesses in a positive way although many also fear how the automation it offers could result in big job losses, thereby affecting economies. This early stage is, therefore, the best time to make a real start in building in the right controls and regulations that allow the best aspects of AI to flourish and keep the negative aspects in check, but this complex subject clearly has a long way to run.

Featured Article : Millions Sign Up To Meta’s ‘Threads’ Twitter Competitor

Following the release of Meta’s alternative platform to Twitter called ‘Threads’, Meta’s head, Mark Zuckerberg, reports 100 million signups to the new platform in its first five days.

What Is Threads? 

The Threads app, launched by Meta on 6 July, is a “text-based conversation app” that is a direct competitor to Twitter – it looks remarkably similar to Twitter and functions in a very similar way.

Threads – Available Via Instagram Login 

The fact that the app is from Meta and is available via Instagram (which has over a billion users, and whose team developed it) means that it has instantly become a serious competitor to the troubled Twitter.

To use Threads, Instagram users log in with their normal Instagram account, and their Instagram username and verification are carried over, with the option to customise their profile specifically for Threads afterwards. The app is available for iOS and Android and can be downloaded from the Apple App Store and Google Play Store.

In what appears to be a little swipe at Twitter, Meta says Threads will “enable positive, productive conversations” and posts can be up to 500 characters long and include links, photos, and videos up to 5 minutes in length. Users can share a Threads post to their Instagram story or share their post as a link on any other platform they choose.

Available in 100+ Countries But Not In The EU 

Threads has been launched in 100+ countries but Meta has decided not to make it available in EU countries due to what it describes as the “complexities” of trying to comply with new laws coming in next year. This appears to be a reference to the Digital Markets Act.

30+ Million In The First Day  

Meta’s head, Mark Zuckerberg, reported that more than 10 million users had signed up to the Threads “initial version” within the first seven hours of its release, more than 30 million had signed up before the end of the first day, and a staggering 100 million had signed up in the first five days (a faster sign-up rate than ChatGPT).

Zuckerberg has great ambitions for the app which he sees as a “friendly” alternative to Twitter, stating that it could become a public conversations app with 1 billion+ people on it and that “Twitter has had the opportunity to do this but hasn’t nailed it. Hopefully we will.”

Launched As Twitter Is Struggling 

Seen as part of the latest rivalry between Meta boss Mark Zuckerberg and Twitter owner Elon Musk (in June, Elon Musk challenged Mark Zuckerberg on social media to “a cage match” fight), the Threads app has been launched at a time when Twitter is seen by many to be in a weakened position.

Why Is Twitter Looking Weak? 

Since Musk took over Twitter and tried to produce more revenue streams from it than just advertising, avoid bankruptcy (something Musk said publicly could happen), and turn Twitter into a ‘super-app’, several events and comments have led to bad publicity and appeared to be unpopular with Twitter users and advertisers. For example:

– Musk’s $44 billion takeover led to ultimatums being given to staff over committing to new working conditions and mass job cuts – Twitter slashed roughly 50 per cent of its workforce (reports suggested Musk’s leadership also sacked an estimated 80 per cent of contract employees without formal notice).

– Twitter top executives being sacked, e.g. Chief Executive Parag Agrawal, Chief Financial Officer Ned Segal, and legal affairs and policy chief Vijaya Gadde.

– Fears that Twitter could change for the worse under Musk’s ownership, i.e. reinstating unpopular banned users and controversial figures and allowing the wrong kind of ‘free speech’ (former US President Trump, who’d previously been banned, was invited back – an offer he declined).

– Thousands of (outsourced) content moderators were dropped, leading to fears of a drop in quality and possible rise of misinformation.

– The Blue/Blue Tick subscription service, introduced as a way to generate new revenue and tackle the problem of fake/bot and parody accounts, led to a wave of blue-tick-verified (yet fake) accounts impersonating influential brands and celebrities and tweeting fake news, which then had to be suspended and removed. There was also confusion over the introduction of new grey “official” badges instead of blue ticks on some high-profile accounts (which were then suddenly scrapped), as well as reports that US far-right activists had been able to purchase Twitter blue ticks.

– Elon Musk announcing that all but “exceptional” Twitter employees need to come back to working in the office for at least 40 hours per week or their resignation would be accepted.

– Twitter users leaving the platform in protest over Musk’s ownership and moving to competitors such as the decentralised social network Mastodon, Donald Trump’s ‘Truth Social’, Discord, Hive Social, and Post.

– America’s Federal Trade Commission warning that “no chief executive or company is above the law,” fears over Twitter’s approach to security, and questions about this in relation to possible Saudi involvement in the Twitter takeover.

– Reports of Apple and Google threatening to drop Twitter from their app stores (denied by Musk).

– Apple and Amazon (major sources of advertising revenue for Twitter) reportedly pausing advertising on Twitter (something some dispute in Amazon’s case) and then resuming it following a reported meeting between Musk and Apple CEO Tim Cook at Apple HQ over the “misunderstanding.”

– Twitter losing more than 50 per cent of its advertising partners and a number of large companies pausing advertising on Twitter since Musk’s takeover, e.g. General Mills Inc, Audi, Volkswagen, General Motors, and more.

– Reports (MikMak) of Twitter suffering a massive 68 per cent drop in media traffic (the number of times people click on an ad).

– Many high-profile celebrities publicly leaving/announcing they were leaving Twitter since Musk’s takeover, e.g. Elton John, Jim Carrey, Whoopi Goldberg, and more.

– A storm of criticism following Elon Musk threatening to turn off SMS 2FA after 20 March 2023 unless users paid for Blue Tick.

– Microsoft dropping Twitter from its advertising platform following Twitter’s announcement that it would charge a minimum of $42,000 per month to enterprise users of its API.

– Following a vote by Twitter users for him to resign, Elon Musk saying he may step down as head of Twitter by the end of this year.

Most Recently… 

Some other controversial moves very recently include:

– In response to alleged “data scraping” (perhaps a reference to Microsoft allegedly using Twitter’s data) and “system manipulation”, Twitter is limiting how many tweets users can read daily: 6,000 posts for verified users and 600 for unverified users. In contrast, Meta has said there are no restrictions on how many posts users of Threads can see. The restrictions on how many posts Twitter users could read led to problems, as angry TweetDeck users reported issues such as notifications and entire columns failing to load.

– As part of a “temporary emergency measure” against data scraping by companies (e.g. perhaps OpenAI using Twitter data for training), anyone wanting to view any Twitter content will need to log in or sign up, which could be inconvenient for web users and could affect search engine results.

Meta Not Without Its Own Bad Publicity 

That said, although Twitter doesn’t appear to be having its finest hour, in the interest of fairness it’s worth remembering that Meta/Facebook has faced its own problems with users, such as trust issues over data sharing with Cambridge Analytica, Facebook being used to spread disinformation in the US election and UK Brexit campaigns, plus issues about user safety on its platform (hate speech, damaging content, and more).

Twitter Threatens Legal Action 

Towards the end of the first day of the release of Threads, Twitter threatened to take legal action against Meta with Twitter’s attorney Alex Spiro sending a letter to Mark Zuckerberg accusing Meta of “systematic, wilful, and unlawful misappropriation of Twitter’s trade secrets and other intellectual property” in the creation of Threads. Musk said, “competition is fine, cheating is not”. Meta denied any wrongdoing and denied claims that ex-Twitter staff helped create the rival app.

Big Plans – Striking While The Iron’s Hot 

Meta clearly plans to push forward and give the Threads app maximum reach and clout, saying that it is working to make Threads compatible with open, interoperable social networks by making it “compatible with ActivityPub, the open social networking protocol” (from the W3C). This could also make Threads interoperable with other ActivityPub-supporting apps like Mastodon, WordPress, and Tumblr in the future.
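
For a rough sense of what ActivityPub compatibility involves, services that speak the protocol exchange JSON “activities” built from the W3C ActivityStreams vocabulary. The hypothetical Python snippet below sketches the kind of object a public post might be expressed as; the actor URL is invented for illustration and is not a real Threads endpoint.

```python
# Hypothetical sketch of an ActivityStreams "Create" activity wrapping a post
# ("Note"), the sort of JSON object ActivityPub servers federate between them.
import json

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://threads.example/users/alice",  # invented example actor
    "object": {
        "type": "Note",
        "content": "Hello from an ActivityPub-compatible app!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

print(json.dumps(activity, indent=2))
```

Because any compliant server can consume objects like this, a Threads post could in principle be followed and replied to from Mastodon or other federated apps without those users needing a Threads account.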

What Does This Mean For Your Business? 

Meta seems to have chosen the right moment and used the huge leverage and reach it has (via Instagram) to launch an app to compete head-on with Twitter, perhaps with more success than others. With Twitter undergoing a funding crisis, unhappiness among many customers, and continuing issues under Musk (who also appears to be involved in a personal rivalry with Meta’s head Mark Zuckerberg), the 30+ million sign-ups in one day and 100 million sign-ups in just five days could mean that Twitter is now facing a very serious extra challenge from a credible, strong competitor. With Instagram having an estimated 1.3 billion users worldwide and Twitter having 353 million users, if even one-third of Instagram users signed up to Threads it would dwarf Twitter (the equivalent of 10 per cent of Twitter’s user base signed up to Threads in the first day!), and some are predicting that Threads could even kill off Twitter.

Musk’s language lately has contained many references to suing people who take things from Twitter (e.g. data), and his public rivalry with Zuckerberg has intensified, so one could be forgiven for thinking that Musk may have got wind of what was coming with Threads. In the legal letter from Twitter to Zuckerberg, the accusation that ex-Twitter staff worked on Threads is interesting because there may well have been many disgruntled Twitter staff who were ousted unceremoniously when Musk took over, and it’s conceivable that they could have gone across to Meta.

For users, the fact that Threads is from Meta, is free and easy to sign up to, and doesn’t have some of the limitations and restrictions of Twitter appears to be making it look like a viable alternative. This is also unwelcome news for other Twitter-alternative platforms, e.g. Mastodon, Discord, Hive Social, and more, which will also see Threads as a serious competitor. For business advertisers, Threads may provide another good opportunity to reach customers. Advertisers, celebrities, and influencers may also value the chance to use another platform that gives them reach while escaping any negative connotations of things Musk may have said, done, or introduced, e.g. problems over the Blue Tick scheme or sharing a platform with those whose ‘free speech’ may not be compatible with their thinking and brand image. Signing up to (and switching across to) Threads may also simply give many people an opportunity to feel that they’re ‘sticking it’ to Musk – who many see as a controversial figure.

Tech News : Meta’s New ‘Human-Like’ AI Image Generator

Meta has announced the introduction of I-JEPA, an AI model that it says can create the most human-like images so far and is “a step closer to human-level intelligence in AI”.

Overcomes Previous Limitations 

Meta says that the I-JEPA model is based on its Chief AI Scientist, Yann LeCun’s vision for more human-like AI and that it can “overcome key limitations of even the most advanced AI systems today”. 

What Does It Do? 

I-JEPA analyses a user-provided sketch and completes the unfinished image by filling in the missing details, e.g. the colour of objects, lighting conditions, and the background in a way that’s incredibly accurate.

Much Better At Filling In The Missing Details 

The ‘Joint Embedding Predictive Architecture’ (JEPA) model’s ‘knowledge-guided generation’ means that it can use its knowledge of the world to fill in the missing details of an image, thereby creating a much better result. This could mean that it will be much more difficult to tell whether an image is human created (real) or has been artificially created by AI. In the past, for example, issues like people in AI-generated images having strange-looking hands with 6 fingers have been among the ways that ‘deepfakes’ could be spotted. I-JEPA is able to resolve these issues.

Why Is I-JEPA Better? 

Whereas generative architectures learn by removing or distorting portions of the input to the model and then trying to fill in every bit of missing information (even though the world is inherently unpredictable), I-JEPA predicts the representation of part of an input (an image or piece of text) from the representation of other parts of the same input, i.e. it predicts representations at a high level of abstraction rather than predicting pixel values directly. It also uses an enormous amount of background knowledge about the world. I-JEPA is, therefore, able to predict missing information in an abstract representation that’s more akin to the general understanding people have. In other words, it’s exceptionally good at analysing and finishing unfinished pictures and making them look real.

Meta says that I-JEPA’s pretraining is also “computationally efficient” and doesn’t involve any overhead associated with applying more computationally intensive data augmentations to produce multiple views.
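
As a very rough, hypothetical sketch of the joint-embedding idea described above (nothing like Meta’s actual I-JEPA code, which uses Vision Transformer encoders and a more elaborate masking strategy), the PyTorch snippet below predicts the representation of a hidden image patch from the representation of the visible context, and computes the training loss in representation space rather than on raw pixels.

```python
# Toy sketch of the JEPA training signal (illustrative only, not Meta's code):
# predict the *representation* of a hidden patch from the visible context,
# and compare the two in representation space instead of predicting pixels.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 256
context_encoder = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, embed_dim))
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, embed_dim))
predictor = nn.Linear(embed_dim, embed_dim)

context_patch = torch.randn(1, 16, 16)  # visible part of the image
target_patch = torch.randn(1, 16, 16)   # masked part whose representation is predicted

predicted = predictor(context_encoder(context_patch))
with torch.no_grad():  # in I-JEPA the target encoder is an EMA copy, not trained directly
    target = target_encoder(target_patch)

loss = F.mse_loss(predicted, target)  # loss on abstract representations, not pixels
loss.backward()
```

In Meta’s published description, the encoders are large Vision Transformers and several target blocks are predicted per image, but the principle of comparing abstract representations rather than reconstructing every pixel is the same.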

What Does This Mean For Your Business? 

I-JEPA appears to be the next “step closer to human-level intelligence in AI” and gives users the ability to quickly create very realistic images from simple sketches while eliminating many of the usual problems that AI image generators have had to date. Businesses using I-JEPA (which is currently in the hands of researchers and developers) can have confidence in the quality of its output for a whole range of private and public/published uses. The ability to quickly create a detailed, realistic picture from a sketch can save time and costs, add value, be applicable to a wide range of tasks, and avoid the need for large amounts of manually labelled data, and, as the researchers say, it can let businesses create “strong off-the-shelf semantic representations without the use of hand-crafted view augmentations”. In short, I-JEPA could be a real game-changer, and Meta making the model available as open source may help to get it widely established as a new industry-standard tool (Meta hopes).