Is Meta doing enough to protect WhatsApp users against misinformation?


While credible news tends to trickle onto timelines in a steady stream, studies show that negative posts expressing more extreme sentiments are more likely to be shared. In a world where algorithms favour posts that users are more likely to engage with, misinformation spills quickly from person to person. The instant messaging platform WhatsApp in particular has been criticised for harbouring fake news and failing to take sufficient measures to tackle the issue. Although it recently implemented a feature intended to slow the spread of misinformation, a new report suggests that the feature’s openness to misinterpretation may be a deliberate design flaw.

You might have noticed the new feature on WhatsApp: an alert when a post you have received has been “forwarded many times”. Such posts can also only be forwarded to one chat at a time. Parent company Meta implemented the feature to slow the spread of rumours and fake news. The tag was meant to act as a red flag, prompting critical engagement and encouraging users to question the credibility of the post. However, a report from Loughborough University sheds light on the worrisome reality of how users have actually interpreted it.

The study, titled ‘Beyond Quick Fixes: How Users Make Sense of Misinformation Warnings on Personal Messaging’, drew on interviews with 102 WhatsApp users. Its findings show that a significant proportion of users misinterpret the “forwarded many times” tag as a mark of importance rather than of questionable reliability. Some participants associated the tag with viral content, which could encourage further sharing. This miscommunication means the feature sometimes achieves the opposite of its intended purpose, allowing misinformation to continue spreading unchecked. Despite its small sample size, the report argues that the feature is inadequately designed and that Meta has therefore not sufficiently addressed a dangerous problem.

The researchers emphasise that relying solely on “quick fixes” is simply not enough. They also assert that this design flaw may have been intentional, claiming Meta deliberately left room for interpretation to “avoid tarnishing its brand with associations between its platform and the spread of misinformation”. If the researchers’ claims are true, WhatsApp is knowingly leaving its users exposed to fake news in order to protect its brand image. To understand just how insidious that claim is, we must remember that misinformation is much more than an online issue: it is a real-world problem, and it can be deadly.

WhatsApp has been under scrutiny since misinformation spread on its platform was linked to multiple violent attacks. In 2018, mob violence broke out in Mexico after rumours began circulating on WhatsApp of people abducting children and harvesting their organs. Tragically, two innocent people were beaten and burnt to death. “It was like a great spell had overtaken the people,” said a witness to the barbaric episode. In the same year, a video edited to look like a kidnapping ignited mass hysteria when a village in India was overrun by mobs. As WhatsApp’s largest userbase, India has suffered particularly from misinformation, with over two dozen deaths attributed to rumours spread on the platform in 2018 alone. In Sri Lanka, the government imposed a temporary ban on WhatsApp, Facebook, and Instagram (all owned by Meta) in an effort to quell ethnic violence. These stories serve as a grim reminder of the threat of misinformation: the panic, tragedy, and loss of life it can inspire. After these events, WhatsApp introduced limits on the number of chats to which an individual could forward a post. But with support for greater online protection growing, it may soon be forced to take a more active stance against fake news.

The UK’s Online Safety Bill, currently being debated in the House of Lords, would hold companies like Meta accountable for identifying, mitigating, and managing the risks of harm posed by misinformation. The bill would put the Office of Communications (Ofcom) in charge of regulating social media platforms and checking whether they adequately protect their users. Dame Melanie Dawes, Ofcom’s Chief Executive, rightly emphasises the importance of equipping ourselves with the tools and confidence to discern fact from fiction online. She asserts: “Many adults and children are struggling to spot what might be fake. So, we’re calling on tech firms to prioritise rooting out harmful misinformation”.

Hopefully, once the bill is enacted, we will see personal messaging platforms implement more explicit warning mechanisms to address the potential for misinformation. Four out of five adult internet users (81%) want tech firms to take more responsibility for monitoring content on their sites and apps. To assist with this endeavour, the researchers behind the WhatsApp report propose five principles that any messaging platform can adopt to make misinformation warnings more effective:

  1. Don’t rely on description alone: Tags should include explicit warnings about the potential presence of misinformation to ensure users exercise caution.
  2. Introduce user friction: To counteract mindless forwarding, features should require active user confirmation to ensure that warnings are noticed and engaged with.
  3. Gain media exposure: Public relations campaigns are essential to spread awareness and encourage critical thinking, uniting users against misinformation.
  4. Consider the context: The design of misinformation warnings should account for the diverse contexts in which users experience them, tailoring the approach to meet individual needs and challenges.
  5. Think beyond the platform: It is important to be mindful that misinformation extends beyond personal messaging platforms, and therefore requires a holistic solution.

We can advocate for legislative efforts such as the Online Safety Bill, backed by research like the Loughborough University report, to address the urgent need for a safer digital environment, but it remains essential to protect ourselves. Whatever tech firms do in their battle against misinformation and misinterpretation, we need to build a more media-literate society. Going forward, keep in mind that the “forwarded many times” tag on WhatsApp is a warning. Take a step back and think critically before sharing any post online: consider the validity of its source, and question what purpose you have for promoting it (and the consequences that could have).

 
