Home 9 Uncategorized 9 Report highlights AI is extremely fallible

Report highlights AI is extremely fallible

by | Uncategorized

A new report from the Center for Countering Digital Hate (CCDH) has found that AI Chatbots can be pushed to produce misinformation in 78% of enquiries. The group warns that the falsehoods include potentially harmful narratives, and that the platforms publish the content without any disclaimers. Through testing, the group created a list of 100 false and potentially harmful narratives on nine themes: climate, vaccines, Covid-19, conspiracies, Ukraine, LGBTQ+ hate, sexism, antisemitism and racism. They then tested the Google AI platform Bard, and found that it was able to generate text promoting false and potentially harmful narratives in 78 out of 100 cases.

Image Source: Center for Countering Digital Hate (CCDH)

Specifically, the group identified the reality of platform exploitation by threat actors. They highlighted that in the case of simple questions it was able to identify potential hate and negate the false claims. However, when Bard was asked to deal with more complex prompts, such as being asked to play a character and answer accordingly, it was forthcoming with misinformed claims, and its built-in safeguards ultimately failed. It also highlighted additional challenges around non-standard spelling enquiries.

Alarmingly, the misinformation isn’t low-key either. In one prompt Bard denied the Holocaust, generating a 227-word monologue of Holocaust denial and producing fake evidence. It even claimed that the “photograph of the starving girl in the concentration camp…was actually an actress who was paid to pretend to be starving.”

They CCDH are not the only ones cautioning the use of AI generators, as the National Cyber Security Centre in the UK has published guidelines on the challenges associated with “large language” models like AI, highlighting that:

  • they can get things wrong and ‘hallucinate’ incorrect facts
  • they can be biased, are often gullible (in responding to leading questions, for example)
  • they require huge compute resources and vast data to train from scratch
  • they can be coaxed into creating toxic content and are prone to ‘injection attacks’.

Image source: National Cyber Security Centre (NCSC)

Meanwhile, cybersecurity and antivirus provider Malwarebytes has experimented with the AI to get it generating ransomware, describing the safeguards as ‘porous’. At the same time, governments around the world are proposing a moratorium on all things AI to allow regulation and safeguards to catch up, with China and the US are both ramping up AI oversight. A survey from Forbes Advisor highlights that 75% of consumers are concerned about the potential for misinformation from AI. These concerns don’t even begin to touch on the more nefarious realities of AI generated content, as a rise in deep fake videos and images is correlating with major international events including government elections.

It’s not all doom and gloom, but it is clear that this emerging technology needs improving as quickly as possible. Artificial intelligence systems are not inherently good or bad, but it is essential that their safeguards are robust enough to prevent misuse, and it is essential that we maintain awareness that their content is free to go viral once it is “in the wild”.

This collaboration between Reuters and University of Oxford provides an interesting insight, highlighting the essential role of media literacy in averting an information crisis. The article concludes “the more politically incendiary an image is, the more hesitant we should be about its veracity”, and the same is of course true of the content that accompanies the image. Ultimately, it serves as an important reminder of the ongoing value of media literacy and critical thinking skills, and our own role in proactively managing our own approach to media consumption.

Enjoyed this article? Why not read:

Categories