Hasty Briefs

Bad Actors Are Grooming LLMs to Produce Falsehoods

10 months ago
  • #AI
  • #Propaganda
  • #Disinformation
  • Bad actors are grooming LLMs to produce falsehoods, exploiting their lack of reasoning.
  • Current models like OpenAI's GPT-4o and o3 often repeat propaganda from known disinformation networks such as Pravda.
  • Models fail to apply facts they already hold, such as Pravda's unreliability, when deciding whether to cite a narrative.
  • Real-time searches make models more vulnerable to grooming, especially on less widely discussed topics.
  • Even 'reasoning' models like o3 perform poorly, citing unreliable sources despite 'knowing' their poor reputation.
  • Users are unlikely to switch to premium models due to high costs and slow response times.
  • AI systems need better cognition to evaluate sources, understand satire, and fact-check their outputs.
  • Current AI models risk contaminating their own training data by unthinkingly repeating propaganda.
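One mitigation the points above suggest is screening retrieved sources against a list of known disinformation outlets before they reach the model's context. The sketch below is a minimal, hypothetical illustration of that idea; the domain names, data shape, and filter logic are illustrative assumptions, not the method of any real system.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known-unreliable domains (illustrative names only).
KNOWN_UNRELIABLE = {"pravda-network.example", "fakenews.example"}

def filter_sources(results):
    """Drop search results whose domain appears on the unreliable list."""
    kept = []
    for r in results:
        domain = urlparse(r["url"]).netloc.lower()
        if domain in KNOWN_UNRELIABLE:
            continue  # exclude the known propaganda outlet from the context
        kept.append(r)
    return kept

results = [
    {"url": "https://pravda-network.example/story", "title": "Claim A"},
    {"url": "https://news.example/report", "title": "Claim B"},
]
print([r["title"] for r in filter_sources(results)])  # ['Claim B']
```

A static blocklist is of course only a first line of defense; the article's point is that models should also reason about source reliability on their own, not merely consult a fixed list.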