The Future of Comments Is Lies, I Guess
a year ago
- #LLM
- #moderation
- #spam
- The author has moderated online content since 2004, across email spam, social media, and Mastodon.
- Spam takes many forms, from cheap, mass-produced messages to highly targeted spear-phishing attacks.
- Large Language Models (LLMs) are changing the spam landscape by enabling cheap, automated, and plausible spam generation.
- Examples include LLM-generated blog comments with fabricated personal experiences and product plugs.
- LLMs are also being used to create misleading summaries on platforms like Hacker News, spreading misinformation.
- The cost of moderation is rising because moderators must now distinguish awkward but genuine human writing from sophisticated LLM-generated spam.
- Future risks include LLM-generated voice scams, impersonation of trusted contacts, and long-term fake relationships.
- Decentralized and privacy-focused networks like Mastodon may become targets as spam economics shift.
- The author expresses concern over the growing challenge of moderating LLM-generated spam and misinformation.