Hasty Briefs

Antislop: A Framework for Eliminating Repetitive Patterns in Language Models

6 months ago
  • #AI Ethics
  • #Machine Learning
  • #Natural Language Processing
  • Introduction of 'slop': repetitive phraseology in LLM outputs that degrades quality and makes text recognizably machine-generated.
  • Presentation of the Antislop framework with three key innovations: the Antislop Sampler, an automated slop-profiling pipeline, and the FTPO (final-token preference optimization) fine-tuning method.
  • Demonstration of slop patterns appearing over 1,000x more frequently in LLM outputs than human text.
  • Effectiveness of Antislop Sampler in suppressing over 8,000 patterns without quality loss.
  • Superior performance of FTPO with 90% slop reduction and maintained or improved cross-domain evaluation results.
  • Comparison showing that DPO degrades writing quality and lexical diversity, whereas FTPO preserves both.
  • Release of all code and results under MIT license for public access.
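The core idea behind the Antislop Sampler, as summarized above, is suppressing banned phrases at inference time by backtracking when one appears in the output. A miniature sketch of that idea follows; the toy word-level "model", the banned-phrase list, and the greedy decoding loop are illustrative assumptions, not the paper's implementation, which operates on a real LLM's token logits:

```python
# Minimal sketch of inference-time slop suppression with backtracking,
# in the spirit of the Antislop Sampler. Everything here (the toy
# word-level model, the banned-phrase list, greedy decoding) is an
# illustrative assumption; the real sampler works on LLM token logits.

BANNED = {"tapestry of"}  # hypothetical slop pattern to suppress

# Toy "language model": maps a context tuple to scored continuations.
_TABLE = {
    (): [("a", 0.6)],
    ("a",): [("tapestry", 0.7), ("story", 0.3)],
    ("a", "tapestry"): [("of", 0.9)],
    ("a", "story"): [("unfolds", 1.0)],
}

def toy_model(context):
    """Return candidate (word, score) continuations for a context."""
    return _TABLE.get(context, [])

def generate(model, banned, max_len=8):
    """Greedy decoding; on emitting a banned phrase, backtrack to the
    phrase's first word and block that word at that position."""
    out = []
    blocked = {}  # position -> words disallowed there after backtracking
    while len(out) < max_len:
        pos = len(out)
        cands = [(w, s) for w, s in model(tuple(out))
                 if w not in blocked.get(pos, set())]
        if not cands:
            break  # no (allowed) continuation: stop
        out.append(max(cands, key=lambda ws: ws[1])[0])
        text = " ".join(out)
        for phrase in banned:
            if text.endswith(phrase):
                start = len(out) - len(phrase.split())
                blocked.setdefault(start, set()).add(out[start])
                del out[start:]          # rewind to before the phrase
                for p in [p for p in blocked if p > start]:
                    del blocked[p]       # drop stale blocks past rewind
                break
    return " ".join(out)

print(generate(toy_model, BANNED))  # → a story unfolds
```

With the banned phrase present, the highest-probability path ("a tapestry of") is rejected after the fact and decoding resumes from the next-best alternative, which is how suppression can avoid degrading fluency compared with simply zeroing individual tokens everywhere.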