Hasty Briefs (beta)

TeapotLLM: an open-source <1B model for hallucination-resistant Q&A on a CPU

a year ago
  • #NLP
  • #AI
  • #OpenSource
  • Teapot is an open-source small language model (~800 million parameters) optimized for resource-constrained devices such as smartphones, running entirely on CPU.
  • It is fine-tuned on synthetic data to reduce hallucinations and focuses on context-based answers.
  • Teapot supports tasks like Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
  • The model is trained to provide conversational answers and to resist hallucination by refusing to answer when the supplied context is insufficient.
  • Teapot can perform RAG across multiple documents and extract structured information in formats like JSON.
  • It includes a library (teapotai) for easy integration into production environments.
  • Teapot is fine-tuned from flan-t5-large and trained on a ~10 MB synthetic dataset.
  • The model is licensed under MIT and is community-driven, with support available via Discord.
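Since the model answers only from supplied context, a minimal integration boils down to concatenating retrieved documents with the user's question and running a text2text generation pass. The sketch below shows that pattern with the generation call injected as a callable, so the prompt-building logic is independent of any particular backend; the single-string "context, blank line, question" prompt layout is an assumption for illustration, not the documented teapotai API (the project's own `teapotai` library wraps this flow; see its README for the exact interface).

```python
from typing import Callable

def build_prompt(context: str, question: str) -> str:
    """Combine retrieved context and the user question into one text2text
    prompt. NOTE: this layout is a hypothetical sketch, not Teapot's
    documented prompt format."""
    return f"{context}\n\n{question}"

def answer(qa: Callable, context: str, question: str) -> str:
    """Run a transformers-style text2text pipeline on the combined prompt.

    `qa` is any callable with the Hugging Face pipeline convention:
    it takes a string and returns [{"generated_text": ...}].
    """
    return qa(build_prompt(context, question))[0]["generated_text"]
```

In production, `qa` could be a real pipeline, e.g. `pipeline("text2text-generation", model="teapotai/teapotllm")` from the `transformers` package (model id per the project's Hugging Face page); because Teapot is <1B parameters, this runs on CPU without a GPU. For RAG across multiple documents, join the retrieved passages into the `context` argument before calling `answer`.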