Hasty Briefs

Show HN: Prompt-refiner – Lightweight optimization for LLM inputs and RAG

2 days ago
  • #AI Agents
  • #Token Optimization
  • #Python Library
  • Lightweight Python library for AI agents, RAG apps, and chatbots, with smart context management and automatic token optimization.
  • Saves 5-70% on API costs: a 57% average token reduction on function calling and 5-15% on RAG contexts.
  • Key features include automatic prompt refining, tool schema compression, and tool response compression (see the schema-compression sketch after this list).
  • Ships modules such as Cleaner, Compressor, Scrubber, Tools, Packer, and Strategy, each targeting a different optimization step (a context-packing sketch follows this list).
  • Tested on real-world APIs, showing a 56.9% average token reduction with 100% lossless compression.
  • Benchmarks show minimal latency overhead (< 0.5ms per 1k tokens).
  • MIT-licensed and production-ready, with detailed examples and documentation.
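The tool-schema-compression claim is easiest to picture with a small example. The sketch below is not Prompt-refiner's actual API; it is a minimal, hypothetical illustration of the general technique: serialize a function-calling schema compactly and drop null or empty fields, which removes tokens the model never needed while keeping the schema's meaning intact (i.e., losslessly).

```python
import json

# Hypothetical sketch, not Prompt-refiner's API: whitespace and empty fields in
# a function-calling schema cost prompt tokens without adding information, so
# stripping them is a cheap, lossless saving.

def compress_tool_schema(schema: dict) -> str:
    """Serialize a function-calling schema as compactly as possible."""
    def prune(node):
        if isinstance(node, dict):
            # Drop keys whose values are null or empty; they carry no signal.
            return {k: prune(v) for k, v in node.items() if v not in (None, "", [], {})}
        if isinstance(node, list):
            return [prune(v) for v in node]
        return node
    # separators=(",", ":") removes the spaces json.dumps inserts by default.
    return json.dumps(prune(schema), separators=(",", ":"))

verbose_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["C", "F"], "default": None},
        },
        "required": ["city"],
    },
}

pretty = json.dumps(verbose_schema, indent=2)
compact = compress_tool_schema(verbose_schema)
# Rough proxy for token counts (about 4 characters per token for JSON/English).
print(f"pretty ~{len(pretty) // 4} tokens, compact ~{len(compact) // 4} tokens")
```

A production version would measure savings with the target model's real tokenizer rather than a character heuristic.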
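The Packer/Strategy bullet suggests token-budget-aware context selection for RAG. The following is an assumed, simplified version of that idea; the names pack_context and estimate_tokens are illustrative, not the library's, and the greedy strategy shown is just one way retrieved context can shed tokens before it reaches the prompt.

```python
# Hypothetical sketch of a "Packer"-style strategy: given scored RAG chunks and
# a token budget, keep the most relevant chunks that fit. All names and the
# 4-characters-per-token estimate are assumptions for illustration.

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (roughly 4 characters per token).
    return max(1, len(text) // 4)

def pack_context(chunks: list[tuple[float, str]], budget: int) -> str:
    """Greedy strategy: take chunks by descending relevance until the budget is spent."""
    packed, used = [], 0
    for score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            packed.append(text)
            used += cost
    return "\n\n".join(packed)

chunks = [
    (0.92, "Refunds are issued within 5 business days of approval."),
    (0.75, "Our support team is available Monday through Friday."),
    (0.31, "The company was founded in 2009 and is headquartered in Berlin."),
]
# With a 30-token budget, only the two most relevant chunks are kept.
print(pack_context(chunks, budget=30))
```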