Hallucination Risk Calculator

  • #calibration
  • #hallucination-risk
  • #LLM
  • Post-hoc calibration toolkit for large language models without retraining.
  • Provides a bounded hallucination risk estimate using the Expectation-level Decompression Law (EDFL).
  • Supports two deployment modes: Evidence-based and Closed-book.
  • Uses the OpenAI Chat Completions API for scoring, so no model retraining is needed (a hedged scoring sketch follows this list).
  • Includes a mathematical framework for the information budget and prior masses.
  • Decides whether to ANSWER or REFUSE based on the Information Sufficiency Ratio (ISR); see the decision sketch after this list.
  • Offers practical considerations, calibration adjustments, and deployment options.
  • Comes with installation instructions and project layout details.
  • Includes API surface details and validation methods.
  • Licensed under the MIT License by Hassana Labs.
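
To make the information-budget and ISR bullets concrete, here is a minimal sketch of a KL-based ANSWER/REFUSE rule. The names and signatures (bern_kl, bits_to_trust, decide, delta_bar, q_prior, h_star, and the ISR >= 1 threshold with an optional margin) are illustrative assumptions about how such a rule could look, not the toolkit's actual API.

```python
import math

def bern_kl(p: float, q: float) -> float:
    """KL divergence KL(Ber(p) || Ber(q)) in nats, with edge clamping."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def bits_to_trust(q_prior: float, h_star: float) -> float:
    """Nats of evidence needed to lift the prior success mass q_prior
    up to the target reliability 1 - h_star (hypothetical definition)."""
    return bern_kl(1.0 - h_star, q_prior)

def decide(delta_bar: float, q_prior: float, h_star: float = 0.05,
           margin: float = 0.0) -> tuple[str, float]:
    """ANSWER/REFUSE decision from the Information Sufficiency Ratio.

    delta_bar: estimated information budget (nats) for this prompt.
    q_prior:   prior mass of a correct answer under a weakened prompt.
    h_star:    target hallucination rate.
    """
    b2t = bits_to_trust(q_prior, h_star)
    isr = delta_bar / b2t if b2t > 0 else float("inf")
    return ("ANSWER" if isr >= 1.0 + margin else "REFUSE"), isr

if __name__ == "__main__":
    decision, isr = decide(delta_bar=0.9, q_prior=0.3, h_star=0.05)
    print(decision, round(isr, 3))  # REFUSE here: the budget falls short of bits-to-trust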
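The scoring bullet can be illustrated with the standard openai Python SDK. The prompt construction, sampling count, and the idea of contrasting evidence-based with closed-book samples below are assumptions about how one might gather inputs for the calculator; only the chat.completions.create call itself is the real API, and no model retraining is involved.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample_answers(question: str, evidence: str | None = None,
                   model: str = "gpt-4o-mini", k: int = 5) -> list[str]:
    """Draw k answers via the Chat Completions API.

    With evidence=None this plays the closed-book role; with evidence it
    plays the evidence-based role. The prompt wording is an illustrative
    assumption, not the toolkit's own."""
    content = question if evidence is None else (
        f"Evidence:\n{evidence}\n\nQuestion: {question}"
    )
    answers = []
    for _ in range(k):
        resp = client.chat.completions.create(
            model=model,
            temperature=1.0,
            messages=[{"role": "user", "content": content}],
        )
        answers.append((resp.choices[0].message.content or "").strip())
    return answers

# Agreement between evidence-based and closed-book samples could then feed
# the prior-mass estimates used by the ISR decision sketched above.
```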