Hasty Briefs

Ollama Turbo

9 months ago
  • #Privacy
  • #AI
  • #Hardware
  • Turbo (in preview) offers faster model inference on datacenter-grade hardware.
  • Makes it possible to run models larger than local hardware allows, since Ollama can upgrade the datacenter hardware behind it.
  • Ensures privacy and security by not retaining user data.
  • Saves battery life by offloading model processing from local devices.
  • Turbo is a new way to run open models efficiently.
  • Available models during preview include gpt-oss-20b and gpt-oss-120b.
  • Compatible with Ollama's CLI, API, and JavaScript/Python libraries.
  • No data logging or retention in Turbo mode.
  • Hardware is located in the United States.
  • Usage limits are in place with future metered pricing planned.
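Since Turbo is advertised as compatible with Ollama's existing API and client libraries, a request to it should follow the same chat-request schema as a local Ollama server, just pointed at the hosted endpoint with an API key. The sketch below assembles such a request with only the standard library; the endpoint host, `Bearer` header format, environment-variable name, and `model` tag spellings (`gpt-oss:20b`) are assumptions for illustration, not confirmed details.

```python
import json
import os

# Assumed hosted endpoint and credential source (hypothetical names).
TURBO_HOST = "https://ollama.com"
API_KEY = os.environ.get("OLLAMA_API_KEY", "<your-key>")

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble a JSON body in the same /api/chat schema a local Ollama uses."""
    return {
        "model": model,  # e.g. the 20B or 120B gpt-oss preview model
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = build_chat_request("gpt-oss:20b", "Why is the sky blue?")
headers = {"Authorization": f"Bearer {API_KEY}"}

# Sending it (requires a valid key and network access, so left commented out):
# import urllib.request
# req = urllib.request.Request(
#     f"{TURBO_HOST}/api/chat",
#     data=json.dumps(body).encode(),
#     headers={**headers, "Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

The same body works against a local `ollama serve` instance, which is the point of the compatibility claim: switching to Turbo is meant to be a change of host and credentials, not of request shape.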