Show HN: Use a Third-Party LLM API in JetBrains AI Assistant
- #JetBrains
- #LLM
- #Proxy
- Proxies a remote LLM API as a local model for the JetBrains AI Assistant.
- Powered by Ktor and kotlinx.serialization, leveraging their reflection-free design.
- Works around the limited quota in JetBrains AI Assistant by proxying third-party LLM APIs (e.g., Gemini, Qwen).
- Supports proxying from OpenAI, Claude, DashScope, Gemini, DeepSeek, Mistral, and SiliconFlow.
- Presents itself as LM Studio and Ollama so JetBrains IDEs accept it as a local model server (a minimal Ktor sketch of the idea follows this list).
- Distributed as a runnable fat jar and as a GraalVM native image for cross-platform use.
- Features automatic config file generation with schema annotations for easy setup.
- The config file supports hot-reloading and ships with default values for a quick start (see the config and hot-reload sketches after this list).
- Only the streaming chat completion API is supported.
- Includes detailed configuration options for API providers and model lists.
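Here is a minimal sketch of the core idea, assuming the IDE discovers models through Ollama's `/api/tags` endpoint and chats via `/api/chat`. This is not the project's actual code: the upstream URL, model name, and auth header are placeholders.

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.request.*
import io.ktor.server.response.*
import io.ktor.server.routing.*
import io.ktor.utils.io.*
import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

@Serializable data class Model(val name: String)
@Serializable data class Tags(val models: List<Model>)

fun main() {
    val client = HttpClient(CIO)
    val json = Json { ignoreUnknownKeys = true }
    embeddedServer(Netty, port = 11434) { // Ollama's default port
        routing {
            // Model discovery: the IDE asks the "local" server what it serves.
            // Real Ollama returns more fields; a name is enough for a sketch.
            get("/api/tags") {
                call.respondText(
                    json.encodeToString(Tags(listOf(Model("gemini-1.5-pro")))),
                    ContentType.Application.Json
                )
            }
            // Streaming chat: forward the request upstream and pipe the
            // response back chunk by chunk. A real proxy would also translate
            // the provider's wire format into Ollama's NDJSON.
            post("/api/chat") {
                client.preparePost("https://example-provider/v1/chat") {
                    header(HttpHeaders.Authorization, "Bearer ${System.getenv("API_KEY")}")
                    contentType(ContentType.Application.Json)
                    setBody(call.receiveText())
                }.execute { upstream ->
                    call.respondBytesWriter(ContentType.Application.Json) {
                        upstream.bodyAsChannel().copyTo(this)
                    }
                }
            }
        }
    }.start(wait = true)
}
```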
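The generated config might look roughly like this. The field names below are illustrative assumptions, not the project's actual schema; the real tool writes its own commented defaults on first run.

```yaml
# Illustrative shape only — field names are assumptions.
lmStudio:
  port: 1234        # where the fake LM Studio server listens
ollama:
  port: 11434       # where the fake Ollama server listens
apiProviders:
  gemini:
    apiKey: <your-gemini-api-key>
    modelList:
      - gemini-1.5-pro
      - gemini-1.5-flash
```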
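Hot-reloading can be as simple as watching the config file for modification events. A minimal sketch with `java.nio.file.WatchService`, assuming the config lives at `./config.yml` (the project's actual mechanism may differ):

```kotlin
import java.nio.file.*

// Blocks forever, invoking onChange whenever config.yml is modified.
fun watchConfig(onChange: () -> Unit) {
    val dir = Paths.get(".")
    val watcher = FileSystems.getDefault().newWatchService()
    dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY)
    while (true) {
        val key = watcher.take() // blocks until an event arrives
        for (event in key.pollEvents()) {
            val changed = event.context() as? Path ?: continue
            if (changed.fileName.toString() == "config.yml") onChange()
        }
        if (!key.reset()) break // directory is no longer watchable
    }
}
```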