Inception releases upgraded Mercury dLLM with improved coding and reasoning
16 days ago
- #AI
- #LLM
- #Diffusion
- Inception launched Mercury, the first commercially available diffusion-based LLM (dLLM), which is up to 10X faster and more efficient than traditional LLMs.
- Closed a $50M financing round led by Menlo Ventures, with participation from notable investors like Andrew Ng and Andrej Karpathy.
- Released an upgraded version of Mercury with improvements in coding, instruction following, mathematical problem solving, and knowledge recall.
- dLLMs use diffusion to refine an entire answer in parallel over iterative denoising steps, unlike autoregressive LLMs, which generate tokens one at a time, left to right.
- Developers and enterprises are using Mercury for AI assistants, voice agents, and AI co-pilots that need to respond in real time.
- New Mercury features include larger models, key architectural upgrades, and major training and inference improvements.
- Mercury is available via API and partners like OpenRouter and Poe, with pricing at $0.25 per 1M input tokens and $1.00 per 1M output tokens.
- Inception is backed by top venture capitalists and focuses on building the fastest, most efficient AI models using diffusion technology.
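The generation difference described above can be sketched as a toy simulation: an autoregressive model makes one "model call" per token, so its step count grows with output length, while a diffusion model starts from a fully masked sequence and updates every position on each denoising pass, so its step count is fixed. This is only an illustration of the control flow, not Mercury's actual architecture; the vocabulary and step counts are arbitrary.

```python
import random

def autoregressive_generate(vocab, length, rng):
    """Sequential: one model call per token, conditioned on the prefix."""
    out = []
    for _ in range(length):
        out.append(rng.choice(vocab))  # each token requires its own pass
    return out

def diffusion_generate(vocab, length, steps, rng):
    """Parallel: start fully masked, refine ALL positions each denoising step."""
    seq = ["<mask>"] * length
    for _ in range(steps):
        # every position is updated in the same pass; the number of passes
        # (steps) does not grow with the output length
        seq = [rng.choice(vocab) if tok == "<mask>" or rng.random() < 0.5 else tok
               for tok in seq]
    return seq

rng = random.Random(0)
vocab = ["the", "cat", "sat", "on", "mat"]
ar = autoregressive_generate(vocab, 8, rng)       # 8 sequential calls
diff = diffusion_generate(vocab, 8, 4, rng)       # 4 parallel passes
```

The point of the sketch: for an 8-token output, the autoregressive path takes 8 dependent steps while the diffusion path takes a fixed 4, which is where the parallelism-driven speedup comes from.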