Hasty Briefs

The State of AI Coding Report 2025

2 days ago
  • #Productivity Metrics
  • #AI Development
  • #Model Benchmarks
  • Median PR size increased 33% from March to November 2025, from 57 to 76 lines changed per PR (the growth arithmetic is sketched after this list).
  • Lines of code per developer grew from 4,450 to 7,839, a jump attributed to AI coding tools.
  • Medium teams (6-15 devs) nearly doubled output per developer, from 7,005 to 13,227 lines.
  • Median lines changed per file grew from 18 to 22 as PRs became denser.
  • mem0 dominates AI memory infrastructure with 59% market share.
  • Among agent instruction files, CLAUDE.md leads adoption at 67%, and 17% of repos use all three formats.
  • Anthropic SDK leads at 43M downloads, with 8x growth.
  • LangSmith dominates with 110M monthly downloads, buoyed by being bundled into LangChain installs.
  • OpenAI leads with 130M downloads, while Anthropic's downloads have grown 1,547x since April 2023.
  • The OpenAI-to-Anthropic download ratio dropped from 47:1 (Jan 2024) to 4.2:1 (Nov 2025).
  • The report benchmarks GPT-5.1, Claude Sonnet 4.5, GPT-5-Codex, Claude Opus 4.5, and Gemini 3 Pro.
  • DeepSeek-V3 is a 671B-parameter Mixture-of-Experts model that activates only 37B parameters per token (the routing mechanism is sketched after this list).
  • Qwen2.5-Omni separates perception from sequence modeling for stable, real-time reasoning.
  • "Long Context vs. RAG for LLMs: An Evaluation and Revisits" compares long-context (LC) models with retrieval-augmented generation (RAG).
  • GEPA (Genetic-Pareto) optimizes instructions using execution traces (a genetic-Pareto loop is sketched below).
  • SFR-DeepResearch trains a single web-research agent using reinforcement learning.
  • MEM1 is an RL framework for long multi-turn tasks that keeps memory usage constant (pattern sketched below).
  • Search-R1 trains models to interleave reasoning with live search-engine queries (agent loop sketched below).
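
A quick way to sanity-check the productivity numbers above is to recompute the growth rates from the reported before/after values. The `growth` helper below is ours, not from the report; the input numbers are copied from the bullets.

```python
# Recompute the report's headline growth figures (numbers from the bullets above).
def growth(before: float, after: float) -> str:
    """Percent change from `before` to `after`, formatted with a sign."""
    return f"{(after - before) / before:+.0%}"

print("Median PR size:        ", growth(57, 76))        # +33%
print("LOC per developer:     ", growth(4450, 7839))    # +76%
print("Medium-team LOC/dev:   ", growth(7005, 13227))   # +89%
print("Lines changed per file:", growth(18, 22))        # +22%
```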
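On the DeepSeek-V3 bullet: the point of a Mixture-of-Experts layer is that a router picks a few experts per token, so total parameters can far exceed the parameters actually used. Below is a minimal, generic top-k routing sketch in NumPy; the sizes are toy values, and nothing here reflects DeepSeek-V3's actual architecture beyond the routing idea.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2  # toy sizes, not DeepSeek-V3's

# One "expert" = one small feed-forward map (toy: a single linear layer).
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1  # gating projection

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts; the rest stay inactive."""
    logits = x @ router                        # score each expert for this token
    top = np.argsort(logits)[-top_k:]          # indices of the k best experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                       # softmax over selected experts only
    # Only top_k of n_experts weight matrices are multiplied: sparse activation.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (16,) -- same shape, ~top_k/n_experts of the compute
```

This is how a 671B-parameter model can spend only ~37B parameters of compute per token: the router leaves most experts untouched on any given step.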
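On the GEPA bullet: a genetic-Pareto optimizer mutates candidate instructions and keeps the set that is nondominated across several objectives. The sketch below shows only that skeleton; `mutate` and `evaluate` are random stand-ins, where GEPA instead uses an LLM reflecting on execution traces of the candidate program.

```python
import random

def mutate(instruction: str) -> str:
    """Stand-in mutation; GEPA rewrites instructions via reflection on traces."""
    return instruction + random.choice([" Be concise.", " Cite sources.", " Think step by step."])

def evaluate(instruction: str) -> tuple[float, float]:
    """Stand-in for running the program and scoring its execution traces on
    two objectives, e.g. (task accuracy, brevity)."""
    return (random.random(), random.random())

def dominated(a: tuple, b: tuple) -> bool:
    """True if b is at least as good as a everywhere and strictly better somewhere."""
    return all(y >= x for x, y in zip(a, b)) and any(y > x for x, y in zip(a, b))

seed = "Answer the user's question."
population = {seed: evaluate(seed)}
for _ in range(20):
    parent = random.choice(list(population))
    child = mutate(parent)
    population[child] = evaluate(child)
    # Keep only the Pareto frontier across the objectives.
    population = {c: s for c, s in population.items()
                  if not any(dominated(s, other)
                             for o, other in population.items() if o != c)}

print(len(population), "nondominated instruction(s) kept")
```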
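On the MEM1 bullet: the constant-memory idea is to fold each new observation into a fixed-size state rather than appending turns to an ever-growing context. The sketch below uses trivial truncation as a placeholder for the compression policy MEM1 trains with reinforcement learning.

```python
MAX_STATE_CHARS = 200  # fixed budget: memory does not grow with turn count

def consolidate(state: str, observation: str) -> str:
    """Merge new information into the state, then enforce the budget.
    MEM1 learns this compression with RL; keeping the most recent
    characters is only a placeholder policy."""
    merged = f"{state} | {observation}"
    return merged[-MAX_STATE_CHARS:]

state = "goal: find the paper's venue"
for turn, obs in enumerate(["searched arxiv, found 3 hits",
                            "hit 2 mentions the venue name",
                            "confirmed on the venue page"]):
    state = consolidate(state, obs)
    print(f"turn {turn}: state holds {len(state)} chars (bounded)")
```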
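On the Search-R1 bullet: interleaving means the model alternates between emitting search queries and reasoning over retrieved results until it commits to an answer. The loop below is a generic sketch; `model_step`, `search`, and the tag format are stand-ins, not Search-R1's actual interfaces.

```python
def model_step(context: str) -> str:
    """Stand-in policy: a trained model would generate this text."""
    if "results:" not in context:
        return "<search>state of ai coding 2025 median pr size</search>"
    return "<answer>median PR size rose from 57 to 76 lines</answer>"

def search(query: str) -> str:
    """Stand-in for a live search-engine call."""
    return "results: report says median PR size went 57 -> 76 lines"

context = "question: how did median PR size change in 2025?"
for _ in range(5):  # cap the number of reason/search rounds
    action = model_step(context)
    if action.startswith("<search>"):
        query = action[len("<search>"):-len("</search>")]
        context += f"\n{search(query)}"  # feed retrieved evidence back in
    else:
        print(action)  # the model chose to answer instead of searching again
        break
```

The training signal rewards final-answer correctness, so the model learns when another query is worth its cost versus when the context already supports an answer.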