Hasty Briefs

Where's the shovelware? Why AI coding claims don't add up

7 days ago
  • #AI coding
  • #productivity
  • #software development
  • The author, a software developer with 25+ years of experience, describes his frustration with AI coding tools: he initially believed they made him more productive, but came to doubt their effectiveness.
  • A METR study revealed developers overestimated AI's productivity boost, thinking it made them 20% faster when it actually slowed them down by 19%.
  • The author ran a personal six-week experiment, flipping a coin to decide between AI-assisted and manual coding, and found no statistically significant productivity difference (see the sketch after this list).
  • Despite widespread adoption and bold claims from the makers of tools like GitHub Copilot and Claude Code, and from Google, there is no observable surge in software output or 'shovelware'.
  • Charts analyzing software releases show no exponential growth post-AI adoption, contradicting the narrative of AI-driven productivity boosts.
  • The author criticizes the tech industry's FOMO-driven adoption of AI tools, which has led to layoffs and salary cuts justified by unproven productivity claims.
  • Developers who feel pressured to use AI tools are reassured that their skepticism is valid and backed by data showing no significant increase in software output.
  • The article addresses common counterarguments, such as claims that skeptics just need better prompting or that AI improves code quality, emphasizing the lack of tangible evidence for either.
  • The author concludes that AI coding tools, despite massive investment, have not delivered on their promises and urges developers to trust their experiences over hype.
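
The coin-flip experiment above amounts to a small randomized self-trial. As a minimal sketch only (not the author's actual analysis; the task times and group sizes below are invented for illustration, and SciPy is assumed to be available), per-task completion times could be split by coin flip and compared with Welch's t-test:

from scipy import stats

# Hypothetical per-task completion times in hours (invented for illustration).
ai_assisted = [3.1, 4.8, 2.5, 6.0, 3.9, 5.2, 4.4, 2.8]
manual = [3.4, 4.2, 2.9, 5.5, 4.1, 5.0, 4.6, 3.0]

# Welch's t-test compares the two group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(ai_assisted, manual, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A p-value well above 0.05 would indicate no statistically significant
# difference between AI-assisted and manual task times.

With only a handful of tasks per group, a test like this has limited statistical power, which is worth keeping in mind when interpreting a null result from a short personal experiment.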