Hasty Briefs (beta)

Everyone is wrong about AI and Software Engineering

2 months ago
  • #AI
  • #Software Engineering
  • #Epistemology
  • There is an epistemological inversion where less sophisticated observers of AI (like politicians) have a more accurate model of reality than sophisticated ones.
  • Hacker News commenters initially dismissed LLMs as unsuited to complex software engineering, but recent models such as Claude Opus 4.5 and GPT-5.2 show significant improvements in handling real-world codebases.
  • November 2025 marked a turning point with major AI releases (Gemini 3 Pro, Claude Opus 4.5, GPT-5.2) showing dramatic improvements in benchmarks like SWE-bench and internal hiring exams.
  • The technically sophisticated crowd correctly identified early LLM limitations but now resists updating its beliefs despite new evidence, while naive believers in AI marketing are accidentally becoming correct.
  • AI companies' claims about automating software engineering are wrong because they conflate code generation with the actual hard parts of software engineering: specification, verification, and domain modeling.
  • The real change is the inversion of skill value: syntax and API knowledge become less important, while understanding distributed systems, consistency models, and domain-specific requirements becomes more critical.
  • Entry-level roles focused on code translation may contract, while senior roles emphasizing specification and verification become more leveraged.
  • Both skeptics and AI executives need to adjust their views: skeptics should acknowledge recent AI advancements, while executives should recognize that automating code generation doesn't solve the core challenges of software engineering.