The future belongs to those who can refute AI, not just generate with AI
6 days ago
- #Epistemology
- #Software Engineering
- #AI Verification
- The future of engineering will be shaped by the ability to verify and refute AI-generated content, not just generate it.
- Sheer volume of generated content (code, images, video) is no longer a signal of value, because AI makes generation cheap.
- Knowledge is defined by what survives adversarial scrutiny, not just by experience or authority.
- The value of ideas, designs, and code is revealed under stress and attack, not in their creation.
- The source of an idea is irrelevant; only its survival under attempted falsification matters, following Karl Popper's philosophy of science.
- GenAI is a conjecture engine; all its outputs must be treated as provisional and subjected to rigorous testing.
- Existing systems like CI/CD, code review, and unit tests are mechanisms for adversarial scrutiny of code.
- Historically, software codebases grew at about 20% annually, but AI could drastically increase this growth rate.
- AI-driven productivity could lead to 'monster codebases' that are unmanageable without scalable verification.
- Human-only review cannot keep pace with AI-scale generation; AI must assist in verification, or the review process will collapse.
- Specialized review AI can focus on checking specific properties, acting as a junior reviewer to flag anomalies.
- Guardrails should be implemented in the verification layer, not just the generation layer, to ensure reliability.
- Even minimal, highly leveraged codebases require rigorous verification due to the increased blast radius of errors.
- The problem space for software is far from saturated, with many global challenges still requiring software solutions.
- Engineering must adapt quickly to integrate cost-effective verification tools into development workflows.
- Automated review on every git commit treats each change as a hypothesis to be systematically tested.
- The future of engineering depends on rigorous refutation of AI-generated content, not just its generation.
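The refutation loop described above can be sketched as a tiny verification layer: a generated artifact is treated as a conjecture and run through a battery of adversarial checks, any one of which can refute it. Everything here is illustrative and hypothetical, not from any real tool: the `refute` helper, the check list, and the deliberately flawed `generated_abs` are all assumptions made for the sketch.

```python
# Minimal sketch of a Popper-style verification layer: a generated
# function is a conjecture; each check is an attempted refutation.
# All names here are illustrative, not part of any real library.

def refute(candidate, checks):
    """Run each (name, predicate) check against the candidate.

    Returns the names of failed checks. An empty list means the
    conjecture survived this round of scrutiny -- corroborated,
    never proven.
    """
    failures = []
    for name, predicate in checks:
        try:
            if not predicate(candidate):
                failures.append(name)
        except Exception:
            failures.append(name)  # a crash also counts as a refutation
    return failures

# Suppose an AI generated this absolute-value implementation:
generated_abs = lambda x: (x ** 2) ** 0.5

checks = [
    ("non-negative", lambda f: all(f(x) >= 0 for x in range(-5, 6))),
    ("matches abs on small ints", lambda f: all(f(x) == abs(x) for x in range(-5, 6))),
    ("returns int for int input", lambda f: isinstance(f(3), int)),
]

print(refute(generated_abs, checks))  # ['returns int for int input']
```

The point of the sketch is the asymmetry: one failed check refutes the artifact outright, while an empty failure list only means it has not been refuted yet. A verification layer like this belongs in CI, run against every commit, with the check battery growing as reviewers (human or AI) conjecture new failure modes.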