The author reflects on their lifelong journey with coding, starting from their first line of code on a Commodore 64 at age 6.
They describe the exhilarating feeling of altering reality through code, likening it to the discovery of fire.
The author ponders the possibility of defining all the rules of the world in a program, inspired by 'The Matrix'.
With the rise of LLMs and AI, the author initially thought the challenge was just about memory and computing power.
Recent advancements like DeepMind's Genie 3 and Robbyant's LingBot-World suggest that future simulations may not require traditional coding.
The author explores the idea of AI models simulating systems like spreadsheets or Linux terminals with no underlying code, purely by learning patterns from observed behavior.
They question the nature of software itself, suggesting that if a model can simulate behavior accurately, the distinction between 'real' and 'simulated' software blurs.
The author concludes by musing on the possibility that the 'Matrix' might not be written by anyone but learned by AI from observing the real world.
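The notion of software learned from observation rather than written as rules can be made concrete with a toy sketch. This is an illustrative assumption, not anything from the article: a `LearnedSimulator` (a hypothetical name) records input/output traces of a "real" system and then replays its behavior without ever encoding the system's rules.

```python
from collections import defaultdict

class LearnedSimulator:
    """Reproduce a system's behavior purely from observed traces."""

    def __init__(self):
        # state -> {input: (next_state, output)}
        self.transitions = defaultdict(dict)

    def observe(self, state, inp, next_state, output):
        """Record one observed interaction with the real system."""
        self.transitions[state][inp] = (next_state, output)

    def simulate(self, state, inp):
        """Replay learned behavior; fail on anything never observed."""
        if inp not in self.transitions.get(state, {}):
            raise KeyError(f"never observed {inp!r} in state {state!r}")
        return self.transitions[state][inp]

# A stand-in "real" system whose code we pretend is inaccessible:
# a two-state light switch.
def real_system(state, inp):
    if inp == "toggle":
        new = "on" if state == "off" else "off"
        return new, f"light is now {new}"
    return state, "unknown command"

# "Train" the simulator by watching every interaction once.
sim = LearnedSimulator()
for state in ("on", "off"):
    for inp in ("toggle", "status"):
        ns, out = real_system(state, inp)
        sim.observe(state, inp, ns, out)

# The simulator now mimics the system with no rules coded anywhere.
print(sim.simulate("off", "toggle"))
```

Within the observed states the simulation is indistinguishable from the original, which is the blurring of "real" and "simulated" software the author describes, shrunk to a two-state example.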
AI is not replacing SREs but deskilling them: by handling 95% of incident responses, it leaves humans less able to manage the remaining 5%.
Lisanne Bainbridge's 'Ironies of Automation' (1983) observed that automation makes human operators most critical during rare failures, yet leaves them unprepared for exactly those moments through lack of practice.
AI is automating routine cognitive tasks in SRE, such as alert noise reduction, log pattern detection, and basic root-cause analysis, leaving humans to handle rare, complex issues.
Deskilling occurs when reliance on AI reduces human expertise, as seen in medicine and aviation, where skills atrophy without practice.
Never-skilling is a concern for junior SREs who may never develop foundational skills due to AI handling most tasks.
Solutions include deliberate inefficiency (human practice on AI-handled incidents) and human-in-the-loop systems to maintain engagement and skills.
The future of SRE should focus on AI supercharging humans, keeping them sharp for critical decisions, rather than replacing them.
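One of the routine tasks listed above, log pattern detection for alert noise reduction, can be sketched in a few lines. This is a generic illustration, not any specific tool from the article: collapse the variable fields of log lines (numbers, IPs, IDs) into templates so that thousands of near-identical lines dedupe into a handful of patterns.

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Mask variable fields so similar log lines share one template."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)  # hex addresses
    line = re.sub(r"\d+", "<NUM>", line)             # counts, IDs, IPs
    return line

logs = [
    "ERROR timeout after 30s connecting to 10.0.0.12",
    "ERROR timeout after 45s connecting to 10.0.0.97",
    "WARN retrying request id=8841",
    "WARN retrying request id=8842",
]

# Four raw lines collapse into two patterns with counts.
counts = Counter(template(line) for line in logs)
for pattern, n in counts.most_common():
    print(n, pattern)
```

The point of the sketch is the deskilling argument in miniature: once this runs automatically, the human who used to eyeball raw logs stops practicing that skill.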
OpenAI is revising its agreement with the US military to include explicit prohibitions against using its technology for domestic surveillance.
The deal faced backlash, leading to a surge in uninstalls of OpenAI's ChatGPT app and a rise in popularity for rival Anthropic's Claude.
AI's military applications include logistics and data analysis, with companies like Palantir providing tools for intelligence and defense purposes.
Concerns remain about AI's role in autonomous weapons and the need for human oversight in military decision-making.
Anthropic's refusal to allow its AI in autonomous weapons led to its blacklisting by the Trump administration, yet its technology was reportedly used during the US-Israel conflict with Iran.
The author provocatively declares that human-written code died in 2025 and predicts that code reviews will die in 2026.
Code reviews are becoming obsolete due to the exponential increase in code changes and the inefficiency of manual reviews.
AI-generated code requires more effort to review than human-written code, making manual reviews unsustainable.
The future of code review lies in AI-driven tools and shifting human checkpoints upstream to review intent rather than code.
Spec-driven development is emerging as the primary method, where humans review specs, plans, and acceptance criteria instead of code.
Trust in AI-generated code is built through layers of verification, including multiple agent comparisons, deterministic guardrails, and adversarial verification.
Human value shifts to defining acceptance criteria and business logic, while AI handles implementation.
The concept of 'good code' is evolving, with a focus on standardization and automation.
The future of software development is characterized by rapid iteration, observation, and quick reverts rather than slow, manual reviews.
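The "layers of verification" idea above can be sketched as a merge gate. Everything here is a hedged illustration under stated assumptions, not the article's actual tooling: the two lambda "agents" stand in for implementations proposed by different AI agents, the fixed acceptance cases are the deterministic guardrail, and cross-agent agreement on randomized inputs is the adversarial check.

```python
import random

# Deterministic guardrail: human-reviewed acceptance cases for the spec
# "sort a list of integers" (input, expected output).
ACCEPTANCE_CASES = [([3, 1, 2], [1, 2, 3]), ([], []), ([5], [5])]

def passes_acceptance(fn) -> bool:
    """Layer 1: the candidate must pass every acceptance case."""
    return all(fn(list(inp)) == out for inp, out in ACCEPTANCE_CASES)

def agents_agree(fns, trials=100) -> bool:
    """Layer 2: independent candidates must agree on random inputs."""
    rng = random.Random(0)  # fixed seed keeps the gate reproducible
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 10))]
        results = {tuple(fn(list(xs))) for fn in fns}
        if len(results) != 1:
            return False
    return True

# Stand-ins for implementations produced by two different agents.
agent_a = sorted
agent_b = lambda xs: sorted(xs, key=lambda x: x)

candidates = [agent_a, agent_b]
verified = (all(passes_acceptance(f) for f in candidates)
            and agents_agree(candidates))
print("merge" if verified else "escalate to human review")
```

No human reads the generated code here; the human contribution is the acceptance cases, which is exactly the shift from reviewing code to reviewing intent that the summary describes.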
The traditional résumé is becoming obsolete as hiring managers and recruiters find them unreliable due to AI-generated content and embellishments.
Companies like Expensify and Automattic are moving away from requiring résumés, focusing instead on direct questions or work trials to assess candidates.
Skills-based hiring is on the rise, with 70% of employers prioritizing practical abilities over credentials like degrees and years of experience.
AI tools in recruitment have introduced biases and inefficiencies, overwhelming recruiters with unqualified applicants and demoralizing job seekers.
New platforms like Vamo are emerging to evaluate candidates based on their actual work, such as GitHub projects, rather than their résumés.
LinkedIn is adapting by introducing features to verify skills and encouraging job seekers to showcase their knowledge and personality through posts and videos.
Indeed is experimenting with faster interview processes to reduce the 'black hole problem' where candidates wait too long for responses.
Despite the shift towards skills-based hiring, challenges remain, including potential biases in new hiring methods and accessibility issues for some candidates.
Experts suggest that a more holistic assessment of candidates is needed, combining technology with human judgment to ensure fair and effective hiring.