Hasty Briefs

Old code never dies: why legacy software is often safer than new code

19 hours ago
  • #software-engineering
  • #risk-management
  • #legacy-software
  • Legacy software accumulates operational knowledge from handling edge cases and real-world incidents over time, making it often more reliable than new code.
  • Systems like COBOL in banking have encoded institutional memory and regulatory adaptations, making them hard to replace without losing critical knowledge.
  • Infrastructure failures, such as the FAA NOTAM system outage, highlight that a system's age and complexity signal both risk and how deeply operations depend on its continued functioning.
  • Rewrites of core systems can lead to catastrophic failures, as seen in TSB Bank's migration disaster and Knight Capital's trading loss, due to rushed transitions and incomplete edge-case testing.
  • AI-generated code poses security risks: studies show vulnerabilities and insufficient review, creating 'verification debt', and generated code rarely captures the business logic embedded in legacy systems.
  • Organizations should evaluate legacy systems by assessing whether they are actually broken rather than merely old, isolating transition risk with incremental strategies like the strangler fig pattern, and deciding what to keep, refactor, or replace based on cost and risk.
  • New software can fail as severely as old, as the CrowdStrike incident demonstrated; careful change management and real-world testing matter more than age alone.
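The strangler fig pattern mentioned above can be sketched as a routing facade that sends migrated paths to the new implementation while everything else still hits the legacy system; the migrated set grows until the old code handles nothing. This is a minimal illustration, not code from the article; the names (`LegacyBilling`, `NewBilling`, `StranglerFacade`) are hypothetical.

```python
# Strangler fig sketch: a facade incrementally routes traffic away
# from a legacy system. All class names here are hypothetical.

class LegacyBilling:
    def handle(self, path: str) -> str:
        return f"legacy handled {path}"

class NewBilling:
    def handle(self, path: str) -> str:
        return f"new handled {path}"

class StranglerFacade:
    """Routes each request by path prefix. Over successive releases,
    more prefixes move to `migrated` until legacy can be retired."""

    def __init__(self, legacy, new, migrated_prefixes):
        self.legacy = legacy
        self.new = new
        self.migrated = tuple(migrated_prefixes)

    def handle(self, path: str) -> str:
        target = self.new if path.startswith(self.migrated) else self.legacy
        return target.handle(path)

facade = StranglerFacade(LegacyBilling(), NewBilling(), ["/invoices"])
print(facade.handle("/invoices/42"))  # served by the new code path
print(facade.handle("/ledger/99"))    # still served by legacy
```

Because both systems stay live during the transition, each migration step is small, observable, and reversible, which is exactly the property the rewrite disasters above lacked.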