Hasty Briefs (beta)

The machines are fine. I'm worried about us

  • #scientific training
  • #human understanding
  • #AI in academia
  • The author describes two PhD students, Alice and Bob, who completed similar astrophysics projects. Alice engaged deeply, building understanding through effort, while Bob used an AI agent throughout, producing a paper without the underlying learning.
  • Institutions often fail to distinguish between Alice and Bob because quantitative metrics (e.g., paper counts) treat them equally, even though only Alice developed scientific intuition and independent thinking.
  • AI tools risk turning researchers into 'competent prompt engineers' who can produce results but lack the deep understanding needed for scientific creativity and problem detection.
  • Current AI models can generate publishable work under expert supervision, but the supervision itself—the human expertise—remains essential. Without it, errors go unnoticed.
  • The danger is a slow drift into not understanding one's own work: researchers come to rely on AI for their thinking, eroding the human element that makes science meaningful.
  • Using AI as a tool for efficiency (e.g., syntax help) is acceptable, but outsourcing cognitive work undermines the learning process necessary for scientific development.
  • Career incentives in academia pressure students to prioritize short-term output over long-term understanding, encouraging reliance on AI at the expense of future scientific capability.
  • The author argues that the 'grunt work' of science is where real learning happens, and bypassing it with AI diminishes researchers, even if output appears successful.