Hasty Briefs

Google DeepMind Paper Argues LLMs Will Never Be Conscious

5 hours ago
  • #Interdisciplinary Research Gap
  • #Corporate AI Narratives
  • #AI Consciousness Debate
  • Alexander Lerchner, a senior scientist at Google DeepMind, argues that AI systems cannot achieve consciousness, contradicting claims from AI executives like DeepMind's Demis Hassabis about AGI's potential impact.
  • The paper's 'abstraction fallacy' concept asserts that AI relies on human-made abstractions and lacks intrinsic meaning or physical embodiment, making consciousness impossible; other experts support this view but regard it as a rehash of longstanding arguments.
  • Experts note that if AI cannot become conscious, AGI faces a practical limit on its capabilities, challenging predictions of transformative AGI effects; they also criticize the AI community's insularity and its lack of engagement with interdisciplinary research on consciousness.
  • Google's decision to allow the paper's publication may align with corporate interests in avoiding legal and ethical responsibility for AI systems, as suggested by the paper's disclaimer and the removal of Google branding after a media inquiry.
  • The discussion underscores a divide between corporate AI narratives and rigorous philosophical scrutiny, with calls for AI researchers to engage more with fields like biology and linguistics to better understand concepts like agency and intelligence.