Proving (literally) that ChatGPT isn't conscious
4 months ago
- #AI Consciousness
- #Philosophy of Mind
- #Neuroscience
- The article presents a proof that Large Language Models (LLMs) like ChatGPT are not conscious, arguing that their outputs are based on training data rather than subjective experiences.
- The author, with a background in neuroscience and Integrated Information Theory, uses meta-theoretic reasoning to argue that no non-trivial theory of consciousness can attribute consciousness to LLMs.
- Key to the argument is the idea of substitutions: replacing an LLM with a behaviorally equivalent system such as a single-hidden-layer neural network or a lookup table. A theory that grants consciousness to the LLM then makes mismatched predictions about these substitutes, and that mismatch falsifies any theory that grants consciousness to LLMs (see the first sketch after this list).
- The paper introduces a Proximity Argument, stating that LLMs are too similar to provably non-conscious systems (like lookup tables) to be considered conscious themselves.
- Continual learning is highlighted as a critical prerequisite for consciousness, and one that LLMs lack: their weights are frozen at inference time, so each input is processed statically and nothing is learned over the course of an interaction (see the second sketch after this list).
- The author critiques the current state of consciousness research, calling for a more rigorous, falsifiable approach to theories of consciousness, and announces the founding of Bicameral Labs to pursue this research.
- Ethical implications are noted, emphasizing the importance of distinguishing conscious from non-conscious systems to guide AI development responsibly.
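
A minimal sketch of the substitution idea mentioned above, assuming greedy (temperature-0) decoding so the model is a fixed function of its prompt. `toy_llm`, `build_lookup_table`, and the canned replies are hypothetical stand-ins for illustration, not anything from the article:

```python
# Sketch: at temperature 0 a frozen LLM is a fixed function from prompts to
# outputs, so over any finite set of prompts it can be replaced by a lookup
# table that behaves identically. `toy_llm` is a stand-in, not a real model.

def toy_llm(prompt: str) -> str:
    # Stand-in for a frozen, deterministic LLM (greedy decoding, fixed weights).
    canned = {
        "Are you conscious?": "As an AI language model, I do not have experiences.",
        "2+2?": "4",
    }
    return canned.get(prompt, "I'm not sure.")

def build_lookup_table(model, prompts):
    # Record the model's output for every prompt we care about.
    return {p: model(p) for p in prompts}

prompts = ["Are you conscious?", "2+2?", "Hello"]
table = build_lookup_table(toy_llm, prompts)

def lookup_system(prompt: str) -> str:
    # The substituted system: pure retrieval, nothing resembling thought.
    return table[prompt]

# Identical input/output behavior on every recorded prompt.
assert all(toy_llm(p) == lookup_system(p) for p in prompts)
```

From the outside the two systems are indistinguishable on the recorded prompts, which is the pressure the substitution and Proximity Arguments put on any theory that would call the LLM conscious but not the table.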
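A second minimal sketch, this time of the continual-learning point, assuming a tiny PyTorch model as a stand-in for a deployed transformer; the only claim it illustrates is that parameters do not change during inference:

```python
# Sketch: during a chat, a deployed LLM's parameters stay frozen; every reply
# is a function of the fixed weights plus whatever sits in the context window.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
model.eval()  # inference mode: no training step is ever taken

params_before = [p.detach().clone() for p in model.parameters()]

with torch.no_grad():  # no gradients, no weight updates during the "conversation"
    for turn in range(10):  # ten turns of simulated dialogue
        fake_context = torch.randn(1, 8)  # stand-in for the tokenized context
        _reply = model(fake_context)

params_after = list(model.parameters())

# Nothing was learned across the interaction: weights are bit-for-bit identical.
assert all(torch.equal(b, a) for b, a in zip(params_before, params_after))
print("Parameters unchanged; any adaptation lives only in the context window.")
```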