LLMorphism: When humans come to see themselves as language models
- #cognitive bias
- #LLMorphism
- #human cognition
- LLMorphism is a cognitive bias in which human cognition is erroneously equated with that of large language models (LLMs) because LLMs produce human-like language output.
- This bias may grow as conversational LLMs become more prevalent, leading people to assume similarity in cognitive architecture based solely on linguistic similarity.
- LLMorphism spreads via analogical transfer (projecting LLM features onto humans) and metaphorical availability (LLM terminology shaping descriptions of human thought).
- It is distinct from concepts like mechanomorphism, anthropomorphism, computationalism, dehumanization, objectification, and predictive-processing theories.
- Potential impacts include changes in work, education, responsibility, healthcare, communication, creativity, and perceptions of human dignity.
- The concern is not only that people attribute too much mind to machines, but also that they attribute too little mind to humans, which makes this a broader societal issue than machine anthropomorphism alone.