AI capability isn't humanness
- #AI vs Human Cognition
- #LLM Constraints
- #Alignment Techniques
- Humans and LLMs operate under fundamentally different constraints and algorithms, making their similarities superficial.
- Scaling AI will widen the gap between human and AI cognition, not make AI more human-like.
- Humans rely on limited, personal memories and slow, step-by-step reasoning, while LLMs use vast datasets and parallel processing.
- LLMs can scale almost arbitrarily in parameters and training data; humans cannot.
- Human cognition is bounded by metabolic limits and slow neural processing, constraints that LLMs do not share.
- LLMs have effectively unbounded training data compared to humans, who filter information through attention and relevance.
- Humans must act quickly using heuristics, while LLMs have more generous time budgets for responses.
- Alignment methods like RLHF tweak LLM behavior superficially without changing underlying reasoning mechanisms.
- Evaluating human-likeness in LLMs requires probing decision-making processes, not just surface-level outputs.
- Roundtable Technologies is developing Proof of Human, an API for verifying human identity, based on these insights.
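The point about probing decision-making processes rather than surface outputs can be made concrete: two agents may give the same answer while arriving at it very differently. A minimal sketch, using purely illustrative (assumed, not measured) choice probabilities, shows how a process-level comparison such as KL divergence separates agents whose top choices agree:

```python
# Sketch: two agents can agree on surface outputs (the argmax choice)
# while their underlying decision distributions differ sharply.
# All probabilities below are illustrative assumptions, not measured data.
import math

def kl_divergence(p, q):
    """KL(p || q) in bits; assumes matching support and no zeros in q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical choice probabilities over four options in a forced-choice task.
human_like = [0.55, 0.25, 0.15, 0.05]   # heuristic-driven, noisy
llm_like   = [0.97, 0.01, 0.01, 0.01]   # near-deterministic argmax

# Surface-level check: both agents pick option 0, so outputs look identical.
assert max(range(4), key=lambda i: human_like[i]) == \
       max(range(4), key=lambda i: llm_like[i])

# Process-level check: the full distributions diverge strongly.
print(round(kl_divergence(human_like, llm_like), 2))  # → 1.41
```

A surface-only evaluation that records just the chosen option would score these two agents as identical; the distributional probe does not.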