A Knockout Blow for LLMs?
- #LLMs
- #Apple
- #AGI
- Apple's new paper highlights significant weaknesses in LLMs, particularly their inability to generalize beyond their training data.
- The paper critiques 'chain of thought' prompting and so-called 'reasoning models,' showing that they often arrive at wrong answers even when their intermediate reasoning traces look plausible.
- LLMs struggle with classic problems like the Tower of Hanoi, performing poorly even when explicitly handed the solution algorithm (see the sketch after this list).
- The paper argues that LLMs cannot reliably solve problems that humans and conventional algorithms handle easily, casting doubt on their potential to achieve AGI.
- One objection is that humans also have limits on such puzzles; the rebuttal is that AGI is supposed to combine human adaptability with the reliability of conventional computation, not inherit human failure modes.
- LLMs are not a substitute for well-specified conventional algorithms and should not be expected to work reliably in complex scenarios.
- The paper suggests that LLMs will continue to have uses in coding, brainstorming, and writing but are not a direct route to transformative AGI.
- The field of neural networks and deep learning is not dead, but LLMs have clear limits, and other approaches may thrive.
- The paper is regarded as an elegant piece of scientific research, and as a critique of over-reliance on scaling LLMs without theoretical foundations.
- True advancement in human civilization requires theory-driven system building, not just 'dumb scaling' of LLMs.
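
As a concrete reference point for the Tower of Hanoi item above: the puzzle has a textbook recursive solution, and the minimal sketch below (the standard algorithm, not code from the paper) shows the kind of well-specified procedure that conventional software executes correctly at any disk count, while the paper reports reasoning models breaking down at modest disk counts even when given the algorithm.

```python
def hanoi(n, source, target, spare, moves):
    """Append the move sequence that shifts n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the top n-1 disks on the spare peg
    moves.append((source, target))              # move the largest disk directly
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks onto the target

moves = []
hanoi(8, "A", "C", "B", moves)
print(len(moves))  # 2**8 - 1 = 255 moves, each one forced by the recursion
```

The contrast is the point: this dozen-line recursion is provably correct for any number of disks, which is exactly the kind of reliability the post argues LLMs cannot be expected to provide.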