LLMs exploit our tolerance for sloppiness
10 months ago
- #LLMs
- #Rigor
- #Education
- Dick Guindon's observation that "writing is nature's way of letting you know how sloppy your thinking is" frames a progression of thought relevant to programmers and thinkers.
- LLMs are good at understanding human language but perform poorly at math and at writing reliable code.
- LLM sloppiness starts at level 2 of the thought progression and worsens at each level beyond it.
- One theory holds that bigger models will become more rigorous, but there is skepticism that scaling alone will match human intelligence.
- LLMs exploit human tendencies to overlook sloppiness, driving their popularity.
- Higher education aims to train sloppiness out of people, but LLMs may increase it by letting students bypass the process of discovery.
- There's a moral imperative to resist normalizing LLMs in education, especially for generating expression.
- LLMs should be used for data retrieval and summarization, not to replace human creative output.
- Maintaining academic rigor requires recognizing LLM limitations and reversing the decline in academic standards.
- Future academic standards may need to exceed LLM capabilities to preserve the value of a 'good degree'.