"The Bitter Lesson" is wrong. Well sort of
9 months ago
- #AI Research
- #Machine Learning
- #Domain Knowledge
- The Bitter Lesson by Rich Sutton contrasts AI research built on human knowledge with general methods that scale with data and compute, favoring the latter.
- A common but false conclusion drawn from The Bitter Lesson is that human knowledge is unnecessary and that data and compute alone suffice.
- Counter-arguments note that every ML model embeds human knowledge in its design and guidance, and that purely data-driven models may not align with human needs.
- An alternative theory suggests domain knowledge always guides the model-building process, on a spectrum from direct methods (encoding knowledge in the model itself) to influential methods (shaping data, training, and evaluation).
- The model-building lifecycle often starts with broad, influential approaches and later incorporates more direct domain knowledge, especially in evaluation.
- Example: LLMs begin with self-supervised pretraining on massive datasets, then incorporate curated data, human feedback, alignment techniques, and expert evaluation.
- Domain knowledge remains critical for building useful AI models, even as the field has gradually shifted from direct toward more influential methods over time.
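
The LLM lifecycle in the bullets above can be sketched as a toy pipeline. This is an illustration only: every function name and data structure here is hypothetical, and each stage is a stand-in for where domain knowledge enters, not a real training implementation.

```python
# Toy sketch of the model-building lifecycle: influential methods dominate
# early (choosing the pretraining corpus), while more direct domain
# knowledge enters later (curated demonstrations, preference signals,
# expert evaluation). All names are hypothetical.

def pretrain(corpus):
    # Influential: knowledge shapes *which* data goes in, not the model itself.
    return {"stage": "pretrained", "seen": len(corpus)}

def finetune(model, curated_examples):
    # More direct: experts curate demonstrations of desired behavior.
    model["stage"] = "finetuned"
    model["curated"] = len(curated_examples)
    return model

def align(model, preference_pairs):
    # Direct: human preference comparisons steer outputs (RLHF-style signal).
    model["stage"] = "aligned"
    model["preferences"] = len(preference_pairs)
    return model

def evaluate(model, expert_rubric):
    # Most direct: domain experts judge whether the model is actually useful.
    return all(check(model) for check in expert_rubric)

model = pretrain(["web text"] * 1000)
model = finetune(model, ["curated demo"] * 50)
model = align(model, [("preferred", "rejected")] * 20)
ok = evaluate(model, [lambda m: m["stage"] == "aligned"])
```

The point of the sketch is the ordering: the earliest stage touches the most data with the least direct human input, and each later stage touches less data with more direct expert judgment.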