- Tinygrad's competition is strong, open-source, and run by smart people, unlike comma's past competition in self-driving cars.
- Tinygrad is dramatically smaller than its competition: 14,556 lines against PyTorch's 3.3M, JAX's 400K, and MLIR's 950K. Two orders of magnitude less code to maintain suggests a fundamentally more efficient design.
- The obvious objection, that such a small codebase can't be competitive on speed or features, is challenged: tinygrad is already feature-competitive, leaving speed as the open question.
- Tinygrad aims to abstract away everything except the core problem of machine learning systems: scheduling compute across scales, from single kernels up to full training clusters, with one unified solution (see the lazy-evaluation sketch after this list).
- The project's goal is to be the fastest NN framework under 25K lines, capable of handling GPT-5 scale training jobs.
- The key steps: expose the underlying search problem, keep its formulation simple and complete, and then apply state-of-the-art search techniques to it (a beam-search sketch follows this list).
- Success could lead to rethinking software development, but it's a high-risk bet compared to comma's more straightforward path.
- Development speed is the critical indicator of tinygrad's potential success; the significant milestone is the AMD contract: train LLaMA 405B on AMD hardware as fast as NVIDIA within a year.
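
To make the scheduling claim concrete, here is a minimal sketch of tinygrad's lazy-evaluation model, assuming a recent `tinygrad` install (the `Tensor` API shown is public, though names may drift between releases). Operations only record a graph; device work happens at `realize()`, which is exactly where the scheduling problem lives.

```python
# Minimal sketch of tinygrad's lazy model (assumes `pip install tinygrad`).
from tinygrad import Tensor

a = Tensor.rand(1024, 1024)
b = Tensor.rand(1024, 1024)

# No device work happens here: tinygrad only records a computation graph.
c = (a @ b).relu().sum()

# Scheduling happens at realize time: the framework decides how to fuse,
# order, and place kernels -- the one problem it does not abstract away.
c.realize()
print(c.numpy())
```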
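On the search side, tinygrad already exposes this directly: setting the `BEAM` environment variable (e.g. `BEAM=2`) replaces hand-tuned heuristics with a beam search over kernel optimizations. The sketch below shows the general shape of that technique, not tinygrad's internals; `candidate_actions` and `time_kernel` are hypothetical stand-ins for "enumerate legal kernel rewrites" and "benchmark on device".

```python
# Illustrative beam search over kernel optimizations. The two callables are
# hypothetical: candidate_actions(k) yields legal rewrites of kernel k
# (e.g. tile, unroll, vectorize) and time_kernel(k) benchmarks it on device.
from typing import Callable

def beam_search(kernel, candidate_actions: Callable, time_kernel: Callable,
                width: int = 4, depth: int = 8):
    beam = [(time_kernel(kernel), kernel)]
    for _ in range(depth):
        expanded = []
        for _, k in beam:
            for action in candidate_actions(k):
                nk = action(k)
                expanded.append((time_kernel(nk), nk))
        if not expanded:
            break
        # keep only the `width` fastest candidates, then search one level deeper
        beam = sorted(expanded + beam, key=lambda t: t[0])[:width]
    return beam[0][1]  # fastest kernel found
```

Because the formulation is small (a state, a set of actions, a timing oracle), swapping in stronger search later is straightforward; that is the payoff of keeping the problem simple and complete.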