The "high-level CPU" challenge
12 days ago
- #hardware-design
- #high-level-languages
- #performance-optimization
- The author challenges the claim that high-level languages (HLLs) run slowly mainly because current hardware is ill-suited to them, rather than because of costs inherent to the languages themselves.
- To count as viable, they argue, an HLL-optimized design must come within roughly 25% of the performance of a traditional RISC architecture such as MIPS.
- Criticism is directed at vague claims by notable figures (Alan Kay, Jamie Zawinski, Steve Yegge) about hardware inefficiencies without concrete technical solutions.
- Historical architectures like the B5000 are examined, but the author doubts their efficiency against modern RISC designs due to complexity and overhead (e.g., tagged data).
- The trade-offs of hardware-enforced safety (e.g., tagging) versus software solutions (e.g., JVM/.NET) are discussed, with skepticism about cost-effectiveness.
- Dynamic languages (Lisp/Smalltalk) are critiqued for inherent inefficiencies (type checks, indirection) that hinder performance even when static type annotations are added.
- The von Neumann architecture is defended for its standardized, uniform memory model, while alternatives (e.g., neural networks) are dismissed for lacking detailed, implementable proposals.
- The author demands specific, implementable hardware designs to support HLLs efficiently, offering to prototype viable ideas and publicly credit contributors.