The fast path/slow path mirage
9 days ago
- #performance-optimization
- #system-design
- #networking
- The fast path/slow path split is a common technique in computing: handle the common case on a streamlined fast path, and route exceptions or uncommon cases through a slower, more generic slow path.
- Despite its theoretical appeal, the fast path/slow path split often fails in real-world deployments, causing performance problems and limiting system extensibility.
- Amdahl's Law shows that the benefit of the split is bounded by the fraction of work that actually stays on the fast path; if enough traffic falls through to the slow path, the split performs worse than a single uniform path would.
- Tail latency is a critical consideration in distributed systems, where the slowest operation (often in the slow path) can dictate overall performance, making the fast path/slow path split detrimental in such contexts.
- The fast path/slow path split can be exploited for Denial of Service (DoS) attacks, as attackers can flood systems with requests that force slow path processing, draining resources.
- In networking, particularly with routers, the fast path/slow path split has led to inflexibility, hindering the deployment of new protocols or features like IPv6 extension headers.
- RFC 9673 addresses the fast path/slow path issue by advocating protocol designs that avoid slow path processing and by encouraging router vendors to expand fast path capabilities, for example via programmable datapaths.
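The Amdahl's Law and tail-latency points above can be made concrete with a little arithmetic. This is a minimal sketch with made-up latency numbers (the `FAST`/`SLOW`/`UNIFORM` costs and the 100-way fan-out are illustrative assumptions, not figures from the post): expected latency under a split is `p*fast + (1-p)*slow`, so the split only beats a uniform path while the hit rate `p` stays above `(slow - uniform) / (slow - fast)`, and in a fan-out a request is only fast end-to-end if every server stays on its fast path.

```python
# Hypothetical per-request costs in microseconds (assumed for illustration).
FAST, SLOW, UNIFORM = 1.0, 100.0, 5.0

def split_latency(p: float) -> float:
    """Expected latency when a fraction p of requests take the fast path."""
    return p * FAST + (1 - p) * SLOW

# The split beats the uniform path only when split_latency(p) < UNIFORM,
# i.e. when p exceeds this break-even hit rate (here 95/99 ~ 0.96).
break_even = (SLOW - UNIFORM) / (SLOW - FAST)

def all_fast_probability(p: float, n: int) -> float:
    """Probability that an n-way fan-out request avoids the slow path entirely."""
    return p ** n

for p in (0.999, 0.99, 0.9):
    print(f"hit rate {p}: mean {split_latency(p):.2f}us, "
          f"P(100-way fan-out stays all-fast) = {all_fast_probability(p, 100):.3f}")
```

Even a 99% fast-path hit rate leaves a 100-way fan-out with roughly a one-in-three chance of avoiding the slow path entirely, which is why the slow path so often sets the tail.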