Floats Don't Agree with Themselves
11 hours ago
- #geometry-algorithms
- #integer-arithmetic
- #deterministic-computing
- Floats can give different results on different machines because IEEE 754 implementations vary in practice — not in the basic operations, which the standard pins down, but in FMA contraction, compiler reassociation, denormal handling, and x87 extended-precision registers — and each of these breaks bit-for-bit reproducibility.
- A one-bit difference in a near-zero cross product can flip its sign, and because downstream decisions are discrete (e.g., classifying a vertex as reflex vs. convex), that single flipped sign can fork an entire convex polygon decomposition.
- The solution is to avoid floats entirely; exact-poly uses integer arithmetic (i64 coordinates, i128 cross products) for deterministic results across architectures like x86, ARM, WASM, and even systems without FPUs.
- Integer arithmetic means choosing a fixed scale up front (e.g., SCALE = 1,000,000 for micrometer precision in geodesy) and bounding coordinates so nothing overflows; i128 leaves ample headroom for products of i64 differences.
- exact-poly runs a cascade of convex decomposition algorithms (e.g., ExactPartition, Bayazit, EarClip with Hertel-Mehlhorn) and falls back on rotations of the input as a heuristic for pathological cases.
- Key invariants include exact area equality (sum of part areas equals original ring area) and deterministic branching based on integer comparisons, with no floating-point operations in the pipeline.
- The library handles practical cases (e.g., up to 100 vertices) but rejects self-intersecting polygons, holes, or inputs exceeding configured limits, focusing on correctness and reproducibility.
- Applications include lockstep simulations, geofencing, ZK proofs, and embedded systems where bit-for-bit agreement across processes is critical, with potential extensions to other geometric operations.
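The reflex-vs-convex fork is easy to reproduce. The sketch below is not exact-poly's API, just a minimal illustration: an f64 orientation test loses the sign of a near-degenerate cross product entirely, while the i128 version recovers it exactly.

```rust
// Sign of the cross product (b - a) x (c - a), computed two ways.

// f64 version: both partial products round to the same double near 2^101,
// so the tiny true difference (-2) vanishes.
fn orient_f64(a: (f64, f64), b: (f64, f64), c: (f64, f64)) -> i32 {
    let v = (b.0 - a.0) * (c.1 - a.1) - (b.1 - a.1) * (c.0 - a.0);
    if v > 0.0 { 1 } else if v < 0.0 { -1 } else { 0 }
}

// Exact version: i64 differences fit in i64, and their products fit in i128.
fn orient_i128(a: (i64, i64), b: (i64, i64), c: (i64, i64)) -> i32 {
    let v = (b.0 - a.0) as i128 * (c.1 - a.1) as i128
          - (b.1 - a.1) as i128 * (c.0 - a.0) as i128;
    v.signum() as i32
}

fn main() {
    let a = (0i64, 0i64);
    let b = ((1i64 << 50) + 3, (1i64 << 50) + 1);
    let c = ((1i64 << 51) + 5, (1i64 << 51) + 1);
    // Exact cross product is -2; in f64 both products round to
    // 2^101 + 7 * 2^50, so their difference is exactly 0.0.
    println!("f64 says:  {}", orient_f64((a.0 as f64, a.1 as f64),
                                         (b.0 as f64, b.1 as f64),
                                         (c.0 as f64, c.1 as f64)));
    println!("i128 says: {}", orient_i128(a, b, c));
}
```

Here the float predicate reports "degenerate" while the exact one reports a clockwise turn; a decomposition algorithm branching on that answer takes a different path on each.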
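Choosing the scale is the only lossy step, so it happens exactly once, at the boundary. A hedged sketch (the `SCALE` constant and `quantize` helper are illustrative, not exact-poly's API):

```rust
// Illustrative fixed-point boundary: 1_000_000 units per meter
// (micrometer resolution). Everything after this point is integer-only.
const SCALE: i64 = 1_000_000;

fn quantize(meters: f64) -> i64 {
    // Round once, here; never convert back to floats mid-pipeline.
    (meters * SCALE as f64).round() as i64
}

fn main() {
    // Overflow headroom: with |coordinate| <= B, edge differences are
    // at most 2B, so a cross product is at most 2 * (2B)^2 = 8 * B^2;
    // any B below 2^62 fits comfortably in i128.
    println!("{}", quantize(1234.567891));
    println!("{}", quantize(2.5));
}
```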
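The cascade itself is ordered fallback plus a rotation loop. A minimal sketch of that control flow, assuming "rotations" means cycling the ring's start vertex (function and type names here are hypothetical, not exact-poly's actual API):

```rust
type Ring = Vec<(i64, i64)>;
type Decomposer = fn(&Ring) -> Option<Vec<Ring>>;

// Try each algorithm in order; for each, also try every rotation of the
// ring's start vertex, since some failure modes depend on where we begin.
fn decompose_cascade(ring: &Ring, algos: &[Decomposer]) -> Option<Vec<Ring>> {
    for &algo in algos {
        for shift in 0..ring.len() {
            let rotated: Ring =
                ring[shift..].iter().chain(&ring[..shift]).cloned().collect();
            if let Some(parts) = algo(&rotated) {
                return Some(parts);
            }
        }
    }
    None
}

// Stub that only succeeds when the ring starts at the origin, standing in
// for an algorithm with a start-vertex-sensitive failure mode.
fn picky_stub(ring: &Ring) -> Option<Vec<Ring>> {
    (ring[0] == (0, 0)).then(|| vec![ring.clone()])
}

fn main() {
    let ring: Ring = vec![(4, 0), (4, 3), (0, 3), (0, 0)];
    let parts = decompose_cascade(&ring, &[picky_stub]);
    // The rotation loop eventually presents (0, 0) first, so this succeeds.
    println!("succeeded: {}", parts.is_some());
}
```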
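The area invariant is checkable with the integer shoelace formula. A sketch assuming CCW rings: working with twice the signed area avoids any division, so the equality check is exact, not approximate.

```rust
// Twice the signed area of a ring, via the shoelace formula in i128.
// Doubling keeps everything integral: no halving, no rounding.
fn twice_signed_area(ring: &[(i64, i64)]) -> i128 {
    let n = ring.len();
    (0..n).map(|i| {
        let (x1, y1) = ring[i];
        let (x2, y2) = ring[(i + 1) % n];
        x1 as i128 * y2 as i128 - x2 as i128 * y1 as i128
    }).sum()
}

fn main() {
    // A 4 x 3 rectangle and one decomposition of it into two triangles.
    let rect = [(0i64, 0), (4, 0), (4, 3), (0, 3)];
    let t1 = [(0i64, 0), (4, 0), (4, 3)];
    let t2 = [(0i64, 0), (4, 3), (0, 3)];
    let whole = twice_signed_area(&rect);
    let parts = twice_signed_area(&t1) + twice_signed_area(&t2);
    println!("{whole} == {parts}"); // equality holds exactly, per the invariant
}
```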