Hasty Briefs (beta)

Two Studies in Compiler Optimisations

4 days ago
  • #compiler-optimizations
  • #LLVM
  • #performance
  • Compiler optimizations can transform seemingly simple code into highly efficient machine code, and predicting or explaining the result often requires a deep understanding of compiler internals.
  • The LLVM optimization pipeline includes multiple passes like InstCombine and CodeGenPrepare, each targeting different optimization opportunities at various stages of compilation.
  • Compiler hints such as `[[assume]]` attributes or assertions can unlock optimizations by giving the compiler extra information about the relationships between values.
  • Pattern matching in optimization passes (like InstCombine's peephole optimizations) can recognize and transform common idioms, such as replacing divisions with conditional moves.
  • Instruction selection phases (like LLVM's SelectionDAG) perform target-specific optimizations, such as folding multiple byte loads into a single wider load.
  • Template functions can sometimes enable earlier optimizations by allowing constant propagation and loop unrolling before inlining.
  • Compiler optimizations are sensitive to code structure; small changes can lead to significantly different optimization outcomes due to pass ordering or pattern matching limitations.
  • Optimizing for size (`-Os`) can sometimes inhibit optimizations that would otherwise improve performance, requiring manual intervention like `#pragma unroll`.
  • Understanding compiler internals (e.g., via LLVM's Opt Pipeline Viewer) can help developers write code that is more amenable to optimization.
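The hint and peephole bullets above can be sketched together in one small function. This is a minimal illustration, not code from the article: `mod_add` is a hypothetical name, and the classic idiom it shows is modular addition of already-reduced operands, which optimizers lower to a branchless conditional move rather than a division.

```cpp
#include <cassert>
#include <cstdint>

// Add two values that are already reduced mod m.
uint32_t mod_add(uint32_t a, uint32_t b, uint32_t m) {
    // Hint that both operands are in range. In release builds the same
    // fact could be conveyed without a runtime check via the C++23
    // [[assume(a < m && b < m)]] attribute or __builtin_assume.
    assert(a < m && b < m);
    uint32_t s = a + b;
    // Given the hint, s < 2*m, so a single conditional subtraction
    // replaces `(a + b) % m`; InstCombine-style peepholes and instruction
    // selection typically turn this ternary into a cmov, avoiding an
    // expensive hardware divide.
    return s >= m ? s - m : s;
}
```

The payoff is that a data-dependent `div` instruction (tens of cycles) becomes a compare plus conditional move (a few cycles), but only because the hint rules out inputs the general `%` operator would have to handle.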
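The load-folding bullet refers to a pattern like the following sketch (the function name is mine): an endian-explicit, byte-by-byte read that LLVM's instruction selection can recognize and fold into a single 32-bit load on little-endian targets, keeping the source portable while the machine code stays optimal.

```cpp
#include <cstdint>

// Read a 32-bit little-endian value one byte at a time. SelectionDAG's
// load-combining recognizes the shift-or chain and, on a little-endian
// target that allows the access, emits one 32-bit load instead of four
// byte loads plus shifts.
uint32_t load_le32(const uint8_t *p) {
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}
```

Writing the bytes out explicitly also sidesteps the strict-aliasing and alignment hazards of casting the pointer to `uint32_t*` directly.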
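The template bullet can be made concrete with a short sketch (the helper is hypothetical, not from the article): when the trip count is a template parameter, it is a compile-time constant in every instantiation, so the optimizer can fully unroll and constant-fold the loop without waiting for inlining to expose the bound.

```cpp
#include <cstddef>

// The array length N is baked into the instantiated function's IR, so
// the loop has a known constant trip count and can be fully unrolled
// early in the pipeline, before any inlining decisions.
template <std::size_t N>
long sum_first(const long (&xs)[N]) {
    long total = 0;
    for (std::size_t i = 0; i < N; ++i)  // bound N is a compile-time constant
        total += xs[i];
    return total;
}
```

By contrast, a `sum(const long *xs, size_t n)` signature hides the trip count behind a runtime value, and the unroller only recovers it if inlining and constant propagation happen to run first.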
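The `-Os` bullet describes the kind of manual override sketched below (my own example, assuming Clang's spelling of the pragma): at `-Os` the unroller backs off to save bytes, and a per-loop hint asks for unrolling anyway. `#pragma unroll` is a Clang extension; GCC spells it `#pragma GCC unroll 4`, and compilers that don't recognize the pragma simply ignore it.

```cpp
// Four-element dot product. At -Os the cost model may keep the loop
// rolled; the pragma requests full unrolling for this one loop without
// changing the global optimization level.
float dot4(const float *a, const float *b) {
    float acc = 0.0f;
#pragma unroll
    for (int i = 0; i < 4; ++i)
        acc += a[i] * b[i];  // after unrolling: four straight-line mul-adds
    return acc;
}
```

The pragma scopes the trade-off precisely: the binary stays small overall, while this hot loop still gets the straight-line code that `-O2` would have produced.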