Missed optimization in i.div_euclid(power_of_two) #71096
Labels:
- A-codegen: Area: Code generation
- A-LLVM: Area: Code generation parts specific to LLVM. Both correctness bugs and optimization-related issues.
- C-enhancement: Category: An issue proposing an enhancement or a PR with one.
- C-optimization: Category: An issue highlighting optimization opportunities or PRs implementing such.
- E-needs-test: Call for participation: An issue has been fixed and does not reproduce, but no test has been added.
- I-slow: Issue: Problems and improvements with respect to performance of generated code.
- T-compiler: Relevant to the compiler team, which will review and decide on the PR/issue.
If a signed integer is divided by a power of two using Euclidean division, the result is equivalent to an arithmetic shift, but this is not caught. For example, a function like the following (the name `div16` and the divisor 16 are illustrative; the original report's snippet was of this shape)
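```rust
// Euclidean division by a power of two; `div16` is a hypothetical
// example name, not taken verbatim from the original report.
pub fn div16(x: i32) -> i32 {
    x.div_euclid(16)
}
```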
produces assembly code along these lines (reconstructed for x86-64; the exact listing varies with the compiler version)
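```asm
div16:
        lea     eax, [rdi + 15]
        test    edi, edi
        cmovns  eax, edi
        sar     eax, 4            ; eax = x / 16, rounded toward zero
        mov     ecx, eax
        shl     ecx, 4            ; ecx = q * 16
        lea     edx, [rax - 1]    ; edx = q - 1
        cmp     ecx, edi
        cmovg   eax, edx          ; if q * 16 > x, the remainder was negative: round down
        ret
```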
while it could simply be an arithmetic shift, since Euclidean division by a positive power of two coincides with flooring division:
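```asm
div16:
        mov     eax, edi
        sar     eax, 4            ; floor(x / 16), which equals x.div_euclid(16)
        ret
```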
The LLVM IR is essentially the following (a sketch reconstructing its shape; the verbatim IR may differ in instruction choice):
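```llvm
; Sketch of the optimized IR for the illustrative div16 above,
; not the verbatim compiler output.
define i32 @div16(i32 %x) unnamed_addr {
start:
  %q = sdiv i32 %x, 16
  %r = srem i32 %x, 16
  %neg = icmp slt i32 %r, 0
  %q1 = add nsw i32 %q, -1
  %ret = select i1 %neg, i32 %q1, i32 %q
  ret i32 %ret
}
```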
which I interpret to be something like
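```rust
// Hand transliteration of the IR sketch above; `div16_by_hand` is a
// hypothetical name for illustration.
pub fn div16_by_hand(x: i32) -> i32 {
    let q = x / 16; // truncating division
    let r = x % 16; // remainder, takes the sign of x
    if r < 0 { q - 1 } else { q }
}
```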
(I did check that this function produces the same LLVM IR.)
I don't know LLVM well enough to tell whether there is some kind of flooring-division intrinsic that could be used to optimize this, or whether this is a missed optimization on the LLVM side, i.e. whether LLVM could recognize this pattern and turn it into an arithmetic shift.