I have a problem with some LLVM optimization passes: they modify the compilation output so that it is no longer functional.
This is the input source code of a Fibonacci algorithm:
f<int> fib(int n) {
if n <= 1 { return 1; }
return fib(n - 1) + fib(n - 2);
}
f<int> main() {
printf("Result: %d", fib(46));
return 0;
}
Without optimization, my compiler emits the following IR, which works perfectly fine:
; ModuleID = 'Module'
source_filename = "Module"
@0 = private unnamed_addr constant [11 x i8] c"Result: %d\00", align 1
declare i32 @printf(i8*, ...)
define i32 @"fib(int)"(i32 %0) {
entry:
%n = alloca i32, align 4
store i32 %0, i32* %n, align 4
%result = alloca i32, align 4
%1 = load i32, i32* %n, align 4
%le = icmp sle i32 %1, 1
br i1 %le, label %then, label %end
then: ; preds = %entry
ret i32 1
br label %end
end: ; preds = %then, %entry
%2 = load i32, i32* %n, align 4
%sub = sub i32 %2, 1
%3 = call i32 @"fib(int)"(i32 %sub)
%4 = load i32, i32* %n, align 4
%sub1 = sub i32 %4, 2
%5 = call i32 @"fib(int)"(i32 %sub1)
%add = add i32 %3, %5
ret i32 %add
}
define i32 @main() {
main_entry:
%result = alloca i32, align 4
%0 = call i32 @"fib(int)"(i32 46)
%1 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([11 x i8], [11 x i8]* @0, i32 0, i32 0), i32 %0)
ret i32 0
}
Then I applied a few optimization passes to it. Here is my chain of passes:
fpm->add(llvm::createDeadCodeEliminationPass());
fpm->add(llvm::createLoopDeletionPass());
fpm->add(llvm::createDeadStoreEliminationPass());
fpm->add(llvm::createGVNPass());
fpm->add(llvm::createPromoteMemoryToRegisterPass());
fpm->add(llvm::createInstructionCombiningPass());
fpm->add(llvm::createReassociatePass());
fpm->add(llvm::createCFGSimplificationPass()); // Breaks recursion
fpm->add(llvm::createCorrelatedValuePropagationPass());
fpm->add(llvm::createLoopSimplifyPass());
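For context, these calls register the passes on LLVM's legacy pass manager; the surrounding setup looks roughly like this (a sketch, not my exact code; `module` stands for the `llvm::Module` whose IR is shown above):

#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Transforms/Scalar/GVN.h"
#include "llvm/Transforms/Utils.h"

// Build the function pass manager for the module and register the passes.
auto fpm = std::make_unique<llvm::legacy::FunctionPassManager>(module);
fpm->add(llvm::createDeadCodeEliminationPass());
// ... remaining passes from the list above ...
fpm->add(llvm::createCFGSimplificationPass());

// Run the pipeline over every function in the module.
fpm->doInitialization();
for (llvm::Function &function : *module)
    fpm->run(function);
fpm->doFinalization();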
With these optimization passes enabled, I get the following IR:
; ModuleID = 'Module'
source_filename = "Module"
@0 = private unnamed_addr constant [11 x i8] c"Result: %d\00", align 1
declare i32 @printf(i8*, ...)
define i32 @"fib(int)"(i32 %0) {
entry:
%le = icmp slt i32 %0, 2
%sub = add i32 %0, -1
%1 = call i32 @"fib(int)"(i32 %sub)
%sub1 = add i32 %0, -2
%2 = call i32 @"fib(int)"(i32 %sub1)
%add = add i32 %2, %1
ret i32 %add
}
define i32 @main() {
main_entry:
%0 = call i32 @"fib(int)"(i32 46)
%1 = call i32 (i8*, ...) @printf(i8* noundef nonnull dereferenceable(1) getelementptr inbounds ([11 x i8], [11 x i8]* @0, i64 0, i64 0), i32 %0)
ret i32 0
}
Obviously, this code produces a stack overflow, because the recursion base case is gone: the comparison result %le is still computed, but nothing branches on it anymore. It seems like CFGSimplificationPass merges blocks in a wrong way / eliminates the if body even though it is relevant. When I remove the createCFGSimplificationPass line, the optimizations work and the resulting executable runs fine.
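The effect of this single pass should also be reproducible outside my compiler with LLVM's standalone `opt` tool (a sketch; assumes the unoptimized IR above is saved as `module.ll` and a recent LLVM with the new pass manager):

# Run only SimplifyCFG on the unoptimized IR and write out the transformed IR.
opt -S -passes=simplifycfg module.ll -o module.simplifycfg.ll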
Now my question: what am I doing wrong? Or is this perhaps a bug in LLVM?
Thanks for your help!