I'm running into timing problems while migrating a Drools project to Quarkus + Kogito (using the legacy API and TMS) so that I can build a native executable and improve startup and execution times. I'm not sure whether the timing problems come from a misconfiguration in the executable generation or whether this is simply expected for a rule base this large.
The dataset consists of 26,608 rules and 29 functions spread across several DRL files. Building the Drools executable model takes 13 minutes and generates 95,350 Java source files, which I suspect is the root of the problem.
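For context, the rules are consumed at runtime through the legacy KieSession API. A minimal sketch of that usage pattern (simplified; the exact package of KieRuntimeBuilder and the jakarta/javax namespace depend on the Kogito/Quarkus version, so treat the imports as assumptions):

import jakarta.enterprise.context.ApplicationScoped; // javax.* on Quarkus 2.x
import jakarta.inject.Inject;

import org.kie.api.runtime.KieSession;
import org.kie.kogito.legacy.rules.KieRuntimeBuilder; // legacy-API entry point generated by the Kogito extension

@ApplicationScoped
public class RuleEvaluator {

    @Inject
    KieRuntimeBuilder runtimeBuilder; // backed by the executable model generated at build time

    public void evaluate(Object fact) {
        KieSession session = runtimeBuilder.newKieSession(); // default session from the generated KieBase
        try {
            session.insert(fact);
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}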
I haven't been able to get past step 2 of the native executable generation (Performing analysis). It doesn't throw any errors, but it runs for more than 5 hours without finishing. To confirm that the executable model is indeed what causes these long times, I tried a greatly reduced rule set, and the executable was built in a reasonable time.
This is the command executed by the Quarkus plugin to build the native executable inside a Docker container:
docker run --env LANG=C --rm \
-v /c/path_to_repo/app-2.0.0.0-native-image-source-jar:/project:z --name \
build-native-giSUf quay.io/quarkus/ubi-quarkus-graalvmce-builder-image:22.3-java17 \
-J-Dsun.nio.ch.maxUpdateArraySize=100 \
-J-Djava.util.logging.manager=org.jboss.logmanager.LogManager \
-J-Dlogging.initial-configurator.min-level=500 \
-J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory \
-J-Dvertx.disableDnsResolver=true \
-J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=3 \
-J-Duser.language=es -J-Duser.country=ES -J-Dfile.encoding=UTF-8 \
--features=io.quarkus.runner.Feature,io.quarkus.runtime.graal.ResourcesFeature,io.quarkus.runtime.graal.DisableLoggingFeature \
-J--add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED \
-J--add-opens=java.base/java.text=ALL-UNNAMED -J--add-opens=java.base/java.io=ALL-UNNAMED \
-J--add-opens=java.base/java.lang.invoke=ALL-UNNAMED \
-J--add-opens=java.base/java.util=ALL-UNNAMED -H:+CollectImageBuildStatistics \
-H:ImageBuildStatisticsFile=app-2.0.0.0-runner-timing-stats.json \
-H:BuildOutputJSONFile=app-2.0.0.0-runner-build-output-stats.json -H:+BuildOutputColorful \
-H:+BuildOutputProgress -H:+AllowFoldMethods -J-Djava.awt.headless=true \
--no-fallback --link-at-build-time -H:+ReportExceptionStackTraces -J-Xmx25g \
-H:-AddAllCharsets --enable-url-protocols=http -H:-UseServiceLoaderFeature \
-H:-StackTrace -J--add-exports=org.graalvm.sdk/org.graalvm.nativeimage.impl=ALL-UNNAMED \
-J--add-exports=org.graalvm.nativeimage.builder/com.oracle.svm.core.jdk=ALL-UNNAMED \
--exclude-config io.netty.netty-codec \
/META-INF/native-image/io.netty/netty-codec/generated/handlers/reflect-config.json \
--exclude-config io.netty.netty-handler \
/META-INF/native-image/io.netty/netty-handler/generated/handlers/reflect-config.json \
app-2.0.0.0-runner -jar app-2.0.0.0-runner.jar
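For completeness, the native build is driven through the Quarkus Maven plugin; the container build and the memory limit are configured roughly as follows (a sketch in application.properties form, with values assumed to mirror the command above):

# application.properties
quarkus.native.container-build=true
quarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-graalvmce-builder-image:22.3-java17
# maximum heap for the native-image process (corresponds to -J-Xmx25g above)
quarkus.native.native-image-xmx=25g

The build itself is launched with something like ./mvnw package -Dnative.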
What could be causing the lengthy execution time in the "Performing analysis" step when generating the native executable?
Is there any important configuration parameter that I am missing?
I'm allocating 25 GB to the container running the compilation task. Could performance improve significantly if I increased the memory?
Is there a way to optimize this process for large rule volumes?
Any insights, examples, or references to relevant documentation would be greatly appreciated. Thank you!