
I'm working with the LLVM API and was curious about the performance of the built-in JIT (via its execution engine) compared to statically compiled object code, and whether the execution engine is intended more for ease of use during development. Having seen this question, it seems the JIT that LLVM offers can provide very significant speedups for particular kinds of code.

My main question is: what specific features of given code allow for such gains through JIT compilation, where concrete types are known to the optimiser, compared to static compilation? When does it make a difference, and when is it more or less the same?
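For concreteness, this is roughly the JIT path I have in mind. It's only a minimal sketch using ORC's LLJIT; `kernel.ll` and `compute` are placeholder names for whatever IR and entry point I end up benchmarking, and the exact symbol-lookup/address API differs a bit between LLVM releases:

```cpp
// Minimal LLJIT sketch (placeholder IR file and function name; exact
// lookup/address API varies across LLVM releases).
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;
using namespace llvm::orc;

int main() {
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  // Load the module to be JIT-compiled (placeholder path).
  auto Ctx = std::make_unique<LLVMContext>();
  SMDiagnostic Err;
  std::unique_ptr<Module> M = parseIRFile("kernel.ll", Err, *Ctx);
  if (!M) {
    Err.print("jit-bench", errs());
    return 1;
  }

  // Create the JIT and hand it the module.
  auto JIT = cantFail(LLJITBuilder().create());
  cantFail(JIT->addIRModule(ThreadSafeModule(std::move(M), std::move(Ctx))));

  // Look up the entry point and call it; this call is what I'd time
  // against the same function compiled ahead of time.
  auto Sym = cantFail(JIT->lookup("compute"));
  auto *Compute = reinterpret_cast<double (*)(double)>(Sym.getAddress());
  outs() << Compute(42.0) << "\n";
  return 0;
}
```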

As a side question, I'm also curious whether that question's answer is still accurate. The nature of JIT vs AOT compilation is obviously unchanging, but a lot could have changed in the 10 years since the question was asked, and I wonder if there's any new information on the topic that might be relevant for working with LLVM.

muke
  • I don't regard that question as accurate. For example, "can feed information" is vague, "feeds" would be accurate, and AFAICT the list isn't such as to provide "very significant" speedups. As to what kinds... if you have something small, just a few lines, wouldn't it be nice to not have to make an executable? Just one function? – arnt Feb 08 '21 at 08:09
  • Well if you expect to repeatedly call this one function, however small, wouldn't it be preferable to eliminate the JIT overhead? I'm trying to ascertain what potential additional optimisations you're giving up by doing so though. – muke Feb 08 '21 at 09:28
  • Well, sure. "The" JIT overhead. It sounds as if you know how much that is. – arnt Feb 08 '21 at 09:32
  • I don't; I was going to try and run some benchmarks on different code, but I wanted to know what kinds of code were likely to benefit and which weren't, so I could get benchmarks for both. – muke Feb 08 '21 at 09:33

0 Answers