
Is it theoretically and/or practically possible to compile native C++ to some sort of intermediate language which will then be compiled at run time?

Along the same lines, is "portable" the term used to denote this?

Makubex
  • Seems like it would be. There would be some hairy issues to resolve, like template magic. Why would you want to? – wallyk Jun 26 '12 at 06:15
  • Your code is 'portable' if you are able to just compile it on any platform and run it without making any changes to the code. Portability is different from converting one language to another. – Sanish Jun 26 '12 at 06:22
  • Is portability your actual main reason to use JIT? And if so, what is preventing you from simply making multiple target builds? Sounds like an XY problem to me. – KillianDS Jun 26 '12 at 06:23
  • One problem is that the C++ code is targeted at a specific platform and needs to link against system-specific libraries. Other languages, like Java, solve this by targeting a single platform, the JVM. – Bo Persson Jun 26 '12 at 06:27
  • Yes, and this is in fact very common. Quite often C++ is compiled to the ancient x86 ISA, which is then translated (at runtime) to whatever internal RISC architecture your CPU is actually using today. Hardwired x86 implementations went out of vogue around the early Pentium generations. – MSalters Jun 26 '12 at 07:25
  • @KillianDS From what I know, C++ has only compile-time optimizations; the best that we can get is profile-guided optimization. I was wondering if it's possible to optimize C++ code at runtime the way the .NET JIT does. – Makubex Jun 26 '12 at 13:01
  • @MSalters Cool, I didn't know this. How does x86 get compiled to x64 instructions? I couldn't find the answer on Google. Thanks – Makubex Jun 26 '12 at 13:03
  • @Makubex: LLVM can do (and does) link-time optimization as well. Moreover, they're thinking of doing it at install time too (wherever possible). – Nawaz Jun 26 '12 at 13:08
  • @Makubex: x86 isn't compiled to x64. x86 and x64 are both compiled to micro-ops. – MSalters Jun 26 '12 at 13:28

1 Answer


LLVM, which is a compiler infrastructure, parses C++ code and transforms it into an intermediate language called LLVM IR (IR stands for Intermediate Representation), which looks like a high-level assembly language. It is a machine-independent language. Generating IR is one phase. In the next phase, the IR passes through various optimizers (called passes), and in a third phase the optimized IR is turned into machine code (i.e. machine-dependent code).

It is a module-based design; the output of one phase (module) becomes the input of another. You could save the IR to disk, so that the remaining phases can resume later, maybe on an entirely different machine!
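To make that concrete, here is a rough sketch of the three phases driven by the standard LLVM command-line tools (clang, opt and llc); the file and function names are just placeholders made up for illustration:

    // factorial.cpp: a trivial function to push through the phases.
    //
    // Phase 1: parse the C++ and emit machine-independent IR:
    //     clang++ -S -emit-llvm factorial.cpp -o factorial.ll
    // Phase 2: run optimizer passes over the IR (still machine independent):
    //     opt -O2 -S factorial.ll -o factorial.opt.ll
    // Phase 3: lower the IR to machine code for the current target:
    //     llc factorial.opt.ll -o factorial.s
    //
    // The .ll file from phase 1 is plain text, so you can keep it on disk or
    // copy it to an entirely different machine and run phases 2 and 3 there.
    extern "C" int factorial(int n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }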

So you could generate the IR and then do the rest at run time. I've not done that myself, but LLVM seems really promising.
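For example, a program can load that saved IR back in and JIT-compile it on the spot with LLVM's ORC LLJIT API. The sketch below is rough and untested; it reuses the made-up factorial.ll / factorial names from above, and the exact signatures (e.g. what lookup returns) differ between LLVM releases:

    // jit_main.cpp: load previously generated IR and compile it at run time.
    #include "llvm/ExecutionEngine/Orc/LLJIT.h"
    #include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/IRReader/IRReader.h"
    #include "llvm/Support/Error.h"
    #include "llvm/Support/SourceMgr.h"
    #include "llvm/Support/TargetSelect.h"
    #include <memory>

    int main() {
        // The JIT still emits native code, so the native target must be initialized.
        llvm::InitializeNativeTarget();
        llvm::InitializeNativeTargetAsmPrinter();

        // Load the machine-independent IR produced earlier (possibly on another machine).
        auto Ctx = std::make_unique<llvm::LLVMContext>();
        llvm::SMDiagnostic Err;
        std::unique_ptr<llvm::Module> Mod = llvm::parseIRFile("factorial.ll", Err, *Ctx);
        if (!Mod) return 1;

        // Build a JIT for the host machine and hand it the module.
        auto JIT = llvm::cantFail(llvm::orc::LLJITBuilder().create());
        llvm::cantFail(JIT->addIRModule(
            llvm::orc::ThreadSafeModule(std::move(Mod), std::move(Ctx))));

        // Resolve the symbol; machine code is generated only now, at run time.
        auto Addr = llvm::cantFail(JIT->lookup("factorial"));
        auto *Factorial = Addr.toPtr<int (*)(int)>();  // newer LLVM; older versions cast getAddress()
        return Factorial(5);
    }

You would build this against the LLVM libraries, typically with the compiler and linker flags reported by llvm-config --cxxflags --ldflags --libs.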

Here is the documentation of LLVM IR: the LLVM Language Reference Manual (https://llvm.org/docs/LangRef.html).

This topic on Stack Overflow seems interesting, as it says:

  • LLVM advantages:
    • JIT - you can compile and run your code dynamically.

And these articles are a good read:

Nawaz
  • BTW, they plan to use LLVM IR for Android apps, so that once an application is downloaded from the market it is compiled into native code on the device, and app creators don't have to build it for each supported device. – Maxim Egorushkin Jun 26 '12 at 08:19