
I am working on a high-performance system written in C++. The process needs to understand, at runtime, some complex logic (rules) written in a simple language developed for this application. We have two options:

  1. Interpret the logic: run an embedded interpreter that builds a dynamic function which, when it receives data, operates on that data according to the interpreted logic

  2. Compile the logic into a plugin.so shared library, load it with dlopen/dlsym, and call the logic function at runtime

Option 2 looks really attractive: the logic would run as optimized machine code, much faster than an embedded interpreter inside the process.
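For reference, the loading side of option 2 is straightforward. A minimal sketch, assuming the compiled plugin exports a C-linkage entry point; the symbol name `logic_fn` and its signature are placeholders for whatever our compiler would actually emit:

```cpp
#include <dlfcn.h>
#include <iostream>

// Hypothetical signature of the compiled logic entry point.
using logic_fn_t = int (*)(const char *data, int len);

int main() {
    // RTLD_NOW resolves all symbols up front, so a broken plugin
    // fails at load time instead of in the middle of a call.
    void *handle = dlopen("./plugin.so", RTLD_NOW);
    if (!handle) {
        std::cerr << "dlopen failed: " << dlerror() << '\n';
        return 1;
    }

    // Look up the exported entry point; the plugin must declare it
    // extern "C" so the name is not mangled.
    auto fn = reinterpret_cast<logic_fn_t>(dlsym(handle, "logic_fn"));
    if (!fn) {
        std::cerr << "dlsym failed: " << dlerror() << '\n';
        dlclose(handle);
        return 1;
    }

    std::cout << "logic returned " << fn("payload", 7) << '\n';
    dlclose(handle);
    return 0;
}
```

The hard part, which the rest of this question is about, is producing plugin.so in the first place.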

The approach I am exploring is:

    • write a compile method `string compile(string logic, list &errors, list &warnings)`
    • the input `logic` is a string containing code written in our custom language
    • it generates LLVM IR; the return value of `compile` is the IR as a string
    • write a link method `bool link(string ir, string filename, list &errors, list &warnings)` (a concrete sketch of both signatures follows this list)
    • for the link method I searched the LLVM documentation, but I have not been able to find out whether such a method is possible to write
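    In concrete C++ terms, the interface sketched above might look like the following; the use of `std::list<std::string>` for the diagnostics is just my assumption about the `list` type:

```cpp
#include <list>
#include <string>

// Compile custom-language source to LLVM IR.
// Returns the IR as text; problems are appended to errors/warnings.
std::string compile(const std::string &logic,
                    std::list<std::string> &errors,
                    std::list<std::string> &warnings);

// Lower the IR and write a loadable shared library to `filename`.
// Returns true on success.
bool link(const std::string &ir,
          const std::string &filename,
          std::list<std::string> &errors,
          std::list<std::string> &warnings);
```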

    If I am correct, LLVM IR is converted to LLVM bitcode or assembly code; then either the LLVM JIT is used to run it in-process, or the GNU assembler is used to generate native code.
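    For completeness, the JIT route looks feasible entirely in-process. A rough sketch of what I believe the MCJIT API allows (header names and signatures move between LLVM releases, and the entry-point symbol `logic_entry` is a placeholder):

```cpp
#include <cstdint>
#include <memory>
#include <string>

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"

// Take the IR string produced by compile() and return the address of
// the named entry function, or 0 on failure.
uint64_t jitLogic(const std::string &ir, std::string &error) {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();

    static llvm::LLVMContext ctx;  // must outlive the engine
    llvm::SMDiagnostic diag;
    std::unique_ptr<llvm::Module> mod =
        llvm::parseIR(llvm::MemoryBufferRef(ir, "logic"), diag, ctx);
    if (!mod) {
        error = diag.getMessage().str();
        return 0;
    }

    // The engine takes ownership of the module; in real code the
    // engine itself should be kept alive and destroyed by the caller.
    llvm::ExecutionEngine *engine =
        llvm::EngineBuilder(std::move(mod)).setErrorStr(&error).create();
    if (!engine)
        return 0;

    engine->finalizeObject();  // generate code and apply relocations
    return engine->getFunctionAddress("logic_entry");
}
```

    The Kaleidoscope JIT tutorial linked in the comments below walks through the same setup step by step.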

    Is it possible to find a function in LLVM which does that? It would be much nicer if it were all done from within the code, rather than using a system command from C++ to invoke "as" to generate the plugin.so file for my requirement.

    Please let me know if you know of any way I can generate a shared library of native binary code from my process at runtime.

    Sanjit
    • I'm no expert on this, but maybe http://llvm.org/docs/tutorial/LangImpl4.html#adding-a-jit-compiler helps. – godfatherofpolka Apr 09 '14 at 08:28
    • thanks for the link. Yes, I think an embedded JIT is another option. I was wondering if we can implement a C++ interface in the JIT code and return a pointer to the derived-class implementation from the JIT to the calling C++ application? – Sanjit Apr 09 '14 at 09:09
    • If scripting languages like AngelScript or Lua (more specifically LuaJIT) are not fast enough for your purposes, I suggest looking into OpenCL. It allows you to compile computational code written in C/C++ at runtime. You can easily configure your computational kernels by simply using the C preprocessor. Most OpenCL compilers use LLVM internally, so you'll get its full power without having to mess with tons of technical details. As a final bonus, you can easily try to run your code on a GPU, though you can constrain yourself to the CPU if you wish. – stgatilov Nov 29 '15 at 10:19
    • @stgatilov there are some serious restrictions with OpenCL: you have to distribute source code, and there are no C++ templates (if I remember correctly)... – Russell Greene Nov 30 '15 at 23:41
    • @RussellGreene: Limited C++ support (including templates) has been added very recently in [version 2.1](https://en.wikipedia.org/wiki/OpenCL#OpenCL_2.1). Also, loading SPIR-V bytecode is supported in the same version (which is essentially LLVM IR). So both issues will be resolved quite soon hopefully =) – stgatilov Dec 01 '15 at 17:37

    1 Answer


    llc is an LLVM tool that translates LLVM IR into native code. I think that is all you need.

    Basically, you can produce your LLVM IR the way you want and then run llc over your IR.

    You can call it from the command line, or you can look at the implementation of llc, find out how it works, and do the same in your own program (a sketch of the in-process equivalent is below).
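    In-process, what llc does boils down to configuring a TargetMachine and running a codegen pass over the module. A rough sketch; header paths and enum spellings move between LLVM releases (e.g. `CGFT_ObjectFile` vs. `CodeGenFileType::ObjectFile`), so treat it as a starting point rather than copy-paste code:

```cpp
#include <memory>
#include <string>
#include <system_error>

#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/MC/TargetRegistry.h"   // "llvm/Support/TargetRegistry.h" in older releases
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Target/TargetMachine.h"
#include "llvm/TargetParser/Host.h"   // "llvm/Support/Host.h" in older releases

// Lower a module to a native object file at `outPath`.
// Returns false and fills `error` on failure.
bool emitObject(llvm::Module &mod, const std::string &outPath,
                std::string &error) {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();

    const std::string triple = llvm::sys::getDefaultTargetTriple();
    const llvm::Target *target =
        llvm::TargetRegistry::lookupTarget(triple, error);
    if (!target)
        return false;

    // Position-independent code, since the object is destined for a .so.
    llvm::TargetOptions opts;
    std::unique_ptr<llvm::TargetMachine> tm(target->createTargetMachine(
        triple, "generic", "", opts, llvm::Reloc::PIC_));
    mod.setDataLayout(tm->createDataLayout());

    std::error_code ec;
    llvm::raw_fd_ostream out(outPath, ec, llvm::sys::fs::OF_None);
    if (ec) {
        error = ec.message();
        return false;
    }

    llvm::legacy::PassManager pm;
    if (tm->addPassesToEmitFile(pm, out, nullptr,
                                llvm::CodeGenFileType::ObjectFile)) {
        error = "target machine cannot emit an object file";
        return false;
    }
    pm.run(mod);
    out.flush();
    return true;
}
```

    Note that this produces a relocatable object file, not a finished .so; as the comment below points out, a separate link step (e.g. `cc -shared plugin.o -o plugin.so`) is still needed to get a loadable shared library, unless you take the JIT route instead.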

    Here is a useful link:

    http://llvm.org/docs/CommandGuide/llc.html

    I hope it helps.

    AngelBaltar
      llc is an assembler/compiler, not a linker. It produces .asm or .obj output. It does not produce loadable libraries (.dll or .so). – Sean May 15 '14 at 22:31
    • 2
      Thanks Sean for the comment, it clears up my question. Embedded JIT is the best option I have found so far. – Sanjit Jun 19 '14 at 09:07