6

We are creating a trading application whose backend is written entirely in C++ (using the QuickFIX engine). We would like to build a web application in Django on top of this backend, where the user can place orders. The Django (Python) application and the C++ application will each run in their own process and address space. What do you think would be the best way to pass orders/messages from Django to C++?

Also, this is a trading application, so latency is the biggest concern. I do not want to write orders to a database from Django and then have the C++ application fetch them from there.

I'm currently looking at doing it via shared memory or some other IPC mechanism. Is this a good idea?

Lazylabs

5 Answers

2

Well, you have to use some IPC method. One that you don't mention here is having the C++ process listen on a socket. That adds flexibility (at a slight speed cost): the processes don't even need to be on the same machine.
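A minimal sketch of what the Django side of such a socket link could look like, assuming the C++ process listens on localhost TCP port 9100 and accepts newline-delimited JSON orders (the port and wire format are placeholders, not anything defined by QuickFIX):

    import json
    import socket

    def send_order(order: dict, host: str = "127.0.0.1", port: int = 9100) -> None:
        """Send one order to the C++ engine as a newline-delimited JSON message."""
        payload = (json.dumps(order) + "\n").encode("utf-8")
        # For latency-sensitive use you would keep one persistent connection
        # open instead of reconnecting per order.
        with socket.create_connection((host, port), timeout=0.5) as sock:
            sock.sendall(payload)

    send_order({"symbol": "AAPL", "side": "BUY", "qty": 100, "price": 150.25})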

I've been doing a somewhat similar thing, coming from C++ but wanting to write the UX in Python. My computational backend is C++, and I compile a Python module and generate HTML with Flask for the UX. My C++ and Python live in the same process, so I haven't addressed your core question in practice yet.

One piece of advice I would give is to keep all of your IPC code in C++ and write a small Python module in C++ using Boost.Python. This lets the Python process do 95% of the work in a Pythonic world, while giving you the bit-level confidence I would want as a C++ dev over the data you are sending to C++. Boost.Python has made bridging C++ and Python web frameworks a breeze for me.
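As a rough illustration of the Django side of that approach, assuming the Boost.Python extension is built as a module named order_bridge exposing a send_order function (both names are hypothetical placeholders for whatever the C++ side actually exports):

    from django.http import JsonResponse

    # Hypothetical extension module compiled from C++ with Boost.Python; the
    # module and function names are placeholders, not an existing library.
    import order_bridge

    def place_order(request):
        # Form/request validation omitted for brevity.
        order_bridge.send_order(
            symbol=request.POST["symbol"],
            side=request.POST["side"],
            quantity=int(request.POST["quantity"]),
            price=float(request.POST["price"]),
        )
        return JsonResponse({"status": "accepted"})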

totowtwo
  • I'm just curious how your C++ and python live in the same process? boost.python? – Lazylabs Sep 02 '11 at 08:01
  • @Lazylabs: Yes, Boost.Python (and even Python itself, though the C-API is a bit.. well.. C) allows for embedding, i.e. launching the Python interpreter from C/C++. You could also write a Python wrapper over your C++'s "main" (which would presumably create some threads and return) and thus have the app launched from Python instead. It makes little difference either way (unless both your libs require an event loop on the main thread.) – Macke Sep 02 '11 at 12:04
  • Actually @Macke I am doing extending rather than embedding. Python is the OS entrypoint to my process, and I have Boost.Python outputting a DLL that works as a PYD python module. Python loads the PYD (== C++ DLL) dynamically when my module is loaded in python. From there, python can instantiate wrapped C++ classes and call wrapped C++ methods. – totowtwo Sep 02 '11 at 15:01
1

You have to pick an existing protocol, or design your own, that allows communication between C++ and Python. The easiest way, I believe, would be to use an IPC/RPC framework such as ZeroC Ice or CORBA. Alternatively, you can wrap native C++ code as a Python module and use it from Django; that code could also use QuickFIX.

And if you are really concerned about latency (where at least milliseconds matter, never mind nanoseconds), you shouldn't be using QuickFIX or Python at all.

1

I would use ZeroMQ for the IPC.
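A minimal sketch of a Django-side sender with pyzmq, assuming the C++ engine binds a ZeroMQ PULL socket at the endpoint shown (the endpoint and message format are placeholders):

    import zmq

    # One context and socket per process; reuse them across requests rather
    # than reconnecting for every order.
    _ctx = zmq.Context.instance()
    _push = _ctx.socket(zmq.PUSH)
    _push.connect("ipc:///tmp/orders")  # the C++ side would bind a PULL socket here

    def send_order(order: dict) -> None:
        """Fire-and-forget: queue one order for the C++ engine."""
        _push.send_json(order)

    send_order({"symbol": "AAPL", "side": "BUY", "qty": 100, "price": 150.25})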

Tom Willis
  • Thanks. ZeroMQ looks promising. I'll investigate this further. Also, scaling across multiple systems would not be a problem with this setup, which might be an issue with shared memory. – Lazylabs Sep 02 '11 at 07:54
0

I'd probably go for something like JSON-RPC and communicate over local sockets or named pipes.
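For instance, a bare-bones JSON-RPC 2.0 request over a Unix domain socket could look like this on the Python side; the socket path and method name are made-up placeholders, and the C++ process would have to implement the matching server:

    import json
    import socket

    SOCKET_PATH = "/tmp/order_engine.sock"  # placeholder path served by the C++ side

    def place_order_rpc(order: dict) -> dict:
        """Send one JSON-RPC 2.0 request over a local (Unix domain) socket."""
        request = {
            "jsonrpc": "2.0",
            "method": "place_order",  # hypothetical method name on the C++ server
            "params": order,
            "id": 1,
        }
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(SOCKET_PATH)
            sock.sendall((json.dumps(request) + "\n").encode("utf-8"))
            response_line = sock.makefile().readline()
        return json.loads(response_line)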

Shared memory is faster, but trickier to get right if you have to do it yourself (it implies concurrency and locking, which, IMO, one should avoid if possible.)

It depends on message sizes and latency requirements. You could always try an IPC mechanism that can work over shared memory, as Vlad mentions in the comment below.

(Do note that having an IPC system that can fall back on pipes/sockets might be a good thing, in case you need to cluster your system in the future.)

Macke
  • As a matter of fact, shared memory is faster than sockets etc. Many good OSes implement local IPC through shared memory. –  Sep 01 '11 at 19:10
  • @Vlad: I was a bit unclear. I chiefly meant that implementing your own IPC layer in shared memory is more difficult than doing so over sockets or pipes. – Macke Sep 02 '11 at 11:58
0

You could use a microservice architecture with message brokers so you can easily connect your apps to each other.
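As an illustration of that approach, here is a sketch of publishing an order to a RabbitMQ queue with pika; the queue name is a placeholder, the C++ side would consume it with an AMQP client, and note that the broker hop adds latency, which matters given the question's constraints:

    import json
    import pika  # RabbitMQ client, used here as one example of a message broker

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="orders", durable=True)  # "orders" is a placeholder name

    order = {"symbol": "AAPL", "side": "BUY", "qty": 100, "price": 150.25}
    channel.basic_publish(exchange="", routing_key="orders", body=json.dumps(order))
    connection.close()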

Bambier