
I am developing a C++ application that will use Lua scripts for external add-ons. The add-ons are entirely event-driven; handlers are registered with the host application when the script is loaded, and the host calls the handlers as the events occur.

What I want to do is to have each Lua script running in its own thread, to prevent scripts from locking up the host application. My current intention is to spin off a new thread to execute the Lua code, and allow the thread to terminate on its own once the code has completed. What are the potential pitfalls of spinning off a new thread as a form of multi-threaded event dispatching?
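Roughly what I have in mind, as a sketch (using C++11 std::thread, or equivalently boost::thread; run_lua_handler is just a placeholder for invoking the registered handler on the add-on's own lua_State):

    #include <chrono>
    #include <iostream>
    #include <string>
    #include <thread>

    // Placeholder for the real handler invocation; in the host this would
    // call the registered Lua function (via lua_pcall) on the add-on's own state.
    void run_lua_handler(int addon_id, const std::string& event)
    {
        std::cout << "add-on " << addon_id << " handling " << event << "\n";
    }

    // Called by the host when an event arrives for an add-on: spin off a
    // thread for this one dispatch and let it terminate on its own.
    void dispatch_to_addon(int addon_id, const std::string& event)
    {
        std::thread([addon_id, event] {
            run_lua_handler(addon_id, event);
        }).detach();
    }

    int main()
    {
        dispatch_to_addon(1, "on_login");
        dispatch_to_addon(2, "on_message");
        std::this_thread::sleep_for(std::chrono::milliseconds(100)); // crude wait for the detached threads
    }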

Robert M
  • What sorts of events? Server, GUI, or realtime control? – Potatoswatter Apr 02 '11 at 02:33
  • How robust does your application need to be versus these add-ons? – Emil Sit Apr 02 '11 at 02:39
  • The host is a client connected to a server. The server dispatches events to the client, and the client in turn will dispatch the events to add-ons. – Robert M Apr 02 '11 at 02:46
  • The host only needs to be robust to the point where an add-on will not lock the UI or any of the host-internal processing. The suggestion of using a single event-dispatching thread may work better than my currently posted intention. – Robert M Apr 02 '11 at 02:47

3 Answers


Here are a few:

  1. Unless you take some steps to that effect, you are not in control of the lifetime of the threads (they can stay running indefinitely) or the resources they consume (CPU, etc)
  2. Messaging between threads and synchronized access to commonly accessible data will be harder to implement
  3. If you are expecting a large number of add-ons, the overhead of creating a thread for each one might be too great

Generally speaking, giving event-driven APIs a new thread to run on strikes me as a bad decision. Why have threads running when they don't have anything to do until an event is raised? Consider spawning one thread for all add-ons, and managing all event propagation from that thread. It will be massively easier to implement and when the bugs come, you will have a fighting chance.
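A minimal sketch of that single-dispatch-thread idea, assuming C++11 threading primitives (the Event type and the handler callback are placeholders):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct Event { std::string name; };   // placeholder event type

    class AddonDispatcher {
    public:
        explicit AddonDispatcher(std::function<void(const Event&)> handler)
            : handler_(std::move(handler)), worker_([this] { run(); }) {}

        ~AddonDispatcher() {
            { std::lock_guard<std::mutex> lock(m_); stop_ = true; }
            cv_.notify_one();
            worker_.join();
        }

        // Host side: hand an event to the dispatch thread and return immediately.
        void post(Event e) {
            { std::lock_guard<std::mutex> lock(m_); events_.push(std::move(e)); }
            cv_.notify_one();
        }

    private:
        void run() {
            for (;;) {
                Event e;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return stop_ || !events_.empty(); });
                    if (stop_ && events_.empty()) return;
                    e = std::move(events_.front());
                    events_.pop();
                }
                handler_(e);   // every add-on handler runs here, one after another
            }
        }

        std::function<void(const Event&)> handler_;
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<Event> events_;
        bool stop_ = false;
        std::thread worker_;   // declared last so it starts after the other members exist
    };

The host calls post() from wherever events arrive; the destructor drains anything still queued and then stops the thread.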

Jon
  • Maybe I didn't state it clearly, but the threads will be created only at the point that the events are being dispatched. As noted in other responses, this could degrade system resources as opposed to your suggestion of using a single thread for dispatching. – Robert M Apr 02 '11 at 02:49

Creating a new thread and destroying it frequently is not really a good idea. For one, you should have a way to bound this so that it doesn't consume too much memory (think stack space, for example), or get to the point where lots of pre-emption happens because the threads are competing for time on the CPU. Second, you will waste a lot of work creating new threads and tearing them down. (This depends on your operating system: some OSs have cheap thread creation, while on others it can be expensive.)

It sounds like what you are seeking to implement is a work queue. I couldn't find a good Wikipedia article on this but this comes close: Thread pool pattern.

One could go on for hours talking about how to implement this, and about the different concurrent queue algorithms that can be used. But the idea is that you create N threads which drain a queue and do some work in response to items being enqueued. Typically you'll also want the threads to, say, wait on a semaphore while there are no items in the queue -- the worker threads decrement this semaphore and the enqueuer increments it. To prevent enqueuers from enqueueing too much while the worker threads are busy, and hence taking up too many resources, you can also have them wait on a "number of queue slots available" semaphore, which the enqueuer decrements and the worker thread increments. These are just examples; the details are up to you. You'll also want a way to tell the threads to stop waiting for work.
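A rough sketch of that two-semaphore arrangement, assuming C++20 std::counting_semaphore (with an older toolchain you'd use POSIX semaphores or condition variables instead); the poison-pill shutdown and std::function jobs are just one possible way of handling the "stop waiting for work" part:

    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <semaphore>
    #include <thread>
    #include <vector>

    class WorkQueue {
    public:
        WorkQueue(std::ptrdiff_t capacity, unsigned n_threads)
            : slots_(capacity), items_(0)
        {
            for (unsigned i = 0; i < n_threads; ++i)
                workers_.emplace_back([this] { drain(); });
        }

        ~WorkQueue() {
            // Tell the workers to stop waiting for work: one "poison pill" per thread.
            for (std::size_t i = 0; i < workers_.size(); ++i) enqueue({});
            for (auto& t : workers_) t.join();
        }

        // Enqueuer: blocks while the queue is full (waits on "slots available").
        void enqueue(std::function<void()> job) {
            slots_.acquire();
            { std::lock_guard<std::mutex> lock(m_); queue_.push(std::move(job)); }
            items_.release();
        }

    private:
        void drain() {
            for (;;) {
                items_.acquire();          // wait until there is an item
                std::function<void()> job;
                { std::lock_guard<std::mutex> lock(m_); job = std::move(queue_.front()); queue_.pop(); }
                slots_.release();          // give a slot back to the enqueuers
                if (!job) return;          // empty function == stop signal
                job();                     // do the work (e.g. run a Lua handler)
            }
        }

        std::counting_semaphore<> slots_;  // counts free queue slots
        std::counting_semaphore<> items_;  // counts queued items
        std::mutex m_;
        std::queue<std::function<void()>> queue_;
        std::vector<std::thread> workers_;
    };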

asveikau

My 2 cents: depending on the number and rate of events generated by the host application, the main problem I can see is in terms of performance. Creating and destroying threads has a cost [performance-wise]. I'm assuming that each thread, once spawned, does not need to share any resources with the other threads, so there is no contention. If all threads are assigned to a single core of your CPU and there is no load balancing, you can easily overload one CPU and leave the others [on a multicore system] idle. I'd consider some thread affinity + load-balancing policy.
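For example, pinning a worker thread to a particular core (a Linux/glibc-specific sketch around pthread_setaffinity_np; on that platform std::thread::native_handle() is a pthread_t, other OSs have their own affinity APIs):

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE            // needed for pthread_setaffinity_np
    #endif
    #include <pthread.h>
    #include <sched.h>
    #include <thread>

    // Pin a std::thread to one CPU core; returns true on success (Linux/glibc only).
    bool pin_to_core(std::thread& t, int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return pthread_setaffinity_np(t.native_handle(), sizeof(set), &set) == 0;
    }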

The other problem could be in terms of resources [read: memory]. How much memory will each Lua thread consume?

Be very careful about memory leaks in the Lua threads as well: if events are frequent and threads are created/destroyed frequently, leaving leaked memory behind, you can exhaust your host's memory quite quickly ;)
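One way to keep an eye on that from the host side is the standard lua_gc counters; a small sketch (report_lua_memory and the per-add-on name are just illustrative):

    #include <cstdio>
    #include <lua.hpp>   // lua.h / lauxlib.h / lualib.h wrapped in extern "C"

    // LUA_GCCOUNT reports the KB currently in use by this Lua state,
    // LUA_GCCOUNTB the remainder in bytes; poll it per add-on to spot leaks.
    void report_lua_memory(lua_State* L, const char* addon_name)
    {
        int kb    = lua_gc(L, LUA_GCCOUNT, 0);
        int bytes = lua_gc(L, LUA_GCCOUNTB, 0);
        std::printf("add-on %s: %d KB + %d bytes of Lua memory in use\n",
                    addon_name, kb, bytes);
    }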

sergico
  • Good point. I initially thought of it as similar to a web server spawning threads for incoming client connections, but it may not be quite as similar as I first thought. – Robert M Apr 02 '11 at 02:51