
I'm currently using the boost threadpool with the number of threads equal to the number of cores. I have scheduled, say, 10 tasks using the pool's schedule function. For example, suppose I have the function

void my_fun(std::vector<double>* my_vec){
    // Do something here
}

The argument 'my_vec' here is just used for some temporary calculations. The main reason I pass it to the function is that I would like to reuse this vector when I call the function again.

Currently, I have the following

// Create a vector of 10 vectors called my_vecs

// Create threadpool
boost::threadpool::pool tp(num_threads);

// Schedule tasks
for (int m = 0; m < 10; m++){
    tp.schedule(boost::bind(my_fun, &my_vecs.at(m)));
}

This is my problem: I would like to replace the vector of 10 vectors with only 2 vectors. If I want to schedule 10 tasks and I have 2 cores, a maximum of 2 threads (tasks) will be running at any time. So I only want to use two vectors (one assigned to each thread) and use it to carry out my 10 tasks. How can I do this?

I hope this is clear. Thank You!

A-A
  • Maybe http://stackoverflow.com/questions/3344028/how-to-make-boostthread-group-execute-a-fixed-number-of-parallel-threads would be helpful? – sarnold May 13 '11 at 05:36

4 Answers


Probably boost::thread_specific_ptr is what you need. Below is how you may use it in your function:

#include <boost/thread/tss.hpp>
boost::thread_specific_ptr<std::vector<double> > tls_vec;

void my_fun()
{
    std::vector<double>* my_vec = tls_vec.get();
    if( !my_vec ) {
        my_vec = new std::vector<double>();
        tls_vec.reset(my_vec);
    }
    // Do something here with my_vec
} 

It will reuse vector instances between tasks scheduled to the same thread. There may be more than 2 instances if there are more threads in the pool, but due to the preemption mentioned in other answers you really need an instance per running thread, not per core.

You should not need to delete vector instances stored in thread_specific_ptr; those will be automatically destroyed when the corresponding threads finish.

Alexey Kukanov

I wouldn't limit the number of threads to the number of cores. Remember that multi-threaded programming was going on long before we had multi-core processors. That works because a thread will likely block waiting for some resource, and another thread can then jump in and use the CPU.

Richard Brightwell
  • software thread switching is expensive. Typically way more than waiting for most resources to be loaded (and we did not have multi-threading for performance reasons on single core, it was for multi-tasking). What you describe is why we sometimes have multiple _hardware_ threads on a single core. – Bahbar May 13 '11 at 07:10
  • I disagree. Sometimes it may be more expensive, but I disagree that it is typical. You must evaluate this on a application by application basis. Believe me that multiple threads on a single core can certainly improve performance. Especially when one thread is responsible for UI interaction. For example, Apple includes [NSOperations for iPhone apps](http://www.icodeblog.com/2010/03/04/iphone-coding-turbo-charging-your-apps-with-nsoperation/). iPhones are single core but Apple saw the need for multiple threads. – Richard Brightwell May 13 '11 at 11:50
  • I agree my statement was over-sweeping (and your UI example is a good one). But for compute-only? That's what the question looked like to me. – Bahbar May 13 '11 at 12:07
  • I maybe should have mentioned that all threads are CPU bound. – A-A May 13 '11 at 13:45

Java has a FixedThreadPool.

It looks like Boost might have something similar:

http://deltavsoft.com/w/RcfUserGuide/1.2/rcf_user_guide/Multithreading.html

Basically, a fixed thread pool spawns a fixed number of threads up front, and you then queue tasks in the manager's queue for those threads to pick up.

EnabrenTane

While it's true that only two threads can be scheduled at the same time, on many threading systems the threads are time-sliced, so a thread gets pre-empted during the execution of its task. Hence a third (fourth, ...) thread will get a chance to work while the processing of the first and second is still incomplete.

I don't know about this particular threading implementation, but my guess is that it will allow (or run in environments supporting) pre-emptive scheduling. My way of thinking for threads is to keep it really simple: let each thread have its own resources.

djna