
The code below works fine for a single "manager", which essentially fires a series of HTTP GETs at a server. But I've hit a brick wall.

How do I create two managers now, each with its own Download_Dashlet_Job and its own tcp_pool_object? In essence, each manager would be commanding its own workers on a separate job. This seems like a really good puzzle for learning Python classes.

import workerpool
from urllib3 import HTTPConnectionPool

headers = {}  # request headers sent with every GET (left empty here)

class Download_Dashlet_Job(workerpool.Job):
  def __init__(self, url):
    self.url = url
  def run(self):
    # each job borrows a socket from the shared TCP connection pool
    request = tcp_pool_object.request('GET', self.url, headers=headers)

# 3 sockets shared by all jobs; block=True makes jobs wait for a free socket
tcp_pool_object = HTTPConnectionPool('M_Server', port=8080, timeout=None, maxsize=3, block=True)
dashlet_thread_worker_pool_object = workerpool.WorkerPool(size=100)

# this section emulates a single manager calling 6 threads from the pool but limited to 3 TCP sockets by tcp_pool_object
for url in open("overview_urls.txt"):
  job_object = Download_Dashlet_Job(url.strip())
  dashlet_thread_worker_pool_object.put(job_object)

dashlet_thread_worker_pool_object.shutdown()
dashlet_thread_worker_pool_object.wait()
TedBurrows
  • I looked into Celery... it's very interesting and clearly has a lot of potential, but it's probably overkill for what I'm doing. I'm basically writing a load test for a server (i.e., does the server support 200+ clients all at once?). – TedBurrows Apr 27 '12 at 03:40

1 Answer


First, workerpool.WorkerPool(size=100) creates 100 worker threads. The comment in your code says you want only 6, so change size to 6.
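That is, if six concurrent download threads per manager is what you actually want:

dashlet_thread_worker_pool_object = workerpool.WorkerPool(size=6)  # 6 worker threads sharing the 3-socket TCP pool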

To create a second manager, just instantiate another WorkerPool (and another HTTPConnectionPool) and feed it its own jobs. You can also define a second Job class and put that different type of job into the same pool, if you prefer. A sketch of the two-pool approach follows.
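Here is a rough sketch of the two-manager version, assuming the same workerpool/urllib3 APIs your code already uses; Download_Job, the report_* names, and report_urls.txt are made-up placeholders for the second manager's pieces:

import workerpool
from urllib3 import HTTPConnectionPool

class Download_Job(workerpool.Job):
    """Fetch one URL through whichever connection pool the manager hands it."""
    def __init__(self, tcp_pool, url):
        self.tcp_pool = tcp_pool
        self.url = url

    def run(self):
        self.tcp_pool.request('GET', self.url)

# Manager 1: dashlet downloads, with its own 3-socket TCP pool and 6 worker threads
dashlet_tcp_pool = HTTPConnectionPool('M_Server', port=8080, timeout=None, maxsize=3, block=True)
dashlet_workers = workerpool.WorkerPool(size=6)

# Manager 2: an independent job, again with its own TCP pool and worker threads
report_tcp_pool = HTTPConnectionPool('M_Server', port=8080, timeout=None, maxsize=3, block=True)
report_workers = workerpool.WorkerPool(size=6)

for url in open("overview_urls.txt"):
    dashlet_workers.put(Download_Job(dashlet_tcp_pool, url.strip()))

for url in open("report_urls.txt"):  # placeholder URL list for the second manager
    report_workers.put(Download_Job(report_tcp_pool, url.strip()))

# let both managers drain their queues and stop their workers
for pool in (dashlet_workers, report_workers):
    pool.shutdown()
    pool.wait()

Passing the connection pool into the job's constructor, instead of reading a module-level global, is what lets one Job class serve both managers while each manager stays capped at its own 3 sockets.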

shazow