Echoing wombie's concern, I don't think you want the server trying to do big data copy jobs in parallel.
Whether you are copying multiple partitions on one disk (which wombie predicts would cause the disk heads to thrash and slow everything down) or copying multiple disks over a single USB bus (where each data stream generates interrupts that slow the others down), running the jobs in parallel is going to slow things down unless you are dealing with a transmission technology specifically designed to handle high throughput from multiple clients.
For example, FTPing a single file over 10BaseT Ethernet, I could get over 1 MByte/sec (over 8 Mbit/sec) of throughput, but if I tried to FTP two files from different machines, even to the same server, the throughput fell to about 150 KByte/sec per transfer (i.e., about 300 KByte/sec total, or 2.4 Mbit/sec). (This is from memory, and it may have taken three transmitting stations to drop the 10BaseT utilization from ~90% to ~30%. Still, adding a second station decreased the overall efficiency because of collisions.)
Besides, it's a catch-22: the protocols that can gracefully handle multiplexing high-throughput streams generally introduce high overhead. Classic examples of networking protocols that handle this gracefully: Token Ring, FDDI, ATM. ATM, for instance, spends 5 of the 53 bytes in every cell on the header, roughly 9.4% overhead before you even count adaptation-layer framing.
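Just to put a number on that cell tax (my own arithmetic, nothing specific to your setup):

```python
# ATM cell: 53 bytes on the wire, 5 of them header, 48 of payload.
CELL_BYTES, HEADER_BYTES = 53, 5
overhead = HEADER_BYTES / CELL_BYTES       # 5/53 ~= 0.094
print(f"ATM header tax: {overhead:.1%}")   # ~9.4%, before any adaptation-layer overhead
```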
Whether you use dd, partimage, or clonezilla, I would suggest:
- write a script that checks whether there is a disk waiting to be copied
- copies one disk at a time
- loops
Then, when you add a disk to the chain, it will get copied automatically, much like the BitTorrent clients that periodically watch a folder for new torrents and process them on their own.
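Here is a minimal sketch of that loop in Python, using plain dd. The glob pattern, image directory, and polling interval are assumptions on my part; adjust them for however your disks actually show up:

```python
#!/usr/bin/env python3
"""Poll for newly attached disks and image them one at a time with dd.

A rough sketch, not a hardened tool: the device glob, image directory,
and polling interval are assumptions you will need to adjust.
"""
import glob
import subprocess
import time
from pathlib import Path

SOURCE_GLOB = "/dev/sd[b-z]"   # assumed: disks to copy appear here (sda being the system disk)
IMAGE_DIR = Path("/images")    # assumed: where the disk images should land
POLL_SECONDS = 30

done = set()                   # devices already copied (re-plugged disks with the same name are skipped)

while True:
    for dev in sorted(glob.glob(SOURCE_GLOB)):
        if dev in done:
            continue
        image = IMAGE_DIR / (Path(dev).name + ".img")
        print(f"Copying {dev} -> {image}")
        # One dd at a time: the next disk is not touched until this one finishes.
        result = subprocess.run(
            ["dd", f"if={dev}", f"of={image}", "bs=4M", "conv=noerror,sync"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            done.add(dev)
        else:
            print(f"dd failed for {dev}: {result.stderr.strip()}")
    time.sleep(POLL_SECONDS)   # then loop and check for newly attached disks
```

The same polling idea works if you swap the dd call for partimage or Clonezilla's command-line tools; the point is just that the copies run strictly one after another.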
I would also suggest not using USB if you can, or at least getting multiple USB cards so each disk can have its own USB bus.