One logical part of my C++ program has to fetch text files by calling curl
(the Linux command-line program, not the library) and reading its output via a pipe. There may be many URLs to download, and download times can be significant. The program is meant to be scalable and efficient.
So the question is what variant is preferable:
- Run a single instance of curl, giving it the whole list of URLs
- Create a pool of threads, each spawning a curl process for a single URL, and then aggregate the threads' outputs.
In other words, does curl download several URLs concurrently, or do I have to implement that concurrency myself?