I have written a short Python script to process large FASTQ files ranging in size from 5 GB to 35 GB. I run the script on a Linux server that has many cores. The script is not parallelized at all and takes about 10 minutes per file on average.
If I run the same script on several files like
$ python my_script.py file1 &
$ python my_script.py file2 &
$ python my_script.py file3 &
using the & sign to push each process to the background.

Do those scripts run in parallel, and will I save some time? It doesn't seem so to me: I'm using the top command to check processor usage, and each process's usage drops as I add new runs. Shouldn't each one be using somewhere close to 100%?
So if they are not running in parallel, is there a way to make the OS run them in parallel?
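For context, I also considered parallelizing inside the script itself rather than backgrounding shell jobs. A minimal sketch of what I had in mind with `multiprocessing` (here `process_file` is a hypothetical stand-in for my actual per-file FASTQ logic, which just counts lines for illustration):

```python
import sys
from multiprocessing import Pool

def process_file(path):
    # hypothetical placeholder for the real per-file work:
    # here it simply counts the lines in the file
    with open(path) as fh:
        return sum(1 for _ in fh)

if __name__ == "__main__":
    files = sys.argv[1:]
    # Pool() defaults to one worker per CPU core; each file is
    # handed to a separate worker process
    with Pool() as pool:
        results = pool.map(process_file, files)
    for path, result in zip(files, results):
        print(path, result)
```

Would something like this actually use the cores better than launching several interpreters with `&`, or are both approaches equivalent?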
Thanks for any answers.