
When executing a job on LSF you can specify the working directory and have LSF create an output directory, e.g.

bsub -cwd /home/workDir -outdir /home/%J program inputfile

where it will look for inputfile in the specified working directory. The -outdir option creates a new directory named after the job ID.

What I'm wondering is how you get the results the run creates in the working directory into the newly created output directory.

You can't add a command like

mv * /home/%J

as the underlying OS has no understanding of the %J identifier. Is there an option in LSF for moving the data from inside the job, where it knows the job ID?

RBanks

1 Answer


You can use the environment variable $LSB_JOBID.

mv * /data/${LSB_JOBID}/
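
For example, a minimal job script along these lines (run.sh and the /data paths are illustrative names, and it assumes the job was submitted with -outdir "/data/%J" so the target directory already exists):

#!/bin/bash
# run.sh: runs in the working directory set by bsub -cwd
program inputfile

# LSF sets LSB_JOBID in the job's environment, so the shell
# expands it even though the OS knows nothing about %J
mv * "/data/${LSB_JOBID}/"

You would then submit it with something like: bsub -cwd /home/workDir -outdir "/data/%J" sh run.sh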

If you copy the data inside your job script, the job will hold its compute resources during the copy. If you're copying a small amount of data then it's not a problem. But if it's a large amount of data, you can use bsub -f so that other jobs can start while the data copy is ongoing.

bsub -outdir "/data/%J" -f "/data/%J/final < bigfile" sh script.sh

bigfile is the file that your job creates on the compute host. It will be copied to /data/%J/final after the job finishes. It even works on a non-shared filesystem.
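
Putting it together, a sketch of the whole flow might look like this (script.sh, bigfile, and program are the illustrative names from above; program stands in for whatever your job actually runs):

#!/bin/bash
# script.sh, executed on the compute host:
# write the job's result to bigfile in the job's working directory
program inputfile > bigfile

# submitted from the submission host:
bsub -cwd /home/workDir -outdir "/data/%J" -f "/data/%J/final < bigfile" sh script.sh

After the job completes, LSF copies bigfile from the compute host to /data/<jobid>/final on the submission host, so nothing inside the script itself needs to know the job ID.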

Michael Closson