My model produces one NetCDF file for every timestep and every variable, named DDDDDDD.VVV.nc, where DDDDDDD is the date and VVV is the variable name.
For each timestep, I use NCO (`ncks`) to append the files for the different variables, so that I end up with one file per timestep.
#!/bin/bash
# Loop over timesteps and merge all variables into one file per timestep.
# One variable ('O2o') is used to enumerate the timesteps.
for f in *.O2o.nc; do
    timestep=$(echo "$f" | cut -b -21)   # keep the 21-byte date prefix
    echo "$timestep"
    for var in "$timestep"*.nc; do
        ncks -A -h "$var" "F1_${timestep}.nc"
    done
done
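As a side note, the `cut -b -21` keeps the first 21 bytes of the filename, i.e. the date prefix before the variable name. A minimal sketch of the same extraction in pure bash, avoiding the `echo | cut` subshell (the filename below is a hypothetical 21-character prefix, not one of my real outputs):

```shell
# Extract the timestep prefix from a filename shaped like PREFIX.VVV.nc.
# "20240101-120000_run01" is a made-up 21-char prefix for illustration.
f="20240101-120000_run01.O2o.nc"
timestep=${f%%.*}   # strip everything from the first '.' onward
echo "$timestep"
```

This assumes the date prefix itself contains no dots; otherwise the fixed-width `cut -b -21` form is safer.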
There are about 432 output variables, and each file is about 6.4K or 1.1K (the variables do not all have the same number of dimensions).
I find the process very slow (about 15 seconds per timestep), even though the files are very small. Any idea how I could optimize the script?