Debian's Bash manual suggests using the special command substitution $(< file)
wherever $(cat file) would otherwise be required, for the sake of performance:
it avoids executing an external binary.
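For example, with a placeholder file data.txt, the two forms below yield the same result, but only the first runs an external program:
var=$(cat data.txt)   # forks and execs the external cat binary
var=$(< data.txt)     # the shell reads the file itself; no external program runs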
However, the measured completion times for the following two loops are of the same order of magnitude:
time for i in {0..1000}; do echo str | { in=$(cat); }; done
time for i in {0..1000}; do echo str | { in=$(< /dev/fd/0); }; done
Over a few runs, they consistently report values close to these figures, respectively:
real 0m3.665s
user 0m0.365s
sys 0m0.782s
and
real 0m2.401s
user 0m0.233s
sys 0m0.533s
So the improvement of $(< file) over $(cat file) seems largely negligible for most use cases.
Since my script needs to read large amounts of data from stdin quickly and cyclically, what can I do to speed up these reads? In particular, the whole stdin stream needs to be dumped into a Bash variable for further parameter substitutions.
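For concreteness, the intended usage pattern looks roughly like this, where foo and bar stand in for the actual substitution patterns:
in=$(cat)                       # or: in=$(< /dev/fd/0); slurp the whole of stdin
printf '%s\n' "${in//foo/bar}"  # then apply parameter substitutions to the captured data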
Further testing:
After the comments below and further testing, I increased the iteration count from 1,000 to 10,000 to dilute the pipe setup overhead, and I removed the braces of the compound-command syntax:
$ time for i in {1..10000}; do echo str | in=$(cat); done
real 0m24.754s
user 0m6.958s
sys 0m18.996s
$ time for i in {1..10000}; do echo str | in=$(< /dev/fd/0); done
real 0m33.913s
user 0m3.736s
sys 0m10.516s
Here I am unable to explain why $(< /dev/fd/0) is now even slower than $(cat).
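One property of every variant above is worth keeping in mind when reading these numbers: the assignment is itself an element of a pipeline, so it runs in a subshell and $in never reaches the parent shell; the loops therefore time only the reading, not a usable assignment. A minimal check:
unset in
echo str | in=$(cat)  # the assignment happens in a pipeline subshell
echo "${in-unset}"    # prints "unset" (with the default lastpipe off)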