This snippet from a bash script looks like it will eventually run out of memory, using incrementally more memory for each line it reads:
while IFS= read -r line; do
    echo "$line"
done < ./hugefile
Is there a way to have bash read a file line by line without memory usage climbing?
I've seen another thread acknowledge that [for x in y; do] loops have this problem, but the workaround there doesn't quite fit my case of reading a file line by line.
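For clarity, this is the kind of loop I mean (assuming that thread was talking about the usual for-over-command-substitution pattern), where the whole file is expanded before the loop even starts:

    # The command substitution expands the entire file into memory
    # before iteration begins, so every line is held at once.
    for x in $(cat ./hugefile); do
        echo "$x"
    done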
Obviously, I could break the file into pieces, but I would prefer an elegant solution that doesn't use memory where it isn't needed; memory should only need to hold one line at a time.
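By "break the file into pieces" I mean something along these lines with split(1), where the chunk size and file names are just placeholders:

    # Split into 1,000,000-line chunks named chunk_aa, chunk_ab, ...
    # and process each smaller file separately.
    split -l 1000000 ./hugefile chunk_
    for piece in chunk_*; do
        while IFS= read -r line; do
            echo "$line"
        done < "$piece"
    done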
So, how can I read an arbitrarily large file line by line without worrying about memory?