
This, from a bash script, looks like it will always run out of memory eventually, using incrementally more memory for each line it reads:

while read line; do
echo $line
done < ./hugefile

Is there a way to have bash read a file line by line without memory usage climbing?

I've seen another thread acknowledge that [for x in y; do] loops have this problem, but the workaround there doesn't quite fit reading a file line by line.

Obviously I could break the file into pieces, but I'd prefer an elegant solution that doesn't use memory where it isn't needed; memory should only ever need to hold one line at a time.

So, how can I read an arbitrarily large file line by line without worrying about memory?
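For completeness, here is the same loop written with the usual defensive options; as far as I can tell it should still only need to hold one line in memory at a time:

while IFS= read -r line; do    # IFS= keeps leading/trailing whitespace, -r keeps backslashes literal
    printf '%s\n' "$line"      # quoting stops the shell from glob-expanding the line's contents
done < ./hugefile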

  • The problem in the post you link to is completely different. Your loop shouldn't leak. How are you measuring that it does? – Mat Jun 03 '17 at 15:12
  • Total memory in `top` is increasing, and last round it failed in a way consistent with running out of memory. I'm using a Raspberry Pi and couldn't ssh into it afterwards, so I'm watching it this time. The only other aspect is that the step after reading each line is a write to disk, but I can't see that being a factor. Unless it is `screen` related, but I understood `screen` doesn't retain much past a couple of pages. – David Brown Jun 03 '17 at 15:15
  • So, I just checked outside screen and it's not `screen` adding memory use. – David Brown Jun 03 '17 at 15:22
  • The loop itself isn't much of a problem, and will only fail due to memory if lines are ridiculously long or contain sequences like `/*/*/*/* /*/*/*/* ...`. However, anything that invokes this loop may soak up infinite memory, such as `echo "$(thisscript)" > file` – that other guy Jun 03 '17 at 17:27
  • It's not obviously either of those, and not the simple `sed` statements. It seems to be centred on `mkdir`, but I'll need to take a closer look to be sure; on a first pass it's not the errors those produce, since redirecting them to /dev/null doesn't change the memory creep. I don't know enough about how disk management works (ext3) to tell whether it's necessarily accumulating or retaining some detail, with new detail arriving faster than it can be actioned. – David Brown Jun 03 '17 at 18:16
  • What is also odd is that, having stopped the script, memory doesn't drop back to the pre-script level. I don't know if there is a way to prompt the system to reconsider what it holds for live processes and purge redundant detail; perhaps an occasional refresh of memory use would also be a fix for this? – David Brown Jun 03 '17 at 18:16
  • This sounds like a Raspberry Pi issue. Are you doing anything other than echoing the line? If not, perhaps try `awk` on the same file? – Dale_Reagan Jun 09 '17 at 17:34
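Following up on the comments above, a rough sketch rather than a confirmed diagnosis — `thisscript` stands in for the calling script named in the comment, and `file` for whatever it writes to:

# Capturing the whole output in a command substitution buffers every line in memory first:
# echo "$(thisscript)" > file
# Streaming the output directly keeps memory flat:
thisscript > file

# An awk equivalent of the read/echo loop, as suggested above; awk also
# processes one record at a time rather than loading the whole file:
awk '{ print }' ./hugefile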

0 Answers