
I came across a requirement to continuously monitor a live (appending) file for a predefined pattern (say, an error message). I'm planning to use tail -F [FileName] | grep "pattern" inside a shell script that notifies me. My concern is how this will behave on a huge file, say 50 GB in size: I want to understand how many system resources this kind of solution will consume. In short, how does tail handle a file from a resource-utilization perspective?
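A minimal sketch of the script described above, assuming GNU tail and grep; the log path, the pattern, and the notification command are placeholders to adjust for your setup:

```shell
#!/bin/sh
# Follow the log by name (-F survives log rotation) and react to each match.
# LOGFILE and the printf below are placeholders -- adjust for your setup.
LOGFILE=/var/log/app.log

tail -F "$LOGFILE" | grep --line-buffered "pattern" | while read -r line; do
    # Replace with mail, notify-send, a webhook call, etc.
    printf 'match: %s\n' "$line"
done
```

The --line-buffered flag stops grep from block-buffering its output into the pipe, so each match reaches the notification step as soon as it is written.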

vasco.debian
    http://unix.stackexchange.com/questions/20256/is-it-fine-to-use-tail-f-on-large-log-files looks like exactly your problem. – user121391 Nov 02 '16 at 15:12

1 Answer


tail does not read the whole file. When it can seek, it starts at the end and backtracks until it has found the expected number of lines, so the file's size barely matters. It only reads everything when it cannot seek, for example when reading from a pipe. When following with -f/-F, it then just sits at the end of the file and reads whatever gets appended, so a 50 GB file costs no more to follow than a 50 KB one.
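The seek-vs-pipe difference is easy to demonstrate (the file name here is arbitrary):

```shell
# Create a test file of 100000 numbered lines.
seq 1 100000 > big.txt

# Regular file: tail seeks near the end and backtracks, reading only the
# last block(s) -- on a 50 GB log it never touches the earlier gigabytes.
tail -n 3 big.txt

# Pipe: seeking is impossible, so tail must consume all 100000 lines,
# keeping only the last 3 in memory.
seq 1 100000 | tail -n 3

rm -f big.txt
```

Both commands print the same last three lines (99998, 99999, 100000); the difference is only in how much input tail has to read to get there.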

This might not apply to you, but keep in mind that plain tail -f follows the file descriptor, not the filename. So if you have, for example, log rotation, tail -f will just go quiet: it keeps watching the renamed file, and the original name has stopped changing. The tail -F you are already planning to use avoids this, because it follows the name and re-opens the file after rotation.
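With GNU tail the two behaviors are spelled out explicitly, -F being shorthand for --follow=name --retry (the log path here is hypothetical):

```shell
# Follows the open descriptor: after logrotate does `mv app.log app.log.1`,
# this tail keeps watching the renamed file and never sees the new app.log.
tail -f /var/log/app.log

# Follows the *name*: when app.log is renamed and recreated, tail notices,
# re-opens the new file, and keeps printing from it.
tail -F /var/log/app.log    # same as: tail --follow=name --retry /var/log/app.log
```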

mzhaase