
I can filter the last 500 lines using tail and grep:

tail -n 500 my_log | grep "ERROR"

What is the equivalent command using awk?

How can I add the line number to each match in the command below?

awk '/ERROR/' my_log
Shihabudheen K M

  • Possible duplicate of [Implement tail with awk](https://stackoverflow.com/questions/9101296/implement-tail-with-awk) – kvantour Jan 03 '19 at 13:48

3 Answers


Could you please try the following:

tac Input_file | awk 'FNR<=500 && /ERROR/' | tac

In case you want to add the line number in the awk command, then try the following:

awk '/ERROR/{print FNR,$0}' Input_file
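The two commands above can be checked together on a tiny made-up log (n=3 instead of 500, and /tmp/my_log is just an illustrative path):

```shell
#!/bin/sh
# Build a small sample log (made-up data) to demonstrate both commands.
printf '%s\n' 'ok 1' 'ERROR a' 'ok 2' 'ERROR b' 'ERROR c' > /tmp/my_log

# ERROR lines among the last 3 lines of the file:
# tac reverses the file, awk keeps the first 3 lines that also match,
# and the second tac restores the original order.
tac /tmp/my_log | awk 'FNR<=3 && /ERROR/' | tac
# -> ERROR b
#    ERROR c

# Every ERROR line, prefixed with its line number in the original file:
awk '/ERROR/{print FNR": "$0}' /tmp/my_log
# -> 2: ERROR a
#    4: ERROR b
#    5: ERROR c
```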
RavinderSingh13

As there was no sample data to test with, I'll demonstrate with just numbers using seq 1 10. This one stores the last n records and prints them out at the end:

$ seq 1 10 | 
  awk -v n=3 '{a[++c]=$0;delete a[c-n]}END{for(i=c-n+1;i<=c;i++)print a[i]}'
8
9
10

If you want to filter the data, add for example /ERROR/ before {a[++c]=$0; ....

Explained:

awk -v n=3 '{              # set wanted amount of records
    a[++c]=$0              # hash the record to a
    delete a[c-n]          # delete the ones outside of the window
}
END {                      # in the end,
    for(i=c-n+1;i<=c;i++)  # in order
        print a[i]         # output the records
}'
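Following the suggestion above, placing /ERROR/ before the block restricts the rolling buffer to matching lines only; a quick sketch with n=2 and made-up input (the `if(i in a)` guard covers the case where fewer than n lines matched):

```shell
# Keep only the last n=2 matching records: the /ERROR/ guard means only
# matching lines enter the rolling buffer a[].
printf '%s\n' 'ERROR a' 'ok' 'ERROR b' 'ERROR c' 'ok' |
  awk -v n=2 '/ERROR/{a[++c]=$0; delete a[c-n]}
              END{for(i=c-n+1;i<=c;i++) if(i in a) print a[i]}'
# -> ERROR b
#    ERROR c
```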
James Brown

  • It might be easier to use `a[NR%n]=$0` and then `for(i=NR+1;i<=NR+n;i++) if (a[i%n] ~ /ERROR/) print a[i%n]` I don't know if it will be faster (`delete` vs `%`) – kvantour Jan 03 '19 at 13:53
  • If the last 100 processed records didn't have a single error line, there wouldn't be any output, if my meeting-pestered mind serves me right. – James Brown Jan 03 '19 at 14:19
  • 1
    That is correct, but that would be the analogue to the OP. – kvantour Jan 03 '19 at 14:21
  • Right. Didn't even bother re-reading the op. You know what we think of the ops. :D – James Brown Jan 03 '19 at 14:23
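kvantour's ring-buffer variant from the comments can be sketched like this (made-up input, n=3): a fixed-size array indexed by NR%n always holds the last n lines, and the END loop walks the buffer in order, printing only the lines that match. As the comments note, this matches the OP's semantics (filter within the last n lines), so it may print nothing:

```shell
# a[NR%n] overwrites the oldest of the last n lines on each record;
# the END loop visits slots (NR+1)%n .. (NR+n)%n, i.e. oldest to newest.
printf '%s\n' 'ERROR a' 'ok 1' 'ERROR b' 'ok 2' |
  awk -v n=3 '{a[NR%n]=$0}
              END{for(i=NR+1;i<=NR+n;i++) if(a[i%n] ~ /ERROR/) print a[i%n]}'
# -> ERROR b   (the only ERROR among the last 3 lines)
```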

awk doesn't know where the end of a file is until it has finished reading it, but you can read the file twice: the first pass finds the total line count, the second treats the lines that are in scope. You could also keep the last X lines in a buffer, but that is heavier on memory and processing. Notice that the file needs to be mentioned twice at the end of the command for this to work.

awk 'FNR==NR{LL=NR-500;next} FNR>LL && /ERROR/{print FNR":"$0}' my_log  my_log

With explanation:

awk '# first reading
     FNR==NR{
       # the last 500 lines start after this one; LL is recomputed on
       # every line, so only the final value (total - 500) matters
       LL=NR-500
       # go to the next line (of this first pass)
       next
       }

     # second read (the FNR==NR guard above no longer matches)
     # if the line number is after LL AND ERROR is in the line content, print it
     FNR > LL && /ERROR/ { print FNR ":" $0 }
     ' my_log  my_log
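The two-pass logic can be verified on a tiny made-up file (n=3 instead of 500, and /tmp/two_pass_demo is just an illustrative path):

```shell
#!/bin/sh
# Five lines; the last 3 are lines 3-5, of which only line 4 has ERROR.
printf '%s\n' 'ERROR a' 'ok' 'ok' 'ERROR b' 'ok' > /tmp/two_pass_demo

# Pass 1 counts lines (LL ends up as total - 3);
# pass 2 prints ERROR lines whose number is past LL.
awk 'FNR==NR{LL=NR-3; next}
     FNR>LL && /ERROR/{print FNR": "$0}' /tmp/two_pass_demo /tmp/two_pass_demo
# -> 4: ERROR b
```

Note that 'ERROR a' on line 1 is correctly excluded because it falls outside the last 3 lines.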

With GNU sed (sed has no `$-500` address form, so the starting line has to be computed first; this assumes the log has at least 500 lines):

sed -n "$(( $(wc -l < my_log) - 499 )),\$ {/ERROR/p}" my_log
NeronLeVelu