
I have files with duplicate lines like these, where only the last field differs:

OST,0202000070,01-AUG-09,002735,6,0,0202000068,4520688,-1,0,0,0,0,0,55
ONE,0208076826,01-AUG-09,002332,316,3481.055935,0204330827,29150,200,0,0,0,0,0,5
ONE,0208076826,01-AUG-09,002332,316,3481.055935,0204330827,29150,200,0,0,0,0,0,55
OST,0202000068,01-AUG-09,003019,6,0,0202000071,4520690,-1,0,0,0,0,0,55

I need to remove the first occurrence of each such line and keep the second one.

I've tried:

awk '!x[$0]++ {getline; print $0}' file.csv

but it's not working as intended, as it's also removing non-duplicate lines.

zedascouves
3 Answers

#!/bin/awk -f
{
    # Key on everything before the last comma-separated field
    # (substr from position 1 is the portable form).
    s = substr($0, 1, match($0, /,[^,]+$/) - 1)
    # Print only the first line seen for each key.
    if (!seen[s]) {
        print $0
        seen[s] = 1
    }
}
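
Note that this keeps the first line of each group, while the question asks for the second (last) one. One workaround is to reverse the input and the output around the script, for example (dedupe.awk is a hypothetical name for the script above):

tac file.csv | awk -f dedupe.awk | tac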
Steven Huwig

As a general strategy (I'm not much of an AWK pro despite taking classes with Aho), you might try:

  1. Concatenate all the fields except the last.
  2. Use this string as a key to a hash.
  3. Store the entire line as the value to a hash.
  4. When you have processed all lines, loop through the hash printing out the values.

This isn't AWK-specific and I can't easily provide any sample code, but this is what I would first try.
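
For concreteness, a minimal AWK sketch of that strategy (my addition, not part of the original answer; it assumes comma-separated input and adds a keys array to preserve input order, which a plain for (k in line) loop would not guarantee):

#!/bin/awk -f
{
    # Steps 1-3: key on everything before the last field and
    # store the whole line; a later duplicate overwrites an
    # earlier one, so the last occurrence wins.
    key = substr($0, 1, match($0, /,[^,]+$/) - 1)
    if (!(key in line))
        keys[++n] = key      # remember first-seen key order
    line[key] = $0
}
END {
    # Step 4: loop through the hash printing the stored values.
    for (i = 1; i <= n; i++)
        print line[keys[i]]
}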

Willi Ballenthin

If your near-duplicates are always adjacent, you can just compare to the previous entry and avoid creating a potentially huge associative array.

#!/bin/awk -f
{
    # Key on everything before the last comma-separated field.
    s = substr($0, 1, match($0, /,[^,]*$/) - 1)
    # When the key changes, the previous line was the last of its
    # group, so print it (guard against the empty buffer on line 1).
    if (NR > 1 && s != prev) {
        print prev0
    }
    prev = s
    prev0 = $0
}
END {
    # Flush the last buffered line ($0 is not portable in END).
    if (NR) print prev0
}

Edit: Changed the script so it prints the last one in a group of near-duplicates (no tac needed).
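
Run against the sample input from the question, the output should be:

OST,0202000070,01-AUG-09,002735,6,0,0202000068,4520688,-1,0,0,0,0,0,55
ONE,0208076826,01-AUG-09,002332,316,3481.055935,0204330827,29150,200,0,0,0,0,0,55
OST,0202000068,01-AUG-09,003019,6,0,0202000071,4520690,-1,0,0,0,0,0,55

i.e. the first ONE line (last field 5) is dropped and the second one (last field 55) is kept.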

Dennis Williamson