
I have a huge file (several GB), so I would not like to load the whole thing into memory; instead I want to use generators to read it line by line. My file looks something like this:

# millions of lines
..................
..................
keyw 28899
2233 121 ee 0o90 jjsl
2321 232 qq 0kj9 jksl
keyw 28900
3433 124 rr 8hu9 jkas
4532 343 ww 3ko9 aslk
1098 115 uy oiw8 rekl
keyw 29891
..................
..................
# millions more
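For context, iterating over a Python file object is already lazy: it yields one line at a time instead of loading the whole file. A minimal sketch (using `io.StringIO` to stand in for the real file, and a hypothetical `count_records` helper):

```python
import io

def count_records(infile):
    # Count 'keyw' marker lines without loading the file into memory:
    # iterating over a file object yields one line at a time.
    return sum(1 for line in infile if line.startswith('keyw'))

# Stand-in for open('huge_file.txt'); the path would be your own.
sample = io.StringIO("keyw 28899\n2233 121 ee\nkeyw 28900\n3433 124 rr\n")
print(count_records(sample))  # 2
```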

So far I have found a similar answer here, but I am lost as to how to implement it, because that answer relies on specific `Start` and `Stop` identifiers, whereas my file has an incrementing number after an identical keyword. I would appreciate some help with this.

Edit: generators, not iterators.

rNov

1 Answer


If you want to adapt that answer, this may help:

bucket = []
for line in infile:
    fields = line.split()
    # Guard against blank lines before indexing fields[0]
    if fields and fields[0] == 'keyw':
        # A new 'keyw' marker: flush the block collected so far
        for strings in bucket:
            outfile.write(strings + '\n')
        bucket = []
        continue
    bucket.append(line.strip())

# Flush the lines that follow the last 'keyw' marker
for strings in bucket:
    outfile.write(strings + '\n')
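Since the question specifically asks for generators, the same grouping can be wrapped in one so each block is yielded lazily instead of written out immediately. A sketch (`read_blocks` is a hypothetical name; it assumes every block starts with a `keyw N` line):

```python
def read_blocks(infile):
    """Yield (header, lines) pairs, one per 'keyw N' block."""
    header, bucket = None, []
    for line in infile:
        fields = line.split()
        if fields and fields[0] == 'keyw':
            # New marker: emit the previous block, if any
            if header is not None:
                yield header, bucket
            header, bucket = line.strip(), []
        elif fields:
            bucket.append(line.strip())
    # Emit the final block
    if header is not None:
        yield header, bucket
```

Because it only ever holds one block's lines in memory, you can loop over a multi-GB file with `for header, lines in read_blocks(open('huge_file.txt')):` and process each block as it arrives.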
Juan Diego Godoy Robles