
So I'm playing with the Wikipedia dump file. It's an XML file that has been bzip2-compressed. I can write all the articles out to files in directories, but then when I want to do analysis I have to reread all the files from disk. That gives me random access, but it's slow. I have enough RAM to hold the entire bzipped file in memory.

I can load the dump file just fine and read all the lines, but I cannot seek in it because it's gigantic. From what I can tell, the bz2 library has to read (and decompress) everything up to an offset before it can take me there, since the offset is in decompressed bytes.

Anyway, I'm trying to mmap the dump file (~9.5 GB) and load it into bz2. I obviously want to test this on a small bzip2 file first.

I want to map the mmap file to a BZ2File object so I can seek through it (to get to a specific uncompressed byte offset), but from what I can tell this is impossible without decompressing the entire mmapped file (which would be well over 30 GB).

Do I have any options?

Here's some code I wrote to test.

import bz2
import mmap

lines = b'''This is my first line
This is the second
And the third
'''

with open("bz2TestFile", "wb") as f:
    f.write(bz2.compress(lines))

with open("bz2TestFile", "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    print("Part of MMAPPED")
    # This does not work until the slice reaches a minimum length,
    # because a truncated stream fails the bz2 format/checksum checks
    for x in range(len(mapped) + 1):
        chunk = mapped[0:x]
        try:
            print(x)
            print(bz2.decompress(chunk))
        except (OSError, ValueError):
            pass

    # I can decompress the entire mmapped file
    print(":entire mmap file:")
    print(bz2.decompress(mapped[:]))

# I can create a BZ2File object from the file path
# Is there a way to map the mmap object to this function?
print(":BZ2 File readline:")
bzF = bz2.BZ2File("bz2TestFile")

# Seek to a specific uncompressed offset
bzF.seek(22)
# Read the data
print(bzF.readline())
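(A note for later readers: on Python 3, `BZ2File` also accepts an already-open file object, not just a path, so an in-memory view of the mapped bytes can be handed to it directly. A minimal sketch, using a `BytesIO` stand-in for the mmapped dump:)

```python
import bz2
import io

payload = b"first line\nsecond line\nthird line\n"
compressed = bz2.compress(payload)  # stand-in for the mmapped dump file

# On Python 3, BZ2File accepts any readable file object,
# so a BytesIO view over the mapped bytes works in place of a path.
bzf = bz2.BZ2File(io.BytesIO(compressed))
bzf.seek(11)           # uncompressed offset: start of "second line"
print(bzf.readline())  # b'second line\n'
```

This avoids re-reading from disk, but a seek still decompresses from the start of the stream up to the target offset, so it does not solve the random-access cost by itself. Passing the mmap object itself may also work, since mmap objects expose file-like `read`/`seek` methods.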

This all makes me wonder, though: what is special about the bz2 file object that allows it to read a line after seeking? Does it have to read everything before that point to get the checksums in the algorithm to work out correctly?
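(Essentially yes: `BZ2File` keeps no index; a seek to an earlier offset rewinds the underlying compressed stream and decompresses forward again from the beginning. A small experiment makes this visible on Python 3, using a hypothetical byte-counting wrapper around the compressed stream:)

```python
import bz2
import io

class CountingBytesIO(io.BytesIO):
    """BytesIO that counts how many compressed bytes BZ2File pulls,
    so we can see it re-reading the stream after a backward seek."""
    def __init__(self, data):
        super().__init__(data)
        self.bytes_read = 0

    def read(self, size=-1):
        chunk = super().read(size)
        self.bytes_read += len(chunk)
        return chunk

raw = CountingBytesIO(bz2.compress(b"line\n" * 1000))
bzf = bz2.BZ2File(raw)
bzf.read()                    # decompress the whole stream once
first_pass = raw.bytes_read

bzf.seek(0)                   # a "cheap" seek back to the start...
bzf.read(5)
# ...forces the compressed stream to be read all over again:
print(raw.bytes_read > first_pass)  # True
```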

MercuryRising
  • This is a limitation of the BZ2 format; you cannot know the size of anything in the file until you've decompressed the whole damn thing. – Martijn Pieters Sep 30 '12 at 09:04
  • If the file is static, can I decompress it once, get the data I need, and then use this information to decompress it on the fly? Or should I just try a different compression format? – MercuryRising Sep 30 '12 at 09:42
  • I don't know; I'd use `gzip` compression instead, it's more suitable for streaming and flexible decompression. – Martijn Pieters Sep 30 '12 at 09:44

1 Answer


I found an answer! James Taylor wrote a couple of scripts for seeking in BZ2 files; they are part of his bx-python module.

https://bitbucket.org/james_taylor/bx-python/overview

These work pretty well, although they do not allow seeking to arbitrary byte offsets in the BZ2 file; instead, they read out blocks of BZ2 data and allow seeking based on block boundaries.

In particular, see bx-python / wiki / IO / SeekingInBzip2Files
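The trick those scripts rely on can be sketched in a few lines: every bz2 block starts with a fixed 48-bit magic number (0x314159265359), but blocks are bit-aligned rather than byte-aligned, so the stream has to be scanned bitwise. A minimal illustration (not James Taylor's actual code; real tools also verify each candidate hit, since the pattern can occur by chance inside compressed data):

```python
import bz2

BLOCK_MAGIC = 0x314159265359  # 48-bit bz2 block-header magic

def find_block_bit_offsets(data):
    """Return the bit offsets at which the block magic appears.

    The first block always starts right after the 4-byte
    'BZh<level>' stream header, i.e. at bit offset 32.
    """
    offsets = []
    window = 0
    mask = (1 << 48) - 1
    bits_seen = 0
    for byte in data:
        for shift in range(7, -1, -1):
            # Slide each bit of the stream into a 48-bit window
            window = ((window << 1) | ((byte >> shift) & 1)) & mask
            bits_seen += 1
            if bits_seen >= 48 and window == BLOCK_MAGIC:
                offsets.append(bits_seen - 48)
    return offsets

data = bz2.compress(b"some test data\n" * 100)
print(find_block_bit_offsets(data)[0])  # 32: first block follows the header
```

Once the block boundaries are known, a seek can jump to the nearest block and decompress only from there instead of from the start of the file, which is exactly the block-based seeking the scripts provide.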

nealmcb
MercuryRising
  • Note that to get the bzip-table command, which takes care of mapping uncompressed offsets to compressed offsets, you also need the seek-bzip2 repo, as noted at [james_taylor / bx-python / issues / #14 - Getting Started: Indexing MAFs — Bitbucket](https://bitbucket.org/james_taylor/bx-python/issues/14/getting-started-indexing-mafs) – nealmcb Feb 09 '16 at 05:07