I have a huge text file that I want to open. I'm reading the file in chunks to avoid memory issues from reading too much of it at once.
Code snippet:
import re

def open_delimited(fileName, args):
    # Read lazily in fixed-size chunks so the whole file never has to
    # fit in memory at once.
    with open(fileName, args, encoding="UTF16") as infile:
        chunksize = 10000
        remainder = ''
        for chunk in iter(lambda: infile.read(chunksize), ''):
            # Match "<id> <id>_<id>" pairs across the leftover from the
            # previous chunk plus the new chunk.
            pieces = re.findall(r"(\d+)\s+(\d+_\d+)", remainder + chunk)
            for piece in pieces[:-1]:
                yield piece
            # Carry the last match over; it may be cut off at the
            # chunk boundary.
            remainder = '{} {} '.format(*pieces[-1])
        if remainder:
            yield remainder
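For what it's worth, this is roughly how I consume the generator ("huge_file.txt" is a placeholder for my actual path):

    for piece in open_delimited("huge_file.txt", "r"):
        print(piece)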
The code throws the error UnicodeDecodeError: 'utf16' codec can't decode bytes in position 8190-8191: unexpected end of data. I tried UTF8 and got the error UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: invalid start byte. Both latin-1 and iso-8859-1 raised IndexError: list index out of range.
A sample of the input file:
b'\xff\xfe1\x000\x000\x005\x009\x00\t\x001\x000\x000\x005\x009\x00_\x009\x007\x004\x007\x001\x007\x005\x003\x001\x000\x009\x001\x00\t\x00\t\x00P\x00o\x00s\x00t\x00\t\x001\x00\t\x00H\x00a\x00p\x00p\x00y\x00 \x00B\x00i\x00r\x00t\x00h\x00d\x00a\x00y\x00\t\x002\x000\x001\x001\x00-\x000\x008\x00-\x002\x004\x00 \x00'
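The leading \xff\xfe looks like a UTF-16 little-endian BOM, which would explain the other two failures: 0xff can never start a UTF-8 sequence, and latin-1/iso-8859-1 decode every byte blindly, leaving NULs between the digits, so presumably the regex finds no matches and pieces[-1] raises the IndexError. A quick check in the interpreter with a shortened copy of the sample:

    sample = b'\xff\xfe1\x000\x000\x005\x009\x00\t\x001\x000\x000\x005\x009\x00'
    print(repr(sample.decode('utf-16')))   # '10059\t10059' -- decodes cleanly
    print(repr(sample.decode('latin-1')))  # 'ÿþ1\x000\x00...' -- NULs everywhere
    sample.decode('utf8')                  # raises: invalid start byte (0xff)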
I will also mention that I have several of these huge text files; UTF16 works fine for many of them but fails on one specific file. Is there any way to resolve this issue?
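For what it's worth, the workaround I'm considering is to open the file in binary and decode with an incremental decoder, so a UTF-16 code unit split across a chunk boundary (or truncated at the end of the bad file) gets buffered or replaced instead of raising. A minimal sketch, assuming errors='replace' is acceptable for the corrupt spots (read_text_chunks is just a stand-in name):

    import codecs

    def read_text_chunks(fileName, chunksize=10000):
        # Incremental decoding buffers UTF-16 code units that are split
        # across chunk boundaries instead of raising mid-stream.
        decoder = codecs.getincrementaldecoder('utf-16')(errors='replace')
        with open(fileName, 'rb') as infile:
            for chunk in iter(lambda: infile.read(chunksize), b''):
                text = decoder.decode(chunk)
                if text:
                    yield text
            # final=True flushes the buffer; with errors='replace' a
            # truncated trailing byte becomes U+FFFD instead of an error.
            tail = decoder.decode(b'', final=True)
            if tail:
                yield tail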