In a Python (3.10.6) script I am trying to read a large shapefile (.shp 825.4 MB, .dbf 7.8 GB, .shx 235.8 MB) containing 29 million geometries:
import shapefile
n_rec = 0
shp = shapefile.Reader(file_in)
for rec in shp.iterShapeRecords(fields=field_to_read):
    n_rec = n_rec + 1
    print("\r record: " + str(n_rec), end="")
I get this error:
record: 24.212.715
Traceback (most recent call last):
  File "shp2shp.py", line 166, in <module>
    for rec in shp.iterShapeRecords(fields=field_to_read):
  File "/home/user/.local/lib/python3.10/site-packages/shapefile.py", line 1771, in iterShapeRecords
    for shape, record in izip(self.iterShapes(), self.iterRecords(fields=fields)):
  File "/home/user/.local/lib/python3.10/site-packages/shapefile.py", line 1733, in iterRecords
    r = self.__record(oid=i, fieldTuples=fieldTuples, recLookup=recLookup, recStruct=recStruct)
  File "/home/user/.local/lib/python3.10/site-packages/shapefile.py", line 1612, in __record
    recordContents = recStruct.unpack(f.read(recStruct.size))
struct.error: unpack requires a buffer of 324 bytes
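For context, `struct.Struct.unpack` raises this error whenever the buffer it is handed is shorter than the format's fixed size; a minimal sketch (the 324-byte format string is hypothetical, chosen only to mimic the `recStruct.size` in the traceback):

```python
import struct

# A fixed 324-byte record layout (hypothetical, sized to match the
# recStruct.size reported in the traceback).
rec_struct = struct.Struct("324s")

try:
    # f.read() returns b"" (or a short buffer) once the file hits EOF,
    # and unpack() refuses anything but exactly 324 bytes.
    rec_struct.unpack(b"")
except struct.error as e:
    print(e)  # unpack requires a buffer of 324 bytes
```

So in shapefile.py line 1612, `f.read(recStruct.size)` apparently returned fewer than 324 bytes, which is what happens when the read runs past the end of the .dbf file.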
Any attempt to read a record beyond that index fails in the same way. I get the same error with:
rec = shp.record(24212715 + n)
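For what it's worth, a back-of-the-envelope check (assuming the 324-byte record size from the traceback and the file sizes quoted above, and ignoring the DBF header) suggests the .dbf is too small to hold 29 million records, and that the failing index sits near the point where the file simply runs out:

```python
# Rough size check; all numbers are taken from the question/traceback.
record_len = 324               # recStruct.size from the struct.error
num_records = 29_000_000       # geometry count from the question
dbf_bytes = 7_800_000_000      # ~7.8 GB .dbf on disk

print(num_records * record_len)   # 9396000000 bytes needed, > 7.8 GB
print(dbf_bytes // record_len)    # 24074074 records fit, near 24212715
```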
What does "unpack requires a buffer of 324 bytes" mean?
Is there a limit on the number of geometries that can be read with pyshp?