
laspy.convert() seems to work only if you first call laspy.read(). However, the whole point of the chunk iterator is that you don't have to read a full file in at once. This is how I expected laspy.convert() to work with the chunk iterator:

            # READ IN FILE
            with laspy.open(scan.las_file, mode='r') as las_open:
                for chunk in las_open.chunk_iterator(1_000_000):
                    chunk = laspy.convert(chunk, file_version='1.2', point_format_id=2)

                    # WRITE TO FILE
                    file_path = os.path.join(path, filename)
                    header = laspy.LasHeader(point_format=2, version='1.2')
                    with laspy.open(source=file_path, mode='w', header=header) as las_write:
                        las_write.write_points(chunk)

However, it raises this error:

\lib\site-packages\laspy\lib.py", line 318, in convert
    header = copy.deepcopy(source_las.header)
\lib\site-packages\laspy\point\record.py", line 230, in __getattr__
    raise AttributeError("{} is not a valid dimension".format(item)) from None
AttributeError: header is not a valid dimension
Joe

1 Answer


laspy.convert expects an object of type LasData, not bare points; the chunk iterator yields a ScaleAwarePointRecord.

To convert while reading and writing in chunks, create a point record that serves as a reusable buffer and copy each input chunk into it with copy_fields_from.

Example:

import laspy

chunk_size = 1_000_000

with laspy.open(input_path, mode='r') as las_open:
    header = laspy.LasHeader(point_format=6, version='1.4')
    with laspy.open(output_path, header=header, mode="w") as las_write:

        # Allocate the buffer once, with the *output* point format
        buffer_chunk = laspy.PackedPointRecord.zeros(point_count=chunk_size, point_format=header.point_format)

        for input_chunk in las_open.chunk_iterator(chunk_size):
            # The last chunk may be shorter than chunk_size
            output_chunk = buffer_chunk[:len(input_chunk)]
            output_chunk.copy_fields_from(input_chunk)
            las_write.write_points(output_chunk)

Jälv