
I am writing a program in Java that consumes parquet files and processes them row by row. Each file is rather large: roughly 1.3 million rows and 3000 columns of double-precision floats, for an on-disk size of about 6.6 GB.

I have tried implementing the code at https://www.arm64.ca/post/reading-parquet-files-java/, but without success. I ran the following:

package org.example;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.page.PageReadStore;
import org.apache.parquet.example.data.simple.SimpleGroup;
import org.apache.parquet.example.data.simple.convert.GroupRecordConverter;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.io.ColumnIOFactory;
import org.apache.parquet.io.MessageColumnIO;
import org.apache.parquet.io.RecordReader;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.Type;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DirectCopy {

    public static void main(String[] args) throws IOException {
        getParquetData("./data/file1.parquet");
    }

    public static Parquet getParquetData(String filePath) throws IOException {
        List<SimpleGroup> simpleGroups = new ArrayList<>();
        ParquetFileReader reader = ParquetFileReader.open(HadoopInputFile.fromPath(new Path(filePath), new Configuration()));
        MessageType schema = reader.getFooter().getFileMetaData().getSchema();
        List<Type> fields = schema.getFields();
        PageReadStore pages;
        while ((pages = reader.readNextRowGroup()) != null) {
            long rows = pages.getRowCount();

            System.out.println("Row count: " + rows);

            MessageColumnIO columnIO = new ColumnIOFactory().getColumnIO(schema);
            RecordReader recordReader = columnIO.getRecordReader(pages, new GroupRecordConverter(schema));

            for (int i = 0; i < rows; i++) {
                SimpleGroup simpleGroup = (SimpleGroup) recordReader.read();
                simpleGroups.add(simpleGroup);
            }
        }
        reader.close();
        return new Parquet(simpleGroups, fields);
    }

    public static class Parquet {
        private List<SimpleGroup> data;
        private List<Type> schema;

        public Parquet(List<SimpleGroup> data, List<Type> schema) {
            this.data = data;
            this.schema = schema;
        }

        public List<SimpleGroup> getData() {
            return data;
        }

        public List<Type> getSchema() {
            return schema;
        }
    }
}

and got an OutOfMemoryError:

Row count: 1275748
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at org.apache.parquet.column.values.rle.RunLengthBitPackingHybridDecoder.readNext(RunLengthBitPackingHybridDecoder.java:93)
    at org.apache.parquet.column.values.rle.RunLengthBitPackingHybridDecoder.readInt(RunLengthBitPackingHybridDecoder.java:62)
    at org.apache.parquet.column.values.rle.RunLengthBitPackingHybridValuesReader.readInteger(RunLengthBitPackingHybridValuesReader.java:50)
    at org.apache.parquet.column.impl.ColumnReaderImpl$ValuesReaderIntIterator.nextInt(ColumnReaderImpl.java:665)
    at org.apache.parquet.column.impl.ColumnReaderImpl.readRepetitionAndDefinitionLevels(ColumnReaderImpl.java:514)
    at org.apache.parquet.column.impl.ColumnReaderImpl.checkRead(ColumnReaderImpl.java:527)
    at org.apache.parquet.column.impl.ColumnReaderImpl.consume(ColumnReaderImpl.java:638)
    at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:353)
    at org.apache.parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:80)
    at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:75)
    at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:271)
    at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147)
    at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109)
    at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:165)
    at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
    at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:80)
    at org.example.DirectCopy.getParquetData(DirectCopy.java:38)
    at org.example.DirectCopy.main(DirectCopy.java:23)

Process finished with exit code 1

This suggests to me that the columnIO.getRecordReader call is trying to load the entire row group into memory, and that the row group in question contains all 1.3 million rows.
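
To sanity-check this, I believe the row-group layout can be read from the footer alone, without touching any data pages. A minimal sketch, reusing the parquet-hadoop classes already imported above (BlockMetaData is org.apache.parquet.hadoop.metadata.BlockMetaData):

ParquetFileReader footerReader = ParquetFileReader.open(HadoopInputFile.fromPath(new Path("./data/file1.parquet"), new Configuration()));
// Each BlockMetaData entry in the footer describes one row group.
for (org.apache.parquet.hadoop.metadata.BlockMetaData block : footerReader.getFooter().getBlocks()) {
    System.out.println("rows = " + block.getRowCount() + ", uncompressed bytes = " + block.getTotalByteSize());
}
footerReader.close();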

I've tried loading the file with pandas, re-saving it with row_group_size=1000, and re-running. This leads to a different error, earlier, when opening the file, and interestingly the stack trace is printed twice:

Reported exception:
java.lang.OutOfMemoryError: Required array length 2147483639 + 532866 is too large
    at java.base/jdk.internal.util.ArraysSupport.hugeLength(ArraysSupport.java:649)
    at java.base/jdk.internal.util.ArraysSupport.newLength(ArraysSupport.java:642)
    at java.base/java.lang.AbstractStringBuilder.newCapacity(AbstractStringBuilder.java:250)
    at java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:230)
    at java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:571)
    at java.base/java.lang.StringBuilder.append(StringBuilder.java:179)
    at java.base/java.lang.StringBuilder.append(StringBuilder.java:173)
    at java.base/java.util.AbstractCollection.toString(AbstractCollection.java:457)
    at java.base/java.lang.String.valueOf(String.java:4213)
    at java.base/java.lang.StringBuilder.append(StringBuilder.java:173)
    at org.apache.parquet.format.FileMetaData.toString(FileMetaData.java:977)
    at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
    at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
    at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
    at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
    at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:124)
    at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:229)
    at org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:884)
    at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:532)
    at org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:689)
    at org.apache.parquet.hadoop.ParquetFileReader.open(ParquetFileReader.java:583)
    at org.example.DirectCopy.getParquetData(DirectCopy.java:28)
    at org.example.DirectCopy.main(DirectCopy.java:23)
Exception in thread "main" java.lang.OutOfMemoryError: Required array length 2147483639 + 1323 is too large
    at java.base/jdk.internal.util.ArraysSupport.hugeLength(ArraysSupport.java:649)
    at java.base/jdk.internal.util.ArraysSupport.newLength(ArraysSupport.java:642)
    at java.base/java.lang.AbstractStringBuilder.newCapacity(AbstractStringBuilder.java:250)
    at java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:230)
    at java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:727)
    at java.base/java.lang.StringBuffer.append(StringBuffer.java:410)
    at java.base/java.io.StringWriter.write(StringWriter.java:99)
    at org.codehaus.jackson.impl.WriterBasedGenerator._flushBuffer(WriterBasedGenerator.java:1812)
    at org.codehaus.jackson.impl.WriterBasedGenerator._writeString(WriterBasedGenerator.java:987)
    at org.codehaus.jackson.impl.WriterBasedGenerator.writeString(WriterBasedGenerator.java:448)
    at org.codehaus.jackson.map.ser.std.StringSerializer.serialize(StringSerializer.java:28)
    at org.codehaus.jackson.map.ser.std.StringSerializer.serialize(StringSerializer.java:18)
    at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:446)
    at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:150)
    at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:112)
    at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:446)
    at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:150)
    at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:112)
    at org.codehaus.jackson.map.ser.std.StdContainerSerializers$IndexedListSerializer.serializeContents(StdContainerSerializers.java:122)
    at org.codehaus.jackson.map.ser.std.StdContainerSerializers$IndexedListSerializer.serializeContents(StdContainerSerializers.java:71)
    at org.codehaus.jackson.map.ser.std.AsArraySerializerBase.serialize(AsArraySerializerBase.java:86)
    at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:446)
    at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:150)
    at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:112)
    at org.codehaus.jackson.map.ser.std.StdContainerSerializers$IndexedListSerializer.serializeContents(StdContainerSerializers.java:122)
    at org.codehaus.jackson.map.ser.std.StdContainerSerializers$IndexedListSerializer.serializeContents(StdContainerSerializers.java:71)
    at org.codehaus.jackson.map.ser.std.AsArraySerializerBase.serialize(AsArraySerializerBase.java:86)
    at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:446)
    at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:150)
    at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:112)
    at org.codehaus.jackson.map.ser.StdSerializerProvider._serializeValue(StdSerializerProvider.java:610)
    at org.codehaus.jackson.map.ser.StdSerializerProvider.serializeValue(StdSerializerProvider.java:256)
    at org.codehaus.jackson.map.ObjectWriter._configAndWriteValue(ObjectWriter.java:456)
    at org.codehaus.jackson.map.ObjectWriter.writeValue(ObjectWriter.java:379)
    at org.apache.parquet.hadoop.metadata.ParquetMetadata.toJSON(ParquetMetadata.java:62)
    at org.apache.parquet.hadoop.metadata.ParquetMetadata.toPrettyJSON(ParquetMetadata.java:55)
    at org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:886)
    at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:532)
    at org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:689)
    at org.apache.parquet.hadoop.ParquetFileReader.open(ParquetFileReader.java:583)
    at org.example.DirectCopy.getParquetData(DirectCopy.java:28)
    at org.example.DirectCopy.main(DirectCopy.java:23)

Process finished with exit code 1

If I increase row_group_size to 10000, my reads do start to succeed: row counts are printed for the first 19 row groups, and then I get this exception:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at org.apache.parquet.column.values.rle.RunLengthBitPackingHybridDecoder.readNext(RunLengthBitPackingHybridDecoder.java:93)
    at org.apache.parquet.column.values.rle.RunLengthBitPackingHybridDecoder.readInt(RunLengthBitPackingHybridDecoder.java:62)
    at org.apache.parquet.column.values.dictionary.DictionaryValuesReader.readDouble(DictionaryValuesReader.java:101)
    at org.apache.parquet.column.impl.ColumnReaderImpl$2$2.read(ColumnReaderImpl.java:218)
    at org.apache.parquet.column.impl.ColumnReaderImpl.readValue(ColumnReaderImpl.java:460)
    at org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:366)
    at org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
    at org.example.DirectCopy.getParquetData(DirectCopy.java:43)
    at org.example.DirectCopy.main(DirectCopy.java:23)

unless I remove the simpleGroups.add(simpleGroup) call, in which case the whole file is read in 10 min 28 s. For comparison, reading the same file with pandas in Python takes 18.6 s.
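
To be concrete, by removing that call I mean processing each record as soon as it is read and retaining nothing, roughly as in the sketch below (reader and schema are as in the code above; processRow is a hypothetical stand-in for my real per-row logic):

PageReadStore pages;
while ((pages = reader.readNextRowGroup()) != null) {
    long rows = pages.getRowCount();
    MessageColumnIO columnIO = new ColumnIOFactory().getColumnIO(schema);
    RecordReader recordReader = columnIO.getRecordReader(pages, new GroupRecordConverter(schema));
    for (long i = 0; i < rows; i++) {
        SimpleGroup simpleGroup = (SimpleGroup) recordReader.read();
        processRow(simpleGroup); // hypothetical per-row processing; the group is not kept anywhere
    }
}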

It's not clear to me why any of this happens. My questions are:

  1. Why am I getting this heap space error with a row group size of 10000 and 1.3 million rows? The machine has 54 GB of free memory, so reading the 6.6 GB parquet file should not come close to filling it. (I also run with the VM options -Xmx30g and -Xms30g, so the JVM should have access to plenty of memory. Further, pandas in Python has no trouble loading this file.)
  2. Why do I get a different error when the row group size is low?
  3. For my application, where I want to process rows individually, must I ensure that the file is saved with a small row group size if I wish to avoid loading a huge number of rows (and hence consuming a lot of memory) upfront? (A sketch of the per-row access pattern I have in mind follows this list.)
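
The per-row access pattern I'm after is roughly the following, sketched here with the higher-level ParquetReader/GroupReadSupport API from parquet-hadoop rather than ParquetFileReader; I have not verified whether it behaves any differently with respect to row groups and memory:

import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.example.GroupReadSupport;

// Read records one at a time; this loop itself accumulates nothing.
try (ParquetReader<Group> rowReader = ParquetReader.builder(new GroupReadSupport(), new Path("./data/file1.parquet")).build()) {
    Group row;
    while ((row = rowReader.read()) != null) {
        // per-row processing here, e.g. row.getDouble("some_column", 0) for a (hypothetical) double column
    }
}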