
Using Amazon Kinesis Data Analytics with a Java Flink application, I am taking data from a Firehose stream and trying to write it to an S3 bucket as a series of Parquet files. I am hitting the following exception in my CloudWatch logs, which is the only error I can see that might be related.

I have enabled checkpointing as specified in the documentation and included the Flink/Avro dependencies. Running this locally works: the Parquet files are written to local disk when a checkpoint is reached.

The exception:

"message": "Exception type is USER from filter results [UserClassLoaderExceptionFilter -> USER, UserAPIExceptionFilter -> SKIPPED, UserSerializationExceptionFilter -> SKIPPED, UserFunctionExceptionFilter -> SKIPPED, OutOfMemoryExceptionFilter -> NONE, TooManyOpenFilesExceptionFilter -> NONE, KinesisServiceExceptionFilter -> NONE].",
"throwableInformation": [
    "java.lang.Exception: Error while triggering checkpoint 1360 for Source: Custom Source -> Map -> Sink: HelloS3 (1/1)",
    "org.apache.flink.runtime.taskmanager.Task$1.run(Task.java:1201)",
    "java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)",
    "java.util.concurrent.FutureTask.run(FutureTask.java:266)",
    "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)",
    "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)",
    "java.lang.Thread.run(Thread.java:748)",
    "Caused by: java.lang.AbstractMethodError: org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(Lorg/apache/parquet/bytes/BytesInput;IILorg/apache/parquet/column/statistics/Statistics;Lorg/apache/parquet/column/Encoding;Lorg/apache/parquet/column/Encoding;Lorg/apache/parquet/column/Encoding;)V",
    "org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:53)",
    "org.apache.parquet.column.impl.ColumnWriterBase.writePage(ColumnWriterBase.java:315)",
    "org.apache.parquet.column.impl.ColumnWriteStoreBase.flush(ColumnWriteStoreBase.java:152)",
    "org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:27)",
    "org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:172)",
    "org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:114)",
    "org.apache.parquet.hadoop.ParquetWriter.close(ParquetWriter.java:308)",
    "org.apache.flink.formats.parquet.ParquetBulkWriter.finish(ParquetBulkWriter.java:62)",
    "org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:62)",
    "org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:235)",
    "org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:276)",
    "org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:249)",
    "org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:244)",
    "org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:235)",
    "org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.snapshotState(StreamingFileSink.java:347)",
    "org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)",
    "org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)",
    "org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)",
    "org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:395)",
    "org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1138)",
    "org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1080)",
    "org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:754)",
    "org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:666)",
    "org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:584)",
    "org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:114)",
    "org.apache.flink.runtime.taskmanager.Task$1.run(Task.java:1190)",
    "\t... 5 more"

Below are my code snippets. I am seeing my logging when processing the events, and even the logging from the BucketAssigner.

env.setStateBackend(new FsStateBackend("s3a://<BUCKET>/checkpoint"));
env.setParallelism(1);
env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE);

StreamingFileSink<Metric> sink = StreamingFileSink
            .forBulkFormat(new Path("s3a://<BUCKET>/raw"), ParquetAvroWriters.forReflectRecord(Metric.class))
            .withBucketAssigner(new EventTimeBucketAssigner())
            .build();
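
For context, here is roughly how the sink is wired into the job. This is only a sketch: the Kinesis source, the JSON-to-Metric mapper, and the job name are placeholders inferred from the "Source: Custom Source -> Map -> Sink: HelloS3" operator name in the stack trace, not my exact code.

// Sketch only: createKinesisSource() and parseMetric() are hypothetical helpers.
DataStream<String> raw = env.addSource(createKinesisSource());
DataStream<Metric> metrics = raw
            .map(value -> parseMetric(value))
            .returns(Metric.class); // type hint, since lambdas lose generic type information

metrics.addSink(sink).name("HelloS3");

env.execute("kinesis-to-parquet");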

My pom:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-parquet_2.11</artifactId>
    <version>1.11-SNAPSHOT</version>
</dependency>

<dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-avro</artifactId>
    <version>1.11.0</version>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>3.2.1</version>
</dependency>

My AWS configuration has 'Snapshots' enabled. Write permissions to the bucket are working when I use row-format writing instead of bulk writing.
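
For comparison, the row-format version that does write to the bucket looks roughly like this (a sketch only; the SimpleStringEncoder and the output path are placeholders rather than my exact code):

// Row-format sink writing plain text instead of Parquet; no bulk writer is involved.
StreamingFileSink<Metric> rowSink = StreamingFileSink
            .forRowFormat(new Path("s3a://<BUCKET>/raw-rows"), new SimpleStringEncoder<Metric>("UTF-8"))
            .withBucketAssigner(new EventTimeBucketAssigner())
            .build();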

I am really unsure what to look for now to get this working.

  • check your classpath; try to be sure flink-parquet doesn't pull in transitive hadoop/parquet deps which might conflict with the ones specified in your pom – morsik Feb 07 '20 at 13:31
  • Thanks for the suggestion. I suspected it might be related to this, but as checkpointing to S3 was working I assumed that was all I needed. In fact it wasn't: there are two required S3 dependencies for getting checkpointing and bulk writing working. Got there in the end. Thanks for the tip – J T Feb 14 '20 at 14:06
  • I seem to be facing the same issue, the only difference being I am unable to write to my local disk. What was the fix you put in for this? – Adi Kish Aug 20 '20 at 17:01
  • I found the issue. The encoding format was basically getting included from two places: parquet-avro and flink-parquet. I excluded parquet-hadoop from flink-parquet and it started working. – Adi Kish Aug 20 '20 at 17:16
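
A sketch of the kind of exclusion described in the last comment (the exact conflicting artifact may differ between builds; mvn dependency:tree is the way to confirm which module is pulling in the clashing Parquet classes):

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-parquet_2.11</artifactId>
    <version>1.11-SNAPSHOT</version>
    <!-- Exclude the Parquet runtime pulled in transitively by flink-parquet so that
         the Parquet classes come from the explicitly declared parquet-avro 1.11.0. -->
    <exclusions>
        <exclusion>
            <groupId>org.apache.parquet</groupId>
            <artifactId>parquet-hadoop</artifactId>
        </exclusion>
    </exclusions>
</dependency>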

0 Answers