
I'm new to Gobblin and trying to ingest data from Kafka to HDFS. I was able to follow the Kafka-HDFS ingestion example successfully. But now I need to add a time-based writer partitioner to my job. I went through the TimeBasedWriterPartitioner thread on the Gobblin Google group and came up with the solution below, as suggested by Zongjun.

  1. I created a separate Java project for my time-based writer partitioner class:
import gobblin.writer.partitioner.TimeBasedWriterPartitioner;

public class LogJsonWriterPartitioner extends TimeBasedWriterPartitioner<byte[]> {
    public LogJsonWriterPartitioner(gobblin.configuration.State state, int numBranches, int branchId) {
        super(state, numBranches, branchId);
    }

    @Override
    public long getRecordTimestamp(byte[] payload) {
        return System.currentTimeMillis();
    }
}
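
Note: returning System.currentTimeMillis() partitions records by ingestion time. If the JSON payload carries its own event timestamp, something like the sketch below could be used instead; it assumes a hypothetical "timestamp" field holding epoch milliseconds and uses the Gson dependency that is already in the pom:

// additional imports
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import java.nio.charset.StandardCharsets;

    @Override
    public long getRecordTimestamp(byte[] payload) {
        try {
            // Parse the raw bytes as JSON and read the (hypothetical) "timestamp"
            // field containing epoch milliseconds.
            JsonObject json = new JsonParser()
                    .parse(new String(payload, StandardCharsets.UTF_8))
                    .getAsJsonObject();
            if (json.has("timestamp")) {
                return json.get("timestamp").getAsLong();
            }
        } catch (RuntimeException e) {
            // Fall back to ingestion time if the payload is not valid JSON.
        }
        return System.currentTimeMillis();
    }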

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.pm.data.gobblin.kafka</groupId>
    <artifactId>LogJsonWriterPartitioner</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>com.linkedin.gobblin</groupId>
            <artifactId>gobblin-api</artifactId>
            <version>0.6.2</version>
        </dependency>
        <dependency>
            <groupId>com.linkedin.gobblin</groupId>
            <artifactId>gobblin-core</artifactId>
            <version>0.6.2</version>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.3</version>
        </dependency>
        <dependency>
            <groupId>org.testng</groupId>
            <artifactId>testng</artifactId>
            <version>6.9.10</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

</project>
  2. Created a jar from the above project and copied it to the gobblin-dist/lib directory.
  3. Updated gobblin-mapreduce.sh in the gobblin-dist/bin directory and added the new jar name under LIBJARS.
  4. Created a job file as below:
job.name=GobblinKafkaQuickStart
job.group=GobblinKafka
job.description=Gobblin quick start job for Kafka
job.lock.enabled=false
fs.uri=file:///

kafka.brokers=localhost:9092

source.class=org.apache.gobblin.source.extractor.extract.kafka.KafkaSimpleSource
extract.namespace=org.apache.gobblin.extract.kafka

writer.builder.class=org.apache.gobblin.writer.SimpleDataWriterBuilder
writer.partitioner.class=com.pm.data.gobblin.kafka.LogJsonWriterPartitioner
writer.partition.granularity=day
writer.partition.pattern=YYYY-MM-dd
writer.partition.timezone=UTC
writer.file.path.type=tablename
writer.destination.type=HDFS
writer.output.format=txt

data.publisher.type=org.apache.gobblin.publisher.BaseDataPublisher
data.publisher.replace.final.dir=false
data.publisher.final.dir=/home/myuser/Desktop/Gobblin

mr.job.max.mappers=1

metrics.reporting.file.enabled=true
metrics.log.dir=${gobblin.cluster.work.dir}/metrics
metrics.reporting.file.suffix=txt

bootstrap.with.offset=earliest
  5. Then I started Gobblin in standalone mode using the gobblin-standalone.sh script in the bin directory.

I got the below error in logs/gobblin-current.log:

 org.apache.gobblin.runtime.fork.Fork  250 - Fork 0 of task task_GobblinKafkaQuickStart_1590391135660_0 failed to process data records. Set throwable in holder org.apache.gobblin.runtime.ForkThrowableHolder@433cf3c0
java.io.IOException: java.lang.ClassNotFoundException: com.pm.data.logging.gobblin.LogJsonWriterPartitioner
    at org.apache.gobblin.writer.PartitionedDataWriter.<init>(PartitionedDataWriter.java:135)
    at org.apache.gobblin.runtime.fork.Fork.buildWriter(Fork.java:534)
    at org.apache.gobblin.runtime.fork.Fork.buildWriterIfNotPresent(Fork.java:542)
    at org.apache.gobblin.runtime.fork.Fork.processRecord(Fork.java:502)
    at org.apache.gobblin.runtime.fork.AsynchronousFork.processRecord(AsynchronousFork.java:103)
    at org.apache.gobblin.runtime.fork.AsynchronousFork.processRecords(AsynchronousFork.java:86)
    at org.apache.gobblin.runtime.fork.Fork.run(Fork.java:243)
    at org.apache.gobblin.util.executors.MDCPropagatingRunnable.run(MDCPropagatingRunnable.java:39)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.pm.data.logging.gobblin.LogJsonWriterPartitioner
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.gobblin.writer.PartitionedDataWriter.<init>(PartitionedDataWriter.java:128)
    ... 12 more

However, when I modify my job file to writer.partitioner.class=LogJsonWriterPartitioner, the error changes to java.lang.NoClassDefFoundError: gobblin/writer/partitioner/TimeBasedWriterPartitioner.

Could someone help me overcome this problem?

GihanDB

1 Answer


For the first problem, make sure that you have the correct package statement for LogJsonWriterPartitioner; I would expect it to be package com.pm.data.logging.gobblin.
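
For example (a sketch; the package declaration has to match the fully qualified name you configure in writer.partitioner.class):

package com.pm.data.logging.gobblin;

// The fully qualified name com.pm.data.logging.gobblin.LogJsonWriterPartitioner
// must match the value of writer.partitioner.class in the job file.
public class LogJsonWriterPartitioner extends TimeBasedWriterPartitioner<byte[]> {
    // constructor and getRecordTimestamp as in the question
}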

For the second, it looks like the dependencies in pom.xml are not correct, and that is why TimeBasedWriterPartitioner cannot be loaded. com.linkedin.gobblin was renamed to org.apache.gobblin a long time ago, and the version numbers are higher; the most recent release is 0.14.0.
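
For example, as a sketch only, assuming the artifact ids stay the same under the new group id:

<dependency>
    <groupId>org.apache.gobblin</groupId>
    <artifactId>gobblin-api</artifactId>
    <version>0.14.0</version>
</dependency>
<dependency>
    <groupId>org.apache.gobblin</groupId>
    <artifactId>gobblin-core</artifactId>
    <version>0.14.0</version>
</dependency>

The import in the partitioner would then presumably be org.apache.gobblin.writer.partitioner.TimeBasedWriterPartitioner rather than gobblin.writer.partitioner.TimeBasedWriterPartitioner.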

alex
  • Your answer seems okay but I can't prove it, because when I try the above changes dependency resolution fails with "Could not find artifact io.confluent:kafka-schema-registry-client:jar:2.0.1 in central". I checked "https://mvnrepository.com/artifact/io.confluent/kafka-schema-registry-client/2.0.1" and it seems that particular version is not available. Could you suggest a workaround to overcome this problem? – GihanDB May 28 '20 at 05:42