I am following the tutorial for streaming cube building from
Kylin Cube from Streaming (Kafka)
All the properties are set as described on that page, but when I trigger a cube build, it fails in step 1, "Save data from Kafka", with:
org.apache.kylin.engine.mr.exception.MapReduceException: no counters for job job_1547096967734_0086
at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:173)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:164)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:70)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:164)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I have seen Apache kylin cube fails "no counters for job",
but the use case there is normal cube building, not streaming cube building via Kafka.
The following entry appears in mapred-root-historyserver.log, but it didn't seem to help:
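Since the Kylin step log only reports the missing counters, the real mapper error presumably has to be dug out of the YARN container logs for the failed job. A sketch of how I would fetch them (the job id is taken verbatim from the error above; the `yarn logs` call itself is commented out because it needs the running cluster):

```shell
# MR job id reported in the Kylin step error.
JOB_ID=job_1547096967734_0086

# The YARN application id uses the same numeric suffix, only the
# prefix differs ("application_" instead of "job_").
APP_ID=application_${JOB_ID#job_}
echo "$APP_ID"   # application_1547096967734_0086

# Pull the aggregated container logs for the failed attempt:
# yarn logs -applicationId "$APP_ID" | less
```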
2019-01-22 11:33:15,557 INFO org.apache.hadoop.mapreduce.v2.hs.CompletedJob:
Loading job: job_1547096967734_0087 from file:
hdfs://localhost:9000/tmp/hadoop-
yarn/staging/history/done_intermediate/root/job_1547096967734_0087-
1548149562328-root-Kylin_Save_Kafka_Data_kylin_streaming_cube_Step-
1548149585065-0-0-FAILED-default-1548149566816.jhist
2019-01-22 11:33:15,557 INFO org.apache.hadoop.mapreduce.v2.hs.CompletedJob:
Loading history file: [hdfs://localhost:9000/tmp/hadoop-
yarn/staging/history/done_intermediate/root/job_1547096967734_0087-
1548149562328-root-Kylin_Save_Kafka_Data_kylin_streaming_cube_Step-
1548149585065-0-0-FAILED-default-1548149566816.jhist]
2019-01-22 11:33:15,572 INFO org.apache.hadoop.mapreduce.jobhistory.
JobSummary: jobId=job_1547096967734_0087,submitTime=1548149562328
,launchTime=1548149566816,firstMapTaskLaunchTime=1548149570064,
firstReduceTaskLaunchTime=0,finishTime=1548149585065,resourcesPerMap
=1024,resourcesPerReduce=0,numMaps=1,numReduces=0,user=root,queue=
default,status=FAILED,mapSlotSeconds=8,reduceSlotSeconds=0,jobName=
Kylin_Save_Kafka_Data_kylin_streaming_cube_Step
2019-01-22 11:33:15,572 INFO org.apache.hadoop.mapreduce.v2.hs.
HistoryFileManager: Deleting JobSummary file: [hdfs://localhost:9000/
tmp/hadoop-yarn/staging/history/done_intermediate/
root/job_1547096967734_0087.summary]
2019-01-22 11:33:15,574 INFO
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Moving
hdfs://localhost:9000/tmp/hadoop-
yarn/staging/history/done_intermediate/root/job_1547096967734_0087-
1548149562328-root-Kylin_Save_Kafka_Data_kylin_streaming_cube_Step-
1548149585065-0-0-FAILED-default-1548149566816.jhist to
hdfs://localhost:9000/tmp/hadoop-
yarn/staging/history/done/2019/01/22/000000/job_1547096967734_0087-
1548149562328-root-Kylin_Save_Kafka_Data_kylin_streaming_cube_Step-
1548149585065-0-0-FAILED-default-1548149566816.jhist
2019-01-22 11:33:15,574 INFO
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Moving
hdfs://localhost:9000/tmp/hadoop-
yarn/staging/history/done_intermediate/root/job_1547096967734_0087_conf.xml
to hdfs://localhost:9000/tmp/hadoop-
yarn/staging/history/done/2019/01/22/000000/job_1547096967734_0087_conf.xml
2019-01-22 11:35:30,160 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory:
Starting scan to move intermediate done files
This is a completely manually installed Kylin environment; the version specifications are:
apache-hive-2.3.4-bin
apache-kylin-2.5.2-bin-hbase1x
hadoop-2.9.1
hbase-1.4.9
kafka_2.11-2.0.0
spark-2.3.2-bin-hadoop2.7
zookeeper-3.4.13
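In case it matters, this is how the Kafka side can be sanity-checked with the stock CLI tools (the topic name `kylin_streaming_topic` is an assumption based on the tutorial; the broker/ZooKeeper addresses are the local defaults and may need adjusting):

```shell
# Topic name from the Kylin streaming tutorial -- an assumption; adjust
# to whatever topic the streaming table was actually defined against.
TOPIC=kylin_streaming_topic
echo "topic to check: $TOPIC"

# Confirm the topic exists (Kafka 2.0 still lists via ZooKeeper):
# bin/kafka-topics.sh --zookeeper localhost:2181 --list | grep "$TOPIC"

# Confirm the topic is actually receiving messages:
# bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
#     --topic "$TOPIC" --from-beginning --max-messages 5
```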
Any help will be greatly appreciated.