
Hadoop 3.3.1
Hive 2.3.9
Flume 1.9.0

What I want: when Hive appends to its log, Flume monitors the log file and sinks the new entries to HDFS. But nothing shows up in HDFS.
I have downloaded and copied the jars below to /home/hadoop/flume/lib:

commons-configuration-1.10.jar
hadoop-common-3.3.1.jar
hadoop-hdfs-3.3.1.jar
hadoop-auth-3.3.1.jar
htrace-core-4.0.0-incubating.jar
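
For reference, most of these Hadoop jars can be copied straight out of the Hadoop distribution instead of downloaded separately (the paths below assume Hadoop is installed under /home/hadoop/hadoop-3.3.1):

cp /home/hadoop/hadoop-3.3.1/share/hadoop/common/hadoop-common-3.3.1.jar /home/hadoop/flume/lib/
cp /home/hadoop/hadoop-3.3.1/share/hadoop/common/lib/hadoop-auth-3.3.1.jar /home/hadoop/flume/lib/
cp /home/hadoop/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-3.3.1.jar /home/hadoop/flume/lib/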

Then I created flume-file-hdfs.conf under /home/hadoop/flume/job:

# Name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# Describe/configure the source
a2.sources.r2.type = exec
a2.sources.r2.command = tail -F /tmp/hadoop/hive.log
a2.sources.r2.shell = /bin/bash -c

# Describe the sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://localhost:9000/flume/%Y%m%d/%H
a2.sinks.k2.hdfs.filePrefix = logs- 
a2.sinks.k2.hdfs.round = true
a2.sinks.k2.hdfs.roundValue = 1
a2.sinks.k2.hdfs.roundUnit = hour
a2.sinks.k2.hdfs.useLocalTimeStamp = true
a2.sinks.k2.hdfs.batchSize = 5
a2.sinks.k2.hdfs.fileType = DataStream
a2.sinks.k2.hdfs.rollInterval = 30
a2.sinks.k2.hdfs.rollSize = 134217700
a2.sinks.k2.hdfs.rollCount = 0

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel 
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
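
Since the exec source just runs tail -F, the tailed path has to match where Hive actually writes its log (by default Hive logs to /tmp/<user>/hive.log, controlled by hive.log.dir in conf/hive-log4j2.properties). A quick sanity check:

ls -l /tmp/hadoop/hive.log       # does the file exist?
tail -n 5 /tmp/hadoop/hive.log   # does it grow while Hive runs?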

Then I run the Flume agent:

bin/flume-ng agent --conf conf/ --name a2 --conf-file job/flume-file-hdfs.conf
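
To see sink errors directly on the terminal rather than only in Flume's log file, the agent can also be started with console logging:

bin/flume-ng agent --conf conf/ --name a2 --conf-file job/flume-file-hdfs.conf -Dflume.root.logger=INFO,console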

I start Hadoop:

sbin/start-dfs.sh
sbin/start-yarn.sh
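
jps should then list the HDFS and YARN daemons before Flume can write to hdfs://localhost:9000:

jps
# expect roughly: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager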

Then I perform some Hive operations:

bin/hive

I run show databases; five times.

But I don't see any files under http://localhost:9870/explorer.html#/flume/.
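
The same check from the command line, which should list any files the sink has rolled under /flume:

bin/hdfs dfs -ls -R /flume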

And I found this error in the output after running the bin/flume-ng command:

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.NoSuchMethodError: org.apache.htrace.core.Tracer$Builder.<init>(Ljava/lang/String;)V
    at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3460)
    at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:255)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:247)
    at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:727)
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
    at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:724)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

1 Answer


I downloaded the newest htrace-core4-4.2.0-incubating.jar, put it in /home/hadoop/flume/lib in place of the old htrace-core-4.0.0-incubating.jar, and the error disappeared. The 4.0.0 jar lacks the Tracer.Builder(String) constructor that Hadoop 3.3.1's FsTracer calls, which is exactly the NoSuchMethodError in the stack trace above.
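
A minimal sketch of the swap (assuming the new jar was downloaded to the current directory):

rm /home/hadoop/flume/lib/htrace-core-4.0.0-incubating.jar     # remove the old jar missing the constructor
cp htrace-core4-4.2.0-incubating.jar /home/hadoop/flume/lib/   # add a version that has Tracer.Builder(String)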
