I am trying to run a simple SELECT query against a Hive table on a cluster through the Shark Java API.

However I get this error message:

14/01/15 17:25:54 INFO cluster.ClusterTaskSetManager: Loss was due to java.lang.NoClassDefFoundError
java.lang.NoClassDefFoundError: Could not initialize class com.google.common.cache.CacheBuilder
at org.apache.hadoop.hdfs.DomainSocketFactory.<init>(DomainSocketFactory.java:46)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:456)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:105)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:93)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:83)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:51)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:237)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:226)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:29)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:237)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:226)
at ....

Followed by this error:

14/01/15 17:25:54 INFO cluster.ClusterTaskSetManager: Loss was due to java.lang.IncompatibleClassChangeError
java.lang.IncompatibleClassChangeError: class com.google.common.cache.CacheBuilder$3 has interface com.google.common.base.Ticker as super class
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at com.google.common.cache.CacheBuilder.<clinit>(CacheBuilder.java:207)
at org.apache.hadoop.hdfs.DomainSocketFactory.<init>(DomainSocketFactory.java:46)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:456)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:105)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:93)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:83)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:51)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:237)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:226)

It looks like a problem with the Guava dependency, but I just can't figure out what it is.

I am using Spark 0.8.0, Shark 0.8.0, Hive 0.9.0, and Hadoop 2.0.0 (CDH 4.5.0).

The only dependencies in my pom.xml that pull in Guava are:

<dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.9.3</artifactId>
        <version>0.8.0-incubating</version>
</dependency>
<dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.0.0-cdh4.5.0</version>
</dependency>
<dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-exec</artifactId>
        <version>0.9.0</version>
</dependency>

Does anyone know how to solve this issue?

Thanks.

  • Have you tried `mvn dependency:tree` to compare the versions they require? Obviously hdfs is not getting the Guava version it expects. – Frank Pavageau Jan 15 '14 at 22:59
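
As the comment suggests, the Maven dependency plugin's tree goal can show which Guava versions each artifact pulls in; filtering the output to Guava keeps it readable (the -Dincludes filter and -Dverbose flag are standard options of that goal):

        mvn dependency:tree -Dverbose -Dincludes=com.google.guava:guava

With -Dverbose, the output also lists the Guava versions omitted by Maven's conflict resolution, which reveals which copy actually wins.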

1 Answer

All three of the dependencies you reference pull in different versions of Guava.

It appears that Hadoop is looking for Guava's CacheBuilder, which was added in Guava 10.0, but the version from Hive (r09) must be the one taking precedence.

My suggestion would be to use Maven's dependency exclusions to prevent Maven from pulling in Guava via Hive. You may want to exclude it from Hadoop as well, so that you can be sure the latest of the three (14.0) is the one that gets used.
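
A minimal sketch of such an exclusion on the Hive dependency (assuming its transitive Guava comes in under the usual com.google.guava:guava coordinates):

<dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-exec</artifactId>
        <version>0.9.0</version>
        <exclusions>
                <!-- keep Hive's Guava r09 off the classpath so a newer version wins -->
                <exclusion>
                        <groupId>com.google.guava</groupId>
                        <artifactId>guava</artifactId>
                </exclusion>
        </exclusions>
</dependency>

The same <exclusions> block can be added to the hadoop-client dependency if you want to exclude its copy as well.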

  • Thanks for the answer. The problem was that I had added the Hive\lib folder to the Spark classpath, and that folder contained a Guava r09 jar. It was a problem with how I had configured Spark, not a Maven issue. – Radu C. Feb 07 '14 at 15:24