
I've been pulling my hair out trying to install Accumulo on a Cloudera QuickStart VM (I've found QuickStart to be anything but). I'm attempting to install it via Cloudera Manager (which I thought would have been a lot more plug and play), however at step 5, when trying to start the Accumulo service, the process fails. Digging into the logs, I find that it has not been able to start the Master, Tracer, Tablet Server, or Garbage Collector. The stderr for the Master gives me the following:

++ hostname
+ HOST=quickstart.cloudera
+ '[' master = monitor -a '' = true ']'
+ exec /usr/lib/accumulo/bin/accumulo master --address quickstart.cloudera
grep: /var/run/cloudera-scm-agent/process/26-accumulo16-ACCUMULO16_MASTER/masters: No such file or directory
log4j:WARN No appenders could be found for logger (org.apache.accumulo.start.classloader.AccumuloClassLoader).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Uncaught exception: com/google/common/base/Preconditions
java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
    at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:325)
    at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:338)
    at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:420)
    at org.apache.hadoop.io.WritableComparator.<init>(WritableComparator.java:128)
    at org.apache.hadoop.io.WritableComparator.<init>(WritableComparator.java:116)
    at org.apache.hadoop.io.Text$Comparator.<init>(Text.java:360)
    at org.apache.hadoop.io.Text.<clinit>(Text.java:374)
    at org.apache.accumulo.server.master.Master.<clinit>(Master.java:192)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.accumulo.start.classloader.AccumuloClassLoader.loadClass(AccumuloClassLoader.java:378)
    at org.apache.accumulo.start.classloader.AccumuloClassLoader.loadClass(AccumuloClassLoader.java:385)
    at org.apache.accumulo.start.Main.main(Main.java:42)
Caused by: java.lang.ClassNotFoundException: com.google.common.base.Preconditions
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 13 more

Any help would be appreciated. Unfortunately I don't speak much Java - I'm guessing I'm missing a crucial package, but if that's the case I'm confused as to why it isn't installed as part of the Accumulo setup (FYI, this is a brand new Cloudera QuickStart VM).
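
For what it's worth, I assume a check along these lines would at least confirm whether the class named in the stack trace is present in the Guava jar on the VM (the jar path is the one referenced in my accumulo-site.xml, quoted in the comments below, so adjust it if yours differs):

# confirm the Guava jar exists and actually contains the class from the stack trace
ls -l /usr/jars/guava-11.0.2.jar
unzip -l /usr/jars/guava-11.0.2.jar | grep 'com/google/common/base/Preconditions'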

jhole89
  • You need to have the guava libs in your path. How to do that / why this isn't already done I can't help you with, but I've run across this type of stuff many times http://stackoverflow.com/questions/28317911/exception-in-thread-main-java-lang-noclassdeffounderror-com-google-common-bas – FuriousGeorge Mar 09 '16 at 17:31
  • thanks, apologies but how do I add this to my path? do you mean the accumulo user path or is this the classpath.xml? – jhole89 Mar 09 '16 at 21:35
  • my current accumulo-site.xml has the following `general.classpaths`: `$ACCUMULO_HOME/lib/[^.].*.jar,$HADOOP_CONF_DIR,$HADOOP_CLIENT_HOME/[^.](?!lf4j-log4j|uava|vro).*-[0-9a.]*.jar,$HADOOP_CLIENT_HOME/slf4j-log4j12.jar,$HADOOP_CLIENT_HOME/avro.jar,$HADOOP_CLIENT_HOME/[^.](?!ookeeper).*-[0-9.]*(?:-[^-]*)?-cdh.*.jar,$ZOOKEEPER_HOME/zookeeper.*-[0-9].*.jar,/usr/jars/guava-11.0.2.jar,/usr/lib/hadoop/lib/guava-11.0.2.jar` – jhole89 Mar 09 '16 at 22:14
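
A rough way to see which of those entries the Accumulo class loader actually resolves is to ask it directly - `accumulo classpath` should be available in the 1.6 launcher, and the wrapper path below is the one shown in the stderr above:

# print the classpath Accumulo resolves from general.classpaths and
# check whether either Guava entry survived the regexes above;
# an empty result would line up with the NoClassDefFoundError
/usr/lib/accumulo/bin/accumulo classpath | grep -i guava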

1 Answer


Regarding the services not starting: make sure that you have the Gateway role (or some HDFS role) assigned to your Accumulo Master, Monitor, Tracer, and GC nodes. I was having the same issue, and once I realized that Accumulo couldn't see HDFS, I added the Gateway role. I was then able to initialize Accumulo and start the related master services.

You can check whether Accumulo can see the Hadoop file system by running `hdfs dfs -ls /` on the Accumulo master. If you get an error saying

Warning: fs.defaultFS is not set when running 'ls' command

and it shows you the local file system, then you know that the Accumulo master can't see HDFS, and therefore can't find the Accumulo instance ID it needs.
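
A minimal way to run that check end to end (the fs.defaultFS value shown is only what the QuickStart VM typically uses; yours may differ):

# run on the Accumulo master node
hdfs getconf -confKey fs.defaultFS   # expect an hdfs:// URI, e.g. hdfs://quickstart.cloudera:8020
hdfs dfs -ls /                       # expect HDFS directories such as /user and /tmp, not your local root

If the first command prints file:/// instead, the node has no HDFS client configuration, which is consistent with the missing Gateway role described above.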

DJ B.