
I'm trying to install Scala IDE 4.7 in my Cloudera VM 5.10, which comes preconfigured with JDK 1.7 and Spark 1.6. I installed JDK 1.8 under /opt/ after uninstalling the default JDK that Cloudera ships at /usr/java/jdk1.7, and I added the Java environment variables to .bash_profile. With that, I was able to install Scala IDE successfully. But now the whole Hadoop ecosystem still points to the old JDK 1.7, which I have uninstalled, and throws an error when running. Can anyone tell me where to configure the Java variables so the Hadoop ecosystem works with the new JDK 1.8?
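For reference, the variables I added to .bash_profile look roughly like this (the /opt/jdk1.8.0_151 path is only an example; substitute the directory you actually unpacked the JDK into):

```bash
# ~/.bash_profile -- point the shell at the new JDK under /opt
# (example path; use the directory you actually installed to)
export JAVA_HOME=/opt/jdk1.8.0_151
export PATH=$JAVA_HOME/bin:$PATH
```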

1 Answer


Step 1: Stop all the Hadoop services.
Step 2: Edit the bigtop-utils file and add the location of JDK 1.8. Create a variable BIGTOP_MAJOR=8; the order of preference it uses to pick a JDK is 6, 7, 8, then OpenJDK.
Step 3: Save and reboot.
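A minimal sketch of those steps, assuming the bigtop-utils defaults file lives at /etc/default/bigtop-utils (its usual location on a CDH package install) and the JDK was unpacked under /opt; the JAVA_HOME path is an example, and BIGTOP_MAJOR=8 is the setting described above:

```bash
# Step 1: stop the Hadoop services (CDH installs them as init scripts)
for svc in /etc/init.d/hadoop-*; do sudo "$svc" stop; done

# Step 2: tell bigtop-utils where the new JDK lives
# (example JDK path; adjust to your install directory)
sudo tee -a /etc/default/bigtop-utils <<'EOF'
export JAVA_HOME=/opt/jdk1.8.0_151
export BIGTOP_MAJOR=8
EOF

# Step 3: reboot so every service picks up the new JAVA_HOME
sudo reboot
```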