
I am trying to set up Hadoop on my local machine and was following this. I have also set up HADOOP_HOME.

This is the command I am trying to run now

hduser@ubuntu:~$ /usr/local/hadoop/bin/start-all.sh

And this is the error I get

-su: /usr/local/hadoop/bin/start-all.sh: No such file or directory

This is what I added to my $HOME/.bashrc file

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-8-oracle

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
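
For reference, a minimal way to reload and sanity-check these settings in the current shell (assuming the paths above match your install):

source ~/.bashrc
echo $HADOOP_HOME    # should print /usr/local/hadoop
echo $JAVA_HOME      # should print /usr/lib/jvm/java-8-oracle
which hadoop         # should resolve to /usr/local/hadoop/bin/hadoop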

EDIT: After trying the solution given by Mahendra, I am getting the following output:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-mmt-HP-ProBook-430-G3.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-mmt-HP-ProBook-430-G3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-mmt-HP-ProBook-430-G3.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-mmt-HP-ProBook-430-G3.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-mmt-HP-ProBook-430-G3.out
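
As the deprecation notice suggests, the same daemons can also be started individually with the two replacement scripts (assuming the same /usr/local/hadoop install):

/usr/local/hadoop/sbin/start-dfs.sh
/usr/local/hadoop/sbin/start-yarn.sh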

Legendary_Hunter

1 Answer


Try to run:

hduser@ubuntu:~$ /usr/local/hadoop/sbin/start-all.sh

start-all.sh and stop-all.sh are located in the sbin directory, while the hadoop binary is located in the bin directory.
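
You can confirm this layout on your install (the exact file list varies a little between Hadoop versions):

ls /usr/local/hadoop/bin     # hadoop, hdfs, yarn, mapred, ...
ls /usr/local/hadoop/sbin    # start-all.sh, stop-all.sh, start-dfs.sh, start-yarn.sh, ...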

Also update your .bashrc with:

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

so that you can run start-all.sh directly from any directory.
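
For example, after reloading .bashrc the script should resolve without its full path:

source ~/.bashrc
start-all.sh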

Mahendra
  • Hi, it's been a long time since I asked this question. Your above command works and is giving the output I have posted in the edit. Is that the correct output? – Legendary_Hunter Mar 31 '16 at 06:39
  • Refer to the accepted answer of [how-to-check-if-hadoop-daemons-are-running](http://stackoverflow.com/questions/15555965/how-to-check-if-hadoop-daemons-are-running). In local mode you can also check the Java processes using the `$JAVA_HOME/bin/jps` command – Mahendra Mar 31 '16 at 09:41
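
Following the comment above, a quick sanity check looks like this (process IDs will differ; the daemon names match the startup log posted in the question):

$JAVA_HOME/bin/jps
# Expected processes alongside Jps itself:
# NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager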