
I am trying to use the command below to list my directories in HDFS:

ubuntu@ubuntu:~$ hadoop fs -ls hdfs://127.0.0.1:50075/
ls: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException:
Protocol message end-group tag did not match expected tag.;
Host Details : local host is: "ubuntu/127.0.0.1"; destination host is: "ubuntu":50075;

Here is my /etc/hosts file

127.0.0.1       ubuntu localhost
#127.0.1.1      ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

How do I properly use hdfs:// to list my directories?

I am using Cloudera 4.3 on Ubuntu 12.04.


7 Answers

---

HDFS is not running on port 50075. To check your HDFS port, use the following command on Linux:

hdfs getconf -confKey fs.default.name

You will get output something like:

hdfs://hmaster:54310

Then correct your URL accordingly.
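As a sketch of "correct your URL accordingly": once `hdfs getconf` prints the filesystem URI, the host and port can be split out with plain shell parameter expansion. The `hdfs://hmaster:54310` value below is just the example output from above, not a value from your cluster:

```shell
# Example URI; in practice capture it with:
#   NN_URI=$(hdfs getconf -confKey fs.default.name)
NN_URI="hdfs://hmaster:54310"

# Strip everything up to the last ':' to get the port,
# then strip the scheme and the ':port' suffix to get the host.
NN_PORT=${NN_URI##*:}
NN_HOST=${NN_URI#hdfs://}; NN_HOST=${NN_HOST%:*}

echo "host=$NN_HOST port=$NN_PORT"
# Then list directories against the correct address:
#   hadoop fs -ls "$NN_URI"/
```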

---

In Cloudera Manager, check the NameNode for the configuration item "NameNode Service RPC Port" (dfs.namenode.servicerpc-address) and use that port number in the URL. It should then work fine.

---

Is your NameNode running on port 50075? You actually don't need to specify the URI just to list the directories. Simply use hadoop fs -ls / and it will list everything under your root directory.

---

In /usr/local/hadoop/etc/hadoop/core-site.xml

In place of localhost, use 0.0.0.0, i.e.

Change <value>hdfs://localhost:50075</value> to

<value>hdfs://0.0.0.0:50075</value>

This solved the problem for me

---

Check your hostname. The same name (ubuntu) should be present in both your /etc/hostname file and your /etc/hosts file.
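A quick sanity check for this is a minimal sketch like the following; it assumes the hostname appears as a whole word in /etc/hosts:

```shell
# Print the kernel hostname, the configured hostname, and any
# matching /etc/hosts entry; all three should agree on one name.
hostname
cat /etc/hostname
grep -w "$(hostname)" /etc/hosts || echo "WARNING: $(hostname) not found in /etc/hosts"
```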

---

Make sure the TCP port you connect to is the NameNode's RPC port, which is defined in hdfs-site.xml:

<property>
  <name>dfs.namenode.rpc-address.nn1</name>
  <value>r101072036.sqa.zmf:9000</value>
</property>

My problem was that I used the HTTP port to connect to the NameNode, which causes the same exception you are seeing.

The HTTP port is also configured in hdfs-site.xml:

<property>
  <name>dfs.namenode.http-address.nn1</name>
  <value>r101072036.sqa.zmf:8000</value>
</property>
---

This error arises because:

  1. The client cannot contact the NameNode.
  2. The NameNode might not be running (you can check by running the jps command).
  3. Something else may be bound to that port: check what is running on it with netstat -tulpn | grep 8080 and kill it with kill -9 <PID>.
  4. Restart the NameNode.
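If netstat is unavailable, a minimal bash sketch for checking whether a local TCP port is already taken looks like this; port 8020 below is only an example NameNode RPC port, not a value from the question:

```shell
# Succeeds if something is listening on the given local TCP port.
# Uses bash's /dev/tcp pseudo-device, so no netstat/ss is required.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 8020; then
  echo "something is already listening on 8020 -- find its PID and kill it"
else
  echo "port 8020 is free -- safe to start the NameNode"
fi
```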