
I have a fully distributed Hadoop and HBase setup with two nodes. HDFS works perfectly on both the master and the slave. However, the HBase shell only works once, right after the namenode is formatted and the cluster is fresh; after that I get the error: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

I also cannot connect through the HBase shell from the slave; I always get Connection Refused, and in the HBase web UI I can only see one regionserver, which is the master node.

Master hbase-site.xml:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/usr/local/hbase-1.2.1/data/zookeeper</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.regionserver.thrift.framed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.maxClientCnxns</name>
        <value>1000</value>
    </property>
    <property>
        <name>hbase.regionserver.thrift.server.type</name>
        <value>TThreadPoolServer</value>
    </property>
    <property>
        <name>avatica.statementcache.maxcapacity</name>
        <value>20000</value>
    </property>
</configuration>

Slave hbase-site.xml:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.maxClientCnxns</name>
        <value>1000</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
</configuration>

JPS Master:

(screenshot of jps output on the master)

JPS Slave:

(screenshot of jps output on the slave)

  • And when I check the slave nodes using localhost:16030/rs-status .... I get The RegionServer is initializing – Aboulfouz Sep 28 '16 at 12:03
  • Did you check master and regionserver logs, put them to DEBUG level and see if you get any additional info – mbaxi Sep 28 '16 at 12:16
  • I have checked the regionserver log and there is no problem there: Auth successful for hadoop, Connection from 127.0.0.1 port: 56556 with version info: version: – Aboulfouz Sep 29 '16 at 08:19
  • But is it OK that it gives me the local IP address instead of the network IP address for the master? – Aboulfouz Sep 29 '16 at 08:22
  • While starting HBase I get this error: slave: 0 [main] ERROR org.apache.zookeeper.server.quorum.QuorumPeerConfig - Invalid configuration, only one server specified (ignoring). In my configuration I set HBase to manage ZooKeeper. – Aboulfouz Sep 29 '16 at 08:38
  • It looks like either your ZooKeeper quorum is not correctly set up or its contents have been corrupted; check this answer - http://stackoverflow.com/questions/17038957/org-apache-hadoop-hbase-pleaseholdexception-master-is-initializing – mbaxi Sep 29 '16 at 08:47

1 Answer


The problem was solved after I removed all entries related to 127.0.0.1 from /etc/hosts and copied hdfs-site.xml to $HBASE_HOME/conf on all nodes.
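A minimal sketch of the fix, demonstrated on a scratch copy of /etc/hosts rather than the real file (the hostnames "master"/"slave" come from the question, but the sample 192.168.1.x addresses are illustrative assumptions; adapt both to your cluster):

```shell
# Scratch copy of a typical /etc/hosts that breaks HBase: the cluster
# hostname "master" is also mapped to a loopback address.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
127.0.0.1 localhost
127.0.1.1 master
192.168.1.10 master
192.168.1.11 slave
EOF

# Drop loopback mappings of the cluster hostname so HBase advertises the
# network address instead of 127.0.0.1. On a real node you would back up
# and edit /etc/hosts itself (as root), not a temp copy.
sed -i '/^127\.0\.[01]\.1[[:space:]].*master/d' "$tmp"
cat "$tmp"
rm -f "$tmp"

# Then copy the HDFS client config into HBase's conf directory on every
# node, so HBase resolves hdfs://master:9000 the same way HDFS does
# (assuming HADOOP_HOME and HBASE_HOME are set; adjust paths as needed):
#   cp  $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HBASE_HOME/conf/
#   scp $HADOOP_HOME/etc/hadoop/hdfs-site.xml slave:$HBASE_HOME/conf/
```

After this, restart HBase on all nodes; the master should come out of the "initializing" state once it can reach the regionservers on their network addresses.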

  • Does remove mean completely remove from the file or, just commenting out is enough? thanks! – CoolCK Mar 12 '21 at 02:23