
I am trying to use HBase on a Cloudera cluster. HBase is up and running without errors, but when I try to enter the HBase shell and create a new table:

create 'test', 'cf1'

I receive the following error:

Table Namespace Manager not ready yet

Cloudera Manager does not report any issues with the health of the HBase cluster. However, if I check the HBase log in Cloudera Manager, I can find many errors like the following:

Can't open after 24 attempts and 310225ms  for hdfs://hmaster:8020/hbase/WALs/slave8,60020,1416510428414-splitting/slave8%2C60020%2C1416510428414.1416510442205

Or:

Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for slave8,60020,1416510428414, will retry
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:330)
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:210)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://hmaster:8020/hbase/WALs/slave8,60020,1416510428414-splitting] Task = installed = 1 done = 0 error = 1
    at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:360)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:416)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:390)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:288)
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:203)
    ... 4 more

What is wrong with my cluster?

Nicola Ferraro
  • it is hard to see what the exact problem is - you need to run diagnostics for both Hadoop and HBase (see the sketch after these comments): http://hbase.apache.org/book.html#hbck.in.depth and http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fsck – Arnon Rotem-Gal-Oz Jan 14 '15 at 11:46
  • looks a bit like a region server crash. Did you manage to solve the issue? – mut1na Nov 05 '15 at 13:47
  • my hack for v1.0.0 (may not be necessary for later versions) was to move the WALs folder on HDFS (/hbase/WALs) so that corrupt files weren't replayed on startup; a sketch of this follows below – mut1na Nov 05 '15 at 17:39
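
Following Arnon's comment, a minimal diagnostic sketch (assuming the HBase root directory is /hbase on hdfs://hmaster:8020, as in the logs above, and that the commands are run from a node with HBase and HDFS client configuration):

    # Check HDFS for corrupt or missing blocks under the HBase root directory
    hdfs fsck /hbase -files -blocks -locations

    # Read-only consistency report of HBase tables and regions
    hbase hbck

If fsck reports corrupt blocks under /hbase/WALs, the master's log splitting will keep failing, and table creation will likely stay blocked on "Table Namespace Manager not ready yet".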
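
mut1na's workaround can be sketched as follows. It is a last resort and assumes the unflushed WAL contents can be sacrificed: any edits not yet persisted to HFiles are lost with the moved logs, and the destination name below is arbitrary:

    # Stop HBase first (e.g. via Cloudera Manager), then move the WAL
    # directory aside so the master stops replaying the corrupt logs
    hdfs dfs -mv /hbase/WALs /hbase/WALs.corrupt

    # Restart HBase; the master recreates /hbase/WALs on startup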

0 Answers