
I am trying to install Hypertable on Hadoop following the official document. First I deployed CDH4 on a CentOS 6.5 (32-bit) node in pseudo-distributed mode,

then followed the official Hypertable document to install Hypertable on Hadoop.

When I run

cap start -f Capfile.cluster

I get a "DFS Broker did not come up" error:

 * executing `start'
 ** transaction: start
  * executing `start_servers'
  * executing `start_hyperspace'
  * executing "/opt/hypertable/current/bin/start-hyperspace.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg"
    servers: ["master"]
    [master] executing command
 ** [out :: master] Started Hyperspace
    command finished in 6543ms
  * executing `start_master'
  * executing "/opt/hypertable/current/bin/start-dfsbroker.sh hadoop --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-monitoring.sh"
    servers: ["master"]
    [master] executing command
 ** [out :: master] DFS broker: available file descriptors: 65536
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] ERROR: DFS Broker (hadoop) did not come up
    command finished in 129114ms
failed: "sh -c '/opt/hypertable/current/bin/start-dfsbroker.sh hadoop --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-monitoring.sh'" on master

I checked DfsBroker.hadoop.log in /opt/hypertable/0.9.7.16 and found this:

/opt/hypertable/current/bin/jrun: line 113: exec: java: not found

but JAVA_HOME has been set, and I verified that java runs normally with

java -version

and when I run jrun by itself, it does not print "exec: java: not found".
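
One thing I notice: the failed command at the end of the log shows Capistrano running everything through a non-interactive sh -c on master, and a non-interactive shell does not source ~/.bash_profile, so a PATH set there would not be visible to the broker startup script. A quick way to check what that environment actually sees (master is the host name from the Capfile; adjust to your setup):

# Mimic the non-interactive shell Capistrano uses;
# if java is missing here, jrun will not find it either
ssh master 'which java; echo $JAVA_HOME'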

I have seen similar problems after googling, but I have already tried every solution I could find. Running

/opt/hypertable/current/bin/set-hadoop-distro.sh cdh4

just prints

Hypertable successfully configured for Hadoop cdh4

So if anyone can give me a hint about this problem, I would appreciate it.


1 Answer

Before starting the cluster you have to run:

cap fhsize -f Capfile.cluster

Then you can check that all directories have been set up correctly:

ls -laF /opt/hypertable/current/lib/java/*.jar

and the java version check should also work:

/opt/hypertable/current/bin/jrun -version
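
If jrun works in your interactive shell but still reports "exec: java: not found" when started through Capistrano, one generic workaround (not from the Hypertable docs; the JDK path below is only an example, substitute your real one) is to make java visible on the default PATH of non-login shells:

# Example JDK path -- replace with your actual $JAVA_HOME/bin/java
sudo ln -s /usr/java/jdk1.7.0/bin/java /usr/bin/java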

More information is in the quick start guide.
