
Extending one of the questions: Hadoop: Connecting to ResourceManager failed

Hadoop 2.6.1

I do have ResourceManager HA configured.

When I kill the 'local' ResourceManager (to check the cluster), the failover occurs and the ResourceManager on the other server becomes active. Unfortunately, when I try to run a job using the 'local' NodeManager instance, it does not 'fail over' the request to the active ResourceManager.

yarn@stg-hadoop106:~$ jps
26738 Jps
23463 DataNode
23943 DFSZKFailoverController
24297 NodeManager
25690 ResourceManager
23710 JournalNode
23310 NameNode

#kill and restart the ResourceManager, so the failover occurs
yarn@stg-hadoop106:~$ kill -9 25690
~/hadoop/sbin/yarn-daemon.sh  start resourcemanager

yarn@stg-hadoop106:~$ ~/hadoop/bin/yarn  rmadmin -getServiceState rm1
standby
yarn@stg-hadoop106:~$ ~/hadoop/bin/yarn  rmadmin -getServiceState rm2
active

#run my class:

14:56:51.476 [main] INFO  o.apache.samza.job.yarn.ClientHelper - trying to connect to RM 0.0.0.0:8032
2015-10-29 14:56:51 RMProxy [INFO] Connecting to ResourceManager at /0.0.0.0:8032
14:56:51.572 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos krb5 configuration not found, setting default realm to empty
2015-10-29 14:56:51 NativeCodeLoader [WARN] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14:56:51.575 [main] DEBUG o.a.hadoop.util.PerformanceAdvisory - Falling back to shell based
2015-10-29 14:56:52 Client [INFO] Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-10-29 14:56:53 Client [INFO] Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
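
Note: 0.0.0.0:8032 looks like the yarn-default.xml fallback, which would mean the HA settings are not visible to the client at all. For reference, the relevant defaults from yarn-default.xml are:

 <property>
     <name>yarn.resourcemanager.hostname</name>
     <value>0.0.0.0</value>
 </property>
 <property>
     <name>yarn.resourcemanager.address</name>
     <value>${yarn.resourcemanager.hostname}:8032</value>
 </property>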

yarn-site.xml

 <property>
     <name>yarn.resourcemanager.ha.enabled</name>
     <value>true</value>
 </property>
 <property>
     <name>yarn.resourcemanager.cluster-id</name>
     <value>clusterstaging</value>
 </property>
 <property>
     <name>yarn.resourcemanager.ha.rm-ids</name>
     <value>rm1,rm2,rm3</value>
 </property>
 <property>
     <name>yarn.resourcemanager.hostname.rm1</name>
     <value>stg-hadoop106</value>
 </property>
 <property>
     <name>yarn.resourcemanager.hostname.rm2</name>
     <value>stg-hadoop107</value>
 </property>
 <property>
     <name>yarn.resourcemanager.hostname.rm3</name>
     <value>stg-hadoop108</value>
 </property>
 <property>
     <name>yarn.resourcemanager.zk-address</name>
     <value>A:2181,B:2181,C:2181</value>
 </property>

I did not configure

<name>yarn.resourcemanager.hostname</name>

since it should work 'as is' - correct me if I'm wrong :)

I did try

<name>yarn.client.failover-proxy-provider</name>

but with no success.
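
For reference, the value I would expect to need there is the stock provider class that ships with Hadoop (as far as I know this is also the default once HA is enabled):

 <property>
     <name>yarn.client.failover-proxy-provider</name>
     <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
 </property>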

Any ideas? Maybe I'm wrong to expect the client to find the active RM node on its own?

Also, do you know how to switch a node between active and standby when the 'auto-failover' option is enabled?

~/hadoop/bin/yarn  rmadmin -failover rm1 rm2
    Exception in thread "main" java.lang.UnsupportedOperationException: RMHAServiceTarget doesn't have a corresponding ZKFC address

~/hadoop/bin/yarn  rmadmin -transitionToActive rm1 rm2
    Automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@2b72cb8a
    Refusing to manually manage HA state, since it may cause

1 Answer


If you enable RM HA in automatic fail-over mode, you cannot manually trigger a transition from active to standby or vice versa. You should provide the yarn.client.failover-proxy-provider parameter, the class to be used by clients to fail over to the active RM, and also configure yarn.resourcemanager.hostname to identify the RMs (i.e. rm1, rm2).
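
A quick way to sanity-check the failover from the client side, assuming the properties above are on the client's classpath, is to run any YARN client command and watch which RM it connects to:

 ~/hadoop/bin/yarn application -list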

If automatic fail-over is not enabled, you can trigger a transition manually using the command below:

 yarn rmadmin -transitionToStandby rm1
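
If you really need to flip states while automatic fail-over is enabled, rmadmin also accepts the --forcemanual flag inherited from the common HA admin tooling (use with care, since it bypasses the failover controller, and verify it against your Hadoop version):

 yarn rmadmin -transitionToStandby --forcemanual rm1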

Please make the above changes and reply with the result.
