
I have two NameNodes in an HA environment, and Hive is configured to point to the HA namespace. But intermittently Hive fails, pointing to the passive NameNode and giving the error below, even though the active NameNode is still in service. Kindly help me dig into where the issue is. Even the ZKFC logs don't show any failover happening when Hive fails.

Couldn't set up IO streams; Host Details : local host is: "my node/10.10.11.6"; destination host is: "passive node":8020;
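For reference, the HDFS HA client settings that Hive relies on look roughly like the sketch below. This is only a sketch: the nameservice mycluster, the NameNode IDs nn1/nn2, and the hostnames are placeholders, not values from the cluster above. If fs.defaultFS points at a specific host instead of the nameservice, clients will keep dialing that host regardless of which NameNode is active.

<!-- HA client configuration sketch (hdfs-site.xml / core-site.xml).
     mycluster, nn1, nn2 and the hostnames are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>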


1 Answer


To prevent HiveServer2 from opening too many connections to the NameNode, set ipc.client.connection.maxidletime back to its default value of 10 seconds. By default, PHD sets this parameter to 1 hour in core-site.xml, which can cause out-of-memory errors on HiveServer2.

<property>
  <name>ipc.client.connection.maxidletime</name>
  <value>10000</value>
</property>
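Note that the value is in milliseconds, so 10000 corresponds to the 10-second default. After editing core-site.xml, restart HiveServer2 so the change takes effect. You can also confirm which NameNode is currently active with hdfs haadmin -getServiceState &lt;namenode-id&gt;, where the IDs are the ones listed in dfs.ha.namenodes.&lt;nameservice&gt;.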

Refer to the links below:

https://issues.apache.org/jira/browse/HIVE-6866
https://discuss.pivotal.io/hc/en-us/articles/201646766-How-to-Configure-HiveServer2-and-use-a-Beeline-Client-on-a-Pivotal-HD-Cluster