We have 20 data nodes and 3 management nodes. Each data node has 45 GB of RAM.
Data node RAM capacity
45 GB x 20 = 900 GB total RAM
Management node RAM capacity
100 GB x 3 = 300 GB total RAM
In the Hadoop Resource Manager URL I can see that memory is almost completely occupied (890 GB used out of 900 GB), and newly submitted jobs sit in a waiting state. I have therefore raised a request to increase our memory capacity, to avoid running this close to the limit.
However, the Unix team says that on each data node about 80% of the 45 GB of RAM is free according to the `free -g` command (counting cache/buffers as available). Meanwhile, on the Hadoop side the Resource Manager URL says memory is completely occupied, and several jobs are on hold because of it. I would like to know how Hadoop calculates memory usage in the Resource Manager, and whether upgrading memory is really worthwhile, given that memory fills up every time users submit Hive jobs.
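To illustrate what I suspect is happening (this is only my understanding, with made-up numbers, not figures from my cluster): YARN's Resource Manager counts memory by container *allocation*, i.e. what jobs have requested and been granted, while `free -g` reports what the OS sees as actually resident. Both can look wildly different at the same time:

```python
# Toy model of the discrepancy between the RM URL and `free -g`.
# All per-node figures below are illustrative assumptions.

NODES = 20
NODE_RAM_GB = 45  # per data node, from our cluster

# Suppose Hive jobs have been *allocated* containers totalling 44 GB per node,
# but the processes only actually touch ~9 GB of that memory.
allocated_per_node_gb = 44      # what YARN has promised to containers
actually_used_per_node_gb = 9   # what the kernel sees as resident

# The RM URL sums allocations across the cluster:
rm_used_gb = NODES * allocated_per_node_gb
# `free -g` on one node reflects only real usage:
os_free_gb = NODE_RAM_GB - actually_used_per_node_gb

print(f"RM shows used: {rm_used_gb} GB of {NODES * NODE_RAM_GB} GB")   # 880 of 900
print(f"free -g on a node: ~{os_free_gb} GB free of {NODE_RAM_GB} GB")  # ~36 free
```

So under this model the RM can report ~880 GB "used" while every node still shows most of its RAM free, which matches what we are observing.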
Who is right here: the Hadoop Resource Manager output, or the Unix `free` command?