I have a Hadoop cluster that is quickly running out of room. The cluster is built on RHEL 7 VMs on GCP (Google Cloud Platform) Compute Engine. I originally provisioned the cluster with 10 GB drives on each of its 4 nodes, but that doesn't seem to be enough: Ambari is telling me HDFS is 86% full even though I have yet to load any data. Anyway, I went into the GCP console and 'expanded' my drives to 20 GB on each node. I have two questions: 1) How do I mount the additional capacity in RHEL? 2) How do I tell HDFS/Ambari to use the new capacity?
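For reference, I believe the CLI equivalent of what I did in the console is roughly the following (the disk name and zone are placeholders, not my actual values):

```bash
# Grow the persistent disk backing one node from 10 GB to 20 GB
# ('hadoop-node-1' and the zone are made up here; repeat per node)
gcloud compute disks resize hadoop-node-1 --size=20GB --zone=us-central1-a
```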
When I run `fdisk -l`, I can see the expanded drive, but I don't see it in `df -h`, which makes sense, since it's not been mounted yet.
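In case it helps, these are the commands I'm using to inspect the disk (the device name is from my setup and may differ on yours):

```bash
# Partition table view: the disk reports its new, larger size here
sudo fdisk -l /dev/sda

# Mounted filesystem view: this still reports the original size
df -h /
```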
My confusion comes in with fdisk: all 20 GB shows up under /dev/sda1, but df only shows 10 GB. How do I mount the extra capacity if fdisk already shows it as partitioned? Since I grew an existing disk instead of adding a new one, does that mess things up?
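From what I've read so far, I'm guessing I need to grow the partition and then the filesystem as separate steps, something along these lines, but I'm not sure whether this is right for a disk that was grown in place (the tool names are my assumption, not something I've verified):

```bash
# Grow partition 1 of /dev/sda to fill the newly added space
# (growpart comes from the cloud-utils-growpart package on RHEL 7)
sudo growpart /dev/sda 1

# Then grow the filesystem itself; RHEL 7 typically uses XFS...
sudo xfs_growfs /
# ...or, if the filesystem is ext4:
# sudo resize2fs /dev/sda1
```

If that's the right direction, I'd still like to know what (if anything) I need to change in HDFS/Ambari afterward so it sees the extra space.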