I've set up a bare-metal cluster and want to provide different types of shared storage to my applications. One of them is an S3 bucket that I mount via goofys inside a pod, which then exports it via NFS. I then use the NFS client provisioner to mount that share and automatically provide volumes to pods.
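For context, the export side looks roughly like this (the image name, labels and export path are placeholders, not my exact manifests): a pod running goofys plus an NFS server, fronted by a ClusterIP service.

```yaml
# Sketch only: goofys mounts the S3 bucket inside the container,
# and an NFS server exports that mount.
apiVersion: v1
kind: Pod
metadata:
  name: s3-nfs-server
  labels:
    app: s3-nfs-server
spec:
  containers:
    - name: s3-nfs-server
      image: example/goofys-nfs:latest   # placeholder: goofys + nfs-kernel-server
      securityContext:
        privileged: true                 # FUSE and the in-kernel NFS server need elevated privileges
      ports:
        - containerPort: 2049            # NFSv4
---
# ClusterIP service in front of the NFS pod; this is the DNS name
# I'd like the provisioner (and ultimately the node) to use.
apiVersion: v1
kind: Service
metadata:
  name: s3-nfs
spec:
  selector:
    app: s3-nfs-server
  ports:
    - port: 2049
      targetPort: 2049
```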
Leaving performance concerns aside, the issue is that the NFS client provisioner mounts the NFS share through the node's OS. So when I set the server name to the NFS pod's service, that name is passed down to the node, and the mount fails because the node has no route to the service/pod network.
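The provisioner is configured more or less like this (names, paths and the image tag are illustrative, RBAC omitted for brevity). The important part is the `nfs` volume at the bottom: that mount, like the PVs the provisioner creates, is performed by the kubelet on the node, which is exactly where the cluster DNS name doesn't resolve.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/s3-nfs                  # placeholder
            - name: NFS_SERVER
              value: s3-nfs.default.svc.cluster.local    # cluster DNS name of the NFS service
            - name: NFS_PATH
              value: /export
      volumes:
        - name: nfs-client-root
          nfs:
            # The kubelet on the node performs this mount; the node cannot
            # resolve or route to the cluster DNS name, so it fails.
            server: s3-nfs.default.svc.cluster.local
            path: /export
```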
The only solution I've found so far is to expose the service as a NodePort, block external connections to that port with ufw on the node, and configure the client provisioner to connect to 127.0.0.1:&lt;nodeport&gt;.
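In other words, the workaround looks roughly like this (the nodePort value is just an example), combined with a ufw rule on the node denying external traffic to that port, and the provisioner's NFS server pointed at 127.0.0.1 on that port:

```yaml
# Same service as above, but exposed as a NodePort so the node can
# reach the NFS server via localhost instead of the cluster DNS name.
apiVersion: v1
kind: Service
metadata:
  name: s3-nfs
spec:
  type: NodePort
  selector:
    app: s3-nfs-server
  ports:
    - port: 2049
      targetPort: 2049
      nodePort: 32049    # example value; blocked externally via ufw, reached locally as 127.0.0.1:32049
```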
So my question is: is there a way for the node itself to reach a cluster service using the service's DNS name?