To let a standard file system user or program see the HDFS namespace as a locally mounted directory, CDH4, for example, ships a hadoop-hdfs-fuse component.
This works with a non-secure HDFS, but how can it be done on a Kerberos-secured HDFS?
Thanks.
Kerberos authentication support for the fuse_dfs executable (shipped with the Hadoop distribution) was added in Apache Hadoop 2.0.2.
I spent a lot of time figuring out how this should be configured. I found that, in order for Fuse-DFS to pick up the correct configuration files (the ones that set the authentication type to kerberos rather than simple, etc.), HADOOP_CONF_DIR must appear on the CLASSPATH before the Hadoop jar directories.
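As a minimal sketch, a wrapper script for the mount might look like the following. The paths (HADOOP_HOME, HADOOP_CONF_DIR, the NameNode address, and the mount point) are assumptions — adjust them to your installation:

```shell
#!/bin/sh
# Sketch: mount a Kerberized HDFS with fuse_dfs.
# All paths and hostnames below are examples, not a definitive layout.
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=/etc/hadoop/conf

# HADOOP_CONF_DIR must come FIRST on the CLASSPATH, so that the site
# configuration files (core-site.xml, hdfs-site.xml, which set
# hadoop.security.authentication to kerberos) take precedence over
# any default configs bundled inside the Hadoop jars.
CLASSPATH="$HADOOP_CONF_DIR"
for jar in "$HADOOP_HOME"/*.jar "$HADOOP_HOME"/lib/*.jar; do
  CLASSPATH="$CLASSPATH:$jar"
done
export CLASSPATH

# Mount the HDFS namespace at /mnt/hdfs (-d runs in the foreground,
# which is useful while debugging the Kerberos setup).
fuse_dfs -d dfs://namenode.example.com:8020 /mnt/hdfs
```

The loop order is what matters here: appending the jars after the configuration directory guarantees the site files win the classpath lookup.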
When using Kerberos authentication, users must run kinit before accessing the FUSE mount point. Failing to do this will result in I/O errors when users attempt to access the mount point.
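A typical session looks like this (the principal and mount point are examples, not values from your cluster):

```shell
# Obtain a Kerberos ticket first; without a valid ticket, reads and
# writes through the FUSE mount fail with I/O errors.
kinit alice@EXAMPLE.COM

# Verify the ticket was granted.
klist

# With a ticket in the credential cache, ordinary POSIX tools work
# against the mounted namespace.
ls /mnt/hdfs
```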
You could use py-hdfs-mount, which supports Kerberos and is easier to set up: https://github.com/EDS-APHP/py-hdfs-mount