
Is there any way to add a user to an HDFS group (any group) on the Windows platform? I can't assign the Windows user itself to a group because I have no admin privileges on the machine.

The broader problem is that when I try to run a MapReduce task on Hadoop, it hangs and warns me with "security.UserGroupInformation: No groups available for user". If there is any workaround, I would be glad to know it.

Aguinore
  • If you have no admin privileges, then how do you expect to fix group information? IMO Hadoop doesn't play well on Windows anyway – OneCricketeer Jan 25 '17 at 06:53
  • @cricket_007 Inside Hadoop itself. I know it takes user info from the OS where it is running, but maybe there is some way to play with it. – Aguinore Jan 25 '17 at 06:59
  • What do you mean "inside Hadoop"? It's just an application. You seem to know it pulls from the OS, and so you need admin permission to modify the OS settings. – OneCricketeer Jan 25 '17 at 07:03
  • @cricket_007 It has its own FS, so maybe it has a way to assign a user to a group. If not, well, it will be a good reason to move to a Linux machine. – Aguinore Jan 25 '17 at 07:05
  • I mean, sure, there are `hadoop fs` commands `-chmod` and `-chown`, but if you are messing with those, there is some other issue going on. – OneCricketeer Jan 25 '17 at 07:10
  • These commands don't add a user to a group. All necessary files and dirs have 777 permissions just to be sure the problem isn't there. But MapReduce jobs are still waiting forever. – Aguinore Jan 25 '17 at 07:15

1 Answer


> The broader problem is that when I try to run a MapReduce task on Hadoop, it hangs and warns me with "security.UserGroupInformation: No groups available for user".

As of Apache Hadoop 2.3.0 and later, Hadoop supports defining static group mappings inside Hadoop's configuration (core-site.xml). Apache JIRA HADOOP-10142 tracked the development of this feature.

For example, you could specify in core-site.xml that user "myuser" belongs to a single group, "mygroup". When a static group mapping is defined for a user, all other group lookup mechanisms inside Hadoop are bypassed for that user, so you won't see this warning anymore.
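As a rough sketch (the names "myuser" and "mygroup" are placeholders, not anything mandated by Hadoop), such an entry in core-site.xml might look like this:

<!-- Hypothetical example: statically map user "myuser" to group "mygroup" -->
<property>
  <name>hadoop.user.group.static.mapping.overrides</name>
  <value>myuser=mygroup;</value>
</property>

Multiple users can be listed in the same value by separating the mappings with semicolons (e.g. `user1=group1,group2;user2=group3;`), following the format documented below.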

Here is the documentation of the property from core-default.xml:

<property>
  <name>hadoop.user.group.static.mapping.overrides</name>
  <value>dr.who=;</value>
  <description>
    Static mapping of user to groups. This will override the groups if
    available in the system for the specified user. In otherwords, groups
    look-up will not happen for these users, instead groups mapped in this
    configuration will be used.
    Mapping should be in this format.
    user1=group1,group2;user2=;user3=group2;
    Default, "dr.who=;" will consider "dr.who" as user without groups.
  </description>
</property>

If you'd like to use this feature, you'll need to restart the Hadoop daemons after making the configuration change so that the new property is picked up.
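On a typical single-node Hadoop 2.x installation on Windows with HADOOP_HOME set, restarting the daemons and then checking how Hadoop resolves your groups could look roughly like the following (the script names assume the standard sbin/bin layout of the distribution, and "myuser" is a placeholder for your actual user name):

rem Restart the HDFS and YARN daemons so the updated core-site.xml is re-read
%HADOOP_HOME%\sbin\stop-yarn.cmd
%HADOOP_HOME%\sbin\stop-dfs.cmd
%HADOOP_HOME%\sbin\start-dfs.cmd
%HADOOP_HOME%\sbin\start-yarn.cmd

rem Ask HDFS which groups it resolves for the user after the restart
%HADOOP_HOME%\bin\hdfs groups myuser

If the static mapping is in effect, the `hdfs groups` output should list the group(s) you configured rather than reporting no groups.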

Using this feature is a trade-off: it is generally less maintainable than relying on the OS for group resolution. However, it can be helpful for service accounts like this one, which trigger frequent group lookups but whose group memberships rarely change. In environments that source group membership from LDAP, it can also greatly reduce load on the LDAP infrastructure by avoiding those lookups.

Chris Nauroth
  • Thank you for such a great explanation. But after I added the property and restarted all Hadoop daemons, I still see the same warning. My core-site.xml has this property now: ` hadoop.user.group.static.mapping.overrides %my_user_name%=hadoop; ` – Aguinore Jan 25 '17 at 07:53
  • @Aguinore, did you also include a `<value>` for the configuration property to list out which users you want to override and their groups, as shown above? The XML snippet in your last comment shows a `<name>` but no `<value>`. – Chris Nauroth Jan 25 '17 at 07:56
  • @Aguinore, is the `%my_user_name%` exactly what you have in the configuration file, or is it the actual user name? It would have to be the actual user name (e.g. "chris"). Also, I just edited my answer to clarify that this feature launched in Apache Hadoop 2.3.0. If you're running a version earlier than that, then unfortunately you don't have this feature. – Chris Nauroth Jan 25 '17 at 08:04