
We have a Solaris server farm that runs the Lavastorm application.

Each server in the farm runs a Lavastorm instance, and that instance spawns subprocesses to run various work tasks.

The lavastorm user has secondary (supplementary) membership in another group.

However, sometimes some instances do not seem to apply the secondary group rights: when they read or write a file, they get a permission error even though the file is accessible to the secondary group they belong to.

If I log in interactively as the lavastorm user, I have no problem reading or writing the files in question.
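
To narrow down whether this is a credential problem or a filesystem problem, one check worth running is to compare the interactive credentials with the credentials the kernel actually holds for a running instance. This is only a sketch: `pgrep -o -u lavastorm` is a placeholder way of locating an instance pid, so adjust the match to your process names.

    # Name-service view: uid, gid, and supplementary groups
    id -a lavastorm

    # Kernel view: the credentials attached to a live process
    # (pcred ships with Solaris 10; pgrep -o -u picks the oldest
    # process owned by lavastorm -- adjust to your instance)
    pcred $(pgrep -o -u lavastorm)

If the `groups:` line in the `pcred` output of a misbehaving instance lacks the secondary group, the credentials were never acquired in the first place, and the filesystem is behaving correctly given what it was handed.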

I know it is possible to run `newgrp` to change a process's primary group, but the Lavastorm process is not able to execute this in the context of certain tasks.
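
Where a task can be wrapped in a shell, one possible workaround is to feed the task to `newgrp` through a here-document, since `newgrp` starts a new shell that reads commands from stdin. This is only a sketch; the group name `shared` and the task path are hypothetical placeholders, not names from our setup:

    # Run a single task with the secondary group as primary;
    # 'shared' and the path below are hypothetical placeholders
    newgrp shared <<'EOF'
    /opt/lavastorm/bin/run-task
    EOF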

Since the behavior is inconsistent across the server farm, it seems possible that there is an OS setting that influences how secondary groups take effect.

Is there such a setting in Solaris?

Additional detail:

OS version from `uname`: `5.10 Generic_150400-12 sun4u sparc SUNW`

FS type: NFSv3 (using a NetApp NAS filer)

Authentication: local files

The output of `nfsstat -m`:

    /data01 from 10.1.11.160:/vol/data_vol1/shared_data_prod
     Flags:         vers=3,proto=tcp,sec=sys,hard,intr,link,symlink,rsize=32768,wsize=32768,retrans=5,timeo=600
     Attr cache:    acregmin=3,acregmax=60,acdirmin=30,acdirmax=60
  • The fact that it's inconsistent makes this highly unlikely to be a problem with Solaris itself. The code any kernel (Solaris, Linux, BSD, anything) uses to evaluate file system permissions doesn't change. You need to provide a lot more specific information if you're going to get any help - OS version(s), file system(s), source of user/group authentication (files, LDAP, NIS, etc). – Andrew Henle Jul 26 '15 at 23:59
  • I've put that into the original post... OS version from `uname`: `5.10 Generic_150400-12 sun4u sparc SUNW` FS type: NFS (using NAS filer) Authentication: from file – user55570 Jul 27 '15 at 05:09
  • So, these things are consistent across the farm, and yet the effective permissions behaviour shows differences. All the servers are accessing the same NAS. What else could it be? – user55570 Jul 27 '15 at 05:14
  • Poor NFS implementation? What type of NAS server? It's been known to happen in ways that might duplicate your problem: http://www.xkyle.com/solving-the-nfs-16-group-limit-problem/ (Linux would silently truncate the number of group id sent to 16 - and which group ids were cut could sometimes vary - leading to results similar to what you're seeing. Solaris would immediately return an error if the user was in more than 16 supplementary groups. FWIW - Sun invented NFS...) Post the output from `nfsstat` from a Solaris host that has seen your problem. – Andrew Henle Jul 27 '15 at 20:39
  • I am unsure what our NFS server is based on, but we do not have more than 16 groups. Far fewer. (A quick way to verify the count is sketched after this comment thread.) – user55570 Jul 28 '15 at 03:52
  • I understand it is a NetApp filer serving NFSv3. – user55570 Jul 28 '15 at 04:03
  • The "16 group limit" is a problem with NFSv2 implementations. Are you specifying the NFS protocol to use in your mount options? If you don't and the UDP-based negotiation at mount time goes awry because a packet gets dropped, you can wind up mounting a file system as NFSv3 or even NFSv2 - and NFSv2 doesn't support large files, among its other 64-bit issues. – Andrew Henle Jul 28 '15 at 10:07
  • I can see that the NFS version is specified in the mount options in `/etc/vfstab`. It's specified as `vers=3`, which seems appropriate. And if I check the output of `nfsstat -m`, all of the mounts show up as `vers=3` as well. – user55570 Jul 29 '15 at 04:37
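
For reference, a quick way to check the two points raised in the comments on an affected host. This is a sketch under the stated assumptions: authentication is from local files, and `/usr/xpg4/bin/id` is the POSIX `id` shipped with Solaris 10, which supports `-G`.

    # AUTH_SYS (sec=sys) sends at most 16 supplementary groups
    # per NFS request; count what the lavastorm user actually has
    /usr/xpg4/bin/id -G lavastorm | wc -w

    # Confirm every mount negotiated vers=3 with sec=sys
    nfsstat -m | grep 'vers='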
