This is a very strange question and I don't even know how to Google for it, so I'm posting here to see if anyone has encountered this sort of situation before.
I have multiple Ubuntu 14.04 systems running in AWS EC2.
We have several VPCs dedicated to different purposes -- prod/qa/dev/etc
I'm running puppet, with a puppetmaster. Pretty standard, except that we originally had a single puppetmaster that handled configuration for all nodes in all VPCs.
I recently migrated to a separate puppetmaster per VPC. The configuration -- i.e. the puppet manifests, modules, etc. -- was copied verbatim, so all hosts should still be getting the same node defs from puppet. This may drift over time, but at the cutover the puppetmasters were identically configured.
I'm running auditd on all of them, with rules that record the audit userid ("auid") -- the UID of the original login session, which is preserved across sudo -- whenever someone sudos to another account. The auditd rules are set to "immutable", so changing the running rules requires a reboot.
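For reference, the relevant rules look roughly like this (a simplified sketch -- the exact filters are illustrative rather than copied verbatim from our config; the key is the one that shows up in the logs below):

-a exit,always -F arch=b64 -S execve -F euid=0 -F auid>=1000 -F auid!=4294967295 -k sudo-command
-a exit,always -F arch=b32 -S execve -F euid=0 -F auid>=1000 -F auid!=4294967295 -k sudo-command
-e 2

The -e 2 at the end is what locks the rule set until the next reboot.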
Example:
my-laptop$ ssh fred@web.example.com   # "fred"'s uid is 9999 in this example
web.example.com$ sudo su -
web.example.com# grep -r config_item /etc/
This type of event would show up in audit.log as something like
type=SYSCALL msg=audit(1457613844.394:1234567): arch=c000003e \
syscall=59 success=yes exit=0 ppid=1234 pid=5678 auid=9999 \
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 \
tty=(none) ses=51 comm="expr" exe="/bin/grep" key="sudo-command"
(I broke this single-line log entry onto multiple lines to be easier to read)
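In case it matters, I'm pulling these events out with plain ausearch (standard auditd tooling, nothing custom):

ausearch -k sudo-command -i

The -i just interprets numeric fields into names; the key matches the rules above.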
Anyway, the default user on the EC2 Ubuntu 14.04 AMI is "ubuntu", with UID 1000. I'm seeing a ton of messages with "auid=1000" in them, even for sessions that clearly weren't started as the "ubuntu" user.
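As a sanity check on a live box, the auid that auditd will attach to a given shell can be read straight out of /proc; this is purely a diagnostic I've been running, not part of our manifests:

cat /proc/$$/loginuid   # the login uid the kernel has for this session; survives "sudo su -"
id -u                   # the effective uid, for comparison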
The change that precipitated this issue is that I migrated the puppetmaster from one VPC to another, and restricted the puppet agents to only talk to the puppetmaster in their own VPC.
A reboot seems to fix this problem.
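For what it's worth, before and after rebooting I've been confirming that the rules are actually loaded and locked with plain auditctl:

auditctl -l   # list the currently loaded rules
auditctl -s   # status; enabled=2 means the rule set is immutable until reboot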
Is there some race condition or other situation where auditd can see the wrong auid?
Any ideas or thoughts would be appreciated; I'm befuddled.
In the meantime, I'm rebooting everything.