Introduction:
We are using Puppet to configure the nodes via a custom fact which is then referenced in Hiera. The fact can either reside in the golden image in /etc/facter/facts.d/ or be distributed via pluginsync (makes no difference, we tested both).
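For reference, the two delivery paths look like this (the module name below is a placeholder, not our actual layout):

# baked into the golden image:
/etc/facter/facts.d/ec_cluster.sh
# or distributed via pluginsync (Puppet 3.4 also syncs a module's facts.d directory):
/etc/puppet/modules/<somemodule>/facts.d/ec_cluster.sh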
Versions:
dpkg -l|grep puppet
hi facter 1.7.5-1puppetlabs1 amd64 Ruby module for collecting simple facts about a host operating system
hi hiera 1.3.4-1puppetlabs1 all A simple pluggable Hierarchical Database.
hi puppet 3.4.3-1puppetlabs1 all Centralized configuration management - agent startup and compatibility scripts
hi puppet-common 3.4.3-1puppetlabs1 all Centralized configuration management
The setup is simple:
Puppetmaster:
cat hiera.yaml
:hierarchy:
- "aws/%{::aws_cluster}"
The matching data file is /etc/puppet/hieradata/aws/web.json.
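For anyone reproducing this, the complete hiera.yaml is along these lines (a sketch; the backends and datadir entries are assumptions, only the hierarchy matters here):

---
:backends:
  - json
:hierarchy:
  - "aws/%{::aws_cluster}"
:json:
  :datadir: /etc/puppet/hieradata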
EC2 Node:
cat /etc/facter/facts.d/ec_cluster.sh
#!/bin/bash
echo 'aws_cluster=web'
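As a sanity check, the fact resolves locally once the script is executable (Facter 1.7 reads external facts from /etc/facter/facts.d natively):

chmod +x /etc/facter/facts.d/ec_cluster.sh
facter aws_cluster
web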
So there is this golden EC2 image including the fact aws_cluster. It is referenced in Hiera and determines the classes and configuration to apply.
Problem:
When we boot the instance with autosigning enabled, the first run does not have $aws_cluster present on the client side. So it fails (which makes sense), saying:
puppet-agent[2163]: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find data item classes in any Hiera data file and no default supplied at /etc/puppet/manifests/site.pp:33 on node ip-172-31-35-221.eu-west-1.compute.internal
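For context, line 33 of site.pp is the Hiera lookup for the "classes" key; the failing lookup has roughly this shape (our reconstruction from the error text, not a verbatim copy of the manifest):

grep -n hiera /etc/puppet/manifests/site.pp
33:  hiera_include('classes')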
When the Puppet agent is restarted, everything works as expected. Any hints on this?
Our guesses are:
- Does it have something to do with certificate generation?
- What happens on the very first run?
- Is it different if we start it by hand with /etc/init.d/puppet start rather than via init?
Update:
When trying to start it via /etc/rc.local it fails too. So there has to be a difference between interactive and non-interactive runs. Are there special environment variables which have to be set?
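A rough way to compare the two environments (file paths and placement are arbitrary):

# in an interactive shell:
env | sort > /tmp/env-interactive
# temporarily added to /etc/rc.local, just before starting puppet:
env | sort > /tmp/env-rclocal
# after boot:
diff /tmp/env-interactive /tmp/env-rclocal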