
When attempting a "Run Puppet" from the Foreman UI (1.11.0 - thanks for the improved UI speed, btw) on a host group (same config/installs/OS, etc.), the result is a "Failed to apply catalog: Broken pipe - <STDOUT>" error. All hosts are Ubuntu Trusty. Here's the syslog output from a UI Puppet run on 20 nodes:

Apr 14 11:34:27 pn02 puppet-agent[45865]: Retrieving pluginfacts
Apr 14 11:34:27 pn02 puppet-agent[45865]: Retrieving plugin
Apr 14 11:34:28 pn02 puppet-agent[45865]: Loading facts
Apr 14 11:35:15 pn02 puppet-agent[45865]: Caching catalog for pn02.blahblah.org
Apr 14 11:35:22 pn02 puppet-agent[45865]: Failed to apply catalog: Broken pipe - <STDOUT>

However, when running Puppet manually from the node, or when the regularly scheduled Puppet run executes, the run completes successfully. Additionally, individual Puppet runs from the UI, as well as runs on, e.g., 5 nodes, also succeed - it's only the attempt to execute on 20 nodes that produces errors.

Any thoughts on next steps to diagnose? Is this network-congestion related? Should my master's hardware be boosted?

Travis

1 Answer


I was having the same issue! I was using the puppetssh method to run the puppet agent -t command.

The error would only show up when running on many hosts.

I was able to fix this by redirecting STDOUT to /dev/null. I changed my 'puppetssh_command:' to sudo puppet agent -t &>>/dev/null.
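To illustrate, here's a minimal sketch of what the relevant puppet.yml settings might look like with that change (the user and keyfile values are placeholders, and key names can differ between foreman-proxy versions, so check against your own config):

# /etc/foreman-proxy/settings.d/puppet.yml (path may vary by install)
:puppet_provider: puppetssh
:puppetssh_user: root                          # placeholder - the user the proxy SSHes in as
:puppetssh_keyfile: /etc/foreman-proxy/id_rsa  # placeholder key path
# &>> discards both stdout and stderr (not just stdout)
:puppetssh_command: sudo puppet agent -t &>>/dev/null

Restart the foreman-proxy service after editing so the change is picked up.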

Zach Rice
  • Hmmm... unless I'm missing something, that didn't work for me :( I was really hoping you'd got it, too. Here's what my puppet.yml file looks like now: http://pastebin.com/ByX8Eyw0 – Travis Jun 10 '16 at 15:28
  • I uploaded my config as well: http://pastebin.com/zj59Nr8g I had to remove /usr/bin/puppet, but I forget why. My default puppet is /bin/puppet. Do you have puppet_wait: set? – Zach Rice Jun 10 '16 at 16:55
  • Also, did you remember to restart the foreman-proxy service? – Zach Rice Jun 10 '16 at 16:57
  • I'm assuming that puppet_wait defaults to false? What you see up there is the entirety of my config file. I had neglected to restart the service, but I have now, and am still experiencing the same behavior :( – Travis Jun 28 '16 at 16:05