I'm trying to collect logs from cron jobs running on our self-hosted GitHub runners, but so far I can only see the logs of the runner hosts themselves.
I've created a self-hosted GitHub runner in AWS, running on Ubuntu with a standard config.
We've also installed the Datadog Agent v7 with their install script and a basic configuration, and added file-based log collection using these instructions.
Our setup for log collection is below.
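# Install the Datadog Agent v7 with the official install script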
curl https://s3.amazonaws.com/dd-agent/scripts/install_script.sh -o ddinstall.sh
export DD_API_KEY=${datadog_api_key}
export DD_SITE=${datadog_site}
export DD_AGENT_MAJOR_VERSION=7
bash ./ddinstall.sh
# Configure logging for GitHub runner
tee /etc/datadog-agent/conf.d/runner-logs.yaml << EOF
logs:
- type: file
path: /home/ubuntu/actions-runner/_diag/Worker_*.log
service: github
source: github-worker
- type: file
path: /home/ubuntu/actions-runner/_diag/Runner_*.log
service: github
source: github-runner
EOF
chown dd-agent:dd-agent /etc/datadog-agent/conf.d/runner-logs.yaml
# Enable log collection
echo 'logs_enabled: true' >> /etc/datadog-agent/datadog.yaml
systemctl restart datadog-agent
After these steps, I can see logs from our GitHub runner servers in Datadog. However, on those runners we have several Python cron jobs running in Docker containers, logging to stdout. I can see those logs in the GitHub Actions UI, but they're not available in Datadog, and those are the logs I'd really like to capture so I can extract metrics from them.
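To make that part of the setup concrete, each job is roughly of this shape (the image name and schedule here are placeholders, not our real ones), and the Python process inside the container just prints to stdout:
# Placeholder example of one of the cron entries (image and schedule are made up)
*/30 * * * * docker run --rm our-registry/example-report-job:latest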
Do the Docker containers for the Python scripts need some special Datadog setup as well? Or do they need to log to a file that the Datadog agent registers as a log source, like in the setup above?
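For context on what I've considered: my current guess is that the fix belongs on the host Agent rather than in the containers themselves, i.e. turning on Docker container log collection, roughly like the sketch below. This is untested on my side, and the specifics (container_collect_all, adding dd-agent to the docker group) are my assumption about what's relevant, so please correct me if that's the wrong direction.
# Untested guess: enable container log collection for the host Agent
usermod -a -G docker dd-agent   # give the Agent access to the Docker socket
tee -a /etc/datadog-agent/datadog.yaml << EOF
logs_config:
  container_collect_all: true
EOF
systemctl restart datadog-agent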