I'm running a Jenkins pod deployed with the stable Helm chart, and I'm seeing odd output when starting Jenkins jobs: the agent's resource requests and limits appear to be the chart defaults, not what I set in my values file.
helm install stable/jenkins --name jenkins -f jenkins.yaml
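For reference, the user-supplied values Helm stored for the release can be dumped with (Helm 2 syntax, matching the --name flag above):

helm get values jenkins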
After creating and running a simple job from the UI, the build log shows:
Agent jenkins-agent-mql8q is provisioned from template Kubernetes Pod Template
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations: {}
  labels:
    jenkins/jenkins-slave: "true"
    jenkins/label: "jenkins-jenkins-slavex"
  name: "jenkins-agent-mql8q"
spec:
  containers:
  - args:
    - "********"
    - "jenkins-agent-mql8q"
    env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "jenkins-agent-mql8q"
    - name: "JENKINS_NAME"
      value: "jenkins-agent-mql8q"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins:8080/"
    image: "jenkins/jnlp-slave:3.27.1"
    imagePullPolicy: "IfNotPresent"
    name: "jnlp"
    resources:
      limits:
        memory: "2Gi"
        cpu: "2"
      requests:
        memory: "1Gi"
        cpu: "1"
And my Helm values file (jenkins.yaml) is:
master:
  (...)
  resources:
    requests:
      cpu: "1"
      memory: "1Gi"
    limits:
      cpu: "3"
      memory: "3Gi"
agent:
  resources:
    requests:
      cpu: "2"
      memory: "2Gi"
    limits:
      cpu: "4"
      memory: "3Gi"
Any idea why it spawns agents with the default requests of 1 CPU / 1Gi and limits of 2 CPU / 2Gi instead of what I set under agent.resources?