I am writing an Ansible playbook that uses the kubernetes module to modify a ConfigMap entry on my cluster. An important caveat to note is that I am running a Docker image that contains an Ansible installation to do this work. I run the Docker image and hand it the necessary inputs for it to do its job. Here is an example of the run command:
$ docker run --rm -it -e ANSIBLE_CONFIG=/play-config/ansible.cfg -e K8S_AUTH_KUBECONFIG=/play-config/gagnon.config -e K8S_AUTH_CONTEXT=kubernetes-admin@kubernetes -v "C:\Users\jgagnon\gagnon-test\local-kube-prometheus-stack\ansible":/play-config cytopia/ansible:latest-tools
Then, in the running container:
$ ansible-playbook /play-config/playbook-arc-control-plane.yaml -u jgagnon
After some initial hurdles, where I found that some missing dependencies needed to be installed on the target cluster nodes, I believe I have satisfied the dependency requirements. Now I'm running into a problem where the playbook fails when it attempts to make the ConfigMap change using kubernetes.core.k8s_json_patch. I have tried a number of things to see if I could correct the problem, to no avail. I keep getting this error:
"msg": "Failed to load kubeconfig due to Invalid kube-config file. No configuration found."
Here is the play (from playbook-arc-control-plane.yaml):
- name: "Make kube-proxy visible to Prometheus"
hosts: control_planes
become_user: root
become: true
tasks:
- name: "Install pip"
shell:
cmd: "apt-get install -y python3-pip"
- name: "Install jsonpatch"
shell:
cmd: "apt-get install -y python3-jsonpatch"
- name: "Install kubernetes Ansible module"
pip:
name:
kubernetes
- debug:
var: lookup('env', 'K8S_AUTH_KUBECONFIG')
- debug:
var: lookup('env', 'K8S_AUTH_CONTEXT')
- name: "Patch kube-proxy ConfigMap metricsBindAddress"
kubernetes.core.k8s_json_patch:
kind: ConfigMap
name: kube-proxy
namespace: kube-system
context: "{{ lookup('env', 'K8S_AUTH_CONTEXT') }}"
kubeconfig: "{{ lookup('env', 'K8S_AUTH_KUBECONFIG') }}"
patch:
- op: replace
path: /data/config.conf/metricsBindAddress
value: 0.0.0.0
Here is a section of the playbook console output (the -vvv flag was specified):
TASK [debug] ***********************************************************************************************************************************
task path: /play-config/playbook-arc-control-plane.yaml:180
ok: [gagnon-m1] => {
"lookup('env', 'K8S_AUTH_KUBECONFIG')": "/play-config/gagnon.config"
}
TASK [debug] ***********************************************************************************************************************************
task path: /play-config/playbook-arc-control-plane.yaml:182
ok: [gagnon-m1] => {
"lookup('env', 'K8S_AUTH_CONTEXT')": "kubernetes-admin@kubernetes"
}
TASK [Patch kube-proxy ConfigMap metricsBindAddress] *******************************************************************************************
task path: /play-config/playbook-arc-control-plane.yaml:185
...
The full traceback is:
File "/tmp/ansible_kubernetes.core.k8s_json_patch_payload_aqz5jjfp/ansible_kubernetes.core.k8s_json_patch_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py", line 256, in get_api_client
_load_config()
File "/tmp/ansible_kubernetes.core.k8s_json_patch_payload_aqz5jjfp/ansible_kubernetes.core.k8s_json_patch_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py", line 218, in _load_config
kubernetes.config.load_kube_config(
File "/usr/local/lib/python3.8/dist-packages/kubernetes/config/kube_config.py", line 813, in load_kube_config
loader = _get_kube_config_loader(
File "/usr/local/lib/python3.8/dist-packages/kubernetes/config/kube_config.py", line 770, in _get_kube_config_loader
raise ConfigException(
fatal: [gagnon-m1]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_key": null,
            "api_version": "v1",
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "context": "kubernetes-admin@kubernetes",
            "host": null,
            "impersonate_groups": null,
            "impersonate_user": null,
            "kind": "ConfigMap",
            "kubeconfig": "/play-config/gagnon.config",
            "name": "kube-proxy",
            "namespace": "kube-system",
            "no_proxy": null,
            "password": null,
            "patch": [
                {
                    "op": "replace",
                    "path": "/data/config.conf/metricsBindAddress",
                    "value": "0.0.0.0"
                }
            ],
            "persist_config": null,
            "proxy": null,
            "proxy_headers": null,
            "username": null,
            "validate_certs": null,
            "wait": false,
            "wait_condition": null,
            "wait_sleep": 5,
            "wait_timeout": 120
        }
    },
    "msg": "Failed to load kubeconfig due to Invalid kube-config file. No configuration found."
}
I've verified that the referenced kubeconfig file (/play-config/gagnon.config) exists in the container. Also, I have been using this config file for months with no problems, so I'm pretty sure it's valid.
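For what it's worth, here is a quick check I can run from a Python shell inside the container (just my own sketch, using the path and context from my setup; it calls the same kubernetes client function that shows up in the module's traceback above):

# check_kubeconfig.py - my own sanity check, not part of the playbook.
# It loads the kubeconfig with the same kubernetes client call that
# kubernetes.core.k8s_json_patch ends up making per the traceback.
from kubernetes import config

config.load_kube_config(
    config_file="/play-config/gagnon.config",   # path as mounted in the container
    context="kubernetes-admin@kubernetes",
)
contexts, active = config.list_kube_config_contexts(
    config_file="/play-config/gagnon.config")
print("contexts:", [c["name"] for c in contexts], "active:", active["name"])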
Does anyone have an idea what's wrong? I assume I have met all the dependencies; otherwise I wouldn't expect this task to run at all (or it would at least fail for a different reason).
UPDATE:
I suspect, but have not been able to verify, that the problem stems from an incorrect path specified in the kubernetes.core.k8s_json_patch command.
If you dump a ConfigMap as JSON, each entry under data is not represented as nested JSON, but rather as just a single string.
For example:
{
    "apiVersion": "v1",
    "data": {
        "config.conf": "apiVersion: kubeproxy.config.k8s.io/v1alpha1\nbindAddress: 0.0.0.0\nbindAddressHardFail: false\nclientConnection:\n acceptContentTypes: \"\"\n burst: 0\n contentType: \"\"\n kubeconfig: /var/lib/kube-proxy/kubeconfig.conf\n qps: 0\nclusterCIDR: \"\"\nconfigSyncPeriod: 0s\nconntrack:\n maxPerCore: null\n min: null\n tcpCloseWaitTimeout: null\n tcpEstablishedTimeout: null\ndetectLocal:\n bridgeInterface: \"\"\n interfaceNamePrefix: \"\"\ndetectLocalMode: \"\"\nenableProfiling: false\nhealthzBindAddress: \"\"\nhostnameOverride: \"\"\niptables:\n masqueradeAll: false\n masqueradeBit: null\n minSyncPeriod: 0s\n syncPeriod: 0s\nipvs:\n excludeCIDRs: null\n minSyncPeriod: 0s\n scheduler: \"\"\n strictARP: false\n syncPeriod: 0s\n tcpFinTimeout: 0s\n tcpTimeout: 0s\n udpTimeout: 0s\nkind: KubeProxyConfiguration\nmetricsBindAddress: 0.0.0.0\nmode: \"\"\nnodePortAddresses: null\noomScoreAdj: null\nportRange: \"\"\nshowHiddenMetricsForVersion: \"\"\nudpIdleTimeout: 0s\nwinkernel:\n enableDSR: false\n forwardHealthCheckVip: false\n networkName: \"\"\n rootHnsEndpointName: \"\"\n sourceVip: \"\"",
        "kubeconfig.conf": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\n server: https://172.20.8.68:6443\n name: default\ncontexts:\n- context:\n cluster: default\n namespace: default\n user: default\n name: default\ncurrent-context: default\nusers:\n- name: default\n user:\n tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token"
    },
    "kind": "ConfigMap",
    "metadata": {
        "annotations": {
            "kubeadm.kubernetes.io/component-config.hash": "sha256:aa87680dfe2321f98df103555d18d439916b19e0bf23bd0f98bb3e27c5adfc08"
        },
        "creationTimestamp": "2022-08-22T12:08:21Z",
        "labels": {
            "app": "kube-proxy"
        },
        "name": "kube-proxy",
        "namespace": "kube-system",
        "resourceVersion": "21706920",
        "uid": "97594de0-5aaa-4ea0-bd8c-a2f5fb357be7"
    }
}
I am trying to modify the value of the metricsBindAddress field contained within the config.conf item in the ConfigMap data. The play provided above has the path specified as /data/config.conf/metricsBindAddress. I think this is why the failure is occurring.
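To illustrate what I mean, here is a small standalone experiment of mine (plain Python with the jsonpatch library the module depends on; the sample data is a trimmed-down version of the dump above):

# pointer_test.py - my own experiment, outside Ansible.
# data["config.conf"] is one opaque YAML string, so I don't see how a
# JSON Pointer could descend into it to reach metricsBindAddress.
import jsonpatch

configmap = {
    "data": {
        "config.conf": "kind: KubeProxyConfiguration\nmetricsBindAddress: \"\"\n"
    }
}

patch = [{"op": "replace",
          "path": "/data/config.conf/metricsBindAddress",
          "value": "0.0.0.0"}]

try:
    print(jsonpatch.apply_patch(configmap, patch))
except Exception as exc:  # I expect some kind of pointer/patch error here
    print(type(exc).__name__, exc)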
To test my theory, I changed the path to /data[config.conf]metricsBindAddress. I had no idea what would happen, but to my surprise, it did not throw an error. However, it also did not change the field of interest. Progress, though.
I don't know the correct way to specify a path to get to what I need in the context of the Ansible kubernetes module.
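My best guess, which I have not verified against the cluster, is that a JSON Pointer can only address the whole config.conf value, so the patch would have to replace the entire string with an edited copy. Here is a sketch of that idea (my own, using PyYAML to edit the embedded document; I have not tried carrying it back into the k8s_json_patch task):

# replace_whole_string.py - my working theory, not verified.
# Edit the embedded YAML document, re-serialize it, and replace the
# whole /data/config.conf value in a single patch operation.
import jsonpatch
import yaml

configmap = {
    "data": {
        "config.conf": "kind: KubeProxyConfiguration\nmetricsBindAddress: \"\"\n"
    }
}

conf = yaml.safe_load(configmap["data"]["config.conf"])
conf["metricsBindAddress"] = "0.0.0.0"
new_conf = yaml.safe_dump(conf)

patch = [{"op": "replace", "path": "/data/config.conf", "value": new_conf}]
print(jsonpatch.apply_patch(configmap, patch))

If that is really the intended approach, I'd still like to know how to express it cleanly through the k8s_json_patch task, or whether there is a path syntax I'm missing.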