
Setup description

I have the following scenario: I created a Build Pipeline in Azure DevOps and, after setting up my Kubernetes cluster, I want to get a specific pod name using kubectl. I am doing this via V1 of the "Deploy to Kubernetes" task (Kubernetes@1), which looks like this:

steps:
- task: Kubernetes@1
  displayName: 'Get pod name'
  inputs:
    azureSubscriptionEndpoint: 'Azure Pay-as-you-Go (anonymized)'
    azureResourceGroup: MyK8sDEV
    kubernetesCluster: myCluster
    command: get
    arguments: 'pods -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}"'

The task runs successfully, and I want to get the output string of the above command. In the pipeline visual designer it shows me an output variable of undefined.KubectlOutput that is being written to.

Problem statement

I have created a subsequent Bash script task directly after the above kubectl task. If I read the variable $KUBECTLOUTPUT or $UNDEFINED_KUBECTLOUTPUT there, I just get an empty string. What am I doing wrong? I just need the output of the previous command as a variable.
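For reference, the follow-up task is essentially this (reconstructed; the display name is illustrative):

- bash: |
    echo "KUBECTLOUTPUT: $KUBECTLOUTPUT"
    echo "UNDEFINED_KUBECTLOUTPUT: $UNDEFINED_KUBECTLOUTPUT"
  displayName: 'Read pod name'

Both echoes print an empty value.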

My goal with this step

I am trying to make sure that the application I deployed with a helm chart in the previous step is up and running. In the next step I need to run some scripts inside the application pods (using kubectl exec), so I want to make sure that at least one pod hosting the app is up and running before I execute commands against it. In the meantime I realized that I can skip the checking step by using the --wait flag when deploying the helm chart, but I still have issues using kubectl from within the bash script.
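For illustration, these are the two variants I am weighing (the release/chart names match the labels above, the timeout is arbitrary):

# variant 1: let helm block until the release is up (helm v2 syntax)
helm upgrade --install ca ./hlf-ca --wait --timeout 300

# variant 2: deploy without --wait, then check readiness explicitly
kubectl wait --for=condition=ready pod -l "app=hlf-ca,release=ca" --timeout=300s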

  • Can you take a step back and describe what you are trying to achieve? – 4c74356b41 Jan 27 '19 at 09:31
  • Sure, I am trying to make sure that the application I deployed with a helm chart in the previous step is up and running. In the next step I need to run some scripts inside the application pods (using kubectl exec) so I want to make sure that at least 1 pod hosting the app is up and running so that I can execute commands against it. – Razvan Jan 27 '19 at 14:44
  • Why don't you want to do it all in one go, so you don't have to pass variables between steps? – 4c74356b41 Jan 27 '19 at 14:46
  • I would do so gladly if I knew how; it would spare quite some effort, as my later pipeline requires a couple of additional such steps. Can I do it by parameterizing the helm chart in some way, or with Kubernetes' own methods? – Razvan Jan 27 '19 at 14:51

3 Answers


If you give the kubectl task a name, e.g. SomeNameForYourTask, like below:

- task: Kubernetes@1
  name: SomeNameForYourTask
  displayName: some display name
  inputs:
    connectionType: Kubernetes Service Connection
    ...

you will be able to access the kubectl command output using

echo $(SomeNameForYourTask.KubectlOutput)

or

echo $(SomeNameForYourTask.KUBECTLOUTPUT)

or

echo $SOMENAMEFORYOURTASK_KUBECTLOUTPUT

in the following script task(s). Note that the output must not exceed 32766 characters (per the task source: https://github.com/microsoft/azure-pipelines-tasks/blob/b0e99b6d8c7d1b8eba65d9ec08c118832a5635e3/Tasks/KubernetesV1/src/kubernetes.ts).
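Putting it together with the task from the question, a minimal sketch (connection details copied from the question; the task and display names are illustrative):

steps:
- task: Kubernetes@1
  name: getPodName
  displayName: 'Get pod name'
  inputs:
    azureSubscriptionEndpoint: 'Azure Pay-as-you-Go (anonymized)'
    azureResourceGroup: MyK8sDEV
    kubernetesCluster: myCluster
    command: get
    arguments: 'pods -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}"'
- bash: |
    echo "Pod name: $(getPodName.KubectlOutput)"
  displayName: 'Use pod name'

This also lines up with the empty value in the question: without a name the variable prefix shows up as undefined, and the output is not usable from later scripts.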


This is what I've been using:

# locate the kubeconfig downloaded by the preceding kubectl task
config=`find . -name config`

# list matching deployments as JSON, extract their names with jq,
# then update the image on each one
kubectl --kubeconfig $config get -n $(k8sEnv) deploy --selector=type=$(containerType) -o json \
  | jq -r '.items[].metadata.name' \
  | xargs -I {} kubectl --kubeconfig $config set -n $(k8sEnv) image deploy/{} containername=registry.azurecr.io/$(containerImage):$(BUILD.BUILDNUMBER) --record=true

This will find all the deployments with the given label and run kubectl set image on each of them; you can adapt it to your needs easily. The only prerequisite is that a kubectl (Kubernetes@1) task runs before this one, so that your agent downloads the kubectl config from Azure DevOps (a minimal sketch of such a task follows below).
The above has to run in this directory:

/home/vsts/work/_temp/kubectlTask
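A minimal sketch of that prerequisite task (connection details copied from the question; any harmless command such as get nodes works, since the point is only that running the task makes the agent download the kubeconfig):

- task: Kubernetes@1
  displayName: 'Download kubeconfig'
  inputs:
    azureSubscriptionEndpoint: 'Azure Pay-as-you-Go (anonymized)'
    azureResourceGroup: MyK8sDEV
    kubernetesCluster: myCluster
    command: get
    arguments: nodes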
  • Thanks for the pointer, however the config file is not being found. I've been trying for some time now, but I only get either a visual studio config file or the git config file as the find result. Changing the working directory to /home/vsts/work/_temp/kubectlTask does not bring anything. I've checked the folder and it contains only two files with cryptic names: 1548619950257 and 1548619970089. I have placed a kubectl task before the script task as you advised. Where should I look next? – Razvan Jan 27 '19 at 20:31
  • are you using latest kubectl task version, if not - try using that? you can widen the search and just do `find / -name config` or try running build with diagnostic logging and see if you can spot where kubectl task puts (or gets) its config from – 4c74356b41 Jan 27 '19 at 20:33
  • I have tried to widen the search and researched how the kubectl task itself accesses the cluster (see my detailed post below), but the config file is not being written unless I explicitly use az aks get-credentials in my script. Afterwards it does seem to work but I was uneasy about the unencrypted access data in the config file on a microsoft-hosted agent. – Razvan Jan 28 '19 at 09:04

After a couple of hours of different attempts at figuring out how Azure DevOps connects to the AKS cluster, I found that, as far as I can tell, it uses an OAuth access token. One can access this token via the System.AccessToken variable (if the agent job is allowed access to the token; this is a configuration option and it's off by default). What I could not figure out is how to use this token with kubectl inside a script, so I have abandoned this path for now.
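For completeness, a minimal sketch of how the token can at least be surfaced in a script; mapping it in via env is required, and this does not yet authenticate kubectl:

- bash: |
    # the token is only visible here because it is mapped in below
    echo "Token length: ${#SYSTEM_ACCESSTOKEN}"
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)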

Also, the job is running on a hosted Ubuntu agent (as in Microsoft-hosted), so it might be avoiding downloading the config file for security reasons, even though Microsoft itself maintains that the agents are single-use VMs and that "the virtual machine is discarded after one use" (see the MS docs).

What works on the hosted agent (I would still recommend some extra protection for production scenarios) is using Azure CLI commands to log in and get the cluster credentials:

az login
az aks get-credentials --resource-group=MyClusterDEV --name myCluster
kubectl […]
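Note that az login cannot prompt for credentials on a hosted agent, so in practice the login needs to be non-interactive, e.g. with a service principal (the three variables below are placeholders for pipeline secrets):

az login --service-principal \
  --username $(servicePrincipalId) \
  --password $(servicePrincipalKey) \
  --tenant $(tenantId)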

The alternative solution I used is to run the scripts on a local agent that already has the Kubernetes config file pre-configured. For this I simply created an additional agent job to run my scripts, so now I have:

  1. A general agent job (Hosted Ubuntu 16) doing the helm init and other basic setup tasks
  2. A local agent job (Windows) running more complex scripts against specific pods
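For the pod scripts themselves, the local agent essentially runs commands like these (shown as bash for readability; the selector is from the question, the script path is made up):

# grab the first pod backing the app
POD=$(kubectl get pods -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}")

# run a script inside that pod
kubectl exec $POD -- /bin/bash /scripts/setup.sh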