
I am trying to deploy Jenkins using helm with JCasC to get Vault secrets. I am using a local minikube to create my k8s cluster and a local Vault instance on my machine (not in the k8s cluster).

Even though I am using both initContainerEnv and containerEnv, I am not able to reach the Vault values. For the CASC_VAULT_TOKEN value I am using the Vault root token. This is the helm command I run locally:

helm upgrade --install -f values.yml mijenkins jenkins/jenkins
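
For reference, one way to check whether these variables actually end up in the rendered pod spec is to template the chart locally and grep for them (a quick sanity check, assuming the jenkins repo is already added):

helm template mijenkins jenkins/jenkins -f values.yml | grep -B1 -A1 CASC_VAULT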

And here is my values.yml file:

controller:
  installPlugins:
    # need to add this configuration-as-code due to a known jenkins issue: https://github.com/jenkinsci/helm-charts/issues/595
  - "configuration-as-code:1414.v878271fc496f"
  - "hashicorp-vault-plugin:latest"

  # passing initial environment values to the init container
  initContainerEnv:
  - name: CASC_VAULT_TOKEN
    value: "my-vault-root-token"
  - name: CASC_VAULT_URL
    value: "http://localhost:8200"
  - name: CASC_VAULT_PATHS
    value: "cubbyhole/jenkins"
  - name: CASC_VAULT_ENGINE_VERSION
    value: "2"
  containerEnv:
  - name: CASC_VAULT_TOKEN
    value: "my-vault-root-token"
  - name: CASC_VAULT_URL
    value: "http://localhost:8200"
  - name: CASC_VAULT_PATHS
    value: "cubbyhole/jenkins"
  - name: CASC_VAULT_ENGINE_VERSION
    value: "2"

  JCasC:
    configScripts:
      here-is-the-user-security: |
        jenkins:
          securityRealm:
            local:
              allowsSignup: false
              enableCaptcha: false
              users:
                - id: "${JENKINS_ADMIN_ID}"
                  password: "${JENKINS_ADMIN_PASSWORD}"

And in my local Vault I can see/reach the values:

$ vault kv get cubbyhole/jenkins
============= Data =============
Key                       Value
---                       -----
JENKINS_ADMIN_ID          alan
JENKINS_ADMIN_PASSWORD    acosta
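
For reference, values like these can be written with something along these lines (the path and keys match the output above):

vault kv put cubbyhole/jenkins JENKINS_ADMIN_ID=alan JENKINS_ADMIN_PASSWORD=acosta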

Does anyone have an idea what I could be doing wrong?

alanmas

2 Answers


I haven't used Vault with Jenkins, so I'm not exactly sure about your particular situation, but I am very familiar with how finicky the Jenkins helm chart is. I was able to configure my securityRealm (with the Google Login plugin) by first creating a k8s secret with the values needed:

kubectl create secret generic googleoauth --namespace jenkins \
  --from-literal=clientid=${GOOGLE_OAUTH_CLIENT_ID} \
  --from-literal=clientsecret=${GOOGLE_OAUTH_SECRET}
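
To confirm the secret landed as expected, something like this should print the decoded client ID (standard kubectl, nothing chart-specific):

kubectl get secret googleoauth --namespace jenkins -o jsonpath='{.data.clientid}' | base64 --decode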

then passing those values into the helm chart's values.yml via:

controller:
  additionalExistingSecrets:
  - name: googleoauth
    keyName: clientid
  - name: googleoauth
    keyName: clientsecret

then reading them into JCasC like so:

...
  JCasC:
    configScripts:
      authentication: |
        jenkins:
          securityRealm:
            googleOAuth2:
              clientId: ${googleoauth-clientid}
              clientSecret: ${googleoauth-clientsecret}

In order for that to work, the values.yml also needs to include the following settings:

serviceAccount:
  name: jenkins

rbac:
  readSecrets: true # allows jenkins serviceAccount to read k8s secrets

Note that I am running Jenkins as a k8s serviceAccount called jenkins in the namespace jenkins.
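
For what it's worth, my understanding is that rbac.readSecrets boils down to granting the service account read access to secrets, roughly like the following Role/RoleBinding (a hand-written approximation, not the chart's exact template):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-read-secrets
  namespace: jenkins
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-read-secrets
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-read-secrets
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins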

david_beauchamp
  • Actually, what I was trying to achieve was to reach my local Vault server from my minikube cluster, but with the info you shared I realized that the issue was not my code; it was actually the communication between my Vault and my local Kubernetes cluster. I found a solution by exposing the Vault service! I will share it below. THANKS! – alanmas Apr 08 '22 at 01:50

After debugging my Jenkins installation I figured out that the main issue was neither my values.yml nor my JCasC integration, as I was able to see the containerEnv values when I went inside my Jenkins pod with:

kubectl exec -ti mijenkins-0 -- sh
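
A quicker, non-interactive way to run the same check (assuming the pod name mijenkins-0 from above):

kubectl exec mijenkins-0 -- env | grep CASC_VAULT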

So I needed to expose my Vault server so that Jenkins is able to reach it; I used this Vault tutorial to achieve it. In brief, instead of starting the dev server the usual way:

vault server -dev

we need to use:

vault server -dev -dev-root-token-id root -dev-listen-address 0.0.0.0:8200

Then we need to export an environment variable for the vault CLI to address the Vault server.

export VAULT_ADDR=http://0.0.0.0:8200
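
At this point a plain vault status should succeed against the new address. Keep in mind the dev server stores everything in memory, so any secrets written before the restart have to be re-created:

vault status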

After that, we need to determine the Vault address to which we are going to point our Jenkins. To do that, we need to start a minikube SSH session:

minikube ssh

Within this SSH session, retrieve the value of the Minikube host:

$ dig +short host.docker.internal
192.168.65.2

After retrieving the value, we can query the status of the Vault server to verify network connectivity.

$ dig +short host.docker.internal | xargs -I{} curl -s http://{}:8200/v1/sys/seal-status

And now we can connect our Jenkins pod with our Vault; we just need to change CASC_VAULT_URL to http://192.168.65.2:8200 in our values.yml file like this:

  - name: CASC_VAULT_URL
    value: "http://192.168.65.2:8200"
alanmas