
I have:

  • EKS deployed by an aws-cdk script, with kubectl enabled, and apps deployed via eks.Cluster.addResource()
  • AWS Secrets Manager with a set of secrets I want to be available to the EKS applications

I tried to deploy a Secret this way:

  import * as sm from "@aws-cdk/aws-secretsmanager";

  // `scope` and `awsSecretStorageArn` come from the surrounding construct
  getSecret(secretKey: string): string {
    let secretTokens = sm.Secret.fromSecretArn(scope, "ImportedSecrets", awsSecretStorageArn);
    return secretTokens.secretValueFromJson(secretKey).toString();
  }

  createKubernetesImagePullSecrets(k8s: eks.Cluster): void {
    let eksSecretStorageName = this.env.awsResourcesConfig.k8sImagePullSecretStorageName;
    k8s.addResource(eksSecretStorageName, {
      apiVersion: "v1",
      kind: "Secret",
      metadata: {
        name: eksSecretStorageName,
      },
      data: {
        ".dockerconfigjson": this.getSecret('hub-secret'),
      },
      type: "kubernetes.io/dockerconfigjson",
    });
  }

I'm getting an error from CloudFormation:

Secret in version "v1" cannot be handled as a Secret: v1.Secret.ObjectMeta: v1.ObjectMeta.TypeMeta: Kind: Data: decode base64: illegal base64 data at input byte 0

This happens because the secret token is not resolved at synthesis time: the ".dockerconfigjson" field value ends up in the manifest as an unresolved placeholder like ${Token[TOKEN.417]}, which is not valid base64.
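For context, Kubernetes only accepts base64-encoded values in a Secret's `data` field (plain-text values go under `stringData`), so an unresolved CDK token placeholder fails the base64 decode at the very first character. A minimal illustration:

```typescript
// Kubernetes `data` values must be valid base64; an unresolved CDK
// token placeholder is not, hence "illegal base64 data at input byte 0".
const token = "${Token[TOKEN.417]}"; // what actually lands in the manifest
const isBase64 = /^[A-Za-z0-9+/]*={0,2}$/.test(token);
console.log(isBase64); // false: '$' is not a base64 character
```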

Is there a way to deploy the EKS Secret resource and expand secret tokens correctly during deployment?

Andrew

1 Answer


I created a temporary workaround by downloading a plain-text version of the secrets with the AWS CLI. This is not safe, since the secret values end up in plain text in the synthesized template, but it works. Do not use this if you have a more secure solution.

  import { execSync } from "child_process";

  extractSecretValues(awsSecretStorageArn: string) : Map<string, string> {
    let map = new Map<string, string>();
    let secretsContent = execSync(`aws secretsmanager get-secret-value --secret-id ${awsSecretStorageArn}`).toString();
    let secrets = JSON.parse(secretsContent);
    if (!secrets)
      throw new Error(`Secret values could not be extracted from ${awsSecretStorageArn}`);
    if (secrets.SecretString) {
      let secretValuesObj = JSON.parse(secrets.SecretString);
      for (let [secretKey, secretValue] of Object.entries<string>(secretValuesObj)) {
        map.set(secretKey, secretValue);
      }
    }
    return map;
  }

  let secretValueMap = extractSecretValues(awsSecretStorageArn);

  createKubernetesImagePullSecrets(k8s: eks.Cluster): void {
    let eksSecretStorageName = this.env.awsResourcesConfig.k8sImagePullSecretStorageName;
    k8s.addResource(eksSecretStorageName, {
      apiVersion: "v1",
      kind: "Secret",
      metadata: {
        name: eksSecretStorageName,
      },
      data: {
        // the stored value must already be base64-encoded; use stringData for plain text
        ".dockerconfigjson": secretValueMap.get('hub-secret'),
      },
      type: "kubernetes.io/dockerconfigjson",
    });
  }
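Note that the value placed under `data` has to be base64-encoded already. If the value stored in Secrets Manager is plain JSON, it can be encoded first; a sketch, where `dockerConfigJson` stands in for whatever `secretValueMap.get('hub-secret')` returns:

```typescript
// Hypothetical plain-text .dockerconfigjson as it might come from Secrets Manager
const dockerConfigJson = '{"auths":{"hub.example.com":{"auth":"dXNlcjpwYXNz"}}}';

// Kubernetes `data` fields require base64; Buffer handles the encoding.
const encoded = Buffer.from(dockerConfigJson, "utf8").toString("base64");
console.log(encoded.startsWith("eyJhdXRocyI6")); // true: '{"auths":' encodes to this prefix
```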
Andrew
  • Yeah, it's too bad the secret is saved in the template this way. The only way around it seems to be to create the secret outside the CDK, e.g. with the aws CLI, eksctl, or kubectl. That would require creating the cluster in a first CDK script, setting the secret for that cluster, and then referencing the pod that uses the secret in another CDK script. Quite complicated. CDK's normal `SecretValue` solution doesn't seem to be supported here either (at least for Systems Manager SecureStrings: "SSM Secure reference is not supported in: [Custom::AWSCDK-EKS-KubernetesResource/Properties/Manifest]") – kossmoboleat Jul 03 '20 at 08:57
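The out-of-band kubectl approach mentioned in the comment could be sketched roughly as follows. This is an untested sketch with placeholder names (`hub-pull-secret` is hypothetical); it keeps the secret value out of the CloudFormation template entirely by piping it to kubectl via stdin:

```typescript
import { execSync } from "child_process";

// Build the kubectl command; kept separate so it is easy to inspect.
function buildPullSecretCommand(secretName: string): string {
  return (
    `kubectl create secret generic ${secretName} ` +
    "--type=kubernetes.io/dockerconfigjson " +
    "--from-file=.dockerconfigjson=/dev/stdin"
  );
}

// Pipe the plain-text config in via stdin so the value never appears
// in the command line or in any synthesized template.
function createPullSecretOutOfBand(dockerConfigJson: string): void {
  execSync(buildPullSecretCommand("hub-pull-secret"), { input: dockerConfigJson });
}
```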