
I have the following issue, and I am not sure whether I am doing something wrong or it simply does not work as I expect.

  1. I have a consul cluster with ACL enabled.
  2. ACL default policy is set to DENY ("acl_default_policy": "deny",)
  3. For now I am always using the main management CONSUL token for communication.
  4. I also have VAULT and NOMAD configured with the management token, and "vault.service.consul" and "nomad.service.consul" are registered in consul.
  5. I specifically configured NOMAD's consul stanza with the consul management token so that Nomad can communicate with consul and register itself.
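For context, points 1 and 2 correspond to a consul agent config roughly like this (a sketch using the legacy ACL config keys; the datacenter name and token value are placeholders):

```
{
  "acl_datacenter": "dc1",
  "acl_default_policy": "deny",
  "acl_down_policy": "extend-cache",
  "acl_master_token": "management token"
}
```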

consul {
  address = "127.0.0.1:8500"
  token   = "management token"
}

I am using NOMAD to schedule Docker containers. Those containers need to populate configuration files from the CONSUL KV store, and I had that working with consul-template (when ACLs were not enabled).
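For concreteness, the consul-template setup I had working without ACLs looks roughly like this (the KV path and file names are just illustrative placeholders):

```
# app.ctmpl - rendered into the container's config file
db_host = {{ key "myapp/config/db_host" }}
```

run inside the container with something like `consul-template -template "app.ctmpl:/etc/myapp/app.conf" -once`.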

Now my issue is that with ACLs enabled in CONSUL, the Docker containers are NOT able to get values from the CONSUL KV store: they receive 403 (permission denied) errors because of the ACLs. I thought that since I have configured the consul stanza in NOMAD like:

consul {
  address = "127.0.0.1:8500"
  token   = "management token"
}

all the jobs scheduled by NOMAD would be able to use that management token, and so the Docker containers would be able to communicate with the CONSUL KV store?!

If I place the management token as a Docker environment variable in the NOMAD job description, then it works:

env {
  "CONSUL_HTTP_TOKEN" = "management token"
}

However, I do not want to put the management token in the job description, since the job files are checked into git.
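What I would ideally do is give the containers a restricted token with read-only KV access instead of the management token. A sketch of creating one via the legacy ACL HTTP API (the token name and rules here are illustrative, and the management token is assumed to be exported as CONSUL_HTTP_TOKEN):

```
# Create a read-only KV token via the legacy ACL API
curl -s -X PUT http://127.0.0.1:8500/v1/acl/create \
  -H "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
  -d '{
    "Name": "kv-read-only",
    "Type": "client",
    "Rules": "key \"\" { policy = \"read\" }"
  }'
```

But I would still need some way to get that token to the containers without committing it to git.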

Am I doing something wrong, or does it simply not work like that?

Thank you in advance.

kereza