After much deliberation and many dead ends, we finally arrived at a solution that works for us. Part of the problem is that, at the time of writing, Docker 1.12 is still young and introduces a number of concepts that have to be understood before it all makes sense. In our case, our previous experience with pre-1.12 variants of Swarm hindered our thinking rather than helped it.
The solution we utilised to deploy a Consul K/V service for our swarm goes as follows:
Create an overlay network called 'consul'. This creates an address space for our service to operate within.
docker network create --driver overlay --subnet 10.10.10.0/24 consul
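As a quick sanity check (not part of the original steps; run it on a manager node), you can inspect the new network to confirm the driver and subnet took effect:

```shell
# Confirm the overlay network exists with the expected driver and subnet;
# this should print: overlay 10.10.10.0/24
docker network inspect consul \
  --format '{{.Driver}} {{(index .IPAM.Config 0).Subnet}}'
```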
Deploy the Consul server cluster into the new overlay. We have three hosts that we use as manager nodes, and we wanted the Consul server containers to run on these rather than on the app servers, hence the 'constraint' flag:
docker service create \
-e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' \
--name consulserver \
--network consul \
--constraint 'node.role == manager' \
--replicas 3 \
consul agent -server -bootstrap-expect=3 -bind=0.0.0.0 -retry-join="10.10.10.2" -data-dir=/tmp
The key here is that swarm allocates a new VIP (10.10.10.2), the first free address in the consul network, which load-balances across the three new instances.
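One way to confirm which VIP swarm actually assigned (a hedged verification step, not from the original write-up) is to inspect the service's endpoint:

```shell
# List the virtual IPs attached to the consulserver service;
# the address on the consul network should be 10.10.10.2/24.
docker service inspect consulserver \
  --format '{{range .Endpoint.VirtualIPs}}{{.Addr}} {{end}}'
```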
Next we deployed an agent service:
docker service create \
-e 'CONSUL_BIND_INTERFACE=eth0' \
-e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true, "retry_join":["10.10.10.2"]}' \
--publish "8500:8500" \
--replicas 1 \
--network consul \
--name consulagent \
--constraint 'node.role != manager' \
consul agent -data-dir=/tmp -client 0.0.0.0
Here we specify the VIP of the consulserver service. Consul itself won't resolve names for the join; other applications may do better, allowing the service name "consulserver" to be used instead of the VIP.
With this done, any other service can access the consulagent by joining the consul network and resolving the name "consulagent". The consulagent service can be scaled (or perhaps deployed as a global service) as required.
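As a rough illustration (the endpoint and tooling here are assumptions, not part of the original setup), any task attached to the consul network can reach the agent by service name via Consul's HTTP API:

```shell
# Run from inside a container attached to the 'consul' overlay network:
# swarm's embedded DNS resolves the service name "consulagent" to its VIP.
wget -qO- http://consulagent:8500/v1/status/leader
```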
Publishing port 8500 makes the service available at the edge of the swarm; it could be dropped if you don't need to make it available to non-swarm services.
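For example (hypothetical hostname; thanks to the routing mesh, any node in the swarm will answer on the published port), a non-swarm client could exercise the K/V store directly:

```shell
# Write a key through any swarm node, then read it back.
# 'swarm-node-1' is a placeholder for one of your node hostnames.
curl -X PUT -d 'hello' http://swarm-node-1:8500/v1/kv/demo/greeting
curl http://swarm-node-1:8500/v1/kv/demo/greeting?raw   # should print: hello
```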