Similar question: How to expose kube-dns service for queries outside cluster?
I have a PerconaDB instance in a VM in Google Compute Engine. Next to it runs a Kubernetes cluster whose services connect to the PerconaDB.
When I log in with the MySQL client and run `show processlist;`, I see the following:
```
| Id  | User | Host               | db   | Command | Time | State | Info | Rows_sent | Rows_examined |
| 175 | user | 10.12.142.24:46124 | user | Sleep   | 14   |       | NULL | 0         | 0             |
| 176 | user | 10.12.142.24:46126 | user | Sleep   | 14   |       | NULL | 0         | 0             |
| 177 | user | 10.12.122.42:60806 | user | Sleep   | 2    |       | NULL | 0         | 0             |
| 178 | user | 10.12.122.43:55164 | user | Sleep   | 14   |       | NULL | 1         | 0             |
| 179 | user | 10.12.122.43:55166 | user | Sleep   | 4    |       | NULL | 1         | 0             |
| 180 | user | 10.12.141.11:35944 | user | Sleep   | 14   |       | NULL | 1         | 0             |
```
Notice all the different IPs; I have no idea what they belong to. They are pods inside the Kubernetes cluster, and I would like to see their names, so that instead of `10.12.142.24:46124` the list would show `myservice-0dkd0:46124`.
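For now the only way I have to map an IP to a pod is a one-off lookup with kubectl (whether the `status.podIP` field selector is supported depends on the cluster version, so I also use the grep fallback):

```sh
# Find the pod that owns a given client IP from the processlist.
kubectl get pods --all-namespaces -o wide \
  --field-selector status.podIP=10.12.142.24

# Fallback: list all pods with their IPs and grep for the one in question.
kubectl get pods --all-namespaces -o wide | grep 10.12.142.24
```

Doing that by hand for every row of the processlist is obviously not practical.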
I thought the solution would be to somehow link the `kube-dns` service to the PerconaDB VM, so that MySQL's reverse DNS lookups for client IPs would be answered by the cluster DNS, but I have no idea how to do that correctly. Also, this is now running in production, so I don't want to experiment too much.
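To be concrete, this is roughly what I have in mind, though I have not dared to try it (a sketch only: the `kube-dns-internal` name is made up, the annotation is the GKE-specific one for internal load balancers, and `ILB_IP_HERE` is a placeholder for whatever address the load balancer gets):

```sh
# Rough sketch: expose kube-dns inside the VPC via a GCP internal load
# balancer so the PerconaDB VM can reach it. UDP only, because a single
# GCP internal load balancer cannot mix TCP and UDP ports.
kubectl expose service kube-dns --namespace=kube-system \
  --name=kube-dns-internal --type=LoadBalancer --port=53 --protocol=UDP
kubectl annotate service kube-dns-internal --namespace=kube-system \
  "cloud.google.com/load-balancer-type=Internal"

# From the PerconaDB VM, test whether reverse lookups work at all before
# touching /etc/resolv.conf.
dig @ILB_IP_HERE -x 10.12.142.24 +short
```

Even if that works, I understand MySQL only displays hostnames when the reverse lookup succeeds (and `skip_name_resolve` is off), and I am not sure kube-dns serves PTR records for pod IPs at all, which is really what I am asking.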