Setup
I have a k8s cluster set up in the following manner:
- 1 Master Node
- 2 Worker Nodes
The cluster is set up using kubeadm and Flannel.
I have two different pod types:
- Java Proxy Server
- Java TCP Server
I initially created the Java Proxy Server as a StatefulSet. Each proxy server has its own state (the clients currently connected to it), but they are all expected to share a common state.
This common state is an up-to-date list of Java TCP Server pods and their associated IP addresses. My objective here is to ensure every proxy server has a current list of TCP servers it can proxy connections towards.
Each instance of the Java TCP server has its own unique state and is also deployed as a StatefulSet. The only commonality between the TCP server pods is that they can receive connections from the proxy servers.
The Proxy Servers must know whenever a TCP server pod comes up or goes down so they know what pods are available to proxy connections towards.
The TCP servers are delegated connections by the proxy servers; a TCP server is never handed a connection at random, and the connections are not load balanced.
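Since the TCP servers are a StatefulSet, they already sit behind a headless Service, so one zero-dependency baseline for the "current list of TCP servers" is to resolve that Service's DNS name: each A record is the IP of a ready pod. A minimal sketch, where the Service name `tcp-server` and the `default` namespace are assumptions (substitute your own); note this only polls and does not push up/down events:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class HeadlessLookup {
    // Resolve every A record behind a host name. For a headless Service,
    // this yields one IP per ready backing pod.
    static List<String> resolveEndpoints(String host) {
        try {
            return Arrays.stream(InetAddress.getAllByName(host))
                         .map(InetAddress::getHostAddress)
                         .collect(Collectors.toList());
        } catch (UnknownHostException e) {
            // Name not resolvable (e.g. no ready pods) -> empty list
            return List.of();
        }
    }

    public static void main(String[] args) {
        // "tcp-server.default.svc.cluster.local" is a hypothetical
        // headless-Service name -- adjust to your Service and namespace.
        System.out.println(resolveEndpoints("tcp-server.default.svc.cluster.local"));
    }
}
```

The trade-off is freshness: DNS answers reflect readiness with some delay, so this suits periodic refresh rather than instant up/down notification.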
Attempt
I have tried to utilize the Java Kubernetes Client and implemented a watch on my Proxy Servers like so:
// Imports assume client-java 6.x-9.x (io.kubernetes.client.openapi),
// which matches the nine-argument listPodForAllNamespaces used below.
import com.google.gson.reflect.TypeToken;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1ListMeta;
import io.kubernetes.client.openapi.models.V1Pod;
import io.kubernetes.client.openapi.models.V1PodList;
import io.kubernetes.client.openapi.models.V1PodStatus;
import io.kubernetes.client.util.Config;
import io.kubernetes.client.util.Watch;

ApiClient apiClient = Config.defaultClient();
apiClient.setReadTimeout(0); // watches are long-lived; disable the read timeout
System.out.println(apiClient.getBasePath());
Configuration.setDefaultApiClient(apiClient);
CoreV1Api api = new CoreV1Api();

// Initial list: supplies the resourceVersion from which the watch resumes.
V1PodList pods = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
V1ListMeta podsMeta = pods.getMetadata();
if (podsMeta != null) {
    String resourceVersion = podsMeta.getResourceVersion();
    // try-with-resources so the watch is closed even if hasNext()/next() throws
    try (Watch<V1Pod> watch = Watch.createWatch(
            apiClient,
            api.listPodForAllNamespacesCall(null, null, null, null, null, null, resourceVersion, null, true, null),
            new TypeToken<Watch.Response<V1Pod>>() {}.getType())) {
        while (watch.hasNext()) {
            Watch.Response<V1Pod> response = watch.next();
            V1Pod pod = response.object;
            V1PodStatus status = pod.getStatus();
            if (status != null) {
                System.out.printf("Pod IP: %s%n", status.getPodIP());
                System.out.printf("Pod Reason: %s%n", status.getReason());
            }
        }
    }
}
This works relatively well. The big problem is that, for such a simple task, the client library adds a massive 40 MB to my final JAR.
I know 40 MB might not be much to some people, but I feel there must be a more lightweight way to implement what I'm trying to do.
Is there a better way to track the pods being created and destroyed within the cluster that I am overlooking?
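To make "lightweight" concrete: the watch itself is just one streaming GET against the API server, so in principle the JDK's built-in `java.net.http.HttpClient` can drive it with no extra dependencies. A sketch of the shape I mean, assuming `kubectl proxy` is running on `localhost:8001` (so the sketch can skip TLS and token handling), with a hypothetical `app=tcp-server` label selector and a deliberately naive field extractor standing in for a real JSON parser:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PodWatchSketch {
    // Naive extraction of a string field's value from one JSON watch-event
    // line. Good enough for "type" and "podIP" in a sketch; a real
    // implementation would use an actual JSON parser.
    static String extractField(String json, String key) {
        String marker = "\"" + key + "\":\"";
        int start = json.indexOf(marker);
        if (start < 0) return null;
        start += marker.length();
        int end = json.indexOf('"', start);
        return end < 0 ? null : json.substring(start, end);
    }

    public static void main(String[] args) {
        HttpRequest request = HttpRequest.newBuilder()
                // ?watch=true makes the API server stream one JSON event per
                // line for as long as the connection stays open.
                .uri(URI.create("http://localhost:8001/api/v1/pods?watch=true&labelSelector=app%3Dtcp-server"))
                .build();
        try {
            HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofLines())
                    .body()
                    .forEach(line -> System.out.printf("%s -> %s%n",
                            extractField(line, "type"), extractField(line, "podIP")));
        } catch (Exception e) {
            System.out.println("API server not reachable: " + e.getMessage());
        }
    }
}
```

In-cluster, the proxy would be replaced by the service-account token and CA bundle mounted into the pod, which adds auth/TLS plumbing but still no client library.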