Although a LoadBalancer Service is the recommended solution (especially in a cloud environment), it's worth mentioning that NodePort also has load-balancing capabilities. The fact that you're accessing your NodePort Service on a particular node doesn't mean that you can only reach Pods scheduled on that particular node. As you can read in the NodePort Service specification: "Each node proxies that port (the same port number on every Node) into your Service."
So by accessing port 30080 on one particular node, your request doesn't go directly to some random Pod scheduled on that node. It is proxied to the Service object, which is an abstraction that spans all nodes. This is probably the key point here: your NodePort Service isn't tied in any way to the node whose IP you use to access your Pods. Therefore a NodePort Service is able to route client requests to Pods across the whole cluster (in kube-proxy's default iptables mode a backend Pod is picked at random for each connection; strict round robin is available in IPVS mode).
You can easily verify this using the following Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      initContainers:
      - name: init-myservice
        image: nginx:1.14.2
        command: ['sh', '-c', "echo $MY_NODE_NAME > /usr/share/nginx/html/index.html"]
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
This will allow you to test which node your HTTP request lands on, as each Pod serves a page containing the name of the node it runs on. You may additionally need to scale this Deployment up a bit to make sure that all nodes are used:
kubectl scale deployment nginx-deployment --replicas=9
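Before testing, it may help to wait until the scale-up has completed; `kubectl rollout status` blocks until all replicas are ready:

```shell
# Blocks until all replicas of the Deployment are up and ready.
kubectl rollout status deployment/nginx-deployment
```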
Then verify that your pods are scheduled on different nodes:
kubectl get pods -o wide
List all your nodes:
kubectl get nodes -o wide
and pick the IP address of a node that you want to use to access your pods.
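If you prefer not to read the IPs off the table by hand, a jsonpath query can print each node's name together with its internal IP (a sketch; it assumes your nodes report an address of type InternalIP):

```shell
# Prints "<node-name>  <internal-ip>" for every node in the cluster.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```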
Now you can expose the Deployment by running:
kubectl expose deployment nginx-deployment --type NodePort --port 80 --target-port 80
If you want to specify the port number yourself, e.g. as 30080, apply the following NodePort Service definition instead, as kubectl expose doesn't allow you to specify the exact nodePort value:
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
Then try to access your Pods exposed via the NodePort Service using the IP of the previously chosen node. You may need to try both normal and private/incognito modes, or even a different browser (a simple refresh may not work, as the browser tends to reuse the same connection), but eventually you will see that different requests land on Pods scheduled on different nodes.
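Instead of juggling browser windows, a short curl loop avoids connection reuse entirely. This is a sketch; NODE_IP is a placeholder for the node address you picked earlier:

```shell
# NODE_IP is a placeholder - substitute the IP of the node you chose.
NODE_IP=203.0.113.10
# Each curl invocation opens a fresh connection; the response body is the
# node name written by the init container, so counting identical lines
# shows how requests were distributed across the nodes.
for i in $(seq 1 10); do
  curl -s "http://${NODE_IP}:30080"
done | sort | uniq -c
```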
Keep in mind that if you decide to use NodePort, you won't be able to use well-known ports. Technically it is feasible, as you can change the default port range (30000-32767) to something like 1-1024 in the kube-apiserver configuration using the --service-node-port-range option, but it's not recommended as it might lead to unexpected issues such as port conflicts with services running on the nodes themselves.
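For reference, on a kubeadm-managed cluster that flag would go into the kube-apiserver static Pod manifest (a sketch; the path assumes the default kubeadm layout):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (default kubeadm location)
spec:
  containers:
  - command:
    - kube-apiserver
    # ...existing flags...
    - --service-node-port-range=1-1024   # not recommended; shown only for illustration
```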