
I'm trying to restrict my OpenVPN pod so that it can access internal infrastructure, but only within the 'develop' namespace. I started with a simple policy that denies all egress traffic, but I see no effect and no feedback from the cluster that it was applied. I've read all the docs, official and otherwise, and didn't find a solution. Here is my policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: policy-openvpn
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: openvpn
  policyTypes:
  - Egress
  egress: []

I applied the network policy above with kubectl apply -f policy.yaml, but I don't see any effect: I'm still able to connect to anything from my OpenVPN pod. How do I debug this and see what's wrong with my policy?

It seems like a black box to me, and all I can do is trial and error, which doesn't seem like how it should work.

How can I validate that the policy finds the pods and is applied to them?

I'm using the latest Kubernetes cluster version provided by GKE.

I noticed that I hadn't checked 'Enable network policy' in the Google Cloud settings, and after I checked it my VPN simply stopped working. I don't know how to check what's happening, or why the VPN lets me connect but then blocks all network requests, which is very strange. Is there a way to debug this instead of randomly changing things?

animekun

3 Answers


GKE uses Calico to implement network policy. You need to enable network policy for both the master and the nodes before applying network policies. You can verify whether Calico is enabled by looking for the calico pods in the kube-system namespace.

kubectl get pods --namespace=kube-system
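
If no calico pods show up, enforcement is most likely not enabled yet. On GKE it can be enabled from the console ('Enable network policy') or with gcloud, roughly like this (a sketch, assuming a cluster named my-cluster; enabling it on the nodes typically recreates the node pools):

# Enable the network policy addon on the control plane
gcloud container clusters update my-cluster --update-addons=NetworkPolicy=ENABLED

# Enable network policy enforcement on the nodes
gcloud container clusters update my-cluster --enable-network-policy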

To inspect the network policies, you can use the following commands:

kubectl get networkpolicy
kubectl describe networkpolicy <networkpolicy-name>
newoxo

You can check the label used in the pod selector when you run:

k describe netpol <networkpolicy-name>
Name:         <networkpolicy-name>
Namespace:    default
Created on:   2020-06-08 15:19:12 -0500 CDT
Labels:       <none>
Annotations:  Spec:
  PodSelector:     app=nginx

The pod selector shows which labels this NetworkPolicy applies to. You can then list all pods with that label:

k get pods -l app=nginx
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-f7b9c7bb-5lt8j   1/1     Running   0          19h
nginx-deployment-f7b9c7bb-cf69l   1/1     Running   0          19h
nginx-deployment-f7b9c7bb-cxghn   1/1     Running   0          19h
nginx-deployment-f7b9c7bb-ppw4t   1/1     Running   0          19h
nginx-deployment-f7b9c7bb-v76vr   1/1     Running   0          19h
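
Applied to the policy from the question, the same check would look roughly like this (assuming the OpenVPN pod really carries the app=openvpn label):

kubectl describe networkpolicy policy-openvpn
kubectl get pods -l app=openvpn --show-labels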
Trigoman

Debug with netcat (nc):

$ kubectl exec <openvpnpod> -- nc -zv -w 5 <domain> <port>
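
For example, to test egress from the OpenVPN pod to an internal service (the pod name, address, and port below are placeholders):

$ kubectl exec openvpn-5d4f9c7b8-abcde -- nc -zv -w 5 10.0.0.10 443

With the deny-all egress policy enforced, you would expect this to fail with a timeout after 5 seconds instead of reporting a successful connection.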

P.S.: To deny all egress traffic, you don't need to declare the spec.egress key as an empty array; omitting it has the same effect:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: policy-openvpn
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: openvpn
  policyTypes:
  - Egress

ref: https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/network-policy-v1/

  • egress ([]NetworkPolicyEgressRule) ... If this field is empty then this NetworkPolicy limits all outgoing traffic (and serves solely to ensure that the pods it selects are isolated by default). ...
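
For the original goal of allowing egress from the OpenVPN pod only to the 'develop' namespace, a policy along these lines could be a starting point (a sketch: it assumes the namespace carries the kubernetes.io/metadata.name=develop label, which recent Kubernetes versions set automatically, and it does not allow DNS, which you may also need):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: policy-openvpn
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: openvpn
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: develop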
홍한석