
I would like to deny outgoing connections from existing pods to a specific IP range. I created the following NetworkPolicy (NP), in which I excluded the database server's range (10.16.0.0/16) from the allowed egress CIDRs. The policy only takes effect after the pod is restarted (JDBC errors then appear in the log). If I apply the NetworkPolicy to a running pod, the pod is still able to communicate with the database. For another system (LDAP), the NP blocks communication immediately, without the pod having to be restarted.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-deny-ip
  namespace: <namespace>
spec:
  # empty selector: the policy applies to every pod in the namespace
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        # the database range (10.16.0.0/16 in my case)
        - <cidr>
  - to:
    # namespaceSelector and podSelector in the same list entry combine as AND:
    # kube-dns pods in the kube-system namespace
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kube-system"
      podSelector:
        matchLabels:
          k8s-app: "kube-dns"
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
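
To tell apart "the policy is not effective at all" from "only existing connections keep working", a throwaway pod can probe the database, since a fresh pod is subject to the policy from its very first connection. This is only a sketch: egress-probe, the busybox image, and 10.16.0.10:5432 are placeholders for a real pod name and the actual database endpoint.

apiVersion: v1
kind: Pod
metadata:
  name: egress-probe
  namespace: <namespace>
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox:1.36
    # plain TCP connect with a 3-second timeout; prints "reachable" or "blocked"
    command: ["sh", "-c", "nc -w 3 10.16.0.10 5432 </dev/null && echo reachable || echo blocked"]

kubectl logs egress-probe -n <namespace> then shows whether a brand-new connection is blocked, even while my long-running pod still talks to the database.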

I assumed that communication would be blocked immediately and that errors would appear in the log. I tried blocking all outgoing traffic from the pod (a deny-all egress policy, roughly as sketched below), but it did not affect the database connection (no errors in the log, only LDAP errors). I also tried blocking both ingress and egress for a specific CIDR, but nothing changed.
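
For reference, the deny-all variant was shaped roughly like this (a sketch; the name egress-deny-all is illustrative):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-deny-all
  namespace: <namespace>
spec:
  # empty selector: the policy applies to every pod in the namespace
  podSelector: {}
  # Egress is listed in policyTypes but no egress rules follow,
  # so all outgoing traffic is denied
  policyTypes:
  - Egress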

Has anyone encountered this behavior?
