I've got the following situation:
- EKS 1.21 (installed via eksctl)
- 2 managed node groups (1x spot, currently m-type instances; 1x on-demand, t-type instances)
- tigera-operator v3.23.1
- elasticsearch deployed via elastic-operator (in the logging ns)
- filebeat running as daemonset (also in logging ns)
Now I want to isolate the logging namespace with the following NetworkPolicy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logging-default-netpol
  namespace: logging
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: logging
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: elastic-system
```
---
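The namespaceSelectors above depend on the `kubernetes.io/metadata.name` label, which Kubernetes sets on every namespace automatically as of 1.21. A quick way to confirm the label is present on the namespaces the policy references:

```shell
# Each namespace should show kubernetes.io/metadata.name=<its own name>
kubectl get ns logging monitoring elastic-system --show-labels
```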
After applying it, everything seems to work fine. However, if I "restart" a filebeat pod by deleting it, it is no longer able to reach elasticsearch. The filebeat pod running on the same node as ES seems not to be affected and can still reach ES after being restarted.
In addition, a random test pod created via `kubectl run` also works as expected.
I know it should not make any difference whether the pod was created via a DaemonSet or a Deployment. What's going on there?
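For reference, the connectivity check looks roughly like this (a sketch: the label `k8s-app=filebeat` and the service name `elasticsearch-es-http` are assumptions based on the usual elastic-operator defaults, so adjust to your actual resource names):

```shell
# Pick one of the DaemonSet-managed filebeat pods in the logging namespace
FILEBEAT_POD=$(kubectl -n logging get pods -l k8s-app=filebeat \
  -o jsonpath='{.items[0].metadata.name}')

# From inside that filebeat pod, try to reach the ES HTTP service
# (elasticsearch-es-http is the service the elastic-operator creates
# for an Elasticsearch resource named "elasticsearch")
kubectl -n logging exec "$FILEBEAT_POD" -- \
  curl -sk -m 5 https://elasticsearch-es-http:9200

# The same check from a throwaway pod created via kubectl run succeeds
kubectl -n logging run netpol-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sk -m 5 https://elasticsearch-es-http:9200
```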