I have a k3s cluster working well with Grafana monitoring and Traefik/klipper-lb. However, the Ingress for my own app does not work.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-esp
  namespace: myapp
spec:
  rules:
  - host: test-esp.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: test-esp
          servicePort: http
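Note: extensions/v1beta1 Ingress is deprecated and was removed in Kubernetes 1.22, so a recent k3s may not serve this manifest at all. A sketch of the same rule under networking.k8s.io/v1 (field names per the v1 API; pathType is required there):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-esp
  namespace: myapp
spec:
  rules:
  - host: test-esp.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-esp
            port:
              name: http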
service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-esp
  name: test-esp
  namespace: myapp
spec:
  ports:
  - name: http
    port: 22000
    targetPort: 22000
  selector:
    app: test-esp
daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: test-esp
  name: test-esp
  namespace: myapp
spec:
  selector:
    matchLabels:
      app: test-esp
  template:
    metadata:
      labels:
        app: test-esp
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: test-esp
        image: <gitlab image pull url>
        envFrom:
        - configMapRef:
            name: test-esp-config
        ports:
        - containerPort: 22000
        command: ["/home/scripts/entrypoint.sh"]
        args: ["1"]
        imagePullPolicy: Always
        resources: {}
** All 3 nodes in the k3s cluster are running DaemonSet pods **
> kubectl get po -n myapp
NAME READY STATUS RESTARTS AGE
test-esp-9cwzp 1/1 Running 0 4m3s
test-esp-qjk5n 1/1 Running 0 4m3s
test-esp-j8nnk 1/1 Running 0 4m3s
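For reference, whether the Service actually selected these pods can be verified via its endpoints; an empty ENDPOINTS column would mean the selector does not match the pod labels and Traefik has nothing to route to:

> kubectl get endpoints test-esp -n myapp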
** curl on individual nodes works **
masternode-1> curl -XGET http://masterip:22000
<long html page>
** curl through the Ingress times out **
mylocal> curl -XGET http://test-esp.nip.io
gateway timeout
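One way to narrow this down is to bypass DNS and klipper-lb and hit Traefik on a node IP directly with the Host header set (node IP placeholder and port 80 assumed here):

mylocal> curl -H "Host: test-esp.nip.io" http://<node-ip>/

If this also times out, the problem is between Traefik and the Service rather than in front of the cluster.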
** ping/telnet to test-esp.nip.io resolves the IP and connects fine **
Any experts out there: is this something to do with iptables?