
I modified the Helm chart of alerta to spin it up on an Istio-enabled GKE cluster.

The alerta pod and its sidecar come up fine:

▶ k get pods | grep alerta
alerta-758bc87dcf-tp5nv                        2/2     Running   0          22m

When I try to access the URL that my virtual service points to, I get the following error:

upstream connect error or disconnect/reset before headers. reset reason: connection termination

▶ k get vs alerta-virtual-service -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  annotations:
    helm.fluxcd.io/antecedent: mynamespace:helmrelease/alerta
  creationTimestamp: "2020-04-23T14:45:04Z"
  generation: 1
  name: alerta-virtual-service
  namespace: mynamespace
  resourceVersion: "46844125"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/mynamespace/virtualservices/alerta-virtual-service
  uid: 2a3caa13-3900-4da1-a3a1-9f07322b52b0
spec:
  gateways:
  - mynamespace/istio-ingress-gateway
  hosts:
  - alerta.myurl.com
  http:
  - appendHeaders:
      x-request-start: t=%START_TIME(%s.%3f)%
    match:
    - uri:
        prefix: /
    route:
    - destination:
        host: alerta
        port:
          number: 80
    timeout: 60s
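
One thing worth checking (an assumption on my part, echoing a suggestion in the comments below): the VirtualService is bound only to the ingress gateway, so in-mesh calls to `alerta` bypass its routing. Binding it to the reserved `mesh` gateway as well would look roughly like this:

```yaml
# sketch: bind the VirtualService to the ingress gateway AND the mesh
spec:
  gateways:
  - mynamespace/istio-ingress-gateway
  - mesh   # reserved name: applies these routes to sidecar-to-sidecar traffic too
```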

and here is the service:

▶ k get svc alerta -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    helm.fluxcd.io/antecedent: mynamespace:helmrelease/alerta
  creationTimestamp: "2020-04-23T14:45:04Z"
  labels:
    app: alerta
    chart: alerta-0.1.0
    heritage: Tiller
    release: alerta
  name: alerta
  namespace: mynamespace
  resourceVersion: "46844120"
  selfLink: /api/v1/namespaces/mynamespace/services/alerta
  uid: 4d4a3c73-ee42-49e3-a4cb-8c51536a0508
spec:
  clusterIP: 10.8.58.228
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: alerta
    release: alerta
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

However, when I exec into another pod in the cluster and try to reach the alerta svc endpoint:

/ # curl -IL http://alerta
curl: (56) Recv failure: Connection reset by peer
/ # nc -zv -w 3 alerta 80
alerta (10.8.58.228:80) open

even though, as the nc output shows, the port is open.

Any suggestions?

Could it be that chaining the two proxies (nginx behind Envoy) is creating issues?

The container logs look normal:

2020-04-23 15:34:40,272 DEBG 'nginx' stdout output: 
ip=\- [\23/Apr/2020:15:34:40 +0000] "\GET / HTTP/1.1" \200 \994 "\-" "\kube-probe/1.15+"
/web | /index.html | > GET / HTTP/1.1

edit: Here is a verbose curl with the Host header explicitly set:

/ # curl -v -H "host: alerta.myurl.com" http://alerta:80
* Rebuilt URL to: http://alerta:80/
*   Trying 10.8.58.228...
* TCP_NODELAY set
* Connected to alerta (10.8.58.228) port 80 (#0)
> GET / HTTP/1.1
> host: alerta.myurl.com
> User-Agent: curl/7.57.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

FWIW, here is the nginx config file used by the app/pod:

worker_processes 4;
pid /tmp/nginx.pid;

daemon off;
error_log /dev/stderr info;

events {
        worker_connections 1024;
}

http {
        client_body_temp_path /tmp/client_body;
        fastcgi_temp_path /tmp/fastcgi_temp;
        proxy_temp_path /tmp/proxy_temp;
        scgi_temp_path /tmp/scgi_temp;
        uwsgi_temp_path /tmp/uwsgi_temp;

        include /etc/nginx/mime.types;

        gzip on;
        gzip_disable "msie6";

        log_format main 'ip=\$http_x_real_ip [\$time_local] '
        '"\$request" \$status \$body_bytes_sent "\$http_referer" '
        '"\$http_user_agent"' ;

        log_format scripts '$document_root | $uri | > $request';

        default_type application/octet-stream;

        server {
                listen 8080 default_server;


                access_log /dev/stdout main;
                access_log /dev/stdout scripts;

                location ~ /api {
                        include /etc/nginx/uwsgi_params;
                        uwsgi_pass unix:/tmp/uwsgi.sock;

                        proxy_set_header Host $host:$server_port;
                        proxy_set_header X-Real-IP $remote_addr;
                        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                }

                root /web;
                index index.html;
                location / {
                        try_files $uri $uri/ /index.html;
                }
        }
}
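
Note that nginx listens on 8080 while the Service's `targetPort` is the named port `http`; for that to resolve, the Deployment (not shown here) must declare a container port named `http` mapped to 8080, roughly like this assumed sketch:

```yaml
# assumed containerPort declaration needed for `targetPort: http` to resolve
ports:
- name: http
  containerPort: 8080
  protocol: TCP
```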

edit 2: Trying to get the Istio authentication policy:

▶ kubectl get peerauthentication.security.istio.io
No resources found.
▶ kubectl get peerauthentication.security.istio.io/default -o yaml
Error from server (NotFound): peerauthentications.security.istio.io "default" not found

edit 3: Performing curl to the service from within the istio-proxy container:

▶ k exec -it alerta-758bc87dcf-jzjgj -c istio-proxy bash
istio-proxy@alerta-758bc87dcf-jzjgj:/$ curl -v http://alerta:80
* Rebuilt URL to: http://alerta:80/
*   Trying 10.8.58.228...
* Connected to alerta (10.8.58.228) port 80 (#0)
> GET / HTTP/1.1
> Host: alerta
> User-Agent: curl/7.47.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
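
For Istio 1.5, the mTLS settings in effect for this workload could also be inspected with `istioctl authn tls-check` (a sketch, reusing the pod and service names from above; this needs access to the cluster):

```
# sketch (Istio <=1.5): show client/server TLS modes for the alerta service
istioctl authn tls-check alerta-758bc87dcf-jzjgj.mynamespace \
  alerta.mynamespace.svc.cluster.local
```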
pkaramol
  • Could you please add to your question input from `curl -v -H "host: alerta.myurl.com" http://alerta:80`? Could you please change svc name from http to http-alerta and add second gateway to your virtual service which is `- mesh, gateways: - mynamespace/istio-ingress-gateway - mesh` and try again with above command and `curl -v alerta:80`? Additionally take a look at this [link](https://stackoverflow.com/a/59309172/11977760). – Jakub Apr 24 '20 at 08:57
  • Added the verbose output in the original question. Dunno if the rest is possible cause the infra is not totally manageable / accessible right now. Any suggestion(s) about what might be going wrong would be highly valuable. – pkaramol Apr 24 '20 at 09:29
  • I could rename the service from `alerta` to `http-alerta`; however, I can't see how this would help. – pkaramol Apr 24 '20 at 09:31
  • The change of the service name is based on [protocol selection](https://istio.io/docs/ops/configuration/traffic-management/protocol-selection/), sometimes when it's not correct, it reveal as 503 Service Unavailable, upstream connect error or disconnect/reset before headers. reset reason: connection termination. What about curl through ingress_gateway_ip/, same issue? What is the istio version? If you use mtls it's PERMISSIVE or STRICT? – Jakub Apr 24 '20 at 10:04
  • We have several other http services there not facing this protocol selection issue. Regarding `istio`, it has been installed via helm with `global.mtls.enabled=true`. Not sure if this creates permissive or strict config. – pkaramol Apr 24 '20 at 10:56
  • I have also updated the question with the nginx config – pkaramol Apr 24 '20 at 11:00
  • I would say it's something with mtls, you can check it with `kubectl get peerauthentication.security.istio.io/default -o yaml`. As far as I checked when i install istio on gke with global.mtls.enabled=true it's strict, then I checked this istio docs [here](https://istio.io/docs/tasks/security/authentication/authn-policy/#globally-enabling-istio-mutual-tls-in-strict-mode) and the problem occurs when request is from the client that doesn’t have proxy to the server with a proxy. Did you curl from injected pod? – Jakub Apr 24 '20 at 12:40
  • check my new updates on the original question – pkaramol Apr 24 '20 at 12:57
  • Hello @pkaramol, is your problem still unresolved? – Mikołaj Głodziak Dec 16 '21 at 10:27

1 Answer


I created a new GKE cluster with Istio 1.5.2; in fact, if you check for mTLS, no resources are found:

kubectl get peerauthentication --all-namespaces

No resources found.

kubectl get peerauthentication.security.istio.io/default

Error from server (NotFound): peerauthentications.security.istio.io "default" not found

So I tried to reproduce this example, and it clearly shows Istio is in STRICT mTLS mode when you install it with global.mtls.enabled=true.

If you add the pods and namespaces as mentioned there, it should return 200 for every request, but it does not:
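
For reference, the matrix below is produced by (roughly) the check loop from that Istio task, run against the cluster:

```
for from in foo bar legacy; do
  for to in foo bar legacy; do
    kubectl exec "$(kubectl get pod -l app=sleep -n ${from} \
        -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- \
      curl "http://httpbin.${to}:8000/ip" -s -o /dev/null \
        -w "sleep.${from} to httpbin.${to}: %{http_code}\n"
  done
done
```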

sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 000
command terminated with exit code 56
sleep.legacy to httpbin.bar: 000
command terminated with exit code 56
sleep.legacy to httpbin.legacy: 200

So if you change the mTLS mode from STRICT to PERMISSIVE with the below YAML,

apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: PERMISSIVE

it works now:

sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 200
sleep.legacy to httpbin.bar: 200
sleep.legacy to httpbin.legacy: 200
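
If relaxing mTLS mesh-wide is too broad, the same PeerAuthentication can instead be scoped to a single namespace (a sketch, assuming the questioner's `mynamespace`):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: mynamespace   # scopes PERMISSIVE to this namespace only
spec:
  mtls:
    mode: PERMISSIVE
```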

Additionally, there is a GitHub issue with the error you provided.


About the question:

why the pod fails to mtls authenticate with itself, when curling from inside it

There is a GitHub issue about this, too.


Additionally, take a look at the Istio docs.

Jakub