In a Hosted Rancher Kubernetes cluster, I have a service that exposes a WebSocket endpoint (a Spring SockJS server). This service is exposed to the outside through the following ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myIngress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/enable-access-log: "true"
spec:
  rules:
  - http:
      paths:
      - path: /app1/mySvc/
        backend:
          serviceName: mySvc
          servicePort: 80
A web application connects to the WebSocket service through the nginx ingress, and it works fine. The loaded JS script is:
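// 'ws' is a relative URL; SockJS resolves it against the page location,
// so the browser connects through the same ingress path the page was served from.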
var socket = new SockJS('ws');
stompClient = Stomp.over(socket);
stompClient.connect({}, onConnected, onError);
By contrast, standalone clients (JS or Python) do not work: they get a 400 HTTP error back.
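For instance, a minimal standalone Python client along these lines (a sketch assuming the websocket-client package; the URL, subprotocols and TLS options mirror the curl request below) is rejected with the same 400:

import ssl
import websocket  # pip install websocket-client

# Connect directly to the raw SockJS websocket endpoint behind the ingress;
# "ingressHost" stands in for the real ingress hostname.
ws = websocket.create_connection(
    "wss://ingressHost/app1/mySvc/ws/websocket",
    subprotocols=["v10.stomp", "v11.stomp", "v12.stomp"],
    sslopt={"cert_reqs": ssl.CERT_NONE},  # same effect as curl's -k
)
print(ws.recv())  # never reached; create_connection raises WebSocketBadStatusException (400)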
Here is the equivalent request sent with curl, together with the response from nginx:
curl --noproxy '*' --include \
--no-buffer \
-Lk \
--header "Sec-WebSocket-Key: l3ApADGCNFGSyFbo63yI1A==" \
--header "Sec-WebSocket-Version: 13" \
--header "Host: ingressHost" \
--header "Origin: ingressHost" \
--header "Connection: keep-alive, Upgrade" \
--header "Upgrade: websocket" \
--header "Sec-WebSocket-Extensions: permessage-deflate" \
--header "Sec-WebSocket-Protocol: v10.stomp, v11.stomp, v12.stomp" \
--header "Access-Control-Allow-Credentials: true" \
https://ingressHost/app1/mySvc/ws/websocket
HTTP/2 400
date: Wed, 20 Nov 2019 14:37:36 GMT
content-length: 34
vary: Origin
vary: Access-Control-Request-Method
vary: Access-Control-Request-Headers
access-control-allow-origin: ingressHost
access-control-allow-credentials: true
set-cookie: JSESSIONID=D0BC1540775544E34FFABA17D14C8898; Path=/; HttpOnly
strict-transport-security: max-age=15724800; includeSubDomains

Can "Upgrade" only to "WebSocket".
Why does it work from the browser but not from standalone clients?
Thanks