
I'm trying to connect to a socket that runs inside a container deployed on Kubernetes. Locally everything works fine, but when deployed it throws an error on connect. I've tried different options, but with no success.

Client code

import { io } from "socket.io-client";

const ENDPOINT = "https://traveling.dev/api/chat"; // this resolves to the Service where the socket server is running

const chatSocket = io(ENDPOINT, {
  rejectUnauthorized: false,
  forceNew: true,
  secure: false,
});
chatSocket.on("connect_error", (err) => {
  console.log(err);
  console.log(`connect_error due to ${err.message}`);
});
console.log("CS", chatSocket);

Server code

const express = require("express");
const http = require("http");
const cors = require("cors");
const { Server } = require("socket.io");

const app = express();

app.set("trust proxy", true);
app.use(cors());

const server = http.createServer(app);
const io = new Server(server, {
  cors: {
    origin: "*",
    methods: ["*"],
    allowedHeaders: ["*"],
  },
});


io.on("connection", (socket) => {

  console.log("Socket successfully connected with id: " + socket.id);

});

const start = async () => {   

  server.listen(3000, () => {
    console.log("Started");
  });
};

start();

The code itself is probably irrelevant here, since it all works fine locally, but I've posted it anyway. What could cause this when containerizing the app and deploying it to Kubernetes?

The console log just says server error:

Error: server error
    at Socket.onPacket (socket.js:397)
    at XHR.push../node_modules/component-emitter/index.js.Emitter.emit (index.js:145)
    at XHR.onPacket (transport.js:107)
    at callback (polling.js:98)
    at Array.forEach (<anonymous>)
    at XHR.onData (polling.js:102)
    at Request.push../node_modules/component-emitter/index.js.Emitter.emit (index.js:145)
    at Request.onData (polling-xhr.js:232)
    at Request.onLoad (polling-xhr.js:283)
    at XMLHttpRequest.xhr.onreadystatechange (polling-xhr.js:187)

Does anyone have any suggestions on what may cause this and how to fix it? Also, any idea on how to get more information about the error would be appreciated.

This is the YAML file that creates the Service and a Pod.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: chat-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chat
  template:
    metadata:
      labels:
        app: chat
    spec:
      containers:
        - name: chat
          image: us.gcr.io/forward-emitter-321609/chat-service 
---
apiVersion: v1
kind: Service
metadata:
  name: chat-srv
spec:
  selector:
    app: chat
  ports:
    - name: chat
      protocol: TCP
      port: 3000
      targetPort: 3000

I'm using a load balancer on GKE with nginx, whose IP address is mapped to traveling.dev.

This is what my Ingress routing config looks like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Access-Control-Allow-Origin: $http_origin";
spec:
  rules:
    - host: traveling.dev
      http:
        paths:
          - path: /api/chat/?(.*)
            backend:
              serviceName: chat-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000

Thanks!

1 Answer


Nginx Ingress supports WebSocket proxying by default, but you may need to configure it. For this, you can add a custom configuration snippet annotation.

You can refer to this already-answered Stack Overflow question: Nginx ingress controller websocket support.
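As a sketch of what that configuration could look like on your existing Ingress: the nginx-ingress controller supports proxy timeout annotations, which keep long-lived WebSocket connections from being closed while idle. The timeout values below are illustrative assumptions, not required values:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    # Raise the proxy timeouts so idle WebSocket connections are not dropped
    # (3600s is an example; tune for your workload)
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: traveling.dev
      http:
        paths:
          - path: /api/chat/?(.*)
            backend:
              serviceName: chat-srv
              servicePort: 3000
```

These annotations would be merged with the CORS and regex annotations you already have on the Ingress.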

Amjad Hussain Syed