
Say I want to open two ports: one public at 8080, and another that handles requests which were forwarded on by the server on 8080, like so:

const http = require('http');
const net = require('net');

const publicServer = http.createServer(...).listen(8080);
const privateServer = http.createServer(...).listen(9999);

publicServer.on('connect', (req, cltSocket, head) => {
  ...
  if (...) {
    // Tried 'localhost' as well:
    // let srvSocket = net.connect(9999, 'localhost', () => {
    let srvSocket = net.connect(9999, '127.0.0.1', () => {
      // Acknowledge the CONNECT, then tunnel bytes in both directions
      cltSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
      srvSocket.write(head);
      srvSocket.pipe(cltSocket);
      cltSocket.pipe(srvSocket);
    });
  }
});

Is there some type of setting I can use to allow this? Currently it seems OpenShift doesn't allow this setup. It is possible that it doesn't honor 127.0.0.1 or localhost and therefore isn't forwarding the request to the correct pod...
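
For reference, this is a minimal sketch of how I exercise the CONNECT path against the public server locally (the target in `path` is just a placeholder for this sketch, not part of my real setup):

const http = require('http');

// Send a CONNECT to the public server, then speak plain HTTP/1.1 over the
// tunnel it establishes.
const req = http.request({
  host: '127.0.0.1',
  port: 8080,
  method: 'CONNECT',
  path: '127.0.0.1:9999', // placeholder target for this sketch
});
req.end();

req.on('connect', (res, socket, head) => {
  console.log('CONNECT status:', res.statusCode);
  socket.write('GET / HTTP/1.1\r\nHost: 127.0.0.1:9999\r\nConnection: close\r\n\r\n');
  socket.on('data', (chunk) => process.stdout.write(chunk));
  socket.on('end', () => console.log('tunnel closed'));
});

req.on('error', (err) => console.error('CONNECT failed:', err.message));

This works as expected when run outside OpenShift.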

Aero Wang

1 Answer


There should be no reason why you can't connect to port 9999 via localhost or 127.0.0.1 from any process in any container of the same pod.

Have you tried using oc rsh to access the running container and running:

curl localhost:9999

to verify that your code is actually listening properly on port 9999?
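
If curl is not available in the image, the same check can be done with a throwaway Node script over loopback; a minimal sketch, assuming the private server answers plain HTTP on 9999:

const http = require('http');

// Probe the private server over loopback from inside the same container.
http.get('http://127.0.0.1:9999/', (res) => {
  console.log('status:', res.statusCode);
  res.resume(); // drain the response so the socket can close
}).on('error', (err) => {
  console.error('port 9999 not reachable:', err.message);
});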

Graham Dumpleton
  • Let me try...is it possible that `server.on('connect', cb)` might be where the issue lies? – Aero Wang Apr 17 '18 at 11:42
  • Looks like openshift online doesn't forward CONNECT to the pod. – Aero Wang Apr 17 '18 at 12:43
  • That is quite possible. How about explaining in your question what you are trying to do, rather than asking why a solution doesn't work, where we don't know the problem you are trying to solve. There may be better ways to do what you want if we knew the original requirements. – Graham Dumpleton Apr 18 '18 at 04:36
  • Stepping back a bit. OpenShift should not block ``CONNECT`` if you are only trying to do this as loopback in the same container. There may be an issue with ``CONNECT`` working through an exposed route, but even that is only an assumption. The 9999 port here wouldn't be exposed outside of the OpenShift cluster anyway, if that is what you were hoping was the case. This is why you really need to explain the original problem you are trying to solve. – Graham Dumpleton Apr 18 '18 at 05:36
  • Oh, I want to use a different route that I wrote separately, depending on whether the request is HTTP/2.0 or HTTP/1.1. – Aero Wang Apr 18 '18 at 06:09
  • What OpenShift version are you using? For HTTP/2 support you need haproxy 1.8, and support for that version was only added in OpenShift 3.9. The question is why you need to use a separate route. The way HTTP/2 is designed, you can use the same route as HTTP/1.1, with the protocol version being switched/upgraded once it is realised the client is using HTTP/2 (see the sketch after these comments). I've never seen anyone use a different host/route for HTTP/2 before. What is the reason for doing that? – Graham Dumpleton Apr 18 '18 at 06:24
  • You might also be able to do HTTP/2 if you exclusively use a passthrough secure route. This means your app needs to terminate the secure connection itself. As for HTTP/2 support for an OpenShift route when using HTTP or HTTPS edge termination, it is possible that the haproxy configuration still may not support it under OpenShift 3.9, as I haven't specifically heard about HTTP/2 being supported. I am checking. – Graham Dumpleton Apr 18 '18 at 06:34
  • No good and valid reason. Just because. The product was designed to be different. Why? I don't know. It's okay, there are other ways to resolve this. This is just one lazy approach. We don't even deploy the live version on OpenShift Online anyway... – Aero Wang Apr 18 '18 at 06:34
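
Following up on the comment about using the same route for both protocol versions: a minimal sketch of what in-app negotiation can look like when the app terminates TLS itself (the certificate paths and the handler are placeholders for this example, not anything from the original setup):

const http2 = require('http2');
const fs = require('fs');

// With allowHTTP1, ALPN negotiation decides per connection whether the
// client speaks HTTP/2 or falls back to HTTP/1.1 on the same port.
const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),   // placeholder paths
  cert: fs.readFileSync('server.crt'),
  allowHTTP1: true,
});

server.on('request', (req, res) => {
  // req.httpVersion is '2.0' for HTTP/2 clients, '1.1' otherwise
  res.end(`handled as HTTP/${req.httpVersion}\n`);
});

server.listen(8443);

With this approach the protocol check happens inside one server process, so no second port or separate route is needed.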