
Please help me clear up some confusion.

Laravel allows communication with socket.io by having you set up redis:

https://laravel.com/docs/5.4/broadcasting#configuration

To my understanding, Redis simply holds data in memory, somewhat like memcached? This allows third-party software like socket.io to pick up the data. Is this really websocket behaviour, though?

I know that you can also do something like this in PHP:

$address = 'localhost';
$port = 5600;

// Open a plain TCP socket (not a websocket) and connect to the server
$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, $address, $port);
socket_write($socket, "hello\n");
socket_close($socket);

Why wouldn't they choose to do something like the above instead of having you set up Redis? There is probably a good answer to this, but I don't have that much experience with either Redis or websockets.

Any information on this would be appreciated.

Stephan-v
  • The code you posted is not websocket code, it's for normal sockets. I'm assuming PHP isn't that great with websockets and that's why they do that. – Sami Kuhmonen Feb 19 '17 at 22:00
  • I have taken that piece of code from: https://gonzalo123.com/2012/10/08/how-to-send-the-output-of-symfonys-process-component-to-a-node-js-server-in-real-time-with-socket-io/ so I guess it still works. I still wonder why they chose Redis, though. I guess there may be downsides but I am not familiar with them. Would love to hear somebody's expert opinion on this. – Stephan-v Feb 19 '17 at 22:07

2 Answers


You need to think about persistence of the connection. A request in Laravel only lives for the time it takes to get a response out. Once the response is sent back, the application shuts down until a new request hits index.php and Laravel boots again.

So in fact, you cannot establish a persistent connection this way. Socket.io, for example, will let you connect to the service and remain connected. This is the main difference between a REST and a websocket approach. In a REST interface, the client continually polls the server... so if you have 1000 clients, you have 1000 pesky little clients asking you if you have anything new every 30 seconds... annoying at best. And each time they ask, Laravel goes through the whole boot/shutdown cycle... nothing is persistent.
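To put rough numbers on that (assuming the 30-second poll mentioned above):

```javascript
// Back-of-envelope cost of polling: 1000 clients asking every 30 seconds
const clients = 1000;
const pollIntervalSec = 30;
const secondsPerHour = 3600;

// Every poll is a full Laravel boot/shutdown cycle:
const bootsPerHour = clients * (secondsPerHour / pollIntervalSec);
console.log(bootsPerHour); // 120000

// With websockets, the same hour costs 1000 handshakes up front,
// then only the messages actually pushed.
```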

Now when using Socket.io through a Node service, each client connects and keeps a persistent connection to the Node instance (which is a single persistent process... better suited for this). With this connection open to the 1000 clients, the clients simply listen on the socket for something new...

When a Laravel request creates an event that is of interest to the 1000 clients, it simply pushes the information onto a Redis queue... The Node instance reads from the Redis queue and can broadcast to the 1000 connected clients, as it has maintained the connections...

It is a nice use of both PHP and Node technology as it highlights the strengths of both...

Hope this helps...

Serge
  • The piece of code I posted is meant for opening a socket to catch the execution output of a shell command while it is running. This means my PHP is not 'suicidal' and will live until the process is done. Otherwise I would not even be able to log anything after the first line of shell execution output. In such a case there would be no need for something like Redis, though? This would just be for one person to get live output of the shell execution. Cheers for answering by the way, helps out. – Stephan-v Feb 20 '17 at 09:22
  • Suicidal PHP... lol. PHP will run to completion; in the Laravel request lifecycle, after the response is sent the app shuts down. Naturally, if you keep it alive with something else to do afterwards then it will live on, or if you start another process. It is not impossible to do in PHP, it just requires persistence. Now if you kept every request you receive alive in its own process, that would not scale. Node is a single thread handling all the connections, not one connection per thread... which simplifies reading the persistent queue when all connections need to be sent (broadcast) the same thing... – Serge Feb 20 '17 at 11:15
  • Also, in a scenario where you have multiple requests pushing events to the queue (a Laravel request is born, sends a message, then dies), you need a queuing mechanism like Redis to make sure you don't lose any messages, as the websocket process might get overwhelmed and might not be able to serve the socket send in real time. This avoids a blocking call to send from the ephemeral Laravel request process... Naturally, you can get away with many things in a one-client script... but that is the general idea and why Redis/Node is used in this context... Hope this helps – Serge Feb 20 '17 at 11:20

I had the same issue just a moment ago, so I will try to answer your question. The idea is to keep your server, or the websocket layer, stateless. For example: if a server instance crashes, all the sockets on that server are destroyed, and the clients will open new connections to another server instance. To help the new server know what data each client wants, every new connection needs to include some extra information: channel name, sessionId, roomId, etc. It is very similar to JWT: the password, channel, or sessionId moves from the server to the client.
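A tiny sketch of that idea (the names buildHandshake, sessionId, and roomId are made up for illustration; the point is only that the client carries its own state on every connect):

```javascript
// The client carries its own state so any fresh server instance can resume.
function buildHandshake(state) {
  return { channel: state.channel, sessionId: state.sessionId, roomId: state.roomId };
}

// On (re)connect, the client sends everything a new server needs:
const payload = buildHandshake({ channel: 'orders', sessionId: 'abc123', roomId: 7 });
console.log(JSON.stringify(payload)); // {"channel":"orders","sessionId":"abc123","roomId":7}
```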