I understand that questions like this can be a matter of opinion, which is why I tried to provide more details.
I'm developing a WebSocket server app with the following 'features':
- 2-3 connections can point to the same User
- Users can occasionally interact with each other, either in bursts or in a continuous exchange of data
- A portion of global data should always be synced between all users
- A portion of global data should be lazily (on demand) synced between some users
2 possible structures come to mind:
- Worker threads. The main thread handles all network IO and connection->User bundling, and stores/interacts with global data. It also sends lots of `postMessage`s to worker threads. Worker threads occasionally talk to each other when 2 players from different threads need to interact (see the sketches after this list). Possible issues:
  - `postMessage` overhead (mostly from serialization, from what I understand). Theoretically, for most messages I could try to use a `SharedArrayBuffer`. I don't think it could be optimized beyond that.
  - The main thread would handle both network IO and global data interactions. Maybe this could become a bottleneck?
- Stateless NodeJS cluster that stores all data in Redis and also utilizes occasional locking. Issues/questions:
  - Would use more RAM
  - No matter how fast Redis is, communicating with it will most likely be slower than using `postMessage`s.
  - Will multiple servers actually be beneficial? I've heard that splitting the same type of IO across multiple processes is a bad idea, or at least not as beneficial as using Node's native asynchronous IO.
  - I don't think there is a way to bundle connections based on something like a User id.
  - Getting and setting Redis data on request is fine. But how would one process know if some key was modified by another process? (See the keyspace-notification sketch below.)
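To make option 1 concrete, here is a minimal sketch of the layout I have in mind. It assumes the `ws` package for the WebSocket server; the message shapes, the numeric `userId`, and the routing rule are placeholders, not a real protocol:

```js
// main.js - sketch of option 1. Assumes the `ws` package; message shapes,
// a numeric `userId`, and the modulo routing rule are illustrative only.
const { Worker, isMainThread, parentPort, threadId } = require('node:worker_threads');

if (isMainThread) {
  const { WebSocketServer } = require('ws');
  const os = require('node:os');

  const workers = Array.from({ length: os.cpus().length }, () => new Worker(__filename));
  const sockets = new Map(); // connId -> WebSocket, owned by the main thread only

  // Bundle every connection of the same User onto the same worker.
  const workerFor = (userId) => workers[userId % workers.length];

  let nextConnId = 0;
  const wss = new WebSocketServer({ port: 8080 });
  wss.on('connection', (ws) => {
    const connId = nextConnId++;
    sockets.set(connId, ws);
    ws.on('message', (raw) => {
      const msg = JSON.parse(raw); // assumed shape: { userId, ... }
      // This postMessage structured-clones `msg`: the serialization overhead in question.
      workerFor(msg.userId).postMessage({ connId, msg });
    });
    ws.on('close', () => sockets.delete(connId));
  });

  // Workers answer with { connId, payload }; only the main thread touches sockets.
  for (const w of workers) {
    w.on('message', ({ connId, payload }) => {
      sockets.get(connId)?.send(JSON.stringify(payload));
    });
  }
} else {
  // Worker side: pure game/user logic, no network IO.
  const users = new Map(); // userId -> per-user state
  parentPort.on('message', ({ connId, msg }) => {
    const state = users.get(msg.userId) ?? { userId: msg.userId };
    users.set(msg.userId, state);
    // Worker-to-worker interaction would go over a MessageChannel set up by the main thread.
    parentPort.postMessage({ connId, payload: { echo: msg, worker: threadId } });
  });
}
```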
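And a minimal sketch of the `SharedArrayBuffer` idea: the buffer handle is sent to the worker once, after which both threads read and write the same memory with no per-message serialization (the single counter cell stands in for whatever binary state layout would actually be used):

```js
// sab.js - sketch: shared binary state between threads, no per-message copies.
const { Worker, isMainThread, workerData } = require('node:worker_threads');

if (isMainThread) {
  // One 4-byte shared cell; real state would be a larger, structured buffer.
  const sab = new SharedArrayBuffer(4);
  const counter = new Int32Array(sab);
  new Worker(__filename, { workerData: sab }); // handle sent once, memory is shared
  setInterval(() => console.log('main thread sees:', Atomics.load(counter, 0)), 500);
} else {
  const counter = new Int32Array(workerData);
  // Writes become visible to the main thread without any postMessage.
  setInterval(() => Atomics.add(counter, 0, 1), 100);
}
```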
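For the last point, Redis keyspace notifications can push change events to other processes, though they are off by default and must be enabled via `notify-keyspace-events`. A minimal sketch, assuming the `ioredis` package, db 0, and a placeholder `user:*` key naming scheme:

```js
// watcher.js - sketch, assuming the `ioredis` package and Redis db 0.
const Redis = require('ioredis');

const sub = new Redis();    // dedicated connection: subscriber mode is exclusive
const client = new Redis(); // normal connection for reads/writes

async function main() {
  // Enable notifications (could also be set in redis.conf):
  // K = publish on keyspace channels, A = all event classes.
  await client.config('SET', 'notify-keyspace-events', 'KA');

  // Fires whenever any process modifies a key matching the pattern.
  await sub.psubscribe('__keyspace@0__:user:*');
  sub.on('pmessage', async (_pattern, channel, event) => {
    const key = channel.replace('__keyspace@0__:', '');
    console.log(`key ${key} changed (${event})`);
    // Re-read lazily, and only if this process actually cares about the key.
    const value = await client.get(key);
    console.log('new value:', value);
  });

  // Simulate another process writing:
  await client.set('user:42', JSON.stringify({ hp: 100 }));
}

main().catch(console.error);
```

An alternative would be an explicit pub/sub channel that every writer publishes to after changing a key, which avoids turning notifications on server-wide and lets the writer include the new value in the message.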