I am creating a server to monitor the online presence of clients on a webpage.
- There will be 80,000–100,000 simultaneous clients to monitor.
- I'm using .NET to write this.
Clients will contact a (separate) server using JavaScript (on the HTML page) to tell the server that they are alive/online.
I’m considering one of two approaches:
Persistent connections with keep-alives sent regularly. This gives me much higher precision on when clients disconnect, and I don't need to update the in-memory structure (onlineinfo) very often because I know exactly when a client comes and goes. There are also benefits for network equipment/bandwidth.
Clients (re)connect at intervals to tell the server they are alive. This requires a lot of connections and necessarily decreases accuracy; I imagine an interval of 2–3 minutes is the best we can do. 80,000 clients / 120 s ≈ 670 connections per second. ASP.NET doesn't execute very fast, so I'm unsure about this: spread over an 8-core system, that leaves a budget of only about 10–12 ms of processing time per request. (A rough sketch of the bookkeeping I have in mind follows below.)
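To make option 2 concrete, here is a minimal sketch of the bookkeeping I'm picturing (the class name OnlineTracker, the timeout, and the sweep interval are just placeholders): each heartbeat stamps a last-seen time per client ID in a concurrent dictionary, and a periodic sweep drops anyone who has been silent longer than the timeout.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical in-memory "onlineinfo" store: client id -> last heartbeat time.
public class OnlineTracker
{
    private readonly ConcurrentDictionary<string, DateTime> _lastSeen =
        new ConcurrentDictionary<string, DateTime>();
    private readonly TimeSpan _timeout;
    private readonly Timer _sweepTimer;

    public OnlineTracker(TimeSpan timeout)
    {
        _timeout = timeout;
        // Sweep once a minute and drop clients that have gone quiet.
        _sweepTimer = new Timer(_ => Sweep(), null,
            TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1));
    }

    // Called from the heartbeat endpoint; cheap enough to run ~670 times per second.
    public void Heartbeat(string clientId)
    {
        _lastSeen[clientId] = DateTime.UtcNow;
    }

    public int OnlineCount
    {
        get { return _lastSeen.Count; }
    }

    private void Sweep()
    {
        DateTime cutoff = DateTime.UtcNow - _timeout;
        foreach (var pair in _lastSeen)
        {
            if (pair.Value < cutoff)
            {
                DateTime removed;
                _lastSeen.TryRemove(pair.Key, out removed);
            }
        }
    }
}
```

With a 2–3 minute reconnect interval I would set the timeout to the interval plus some slack, e.g. new OnlineTracker(TimeSpan.FromMinutes(4)).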
With this many connections there are obviously some limitations; I can't spawn that many threads simultaneously, for instance. One request to IIS hitting an ASP.NET application ties up one thread until the request is done.
Is the best option to write a stand-alone HTTP server? Doesn't .NET's TcpListener leverage http.sys (IIS)?
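For reference, here is roughly the stand-alone server I'm picturing (a minimal sketch, assuming .NET 4.5 for async/await and reusing the hypothetical OnlineTracker sketched above). I'm using HttpListener rather than TcpListener because, as far as I can tell, HttpListener is the class that registers with http.sys, while TcpListener is plain sockets:

```csharp
using System;
using System.Net;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Minimal stand-alone heartbeat listener. HttpListener registers its prefixes
// with http.sys directly, so no IIS/ASP.NET pipeline is involved.
public class HeartbeatServer
{
    private readonly HttpListener _listener = new HttpListener();
    private readonly OnlineTracker _tracker; // hypothetical tracker from the sketch above

    public HeartbeatServer(string prefix, OnlineTracker tracker)
    {
        _tracker = tracker;
        _listener.Prefixes.Add(prefix); // e.g. "http://+:8080/heartbeat/"
    }

    public async Task RunAsync()
    {
        _listener.Start();
        while (_listener.IsListening)
        {
            // Await the next request without tying a thread to each connection.
            HttpListenerContext context = await _listener.GetContextAsync();
            // Hand the (tiny) piece of work off to the thread pool.
            ThreadPool.QueueUserWorkItem(_ => Handle(context));
        }
    }

    private void Handle(HttpListenerContext context)
    {
        string clientId = context.Request.QueryString["id"];
        if (!string.IsNullOrEmpty(clientId))
        {
            _tracker.Heartbeat(clientId);
        }

        byte[] body = Encoding.UTF8.GetBytes("ok");
        context.Response.ContentLength64 = body.Length;
        context.Response.OutputStream.Write(body, 0, body.Length);
        context.Response.Close();
    }
}
```

The JavaScript on the page would then just request /heartbeat/?id=... every 2–3 minutes.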
Any (constructive) thoughts on the subject would be appreciated.
Edit: Adding some useful links to this post, found by following links from Nicolas Repiquet's answer: