I have some C# code that works very well on Windows Vista/Seven, but not on Windows XP.
The problematic part is a "multicast" node that basically reads and sends data over a multicast address and port.
The part that reads from and writes to the network is a singleton.
Every thread that accesses this singleton must indicate when it needs to start listening and when it stops.
The socket is listened on while at least one thread needs to "Start", and we stop when all threads have called "Stop" (each one has to give back the Guid token that the Start method returned).
This start/stop mechanism ensures that if no thread needs to see what is happening on the network, we don't consume memory for it.
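For reference, here is a minimal sketch of what the singleton described above looks like (the class name MulticastNode, the group 239.0.0.1 and the port 5000 are illustrative assumptions, not the real code):

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

public sealed class MulticastNode
{
    public static readonly MulticastNode Instance = new MulticastNode();

    private readonly object _sync = new object();
    private readonly HashSet<Guid> _listeners = new HashSet<Guid>();
    private UdpClient _client;

    private MulticastNode() { }

    // Returns a token that the caller must give back to Stop().
    public Guid Start()
    {
        lock (_sync)
        {
            Guid token = Guid.NewGuid();
            _listeners.Add(token);
            if (_client == null)
            {
                // Assumed group/port, for illustration only.
                _client = new UdpClient(5000);
                _client.JoinMulticastGroup(IPAddress.Parse("239.0.0.1"));
                _client.BeginReceive(OnReceive, null);
            }
            return token;
        }
    }

    // Closes the socket once the last token has been given back.
    public void Stop(Guid token)
    {
        lock (_sync)
        {
            _listeners.Remove(token);
            if (_listeners.Count == 0 && _client != null)
            {
                _client.Close();
                _client = null;
            }
        }
    }

    private void OnReceive(IAsyncResult ar)
    {
        lock (_sync)
        {
            if (_client == null) return; // already stopped

            IPEndPoint remote = null;
            byte[] data = _client.EndReceive(ar, ref remote); // this is where the exception surfaces on XP
            // ... dispatch data to the interested threads ...
            _client.BeginReceive(OnReceive, null);
        }
    }
}
```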
The problem is that on Windows XP I get this exception:
```
System.Net.Sockets.SocketException (0x80004005): The I/O operation has been aborted because of either a thread exit or an application request
   at System.Net.Sockets.Socket.EndReceiveFrom(IAsyncResult asyncResult, EndPoint& endPoint)
   at System.Net.Sockets.UdpClient.EndReceive(IAsyncResult asyncResult, IPEndPoint& remoteEP)
```
After some searching, it appears that on Windows XP and below, when a thread ends, the OS cancels all the asynchronous I/O requests that thread had issued (see VB.NET 3.5 SocketException on deployment but not on development machine).
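To illustrate, here is a hypothetical usage sequence, based on the sketch above, that reproduces the problem on XP: the first consumer is a short-lived worker thread, so the initial BeginReceive is posted from that thread; a second consumer keeps the socket open, the worker exits, and XP cancels the pending receive that the worker had issued.

```csharp
using System;
using System.Threading;

static class FailingScenario
{
    static void Main()
    {
        // First consumer is a short-lived worker thread, so the initial
        // BeginReceive inside Start() is posted from that thread.
        Thread worker = new Thread(() =>
        {
            Guid token = MulticastNode.Instance.Start();
            Thread.Sleep(100);                   // some short-lived work
            MulticastNode.Instance.Stop(token);  // socket stays open: another token is still out
        });
        worker.Start();
        Thread.Sleep(10);

        // Second consumer keeps the socket listening after the worker stops.
        Guid mainToken = MulticastNode.Instance.Start();

        worker.Join();
        // On Windows XP the worker's exit cancels the receive it posted, so the
        // next completion of OnReceive/EndReceive fails with
        // "The I/O operation has been aborted...", even though mainToken
        // still expects the socket to be listening.

        Console.ReadLine();
        MulticastNode.Instance.Stop(mainToken);
    }
}
```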
Is there a way to avoid this behavior? In my case it is normal and frequent for me to create a thread that ends before the end of the program's execution, and I don't want its pending socket I/O to be cancelled.
If that's not possible, how would you handle this?