While learning Rust, I have seen many tutorials that use two very simple models. On the server side, each accepted `TcpStream` is moved into a new thread; on the client side, a blocking read is followed by output.
For a real project this is clearly not enough. On the client side, for example, you usually cannot block the main thread just to read data, so you end up using non-blocking sockets, multiple threads, or async I/O.
Since I am new to Rust, I don't plan to use async I/O or the tokio library for this.
Suppose I use a dedicated thread for blocking reads, while the main thread sends data and closes the TCP connection.
Since the connection is used in two threads, common practice suggests wrapping it in `Arc<Mutex<TcpStream>>`. But then the read thread has to call `Mutex::lock()` to get at the `TcpStream`, and the main thread also has to call `Mutex::lock()` when it wants to send or close. Won't this cause a deadlock?
Of course, another option is to have a new thread poll a message queue, and post commands to that queue when the socket has a read event or when the main thread needs to send data or close the connection. That way all access to the `TcpStream` happens in a single thread. However, it seems to add a lot of extra code just to maintain the message queue.
If `TcpStream` could be split into two ends like a channel, a read end and a write end, I could conveniently use them in different threads. But there seems to be no such function.
Is there a recommended approach?