I know there are many questions similar to mine, but I need a clear answer to this specific scenario. As we all know, TCP is connection-oriented while UDP is connectionless. If, on the same network, I create two UDP sockets (server and client) and two TCP sockets (server and client), which pair will consume more bandwidth? According to my understanding, since TCP is connection-oriented, I assume TCP will consume bandwidth all the time (e.g. for maintaining the connection), while UDP will consume bandwidth only when data is actually being sent.
Could you please help me clear up this issue?
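For concreteness, here is a minimal sketch of the setup I have in mind (Python sockets on loopback; the port choice and message contents are arbitrary placeholders). The TCP pair performs a handshake before any payload moves, while the UDP pair just exchanges a single datagram:

```python
import socket
import threading

HOST = "127.0.0.1"   # loopback; ports are chosen by the OS (bind to port 0)
received = {}

# --- TCP pair: a handshake (SYN, SYN-ACK, ACK) happens before any payload ---
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind((HOST, 0))          # port 0: let the OS pick a free port
tcp_srv.listen(1)
tcp_port = tcp_srv.getsockname()[1]

def tcp_accept():
    conn, _ = tcp_srv.accept()
    with conn:
        received["tcp"] = conn.recv(1024)

# --- UDP pair: no handshake; each sendto() is an independent datagram ---
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind((HOST, 0))
udp_port = udp_srv.getsockname()[1]

def udp_recv():
    received["udp"], _ = udp_srv.recvfrom(1024)

t1 = threading.Thread(target=tcp_accept); t1.start()
t2 = threading.Thread(target=udp_recv); t2.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect((HOST, tcp_port))       # this alone costs packets (the handshake)
    c.sendall(b"hello over tcp")      # payload segments are also ACKed

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
    c.sendto(b"hello over udp", (HOST, udp_port))  # one datagram, nothing else

t1.join(); t2.join()
tcp_srv.close(); udp_srv.close()
print(received)
```

My question is whether, beyond the handshake and per-segment ACK/header overhead visible above, an idle but open TCP connection like this keeps consuming bandwidth on its own.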