My question may be pretty basic TCP/IP and routing stuff, but I need to couch it in terms of a WinPcap-based application, PingPlotter, that I have used.
One of the features I used at my last place of work was this Win32 application, PingPlotter, to test TCP/IP packet loss and transmission times for packets of, say, 2000 bytes. To enable this feature, one needs the WinPcap driver.
It does seem like a bit of black magic, though: how can one do TCP/IP packet loss testing against a remote server when there is no receiving application on the other side?
Is it just looking for TCP ACK/RST responses and analysing IP stability at a lower level of the protocol stack? Typing out my question I may be answering it myself, but I really don't understand the detail, so it would be nice to be validated or corrected.
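For what it's worth, here is a minimal sketch of the sort of probe I imagine it sends, on the assumption that a completed handshake (SYN/ACK) or an immediate refusal (RST) both count as the remote stack answering, and only silence counts as loss. The host and port below are placeholders, and a plain connect() probe cannot control packet size the way WinPcap-crafted packets can:

    import socket
    import time

    def tcp_probe(host, port, timeout=2.0):
        """Attempt a TCP handshake and classify the outcome.

        A completed connect (SYN/ACK) or a refusal (RST) both prove the
        remote IP stack replied; only a timeout suggests the probe or
        its reply was lost, or silently dropped by a firewall.
        """
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
            return "open", (time.perf_counter() - start) * 1000.0
        except ConnectionRefusedError:
            # An RST came back: the port is closed, but the host is reachable.
            return "closed-but-reachable", (time.perf_counter() - start) * 1000.0
        except socket.timeout:
            # No reply at all: lost in transit or dropped by a firewall.
            return "no-response", None

    status, rtt = tcp_probe("example.com", 80)  # placeholder target
    print(status, "" if rtt is None else "%.1f ms" % rtt)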
The second part of my question is this: the tool can create nice graphs of the packet loss of my test TCP/IP packets to every hop on the tracert.
For the target server, which I know to be a SQL Server I can connect to, it seemingly fails every packet, although all the intermediate hops on the tracert seem to deliver fine. Is this something I might expect if the machine has an operating-system-based software firewall of some description, and is that regularly seen in practice?
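If it helps explain what I am seeing, my mental model of the per-hop graphs is the usual TTL trick: every intermediate router must send back an ICMP Time Exceeded when the probe's TTL hits zero, regardless of what the destination would do with the probe itself, so all the hops can look healthy while a host firewall silently drops the final packet. A rough sketch of TTL-limited probing along those lines (the raw ICMP socket needs root/administrator rights and is restricted on Windows; the target is a placeholder):

    import socket

    def trace_hops(dest, max_hops=30, port=33434, timeout=2.0):
        """Send UDP probes with increasing TTL and read back the ICMP errors.

        Each router that decrements the TTL to zero answers with ICMP
        Time Exceeded, identifying itself; the destination (if not
        firewalled) answers with ICMP Port Unreachable instead.
        """
        dest_addr = socket.gethostbyname(dest)
        for ttl in range(1, max_hops + 1):
            recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
            send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            recv.settimeout(timeout)
            send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            try:
                send.sendto(b"", (dest_addr, port))
                try:
                    _, (hop_addr, _) = recv.recvfrom(512)
                except socket.timeout:
                    hop_addr = None  # hop did not answer, or its reply was dropped
                print(ttl, hop_addr or "*")
                if hop_addr == dest_addr:
                    break  # reached the destination
            finally:
                send.close()
                recv.close()

    trace_hops("example.com")  # placeholder target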
I note that PingPlotter generally tests on port 80.
Does anyone have any other suggestions for how I might better test client connectivity to the server? We are getting messages from the SQL Server client library saying that keep-alives are occasionally going missing, so I wanted to set up some TCP/IP packet testing and draw a nice graph of packet loss and transmission times for a 1 KB packet, monitored overnight.
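For the overnight run I have in mind something like the sketch below: a loop that times a TCP handshake against the SQL Server and logs one CSV row per probe, which I can graph in the morning. I am assuming the default SQL Server port 1433, and the server name, interval, and log path are all placeholders; a connect probe will not let me pick a 1 KB payload, but it does exercise the same TCP path the client library uses:

    import csv
    import socket
    import time
    from datetime import datetime

    SERVER = "sqlserver.example.com"  # placeholder: our SQL Server's host name
    PORT = 1433                       # default SQL Server port (assumption)
    INTERVAL = 5.0                    # seconds between probes
    LOGFILE = "probe_log.csv"         # placeholder log path

    def probe(host, port, timeout=2.0):
        """Time a TCP handshake; return latency in ms, or None on no response."""
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.perf_counter() - start) * 1000.0
        except (socket.timeout, OSError):
            return None

    with open(LOGFILE, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            rtt = probe(SERVER, PORT)
            writer.writerow([datetime.now().isoformat(),
                             "LOST" if rtt is None else "%.1f" % rtt])
            f.flush()
            time.sleep(INTERVAL)

Graphing the LOST rows against the timestamps should show whether the keep-alive complaints line up with genuine loss.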
At my last place of work I did this and found a lot of problems, but the failure to validate the TCP/IP packets sent to port 80 is a bit of a bugger, as it reports 100% packet loss immediately, despite the machine being a working and reachable SQL Server.