
I am using Boost to serialize a struct, then sending that struct over TCP to another application (both on the same machine, for testing purposes).

This was all running well, and the total time to pack, send and unpack was around 10ms. Now, however, it has suddenly jumped to 30ms.

Am I measuring the latency correctly? And if so, what could be causing this slowdown? How can I get the speed back up?

The struct:

#include <vector>
#include <boost/serialization/vector.hpp> // needed so "ar & buff" can serialize the vector

struct frame
{
    long milliseconds;
    std::vector<float> buff;

    template <typename Archive>
    void serialize(Archive& ar, const unsigned int version)
    {
        ar & milliseconds;
        ar & buff;
    }
};
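
As a sanity check (not part of either application; a rough sketch that assumes the struct above is in scope), the frame can be round-tripped through a std::stringstream, which lets me time pack/unpack on its own and see how large the serialized payload is:

    // Standalone sketch: serialize the frame to a string and read it back,
    // so the pack/unpack cost can be measured without any TCP transfer.
    #include <boost/archive/binary_oarchive.hpp>
    #include <boost/archive/binary_iarchive.hpp>
    #include <sstream>
    #include <iostream>

    int main()
    {
        frame f;
        f.milliseconds = 0;
        f.buff.assign(100, 0.0f);

        std::stringstream ss;
        {
            boost::archive::binary_oarchive oa(ss);
            oa << f;
        } // archive flushes when it goes out of scope

        std::cout << "serialized size: " << ss.str().size() << " bytes\n";

        frame g;
        boost::archive::binary_iarchive ia(ss);
        ia >> g;
    }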

Sending application:

    frame data;

    // Note: these flags only take effect if they are passed to the archive
    // constructors on both ends; below the archives are constructed without them.
    static auto const flags = boost::archive::no_header | boost::archive::no_tracking;

    boost::asio::io_service ios;
    boost::asio::ip::tcp::endpoint endpoint
        = boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 4444);
    boost::asio::ip::tcp::acceptor acceptor(ios, endpoint);
    boost::asio::ip::tcp::iostream stream;

    // The program blocks here until the client connects.
    acceptor.accept(*stream.rdbuf());

    // Get the time of day in milliseconds, to test latency.
    pt::ptime current_date_microseconds = pt::microsec_clock::local_time();
    long milliseconds = current_date_microseconds.time_of_day().total_milliseconds();

    // Add dummy data to the vector.
    std::vector<float> temp(100, 0.0f);
    data.buff = temp;

    // Add the timestamp, for the latency check.
    data.milliseconds = milliseconds;

    // Serialize and send.
    boost::archive::binary_oarchive archive(stream);
    archive << data;
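
To separate the pack/send cost from the one-way latency, the archive write could also be timed locally with a monotonic clock; a sketch (needs <chrono> and <iostream>; the variable names are illustrative, not from the original code):

    // Sketch: time serialize+write on the sender with std::chrono::steady_clock,
    // which is unaffected by wall-clock adjustments.
    auto t0 = std::chrono::steady_clock::now();

    boost::archive::binary_oarchive archive(stream);
    archive << data;
    stream.flush();   // push the bytes out now rather than on stream destruction

    auto t1 = std::chrono::steady_clock::now();
    std::cout << "pack+send took "
              << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
              << " us\n";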

Receiving application:

frame data_in;

std::string ip = "127.0.0.1";

// Connect to the sender on the loopback interface.
boost::asio::ip::tcp::iostream stream(ip, "4444");

if (!stream)
    throw std::runtime_error("can't connect");

// Deserialize the frame from the stream.
boost::archive::binary_iarchive archive(stream);
archive >> data_in;

// Compare the receive time against the timestamp stored by the sender.
pt::ptime current_date_microseconds = pt::microsec_clock::local_time();
long milliseconds = current_date_microseconds.time_of_day().total_milliseconds();
long timeElapsed = milliseconds - data_in.milliseconds;

std::cout << "tcp took: " << timeElapsed << " ms\n";
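
Since both processes run on the same machine, comparing wall-clock timestamps should be valid, but millisecond granularity is coarse for a 10-30 ms range. A sketch of the same receive-side measurement at microsecond resolution (it assumes the sender stored total_microseconds() in a hypothetical long long field):

    // Sketch: same measurement at microsecond granularity.
    // 'microseconds' is a hypothetical field the sender would fill with
    // pt::microsec_clock::local_time().time_of_day().total_microseconds().
    long long now_us = pt::microsec_clock::local_time().time_of_day().total_microseconds();
    long long elapsed_us = now_us - data_in.microseconds;
    std::cout << "tcp took: " << elapsed_us / 1000.0 << " ms\n";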
  • We're not psychics. What changed? Boost versions? Compiler version/flags? Network config? Machine? 10ms vs 30ms might not be significant with most network latencies. Is this always on loopback? – sehe Mar 01 '18 at 12:34
  • Thanks for your reply. Sorry, I should have specified: nothing has changed. The project is the same, I opened it on the same machine, with exactly the same spec, same Visual Studio. What do you mean by 'always on loopback'? – anti Mar 01 '18 at 12:38
  • Hah. Visual Studio seems relevant. Check virus scanners. – sehe Mar 01 '18 at 12:38
  • Antivirus software, you mean? There is none running. – anti Mar 01 '18 at 12:45
  • Does that include things like Windows Defender? I'd certainly consider services like that. – sehe Mar 01 '18 at 12:46
  • Ok, it gets weirder... On one machine, running Windows 7, I turned Defender off and got my speed back. But then I moved the project to a Windows 10 machine, and the speed drops from 10ms per transfer to 500ms per transfer! What could be causing this massive slowdown? Defender is off, no antivirus running. Using the exact same exe on both machines. – anti Mar 01 '18 at 20:52
  • I have tried setting `const boost::asio::ip::tcp::no_delay option(true); stream.rdbuf()->set_option(option);`, but this seems to have no effect. – anti Mar 01 '18 at 21:17
  • Check system load, use network analysis tools to figure out where the delay(s) happen. – sehe Mar 01 '18 at 21:27
  • Third party tools? Or is there something in Windows? – anti Mar 01 '18 at 21:35
  • CPU and memory usage is very low. – anti Mar 01 '18 at 21:44
  • Aha. I have added a 5 ms thread sleep in my send loop, and that has fixed my issue. I guess I was choking the bandwidth, trying to pack and send without a break (see the sketch after this thread)! Thank you for your help. – anti Mar 01 '18 at 22:43
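
For reference, here is a sketch combining the two tweaks discussed in the comments above (disabling Nagle's algorithm on the accepted socket and pacing the send loop with a short sleep). The loop structure and the names running / next_frame are assumptions, not the original code:

    // Sketch of the tweaks from the comment thread (illustrative only).
    #include <thread>
    #include <chrono>

    // ... after acceptor.accept(*stream.rdbuf()) ...
    boost::asio::ip::tcp::no_delay option(true);   // send small packets immediately
    stream.rdbuf()->set_option(option);

    // Hypothetical send loop; 'running' and 'next_frame()' are placeholders.
    while (running)
    {
        frame data = next_frame();
        boost::archive::binary_oarchive archive(stream);
        archive << data;
        stream.flush();
        std::this_thread::sleep_for(std::chrono::milliseconds(5)); // the 5 ms pause from the comments
    }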
