
Our application normally transfers several hundred megabytes of data (using HTTP GET), and the default 64 KB chunk size seems too small for an optimal download rate. Changing the value to 5 MB reduces the download time for 2 GB of data from about 2 minutes to 28 seconds.

Here is the demo code, which simply allocates the requested amount of data in memory and sends it:

#include <Windows.h>
#include <cpprest/http_listener.h>
#include <cpprest/json.h>
#include <cpprest/streams.h>
#include <cpprest/filestream.h>
#include <cpprest/producerconsumerstream.h>
#include <algorithm>
#include <chrono>
#include <iostream>
#include <string>

using namespace concurrency::streams;
using namespace web;
using namespace http;
using namespace http::experimental::listener;

int main(int argc, char *argv[]) {

    http_listener listener(L"http://*:8080/bytes");

    listener.support(methods::GET, [](http_request request) {

        auto q = web::uri::split_query(request.request_uri().query());

        // default: 100 MB of data
        std::size_t bytes_to_write = 100 * 1048576;

        if (q.find(L"b") != std::end(q)) {
            bytes_to_write = std::stoul(q[L"b"]);
        }
        if (q.find(L"kb") != std::end(q)) {
            bytes_to_write = std::stoul(q[L"kb"]) * 1024;
        }
        if (q.find(L"mb") != std::end(q)) {
            bytes_to_write = std::stoul(q[L"mb"]) * 1024 * 1024;
        }

        request.reply(status_codes::OK, std::string(bytes_to_write, '+'));
        std::cout << "Sent " << bytes_to_write << "bytes\n";
    });

    listener.open().wait();

    std::wcout << "Listening on " << listener.uri().port() << std::endl;

    while (true) {
        try {
            Sleep(1);
        }
        catch (...) {
            break;
        }
    }

    listener.close().wait();

    return 0;
}
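
The handler reads the b, kb and mb query parameters to decide how much data to send back, so smaller payloads can be requested as well; for example (illustrative values only, the timing runs below all use mb=2000):

curl -o NUL http://localhost:8080/bytes?kb=512
curl -o NUL http://localhost:8080/bytes?b=1000000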

For the timing test I use curl: curl -o NUL http://localhost:8080/bytes?mb=2000

Using the default 64 KB chunk size:

[root@localhost ~]# curl -o NUL 10.50.10.51:8080/bytes?mb=2000
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2000M  100 2000M    0     0  15.7M      0  0:02:06  0:02:06 --:--:-- 14.8M

Using a 5 MB chunk size:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2000M  100 2000M    0     0  69.2M      0  0:00:28  0:00:28 --:--:-- 77.1M

Currently I am modifying the source code of cpprest itself to get the latter result. The chunk size is a macro named CHUNK_SIZE defined in one of its source files (http_server_httpsys.cpp):

#define CHUNK_SIZE 64 * 1024
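
To get the 5 MB behaviour I simply change that value and rebuild the library; this is a local patch rather than an official configuration option:

#define CHUNK_SIZE 5 * 1024 * 1024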

Is there an easier way to do this? Or am I using cpprest in the wrong way?

  • why not make it a parameter? – Adam Oct 26 '15 at 06:29
  • @luke I don't think you can do it any other way. If you want to, it involves a lot of other changes to the library, which is not a good approach. – Balu Oct 26 '15 at 06:39
  • @adam If only I knew how to pass such a parameter; I would very much prefer not to change the source code of the library. – xiaofeng.li Oct 26 '15 at 07:03
  • @Prakash I am going to try. Maybe I can replace the implementation with my own... but it means replacing something within a DLL... so the current approach seems least surprising. – xiaofeng.li Oct 26 '15 at 11:22
