
I was doing some pub/sub testing with autobahn-cpp. However, I found that when you publish data at a frequency faster than the subscriber can consume it, the router (Crossbar) buffers the undelivered data and its memory usage keeps growing. Eventually the router uses up all the memory and is killed by the OS.

For example

publisher:

while (1)
{
    session->publish("com.pub.test", std::make_tuple(std::string("hello, world")));
    std::this_thread::sleep_for(std::chrono::seconds(1));  // sleep 1 s
}   // publish a string every second

subscriber:

void topic1(const autobahn::wamp_event& event)
{
    try
    {
        auto s = event.argument<std::string>(0);
        std::cerr << s << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(2)); // needs 2 s to finish the job
    }
    catch (std::exception& e) 
    {
        std::cerr << e.what() << std::endl;
    }
}
int main()
{
    ...
    session->subscribe("com.pub.test", &topic1);
    ...
}   // the publisher runs faster than the subscriber can consume

After several hours:

2016-01-7 10:11:32+0000 [Controller  16142] Worker 16145: Process connection gone (A process has ended with a probable error condition: process ended by signal 9.)

dmesg:

Out of memory: Kill process 16145(Crossbar.io Wor) score 4 or sacrifice child

My questions:

  • Is this normal (using up all the memory and getting killed by the OS)?
  • Or are there any config options that can be set to limit the memory usage?

I found a similar issue; see https://github.com/crossbario/crossbar/issues/48

System info: Ubuntu 14.04 (32-bit), CPython 2.7.6, Crossbar.io 0.11.1, Autobahn 0.10.9

Wei Guo
  • No idea what this library offers, but usually queues intended for producer/consumer use offer the option to set a maximum size. They may default to unbounded, but you can prevent them from storing more than, say, 10,000 items; if the producer gets that far ahead, it will block when it tries to store the 10,001st item until the consumer drains at least one item from the queue. – ShadowRanger Jan 07 '16 at 02:25
  • This isn't related to the issue you linked above. But it's an issue - since we don't yet have the knob to control the maximum buffered amount of data per client connection. If we have that, the event would either be dropped for that single slow subscriber or we could also have an option to deny a publish when at least one subscriber connection hits that limit. – oberstet Jan 07 '16 at 21:17
  • I have filed https://github.com/crossbario/crossbar/issues/583 to track this. Please feel free to comment there .. – oberstet Jan 07 '16 at 21:27
  • Thanks for the reply @oberstet and with kind regards to all crossbar contributors. Maybe I should find a way to sync the publisher and subscriber. – Wei Guo Jan 08 '16 at 01:05
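
ShadowRanger's comment about bounded producer/consumer queues can be sketched like this. Crossbar itself does not currently expose such a knob for the per-connection buffer (that is what issue 583 tracks), so this is only an illustration of the idea on the application side; bounded_queue and max_size are names invented for this example, not part of autobahn-cpp or Crossbar. push() blocks once max_size items are waiting, so a fast producer is throttled to the consumer's pace instead of letting the buffer grow without bound.

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <utility>

// Minimal bounded queue (illustration only): push() blocks when full,
// pop() blocks when empty.
template <typename T>
class bounded_queue
{
public:
    explicit bounded_queue(std::size_t max_size) : m_max_size(max_size) {}

    void push(T item)
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        // Wait until the consumer has drained at least one item.
        m_not_full.wait(lock, [this] { return m_queue.size() < m_max_size; });
        m_queue.push(std::move(item));
        m_not_empty.notify_one();
    }

    T pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_not_empty.wait(lock, [this] { return !m_queue.empty(); });
        T item = std::move(m_queue.front());
        m_queue.pop();
        m_not_full.notify_one();
        return item;
    }

private:
    const std::size_t m_max_size;
    std::mutex m_mutex;
    std::condition_variable m_not_full;
    std::condition_variable m_not_empty;
    std::queue<T> m_queue;
};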

1 Answer


The router's send queue for the slow client is filling up with messages it hasn't been able to deliver yet.

This is a "feature" of message-based protocols.

Instead of request -> response, it's request -> response + response + response + ...

You're running into "backpressure", where the queue of responses to send is filling up faster than the client can receive them.

You should stop producing or drop responses. Do you need all the responses, or just the latest?
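
To illustrate the "just the latest" option on the subscriber side, here is a minimal sketch (not an autobahn-cpp feature; latest_value, mailbox and worker are names invented for this example). The event handler just overwrites a single slot and returns immediately, so the client keeps draining events from the router as fast as they arrive, and the slow 2-second job only ever sees the newest value:

#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <utility>

#include <autobahn/autobahn.hpp>  // autobahn-cpp header, as in the question; adjust if your include path differs

// Single-slot "latest value" mailbox: set() overwrites, take() hands out
// the newest value if there is one.
class latest_value
{
public:
    void set(std::string value)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_value = std::move(value);
        m_has_value = true;
    }

    bool take(std::string& out)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (!m_has_value)
            return false;
        out = std::move(m_value);
        m_has_value = false;
        return true;
    }

private:
    std::mutex m_mutex;
    std::string m_value;
    bool m_has_value = false;
};

latest_value mailbox;

// Event handler from the question, now cheap: store the payload and return.
void topic1(const autobahn::wamp_event& event)
{
    mailbox.set(event.argument<std::string>(0));
}

// Worker thread: runs the 2-second job on whatever value is newest.
void worker()
{
    std::string s;
    while (true)
    {
        if (mailbox.take(s))
        {
            std::cerr << s << std::endl;
            std::this_thread::sleep_for(std::chrono::seconds(2));  // the slow job
        }
        else
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }
}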

Here is some "backpressure" documentation from uWebSockets.

There is an "Observable" pattern (similar to Promises) that can help. RxJS is for JavaScript, but I'm sure there is something similar for C++. It's like a streaming promise library.

Michael Cole