
With more and more companies switching to public cloud services, I'm curious what you guys think about TCP/IP tuning in the cloud. Is it worth bothering with? Given that you don't have access to the host server, you're somewhat limited, I presume.

Let's say, for the sake of argument, that you're running three MongoDB servers in a replica set on FreeBSD or Linux, all syncing over an internal network.

I'd also be curious whether anyone has done actual performance benchmarks to back up their arguments. I benchmarked the various network drivers available for KVM/QEMU here, but I'm curious what the gurus here suggest for tuning further.

I started playing around a bit with the tuning recommendations suggested over here, but interestingly enough I saw a decrease in performance rather than an increase. Perhaps I didn't fully understand the tweaks.

Update: I did a few more benchmarks and posted the results here. Unfortunately, the results weren't really what I expected.

vpetersson
  • I doubt your analysis in the blog is correct if you want to make assumptions about qemu-kvm and virtio_net, unless you want to specifically say all the conclusions are specific to FreeBSD as the guest OS (and I don't know what is being used on the hosts; it might also be something suboptimal). If you want to test the actual and latest code, you need to go with Fedora for both host and guest. – dyasny Jan 24 '12 at 14:36
  • Of course the assumption in my blog was that FreeBSD was used. That was pretty clear. It was also stated that the tests were run on a public cloud (CloudSigma, to be specific). The assumption should also be made that it is a public cloud, just as stated in the question. I'm also not interested in testing the bleeding-edge VirtIO driver, but rather generic tuning tips. I did, however, do some further benchmarks and posted them [here](http://viktorpetersson.com/2012/01/24/benchmarking-and-tuning-freebsds-virtio-network-driver/). – vpetersson Jan 24 '12 at 20:13
  • I was addressing this comment: "While I performed these benchmarks on CloudSigma’s architecture, since they are running KVM/Qemu, they should be a good indicator of general performance under KVM/Qemu". I think it is misleading, and I'm really making an understatement by saying it so mildly. – dyasny Jan 24 '12 at 20:54
  • Sure, that's a good point. It is a somewhat simplistic assumption. I guess if you really want to do performance benchmarks and cut out as many potential architectural bottlenecks as possible, two VMs on a shared local LAN would give you a better indicator. – vpetersson Jan 25 '12 at 09:58
  • Of course. Moreover, you need to make sure you are running a proper host and guest OS, with the proper set of drivers, on proper hardware. By proper I mean benchmark elements that are targeted by the development mainstream: Fedora and RHEL (possibly Ubuntu, but I'm not sure it's well tested; I've seen some weird behaviour there that doesn't reproduce on Fedora), enterprise-grade hardware, and so on. You're using unknown host software and hardware, and the guest OS is far from popular for such tasks as well. – dyasny Jan 25 '12 at 10:36

1 Answer


There are two points I'd like to make that may impact your conclusions.

1) Review what has been written about autotuning. This feature, which, as I recall, first appeared in the 2.6.18 Linux kernel, has been improved in subsequent kernels. Put simply, it allows the kernel to dynamically adjust the TCP parameters that network programmers got used to tweaking by hand. Google "linux tcp autotuning". Also refer to http://www.psc.edu/networking/projects/tcptune/?_sm_byp=iVVq2rrM1N2DqN0r#Linux

The short version: let Linux adjust the TCP stack parameters for you, and don't intervene, as that may make performance worse.
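As a quick way to confirm autotuning is still active, here is a minimal sketch, assuming a Linux guest with the standard /proc/sys layout (the sysctl names below are the stock Linux ones):

```python
#!/usr/bin/env python3
"""Minimal check of the Linux TCP autotuning knobs via /proc/sys."""
from pathlib import Path

def read_sysctl(name: str) -> str:
    # sysctl names map to /proc/sys paths with dots replaced by slashes
    return Path("/proc/sys", name.replace(".", "/")).read_text().strip()

# 1 means the kernel automatically moderates the receive buffer
print("net.ipv4.tcp_moderate_rcvbuf =", read_sysctl("net.ipv4.tcp_moderate_rcvbuf"))

# min / default / max bounds (bytes) that autotuning operates within
print("net.ipv4.tcp_rmem =", read_sysctl("net.ipv4.tcp_rmem"))
print("net.ipv4.tcp_wmem =", read_sysctl("net.ipv4.tcp_wmem"))
```

If tcp_moderate_rcvbuf reads 0, one of the applied tweaks has disabled receive-buffer autotuning; likewise, an application that sets SO_RCVBUF explicitly opts its sockets out of autotuning. Either could explain a performance drop after manual tuning.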

2) Check which version of KVM/QEMU you are using. There has been a lot of work on performance, and there was a bug in earlier versions of virtio_net that limited performance on high-speed networks. Since KVM/QEMU is at 1.0 now, go with that.
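On a public cloud you can't normally inspect the host's hypervisor and would have to ask the provider, but on a host you control, a quick version check might look like the following sketch (the binary name qemu-system-x86_64 is an assumption here; it varies by distribution, e.g. qemu-kvm on RHEL):

```python
#!/usr/bin/env python3
"""Print the version string of the local QEMU binary."""
import subprocess

# The binary name varies by distribution (qemu-kvm, qemu-system-x86_64, ...);
# adjust it to match your installation.
result = subprocess.run(["qemu-system-x86_64", "--version"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())
```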

wcorey