
We are currently testing Riverbed Steelheads in order to accelerate SnapMirror replication between two sites.

The distance between the sites is 100 km. Connection: 150 Mbit/s MPLS network.

Systems: FAS6080 (Source) and FAS3160 (Destination) with ONTAP 7.3.4

The SnapMirror is configured as follows (snapmirror.conf):

FAS6080 = multi (10.128.85.43,10.128.136.15) (10.128.33.68,10.128.136.15)

FAS6080:/vol/M0P_DB/sapdata FAS3160:/vol/sm_M0P_DB_dbp_test/sapdata kbs=15360,wsize=4194304 15 2,6,10,14,18,22 * *
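As a quick sanity check on that line, here is what the `kbs` throttle and `wsize` window should allow on this link (a rough sketch; the ~5 ms RTT over the 100 km MPLS path is an assumption, not a measured value):

```python
# Sanity-check the snapmirror.conf throttle and TCP window against the link.
# Assumption: ~5 ms round-trip time over the 100 km MPLS link.

link_mbit = 150            # MPLS link capacity in Mbit/s
kbs = 15360                # snapmirror.conf throttle in KB/s
wsize = 4194304            # snapmirror.conf TCP window size in bytes
rtt_s = 0.005              # assumed round-trip time in seconds

throttle_mbit = kbs * 8 / 1000            # KB/s -> Mbit/s
bdp_bytes = link_mbit * 1e6 / 8 * rtt_s   # bandwidth-delay product in bytes

print(f"throttle allows ~{throttle_mbit:.0f} Mbit/s")
print(f"BDP is ~{bdp_bytes / 1024:.0f} KB; wsize is {wsize // 1024} KB")
```

The throttle works out to roughly 123 Mbit/s and the 4 MB window is far larger than the bandwidth-delay product, so neither setting explains a ceiling around 16-20 Mbit/s; the limit is presumably elsewhere.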

And the network on the NetApp:

mvif: flags=0xa2d08863 mtu 1500 ether 02:a0:98:0f:30:fe (Enabled virtual interface)

mvif-1604: flags=0x6948863 mtu 1500 inet 10.128.85.43 netmask 0xffffff00 broadcast 10.128.85.255 partner mvif-1604 (not in use) ether 02:a0:98:0f:30:fe (Enabled virtual interface)

mvif-1610:flags=0x6948863 mtu 1500 inet 10.128.33.68 netmask 0xffffffc0 broadcast 10.128.33.127 partner mvif-1610 (not in use) ether 02:a0:98:0f:30:fe (Enabled virtual interface)

Does anyone have an idea whether there is a special configuration I forgot that would optimize the replication?

The problem is that I had 8 Mbit/s replication speed before, and 16 Mbit/s now... peak is 20! That's not enough, and I can't find out where the bottleneck comes from...

Thanks in advance for your help!

waszkiewicz
  • what about enabling compression directly? compression=enable in snapmirror.conf would do the job... of course, more cpu load on the controllers would be generated. – zero_r Nov 24 '11 at 18:34
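For reference, zero_r's suggestion would look like the following in snapmirror.conf. This is a hedged sketch: SnapMirror network compression in ONTAP 7.3 requires a connection-name line like the `FAS6080 = multi (...)` entry already shown above, so verify the option against the 7.3.4 documentation before relying on it.

```
FAS6080:/vol/M0P_DB/sapdata FAS3160:/vol/sm_M0P_DB_dbp_test/sapdata compression=enable,kbs=15360,wsize=4194304 15 2,6,10,14,18,22 * *
```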

1 Answer


What Riverbed model are you using?

  1. For Riverbed, the general rule of thumb is: DO NOT do application-level compression. Let the Riverbed do it.
  2. When optimizing traffic with the default settings, the Riverbed will try to use disk for deduplication. The problem is that Riverbed uses SATA in all but their most top-end systems, and this creates a bottleneck for high-throughput traffic such as replication. Additionally, this traffic typically isn't very repetitive, so it basically wipes your disk cache for no benefit.

We ran into a similar situation with EqualLogic replication. Go into your in-path rules and set the subnet your SANs reside on to do memory-only caching (make sure this rule is above your optimize-all rule so it applies first). This should speed up your replication a bit. You're basically giving up a little data reduction in exchange for better throughput.
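On the Steelhead CLI such a rule would look roughly like the sketch below. This is from memory and the exact syntax varies by RiOS version; the subnets are taken from the addresses in the question and are assumptions, so verify against your RiOS CLI reference (the GUI under Configure > Optimization > In-Path Rules is the safer route).

```
# Sketch only -- verify against your RiOS version's documentation.
# rulenum 1 places it above the default optimize-all rule so it matches first.
in-path rule auto-discover srcaddr 10.128.85.0/24 dstaddr 10.128.136.0/24 optimization sdr-m rulenum 1
```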

Eric C. Singer