3

We are experiencing a performance problem with Windows 2008 CIFS clients. We have a FreeNAS server that delivers 1.3GB/s on ZFS writes, and a 10Gb network connecting the NAS server and the CIFS clients. Using two Linux CIFS clients we can get around 1.2GB/s, but Windows 2008 clients can only give us 400MB/s.

Is that the best a Windows 2008 client can deliver, or do we have a poorly configured Windows client?

Much appreciated.

Huamin
  • Depends...what sort of local throughput do you have on the Windows clients? It's limited by the disk speed on the guest too. – Nathan C Jun 24 '13 at 15:12
  • There are just too many variables to answer this properly: hardware used, drivers, quality of drivers, how much RAM, caching, CPU load, how many cores, small files vs. big files, filesystem used, with or without ACL and/or quota handling. – Tonny Jun 24 '13 at 15:15
  • Let's see if I got this. When writing to the FreeNAS server, the Linux CIFS clients get 1.2GB/s, but when the Win2008 clients write to the FreeNAS server they're only getting 400MB/s? – sysadmin1138 Jun 24 '13 at 15:20
  • It's not all about throughput... – ewwhite Jun 24 '13 at 15:22
  • Yes, we got poor throughput using Windows 2008; the Linux CIFS clients are doing much better. We checked the protocol, and the Linux CIFS client used SMB 1. – Huamin Jun 24 '13 at 16:37
  • We used both Windows 2008 and Linux CIFS clients; the hardware configurations are similar. I think the NAS box is doing well based on the benchmark numbers we saw, and the Linux CIFS numbers are close to line speed. The only problem is the Windows CIFS client. – Huamin Jun 24 '13 at 16:40
  • Do the Windows clients have AV running, and is that true for the Linux clients? – tony roth Jun 24 '13 at 17:54
  • Also, are the NICs configured correctly on the Windows clients? – tony roth Jun 24 '13 at 17:56
  • No AV running on Windows or Linux. Other than jumbo frames, what else needs to be checked on the NICs? – Huamin Jun 24 '13 at 20:08
  • And these are not 10Gb copper NICs, right? – tony roth Jun 24 '13 at 20:21
  • Yes, 10Gb copper NICs – Huamin Jun 25 '13 at 16:47
  • Like @Tonny said, there are a lot of things to consider, like TCP offload, RSS, how current the NIC drivers are, etc. Have you upgraded the NIC drivers? – tony roth Jun 25 '13 at 18:14
  • I'd also suspect that if you used Wireshark or tcpdump filtered to SMB traffic, the conversation between the devices would look quite different! – tony roth Jun 25 '13 at 18:17

2 Answers

1

You might want to try enabling SMB2 support on the server side.

Set max protocol = SMB2 in the [global] section of your smb.conf.
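
As a minimal sketch, the relevant fragment of smb.conf would look like the lines below (on FreeNAS the file is generated by the GUI, so this would typically go into the CIFS service's auxiliary parameters rather than being edited by hand, and the CIFS service needs a restart to pick it up):

[global]
    max protocol = SMB2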

Christian
0

Something to consider is the MTU size, i.e. what are called jumbo frames. Full duplex and TOE are two other things to look at.

I'm running FreeNAS 8.3.1, exporting iSCSI disks to Linux boxes that run fairly I/O-intensive virtual machines off the mounts. Setting the frame size larger than the default 1500 had a dramatic impact on performance and throughput. It has to be set on both the client and server side or it doesn't take effect.

FreeNAS has some nice graphs on the system information tab that help you figure out where your bottlenecks are.

Oh, and a quick heads-up: changing the MTU size is part art and part science. Drivers in FreeBSD, Linux and Windows differ in which sizes they accept, so you may have to dig into the driver documentation or experiment to find values that work. Also, the lowest MTU along the path is the effective value for the entire path.

On Linux or FreeBSD/FreeNAS:

ifconfig -a | grep -i mtu        # show the current MTU of every interface
ifconfig eth0 mtu 9122 up        # Linux: set a jumbo MTU on eth0
ifconfig em0 mtu 9122            # FreeBSD/FreeNAS: set a jumbo MTU on em0

On Windows, right-click the NIC in Device Manager and look in the properties of the NIC driver (usually on the Advanced tab). The setting may be called MTU, Jumbo Frame, or Frame Size depending on the driver; the default value is usually 1500.
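
To confirm what MTU Windows is actually using, netsh can list and set it per interface; a quick sketch on Windows 2008 (substitute your adapter's name for "Local Area Connection"):

netsh interface ipv4 show subinterfaces
netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent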

To check the route MTU from FreeBSD/FreeNAS (on Linux, ip route get <ipaddr> is the rough equivalent):

route get <ipaddr>
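
To verify that jumbo frames actually survive the whole path, a ping with the don't-fragment bit set and a payload just under the jumbo MTU works end to end (8972 bytes of payload plus 28 bytes of IP/ICMP header makes a 9000-byte packet); a sketch assuming a 9000 MTU and a placeholder <ipaddr>:

ping -M do -s 8972 <ipaddr>    # Linux: -M do sets don't-fragment
ping -D -s 8972 <ipaddr>       # FreeBSD/FreeNAS: -D sets don't-fragment
ping -f -l 8972 <ipaddr>       # Windows: -f don't fragment, -l payload size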

Some notes mention hardwiring full duplex, but any modern switch will negotiate this correctly and it should not be a problem; I have not seen any duplexing issues on modern hardware.

For my iSCSI usage it was important that the block size of the exported volume be larger, and I set it to 4096 for the virtual device. Pay attention to the block sizes of the underlying exported volumes, as those also have an impact on performance. That may not affect your SMB exports.

One last question: does your 10Gb NIC have TOE (TCP Offload Engine) enabled, i.e. hardware acceleration?

TOE is the network card's equivalent of a graphics card's GPU, along with something like the DMA (Direct Memory Access) used by old-style hard drive controllers. It offloads the work of the TCP/IP stack to the NIC instead of pushing it through the motherboard's front-side bus and CPU, which become bottlenecks for data moving at this speed.

For what you are asking to work, you will need your 10Gbps cards to have TOE (hardware acceleration) enabled in the OS and drivers. If you already have TOE enabled, then ignore this part of the response.
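
On Windows 2008, netsh gives a rough picture of the global offload state (TOE shows up as the "Chimney Offload State", next to RSS and autotuning); a sketch for checking and enabling it, not a definitive tuning recipe:

netsh interface tcp show global                   # look for Chimney Offload State and Receive-Side Scaling
netsh interface tcp set global chimney=enabled    # enable TCP chimney offload (TOE)
netsh interface tcp set global rss=enabled        # enable receive-side scaling

The per-NIC offloads (checksum, large send) live on the same Advanced tab of the driver properties mentioned above for jumbo frames.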