
Would it make sense to virtualize Windows using Xen or VMware ESX and expose the storage on a JBOD as NFS, rather than as CIFS/SMB, for better I/O throughput?
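On the NFS side, what I have in mind is something as simple as the sketch below; the export path, subnet and mount options are placeholders rather than a tested configuration:

    # /etc/exports on the storage server (placeholder path and subnet)
    /export/data  192.168.10.0/24(rw,async,no_subtree_check)

    # apply and verify the export
    exportfs -ra
    showmount -e localhost

    # example client mount with larger read/write sizes over TCP
    mount -t nfs -o vers=3,tcp,rsize=32768,wsize=32768 storage1:/export/data /mnt/data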

Is it true that the setup below can yield only about 20 MB/sec over CIFS/SMB versus 80 MB/sec over NFS, assuming I use 2-3 servers (1 Gbps NICs), NIC teaming, switch-side link aggregation, jumbo frames, etc.?

This is all my test rig budget allows for:

  • Storage server: HP DL370 G6: 2x Xeon 55xx CPUs, 16 GB RAM, 300 GB 10K RPM SAS, P410i with 512 MB cache
  • Windows server: HP DL580: 2x Xeon 56xx, 64 GB RAM, 1 Gb NIC
  • HP ProCurve 3500 Layer 3 switch, 1 Gb ports
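
For the NIC teaming and jumbo frames mentioned above, I was planning on something like the following on the RHEL side; the interface names, addressing, bond mode and switch commands are assumptions from memory, not a tested config:

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (placeholder addressing)
    DEVICE=bond0
    IPADDR=192.168.10.10
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes
    MTU=9000
    BONDING_OPTS="mode=802.3ad miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

    # ProCurve 3500 side: LACP trunk on ports 1-2, jumbo frames on the VLAN
    trunk 1-2 trk1 lacp
    vlan 1 jumbo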

I am planning to use a distributed file system, GlusterFS (user space), on Red Hat Enterprise Linux 5.5 with ext3, but I am tempted by a Solaris post about CIFS at 1 GB/sec:

http://blogs.oracle.com/brendan/entry/cifs_at_1_gbyte_sec
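
For completeness, the GlusterFS setup I have in mind is roughly the sketch below, using the 3.1-style gluster CLI (hostnames and brick paths are placeholders; older releases generate volfiles with volgen instead):

    # on one storage node: join the peers and build a 2-brick distributed volume
    gluster peer probe storage2
    gluster volume create gvol0 storage1:/export/brick1 storage2:/export/brick1
    gluster volume start gvol0

    # on the server facing the Windows clients: mount natively,
    # then re-export the mount over NFS or Samba
    mount -t glusterfs storage1:/gvol0 /mnt/gvol0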

I am torn between the two... I know my hardware is not on par with the setup in that post, but does OpenSolaris really offer better CIFS/SMB performance than the alternatives?

Thanks in advance for your thoughts!

JMS77

1 Answer


I have a few Dell 2950s with lower specs than your machines running Debian Lenny, and they can easily max out two bonded gigabit links using Samba. All of them have large SATA RAID-6 arrays attached, so they are not IOPS kings, but with large sequential reads they keep the pipes full.
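
As a rough sketch, the kind of smb.conf tuning that helps with large sequential transfers looks like this; the values are illustrative and worth benchmarking on your own hardware rather than copied as-is:

    # /etc/samba/smb.conf -- [global] excerpt, illustrative values only
    socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
    use sendfile = yes
    read raw = yes
    write raw = yes
    max xmit = 65536

    # quick sequential-read check from a Linux CIFS client (share and path are placeholders)
    mount -t cifs //fileserver/share /mnt/share -o user=test
    dd if=/mnt/share/bigfile of=/dev/null bs=1M count=8192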

Note that the blog you linked was about 1 gigabyte/sec, not 1 gigabit/sec.

Ryan Bair
  • Yup, the blog was about 1 GB/s performance; any computer from the past two years with 4x 1 TB SATA disks can sustain 1 gigabit/s reads. – Hubert Kario Oct 04 '10 at 18:10
  • Thanks guys, I have corrected the post about bytes! What kind of switch were you using? – JMS77 Oct 05 '10 at 01:11