I am seeing some strange behaviour that I cannot explain to myself - hopefully someone here can.

We received some servers (hardware) and mounted an NFS share. We plan to use these servers as Splunk indexers, but - since Splunk does not recommend NFS as storage - we wanted to run some performance tests first.

So I ran Bonnie++ and got really poor results (around 300 IOPS), but the storage team tells me that on their side they see around 1200 IOPS, which would be fine. How is this possible, and what can I do to get that performance on the server?

  • In other words, a single NFS request produces four I/O requests on the server side? Did you run the same I/O tests on the storage system directly? – kofemann Apr 05 '16 at 09:12
  • The "good" results came from tests done directly on the storage - actually not a real tests but just a performance data from the storage while the Bonnie++ test was executed on the server. – pinas Apr 05 '16 at 09:39

1 Answer

http://veerapen.blogspot.com/2011/09/tuning-redhat-enterprise-linux-rhel-54.html

In short:

Changing the default Linux I/O scheduler from [cfq] to [noop] on systems with hardware RAID gives an I/O improvement.
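
A minimal sketch of the scheduler switch, assuming a hypothetical device name of sda and a pre-blk-mq kernel (newer kernels replace noop/cfq with none/mq-deadline/bfq):

    #!/usr/bin/env python3
    """Inspect and switch the I/O scheduler of one block device (needs root)."""
    from pathlib import Path

    DEVICE = "sda"  # assumption: replace with the device backing your data volume
    sched = Path(f"/sys/block/{DEVICE}/queue/scheduler")

    # The active scheduler is shown in brackets, e.g. "noop deadline [cfq]"
    print("current:", sched.read_text().strip())

    # Writing the name activates it for this boot only; add elevator=noop to the
    # kernel command line to make it persistent on pre-blk-mq kernels.
    sched.write_text("noop\n")
    print("now:    ", sched.read_text().strip())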

Use the nfsstat command to calculate the percentage of reads versus writes, and set the RAID controller cache ratio to match.
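
A sketch of the read/write split, parsed from /proc/net/rpc/nfsd (the counters behind nfsstat -s) and assuming the clients speak NFSv3:

    #!/usr/bin/env python3
    """Estimate the server-side NFSv3 read/write ratio."""

    def v3_read_write_counts(path="/proc/net/rpc/nfsd"):
        with open(path) as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] == "proc3":
                    # Counters follow in NFSv3 procedure order: NULL, GETATTR,
                    # SETATTR, LOOKUP, ACCESS, READLINK, READ (6), WRITE (7), ...
                    counters = [int(x) for x in fields[2:]]
                    return counters[6], counters[7]
        raise RuntimeError("no proc3 line found - is the NFS server running?")

    reads, writes = v3_read_write_counts()
    total = (reads + writes) or 1
    print(f"reads : {reads:>12} ({100.0 * reads / total:.1f} %)")
    print(f"writes: {writes:>12} ({100.0 * writes / total:.1f} %)")
    # Bias the RAID controller cache ratio towards whichever side dominates.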

For heavy workloads, you will need to increase the number of NFS server threads.
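
A small sketch for checking the running thread count; the target of 64 is only an illustrative value, and on RHEL the persistent setting is RPCNFSDCOUNT in /etc/sysconfig/nfs:

    #!/usr/bin/env python3
    """Report the number of running nfsd threads (the default is usually 8)."""

    TARGET_THREADS = 64  # assumption: pick a value that matches your workload

    with open("/proc/net/rpc/nfsd") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == "th":
                running = int(fields[1])
                print(f"nfsd threads running: {running}")
                if running < TARGET_THREADS:
                    print(f"consider rpc.nfsd {TARGET_THREADS} (and raising "
                          f"RPCNFSDCOUNT for the next boot)")
                break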

Configure the NFS server threads to write to disk without delay by using the no_wdelay export option.
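
A quick sketch that flags exports still using the default write-delay behaviour, assuming a standard /etc/exports layout (note that no_wdelay has no effect on exports marked async):

    #!/usr/bin/env python3
    """List /etc/exports entries that do not set no_wdelay."""
    from pathlib import Path

    for line in Path("/etc/exports").read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if line and "no_wdelay" not in line:
            print("write delay still enabled:", line)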

Tell the Linux kernel to flush dirty pages as quickly as possible so that individual writebacks stay as small as possible. In the Linux kernel, the dirty page writeback frequency is controlled by two parameters.
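
A sketch of the flush tuning, assuming the two parameters meant here are vm.dirty_background_ratio and vm.dirty_expire_centisecs (the linked article may tune a different pair, e.g. vm.dirty_ratio or vm.dirty_writeback_centisecs) and that the example values suit a write-heavy server:

    #!/usr/bin/env python3
    """Apply example dirty-page writeback settings (needs root, not persistent)."""
    from pathlib import Path

    EXAMPLE_SETTINGS = {
        # start background writeback once 5 % of memory is dirty
        "/proc/sys/vm/dirty_background_ratio": "5",
        # treat dirty data as expired after 1 s instead of the usual 30 s
        "/proc/sys/vm/dirty_expire_centisecs": "100",
    }

    for path, value in EXAMPLE_SETTINGS.items():
        p = Path(path)
        print(f"{path}: {p.read_text().strip()} -> {value}")
        p.write_text(value + "\n")
    # Add the same keys to /etc/sysctl.conf to keep them across reboots.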

For faster disk writes, use the filesystem's data=journal mount option and prevent updates to file access times, which in themselves result in additional data being written to disk. This mode is the fastest when data needs to be read from and written to disk at the same time, where it outperforms all other journalling modes.
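
A sketch to verify those mount options on the exported filesystem, assuming a hypothetical ext3/ext4 mount point of /srv/nfs (data=journal is specific to ext3/ext4, and noatime suppresses the access-time updates):

    #!/usr/bin/env python3
    """Check /proc/mounts for data=journal and noatime on one mount point."""

    MOUNTPOINT = "/srv/nfs"  # assumption: path of the filesystem being exported

    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype, options, *_ = line.split()
            if mountpoint == MOUNTPOINT:
                opts = options.split(",")
                print(f"{device} on {mountpoint} type {fstype}: {options}")
                for wanted in ("data=journal", "noatime"):
                    print(f"  {wanted}: {'ok' if wanted in opts else 'MISSING'}")
                break
        else:
            print(f"{MOUNTPOINT} not found in /proc/mounts")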

Vasco V.