
My colleagues are pursuing this with NetApp and Oracle, but I thought I'd post here on the off chance someone else has seen this.

We have a Red Hat 5 VM (fully up to date) running Oracle 11i, with data disks mounted via the VM's Linux kernel NFS client using Oracle's recommended mount options, and performance is very inconsistent (queries that should take < 2 seconds sometimes take > 60 seconds).
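For reference, the mount options follow Oracle's documented NFS recommendations for Linux, roughly like the following (the server name and paths here are placeholders rather than our actual config, and the exact values vary by Oracle version):

    # /etc/fstab - illustrative entry using Oracle's commonly documented NFS options
    nas01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0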

Funny thing is, we can run the same queries perfectly consistently in < 2 seconds on a VMDK residing on the SAME NetApp NFS datastore!

Makes me wish Oracle and NetApp collaborated as closely as VMware and NetApp did on the Virtual Storage Console, which we used to set the NFS options perfectly and keep them in compliance...

We have tried a few Linux NFS options others have posted and haven't seen any improvement so far.

We are now creating VMDKs for the VM to replace the Linux NFS mounts and work around the issue, as our developers need consistent performance ASAP.

Mark Henderson
  • What version of Oracle and what version of ONTAP? We had the same problem (but untested as a VMDK) with 11gR1 and CentOS. Changing to iSCSI mounts on the NetApp worked flawlessly. I run plenty of VMware via NFS on the same box. Technically I'm running the IBM version of a NetApp, with ONTAP 7.2.4. I have a new box on which I'm about to test the same setup with CentOS 5.4 and 11gR2, hoping either the newer OS or the updated Oracle fixes it. – Keith Stokes Mar 16 '10 at 02:57
  • The original installation is RHEL 4.5. – Keith Stokes Mar 16 '10 at 11:35

1 Answer


We're seeing the same behaviour with Oracle Unbreakable Linux 5.4, Oracle 11gR2 and ONTAP 7.3.2. Mounting 'raw' NFS is much slower than accessing the same storage via a VMDK (using the same underlying ESX host, which mounts NFS via the VMkernel). Both the 'raw' NFS volumes and the NFS datastore are in the same aggregate and hence on the same spindles.

We don't want to change to either block storage or VMDKs, as that would change our backup and DR strategy, not to mention the support requirements. I'll post any solution I find back here, and if anyone else can contribute, please post!

Regards,

Ed Grigson

UPDATE: We resolved our case - it was the NFS mount options, in particular the 'noatime' parameter in combination with the 'actimeo' parameter. Setting 'noatime' and NOT using 'actimeo=0' solved it for us.

The mount option actimeo=0 that we used turns off attribute caching on the client. This means the client always has the latest file attributes from the server, but at the cost of increased latency due to the extra physical I/O. Our performance problem was most acute during installation because we were expanding ZIP files and updating thousands of date stamps. By using 'noatime' (both in the client mount options and in the NetApp volume properties) to disable date-stamp updates, we avoid this issue.

NOTE: The behaviour of actimeo varies between the 2.4 and 2.6 Linux kernels, which is another reason this may not have been encountered sooner.

NOTE: 'actimeo=0' is an Oracle recommended parameter for Oracle 10gR2 on Linux, but there is no guidance for WebSphere running on Oracle. https://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb7518
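As a concrete sketch of the change (server name and paths are placeholders, and the remaining options simply follow the usual Oracle NFS recommendations, so check them against your own config):

    # /etc/fstab - before (illustrative): actimeo=0 disables all attribute caching
    nas01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0

    # /etc/fstab - after (illustrative): drop actimeo=0 and add noatime to stop atime updates
    nas01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,noatime  0 0

    # On the filer itself (7-mode ONTAP), the matching volume-level setting is,
    # if memory serves, the no_atime_update option:
    #   vol options oradata no_atime_update on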