We also use our NetApps as plain block storage for VMware and have been doing so for over two years now without issue; the only difference is that we use iSCSI. (I'm personally not too happy about that arrangement, since it seems like our NetApps are overqualified for the job.)
I don't have the exact commands we used to create the vol and LUN, but here's what they look like now:
vmstorage4a> vol status vol1
         Volume State           Status            Options
           vol1 online          raid_dp, flex     nosnap=on, nosnapdir=on,
                                64-bit            no_atime_update=on,
                                                  fractional_reserve=0
                Containing aggregate: 'aggr0'
vmstorage4a> lun show -v
        /vol/vol1/vms5a-0       8t (8796093022208)   (r/w, online, mapped)
                Serial#: -d9-P?B811NB
                Share: none
                Space Reservation: enabled
                Multiprotocol Type: vmware
                Maps: vm=0
                Occupied Size: 3.4t (3793203814400)
                Creation Time: Fri Jun 8 22:39:10 EDT 2012
                Cluster Shared Volume Information: 0x0
vmstorage4a> df -h vol1
Filesystem               total       used      avail  capacity  Mounted on
/vol/vol1/              8500GB     8225GB     274GB       97%   /vol/vol1/
snap reserve               0TB        0TB       0TB      ---%   /vol/vol1/..
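Since I don't have the original creation commands, here is a rough 7-mode reconstruction of how you'd build a volume and LUN like the one above. Treat it as a sketch: the volume name, size, aggregate, LUN path, and the igroup name "vm" are simply lifted from the output above, not the exact commands we ran.

# thick-provisioned flexvol on aggr0, no snapshot reserve or snapshot overhead
vol create vol1 -s volume aggr0 8500g
snap reserve vol1 0
vol options vol1 nosnap on
vol options vol1 nosnapdir on
vol options vol1 no_atime_update on
vol options vol1 fractional_reserve 0

# space-reserved LUN of type vmware, mapped at LUN ID 0
# (assumes an igroup named vm containing your ESX initiators already exists)
lun create -s 8t -t vmware /vol/vol1/vms5a-0
lun map /vol/vol1/vms5a-0 vm 0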
This is mostly what you have, except that we also have no_atime_update=on. My understanding is that it keeps the last-access timestamp on the LUN from being updated every time the LUN is accessed, which cuts out some unnecessary write I/O.
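If you want to try that option yourself, it can be set on the fly; vol1 below is just a stand-in for whatever your volume is called:

vol options vol1 no_atime_update on

Afterwards, vol status should list it in the Options column, as in the output above.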
If you have one LUN per volume, make sure the volume guarantee (guarantee=volume in vol status) isn't showing as disabled. If it is, the LUN can end up consuming more space than the volume can actually back with real storage. I have had this happen, and it was unfortunate.
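Checking this is quick, and (if I remember right) the guarantee can be re-enabled on the fly as long as the aggregate still has enough free space to honor it. Again, vol1 is a placeholder for your volume name:

vol options vol1
vol options vol1 guarantee volume

The first command lists the current options so you can see the guarantee setting; the second puts it back to a full volume guarantee.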