I'm getting unexpectedly low transactions per second on GCE's "Local SSD" option compared to SSD Persistent Disk, using simple "pgbench" tests:
# With Local SSD
# /dev/mapper/vg0-data on /data type xfs (rw,noexec,noatime,attr2,inode64,noquota)
pg-dev-002:~$ pgbench -c 8 -j 2 -T 60 -U postgres
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 2
duration: 60 s
number of transactions actually processed: 10765
tps = 179.287875 (including connections establishing)
tps = 179.322407 (excluding connections establishing)
# With SSD Persistent Disk
# /dev/mapper/vg1-data on /data1 type xfs (rw,noexec,noatime,attr2,inode64,noquota)
pg-dev-002:/data$ pgbench -c 8 -j 2 -T 60 -U postgres
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 2
duration: 60 s
number of transactions actually processed: 62457
tps = 1040.806664 (including connections establishing)
tps = 1041.012782 (excluding connections establishing)
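For reference, both databases were initialized at the default scale (which is where the "scaling factor: 1" above comes from), i.e. something equivalent to:

# initialize the pgbench tables at the default scale (scaling factor 1)
pgbench -i -s 1 -U postgres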
"fio" benchmarks show the advertised IOPS and throughput for Local SSD. However, executing "pg_test_fsync" on a Local SSD mount leads me to believe fsync latency is the culprit. The Local SSD numbers are after applying Google's IRQ script here:
# Local SSD
open_datasync 319.738 ops/sec 3128 usecs/op
fdatasync 321.963 ops/sec 3106 usecs/op
# Persistent SSD
open_datasync 1570.305 ops/sec 637 usecs/op
fdatasync 1561.469 ops/sec 640 usecs/op
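For completeness, the fio and pg_test_fsync runs were along these lines; the file names and sizes here are illustrative rather than the exact values used:

# random-write IOPS on the Local SSD mount
fio --name=randwrite --filename=/data/fio-test --size=10G --runtime=60 --time_based \
    --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --rw=randwrite

# commit-style latency: 8k sequential writes with an fdatasync after each write
fio --name=walwrite --filename=/data/fio-wal --size=1G --runtime=60 --time_based \
    --bs=8k --iodepth=1 --rw=write --fdatasync=1

# fsync method latency on the same mount (source of the numbers above)
pg_test_fsync -f /data/pg_test_fsync.out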
- Tested with Ubuntu 14.04 and Debian 7 images
- Instance type: n1-highmem-4
- Mount options are identical for both volume types
I haven't found anything documented about fsync limitations on Local SSD, but I'm not sure where else to check or what else to test.
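One further check I can run, purely to confirm that commit-time fsync latency is what limits pgbench on the Local SSD, is to repeat the test with synchronous_commit disabled for just the benchmark session (a diagnostic only, since it trades durability for latency):

# same pgbench run, but with synchronous_commit off for this session only
PGOPTIONS="-c synchronous_commit=off" pgbench -c 8 -j 2 -T 60 -U postgres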