
Summary: insert/delete/update transactions are taking 10-15x the time on CentOS 6.3 compared to MacOSX 10.8.2

I'm using SQLite (3.7.12) from Perl (DBD::SQLite 1.37). My application has a number of places where it does multiple writes (deletes, updates and inserts) within a transaction.
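The pattern in question — many deletes, updates and inserts inside one transaction — can be sketched as follows. The original stack is Perl with DBD::SQLite; this sketch uses Python's built-in `sqlite3` module purely because it's self-contained, and it drives the same SQLite engine. The table and column names are made up for illustration.

```python
import os
import sqlite3
import tempfile

# Hypothetical schema, for illustration only.
db_path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# Group all the writes into one transaction, so SQLite pays for a
# durable sync once at COMMIT rather than once per statement.
with conn:  # opens a transaction; commits on clean exit
    conn.executemany(
        "INSERT INTO items (name) VALUES (?)",
        [(f"item-{i}",) for i in range(1000)],
    )
    conn.execute("UPDATE items SET name = 'renamed' WHERE id = 1")
    conn.execute("DELETE FROM items WHERE id = 2")

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 999: 1000 inserted, 1 deleted
conn.close()
```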

I've been comparing timings between 3 machines:

  • MBP: 2010 MacBook Pro with a regular disk
  • MBA: 2011 MacBook Air with SSD
  • CentOS 6.3 server (AMD Opteron 3250 with 1TB software RAID, 4 cores, 8GB RAM)

The transaction is taking roughly 10x to 15x longer on the CentOS server compared to the MBP and MBA. As expected, the MBA is a bit quicker, as it's got an SSD. If I turn pragma synchronous off, it's nice and fast, as expected.
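For reference, turning `synchronous` off is a one-line pragma; it skips the fsync at COMMIT entirely, which is why it's fast, at the cost of durability if the machine loses power mid-write. A minimal sketch (Python `sqlite3` standing in for DBD::SQLite; the equivalent in Perl is `$dbh->do("PRAGMA synchronous = OFF")`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# OFF (0) disables the sync at commit; FULL (2) is the usual
# durable default. NORMAL (1) is the common middle ground.
conn.execute("PRAGMA synchronous = OFF")
mode = conn.execute("PRAGMA synchronous").fetchone()[0]
print(mode)  # 0
conn.close()
```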

We're running exactly the same test sequence every time, and end up with identical databases. There's very little else (of note) running on the CentOS box at the time the test is running.

Benchmarking low-level disk write performance, the CentOS machine outperforms the others. Where should I look next?
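One caveat worth noting: throughput-oriented benchmarks can look great while commits stay slow, because what a durable SQLite COMMIT pays for is fsync latency, not streaming bandwidth. A rough sketch of measuring that instead (timings will vary by machine, so no expected numbers):

```python
import os
import tempfile
import time

# Measure the cost of small synchronous writes, which is what each
# durable transaction commit resembles far more than bulk streaming I/O.
path = os.path.join(tempfile.mkdtemp(), "bench.dat")
fd = os.open(path, os.O_WRONLY | os.O_CREAT)
n = 50
start = time.perf_counter()
for _ in range(n):
    os.write(fd, b"x" * 4096)
    os.fsync(fd)  # force the write through the disk cache
elapsed = time.perf_counter() - start
os.close(fd)
print(f"{n} fsynced 4 KiB writes in {elapsed:.3f}s")
```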

Elbin
    From continued googling for similar sounding problems, our current best theory is that the difference is down to disk write caches: that they might be enabled on my MBP, but disabled on the CentOS box. Looking into that... – Elbin Mar 09 '13 at 21:03
  • Have ended up looking into this in quite a lot of detail, and posting on Server Fault about ext3: http://serverfault.com/questions/486677/should-we-mount-with-data-writeback-and-barrier-0-on-ext3 – Elbin Mar 20 '13 at 11:55

2 Answers


I would start reducing dependencies.

Try running the test on an in-memory database.

Try running it in straight C to make sure that it's not somehow Perl. I kind of doubt it, but it should be easy to mock up.
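The in-memory suggestion is easy to try: point the connection at `:memory:` (in DBD::SQLite, `dbname=:memory:` in the DSN) and rerun the same write loop. If it's fast on every machine, the disk is the bottleneck. A self-contained sketch using Python's `sqlite3` with a made-up table:

```python
import sqlite3
import time

# Same shape of workload, but against an in-memory database,
# which takes the filesystem and disk out of the picture entirely.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
start = time.perf_counter()
with conn:
    conn.executemany("INSERT INTO t (v) VALUES (?)", [("x",)] * 10000)
elapsed = time.perf_counter() - start
total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)  # 10000
conn.close()
```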

Tom Kerr
  • Thanks for the suggestions. See my comment above: if it turns out not to be the disk write cache, then I'll try a C version of our test. – Elbin Mar 09 '13 at 21:04

The issue turned out to be the way the ext3 filesystem was configured in /etc/fstab.

I ended up doing a lot of experiments and performance testing to better understand this, which I wrote up on Server Fault:

https://serverfault.com/questions/486677/should-we-mount-with-data-writeback-and-barrier-0-on-ext3

In summary, the filesystem was mounted with barrier=1; changing it to barrier=0, combined with data=ordered, gave back the performance we were "missing".
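For illustration, such a mount looks something like the fstab line below. The device, mount point and remaining options are placeholders for whatever the real system uses; and note that `barrier=0` trades some crash safety (writes can be lost or reordered on power failure) for the speed, which is exactly the trade-off discussed in the Server Fault post.

```
# /etc/fstab -- illustrative entry only; device and mount point are placeholders
/dev/md0  /data  ext3  defaults,data=ordered,barrier=0  1 2
```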

Elbin