
Is something like this bad with memcache?

1. GET LIST OF KEYS
2. FOR EACH KEY IN LIST OF KEYS
   - GET DATA

I'm expecting the list of keys to be around ~1000 long.
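In code, it would look roughly like this (a rough sketch using python-memcache; the `all_keys` key and the server address are just placeholders):

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    keys = mc.get('all_keys')      # 1. get the list of ~1000 keys (placeholder key name)
    items = []
    for key in keys:               # 2. one separate GET (and round trip) per key
        items.append(mc.get(key))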

If this is bad, is there a better way to do it? I figured memcache might be fast enough that O(n) lookups like this wouldn't matter much. I would never do this against MySQL, for example.

Thanks.

ensnare
  • Seems like a usual indexed access to me, so nothing bad. As you say, memcached is prepared for this kind of access without much penalty. – Diego Sevilla Oct 22 '10 at 21:23
  • Consider using http://code.google.com/p/moxi/, which will proxy between your app and memcached and can help speed up multiple get operations – Spike Gronim Oct 22 '10 at 21:40

1 Answer


This will be slower than it needs to be, because each request waits for the previous one to complete before the next is sent. If there's any latency at all to the memcache server, this adds up quickly: with just 100 µs of latency (a typical Ethernet round-trip time), 1000 serial lookups take a tenth of a second, which is a long time in many applications.

The correct way to do this is to batch the requests: send many keys to the server at once, then receive all of the responses together, so you pay the round-trip latency once rather than a thousand times.

The python-memcache module has the get_multi method to do this for you.
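A minimal sketch (assuming the python-memcache client, a local server, and the same illustrative `all_keys` key holding the list of keys):

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    keys = mc.get('all_keys')       # placeholder key holding the ~1000 item keys
    values = mc.get_multi(keys)     # one batched round trip; returns a {key: value} dict
    items = [values[k] for k in keys if k in values]

Keys that miss are simply absent from the returned dict, so check for them rather than assuming every key came back.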

Glenn Maynard