I have my own City model (not django-cities-light) with more than 2M records in a MySQL table. Each time I start typing in the autocomplete field, the CPU load of the mysqld process jumps over 200% in htop, so it looks like the script queries the table on every autocomplete request.
I'd like to put the table into memcached to avoid this, and here is what I have so far:
autocomplete_light_registry.py
import autocomplete_light
from django.core.cache import cache, InvalidCacheBackendError

from cities.models import City


def prepare_choices(model):
    key = "%s_autocomplete" % model.__name__.lower()
    try:
        qs = cache.get(key)
        if qs is not None:  # cache hit: return the cached queryset
            return qs
    except InvalidCacheBackendError:
        pass
    qs = model.objects.all()
    cache.set(key, qs, 60 * 60 * 24)  # repopulate on miss or expiry
    return qs


class CityAutocomplete(autocomplete_light.AutocompleteModelBase):
    search_fields = ['city_name']
    choices = prepare_choices(City)


autocomplete_light.register(City, CityAutocomplete)
But it still keeps querying MySQL.
Any suggestions?
UPDATE
I tried to cache the cities queryset in the Django shell, but the process dies with a Segmentation fault message.
>>> from django.core.cache import cache
>>> from cities.models import City
>>> qs = City.objects.all()
>>> qs.count()
2246813
>>> key = 'city_autocomplete'
>>> cache.set(key, qs, 60*60*24)
Segmentation fault
But I was able to put smaller tables into the cache, so I hope this problem can be overcome; the answer is still needed.
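One detail that may matter: cache.set() pickles the value, and pickling a QuerySet forces all of its results to be loaded into memory first, which with ~2.2M rows is presumably what kills the process. An idea I'm considering is caching only lightweight (pk, name) pairs from values_list() instead of the full QuerySet. A minimal, framework-agnostic sketch of what I mean (the fetch_pairs callable and the cache object are placeholders standing in for City.objects.values_list('pk', 'city_name') and Django's cache):

```python
def prepare_choice_pairs(fetch_pairs, cache, key, timeout=60 * 60 * 24):
    """Cache a plain list of (pk, name) tuples instead of a QuerySet.

    fetch_pairs: callable returning an iterable of (pk, name) tuples,
        e.g. lambda: City.objects.values_list('pk', 'city_name')
    cache: any object with .get(key) and .set(key, value, timeout),
        e.g. django.core.cache.cache
    """
    pairs = cache.get(key)
    if pairs is None:  # miss or expired
        pairs = list(fetch_pairs())  # hits the database only on a miss
        cache.set(key, pairs, timeout)
    return pairs
```

Plain tuples pickle far more compactly than full model instances, though memcached's default 1 MB per-item limit might still force splitting the list across several keys.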