I have a program that scrapes a website for data. I want to cache that data instead of re-fetching it if it's only been a few minutes since it was last retrieved. I looked at Beaker, but I'm extremely new to caching and not sure if this is what I need. I also don't really understand what the CacheManager is, or why I only call "cache.get" instead of using both "cache.set" and "cache.get". I have included the script I've been using to test with.
    from beaker.cache import CacheManager
    from beaker.util import parse_cache_config_options
    import sched, time
    from datetime import datetime

    cache_opts = {
        'cache.type': 'file',
        'cache.data_dir': '../Beaker/tmp/cache/data',
        'cache.lock_dir': '../Beaker/tmp/cache/lock'
    }

    cache = CacheManager(**parse_cache_config_options(cache_opts))
    tmpl_cache = cache.get_cache('mytemplate', type='file', expire=5)

    def get_results():
        # do something to retrieve data
        print 'hey'
        data = datetime.now()
        return data

    def get_results2():
        return 'askdjfla;j'

    s = sched.scheduler(time.time, time.sleep)

    def get_time(sc):
        results = tmpl_cache.get(key='gophers', createfunc=get_results)
        results2 = tmpl_cache.get(key='hank', createfunc=get_results2)
        print results, results2
        sc.enter(1, 1, get_time, (sc,))

    s.enter(1, 1, get_time, (s,))
    s.run()
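My working assumption is that `tmpl_cache.get(key=..., createfunc=...)` rolls "set" and "get" into one call: if the key is missing or the entry is older than `expire`, it calls `createfunc`, stores the result, and returns it; otherwise it returns the cached value without calling `createfunc` at all. Here is a minimal plain-Python sketch of what I think that pattern does (the `TimedCache` class is just my illustration, not Beaker's actual implementation):

```python
import time

class TimedCache(object):
    """Toy sketch of an expiring get-or-create cache,
    mimicking what I believe tmpl_cache.get(key, createfunc=...) does."""
    def __init__(self, expire):
        self.expire = expire          # seconds before an entry goes stale
        self._store = {}              # key -> (timestamp, value)

    def get(self, key, createfunc):
        entry = self._store.get(key)
        now = time.time()
        if entry is not None and now - entry[0] < self.expire:
            return entry[1]           # still fresh: serve from cache
        value = createfunc()          # miss or stale: rebuild and store
        self._store[key] = (now, value)
        return value

calls = []
def fetch():
    calls.append(1)                   # count how often we really "scrape"
    return 'data'

cache = TimedCache(expire=300)        # keep results for five minutes
cache.get('page', fetch)              # first call: fetch() runs
cache.get('page', fetch)              # second call: served from cache
# len(calls) == 1
```

Is that roughly what Beaker is doing for me under the hood?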
Am I going about this the right way?