I've run into a problem with a small SQLite + Flask Python app I've built. The app serves JSON from several endpoints, never more than 150 KB at a time. After a period of inactivity, a query that normally returns in under a second takes 10-25 seconds, and other requests to the app are blocked for the duration. The app only ever reads the SQLite database; it never writes to it.
So far I've tried:
Creating a singleton to keep a single open connection to the database, so not every query will have to re-open the connection.
Putting the database file itself on an in-memory filesystem to speed up access time.
Increasing memory on the virtual private server hosting the app (there's lots of free memory now, and nothing is swapping to disk).
Loading the contents of the sqlite database into an in-memory sqlite database and reading from that on every query.
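For reference, the in-memory copy in that last step was done roughly like this, using iterdump() (a sketch; on Python 3.7+, Connection.backup() does the same thing more efficiently):

```python
import sqlite3

def load_into_memory(path):
    """Copy an on-disk SQLite database into an in-memory connection."""
    disk = sqlite3.connect(path)
    mem = sqlite3.connect(':memory:')
    # iterdump() yields the SQL statements that recreate the whole database
    mem.executescript('\n'.join(disk.iterdump()))
    disk.close()
    return mem
```
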
None of this has resolved the issue, and I'm running out of options. I'd like to stick with SQLite if possible, but if it can't be fixed I'll refactor the app to use Postgres.
Here is the Flask code that reads the database:
import sqlite3 as lite
from flask import request, jsonify

@app.route('/filings/<int:year>', methods=['GET'])
@crossdomain(origin='*')
def getFilings(year):
    if request.method == 'GET':
        # Build the column list from the model's column names
        q = 'SELECT ' + ','.join(Filings.columns)
        q = q + ' FROM filings'
        q = q + " WHERE filings.date LIKE '" + str(year) + "%'"
        print "about to execute sql query:"
        print "  " + q
        con = lite.connect('/mnt/memfs/filings.db')
        cur = con.cursor()
        cur.execute(q)
        rows = cur.fetchall()
        # Convert each row tuple into a dict keyed by column name
        jsonrt = []
        for row in rows:
            d = {}
            for x, c in enumerate(Filings.columns):
                d[c] = row[x]
            jsonrt.append(d)
        con.close()
        return jsonify(items=jsonrt)
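For completeness, the query-building and row-to-dict conversion could be collapsed using sqlite3.Row and a bound parameter (a sketch; it doesn't address the stall, but it avoids hand-assembling SQL strings):

```python
import sqlite3

def fetch_filings(con, year):
    """Return all filings for a given year as a list of dicts."""
    con.row_factory = sqlite3.Row  # rows become addressable by column name
    cur = con.execute(
        "SELECT * FROM filings WHERE date LIKE ?",
        (str(year) + '%',)         # bound parameter instead of string concatenation
    )
    return [dict(row) for row in cur]
```
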