I'm using Python to generate reports from a MySQL database. The report retrieves data from a single table. There is an index on f_id. In the report I'm filtering by various combinations of col1-col5 and displaying the count of records. Simplified it looks something like this:
def get_query_result(f_id, col1, col2, col3, col4, col5):
    mycursor = mydb.cursor()
    sql = ("SELECT COUNT(*) "
           "FROM mytable "
           "WHERE f_id = %s "
           "AND col1 = %s "
           "AND col2 = %s "
           "AND col3 = %s "
           "AND col4 = %s "
           "AND col5 = %s "
           )
    val = (f_id, col1, col2, col3, col4, col5)
    mycursor.execute(sql, val)
    myresult = mycursor.fetchall()
    return myresult[0][0]
print(get_query_result(123, 'a', 'a', 'a', 'x', 1))
print(get_query_result(123, 'a', 'b', 'a', 'x', 1))
print(get_query_result(123, 'a', 'a', 'b', 'x', 1))
print(get_query_result(123, 'a', 'a', 'a', 'z', 1))
print(get_query_result(123, 'a', 'a', 'a', 'z', 2))
print(get_query_result(123, 'a', 'a', 'a', 'z', 3))
print(get_query_result(123, 'a', 'a', 'a', 'z', 4))
...
There are about 100 numbers to calculate, which results in 100 individual queries. There is no particular pattern in the combinations of values.
I'm thinking there should be a better way than generating 100 unique SQL queries: instead, query the database once and then perform the calculations in memory.
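What I'm picturing is roughly this (untested sketch; `build_counts` is just an illustrative name): fetch all rows for the f_id once, then look each count up from a dict keyed on the column tuple.

```python
from collections import Counter

# Fetch every row for the f_id once, then serve each count from a
# Counter keyed on the (col1, ..., col5) tuple. The fetch itself
# would be something like:
#   mycursor.execute("SELECT col1, col2, col3, col4, col5 "
#                    "FROM mytable WHERE f_id = %s", (f_id,))
#   counts = build_counts(mycursor.fetchall())
def build_counts(rows):
    """rows: iterable of (col1, col2, col3, col4, col5) tuples."""
    return Counter(map(tuple, rows))

counts = build_counts([('a', 'a', 'a', 'x', 1),
                       ('a', 'a', 'a', 'x', 1),
                       ('a', 'b', 'a', 'x', 1)])
# Each report line becomes a dict lookup instead of a round trip;
# Counter returns 0 for combinations that never occur.
n = counts[('a', 'a', 'a', 'x', 1)]
```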
I was looking into pandas, but just importing pandas takes 1 second, which is way too slow.
ColdFusion used to have a neat feature where you could query the database once and then further query against that query result using regular SQL.
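Something similar seems possible in Python with the standard-library sqlite3 module: load the fetched rows into an in-memory SQLite table and run regular SQL against that. This is only an untested sketch of the idea, assuming the rows for one f_id fit comfortably in memory:

```python
import sqlite3

# Copy the rows fetched from MySQL into an in-memory SQLite table,
# then query that table locally -- similar in spirit to ColdFusion's
# query-of-queries feature.
def make_local_db(rows):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE mytable (col1, col2, col3, col4, col5)")
    con.executemany("INSERT INTO mytable VALUES (?, ?, ?, ?, ?)", rows)
    return con

# rows would come from a single fetchall() against MySQL.
rows = [('a', 'a', 'a', 'x', 1), ('a', 'b', 'a', 'x', 1)]
con = make_local_db(rows)
n = con.execute(
    "SELECT COUNT(*) FROM mytable "
    "WHERE col1 = ? AND col2 = ? AND col3 = ? AND col4 = ? AND col5 = ?",
    ('a', 'a', 'a', 'x', 1)
).fetchone()[0]
```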