I have a MySQL database with a MEMORY table containing 200k+ rows. I use this database to test trading strategies, so it is queried repeatedly, ad nauseam. No new data is ever added.
One column in the database's primary table is "Time of day". At each query, my stored procedure selects the rows whose "Time of day" falls at most 2 hours before or 2 hours after the query's input "Time of day".
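For concreteness, here is a minimal self-contained sketch of this kind of windowed filter. It uses SQLite in place of MySQL, a hypothetical `time_of_day` column stored as minutes since midnight, and dummy data; it also handles the case where the window wraps past midnight.

```python
import sqlite3

# Hypothetical schema: time_of_day stored as minutes since midnight.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticks (time_of_day INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO ticks VALUES (?, ?)",
    [(m, 100.0) for m in range(0, 24 * 60, 5)],  # one row per 5-minute slot
)

def window_rows(query_minutes, half_window=120):
    """Rows whose time_of_day lies within +/- half_window minutes."""
    lo = (query_minutes - half_window) % 1440
    hi = (query_minutes + half_window) % 1440
    if lo <= hi:
        sql = "SELECT * FROM ticks WHERE time_of_day BETWEEN ? AND ?"
    else:
        # The window crosses midnight, so it splits into two ranges.
        sql = "SELECT * FROM ticks WHERE time_of_day >= ? OR time_of_day <= ?"
    return conn.execute(sql, (lo, hi)).fetchall()

print(len(window_rows(600)))  # 10:00 -> 49 five-minute slots from 08:00 to 12:00
```

Note that `BETWEEN` is inclusive at both ends, which is why a 4-hour window covers 49 slots rather than 48.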
Since the data in my database has a 5-minute resolution, there are a total of 288 possible "Times of day" (24 * 60 / 5 = 288). This means that each query selects roughly 1/6 of the table (a 4-hour window out of a 24-hour day).
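The arithmetic can be checked directly (assuming the +/- 2 hour window is inclusive at both ends):

```python
slots_per_day = 24 * 60 // 5      # 288 distinct "Times of day"
window_slots = 2 * 120 // 5 + 1   # inclusive +/- 2 hours -> 49 slots
fraction = window_slots / slots_per_day
print(slots_per_day, window_slots, round(fraction, 2))  # 288 49 0.17
```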
Furthermore, this means there is a huge overlap between any query A and the query B that immediately follows it: two queries 5 minutes apart share 48 of their 49 five-minute slots.
At the moment, there is simply one huge table, which gets filtered on every query. This is (way too) slow.

I have been trying to figure out a more optimal setup. I have considered creating 288 pre-filtered tables and simply accessing the relevant one at query time. That, however, takes too much RAM and ends up slowing the queries down instead of speeding them up.
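To illustrate where the RAM cost of that pre-filtered idea comes from, here is a rough in-memory sketch (hypothetical data, not the MySQL setup itself): each row is duplicated into every 5-minute slot whose window contains it.

```python
from collections import defaultdict

# Hypothetical rows: (time_of_day in minutes, price), one per 5-minute slot.
rows = [(m, 100.0 + m) for m in range(0, 1440, 5)]

buckets = defaultdict(list)
for time_of_day, price in rows:
    # A row belongs to every query slot within +/- 2 hours of its own time.
    for offset in range(-120, 125, 5):
        slot = (time_of_day + offset) % 1440
        buckets[slot].append((time_of_day, price))

# A query is now a single dictionary lookup instead of a table scan...
hits = buckets[600]
# ...but every row is stored 49 times, once per bucket that contains it.
print(len(buckets), len(hits))
```

With 200k+ rows, that 49x duplication is what eats the RAM.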
Are there any more optimal solutions?