
I have a table in SQL Server with over a million rows. I just want to create a cache and store the whole table in it. Should I store all million rows there? Is that good practice? If not, how should I proceed?

EDIT: We are not writing anything to this table. It's for read purposes only.

James Z
Amit Sinha
  • The database engine will cache automatically. Why do you think you need to manage or supplement this? Alternatively, is this an application-level question? If so, the source of your data is irrelevant and you should remove the sql-server tag (and add those relevant to your application development environment). – SMor Nov 22 '18 at 13:40
  • Like SMor said, SQL Server will keep rows in the buffer pool (= memory) if you have enough memory to hold them and you don't update them. If you need faster access to the data for certain queries, create suitable indexes. Covering indexes will be even faster. – James Z Nov 22 '18 at 16:52

1 Answer


Contrary to the comments suggesting that SQL Server will keep the records in the buffer, I have to say that this is not, by itself, a solution to improve performance. You cannot count on the SQL Server cache because

it caches only:

  • query plans
  • pages from the database files

but it does NOT cache the results of a query. See this post.
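You can see the page-level caching for yourself by asking SQL Server how many pages of your database currently sit in the buffer pool. A minimal sketch in Python, assuming the pyodbc package and an ODBC driver are installed (the connection string is a placeholder, and the DMV needs VIEW SERVER STATE permission):

```python
import pyodbc

# Placeholder connection string; adjust server/database/auth for your environment.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")

conn = pyodbc.connect(CONN_STR)
try:
    # Count the 8 KB pages of the current database held in the buffer pool.
    row = conn.execute(
        "SELECT COUNT(*) AS cached_pages, COUNT(*) * 8 / 1024 AS cached_mb "
        "FROM sys.dm_os_buffer_descriptors WHERE database_id = DB_ID()"
    ).fetchone()
    print(f"{row.cached_pages} pages (~{row.cached_mb} MB) in the buffer pool")
finally:
    conn.close()
```

On a cold cache, run the same SELECT over your table twice and you will see the page count jump after the first run: the pages are cached, but the query result itself is still recomputed every time.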

Now let's discuss your questions:

Putting data into a cache needs a strategy that depends on many factors: How frequently are these 1 million records fetched? Are they fetched in one big batch, or do you have queries that fetch only a part of them?
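Since your table is read-only, the simplest strategy is often to load it once at startup and index it by key in process memory. A rough sketch in Python, assuming pyodbc and hypothetical table/column names (dbo.Products with an Id primary key):

```python
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")

def load_table():
    """Fetch the whole read-only table once and index it by primary key."""
    conn = pyodbc.connect(CONN_STR)
    try:
        rows = conn.execute("SELECT Id, Name, Price FROM dbo.Products").fetchall()
    finally:
        conn.close()
    return {row.Id: row for row in rows}  # dict gives O(1) lookups afterwards

cache = load_table()

def get_product(product_id):
    # Every read after startup is served from memory, never from the database.
    return cache.get(product_id)
```

A million small rows typically costs on the order of a few hundred megabytes of heap, depending on row width; if your queries only ever touch a small slice of the table, caching per-query results instead may be the better trade.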

Then you need a strategy for invalidation, and you need to answer these questions:

Is it OK to cache the data for a couple of hours?

In general, is it OK to invalidate based on time, or do you need another invalidation strategy?

And where should your cache live: in heap memory or in a distributed cache?
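For time-based invalidation, the two placements could look like this. A sketch, assuming the cachetools and redis Python packages, a one-hour TTL, and a hypothetical products: key scheme; fetch_from_db stands in for whatever query loads one row:

```python
import json

import redis                      # pip install redis
from cachetools import TTLCache   # pip install cachetools

# Option 1: in-heap cache; entries silently expire one hour after insertion.
local_cache = TTLCache(maxsize=1_000_000, ttl=3600)

def get_row_local(row_id, fetch_from_db):
    if row_id not in local_cache:          # expired entries count as absent
        local_cache[row_id] = fetch_from_db(row_id)
    return local_cache[row_id]

# Option 2: distributed cache shared by all application servers.
client = redis.Redis(host="localhost", port=6379)

def get_row_distributed(row_id, fetch_from_db):
    key = f"products:{row_id}"             # hypothetical key scheme
    cached = client.get(key)
    if cached is not None:
        return json.loads(cached)
    row = fetch_from_db(row_id)
    client.setex(key, 3600, json.dumps(row))  # store with a one-hour TTL
    return row
```

The in-heap option is the fastest but is lost on every restart and duplicated on every server; Redis survives restarts and is shared across servers, at the cost of a network hop and serialization.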

Meysam