
We have calculated the initial database size for this implementation to be 2TB, rising by 1TB per year. How much memory would you recommend for the server?

It's a business application with reads and writes across 100 tables. The server won't be upgraded for 3 years.

75% of the data will be historic, and the workload is around 200 transactions per second.
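
(For reference, the raw numbers work out as follows; treating the 25% non-historic share as the hot set that should fit in RAM is an assumption, not a given.)

    # Back-of-the-envelope sizing from the figures above.
    initial_size_tb = 2.0     # stated initial database size
    growth_tb_per_year = 1.0  # stated annual growth
    years = 3                 # stated upgrade horizon
    hot_fraction = 0.25       # 75% historic => ~25% active (assumption)

    size_at_horizon = initial_size_tb + growth_tb_per_year * years  # 5.0 TB
    hot_set_tb = size_at_horizon * hot_fraction                     # 1.25 TB

    print(f"Size after {years} years: {size_at_horizon:.2f} TB")
    print(f"Estimated hot set to cache in RAM: {hot_set_tb:.2f} TB")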

asked by JonAlb · edited by sysadmin1138
  • Impossible to say, as the storage requirements don't tell anything about the data access pattern. If you just add records without ever querying them, you don't need a lot, but if you do large queries over all tables, you can't get enough. – Sven Mar 16 '12 at 14:05
  • How big is your working set going to be? How big will it be in 5 years? And how often do you want to upgrade your server? –  Mar 16 '12 at 14:08
  • It's a business application, reads and writes across a large number of tables. The server won't be upgraded for 3 years. – JonAlb Mar 16 '12 at 14:08
  • Can you please provide answers that are not ridiculous and nonsensical? Who cares how many tables you have and that you read or write? The write patterns are what's relevant: the number of transactions, the hot set (how much data is active most of the time, how much is history that is only requested rarely), and so on. – TomTom Mar 16 '12 at 14:10
  • 75% of the data will be historic, with 200 transactions per second. I will provide as much information as required. I gave the table count because someone else alluded to it being helpful. This is my first time on Server Fault; it seems a little more negative/less helpful than Stack Overflow. Please let me know what other information would help improve this question. – JonAlb Mar 16 '12 at 14:12
  • @JonAlb When we're given relevant information, we're very helpful. The problem with this question is that it boils down to "size my app for me" which is impossible to do through a SE site. It requires a lot of benchmarking, extrapolating, and intimate knowledge of your configuration. That said, "as much as you can fit in the box" is usually the best answer to "How much RAM does my DB server need" – MDMarra Mar 16 '12 at 14:50

1 Answer


I would recommend as much memory as you can afford.

Then look into sharding the database over multiple servers, as it sounds like you're going to have scaling problems with that amount of data.
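
If sharding turns out to be the direction, the routing itself can stay simple. A minimal sketch of hash-based shard routing, where the shard hostnames and the choice of customer_id as the shard key are hypothetical:

    import hashlib

    # Hypothetical shard hosts; adding one later means re-mapping keys,
    # which is why consistent hashing is often used instead.
    SHARDS = ["db-shard-0.example.com", "db-shard-1.example.com",
              "db-shard-2.example.com", "db-shard-3.example.com"]

    def shard_for(customer_id: str) -> str:
        """Deterministically map a shard key to one database server."""
        digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for("customer-42"))  # the same key always lands on the same shard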

answered by Bart Silverstrim
  • I agree, scaling is a concern for us. We are putting effort into optimizing the application layer. This is to run a larger implementation of an existing system, and we need to make a recommendation to the client. Any ideas on ballpark figures? – JonAlb Mar 16 '12 at 14:20
  • Without stress testing, it's hard to say. Your best bet really is to max out the memory as much as possible: get a server, load a copy of the database onto it, and test it to see if it *gives acceptable performance for their use*. Personally, I'd expect the best performance from a terabyte of memory, so it can keep most of the database in working memory; it depends on how much of the DB is actively cached (see the sketch after these comments). – Bart Silverstrim Mar 16 '12 at 14:24
  • If you can run it entirely in memory, that's perfect. But with a terabyte adding on each year... I'd think you're going to have scaling issues. You'd want to look at a series of smaller servers with sensible sharding and a caching server up front to deal with the load. – Bart Silverstrim Mar 16 '12 at 14:25
  • Yes, that sounds sensible, and it's very possible with our current code base. Thanks for your help in a sea of downvotes. – JonAlb Mar 16 '12 at 14:36
  • The downvotes were probably because this is very specific to your situation; I empathize, but it's really impossible to give a canonical answer. You can have a huge database where, given your client's practical use, 25 GB of memory would work; or they could hammer it hard and you'd need most of the database in working memory to keep it from crawling. It's hard to tell without actual testing. – Bart Silverstrim Mar 16 '12 at 14:50
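
A minimal sketch of the kind of check Bart describes, assuming a PostgreSQL backend (the connection parameters are placeholders; other engines expose similar counters). After running a representative load against a copy of the database, a low buffer-cache hit ratio suggests the hot set does not fit in the memory you've given it:

    import psycopg2  # assumption: PostgreSQL; adapt for your actual engine

    # Connection parameters are placeholders for illustration.
    conn = psycopg2.connect("dbname=app host=localhost user=app")
    with conn.cursor() as cur:
        # pg_statio_user_tables tracks blocks served from cache vs. read from disk.
        cur.execute("""
            SELECT sum(heap_blks_hit), sum(heap_blks_read)
            FROM pg_statio_user_tables;
        """)
        hits, reads = cur.fetchone()
    conn.close()

    hits, reads = hits or 0, reads or 0  # sums are NULL before any activity
    ratio = hits / (hits + reads) if (hits + reads) else 0.0
    print(f"Buffer cache hit ratio: {ratio:.2%}")  # low => hot set exceeds RAM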