
I am trying to build an application that uses MySQL at the database level. The main table is currently about 10 MB. However, the client has specified that the application must be able to store about a billion records per year in this table. Extrapolating from the current size, the table would grow by roughly 750 GB per year.
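As a sanity check, those two figures together imply the following (assuming the 10 MB includes the table's indexes and the average row size stays roughly constant):

    750 GB/year ÷ 1,000,000,000 rows/year ≈ 750 bytes per row
    10 MB ÷ 750 bytes/row ≈ 13,000 rows in the main table today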

The question is: how do I tune the database level and the layers above it for this requirement? More specifically:

  • The database layer is MySQL.
  • A GlassFish server runs the application.
  • The application uses JSP pages and JDBC connections to the database.
  • The operations we need to support are a paged list (viewing a portion of the records), adding new records, updating them, and deleting them (see the JDBC sketch after this list).
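For the paged list in particular, classic OFFSET/LIMIT paging degrades badly once a table holds hundreds of millions of rows, because MySQL still has to scan and discard all the skipped rows. A keyset ("seek") approach avoids that by paging on the primary key. The sketch below only illustrates the idea over plain JDBC; the table name `records`, its columns, and the connection details are placeholders, not part of the actual schema:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RecordPager {

    // Fetch the next page: rows with an id greater than the last id already shown,
    // ordered by the primary key so MySQL can serve the page straight from the index.
    public static long fetchNextPage(Connection conn, long lastSeenId, int pageSize)
            throws SQLException {
        String sql = "SELECT id, payload FROM records WHERE id > ? ORDER BY id LIMIT ?";
        long lastId = lastSeenId;
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, lastSeenId);
            ps.setInt(2, pageSize);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    lastId = rs.getLong("id");
                    System.out.println(lastId + " " + rs.getString("payload"));
                }
            }
        }
        return lastId; // pass this back in to fetch the following page
    }

    public static void main(String[] args) throws SQLException {
        // In the real application the connection would come from a GlassFish-managed
        // DataSource (JNDI) with a connection pool, not from DriverManager.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/appdb", "user", "password")) {
            fetchNextPage(conn, 0L, 50); // first page of 50 rows
        }
    }
}
```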

So what can I do at each level in order to improve performance?

SPIRiT_1984
  • For scalable applications, you never use monolithic data storage. In other words, you split the data and use the application (Java) to connect the dots. Also, this is not a programming problem whatsoever. You should hire an expert to tell you what to do. – N.B. Nov 11 '14 at 10:43
  • You might want to ask the tuning question on http://dba.stackexchange.com/ – gbjbaanb Nov 11 '14 at 10:44
  • It depends on how many columns there are, how they are used, and how the records are used (much archiving, many wild generic queries, batch processing). Collect the exact expectations and sample queries. Maybe you can tell us a bit more. If the data grows 700 GB a year, it looks more like archiving, a document management system, or extracting metadata. – Joop Eggen Nov 11 '14 at 10:57

0 Answers