0

I am working on an application in C# .NET; my backend is SQL Server 2005.

I am developing a point-of-sale form, and my tables tblInvoice and tblInvoiceDetail may grow by 10,000 rows per day.

This is a heavy volume of data, so please guide me on what will happen once 6 months of data have accumulated in a table.

Does this affect the speed of data retrieval? If yes, what should we do?

Please help me with this.

  • 2
    Read up on indexes, http://msdn.microsoft.com/en-us/library/ff650692.aspx –  Sep 14 '11 at 08:25
  • 4
    A million rows?? That's **nothing** for SQL Server - if you have the proper indices in place. Also: if you have this kind of data, you might want to look into [SQL Server table and index partitioning](http://msdn.microsoft.com/en-us/library/ms345146%28v=sql.90%29.aspx) – marc_s Sep 14 '11 at 08:26
  • 3
    10k records per day isn't very high load. As long as your database is indexed and partitioned properly, and your queries are profiled and optimal, you should be able to handle many years' worth of data. – StuartLC Sep 14 '11 at 08:26
  • How many records can a table contain? Is there any limit in SQL Server? –  Sep 14 '11 at 09:20
  • @Haider Ali - the size of a SQL Server table is limited only by your storage - see http://msdn.microsoft.com/en-us/library/ms143432.aspx - however, that's not to say that queries against the table will perform well if you allow it to grow to many billions of rows – Ed Harper Sep 14 '11 at 12:57
  • From http://msdn.microsoft.com/en-us/library/ms143432.aspx : Database size 524,272 terabytes; Rows per table, Limited by available storage – bhamby Sep 14 '11 at 12:59
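The table and index partitioning the comments link to can be sketched in T-SQL. This is only an illustration under assumed names: the partition function, scheme, and an InvoiceDate column are all hypothetical, and the boundary dates would need to match your own data.

```sql
-- Sketch of monthly partitioning (SQL Server 2005 Enterprise feature).
-- pfInvoiceByMonth and psInvoiceByMonth are made-up names; InvoiceDate
-- is an assumed column on tblInvoice.
CREATE PARTITION FUNCTION pfInvoiceByMonth (datetime)
    AS RANGE RIGHT FOR VALUES ('2011-07-01', '2011-08-01', '2011-09-01');

CREATE PARTITION SCHEME psInvoiceByMonth
    AS PARTITION pfInvoiceByMonth ALL TO ([PRIMARY]);

-- The table is then created on the scheme instead of a filegroup:
-- CREATE TABLE dbo.tblInvoice (...) ON psInvoiceByMonth (InvoiceDate);
```

Each month's rows then live in their own partition, so date-range queries and maintenance touch only the partitions they need.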

3 Answers

1

10,000 records per day, even spread out over a mere 8-hour work day, only amounts to one record every 2.8 seconds. That's just not a lot of volume. If you have good indexes on the table, retrieval will not be a problem, either. So don't worry about it just yet. SQL Server is well able to handle this kind of volume.
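A "good index" for this workload might look like the following sketch. The column names (InvoiceDate, CustomerID, TotalAmount, InvoiceID) are assumptions about the schema, not taken from the question.

```sql
-- A nonclustered index on the date column supports the typical
-- point-of-sale query "all invoices in a date range"; the INCLUDE
-- columns (assumed names) let such queries be answered from the
-- index alone.
CREATE NONCLUSTERED INDEX IX_tblInvoice_InvoiceDate
    ON dbo.tblInvoice (InvoiceDate)
    INCLUDE (CustomerID, TotalAmount);

-- Index the assumed foreign key on the detail table so joins
-- back to the invoice header stay fast as both tables grow.
CREATE NONCLUSTERED INDEX IX_tblInvoiceDetail_InvoiceID
    ON dbo.tblInvoiceDetail (InvoiceID);
```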

Joel Coel
  • 12,932
  • 14
  • 62
  • 100
0

There will be no problem with data retrieval if you have good indices. Your problem will be one of scalability.

Adding or removing a column or index will take longer, backups will need more space and much more time, and any unoptimized query will be painfully slow, etc.
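One common way a query ends up "unoptimized" at this scale is a non-sargable predicate. A sketch, again assuming a hypothetical InvoiceDate column:

```sql
-- Slow at scale: wrapping the column in functions defeats any
-- index on InvoiceDate and forces a full scan.
SELECT * FROM dbo.tblInvoice
WHERE YEAR(InvoiceDate) = 2011 AND MONTH(InvoiceDate) = 9;

-- Index-friendly rewrite: a sargable range predicate on the
-- bare column can seek into an index instead of scanning.
SELECT * FROM dbo.tblInvoice
WHERE InvoiceDate >= '2011-09-01' AND InvoiceDate < '2011-10-01';
```

On a small table both run instantly; after a few years of rows, only the second stays fast.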

Yousf
  • 229
  • 1
  • 5
  • 13
0

You should provide a method of archiving data older than X into another table, or set of tables, together with a search function capable of searching all records, not just active ones. That way, if and when the table grows too large, an elegant solution is already in place.
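That archiving pattern can be sketched as below. The archive table, the 6-month cutoff, and the InvoiceDate column are all assumptions for illustration; the archive table is presumed to have the same column layout as tblInvoice.

```sql
-- Move rows older than 6 months into a hypothetical archive table,
-- inside one transaction so no row is lost between the two steps.
BEGIN TRANSACTION;

INSERT INTO dbo.tblInvoiceArchive
SELECT * FROM dbo.tblInvoice
WHERE InvoiceDate < DATEADD(MONTH, -6, GETDATE());

DELETE FROM dbo.tblInvoice
WHERE InvoiceDate < DATEADD(MONTH, -6, GETDATE());

COMMIT TRANSACTION;
GO

-- A view gives the "search all records" function one target
-- covering both active and archived rows.
CREATE VIEW dbo.vwAllInvoices AS
    SELECT * FROM dbo.tblInvoice
    UNION ALL
    SELECT * FROM dbo.tblInvoiceArchive;
```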

tkrabec
  • 300
  • 1
  • 8