
I have SQL Server 2008 Standard Edition.

Some of our tables contain around 2 million rows of data. We are using an M$ Access front end (horrible, I know, but it's too big to rewrite at the moment).

I want to increase performance and speed as we are starting to see a dip in our Access performance.

I have looked into partitioned tables, which seemed perfect, but they're not available in Standard Edition, and as upgrading would cost ££,£££'s it's out of the question.

I could split the database up into separate databases (one for each year) and use partitioned views to access the data, but I'm not sure how much of a performance increase this will give me for the effort.
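For illustration, my understanding is that a partitioned view on Standard Edition would look roughly like the sketch below - the table and column names are made up, the real structure would come from our own schema:

    -- Hypothetical per-year tables; the CHECK constraint on the year column is what
    -- lets the optimizer skip member tables that can't match a query's WHERE clause.
    CREATE TABLE dbo.Matters2012
    (
        MatterID   INT         NOT NULL,
        MatterYear INT         NOT NULL CHECK (MatterYear = 2012),
        ClientRef  VARCHAR(20) NOT NULL,
        CONSTRAINT PK_Matters2012 PRIMARY KEY (MatterID, MatterYear)
    );

    CREATE TABLE dbo.Matters2013
    (
        MatterID   INT         NOT NULL,
        MatterYear INT         NOT NULL CHECK (MatterYear = 2013),
        ClientRef  VARCHAR(20) NOT NULL,
        CONSTRAINT PK_Matters2013 PRIMARY KEY (MatterID, MatterYear)
    );
    GO

    -- The partitioned view itself is just a UNION ALL over the per-year tables.
    CREATE VIEW dbo.Matters
    AS
    SELECT MatterID, MatterYear, ClientRef FROM dbo.Matters2012
    UNION ALL
    SELECT MatterID, MatterYear, ClientRef FROM dbo.Matters2013;
    GO

    -- A query that filters on the year should only touch the relevant member table.
    SELECT MatterID, ClientRef
    FROM dbo.Matters
    WHERE MatterYear = 2013;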

We already rebuild the indexes every night, so that is ok.

Any ideas or suggestions?

Partitioned views are the main thing that looks like it could help, but I'm unsure of the real gains.

Thanks

marc_s
adam Kearsley
  • Have you tried any indexing at all? – Adriaan Stander Sep 13 '13 at 10:51
  • If you read my question properly you will see my comment: We already rebuild the indexes every night, so that is ok. – adam Kearsley Sep 13 '13 at 10:53
  • This question is too broad. What is your specific problem? Storing is slow? Viewing is slow? Reports are slow? All SQL queries are slow? – MicSim Sep 13 '13 at 10:53
  • The majority of the queries are not too bad, and of course Access isn't the quickest at displaying data from a SQL server, but recently we are starting to see Access crash more often and some of the searches can be painfully slow. I am looking for a method and best practice to avoid our tables growing too big, and a nice way to keep everything optimized. – adam Kearsley Sep 13 '13 at 10:55
  • Does anyone know if splitting the database up yearly and using partitioned views will help increase performance? I've heard mixed opinions. – adam Kearsley Sep 13 '13 at 10:57
  • What query is slow? (Include some info about the data layout.) How long did it take once, and how long does it take now? Also include the execution plan for that query. 2 million rows should not be any problem; I'm pretty sure an index will give you sub-second response times again. Also, do you use an ADP or do you connect with ODBC? ADP is much preferred; then Access is actually a very good tool to create LOB applications. – gjvdkamp Sep 13 '13 at 11:00
  • Why are you so fixated on splitting up the table? Quite possibly, a few useful indexes and a bit of query rewriting might do. You need to determine which queries are the worst performers - do some server-side tracing, see which queries consume the most time and I/O (see the sketch after these comments). Then deal with those - see if an index might help, add one, measure again. Maybe you can rewrite the query to fetch less data? – marc_s Sep 13 '13 at 11:38
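As a rough way to see which cached queries consume the most time and I/O without setting up a full server-side trace, the plan cache DMVs can be queried directly. This is only a sketch - the TOP count and the ordering column are arbitrary choices:

    -- Most expensive cached statements by logical reads (SQL Server 2005/2008+).
    SELECT TOP (20)
        qs.execution_count,
        qs.total_worker_time  / qs.execution_count AS avg_cpu_time,
        qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
        SUBSTRING(st.text,
                  (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;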

2 Answers


Firstly, you could archive off data that is no longer required, if this is an option.

For example, every day I used to archive off data older than 3 years from certain tables, as the production system did not need to refer to data older than that. This kept table sizes manageable for full index rebuilds etc.
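A sketch of the kind of archiving job I mean - the table and column names are invented and the batch size is just an example; the point is to move old rows out in small chunks so log growth and locking stay under control, and the archive table must already exist with a matching structure:

    -- Move rows older than the cut-off into an archive table in batches.
    -- dbo.AuditLog / dbo.AuditLogArchive / LoggedAt are placeholder names.
    DECLARE @CutOff DATETIME = DATEADD(YEAR, -3, GETDATE());
    DECLARE @Moved  INT = 1;

    WHILE @Moved > 0
    BEGIN
        BEGIN TRANSACTION;

        DELETE TOP (10000) FROM dbo.AuditLog
        OUTPUT DELETED.* INTO dbo.AuditLogArchive
        WHERE LoggedAt < @CutOff;

        SET @Moved = @@ROWCOUNT;   -- capture the count before COMMIT resets it

        COMMIT TRANSACTION;
    END;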

2 million rows doesn't sound much unless there's a lot of data in each row though (I'm talking above about tables with > 100 million rows).

Secondly, you could evaluate the queries you're using to see if there's any way they could be optimised.

Thirdly, re-evaluate which indexes you do and don't have. You can use SQL Server Profiler and the Database Engine Tuning Advisor to help you build a list of recommended indexes to create (or drop!).
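If you want a quick first pass before running the Tuning Advisor, the missing-index DMVs show what the optimizer itself has been asking for since the last restart - treat the output as hints rather than a shopping list:

    -- Missing-index suggestions recorded by the optimizer, roughly ranked by benefit.
    SELECT TOP (20)
        mid.statement AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.user_seeks,
        migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS rough_benefit
    FROM sys.dm_db_missing_index_details     AS mid
    JOIN sys.dm_db_missing_index_groups      AS mig
        ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
        ON migs.group_handle = mig.index_group_handle
    ORDER BY rough_benefit DESC;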

Gareth
  • Finding a nice cut-off point is hard; we keep ~6 years of data (we are a legal firm) and we still report on older files, or may need to re-open them at a later date. This is why partitioned views could be handy, as we could still access the older records without cutting them off from the live database. But will this help performance? – adam Kearsley Sep 13 '13 at 10:58
  • It will help in some cases, but I'd definitely investigate the indexing before shipping data off to an archive. As GBoehm has said, you can have blazing fast tables with millions of rows of data but you *must* have the right indexes – Gareth Sep 13 '13 at 11:01

Have you tried to use indexes? Have you profiled the workload on your database and searched for costly SELECTs? Normally, once you have those SELECTs, you execute them against the database with the actual execution plan included and look for missing indexes and key lookups. The Database Engine Tuning Advisor (http://technet.microsoft.com/en-us/library/ms173494(v=sql.105).aspx) can also recommend indexes for you.

For index maintenance (rebuilding all indexes every night is definitely not the way to go) you can use this solution: http://ola.hallengren.com/.

As a last step, you can think about changing the hardware to get better performance.

And just to give you a feeling for scale: I work with blazing fast tables with 50 million rows and database sizes exceeding 1 TB.
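To give an idea of what fragmentation-based maintenance looks like (this is only a sketch - Ola Hallengren's scripts do all of this properly), you check fragmentation first and only rebuild or reorganise the indexes that actually need it. The 10%/30% thresholds below are just the commonly quoted defaults:

    -- List indexes in the current database with their fragmentation and a suggested action.
    SELECT
        OBJECT_NAME(ips.object_id)        AS table_name,
        i.name                            AS index_name,
        ips.avg_fragmentation_in_percent,
        CASE
            WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent >= 10 THEN 'REORGANIZE'
            ELSE 'leave alone'
        END AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
    WHERE ips.index_id > 0   -- skip heaps
    ORDER BY ips.avg_fragmentation_in_percent DESC;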

GBoehm
  • I'm pretty new to the SQL Server Database Engine Tuning Advisor, and also to the profiler. I have just run a profile and then the Tuning Advisor, and it found a 92% improvement by adding better indexes. I think I'm onto a win here! Thanks :) – adam Kearsley Sep 13 '13 at 15:01
  • Just keep in mind that the trace should represent a good part of your daily working routine on the database - every aspect (every SELECT) should be in your trace, otherwise you may end up choosing the wrong indexes. Normally you would have one production server where you capture the traces, and one development server where you run the advisor against a recent backup of the production database, since the advisor can slow down your database. – GBoehm Sep 13 '13 at 15:43