
We are moving our back end from MS Access to SQL Server 2008 R2 Express. We have changed our MS Access queries to stored procedures. The data is about 250,000+ records.

We have two tables in the back end of MS Access that can become very large: PartsInventory_BE and Service_BE.

That being said, when we tested roughly 9,000 records with an Access front end talking to our Access back end, it took 20-30 seconds to load one form that pulled all 9,000 records. With that kind of performance, we made a workaround: the front end now contains an identical table called PartsInventory that is continually synchronized with PartsInventory_BE.

Do we need to adopt this same style with SQL Server as the back end in order to maintain/improve performance? If yes, then how? If no, can you also list some references?

faheem khan
  • The real question is: what operation requiring a full 9,000-row record set is being done on the FE that should be on the BE? Unless you are printing a 150-page report, there is no reason for 9,000 rows (= 60 rows/page * 150 pages) to be downloaded to the FE. – Pieter Geerkens Mar 08 '13 at 22:25
  • I have 45,000+ inventory items; I wanted to see one type of inventory (with some detail of it as well). It took 20-30 seconds to load one form that pulled about 9,000 records. – faheem khan Mar 08 '13 at 22:35
  • You can write stored procs to filter the inventory by type on the BACK-END if you want to reduce the amount of network traffic (see the sketch after these comments). But 9,000 records is nothing. We routinely bring 25,000 records across the wire to the client in sub-second time. No need to maintain mirror copies of tables client-side. – Tim Mar 08 '13 at 22:54
  • @faheem: Unless you are actually printing a 150-page report, you are doing aggregations on the FE that should be migrated to the BE along with the data. For migration to work, as you have noticed, all aggregation of any significant data volume should occur on the BE, not the FE. If you fail to heed this advice, you will only end up slowly back-migrating your data to the FE. – Pieter Geerkens Mar 08 '13 at 22:54
  • But be aware, if you are continuing to use Access as the front-end, it is an inefficient tool for true client-server apps, since (IIRC) where-conditions are applied client-side if the query is not pass-through. Queries with joins in Access can create an immense amount of network traffic. – Tim Mar 08 '13 at 23:01
  • The queries in MS Access were transferred to SQL Server as stored procedures. I will use a pass-through query to call the stored procedure. What is IIRC? – faheem khan Mar 08 '13 at 23:12
  • No, filters are not applied client-side. If you have a form bound to a million-row table and open it with a simple and STANDARD where clause on the client side, then ONLY the one record is brought down the network pipe. So no, filters are not applied client-side at all. I suppose if one opens a form bound to a table without a "where" clause then one is asking for trouble, but no one does that these days with large tables. So a where clause supplied to the open-form command is passed correctly to SQL Server, and there is no need for pass-through queries, views, or stored procedures. – Albert D. Kallal Mar 09 '13 at 09:01
  • IIRC = "if I recall correctly". It means that he's not 100% sure whether what he said is correct. – Christian Specht Mar 09 '13 at 12:41
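
A minimal sketch of the back-end filtering Tim describes, assuming a PartType column and hypothetical procedure/parameter names that do not come from the original schema:

    -- Hypothetical back-end filter; procedure, parameter, and column names are assumptions.
    CREATE PROCEDURE dbo.usp_GetPartsByType
        @PartType NVARCHAR(50)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Only the matching rows travel across the network to the Access front end.
        SELECT PartNumber, Description, QtyOnHand, PartType
        FROM   dbo.PartsInventory_BE
        WHERE  PartType = @PartType;
    END;

Bound to the form (or called from a pass-through query, as in the answer below), this keeps the full inventory table on the server and returns only the requested inventory type.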

1 Answer


Keep in mind that Access allows you to write pass-through queries, which send SQL commands to your new back end and simply return the results. You can use a pass-through query like any other table or query in Access.
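
For illustration, the SQL text stored in such a pass-through query can be as simple as executing a back-end procedure; the procedure and parameter below are the hypothetical ones sketched above, not objects taken from the question:

    -- SQL text of the Access pass-through query; its ODBC connection string points
    -- at the SQL Server back end. Procedure name and parameter value are examples only.
    EXEC dbo.usp_GetPartsByType @PartType = N'Hydraulics';

Access sends this text to the server as-is and simply binds the returned rows, so none of the filtering happens client-side.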

If you find that your data takes too long to pull, you may want to give some thought to your indexing scheme. To get information about your columns and indexes:

List of all index & index columns in SQL Server DB
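
One common way to get that information is to query the catalog views; below is a version of such a query, followed by a hedged example of an index that would support the type filter discussed in the comments (column names are assumptions):

    -- List every index and its key columns via the standard catalog views.
    SELECT  t.name  AS TableName,
            i.name  AS IndexName,
            c.name  AS ColumnName,
            ic.key_ordinal
    FROM    sys.indexes i
    JOIN    sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    JOIN    sys.columns c        ON c.object_id  = ic.object_id AND c.column_id = ic.column_id
    JOIN    sys.tables t         ON t.object_id  = i.object_id
    ORDER BY t.name, i.name, ic.key_ordinal;

    -- Hypothetical index to support filtering PartsInventory_BE by type;
    -- the key column and INCLUDE list are assumptions about the schema.
    CREATE NONCLUSTERED INDEX IX_PartsInventory_BE_PartType
        ON dbo.PartsInventory_BE (PartType)
        INCLUDE (PartNumber, Description, QtyOnHand);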

Neil Mussett