I wrote a program for a group that is constantly writing, reading, and deleting rows on our SQL Server, per session, per user. I do not need to worry about what is being written or deleted, as the data written or deleted by one individual will never be needed by another. Each user's writes are separated by a unique ID, and all queries filter on that unique ID.

I want multiple users to be able to write to and delete rows from the same table at the same time. I know I can set up the session so reads are not blocked while data is being written, using SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED.
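Roughly, this is what I mean; the table and column names here are placeholders, not my real schema:

```sql
-- Run once per connection so reads do not block on other sessions' writes.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Every read and write filters on the session's unique ID.
DECLARE @UserId int = 42;   -- example value

SELECT RowId, Payload
FROM dbo.SessionData        -- placeholder table
WHERE UserId = @UserId;     -- the unique ID mentioned above
```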

The question:

That said, can I have multiple users write to and delete from the same table at the same time?

If this is not possible, my only idea currently is to set the tool up to write to temp tables per user session. But I don't think constantly creating and deleting temp tables hundreds of times a day is an efficient option.
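For reference, the temp-table fallback would look something like this (the schema is a placeholder):

```sql
-- A #temp table is private to the session that creates it and is
-- dropped automatically when that session disconnects.
CREATE TABLE #SessionData
(
    RowId   int IDENTITY(1,1) PRIMARY KEY,
    Payload nvarchar(200) NOT NULL
);

INSERT INTO #SessionData (Payload) VALUES (N'example row');
DELETE FROM #SessionData WHERE RowId = 1;
-- No explicit DROP is needed; cleanup happens on disconnect.
```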

Mike - SMT
  • If each user doesn't care about another user's data, why not just use a separate table for each user? If a user needs to read any user's data, you could just `UNION` all the tables' data together. – Ryan Wilson Dec 30 '19 at 17:47
  • @RyanWilson Because I need roughly 20 tables to make the data work, and employees come and go, so creating new tables for each user is not exactly practical. I have been thinking about using all temp tables. – Mike - SMT Dec 30 '19 at 17:49
  • You can always drop tables when an employee leaves the company. Some example table schema would be helpful. `"I need roughly 20 tables to make the data work"` - doesn't say a whole lot. – Ryan Wilson Dec 30 '19 at 17:50
  • @RyanWilson The goal is to set it up in such a way that no work is needed on my part once the tool is released. It needs to be dynamic enough that we simply add/remove a user to/from our SQL Server ADOM Group to manage who can use the tool. So at that point either everyone uses the same set of tables or I have to dynamically create temp tables (possible, just not preferred). – Mike - SMT Dec 30 '19 at 17:52
  • As I said, when a user is added to the Group, you can create a table related to the user in your Group management and store it as a key. Use that key to let the user insert/delete data in their own table. Union all tables together if necessary to display (read) data out to a user. When a user is removed from the Group or leaves the company, use a database trigger to drop their table. The creation of a user-specific table could be done via a database trigger as well when a new user is added to your Group. – Ryan Wilson Dec 30 '19 at 17:56
  • @RyanWilson I will look into that as an option. We have a team that manages the SQL Server; I just have access to manage one database. I will see if they can set something like that up. So are you saying I can add some kind of triggered event to my SQL Server that creates a database or set of tables, copying the original tables needed, for each newly added user? – Mike - SMT Dec 30 '19 at 17:57
  • SQL Server generally takes row locks, so there should be no issue here and no need to use read uncommitted. Just be careful with bulk write operations (typically those that touch >5,000 rows), as they may end up with lock escalation from row to table. Reading queries can also use one of the snapshot isolation levels, so they take no locks other than schema stability locks. – Martin Smith Dec 30 '19 at 17:59
  • @MartinSmith My tables are being locked up when multiple users are trying to read/write to the same tables that another user is writing to. This is why I needed to use "read uncommitted". But I am not sure that will prevent locking when multiple people are writing to the same tables. Everyone is out on holiday so I cannot stress test right now to see. So I am trying to find the best solutions before everyone comes back. – Mike - SMT Dec 30 '19 at 18:01
  • Ah, you also need appropriate indexes so the read queries can seek into the user's rows without having to scan over other users' uncommitted rows. Or use snapshot isolation, but it sounds like the indexes should be there for perf reasons anyway if this predicate is always present. – Martin Smith Dec 30 '19 at 18:03

1 Answer


Yes, you can make this multi-tenant approach work fine:

  1. Ensure the leading column of every index is UserId, so a query for one user never needs to scan rows belonging to a different user.
  2. Ensure all queries have an equality predicate on UserId, and verify in the execution plans that they are seeking on it.
  3. Avoid the serializable isolation level, as it can take range locks that affect adjacent users.
  4. Ensure row locking is not disabled on any index, and restrict DML operations to <= 5,000 rows (to prevent lock escalation).
  5. Consider using read committed snapshot isolation for your reading queries. (A sketch of these points follows below.)
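A minimal sketch of points 1, 2, 4, and 5, assuming a hypothetical dbo.SessionData table with UserId, RowId, and Payload columns; substitute your own names:

```sql
DECLARE @UserId int = 42;   -- example value

-- 1. Lead every index with UserId so each user's queries can seek
--    straight to their own rows.
CREATE CLUSTERED INDEX IX_SessionData_UserId
    ON dbo.SessionData (UserId, RowId);

-- 2. Filter every query on UserId with an equality predicate.
SELECT RowId, Payload
FROM dbo.SessionData
WHERE UserId = @UserId;

-- 4. Keep DML batches below the ~5,000-row lock escalation threshold.
WHILE 1 = 1
BEGIN
    DELETE TOP (4000)
    FROM dbo.SessionData
    WHERE UserId = @UserId;

    IF @@ROWCOUNT = 0 BREAK;
END;

-- 5. One-time, database-level setting: readers see the last committed
--    version instead of blocking on writers (may need WITH ROLLBACK
--    IMMEDIATE on a busy database).
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON;
```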
Martin Smith
  • Already good on 1. I will need to do some research on your other points. A lot of this is new to me so I am learning as I go. I will have to look into the "how" on points 2 and 3. – Mike - SMT Dec 30 '19 at 18:17
  • @Mike-SMT - If you are encountering blocking, you need details about the resource being contended and the process holding/requesting it. You can set up collection of a "blocked process report" to gather this info so you can see what the issue is (e.g. a user unexpectedly having to scan another user's rows and requesting row locks on them because indexing is not sufficient, or locks being taken at higher than row granularity). – Martin Smith Dec 30 '19 at 18:21
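For reference, a minimal sketch of setting up the blocked process report mentioned above; the 10-second threshold and the session and file names are example values:

```sql
-- Emit a blocked process report for any block lasting longer than 10 seconds.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 10;
RECONFIGURE;

-- Capture the reports with an Extended Events session.
CREATE EVENT SESSION BlockedProcessReport ON SERVER
ADD EVENT sqlserver.blocked_process_report
ADD TARGET package0.event_file (SET filename = N'blocked_process_report.xel');

ALTER EVENT SESSION BlockedProcessReport ON SERVER STATE = START;
```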