
I don't know if this is the best approach; if there is a better way, please post it.

I have an application that reads a file and inserts records. The entire file is processed in one transaction.

Before a record is inserted, the table needs to be checked for duplicates. (Note: I can't make this a table constraint, since there are exceptions.)

So the duplicate check is a normal SELECT statement, but the problem is that it reads the uncommitted records from the current transaction. I have included READPAST and READCOMMITTED hints in the SELECT statement, but it still returns the record.
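
To make the scenario concrete, here is a minimal sketch of the pattern described above. The table and column names (dbo.Records, RecordKey, Payload) are placeholders, not from the original post:

```sql
-- Placeholder schema: dbo.Records(RecordKey, Payload)
BEGIN TRANSACTION;

INSERT INTO dbo.Records (RecordKey, Payload)
VALUES ('ABC123', 'first row from the file');

-- Duplicate check before the next insert. The hints do not help here:
-- a session always sees its own uncommitted changes, so this SELECT
-- still returns the row inserted above (adding READCOMMITTED to the
-- hint list behaves the same way).
SELECT COUNT(*)
FROM dbo.Records WITH (READPAST)
WHERE RecordKey = 'ABC123';

COMMIT TRANSACTION;
```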

Any ideas?

Hannes
  • How are you going to handle duplicates? Do you roll back the whole transaction, or apply some other business logic? Maybe it's better to create a unique index on the field and then catch the exceptions within the application (see the sketch after these comments)? – heximal Jul 05 '11 at 14:20
  • Thanks for your reply. The issue is, duplicates within the same file are acceptable, but not if the record was already created by a previous file. – Hannes Jul 05 '11 at 18:57
  • Part of the issue is to prevent users from importing the same file twice. I don't really want to move this to the application, but it seems I have to. – Hannes Jul 05 '11 at 19:04
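
The following is a hedged sketch of the unique-index idea from the first comment, reusing the placeholder names above (as Hannes's follow-up notes, a plain unique index doesn't fit his case directly, since duplicates within one file are acceptable):

```sql
-- Enforce uniqueness at the database level (placeholder names).
CREATE UNIQUE INDEX UX_Records_RecordKey
    ON dbo.Records (RecordKey);

BEGIN TRY
    INSERT INTO dbo.Records (RecordKey, Payload)
    VALUES ('ABC123', 'row from the file');
END TRY
BEGIN CATCH
    -- 2601 = duplicate key in unique index, 2627 = unique constraint violation
    IF ERROR_NUMBER() IN (2601, 2627)
        PRINT 'duplicate skipped';  -- business logic for duplicates goes here
    ELSE
        RAISERROR('unexpected error during insert', 16, 1);
END CATCH;
```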

1 Answer


The only way to implement this within the database is to lock the table. Look at the SERIALIZABLE isolation level.
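
A minimal sketch of that approach, reusing the placeholder table from the question. The SELECT under SERIALIZABLE takes key-range locks, so a concurrent import cannot insert a matching row until this transaction commits, at the cost of reduced concurrency:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;

-- The range lock taken by this check blocks other transactions from
-- inserting a row with the same key until COMMIT.
IF NOT EXISTS (SELECT 1 FROM dbo.Records WHERE RecordKey = 'ABC123')
    INSERT INTO dbo.Records (RecordKey, Payload)
    VALUES ('ABC123', 'row from the file');

COMMIT TRANSACTION;
```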

heximal
  • Thanks, that could work, but concurrency will be a big problem. I'll handle this in the application. – Hannes Jul 06 '11 at 09:20