
I have a scenario where I need to generate a batch number (primary key) in the format below.

Batch Number: (XX)(XXXXX) = Location (2 digits) + Sequence (5 digits)

       E.g.:   0100001
               0100002
               0200001
               0100003
               0200002
               .......

The sequence starts at 00001 for each location. However, we cannot use a sequence number generator to do this. The possible solutions I have off the top of my head are:

  1. Create an extra table that holds the current numbers. However, there is a possibility that multiple users get the same sequence number because of uncommitted transactions (a locking sketch follows this list).
  2. Every time an entity is saved, get max(substring(batchnum,2)) from that column and add 1. However, this would have a huge performance overhead and still has the issue of multiple users getting the same sequence.
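
For option 1, the duplicate-sequence risk can be reduced by locking the counter row until the inserting transaction commits. This is only a minimal sketch, assuming Oracle-style SQL; the `batch_counter` and `batch` table and column names are made up for illustration:

    -- One counter row per location, locked while it is being incremented
    CREATE TABLE batch_counter (
        location_code CHAR(2)   PRIMARY KEY,
        last_seq      NUMBER(5) NOT NULL
    );

    DECLARE
        v_seq NUMBER(5);
    BEGIN
        -- FOR UPDATE serializes concurrent sessions on the same location row,
        -- so two uncommitted transactions cannot read the same value
        SELECT last_seq + 1
          INTO v_seq
          FROM batch_counter
         WHERE location_code = '01'
           FOR UPDATE;

        UPDATE batch_counter
           SET last_seq = v_seq
         WHERE location_code = '01';

        -- Batch number = location code || zero-padded sequence, e.g. '0100001'
        INSERT INTO batch (batch_num)
        VALUES ('01' || LPAD(v_seq, 5, '0'));

        COMMIT;  -- releases the row lock; the next session sees the new last_seq
    END;
    /

The trade-off is that all inserts for one location serialize on that counter row, which is what keeps the numbers unique and gap-free but also what limits concurrency.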
  • "However we cannot have a sequence number generator to do this." Why not? (When encountering a new batch, dynamically create a new location sequence named after the batch, and use a function to get the next value based on the batch number... but I'm probably missing something.) – xQbert Apr 22 '14 at 17:22
  • Do you need to persist this number? Otherwise you could simply store the batch identifier (`01`, `02`) and a sort order. When retrieving the data you can use `row_number()` to generate the final (complete) batch number. –  Apr 22 '14 at 17:25
  • row_number() has its own disadvantages. If you delete a row, the row numbers of the succeeding records change, which messes up the sequence numbers used to identify a record's place in a batch. – PlanetWired Apr 22 '14 at 17:53
  • @xQbert: I see that creating the sequences on the fly is really a good solution. But will there be any performance impact or database maintenance issues with this type of approach? If I have, say, 1000 batches, there will be 1000 sequences generated, which I think is not so clean, right? – PlanetWired Apr 22 '14 at 17:58
  • @user3561431 I've not done it at the volume you're working with, so I can't say what the performance impact would be. It logically seemed like the cleanest long-term solution (which is why it's a comment, not an answer). My fear was that you may run into the need to use dynamic SQL, but using a function to get the next sequence may reduce that concern. I can say I work with a system having over 300 sequences and 1,744,464,335 new IDs over 10 years, and I don't see a performance problem with our enterprise DB. – xQbert Apr 22 '14 at 18:06
  • @xQbert Thanks a lot, I appreciate your help. I didn't see any better solution either. I will go with this approach. – PlanetWired Apr 22 '14 at 18:20
  • What's not clean? You have BN01, BN02, ..., BN99 sequences that are easily identifiable! If it's really an enterprise database, look up whether there are limits on the number of sequences and their use. I wouldn't think it should be a problem, especially given the limited range of the sequences; the volume just isn't there, a max of 10 million IDs. I think it's easier to maintain, cleaner, and each location truly gets its own unique sequence. The real question is WHY the numbers are important: if they are just unique keys, shouldn't any random unique value work? – xQbert Apr 22 '14 at 18:36
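
A rough sketch of the per-location sequences discussed in the comments above, again assuming Oracle-style syntax; the sequence and function names are illustrative, and each new location would need its sequence created up front or via dynamic DDL:

    -- One sequence per location, e.g. batch_seq_01, batch_seq_02, ...
    CREATE SEQUENCE batch_seq_01 START WITH 1;

    CREATE OR REPLACE FUNCTION next_batch_num(p_location IN VARCHAR2)
        RETURN VARCHAR2
    IS
        v_next NUMBER;
    BEGIN
        -- Dynamic SQL because the sequence name depends on the location code;
        -- p_location should be validated (two digits only) before concatenating
        EXECUTE IMMEDIATE
            'SELECT batch_seq_' || p_location || '.NEXTVAL FROM dual'
            INTO v_next;
        RETURN p_location || LPAD(v_next, 5, '0');
    END;
    /

    -- Usage, e.g. when inserting a batch row:
    --   INSERT INTO batch (batch_num) VALUES (next_batch_num('01'));

Unlike the counter-table sketch earlier, this keeps concurrency handling inside the database's own sequence mechanism, at the cost of one sequence object per location (and gaps in the numbering if a transaction rolls back).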

0 Answers