
I'm running an application on multiple web servers that communicate with a distributed in-memory caching cluster, and I generate exclusive lock IDs on each application server. The problem is that under highly concurrent, parallel execution, more than one execution across the servers can generate the same pseudo-random lock value.

My idea is to create a single instance of the Random class per application pool, seeded from an incrementing counter stored in the distributed caching cluster, and to re-seed the randomizer whenever the thread-safe helper method that draws random numbers has reached a specific number of invocations.

Interested to see what thoughts you would have on this.
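To make the failure mode concrete, here is a minimal sketch (in Python rather than .NET, but the behavior is language-agnostic): two servers whose RNGs happen to be constructed with the same seed, e.g. from the same startup tick count, will emit identical "lock IDs". The seed value and ID range below are illustrative assumptions.

```python
import random

# Two app servers that happen to seed their RNGs identically
# (e.g. same startup tick) produce the same lock-ID sequence.
server_a = random.Random(12345)
server_b = random.Random(12345)

lock_a = server_a.randrange(2**31)
lock_b = server_b.randrange(2**31)
assert lock_a == lock_b  # identical seeds -> colliding "exclusive" lock IDs
```

Even with distinct seeds, nothing prevents two independent RNGs from emitting the same value by chance, which is the birthday-collision risk the question describes.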

asked by Sivart
    I'm not actually sure this question is answerable, but have you considered just using GUIDs or putting a *single* random instance on a server (that all clients ask for the next random number) so everyone is cycling the same RNG? – BradleyDotNET Mar 31 '17 at 23:26
  • I wonder if generating a Random instance with the GUID as the seed would be sufficient. – Sivart Mar 31 '17 at 23:34
  • 1
    Given that GUIDs have more entropy, and are *designed* to be unique no matter how many you generate at a time, I would just use the GUID straight up. Random's always have a chance at collision – BradleyDotNET Mar 31 '17 at 23:35
  • While this question makes no sense as stated (random and unique can't go together), you are probably looking for some sort of distributed auto-increment sequence generator. Some starting points can be found in http://stackoverflow.com/questions/7258619/distributed-primary-key-uuid-simple-auto-increment-or-custom-sequential-value – Alexei Levenkov Apr 01 '17 at 02:15

1 Answer


Random is just that: RANDOM. It is not guaranteed to be unique. Rolling a die is a random event, yet you may get the same result 6 times out of 6.

GUID objects are (almost) guaranteed to be unique, so just use those instead.
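In .NET that would be Guid.NewGuid(); the same idea sketched in Python with version-4 UUIDs (122 random bits, so collisions are negligible in practice and no cross-server coordination is needed):

```python
import uuid

# A UUID makes a practically collision-free lock ID with no coordination.
lock_id = str(uuid.uuid4())

# Every call yields a distinct value, even across machines and processes.
ids = {uuid.uuid4() for _ in range(10_000)}
assert len(ids) == 10_000  # no duplicates in this batch
```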

answered by Rufus L
  • I believe they are intended to be unique across machines as well. The "guaranteed" part seems wrong: generate enough and you *have* to get a duplicate (they are fixed length, after all); it's just that generating that many would take longer than our sun has left to live. – BradleyDotNET Mar 31 '17 at 23:40
  • Yeah, that's the intent. My memory is from when they were generated based on the MAC address and a timestamp, which I thought meant guaranteed unique on a particular machine, but I guess there are different ways they can be generated. I've updated my answer - thanks for the constructive comment! – Rufus L Mar 31 '17 at 23:47
  • 1
    Generating based on MAC means that they would be unique across machines, since every machine *should* have a unique MAC address :) – BradleyDotNET Mar 31 '17 at 23:55
  • @RufusL with coin you are *guaranteed* to have the same result at least 3 times out of 5 :) – Alexei Levenkov Apr 01 '17 at 02:13