
I have code that writes to the database, and two separate calls are made to it simultaneously.

For example:

$x->KEY = 'ABC'; //Primary key
$x->TIME = date('His') . substr(microtime(), 2, 2); //Primary key
$x->save(); //saves the record in DB 

Both processes come from an external service, so I have no control over them. As you can see, the primary key involves TIME, which causes duplicate-entry errors when multiple executions happen at the same time.

I've tried inserting a DB entry recording that the process has already executed once, so that the second process doesn't go through. However, that only led to more duplicate DB errors.

I've also tried delaying execution by `rand($microsecs)`, hoping the two calls would pick different random delays, but that doesn't work every time either.

Can you please suggest alternate ways to solve this problem?

Thank you!!

Teena Thomas
  • This seems contrived—or more like an [XY problem](http://xyproblem.info/). What actual problem are you trying to solve? Can you show some code? – slhck Feb 12 '15 at 20:50
  • How different are the two function calls? Different process, different thread,...? – Willem Van Onsem Feb 12 '15 at 20:52
  • Set a `unique` constraint in the table for fields that are unique, or set a `primary key`. – developerwjk Feb 12 '15 at 20:53
  • Since PHP is single-threaded, and if you are sending the requests to the same server, then you could have an if-statement checking whether the task has been completed, which would stop the second request from executing. You could accomplish this with a global variable or a DB entry. – Wold Feb 12 '15 at 20:53
  • @Wold: php can be [multithreaded](http://php.net/manual/en/class.thread.php). – Willem Van Onsem Feb 12 '15 at 20:53
  • @CommuSoft yes, but since I know none of the details of this person's code, I am assuming this person is not using the thread class. – Wold Feb 12 '15 at 20:55
  • So really you're just trying to avoid database duplication that results from concurrency, is that the real problem? – slhck Feb 12 '15 at 21:02
  • @slhck: Yes, that's right. – Teena Thomas Feb 12 '15 at 21:05
  • A unique constraint is the way to go, because it will prevent duplication not only from PHP but also from the backend (e.g. someone using the MySQL console). – developerwjk Feb 12 '15 at 21:06
  • @developerwjk: Thank you. Are there any other solutions that use only PHP, without altering the DB? – Teena Thomas Feb 12 '15 at 21:08
  • It's a bad idea to try to do this only via PHP or any other language. It has to be done on the DB to be foolproof; any attempt at a workaround from the programming language will fail at some point. – developerwjk Feb 12 '15 at 21:09
  • Because you're not only dealing with concurrency in PHP, but also in the database server. Since the database server is the final point, the concurrency issue must be dealt with there. – developerwjk Feb 12 '15 at 21:10
  • TIME is a primary key, so it has a unique constraint on it. – Teena Thomas Feb 12 '15 at 21:13
  • The issue here isn't concurrency. The issue is a poor choice of primary key. Database 101: choose keys that are natural discriminators, or generate artificial ones, for example by incrementing an integer. Trying to lag the calls or adding more precision will only take you a little further; it won't solve your problem. – Dan Feb 12 '15 at 21:48
  • It's also unclear whether you want to record the calls (trying to lag them suggests this) or to drop the duplicate ones (as choosing time as the primary key suggests). I don't know the context of your question, but something is definitely wrong here: either you picked the wrong primary key and the duplicate calls are legitimate, or you chose the right key and you want to drop the duplicate calls anyway. – Dan Feb 12 '15 at 21:57
  • @Dan: The primary key has been set and is correct. The duplicate calls need to be dropped, which they are when the duplicate error occurs; I need something to avoid that error. – Teena Thomas Feb 12 '15 at 23:14
  • Is changing the issued query an option? Depending on the database there might be an option to ignore errors on insert (catching just the duplicate-entry error in PHP is sketched after this thread): http://stackoverflow.com/questions/812437/mysql-ignore-insert-error-duplicate-entry – Dan Feb 13 '15 at 22:25
  • If this isn't an option you should take a look at locking your table and checking whether an entry for the given primary key exists. This will decrease the performance though as your other threads will have to wait for the database to become available to them again. – Dan Feb 13 '15 at 22:26
  • Using `ignore` is not an option, as we don't want to permanently ignore the errors for all inserts. Locking the table is not an option either, as there would be multiple users accessing that table. Thanks for your suggestions, Dan!! – Teena Thomas Feb 16 '15 at 14:39
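
The thread above converges on keeping the unique constraint and handling the resulting error in PHP, rather than suppressing errors for every insert with `ignore`. Below is a minimal sketch of catching just the duplicate-entry error, assuming PDO against MySQL; the connection details, table name (`records`), and column names are hypothetical placeholders, not from the original code:

// Hypothetical PDO connection; adjust the credentials to the real environment.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$key  = 'ABC';
$time = date('His') . substr(microtime(), 2, 2); // same key scheme as the question

try {
    // 'records', 'key_col' and 'time_col' are placeholder names.
    $stmt = $pdo->prepare('INSERT INTO records (key_col, time_col) VALUES (?, ?)');
    $stmt->execute(array($key, $time));
} catch (PDOException $e) {
    // MySQL reports duplicate entries with driver error code 1062: drop the
    // second, duplicate call silently and re-throw anything else unchanged.
    if (!isset($e->errorInfo[1]) || $e->errorInfo[1] != 1062) {
        throw $e;
    }
}

This keeps the error handling local to the one insert that can legitimately collide, so all other inserts still surface their errors normally.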

1 Answer


Try setting your TIME key using this code:

$microtime = explode(" ", microtime());           // microtime() returns "0.fraction seconds"
$x->TIME = $microtime[1];                         // the whole-seconds part
$x->TIME .= str_replace("0.", "", $microtime[0]); // append every fractional digit


// in a tight for loop there are no duplicates:
// outputs 142377749062793500
//     ... 142377749062799300
//     ... 142377749062804200
//     ... 142377749062809100
//     ... 142377749062813800

//     ... your existing microtime code in the same loop would output:
//     ... 22515348
//     ... 22515348
//     ... 22515348
//     ... 22515348
//     ... 22515348

It's much more precise than your existing microtime code, so there's far less chance of colliding with an existing row.
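
For completeness, here is that key generation dropped into the save pattern from the question; a sketch assuming the same `$x` record object. Two separate processes can still, in principle, collide within the same microsecond, so pairing this with the duplicate-error handling discussed in the comments remains a good idea:

// Same $x record object as in the question; only the TIME value changes.
$microtime = explode(" ", microtime());

$x->KEY  = 'ABC';                                 // primary key
$x->TIME = $microtime[1]                          // whole seconds since the epoch
         . str_replace("0.", "", $microtime[0]);  // plus the full fractional digits
$x->save();                                       // saves the record in DB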

Ian