
Over the last few days I have had to deal with distributed algorithms for timed process synchronisation for university. My main exercise was to focus on Leslie Lamport's algorithm (partial ordering/total ordering of events) from 1978 and on F. Mattern's and C. J. Fidge's idea of vector time from 1988.
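
As a rough, purely illustrative sketch (my own Python, not pseudocode from the papers), the two mechanisms boil down to something like this:

    # Lamport clock: one counter per process; a receive takes
    # max(local, sender's timestamp) + 1, giving a partial order that
    # can be extended to a total order by breaking ties with process IDs.
    class LamportClock:
        def __init__(self):
            self.t = 0

        def tick(self):                # local event or send
            self.t += 1
            return self.t

        def recv(self, sender_t):      # merge the sender's timestamp
            self.t = max(self.t, sender_t) + 1
            return self.t

    # Vector time (Mattern/Fidge): one counter per process in a vector;
    # merging is an element-wise maximum, which captures causality exactly.
    class VectorClock:
        def __init__(self, n, pid):
            self.v = [0] * n
            self.pid = pid

        def tick(self):
            self.v[self.pid] += 1
            return list(self.v)

        def recv(self, sender_v):
            self.v = [max(a, b) for a, b in zip(self.v, sender_v)]
            self.v[self.pid] += 1
            return list(self.v)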

In the ideas of those three people I found a lot of pros and cons for using their algorithms in distributed systems. But I wondered, and could not find out, whether there is a "state-of-the-art" algorithm for today's timed process synchronisation in distributed systems.

How is this problem handled today?

Drudge
  • I do not think there is a "state-of-the-art" algorithm; what you need to do is just make trade-offs to better suit your needs – Hanfeng Mar 19 '14 at 08:36

1 Answer


You need partial and total ordering only for fully decentralised algorithms. Most distributed systems these days (Hadoop, NoSQL databases, ...) elect a master node which is responsible for (a part of) the resources. This way, the events are automatically totally ordered on one machine.
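
A minimal sketch of that idea, with hypothetical names (this is not the API of Hadoop or any particular system): once every event goes through one master process, a single monotonically increasing sequence number already gives you a total order, with no clock algorithm needed.

    import itertools

    # Hypothetical single master: events are totally ordered simply by the
    # order in which the master assigns sequence numbers to them.
    class Master:
        def __init__(self):
            self._seq = itertools.count(1)
            self.log = []

        def submit(self, event):
            seq = next(self._seq)        # assignment order = total order
            self.log.append((seq, event))
            return seq

    master = Master()
    master.submit("write x=1")   # -> 1
    master.submit("write y=2")   # -> 2

The trade-off is the usual one: you gain a trivial total order, but you give up some of the fault tolerance and scalability of a fully decentralised design.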

Other than that, Richard Andrew Golding wrote a PhD thesis in 1992, Weak-consistency group communication and membership, in which he describes the Timestamped Anti-Entropy (TSAE) algorithm. It is a good example to implement if you want to see what an eventual-consistency algorithm can look like. In addition to vector clocks he uses matrix clocks there; I've described the rudimentary details in an answer to the question What do matrix clocks solve but vector clocks can't? If you want to know more, I encourage you to read through chapter 5 of his thesis.
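
Very roughly, and only as an illustrative sketch of the bookkeeping (this is not Golding's actual pseudocode): with a matrix clock each node keeps, next to its own vector clock, its best estimate of every other node's vector clock; the column-wise minimum then tells you which events every node is guaranteed to have seen, so acknowledged log entries can be purged.

    # Illustrative matrix-clock bookkeeping for TSAE-style anti-entropy.
    class MatrixClock:
        def __init__(self, n, pid):
            self.n = n
            self.pid = pid
            # row i = the latest vector clock this node has seen from node i;
            # row `pid` is this node's own vector clock.
            self.m = [[0] * n for _ in range(n)]

        def local_event(self):
            self.m[self.pid][self.pid] += 1

        def anti_entropy(self, peer_pid, peer_matrix):
            # take the element-wise maximum of everything learned from the peer
            for i in range(self.n):
                for j in range(self.n):
                    self.m[i][j] = max(self.m[i][j], peer_matrix[i][j])
            # our own row additionally absorbs the peer's own vector clock,
            # because the exchanged log entries are now known locally
            self.m[self.pid] = [max(a, b) for a, b in
                                zip(self.m[self.pid], peer_matrix[peer_pid])]

        def all_have_seen(self):
            # column-wise minimum: events up to this vector are known to every
            # node, so the corresponding log entries can be garbage-collected.
            return [min(self.m[i][j] for i in range(self.n))
                    for j in range(self.n)]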

peter