Do not depend on clocks being accurate across multiple machines for the core algorithm of a distributed system.
The biggest problem (especially with something like occasionally connected devices) is that they can be disconnected for a long time and make many changes. We're not talking about clock drift here; we're talking about potentially many edits made over long periods on a device that's not connected to a server and has no reliable clock.
If you want an approximate interleaving of operations in a system where it's not critical (a Facebook-style app, not a financial one), the server can maintain an ever-increasing changeId (a long) that ticks with every change applied, in effect a journal of changes. When a device edits an entity, it records the edit along with the last changeId it knows about. Writing that entity revision back to the server logs a new changeId, but the revision keeps both: the changeId the device knew about when it wrote (which determines where the revision interleaves in the history of the entity's writes) and the new server changeId (used when other clients request all revisions since changeId x).
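A minimal sketch of the server side, under stated assumptions: `Server`, `Revision`, `apply`, and `changes_since` are all invented names for illustration, not a real API.

```python
import itertools
from dataclasses import dataclass

@dataclass
class Revision:
    entity_id: str
    value: dict            # the entity state written by the device
    based_on: int          # last changeId the device knew when it wrote
    server_change_id: int  # assigned by the server on receipt

class Server:
    """Sketch of a server-side journal of entity revisions."""
    def __init__(self):
        self._ids = itertools.count(1)  # ever-increasing changeId
        self.journal = []               # journal of changes applied

    def apply(self, entity_id, value, based_on):
        # Every write gets a fresh server changeId, but the revision also
        # remembers the changeId the device based its edit on.
        rev = Revision(entity_id, value, based_on, next(self._ids))
        self.journal.append(rev)
        return rev.server_change_id

    def changes_since(self, change_id):
        # What a reconnecting client pulls to catch up.
        return [r for r in self.journal if r.server_change_id > change_id]
```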
The device can then retrieve all changes since the last changeId it knew about and will get all the affected entities; when viewing an entity, it can order the revisions by the changeId each writer knew about when the revision was written.
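A sketch of the client-side ordering, with made-up tuples rather than a real schema: each revision carries the changeId its writer knew about and the server-assigned changeId.

```python
# Each revision: (based_on_change_id, server_change_id, value).
# based_on is the last changeId the writing device had seen;
# server_change_id records when the write actually reached the server.
revisions = [
    (5, 9, "edit from a frequently connected device"),
    (2, 10, "late-arriving edit from a long-offline device"),
]

# Interleave by the changeId each writer knew about, breaking ties
# with server arrival order; the last revision in this order wins.
ordered = sorted(revisions, key=lambda r: (r[0], r[1]))
winner = ordered[-1][2]
```

Note that the long-offline device's edit sorts earlier even though it reached the server later, which is exactly the interleaving the answer describes.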
That means a device that's connected more consistently will win more often on interleaved writes, and devices that are only occasionally connected will lose more often. But all revisions are kept; the latest simply wins. You could do that at a field level or an entity level. At the field level, changes to different fields by different users would merge effortlessly, if that's what you want.
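If you go down to the field level, the merge might be sketched like this (the function name and data shapes are invented for illustration):

```python
def merge_fields(revisions):
    """Field-level last-write-wins. Each revision is
    (based_on_change_id, server_change_id, fields_dict); for each field,
    the latest revision (by based_on, then server changeId) wins."""
    merged = {}
    for _based_on, _server_id, fields in sorted(
            revisions, key=lambda r: (r[0], r[1])):
        merged.update(fields)  # later revisions overwrite earlier ones
    return merged

# An offline device renamed the entity and set a color; a connected
# device later renamed it again. Only the name conflicts.
merged = merge_fields([
    (2, 10, {"name": "offline rename", "color": "red"}),
    (5, 9,  {"name": "online rename"}),
])
```

The non-conflicting `color` change survives, while the conflicting `name` goes to the better-connected writer.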
It's an approximate interleaving of distributed revisions, ordered simply by each client's last exchange with the server.