Any write operation to a single document is guaranteed to be atomic: it either succeeds completely or not at all, and a client reading that document will always see it in a consistent state, never a half-applied update.
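For example, a single update that sets several fields is applied as one atomic write, so a concurrent reader sees either all of the changes or none of them. A minimal sketch with pymongo (the connection string, database, and field names are made up for illustration):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
accounts = client.bank.accounts  # hypothetical database and collection

# Both fields change in one atomic write: a concurrent reader sees
# either the old document or the new one, never a half-applied mix.
accounts.update_one(
    {"_id": 1},
    {"$set": {"balance": 90, "last_txn": "debit-10"}},
)
```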
For bulk operations, this is not the case by default. If you update 10 documents in a single operation, other reads and writes can interleave between the individual document updates, so a concurrent reader may see some documents updated and others not. You can use the $isolated operator to prevent other clients from viewing that set of documents in an inconsistent state.
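A rough sketch of what that looks like in pymongo (the collection and field names are hypothetical). Note that $isolated goes in the query document, does not work on sharded clusters, and was deprecated in MongoDB 3.6 and removed in 4.0, so this only applies to older single-instance deployments:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client.shop.orders  # hypothetical collection

# $isolated is placed in the query document. Once the multi-update
# touches its first document, other clients can't observe the matched
# documents until the whole operation finishes. It is NOT a transaction:
# the update can still fail partway through with no rollback.
orders.update_many(
    {"status": "pending", "$isolated": 1},
    {"$set": {"status": "processing"}},
)
```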
In a more general sense, MongoDB uses a writer-greedy readers-writer lock, so you can be pretty sure that if client A updates a document right before client B reads it, then B will see A's update. Some of that comes down to the driver implementation and how you use it, of course, but that's true of any database.
Distributed environments
The above applies only if you're talking about a single instance. Once you have a distributed MongoDB setup, we start talking CAP theorem. One of the great features of MongoDB is how well it scales to handle huge data sets, which is typically accomplished using sharding, replica sets, or both. In any of these scenarios, MongoDB is said to be eventually consistent: it may be possible for client B to read a document that is missing client A's update, because MongoDB chooses Availability over Consistency. Eventually all those inconsistencies get ironed out, and in practice it's usually consistent enough for most use cases, but it can't make any guarantees when it's set up in a distributed environment.
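To make the trade-off concrete, here's a sketch with pymongo against a replica set (the hosts, database, and collection names are made up): reading from a secondary favors availability but may return stale data, while a majority write concern pushes back toward consistency at the cost of latency:

```python
from pymongo import MongoClient, ReadPreference
from pymongo.write_concern import WriteConcern

# Hypothetical three-member replica set.
client = MongoClient("mongodb://host1,host2,host3/?replicaSet=rs0")
db = client.get_database("app")

# Availability: read from a secondary if one is available. The secondary
# may not have replicated client A's write yet, so this read can be stale.
events = db.get_collection(
    "events", read_preference=ReadPreference.SECONDARY_PREFERRED
)
doc = events.find_one({"_id": 1})

# Consistency: wait for a majority of members to acknowledge the write.
# Slower, and it fails if a majority of the set is unreachable.
durable = db.get_collection("events", write_concern=WriteConcern(w="majority"))
durable.update_one({"_id": 1}, {"$set": {"seen": True}})
```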