Writes cannot be applied to the dataset simultaneously. When a write is sent to a MongoDB instance, be it a shard or a standalone server, here is what happens:
- A collection-wide write lock (which resides in RAM) is requested
- When the lock is granted, the data to be written (be it an update, an upsert, or a new document) is checked against the unique indexes (which usually reside in RAM as well); the first sketch after this list shows such a collision
- If there is no collision, the data is applied to the dataset in RAM
- The lock is released; only now can other writes start applying changes to the data in memory
- With the default write concern, the operation returns to the client at this point (the second sketch after this list shows how to wait longer)
- After at most `commitIntervalMs` milliseconds, the change is written to the journal
- Only after `syncPeriodSecs` seconds (60 by default) is the journal applied to the data files
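
To make the unique-index check concrete, here is a minimal sketch using pymongo; the collection name `users` and the field `email` are made up for illustration, and the connection string assumes a local instance:

```python
from pymongo import MongoClient
from pymongo.errors import DuplicateKeyError

client = MongoClient("mongodb://localhost:27017")  # adjust to your deployment
users = client.test.users

# A unique index on "email"; every write is checked against it
# before it is applied to the dataset in RAM.
users.create_index("email", unique=True)

users.insert_one({"email": "alice@example.com"})
try:
    # A second write with the same key collides during the index check
    # and is rejected before anything is changed.
    users.insert_one({"email": "alice@example.com"})
except DuplicateKeyError as exc:
    print("write rejected by unique index:", exc)
```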
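
And here is a sketch of how the write concern changes when the call returns; `events` is again a hypothetical collection:

```python
from pymongo import MongoClient, WriteConcern

client = MongoClient("mongodb://localhost:27017")
db = client.test

# Default write concern (w=1): the call returns as soon as the change
# has been applied to the data in RAM -- before it reaches the journal.
fast = db.get_collection("events", write_concern=WriteConcern(w=1))
fast.insert_one({"type": "click"})

# j=True: the call does not return until the change has been committed
# to the journal, trading latency for durability.
durable = db.get_collection("events", write_concern=WriteConcern(w=1, j=True))
durable.insert_one({"type": "purchase"})
```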
That being said, we can look at the actual values. 1 million writes/second seems a bit much for a single server (simply because the mass storage can't handle it), so let's assume a sharded cluster with 10 shards and a shard key which distributes the writes more or less evenly, giving 100k writes/second per shard. As we have seen above, all operations are applied in RAM. With today's hardware, some 3.5 billion instructions per second can be processed, or 3.5 instructions per nanosecond. Let's assume getting and releasing a lock each take 35 instructions, or 10 nanoseconds. Locking and unlocking for each of the 100k writes would then take 20 nanoseconds per write, altogether 1/500 of a second.
That would leave 499/500 of a second, or 998,000,000 nanoseconds, for the other work MongoDB needs to do, which translates to a whopping 3.493 billion instructions.
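
A few lines of Python verify the arithmetic; all numbers are the assumptions stated above:

```python
# Back-of-the-envelope check of the numbers above.
instructions_per_ns = 3.5                 # ~3.5 billion instructions/s
lock_cost_ns = 35 / instructions_per_ns   # 10 ns to get a lock, 10 ns to release it
writes_per_shard = 1_000_000 // 10        # 100k writes/s on each of 10 shards

overhead_ns = writes_per_shard * 2 * lock_cost_ns
print(overhead_ns)                        # 2,000,000 ns = 1/500 of a second

budget_ns = 1_000_000_000 - overhead_ns   # 998,000,000 ns left in the second
print(budget_ns * instructions_per_ns)    # ~3.493 billion instructions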
The locks preventing concurrent writes are far from being the limiting factor for write operations. Syncing the changes to the journal and the data files is usually the limiting factor, followed by too little RAM to keep the indices and the working set in memory.
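
A quick way to get a first hint whether your indexes still fit in RAM is `collStats`; the collection name `users` is again hypothetical:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.test

# collStats reports the size of the collection and its indexes;
# comparing totalIndexSize against available RAM shows whether
# the indices can plausibly stay in memory.
stats = db.command("collStats", "users")
print("data size (bytes):  ", stats["size"])
print("index size (bytes): ", stats["totalIndexSize"])
```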