This is a problem with very large scope. How is the data structured? How do the "db loaders" get the data from the "data producing" machine? What happens if an update fails: is the data lost, or must it be persisted at any cost?
I will make some assumptions and suggest a solution:
1. The data can be partitioned.
2. You have access to a central persistent buffer, e.g. MSMQ or WebSphere MQ.
The machine generating the data puts chunks into the central queue. Each chunk is composed of a set of record IDs plus the new values for the relevant properties; you decide the granularity.
The "db loaders" listen to the queue, and each dequeues a chunk (contention exists only at the dequeue stage, which queue products optimize heavily) and updates its own set of IDs.
This way the insert/update work is distributed among the machines: each handles its own portion, and if one crashes, the others simply work a bit harder.
If an update fails, you can return the chunk to the queue and retry it later (a transactional read).
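Here is a minimal sketch of the pattern in Python, using an in-process `queue.Queue` as a stand-in for the central persistent queue (MSMQ / WebSphere MQ in the real setup). The chunk format, the simulated database, and the injected failure are all illustrative assumptions, not part of any real queue API:

```python
import queue
import threading

# Stand-in for the central persistent queue (MSMQ / WebSphere MQ).
chunk_queue = queue.Queue()

# Simulated target database: record id -> value.
db = {}
db_lock = threading.Lock()

def produce_chunks(records, chunk_size):
    """The data-producing machine enqueues chunks of (id, value) pairs."""
    for i in range(0, len(records), chunk_size):
        chunk_queue.put(records[i:i + chunk_size])

def loader(fail_once=None):
    """A 'db loader': dequeue a chunk, apply it, requeue it on failure."""
    seen_failures = set()  # per-worker memory of chunks it already failed on
    while True:
        try:
            chunk = chunk_queue.get(timeout=0.5)
        except queue.Empty:
            return  # queue drained, this worker is done
        first_id = chunk[0][0]
        try:
            # Inject one transient failure per worker to exercise the retry path.
            if fail_once is not None and first_id == fail_once \
                    and first_id not in seen_failures:
                seen_failures.add(first_id)
                raise RuntimeError("transient update failure")
            with db_lock:
                for record_id, value in chunk:
                    db[record_id] = value
        except RuntimeError:
            # The "transactional read": on failure, return the chunk
            # to the queue so it is retried rather than lost.
            chunk_queue.put(chunk)

records = [(i, i * 10) for i in range(100)]
produce_chunks(records, chunk_size=10)

# Several loaders share the queue; one chunk fails once and is retried.
workers = [threading.Thread(target=loader, kwargs={"fail_once": 50})
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Despite the injected failure, all 100 records end up applied, because the failed chunk is simply requeued and picked up again. A real broker gives you this requeue-on-failure behavior transactionally, so a crashed loader does not lose its in-flight chunk either.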