You might do something like this:
```csharp
var filter = Builders<Customer>.Filter
    .Eq(c => c.CustomerId, customer.CustomerId); // perhaps requires an index on the CustomerId field

var update = Builders<Customer>.Update
    .Set(c => c.CustomerId, customer.CustomerId);

if (!string.IsNullOrWhiteSpace(customer.City))
    update = update.Set(c => c.City, customer.City);
if (!string.IsNullOrWhiteSpace(customer.Name))
    update = update.Set(c => c.Name, customer.Name);
if (!string.IsNullOrWhiteSpace(customer.Family))
    update = update.Set(c => c.Family, customer.Family);
if (!string.IsNullOrWhiteSpace(customer.Sex))
    update = update.Set(c => c.Sex, customer.Sex);

customers.UpdateOne(filter, update);
```
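Both snippets assume a `Customer` POCO and an `IMongoCollection<Customer>` named `customers`, which the original code does not show. A minimal sketch of that setup, with the class shape inferred from the fields used above and placeholder connection details (all names here are assumptions), might look like this:

```csharp
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Driver;

// Placeholder connection string, database and collection names (assumptions, not from the original).
var client = new MongoClient("mongodb://localhost:27017");
var database = client.GetDatabase("shop");
var customers = database.GetCollection<Customer>("customers");

// Entity shape inferred from the fields used in the snippets; the _id member is an assumption.
public class Customer
{
    [BsonId]
    public ObjectId Id { get; set; }

    public string CustomerId { get; set; }
    public string Name { get; set; }
    public string Family { get; set; }
    public string City { get; set; }
    public string Sex { get; set; }
}
```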
A different approach is to treat your customer entity as a whole (document-oriented databases encourage this), always updating the entity in its entirety rather than issuing the fine-grained updates that are more typical of relational databases. In that case, you might write something like this:
```csharp
var customerFromDb = customers
    .Find(c => c.CustomerId == customer.CustomerId)
    .Single();

if (!string.IsNullOrWhiteSpace(customer.City))
    customerFromDb.City = customer.City;
if (!string.IsNullOrWhiteSpace(customer.Name))
    customerFromDb.Name = customer.Name;
if (!string.IsNullOrWhiteSpace(customer.Family))
    customerFromDb.Family = customer.Family;
if (!string.IsNullOrWhiteSpace(customer.Sex))
    customerFromDb.Sex = customer.Sex;

customers.ReplaceOne(c => c.CustomerId == customer.CustomerId, customerFromDb);
```
This second approach has the following pros and cons.
Pros
- the update happens in the domain layer (possibly through well-suited methods, as sketched after this list), which lets you enforce all domain rules and prevents dirty data from reaching the database; it is therefore more object-oriented.
Cons
- this approach requires two round trips to the database (one for the read and one for the replace);
- if your entity is large, the network has to carry more data than strictly necessary.
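To make the first pro concrete, the `Customer` class sketched earlier could expose a well-suited update method instead of leaving callers to poke its setters. The method name and the rules below are hypothetical, just to illustrate where domain validation would live in the second approach:

```csharp
using System;

public class Customer
{
    // ... properties as sketched earlier ...
    public string Name { get; set; }
    public string Family { get; set; }
    public string City { get; set; }
    public string Sex { get; set; }

    // Hypothetical domain method: the entity decides which incoming values are
    // acceptable, so dirty data never reaches ReplaceOne.
    public void ApplyChangesFrom(Customer incoming)
    {
        if (incoming == null)
            throw new ArgumentNullException(nameof(incoming));

        if (!string.IsNullOrWhiteSpace(incoming.Name))
            Name = incoming.Name;
        if (!string.IsNullOrWhiteSpace(incoming.Family))
            Family = incoming.Family;
        if (!string.IsNullOrWhiteSpace(incoming.City))
            City = incoming.City;
        if (!string.IsNullOrWhiteSpace(incoming.Sex))
            Sex = incoming.Sex;
    }
}
```

With such a method, the if-chain in the second snippet collapses into a single `customerFromDb.ApplyChangesFrom(customer)` call before `ReplaceOne`, and any additional invariants (allowed values for `Sex`, name length limits, and so on) can be checked in one place.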
Considering that premature optimization is the root of all evil (or at least most of it) in programming, in general I would go for the second approach, falling back to the first only when there are very strict performance requirements or low network bandwidth.