At one extreme, write waits until all the data has been confirmed as written to the remote system. It gives the greatest certainty of successful completion, at the expense of being the slowest.
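For concreteness, here's a minimal sketch of that first extreme, assuming this is Boost.Asio: the free function write blocks until the whole buffer has been handed off, or an error occurs. The host, port, and payload below are purely illustrative.

```cpp
// Minimal sketch, assuming a reasonably recent Boost.Asio; the host, port and
// payload are placeholders.
#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main() {
    namespace asio = boost::asio;
    asio::io_context io;

    asio::ip::tcp::resolver resolver(io);
    asio::ip::tcp::socket sock(io);
    asio::connect(sock, resolver.resolve("example.com", "7000"));

    std::string payload(1024 * 1024, 'x');   // a megabyte of filler data

    // Blocks until every byte has been handed to the socket, or throws on error.
    std::size_t n = asio::write(sock, asio::buffer(payload));
    std::cout << "wrote " << n << " bytes\n";
}
```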
At the opposite extreme, you could just queue the data for writing and return immediately. This is fast, but gives no assurance at all that the data will actually be written. If a router is down, a DNS server is handing out incorrect addresses, etc., you could be trying to send to a machine that isn't available and (possibly) hasn't been available for quite a while.
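The "queue it and return" extreme corresponds roughly to an asynchronous write. A hedged sketch, again assuming Boost.Asio; queue_send is a made-up helper name, and io_context::run() must be running somewhere for the handler to ever fire:

```cpp
// Minimal sketch of the fire-and-forget style, assuming Boost.Asio.
#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <string>

void queue_send(boost::asio::ip::tcp::socket& sock, std::string data) {
    namespace asio = boost::asio;

    // Keep the buffer alive until the completion handler runs.
    auto payload = std::make_shared<std::string>(std::move(data));

    asio::async_write(sock, asio::buffer(*payload),
        [payload](const boost::system::error_code& ec, std::size_t n) {
            if (ec)
                std::cerr << "send failed: " << ec.message() << '\n';
            else
                std::cout << "sent " << n << " bytes\n";
        });
    // Control returns here immediately; nothing has necessarily been sent yet,
    // and if the remote machine is unreachable you only find out (much) later.
}
```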
write_some is kind of a halfway point between these two extremes. It doesn't return until at least some data has been written, so it assures you that the remote host you were trying to write to does currently exist (for some, possibly rather loose, definition of "currently"). It doesn't assure you that all the data will be written, but it may complete faster, and it still gives a bit of a "warm fuzzy" feeling that the write is likely to complete.
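A minimal sketch of that halfway point, assuming Boost.Asio: write_some blocks only until at least one byte has gone out and returns how many bytes were written, so callers that need the whole buffer sent typically loop. send_all is a made-up name for illustration.

```cpp
#include <boost/asio.hpp>
#include <string>

std::size_t send_all(boost::asio::ip::tcp::socket& sock, const std::string& data) {
    namespace asio = boost::asio;
    std::size_t total = 0;

    while (total < data.size()) {
        // Returns as soon as *some* bytes are written; may be fewer than asked for.
        total += sock.write_some(asio::buffer(data.data() + total,
                                              data.size() - total));
    }
    return total;
}
```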
As to when you'd likely want to use it: the obvious scenario would be something like a large transfer over a local connection on a home computer. The likely problem here isn't with the hardware, but with the computer (or router) being mis-configured. As soon as one byte has gone through, you're fairly assured that the connection is configured correctly, and the transfer will probably complete. Since the transfer is large, you may be saving a lot of time in return for a minimal loss of assurance about successful completion.
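A hedged sketch of that scenario, still assuming Boost.Asio: treat the first write_some call as a cheap "the connection works" check, then queue the (large) remainder without blocking for it. send_large is a made-up helper, and an io_context must be running for the queued part to go out.

```cpp
#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <string>

void send_large(boost::asio::ip::tcp::socket& sock,
                std::shared_ptr<std::string> data) {
    namespace asio = boost::asio;

    // Returns once at least one byte is through: the quick assurance that the
    // computer/router configuration is basically right.
    std::size_t sent = sock.write_some(asio::buffer(*data));

    // Queue the rest instead of blocking for the whole (large) transfer;
    // the shared_ptr keeps the buffer alive until the handler runs.
    asio::async_write(sock,
        asio::buffer(data->data() + sent, data->size() - sent),
        [data](const boost::system::error_code& ec, std::size_t) {
            if (ec)
                std::cerr << "remainder failed: " << ec.message() << '\n';
        });
}
```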
As to when you'd want to avoid it: pretty much reverse the circumstances above. You're sending a small amount of data over (for example) an unreliable Internet connection. Since you're only sending a little data, you don't save much time by returning before all of it has been sent. And the connection is unreliable enough that the odds of one packet getting through are effectively independent of the odds for the others; that is, getting one packet through tells you little about the likelihood of being able to send the next.