There is always the possibility of two asset updates from different sources reaching two endorsers in a different order. That will produce read/write sets that do not match, and it can cause both transactions to fail if the endorsement policy requires all endorsers to produce the same r/w set. Adding reads of external data, which can change between calls or fail for any number of momentary reasons, only exacerbates the problem.
But the ledger itself will be handled correctly no matter which endorsement policy is in effect, because only properly endorsed r/w sets get committed.
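For concreteness, here is a minimal Go chaincode sketch (using the fabric-contract-api-go package) of the kind of external read that makes endorsements diverge; the contract name, endpoint, and keys are purely illustrative, not a recommendation.

```go
package main

import (
	"fmt"
	"io"
	"net/http"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// AssetContract is a minimal contract used only to illustrate the problem.
type AssetContract struct {
	contractapi.Contract
}

// UpdatePrice fetches a value from an external service at endorsement time and
// writes it to the ledger. Each endorser executes this independently, so if the
// external value changes between their calls, or one call fails, the proposed
// write sets differ and the transaction can fail validation.
func (c *AssetContract) UpdatePrice(ctx contractapi.TransactionContextInterface, assetID string) error {
	// Hypothetical external endpoint; the value it returns can change at any moment.
	resp, err := http.Get("https://example.com/price/" + assetID)
	if err != nil {
		return fmt.Errorf("external read failed: %w", err)
	}
	defer resp.Body.Close()

	price, err := io.ReadAll(resp.Body)
	if err != nil {
		return fmt.Errorf("reading external response: %w", err)
	}

	// Two endorsers can easily arrive here with different values of price,
	// producing read/write sets that do not match.
	return ctx.GetStub().PutState(assetID, price)
}

func main() {
	cc, err := contractapi.NewChaincode(&AssetContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```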
Now, if you decide to write to an external database from a chaincode, all bets are off. A last-minute failure caused by key clashes within the same block (Fabric v1 requires that updates targeting the same keys never be processed together, so they must be flow controlled) means the transaction never gets committed.
When that happens, the external writes should be rolled back, but the chaincode finished executing long before, so the combined dataset is now corrupted. This is a non-starter.
However, should someone decide to tempt fate and build an elaborate system where chaincode updates external data, the failure can be detected and the updates rolled back so long as the transaction ID is part of the journal entry for the update. This solution is unspeakably brittle, and if you try it then you will likely be sorry :-)
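As a sketch of what that journaling might look like (assuming the same fabric-contract-api-go boilerplate as above; the externalWrite helper is a stand-in for whatever client actually talks to the external store), the chaincode tags every external write with its transaction ID so that an off-chain process watching commit events could compensate for any entry whose transaction was invalidated:

```go
// JournalEntry is what the hypothetical external store records for every write
// made on behalf of a chaincode invocation. Tagging it with the Fabric
// transaction ID is what makes rollback possible at all.
type JournalEntry struct {
	TxID    string `json:"txId"`
	AssetID string `json:"assetId"`
	Value   string `json:"value"`
}

// externalWrite stands in for the external database client; purely illustrative.
func externalWrite(entry JournalEntry) error {
	payload, _ := json.Marshal(entry)
	fmt.Printf("external write: %s\n", payload)
	return nil
}

// UpdateAsset writes to the ledger and to the external store, journaling the
// external write under the transaction ID. If this transaction is later
// invalidated at commit time (for example by a key clash in the same block), an
// off-chain listener watching block commit events must find every journal entry
// with that transaction ID and reverse it. Nothing here enforces that, which is
// exactly why the approach is so brittle.
func (c *AssetContract) UpdateAsset(ctx contractapi.TransactionContextInterface, assetID, value string) error {
	if err := ctx.GetStub().PutState(assetID, []byte(value)); err != nil {
		return err
	}
	entry := JournalEntry{
		TxID:    ctx.GetStub().GetTxID(),
		AssetID: assetID,
		Value:   value,
	}
	return externalWrite(entry)
}
```

Even then, the compensation logic has to run outside Fabric's own transaction and validation flow, which is precisely where the brittleness comes from.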