You should distinguish between what the Spring Cloud Data Flow documentation calls "the server" and the apps that make up a managed stream.
"The server" is only here to receive deployment requests and honor them, spawning apps that make up your stream(s). If you deploy multiple instances of "the server", then there is nothing special about it. PCF will front it with a LB and either instance will handle your REST requests. When deploying on PCF, state is maintained in a bound service, so there is nothing special here.
If you're rather referring to "the apps", that is, deploying a stream with some or all of its parts running as more than one instance, for example:
stream create foo --definition "time | log"
stream deploy foo --properties "app.log.count=3"
then by default it's up to the binder implementation to choose how to distribute data, which typically means round-robin balancing.
If you want data pertaining to the same conceptual domain object to end up on the same app instance, you should tell Data Flow how to partition. Something like the following, where x stands for the producing app in your stream:
stream deploy bar --properties "app.x.producer.partitionKeyExpression=<someDomainConcept>"
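Tying it back to the earlier example, partitioning and scaling are usually combined in a single deploy. Something like this (the SpEL key expression here is just an illustration; use whatever identifies your domain object):
stream deploy foo --properties "app.log.count=3,app.time.producer.partitionKeyExpression=payload"
Messages whose key expression evaluates to the same value will then consistently land on the same log instance.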
As for handling failures, I'm not sure what you're asking. The deployed apps are the stream. Once the request for that many instances of the stream components has been sent to and accepted by PCF, PCF takes care of honoring it. It's out of Data Flow's hands at that point, and that is exactly why the boundary of the Spring Cloud Deployer contract has been set there (the same goes for other runtimes).
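Concretely, once deployed on PCF each app in the stream shows up as a regular CF app that the platform supervises and restarts on crash. Inspecting them is plain cf CLI (the app name below is illustrative; actual names depend on your deployer configuration):
cf apps
cf app dataflow-foo-log
with Data Flow out of the loop for instance counts and health.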