I was hunting a weird edge case where listing the files of a directory showed no results for 0...2 files but worked fine for 3...n files.
It turned out that the original observable sequence worked just fine. But I used a `PublishSubject` in one subscriber to relay the effect of the change. All of this reportedly happened on the main queue, yet it seems the `PublishSubject` got fed values before it had a subscriber. (Since there's no replay, the subscriber wouldn't know.)

So the set-up of the components (origin, relaying subscriber, consuming subscriber) seems to have introduced timing as a problem.
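To make that concrete, here is a minimal sketch of the failure mode as I understand it; `files`, `relay`, and the string array are placeholders, not my actual code:

```swift
import RxSwift

// A PublishSubject does not replay, so an element that is relayed before the
// consuming subscriber exists is silently dropped.
let disposeBag = DisposeBag()
let relay = PublishSubject<[String]>()

// Stand-in for the directory-listing sequence; emits synchronously on subscription.
let files = Observable.just(["a.txt", "b.txt"])

// 1. Relaying subscriber: pushes the source's elements into the subject.
files
    .subscribe(onNext: { relay.onNext($0) })
    .disposed(by: disposeBag)

// 2. Consuming subscriber: set up only after the subject already received the
//    element, so it never prints anything.
relay
    .subscribe(onNext: { print("consumer got \($0)") })
    .disposed(by: disposeBag)
```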
Weird observations:
- If I make the original `Observable` a `Driver`, things work fine.
- If I insert `.observeOn(MainScheduler.instance)` somewhere in the original chain (before the relaying subscriber), things work fine, even though breakpoints in `map` operations show me that the mapping already happens on `com.apple.main-queue (serial)`. (Both working variants are sketched after this list.)
- If I use `.subscribeOn(MainScheduler.instance)` at the origin or in the subscriber's code, I still get the problem. (Probably because at the origin it only affects the relaying subscriber, and later it's too late.)
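For reference, this is roughly where the two working variants sit in my chain, using the same placeholder names as in the sketch above:

```swift
import RxSwift
import RxCocoa

let disposeBag = DisposeBag()
let relay = PublishSubject<[String]>()
let files = Observable.just(["a.txt", "b.txt"])

// Variant 1: insert observeOn before the relaying subscriber.
files
    .observeOn(MainScheduler.instance)   // deliver elements via the main scheduler
    .subscribe(onNext: { relay.onNext($0) })
    .disposed(by: disposeBag)

// Variant 2: turn the original sequence into a Driver.
files
    .asDriver(onErrorJustReturn: [])     // a Driver always delivers on the main scheduler
    .drive(onNext: { relay.onNext($0) }) // and replays its latest element to new subscribers
    .disposed(by: disposeBag)
```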
Now I don't understand how to handle issues like these defensively.
It seems a `PublishSubject` might not be a good fit for this situation. But why does observing on the main queue improve things?
When should you (defensively) specify observable sequences to run on the main queue upon creation/production? (Again, this might be a pragmatic fix, but it seems to solve the problem only by accident.)
Or, put differently: when should you assume that things on the consumer's side, i.e. setting up the subscription, don't happen in time?
There was no way for me to tell that relaying events from the input sequence to the `PublishSubject` would cause trouble. It's not perceivable in the code, which leaves me puzzled about how to avoid bugs like this.
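The closest thing to a defensive fix I can picture is a relay that replays, so a consumer that subscribes late still gets the last element. A sketch with the same placeholder names, assuming that buffering one element is acceptable here:

```swift
import RxSwift

let disposeBag = DisposeBag()
// A ReplaySubject keeps the last element around and hands it to late subscribers.
let relay = ReplaySubject<[String]>.create(bufferSize: 1)
let files = Observable.just(["a.txt", "b.txt"])

// Relaying subscriber, as before.
files
    .subscribe(onNext: { relay.onNext($0) })
    .disposed(by: disposeBag)

// This consumer subscribes after the element was relayed, but still receives
// the buffered element and prints it.
relay
    .subscribe(onNext: { print("consumer got \($0)") })
    .disposed(by: disposeBag)
```

Whether that is the right tool, or just papering over the same timing issue, is part of what I'm asking.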