I solved my problem, though I suspect there is no single right answer, as it really depends on the specific situation. In my case, I decided to go with this approach:
One of the challenges that the original selector handled nicely was that the final information was compiled from many pieces delivered in an arbitrary order. If I built up the final information incrementally in my reducers, I'd have to account for every possible scenario (every possible order in which the pieces could arrive) and define transitions between all possible states. With reselect, I can simply take whatever I currently have and derive something from it.
To keep this functionality, I decided to move the selector logic into a wrapping parent reducer.
Okay, let's say that I have three reducers, A, B and C, with corresponding selectors. Each handles one piece of information. A piece could be loaded from the server or it could originate from the user on the client side. This would be my original selector:
import { List } from 'immutable';
import { createSelector } from 'reselect';

const makeFinalState = (a, b, c) => (new List(a)).map(item =>
  new MyRecord({ ...item, ...(b[item.id] || {}), ...(c[item.id] || {}) }));

export const finalSelector = createSelector(
  [selectorA, selectorB, selectorC],
  (a, b, c) => makeFinalState(a, b, c));
(This is not the actual code but I hope it makes sense. Note that regardless of the order in which the contents of individual reducers become available, the selector will eventually generate the correct output.)
I hope my problem is clear now. Whenever the content of any of those reducers changes, the selector is recalculated from scratch, generating completely new instances of all records, which eventually results in complete re-renders of the React components.
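To make the mechanism concrete (ItemView and its item prop are made up for illustration): a pure component compares props by reference, so a fresh record with equal content still triggers a render.

import React from 'react';

class ItemView extends React.PureComponent {
  render() {
    // PureComponent re-renders whenever this.props.item !== prevProps.item,
    // which is always the case after the selector rebuilds every record,
    // even though oldItem.equals(newItem) may be true.
    return <div>{this.props.item.name}</div>;
  }
}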
My current solution looks like this:
import { Map, List } from 'immutable';

export default function finalReducer(state = new Map(), action) {
  // Let each child reducer handle its own slice first.
  state = state
    .update('a', a => aReducer(a, action))
    .update('b', b => bReducer(b, action))
    .update('c', c => cReducer(c, action));

  switch (action.type) {
    case HEAVY_ACTION_AFFECTING_A:
    case HEAVY_ACTION_AFFECTING_B:
    case HEAVY_ACTION_AFFECTING_C:
      // Rebuild the whole derived state, but merge it into the previous
      // one so that unchanged records keep their identity.
      return state.update('final', final => (final || new List()).mergeDeep(
        makeFinalState(state.get('a'), state.get('b'), state.get('c'))));
    case LIGHT_ACTION_AFFECTING_C: {
      const update = makeSmallIncrementalUpdate(state, action.payload);
      return state.update('final', final => (final || new List()).mergeDeep(update));
    }
    default:
      return state;
  }
}
export const finalSelector = state => state.get('final');
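For completeness, wiring it up looks like this (my sketch; chunkFromServer and the payload shapes are made up):

import { createStore } from 'redux';

const store = createStore(finalReducer);

// A heavy action rebuilds the whole derived list (merged, so identities survive):
store.dispatch({ type: HEAVY_ACTION_AFFECTING_A, payload: chunkFromServer });

// A light action only touches the affected record:
store.dispatch({ type: LIGHT_ACTION_AFFECTING_C, payload: { id: 42, selected: true } });

console.log(store.getState().get('final'));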
The core idea is this:
- If something big happens (e.g. I get a huge chunk of data from the server), I rebuild the whole derived state.
- If something small happens (e.g. a user selects an item), I just make a quick incremental change, both in the original reducer and in the wrapping parent reducer (there is some duplication, but it's necessary to achieve both consistency and good performance); see the sketch below.
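Here is one hypothetical shape makeSmallIncrementalUpdate could take (the payload shape { id, selected } is my assumption, and selected would have to be a key defined on MyRecord):

// Hypothetical sketch: touch only the affected record so that the
// subsequent mergeDeep leaves every other instance untouched.
const makeSmallIncrementalUpdate = (state, payload) =>
  (state.get('final') || new List()).map(item =>
    item.id === payload.id ? item.set('selected', payload.selected) : item);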
The main difference from the selector version is that I always merge the new state into the old one. The Immutable.js library is smart enough not to replace old Record instances with new ones when their content is identical. The original instances are therefore kept, and as a result the corresponding pure components are not re-rendered.
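You can verify this identity-preserving behavior with a quick standalone check (MyRecord here stands in for any Record subclass):

import { Record } from 'immutable';

const MyRecord = Record({ id: null, name: '' });

const oldItem = new MyRecord({ id: 1, name: 'a' });
const sameContent = new MyRecord({ id: 1, name: 'a' }); // new instance, equal content

// mergeDeep detects that nothing changed and returns the original reference:
console.log(oldItem.mergeDeep(sameContent) === oldItem); // true
console.log(oldItem.equals(sameContent));                // true
console.log(oldItem === sameContent);                    // false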
Obviously, the deep merge is a costly operation, so this won't work for really large data sets. But the truth is that this kind of operation is still fast compared to React re-renders and DOM operations. So this approach can be a nice compromise between performance and code readability/conciseness.
Final note: If it weren't for those light actions handled separately, this approach would be essentially equivalent to replacing shallowEqual with deepEqual inside the shouldComponentUpdate method of pure components.
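In other words, something along these lines (a sketch; Immutable.is serves as the deep-equality check, and ItemView with its item prop is made up):

import React from 'react';
import { is } from 'immutable';

class ItemView extends React.Component {
  // Value (deep) comparison instead of the reference (shallow) one
  // that React.PureComponent performs.
  shouldComponentUpdate(nextProps) {
    return !is(this.props.item, nextProps.item);
  }

  render() {
    return <div>{this.props.item.name}</div>;
  }
}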