RFC: waitFor() equivalent in combineReducers() using generators #1315
Comments
I don't think that at all (besides implementation details like preventing circular dependencies, which I don't think are difficult to solve). Unless there are some semantic issues I'm unaware of, I think this is a brilliant idea, because it solves 2 issues at the same time. 1- On one hand, keeping the single immutable state atom. This could have a great impact on how we think of our state shape. I find it somewhat similar to the spreadsheet model: the actual model of reducer composition (combineReducers) constrains in some way how we structure the state shape (which tends to be coupled with the dependency graph of reducer calls). The proposed solution seems kind of liberating in this regard. |
@gaearon I've worked on this; I think it solves your problem pretty well: https://github.com/slorber/rereduce
import { combineReducers } from 'redux'
import { createReducer } from 'rereduce'
import assert from 'assert'
const a = createReducer((state = 0, action) => {
switch (action.type) {
case 'increment':
return state + 1
case 'decrement':
return state - 1
default:
return state
}
})
const b = createReducer((state = 5, action) => {
switch (action.type) {
case 'increment':
return state + 10
case 'decrement':
return state - 10
default:
return state
}
})
const c = createReducer({ a, b }, (state = 0, action, { a, b }) => {
const sum = a + b
return state + sum
})
const d = createReducer({ c }, (state = 0, action, { c }) => {
return c + 1
})
const counters = combineReducers({ a, b, c, d })
let state
state = counters(state, { type: 'init' })
assert.equal(state.a, 0)
assert.equal(state.b, 5)
assert.equal(state.c, 5)
assert.equal(state.d, 6)
state = counters(state, { type: 'increment' })
assert.equal(state.a, 1)
assert.equal(state.b, 15)
assert.equal(state.c, 21)
assert.equal(state.d, 22) |
What are the benefits of using generators? If I understand the motivation correctly, I think this could be better solved in traditional ways with something like sequencify:
function c(state = 0, action, aState, bState) {
return state + Math.abs(aState - bState)
}
const counters = combineReducers({
a: { reducer: a },
b: { reducer: b },
c: {
reducer: c,
deps: ['a', 'b'],
}
}) |
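Presumably, under this hypothetical deps-based API, the combining function would run dependencies first and pass each dependency's freshly computed state as extra arguments. A sketch of the evaluation order for a single dispatch, under that assumption:

// Hypothetical evaluation order for the deps-based API sketched above:
const nextA = a(state.a, action)
const nextB = b(state.b, action)
const nextC = c(state.c, action, nextA, nextB) // c declared deps: ['a', 'b']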
This moves dependency declaration to the parent reducer, which is what we want to avoid. I would also prefer to avoid reliance on strings. Here is another approach without generators: https://github.com/slorber/rereduce |
Hmmm @gaearon, I think it does not work well with time-travel :'( will have to check that later |
That's just a matter of code organization, isn't it? E.g. inside the child reducer file (a sketch of what that might look like follows this comment). I was more curious about the usage of generators as opposed to passing dependency state as arguments based on some annotation, because the way I see it, generators are less declarative, harder to analyze and test, and needlessly more powerful (they can be async, and the same reducer can be yielded twice). |
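For reference, the kind of colocation hinted at above ("inside the child reducer file") might look like this under the hypothetical deps-based API from the earlier comment:

// c.js — hypothetical: the child reducer file owns its own dependency list,
// so the parent only wires names together without knowing why c needs a and b.
export const deps = ['a', 'b']
export default function c(state = 0, action, aState, bState) {
  return state + Math.abs(aState - bState)
}

// parent.js — hypothetical wiring:
// import c, { deps as cDeps } from './c'
// const counters = combineReducers({ a: { reducer: a }, b: { reducer: b }, c: { reducer: c, deps: cDeps } })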
That looks neat. I'm still new to redux, but I found an approach that seems to work for me; it's really simple, but is it lacking in one way or another?
import reduceReducers from 'reduce-reducers'
function a(state = 0, action) {
switch (action.type) {
case 'increment':
return state + 1
case 'decrement':
return state - 1
default:
return state
}
}
function b(state = 0, action) {
switch (action.type) {
case 'increment':
return state + 10
case 'decrement':
return state - 10
default:
return state
}
}
const c = (state = 0) => state;
function root(state = {}, action) {
let c = state.c + Math.abs(state.a - state.b)
return Object.assign({}, state, {c});
}
const counters = reduceReducers(
combineReducers({ a, b, c}),
root
);
Also, I think the outputs should be:
let state
state = counters(state, { type: 'increment' }) // { a: 1 , b: 10, c: 9 }
state = counters(state, { type: 'increment' }) // { a: 2, b: 20, c: 27 }
state = counters(state, { type: 'increment' }) // { a: 3, b: 30, c: 54 }
state = counters(state, { type: 'decrement' }) // { a: 2, b: 20, c: 72 }
state = counters(state, { type: 'decrement' }) // { a: 1, b: 10, c: 81 } |
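For context, reduce-reducers simply pipes the state through each reducer in turn. A minimal sketch of the idea (not necessarily the library's exact source):

// Each reducer receives the state produced by the previous one, for the same action.
const reduceReducers = (...reducers) => (state, action) =>
  reducers.reduce((acc, reducer) => reducer(acc, action), state)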
Yeah, maybe. I kinda like that we rely solely on language features here but I see your point.
Is this really the case? I think generators are actually really easy to test because you can supply any values to them and have full control over their execution. See also how Redux Saga is easier to test than Redux Thunk because generators.
Not really, although I can see how this can confuse people. Generator functions aren't sync or async by their nature; what happens depends on their interpreter (in this case, the custom combineReducers() implementation).
Sure, this definitely works too. I was looking for a simple mental model where you don't have to "think in stages" explicitly, and the library would determine the order of passes for you. |
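For illustration, a test along those lines could drive the generator by hand. A sketch, assuming the yield-based c reducer quoted in the next comment (note the test has to mirror the yield order):

import assert from 'assert'

// Drive the generator manually, supplying whatever sibling state we want.
const it = c(0, { type: 'increment' })
it.next()                              // runs to `yield a` — the generator asks for a's state
it.next(42)                            // resume with aState = 42; runs to `yield b`
const { value, done } = it.next(420)   // resume with bState = 420; the generator returns
assert(done === true)
assert(value === Math.abs(42 - 420))   // 0 + 378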
About testing: in this example, the reducer to test is
function* c(state = 0, action) {
const aState = yield a
const bState = yield b
return state + Math.abs(aState - bState)
}
Swapping the yield calls will break the test, which makes it less declarative. Right now I can't think of any elegant way to stub the state values without relying on the generator's yielding order. Partly this is because the reducer couples directly to the sibling reducer functions a and b.
Would it make more sense if, instead, the reducer coupled to the name, by yielding the key of the sibling state as a string?
function* c(state = 0, action) {
const aState = yield 'a'
const bState = yield 'b'
return state + Math.abs(aState - bState)
}
This is so that the behavior of c can be tested by stubbing the sibling reducers:
const cState = combineReducers({
a: () => 42,
b: () => 420,
c
})(void 0, { type: 'increment' }).c
assert(cState === 378) |
I agree with @dtinth about the testing part. String references are not everybody's cup of tea but they do allow for certain niceties. @gaearon the upside of @melnikov-s solution is that we wouldn't have to care about checking for circular dependencies because it's a simple sequence. Want to compute something? Do it after your source has been computed. To me it seems like a clean version with minimal impact on redux as a whole because it utilizes what's already there. |
I came up with another approach. Not sure if I should create a new issue (e.g. “RFC: waitFor() equivalent in combineReducers() using magical object”) or not. I think I’ll just put my idea here for now. Here’s the approach: have combineReducers() pass a “magical” object exposing the siblings’ next state as a third argument. The reducer c becomes
function* c(state = 0, action, sibling) {
return state + Math.abs(sibling.a - sibling.b)
}
Or more simply:
function* c(state = 0, action, { a, b }) {
return state + Math.abs(a - b)
}
Now testing is trivial. You can test c in isolation:
test('Test c', assert => {
const state = c(void 0, { type: 'increment' }, { a: 42, b: 420 })
assert.equal(state, 378)
assert.end()
})
Or test all reducers in combination:
const counters = combineReducers({ a, b, c })
test('Test counters', assert => {
const state = counters(void 0, { type: 'increment' })
assert.deepEqual(state, { a: 1, b: 10, c: 9 })
assert.end()
})
I’ve implemented it in |
@johanneslumpe the solution of @melnikov-s will work fine in most cases but has some little drawbacks:
|
@slorber Making sure that things are ordered correctly does not seem like a huge deal to me. But what does seem like an issue is this: the order in which an object's keys are iterated over is left to implementors; it isn't defined in the spec. So even though most engines iterate over keys in the order in which they were added, there could be an engine which doesn't, and it would be spec compliant while this part of redux would break. This is probably a non-issue in the real world though. In terms of the tree structure: most atoms are flat, with a key per reducer. I don't see a huge difference between a reducer with an explicit dependency on another reducer and a reducer which needs to access a specific key of the state tree. |
@johanneslumpe for me it is an issue. If you have 1 shared reducer and 100 reducers depending on it, moving the shared reducer in the tree means refactoring all 100 dependent reducers. It is probably not a big deal in most real-world cases, but still not so good. Flux/Redux is inspired by event-sourcing concepts, where the action log is generally hosted on Kafka/EventStore and reducers can live on different servers for scalability reasons (microservices). The fact that the frontend lives on a single machine does not mean we shouldn't take inspiration from decoupled backend architectures that work well. I prefer having independent reducers, which may use other shared reducers as an implementation detail, rather than having reducers query each other. As we are in a single browser, memoization works great to avoid using additional CPU. |
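The memoization being alluded to could be as simple as caching on reference identity, so a shared reducer invoked by several dependents only computes once per dispatch. A minimal sketch (not rereduce's actual implementation):

// Memoize a reducer on (state, action) reference identity.
function memoizeReducer(reducer) {
  let lastState, lastAction, lastResult
  return (state, action) => {
    if (state === lastState && action === lastAction) return lastResult
    lastState = state
    lastAction = action
    lastResult = reducer(state, action)
    return lastResult
  }
}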
@slorber are you still discussing my code snippet in your last comment? If so, I'm not sure what you're referring to here:
There are no explicit dependencies between reducers; each top-level reducer gets a chance to modify the state in some linear order. If your state tree structure changes you might have to change some reducers, or you might not; it all depends on how you structured them. The 'compute' reducer in my example doesn't have to target the state at the root, it too can target a nested slice. For a state shape like {
counts: {
a: 0,
b: 0,
c: 0
}
}
you can do:
const counters = reduceReducers(
combineReducers({counts: combineReducers({ a, b, c})}),
combineReducers({counts})
);
Sorry in advance @slorber if I misunderstood your comment. @gaearon I think the initial solution is really cool, and I got a kick out of the way you used generators to make it happen. Though I'm not sure about it being included in core redux; it seems a bit complex at first sight. If I understood the general implementation correctly, it does allow for things like:
function* a(state = 0, action) {
switch (action.type) {
case 'increment':
return state + 1
case 'decrement':
let bState = yield b;
return state - bState;
default:
return state
}
}
function* b(state = 0, action) {
switch (action.type) {
case 'increment':
let aState = yield a;
return state + aState
case 'decrement':
return state - 10
default:
return state
}
}
where a and b can each yield the other, just never for the same action type. Not sure if that's a feature or abuse thereof :) |
@melnikov-s yes, you understand the issue, and it applies to your proposed solution using reduceReducers as well. I've updated rereduce so that it works fine with server-side rendering and time-travel. @gaearon or anyone else, can someone provide a valid, simple use case for this problem? Even though I'm trying to solve it technically, nobody has provided any concrete use case yet :D |
I'm not sure if I have any simple use cases here. I've seen some code in the wild that needs a solution like this, but it's very domain-specific and hard to explain. |
Would nested entity handling be a possible use case, such as #994 (comment)? One sub-reducer needs to wait for another reducer to create an entity, so it can reference the ID? |
I have one use case from my personal project, Bemuse, although it's not using Redux. It's a rhythm action game, and there are many songs to choose from. Each song has many levels of varying difficulty.
You can see there is some complex data dependency here. I used Bacon.js to solve this problem in a few lines of code by describing them as a graph of event streams and properties.
Not sure what would be the best way to solve this in Redux, but allowing |
Yes, true, but in what situation would you have to
Each additional function to |
Does this solve it? function selectedSongId(state, action) {
if (action.type === 'select-song') {
return action.songId;
}
return state;
}
function difficulty(state = 0, action) {
if (action.type === 'select-difficulty') {
return action.difficulty;
}
return state;
}
let songs = (state = []) => state;
function root(state, action) {
if (action.type === 'select-song') {
let {selectedSongId, difficulty, songs} = state;
let selectedSong = songs.find((song) => song.id === selectedSongId);
let newDifficulty = findClosestDifficulty(difficulty, selectedSong);
return Object.assign({}, state, {difficulty: newDifficulty});
}
return state;
}
reduceReducers(
combineReducers({selectedSongId, difficulty, songs}),
root
)
provided that when you filter, you dispatch the 'select-song' action on the first visible item if the selected song is no longer in the list. |
@dtinth I've had similar use cases in my app, and all of them seem solvable by using @yelouafi's redux-saga. I have not totally understood your example, but I would write something like:
export function* difficultyLevelSaga() {
let currentSong;
while ( true ) {
const event = yield take("SONG_SELECTED");
const oldSong = currentSong;
currentSong = event.payload;
const newDifficultyLevel = getClosestDifficultyLevel(oldSong,currentSong);
yield put("DIFFICULTY_LEVEL_UPDATED",newDifficultyLevel);
}
}
I like the fact that a difficulty update is always triggered by a SONG_SELECTED event. |
Right, but aren't you just moving those deductions into your sagas and then stripping the reducers down to a point where all they do is set data on the state? Essentially the reducer becomes a function that just maps an action's payload onto a slice of the state (see the sketch after this comment).
According to the redux docs, it is the reducers that are responsible for state transitions given an action, so I think it's totally valid to have the reducer change the difficulty level in response to a 'select-song' action. Though please keep in mind that while I have been working with redux in production, the apps I've built have been small-to-medium in terms of size and complexity, so perhaps I'm just lacking perspective. |
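For illustration, the kind of "thin" reducer described in the previous comment might look like this, assuming the saga above dispatches DIFFICULTY_LEVEL_UPDATED with the new level as its payload:

// The saga does the computation; the reducer just copies the payload over.
function difficulty(state = 0, action) {
  return action.type === 'DIFFICULTY_LEVEL_UPDATED' ? action.payload : state
}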
Yes :)
I'm pretty sure @gaearon writes in the docs the patterns he finds to be the best at a given point in time, but it's not written in stone and remains questionable, as we are all still learning how to build complex frontend apps together :)
Well, it definitely works in this use case, but I'm advocating for maintainability at scale and I don't think it is a sustainable solution in the long run. What if you want to add a requirement like this to your app?
Now you have to start using your reducers to coordinate complex workflows, and that does not seem so easy :) In real apps you encounter requirements like that.
I have been working on a fairly large React app for 2 years, and I initially did what you did, but something felt wrong and the code became complex. Particularly when I had to add an onboarding to our existing production app (a real app onboarding, not a stupid feature carousel). During this onboarding, regular app events kick in, and according to those events you should make the onboarding progress, tweak some parts of the original app a little (so that the user does not get "blocked" in the onboarding if he clicks an unexpected button), and display/hide tooltips and popups everywhere, and these popups can also have their own state... It became quite complex and coupled the components together in a bad way. I did simplify it during the process to a mostly sequential workflow, while initially we wanted to have multiple parallel onboarding workflows. Now consider how redux-saga would add an onboarding to an existing app, taking TodoMVC as an example (see the official doc):
function* onBoarding() {
for(let i = 0; i < 3; i++)
yield take(INCREMENT_COUNTER)
yield put( showCongratulation() )
}
You can see here that to add the onboarding, you don't need to modify the existing TodoMVC reducers, actions and views: you just need to add an orchestrating layer and a new reducer computing the onboarding state. Without redux-saga you would have to keep track, inside the reducer, of how many increments have been done and whether the congratulation has already been shown, and then compute from those dependencies whether we should show the congratulation message. It is definitely possible, but just try it and see how hard it is (see the sketch after this comment).
Domain-Driven-Design
Redux is quite related to EventSourcing, which has already been used in the backend for years. I think we should also learn a bit more from the lessons of domain-driven design to architect our frontend apps, even if the concepts do not map that easily. In DDD, we have Bounded Contexts. In our TodoMVC example we could ask ourselves:
So basically it seems we have 2 distinct bounded contexts that can be totally decoupled. They could live in separate NPM repositories without any shared dependencies. Only the saga acts as a coupling point between the 2 bounded contexts and glues them together. |
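For contrast, a sketch of the reducer-only bookkeeping described above (tracking increments and whether the congratulation has been shown), assuming the same INCREMENT_COUNTER action:

// Without a saga, the onboarding reducer has to track progress itself.
function onboarding(state = { increments: 0, congratulated: false }, action) {
  if (action.type === 'INCREMENT_COUNTER' && !state.congratulated) {
    const increments = state.increments + 1
    return { increments, congratulated: increments >= 3 }
  }
  return state
}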
Thanks for the insightful post @slorber. I'll definitely need to take a closer look at sagas, and DDD. The one thing that confused me was:
I don't think reducers can even manage that type of coordination, as they're synchronous and side-effect free. You'd have to push that logic into a middleware and have it dispatch a separate action. |
@melnikov-s np :) Also, a quite related blog post from yesterday that I answered in the same way: http://blog.javascripting.com/2016/02/02/encapsulation-in-redux/ @melnikov-s if you introduce time events into your app (i.e. dispatch a TICK action every 10ms with the current timestamp), you can easily do whatever you want in your application that you would otherwise have done with setInterval or setTimeout. However, it will be very difficult to manage correctly, as your app state could change dramatically from tick events in a non-explicit way. I mean, if a tick action can either do nothing, or display a popup, or increment a counter, it becomes quite hard to follow in the long run, doesn't it? I think it's not the role of a reducer to interpret too much from a badly designed event log; rather, the event log should be clean and easy to understand in the first place. And I think it's OK to reuse the same reducer in multiple places, at least better than having reducers query the state tree / other reducers' results. |
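For what it's worth, such a "clock" would just be ordinary dispatches at the edge of the app. A sketch, assuming a store is already in scope:

// Turn time into plain actions that reducers can react to (or ignore).
setInterval(() => store.dispatch({ type: 'TICK', payload: Date.now() }), 10)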
I don't have a proposed solution, but I do have a use case, so I thought I might share my feelings on applying the different techniques to it. I could define another action that is fired at the same time as the deviceAdded action, but that would mean the state is out of sync for the time between the two dispatches, which feels bad. So how would the proposed solutions improve the situation?
|
It is indeed possible to dispatch 2 actions in a transactional way; I'm pretty sure I've seen this somewhere but can't remember the middleware's name... About rereduce: there is no need to use it for this use case. Still, I think sagas are appropriate here, but redux-saga does not permit dispatching actions in the same event loop tick as the original action, so it may be hard to achieve truly transactional behavior. @yelouafi ? |
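The half-remembered middleware might be something along the lines of redux-batched-actions. A minimal sketch of the idea (not that library's exact API): a batch action carries several actions, and a higher-order reducer unpacks them so the store emits a single update.

const BATCH = 'BATCH'
const batchActions = (...actions) => ({ type: BATCH, payload: actions })

// Wrap the root reducer so a batch is applied atomically, in order.
const enableBatching = reducer => (state, action) =>
  action.type === BATCH
    ? action.payload.reduce(reducer, state)
    : reducer(state, action)

A hypothetical usage for the device use case above would be store.dispatch(batchActions(deviceAdded(device), deviceSelected(device.id))), where both action creators are assumed to exist in that app.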
The dispatch is done on the next microtask (using Promise.then). But it's within the same event loop. |
shameless plug: i wrote about how i solved this here: https://medium.com/@matt.krick/solving-redux-s-shortcoming-in-150-locs-540979ce6cf9#.1wdn9o9ta |
I’m glad we saw some different proposals here. I’m not sure we want to solve this in the core unless some obvious winner emerges, so closing. |
I know I’ve said many times that waitFor() is an anti-pattern, and I still think that for most apps this is true. However, some apps with complex data calculations have to either:
Both are legitimate ways to solve the problem. However, it feels like there is some middle ground for a Flux-like shortcut that would help you write (2) without burdening the parent reducer with the knowledge of the child reducers' data dependencies.
In other words, the neat feature of waitFor() is how it allows one store (in our case, reducer) to say “I want to use your calculation in my calculation” declaratively, without writing the plumbing code to make this happen. This is especially important in large teams, where plumbing code is where the merge conflicts inevitably happen.
I don’t propose to add any form of waitFor() to Redux itself. (At least, not yet ;-). There have even been attempts at implementing it in userland. I just had this idea today: what if reducers inside a special version of combineReducers (it can be in userland too) could be declared as generators, and yielding another reducer was the equivalent of waitFor()?
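The idea, as quoted elsewhere in this thread, looks roughly like this:

function a(state = 0, action) {
  switch (action.type) {
    case 'increment': return state + 1
    case 'decrement': return state - 1
    default: return state
  }
}

function b(state = 0, action) {
  switch (action.type) {
    case 'increment': return state + 10
    case 'decrement': return state - 10
    default: return state
  }
}

// c is a generator: yielding a sibling reducer acts like waitFor(),
// handing back that sibling's *next* state for the current action.
function* c(state = 0, action) {
  const aState = yield a
  const bState = yield b
  return state + Math.abs(aState - bState)
}

const counters = combineReducers({ a, b, c })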
Note how c has its own state but is able to ask a and b for their next state without calling them. The special combineReducers() implementation treats generators differently, asking which reducers (declared on the same level and assumed to be unique, of course) those reducers depend upon, making sure those dependencies are resolved first, and handing the relevant state back through yield. Just like before, all reducers are called before the combined next state is returned; the only difference is that yield inside one child reducer causes another child reducer to execute first, and its resulting state may be used for a dependent computation.
This provides an easy way to accumulate dependent state without buying into the whole selector business, and with zero API other than a custom combineReducers. And who knows, maybe the default combineReducers could support it one day if this is not a dumb idea and can be done performantly.
Just like with regular combineReducers(), this pattern can be applied recursively, although only reducers “on the same level” (inside the same combineReducers() call) would be allowed to yield each other.
A terribly naïve proof of concept is below. It would need optimizations, protection from recursion (just like in Flux), and protection from duplicate identical reducer values (which may be something we want to do anyway).
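A rough sketch of what such a naïve implementation could look like (a reconstruction of the idea, not the original proof-of-concept code, and without the recursion/duplicate-reducer protections mentioned above):

// Hypothetical name; resolves `yield someSiblingReducer` to that sibling's next state.
function combineGeneratorReducers(reducers) {
  const keys = Object.keys(reducers)
  return (state = {}, action) => {
    const resolved = new Map() // reducer function -> its computed next state

    function run(reducer, prevSlice) {
      if (resolved.has(reducer)) return resolved.get(reducer)
      const out = reducer(prevSlice, action)
      let result
      if (out && typeof out.next === 'function') {
        // Generator reducer: run each yielded sibling first and feed its next state back in.
        let step = out.next()
        while (!step.done) {
          const dep = step.value
          const depKey = keys.find(key => reducers[key] === dep)
          step = out.next(run(dep, state[depKey]))
        }
        result = step.value
      } else {
        result = out // plain reducer: already returned its next state
      }
      resolved.set(reducer, result)
      return result
    }

    const nextState = {}
    keys.forEach(key => {
      nextState[key] = run(reducers[key], state[key])
    })
    return nextState
  }
}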
cc @acdlite @fisherwebdev @goatslacker @dtinth @slorber @ellbee
(Sorry if this idea is terrible!)