[Fiber] Add more tests and fix an issue with incremental error handling #8212
Conversation
These tests are currently failing:

✕ catches render error in a boundary during partial deferred mounting
✕ catches render error in a boundary during animation mounting
✕ propagates an error from a noop error boundary during full deferred mounting
✕ propagates an error from a noop error boundary during partial deferred mounting
✕ propagates an error from a noop error boundary during animation mounting

The observed behavior is that unstable_handleError() unexpectedly gets called twice:

  "ErrorBoundary render success",
  "BrokenRender",
  "ErrorBoundary unstable_handleError",
+ "ErrorBoundary render success",
+ "BrokenRender",
+ "ErrorBoundary unstable_handleError",
  "ErrorBoundary render error"
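For context, here is a minimal sketch of the kind of error boundary and broken child these tests exercise. The component and log entries follow the output above, but the harness details (the `log` array, the exact wiring) are assumed for illustration, not copied from the PR:

```js
const React = require('react');

const log = [];

// In the pre-release Fiber builds, unstable_handleError() was the
// error-boundary lifecycle that later shipped as componentDidCatch().
class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = {error: null};
  }
  unstable_handleError(error) {
    log.push('ErrorBoundary unstable_handleError');
    this.setState({error});
  }
  render() {
    if (this.state.error) {
      log.push('ErrorBoundary render error');
      return React.createElement('span', null, 'Caught an error.');
    }
    log.push('ErrorBoundary render success');
    return this.props.children;
  }
}

function BrokenRender() {
  log.push('BrokenRender');
  throw new Error('Hello');
}

// Mounted as <ErrorBoundary><BrokenRender /></ErrorBoundary> at deferred
// or animation priority. The failing assertion is that the middle three
// log entries appear twice (the `+` lines above) instead of once.
const app = React.createElement(ErrorBoundary, null,
  React.createElement(BrokenRender));
```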
The fix seems reasonable to me but I want to look a bit closer at how
@@ -365,9 +366,12 @@ module.exports = function<T, P, I, TI, C>(
      workInProgress.firstEffect = null;
      workInProgress.lastEffect = null;

-     if (workInProgress.progressedPriority === priorityLevel) {
+     if (workInProgress.progressedPriority === priorityLevel ||
+         priorityLevel === TaskPriority) {
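Read as a simplified model, the changed condition decides whether to reuse the child fiber that a previous, unfinished render pass already produced. The sketch below restates that logic with assumed surrounding names (`shouldReuseProgressedChild` and the priority constant are illustrative, not the reconciler's real structure):

```js
// Illustrative only: TaskPriority models synchronous task work, which is
// what error handling runs at; the constant's value is an assumption.
const TaskPriority = 1;

function shouldReuseProgressedChild(workInProgress, priorityLevel) {
  return (
    // Before the fix: reuse only if the progressed child was produced
    // at exactly the priority we are now rendering at.
    workInProgress.progressedPriority === priorityLevel ||
    // After the fix: also reuse for Task-priority work, so that error
    // handling does not discard the progressed child and re-render
    // (and re-throw, and re-catch) the failed path a second time.
    priorityLevel === TaskPriority
  );
}
```

The review comments below push on the risk in this: at Task priority the condition could also pick up stale progressed work from an unfinished low-priority pass, for example remounting a child that the current tree should have reused.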
I don't think that this is correct. Let's say we had a very low-priority render that had started deleting one child but never completed. Instead it did some other higher-priority updates to it.
Then suddenly something in a boundary fails (or we get another Task priority update). Then the child would be remounted instead of reused.
I don't have a better solution atm but I'll think about it some more. I don't fully understand how the failure case happens now.
Would always force-unmounting children in an error boundary help with this case?
> Would always force-unmounting children in an error boundary help with this case?

It would, because we'd be recreating the whole tree instead of trying to reuse failed work... I think?
> Would always force-unmounting children in an error boundary help with this case?

I'm not sure. Maybe let's come back to this after #8227?
@gaearon Can you explain why "ErrorBoundary render success" gets called after "ErrorBoundary unstable_handleError"? Why doesn't it just abort and go directly to the "ErrorBoundary render error" case?
I'll just flag this as needing changes since it seems like this is waiting for #8304.
This was fixed by #8304.
See individual commits.
Add more tests for incremental error handling
Added more tests that cover how error boundaries behave at different priorities. These new tests are currently failing; the observed behavior is that unstable_handleError() unexpectedly gets called twice (see the log above). This is not a big deal, but if people call error reporting services in unstable_handleError, they'll get each error twice on mounting.
Always reuse progressed child when starting Task work

The above problem happens because Task is higher-priority work than animation or deferred, and so it throws away the already progressed child. This is why we render the failed path twice before continuing with the success path.

As a fix, I changed it to always reuse the progressed child when we're working at Task priority. I'm not sure this is the right fix; maybe we should only do this for error handling work. I couldn't come up with a counter-example, so I'd appreciate special attention there.

Verify batched updates get scheduled despite errors
This just adds a few tests I planned to do earlier so that we don’t accidentally regress on them.
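As a sketch of what such a regression test might look like: the PR's actual tests run against React's internal noop renderer, but the harness calls below (`ReactNoop`, `batchedUpdates`, `renderToRootWithID`, `flush`) are assumed from that era's incremental tests rather than quoted from this change:

```js
const React = require('react');
const ReactNoop = require('ReactNoop'); // internal test renderer (assumed)

const ops = [];

function Okay() {
  ops.push('Okay render');
  return null;
}

function Throws() {
  throw new Error('boom');
}

// Schedule two updates in one batch; the second will throw during render.
ReactNoop.batchedUpdates(() => {
  ReactNoop.renderToRootWithID(React.createElement(Okay), 'a');
  ReactNoop.renderToRootWithID(React.createElement(Throws), 'b');
});

let caught = null;
try {
  ReactNoop.flush();
} catch (error) {
  caught = error; // root "b" has no boundary, so the error surfaces...
}

// ...but the error must not prevent the batched update to root "a"
// from being scheduled and rendered:
// expect(caught.message).toBe('boom');
// expect(ops).toEqual(['Okay render']);
```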