Using 'muted' attribute in a video tag prints a warning about flushing updates to the console. #470
Verified. I'm not sure what this could be! Would you mind digging a little deeper and seeing if you can figure out what's causing this warning and whether there's anything we can do within React Testing Library? Perhaps you could ask on https://spectrum.chat/testing-library/help-react
@kentcdodds Of course, I'd love to take a look around and see what I can see, though it might not be until after the long weekend here in the US that I am able to dive in.
I've run into the same issue. The problem is that setting the `muted` prop is what triggers the warning. At first, I thought we should be setting it differently, but unfortunately I haven't found a simple fix (and there are other issues open about this). In case you're interested, my workaround is to use my own wrapper for the `render` function:

```ts
const renderIgnoringUnstableFlushDiscreteUpdates = (
  component: React.ReactElement
) => {
  // tslint:disable: no-console
  const originalError = console.error;
  const error = jest.fn();
  console.error = error;
  const result = render(component);
  expect(error).toHaveBeenCalledTimes(1);
  expect(error).toHaveBeenCalledWith(
    "Warning: unstable_flushDiscreteUpdates: Cannot flush updates when React is already rendering.%s",
    expect.any(String)
  );
  console.error = originalError;
  // tslint:enable: no-console
  return result;
};
```
Interesting. I think this is out of scope for React Testing Library as it appears to be something with React specifically. If anyone can come up with something we can do to improve things, then please feel free to comment further, but from what I can tell, there's not much we can do.
Setting `muted` on a video results in the warning: "Warning: unstable_flushDiscreteUpdates: Cannot flush updates when React is already rendering". This seems to be a bug in React: testing-library/react-testing-library#470
@chriszwickerocteris thanks! Here's the same code with ESLint directives:

```js
const renderIgnoringUnstableFlushDiscreteUpdates = component => {
  /* eslint-disable no-console */
  const originalError = console.error
  const error = jest.fn()
  console.error = error
  const result = render(component)
  expect(error).toHaveBeenCalledTimes(1)
  expect(error).toHaveBeenCalledWith(
    'Warning: unstable_flushDiscreteUpdates: Cannot flush updates when React is already rendering.%s',
    expect.any(String)
  )
  console.error = originalError
  /* eslint-enable no-console */
  return result
}
```
Just to confirm that this is a React issue with the muted attribute on videos.
I was facing this warning in React Testing Library and had been looking for a solution for around 4-5 hours. This works for me:

Step 1: add the wrapper function above at the top of the test file.
Step 2: wrap the rendered component with the function from step 1, e.g. `renderIgnoringUnstableFlushDiscreteUpdates(<ComponentName {...props} />);`
Step 3: add this code to the setupTest.js file so videos run correctly: `window.HTMLMediaElement.prototype.load = () => {`
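The step-3 snippet is cut off in this thread. A common shape for it (an assumption, not the commenter's exact code) is to stub out the `HTMLMediaElement` methods that jsdom does not implement; here a plain object stands in for `window.HTMLMediaElement.prototype` so the sketch runs outside jsdom:

```javascript
// Hypothetical reconstruction of the truncated setupTest.js snippet: stub
// the media methods jsdom leaves unimplemented. In a real Jest setup file
// these would be assigned onto window.HTMLMediaElement.prototype; a plain
// object stands in here so the sketch runs in bare Node.
const mediaPrototype = {}

mediaPrototype.load = () => {} // no-op: jsdom cannot actually load media
mediaPrototype.play = () => Promise.resolve() // resolve so `await el.play()` works
mediaPrototype.pause = () => {}

const video = Object.create(mediaPrototype)
video.load() // no longer throws a "Not implemented" error
const playResult = video.play()
console.log(typeof playResult.then) // -> function
```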
For me, the following snippet in the Jest setup helped:
My video element usage is:
|
this should be fixed along with: |
I love you my boi, it helped! Just paste this code into setupTest.js, guys, and it stops showing those weird errors! ❤
used above^
This is mainly an issue in test environments. React treats 'muted' as a property and sets it after the test environment has been torn down, resulting in console warnings [1]. Fix this by mocking the setter property to do nothing. This doesn't pose any issue in test environments. [1]: testing-library/react-testing-library#470
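The setter mock described in this commit message can be sketched as follows. This is an illustrative assumption, not the commit's actual code; a plain object stands in for `window.HTMLMediaElement.prototype` so the sketch runs outside jsdom:

```javascript
// Sketch of the setter-mocking approach: replace the 'muted' accessor so
// React's late property write becomes a no-op. A plain object stands in
// for window.HTMLMediaElement.prototype so this runs in bare Node.
const mediaPrototype = {}

Object.defineProperty(mediaPrototype, 'muted', {
  configurable: true,
  get() {
    return true // report the element as muted
  },
  set() {
    // Swallow the write: this is the assignment that triggered the
    // unstable_flushDiscreteUpdates warning after teardown.
  },
})

const video = Object.create(mediaPrototype)
video.muted = false // silently ignored by the no-op setter
console.log(video.muted) // -> true
```

In a Jest setup file the `defineProperty` call would target `window.HTMLMediaElement.prototype` directly; because the accessor lives on the prototype, every media element created afterwards picks up the no-op setter.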
* feat(npm): add npm script for testing frontend
* chore(npm): install frontend test dependencies
  The following dependencies were installed:
  - msw to mock backend API responses
  - @testing-library/react for idiomatic react testing APIs
  - @testing-library/jest-dom for idiomatic DOM assertion APIs
  - @testing-library/user-event for idiomatic DOM user event APIs
  - jest-canvas-mock to mock Canvas for Lottie as jsdom doesn't implement it
  - @peculiar/webcrypto as jsdom doesn't implement SubtleCrypto
  Also install the necessary TypeScript definitions.
* ga: allow init options to be passed in
  The purpose is to pass in the testMode option during tests. Note: I've also shifted the declaration of initialChecks() into the useEffect() call where it is used. This is mainly to silence the React Hooks ESLint warnings on missing dependencies in the useEffect() dependency list.
* feat(modal): allow initial content and no children
  This allows us to specify the initial modal content when setting up test environments.
* fix(config): Disregard REACT_APP env vars for tests
  We don't care about these environment variables in tests. Also, setting the axios baseURL for tests will result in flakiness in tests that involve API responses.
* feat(test): set up initial test fixtures
* fix(HTMLMediaElement): mock setter for 'muted' attributes
  This is mainly an issue in test environments. React treats 'muted' as a property and sets it after the test environment has been torn down, resulting in console warnings [1]. Fix this by mocking the setter property to do nothing. This doesn't pose any issue in test environments.
  [1]: testing-library/react-testing-library#470
* chore(travis): run the frontend tests
* chore(amplify): run the frontend tests
  Make sure to set CI=true explicitly as Amplify doesn't set it by default. CI=true will run all the tests exactly once and exit. Also compile translations before running the tests.
* chore(app): move global context providers to index.tsx
  We want to render App.tsx in a test environment with separate providers. Hence, move the providers to index.tsx. Also, move the `import 'locales'` statement to the top of index.tsx as well.
* feat(test-utils): allow passing in router options
  This allows us to specify initial history locations during tests. Since we're making MemoryRouter the topmost provider, adjust index.tsx to match.
* test(app): add an initial render test for App.tsx
  Just a simple render test to ensure that the test infrastructure works fine.
* feat(setupTests): log an error on unhandled API requests
  This allows us to more easily debug missing endpoints and fix them.
* chore(app): remove unnecessary mocked API endpoints
  I'll look into how to best mock API endpoints across tests later on. For now, only declare the endpoints necessary for App.test.tsx.
* fix(amplify): fix test configuration
* fix(travis): run frontend tests only after building
  This makes more sense as compilation errors are more important than test errors. This also matches the Amplify build configuration.
* fix(setupTest): reset msw server handlers after resetting timers
  There may be pending timers that depend on API endpoints. Run all timers to completion before resetting the API endpoints.
* chore(test): clean up test setup code and imports
* fix(test): properly redeclare the type of `global`
* fix(app): delete render test for App.tsx
  For some reason, this test on its own is failing on Travis (but succeeds on Amplify!) and I haven't been able to find the root cause. However, adding more tests seems to fix the failure on Travis. Hence, I'll just batch this test up along with the other campaign creation integration tests.
* chore(test): pass the test suite if there are no tests
  This makes sense as the "neutral element", and allows the test suite to pass currently when we have zero tests.
* Revert "fix(app): delete render test for App.tsx"
  This reverts commit 979b23e.
* refactor(ga): inject options from config.ts
  - Remove the `gaOptions` parameter
  - Define all gaOptions in config.ts, with props conditionally defined based on the current environment (test, dev, prod, etc.)
* fix(ga): define a dummy tracking ID for tests
  This is to silence the console warning regarding a missing tracking ID.
* fix(setupTest): fake timers on a per-test basis
* refactor(app.test): rename the render test for clarity
* feat(app.test): add checks for the footer; improve docs
* chore(test): make /stats a common API mock
  This API mock should be generic enough for all future tests.
* feat(test): convert /auth/userinfo to a common API mock
  - Move /auth/userinfo to test-utils.tsx
  - Allow custom user IDs in future tests
* chore(test-utils): refactor API mocks to more digestible chunks
  - Split the API mocks by topic (stats, auth, etc.)
* feat(test-utils): return the state in mockCommonApis()
  API mocks in individual tests can now manipulate the state based on the needs of the test.
* refactor: shift tests around according to standardized structure
  We've decided upon a standardized file structure for tests across frontend and backend. Example:
  - src/
    - core/
      - routes/
        - auth.routes.ts
        - tests/ <-- `tests/` in the same directory as files to be tested
          - auth.routes.test.ts <-- tests in `tests/` directory
      - services/
        - phone-number.service.ts
        - tests/
          - phone-number.service.test.ts
* refactor: use import maps to import test-utils
* chore: add some inline comments for documentation
  This might help anyone writing tests in the future to get started more easily.

Co-authored-by: Lam Kee Wei <[email protected]>
* refactor(ga): inject options from config.ts
  - Remove the `gaOptions` parameter
  - Define all gaOptions in config.ts, with props conditionally defined based on the current environment (test, dev, prod, etc.)
* fix(ga): define a dummy tracking ID for tests
  This is to silence the console warning regarding a missing tracking ID.
* fix(setupTest): fake timers on a per-test basis
* refactor(app.test): rename the render test for clarity
* feat(app.test): add checks for the footer; improve docs
* chore(test): make /stats a common API mock
  This API mock should be generic enough for all future tests.
* feat(test): convert /auth/userinfo to a common API mock
  - Move /auth/userinfo to test-utils.tsx
  - Allow custom user IDs in future tests
* chore(test-utils): refactor API mocks to more digestible chunks
  - Split the API mocks by topic (stats, auth, etc.)
* feat(test-utils): return the state in mockCommonApis()
  API mocks in individual tests can now manipulate the state based on the needs of the test.
* refactor: shift tests around according to standardized structure
  We've decided upon a standardized file structure for tests across frontend and backend. Example:
  - src/
    - core/
      - routes/
        - auth.routes.ts
        - tests/ <-- `tests/` in the same directory as files to be tested
          - auth.routes.test.ts <-- tests in `tests/` directory
      - services/
        - phone-number.service.ts
        - tests/
          - phone-number.service.test.ts
* refactor: use import maps to import test-utils
* feat(dashboard): write initial email campaign test
* feat(dashboard): add initial SMS campaign workflow test
  Also refactor common API endpoints shared with the Email campaign test into a separate array.
* feat(dashboard): test the flow for actually creating a new campaign
* feat(dashboard): add test for telegram campaign creation workflow
  Also add assertions for testing the credential dropdown for SMS campaigns.
* fix(email): select the correct channel button when creating email campaign
  Previously, we were selecting the SMS channel button, not the email button.
* feat(dashboard): colocate APIs; implement initial campaign management
* feat(dashboard): add happy path test for protected email campaign
* chore(test): rename dashboard integration tests for clarity
* fix(dashboard): fake timers for protected email integration test
* refactor: use the common mock APIs
* refactor: move /settings API mocks to the common APIs directory
* refactor: move the rest of the campaign APIs to the common directory
* refactor: move tests around according to standardized structure
* feat: support custom initial campaigns in the context provider
  This allows tests to construct a custom initial campaign based on the requirements of the test.
* refactor: use import maps to import test-utils
* fix(unsubscribe): use react-router to determine search params
  In test environments, we use MemoryRouter instead of BrowserRouter for routing. Since MemoryRouter does not manipulate `window.location`, we cannot use `window.location` to determine the search parameters. Instead, use the `location` object provided by react-router with the `useLocation()` hook.
* feat: add a simple API mock for unsubscribing from campaigns
* feat: add a test for successfully unsubscribing from a campaign
* fix: use fake timers for dashboard tests
* feat: mock protected message APIs
* feat: add unit test for successfully decrypting a protected message
* chore: correct a typo and add a setup comment
* feat(unsubscribe): add a test for when the user chooses to stay
* fix: add missing props to the mock API responses
  1. Rename `templates` to `{email|sms|telegram}_templates` to match the actual API response
  2. Convert `params` to an array to match the actual API
  3. Add `has_credential` to match the actual API
  4. Add a LOGGED job to the `job_queue` once a campaign is sent to better mimic the API
* fix: fix linting error due to missing `has_credential`
* refactor: split Dashboard integration tests to separate files

Co-authored-by: Lam Kee Wei <[email protected]>
react-testing-library version: 9.1.1
react version: 16.9.0
node version:
npm (or yarn) version:

Relevant code or config:
Code Sandbox: https://codesandbox.io/embed/agitated-snyder-mvvgi
Component:
Test:
What you did:
I noticed a warning in my logs when rendering a `video` element with the `muted` attribute: