Rethinking our test strategy #782

Open
vringar opened this issue Nov 9, 2020 · 0 comments
vringar commented Nov 9, 2020

To understand where we want to go, we first need an accurate description of the current state of our tests.
What we have:

  • A lot of integration tests that enable different instruments and then assert that the database contains the entries we expect to be there
  • A web server that is started automatically when the tests begin running
  • A WebExtension build that runs automatically every time the test suite is run

Shortcomings of the current approach:

  • Rebuilding the WebExtension takes at least 40s even on a fast machine, so rerunning a just-fixed test costs 40s plus the actual runtime of the test => this makes TDD impractical because the feedback loop is too slow
  • We mostly (or only) test the happy path. Failures in our error handling are only discovered when a test breaks accidentally and the failure sets off a chain reaction
  • Because we spin up the complete platform, running even a single test takes more than 10s

My suggestions to fix this:

  • Test the different instruments by creating only an MPLogger and an in-memory StorageProvider and using Selenium directly to control the browser, leaving the entire TaskManager/BrowserManager machinery out of it, similar to what was proposed in Move/add instrumentation tests to automation/Extension/webext-instrumentation #330
  • Test the TaskManager by monkey-patching out the normal BrowserManager and replacing it with a TestBrowserManager that raises different exceptions at different points in time to simulate the browser failing in different ways (see the first sketch after this list)
  • Test the StorageProviders and the StorageController by creating them stand-alone and then feeding them different messages through the socket. This lets us cover a bunch of edge cases without having to mess around with the platform (second sketch below)
  • Remove all autouse fixtures and instead explicitly inject them where they are needed (third sketch below)
  • Maybe stop rebuilding the extension as part of the test suite (I see the use, but the drawback is also significant)
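
A minimal sketch of the TestBrowserManager idea, assuming a pytest setup. The class, the exception type, and the `execute_command` method below are made up for illustration; they are not OpenWPM's actual BrowserManager interface, only a stand-in to show the failure-injection pattern:

```python
import pytest


class BrowserCrashError(Exception):
    """Stands in for whatever exception a dying browser would surface."""


class TestBrowserManager:
    """Fake BrowserManager that fails on a chosen command to simulate crashes."""

    __test__ = False  # keep pytest from collecting this helper as a test class

    def __init__(self, fail_on_command=2):
        self.fail_on_command = fail_on_command
        self.commands_seen = 0

    def execute_command(self, command):
        self.commands_seen += 1
        if self.commands_seen == self.fail_on_command:
            raise BrowserCrashError(f"simulated crash while running {command!r}")
        return "OK"


def test_recovery_after_simulated_crash():
    # In the real suite we would monkeypatch the TaskManager's BrowserManager
    # with this class and assert that the crash is handled and the crawl continues.
    manager = TestBrowserManager(fail_on_command=2)
    assert manager.execute_command("GET http://example.com") == "OK"
    with pytest.raises(BrowserCrashError):
        manager.execute_command("GET http://example.org")
```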
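In the same spirit, a self-contained sketch of exercising storage edge cases by feeding records in directly. `InMemoryStorageProvider` and `store_record` are stand-ins defined inside the snippet, not the real StorageProvider/StorageController API; the point is only that malformed input becomes trivially testable once the platform is out of the loop:

```python
import pytest


class InMemoryStorageProvider:
    """Stand-in that collects records in memory instead of writing to a database."""

    def __init__(self):
        self.tables = {}

    def store_record(self, table, record):
        if not isinstance(record, dict):
            raise TypeError("record must be a dict")
        self.tables.setdefault(table, []).append(record)


def test_stores_well_formed_record():
    provider = InMemoryStorageProvider()
    provider.store_record("http_requests", {"url": "http://example.com", "method": "GET"})
    assert provider.tables["http_requests"][0]["method"] == "GET"


def test_rejects_malformed_record():
    # The kind of edge case that is hard to reach through a full crawl.
    provider = InMemoryStorageProvider()
    with pytest.raises(TypeError):
        provider.store_record("http_requests", "not-a-dict")
```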
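And a small pytest example of what dropping autouse looks like. The fixture here is a generic stand-in for our test web server; only the tests that name it in their signature pay its startup cost:

```python
import http.server
import threading

import pytest


@pytest.fixture()  # note: no autouse=True
def local_http_server():
    # Generic stand-in for the test web server.
    server = http.server.HTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
    )
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    yield f"http://127.0.0.1:{server.server_port}"
    server.shutdown()


def test_visit_needs_server(local_http_server):
    # Explicit injection: naming the fixture in the signature requests it.
    assert local_http_server.startswith("http://127.0.0.1:")


def test_pure_logic_runs_without_server():
    # Runs without spinning up the server at all.
    assert 1 + 1 == 2
```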