To understand where we want to go, we first need to accurately describe the current state of our tests.

What we have:

- A lot of integration tests that enable different instruments and then assert that the database contains the elements we expect to be there
- A web server that is started automatically when the tests begin running
- The extension being rebuilt automatically every time the tests are run
What are the shortcomings of the current approach:

- Rebuilding the WebExtension takes at least 40s even on a fast computer, so rerunning a fixed test costs 40s plus the actual runtime of the test. This makes TDD impossible, since the delay is too big.
- We mostly/only test the happy path. Failures in our error handling are only discovered when tests are accidentally broken and the failure triggers a chain reaction.
- Since we spin up the complete platform, running even a single test takes more than 10s.
My suggestions to fix this:

- Test the different instruments by creating only an MPLogger and an in-memory StorageProvider, using Selenium directly to control the browser and leaving the entire TaskManager/BrowserManager machinery out of it, similar to what was proposed in Move/add instrumentation tests to automation/Extension/webext-instrumentation #330
- Test the TaskManager by monkey-patching out the normal BrowserManager, replacing it with a TestBrowserManager that raises different exceptions at different points in time to simulate the browser failing in different ways
- Test the StorageProviders and the StorageController by creating them stand-alone and feeding them different messages through the socket. This allows us to test a bunch of edge cases without having to mess around with the platform.
- Remove all autouse fixtures and instead explicitly inject them where they are needed.
- Maybe don't rebuild the extension as part of the test suite (I see the use, but the drawback is also significant).
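The TestBrowserManager suggestion could look roughly like the sketch below. Everything here is a hypothetical stand-in, not the real API: the class names, the `fail_on_call` hook, and the minimal retry loop standing in for the TaskManager's recovery logic.

```python
# Hypothetical sketch: a fake BrowserManager that fails on a scripted
# call, so error-handling paths can be tested deterministically.

class BrowserCrashError(Exception):
    """Simulated browser failure."""

class TestBrowserManager:
    """Stand-in for BrowserManager that raises on the n-th command."""

    def __init__(self, fail_on_call: int, exception=BrowserCrashError):
        self.fail_on_call = fail_on_call
        self.exception = exception
        self.calls = 0

    def execute(self, command: str) -> str:
        self.calls += 1
        if self.calls == self.fail_on_call:
            raise self.exception(f"browser died on call {self.calls}")
        return f"ok: {command}"

def run_with_retry(manager: TestBrowserManager, command: str, max_retries: int = 3) -> str:
    """Minimal TaskManager-style retry loop for the sketch."""
    for _ in range(max_retries):
        try:
            return manager.execute(command)
        except BrowserCrashError:
            continue  # simulate restarting the browser and retrying
    raise RuntimeError("browser kept failing")

manager = TestBrowserManager(fail_on_call=2)
run_with_retry(manager, "GetCommand")  # first call succeeds
run_with_retry(manager, "GetCommand")  # second call crashes, retry succeeds
```

By parameterizing `fail_on_call` and `exception`, one test file can cover crashes during launch, mid-command, and during shutdown without ever starting a real browser.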
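The stand-alone StorageProvider suggestion might be sketched like this. The `InMemoryStorageProvider` class and the `(table, record)` message shape are assumptions for illustration; the real provider interface and socket message format would be used instead.

```python
# Hypothetical sketch: feed messages straight into a storage provider,
# no platform, no sockets, so edge cases are cheap to exercise.

class InMemoryStorageProvider:
    def __init__(self):
        self.tables = {}

    def handle_message(self, message):
        """Accept a (table, record) pair as the socket would deliver it."""
        if not isinstance(message, tuple) or len(message) != 2:
            raise ValueError(f"malformed message: {message!r}")
        table, record = message
        self.tables.setdefault(table, []).append(record)

provider = InMemoryStorageProvider()
provider.handle_message(("http_requests", {"url": "https://example.com"}))
provider.handle_message(("javascript", {"symbol": "window.navigator"}))

# Edge case: a malformed message should fail loudly, not corrupt state.
try:
    provider.handle_message("garbage")
except ValueError:
    pass
```

The same pattern extends to truncated records, unknown table names, or out-of-order messages, cases that are nearly impossible to provoke reliably through the full platform.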
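The autouse-fixture point is about making setup costs visible. A minimal pytest sketch of the "after" state, with a hypothetical `storage` fixture standing in for whatever the autouse fixtures currently provide:

```python
# Hypothetical sketch: a fixture that tests must request explicitly,
# instead of an autouse fixture silently applied to every test.
import pytest

@pytest.fixture  # note: no autouse=True
def storage():
    # Setup only runs for tests that actually ask for it.
    return {"records": []}

def test_stores_record(storage):  # explicitly requests the fixture
    storage["records"].append("visit")
    assert storage["records"] == ["visit"]

def test_needs_no_storage():
    # This test no longer pays for storage setup at all.
    assert 1 + 1 == 2
```

Beyond speed, this makes each test's dependencies readable from its signature, which is exactly what gets lost with autouse.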