Flaky CI tests #3036
Comments
@achingbrain @hugomrdias until these issues can be worked out, can we please consider making known intermittently failing tests non-fatal? Maybe they could still run but not abort the run and not affect the overall ❌ / ✅ conclusion? I understand that fixing the actual tests at hand is the right way to go about things, but keeping the tree red in the meantime doesn't seem good. Between the various intermittent failures, the bundle size checks and all the rest, I just can't get a ✅ from CI, even though I'm not changing anything in existing code.
I'd really rather not, as some of these failures are caused by actual hard-to-replicate bugs. If we disable the tests they will never get fixed. That said, I'm not 100% on bundle sizes failing the build: the size delta is valuable information, but to date it has never stopped us publishing anything.
Can we consider an alternative where they aren't skipped or disabled but are marked as non-fatal? It's just that unless the tree is kept green it's impossible to tell when new regressions are introduced.
If there are individual tests that fail intermittently then potentially. The problem with the remaining errors above is that they can occur at pretty much any time, so you'd be ignoring failures from most of the codebase. Are you seeing any tests that fail repeatedly?
I have disabled a bunch of tests in #3081 because they would fail, but restarting them on CI would pass. I also verified that they were intermittent failures on master. If there are tests that fail even 30% of the time, there's no way of telling whether a failure is a regression or not. Furthermore, it requires more coordination and attention from both reviewers and the author to make sure no new regressions are missed. Which is to suggest that keeping master green as a policy might be a good idea. At the very least I think we should annotate failing tests with their corresponding issues, so that from the output alone one can tell whether a failure is new or something known.
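For illustration, a minimal sketch of what such an annotation could look like in a mocha test. The issue reference, retry count and test body below are hypothetical; this is not a convention the repo uses today:

```js
// Hypothetical convention: keep the flaky test running, but name the tracking issue in
// the test title and let mocha retry it, so a red run is visibly "known flaky" rather
// than a possible new regression.
it('should list peers only once even if they have multiple addresses (flaky, see #3036)', async function () {
  this.retries(3) // mocha built-in: re-run up to 3 times before reporting a failure

  // ...original test body unchanged...
})
```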
Closing as this repo is deprecated |
There are several intermittent test failures in CI. Here are the most common ones, their causes, and the issues that, when closed, will resolve them:
What's happening here?
We are running an interface test over HTTP to a remote js-IPFS node. While the test is being torn down, a libp2p operation is still in progress on the remote node, which throws.
See: libp2p/js-libp2p-tcp#130
And the browser version: libp2p/js-libp2p-webrtc-star#222
Spurious simple-peer error: feross/simple-peer#660
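The shape of this race, roughly: teardown stops the remote daemon while a libp2p dial/listen can still be in flight on it, so the in-flight operation rejects after the assertions have already passed. A minimal sketch, assuming an ipfsd-ctl-controlled daemon and a mocha after() hook; the error-message filter is hypothetical and not the actual interface-ipfs-core teardown:

```js
after(async function () {
  this.timeout(60 * 1000)

  try {
    await ipfsd.stop() // `ipfsd` is the ipfsd-ctl controller created in before()
  } catch (err) {
    // Hypothetical guard: tolerate errors caused purely by shutting down while an
    // operation is still in progress, re-throw anything else so real failures surface.
    if (!/closed|aborted/i.test(err.message)) {
      throw err
    }
  }
})
```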
What's happening here?
js-IPFS is rejecting connections from go-IPFS because the MAC of the incoming message is invalid.
See: libp2p/js-libp2p#310
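To make the error concrete, this is the general shape of a MAC check (illustrative Node.js crypto only, not libp2p's actual secio code): the HMAC of the incoming message is recomputed and compared to the one received, and the connection is rejected when they differ.

```js
const crypto = require('crypto')

// Illustrative only: when the linked issue says "the MAC of the incoming message is
// invalid", a check of this shape is what fails, and js-IPFS drops the connection
// from the go-IPFS peer as a result.
function verifyMac (macKey, payload, receivedMac) {
  const expected = crypto.createHmac('sha256', macKey).update(payload).digest()

  if (expected.length !== receivedMac.length ||
      !crypto.timingSafeEqual(expected, receivedMac)) {
    throw new Error('MAC Invalid')
  }
}
```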
Fails intermittently on webworkers for both Chrome & Firefox
```
ipfs: FAILED TESTS:
ipfs: interface-ipfs-core over ipfs-http-client tests against js-ipfs
ipfs: .swarm.peers
ipfs: ✖ should list peers only once even if they have multiple addresses
ipfs: Chrome Headless 87.0.4280.66 (Linux x86_64)
ipfs: HTTPError: Bad Request
ipfs: at HTTP.fetch (file:/home/travis/build/ipfs/js-ipfs/node_modules/ipfsd-ctl/node_modules/ipfs-utils/src/http.js:166:13)
ipfs: at async Client.start (file:/home/travis/build/ipfs/js-ipfs/node_modules/ipfsd-ctl/src/ipfsd-client.js:175:19)
ipfs: at async Factory.spawn (file:/home/travis/build/ipfs/js-ipfs/node_modules/ipfsd-ctl/src/factory.js:161:7)
ipfs: at async Context.eval (file:/home/travis/build/ipfs/js-ipfs/packages/interface-ipfs-core/src/swarm/peers.js:130:22)
ipfs: Command failed with exit code 1: karma start /home/travis/build/ipfs/js-ipfs/node_modules/aegir/src/config/karma.conf.js --files-custom
```
What's happening here?
Needs investigation
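What the stack trace does show is that the failure happens before any assertions run: ipfsd-ctl is spawning an extra daemon through its remote endpoint and that HTTP call intermittently returns 400 Bad Request. Roughly, as a sketch based on the trace above rather than the exact interface-ipfs-core code (the factory options shown are assumptions):

```js
const { createFactory } = require('ipfsd-ctl')

// In the webworker/browser runs the daemon cannot be spawned in-process, so ipfsd-ctl
// drives a remote control endpoint over HTTP instead.
const factory = createFactory({
  type: 'js',
  remote: true
})

async function setup () {
  // The intermittent "HTTPError: Bad Request" in the log above is thrown from this
  // spawn call (Factory.spawn -> Client.start -> HTTP.fetch), i.e. the remote endpoint
  // rejects the start request before the .swarm.peers assertions ever run.
  const ipfsd = await factory.spawn()
  return ipfsd.api
}
```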