
[CI] Add tests to github workflow #986

Merged
merged 1 commit into opensearch-project:main from avillk/github_wf_4 on Dec 13, 2021

Conversation

kavilla
Member

@kavilla kavilla commented Nov 30, 2021

Description

Add unit tests to the GitHub workflow and also create a "bad apples"
environment variable. Some unit tests fail on the CI purely because of
hardware constraints. Those tests should be improved, but step one is
calling out the bad apples.
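
As a rough sketch of that idea (the variable name, example paths, and flag usage are illustrative assumptions, not necessarily what this PR ends up doing), the known-flaky suites could be listed in one environment variable and excluded from the Jest invocation:

```yaml
# Hypothetical sketch: skip known-flaky ("bad apple") suites on CI.
# BAD_APPLES and the example paths are placeholders, not real values.
- name: Run unit tests, excluding the bad apples
  env:
    BAD_APPLES: "src/plugins/some_flaky_suite|src/core/another_flaky_suite"
  run: yarn test:jest --ci --testPathIgnorePatterns="${BAD_APPLES}"
```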

Also, because of the flakiness, we cache the previous run results and only
re-run the tests that failed; the flakiness is too random to catch with the
bad apples mechanism alone. Continue-on-error is still set for unit tests
because re-running on the CI takes so long; instead, if they do fail, we
automatically echo that there was a failure and ask the contributor to re-run.
However, if we can get permission for a GitHub Action that can add a comment
to the PR, we could post that message on the PR automatically.
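
A minimal sketch of that pattern, with illustrative step ids and messages (the real workflow's names may differ): the unit-test step continues on error, and a follow-up step echoes a warning whenever it did not succeed.

```yaml
# Illustrative sketch: don't fail the job outright, but surface the failure.
- name: Run unit tests
  id: unit-tests
  continue-on-error: true
  run: yarn test:jest --ci

- name: Report unit test failure
  if: steps.unit-tests.outcome != 'success'
  run: echo "Unit tests failed. They are flaky on CI, so please verify locally with 'yarn test:jest' and re-run this workflow."
```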

The next step will be improving those flaky tests.

Also needed to limit the number of workers because otherwise the hardware
can't keep up and tests end up conflicting with each other.
This means we get an accurate test run, but it is slower on the CI.
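
For example (the exact worker count is an assumption, not a value taken from this PR), Jest's parallelism can be capped like this:

```yaml
# Illustrative: cap Jest workers so the GitHub-hosted runner isn't oversubscribed.
- name: Run unit tests with limited parallelism
  run: yarn test:jest --ci --maxWorkers=2
```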

Included integration tests which worked out of the box.

Included e2e tests as well, but the chromedriver version used by the
application was different from GitHub's preinstalled Chrome, so to run them
I just upgraded it for the test run. Not ideal; ideally we should set up a
Docker environment and install the specific versions, since we are now
depending on GitHub's virtual environment and the dependencies installed
there. But at least this is a first pass.
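
Purely as an illustration of "upgraded it for the test run" (the command and package choice are assumptions, not necessarily what this workflow does), a matching chromedriver could be installed in place before the functional tests:

```yaml
# Illustrative only: install a chromedriver matching the runner's preinstalled
# Chrome without touching package.json, then run the functional tests.
- name: Upgrade chromedriver for the test run
  run: npm install --no-save chromedriver@latest

- name: Run functional tests
  run: yarn test:ftr
```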

Signed-off-by: Kawika Avilla [email protected]

Issues Resolved

n/a

Check List

  • New functionality includes testing.
    • All tests pass
      • yarn test:jest
      • yarn test:jest_integration
      • yarn test:ftr
  • New functionality has been documented.
  • Commits are signed per the DCO using --signoff

@kavilla kavilla requested a review from a team as a code owner November 30, 2021 09:06
@kavilla
Member Author

kavilla commented Nov 30, 2021

Variant of this PR: #906

Except this one differs by allowing re-runs after failures, and it will only run the specific steps that failed. Please note that if someone pushes again, it will create a new job and re-run all the steps. The biggest con is that it basically won't log the steps that were successful in previous runs. But I prefer this way because it's really painful to wait two hours for tests that are intermittent on the GitHub CI (but don't really fall into the bad apples category, because they don't always fail there), re-run, and then hit another flaky failure.

If you check out this run: https://github.com/kavilla/OpenSearch-Dashboards-1/actions/runs/1509682521

you can see that in the first run the unit tests were flaky; the second run was a re-run, and it didn't re-run the previously successful steps, just the build and unit tests. But it failed because the timestamp used for the cache key doesn't exist, so I switched to the git hash; it also can't add comments to PRs in this repo, since that requires explicit permission from the org.
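
For reference, a rough sketch of that caching idea, with illustrative step names, paths, and keys (the actual workflow in this PR may differ): a marker for a passing step is cached under a key derived from the commit hash, so a re-run of the same commit can skip it.

```yaml
# Illustrative: restore a "unit tests passed" marker keyed by the commit hash.
- name: Restore previous unit test result
  id: restore_unit_tests_result
  uses: actions/cache@v2
  with:
    path: unit_tests_passed.marker
    key: unit-tests-${{ github.sha }}

# Only run the tests when no marker from a previous run of this commit exists;
# write the marker when they pass so a later re-run can skip this step.
- name: Run unit tests
  if: steps.restore_unit_tests_result.outputs.cache-hit != 'true'
  run: yarn test:jest --ci && touch unit_tests_passed.marker
```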

@kavilla kavilla added the ci label Nov 30, 2021
- run: echo Unit tests completed unsuccessfully. However, unit tests are inconsistent on the CI so please verify locally with `yarn test:jest`.
  if: steps.unit_tests_results.outputs.unit_tests_results != 'success' && steps.unit-tests.outcome != 'success'

# TODO: This gets rejected, we need approval to add this
Contributor

Do we need to mention the admin team to get the action created?

Member Author

Yeah, I wish I didn't have to add this step at all, but at least it gets us one step closer to having a CI in GitHub.

However, there are a few GitHub Actions that can add a comment; should we agree on the best one before proposing it? Another option would be to just fail on this step and have someone keep re-running until it passes. At least it won't re-run the other steps. I wish GitHub didn't give a green checkmark on continue-on-error.

Contributor

Do you want to explain the options so that the team can weigh in? I'm okay with failing on unit tests - that should push us more to eliminate flaky tests.

Member Author

@kavilla kavilla Dec 3, 2021

> should push us more to eliminate flaky tests.

The flakiness is GitHub runners being deliberately resource-limited, which I feel is out of our control unless we want to migrate to calling out to Jenkins.

In terms of GitHub Actions, if we want something that at least logs a message telling people to re-run without blocking them, we can use github-script, which is a verified action and could future-proof us if we want to do other things, but it is definitely more than what we need right now. There is also one that comments on the PR and updates the same comment instead of appending a new comment every time, but it's not verified.
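
For reference, a minimal sketch of the github-script option (the message text and trigger condition are illustrative, not a step taken from this PR):

```yaml
# Sketch: use the verified actions/github-script action to comment on the PR
# when the unit-test step did not succeed.
- name: Comment on the PR about flaky unit tests
  if: steps.unit-tests.outcome != 'success'
  uses: actions/github-script@v5
  with:
    script: |
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
        body: 'Unit tests failed on CI. They are known to be flaky; please verify locally with `yarn test:jest` and re-run the workflow.',
      });
```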

Contributor

Yeah I'm good with getting this out the door and iterating as needed!

@@ -1,25 +1,211 @@
# This workflow will do a clean install of node dependencies, build the source code and run tests across different versions of node
Contributor

Nit: run tests across different versions of node isn't true

Member Author

Must have been something carried over.

@kavilla
Member Author

kavilla commented Dec 10, 2021

We discussed that, for a first iteration, we can get this in and iterate, since it is better than the current standard.

@kavilla kavilla mentioned this pull request Dec 12, 2021
Contributor

@tmarkley tmarkley left a comment

LGTM!

@kavilla kavilla merged commit 2d44dab into opensearch-project:main Dec 13, 2021
@kavilla kavilla deleted the avillk/github_wf_4 branch April 12, 2022 06:51