Improve release docs and IT prechecks #914
Conversation
Thanks for the additional check. I've asked for two small changes but overall it looks fine.
IT fails due to a rally-tracks change that has since been reverted. Let's try again. @elasticmachine run tests
Thanks for iterating. I left a comment about the implementation of the check. Can you please also change the PR title, as this check has now moved elsewhere?
integration-test.sh (Outdated)

@@ -43,6 +44,13 @@ TEST_SUCCESS=0
function check_prerequisites {
  exit_if_docker_not_running

  DOCKER_MEM=$(docker info | grep Memory | awk '{print int($3 + 0)}')
If I specify exactly 6GB in the Docker Desktop UI (on Mac), this check fails. The reason is that memory in the `docker info` output is reported in GiB instead of GB. I played around a bit with `docker info` and you can actually force it to show the raw number in bytes with `docker info --format="{{.MemTotal}}"`. You can also use the `expr` utility to convert the GB value specified by `MIN_DOCKER_MEM_GB` to bytes with `expr $MIN_DOCKER_MEM_GB \* 1024 \* 1024 \* 1024` and base the entire calculation on bytes.

One confusing aspect I noticed: when I specify 6GB in the UI, `docker info` reports 6237241344 bytes, but 6 GB converted to bytes is 6000000000 bytes. I assume their UI should actually label this as GiB, which is 6442450944 bytes, and I further assume that this is the size of the underlying VM, which reserves ~195MB for the OS and only provides 6237241344 bytes to Docker. I wonder whether we should allow for a small margin of, say, 5% in the actual check to account for that (even better would be to get the actual value specified in the UI, of course)?
I think it is less important to check the UI values than the actual value available to the VM, as the latter is what will break integration testing. The UI inconsistency is unfortunate, but ultimately out of scope here.
That said, the 6 GB minimum isn't as exact as the check would make it seem, but it works and I don't see much advantage to a margin here.
Given the target audience that's fine for me.
Thanks for iterating. Your code change is fine by me, but my earlier comment about the PR title and description is not addressed yet. Before I give final approval, can you please:
- Change the title of this PR
- Amend the body with more details
Thank you! :)
Looks fine now. Thank you!
Failing integration tests delayed a recent release.
This change documents that passing IT is a precondition to release, and adds a precheck verifying that the Docker portion can actually run, instead of letting the suite fail at the very end.