OCPBUGS-19303: Changed OKD/FCOS workaround to also support Agent-based Installer #7484
Conversation
```diff
@@ -41,26 +42,28 @@ if [ ! -f /opt/openshift/.pivot-done ]; then

 record_service_stage_start "rebase-to-okd-os-image"
 {{if .IsFCOS -}}
 # get more space because 7.8gb is not enough
```
Is this due to openshift-metal3/dev-scripts#1542? Some time ago we were affected by https://issues.redhat.com/browse/OCPBUGS-8036, but that was fixed later.
The increased space is required because container images could not be pulled due to insufficient storage, and because I copy (rsync) all files from the machine-os image to a podman volume (which I had to do because the podman image would be "unmounted" at one point, quasi-erasing /usr).
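A minimal sketch of that copy step, with plain temp directories standing in for the mounted machine-os image and the podman volume (both hypothetical here), and cp -a standing in for rsync -a so the sketch stays self-contained:

```shell
#!/bin/sh
# Sketch only: $src stands in for the mounted machine-os image root and
# $dst for the podman volume's mountpoint. cp -a (like rsync -a)
# preserves permissions, symlinks, and timestamps.
src=$(mktemp -d)
dst=$(mktemp -d)

mkdir -p "$src/usr/bin"
printf 'fake-oc\n' > "$src/usr/bin/oc"

# Copy everything out of the image root before it gets unmounted,
# so the /usr content survives in the volume.
cp -a "$src"/. "$dst"/

cat "$dst/usr/bin/oc"   # the copied file outlives the source mount
```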
Retrying, since openshift/okd-machine-os#706 landed. /test okd-e2e-agent-compact-ipv4
We'll have to fix the Samples Operator first: openshift/cluster-samples-operator#525
Can you believe it? okd-e2e-agent-compact-ipv4 and okd-e2e-agent-ha-dualstack have passed 😱
Both jobs okd-e2e-aws-ovn-upgrade and okd-e2e-aws-ovn complete deployment successfully, then run and pass the e2e tests. Still, they fail because some step runs into a timeout, but which one?! Job okd-e2e-agent-sno-ipv6 also still fails for unknown reasons; it lacks the logs to debug it properly.
@JM1: The following tests failed.
Looks like we got pretty solid green runs on the agent OKD SNO jobs as well! /lgtm
@JM1: Jira Issue OCPBUGS-19303: All pull requests linked via external trackers have merged. Jira Issue OCPBUGS-19303 has been moved to the MODIFIED state.
/cherrypick release-4.15
@LorbusChris: new pull request created: #7830
OKD/FCOS uses FCOS as its bootimage, i.e. the image from which cluster nodes boot for the first time during installation. FCOS does not provide tools such as the OpenShift Client (oc) or crio.service, which the Agent-based Installer uses on the rendezvous host, e.g. to launch the bootstrap control plane.
RHCOS and SCOS include these tools, but FCOS has to pivot its root fs to okd-machine-os first in order to make them available.
Pivoting uses 'rpm-ostree rebase', but the rendezvous host is initially booted from an FCOS Live ISO, where the root fs and /sysroot are mounted read-only. Thus 'rpm-ostree rebase' fails, the necessary tools never become available, and the setup stalls.
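As an illustration (not code from this PR), the read-only root shows up in the mount options that `findmnt -no OPTIONS /sysroot` would report on a Live ISO; the sample output is hard-coded here so the snippet is self-contained:

```shell
#!/bin/sh
# Hypothetical illustration: on a Live ISO, the options of /sysroot
# contain "ro", which is why rpm-ostree cannot write a new deployment
# there. $sysroot_opts stands in for real `findmnt -no OPTIONS /sysroot`
# output.
sysroot_opts="ro,relatime,seclabel"

case ",$sysroot_opts," in
    *,ro,*) echo "read-only: rpm-ostree rebase would fail" ;;
    *)      echo "writable: rpm-ostree rebase can proceed" ;;
esac
```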
Until rpm-ostree implements support for rebasing Live ISOs, this patch adapts the existing workaround for SNO installations to also support the Agent-based Installer.
In particular, the Go template conditional {{- if .BootstrapInPlace }}, which marks a SNO install, has been replaced with a shell if-else that checks at runtime whether the system was launched from a Live ISO. Most code in the OpenShift ecosystem is written with RHCOS in mind and often assumes that tools like oc or crio.service are available. These assumptions can be satisfied by applying this workaround to all Live ISO boots. It does not remove functionality or overwrite configuration files in /etc, so side effects should be minimal.
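That runtime check could be sketched roughly as follows (a hypothetical helper, not the exact code from this PR; the patch's actual detection may differ). CoreOS Live ISOs put a coreos.liveiso=... argument on the kernel command line, which installed systems lack:

```shell
#!/bin/sh
# Sketch of a runtime Live ISO check (hypothetical). Reading the kernel
# command line instead of relying on a Go template variable lets the
# same script serve both SNO and Agent-based installs.
is_live_iso() {
    grep -q 'coreos\.liveiso' /proc/cmdline 2>/dev/null
}

if is_live_iso; then
    echo "Live ISO boot: applying okd-machine-os pivot workaround"
else
    echo "installed system: normal rpm-ostree rebase is possible"
fi
```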
The Go template conditional {{- if .BootstrapInPlace }} in release-image-pivot.service has been dropped completely. This service is used in OKD only, so OCP is not impacted at all. The 'Before=' option does not cause systemd to fail if the referenced service does not exist: in case bootkube.service or kubelet.service is absent, the option simply has no effect. When bootkube.service or kubelet.service does exist, release-image-pivot.service must always be started first, because it might reboot the system or, in the Live ISO case, change /usr. So it is safe to drop the Go conditional and ask systemd to always launch release-image-pivot.service before bootkube.service and kubelet.service.
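As a sketch, that ordering would appear in the [Unit] section of release-image-pivot.service roughly like this (the Description line is illustrative and the real unit contains more options):

```
[Unit]
Description=Pivot to the OKD machine-os image
# Before= is an ordering-only dependency: if bootkube.service or
# kubelet.service does not exist on this node, the line has no effect.
# When they do exist, systemd starts release-image-pivot.service first,
# since it may reboot the node or change /usr on a Live ISO.
Before=bootkube.service kubelet.service
```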