This repository contains scripts and files that are used to package cert-manager for Red Hat's Operator Lifecycle Manager (OLM). This allows users of OpenShift and OperatorHub to easily install cert-manager into their clusters. It is currently an experimental deployment method.
The package is called an Operator Bundle and it is a container image that stores the Kubernetes manifests and metadata associated with an operator. A bundle is meant to represent a specific version of an operator.
The bundles are indexed in a Catalog Image which is pulled by OLM in the Kubernetes cluster.
Clients such as `kubectl operator` then interact with the OLM CRDs to "subscribe" to a particular release channel.
OLM then installs the newest cert-manager bundle in that release channel and performs upgrades as newer versions are added to it.
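The "subscribe" step can be sketched as a plain OLM `Subscription` manifest (a sketch, not this repository's tooling; the `operatorhubio-catalog` source, the `olm` and `operators` namespaces, and the `stable` channel name are assumptions based on upstream OLM defaults):

```shell
# Subscribe to cert-manager from the community catalog (hypothetical
# channel/catalog names; adjust to match your cluster's OLM installation).
kubectl apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cert-manager
  namespace: operators
spec:
  channel: stable
  name: cert-manager
  source: operatorhubio-catalog
  sourceNamespace: olm
EOF
```

OLM resolves the newest bundle in the chosen channel and creates an InstallPlan for it; switching channels changes which stream of bundles you receive.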
📖 Read the Operator Lifecycle Manager installation documentation for cert-manager.
For each release, we first publish one or more release candidates to the Kubernetes Community Operators Repository and the OpenShift Community Operators Repository, in the "candidate" channel.
This allows us to discover whether our latest "bundle" sources will pass Red Hat's conformance scripts, which run automatically in those two repositories. It also allows us to test precisely the catalog entry that is generated by Red Hat's release scripts, which run automatically when PRs are merged in those two repositories.
ℹ️ Why? It may seem long-winded, but before I started doing this, there were occasions when I didn't find bugs in a new version until after I'd published the broken packages to the community-operators catalog, despite my best local testing efforts with Kind + OLM + locally generated Catalog and Bundle images. And I never found a way to test a locally generated Catalog with a CRC OpenShift cluster.
This RC process allows me to start a CRC OpenShift cluster and test the release candidate in the official Red Hat catalogs.
- Update `CERT_MANAGER_VERSION` at the top of the `Makefile`.
- (release candidate only) Add the `-rc1` suffix to `BUNDLE_VERSION`.
- (final release only) Remove the `-rc1` suffix from `BUNDLE_VERSION`.
- Run `make bundle-generate`.
  - Inspect the changes.
  - Pause to investigate if there are unexpected changes.
- Run `make bundle-validate` to check the generated bundle files.
  - Inspect the warnings.
  - Consider whether the warnings can easily be fixed, and if so fix them.
- `git commit` the bundle changes.
- Preview the generated `clusterserviceversion` file on OperatorHub.
- Test the generated bundle locally (see Testing below).
- Create a PR on the Kubernetes Community Operators Repository, adding the new or updated bundle files to `operators/cert-manager/` under a sub-directory named after the bundle version: `make update-community-operators`.
- Create a PR on the OpenShift Community Operators Repository, adding the new or updated bundle files to `operators/cert-manager/` under a sub-directory named after the bundle version: `make update-community-operators-prod`.
- Test the new release on the latest stable OpenShift (see [Testing on OpenShift](#testing-on-openshift)).
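As an illustration of the version edits in the first steps, here is a minimal sketch using a stand-in Makefile (the `:=` variable layout and the version numbers are assumptions, not the repository's actual `Makefile`):

```shell
# Create a stand-in Makefile for demonstration purposes only.
cat > /tmp/demo-Makefile <<'EOF'
CERT_MANAGER_VERSION := 1.14.0
BUNDLE_VERSION := 1.14.0
EOF

# Release candidate: bump CERT_MANAGER_VERSION and add the -rc1 suffix to
# BUNDLE_VERSION; the final release would drop the suffix again.
sed -e 's/^\(CERT_MANAGER_VERSION := \).*/\11.15.0/' \
    -e 's/^\(BUNDLE_VERSION := \).*/\11.15.0-rc1/' \
    /tmp/demo-Makefile
```

In the real workflow the edits are made by hand in the `Makefile`, and `make bundle-generate` propagates the versions into the generated bundle files.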
The bundle Docker image and a temporary catalog Docker image can be built and pushed to a temporary Docker registry.
These can then be used by OLM running on a Kubernetes cluster.
Run `make bundle-test` to create the bundle and catalog, then deploy them with OLM, installed on a local Kind cluster, for testing.
The test will wait for cert-manager to be installed and then print the version using `cmctl version`.

```shell
make bundle-test
```
Run some of the cert-manager E2E conformance tests:
```shell
cd projects/cert-manager/cert-manager
git checkout $(CERT_MANAGER_VERSION)
make e2e-build
_bin/test/e2e.test --repo-root=/dev/null --ginkgo.focus="Vault\ Issuer" --ginkgo.skip="Gateway"
```
⚠️ Requires cert-manager >=v1.14.0. Older versions of the cert-manager E2E tests require a non-standard Vault OCI image to be preloaded into the Kubernetes clusters. See:
There are a few ways to create an OpenShift cluster for testing.
Here we will describe using `crc` (CodeReady Containers) to install a single-node local OpenShift cluster.
Alternatives are:

- Initializing Red Hat OpenShift Service on AWS using `rosa`: known to work, but it takes ~45 minutes to create a multi-node OpenShift cluster.
- Installing OpenShift on any cloud using OpenShift Installer: did not work on GCP at the time of writing due to "Installer can't get managedZones while service account and gcloud cli can on GCP" #5300.
`crc` requires 4 virtual CPUs (vCPUs), 9 GiB of free memory, and 35 GiB of storage space,
but for crc-v1.34.0 this is insufficient: you will need 8 CPUs and 32 GiB of memory,
which is more than is available on most laptops.
Download your pull secret from the crc-download page and supply its path in the command line below:

```shell
make crc-instance OPENSHIFT_VERSION=4.13 PULL_SECRET=${HOME}/Downloads/pull-secret
```
This will create a VM and automatically install the chosen version of OpenShift, using a suitable version of `crc`.
The `crc` installation, setup, and start are performed by a startup script which runs when the VM boots.
You can monitor the progress of the script as follows:

```shell
gcloud compute instances tail-serial-port-output crc-4-13
```
You can log in to the VM and interact with the cluster as follows:

```shell
gcloud compute ssh crc@crc-4-13 -- -D 8080
sudo journalctl -u google-startup-scripts.service --output cat
eval $(bin/crc-2.28.0 oc-env)
oc get pods -A
```
Log in to the VM using SSH and enable SOCKS proxy forwarding so that you will be able to connect to the web UI of `crc` when it starts.

```shell
gcloud compute ssh crc@crc-4-13 -- -D 8080
```
Now configure your web browser to use the SOCKS5 proxy at `localhost:8080`.
Also configure it to use the SOCKS proxy for DNS requests.
With this configuration you should now be able to visit the OpenShift web console page:
https://console-openshift-console.apps-crc.testing
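As a quick sanity check from the same machine (assuming `curl` is installed), you can fetch the console page through the SOCKS tunnel; `--socks5-hostname` makes `curl` resolve the `.apps-crc.testing` hostname via the proxy, just as the browser will:

```shell
# Fetch the console page via the SOCKS tunnel and print the HTTP status.
# -k skips verification of the cluster's self-signed certificate.
curl -k --socks5-hostname localhost:8080 \
    -o /dev/null -s -w '%{http_code}\n' \
    https://console-openshift-console.apps-crc.testing
```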
You will be presented with a couple of "bad SSL certificate" error pages, because the web console uses self-signed TLS certificates. Click "Accept and proceed anyway".
Now click the "Operators > OperatorHub" link on the left hand menu.
Search for "cert-manager" and click the "community" entry and then click "install".
Once you have installed cert-manager on the `crc-instance` you can run the cert-manager E2E tests,
to verify that cert-manager has been installed properly and is reconciling Certificates.
First compile the cert-manager E2E test binary as follows:

```shell
cd projects/cert-manager/cert-manager
make e2e-build
```
Then upload the binary to the remote VM and run the tests against cert-manager installed in the crc OpenShift cluster:

```shell
cd projects/cert-manager/cert-manager-olm
make crc-e2e \
    OPENSHIFT_VERSION=4.13 \
    E2E_TEST=../cert-manager/_bin/test/e2e.test
```
If you can't use the automated script to create the `crc` VM, you can create one manually, as follows.
Create a powerful cloud VM on which to run `crc`:
```shell
GOOGLE_CLOUD_PROJECT_ID=$(gcloud config get-value project)
gcloud compute instances create crc-4-9 \
    --enable-nested-virtualization \
    --min-cpu-platform="Intel Haswell" \
    --custom-memory 32GiB \
    --custom-cpu 8 \
    --image-family=rhel-8 \
    --image-project=rhel-cloud \
    --boot-disk-size=200GiB \
    --boot-disk-type=pd-ssd
```
NOTE: The VM must support nested virtualization because `crc` creates another VM using `libvirt`.
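Before running `crc setup` later, it is worth confirming that nested virtualization actually made it through to the guest. A minimal check of the CPU flags and the KVM device node:

```shell
# Report whether hardware virtualization (Intel VT-x / AMD-V CPU flags)
# and the /dev/kvm device are available on this machine.
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null && [ -e /dev/kvm ]; then
    echo "KVM available"
else
    echo "KVM not available"
fi
```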
Now log in to the VM using SSH and enable socks proxy forwarding so that you will be able to connect to the Web UI of crc
when it starts.
gcloud compute ssh crc@crc-4-9 -- -D 8080
Download `crc` and get a pull secret from the Red Hat Console.
The latest version of `crc` will install the latest version of OpenShift (4.9 at the time of writing).
If you want to test on an older version of OpenShift you will need to download the older version of `crc` which corresponds to the target OpenShift version.

Download the archive, extract it, and move the `crc` binary to your system path:
```shell
curl -SLO https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/1.34.0/crc-linux-amd64.tar.xz
tar xf crc-linux-amd64.tar.xz
sudo mv crc-linux-1.34.0-amd64/crc /usr/local/bin/
```
Run `crc setup` to prepare the system for running the `crc` VM:
```console
$ crc setup
...
INFO Uncompressing crc_libvirt_4.9.0.crcbundle
crc.qcow2: 11.50 GiB / 11.50 GiB [...] 100.00%
oc: 117.16 MiB / 117.16 MiB [...] 100.00%
Your system is correctly setup for using CodeReady Containers, you can now run 'crc start' to start the OpenShift cluster
```
Run `crc start` to create the VM and start OpenShift.
When prompted, paste in the pull secret, which you can copy from the crc-download page.
```console
$ crc start
...
CodeReady Containers requires a pull secret to download content from Red Hat.
You can copy it from the Pull Secret section of https://cloud.redhat.com/openshift/create/local.
? Please enter the pull secret
...
Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: ******

Log in as user:
  Username: developer
  Password: *******

Use the 'oc' command line interface:
$ eval $(crc oc-env)
$ oc login -u developer https://api.crc.testing:6443
```