- Red Hat Advanced Cluster Management for Kubernetes
Red Hat Advanced Cluster Management for Kubernetes (referred to as RHACM throughout the rest of this page) provides end-to-end visibility and control for managing your Kubernetes environment.
This repository contains governance policies and placement rules for Argo CD and for the Argo CD Application resources representing the Cloud Paks.
- An OpenShift Container Platform cluster, version 4.12 or later. The applications were tested on both managed and self-managed deployments.
- Adequate worker node capacity in the cluster for RHACM to be installed. Refer to the RHACM documentation to determine the required capacity for the cluster.
- An entitlement key for the IBM Entitled Registry. The key is stored in the RHACM cluster and copied to a managed cluster whenever that cluster matches a policy to install a Cloud Pak.
This section contains a simple shortcut, but you can choose to follow the instructions in the Red Hat OpenShift GitOps Installation page instead, taking special care to use a release at or above `gitops-1.8`. These instructions are always validated with the latest OpenShift GitOps release.

The shortcut, in case you choose to skip the official instructions:
- Create the `Subscription` resource for the operator:

  ```sh
  cat << EOF | oc apply -f -
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: openshift-gitops-operator
  ---
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: openshift-gitops-operator
    namespace: openshift-gitops-operator
  spec:
    upgradeStrategy: Default
  ---
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-gitops-operator
    namespace: openshift-gitops-operator
  spec:
    channel: latest
    installPlanApproval: Automatic
    name: openshift-gitops-operator
    source: redhat-operators
    sourceNamespace: openshift-marketplace
  EOF
  ```
- Wait until the ArgoCD instance appears as ready in the `openshift-gitops` namespace:

  ```sh
  oc wait ArgoCD openshift-gitops \
    -n openshift-gitops \
    --for=jsonpath='{.status.phase}'=Available \
    --timeout=600s
  ```
- Log in to the Argo CD server:

  ```sh
  gitops_url=https://github.com/IBM/cloudpak-gitops
  gitops_branch=main
  argo_pwd=$(oc get secret openshift-gitops-cluster \
      -n openshift-gitops \
      -o jsonpath='{.data.admin\.password}' | base64 -d ; echo) \
  && argo_url=$(oc get route openshift-gitops-server \
      -n openshift-gitops \
      -o jsonpath='{.spec.host}') \
  && argocd login "${argo_url}" \
      --username admin \
      --password "${argo_pwd}"
  ```
This repository contains an optimized configuration for the default ArgoCD instance.
That configuration has custom health checks for RHACM resources, which allow Argo CD to monitor the health of resources such as the MultiCluster Engine resource.
Consider adding this application to your cluster if your organization does not have another preference for the default configuration of the ArgoCD instance.
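For reference, those health checks use Argo CD's standard resource-customization mechanism. The snippet below is only a minimal sketch of the idea, assuming an operator version whose `ArgoCD` CRD exposes `spec.resourceHealthChecks`; the checks actually shipped in `config/argocd` may differ in both shape and logic:

```yaml
# Illustrative only: a resource health customization on the operator-managed
# ArgoCD instance so Argo CD can report MultiClusterHub health instead of "Unknown".
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  resourceHealthChecks:
    - group: operator.open-cluster-management.io
      kind: MultiClusterHub
      check: |
        hs = {}
        hs.status = "Progressing"
        hs.message = "Waiting for MultiClusterHub to become ready"
        if obj.status ~= nil and obj.status.phase == "Running" then
          hs.status = "Healthy"
          hs.message = "MultiClusterHub is running"
        end
        return hs
```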
- This step assumes you still have the shell variables assigned from the previous actions:

  ```sh
  argocd proj create argocd-control-plane \
    --dest "https://kubernetes.default.svc,openshift-gitops" \
    --src ${gitops_url:?} \
    --upsert \
  && argocd app create argo-app \
    --project argocd-control-plane \
    --dest-namespace openshift-gitops \
    --dest-server https://kubernetes.default.svc \
    --repo ${gitops_url:?} \
    --path config/argocd \
    --helm-set repoURL=${gitops_url:?} \
    --helm-set-string targetRevision="${gitops_branch}" \
    --revision ${gitops_branch:?} \
    --sync-policy automated \
    --upsert \
  && argocd app wait argo-app
  ```
These steps assume you are logged in to the OCP server with the `oc` command-line interface:
- Add the Argo application:

  ```sh
  argocd proj create rhacm-control-plane \
    --dest "https://kubernetes.default.svc,openshift-gitops" \
    --dest "https://kubernetes.default.svc,open-cluster-management" \
    --allow-cluster-resource Namespace \
    --allow-namespaced-resource argoproj.io/Application \
    --allow-namespaced-resource argoproj.io/AppProject \
    --src ${gitops_url:?} \
    --upsert \
  && argocd app create rhacm-app \
    --project rhacm-control-plane \
    --dest-namespace open-cluster-management \
    --dest-server https://kubernetes.default.svc \
    --repo ${gitops_url:?} \
    --path config/argocd-rhacm/ \
    --helm-set repoURL=${gitops_url:?} \
    --helm-set targetRevision=${gitops_branch:?} \
    --sync-policy automated \
    --revision ${gitops_branch:?} \
    --upsert \
  && argocd app wait -l app.kubernetes.io/instance=rhacm-app \
    --sync \
    --health
  ```
Note that if you did not follow the optional step of installing the `argo-app` application, the Argo CD instance may not be able to assess the health of the `MultiClusterHub` resource. Without that check, Argo CD will not wait until the resource is ready and may generate transient messages complaining about synchronization errors and retries until the cluster hub is fully available.
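If you prefer to confirm the hub status yourself rather than relying on Argo CD's health assessment, you can query the resource directly; this sketch assumes RHACM's default `open-cluster-management` namespace:

```sh
# The hub is fully available once the phase reports "Running"
oc get multiclusterhub -n open-cluster-management \
  -o jsonpath='{.items[0].status.phase}'
```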
If you don't already have an entitlement key to the IBM Entitled Registry, obtain your key using the following instructions:
- Go to the Container software library.
- Click "Copy key."
- Copy the entitlement key to a safe place so you can use it to update the cluster's global pull secret.
- (Optional) Verify the validity of the key by logging in to the IBM Entitled Registry using a container tool:

  ```sh
  export IBM_ENTITLEMENT_KEY=the key from the previous steps
  podman login cp.icr.io --username cp --password "${IBM_ENTITLEMENT_KEY:?}"
  ```
Global pull secrets require granting too much privilege to the OpenShift GitOps service account, so we have started transitioning to defining pull secrets at the namespace level.
The Application resources are transitioning to use `PreSync` hooks to copy the entitlement key from a `Secret` named `ibm-entitlement-key` in the `openshift-gitops` namespace, so issue the following command to create that secret:
```sh
# Note that if you just created the OpenShift GitOps operator,
# the namespace may not be ready yet, so you may need to wait
# a minute or two
oc create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password="${IBM_ENTITLEMENT_KEY:?}" \
  --docker-email="[email protected]" \
  --namespace=openshift-gitops
```
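For context, a `PreSync` hook is an ordinary Kubernetes resource annotated with `argocd.argoproj.io/hook: PreSync`; Argo CD runs it before synchronizing the rest of the application. The hooks shipped in this repository's charts take care of the actual copy, so the Job below is only a hypothetical sketch of the pattern (the name, service account, and target namespace are placeholders):

```yaml
# Illustrative PreSync hook: a Job that re-creates ibm-entitlement-key from
# the openshift-gitops namespace inside the Cloud Pak namespace before the app syncs.
apiVersion: batch/v1
kind: Job
metadata:
  name: copy-entitlement-key            # placeholder name, not the repo's actual hook
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      serviceAccountName: secret-copier  # placeholder; needs RBAC to read/write Secrets
      restartPolicy: Never
      containers:
        - name: copy
          image: registry.redhat.io/openshift4/ose-cli:latest
          command:
            - /bin/bash
            - -c
            - |
              # Extract the pull secret and apply it to the target namespace (cp4i here)
              oc get secret ibm-entitlement-key -n openshift-gitops \
                -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d > /tmp/dockerconfig.json
              oc create secret docker-registry ibm-entitlement-key \
                --from-file=.dockerconfigjson=/tmp/dockerconfig.json \
                --namespace=cp4i \
                --dry-run=client -o yaml | oc apply -f -
```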
Once Argo completes synchronizing the applications, your cluster will have policies, placement rules, and placement bindings to deploy Cloud Paks to matching clusters.
- `openshift-gitops-argo-app`: Configures an Argo server with custom health checks for Cloud Paks.
- `openshift-gitops-cloudpaks-cp-shared`: Deploys common Cloud Pak prerequisites.
- `openshift-gitops-cloudpaks-cp4a`: Deploys the Argo applications for Cloud Pak for Business Automation.
- `openshift-gitops-cloudpaks-cp4d`: Deploys the Argo applications for Cloud Pak for Data.
- `openshift-gitops-cloudpaks-cp4aiops`: Deploys the Argo applications for Cloud Pak for AIOps.
- `openshift-gitops-cloudpaks-cp4i`: Deploys the Argo applications for Cloud Pak for Integration.
- `openshift-gitops-cloudpaks-cp4s`: Deploys the Argo applications for Cloud Pak for Security.
- `openshift-gitops-installed`: Deploys OpenShift GitOps.
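After synchronization, you can list the generated policies from the hub cluster to confirm they exist; the fully qualified resource name below avoids ambiguity with other policy CRDs:

```sh
# List the RHACM policies across all namespaces on the hub cluster
oc get policies.policy.open-cluster-management.io -A
```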
Labels:

- `gitops-branch` + `cp4a`: Placement for Cloud Pak for Business Automation.
- `gitops-branch` + `cp4d`: Placement for Cloud Pak for Data.
- `gitops-branch` + `cp4i`: Placement for Cloud Pak for Integration.
- `gitops-branch` + `cp4s`: Placement for Cloud Pak for Security.
- `gitops-branch` + `cp4aiops`: Placement for Cloud Pak for AIOps.
- `gitops-remote` + `true`: Assigns the cluster to the `gitops-cluster` cluster set, registering it to the GitOps Cluster.
Values for each label:

- `gitops-branch`: Branch of this repo for the Argo applications. Unless you are developing and testing on a new branch, use the default value `main`.
- `cp4a`: Namespace for deploying the Cloud Pak.
- `cp4aiops`: Namespace for deploying the Cloud Pak.
- `cp4d`: Namespace for deploying the Cloud Pak.
- `cp4i`: Namespace for deploying the Cloud Pak.
- `cp4s`: Namespace for deploying the Cloud Pak.
Labeling an OCP cluster with `gitops-branch=main` and `cp4i=cp4ins` deploys the following policies to a target cluster:

- `openshift-gitops-installed`
- `openshift-gitops-argo-app`
- `openshift-gitops-cloudpaks-cp-shared`
- `openshift-gitops-cloudpaks-cp4i`
With those labels, the policies have the following effect on the target cluster:

- `openshift-gitops-installed`: The latest version of the OpenShift GitOps operator.
- `openshift-gitops-argo-app`: The Argo configuration is pulled from the `main` branch of this repository.
- `openshift-gitops-cloudpaks-cp-shared`: The Argo configuration is pulled from this repository's `main` branch.
- `openshift-gitops-cloudpaks-cp4i`: The Cloud Pak is deployed to the namespace `cp4ins`.
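The labels in this example go on the corresponding `ManagedCluster` resource on the hub cluster. As a sketch, assuming the managed cluster is registered under the hypothetical name `my-cluster`:

```sh
# Hub cluster: "my-cluster" is a placeholder for the ManagedCluster name
oc label managedcluster my-cluster \
  gitops-branch=main \
  cp4i=cp4ins
```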
The repository creates the roles and role bindings for a "rhacm-users" user group.
Users in that group are granted permission to manage clusters in the "default" cluster set but WITHOUT the permission to manage cloud credentials. That arrangement is ideal for environments where a set of people manages the clusters but not necessarily the underlying cloud accounts.

Refer to OpenShift's documentation for more information on user management, such as configuring identity providers and adding users to the OpenShift cluster.

Once you have the respective users added to the cluster, you can add them to the group via the OCP console, using the "Add users" option in the panel for the user group (under "User Management" -> "Groups" in the left navigation bar), or using the following command from a terminal window:

```sh
oc adm groups add-users rhacm-users "${username:?}"
```
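If you want to confirm the membership afterward, you can inspect the group object:

```sh
# Lists the users currently in the rhacm-users group
oc get group rhacm-users -o yaml
```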
If you used the approach where OpenShift GitOps is not installed on the same server as RHACM, fork this repository and use the resulting clone URI in the instructions above.
If you are using OpenShift GitOps installed on the RHACM server, you need to modify the settings of the Argo application to reference a fork of this repository instead of using the default reference to this repository.
The instructions for that setup are documented in the CONTRIBUTING.md page, where you need to ensure you use the `rhacm-app` application name as the parameter for the `argocd app set` commands.
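As a rough sketch of that adjustment, `argocd app set` can point the application at your fork; the fork URL and branch below are placeholders, and CONTRIBUTING.md remains the authoritative reference for the exact parameters:

```sh
# Placeholders: replace the fork URL and branch with your own values
argocd app set rhacm-app \
  --repo https://github.com/<your-org>/cloudpak-gitops \
  --revision <your-branch> \
  --helm-set repoURL=https://github.com/<your-org>/cloudpak-gitops \
  --helm-set targetRevision=<your-branch>
```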