- AWS CLI is installed and configured with access and secret keys
- OC CLI is installed and you are logged in to OpenShift
- Git CLI is installed and you are authenticated with GitHub
- Fork the following repo to your GitHub account: https://github.com/open-sudo/rosa-idp.git
- Clone the repo you just forked:
git clone https://github.com/<YOUR GIT USER NAME>/rosa-idp.git
- Execute the deployment script
cd rosa-idp
./deploy.sh
The deploy.sh script does three things:
- Modifies argocd/root-application.yaml to insert the actual cluster name, the AWS account ID, and the region.
- Modifies all files under argocd/applications/templates so they point to the forked repo instead of open-sudo.
- Executes the CloudFormation scripts and waits for their completion.
- Once all stacks are CREATE_COMPLETE, push the modified codebase to your GitHub repo:
git push
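For reference, the substitutions deploy.sh performs can be sketched with sed. This is a minimal local illustration, not the script itself: the placeholder tokens, variable names, and file contents below are assumptions, so check the actual script for the exact tokens it replaces.

```shell
# Create a sample manifest with placeholder tokens
# (the real placeholder names in the repo may differ).
cat > /tmp/root-application.yaml << 'EOF'
clusterName: __CLUSTER_NAME__
awsAccountId: __AWS_ACCOUNT_ID__
region: __REGION__
repoURL: https://github.com/open-sudo/rosa-idp.git
EOF

CLUSTER_NAME=my-cluster
AWS_ACCOUNT_ID=123456789012
REGION=us-east-1
GIT_USER=my-git-user

# Insert the actual cluster name, account ID, and region
sed -i "s/__CLUSTER_NAME__/${CLUSTER_NAME}/g" /tmp/root-application.yaml
sed -i "s/__AWS_ACCOUNT_ID__/${AWS_ACCOUNT_ID}/g" /tmp/root-application.yaml
sed -i "s/__REGION__/${REGION}/g" /tmp/root-application.yaml
# Point the repo URL at the fork instead of open-sudo
sed -i "s|github.com/open-sudo|github.com/${GIT_USER}|g" /tmp/root-application.yaml

cat /tmp/root-application.yaml
```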
- Deploy all resources
oc apply -f ./argocd/operator.yaml
oc apply -f ./argocd/rbac.yaml
oc get route openshift-gitops-server -n openshift-gitops # repeat until this returns a route
oc apply -f ./argocd/argocd.yaml
oc apply -f ./argocd/root-application.yaml
To configure a second cluster to use the same repo, simply log in to that cluster via the CLI and repeat steps 2 to 5.
Use the following steps to validate your deployment.
Get your ArgoCD URL:
oc get routes openshift-gitops-server -n openshift-gitops
Log in to ArgoCD by selecting "Log in via OpenShift". Validate that all tasks are synced and healthy.
Validate that all stacks were executed successfully.
aws cloudformation list-stacks | head -70
Log in to the CloudFormation console and explore the last four stacks that were created. Also review the resources that were created by the stacks: roles, credentials, policies, etc.
Validate that log streams have been created in CloudWatch for your cluster:
aws logs describe-log-groups --log-group-name-prefix rosa-${CLUSTER_NAME}
Validate that a secret store was created to allow sending metrics to CloudWatch:
oc get secretstore -n amazon-cloudwatch
You should see the following result:
NAME                                   AGE     STATUS   CAPABILITIES   READY
rosa-cloudwatch-metrics-secret-store   4m36s   Valid    ReadWrite      True
The following command shows further success:
oc get externalsecret rosa-cloudwatch-metrics-credentials-${CLUSTER_NAME} -n amazon-cloudwatch
The following result is expected, with status SecretSynced and readiness set to True:
NAME                                                  STORE                                  REFRESH INTERVAL   STATUS         READY
rosa-cloudwatch-metrics-credentials-${CLUSTER_NAME}   rosa-cloudwatch-metrics-secret-store   1m                 SecretSynced   True
Validate that the external secret was converted into a secret called aws-credentials:
oc get secret aws-credentials -n amazon-cloudwatch
The credentials used in your cluster are all kept in AWS Secrets Manager. Log in to Secrets Manager and validate that you can see an entry called rosa-cloudwatch-metrics-credentials-${CLUSTER_NAME}. Retrieve its value and apply a base64 decoder to it. The result should be of the form:
[AmazonCloudWatchAgent]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
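The decode step can be reproduced locally. Here AWS's documented example keys stand in for the real secret value, and the encoding step only simulates what Secrets Manager returns:

```shell
# Base64-encode a sample credentials payload (using AWS's
# documented example keys) to simulate the stored value...
ENCODED=$(printf '%s\n' \
  '[AmazonCloudWatchAgent]' \
  'aws_access_key_id = AKIAIOSFODNN7EXAMPLE' \
  'aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  | base64)

# ...then decode it, as you would the value retrieved
# from Secrets Manager.
echo "${ENCODED}" | base64 -d
```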
Compare this value to the aws-credentials secret mentioned above; they should be identical. These credentials are accessible only to the service account named iam-external-secrets-sa, which runs within the project called amazon-cloudwatch. The policy that grants permission to this service account is registered in AWS IAM. Look for the role called -RosaClusterSecrets. It should have a policy called ExternalSecretCloudwatchCredentials. Open it and review its content.
To test the Camel K deployment, download the Camel K CLI that matches your operating system. It is important for the client's version number to match the operator's version number. Create a file called Hello.groovy with the following content:
from("platform-http:/")
.setBody(constant("Hello from CamelK!"));
At the CLI, run the following commands:
oc new-project camel-examples
kamel run Hello.groovy
This will deploy your route. To check its status, execute the following commands:
kamel get hello
oc get routes
Once you get a route, invoke it using curl. Be sure to use http instead of https.
Download the dashboard JSON file and customize it with the following commands:
sed -i "s/__CLUSTER_NAME__/$YOUR_CLUSTER_NAME/g" dashboard.json
sed -i "s/__REGION_NAME__/$YOUR_CLUSTER_REGION/g" dashboard.json
Use the following command to create a dashboard in CloudWatch:
aws cloudwatch put-dashboard --dashboard-name "ROSAMetricsDashboard" --dashboard-body file://dashboard.json
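If you want to sanity-check the substitution before uploading, the placeholder replacement can be exercised on a minimal dashboard body. The widget content below is illustrative only, not the repo's actual dashboard.json:

```shell
# A minimal dashboard body using the same placeholder tokens
# as the repo's dashboard.json (widget content is illustrative).
cat > dashboard.json << 'EOF'
{
  "widgets": [
    {
      "type": "text",
      "x": 0, "y": 0, "width": 24, "height": 2,
      "properties": {
        "markdown": "# Metrics for __CLUSTER_NAME__ in __REGION_NAME__"
      }
    }
  ]
}
EOF

YOUR_CLUSTER_NAME=my-cluster
YOUR_CLUSTER_REGION=us-east-1

# Same substitutions as above
sed -i "s/__CLUSTER_NAME__/$YOUR_CLUSTER_NAME/g" dashboard.json
sed -i "s/__REGION_NAME__/$YOUR_CLUSTER_REGION/g" dashboard.json

cat dashboard.json
```

After the two sed commands, no `__...__` placeholders should remain in the file.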
Finally, log in to CloudWatch to review your dashboard and your cluster metrics.
Get the URL to access MTA:
oc get routes -n mta
The initial credentials for MTA are admin/Passw0rd!. You will be required to change the password.
To test this module, create three projects of different sizes: small, medium, and large:
cat << EOF | oc apply -f -
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: cat-project
  labels:
    size: small
EOF
cat << EOF | oc apply -f -
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: tiger-project
  labels:
    size: medium
EOF
cat << EOF | oc apply -f -
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: elephant-project
  labels:
    size: large
EOF
Now, describe each project with the following command:
oc describe project <PROJECT NAME>
You should see project quotas of 10Gi, 30Gi, and 50Gi respectively. For example, for tiger-project:
Quota:
 Name:                       tiger-project-quota
 Resource                    Used  Hard
 --------                    ----  ----
 requests.cpu                0     30
 requests.ephemeral-storage  0     30Gi
 requests.memory             0     30Gi
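The quota shown above corresponds to a ResourceQuota object of roughly this shape. This is a sketch reconstructed from the describe output, not the repo's actual manifest, so field values and the object name may differ:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tiger-project-quota
  namespace: tiger-project
spec:
  hard:
    requests.cpu: "30"
    requests.ephemeral-storage: 30Gi
    requests.memory: 30Gi
```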
Log in to GitLab. Click on Applications in the User Settings menu. Then create an application, making sure the openid scope is selected and using the URL produced by the following command as the redirect link:
echo `oc whoami --show-console | cut -c35-` | awk '{print "https://oauth-openshift."$1"/oauth2callback/GitLab"}'
Copy the client ID and secret that are provided and populate the following environment variables:
export GITLAB_CLIENT_ID=<your client id>
export GITLAB_CLIENT_SECRET=<your client secret>
Next, copy and paste the following command to create a secret:
oc create secret generic gitlab-oauth-client-secret --from-literal=clientID=${GITLAB_CLIENT_ID} --from-literal=clientSecret=${GITLAB_CLIENT_SECRET} -n openshift-config
Wait a few minutes and go to your OpenShift console to validate that you can now log in with GitLab.