Cluster Keeper provides a CLI for managing usage of multiple OpenShift clusters via Hive `ClusterPools`, `ClusterClaims`, and `ClusterDeployments`.
It is compatible with scheduled hibernation provided by hibernate-cronjob.
With the `ck` CLI you can:
- List and get ClusterPools, ClusterClaims, and ClusterDeployments
- Create and delete clusters
- Run and hibernate clusters manually
- Lock clusters to temporarily disable scheduled hibernation and other disruptive actions
- Switch your kubeconfig context between clusters or run a single command in a given context
- Launch the OpenShift or Advanced Cluster Management consoles and have the password automatically copied to the clipboard for easy log-in
When any command requires communication with a cluster, Cluster Keeper resumes the cluster if it is hibernating (unless the cluster is locked).

Except for the `ck use` command, Cluster Keeper never changes your current kubeconfig context. However, Cluster Keeper creates a kubeconfig context for each cluster, named after its ClusterClaim. For any command that takes the name of a ClusterClaim, Cluster Keeper infers the name from the current context if it is not provided.
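For example, context handling might look like the following sketch; the claim name `my-claim` is hypothetical, and only `ck use` changes your current context:

```bash
# List your ClusterClaims; this does not change your current kubeconfig context
ck list cc

# Switch your kubeconfig context to the cluster behind the claim "my-claim" (hypothetical name)
ck use my-claim

# Confirm which context is now current; commands that take a ClusterClaim name
# can omit it, and Cluster Keeper infers "my-claim" from this context
oc config current-context
```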
Cluster Keeper leverages Lifeguard for many functions, but it sets the environment variables for you and does not require you to change directories.
- Clone the repository. For example:

  ```
  git clone git@github.com:open-cluster-management/cluster-keeper.git
  ```

- (Optional, but highly recommended) Create a symlink to `ck` on your path. For example:

  ```
  ln -s $(pwd)/ck /usr/local/bin/ck
  ```
- Make sure you have all the dependencies (a quick way to check them is sketched after this list):
  - `bash` version 4 or newer
    - on macOS with Homebrew installed, run `brew install bash`. This bash must be first in your path, but need not be `/bin/bash` or your default login shell.
  - `oc` version 4.3 or newer
  - `jq`
    - on macOS with Homebrew installed, run `brew install jq`
  - `gsed`
    - on macOS with Homebrew installed, run `brew install gnu-sed`
  - Other projects from the `open-cluster-management` organization. (If you have `git` configured for CLI access, these will be automatically cloned to the `dependencies/` directory. Otherwise, you can manually clone these projects to the same directory where you cloned `cluster-keeper`.)
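A quick way to check the core dependencies, assuming they are already on your `PATH` (exact output depends on your system):

```bash
bash --version        # should report version 4 or newer
oc version --client   # should report version 4.3 or newer
jq --version
gsed --version        # macOS; on most Linux distributions, sed is already GNU sed
```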
In your clone of `cluster-keeper`, create a `user.env` file. Each line in this file has the form `VARIABLE=value` and will be sourced directly. You must set the required variables listed below; an example file is sketched after the tables.
| Name | Description |
|---|---|
| `CLUSTERPOOL_CLUSTER` | The API address of the cluster where your ClusterPools are defined. Also referred to as the "ClusterPool host" |
| `CLUSTERPOOL_CONSOLE` | The URL of the OpenShift console for the ClusterPool host |
| `CLUSTERPOOL_TARGET_NAMESPACE` | Namespace where ClusterPools are defined |
| `CLUSTERCLAIM_GROUP_NAME` | Name of a Group (`user.openshift.io/v1`) that should be added to each ClusterClaim for team access |
You can also override the following optional variables, shown with their defaults:

| Name | Default | Description |
|---|---|---|
| `AUTO_HIBERNATION` | `true` | If the value is `true`, all new clusters are configured to opt in to hibernation by hibernate-cronjob |
| `CLUSTER_WAIT_MAX` | `60` | Maximum wait time in minutes for a ClusterDeployment to be assigned to the ClusterClaim when requesting a new cluster |
| `HIBERNATE_WAIT_MAX` | `15` | Maximum wait time in minutes for a cluster to resume from hibernation |
| `VERBOSITY` | `0` | Default verbosity level |
| `COMMAND_VERBOSITY` | `2` | Verbosity level at which commands are logged |
| `OUTPUT_VERBOSITY` | `3` | Verbosity level at which command output is logged |
| `CLUSTERPOOL_CONTEXT_NAME` | `ck` | Context name for the ClusterPool host itself |
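For illustration, a `user.env` might look like the sketch below; every value shown (API address, console URL, namespace, and group name) is a placeholder to replace with your own settings:

```bash
# Required
CLUSTERPOOL_CLUSTER=https://api.my-hub.example.com:6443
CLUSTERPOOL_CONSOLE=https://console-openshift-console.apps.my-hub.example.com
CLUSTERPOOL_TARGET_NAMESPACE=my-clusterpools
CLUSTERCLAIM_GROUP_NAME=my-team

# Optional overrides (defaults shown in the table above)
AUTO_HIBERNATION=true
CLUSTERPOOL_CONTEXT_NAME=ck
```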
On first use, Cluster Keeper will check if you are logged in to the ClusterPool host. If not, you will be prompted to log in and the OpenShift console will be opened. Copy the log-in command, run it in your terminal, then try your `ck` command again. Cluster Keeper will create a `ServiceAccount` on the ClusterPool host for you, then update your kubeconfig with a context for the ClusterPool host that uses this `ServiceAccount`. By default, the context is named `ck`. Now you can execute `ck` commands without needing to continually log in to the ClusterPool host.
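A rough sketch of what first use might look like; the token and server values are placeholders copied from the console's log-in command page:

```bash
# Any ck command triggers the login check
ck list pools

# If you are not logged in, the ClusterPool host console opens; copy the log-in
# command from the console and run it (placeholder values shown here)
oc login --token=<token-from-console> --server=https://api.<clusterpool-host>:6443

# Rerun the original command to let Cluster Keeper finish its setup
ck list pools
```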
Online help is available directly from the CLI using the global `-h` option.
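For example:

```bash
# Print the top-level help text
ck -h
```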
If you need to change the namespace or cluster that is hosting your ClusterPools, you can do the following:

- Delete the `ck` context (assuming you have not customized the context name with the `CLUSTERPOOL_CONTEXT_NAME` variable): `oc config delete-context ck`
- Update your `user.env` file, changing the `CLUSTERPOOL_TARGET_NAMESPACE` or `CLUSTERCLAIM_GROUP_NAME` variables, for example.
- Run a cluster-keeper command such as `ck list pools`, which will open the ClusterPool host console in your browser. Copy the login command, paste and run it in your shell, then rerun the `ck list pools` command to complete setup.
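Put together, those steps might look like this sketch (the editor and the variable you change are up to you):

```bash
# 1. Delete the existing ClusterPool host context
oc config delete-context ck

# 2. Edit user.env, for example to point CLUSTERPOOL_TARGET_NAMESPACE at a different namespace
vi user.env

# 3. Rerun a cluster-keeper command; log in via the console when prompted, then rerun it
ck list pools
```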
If you frequently use different ClusterPool host settings, you can set up multiple copies of cluster-keeper, each using a different context name for the ClusterPool host.

- Clone a copy of cluster-keeper. Create the `user.env` file and use the `CLUSTERPOOL_CONTEXT_NAME` variable to define a unique context name. For example: `CLUSTERPOOL_CONTEXT_NAME=ck:dev`
- Create a unique symlink for this copy of cluster-keeper: `ln -s $(pwd)/ck /usr/local/bin/ck:dev`
- Now you can run commands like `ck:dev list cc` to see ClusterClaims in your "dev" environment or `ck:dev use ck:dev` to work with the ClusterPool host directly.
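A condensed sketch of that setup, assuming the second clone lives in `~/cluster-keeper-dev` (the directory name and the `ck:dev` context name are only examples):

```bash
# Clone a second copy for the "dev" ClusterPool host
git clone git@github.com:open-cluster-management/cluster-keeper.git ~/cluster-keeper-dev
cd ~/cluster-keeper-dev

# Give this copy its own context name, then add the required variables for the dev host
echo 'CLUSTERPOOL_CONTEXT_NAME=ck:dev' >> user.env

# Create a unique symlink for this copy
ln -s $(pwd)/ck /usr/local/bin/ck:dev

# Work with the dev environment
ck:dev list cc
```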