
Commit

docs(deployment): updates the deployment prep content and download references (#10638)

Signed-off-by: prmellor <[email protected]>
PaulRMellor authored Sep 25, 2024
1 parent e681399 commit 18b9e53
Showing 21 changed files with 73 additions and 121 deletions.
11 changes: 0 additions & 11 deletions documentation/assemblies/deploying/assembly-deploy-options.adoc

This file was deleted.

@@ -3,14 +3,14 @@
// deploying/deploying.adoc

[id="deploy-tasks-prereqs_{context}"]
= Preparing for your Strimzi deployment
= Preparing for your deployment

[role="_abstract"]
Prepare for a deployment of Strimzi by completing any necessary pre-deployment tasks according to your specific requirements, such as the following:

* xref:deploy-prereqs-{context}[Ensuring you have the necessary prerequisites before deploying Strimzi]
* xref:downloads-{context}[Downloading the Strimzi release artifacts to facilitate your deployment]
* xref:con-deploy-operator-best-practices-{context}[Considering operator deployment best practices]
* xref:container-images-{context}[Pushing the Strimzi container images into your own registry (if required)]; see the sketch after this list
* xref:adding-users-the-strimzi-admin-role-{context}[Setting up admin roles to enable configuration of custom resources used in the deployment]
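
For example, mirroring the Cluster Operator image into a private registry might look like the following sketch (the registry hostname and image tag are placeholders; use the image versions that match your Strimzi release):

[source,shell]
----
# Pull the Cluster Operator image from the public Strimzi repository
docker pull quay.io/strimzi/operator:latest

# Retag it for your own registry (hostname is a placeholder)
docker tag quay.io/strimzi/operator:latest registry.example.com/strimzi/operator:latest

# Push the retagged image so your deployment can pull it from your registry
docker push registry.example.com/strimzi/operator:latest
----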

@@ -20,8 +20,6 @@ NOTE: To run the commands in this guide, your cluster user must have the rights
include::../../modules/deploying/con-deploy-prereqs.adoc[leveloffset=+1]
//operator deployment tips
include::../../modules/deploying/con-deploy-operator-best-practices.adoc[leveloffset=+1]
//How to access release artifacts
include::../../modules/deploying/con-deploy-product-downloads.adoc[leveloffset=+1]
//Container images
include::../../modules/deploying/con-deploy-container-images.adoc[leveloffset=+1]
//Designating administrators to manage the install process
7 changes: 2 additions & 5 deletions documentation/assemblies/deploying/assembly-deploy-tasks.adoc
@@ -3,11 +3,10 @@
// deploying/deploying.adoc

[id="deploy-tasks_{context}"]
= Deploying Strimzi using installation artifacts
= Deploying Strimzi using installation files

[role="_abstract"]
Having xref:deploy-tasks-prereqs_{context}[prepared your environment for a deployment of Strimzi], you can deploy Strimzi to a Kubernetes cluster.
Use the installation files provided with the release artifacts.
Download and use the Strimzi xref:downloads-{context}[deployment files] to deploy Strimzi components to a Kubernetes cluster.

ifdef::Section[]
You can deploy Strimzi {ProductVersion} on Kubernetes {KubernetesVersion}.
@@ -32,8 +31,6 @@ The steps to deploy Strimzi using the installation files are as follows:

NOTE: To run the commands in this guide, a Kubernetes user must have the rights to manage role-based access control (RBAC) and CRDs.
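
One way to confirm those rights before you start is to query the cluster directly (a quick sketch; run it as the user that will perform the deployment):

[source,shell]
----
# Check that the current user can manage CRDs and cluster-wide RBAC
kubectl auth can-i create customresourcedefinitions
kubectl auth can-i create clusterroles
kubectl auth can-i create clusterrolebindings
----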

//Deployment paths
include::../../modules/deploying/con-deploy-paths.adoc[leveloffset=+1]
//Options and instructions for deploying Cluster Operator
include::assembly-deploy-cluster-operator.adoc[leveloffset=+1]
//Options and instructions for deploying Kafka resource
@@ -57,13 +57,6 @@ webhooks:
# ...
----

[id='drain-cleaner-prereqs-{context}']
== Downloading the Strimzi Drain Cleaner deployment files

To deploy and use the Strimzi Drain Cleaner, you need to download the deployment files.

The Strimzi Drain Cleaner deployment files are available from the link:{ReleaseDownload}.

//steps for deploying drain cleaner
include::../../modules/drain-cleaner/proc-drain-cleaner-deploying.adoc[leveloffset=+1]
ifdef::Section[]
17 changes: 15 additions & 2 deletions documentation/assemblies/overview/assembly-kafka-components.adoc
@@ -5,7 +5,20 @@
[id="kafka-components_{context}"]
= Strimzi deployment of Kafka

//standard kafka deployment intro
include::../../shared/snip-intro-kafka-deployment.adoc[leveloffset=+1]
Strimzi enables the deployment of Apache Kafka components to a Kubernetes cluster, typically running as clusters for high availability.

A standard Kafka deployment using Strimzi might include the following components:

* *Kafka* cluster of broker nodes as the core component
* *Kafka Connect* cluster for external data connections
* *Kafka MirrorMaker* cluster to mirror data to another Kafka cluster
* *Kafka Exporter* to extract additional Kafka metrics data for monitoring
* *Kafka Bridge* to enable HTTP-based communication with Kafka
* *Cruise Control* to rebalance topic partitions across brokers

Not all of these components are required, though you need Kafka as a minimum for a Strimzi-managed Kafka cluster.
Depending on your use case, you can deploy the additional components as needed.
These components can also be used with Kafka clusters that are not managed by Strimzi.
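
As a minimal illustration, the core component on its own can be declared with a single `Kafka` custom resource (a sketch only, assuming a ZooKeeper-based cluster with ephemeral storage; the namespace, cluster name, and sizing are placeholders):

[source,shell]
----
# Create a minimal Kafka cluster plus the Topic and User Operators
# (illustrative only; production clusters need persistent storage and tuning)
kubectl apply -n my-kafka-namespace -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF
----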

//Overview of Kafka component interaction
include::../../modules/overview/con-kafka-concepts-components.adoc[leveloffset=+1]
@@ -31,7 +31,7 @@ If you deployed the Cluster Operator using a Helm chart, use `helm upgrade`.

The `helm upgrade` command does not upgrade the {HelmCustomResourceDefinitions}.
Install the new CRDs manually after upgrading the Cluster Operator.
You can access the CRDs from the {ReleaseDownload} or find them in the `crd` subdirectory inside the Helm Chart.
You can download the CRDs from the {ReleaseDownload} or find them in the `crd` subdirectory inside the Helm Chart.
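
A typical sequence might look like the following sketch (the release name, chart reference, and chart directory are placeholders for your own Helm setup):

[source,shell]
----
# Upgrade the Cluster Operator release (chart reference is a placeholder)
helm upgrade strimzi-cluster-operator strimzi/strimzi-kafka-operator

# Install the new CRDs manually, for example from the chart's crd subdirectory
kubectl replace -f ./strimzi-kafka-operator/crd/
----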

[id='con-upgrade-cluster-operator-unsupported-kafka-{context}']
== Upgrading the Cluster Operator returns Kafka version error
4 changes: 1 addition & 3 deletions documentation/assemblies/upgrading/assembly-upgrade.adoc
@@ -6,7 +6,7 @@
= Upgrading Strimzi

[role="_abstract"]
Upgrade your Strimzi installation to version {ProductVersion} and benefit from new features, performance improvements, and enhanced security options.
Download the latest Strimzi xref:downloads-{context}[deployment files] and upgrade your Strimzi installation to version {ProductVersion} to benefit from new features, performance improvements, and enhanced security options.
During the upgrade, Kafka is also updated to the latest supported version, introducing additional features and bug fixes to your Strimzi deployment.

Use the same method to upgrade the Cluster Operator as the initial method of deployment.
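
For a deployment that was installed from the installation files, the upgrade typically amounts to re-applying the new release's Cluster Operator files (a sketch; the namespace and paths are placeholders):

[source,shell]
----
# Point the new installation files at the existing Cluster Operator namespace
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

# Replace the existing Cluster Operator resources with the new versions
kubectl replace -f install/cluster-operator -n my-cluster-operator-namespace
----
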
@@ -16,8 +16,6 @@ Kafka upgrades are performed by the Cluster Operator through rolling updates of

If you encounter any issues with the new version, Strimzi can be xref:assembly-downgrade-{context}[downgraded] to the previous version.

Released Strimzi versions can be found at {ReleaseDownload}.

.Upgrade without downtime

For topics configured with high availability (replication factor of at least 3 and evenly distributed partitions), the upgrade process should not cause any downtime for consumers and producers.
10 changes: 6 additions & 4 deletions documentation/deploying/deploying.adoc
@@ -8,14 +8,16 @@ include::shared/attributes.adoc[]

//Introduction to the install process
include::assemblies/deploying/assembly-deploy-intro.adoc[leveloffset=+1]
//Using Kafka in Kraft mode
include::assemblies/deploying/assembly-kraft-mode.adoc[leveloffset=+1]
//Install options
include::modules/deploying/con-strimzi-installation-methods.adoc[leveloffset=+1]
//Checklist to show deployment order and the options available
include::assemblies/deploying/assembly-deploy-options.adoc[leveloffset=+1]
//Deployment path
include::modules/deploying/con-deploy-paths.adoc[leveloffset=+1]
//How to access release artifacts
include::modules/deploying/con-deploy-product-downloads.adoc[leveloffset=+1]
//Prep for the deployment
include::assemblies/deploying/assembly-deploy-tasks-prep.adoc[leveloffset=+1]
//Using Kafka in Kraft mode
include::assemblies/deploying/assembly-kraft-mode.adoc[leveloffset=+1]
//Deployment steps using installation artifacts
include::assemblies/deploying/assembly-deploy-tasks.adoc[leveloffset=+1]
//Deployment using operatorhub.io
2 changes: 1 addition & 1 deletion documentation/modules/configuring/con-config-examples.adoc
@@ -7,7 +7,7 @@

[role="_abstract"]
Further enhance your deployment by incorporating additional supported configuration.
Example configuration files are provided with the downloadable release artifacts from the {ReleaseDownload}.
Example configuration files are included in the Strimzi xref:downloads-{context}[deployment files].
ifdef::Section[]
You can also access the example files directly from the
link:https://github.com/strimzi/strimzi-kafka-operator/tree/{GithubVersion}/examples/[`examples` directory^].
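
If you prefer working from the repository rather than the release archive, the examples can also be fetched with a plain `git clone` (a sketch; the tag shown is a placeholder for the release you are using):

[source,shell]
----
# Clone the release tag and browse the example custom resources
git clone --branch 0.43.0 --depth 1 https://github.com/strimzi/strimzi-kafka-operator.git
ls strimzi-kafka-operator/examples/
----
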
@@ -12,7 +12,7 @@ In order to prevent issues arising when client consumer requests are processed b
Additionally, each independent Kafka Bridge instance must have a replica.
A Kafka Bridge instance has its own state, which is not shared with other instances.

For a deeper understanding of the Kafka Bridge and its cluster configuration options, refer to the link:{BookURLBridge}[Using the Kafka Bridge^] and the {BookURLConfiguring}[Strimzi Custom Resource API Reference^].
For a deeper understanding of the Kafka Bridge and its cluster configuration options, refer to the link:{BookURLBridge}[Using the Kafka Bridge^] guide and the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^].

.Example `KafkaBridge` custom resource configuration
[source,yaml,subs="+quotes,attributes"]
@@ -16,12 +16,12 @@ If brokers are running on nodes with heterogeneous network resources, you can us

If an empty object (`{}`) is used for the `cruiseControl` configuration, all properties use their default values.

Strimzi provides xref:config-examples-{context}[example configuration files], which include `Kafka` custom resources with Cruise Control configuration.
For more information on the configuration options for Cruise Control, see the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^].
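
For example, enabling Cruise Control with all defaults is a matter of adding the empty object to an existing `Kafka` resource (a sketch; the cluster name and namespace are placeholders):

[source,shell]
----
# Add an empty cruiseControl object so all properties use their defaults
kubectl patch kafka my-cluster -n my-kafka-namespace --type merge -p '{"spec":{"cruiseControl":{}}}'
----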

.Prerequisites

* A Kubernetes cluster
* A running Cluster Operator
* xref:deploying-cluster-operator-str[The Cluster Operator must be deployed.]

.Procedure

23 changes: 0 additions & 23 deletions documentation/modules/deploying/con-deploy-options-order.adoc

This file was deleted.

28 changes: 16 additions & 12 deletions documentation/modules/deploying/con-deploy-paths.adoc
@@ -3,22 +3,26 @@
// deploying/assembly_deploy-tasks.adoc

[id='con-deploy-paths-{context}']
= Basic deployment path
= Deployment path

[role="_abstract"]
You can set up a deployment where Strimzi manages a single Kafka cluster in the same namespace.
You might use this configuration for development or testing.
Or you can use Strimzi in a production environment to manage a number of Kafka clusters in different namespaces.
You can configure a deployment where Strimzi manages a single Kafka cluster in the same namespace, suitable for development or testing.
Alternatively, Strimzi can manage multiple Kafka clusters across different namespaces in a production environment.

The basic deployment path is as follows:
The basic deployment path includes the following steps:

. xref:downloads-{context}[Download the release artifacts]
. Create a Kubernetes namespace in which to deploy the Cluster Operator
. xref:cluster-operator-{context}[Deploy the Cluster Operator]
.. Update the `install/cluster-operator` files to use the namespace created for the Cluster Operator
.. Install the Cluster Operator to watch one, multiple, or all namespaces
. xref:kafka-cluster-{context}[Create a Kafka cluster]
. Create a Kubernetes namespace for the Cluster Operator.
. Deploy the Cluster Operator based on your chosen deployment method.
. Deploy the Kafka cluster, including the Topic Operator and User Operator if desired.
. Optionally, deploy additional components:
** The Topic Operator and User Operator as standalone components, if not deployed with the Kafka cluster
** Kafka Connect
** Kafka MirrorMaker
** Kafka Bridge
** Metrics monitoring components

After which, you can deploy other Kafka components and set up monitoring of your deployment.
The Cluster Operator creates Kubernetes resources such as `Deployment`, `Service`, and `Pod` for each component.
The resource names are appended with the name of the deployed component.
For example, a Kafka cluster named `my-kafka-cluster` will have a service named `my-kafka-cluster-kafka`.
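
Put together, the basic path can be sketched with a handful of `kubectl` commands (the namespace, file paths, and example Kafka resource are placeholders; adjust them to your own deployment method and configuration):

[source,shell]
----
# 1. Create a namespace for the Cluster Operator
kubectl create namespace kafka

# 2. Deploy the Cluster Operator from the downloaded installation files
#    (after updating the namespace in the RoleBinding files)
kubectl create -f install/cluster-operator -n kafka

# 3. Deploy a Kafka cluster from one of the provided examples
kubectl apply -f examples/kafka/kafka-ephemeral.yaml -n kafka

# Inspect the resources the Cluster Operator created for the cluster
kubectl get deployments,services,pods -l strimzi.io/cluster=my-cluster -n kafka
----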


22 changes: 13 additions & 9 deletions documentation/modules/deploying/con-deploy-product-downloads.adoc
@@ -3,18 +3,22 @@
// deploying/assembly_deploy-tasks-prep.adoc

[id='downloads-{context}']
= Downloading Strimzi release artifacts
= Downloading deployment files

[role="_abstract"]
To use deployment files to install Strimzi, download and extract the files from the {ReleaseDownload}.
To deploy Strimzi components using YAML files, download and extract the latest release archive (`{ReleaseFile}`) from the {ReleaseDownload}.

Strimzi release artifacts include sample YAML files to help you deploy the components of Strimzi to Kubernetes, perform common operations,
and configure your Kafka cluster.
The release archive contains sample YAML files for deploying Strimzi components to Kubernetes using `kubectl`.
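
For example (a sketch; substitute the version number of the release you are installing):

[source,shell]
----
# Download and extract the release archive (version shown is a placeholder)
curl -LO https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.43.0/strimzi-0.43.0.zip
unzip strimzi-0.43.0.zip
cd strimzi-0.43.0
----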

Use `kubectl` to deploy the Cluster Operator from the `install/cluster-operator` folder of the downloaded ZIP file.
For more information about deploying and configuring the Cluster Operator, see xref:cluster-operator-{context}[].
Begin by deploying the Cluster Operator from the `install/cluster-operator` directory to watch a single namespace, multiple namespaces, or all namespaces.

In addition, if you want to use standalone installations of the Topic and User Operators with a Kafka cluster that is not managed by the Strimzi Cluster Operator, you can deploy them from the `install/topic-operator` and `install/user-operator` folders.
In the `install` folder, you can also deploy other Strimzi components, including:

NOTE: Strimzi container images are also available through the {DockerRepository}.
However, we recommend that you use the YAML files provided to deploy Strimzi.
* Strimzi administrator roles (`strimzi-admin`)
* Standalone Topic Operator (`topic-operator`)
* Standalone User Operator (`user-operator`)
* Strimzi Drain Cleaner (`drain-cleaner`)

The `examples` folder xref:config-examples-str[provides examples of Strimzi custom resources] to help you develop your own Kafka configurations.
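
For instance, after adapting one of the example files to your environment, you can apply it directly with `kubectl` (the file name and namespace are illustrative):

[source,shell]
----
# List the available example custom resources
ls examples/

# Apply an adapted example, such as a Kafka Connect cluster definition
kubectl apply -f examples/connect/kafka-connect.yaml -n my-kafka-namespace
----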

NOTE: Strimzi container images are available through the {DockerRepository}, but we recommend using the provided YAML files for deployment.
@@ -3,39 +3,27 @@
// deploying.adoc (downstream)

[id="con-strimzi-installation-methods_{context}"]
= Strimzi installation methods
= Deployment methods

[role="_abstract"]
You can install Strimzi on Kubernetes {KubernetesVersion} in three ways.
You can deploy Strimzi on Kubernetes {KubernetesVersion} using one of the following methods:

[cols="2*",options="header"]
|===

|Installation method
|Description

|xref:deploy-tasks_str[Installation artifacts (YAML files)]
a|Download the release artifacts from the {ReleaseDownload}.

Download the `strimzi-_<version>_.zip` or `strimzi-_<version>_.tar.gz` archive file.
The archive file contains installation artifacts and example configuration files.

Deploy the YAML installation artifacts to your Kubernetes cluster using `kubectl`.
You start by deploying the Cluster Operator from `install/cluster-operator` to a single namespace, multiple namespaces, or all namespaces.

You can also use the `install/` artifacts to deploy the following:

* Strimi administrator roles (`strimzi-admin`)
* A standalone Topic Operator (`topic-operator`)
* A standalone User Operator (`user-operator`)
* Strimzi Drain Cleaner (`drain-cleaner`)

|xref:deploy-tasks_str[Deployment files (YAML files)]
a|xref:downloads-{context}[Download the deployment files] to manually deploy Strimzi components.

|xref:deploying-strimzi-from-operator-hub-str[OperatorHub.io]
|Use the *Strimzi Kafka* operator in the OperatorHub.io to deploy the Cluster Operator. You then deploy Strimzi components using custom resources.
|Deploy the Strimzi Cluster Operator from OperatorHub.io, then deploy Strimzi components using custom resources.

ifdef::Section[]
|xref:deploying-cluster-operator-helm-chart-str[Helm chart]
|Use a Helm chart to deploy the Cluster Operator. You then deploy Strimzi components using custom resources.
|Use a Helm chart to deploy the Cluster Operator, then deploy Strimzi components using custom resources.
endif::Section[]

|===

@@ -22,6 +22,7 @@ A system administrator can designate Strimzi administrators after the Cluster Op

.Prerequisites

* The Strimzi admin deployment files, which are included in the Strimzi xref:downloads-{context}[deployment files].
* The Strimzi Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been xref:cluster-operator-{context}[deployed with the Cluster Operator].

.Procedure
@@ -22,6 +22,7 @@ In this way, you can use Topic Operators with multiple Kafka clusters.

.Prerequisites

* The standalone Topic Operator deployment files, which are included in the Strimzi xref:downloads-{context}[deployment files].
* You are running a Kafka cluster for the Topic Operator to connect to.
+
As long as the standalone Topic Operator is correctly configured for connection,
@@ -24,6 +24,7 @@ In this way, you can use the User Operator with multiple Kafka clusters.

.Prerequisites

* The standalone User Operator deployment files, which are included in the Strimzi xref:downloads-{context}[deployment files].
* You are running a Kafka cluster for the User Operator to connect to.
+
As long as the standalone User Operator is correctly configured for connection,
@@ -16,7 +16,7 @@ For the legacy mode to work, you have to configure the `PodDisruptionBudget` to

.Prerequisites

* You have xref:drain-cleaner-prereqs-str[downloaded the Strimzi Drain Cleaner deployment files].
* The Drain Cleaner deployment files, which are included in the Strimzi xref:downloads-{context}[deployment files].
* You have a highly available Kafka cluster deployment running with Kubernetes worker nodes that you would like to update.
* Topics are replicated for high availability.
+
1 change: 1 addition & 0 deletions documentation/shared/attributes.adoc
@@ -44,6 +44,7 @@

// Source and download links
:ReleaseDownload: https://github.com/strimzi/strimzi-kafka-operator/releases[GitHub releases page^]
:ReleaseFile: strimzi-{ProductVersion}.*
:supported-configurations: https://strimzi.io/downloads/

//Monitoring links
15 changes: 0 additions & 15 deletions documentation/shared/snip-intro-kafka-deployment.adoc

This file was deleted.
