diff --git a/docs/config.toml b/docs/config.toml index f008a2b598..d24f633ed2 100644 --- a/docs/config.toml +++ b/docs/config.toml @@ -178,7 +178,3 @@ no = 'Sorry to hear that. Please - This page describes several core concepts in PipeCD. ---- - -![](/images/architecture-overview.png) -
-Component Architecture
-### Piped
-
-`piped` is a single-binary component that you run as an agent in your cluster or your local network to handle the deployment tasks.
-It can be run inside a Kubernetes cluster by simply starting a Pod or a Deployment.
-This component is designed to be stateless, so it can also be run on a single VM or even your local machine.
-
-### Control Plane
-
-A centralized component that manages deployment data and provides a gRPC API for connecting `piped`s, as well as all web functionalities of PipeCD such as
-authentication, showing deployment list/details, application list/details, delivery insights...
-
-### Project
-
-A project is a logical group of applications to be managed by a group of users.
-Each project can have multiple `piped` instances from different clouds or environments.
-
-There are three types of project roles:
-
-- **Viewer** has read-only permissions for deployments and applications in the project.
-- **Editor** has all viewer permissions, plus permissions for actions that modify state, such as manually triggering/canceling a deployment.
-- **Admin** has all editor permissions, plus permissions for managing project data and project `piped`s.
-
-### Application
-
-A collection of resources (containers, services, infrastructure components...) and configurations that are managed together.
-PipeCD supports multiple kinds of applications such as `KUBERNETES`, `TERRAFORM`, `ECS`, `CLOUDRUN`, `LAMBDA`...
-
-### Application Configuration
-
-A YAML file that defines and configures an application.
-Each application requires one such file in its application directory stored in the Git repository.
-The default file name is `app.pipecd.yaml`.
-
-### Application Directory
-
-A directory in the Git repository containing the application configuration file and application manifests.
-Each application must have one application directory.
-
-### Deployment
-
-A deployment is the process of transitioning a specific application from its current state (running state) to the desired state (the state specified in Git).
-When a deployment succeeds, the running state is in sync with the desired state specified in the target commit.
-
-### Sync Strategy
-
-PipeCD supports three strategies for syncing your application state with its configuration stored in Git:
-- Quick Sync: a fast way to make the running application state the same as its configuration stored in Git. The generated pipeline contains only one predefined `SYNC` stage.
-- Pipeline Sync: syncs the running application state with its Git-stored configuration through a pipeline defined in its application configuration.
-- Auto Sync: based on your application configuration, `piped` decides the best way to sync your application state with its Git-stored configuration.
-
-### Platform Provider
-
-Note: The previous name of this concept was Cloud Provider.
-
-PipeCD supports multiple platforms and multiple kinds of applications.
-A Platform Provider defines which platform or cloud an application should be deployed to, and where on that platform it should run.
-
-Currently, PipeCD supports these five platform providers: `KUBERNETES`, `ECS`, `TERRAFORM`, `CLOUDRUN`, `LAMBDA`.
-
-### Analysis Provider
-An external product that provides metrics/logs to evaluate deployments, such as `Prometheus`, `Datadog`, `Stackdriver`, `CloudWatch` and so on.
-It is mainly used in the [Automated deployment analysis](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) context.
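For illustration, a minimal application configuration for a Kubernetes application might look like the sketch below. This is a non-authoritative example: the application name `demo-app` and the `team: demo` label are placeholders, the stage names follow the canary examples in the examples repository, and the exact fields should be checked against the configuration reference for your PipeCD version. Omitting the `pipeline` block lets `piped` plan a Quick Sync instead of a Pipeline Sync.

``` yaml
# app.pipecd.yaml — illustrative sketch only; verify field names against
# the configuration reference for your PipeCD version.
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  name: demo-app             # placeholder application name
  labels:
    team: demo               # placeholder label
  pipeline:                  # omit this block to use Quick Sync
    stages:
      - name: K8S_CANARY_ROLLOUT
        with:
          replicas: 50%
      - name: WAIT_APPROVAL  # manual approval before promoting
      - name: K8S_PRIMARY_ROLLOUT
      - name: K8S_CANARY_CLEAN
```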
diff --git a/docs/content/en/docs-v0.37.x/contribution-guidelines/_index.md b/docs/content/en/docs-v0.37.x/contribution-guidelines/_index.md deleted file mode 100755 index b47753d9aa..0000000000 --- a/docs/content/en/docs-v0.37.x/contribution-guidelines/_index.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: "Contributor Guide" -linkTitle: "Contributor Guide" -weight: 6 -description: > - This guide is for anyone who want to contribute to PipeCD project. We are so excited to have you! ---- diff --git a/docs/content/en/docs-v0.37.x/contribution-guidelines/architectural-overview.md b/docs/content/en/docs-v0.37.x/contribution-guidelines/architectural-overview.md deleted file mode 100644 index fd2557fabc..0000000000 --- a/docs/content/en/docs-v0.37.x/contribution-guidelines/architectural-overview.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: "Architectural overview" -linkTitle: "Architectural overview" -weight: 3 -description: > - This page describes the architecture of PipeCD. ---- - -![](/images/architecture-overview.png) -
-Component Architecture
- -### Piped - -A single binary component runs in your cluster, your local network to handle the deployment tasks. -It can be run inside a Kubernetes cluster by simply starting a Pod or a Deployment. -This component is designed to be stateless, so it can also be run in a single VM or even your local machine. - -### Control Plane - -A centralized component manages deployment data and provides gPRC API for connecting `piped`s as well as all web-functionalities of PipeCD such as -authentication, showing deployment list/details, application list/details, delivery insights... - -Control Plane contains the following components: -- `server`: a service to provide api for piped, web and serve static assets for web. -- `ops`: a service to provide administrative features for Control Plane owner like adding/managing projects. -- `cache`: a redis cache service for caching internal data. -- `datastore`: data storage for storing deployment, application data - - this can be a fully-managed service such as `Firestore`, `Cloud SQL`... - - or a self-managed such as `MySQL` -- `filestore`: file storage for storing logs, application states - - this can a fully-managed service such as `GCS`, `S3`... - - or a self-managed service such as `Minio` - -For more information, see [Architecture overview of Control Plane](../../user-guide/managing-controlplane/architecture-overview/). diff --git a/docs/content/en/docs-v0.37.x/contribution-guidelines/contributing.md b/docs/content/en/docs-v0.37.x/contribution-guidelines/contributing.md deleted file mode 100644 index 84eaaf95d0..0000000000 --- a/docs/content/en/docs-v0.37.x/contribution-guidelines/contributing.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: "Contributing" -linkTitle: "Contributing" -weight: 1 -description: > - This page describes how to contribute to PipeCD. ---- - -PipeCD is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! - -## Contributor License Agreement - -Contributions to this project must be accompanied by a Contributor License Agreement ("CLA") described at [pipe-cd/pipecd/master/CLA.md](https://github.com/pipe-cd/pipecd/blob/master/CLA.md). You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project. - -You generally only need to sign a CLA once, so if you've already signed one, you probably don't need to do it again. - -In case you have not signed yet, [pipecd-bot](https://github.com/pipecd-bot) will guide you to sign the CLA _when you send the first pull request to [pipe-cd/pipecd](https://github.com/pipe-cd/pipecd) repository_. - -## Creating an issue - -If you've found a problem, please create an issue in the [pipe-cd/pipecd](https://github.com/pipe-cd/pipecd/issues) repository. - -## Creating a pull request - -Look at our [help wanted issues](https://github.com/pipe-cd/pipecd/issues?q=is%3Aissue+is%3Aopen+label%3A"help+wanted") or our [good first issues](https://github.com/pipe-cd/pipecd/issues?q=is%3Aissue+is%3Aopen+label%3A"good+first+issue") for finding some good issues for your first pull request. - -### Small tips - -The pull request title is used to generate our release changelog. Therefore, it would be great if you write the title that is easier to understand from the user's point of view. - -## Code reviews - -All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. 
Consult [GitHub Help](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests) for more information on using pull requests. diff --git a/docs/content/en/docs-v0.37.x/contribution-guidelines/development.md b/docs/content/en/docs-v0.37.x/contribution-guidelines/development.md deleted file mode 100644 index 5e5556a65d..0000000000 --- a/docs/content/en/docs-v0.37.x/contribution-guidelines/development.md +++ /dev/null @@ -1,94 +0,0 @@ ---- -title: "Development" -linkTitle: "Development" -weight: 2 -description: > - This page describes how to build, test PipeCD source code at your local environment. ---- - -## Prerequisites - -- [Go 1.19](https://go.dev/) -- [Docker](https://www.docker.com/) -- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) (If you want to run Control Plane locally) -- [helm 3.8](https://helm.sh/docs/intro/install/) (If you want to run Control Plane locally) - -## Repositories -- [pipecd](https://github.com/pipe-cd/pipecd): contains all source code and documentation of PipeCD project. -- [examples](https://github.com/pipe-cd/examples): contains various generated examples to demonstrate how to use PipeCD. - -## Commands - -- `make build/go`: builds all go modules including pipecd, piped, pipectl. -- `make build/web`: builds the static files for web. - -- `make test/go`: runs all unit tests of go modules. -- `make test/web`: runs all unit tests of web. -- `make test/integration`: runs integration tests. - -- `make run/piped`: runs Piped locally (for more information, see [here](#how-to-run-piped-agent-locally)). -- `make run/site`: runs PipeCD site locally (requires [hugo](https://github.com/gohugoio/hugo) with `_extended` version `0.92.1` or later to be installed). - -- `make gen/code`: generate Go and Typescript code from protos and mock configs. You need to run it if you modified any proto or mock definition files. - -For the full list of available commands, please see the Makefile at the root of repository. - -## How to run Control Plane locally - -1. Start running a Kubernetes cluster - - ``` console - make kind-up - ``` - - Once it is no longer used, run `make kind-down` to delete it. - -2. Install Control Plane into the local cluster - - ``` console - make run/pipecd - ``` - - Once all components are running up, use `kubectl port-forward` to expose the installed Control Plane on your localhost: - - ``` console - kubectl -n pipecd port-forward svc/pipecd 8080 - ``` - -3. Access to the local Control Plane web console - - Point your web browser to [http://localhost:8080](http://localhost:8080) to login with the configured static admin account: project = `quickstart`, username = `hello-pipecd`, password = `hello-pipecd`. - -## How to run Piped agent locally - -1. Prepare the piped configuration file `piped-config.yaml` - -2. Ensure that your `kube-context` is connecting to the right kubernetes cluster - -3. Run the following command to start running `piped` (if you want to connect Piped to a locally running Control Plane, add `INSECURE=true` option) - - ``` console - make run/piped CONFIG_FILE=piped-config.yaml - ``` - -## Docs and workaround with docs - -PipeCD official site contains multiple versions of documentation, all placed under the `/docs/content/en` directory, which are: -- `/docs`: stable version docs, usually synced with the latest released version docs. -- `/docs-dev`: experimental version docs, contains docs for not yet released features or changes. 
-- `/docs-v0.x.x`: contains docs for specified version family (a version family is all versions which in the same major release). - -Basically, we have two simple rules: -- Do not touch to the `/docs` content directly. -- Keep stable docs version synced with the latest released docs version. - -Here are the flow of docs contribution regard some known scenarios: -1. Update docs that are related to a specified version (which is not the latest released version): -In such case, update the docs under `/docs-v0.x.x` is enough. -2. Update docs for not yet released features or changes: -In such case, update the docs under `/docs-dev` is enough. -3. Update docs that are related to the latest released docs version: -- Change the docs' content that fixes the issue under `/docs-dev` and `/docs-v0.x.x`, they share the same file structure so you should find the right files in both directories. -- Use `make gen/stable-docs` command to sync the latest released version docs under `/docs-v0.x.x` to `/docs` - -If you find any issues related to the docs, we're happy to accept your help. diff --git a/docs/content/en/docs-v0.37.x/examples/_index.md b/docs/content/en/docs-v0.37.x/examples/_index.md deleted file mode 100755 index 96a0197d7f..0000000000 --- a/docs/content/en/docs-v0.37.x/examples/_index.md +++ /dev/null @@ -1,89 +0,0 @@ ---- -title: "Examples" -linkTitle: "Examples" -weight: 7 -description: > - Some examples of PipeCD in action! ---- - -One of the best ways to see what PipeCD can do, and learn how to deploy your applications with it, is to see some real examples. - -We have prepared some examples for each kind of application. -The examples can be found at the following repository: - -https://github.com/pipe-cd/examples - -### Kubernetes Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/kubernetes/simple) | Deploy plain-yaml manifests in application directory without using pipeline. | -| [helm-local-chart](https://github.com/pipe-cd/examples/tree/master/kubernetes/helm-local-chart) | Deploy a helm chart sourced from the same Git repository. | -| [helm-remote-chart](https://github.com/pipe-cd/examples/tree/master/kubernetes/helm-remote-chart) | Deploy a helm chart sourced from a [Helm Chart Repository](https://helm.sh/docs/topics/chart_repository/). | -| [helm-remote-git-chart](https://github.com/pipe-cd/examples/tree/master/kubernetes/helm-remote-git-chart) | Deploy a helm chart sourced from another Git repository. | -| [kustomize-local-base](https://github.com/pipe-cd/examples/tree/master/kubernetes/kustomize-local-base) | Deploy a kustomize package that just uses the local bases from the same Git repository. | -| [kustomize-remote-base](https://github.com/pipe-cd/examples/tree/master/kubernetes/kustomize-remote-base) | Deploy a kustomize package that uses remote bases from other Git repositories. | -| [canary](https://github.com/pipe-cd/examples/tree/master/kubernetes/canary) | Deployment pipeline with canary strategy. | -| [canary-by-config-change](https://github.com/pipe-cd/examples/tree/master/kubernetes/canary-by-config-change) | Deployment pipeline with canary strategy when ConfigMap was changed. | -| [canary-patch](https://github.com/pipe-cd/examples/tree/master/kubernetes/canary-patch) | Demonstrate how to customize manifests for Canary variant using [patches](../user-guide/configuration-reference/#kubernetescanaryrolloutstageoptions) option. 
| -| [bluegreen](https://github.com/pipe-cd/examples/tree/master/kubernetes/bluegreen) | Deployment pipeline with bluegreen strategy. This also contains a manual approval stage. | -| [mesh-istio-canary](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-istio-canary) | Deployment pipeline with canary strategy by using Istio for traffic routing. | -| [mesh-istio-bluegreen](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-istio-bluegreen) | Deployment pipeline with bluegreen strategy by using Istio for traffic routing. | -| [mesh-smi-canary](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-smi-canary) | Deployment pipeline with canary strategy by using SMI for traffic routing. | -| [mesh-smi-bluegreen](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-smi-bluegreen) | Deployment pipeline with bluegreen strategy by using SMI for traffic routing. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/kubernetes/wait-approval) | Deployment pipeline that contains a manual approval stage. | -| [multi-steps-canary](https://github.com/pipe-cd/examples/tree/master/kubernetes/multi-steps-canary) | Deployment pipeline with multiple canary steps. | -| [analysis-by-metrics](https://github.com/pipe-cd/examples/tree/master/kubernetes/analysis-by-metrics) | Deployment pipeline with analysis stage by metrics. | -| [analysis-by-http](https://github.com/pipe-cd/examples/tree/master/kubernetes/analysis-by-http) | Deployment pipeline with analysis stage by running http requests. | -| [analysis-by-log](https://github.com/pipe-cd/examples/tree/master/kubernetes/analysis-by-log) | Deployment pipeline with analysis stage by checking logs. | -| [analysis-with-baseline](https://github.com/pipe-cd/examples/tree/master/kubernetes/analysis-with-baseline) | Deployment pipeline with analysis stage by comparing baseline and canary. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/kubernetes/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. | - -### Terraform Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/terraform/simple) | Automatically applies when any changes were detected. | -| [local-module](https://github.com/pipe-cd/examples/tree/master/terraform/local-module) | Deploy application that using local terraform modules from the same Git repository. | -| [remote-module](https://github.com/pipe-cd/examples/tree/master/terraform/remote-module) | Deploy application that using remote terraform modules from other Git repositories. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/terraform/wait-approval) | Deployment pipeline that contains a manual approval stage. | -| [autorollback](https://github.com/pipe-cd/examples/tree/master/terraform/auto-rollback) | Automatically rollback the changes when deployment was failed. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/terraform/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. 
| - -### Cloud Run Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/cloudrun/simple) | Quick sync by rolling out the new version and switching all traffic to it. | -| [canary](https://github.com/pipe-cd/examples/tree/master/cloudrun/canary) | Deployment pipeline with canary strategy. | -| [analysis](https://github.com/pipe-cd/examples/tree/master/cloudrun/analysis) | Deployment pipeline that contains an analysis stage. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/cloudrun/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/cloudrun/wait-approval) | Deployment pipeline that contains a manual approval stage. | - -### Lambda Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/lambda/simple) | Quick sync by rolling out the new version and switching all traffic to it. | -| [canary](https://github.com/pipe-cd/examples/tree/master/lambda/canary) | Deployment pipeline with canary strategy. | -| [analysis](https://github.com/pipe-cd/examples/tree/master/lambda/analysis) | Deployment pipeline that contains an analysis stage. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/lambda/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/lambda/wait-approval) | Deployment pipeline that contains a manual approval stage. | -| [remote-git](https://github.com/pipe-cd/examples/tree/master/lambda/remote-git) | Deploy the lambda code sourced from another Git repository. | -| [zip-packing-s3](https://github.com/pipe-cd/examples/tree/master/lambda/zip-packing-s3) | Deployment pipeline of kind Lambda which uses s3 stored zip file as function code. | - -### ECS Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/ecs/simple) | Quick sync by rolling out the new version and switching all traffic to it. | -| [canary](https://github.com/pipe-cd/examples/tree/master/ecs/canary) | Deployment pipeline with canary strategy. | -| [bluegreen](https://github.com/pipe-cd/examples/tree/master/ecs/bluegreen) | Deployment pipeline with blue-green strategy. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/ecs/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/ecs/wait-approval) | Deployment pipeline that contains a manual approval stage. | - -### Deployment chain - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/deployment-chain/simple) | Simple deployment chain which uses application name as a filter in chain configuration. 
| diff --git a/docs/content/en/docs-v0.37.x/faq/_index.md b/docs/content/en/docs-v0.37.x/faq/_index.md deleted file mode 100644 index 1a58110ddd..0000000000 --- a/docs/content/en/docs-v0.37.x/faq/_index.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: "FAQ" -linkTitle: "FAQ" -weight: 9 -description: > - List of frequently asked questions. ---- - -If you have any other questions, please feel free to create the issue in the [pipe-cd/pipecd](https://github.com/pipe-cd/pipecd/issues/new/choose) repository or contact us on [Cloud Native Slack](https://slack.cncf.io) (channel [#pipecd](https://app.slack.com/client/T08PSQ7BQ/C01B27F9T0X)). - -### 1. What kind of application (platform provider) will be supported? - -Currently, PipeCD can be used to deploy `Kubernetes`, `ECS`, `Terraform`, `CloudRun`, `Lambda` applications. - -In the near future we also want to support `Crossplane`... - -### 2. What kind of templating methods for Kubernetes application will be supported? - -Currently, PipeCD is supporting `Helm` and `Kustomize` as templating method for Kubernetes applications. - -### 3. Istio is supported now? - -Yes, you can use PipeCD for both mesh (Istio, SMI) applications and non-mesh applications. - -### 4. What are the differences between PipeCD and FluxCD? - -- Not just Kubernetes applications, PipeCD also provides a unified interface for other cloud services (CloudRun, AWS Lamda...) and Terraform -- One tool for both GitOps sync and progressive deployment -- Supports multiple Git repositories -- Has web UI for better visibility - - Log viewer for each deployment - - Visualization of application component/state in realtime - - Show configuration drift in realtime -- Also supports Canary and BlueGreen for non-mesh applications -- Has built-in secrets management -- Supports gradual rollout of a single app to multiple clusters -- Shows the delivery performance insights - -### 5. What are the differences between PipeCD and ArgoCD? - -- Not just Kubernetes applications, PipeCD also provides a unified interface for other cloud services (GCP CloudRun, AWS Lamda...) and Terraform -- One tool for both GitOps sync and progressive deployment -- Don't need another CRD or changing the existing manifests for doing Canary/BlueGreen. PipeCD just uses the standard Kubernetes deployment object -- Easier and safer to operate multi-tenancy, multi-cluster for multiple teams (even some teams are running in a private/restricted network) -- Has built-in secrets management -- Supports gradual rollout of a single app to multiple clusters -- Shows the delivery performance insights - -### 6. What should I do if I lost my Piped key? - -You can create a new Piped key. Go to the `Piped` tab at `Settings` page, and click the vertical ellipsis of the Piped that you would like to create the new Piped key. Don't forget deleting the old Key, too. diff --git a/docs/content/en/docs-v0.37.x/feature-status/_index.md b/docs/content/en/docs-v0.37.x/feature-status/_index.md deleted file mode 100644 index 7ef80f0da3..0000000000 --- a/docs/content/en/docs-v0.37.x/feature-status/_index.md +++ /dev/null @@ -1,125 +0,0 @@ ---- -title: "Feature Status" -linkTitle: "Feature Status" -weight: 8 -description: > - This page lists the relative maturity of every PipeCD features. ---- - -Please note that the phases (Incubating, Alpha, Beta, and Stable) are applied to individual features within the project, not to the project as a whole. 
- -## Feature Phase Definitions - -| Phase | Definition | -|-|-| -| Incubating | Under planning/developing the prototype and still not ready to be used. | -| Alpha | Demo-able, works end-to-end but has limitations. No guarantees on backward compatibility. | -| Beta | **Usable in production**. Documented. | -| Stable | Production hardened. Backward compatibility. Documented. | - -## Provider - -### Kubernetes - -| Feature | Phase | -|-|-| -| Quick sync deployment | Beta | -| Deployment with a defined pipeline (e.g. canary, analysis) | Beta | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](../user-guide/managing-application/configuration-drift-detection/) | Beta | -| [Application live state](../user-guide/managing-application/application-live-state/) | Beta | -| Support Helm | Beta | -| Support Kustomize | Beta | -| Support Istio service mesh | Beta | -| Support SMI service mesh | Incubating | -| Support [AWS App Mesh](https://aws.amazon.com/app-mesh/) | Incubating | -| [Plan preview](../user-guide/plan-preview) | Beta | - -### Terraform - -| Feature | Phase | -|-|-| -| Quick sync deployment | Beta | -| Deployment with a defined pipeline (e.g. manual-approval) | Beta | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](../user-guide/managing-application/configuration-drift-detection/) | Incubating | -| [Application live state](../user-guide/managing-application/application-live-state/) | Incubating | -| [Plan preview](../user-guide/plan-preview) | Beta | - -### Cloud Run - -| Feature | Phase | -|-|-| -| Quick sync deployment | Beta | -| Deployment with a defined pipeline (e.g. canary, analysis) | Beta | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](../user-guide/managing-application/configuration-drift-detection/) | Alpha | -| [Application live state](../user-guide/managing-application/application-live-state/) | Alpha | -| [Plan preview](../user-guide/plan-preview) | Alpha | - -### Lambda - -| Feature | Phase | -|-|-| -| Quick sync deployment | Beta | -| Deployment with a defined pipeline (e.g. canary, analysis) | Beta | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](../user-guide/managing-application/configuration-drift-detection/) | Incubating | -| [Application live state](../user-guide/managing-application/application-live-state/) | Incubating | -| [Plan preview](../user-guide/plan-preview) | Alpha | - -### Amazon ECS - -| Feature | Phase | -|-|-| -| Quick sync deployment | Alpha | -| Deployment with a defined pipeline (e.g. 
canary, analysis) | Alpha | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](/docs/user-guide/configuration-drift-detection/) | Incubating | -| [Application live state](../user-guide/managing-application/application-live-state/) | Incubating | -| Support [AWS App Mesh](https://aws.amazon.com/app-mesh/) | Incubating | -| [Plan preview](../user-guide/plan-preview) | Alpha | - -## Piped Agent - -| Feature | Phase | -|-|-| -| [Deployment wait stage](/docs/user-guide/adding-a-wait-stage/) | Beta | -| [Deployment manual approval stage](../user-guide/managing-application/customizing-deployment/adding-a-manual-approval/) | Beta | -| [Notification](../user-guide/managing-piped/configuring-notifications/) to Slack | Beta | -| [Notification](../user-guide/managing-piped/configuring-notifications/) to external service via webhook | Alpha | -| [Secrets management](/docs/user-guide/secret-management/) - Storing secrets safely in the Git repository | Beta | -| [Event watcher](../user-guide/event-watcher/) - Updating files in Git automatically for given events | Alpha | -| [Pipectl](../user-guide/command-line-tool/) - Command-line tool for interacting with Control Plane | Beta | -| Deployment plugin - Allow executing user-created deployment plugin | Incubating | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) (Automated Deployment Analysis) by Prometheus metrics | Alpha | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by Datadog metrics | Alpha | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by Stackdriver metrics | Incubating | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by Stackdriver log | Incubating | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by CloudWatch metrics | Incubating | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by CloudWatch log | Incubating | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by HTTP request (smoke test...) | Incubating | -| [Remote upgrade](/docs/operator-manual/piped/remote-upgrade-remote-config/#remote-upgrade) - Ability to upgrade Piped from the web console | Beta | -| [Remote config](../user-guide/managing-piped/remote-upgrade-remote-config/#remote-config) - Watch and reload configuration from a remote location such as Git | Beta | - -## Control Plane - -| Feature | Phase | -|-|-| -| Project/Piped/Application/Deployment management | Beta | -| Rendering deployment pipeline in realtime | Beta | -| Canceling a deployment from console | Beta | -| Triggering a deployment manually from console | Beta | -| RBAC on PipeCD resources such as Application, Piped... 
| Incubating | -| Authentication by username/password for static admin | Beta | -| GitHub & GitHub Enterprise SSO | Beta | -| Google SSO | Incubating | -| Support GCP [Firestore](https://cloud.google.com/firestore) as data store | Beta | -| Support [MySQL v8.0](https://www.mysql.com/) as data store | Beta | -| Support GCP [GCS](https://cloud.google.com/storage) as file store | Beta | -| Support AWS [S3](https://aws.amazon.com/s3/) as file store | Beta | -| Support [Minio](https://github.com/minio/minio) as file store | Beta | -| Support using file storage such as GCS, S3, Minio for both data store and file store (It means no database is required to run control plane) | Incubating | -| [Insights](../user-guide/insights/) - Show the delivery performance of a team or an application | Incubating | -| [Deployment Chain](../user-guide/managing-application/deployment-chain/) - Allow rolling out to multiple clusters gradually or promoting across environments | Alpha | -| [Metrics](../user-guide/managing-controlplane/metrics/) - Dashboards for PipeCD and Piped metrics | Beta | diff --git a/docs/content/en/docs-v0.37.x/installation/_index.md b/docs/content/en/docs-v0.37.x/installation/_index.md deleted file mode 100644 index 76a1629a37..0000000000 --- a/docs/content/en/docs-v0.37.x/installation/_index.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: "Installation" -linkTitle: "Installation" -weight: 4 -description: > - Complete guideline for installing and configuring PipeCD on your own. ---- - -Before starting to install PipeCD, let’s have a look at PipeCD’s components, determine your role, and which components you will interact with while installing/using PipeCD. You’re recommended to read about PipeCD’s [Control Plane](../concepts/#control-plane) and [Piped](../concepts/#piped) on the concepts page. - -![](/images/architecture-overview-with-roles.png) -
-PipeCD's components with roles
- -Basically, there are two types of users/roles that exist in the PipeCD system, which are: -- Developers/Production team: Users who use PipeCD to manage their applications’ deployments. You will interact with Piped and may or may not need to install Piped by yourself. -- Operators/Platform team: Users who operate the PipeCD for other developers can use it. You will interact with the Control Plane and Piped, you will be the one who installs the Control Plane and keeps it up for other Pipeds to connect while managing their applications’ deployments. - -This section contains the guideline for installing PipeCD's Control Plane and Piped step by step. You can choose what to read based on your roles. diff --git a/docs/content/en/docs-v0.37.x/installation/install-controlplane.md b/docs/content/en/docs-v0.37.x/installation/install-controlplane.md deleted file mode 100644 index 5bdfd579c6..0000000000 --- a/docs/content/en/docs-v0.37.x/installation/install-controlplane.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -title: "Install Control Plane" -linkTitle: "Install Control Plane" -weight: 2 -description: > - This page describes how to install control plane on a Kubernetes cluster. ---- - -## Prerequisites - -- Having a running Kubernetes cluster -- Installed [Helm](https://helm.sh/docs/intro/install/) (3.8.0 or later) - -## Installation - -### 1. Preparing an encryption key - -PipeCD requires a key for encrypting sensitive data or signing JWT token while authenticating. You can use one of the following commands to generate an encryption key. - -``` console -openssl rand 64 | base64 > encryption-key - -# or -cat /dev/urandom | head -c64 | base64 > encryption-key -``` - -### 2. Preparing Control Plane configuration file and installing - -![](/images/control-plane-components.png) -
-Control Plane Architecture
- -The Control Plane of PipeCD is constructed by several components, as shown in the above graph (for more in detail please read [Control Plane architecture overview docs](../../user-guide/managing-controlplane/architecture-overview/)). As mentioned in the graph, the PipeCD's data can be stored in one of the provided fully-managed or self-managed services. So you have to decide which kind of [data store](../../user-guide/managing-controlplane/architecture-overview/#data-store) and [file store](../../user-guide/managing-controlplane/architecture-overview/#file-store) you want to use and prepare a Control Plane configuration file suitable for that choice. - -#### Using Firestore and GCS - -PipeCD requires a GCS bucket and service account files to access Firestore and GCS service. Here is an example of configuration file: - -``` yaml -apiVersion: "pipecd.dev/v1beta1" -kind: ControlPlane -spec: - stateKey: {RANDOM_STRING} - datastore: - type: FIRESTORE - config: - namespace: pipecd - environment: dev - project: {YOUR_GCP_PROJECT_NAME} - # Must be a service account with "Cloud Datastore User" and "Cloud Datastore Index Admin" roles - # since PipeCD needs them to creates the needed Firestore composite indexes in the background. - credentialsFile: /etc/pipecd-secret/firestore-service-account - filestore: - type: GCS - config: - bucket: {YOUR_BUCKET_NAME} - # Must be a service account with "Storage Object Admin (roles/storage.objectAdmin)" role on the given bucket - # since PipeCD need to write file object such as deployment log file to that bucket. - credentialsFile: /etc/pipecd-secret/gcs-service-account -``` - -See [ConfigurationReference](../../user-guide/managing-controlplane/configuration-reference/) for the full configuration. - -After all, install the Control Plane as bellow: - -``` console -helm install pipecd oci://ghcr.io/pipe-cd/chart/pipecd --version {{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set-file config.data=path-to-control-plane-configuration-file \ - --set-file secret.encryptionKey.data=path-to-encryption-key-file \ - --set-file secret.firestoreServiceAccount.data=path-to-service-account-file \ - --set-file secret.gcsServiceAccount.data=path-to-service-account-file -``` - -Currently, besides `Firestore` PipeCD supports other databases as its datastore such as `MySQL`. Also as for filestore, PipeCD supports `AWS S3` and `MINIO` either. - -For example, in case of using `MySQL` as datastore and `MINIO` as filestore, the ControlPlane configuration will be as follow: - -```yaml -apiVersion: "pipecd.dev/v1beta1" -kind: ControlPlane -spec: - stateKey: {RANDOM_STRING} - datastore: - type: MYSQL - config: - url: {YOUR_MYSQL_ADDRESS} - database: {YOUR_DATABASE_NAME} - filestore: - type: MINIO - config: - endpoint: {YOUR_MINIO_ADDRESS} - bucket: {YOUR_BUCKET_NAME} - accessKeyFile: /etc/pipecd-secret/minio-access-key - secretKeyFile: /etc/pipecd-secret/minio-secret-key - autoCreateBucket: true -``` - -You can find required configurations to use other datastores and filestores from [ConfigurationReference](../../user-guide/managing-controlplane/configuration-reference/). - -__Caution__: In case of using `MySQL` as Control Plane's datastore, please note that the implementation of PipeCD requires some features that only available on [MySQL v8](https://dev.mysql.com/doc/refman/8.0/en/), make sure your MySQL service is satisfied the requirement. - -### 3. 
Accessing the PipeCD web - -If your installation was including an [ingress](https://github.com/pipe-cd/pipecd/blob/master/manifests/pipecd/values.yaml#L7), the PipeCD web can be accessed by the ingress's IP address or domain. -Otherwise, private PipeCD web can be accessed by using `kubectl port-forward` to expose the installed Control Plane on your localhost: - -``` console -kubectl port-forward svc/pipecd 8080 --namespace={NAMESPACE} -``` - -Now go to [http://localhost:8080](http://localhost:8080) on your browser, you will see a page to login to your project. - -Up to here, you have a installed PipeCD's Control Plane. To logging in, you need to initialize a new project. - -### 4. Initialize a new project - -To create a new project, you need to access to the `ops` pod in your installed PipeCD control plane, using `kubectl port-forward` command: - -```console -kubectl port-forward service/pipecd-ops 9082 --namespace={NAMESPACE} -``` - -Then, access to [http://localhost:9082](http://localhost:9082). - -On that page, you will see the list of registered projects and a link to register new projects. Registering a new project requires only a unique ID string and an optional description text. - -Once a new project has been registered, a static admin (username, password) will be automatically generated for the project admin, you can use that to login via the login form in the above section. - -For more about adding a new project in detail, please read the following [docs](../../user-guide/managing-controlplane/adding-a-project/). - -## Production Hardening - -This part provides guidance for a production hardened deployment of the control plane. - -- Publishing the control plane - - You can allow external access to the control plane by enabling the [ingress](https://github.com/pipe-cd/pipecd/blob/master/manifests/pipecd/values.yaml#L7) configuration. - -- End-to-End TLS - - After switching to HTTPs, do not forget to set the `api.args.secureCookie` parameter to be `true` to disallow using cookie on unsecured HTTP connection. - - Alternatively in the case of GKE Ingress, PipeCD also requires a TLS certificate for internal use. This can be a self-signed one and generated by this command: - - ``` console - openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN={YOUR_DOMAIN}" - ``` - Those key and cert can be configured via [`secret.internalTLSKey.data`](https://github.com/pipe-cd/pipecd/blob/master/manifests/pipecd/values.yaml#L118) and [`secret.internalTLSCert.data`](https://github.com/pipe-cd/pipecd/blob/master/manifests/pipecd/values.yaml#L121). - - To enable internal tls connection, please set the `gateway.internalTLS.enabled` parameter to be `true`. - - Otherwise, the `cloud.google.com/app-protocols` annotation is also should be configured as the following: - - ``` yaml - service: - port: 443 - annotations: - cloud.google.com/app-protocols: '{"service":"HTTP2"}' - ``` diff --git a/docs/content/en/docs-v0.37.x/installation/install-piped/_index.md b/docs/content/en/docs-v0.37.x/installation/install-piped/_index.md deleted file mode 100644 index 71a5199f66..0000000000 --- a/docs/content/en/docs-v0.37.x/installation/install-piped/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "Install Piped" -linkTitle: "Install Piped" -weight: 3 -description: > - This page describes how to install a Piped. ---- - -Since Piped is a stateless agent, no database or storage is required to run. 
In addition, a Piped can interact with one or multiple platform providers, so the number of Piped and where they should run is entirely up to your preference. For example, you can run your Pipeds in a Kubernetes cluster to deploy not just Kubernetes applications but your Terraform and Cloud Run applications as well. diff --git a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-cloudrun.md b/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-cloudrun.md deleted file mode 100644 index dd1dcb8161..0000000000 --- a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-cloudrun.md +++ /dev/null @@ -1,170 +0,0 @@ ---- -title: "Installing on Cloud Run" -linkTitle: "Installing on Cloud Run" -weight: 3 -description: > - This page describes how to install Piped on Cloud Run. ---- - -## Prerequisites - -##### Having piped's ID and Key strings -- Ensure that the `piped` has been registered and you are having its PIPED_ID and PIPED_KEY strings. -- If you are not having them, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you how to register a new one. - -##### Preparing SSH key -- If your Git repositories are private, `piped` requires a private SSH key to access those repositories. -- Please checkout [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.) - -## Installation - -- Preparing a piped configuration file as the following: - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyData: {BASE64_ENCODED_PIPED_KEY} - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - - git: - sshKeyData: {BASE64_ENCODED_PRIVATE_SSH_KEY} - - repositories: - - repoId: {REPO_ID_OR_NAME} - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - - # Optional - # Enable this Piped to handle Cloud Run application. - platformProviders: - - name: cloudrun-in-project - type: CLOUDRUN - config: - project: {GCP_PROJECT_ID} - region: {GCP_PROJECT_REGION} - - # Optional - # Uncomment this if you want to enable this Piped to handle Terraform application. - # - name: terraform-gcp - # type: TERRAFORM - - # Optional - # Uncomment this if you want to enable SecretManagement feature. - # https://pipecd.dev//docs/user-guide/managing-application/secret-management/ - # secretManagement: - # type: KEY_PAIR - # config: - # privateKeyData: {BASE64_ENCODED_PRIVATE_KEY} - # publicKeyData: {BASE64_ENCODED_PUBLIC_KEY} - ``` - -- Creating a new secret in [SecretManager](https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets) to store above configuration data securely - - ``` console - gcloud secrets create cloudrun-piped-config --data-file={PATH_TO_CONFIG_FILE} - ``` - - then make sure that Cloud Run has the ability to access that secret as [this guide](https://cloud.google.com/run/docs/configuring/secrets#access-secret). - -- Running Piped in Cloud Run - - Prepare a Cloud Run service manifest file as below. - - {{< tabpane >}} - {{< tab lang="yaml" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. 
-# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. - -apiVersion: serving.knative.dev/v1 -kind: Service -metadata: - name: piped -spec: - template: - metadata: - annotations: - autoscaling.knative.dev/maxScale: '1' # This must be 1. - autoscaling.knative.dev/minScale: '1' # This must be 1. - run.googleapis.com/ingress: internal - run.googleapis.com/ingress-status: internal - run.googleapis.com/cpu-throttling: "false" # This is required. - spec: - containerConcurrency: 1 # This must be 1 to ensure Piped work correctly. - containers: - - image: ghcr.io/pipe-cd/launcher:{{< blocks/latest_version >}} - args: - - launcher - - --launcher-admin-port=9086 - - --config-file=/etc/piped-config/config.yaml - ports: - - containerPort: 9086 - volumeMounts: - - mountPath: /etc/piped-config - name: piped-config - resources: - limits: - cpu: 1000m - memory: 2Gi - volumes: - - name: piped-config - secret: - secretName: cloudrun-piped-config - items: - - path: config.yaml - key: latest - {{< /tab >}} - {{< tab lang="yaml" header="Piped" >}} -# This just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data you have to restart it. - -apiVersion: serving.knative.dev/v1 -kind: Service -metadata: - name: piped -spec: - template: - metadata: - annotations: - autoscaling.knative.dev/maxScale: '1' # This must be 1. - autoscaling.knative.dev/minScale: '1' # This must be 1. - run.googleapis.com/ingress: internal - run.googleapis.com/ingress-status: internal - run.googleapis.com/cpu-throttling: "false" # This is required. - spec: - containerConcurrency: 1 # This must be 1. - containers: - - image: ghcr.io/pipe-cd/piped:{{< blocks/latest_version >}} - args: - - piped - - --config-file=/etc/piped-config/config.yaml - ports: - - containerPort: 9085 - volumeMounts: - - mountPath: /etc/piped-config - name: piped-config - resources: - limits: - cpu: 1000m - memory: 2Gi - volumes: - - name: piped-config - secret: - secretName: cloudrun-piped-config - items: - - path: config.yaml - key: latest - {{< /tab >}} - {{< /tabpane >}} - - Run Piped service on Cloud Run with the following command: - - ``` console - gcloud beta run services replace cloudrun-piped-service.yaml - ``` - - Note: Make sure that the created secret is accessible from this Piped service. See more [here](https://cloud.google.com/run/docs/configuring/secrets#access-secret). diff --git a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-fargate.md b/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-fargate.md deleted file mode 100644 index f75b59fcc2..0000000000 --- a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-fargate.md +++ /dev/null @@ -1,197 +0,0 @@ ---- -title: "Installing on ECS Fargate" -linkTitle: "Installing on ECS Fargate" -weight: 4 -description: > - This page describes how to install Piped as a task on ECS cluster backed by AWS Fargate. ---- - -## Prerequisites - -##### Having piped's ID and Key strings -- Ensure that the `piped` has been registered and you are having its PIPED_ID and PIPED_KEY strings. -- If you are not having them, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you how to register a new one. - -##### Preparing SSH key -- If your Git repositories are private, `piped` requires a private SSH key to access those repositories. 
-- Please checkout [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.) - -## Installation - -- Preparing a piped configuration file as follows: - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyData: {BASE64_ENCODED_PIPED_KEY} - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - - git: - sshKeyData: {BASE64_ENCODED_PRIVATE_SSH_KEY} - - repositories: - - repoId: {REPO_ID_OR_NAME} - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - - # Optional - # Enable this Piped to handle ECS application. - platformProviders: - - name: ecs-dev - type: ECS - config: - region: {ECS_RUNNING_REGION} - - # Optional - # Uncomment this if you want to enable this Piped to handle Terraform application. - # - name: terraform-dev - # type: TERRAFORM - - # Optional - # Uncomment this if you want to enable SecretManagement feature. - # https://pipecd.dev//docs/user-guide/managing-application/secret-management/ - # secretManagement: - # type: KEY_PAIR - # config: - # privateKeyData: {BASE64_ENCODED_PRIVATE_KEY} - # publicKeyData: {BASE64_ENCODED_PUBLIC_KEY} - ``` - -- Store the above configuration data to AWS to enable using it while creating piped task. Both [AWS SecretManager](https://aws.amazon.com/secrets-manager/) and [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) are available to address this task. - - {{< tabpane >}} - {{< tab lang="bash" header="Store in AWS SecretManager" >}} - aws secretsmanager create-secret --name PipedConfig \ - --description "Configuration of piped running as ECS Fargate task" \ - --secret-string `base64 piped-config.yaml` - {{< /tab >}} - {{< tab lang="bash" header="Store in AWS Systems Manager Parameter Store" >}} - aws ssm put-parameter \ - --name PipedConfig \ - --value `base64 piped-config.yaml` \ - --type SecureString - {{< /tab >}} - {{< /tabpane >}} - -- Prepare task definition for your piped task. Basically, you can just define your piped TaskDefinition as normal TaskDefinition, the only thing that needs to be beware is, in case you used [AWS SecretManager](https://aws.amazon.com/secrets-manager/) to store piped configuration, to enable your piped accesses it's configuration we created as a secret on above, you need to add `secretsmanager:GetSecretValue` policy to your piped task `executionRole`. Read more in [Required IAM permissions for Amazon ECS secrets](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html). - - A sample TaskDefinition for Piped as follows - - {{< tabpane >}} - {{< tab lang="json" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. 
- -{ - "family": "piped", - "executionRoleArn": "{PIPED_TASK_EXECUTION_ROLE_ARN}", - "containerDefinitions": [ - { - "name": "piped", - "essential": true, - "image": "ghcr.io/pipe-cd/launcher:{{< blocks/latest_version >}}", - "entryPoint": [ - "sh", - "-c" - ], - "command": [ - "/bin/sh -c \"launcher launcher --config-data=$(echo $CONFIG_DATA)\"" - ], - "secrets": [ - { - "valueFrom": "{PIPED_SECRET_MANAGER_ARN}", - "name": "CONFIG_DATA" - } - ], - } - ], - "requiresCompatibilities": [ - "FARGATE" - ], - "networkMode": "awsvpc", - "memory": "512", - "cpu": "256" -} - {{< /tab >}} - {{< tab lang="json" header="Piped" >}} -# This just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data you have to restart it. - -{ - "family": "piped", - "executionRoleArn": "{PIPED_TASK_EXECUTION_ROLE_ARN}", - "containerDefinitions": [ - { - "name": "piped", - "essential": true, - "image": "ghcr.io/pipe-cd/piped:{{< blocks/latest_version >}}", - "entryPoint": [ - "sh", - "-c" - ], - "command": [ - "/bin/sh -c \"piped piped --config-data=$(echo $CONFIG_DATA)\"" - ], - "secrets": [ - { - "valueFrom": "{PIPED_SECRET_MANAGER_ARN}", - "name": "CONFIG_DATA" - } - ], - } - ], - "requiresCompatibilities": [ - "FARGATE" - ], - "networkMode": "awsvpc", - "memory": "512", - "cpu": "256" -} - {{< /tab >}} - {{< /tabpane >}} - - Register this piped task definition and start piped task: - - ```console - aws ecs register-task-definition --cli-input-json file://taskdef.json - aws ecs run-task --cluster {ECS_CLUSTER} --task-definition piped - ``` - - Once the task is created, it will run continuously because of the piped execution. Since this task is run without [startedBy](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_StartTask.html#API_StartTask_RequestSyntax) setting, in case the piped is stopped, it will not automatically be restarted. To do so, you must define an ECS service to control piped task deployment. - - A sample Service definition to control piped task deployment. - - ```json - { - "cluster": "{ECS_CLUSTER}", - "serviceName": "piped", - "desiredCount": 1, # This must be 1. - "taskDefinition": "{PIPED_TASK_DEFINITION_ARN}", - "deploymentConfiguration": { - "minimumHealthyPercent": 0, - "maximumPercent": 100 - }, - "schedulingStrategy": "REPLICA", - "launchType": "FARGATE", - "networkConfiguration": { - "awsvpcConfiguration": { - "assignPublicIp": "ENABLED", # This is need to enable ECS deployment to pull piped container images. - ... - } - } - } - ``` - - Then start your piped task controller service. - - ```console - aws ecs create-service \ - --cluster {ECS_CLUSTER} \ - --service-name piped \ - --cli-input-json file://service.json - ``` diff --git a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-google-cloud-vm.md b/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-google-cloud-vm.md deleted file mode 100644 index 476e4a7dcd..0000000000 --- a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-google-cloud-vm.md +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: "Installing on Google Cloud VM" -linkTitle: "Installing on Google Cloud VM" -weight: 2 -description: > - This page describes how to install Piped on Google Cloud VM. ---- - -## Prerequisites - -##### Having piped's ID and Key strings -- Ensure that the `piped` has been registered and you are having its `PIPED_ID` and `PIPED_KEY` strings. 
-- If you are not having them, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you how to register a new one. - -##### Preparing SSH key -- If your Git repositories are private, `piped` requires a private SSH key to access those repositories. -- Please checkout [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.) - -## Installation - -- Preparing a piped configuration file as the following: - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyData: {BASE64_ENCODED_PIPED_KEY} - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - - git: - sshKeyData: {BASE64_ENCODED_PRIVATE_SSH_KEY} - - repositories: - - repoId: {REPO_ID_OR_NAME} - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - - # Optional - # Uncomment this if you want to enable this Piped to handle Cloud Run application. - # platformProviders: - # - name: cloudrun-in-project - # type: CLOUDRUN - # config: - # project: {GCP_PROJECT_ID} - # region: {GCP_PROJECT_REGION} - - # Optional - # Uncomment this if you want to enable this Piped to handle Terraform application. - # - name: terraform-gcp - # type: TERRAFORM - - # Optional - # Uncomment this if you want to enable SecretManagement feature. - # https://pipecd.dev//docs/user-guide/managing-application/secret-management/ - # secretManagement: - # type: KEY_PAIR - # config: - # privateKeyData: {BASE64_ENCODED_PRIVATE_KEY} - # publicKeyData: {BASE64_ENCODED_PUBLIC_KEY} - ``` - -- Creating a new secret in [SecretManager](https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets) to store above configuration data securely - - ``` shell - gcloud secrets create vm-piped-config --data-file={PATH_TO_CONFIG_FILE} - ``` - -- Creating a new Service Account for Piped and giving it needed roles - - ``` shell - gcloud iam service-accounts create vm-piped \ - --description="Using by Piped running on Google Cloud VM" \ - --display-name="vm-piped" - - # Allow Piped to access the created secret. - gcloud secrets add-iam-policy-binding vm-piped-config \ - --member="serviceAccount:vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - --role="roles/secretmanager.secretAccessor" - - # Allow Piped to write its log messages to Google Cloud Logging service. - gcloud projects add-iam-policy-binding {GCP_PROJECT_ID} \ - --member="serviceAccount:vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - --role="roles/logging.logWriter" - - # Optional - # If you want to use this Piped to handle Cloud Run application - # run the following command to give it the needed roles. 
- # https://cloud.google.com/run/docs/reference/iam/roles#additional-configuration - # - # gcloud projects add-iam-policy-binding {GCP_PROJECT_ID} \ - # --member="serviceAccount:vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - # --role="roles/run.developer" - # - # gcloud iam service-accounts add-iam-policy-binding {GCP_PROJECT_NUMBER}-compute@developer.gserviceaccount.com \ - # --member="serviceAccount:vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - # --role="roles/iam.serviceAccountUser" - ``` - -- Running Piped on a Google Cloud VM - - {{< tabpane >}} - {{< tab lang="console" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. - - gcloud compute instances create-with-container vm-piped \ - --container-image="ghcr.io/pipe-cd/launcher:{{< blocks/latest_version >}}" \ - --container-arg="launcher" \ - --container-arg="--config-from-gcp-secret=true" \ - --container-arg="--gcp-secret-id=projects/{GCP_PROJECT_ID}/secrets/vm-piped-config/versions/{SECRET_VERSION}" \ - --network="{VPC_NETWORK}" \ - --subnet="{VPC_SUBNET}" \ - --scopes="cloud-platform" \ - --service-account="vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" - {{< /tab >}} - {{< tab lang="console" header="Piped" >}} -# This just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data you have to restart it. - - gcloud compute instances create-with-container vm-piped \ - --container-image="ghcr.io/pipe-cd/piped:{{< blocks/latest_version >}}" \ - --container-arg="piped" \ - --container-arg="--config-gcp-secret=projects/{GCP_PROJECT_ID}/secrets/vm-piped-config/versions/{SECRET_VERSION}" \ - --network="{VPC_NETWORK}" \ - --subnet="{VPC_SUBNET}" \ - --scopes="cloud-platform" \ - --service-account="vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" - {{< /tab >}} - {{< /tabpane >}} - -After that, you can see on PipeCD web at `Settings` page that Piped is connecting to the Control Plane. -You can also view Piped log as described [here](https://cloud.google.com/compute/docs/containers/deploying-containers#viewing_logs). diff --git a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-kubernetes.md b/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-kubernetes.md deleted file mode 100644 index be1d40e8c6..0000000000 --- a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-kubernetes.md +++ /dev/null @@ -1,244 +0,0 @@ ---- -title: "Installing on Kubernetes cluster" -linkTitle: "Installing on Kubernetes cluster" -weight: 1 -description: > - This page describes how to install Piped on Kubernetes cluster. ---- - -## Prerequisites - -##### Having piped's ID and Key strings -- Ensure that the `piped` has been registered and you are having its PIPED_ID and PIPED_KEY strings. -- If you are not having them, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you how to register a new one. - -##### Preparing SSH key -- If your Git repositories are private, `piped` requires a private SSH key to access those repositories. -- Please checkout [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. 
 (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.)
-
-## In the cluster-wide mode
-This way requires installing cluster-level resources. A Piped installed this way can run deployment workloads against any namespace, not just the one where it runs.
-
-- Preparing a piped configuration file as the following:
-
-  ``` yaml
-  apiVersion: pipecd.dev/v1beta1
-  kind: Piped
-  spec:
-    projectID: {PROJECT_ID}
-    pipedID: {PIPED_ID}
-    pipedKeyFile: /etc/piped-secret/piped-key
-    # Write in a format like "host:443" because the communication is done via gRPC.
-    apiAddress: {CONTROL_PLANE_API_ADDRESS}
-    git:
-      sshKeyFile: /etc/piped-secret/ssh-key
-      repositories:
-        - repoId: {REPO_ID_OR_NAME}
-          remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git
-          branch: {GIT_BRANCH}
-      syncInterval: 1m
-  ```
-
-- Installing by using [Helm](https://helm.sh/docs/intro/install/) (3.8.0 or later)
-
-  {{< tabpane >}}
-  {{< tab lang="bash" header="Piped" >}}
-# This command just installs a Piped with the specified version.
-# Whenever you want to upgrade that Piped to a new version or update its config data
-# you have to restart it by re-running this command.
-
-helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \
-  --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \
-  --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \
-  --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE}
-  {{< /tab >}}
-  {{< tab lang="bash" header="Piped with Remote-upgrade" >}}
-# Enable remote-upgrade feature of Piped.
-# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade
-# This allows upgrading Piped to a new version from the web console.
-# But we still need to restart Piped when we want to update its config data.
-
-helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \
-  --set launcher.enabled=true \
-  --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \
-  --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \
-  --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE}
-  {{< /tab >}}
-  {{< tab lang="bash" header="Piped with Remote-upgrade and Remote-config" >}}
-# Enable both remote-upgrade and remote-config features of Piped.
-# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-config
-# Besides the ability to upgrade Piped to a new version from the web console,
-# remote-config allows loading the Piped config stored in a remote location such as a Git repository.
-# Whenever the config data is changed, it loads the new config and restarts Piped to use that new config.
-
-helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \
-  --set launcher.enabled=true \
-  --set launcher.configFromGitRepo.enabled=true \
-  --set launcher.configFromGitRepo.repoUrl=git@github.com:{GIT_ORG}/{GIT_REPO}.git \
-  --set launcher.configFromGitRepo.branch={GIT_BRANCH} \
-  --set launcher.configFromGitRepo.configFile={RELATIVE_PATH_TO_PIPED_CONFIG_FILE_IN_GIT_REPO} \
-  --set launcher.configFromGitRepo.sshKeyFile=/etc/piped-secret/ssh-key \
-  --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \
-  --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE}
-  {{< /tab >}}
-  {{< /tabpane >}}
-
-  Note: Be sure to set `--set args.insecure=true` if your Control Plane has not enabled TLS yet. 
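-
-  For illustration, here is a minimal sketch of the basic install with the placeholders filled in. The namespace and file paths below are hypothetical sample values, and `--set args.insecure=true` is included only because this sketch assumes a Control Plane without TLS:
-
-  ``` console
-  # Hypothetical sample values; replace the namespace, config file and key files with your own.
-  helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace=pipecd \
-    --set-file config.data=./piped-config.yaml \
-    --set-file secret.data.piped-key=./piped-key \
-    --set-file secret.data.ssh-key=./ssh-key \
-    --set args.insecure=true
-  ```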
- - See [values.yaml](https://github.com/pipe-cd/pipecd/blob/master/manifests/piped/values.yaml) for the full values. - -## In the namespaced mode -The previous way requires installing cluster-level resources. If you want to restrict Piped's permission within the namespace where Piped runs on, this way is for you. -Most parts are identical to the previous way, but some are slightly different. - -- Adding a new cloud provider like below to the previous piped configuration file - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyFile: /etc/piped-secret/piped-key - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - git: - sshKeyFile: /etc/piped-secret/ssh-key - repositories: - - repoId: REPO_ID_OR_NAME - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - syncInterval: 1m - # This is needed to restrict to limit the access range to within a namespace. - platformProviders: - - name: my-kubernetes - type: KUBERNETES - config: - appStateInformer: - namespace: {NAMESPACE} - ``` - -- Installing by using [Helm](https://helm.sh/docs/intro/install/) (3.8.0 or later) - - {{< tabpane >}} - {{< tab lang="bash" header="Piped" >}} -# This command just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data -# you have to restart it by re-running this command. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \ - --set args.enableDefaultKubernetesCloudProvider=false \ - --set rbac.scope=namespace - {{< /tab >}} - {{< tab lang="bash" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. -# But we still need to restart Piped when we want to update its config data. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set launcher.enabled=true \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \ - --set args.enableDefaultKubernetesCloudProvider=false \ - --set rbac.scope=namespace - {{< /tab >}} - {{< tab lang="bash" header="Piped with Remote-upgrade and Remote-config" >}} -# Enable both remote-upgrade and remote-config features of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-config -# Beside of the ability to upgrade Piped to a new version from the web console, -# remote-config allows loading the Piped config stored in a remote location such as a Git repository. -# Whenever the config data is changed, it loads the new config and restarts Piped to use that new config. 
-
-helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \
-  --set launcher.enabled=true \
-  --set launcher.configFromGitRepo.enabled=true \
-  --set launcher.configFromGitRepo.repoUrl=git@github.com:{GIT_ORG}/{GIT_REPO}.git \
-  --set launcher.configFromGitRepo.branch={GIT_BRANCH} \
-  --set launcher.configFromGitRepo.configFile={RELATIVE_PATH_TO_PIPED_CONFIG_FILE_IN_GIT_REPO} \
-  --set launcher.configFromGitRepo.sshKeyFile=/etc/piped-secret/ssh-key \
-  --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \
-  --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \
-  --set args.enableDefaultKubernetesCloudProvider=false \
-  --set rbac.scope=namespace
-  {{< /tab >}}
-  {{< /tabpane >}}
-
-#### In case of OpenShift versions earlier than 4.2
-
-OpenShift uses an arbitrarily assigned user ID when it starts a container.
-Starting from OpenShift 4.2, it also inserts that user into `/etc/passwd` for use by the application inside the container,
-but before that version, the assigned user is missing in that file. That blocks workloads using the `ghcr.io/pipe-cd/piped` image.
-Therefore, if you are running on OpenShift with a version before 4.2, please use the `ghcr.io/pipe-cd/piped-okd` image with the following command:
-
-- Installing by using [Helm](https://helm.sh/docs/intro/install/) (3.8.0 or later)
-
-  {{< tabpane >}}
-  {{< tab lang="bash" header="Piped" >}}
-# This command just installs a Piped with the specified version.
-# Whenever you want to upgrade that Piped to a new version or update its config data
-# you have to restart it by re-running this command.
-
-helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \
-  --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \
-  --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \
-  --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \
-  --set args.enableDefaultKubernetesCloudProvider=false \
-  --set rbac.scope=namespace \
-  --set args.addLoginUserToPasswd=true \
-  --set securityContext.runAsNonRoot=true \
-  --set securityContext.runAsUser={UID} \
-  --set securityContext.fsGroup={FS_GROUP} \
-  --set securityContext.runAsGroup=0 \
-  --set image.repository="ghcr.io/pipe-cd/piped-okd"
-  {{< /tab >}}
-  {{< tab lang="bash" header="Piped with Remote-upgrade" >}}
-# Enable remote-upgrade feature of Piped.
-# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade
-# This allows upgrading Piped to a new version from the web console.
-# But we still need to restart Piped when we want to update its config data.
-
-helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \
-  --set launcher.enabled=true \
-  --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \
-  --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \
-  --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \
-  --set args.enableDefaultKubernetesCloudProvider=false \
-  --set rbac.scope=namespace \
-  --set args.addLoginUserToPasswd=true \
-  --set securityContext.runAsNonRoot=true \
-  --set securityContext.runAsUser={UID} \
-  --set securityContext.fsGroup={FS_GROUP} \
-  --set securityContext.runAsGroup=0 \
-  --set launcher.image.repository="ghcr.io/pipe-cd/launcher-okd"
-  {{< /tab >}}
-  {{< tab lang="bash" header="Piped with Remote-upgrade and Remote-config" >}}
-# Enable both remote-upgrade and remote-config features of Piped. 
-# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-config
-# Besides the ability to upgrade Piped to a new version from the web console,
-# remote-config allows loading the Piped config stored in a remote location such as a Git repository.
-# Whenever the config data is changed, it loads the new config and restarts Piped to use that new config.
-
-helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \
-  --set launcher.enabled=true \
-  --set launcher.configFromGitRepo.enabled=true \
-  --set launcher.configFromGitRepo.repoUrl=git@github.com:{GIT_ORG}/{GIT_REPO}.git \
-  --set launcher.configFromGitRepo.branch={GIT_BRANCH} \
-  --set launcher.configFromGitRepo.configFile={RELATIVE_PATH_TO_PIPED_CONFIG_FILE_IN_GIT_REPO} \
-  --set launcher.configFromGitRepo.sshKeyFile=/etc/piped-secret/ssh-key \
-  --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \
-  --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \
-  --set args.enableDefaultKubernetesCloudProvider=false \
-  --set rbac.scope=namespace \
-  --set args.addLoginUserToPasswd=true \
-  --set securityContext.runAsNonRoot=true \
-  --set securityContext.runAsUser={UID} \
-  --set securityContext.fsGroup={FS_GROUP} \
-  --set securityContext.runAsGroup=0 \
-  --set launcher.image.repository="ghcr.io/pipe-cd/launcher-okd"
-  {{< /tab >}}
-  {{< /tabpane >}}
diff --git a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-single-machine.md b/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-single-machine.md
deleted file mode 100644
index 0b56578170..0000000000
--- a/docs/content/en/docs-v0.37.x/installation/install-piped/installing-on-single-machine.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: "Installing on a single machine"
-linkTitle: "Installing on a single machine"
-weight: 5
-description: >
-  This page describes how to install a Piped on a single machine.
----
-
-## Prerequisites
-
-##### Having piped's ID and Key strings
-- Ensure that the `piped` has been registered and that you have its PIPED_ID and PIPED_KEY strings.
-- If you do not have them yet, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you through registering a new one.
-
-##### Preparing SSH key
-- If your Git repositories are private, `piped` requires a private SSH key to access those repositories.
-- Please check out [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.)
-
-## Installation
-
-- Downloading the latest `piped` binary for your machine
-
-  https://github.com/pipe-cd/pipecd/releases
-
-- Preparing a piped configuration file as the following:
-
-  ``` yaml
-  apiVersion: pipecd.dev/v1beta1
-  kind: Piped
-  spec:
-    projectID: {PROJECT_ID}
-    pipedID: {PIPED_ID}
-    pipedKeyFile: {PATH_TO_PIPED_KEY_FILE}
-    # Write in a format like "host:443" because the communication is done via gRPC. 
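-    # For example, a hypothetical address would be "your-pipecd.example.com:443".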
- apiAddress: {CONTROL_PLANE_API_ADDRESS} - git: - sshKeyFile: {PATH_TO_SSH_KEY_FILE} - repositories: - - repoId: {REPO_ID_OR_NAME} - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - syncInterval: 1m - ``` - -- Start running the `piped` - - ``` console - ./piped piped --config-file={PATH_TO_PIPED_CONFIG_FILE} - ``` - diff --git a/docs/content/en/docs-v0.37.x/overview/_index.md b/docs/content/en/docs-v0.37.x/overview/_index.md deleted file mode 100644 index 724cbec785..0000000000 --- a/docs/content/en/docs-v0.37.x/overview/_index.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: "Overview" -linkTitle: "Overview" -weight: 1 -description: > - Overview about PipeCD. ---- - -![](/images/pipecd-explanation.png) -

-PipeCD - a GitOps style continuous delivery solution
-

-
-## What Is PipeCD?
-
-{{% pageinfo %}}
-PipeCD provides a unified continuous delivery solution for multiple application kinds on multi-cloud that empowers engineers to deploy faster with more confidence. It is a GitOps tool that enables deployment operations to be done via pull requests on Git.
-{{% /pageinfo %}}
-
-## Why PipeCD?
-
-**Visibility**
-- Deployment pipeline UI clearly shows what is happening
-- Separate logs viewer for each individual deployment
-- Realtime visualization of application state
-- Deployment notifications to Slack, webhook endpoints
-- Insights show metrics like lead time, deployment frequency, MTTR and change failure rate to measure delivery performance
-
-**Automation**
-- Automated deployment analysis to measure deployment impact based on metrics, logs, emitted requests
-- Automatically roll back to the previous state as soon as analysis or a pipeline stage fails
-- Automatically detect configuration drift to notify and render the changes
-- Automatically trigger a new deployment when a defined event has occurred (e.g. container image pushed, helm chart published, etc)
-
-**Safety and Security**
-- Support single sign-on and role-based access control
-- Credentials are not exposed outside the cluster and not saved in the Control Plane
-- Piped makes only outbound requests and can run inside a restricted network
-- Built-in secrets management
-
-**Multi-provider & Multi-Tenancy**
-- Support multiple application kinds on multi-cloud including Kubernetes, Terraform, Cloud Run, AWS Lambda, Amazon ECS
-- Support multiple analysis providers including Prometheus, Datadog, Stackdriver, and more
-- Easy to operate multi-cluster, multi-tenancy by separating Control Plane and Piped
-
-**Open Source**
-
-- Released as an Open Source project
-- Under the Apache 2.0 license, see [LICENSE](https://github.com/pipe-cd/pipecd/blob/master/LICENSE)
-
-## Where should I go next?
-
-For a good understanding of PipeCD's components, see the [Concepts](../concepts) page.
-
-If you are an **operator** wanting to install and configure PipeCD for other developers:
-- [Quickstart](../quickstart/)
-- [Managing Control Plane](../user-guide/managing-controlplane/)
-- [Managing Piped](../user-guide/managing-piped/)
-
-If you are a **user** using PipeCD to deploy your application/infrastructure:
-- [User Guide](../user-guide/)
-- [Examples](../user-guide/examples)
-
-If you want to be a **contributor**:
-- [Contributor Guide](../contribution-guidelines/)
diff --git a/docs/content/en/docs-v0.37.x/quickstart/_index.md b/docs/content/en/docs-v0.37.x/quickstart/_index.md
deleted file mode 100644
index d22b239153..0000000000
--- a/docs/content/en/docs-v0.37.x/quickstart/_index.md
+++ /dev/null
@@ -1,100 +0,0 @@
----
-title: "Quickstart"
-linkTitle: "Quickstart"
-weight: 3
-description: >
-  This page describes how to quickly get started with PipeCD on Kubernetes.
----
-
-This page is a guideline for installing PipeCD into your Kubernetes cluster and deploying a "hello world" application to that same Kubernetes cluster.
-
-Note: It is not required to install the PipeCD control plane in the same cluster where your applications are running. Please read this [blog post](/blog/2021/12/29/pipecd-best-practice-01-operate-your-own-pipecd-cluster/) to understand more about PipeCD in real-life use cases. 
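-
-Since this quickstart installs everything into whichever cluster your current kubectl context points to, it may help to double-check that context before starting. This is a generic sanity check, not a PipeCD-specific step:
-
-``` console
-kubectl config current-context
-```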
- -### Prerequisites -- Having a Kubernetes cluster -- Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) and [Helm](https://helm.sh/docs/intro/install/) (3.8.0 or later) -- Forked the [Examples](https://github.com/pipe-cd/examples) repository - -### 1. Installing control plane - -``` console -helm install pipecd oci://ghcr.io/pipe-cd/chart/pipecd --version {{< blocks/latest_version >}} \ - --namespace pipecd --create-namespace \ - --values https://raw.githubusercontent.com/pipe-cd/pipecd/{{< blocks/latest_version >}}/quickstart/control-plane-values.yaml -``` - -Once installed, use `kubectl port-forward` to expose the web console on your localhost: - -``` console -kubectl -n pipecd port-forward svc/pipecd 8080 -``` - -The PipeCD web console will be available at [http://localhost:8080](http://localhost:8080). To login, you can use the configured static admin account as below: -- project name: `quickstart` -- username: `hello-pipecd` -- password: `hello-pipecd` - -![](/images/quickstart-login.png) - -### 2. Installing a `piped` -Before running a piped, you have to register it on the web and take the generated ID and Key strings. - -Navigate to the `Piped` tab on the same page as before, click on the `Add` button. Then you enter as: - -![](/images/quickstart-adding-piped.png) - -Click on the `Save` button, and then you can see the piped-id and secret-key. -Be sure to keep a copy for later use. - -![](/images/quickstart-piped-registered.png) - -Then complete the installation by running the following command after replacing `{PIPED_ID}`, `{PIPED_KEY}`, `{FORKED_GITHUB_ORG}` with what you just got: - -``` console -helm install piped oci://ghcr.io/pipe-cd/chart/piped --version {{< blocks/latest_version >}} \ - --namespace pipecd \ - --set quickstart.enabled=true \ - --set quickstart.pipedId={PIPED_ID} \ - --set secret.data.piped-key={PIPED_KEY} \ - --set quickstart.gitRepoRemote=https://github.com/{FORKED_GITHUB_ORG}/examples.git -``` - -### 3. Registering a kubernetes application -Navigate to the `Applications` page, click on the `ADD` button on the top left corner. - -Go to the `ADD FROM SUGGESTIONS` tab, then select: -- Piped: `dev` (you just registered) -- PlatformProvider: `kubernetes-default` - -You should see a lot of suggested applications. Select the `canary` application and click the `SAVE` button to register. - -![](/images/quickstart-adding-application-from-suggestions.png) - -After a bit, the first deployment would be complete automatically to sync the application to the state specified in the current Git commit. - -![](/images/quickstart-first-deployment.png) - -### 4. Let's deploy! -Let's get started with deployment! All you have to do is to make a PR to update the image tag, scale the replicas, or change the manifests. - -For instance, open the `kubernetes/canary/deployment.yaml` under the forked examples' repository, then change the tag from `v0.1.0` to `v0.2.0`. - -![](/images/quickstart-update-image-tag.png) - -After a short wait, a new deployment will be started to update to `v0.2.0`. - -![](/images/quickstart-deploying.png) - -### 5. Cleanup -When you’re finished experimenting with PipeCD, you can uninstall with: - -``` console -helm -n pipecd uninstall piped -helm -n pipecd uninstall pipecd -kubectl delete deploy canary -n pipecd -kubectl delete svc canary -n pipecd -``` - -### What's next? - -To prepare your PipeCD for a production environment, please visit the [Installation](../installation/) guideline. 
For guidelines to use PipeCD to deploy your application in daily usage, please visit the [User guide](../user-guide/) docs. diff --git a/docs/content/en/docs-v0.37.x/user-guide/_index.md b/docs/content/en/docs-v0.37.x/user-guide/_index.md deleted file mode 100755 index 5482b97115..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "User Guide" -linkTitle: "User Guide" -weight: 5 -description: > - Guideline to use PipeCD, from installation to common features for daily usage. ---- - - diff --git a/docs/content/en/docs-v0.37.x/user-guide/command-line-tool.md b/docs/content/en/docs-v0.37.x/user-guide/command-line-tool.md deleted file mode 100644 index f29f92f55b..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/command-line-tool.md +++ /dev/null @@ -1,246 +0,0 @@ ---- -title: "Command-line tool: pipectl" -linkTitle: "Command-line tool: pipectl" -weight: 8 -description: > - This page describes how to install and use pipectl to manage PipeCD's resources. ---- - -Besides using web UI, PipeCD also provides a command-line tool, pipectl, which allows you to run commands against your project's resources. -You can use pipectl to add and sync applications, wait for a deployment status. - -## Installation - -### Binary - -1. Download the appropriate version for your platform from [PipeCD Releases](https://github.com/pipe-cd/pipecd/releases). - - We recommend using the latest version of pipectl to avoid unforeseen issues. - Run the following script: - - ``` console - # OS="darwin" or "linux" - curl -Lo ./pipectl https://github.com/pipe-cd/pipecd/releases/download/{{< blocks/latest_version >}}/pipectl_{{< blocks/latest_version >}}_{OS}_amd64 - ``` - -2. Make the pipectl binary executable. - - ``` console - chmod +x ./pipectl - ``` - -3. Move the binary to your PATH. - - ``` console - sudo mv ./pipectl /usr/local/bin/pipectl - ``` - -4. Test to ensure the version you installed is up-to-date. - - ``` console - pipectl version - ``` - -### Docker -We are storing every version of docker image for pipectl on Google Cloud Container Registry. -Available versions are [here](https://github.com/pipe-cd/pipecd/releases). - -``` -docker run --rm gcr.io/pipecd/pipectl:{VERSION} -h -``` - -## Authentication - -In order for pipectl to authenticate with PipeCD's Control Plane, it needs an API key, which can be created from `Settings/API Key` tab on the web UI. -There are two kinds of key role: `READ_ONLY` and `READ_WRITE`. Depending on the command, it might require an appropriate role to execute. - -![](/images/settings-api-key.png) -

-Adding a new API key from Settings tab -

- -When executing a command of pipectl you have to specify either a string of API key via `--api-key` flag or a path to the API key file via `--api-key-file` flag. - -## Usage - -### Help - -Run `help` to know the available commands: - -``` console -$ pipectl --help - -The command line tool for PipeCD. - -Usage: - pipectl [command] - -Available Commands: - application Manage application resources. - deployment Manage deployment resources. - encrypt Encrypt the plaintext entered in either stdin or the --input-file flag. - event Manage event resources. - help Help about any command - piped Manage piped resources. - plan-preview Show plan preview against the specified commit. - version Print the information of current binary. - -Flags: - -h, --help help for pipectl - --log-encoding string The encoding type for logger [json|console|humanize]. (default "humanize") - --log-level string The minimum enabled logging level. (default "info") - --metrics Whether metrics is enabled or not. (default true) - --profile If true enables uploading the profiles to Stackdriver. - --profile-debug-logging If true enables logging debug information of profiler. - --profiler-credentials-file string The path to the credentials file using while sending profiles to Stackdriver. - -Use "pipectl [command] --help" for more information about a command. -``` - -### Adding a new application - -Add a new application into the project: - -``` console -pipectl application add \ - --address=CONTROL_PLANE_API_ADDRESS \ - --api-key=API_KEY \ - --app-name=simple \ - --app-kind=KUBERNETES \ - --piped-id=PIPED_ID \ - --platform-provider=kubernetes-default \ - --repo-id=examples \ - --app-dir=kubernetes/simple -``` - -Run `help` to know what command flags should be specified: - -``` console -$ pipectl application add --help - -Add a new application. - -Usage: - pipectl application add [flags] - -Flags: - --app-dir string The relative path from the root of repository to the application directory. - --app-kind string The kind of application. (KUBERNETES|TERRAFORM|LAMBDA|CLOUDRUN) - --app-name string The application name. - --platform-provider string The platform provider name. One of the registered providers in the piped configuration. The previous name of this field is cloud-provider. - --config-file-name string The configuration file name. (default "app.pipecd.yaml") - --description string The description of the application. - -h, --help help for add - --piped-id string The ID of piped that should handle this application. - --repo-id string The repository ID. One the registered repositories in the piped configuration. - -Global Flags: - --address string The address to Control Plane api. - --api-key string The API key used while authenticating with Control Plane. - --api-key-file string Path to the file containing API key used while authenticating with Control Plane. - --cert-file string The path to the TLS certificate file. - --insecure Whether disabling transport security while connecting to Control Plane. - --log-encoding string The encoding type for logger [json|console|humanize]. (default "humanize") - --log-level string The minimum enabled logging level. (default "info") - --metrics Whether metrics is enabled or not. (default true) - --profile If true enables uploading the profiles to Stackdriver. - --profile-debug-logging If true enables logging debug information of profiler. - --profiler-credentials-file string The path to the credentials file using while sending profiles to Stackdriver. 
-``` - -### Syncing an application - -- Send a request to sync an application and exit immediately when the deployment is triggered: - - ``` console - pipectl application sync \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-id={APPLICATION_ID} - ``` - -- Send a request to sync an application and wait until the triggered deployment reaches one of the specified statuses: - - ``` console - pipectl application sync \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-id={APPLICATION_ID} \ - --wait-status=DEPLOYMENT_SUCCESS,DEPLOYMENT_FAILURE - ``` - -### Getting an application - -Display the information of a given application in JSON format: - -``` console -pipectl application get \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-id={APPLICATION_ID} -``` - -### Listing applications - -Find and display the information of matching applications in JSON format: - -``` console -pipectl application list \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-name={APPLICATION_NAME} \ - --app-kind=KUBERNETES \ -``` - -### Waiting a deployment status - -Wait until a given deployment reaches one of the specified statuses: - -``` console -pipectl deployment wait-status \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --deployment-id={DEPLOYMENT_ID} \ - --status=DEPLOYMENT_SUCCESS -``` - -### Registering an event for EventWatcher - -Register an event that can be used by EventWatcher: - -``` console -pipectl event register \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --name=example-image-pushed \ - --data=gcr.io/pipecd/example:v0.1.0 -``` - -### Encrypting the data you want to use when deploying - -Encrypt the plaintext entered either in stdin or via the `--input-file` flag. - -You can encrypt it the same way you do [from the web](../managing-application/secret-management/#encrypting-secret-data). - -- From stdin: - - ``` console - pipectl encrypt \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --piped-id={PIPED_ID} <{PATH_TO_SECRET_FILE} - ``` - -- From the `--input-file` flag: - - ``` console - pipectl encrypt \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --piped-id={PIPED_ID} \ - --input-file={PATH_TO_SECRET_FILE} - ``` - -### You want more? - -We always want to add more needed commands into pipectl. Please let us know what command you want to add by creating issues in the [pipe-cd/pipe](https://github.com/pipe-cd/pipecd/issues) repository. We also welcome your pull request to add the command. diff --git a/docs/content/en/docs-v0.37.x/user-guide/configuration-reference.md b/docs/content/en/docs-v0.37.x/user-guide/configuration-reference.md deleted file mode 100644 index fd0a0f50dd..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/configuration-reference.md +++ /dev/null @@ -1,671 +0,0 @@ ---- -title: "Configuration reference" -linkTitle: "Configuration reference" -weight: 9 -description: > - This page describes all configurable fields in the application configuration and analysis template. ---- - -## Kubernetes Application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes (if you want to create PipeCD application through the application configuration file) | -| labels | map[string]string | Additional attributes to identify applications. 
| No | -| description | string | Notes on the Application. | No | -| input | [KubernetesDeploymentInput](#kubernetesdeploymentinput) | Input for Kubernetes deployment such as kubectl version, helm version, manifests filter... | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| commitMatcher | [CommitMatcher](#commitmatcher) | Forcibly use QuickSync or Pipeline when commit message matched the specified pattern. | No | -| quickSync | [KubernetesQuickSync](#kubernetesquicksync) | Configuration for quick sync. | No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| service | [KubernetesService](#kubernetesservice) | Which Kubernetes resource should be considered as the Service of application. Empty means the first Service resource will be used. | No | -| workloads | [][KubernetesWorkload](#kubernetesworkload) | Which Kubernetes resources should be considered as the Workloads of application. Empty means all Deployment resources. | No | -| trafficRouting | [KubernetesTrafficRouting](#kubernetestrafficrouting) | How to change traffic routing percentages. | No | -| triggerPaths | []string | List of directories or files where their changes will trigger the deployment. Regular expression can be used. This field is `deprecated`, please use [`spec.trigger.onCommit.paths`](#deploymenttrigger) instead. | No (deprecated) | -| encryption | [SecretEncryption](#secretencryption) | List of encrypted secrets and targets that should be decrypted before using. | No | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. | No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. | No | -| variantLabel | [KubernetesVariantLabel](#kubernetesvariantlabel) | The label will be configured to variant manifests used to distinguish them. | No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | - -## Terraform application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: TerraformApp -spec: - input: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes if you set the application through the application configuration file | -| labels | map[string]string | Additional attributes to identify applications. | No | -| description | string | Notes on the Application. | No | -| input | [TerraformDeploymentInput](#terraformdeploymentinput) | Input for Terraform deployment such as terraform version, workspace... | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| quickSync | [TerraformQuickSync](#terraformquicksync) | Configuration for quick sync. | No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| triggerPaths | []string | List of directories or files where their changes will trigger the deployment. 
Regular expression can be used. This field is `deprecated`, please use [`spec.trigger.onCommit.paths`](#deploymenttrigger) instead. | No (deprecated) | -| encryption | [SecretEncryption](#secretencryption) | List of encrypted secrets and targets that should be decrypted before using. | No | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. | No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. | No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | - -## Cloud Run application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: CloudRunApp -spec: - input: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes if you set the application through the application configuration file | -| labels | map[string]string | Additional attributes to identify applications. | No | -| description | string | Notes on the Application. | No | -| input | [CloudRunDeploymentInput](#cloudrundeploymentinput) | Input for Cloud Run deployment such as docker image... | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| quickSync | [CloudRunQuickSync](#cloudrunquicksync) | Configuration for quick sync. | No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| triggerPaths | []string | List of directories or files where their changes will trigger the deployment. Regular expression can be used. This field is `deprecated`, please use [`spec.trigger.onCommit.paths`](#deploymenttrigger) instead. | No (deprecated) | -| encryption | [SecretEncryption](#secretencryption) | List of encrypted secrets and targets that should be decrypted before using. | No | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. | No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. | No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | - -## Lambda application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: LambdaApp -spec: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes if you set the application through the application configuration file | -| labels | map[string]string | Additional attributes to identify applications. | No | -| description | string | Notes on the Application. | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| quickSync | [LambdaQuickSync](#lambdaquicksync) | Configuration for quick sync. 
| No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| triggerPaths | []string | List of directories or files where their changes will trigger the deployment. Regular expression can be used. This field is `deprecated`, please use [`spec.trigger.onCommit.paths`](#deploymenttrigger) instead. | No (deprecated) | -| encryption | [SecretEncryption](#secretencryption) | List of encrypted secrets and targets that should be decrypted before using. | No | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. | No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. | No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | - -## ECS application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: ECSApp -spec: - input: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes if you set the application through the application configuration file | -| labels | map[string]string | Additional attributes to identify applications. | No | -| description | string | Notes on the Application. | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| input | [ECSDeploymentInput](#ecsdeploymentinput) | Input for ECS deployment such as TaskDefinition, Service... | Yes | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| quickSync | [ECSQuickSync](#ecsquicksync) | Configuration for quick sync. | No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| triggerPaths | []string | List of directories or files where their changes will trigger the deployment. Regular expression can be used. This field is `deprecated`, please use [`spec.trigger.onCommit.paths`](#deploymenttrigger) instead. | No (deprecated) | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. | No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. | No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | - -## Analysis Template Configuration - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: AnalysisTemplate -spec: - metrics: - grpc_error_rate_percentage: - interval: 1m - provider: prometheus-dev - failureLimit: 1 - expected: - max: 10 - query: awesome_query -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| metrics | map[string][AnalysisMetrics](#analysismetrics) | Template for metrics. | No | - -## Event Watcher Configuration (deprecated) - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: EventWatcher -spec: - events: - - name: helloworld-image-update - replacements: - - file: helloworld/deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The event name. 
| Yes | -| labels | map[string]string | Additional attributes of event. This can make an event definition unique even if the one with the same name exists. | No | -| replacements | [][EventWatcherReplacement](#eventwatcherreplacement) | List of places where will be replaced when the new event matches. | Yes | - -## EventWatcherReplacement -One of `yamlField` or `regex` is required. - -| Field | Type | Description | Required | -|-|-|-|-| -| file | string | The relative path from the repository root to the file to be updated. | Yes | -| yamlField | string | The yaml path to the field to be updated. It requires to start with `$` which represents the root element. e.g. `$.foo.bar[0].baz`. | No | -| regex | string | The regex string that specify what should be replaced. The only first capturing group enclosed by `()` will be replaced with the new value. e.g. `host.xz/foo/bar:(v[0-9].[0-9].[0-9])` | No | - -## CommitMatcher - -| Field | Type | Description | Required | -|-|-|-|-| -| quickSync | string | Regular expression string to forcibly do QuickSync when it matches the commit message. | No | -| pipeline | string | Regular expression string to forcibly do Pipeline when it matches the commit message. | No | - -## SecretEncryption - -| Field | Type | Description | Required | -|-|-|-|-| -| encryptedSecrets | map[string]string | List of encrypted secrets. | No | -| decryptionTargets | []string | List of files to be decrypted before using. | No | - -## DeploymentPlanner - -| Field | Type | Description | Required | -|-|-|-|-| -| alwaysUsePipeline | bool | Always use the defined pipeline to deploy the application in all deployments. Default is `false`. | No | - -## DeploymentTrigger - -| Field | Type | Description | Required | -|-|-|-|-| -| onCommit | [OnCommit](#oncommit) | Controls triggering new deployment when new Git commits touched the application. | No | -| onCommand | [OnCommand](#oncommand) | Controls triggering new deployment when received a new `SYNC` command. | No | -| onOutOfSync | [OnOutOfSync](#onoutofsync) | Controls triggering new deployment when application is at `OUT_OF_SYNC` state. | No | -| onChain | [OnChain](#onchain) | Controls triggering new deployment when the application is counted as a node of some chains. | No | - -## OnCommit - -| Field | Type | Description | Required | -|-|-|-|-| -| disabled | bool | Whether to exclude application from triggering target when new Git commits touched it. Default is `false`. | No | -| paths | []string | List of directories or files where any changes of them will be considered as touching the application. Regular expression can be used. Empty means watching all changes under the application directory. | No | - -## OnCommand - -| Field | Type | Description | Required | -|-|-|-|-| -| disabled | bool | Whether to exclude application from triggering target when received a new `SYNC` command. Default is `false`. | No | - -## OnOutOfSync - -| Field | Type | Description | Required | -|-|-|-|-| -| disabled | bool | Whether to exclude application from triggering target when application is at `OUT_OF_SYNC` state. Default is `true`. | No | -| minWindow | duration | Minimum amount of time must be elapsed since the last deployment. This can be used to avoid triggering unnecessary continuous deployments based on `OUT_OF_SYNC` status. Default is `5m`. | No | - -## OnChain - -| Field | Type | Description | Required | -|-|-|-|-| -| disabled | bool | Whether to exclude application from triggering target when application is counted as a node of some chains. 
Default is `true`. | No | - -## Pipeline - -| Field | Type | Description | Required | -|-|-|-|-| -| stages | [][PipelineStage](#pipelinestage) | List of deployment pipeline stages. | No | - -## PipelineStage - -| Field | Type | Description | Required | -|-|-|-|-| -| id | string | The unique ID of the stage. | No | -| name | string | One of the provided stage names. | Yes | -| desc | string | The description about the stage. | No | -| timeout | duration | The maximum time the stage can be taken to run. | No | -| with | [StageOptions](#stageoptions) | Specific configuration for the stage. This must be one of these [StageOptions](#stageoptions). | No | - -## DeploymentNotification - -| Field | Type | Description | Required | -|-|-|-|-| -| mentions | [][NotificationMention](#notificationmention) | List of users to be notified for each event. | No | - -## NotificationMention - -| Field | Type | Description | Required | -|-|-|-|-| -| event | string | The event to be notified to users. | Yes | -| slack | []string | List of user IDs for mentioning in Slack. See [here](https://api.slack.com/reference/surfaces/formatting#mentioning-users) for more information on how to check them. | No | - -## KubernetesDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| -| manifests | []string | List of manifest files in the application directory used to deploy. Empty means all manifest files in the directory will be used. | No | -| kubectlVersion | string | Version of kubectl will be used. Empty means the [default version](https://github.com/pipe-cd/pipecd/blob/master/tool/piped-base/install-kubectl.sh#L24) will be used. | No | -| kustomizeVersion | string | Version of kustomize will be used. Empty means the [default version](https://github.com/pipe-cd/pipecd/blob/master/tool/piped-base/install-kustomize.sh#L24) will be used. | No | -| kustomizeOptions | map[string]string | List of options that should be used by Kustomize commands. | No | -| helmVersion | string | Version of helm will be used. Empty means the [default version](https://github.com/pipe-cd/pipecd/blob/master/tool/piped-base/install-helm.sh#L24) will be used. | No | -| helmChart | [HelmChart](#helmchart) | Where to fetch helm chart. | No | -| helmOptions | [HelmOptions](#helmoptions) | Configurable parameters for helm commands. | No | -| namespace | string | The namespace where manifests will be applied. | No | -| autoRollback | bool | Automatically reverts all deployment changes on failure. Default is `true`. | No | - -## KubernetesVariantLabel - -| Field | Type | Description | Required | -|-|-|-|-| -| key | string | The key of the label. Default is `pipecd.dev/variant`. | No | -| primaryValue | string | The label value for PRIMARY variant. Default is `primary`. | No | -| canaryValue | string | The label value for CANARY variant. Default is `canary`. | No | -| baselineValue | string | The label value for BASELINE variant. Default is `baseline`. | No | - -## HelmChart - -| Field | Type | Description | Required | -|-|-|-|-| -| gitRemote | string | Git remote address where the chart is placing. Empty means the same repository. | No | -| ref | string | The commit SHA or tag value. Only valid when gitRemote is not empty. | No | -| path | string | Relative path from the repository root to the chart directory. | No | -| repository | string | The name of a registered Helm Chart Repository. | No | -| name | string | The chart name. | No | -| version | string | The chart version. 
| No | - -## HelmOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| releaseName | string | The release name of helm deployment. By default, the release name is equal to the application name. | No | -| valueFiles | []string | List of value files should be loaded. Only local files stored under the application directory or remote files served at the http(s) endpoint are allowed. | No | -| setFiles | map[string]string | List of file path for values. | No | -| apiVersions | []string | Kubernetes api versions used for Capabilities.APIVersions. | No | -| kubeVersion | string | Kubernetes version used for Capabilities.KubeVersion. | No | - -## KubernetesQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| -| addVariantLabelToSelector | bool | Whether the PRIMARY variant label should be added to manifests if they were missing. Default is `false`. | No | -| prune | bool | Whether the resources that are no longer defined in Git should be removed or not. Default is `false` | No | - -## KubernetesService - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of Service manifest. | No | - -## KubernetesWorkload - -| Field | Type | Description | Required | -|-|-|-|-| -| kind | string | The kind name of workload manifests. Currently, only `Deployment` is supported. In the future, we also want to support `ReplicationController`, `DaemonSet`, `StatefulSet`. | No | -| name | string | The name of workload manifest. | No | - -## KubernetesTrafficRouting - -| Field | Type | Description | Required | -|-|-|-|-| -| method | string | Which traffic routing method will be used. Available values are `istio`, `smi`, `podselector`. Default is `podselector`. | No | -| istio | [IstioTrafficRouting](#istiotrafficrouting)| Istio configuration when the method is `istio`. | No | - -## IstioTrafficRouting - -| Field | Type | Description | Required | -|-|-|-|-| -| editableRoutes | []string | List of routes in the VirtualService that can be changed to update traffic routing. Empty means all routes should be updated. | No | -| host | string | The service host. | No | -| virtualService | [IstioVirtualService](#istiovirtualservice) | The reference to VirtualService manifest. Empty means the first VirtualService resource will be used. | No | - -## IstioVirtualService - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of VirtualService manifest. | No | - -## TerraformDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| -| workspace | string | The terraform workspace name. Empty means `default` workspace. | No | -| terraformVersion | string | The version of terraform should be used. Empty means the pre-installed version will be used. | No | -| vars | []string | List of variables that will be set directly on terraform commands with `-var` flag. The variable must be formatted by `key=value`. | No | -| varFiles | []string | List of variable files that will be set on terraform commands with `-var-file` flag. | No | -| commandFlags | [TerraformCommandFlags](#terraformcommandflags) | List of additional flags will be used while executing terraform commands. | No | -| commandEnvs | [TerraformCommandEnvs](#terraformcommandenvs) | List of additional environment variables will be used while executing terraform commands. | No | -| autoRollback | bool | Automatically reverts all changes from all stages when one of them failed. 
| No | - -## TerraformQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| -| retries | int | How many times to retry applying terraform changes. Default is `0`. | No | - -## TerraformCommandFlags - -| Field | Type | Description | Required | -|-|-|-|-| -| shared | []string | List of additional flags used for all Terraform commands. | No | -| init | []string | List of additional flags used for Terraform `init` command. | No | -| plan | []string | List of additional flags used for Terraform `plan` command. | No | -| apply | []string | List of additional flags used for Terraform `apply` command. | No | - -## TerraformCommandEnvs - -| Field | Type | Description | Required | -|-|-|-|-| -| shared | []string | List of additional environment variables used for all Terraform commands. | No | -| init | []string | List of additional environment variables used for Terraform `init` command. | No | -| plan | []string | List of additional environment variables used for Terraform `plan` command. | No | -| apply | []string | List of additional environment variables used for Terraform `apply` command. | No | - -## CloudRunDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| -| serviceManifestFile | string | The name of service manifest file placing in application directory. Default is `service.yaml`. | No | -| autoRollback | bool | Automatically reverts to the previous state when the deployment is failed. Default is `true`. | No | - -## CloudRunQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| - -## LambdaDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| - -## LambdaQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| - -## ECSDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| -| serviceDefinitionFile | string | The path ECS Service configuration file. Allow file in both `yaml` and `json` format. The default value is `service.json`. See [here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html) for parameters.| No | -| taskDefinitionFile | string | The path to ECS TaskDefinition configuration file. Allow file in both `yaml` and `json` format. The default value is `taskdef.json`. See [here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html) for parameters. | No | -| targetGroups | [ECSTargetGroupInput](#ecstargetgroupinput) | The target groups configuration, will be used to routing traffic to created task sets. | Yes (if you want to perform progressive delivery) | - -### ECSTargetGroupInput - -| Field | Type | Description | Required | -|-|-|-|-| -| primary | ECSTargetGroupObject | The PRIMARY target group, will be used to register the PRIMARY ECS task set. | Yes | -| canary | ECSTargetGroupObject | The CANARY target group, will be used to register the CANARY ECS task set if exist. It's required to enable PipeCD to perform the multi-stage deployment. | No | - -Note: You can get examples for those object from [here](../../examples/#ecs-applications). - -## ECSQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| - -## AnalysisMetrics - -| Field | Type | Description | Required | -|-|-|-|-| -| provider | string | The unique name of provider defined in the Piped Configuration. | Yes | -| strategy | string | The strategy name. One of `THRESHOLD` or `PREVIOUS` or `CANARY_BASELINE` or `CANARY_PRIMARY` is available. Defaults to `THRESHOLD`. 
| No | -| query | string | A query performed against the [Analysis Provider](../../concepts/#analysis-provider). The stage will be skipped if no data points were returned. | Yes | -| expected | [AnalysisExpected](#analysisexpected) | The statically defined expected query result. This field is ignored if there was no data point as a result of the query. | Yes if the strategy is `THRESHOLD` | -| interval | duration | Run a query at specified intervals. | Yes | -| failureLimit | int | Acceptable number of failures. e.g. If 1 is set, the `ANALYSIS` stage will end with failure after two queries results failed. Defaults to 1. | No | -| skipOnNoData | bool | If true, it considers as a success when no data returned from the analysis provider. Defaults to false. | No | -| deviation | string | The stage fails on deviation in the specified direction. One of `LOW` or `HIGH` or `EITHER` is available. This can be used only for `PREVIOUS`, `CANARY_BASELINE` or `CANARY_PRIMARY`. Defaults to `EITHER`. | No | -| baselineArgs | map[string][string] | The custom arguments to be populated for the Baseline query. They can be reffered as `{{ .VariantCustomArgs.xxx }}`. | No | -| canaryArgs | map[string][string] | The custom arguments to be populated for the Canary query. They can be reffered as `{{ .VariantCustomArgs.xxx }}`. | No | -| primaryArgs | map[string][string] | The custom arguments to be populated for the Primary query. They can be reffered as `{{ .VariantCustomArgs.xxx }}`. | No | -| timeout | duration | How long after which the query times out. | No | -| template | [AnalysisTemplateRef](#analysistemplateref) | Reference to the template to be used. | No | - - -## AnalysisLog - -| Field | Type | Description | Required | -|-|-|-|-| - -## AnalysisHttp - -| Field | Type | Description | Required | -|-|-|-|-| - -## AnalysisExpected - -| Field | Type | Description | Required | -|-|-|-|-| -| min | float64 | Failure, if the query result is less than this value. | No | -| max | float64 | Failure, if the query result is larger than this value. | No | - -## AnalysisTemplateRef - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The template name to refer. | Yes | -| appArgs | map[string]string | The arguments for custom-args. | No | - -## StageOptions - -### KubernetesPrimaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| suffix | string | Suffix that should be used when naming the PRIMARY variant's resources. Default is `primary`. | No | -| createService | bool | Whether the PRIMARY service should be created. Default is `false`. | No | -| addVariantLabelToSelector | bool | Whether the PRIMARY variant label should be added to manifests if they were missing. Default is `false`. | No | -| prune | bool | Whether the resources that are no longer defined in Git should be removed or not. Default is `false` | No | - -### KubernetesCanaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| replicas | int | How many pods for CANARY workloads. Default is `1` pod. Alternatively, can be specified a string suffixed by "%" to indicate a percentage value compared to the pod number of PRIMARY | No | -| suffix | string | Suffix that should be used when naming the CANARY variant's resources. Default is `canary`. | No | -| createService | bool | Whether the CANARY service should be created. Default is `false`. | No | -| patches | [][KubernetesResourcePatch](#kubernetesresourcepatch) | List of patches used to customize manifests for CANARY variant. 
| No | - -### KubernetesCanaryCleanStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| | | | | - -### KubernetesBaselineRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| replicas | int | How many pods for BASELINE workloads. Default is `1` pod. Alternatively, can be specified a string suffixed by "%" to indicate a percentage value compared to the pod number of PRIMARY | No | -| suffix | string | Suffix that should be used when naming the BASELINE variant's resources. Default is `baseline`. | No | -| createService | bool | Whether the BASELINE service should be created. Default is `false`. | No | - -### KubernetesBaselineCleanStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| | | | | - -### KubernetesTrafficRoutingStageOptions -This stage routes traffic with the method specified in [KubernetesTrafficRouting](#kubernetestrafficrouting). -When using `podselector` method as a traffic routing method, routing is done by updating the Service selector. -Therefore, note that all traffic will be routed to the primary if the the primary variant's service is rolled out by running the `K8S_PRIMARY_ROLLOUT` stage. - -| Field | Type | Description | Required | -|-|-|-|-| -| all | string | Which variant should receive all traffic. Available values are "primary", "canary", "baseline". Default is `primary`. | No | -| primary | [Percentage](#percentage) | The percentage of traffic should be routed to PRIMARY variant. | No | -| canary | [Percentage](#percentage) | The percentage of traffic should be routed to CANARY variant. | No | -| baseline | [Percentage](#percentage) | The percentage of traffic should be routed to BASELINE variant. | No | - -### TerraformPlanStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| - -### TerraformApplyStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| retries | int | How many times to retry applying terraform changes. Default is `0`. | No | - -### CloudRunPromoteStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| percent | [Percentage](#percentage) | Percentage of traffic should be routed to the new version. | No | - -### LambdaCanaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| - -### LambdaPromoteStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| percent | [Percentage](#percentage) | Percentage of traffic should be routed to the new version. | No | - -### ECSPrimaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| - -### ECSCanaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| scale | [Percentage](#percentage) | The percentage of workloads should be rolled out as CANARY variant's workload. | Yes | - -### ECSTrafficRoutingStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| primary | [Percentage](#percentage) | The percentage of traffic should be routed to PRIMARY variant. | No | -| canary | [Percentage](#percentage) | The percentage of traffic should be routed to CANARY variant. | No | - -Note: By default, the sum of traffic is rounded to 100. If both `primary` and `canary` numbers are not set, the PRIMARY variant will receive 100% while the CANARY variant will receive 0% of the traffic. - -### AnalysisStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| duration | duration | Maximum time to perform the analysis. 
| Yes | -| metrics | [][AnalysisMetrics](#analysismetrics) | Configuration for analysis by metrics. | No | - -## PostSync - -| Field | Type | Description | Required | -|-|-|-|-| -| chain | [DeploymentChain](#deploymentchain) | Deployment chain configuration, used to determine and build deployments that should be triggered once the current deployment is triggered. | No | - -### DeploymentChain - -| Field | Type | Description | Required | -|-|-|-|-| -| applications | [][DeploymentChainApplication](#deploymentchainapplication) | The list of applications which should be triggered once deployment of this application rolled out successfully. | Yes | - -### DeploymentChainApplication - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of PipeCD application, note that application name is not unique in PipeCD datastore | No | -| kind | string | The kind of the PipeCD application, which should be triggered as a node in deployment chain. The value will be one of: KUBERNETES, TERRAFORM, CLOUDRUN, LAMBDA, ECS. | No | - -## PipeCD rich defined types - -### Percentage -A wrapper of type `int` to represent percentage data. Basically, you can pass `10` or `"10"` or `10%` and they will be treated as `10%` in PipeCD. - -### KubernetesResourcePatch - -| Field | Type | Description | Required | -|-|-|-|-| -| target | [KubernetesResourcePatchTarget](#kubernetesresourcepatchtarget) | Which manifest, which field will be the target of patch operations. | Yes | -| ops | [][KubernetesResourcePatchOp](#kubernetesresourcepatchop) | List of operations should be applied to the above target. | No | - -### KubernetesResourcePatchTarget - -| Field | Type | Description | Required | -|-|-|-|-| -| kind | string | The resource kind. e.g. `ConfigMap` | Yes | -| name | string | The resource name. e.g. `config-map-name` | Yes | -| documentRoot | string | In case you want to manipulate the YAML or JSON data specified in a field of the manfiest, specify that field's path. The string value of that field will be used as input for the patch operations. Otherwise, the whole manifest will be the target of patch operations. e.g. `$.data.envoy-config` | No | - -### KubernetesResourcePatchOp - -| Field | Type | Description | Required | -|-|-|-|-| -| op | string | The operation type. This must be one of `yaml-replace`, `yaml-add`, `yaml-remove`, `json-replace`, `text-regex`. Default is `yaml-replace`. | No | -| path | string | The path string pointing to the manipulated field. For yaml operations it looks like `$.foo.array[0].bar`. | No | -| value | string | The value string whose content will be used as new value for the field. | No | - -## EventWatcher - -| Field | Type | Description | Required | -|-|-|-|-| -| matcher | [EventWatcherMatcher](#eventwatchermatcher) | Which event will be handled. | Yes | -| handler | [EventWatcherHandler](#eventwatcherhandler) | What to do for the event which matched by the above matcher. | Yes | - -### EventWatcherMatcher - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The event name. | Yes | -| labels | map[string]string | Additional attributes of event. This can make an event definition unique even if the one with the same name exists. | No | - -### EventWatcherHandler - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | The handler type. Currently, only `GIT_UPDATE` is supported. | Yes | -| config | [EventWatcherHandlerConfig](#eventwatcherhandlerconfig) | Configuration for the event watcher handler. 
| Yes | - -### EventWatcherHandlerConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| commitMessage | string | The commit message used to push after replacing values. Default message is used if not given. | No | -| replacements | [][EventWatcherReplacement](#eventwatcherreplacement) | List of places where will be replaced when the new event matches. | Yes | diff --git a/docs/content/en/docs-v0.37.x/user-guide/event-watcher.md b/docs/content/en/docs-v0.37.x/user-guide/event-watcher.md deleted file mode 100644 index ba32f9fc21..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/event-watcher.md +++ /dev/null @@ -1,233 +0,0 @@ ---- -title: "Connect between CI and CD with event watcher" -linkTitle: "Event watcher" -weight: 3 -description: > - A helper facility to automatically update files when it finds out a new event. ---- - -![](/images/diff-by-eventwatcher.png) - -The only way to upgrade your application with PipeCD is modifying configuration files managed by the Git repositories. -It brings benefits quite a bit, but it can be painful to manually update them every time in some cases (e.g. continuous deployment to your development environment for debugging, the latest prerelease to the staging environment). - -If you're experiencing any of the above pains, Event watcher is for you. -Event watcher works as a helper facility to seamlessly link CI and CD. This feature lets you automatically update files managed by your Piped when an arbitrary event has occurred. -While it empowers you to build pretty versatile workflows, the canonical use case is that you trigger a new deployment by image updates, package releases, etc. - -This guide walks you through configuring Event watcher and how to push an Event. - -## Prerequisites -Before we get into configuring EventWatcher, be sure to configure Piped. See [here](../managing-piped/configuring-event-watcher/) for more details. - -## Usage -File updating can be done by registering the latest value corresponding to the Event in the Control Plane and comparing it with the current value. - -Therefore, you mainly need to: -1. define which values in which files should be updated when a new Event found. -1. integrate a step to push an Event to the Control Plane using `pipectl` into your CI workflow. - -### 1. Defining Events -#### Use the `.pipe/` directory ->NOTE: This way is deprecated and will be removed in the future, so please use the application configuration. - -Prepare EventWatcher configuration files under the `.pipe/` directory at the root of your Git repository. -In that files, you define which values in which files should be updated when the Piped found out a new Event. - -For instance, suppose you want to update the Kubernetes manifest defined in `helloworld/deployment.yaml` when an Event with the name `helloworld-image-update` occurs: - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: EventWatcher -spec: - events: - - name: helloworld-image-update - replacements: - - file: helloworld/deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -The full list of configurable `EventWatcher` fields are [here](../configuration-reference/#event-watcher-configuration-deprecated). - -#### Use the application configuration - -Define what to do for which event in the application configuration file of the target application. - -- `matcher`: Which event should be handled. -- `handler`: What to do for the event which is specified by matcher. 
- -For instance, suppose you want to update the Kubernetes manifest defined in `helloworld/deployment.yaml` when an Event with the name `helloworld-image-update` occurs: -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - name: helloworld - eventWatcher: - - matcher: - name: helloworld-image-update - handler: - type: GIT_UPDATE - config: - replacements: - - file: deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -The full list of configurable `eventWatcher` fields are [here](../configuration-reference/#eventwatcher). - -### 2. Pushing an Event with `pipectl` -To register a new value corresponding to Event such as the above in the Control Plane, you need to perform `pipectl`. -And we highly recommend integrating a step for that into your CI workflow. - -You first need to set-up the `pipectl`: - -- Install it on your CI system or where you want to run according to [this guide](../command-line-tool/#installation). -- Grab the API key to which the `READ_WRITE` role is attached according to [this guide](../command-line-tool/#authentication). - -Once you're all set up, pushing a new Event to the Control Plane by the following command: - -```bash -pipectl event register \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --name=helloworld-image-update \ - --data=gcr.io/pipecd/helloworld:v0.2.0 -``` - -You can see the status on the event list page. - -![](/images/event-list-page.png) - - -After a while, Piped will create a commit as shown below: - -```diff - spec: - containers: - - name: helloworld -- image: gcr.io/pipecd/helloworld:v0.1.0 -+ image: gcr.io/pipecd/helloworld:v0.2.0 -``` - -NOTE: Keep in mind that it may take a little while because Piped periodically fetches the new events from the Control Plane. You can change its interval according to [here](../managing-piped/configuration-reference/#eventwatcher). - -### [optional] Using labels -Event watcher is a project-wide feature, hence an event name is unique inside a project. That is, you can update multiple repositories at the same time if you use the same event name for different events. - -On the contrary, if you want to explicitly distinguish those, we recommend using labels. You can make an event definition unique by using any number of labels with arbitrary keys and values. -Suppose you define an event with the labels `env: dev` and `appName: helloworld`: - -When you use the `.pipe/` directory, you can configure like below. -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: EventWatcher -spec: - events: - - name: image-update - labels: - env: dev - appName: helloworld - replacements: - - file: helloworld/deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -The other example is like below. -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: ApplicationKind -spec: - name: helloworld - eventWatcher: - - matcher: - name: image-update - labels: - env: dev - appName: helloworld - handler: - type: GIT_UPDATE - config: - replacements: - - file: deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -The file update will be executed only when the labels are explicitly specified with the `--labels` flag. - -```bash -pipectl event register \ - --address=CONTROL_PLANE_API_ADDRESS \ - --api-key=API_KEY \ - --name=image-update \ - --labels env=dev,appName=helloworld \ - --data=gcr.io/pipecd/helloworld:v0.2.0 -``` - -Note that it is considered a match only when labels are an exact match. 
- -## Examples -Suppose you want to update your configuration file after releasing a new Helm chart. - -You define the configuration for event watcher in `helloworld/app.pipecd.yaml` file like: - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - helmChart: - name: helloworld - version: 0.1.0 - eventWatcher: - - matcher: - name: image-update - labels: - env: dev - appName: helloworld - handler: - type: GIT_UPDATE - config: - replacements: - - file: app.pipecd.yaml - yamlField: $.spec.input.helmChart.version -``` - -Push a new version `0.2.0` as data when the Helm release is completed. - -```bash -pipectl event register \ - --address=CONTROL_PLANE_API_ADDRESS \ - --api-key=API_KEY \ - --name=helm-release \ - --labels env=dev,appName=helloworld \ - --data=0.2.0 -``` - -Then you'll see that Piped updates as: - -```diff -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - helmChart: - name: helloworld -- version: 0.1.0 -+ version: 0.2.0 - eventWatcher: - - matcher: - name: image-update - labels: - env: dev - appName: helloworld - handler: - type: GIT_UPDATE - config: - replacements: - - file: app.pipecd.yaml - yamlField: $.spec.input.helmChart.version -``` - -## Github Actions -If you're using Github Actions in your CI workflow, [actions-event-register](https://github.com/marketplace/actions/pipecd-register-event) is for you! -With it, you can easily register events without any installation. diff --git a/docs/content/en/docs-v0.37.x/user-guide/examples/_index.md b/docs/content/en/docs-v0.37.x/user-guide/examples/_index.md deleted file mode 100755 index 9a6c69f276..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/examples/_index.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "Examples" -linkTitle: "Examples" -weight: 10 -description: > - Some examples of PipeCD in action! ---- - -One of the best ways to see what PipeCD can do, and learn how to deploy your applications with it, is to see some real examples. - -We have prepared some examples for each kind of application, please visit the [PipeCD examples](../../examples/) page for details. diff --git a/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-bluegreen-with-istio.md b/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-bluegreen-with-istio.md deleted file mode 100644 index 7544f8ca79..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-bluegreen-with-istio.md +++ /dev/null @@ -1,126 +0,0 @@ ---- -title: "BlueGreen deployment for Kubernetes app with Istio" -linkTitle: "BlueGreen k8s app with Istio" -weight: 2 -description: > - How to enable blue-green deployment for Kubernetes application with Istio. ---- - -Similar to [canary deployment](../k8s-app-canary-with-istio/), PipeCD allows you to enable and automate the blue-green deployment strategy for your application based on Istio's weighted routing feature. - -In both canary and blue-green strategies, the old version and the new version of the application get deployed at the same time. -But while the canary strategy slowly routes the traffic to the new version, the blue-green strategy quickly routes all traffic to one of the versions. - -In this guide, we will show you how to configure the application configuration file to apply the blue-green strategy. - -Complete source code for this example is hosted in [pipe-cd/examples](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-istio-bluegreen) repository. 
- -## Before you begin - -- Add a new Kubernetes application by following the instructions in [this guide](../../managing-application/adding-an-application/) -- Ensure having `pipecd.dev/variant: primary` [label](https://github.com/pipe-cd/examples/blob/master/kubernetes/mesh-istio-bluegreen/deployment.yaml#L17) and [selector](https://github.com/pipe-cd/examples/blob/master/kubernetes/mesh-istio-bluegreen/deployment.yaml#L12) in the workload template -- Ensure having at least one Istio's `DestinationRule` and defining the needed subsets (`primary` and `canary`) with `pipecd.dev/variant` label - -``` yaml -apiVersion: networking.istio.io/v1beta1 -kind: DestinationRule -metadata: - name: mesh-istio-bluegreen -spec: - host: mesh-istio-bluegreen - subsets: - - name: primary - labels: - pipecd.dev/variant: primary - - name: canary - labels: - pipecd.dev/variant: canary - trafficPolicy: - tls: - mode: ISTIO_MUTUAL -``` - -- Ensure having at least one Istio's `VirtualService` manifest and all traffic is routed to the `primary` - -``` yaml -apiVersion: networking.istio.io/v1beta1 -kind: VirtualService -metadata: - name: mesh-istio-bluegreen -spec: - hosts: - - mesh-istio-bluegreen.pipecd.dev - gateways: - - mesh-istio-bluegreen - http: - - route: - - destination: - host: mesh-istio-bluegreen - subset: primary - weight: 100 -``` - -## Enabling blue-green strategy - -- Add the following application configuration file into the application directory in the Git repository. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - with: - replicas: 100% - - name: K8S_TRAFFIC_ROUTING - with: - all: canary - - name: WAIT_APPROVAL - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_TRAFFIC_ROUTING - with: - all: primary - - name: K8S_CANARY_CLEAN - trafficRouting: - method: istio - istio: - host: mesh-istio-bluegreen -``` - -- Send a PR to update the container image version in the Deployment manifest and merge it to trigger a new deployment. PipeCD will plan the deployment with the specified blue-green strategy. - -![](/images/example-bluegreen-kubernetes-istio.png) -

-Deployment Details Page -

- -- Now you have an automated blue-green deployment for your application. 🎉 - -## Understanding what happened - -In this example, you configured the application configuration file to switch all traffic from an old to a new version of the application using Istio's weighted routing feature. - -- Stage 1: `K8S_CANARY_ROLLOUT` ensures that the workloads of canary variant (new version) should be deployed. But at this time, they still handle nothing, all traffic is handled by workloads of primary variant. -The number of workloads (e.g. pod) for canary variant is configured to be 100% of the replicas number of primary varant. - -![](/images/example-bluegreen-kubernetes-istio-stage-1.png) - -- Stage 2: `K8S_TRAFFIC_ROUTING` ensures that all traffic should be routed to canary variant. Because the `trafficRouting` is configured to use Istio, PipeCD will find Istio's VirtualService resource of this application to control the traffic percentage. -(You can add an [ANALYSIS](../../managing-application/customizing-deployment/automated-deployment-analysis/) stage after this to validate the new version. When any negative impacts are detected, an auto-rollback stage will be executed to switch all traffic back to the primary variant.) - -![](/images/example-bluegreen-kubernetes-istio-stage-2.png) - -- Stage 3: `WAIT_APPROVAL` waits for a manual approval from someone in your team. - -- Stage 4: `K8S_PRIMARY_ROLLOUT` ensures that all resources of primary variant will be updated to the new version. - -![](/images/example-bluegreen-kubernetes-istio-stage-4.png) - -- Stage 5: `K8S_TRAFFIC_ROUTING` ensures that all traffic should be routed to primary variant. Now primary variant is running the new version so it means all traffic is handled by the new version. - -![](/images/example-bluegreen-kubernetes-istio-stage-5.png) - -- Stage 6: `K8S_CANARY_CLEAN` ensures all created resources for canary variant should be destroyed. - -![](/images/example-bluegreen-kubernetes-istio-stage-6.png) diff --git a/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-bluegreen-with-pod-selector.md b/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-bluegreen-with-pod-selector.md deleted file mode 100644 index c303b64cbe..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-bluegreen-with-pod-selector.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "BlueGreen deployment for Kubernetes app with PodSelector" -linkTitle: "BlueGreen k8s app with PodSelector" -weight: 4 -description: > - How to enable blue-green deployment for Kubernetes application with PodSelector. ---- - -> TBA - -For applications that are not deployed on a service mesh, PipeCD can enable blue-green deployment with Kubernetes L4 networking. diff --git a/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-canary-with-istio.md b/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-canary-with-istio.md deleted file mode 100644 index 286b361ded..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-canary-with-istio.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: "Canary deployment for Kubernetes app with Istio" -linkTitle: "Canary k8s app with Istio" -weight: 1 -description: > - How to enable canary deployment for Kubernetes application with Istio. ---- - -> Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody. 
-> -- [martinfowler.com/canaryrelease](https://martinfowler.com/bliki/CanaryRelease.html) - -With Istio, we can accomplish this goal by configuring a sequence of rules that route a percentage of traffic to each [variant](../../managing-application/defining-app-configuration/kubernetes/#sync-with-the-specified-pipeline) of the application. -And with PipeCD, you can enable and automate the canary strategy for your Kubernetes application even easier. - -In this guide, we will show you how to configure the application configuration file to send 10% of traffic to the new version and keep 90% to the primary variant. Then after waiting for manual approval, you will complete the migration by sending 100% of traffic to the new version. - -Complete source code for this example is hosted in [pipe-cd/examples](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-istio-canary) repository. - -## Before you begin - -- Add a new Kubernetes application by following the instructions in [this guide](../../managing-application/adding-an-application/) -- Ensure having `pipecd.dev/variant: primary` [label](https://github.com/pipe-cd/examples/blob/master/kubernetes/mesh-istio-canary/deployment.yaml#L17) and [selector](https://github.com/pipe-cd/examples/blob/master/kubernetes/mesh-istio-canary/deployment.yaml#L12) in the workload template -- Ensure having at least one Istio's `DestinationRule` and defining the needed subsets (`primary` and `canary`) with `pipecd.dev/variant` label - -``` yaml -apiVersion: networking.istio.io/v1beta1 -kind: DestinationRule -metadata: - name: mesh-istio-canary -spec: - host: mesh-istio-canary.default.svc.cluster.local - subsets: - - name: primary - labels: - pipecd.dev/variant: primary - - name: canary - labels: - pipecd.dev/variant: canary -``` - -- Ensure having at least one Istio's `VirtualService` manifest and all traffic is routed to the `primary` - -``` yaml -apiVersion: networking.istio.io/v1beta1 -kind: VirtualService -metadata: - name: mesh-istio-canary -spec: - hosts: - - mesh-istio-canary.pipecd.dev - gateways: - - mesh-istio-canary - http: - - route: - - destination: - host: mesh-istio-canary.default.svc.cluster.local - subset: primary - weight: 100 -``` - -## Enabling canary strategy - -- Add the following application configuration file into the application directory in Git. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - with: - replicas: 50% - - name: K8S_TRAFFIC_ROUTING - with: - canary: 10 - primary: 90 - - name: WAIT_APPROVAL - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_TRAFFIC_ROUTING - with: - primary: 100 - - name: K8S_CANARY_CLEAN - trafficRouting: - method: istio - istio: - host: mesh-istio-canary.default.svc.cluster.local -``` - -- Send a PR to update the container image version in the Deployment manifest and merge it to trigger a new deployment. PipeCD will plan the deployment with the specified canary strategy. - -![](/images/example-canary-kubernetes-istio.png) -

-Deployment Details Page -

- -- Now you have an automated canary deployment for your application. 🎉 - -## Understanding what happened - -In this example, you configured the application configuration file to migrate traffic from an old to a new version of the application using Istio's weighted routing feature. - -- Stage 1: `K8S_CANARY_ROLLOUT` ensures that the workloads of canary variant (new version) should be deployed. But at this time, they still handle nothing, all traffic are handled by workloads of primary variant. -The number of workloads (e.g. pod) for canary variant is configured to be 50% of the replicas number of primary varant. - -![](/images/example-canary-kubernetes-istio-stage-1.png) - -- Stage 2: `K8S_TRAFFIC_ROUTING` ensures that 10% of traffic should be routed to canary variant and 90% to primary variant. Because the `trafficRouting` is configured to use Istio, PipeCD will find Istio's VirtualService resource of this application to control the traffic percentage. - -![](/images/example-canary-kubernetes-istio-stage-2.png) - -- Stage 3: `WAIT_APPROVAL` waits for a manual approval from someone in your team. - -- Stage 4: `K8S_PRIMARY_ROLLOUT` ensures that all resources of primary variant will be updated to the new version. - -![](/images/example-canary-kubernetes-istio-stage-4.png) - -- Stage 5: `K8S_TRAFFIC_ROUTING` ensures that all traffic should be routed to primary variant. Now primary variant is running the new version so it means all traffic is handled by the new version. - -![](/images/example-canary-kubernetes-istio-stage-5.png) - -- Stage 6: `K8S_CANARY_CLEAN` ensures all created resources for canary variant should be destroyed. - -![](/images/example-canary-kubernetes-istio-stage-6.png) diff --git a/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-canary-with-pod-selector.md b/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-canary-with-pod-selector.md deleted file mode 100644 index 5993bc101e..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/examples/k8s-app-canary-with-pod-selector.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -title: "Canary deployment for Kubernetes app with PodSelector" -linkTitle: "Canary k8s app with PodSelector" -weight: 3 -description: > - How to enable canary deployment for Kubernetes application with PodSelector. ---- - -Using service mesh like [Istio](../k8s-app-canary-with-istio/) helps you doing canary deployment easier with many powerful features, but not all teams are ready to use service mesh in their environment. This page will walk you through using PipeCD to enable canary deployment for Kubernetes application running in a non-mesh environment. - -Basically, the idea behind is described as this [Kubernetes document](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments); the Service resource uses the common label set to route the traffic to both canary and primary workloads, and percentage of traffic for each variant is based on their replicas number. 
- -## Enabling canary strategy - -Assume your application has the following `Service` and `Deployment` manifests: - -- service.yaml - -``` yaml -apiVersion: v1 -kind: Service -metadata: - name: helloworld -spec: - selector: - app: helloworld - ports: - - protocol: TCP - port: 9085 -``` - -- deployment.yaml - -``` yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: helloworld - labels: - app: helloworld - pipecd.dev/variant: primary -spec: - replicas: 30 - revisionHistoryLimit: 2 - selector: - matchLabels: - app: helloworld - pipecd.dev/variant: primary - template: - metadata: - labels: - app: helloworld - pipecd.dev/variant: primary - spec: - containers: - - name: helloworld - image: gcr.io/pipecd/helloworld:v0.1.0 - args: - - server - ports: - - containerPort: 9085 -``` - -In PipeCD context, manifests defined in Git are the manifests for primary variant, so please note to ensure that your deployment manifest contains `pipecd.dev/variant: primary` label and selector in the spec. - -To enable canary strategy for this Kubernetes application, you will update your application configuration file to be as below: - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - # Deploy the workloads of CANARY variant. In this case, the number of - # workload replicas of CANARY variant is 50% of the replicas number of PRIMARY variant. - - name: K8S_CANARY_ROLLOUT - with: - replicas: 50% - - name: WAIT_APPROVAL - with: - duration: 10s - # Update the workload of PRIMARY variant to the new version. - - name: K8S_PRIMARY_ROLLOUT - # Destroy all workloads of CANARY variant. - - name: K8S_CANARY_CLEAN -``` - -That is all, now let try to send a PR to update the container image version in the Deployment manifest and merge it to trigger a new deployment. Then, PipeCD will plan the deployment with the specified canary strategy. - -![](/images/example-canary-kubernetes.png) -

-Deployment Details Page -

- -Complete source code for this example is hosted in [pipe-cd/examples](https://github.com/pipe-cd/examples/tree/master/kubernetes/canary) repository. - -## Understanding what happened - -In this example, you configured your application to be deployed with a canary strategy using a native feature of Kubernetes: pod selector. -The traffic will be routed to both canary and primary workloads because they are sharing the same label: `app: helloworld`. -The percentage of traffic for each variant is based on the respective number of pods. - -Here are what happened in details: - -- Before deploying, all traffic gets routed to primary workloads. - - - -- Stage 1: `K8S_CANARY_ROLLOUT` ensures that the workloads of canary variant (new version) should be deployed. -The number of workloads (e.g. pod) for canary variant is configured to be 50% of the replicas number of primary variant. It means 15 canary pods will be started, and they receive 33.3% traffic while primary workloads receive the remaining 66.7% traffic. - - - -- Stage 2: `WAIT_APPROVAL` waits for a manual approval from someone in your team. - -- Stage 3: `K8S_PRIMARY_ROLLOUT` ensures that all resources of primary variant will be updated to the new version. - - - -- Stage 4: `K8S_CANARY_CLEAN` ensures all created resources for canary variant should be destroyed. After that, the primary workloads running in with the new version will receive all traffic. - - diff --git a/docs/content/en/docs-v0.37.x/user-guide/insights.md b/docs/content/en/docs-v0.37.x/user-guide/insights.md deleted file mode 100644 index e71eff7b14..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/insights.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: "Insights" -linkTitle: "Insights" -weight: 5 -description: > - This page describes how to see delivery performance. ---- - -> TBA - -Based on executed deployment data, PipeCD provides the graphs at the `Insights` page that helps you understand the delivery performance of a single application or your whole project. -The graph of the following metrics will be provided: - -### Lead Time for Changes -How long does it take to go from code committed to code successfully running on production. - -> Screenshot - -### Deployment Frequency -How often does your application/project deploy code to production. - -> Screenshot - -### Mean Time To Restore -How long does it generally take to restore service when a service incident occurs. - -> Screenshot - -### Change Failure Rate -How often deployment failures occur in production that requires an immediate remedy (fix, rollback...). - -> Screenshot diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/_index.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/_index.md deleted file mode 100644 index 99468227f5..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "Managing application" -linkTitle: "Managing application" -weight: 2 -description: > - This guide is for developers who have PipeCD installed for them and are using PipeCD to deploy their applications. ---- - -> Note: You must have at least one activated/running Piped to enable using any of the following features of PipeCD. Please refer to [Piped installation docs](../../installation/install-piped/) if you do not have any Piped in your pocket. 
diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/adding-an-application.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/adding-an-application.md deleted file mode 100644 index 822b446c99..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/adding-an-application.md +++ /dev/null @@ -1,140 +0,0 @@
---
title: "Adding an application"
linkTitle: "Adding an application"
weight: 1
description: >
  This page describes how to add a new application.
---

An application is a collection of resources and configurations that are managed together.
It represents the service which you are going to deploy. With PipeCD, all of an application's manifests and its application configuration (`app.pipecd.yaml`) must be committed into a directory of a Git repository. That directory is called the application directory.

Each application can be handled by one and only one `piped`. Currently, PipeCD supports 5 kinds of applications: Kubernetes, Terraform, CloudRun, Lambda, ECS.

Before deploying an application, it must be registered so that PipeCD knows
- where the application configuration is placed
- which `piped` should handle it and which platform the application should be deployed to

Through the web console, you can register a new application in one of the following ways:
- Picking from a list of unused apps suggested by Pipeds while scanning Git repositories (Recommended)
- Manually configuring application information

(If you prefer to use the [`pipectl`](../../command-line-tool/#adding-a-new-application) command-line tool, see its usage for the details.)

## Picking from a list of unused apps suggested by Pipeds

To add a new application this way, you first have to __prepare a configuration file__ containing your application configuration and store it in the Git repository that your Piped is watching.

The application configuration file name must be suffixed by `.pipecd.yaml` because Piped periodically checks for files with this suffix.
- -{{< tabpane >}} -{{< tab lang="yaml" header="KubernetesApp" >}} -# For application's configuration in detail for KubernetesApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/kubernetes/ - -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< tab lang="yaml" header="TerraformApp" >}} -# For application's configuration in detail for TerraformApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/terraform/ - -apiVersion: pipecd.dev/v1beta1 -kind: TerraformApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< tab lang="yaml" header="LambdaApp" >}} -# For application's configuration in detail for LambdaApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/lambda/ - -apiVersion: pipecd.dev/v1beta1 -kind: LambdaApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< tab lang="yaml" header="CloudRunApp" >}} -# For application's configuration in detail for CloudRunApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/cloudrun/ - -apiVersion: pipecd.dev/v1beta1 -kind: CloudRunApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< tab lang="yaml" header="ECSApp" >}} -# For application's configuration in detail for ECSApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/ecs/ - -apiVersion: pipecd.dev/v1beta1 -kind: ECSApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< /tabpane >}} - -To define your application deployment pipeline which contains the guideline to show Piped how to deploy your application, please visit [Defining app configuration](../defining-app-configuration/). - -Go to the PipeCD web console on application list page, click the `+ADD` button at the top left corner of the application list page and then go to the `ADD FROM GIT` tab. - -Select the Piped and Platform Provider that you deploy to, once the Piped that's watching your Git repository catches the new unregistered application configuration file, it will be listed up in this panel. Click `ADD` to complete the registration. - -![](/images/registering-an-application-from-suggestions-new.png) -

-

## Manually configuring application information

With this method, you can postpone preparing your application's configuration until after you have submitted all the necessary information about your app on the web console.

By clicking the `+ADD` button at the application list page, a popup will be revealed from the right side as below:

![](/images/registering-an-application-manually-new.png)

-

After filling in all the required fields, click the `Save` button to complete the application registration.

Here is the list of fields in the register form:

| Field | Description | Required |
|-|-|-|
| Name | The application name | Yes |
| Kind | The application kind. Select one of these values: `Kubernetes`, `Terraform`, `CloudRun`, `Lambda` and `ECS`. | Yes |
| Piped | The piped that handles this application. Select one of the registered `piped`s at the `Settings/Piped` page. | Yes |
| Repository | The Git repository that contains the application configuration and application manifests. Select one of the registered repositories in the `piped` configuration. | Yes |
| Path | The relative path from the root of the Git repository to the directory containing the application configuration and application manifests. `./` means the repository root. | Yes |
| Config Filename | The name of the application configuration file. Default is `app.pipecd.yaml`. | No |
| Platform Provider | Where the application will be deployed to. Select one of the registered cloud/platform providers in the `piped` configuration. This field was previously named `Cloud Provider`. | Yes |

> Note: Labels cannot be set via this form. If you need them, register the application via the application configuration defined in the Git repository instead.

After submitting the form, one step remains: add the application configuration file for that application into its application directory in the Git repository, just as prepared in [the above method](../adding-an-application/#picking-from-a-list-of-unused-apps-suggested-by-pipeds).

Please refer to [Define your app's configuration](../defining-app-configuration/) or [pipecd/examples](../../examples/) for examples of each supported application kind.

## Updating an application
Regardless of which method you used to register the application, the web console can only be used to disable/enable/delete the application, besides the adding operation. All updates to application information must be done via the application configuration file stored in Git as the single source of truth.

```yaml
apiVersion: pipecd.dev/v1beta1
kind: AppKind
spec:
  name: new-name
  labels:
    team: new-team
```

Refer to [configuration reference](../../configuration-reference/) to see the full list of configurable fields.
diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/application-live-state.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/application-live-state.md deleted file mode 100644 index 6cab5cd950..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/application-live-state.md +++ /dev/null @@ -1,18 +0,0 @@
---
title: "Application live state"
linkTitle: "Application live state"
weight: 7
description: >
  The live states of application components as well as their health status.
---

By default, `piped` continuously monitors the running resources/components of all deployed applications to determine their state and then sends those results to the Control Plane. The application state is visualized and rendered on the application details page in realtime, which helps developers see what is running in the cluster as well as its health status. The application state includes:
- a visual graph of application resources/components, where each resource/component node includes its metadata and health status.
- the health status of the whole application.
Application health status is `HEALTHY` if and only if the health statuses of all of its resources/components are `HEALTHY`. - -![](/images/application-details.png) -

-Application Details Page -

By clicking on a resource/component node, a popup will be revealed from the right side to show more details about that resource/component.
diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/cancelling-a-deployment.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/cancelling-a-deployment.md deleted file mode 100644 index 457a305e70..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/cancelling-a-deployment.md +++ /dev/null @@ -1,17 +0,0 @@
---
title: "Cancelling a deployment"
linkTitle: "Cancelling a deployment"
weight: 5
description: >
  This page describes how to cancel a running deployment.
---

A running deployment can be cancelled from the web UI on the deployment details page.

If the application rollback is enabled in the application configuration, the rollback process will be executed after the cancellation. You can also explicitly choose whether or not to roll back after cancelling by clicking the `▼` mark on the right side of the `CANCEL` button.

![](/images/cancel-deployment.png)

-Cancel a Deployment from web UI -

diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/configuration-drift-detection.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/configuration-drift-detection.md deleted file mode 100644 index 9c48ca9305..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/configuration-drift-detection.md +++ /dev/null @@ -1,55 +0,0 @@
---
title: "Configuration drift detection"
linkTitle: "Configuration drift detection"
weight: 8
description: >
  Automatically detecting the configuration drift.
---

Configuration drift is a phenomenon where the running resources of a service become more and more different from the definitions in Git as time goes on, due to manual, ad-hoc changes and updates.
Since PipeCD uses Git as the single source of truth, all application resource and infrastructure changes should be done by making a pull request to Git. Whenever a configuration drift occurs, the developers should be notified so that it can be fixed.

PipeCD includes a `Configuration Drift Detection` feature, which periodically compares the running resources/configurations with the definitions in Git to detect configuration drift, shows the comparison result on the application details page, and sends notifications to the developers.

### Detection Result
There are three statuses for the drift detection result: `SYNCED`, `OUT_OF_SYNC`, `DEPLOYING`.

###### SYNCED

This status means no configuration drift was detected. All resources/configurations are synced with the definitions in Git. On the application details page, this status is shown by a green "Synced" mark.

![](/images/application-synced.png)

-Application is in SYNCED state -

###### OUT_OF_SYNC

This status means a configuration drift was detected. An application is in this status when at least one of the following conditions is satisfied:
- at least one resource is defined in Git but NOT running in the cluster
- at least one resource is NOT defined in Git but running in the cluster
- at least one resource is both defined in Git and running in the cluster but NOT with the same configuration

This status is shown by a red "Out of Sync" mark on the application details page.

![](/images/application-out-of-sync.png)

-Application is in OUT_OF_SYNC state -

Click on the "SHOW DETAILS" button to see more details about why the application is in the `OUT_OF_SYNC` status. In the example below, the replica count of a Deployment did not match: it was `300` in Git but `3` in the cluster.

![](/images/application-out-of-sync-details.png)

-The details show why the application is in OUT_OF_SYNC state -

###### DEPLOYING

This status means the application is being deployed and the configuration drift detection is not running for a while. Whenever a new deployment of the application is started, the detection process is temporarily stopped until that deployment finishes and resumes after that.

### How to enable

This feature is automatically enabled for all applications.

You can change the checking interval as well as [configure the notification](../../managing-piped/configuring-notifications/) for these events in the `piped` configuration.
diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/_index.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/_index.md deleted file mode 100644 index 3f42bbdd32..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/_index.md +++ /dev/null @@ -1,14 +0,0 @@
---
title: "Customizing application's deployment pipeline"
linkTitle: "Customizing deployment"
weight: 3
description: >
  This page describes how to customize an application's deployment pipeline with PipeCD defined stages.
---

In the previous section, we learned how to use the stages provided for each application kind to build up a pipeline that defines how Piped should deploy your application. In this section, aside from those application-kind-specific stages, we will cover some commonly used pipeline stages that can be combined to build a more flexible deployment pipeline for your application; a short example pipeline follows the figure below.

![](/images/deployment-wait-stage.png)

-Example deployment with a WAIT stage -

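To give an idea of how these stages combine, here is a minimal sketch of a Kubernetes application configuration that mixes the canary rollout stages with the `WAIT` and `WAIT_APPROVAL` stages described in the following pages. All values (the replicas percentage and the wait duration) are illustrative only.

```yaml
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  pipeline:
    stages:
      # Roll out the CANARY variant with 20% of the PRIMARY variant's replicas.
      - name: K8S_CANARY_ROLLOUT
        with:
          replicas: 20%
      # Let the canary serve traffic for a while before asking for approval.
      - name: WAIT
        with:
          duration: 5m
      # Require a manual approval before promoting the new version.
      - name: WAIT_APPROVAL
      # Update the PRIMARY variant to the new version, then clean up the canary.
      - name: K8S_PRIMARY_ROLLOUT
      - name: K8S_CANARY_CLEAN
```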
diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/adding-a-manual-approval.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/adding-a-manual-approval.md deleted file mode 100644 index 3ee946b5fd..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/adding-a-manual-approval.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: "Adding a manual approval stage" -linkTitle: "Manual approval stage" -weight: 2 -description: > - This page describes how to add a manual approval stage. ---- - -While deploying an application to production environments, some teams require manual approvals before continuing. -The manual approval stage enables you to control when the deployment is allowed to continue by requiring a specific person or team to approve. -This stage is named by `WAIT_APPROVAL` and you can add it to your pipeline before some stages should be approved before they can be executed. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - - name: WAIT_APPROVAL - with: - timeout: 6h - approvers: - - user-abc - - name: K8S_PRIMARY_ROLLOUT -``` - -As above example, the deployment requires an approval from `user-abc` before `K8S_PRIMARY_ROLLOUT` stage can be executed. - -The value of user ID in the `approvers` list depends on your [SSO configuration](../../../managing-controlplane/auth/), it must be GitHub's user ID if your SSO was configured to use GitHub provider, it must be Gmail account if your SSO was configured to use Google provider. - -In case the `approvers` field was not configured, anyone in the project who has `Editor` or `Admin` role can approve the deployment pipeline. - -Also, it will end with failure when the time specified in `timeout` has elapsed. Default is `6h`. - -![](/images/deployment-wait-approval-stage.png) -

-Deployment with a WAIT_APPROVAL stage -

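Because `approvers` is a list, several approval candidates can be specified at once. The following is a minimal sketch assuming GitHub SSO, where `user-abc` and `user-xyz` are placeholder GitHub user IDs and the timeout is shortened to 30 minutes.

```yaml
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  pipeline:
    stages:
      - name: K8S_CANARY_ROLLOUT
      - name: WAIT_APPROVAL
        with:
          # Fail the stage if no approval arrives within 30 minutes.
          timeout: 30m
          # Approval candidates; these user IDs are placeholders.
          approvers:
            - user-abc
            - user-xyz
      - name: K8S_PRIMARY_ROLLOUT
```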
diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/adding-a-wait-stage.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/adding-a-wait-stage.md deleted file mode 100644 index f2d381d8f8..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/adding-a-wait-stage.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: "Adding a wait stage" -linkTitle: "Wait stage" -weight: 1 -description: > - This page describes how to add a WAIT stage. ---- - -In addition to waiting for approvals from someones, the deployment pipeline can be configured to wait an amount of time before continuing. -This can be done by adding the `WAIT` stage into the pipeline. This stage has one configurable field is `duration` to configure how long should be waited. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - - name: WAIT - with: - duration: 5m - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_CANARY_CLEAN -``` - -![](/images/deployment-wait-stage.png) -

-Deployment with a WAIT stage -

diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/automated-deployment-analysis.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/automated-deployment-analysis.md deleted file mode 100644 index 2d16a427c4..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/customizing-deployment/automated-deployment-analysis.md +++ /dev/null @@ -1,297 +0,0 @@ ---- -title: "Adding an automated deployment analysis stage" -linkTitle: "Automated deployment analysis stage" -weight: 3 -description: > - This page describes how to configure Automated Deployment Analysis feature. ---- - ->NOTE: This feature is currently alpha status. - -Automated Deployment Analysis (ADA) evaluates the impact of the deployment you are in the middle of by analyzing the metrics data, log entries, and the responses of the configured HTTP requests. -The analysis of the newly deployed application is often carried out in a manual, ad-hoc or statistically incorrect manner. -ADA automates that and helps to build a robust deployment process. -ADA is available as a stage in the pipeline specified in the application configuration file. - -ADA does the analysis by periodically performing queries against the [Analysis Provider](../../../../concepts/#analysis-provider) and evaluating the results to know the impact of the deployment. Then based on these evaluating results, the deployment can be rolled back immediately to minimize any negative impacts. - -The canonical use case for this stage is to determine if your canary deployment should proceed. - -![](/images/deployment-analysis-stage.png) -

-Automatic rollback based on the analysis result -

- -## Prerequisites -Before enabling ADA inside the pipeline, all required Analysis Providers must be configured in the Piped Configuration according to [this guide](../../../managing-piped/adding-an-analysis-provider/). - -## Analysis by metrics -### Strategies -You can choose one of the four strategies to fit your use case. - -- `THRESHOLD`: A simple method to compare against a statically defined threshold (same as the typical analysis method up to `v0.18.0`). -- `PREVIOUS`: A method to compare metrics with the last successful deployment. -- `CANARY_BASELINE`: A method to compare the metrics between the Canary and Baseline variants. -- `CANARY_PRIMARY`(not recommended): A method to compare the metrics between the Canary and Primary variants. - -`THRESHOLD` is the simplest strategy, so it's for you if you attempt to evaluate this feature. - -`THRESHOLD` only checks if the query result falls within the statically specified range, whereas others evaluate by checking the deviation of two time-series data. -Therefore, those configuration fields are slightly different from each other. The next section covers how to configure the ADA stage for each strategy. - -### Configuration -Here is an example for the `THRESHOLD` strategy. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: THRESHOLD - provider: my-prometheus - interval: 5m - expected: - max: 0.01 - query: | - sum (rate(http_requests_total{status=~"5.*"}[5m])) - / - sum (rate(http_requests_total[5m])) -``` - -In the `provider` field, put the name of the provider in Piped configuration prepared in the [Prerequisites](#prerequisites) section. - -The `ANALYSIS` stage will continue to run for the period specified in the `duration` field. -In the meantime, Piped sends the given `query` to the Analysis Provider at each specified `interval`. - -For each query, it checks if the result is within the expected range. If it's not expected, this `ANALYSIS` stage will fail (typically the rollback stage will be started). -You can change the acceptable number of failures by setting the `failureLimit` field. - -The other strategies are basically the same, but there are slight differences. Let's take a look at them. - -##### PREVIOUS strategy -In the `PREVIOUS` strategy, Piped queries the analysis provider with the time range when the deployment was previously successful, and compares that metrics with the current metrics. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: PREVIOUS - provider: my-prometheus - deviation: HIGH - interval: 5m - query: | - sum (rate(http_requests_total{status=~"5.*"}[5m])) - / - sum (rate(http_requests_total[5m])) -``` - -In the `THRESHOLD` strategy, we used `expected` to evaluate the deployment, but here we use `deviation` instead. -The stage fails on deviation in the specified direction. In the above example, it fails if the current metrics is higher than the previous. - -##### CANARY strategy - -**With baseline**: - -In the `CANARY_BASELINE` strategy, Piped checks if there is a significant difference between the metrics of the two running variants, Canary and Baseline. 
- -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: CANARY_BASELINE - provider: my-prometheus - deviation: HIGH - interval: 5m - query: | - sum (rate(http_requests_total{job="foo-{{ .Variant.Name }}", status=~"5.*"}[5m])) - / - sum (rate(http_requests_total{job="foo-{{ .Variant.Name }}"}[5m])) -``` - -Like `PREVIOUS`, you specify the conditions for failure with `deviation`. - -It generates different queries for Canary and Baseline to compare the metrics. You can use the Variant args to template the queries. -Analysis Template uses the [Go templating engine](https://golang.org/pkg/text/template/) which only replaces values. This allows variant-specific data to be embedded in the query. - -The available built-in args currently are: - -| Property | Type | Description | -|-|-|-| -| Variant.Name | string | "canary", "baseline", or "primary" will be populated | - -Also, you can define the custom args using `baselineArgs` and `canaryArgs`, and refer them like `{{ .VariantCustom.Args.job }}`. - -```yaml - metrics: - - strategy: CANARY_BASELINE - provider: my-prometheus - deviation: HIGH - baselineArgs: - job: bar - canaryArgs: - job: baz - interval: 5m - query: cpu_usage{job="{{ .VariantCustomArgs.job }}", status=~"5.*"} -``` - -**With primary (not recommended)**: - -If for some reason you cannot provide the Baseline variant, you can also compare Canary and Primary. -However, we recommend that you compare it with Baseline that is a variant launched at the same time as Canary as much as possible. - -##### Comparison algorithm -The metric comparison algorithm in PipeCD uses a nonparametric statistical test called [Mann-Whitney U test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test) to check for a significant difference between two metrics collection (like Canary and Baseline, or the previous deployment and the current metrics). - -### Example pipelines - -**Analyze the canary variant using the `THRESHOLD` strategy:** - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - with: - replicas: 20% - - name: ANALYSIS - with: - duration: 30m - metrics: - - provider: my-prometheus - interval: 10m - expected: - max: 0.1 - query: rate(cpu_usage_total{app="foo"}[10m]) - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_CANARY_CLEAN -``` - -**Analyze the primary variant using the `PREVIOUS` strategy:** - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_PRIMARY_ROLLOUT - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: PREVIOUS - provider: my-prometheus - interval: 5m - deviation: HIGH - query: rate(cpu_usage_total{app="foo"}[5m]) -``` - -**Analyze the canary variant using the `CANARY_BASELINE` strategy:** - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - with: - replicas: 20% - - name: K8S_BASELINE_ROLLOUT - with: - replicas: 20% - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: CANARY_BASELINE - provider: my-prometheus - interval: 10m - deviation: HIGH - query: rate(cpu_usage_total{app="foo", variant="{{ .Variant.Name }}"}[10m]) - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_CANARY_CLEAN - - name: K8S_BASELINE_CLEAN -``` - -The full list of configurable `ANALYSIS` stage fields are [here](../../../configuration-reference/#analysisstageoptions). 
- -See more the [example](https://github.com/pipe-cd/examples/blob/master/kubernetes/analysis-by-metrics/app.pipecd.yaml). - -## Analysis by logs - ->TBA - -## Analysis by http - ->TBA - -### [Optional] Analysis Template -Analysis Templating is a feature that allows you to define some shared analysis configurations to be used by multiple applications. These templates must be placed at the `.pipe` directory at the root of the Git repository. Any application in that Git repository can use to the defined template by specifying the name of the template in the application configuration file. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: AnalysisTemplate -spec: - metrics: - http_error_rate: - interval: 30m - provider: my-prometheus - expected: - max: 0 - query: | - sum without(status) (rate(http_requests_total{status=~"5.*", job="{{ .App.Name }}"}[1m])) - / - sum without(status) (rate(http_requests_total{job="{{ .App.Name }}"}[1m])) -``` - -Once the AnalysisTemplate is defined, you can reference from the application configuration using the `template` field. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: ANALYSIS - with: - duration: 30m - metrics: - - template: - name: http_error_rate -``` - -Analysis Template uses the [Go templating engine](https://golang.org/pkg/text/template/) which only replaces values. This allows deployment-specific data to be embedded in the analysis template. - -The available built-in args are: - -| Property | Type | Description | -|-|-|-| -| App.Name | string | Application Name. | -| K8s.Namespace | string | The Kubernetes namespace where manifests will be applied. | - -Also, custom args is supported. Custom args placeholders can be defined as `{{ .AppCustomArgs. }}`. - -Of course, it can be used in conjunction with [Variant args](#canary-strategy). - -See [here](https://github.com/pipe-cd/examples/blob/master/.pipe/analysis-template.yaml) for more examples. -And the full list of configurable `AnalysisTemplate` fields are [here](/docs/user-guide/configuration-reference/#analysis-template-configuration). diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/_index.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/_index.md deleted file mode 100644 index 6bcca6b06f..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "Defining application's configuration" -linkTitle: "Defining app configuration" -weight: 2 -description: > - This page describes how to configure your application's deployment for each application kind. ---- - -In the previous section, we knew that each PipeCD application requires a configuration file (we call it the application configuration file) that contains the application's information (such as name, label, etc) and also defines how should Piped deploy that application. In this section, we will show you how to define a deployment pipeline like that for each kind of PipeCD supporting application. 
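-
-Before diving into each kind, it may help to see the overall shape of such a file. The following is a rough sketch only; the name, labels, and pipeline content are placeholders, and the real fields for each kind are described on the pages below:
-
-``` yaml
-apiVersion: pipecd.dev/v1beta1
-# One of the supported kinds: KubernetesApp, TerraformApp, CloudRunApp, LambdaApp, ECSApp...
-kind: KubernetesApp
-spec:
-  # Application information such as name and labels.
-  name: my-service
-  labels:
-    team: my-team
-  # How Piped should deploy the application.
-  pipeline:
-    stages:
-      - name: K8S_PRIMARY_ROLLOUT
-```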
diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/cloudrun.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/cloudrun.md deleted file mode 100644 index 7333dedf93..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/cloudrun.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: "Configuring Cloud Run application" -linkTitle: "Cloud Run" -weight: 3 -description: > - Specific guide to configuring deployment for Cloud Run application. ---- - -Deploying a Cloud Run application requires a `service.yaml` file placing inside the application directory. That file contains the service specification used by Cloud Run as following: - -``` yaml -apiVersion: serving.knative.dev/v1 -kind: Service -metadata: - name: SERVICE_NAME -spec: - template: - metadata: - annotations: - autoscaling.knative.dev/maxScale: '5' - spec: - containerConcurrency: 80 - containers: - - args: - - server - image: gcr.io/pipecd/helloworld:v0.5 - ports: - - containerPort: 9085 - resources: - limits: - cpu: 1000m - memory: 128Mi -``` - -## Quick sync - -By default, when the [pipeline](../../../configuration-reference/#cloud-run-application) was not specified, PipeCD triggers a quick sync deployment for the merged pull request. -Quick sync for a Cloud Run deployment will roll out the new version and switch all traffic to it. - -## Sync with the specified pipeline - -The [pipeline](../../../configuration-reference/#cloud-run-application) field in the application configuration is used to customize the way to do the deployment. -You can add a manual approval before routing traffic to the new version or add an analysis stage the do some smoke tests against the new version before allowing them to receive the real traffic. - -These are the provided stages for Cloud Run application you can use to build your pipeline: - -- `CLOUDRUN_PROMOTE` - - promote the new version to receive an amount of traffic - -and other common stages: -- `WAIT` -- `WAIT_APPROVAL` -- `ANALYSIS` - -See the description of each stage at [Customize application deployment](../../customizing-deployment/). - -Here is an example that rolls out the new version gradually: - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: CloudRunApp -spec: - pipeline: - stages: - # Promote new version to receive 10% of traffic. - - name: CLOUDRUN_PROMOTE - with: - percent: 10 - - name: WAIT - with: - duration: 10m - # Promote new version to receive 50% of traffic. - - name: CLOUDRUN_PROMOTE - with: - percent: 50 - - name: WAIT - with: - duration: 10m - # Promote new version to receive all traffic. - - name: CLOUDRUN_PROMOTE - with: - percent: 100 -``` - -## Reference - -See [Configuration Reference](../../../configuration-reference/#cloud-run-application) for the full configuration. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/ecs.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/ecs.md deleted file mode 100644 index 18eda91166..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/ecs.md +++ /dev/null @@ -1,94 +0,0 @@ ---- -title: "Configuring ECS application" -linkTitle: "ECS" -weight: 5 -description: > - Specific guide to configuring deployment for Amazon ECS application. 
---- - -Deploying an Amazon ECS application requires `TaskDefinition` and `Service` configuration files placing inside the application directory. Those files contain all configuration for [ECS TaskDefinition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) object and [ECS Service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) object, and will be used by Piped agent while deploy your application/service to ECS cluster. - -If you're not familiar with ECS, you can get examples for those files from [here](../../../../examples/#ecs-applications). - -## Quick sync - -By default, when the [pipeline](../../../configuration-reference/#ecs-application) was not specified, PipeCD triggers a quick sync deployment for the merged pull request. -Quick sync for an ECS deployment will roll out the new version and switch all traffic to it immediately. - -## Sync with the specified pipeline - -The [pipeline](../../../configuration-reference/#ecs-application) field in the application configuration is used to customize the way to do the deployment. -You can add a manual approval before routing traffic to the new version or add an analysis stage the do some smoke tests against the new version before allowing them to receive the real traffic. - -These are the provided stages for ECS application you can use to build your pipeline: - -- `ECS_CANARY_ROLLOUT` - - deploy workloads of the new version as CANARY variant, but it is still receiving no traffic. -- `ECS_PRIMARY_ROLLOUT` - - deploy workloads of the new version as PRIMARY variant, but it is still receiving no traffic. -- `ECS_TRAFFIC_ROUTING` - - routing traffic to the specified variants. -- `ECS_CANARY_CLEAN` - - destroy all workloads of CANARY variant. - -and other common stages: -- `WAIT` -- `WAIT_APPROVAL` -- `ANALYSIS` - -See the description of each stage at [Customize application deployment](../../customizing-deployment/). - -Here is an example that rolls out the new version gradually: - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: ECSApp -spec: - input: - # Path to Service configuration file in Yaml/JSON format. - # Default is `service.json` - serviceDefinitionFile: servicedef.yaml - # Path to TaskDefinition configuration file in Yaml/JSON format. - # Default is `taskdef.json` - taskDefinitionFile: taskdef.yaml - targetGroups: - primary: - targetGroupArn: arn:aws:elasticloadbalancing:ap-northeast-1:XXXX:targetgroup/ecs-canary-blue/YYYY - containerName: web - containerPort: 80 - canary: - targetGroupArn: arn:aws:elasticloadbalancing:ap-northeast-1:XXXX:targetgroup/ecs-canary-green/YYYY - containerName: web - containerPort: 80 - pipeline: - stages: - # Deploy the workloads of CANARY variant, the number of workload - # for CANARY variant is equal to 30% of PRIMARY's workload. - # But this is still receiving no traffic. - - name: ECS_CANARY_ROLLOUT - with: - scale: 30 - # Change the traffic routing state where - # the CANARY workloads will receive the specified percentage of traffic. - # This is known as multi-phase canary strategy. - - name: ECS_TRAFFIC_ROUTING - with: - canary: 20 - # Optional: We can also add an ANALYSIS stage to verify the new version. - # If this stage finds any not good metrics of the new version, - # a rollback process to the previous version will be executed. - - name: ANALYSIS - # Update the workload of PRIMARY variant to the new version. 
- - name: ECS_PRIMARY_ROLLOUT - # Change the traffic routing state where - # the PRIMARY workloads will receive 100% of the traffic. - - name: ECS_TRAFFIC_ROUTING - with: - primary: 100 - # Destroy all workloads of CANARY variant. - - name: ECS_CANARY_CLEAN -``` - -## Reference - -See [Configuration Reference](../../../configuration-reference/#ecs-application) for the full configuration. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/kubernetes.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/kubernetes.md deleted file mode 100644 index 0b744bc102..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/kubernetes.md +++ /dev/null @@ -1,116 +0,0 @@ ---- -title: "Configuring Kubernetes application" -linkTitle: "Kubernetes" -weight: 1 -description: > - Specific guide to configuring deployment for Kubernetes application. ---- - -Based on the application configuration and the pull request changes, PipeCD plans how to execute the deployment: doing quick sync or doing progressive sync with the specified pipeline. - -## Quick sync - -Quick sync is a fast way to sync application to the state specified in the target Git commit without any progressive strategy. It just applies all the defined manifiests to sync the application. -The quick sync will be planned in one of the following cases: -- no pipeline was specified in the application configuration file -- [pipeline](../../../configuration-reference/#pipeline) was specified but the PR did not make any changes on workload (e.g. Deployment's pod template) or config (e.g. ConfigMap, Secret) - -For example, the application configuration as below is missing the pipeline field. This means any pull request touches the application will trigger a quick sync deployment. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - helmChart: - repository: pipecd - name: helloworld - version: v0.3.0 -``` - -In another case, even when the pipeline was specified, a PR that just changes the Deployment's replicas number for scaling will also trigger a quick sync deployment. - -## Sync with the specified pipeline - -The `pipeline` field in the application configuration is used to customize the way to do deployment by specifying and configuring the execution stages. You may want to configure those stages to enable a progressive deployment with a strategy like canary, blue-green, a manual approval, an analysis stage. - -To enable customization, PipeCD defines three variants for each Kubernetes application: primary (aka stable), baseline and canary. -- `primary` runs the current version of code and configuration. -- `baseline` runs the same version of code and configuration as the primary variant. (Creating a brand-new baseline workload ensures that the metrics produced are free of any effects caused by long-running processes.) -- `canary` runs the proposed change of code or configuration. - -Depending on the configured pipeline, any variants can exist and receive the traffic during the deployment process but once the deployment is completed, only the `primary` variant should be remained. 
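-
-To make the variant model concrete, a progressive canary pipeline assembled from the stages listed below could be sketched roughly as follows (the replica percentage and the wait duration are illustrative):
-
-``` yaml
-apiVersion: pipecd.dev/v1beta1
-kind: KubernetesApp
-spec:
-  pipeline:
-    stages:
-      # Roll out the canary variant with a fraction of the desired replicas.
-      - name: K8S_CANARY_ROLLOUT
-        with:
-          replicas: 20%
-      # Give the canary some time to serve traffic and be observed.
-      - name: WAIT
-        with:
-          duration: 5m
-      # Update the primary variant to the new version.
-      - name: K8S_PRIMARY_ROLLOUT
-      # Remove the canary resources.
-      - name: K8S_CANARY_CLEAN
-```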
- -These are the provided stages for Kubernetes application you can use to build your pipeline: - -- `K8S_PRIMARY_ROLLOUT` - - update the primary resources to the state defined in the target commit -- `K8S_CANARY_ROLLOUT` - - generate canary resources based on the definition of the primary resource in the target commit and apply them -- `K8S_CANARY_CLEAN` - - remove all canary resources -- `K8S_BASELINE_ROLLOUT` - - generate baseline resources based on the definition of the primary resource in the target commit and apply them -- `K8S_BASELINE_CLEAN` - - remove all baseline resources -- `K8S_TRAFFIC_ROUTING` - - split traffic between variants - -and other common stages: -- `WAIT` -- `WAIT_APPROVAL` -- `ANALYSIS` - -See the description of each stage at [Customize application deployment](../../customizing-deployment/). - -## Manifest Templating - -In addition to plain-YAML, PipeCD also supports Helm and Kustomize for templating application manifests. - -A helm chart can be loaded from: -- the same git repository with the application directory, we call as a `local chart` - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - helmChart: - path: ../../local/helm-charts/helloworld -``` - -- a different git repository, we call as a `remote git chart` - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - helmChart: - gitRemote: git@github.com:pipe-cd/manifests.git - ref: v0.5.0 - path: manifests/helloworld -``` - -- a Helm chart repository, we call as a `remote chart` - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - helmChart: - repository: pipecd - name: helloworld - version: v0.5.0 -``` - -A kustomize base can be loaded from: -- the same git repository with the application directory, we call as a `local base` -- a different git repository, we call as a `remote base` - -See [Examples](../../../examples/#kubernetes-applications) for more specific. - -## Reference - -See [Configuration Reference](../../../configuration-reference/#kubernetes-application) for the full configuration. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/lambda.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/lambda.md deleted file mode 100644 index 3b1a180505..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/lambda.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: "Configuring Lambda application" -linkTitle: "Lambda" -weight: 4 -description: > - Specific guide to configuring deployment for Lambda application. ---- - -Deploying a Lambda application requires a `function.yaml` file placing inside the application directory. That file contains values to be used to deploy Lambda function on your AWS cluster. -Currently, Piped supports deploying all types of Lambda deployment packages: -- container images (called [container image as Lambda function](https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/)) -- `.zip` file archives (which stored in AWS S3) - -Besides, Piped also supports deploying your Lambda function __directly from the function source code__ which is stored in a remote git repository. 
- -#### Deploy container image as Lambda function - -A sample `function.yaml` file for container image as Lambda function used deployment as follows: - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: LambdaFunction -spec: - name: SimpleFunction - image: ecr.ap-northeast-1.amazonaws.com/lambda-test:v0.0.1 - role: arn:aws:iam::76xxxxxxx:role/lambda-role - # The amount of memory available to the Lambda application - # at runtime. The value can be any multiple of 1 MB. - memory: 512 - # Timeout of the Lambda application, the value must - # in between 1 to 900 seconds. - timeout: 30 - tags: - app: simple - environments: - FOO: bar -``` - -Except the `tags` and the `environments` field, all others are required fields for the deployment to run. - -The `role` value represents the service role (for your Lambda function to run), not for Piped agent to deploy your Lambda application. To be able to pull container images from AWS ECR, besides policies to run as usual, you need to add `Lambda.ElasticContainerRegistry` __read__ permission to your Lambda function service role. - -The `environments` field represents environment variables that can be accessed by your Lambda application at runtime. __In case of no value set for this field, all environment variables for the deploying Lambda application will be revoked__, so make sure you set all currently required environment variables of your running Lambda application on `function.yaml` if you migrate your app to PipeCD deployment. - -#### Deploy .zip file archives as Lambda function - -It's recommended to use container image as Lambda function due to its simplicity, but as mentioned above, below is a sample `function.yaml` file for Lambda which uses zip packing source code stored in AWS S3. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: LambdaFunction -spec: - name: SimpleZipPackingS3Function - role: arn:aws:iam::76xxxxxxx:role/lambda-role - # --- 3 next lines allow Piped to determine your Lambda function code stored in AWS S3. - s3Bucket: pipecd-sample-lambda - s3Key: pipecd-sample-src - s3ObjectVersion: 1pTK9_v0Kd7I8Sk4n6abzCL - # --- - handler: app.lambdaHandler - runtime: nodejs14.x - memory: 512 - timeout: 30 - environments: - FOO: bar - tags: - app: simple-zip-s3 -``` - -Value for the `runtime` field should be listed in [AWS Lambda runtimes official docs](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html). All other fields setting are remained as in the case of using [container image as Lambda function](#deploy-container-image-as-lambda-function) pattern. - -#### Deploy source code directly as Lambda function - -In case you don’t have a separated CI pipeline that provides artifacts (such as container image, built zip files) as its outputs and want to set up a simple pipeline to deploy the Lambda function directly from your source code, this deployment package is for you. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: LambdaFunction -spec: - name: SimpleCanaryZipFunction - role: arn:aws:iam::76xxxxxxx:role/lambda-role - # source configuration use to determine the source code of your Lambda function. - source: - # git remote address where the source code is placing. - git: git@github.com:username/lambda-function-code.git - # the commit SHA or tag for remote git. Use branch name means you will always use - # the latest code of that branch as Lambda function code which is NOT recommended. - ref: dede7cdea5bbd3fdbcc4674bfcd2b2f9e0579603 - # relative path from the repository root directory to the function code directory. 
- path: hello-world - handler: app.lambdaHandler - runtime: nodejs14.x - memory: 128 - timeout: 5 - tags: - app: canary-zip -``` - -All other fields setting are remained as in the case of using [.zip archives as Lambda function](#deploy-zip-file-archives-as-lambda-function) pattern. - -## Quick sync - -By default, when the [pipeline](../../../configuration-reference/#lambda-application) was not specified, PipeCD triggers a quick sync deployment for the merged pull request. -Quick sync for a Lambda deployment will roll out the new version and switch all traffic to it. - -## Sync with the specified pipeline - -The [pipeline](../../../configuration-reference/#lambda-application) field in the application configuration is used to customize the way to do the deployment. -You can add a manual approval before routing traffic to the new version or add an analysis stage the do some smoke tests against the new version before allowing them to receive the real traffic. - -These are the provided stages for Lambda application you can use to build your pipeline: - -- `LAMBDA_CANARY_ROLLOUT` - - deploy workloads of the new version, but it is still receiving no traffic. -- `LAMBDA_PROMOTE` - - promote the new version to receive an amount of traffic. - -and other common stages: -- `WAIT` -- `WAIT_APPROVAL` -- `ANALYSIS` - -See the description of each stage at [Customize application deployment](../../customizing-deployment/). - -Here is an example that rolls out the new version gradually: - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: LambdaApp -spec: - pipeline: - stages: - # Deploy workloads of the new version. - # But this is still receiving no traffic. - - name: LAMBDA_CANARY_ROLLOUT - # Promote new version to receive 10% of traffic. - - name: LAMBDA_PROMOTE - with: - percent: 10 - - name: WAIT - with: - duration: 10m - # Promote new version to receive 50% of traffic. - - name: LAMBDA_PROMOTE - with: - percent: 50 - - name: WAIT - with: - duration: 10m - # Promote new version to receive all traffic. - - name: LAMBDA_PROMOTE - with: - percent: 100 -``` - -## Reference - -See [Configuration Reference](../../../configuration-reference/#lambda-application) for the full configuration. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/terraform.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/terraform.md deleted file mode 100644 index 351992e133..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/defining-app-configuration/terraform.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "Configuring Terraform application" -linkTitle: "Terraform" -weight: 2 -description: > - Specific guide to configuring deployment for Terraform application. ---- - -## Quick Sync - -By default, when the [pipeline](../../../configuration-reference/#terraform-application) was not specified, PipeCD triggers a quick sync deployment for the merged pull request. -Quick sync for a Terraform deployment does `terraform plan` and if there are any changes detected it applies those changes automatically. - -## Sync with the specified pipeline - -The [pipeline](../../../configuration-reference/#terraform-application) field in the application configuration is used to customize the way to do the deployment. -You can add a manual approval before doing `terraform apply` or add an analysis stage after applying the changes to determine the impact of those changes. 
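-
-For instance, a pipeline that plans the changes, waits for a manual approval, and then applies them could be sketched roughly as follows (the `timeout` value is illustrative; the stage names are the ones listed next):
-
-``` yaml
-apiVersion: pipecd.dev/v1beta1
-kind: TerraformApp
-spec:
-  pipeline:
-    stages:
-      # Show the planned changes.
-      - name: TERRAFORM_PLAN
-      # Require a manual approval before applying.
-      - name: WAIT_APPROVAL
-        with:
-          timeout: 6h
-      # Apply the planned changes.
-      - name: TERRAFORM_APPLY
-```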
- -These are the provided stages for Terraform application you can use to build your pipeline: - -- `TERRAFORM_PLAN` - - do the terraform plan and show the changes will be applied -- `TERRAFORM_APPLY` - - apply all the infrastructure changes - -and other common stages: -- `WAIT` -- `WAIT_APPROVAL` -- `ANALYSIS` - -See the description of each stage at [Customize application deployment](../../customizing-deployment/). - -## Module location - -Terraform module can be loaded from: - -- the same git repository with the application directory, we call as a `local module` -- a different git repository, we call as a `remote module` - -## Reference - -See [Configuration Reference](../../../configuration-reference/#terraform-application) for the full configuration. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/deployment-chain.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/deployment-chain.md deleted file mode 100644 index ac4bee471d..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/deployment-chain.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: "Deployment chain" -linkTitle: "Deployment chain" -weight: 10 -description: > - Specific guide for configuring chain of deployments. ---- - -For users who want to use PipeCD to build a complex deployment flow, which contains multiple applications across multiple application kinds and roll out them to multiple clusters gradually or promoting across environments, this guideline will show you how to use PipeCD to archive that requirement. - -## Configuration - -The idea of this feature is to trigger the whole deployment chain when a specified deployment is triggered. To enable trigger the deployment chain, we need to add a configuration section named `postSync` which contains all configurations that be used when the deployment is triggered. For this `Deployment Chain` feature, configuration for it is under `postSync.chain` section. - -A canonical configuration looks as below: - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: TerraformApp -spec: - input: - ... - pipeline: - ... - postSync: - chain: - applications: - # Find all applications with name `application-2` and trigger them. - - name: application-2 - # Fill all applications with name `application-3` of kind `KUBERNETES` - # and trigger them. - - name: application-3 - kind: KUBERNETES -``` - -As a result, the above configuration will be used to create a deployment chain like the below figure - -![](/images/deployment-chain-figure.png) - -In the context of the deployment chain in PipeCD, a chain is made up of many `blocks`, and each block contains multiple `nodes` which is the reference to a deployment. The first block in the chain always contains only one node, which is the deployment that triggers the whole chain. Other blocks of the chain are built using filters which are configurable via `postSync.chain.applications` section. As for the above example, the second block `Block 2` contains 2 different nodes, which are 2 different PipeCD applications with the same name `application-2`. - -__Tip__: - -1. If you followed all the configuration references and built your deployment chain configuration, but some deployments in your defined chain are not triggered as you want, please re-check those deployments [`trigger configuration`](../triggering-a-deployment/#trigger-configuration). 
The `onChain` trigger is __disabled by default__; you need to enable that configuration to enable your deployment to be triggered as a node in the deployment chain. -2. Values configured under `postSync.chain.applications` - we call it __Application matcher__'s values are merged using `AND` operator. Currently, only `name` and `kind` are supported, but `labels` will also be supported soon. - -See [Examples](../../examples/#deployment-chain) for more specific. - -## Deployment chain characteristic - -Something you need to care about while creating your deployment chain with PipeCD - -1. The deployment chain blocks are run in sequence, one by one. But all nodes in the same block are run in parallel, you should ensure that all nodes(deployments) in the same block do not depend on each other. -2. Once a node in a block has finished with `FAILURE` or `CANCELLED` status, the containing block will be set to fail, and all other nodes which have not yet finished will be set to `CANCELLED` status (those nodes will be rolled back if they're in the middle of its deploying process). Consequently, all blocks after that failed block will be set to `CANCELLED` status and be stopped. - -## Console view - -![](/images/deployment-chain-console.png) - -The UI for this deployment chain feature currently is under deployment, we can only __view deployments in chain one by one__ on the deployments page and deployment detail page as usual. - -## Reference - -See [Configuration Reference](../../configuration-reference/#postsync) for the full configuration. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/rolling-back-a-deployment.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/rolling-back-a-deployment.md deleted file mode 100644 index 4997f41bb5..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/rolling-back-a-deployment.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: "Rolling back a deployment" -linkTitle: "Rolling back a deployment" -weight: 6 -description: > - This page describes when a deployment is rolled back automatically and how to manually rollback a deployment. ---- - -Rolling back a deployment can be automated by enabling the `autoRollback` field in the application configuration of the application. When `autoRollback` is enabled, the deployment will be rolled back if any of the following conditions are met: -- a stage of the deployment pipeline was failed -- an analysis stage determined that the deployment had a negative impact -- any error occurs while deploying - -When the rolling back process is triggered, a new `ROLLBACK` stage will be added to the deployment pipeline and it reverts all the applied changes. - -![](/images/rolled-back-deployment.png) -

-A deployment was rolled back -

- -Alternatively, manually rolling back a running deployment can be done from web UI by clicking on `Cancel with rollback` button. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/secret-management.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/secret-management.md deleted file mode 100755 index c1ddc15912..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/secret-management.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -title: "Secret management" -linkTitle: "Secret management" -weight: 9 -description: > - Storing secrets safely in the Git repository. ---- - -When doing GitOps, user wants to use Git as a single source of truth. But storing credentials like Kubernetes Secret or Terraform's credentials directly in Git is not safe. -This feature helps you keep that sensitive information safely in Git, right next to your application manifests. - -Basically, the flow will look like this: -- user encrypts their secret data via the PipeCD's Web UI and stores the encrypted data in Git -- `Piped` decrypts them before doing deployment tasks - -## Prerequisites - -Before using this feature, `Piped` needs to be started with a key pair for secret encryption. - -You can use the following command to generate a key pair: - -``` console -openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private-key -openssl pkey -in private-key -pubout -out public-key -``` - -Then specify them while [installing](../../../installation/install-piped/installing-on-kubernetes) the `Piped` with these options: - -``` console ---set-file secret.data.secret-public-key=PATH_TO_PUBLIC_KEY_FILE \ ---set-file secret.data.secret-private-key=PATH_TO_PRIVATE_KEY_FILE -``` - -Finally, enable this feature in Piped configuration file with `secretManagement` field as below: - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - pipedID: your-piped-id - ... - secretManagement: - type: KEY_PAIR - config: - privateKeyFile: /etc/piped-secret/secret-private-key - publicKeyFile: /etc/piped-secret/secret-public-key -``` - -## Encrypting secret data - -In order to encrypt the secret data, go to the application list page and click on the options icon at the right side of the application row, choose "Encrypt Secret" option. -After that, input your secret data and click on "ENCRYPT" button. -The encrypted data should be shown for you. Copy it to store in Git. - -![](/images/sealed-secret-application-list.png) -

-Application list page -

- -
- -![](/images/sealed-secret-encrypting-form.png) -

-The form for encrypting secret data -

- -## Storing encrypted secrets in Git - -To make encrypted secrets available to an application, they must be specified in the application configuration file of that application. - -- `encryptedSecrets` contains a list of the encrypted secrets. -- `decryptionTargets` contains a list of files that are using one of the encrypted secrets and should be decrypted by `Piped`. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -# One of Piped defined app kind such as: KubernetesApp -kind: {APPLICATION_KIND} -spec: - encryption: - encryptedSecrets: - password: encrypted-data - decryptionTargets: - - secret.yaml -``` - -## Accessing encrypted secrets - -Any file in the application directory can use `.encryptedSecrets` context to access secrets you have encrypted and stored in the application configuration. - -For example, - -- Accessing by a Kubernets Secret manfiest - -``` yaml -apiVersion: v1 -kind: Secret -metadata: - name: simple-sealed-secret -data: - password: "{{ .encryptedSecrets.password }}" -``` - -- Configuring ENV variable of a Lambda function to use a encrypted secret - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: LambdaFunction -spec: - name: HelloFunction - environments: - KEY: "{{ .encryptedSecrets.key }}" -``` - -In all cases, `Piped` will decrypt the encrypted secrets and render the decryption target files before using to handle any deployment tasks. - -## Examples - -- [examples/kubernetes/secret-management](https://github.com/pipe-cd/examples/tree/master/kubernetes/secret-management) -- [examples/cloudrun/secret-management](https://github.com/pipe-cd/examples/tree/master/cloudrun/secret-management) -- [examples/lambda/secret-management](https://github.com/pipe-cd/examples/tree/master/lambda/secret-management) -- [examples/terraform/secret-management](https://github.com/pipe-cd/examples/tree/master/terraform/secret-management) -- [examples/ecs/secret-management](https://github.com/pipe-cd/examples/tree/master/ecs/secret-management) diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-application/triggering-a-deployment.md b/docs/content/en/docs-v0.37.x/user-guide/managing-application/triggering-a-deployment.md deleted file mode 100644 index 3fcb5559ab..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-application/triggering-a-deployment.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: "Triggering a deployment" -linkTitle: "Triggering a deployment" -weight: 4 -description: > - This page describes when a deployment is triggered automatically and how to manually trigger a deployment. ---- - -PipeCD uses Git as a single source of truth; all application resources are defined declaratively and immutably in Git. Whenever a developer wants to update the application or infrastructure, they will send a pull request to that Git repository to propose the change. The state defined in Git is the desired state for the application and infrastructure running in the cluster. - -PipeCD applies the proposed changes to running resources in the cluster by triggering needed deployments for applications. The deployment mission is syncing all running resources of the application in the cluster to the state specified in the newest commit in Git. - -By default, when a new merged pull request touches an application, a new deployment for that application will be triggered to execute the sync process. But users can configure the application to control when a new deployment should be triggered or not. 
For example, [`onOutOfSync`](#trigger-configuration) can be enabled so that `piped` attempts to resolve the `OUT_OF_SYNC` state whenever configuration drift is detected. - -### Trigger configuration - -The trigger configuration determines whether a new deployment should be triggered. There are several configurable types: -- `onCommit`: Controls triggering a new deployment when new Git commits touch the application. -- `onCommand`: Controls triggering a new deployment when a new `SYNC` command is received. -- `onOutOfSync`: Controls triggering a new deployment when the application is in the `OUT_OF_SYNC` state. -- `onChain`: Controls triggering a new deployment when the application is a node of a deployment chain. - -See [Configuration Reference](../../configuration-reference/#deploymenttrigger) for the full configuration. - -After a new deployment is triggered, it is queued to be handled by the appropriate `piped`. At this point, the deployment pipeline has not been decided yet. -`piped` schedules all deployments of applications to ensure that only one deployment per application is executed at a time. -When no deployment of an application is running, `piped` picks a queued one and plans its deployment pipeline. -`piped` plans the deployment pipeline based on the application configuration and the diff between the running state and the state specified in the newest commit. -For example: - -- when the merged pull request updated a Deployment's container image or a mounted ConfigMap or Secret, the `piped` planner will decide that the deployment should use the specified pipeline to do a progressive deployment. -- when the merged pull request only updated the `replicas` number, the `piped` planner will decide to use a quick sync to scale the resources. - -You can force the `piped` planner to use [QuickSync](../../../concepts/#sync-strategy) or the specified pipeline based on the commit message by configuring [CommitMatcher](../../configuration-reference/#commitmatcher) in the application configuration. - -After being planned, the deployment is executed following the decided pipeline. The deployment execution, including the state of each stage as well as its logs, can be viewed in real time on the deployment details page. - -![](/images/deployment-details.png) -

-A Running Deployment at the Deployment Details Page -

- -As explained above, by default all deployments are triggered automatically by checking the merged commits, but you can also manually trigger a new deployment from the web UI. -By clicking the `SYNC` button on the application details page, a new deployment for that application will be triggered to sync the application to the state specified in the newest commit of the master branch (default branch). - -![](/images/application-details.png) -

-Application Details Page -
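-
-As a reference, the trigger section described above could be sketched in an application configuration roughly like this; whether each `disabled` flag needs to be set explicitly depends on its default, so treat the exact fields as something to double-check against the [Configuration Reference](../../configuration-reference/#deploymenttrigger):
-
-``` yaml
-apiVersion: pipecd.dev/v1beta1
-kind: KubernetesApp
-spec:
-  trigger:
-    onCommit:
-      disabled: false  # trigger on merged commits that touch the application
-    onOutOfSync:
-      disabled: false  # also attempt to resolve the OUT_OF_SYNC state automatically
-    onChain:
-      disabled: false  # allow this application to be triggered as a node of a deployment chain
-```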

- diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/_index.md b/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/_index.md deleted file mode 100644 index efdfe70387..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/_index.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: "Managing Control Plane" -linkTitle: "Managing Control Plane" -weight: 6 -description: > - This guide is for administrators and operators wanting to install and configure PipeCD for other developers. ---- diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/adding-a-project.md b/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/adding-a-project.md deleted file mode 100644 index e162c6adf5..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/adding-a-project.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: "Adding a project" -linkTitle: "Adding a project" -weight: 2 -description: > - This page describes how to set up a new project. ---- - -The control plane ops can add a new project for a team. -Project adding can be simply done from an internal web page prepared for the ops. -Because that web service is running in an `ops` pod, so in order to access it, using `kubectl port-forward` command to forward a local port to a port on the `ops` pod as following: - -``` console -kubectl port-forward service/pipecd-ops 9082 --namespace={NAMESPACE} -``` - -Then, access to [http://localhost:9082](http://localhost:9082). - -On that page, you will see the list of registered projects and a link to register new projects. -Registering a new project requires only a unique ID string and an optional description text. - -Once a new project has been registered, a static admin (username, password) will be automatically generated for the project admin. You can send that information to the project admin. The project admin first uses the provided static admin information to log in to PipeCD. After that, they can change the static admin information, configure the SSO, RBAC or disable static admin user. - -__Caution:__ The Role-Based Access Control (RBAC) setting is required to enable your team login using SSO, please make sure you have that setup before disable static admin user. \ No newline at end of file diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/architecture-overview.md b/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/architecture-overview.md deleted file mode 100644 index 4166700b69..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/architecture-overview.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: "Architecture overview" -linkTitle: "Architecture overview" -weight: 1 -description: > - This page describes the architecture of control plane. ---- - -![](/images/control-plane-components.png) -

-Component Architecture -

- -The control plane is a centralized part of PipeCD. It contains several services as below to manage the application, deployment data and handle all requests from `piped`s and web clients: - -##### Server - -`server` handles all incoming gRPC requests from `piped`s, web clients, incoming HTTP requests such as auth callback from third party services. -It also serves all web assets including HTML, JS, CSS... -This service can be easily scaled by updating the pod number. - -##### Cache - -`cache` is a single pod service for caching internal data used by `server` service. Currently, this `cache` service is powered by `redis`. -You can configure the control plane to use a fully-managed redis cache service instead of launching a cache pod in your cluster. - -##### Ops - -`ops` is a single pod service for operating PipeCD owner's tasks. -For example, it provides an internal web page for adding and managing projects; it periodically removes the old data; it collects and saves the deployment insights. - -##### Data Store - -`Data store` is a storage for storing model data such as applications and deployments. This can be a fully-managed service such as GCP [Firestore](https://cloud.google.com/firestore), GCP [Cloud SQL](https://cloud.google.com/sql) or AWS [RDS](https://aws.amazon.com/rds/) (currently we choose [MySQL v8](https://www.mysql.com/) as supported relational data store). You can also configure the control plane to use a self-managed MySQL server. -When installing the control plane, you have to choose one of the provided data store services. - -##### File Store - -`File store` is a storage for storing stage logs, application live states. This can be a fully-managed service such as GCP [GCS](https://cloud.google.com/storage), AWS [S3](https://aws.amazon.com/s3/), or a self-managed service such as [Minio](https://github.com/minio/minio). -When installing the control plane, you have to choose one of the provided file store services. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/auth.md b/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/auth.md deleted file mode 100644 index 4540e6fc00..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/auth.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "Authentication and authorization" -linkTitle: "Authentication and authorization" -weight: 3 -description: > - This page describes about PipeCD Authentication and Authorization. ---- - -![](/images/settings-project.png) - -### Static Admin - -When the PipeCD owner [adds a new project](../adding-a-project/), an admin account will be automatically generated for the project. After that, PipeCD owner sends that static admin information including username, password strings to the project admin, who can use that information to log in to PipeCD web with the admin role. - -After logging, the project admin should change the provided username and password. Or disable the static admin account after configuring the single sign-on for the project. - -### Single Sign-On (SSO) - -Single sign-on (SSO) allows users to log in to PipeCD by relying on a trusted third-party service such as GitHub, GitHub Enterprise, Google Gmail, Bitbucket... - -Before configuring the SSO, you need an OAuth application of the using service. 
For example, GitHub SSO requires creating a GitHub OAuth application as described in this page: - -https://docs.github.com/en/developers/apps/creating-an-oauth-app - -The authorization callback URL should be `https://YOUR_PIPECD_ADDRESS/auth/callback`. - -![](/images/settings-update-sso.png) - -The project can be configured to use a shared SSO configuration (shared OAuth application) instead of needing a new one. In that case, while creating the project, the PipeCD owner specifies the name of the shared SSO configuration should be used, and then the project admin can skip configuring SSO at the settings page. - -### Role-Based Access Control (RBAC) - -Role-based access control (RBAC) allows restricting access on the PipeCD web-based on the roles of user groups within the project. Before using this feature, the SSO must be configured. - -PipeCD provides three roles: - -- `viewer`: has only permissions to view application, deployment list, and details. -- `editor`: has all viewer permissions, plus permissions for actions that modify state, such as manually syncing application, canceling deployment... -- `admin`: has all editor permissions, plus permissions for updating project configurations. - -Configuring RBAC means setting up 3 teams (GitHub) /groups (Google) corresponding to 3 above roles. All users belong to a team/group will have all permissions of that team/group. - -![](/images/settings-update-rbac.png) diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/configuration-reference.md b/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/configuration-reference.md deleted file mode 100644 index fe25c482c6..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/configuration-reference.md +++ /dev/null @@ -1,145 +0,0 @@ ---- -title: "Configuration reference" -linkTitle: "Configuration reference" -weight: 6 -description: > - This page describes all configurable fields in the Control Plane configuration. ---- - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: ControlPlane -spec: - address: https://your-pipecd-address - ... -``` - -## Control Plane Configuration - -| Field | Type | Description | Required | -|-|-|-|-| -| stateKey | string | A randomly generated string used to sign oauth state. | Yes | -| datastore | [DataStore](#datastore) | Storage for storing application, deployment data. | Yes | -| filestore | [FileStore](#filestore) | File storage for storing deployment logs and application states. | Yes | -| cache | [Cache](#cache) | Internal cache configuration. | No | -| address | string | The address to the control plane. This is required if SSO is enabled. | No | -| sharedSSOConfigs | [][SharedSSOConfig](#sharedssoconfig) | List of shared SSO configurations that can be used by any projects. | No | -| projects | [][Project](#project) | List of debugging/quickstart projects. Please note that do not use this to configure the projects running in the production. | No | - -## DataStore - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | Which type of data store should be used. Can be one of the following values
`FIRESTORE`, `MYSQL`. | Yes | -| config | [DataStoreConfig](#datastoreconfig) | Specific configuration for the datastore type. This must be one of these DataStoreConfig. | Yes | - -## DataStoreConfig - -Must be one of the following objects: - -### DataStoreFireStoreConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| namespace | string | The root path element considered as a logical namespace, e.g. `pipecd`. | Yes | -| environment | string | The second path element considered as a logical environment, e.g. `dev`. All pipecd collections will have path formatted according to `{namespace}/{environment}/{collection-name}`. | Yes | -| collectionNamePrefix | string | The prefix for collection name. This can be used to avoid conflicts with existing collections in your Firestore database. | No | -| project | string | The name of GCP project hosting the Firestore. | Yes | -| credentialsFile | string | The path to the service account file for accessing Firestores. | No | - - -### DataStoreMySQLConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| url | string | The address to MySQL server. Should attach with the database port info as `127.0.0.1:3307` in case you want to use another port than the default value. | Yes | -| database | string | The name of database. | Yes | -| usernameFile | string | Path to the file containing the username. | No | -| passwordFile | string | Path to the file containing the password. | No | - - -## FileStore - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | Which type of file store should be used. Can be one of the following values
`GCS`, `S3`, `MINIO` | Yes | -| config | [FileStoreConfig](#filestoreconfig) | Specific configuration for the filestore type. This must be one of these FileStoreConfig. | Yes | - -## FileStoreConfig - -Must be one of the following objects: - -### FileStoreGCSConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| bucket | string | The bucket name. | Yes | -| credentialsFile | string | The path to the service account file for accessing GCS. | No | - -### FileStoreS3Config - -| Field | Type | Description | Required | -|-|-|-|-| -| bucket | string | The AWS S3 bucket name. | Yes | -| region | string | The AWS region name. | Yes | -| profile | string | The AWS profile name. Default value is `default`. | No | -| credentialsFile | string | The path to AWS credential file. Requires only if you want to auth by specified credential file, by default PipeCD will use `$HOME/.aws/credentials` file. | No | -| roleARN | string | The IAM role arn to use when assuming an role. Requires only if you want to auth by `WebIdentity` pattern. | No | -| tokenFile | string | The path to the WebIdentity token PipeCD should use to assume a role with. Requires only if you want to auth by `WebIdentity` pattern. | No | - -### FileStoreMinioConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| endpoint | string | The address of Minio. | Yes | -| bucket | string | The bucket name. | Yes | -| accessKeyFile | string | The path to the access key file. | No | -| secretKeyFile | string | The path to the secret key file. | No | -| autoCreateBucket | bool | Whether the given bucket should be made automatically if not exists. | No | - -## Cache - -| Field | Type | Description | Required | -|-|-|-|-| -| ttl | duration | The time that in-memory cache items are stored before they are considered as stale. | Yes | - -## Project - -| Field | Type | Description | Required | -|-|-|-|-| -| id | string | The unique identifier of the project. | Yes | -| desc | string | The description about the project. | No | -| staticAdmin | [ProjectStaticUser](#projectstaticuser) | Static admin account of the project. | Yes | - -## ProjectStaticUser - -| Field | Type | Description | Required | -|-|-|-|-| -| username | string | The username string. | Yes | -| passwordHash | string | The bcrypt hashed value of the password string. | Yes | - -## SharedSSOConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The unique name of the configuration. | Yes | -| provider | string | The SSO service provider. Can be one of the following values
`GITHUB`, `GOOGLE`... | Yes | -| github | [SSOConfigGitHub](#ssoconfiggithub) | GitHub sso configuration. | No | -| google | [SSOConfigGoogle](#ssoconfiggoogle) | Google sso configuration. | No | - -## SSOConfigGitHub - -| Field | Type | Description | Required | -|-|-|-|-| -| clientId | string | The client id string of GitHub oauth app. | Yes | -| clientSecret | string | The client secret string of GitHub oauth app. | Yes | -| baseUrl | string | The address of GitHub service. Required if enterprise. | No | -| uploadUrl | string | The upload url of GitHub service. | No | -| proxyUrl | string | The address of the proxy used while communicating with the GitHub service. | No | - -## SSOConfigGoogle - -| Field | Type | Description | Required | -|-|-|-|-| -| clientId | string | The client id string of Google oauth app. | Yes | -| clientSecret | string | The client secret string of Google oauth app. | Yes | diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/metrics.md b/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/metrics.md deleted file mode 100644 index 312ac02925..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/metrics.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: "Metrics" -linkTitle: "Metrics" -weight: 5 -description: > - This page describes how to enable monitoring system for collecting PipeCD' metrics. ---- - -PipeCD comes with a monitoring system including Prometheus, Alertmanager, and Grafana. -This page walks you through how to set up and use them. - -## Enable monitoring system -To enable monitoring system for PipeCD, you first need to set the following value to `helm install` when [installing](../../../installation/install-controlplane/#2-preparing-control-plane-configuration-file-and-installing). - -``` ---set monitoring.enabled=true -``` - -## Dashboards -If you've already enabled monitoring system in the previous section, you can access Grafana using port forwarding: - -``` -kubectl port-forward -n {NAMESPACE} svc/{PIPECD_RELEASE_NAME}-grafana 3000:80 -``` - -#### Control Plane dashboards -There are three dashboards related to Control Plane: -- Overview - usage stats of PipeCD -- Incoming Requests - gRPC and HTTP requests stats to check for any negative impact on users -- Go - processes stats of PipeCD components - -#### Piped dashboards -> TODO - -#### Cluster dashboards -Because cluster dashboards tracks cluster-wide metrics, defaults to disable. You can enable it with: - -``` ---monitoring.clusterStats=true -``` - -There are three dashboards that track metrics for: -- Node - nodes stats within the Kubernetes cluster where PipeCD runs on -- Pod - stats for pods that make PipeCD up -- Prometheus - stats for Prometheus itself - -## Alert notifications -If you want to send alert notifications to external services like Slack, you need to set an alertmanager configuration file. - -For example, let's say you use Slack as a receiver. Create `values.yaml` and put the following configuration to there. - -```yaml -prometheus: - alertmanagerFiles: - alertmanager.yml: - global: - slack_api_url: {YOUR_WEBHOOK_URL} - route: - receiver: slack-notifications - receivers: - - name: slack-notifications - slack_configs: - - channel: '#your-channel' -``` - -And give it to the `helm install` command when [installing](../../../installation/install-controlplane/#2-preparing-control-plane-configuration-file-and-installing). 
- -``` ---values=values.yaml -``` - -See [here](https://prometheus.io/docs/alerting/latest/configuration/) for more details on Alertmanager's configuration. \ No newline at end of file diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/registering-a-piped.md deleted file mode 100644 index 9719f26f8d..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-controlplane/registering-a-piped.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: "Registering a piped" -linkTitle: "Registering a piped" -weight: 4 -description: > - This page describes how to register a new piped to a project. ---- - -The list of pipeds is shown on the Settings page. Anyone who has the project admin role can register a new piped by clicking on the `+ADD` button. - -
- -![](/images/settings-add-piped.png) -

-Registering a new piped -
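Registering from the console issues an ID and a key for the new piped, which the agent later uses to authenticate against the Control Plane. As a rough sketch (the placeholders and the key mount path below are assumptions, not values taken from this page), those generated values end up in the piped configuration like this:

```yaml
apiVersion: pipecd.dev/v1beta1
kind: Piped
spec:
  projectID: {YOUR_PROJECT_ID}
  pipedID: {GENERATED_PIPED_ID}
  # Assumed mount path of the generated piped key; adjust to where you store the secret.
  pipedKeyFile: /etc/piped-secret/piped-key
  # Control Plane API address in host:port format.
  apiAddress: {CONTROL_PLANE_API_ADDRESS}
```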

diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/_index.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/_index.md deleted file mode 100644 index ef848b8856..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/_index.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "Managing Piped" -linkTitle: "Managing Piped" -weight: 7 -description: > - This guide is for administrators and operators wanting to install and configure piped for other developers. ---- - -In order to use Piped you need to register through PipeCD control plane, so please refer [register a Piped docs](../managing-controlplane/registering-a-piped/) if you do not have already. After registering successfully, you can monitor your Piped live state via the PipeCD console on the settings page. - -![piped-list-page](/images/piped-list-page.png) diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-a-cloud-provider.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-a-cloud-provider.md deleted file mode 100644 index e05aad45af..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-a-cloud-provider.md +++ /dev/null @@ -1,134 +0,0 @@ ---- -title: "Adding a cloud provider" -linkTitle: "Adding cloud provider" -weight: 3 -description: > - This page describes how to add a cloud provider to enable its applications. ---- - -> NOTE: Starting from version v0.35.0, the CloudProvider concept is being replaced by PlatformProvider. It's a name change due to the PipeCD vision improvement. __The CloudProvider configuration is marked as deprecated, please migrate your piped agent configuration to use PlatformProvider__. - -PipeCD supports multiple clouds and multiple application kinds. -Cloud provider defines which cloud and where the application should be deployed to. -So while registering a new application, the name of a configured cloud provider is required. - -Currently, PipeCD is supporting these five kinds of cloud providers: `KUBERNETES`, `ECS`, `TERRAFORM`, `CLOUDRUN`, `LAMBDA`. -A new cloud provider can be enabled by adding a [CloudProvider](../configuration-reference/#cloudprovider) struct to the piped configuration file. -A piped can have one or multiple cloud provider instances from the same or different cloud provider kind. - -The next sections show the specific configuration for each kind of cloud provider. - -### Configuring Kubernetes cloud provider - -By default, piped deploys Kubernetes application to the cluster where the piped is running in. An external cluster can be connected by specifying the `masterURL` and `kubeConfigPath` in the [configuration](../configuration-reference/#cloudproviderkubernetesconfig). - -And, the default resources (defined at [here](https://github.com/pipe-cd/pipecd/blob/master/pkg/app/piped/platformprovider/kubernetes/resourcekey.go)) from all namespaces of the Kubernetes cluster will be watched for rendering the application state in realtime and detecting the configuration drift. In case you want to restrict piped to watch only a single namespace, let specify the namespace in the [KubernetesAppStateInformer](../configuration-reference/#kubernetesappstateinformer) field. You can also add other resources or exclude resources to/from the watching targets by that field. - -Below configuration snippet just specifies a name and type of cloud provider. 
It means the cloud provider `kubernetes-dev` will connect to the Kubernetes cluster where the piped is running in, and this cloud provider watches all of the predefined resources from all namespaces inside that cluster. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: kubernetes-dev - type: KUBERNETES -``` - -See [ConfigurationReference](../configuration-reference/#cloudproviderkubernetesconfig) for the full configuration. - -### Configuring Terraform cloud provider - -A terraform cloud provider contains a list of shared terraform variables that will be applied while running the deployment of its applications. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: terraform-dev - type: TERRAFORM - config: - vars: - - "project=pipecd" -``` - -See [ConfigurationReference](../configuration-reference/#cloudproviderterraformconfig) for the full configuration. - -### Configuring Cloud Run cloud provider - -Adding a Cloud Run provider requires the name of the Google Cloud project and the region name where Cloud Run service is running. A service account file for accessing to Cloud Run is also required if the machine running the piped does not have enough permissions to access. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: cloudrun-dev - type: CLOUDRUN - config: - project: {GCP_PROJECT} - region: {CLOUDRUN_REGION} - credentialsFile: {PATH_TO_THE_SERVICE_ACCOUNT_FILE} -``` - -See [ConfigurationReference](../configuration-reference/#cloudprovidercloudrunconfig) for the full configuration. - -### Configuring Lambda cloud provider - -Adding a Lambda provider requires the region name where Lambda service is running. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: lambda-dev - type: LAMBDA - config: - region: {LAMBDA_REGION} - profile: default - credentialsFile: {PATH_TO_THE_CREDENTIAL_FILE} -``` - -You will generally need your AWS credentials to authenticate with Lambda. Piped provides multiple methods of loading these credentials. -It attempts to retrieve credentials in the following order: -1. From the environment variables. Available environment variables are `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` and `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`. -2. From the given credentials file. (the `credentialsFile field in above sample`) -3. From the pod running in EKS cluster via STS (SecurityTokenService). -4. From the EC2 Instance Role. - -Therefore, you don't have to set credentialsFile if you use the environment variables or the EC2 Instance Role. Keep in mind the IAM role/user that you use with your Piped must possess the IAM policy permission for at least `Lambda.Function` and `Lambda.Alias` resources controll (list/read/write). - -See [ConfigurationReference](../configuration-reference/#cloudproviderlambdaconfig) for the full configuration. - -### Configuring ECS cloud provider - -Adding a ECS provider requires the region name where ECS cluster is running. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: ecs-dev - type: ECS - config: - region: {ECS_CLUSTER_REGION} - profile: default - credentialsFile: {PATH_TO_THE_CREDENTIAL_FILE} -``` - -Just same as Lambda cloud provider, there are several ways to authorize Piped agent to enable it performs deployment jobs. -It attempts to retrieve credentials in the following order: -1. From the environment variables. 
Available environment variables are `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` and `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`. -2. From the given credentials file. (the `credentialsFile field in above sample`) -3. From the pod running in EKS cluster via STS (SecurityTokenService). -4. From the EC2 Instance Role. - -See [ConfigurationReference](../configuration-reference/#cloudproviderecsconfig) for the full configuration. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-a-git-repository.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-a-git-repository.md deleted file mode 100644 index 97bf68b200..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-a-git-repository.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: "Adding a git repository" -linkTitle: "Adding git repository" -weight: 2 -description: > - This page describes how to add a new Git repository. ---- - -In the `piped` configuration file, we specify the list of Git repositories should be handled by the `piped`. -A Git repository contains one or more deployable applications where each application is put inside a directory called as [application directory](../../../concepts/#application-directory). -That directory contains an application configuration file as well as application manifests. -The `piped` periodically checks the new commits and fetches the needed manifests from those repositories for executing the deployment. - -A single `piped` can be configured to handle one or more Git repositories. -In order to enable a new Git repository, let's add a new [GitRepository](../configuration-reference/#gitrepository) block to the `repositories` field in the `piped` configuration file. - -For example, with the following snippet, `piped` will take the `master` branch of [pipe-cd/examples](https://github.com/pipe-cd/examples) repository as a target Git repository for doing deployments. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - repositories: - - repoId: examples - remote: git@github.com:pipe-cd/examples.git - branch: master -``` - -In most of the cases, we want to deal with private Git repositories. For accessing those private repositories, `piped` needs a private SSH key, which can be configured while [installing](../../../installation/install-piped/installing-on-kubernetes/) with `secret.sshKey` in the Helm chart. - -``` console -helm install dev-piped pipecd/piped --version={VERSION} \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} -``` - -You can see this [configuration reference](../configuration-reference/#git) for more configurable fields about Git commands. - -Currently, `piped` allows configuring only one private SSH key for all specified Git repositories. So you can configure the same SSH key for all of those private repositories, or break them into separate `piped`s. In the near future, we also want to update `piped` to support loading multiple SSH keys. 
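If you prefer to declare the key location in the piped configuration file itself rather than only through the Helm values, the [Git](../configuration-reference/#git) block accepts an `sshKeyFile` field. A minimal sketch, assuming the key is mounted at `/etc/piped-secret/ssh-key` (adjust the path to your setup):

```yaml
apiVersion: pipecd.dev/v1beta1
kind: Piped
spec:
  git:
    # Assumed mount path of the private SSH key inside the piped container.
    sshKeyFile: /etc/piped-secret/ssh-key
  repositories:
    - repoId: examples
      remote: git@github.com:pipe-cd/examples.git
      branch: master
```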
diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-a-platform-provider.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-a-platform-provider.md deleted file mode 100644 index d231c26e38..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-a-platform-provider.md +++ /dev/null @@ -1,132 +0,0 @@ ---- -title: "Adding a platform provider" -linkTitle: "Adding platform provider" -weight: 4 -description: > - This page describes how to add a platform provider to enable its applications. ---- - -PipeCD supports multiple platforms and multiple application kinds which run on those platforms. -Platform provider defines which platform and where the application should be deployed to. -So while registering a new application, the name of a configured platform provider is required. - -Currently, PipeCD is supporting these five kinds of platform providers: `KUBERNETES`, `ECS`, `TERRAFORM`, `CLOUDRUN`, `LAMBDA`. -A new platform provider can be enabled by adding a [PlatformProvider](../configuration-reference/#platformprovider) struct to the piped configuration file. -A piped can have one or multiple platform provider instances from the same or different platform provider kind. - -The next sections show the specific configuration for each kind of platform provider. - -### Configuring Kubernetes platform provider - -By default, piped deploys Kubernetes application to the cluster where the piped is running in. An external cluster can be connected by specifying the `masterURL` and `kubeConfigPath` in the [configuration](../configuration-reference/#platformproviderkubernetesconfig). - -And, the default resources (defined at [here](https://github.com/pipe-cd/pipecd/blob/master/pkg/app/piped/platformprovider/kubernetes/resourcekey.go)) from all namespaces of the Kubernetes cluster will be watched for rendering the application state in realtime and detecting the configuration drift. In case you want to restrict piped to watch only a single namespace, let specify the namespace in the [KubernetesAppStateInformer](../configuration-reference/#kubernetesappstateinformer) field. You can also add other resources or exclude resources to/from the watching targets by that field. - -Below configuration snippet just specifies a name and type of platform provider. It means the platform provider `kubernetes-dev` will connect to the Kubernetes cluster where the piped is running in, and this platform provider watches all of the predefined resources from all namespaces inside that cluster. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: kubernetes-dev - type: KUBERNETES -``` - -See [ConfigurationReference](../configuration-reference/#platformproviderkubernetesconfig) for the full configuration. - -### Configuring Terraform platform provider - -A terraform platform provider contains a list of shared terraform variables that will be applied while running the deployment of its applications. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: terraform-dev - type: TERRAFORM - config: - vars: - - "project=pipecd" -``` - -See [ConfigurationReference](../configuration-reference/#platformproviderterraformconfig) for the full configuration. - -### Configuring Cloud Run platform provider - -Adding a Cloud Run provider requires the name of the Google Cloud project and the region name where Cloud Run service is running. 
A service account file for accessing to Cloud Run is also required if the machine running the piped does not have enough permissions to access. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: cloudrun-dev - type: CLOUDRUN - config: - project: {GCP_PROJECT} - region: {CLOUDRUN_REGION} - credentialsFile: {PATH_TO_THE_SERVICE_ACCOUNT_FILE} -``` - -See [ConfigurationReference](../configuration-reference/#platformprovidercloudrunconfig) for the full configuration. - -### Configuring Lambda platform provider - -Adding a Lambda provider requires the region name where Lambda service is running. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: lambda-dev - type: LAMBDA - config: - region: {LAMBDA_REGION} - profile: default - credentialsFile: {PATH_TO_THE_CREDENTIAL_FILE} -``` - -You will generally need your AWS credentials to authenticate with Lambda. Piped provides multiple methods of loading these credentials. -It attempts to retrieve credentials in the following order: -1. From the environment variables. Available environment variables are `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` and `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`. -2. From the given credentials file. (the `credentialsFile field in above sample`) -3. From the pod running in EKS cluster via STS (SecurityTokenService). -4. From the EC2 Instance Role. - -Therefore, you don't have to set credentialsFile if you use the environment variables or the EC2 Instance Role. Keep in mind the IAM role/user that you use with your Piped must possess the IAM policy permission for at least `Lambda.Function` and `Lambda.Alias` resources controll (list/read/write). - -See [ConfigurationReference](../configuration-reference/#platformproviderlambdaconfig) for the full configuration. - -### Configuring ECS platform provider - -Adding a ECS provider requires the region name where ECS cluster is running. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: ecs-dev - type: ECS - config: - region: {ECS_CLUSTER_REGION} - profile: default - credentialsFile: {PATH_TO_THE_CREDENTIAL_FILE} -``` - -Just same as Lambda platform provider, there are several ways to authorize Piped agent to enable it performs deployment jobs. -It attempts to retrieve credentials in the following order: -1. From the environment variables. Available environment variables are `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` and `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`. -2. From the given credentials file. (the `credentialsFile field in above sample`) -3. From the pod running in EKS cluster via STS (SecurityTokenService). -4. From the EC2 Instance Role. - -See [ConfigurationReference](../configuration-reference/#platformproviderecsconfig) for the full configuration. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-an-analysis-provider.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-an-analysis-provider.md deleted file mode 100644 index cc87d3a416..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-an-analysis-provider.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: "Adding an analysis provider" -linkTitle: "Adding analysis provider" -weight: 6 -description: > - This page describes how to add an analysis provider for doing deployment analysis. 
---- - -To enable [Automated deployment analysis](../../managing-application/customizing-deployment/automated-deployment-analysis/) feature, you have to set the needed information for Piped to connect to the [Analysis Provider](../../../concepts/#analysis-provider). - -Currently, PipeCD supports the following providers: -- [Prometheus](https://prometheus.io/) -- [Datadog](https://datadoghq.com/) - - -## Prometheus -Piped queries the [range query endpoint](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries) to obtain metrics used to evaluate the deployment. - -You need to define the Prometheus server address accessible to Piped. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - analysisProviders: - - name: prometheus-dev - type: PROMETHEUS - config: - address: https://your-prometheus.dev -``` -The full list of configurable fields are [here](../configuration-reference/#analysisproviderprometheusconfig). - -## Datadog -Piped queries the [MetricsApi.QueryMetrics](https://docs.datadoghq.com/api/latest/metrics/#query-timeseries-points) endpoint to obtain metrics used to evaluate the deployment. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - analysisProviders: - - name: datadog-dev - type: DATADOG - config: - apiKeyFile: /etc/piped-secret/datadog-api-key - applicationKeyFile: /etc/piped-secret/datadog-application-key -``` - -The full list of configurable fields are [here](../configuration-reference/#analysisproviderdatadogconfig). - -If you choose `Helm` as the installation method, we recommend using `--set-file` to mount the key files while performing the [upgrading process](../../../installation/install-piped/installing-on-kubernetes/#in-the-cluster-wide-mode). - -```console ---set-file secret.data.datadog-api-key={PATH_TO_API_KEY_FILE} \ ---set-file secret.data.datadog-application-key={PATH_TO_APPLICATION_KEY_FILE} -``` diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-helm-chart-repository-or-registry.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-helm-chart-repository-or-registry.md deleted file mode 100644 index 79581d2d65..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/adding-helm-chart-repository-or-registry.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: "Adding a Helm chart repository or registry" -linkTitle: "Adding Helm chart repo or registry" -weight: 5 -description: > - This page describes how to add a new Helm chart repository or registry. ---- - -PipeCD supports Kubernetes applications that are using Helm for templating and packaging. In addition to being able to deploy a Helm chart that is sourced from the same Git repository (`local chart`) or from a different Git repository (`remote git chart`), an application can use a chart sourced from a Helm chart repository. - -### Adding Helm chart repository - -A Helm [chart repository](https://helm.sh/docs/topics/chart_repository/) is a location backed by an HTTP server where packaged charts can be stored and shared. Before an application can be configured to use a chart from a Helm chart repository, that chart repository must be enabled in the related `piped` by adding the [ChartRepository](../configuration-reference/#chartrepository) struct to the piped configuration file. - -``` yaml -# piped configuration file -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... 
- chartRepositories: - - name: pipecd - address: https://charts.pipecd.dev -``` - -For example, the above snippet enables the official chart repository of PipeCD project. After that, you can configure the Kubernetes application to load a chart from that chart repository for executing the deployment. - -``` yaml -# Application configuration file. -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - # Helm chart sourced from a Helm Chart Repository. - helmChart: - repository: pipecd - name: helloworld - version: v0.5.0 -``` - -In case the chart repository is backed by HTTP basic authentication, the username and password strings are required in [configuration](../configuration-reference/#chartrepository). - -### Adding Helm chart registry - -A Helm chart [registry](https://helm.sh/docs/topics/registries/) is a mechanism enabled by default in Helm 3.8.0 and later that allows the OCI registry to be used for storage and distribution of Helm charts. - -Before an application can be configured to use a chart from a registry, that registry must be enabled in the related `piped` by adding the [ChartRegistry](../configuration-reference/#chartregistry) struct to the piped configuration file if authentication is enabled at the registry. - -``` yaml -# piped configuration file -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - chartRegistries: - - type: OCI - address: registry.example.com - username: sample-username - password: sample-password -``` diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/configuration-reference.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/configuration-reference.md deleted file mode 100644 index 83d81ed100..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/configuration-reference.md +++ /dev/null @@ -1,316 +0,0 @@ ---- -title: "Configuration reference" -linkTitle: "Configuration reference" -weight: 9 -description: > - This page describes all configurable fields in the piped configuration. ---- - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - projectID: ... - pipedID: ... - ... -``` - -## Piped Configuration - -| Field | Type | Description | Required | -|-|-|-|-| -| projectID | string | The identifier of the PipeCD project where this piped belongs to. | Yes | -| pipedID | string | The generated ID for this piped. | Yes | -| pipedKeyFile | string | The path to the file containing the generated key string for this piped. | Yes | -| pipedKeyData | string | Base64 encoded string of Piped key. Either pipedKeyFile or pipedKeyData must be set. | Yes | -| apiAddress | string | The address used to connect to the Control Plane's API in format `host:port`. | Yes | -| syncInterval | duration | How often to check whether an application should be synced. Default is `1m`. | No | -| appConfigSyncInterval | duration | How often to check whether application configuration files should be synced. Default is `1m`. | No | -| git | [Git](#git) | Git configuration needed for Git commands. | No | -| repositories | [][Repository](#gitrepository) | List of Git repositories this piped will handle. | No | -| chartRepositories | [][ChartRepository](#chartrepository) | List of Helm chart repositories that should be added while starting up. | No | -| chartRegistries | [][ChartRegistry](#chartregistry) | List of helm chart registries that should be logged in while starting up. | No | -| cloudProviders | [][CloudProvider](#cloudprovider) | List of cloud providers can be used by this piped. 
This field is deprecated, use `platformProviders` instead. | No | -| platformProviders | [][PlatformProvider](#platformprovider) | List of platform providers can be used by this piped. | No | -| analysisProviders | [][AnalysisProvider](#analysisprovider) | List of analysis providers can be used by this piped. | No | -| eventWatcher | [EventWatcher](#eventwatcher) | Optional Event watcher settings. | No | -| secretManagement | [SecretManagement](#secretmanagement) | The using secret management method. | No | -| notifications | [Notifications](#notifications) | Sending notifications to Slack, Webhook... | No | -| appSelector | map[string]string | List of labels to filter all applications this piped will handle. Currently, it is only be used to filter the applications suggested for adding from the control plane. | No | - -## Git - -| Field | Type | Description | Required | -|-|-|-|-| -| username | string | The username that will be configured for `git` user. Default is `piped`. | No | -| email | string | The email that will be configured for `git` user. Default is `pipecd.dev@gmail.com`. | No | -| sshConfigFilePath | string | Where to write ssh config file. Default is `$HOME/.ssh/config`. | No | -| host | string | The host name. Default is `github.com`. | No | -| hostName | string | The hostname or IP address of the remote git server. Default is the same value with Host. | No | -| sshKeyFile | string | The path to the private ssh key file. This will be used to clone the source code of the specified git repositories. | No | -| sshKeyData | string | Base64 encoded string of SSH key. | No | - -## GitRepository - -| Field | Type | Description | Required | -|-|-|-|-| -| repoID | string | Unique identifier to the repository. This must be unique in the piped scope. | Yes | -| remote | string | Remote address of the repository used to clone the source code. e.g. `git@github.com:org/repo.git` | Yes | -| branch | string | The branch will be handled. | Yes | - -## ChartRepository - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | The repository type. Currently, HTTP and GIT are supported. Default is HTTP. | No | -| name | string | The name of the Helm chart repository. Note that is not a Git repository but a [Helm chart repository](https://helm.sh/docs/topics/chart_repository/). | Yes if type is HTTP | -| address | string | The address to the Helm chart repository. | Yes if type is HTTP | -| username | string | Username used for the repository backed by HTTP basic authentication. | No | -| password | string | Password used for the repository backed by HTTP basic authentication. | No | -| insecure | bool | Whether to skip TLS certificate checks for the repository or not. | No | -| gitRemote | string | Remote address of the Git repository used to clone Helm charts. | Yes if type is GIT | -| sshKeyFile | string | The path to the private ssh key file used while cloning Helm charts from above Git repository. | No | - -## ChartRegistry - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | The registry type. Currently, only OCI is supported. Default is OCI. | No | -| address | string | The address to the registry. | Yes | -| username | string | Username used for the registry authentication. | No | -| password | string | Password used for the registry authentication. | No | - -## CloudProvider - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of the cloud provider. | Yes | -| type | string | The cloud provider type. 
Must be one of the following values:
`KUBERNETES`, `TERRAFORM`, `CLOUDRUN`, `LAMBDA`. | Yes | -| config | [CloudProviderConfig](#cloudproviderconfig) | Specific configuration for the specified type of cloud provider. | No | - -## CloudProviderConfig - -Must be one of the following structs: - -### CloudProviderKubernetesConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| masterURL | string | The master URL of the kubernetes cluster. Empty means in-cluster. | No | -| kubeConfigPath | string | The path to the kubeconfig file. Empty means in-cluster. | No | -| appStateInformer | [KubernetesAppStateInformer](#kubernetesappstateinformer) | Configuration for application resource informer. | No | - -### CloudProviderTerraformConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| vars | []string | List of variables that will be set directly on terraform commands with `-var` flag. The variable must be formatted by `key=value`. | No | - -### CloudProviderCloudRunConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| project | string | The GCP project hosting the Cloud Run service. | Yes | -| region | string | The region of running Cloud Run service. | Yes | -| credentialsFile | string | The path to the service account file for accessing Cloud Run service. | No | - -### CloudProviderLambdaConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| region | string | The region of running Lambda service. | Yes | -| credentialsFile | string | The path to the credential file for logging into AWS cluster. If this value is not provided, piped will read credential info from environment variables. It expects the format [~/.aws/credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). | No | -| roleARN | string | The IAM role arn to use when assuming an role. Required if you want to use the AWS SecurityTokenService. | No | -| tokenFile | string | The path to the WebIdentity token the SDK should use to assume a role with. Required if you want to use the AWS SecurityTokenService. | No | -| profile | string | The profile to use for logging into AWS cluster. The default value is `default`. | No | - -### CloudProviderECSConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| region | string | The region of running ECS cluster. | Yes | -| credentialsFile | string | The path to the credential file for logging into AWS cluster. If this value is not provided, piped will read credential info from environment variables. It expects the format [~/.aws/credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) | No | -| roleARN | string | The IAM role arn to use when assuming an role. Required if you want to use the AWS SecurityTokenService. | No | -| tokenFile | string | The path to the WebIdentity token the SDK should use to assume a role with. Required if you want to use the AWS SecurityTokenService. | No | -| profile | string | The profile to use for logging into AWS cluster. The default value is `default`. | No | - -## PlatformProvider - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of the platform provider. | Yes | -| type | string | The platform provider type. Must be one of the following values:
`KUBERNETES`, `TERRAFORM`, `CLOUDRUN`, `LAMBDA`. | Yes | -| config | [PlatformProviderConfig](#platformproviderconfig) | Specific configuration for the specified type of platform provider. | No | - -## PlatformProviderConfig - -Must be one of the following structs: - -### PlatformProviderKubernetesConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| masterURL | string | The master URL of the kubernetes cluster. Empty means in-cluster. | No | -| kubeConfigPath | string | The path to the kubeconfig file. Empty means in-cluster. | No | -| appStateInformer | [KubernetesAppStateInformer](#kubernetesappstateinformer) | Configuration for application resource informer. | No | - -### PlatformProviderTerraformConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| vars | []string | List of variables that will be set directly on terraform commands with `-var` flag. The variable must be formatted by `key=value`. | No | - -### PlatformProviderCloudRunConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| project | string | The GCP project hosting the Cloud Run service. | Yes | -| region | string | The region of running Cloud Run service. | Yes | -| credentialsFile | string | The path to the service account file for accessing Cloud Run service. | No | - -### PlatformProviderLambdaConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| region | string | The region of running Lambda service. | Yes | -| credentialsFile | string | The path to the credential file for logging into AWS cluster. If this value is not provided, piped will read credential info from environment variables. It expects the format [~/.aws/credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). | No | -| roleARN | string | The IAM role arn to use when assuming an role. Required if you want to use the AWS SecurityTokenService. | No | -| tokenFile | string | The path to the WebIdentity token the SDK should use to assume a role with. Required if you want to use the AWS SecurityTokenService. | No | -| profile | string | The profile to use for logging into AWS cluster. The default value is `default`. | No | - -### PlatformProviderECSConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| region | string | The region of running ECS cluster. | Yes | -| credentialsFile | string | The path to the credential file for logging into AWS cluster. If this value is not provided, piped will read credential info from environment variables. It expects the format [~/.aws/credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) | No | -| roleARN | string | The IAM role arn to use when assuming an role. Required if you want to use the AWS SecurityTokenService. | No | -| tokenFile | string | The path to the WebIdentity token the SDK should use to assume a role with. Required if you want to use the AWS SecurityTokenService. | No | -| profile | string | The profile to use for logging into AWS cluster. The default value is `default`. | No | - -## KubernetesAppStateInformer - -| Field | Type | Description | Required | -|-|-|-|-| -| namespace | string | Only watches the specified namespace. Empty means watching all namespaces. | No | -| includeResources | [][KubernetesResourcematcher](#kubernetesresourcematcher) | List of resources that should be added to the watching targets. | No | -| excludeResources | [][KubernetesResourcematcher](#kubernetesresourcematcher) | List of resources that should be ignored from the watching targets. 
| No | - -## KubernetesResourceMatcher - -| Field | Type | Description | Required | -|-|-|-|-| -| apiVersion | string | The APIVersion of the kubernetes resource. | Yes | -| kind | string | The kind name of the kubernetes resource. Empty means all kinds are matching. | No | - -## AnalysisProvider - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The unique name of the analysis provider. | Yes | -| type | string | The provider type. Currently, only PROMETHEUS, DATADOG are available. | Yes | -| config | [AnalysisProviderConfig](#analysisproviderconfig) | Specific configuration for the specified type of analysis provider. | Yes | - -## AnalysisProviderConfig - -Must be one of the following structs: - -### AnalysisProviderPrometheusConfig -| Field | Type | Description | Required | -|-|-|-|-| -| address | string | The Prometheus server address. | Yes | -| usernameFile | string | The path to the username file. | No | -| passwordFile | string | The path to the password file. | No | - -### AnalysisProviderDatadogConfig -| Field | Type | Description | Required | -|-|-|-|-| -| address | string | The address of Datadog API server. Only "datadoghq.com", "us3.datadoghq.com", "datadoghq.eu", "ddog-gov.com" are available. Defaults to "datadoghq.com" | No | -| apiKeyFile | string | The path to the api key file. | Yes | -| applicationKeyFile | string | The path to the application key file. | Yes | - -## EventWatcher - -| Field | Type | Description | Required | -|-|-|-|-| -| checkInterval | duration | Interval to fetch the latest event and compare it with one defined in EventWatcher config files. Defaults to `1m`. | No | -| gitRepos | [][EventWatcherGitRepo](#eventwatchergitrepo) | The configuration list of git repositories to be observed. Only the repositories in this list will be observed by Piped. | No | - -### EventWatcherGitRepo - -| Field | Type | Description | Required | -|-|-|-|-| -| repoId | string | Id of the git repository. This must be unique within the repos' elements. | Yes | -| commitMessage | string | The commit message used to push after replacing values. Default message is used if not given. | No | -| includes | []string | The paths to EventWatcher files to be included. Patterns can be used like `foo/*.yaml`. | No | -| excludes | []string | The paths to EventWatcher files to be excluded. Patterns can be used like `foo/*.yaml`. This is prioritized if both includes and this are given. | No | - -## SecretManagement - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | Which management method should be used. Default is `KEY_PAIR`. | Yes | -| config | [SecretManagementConfig](#secretmanagementconfig) | Configration for using secret management method. | Yes | - -## SecretManagementConfig - -Must be one of the following structs: - -### SecretManagementKeyPair - -| Field | Type | Description | Required | -|-|-|-|-| -| privateKeyFile | string | Path to the private RSA key file. | Yes | -| privateKeyData | string | Base64 encoded string of private RSA key. Either privateKeyFile or privateKeyData must be set. | No | -| publicKeyFile | string | Path to the public RSA key file. | Yes | -| publicKeyData | string | Base64 encoded string of public RSA key. Either publicKeyFile or publicKeyData must be set. | No | - -### SecretManagementGCPKMS - -> WIP - -## Notifications - -| Field | Type | Description | Required | -|-|-|-|-| -| routes | [][NotificationRoute](#notificationroute) | List of notification routes. 
| No | -| receivers | [][NotificationReceiver](#notificationreceiver) | List of notification receivers. | No | - -## NotificationRoute - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of the route. | Yes | -| receiver | string | The name of receiver who will receive all matched events. | Yes | -| events | []string | List of events that should be routed to the receiver. | No | -| ignoreEvents | []string | List of events that should be ignored. | No | -| groups | []string | List of event groups should be routed to the receiver. | No | -| ignoreGroups | []string | List of event groups should be ignored. | No | -| apps | []string | List of applications where their events should be routed to the receiver. | No | -| ignoreApps | []string | List of applications where their events should be ignored. | No | -| labels | map[string]string | List of labels where their events should be routed to the receiver. | No | -| ignoreLabels | map[string]string | List of labels where their events should be ignored. | No | - - -## NotificationReceiver - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of the receiver. | Yes | -| slack | [NotificationReciverSlack](#notificationreceiverslack) | Configuration for slack receiver. | No | -| webhook | [NotificationReceiverWebhook](#notificationreceiverwebhook) | Configuration for webhook receiver. | No | - -## NotificationReceiverSlack - -| Field | Type | Description | Required | -|-|-|-|-| -| hookURL | string | The hookURL of a slack channel. | Yes | - -## NotificationReceiverWebhook - -| Field | Type | Description | Required | -|-|-|-|-| -| url | string | The URL where notification event will be sent to. | Yes | -| signatureKey | string | The HTTP header key used to store the configured signature in each event. Default is "PipeCD-Signature". | No | -| signatureValue | string | The value of signature included in header of each event request. It can be used to verify the received events. | No | -| signatureValueFile | string | The path to the signature value file. | No | diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/configuring-event-watcher.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/configuring-event-watcher.md deleted file mode 100644 index 1a7b0ae10c..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/configuring-event-watcher.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: "Configuring event watcher" -linkTitle: "Configuring event watcher" -weight: 7 -description: > - This page describes how to configure piped to enable event watcher. ---- - -To enable [EventWatcher](../../event-watcher/), you have to configure your piped at first. - -### Grant write permission -The [SSH key used by Piped](../configuration-reference/#git) must be a key with write-access because piped needs to commit and push to your git repository when any incoming event matches. - -### Specify Git repositories to be observed -Piped watches events only for the Git repositories specified in the `gitRepos` list. -You need to add all repositories you want to enable Eventwatcher. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - eventWatcher: - gitRepos: - - repoId: repo-1 - - repoId: repo-2 - - repoId: repo-3 -``` - -### [optional] Specify Eventwatcher files Piped will use ->NOTE: This way is valid only for defining events using [.pipe/](../../event-watcher/#use-the-pipe-directory). 
- -If multiple Pipeds handle a single repository, you can prevent conflicts by splitting into the multiple EventWatcher files and setting `includes/excludes` to specify the files that should be monitored by this Piped. - -Say for instance, if you only want the Piped to use the Eventwatcher files under `.pipe/dev/`: - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - eventWatcher: - gitRepos: - - repoId: repo-1 - commitMessage: Update values by Event watcher - includes: - - dev/*.yaml -``` - -`excludes` is prioritized if both `includes` and `excludes` are given. - -The full list of configurable fields are [here](../configuration-reference/#eventwatcher). - -### [optional] Settings for git user -By default, every git commit uses `piped` as a username and `pipecd.dev@gmail.com` as an email. You can change it with the [git](../configuration-reference/#git) field. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - git: - username: foo - email: foo@example.com -``` diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/configuring-notifications.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/configuring-notifications.md deleted file mode 100644 index 25ba874a40..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/configuring-notifications.md +++ /dev/null @@ -1,101 +0,0 @@ ---- -title: "Configuring notifications" -linkTitle: "Configuring notifications" -weight: 8 -description: > - This page describes how to configure piped to send notifications to external services. ---- - -PipeCD events (deployment triggered, planned, completed, analysis result, piped started...) can be sent to external services like Slack or a Webhook service. While forwarding those events to a chat service helps developers have a quick and convenient way to know the deployment's current status, forwarding to a Webhook service may be useful for triggering other related tasks like CI jobs. - -PipeCD events are emitted and sent by the `piped` component. So all the needed configurations can be specified in the `piped` configuration file. -Notification configuration including: -- a list of `Route`s which used to match events and decide where the event should be sent to -- a list of `Receiver`s which used to know how to send events to the external service - -[Notification Route](../configuration-reference/#notificationroute) matches events based on their metadata like `name`, `group`, `app`, `labels`. -Below is the list of supporting event names and their groups. - -| Event | Group | Supported | -|-|-|-| -| DEPLOYMENT_TRIGGERED | DEPLOYMENT |

| -| DEPLOYMENT_PLANNED | DEPLOYMENT |

| -| DEPLOYMENT_APPROVED | DEPLOYMENT |

| -| DEPLOYMENT_WAIT_APPROVAL | DEPLOYMENT |

| -| DEPLOYMENT_ROLLING_BACK | DEPLOYMENT |

| -| DEPLOYMENT_SUCCEEDED | DEPLOYMENT |

| -| DEPLOYMENT_FAILED | DEPLOYMENT |

| -| DEPLOYMENT_CANCELLED | DEPLOYMENT |

| -| DEPLOYMENT_TRIGGER_FAILED | DEPLOYMENT |

| -| APPLICATION_SYNCED | APPLICATION_SYNC |

| -| APPLICATION_OUT_OF_SYNC | APPLICATION_SYNC |

| -| APPLICATION_HEALTHY | APPLICATION_HEALTH |

| -| APPLICATION_UNHEALTHY | APPLICATION_HEALTH |

| -| PIPED_STARTED | PIPED |

| -| PIPED_STOPPED | PIPED |

| - -### Sending notifications to Slack - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - notifications: - routes: - # Sending all event which contains labels `env: dev` to dev-slack-channel. - - name: dev-slack - labels: - env: dev - receiver: dev-slack-channel - # Only sending deployment started and completed events which contains - # labels `env: prod` and `team: pipecd` to prod-slack-channel. - - name: prod-slack - events: - - DEPLOYMENT_TRIGGERED - - DEPLOYMENT_COMPLETED - labels: - env: prod - team: pipecd - receiver: prod-slack-channel - receivers: - - name: dev-slack-channel - slack: - hookURL: https://slack.com/dev - - name: prod-slack-channel - slack: - hookURL: https://slack.com/prod -``` - - -![](/images/slack-notification-deployment.png) -

-Deployment was triggered, planned and completed successfully -

- -![](/images/slack-notification-piped-started.png) -

-A piped has been started -

- - -For detailed configuration, please check the [configuration reference for Notifications](../configuration-reference/#notifications) section. - -### Sending notifications to external services via webhook - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - notifications: - routes: - # Sending all events an external service. - - name: all-events-to-a-external-service - receiver: a-webhook-service - receivers: - - name: a-webhook-service - webhook: - url: {WEBHOOK_SERVICE_URL} - signatureValue: {RANDOM_SIGNATURE_STRING} -``` - -For detailed configuration, please check the [configuration reference for NotificationReceiverWebhook](../configuration-reference/#notificationreceiverwebhook) section. diff --git a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/remote-upgrade-remote-config.md b/docs/content/en/docs-v0.37.x/user-guide/managing-piped/remote-upgrade-remote-config.md deleted file mode 100644 index eec51632dd..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/managing-piped/remote-upgrade-remote-config.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: "Remote upgrade and remote config" -linkTitle: "Remote upgrade and remote config" -weight: 1 -description: > - This page describes how to use remote upgrade and remote config features. ---- - -## Remote upgrade - -The remote upgrade is the ability to restart the currently running Piped with another version from the web console. -This reduces the effort involved in updating Piped to newer versions. -All Pipeds that are running by the provided Piped container image can be enabled to use this feature. -It means Pipeds running on a Kubernetes cluster, a virtual machine, a serverless service can be upgraded remotely from the web console. - -Basically, in order to use this feature you must run Piped with `/launcher` command instead of `/piped` command as usual. -Please check the [installation](../../../installation/install-piped/) guide on each environment to see the details. - -After starting Piped with the remote-upgrade feature, you can go to the Settings page then click on `UPGRADE` button on the top-right corner. -A dialog will be shown for selecting which Pipeds you want to upgrade and what version they should run. - -![](/images/settings-remote-upgrade.png) -

-Select a list of Pipeds to upgrade from Settings page -

- -## Remote config - -Although the remote-upgrade allows you remotely restart your Pipeds to run any new version you want, if your Piped is loading its config locally where Piped is running, you still need to manually restart Piped after adding any change on that config data. Remote-config is for you to remove that kind of manual operation. - -Remote-config is the ability to load Piped config data from a remote location such as a Git repository. Not only that, but it also watches the config periodically to detect any changes on that config and restarts Piped to reflect the new configuration automatically. - -This feature requires the remote-upgrade feature to be enabled simultaneously. Currently, we only support remote config from a Git repository, but other remote locations could be supported in the future. Please check the [installation](../../../installation/install-piped/) guide on each environment to know how to configure Piped to load a remote config file. - - -## Summary - -- By `remote-upgrade` you can upgrade your Piped to a newer version by clicking on the web console -- By `remote-config` you can enforce your Piped to use the latest config data just by updating its config file stored in a Git repository diff --git a/docs/content/en/docs-v0.37.x/user-guide/plan-preview.md b/docs/content/en/docs-v0.37.x/user-guide/plan-preview.md deleted file mode 100644 index bbcafab16e..0000000000 --- a/docs/content/en/docs-v0.37.x/user-guide/plan-preview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: "Confidently review your changes with Plan Preview" -linkTitle: "Plan preview" -weight: 4 -description: > - Enables the ability to preview the deployment plan against a given commit before merging. ---- - -In order to help developers review the pull request with a better experience and more confidence to approve it to trigger the actual deployments, -PipeCD provides a way to preview the deployment plan of all updated applications by that pull request. - -Here are what will be included currently in the result of plan-preview process: - -- which application will be deployed once the pull request got merged -- which deployment strategy (QUICK_SYNC or PIPELINE_SYNC) will be used -- which resources will be added, deleted, or modified - -This feature will available for all application kinds: KUBERNETES, TERRAFORM, CLOUD_RUN, LAMBDA and Amazon ECS. - -![](/images/plan-preview-comment.png) -

-PlanPreview with GitHub actions pipe-cd/actions-plan-preview -

- -## Prerequisites - -- Ensure the version of your Piped is at least `v0.11.0`. -- Having an API key that has `READ_WRITE` role to authenticate with PipeCD's Control Plane. A new key can be generated from `settings/api-key` page of your PipeCD web. - -## Usage - -Plan-preview result can be requested by using `pipectl` command-line tool as below: - -``` console -pipectl plan-preview \ - --address={ PIPECD_CONTROL_PLANE_ADDRESS } \ - --api-key={ PIPECD_API_KEY } \ - --repo-remote-url={ REPO_REMOTE_GIT_SSH_URL } \ - --head-branch={ HEAD_BRANCH } \ - --head-commit={ HEAD_COMMIT } \ - --base-branch={ BASE_BRANCH } -``` - -You can run it locally or integrate it to your CI system to run automatically when a new pull request is opened/updated. Use `--help` to see more options. - -``` console -pipectl plan-preview --help -``` - -## GitHub Actions - -If you are using GitHub Actions, you can seamlessly integrate our prepared [actions-plan-preview](https://github.com/pipe-cd/actions-plan-preview) to your workflows. This automatically comments the plan-preview result on the pull request when it is opened or updated. You can also trigger to run plan-preview manually by leave a comment `/pipecd plan-preview` on the pull request.
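If you are not using the prepared action, you can wire `pipectl` into your own workflow. Below is a minimal sketch of such a workflow; the trigger, the secret names, and the assumption that `pipectl` is already installed on the runner are placeholders to adapt to your setup:

```yaml
# Hypothetical workflow: request a plan-preview for every pull request update.
name: plan-preview
on:
  pull_request:
    branches: [master]

jobs:
  plan-preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Assumes pipectl has already been installed and is available on PATH.
      - name: Run plan-preview
        run: |
          pipectl plan-preview \
            --address=${{ secrets.PIPECD_CONTROL_PLANE_ADDRESS }} \
            --api-key=${{ secrets.PIPECD_API_KEY }} \
            --repo-remote-url=git@github.com:${{ github.repository }}.git \
            --head-branch=${{ github.head_ref }} \
            --head-commit=${{ github.event.pull_request.head.sha }} \
            --base-branch=${{ github.base_ref }}
```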