Release: Prepare v2.14 #1382

Merged · 5 commits · Apr 25, 2024
4 changes: 0 additions & 4 deletions .htmltest.yml
@@ -4,7 +4,3 @@ CheckExternal: false
IgnoreAltMissing: true
IgnoreEmptyHref: true
IgnoreInternalURLs:
- /docs/2.12/authentication-providers/aws/
- /docs/2.12/authentication-providers/aws-secret-manager/
- /docs/2.12/authentication-providers/configmap/
- /docs/2.12/authentication-providers/gcp-secret-manager/
2 changes: 1 addition & 1 deletion README.md
@@ -171,7 +171,7 @@ Remember to create the folder for next version with already existing docs in
current version.

Make sure that the version on `content/docs/{next-version}/deploy.md` is updated
and uses the next version, instead of the current one.
and uses the next version, instead of the current one. Ensure that the Kubernetes cluster version is updated as well.

Ensure that compatibility matrix on `content/docs/{next-version}/operate/cluster.md` is updated with the compatibilities for the incoming version.

2 changes: 1 addition & 1 deletion config.toml
@@ -29,7 +29,7 @@ alpine_js_version = "2.2.1"
favicon = "favicon.png"

[params.versions]
docs = ["2.13", "2.12", "2.11", "2.10", "2.9", "2.8", "2.7", "2.6", "2.5", "2.4", "2.3", "2.2", "2.1", "2.0", "1.5", "1.4"]
docs = ["2.14", "2.13", "2.12", "2.11", "2.10", "2.9", "2.8", "2.7", "2.6", "2.5", "2.4", "2.3", "2.2", "2.1", "2.0", "1.5", "1.4"]

# Site fonts. For more options see https://fonts.google.com.
[[params.fonts]]
2 changes: 1 addition & 1 deletion content/docs/2.10/concepts/scaling-deployments.md
@@ -228,7 +228,7 @@ Trigger fields:

### Caching Metrics (Experimental)

This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

2 changes: 1 addition & 1 deletion content/docs/2.11/concepts/scaling-deployments.md
@@ -231,7 +231,7 @@ Trigger fields:

### Caching Metrics

This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

2 changes: 1 addition & 1 deletion content/docs/2.12/concepts/scaling-deployments.md
@@ -265,7 +265,7 @@ Trigger fields:

### Caching Metrics

This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

2 changes: 1 addition & 1 deletion content/docs/2.13/concepts/scaling-deployments.md
@@ -265,7 +265,7 @@ Trigger fields:

### Caching Metrics

This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

2 changes: 0 additions & 2 deletions content/docs/2.13/scalers/prometheus.md
@@ -389,7 +389,6 @@ spec:
- type: prometheus
metadata:
serverAddress: http://<prometheus-host>:9090
metricName: http_requests_total
threshold: '100'
query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))
authModes: "custom"
@@ -424,7 +423,6 @@ spec:
- type: prometheus
metadata:
serverAddress: https://test-azure-monitor-workspace-name-9ksc.eastus.prometheus.monitor.azure.com
metricName: http_requests_total
query: sum(rate(http_requests_total{deployment="my-deployment"}[2m])) # Note: query must return a vector/scalar single element response
threshold: '100.50'
activationThreshold: '5.5'
2 changes: 1 addition & 1 deletion content/docs/2.14/concepts/scaling-deployments.md
@@ -276,7 +276,7 @@ Trigger fields:

### Caching Metrics

This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.
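As an illustrative sketch, a trigger can opt into this caching through the trigger-level `useCachedMetrics` field; the resource names, server address, and query below are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject        # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment             # hypothetical Deployment
  pollingInterval: 30               # cache refresh cadence, in seconds
  triggers:
    - type: prometheus
      useCachedMetrics: true        # serve HPA metric requests from the cache
      metadata:
        serverAddress: http://prometheus.svc:9090   # hypothetical address
        query: sum(rate(http_requests_total[2m]))   # hypothetical query
        threshold: "100"
```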

2 changes: 1 addition & 1 deletion content/docs/2.14/deploy.md
@@ -8,7 +8,7 @@ We provide a few approaches to deploy KEDA runtime in your Kubernetes clusters:
- [Operator Hub](#operatorhub)
- [YAML declarations](#yaml)

> 💡 **NOTE:** KEDA requires Kubernetes cluster version 1.24 and higher
> 💡 **NOTE:** KEDA requires Kubernetes cluster version 1.27 and higher

Don't see what you need? Feel free to [create an issue](https://github.com/kedacore/keda/issues/new) on our GitHub repo.

2 changes: 1 addition & 1 deletion content/docs/2.14/operate/cluster.md
@@ -16,7 +16,7 @@ As a reference, this compatibility matrix shows supported k8s versions per KEDA

| KEDA | Kubernetes |
|-----------|---------------|
| v2.14 | v1.28 - v1.30 |
| v2.14 | v1.27 - v1.29 |
| v2.13 | v1.27 - v1.29 |
| v2.12 | v1.26 - v1.28 |
| v2.11 | v1.25 - v1.27 |
2 changes: 0 additions & 2 deletions content/docs/2.14/scalers/prometheus.md
@@ -389,7 +389,6 @@ spec:
- type: prometheus
metadata:
serverAddress: http://<prometheus-host>:9090
metricName: http_requests_total
threshold: '100'
query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))
authModes: "custom"
@@ -424,7 +423,6 @@ spec:
- type: prometheus
metadata:
serverAddress: https://test-azure-monitor-workspace-name-9ksc.eastus.prometheus.monitor.azure.com
metricName: http_requests_total
query: sum(rate(http_requests_total{deployment="my-deployment"}[2m])) # Note: query must return a vector/scalar single element response
threshold: '100.50'
activationThreshold: '5.5'
8 changes: 8 additions & 0 deletions content/docs/2.15/_index.md
@@ -0,0 +1,8 @@
+++
title = "The KEDA Documentation"
weight = 1
+++

Welcome to the documentation for **KEDA**, the Kubernetes Event-driven Autoscaler. Use the navigation to the left to learn more about how to use KEDA and its components.

Additions and contributions to these docs are managed on [the keda-docs GitHub repo](https://github.com/kedacore/keda-docs).
8 changes: 8 additions & 0 deletions content/docs/2.15/authentication-providers/_index.md
@@ -0,0 +1,8 @@
+++
title = "Authentication Providers"
weight = 5
+++

Available authentication providers for KEDA:

{{< authentication-providers >}}
14 changes: 14 additions & 0 deletions content/docs/2.15/authentication-providers/aws-eks.md
@@ -0,0 +1,14 @@
+++
title = "AWS EKS Pod Identity Webhook"
+++

[**EKS Pod Identity Webhook**](https://github.com/aws/amazon-eks-pod-identity-webhook), described in more depth [here](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/), allows you to provide the role name using an annotation on a service account associated with your pod.

> ⚠️ **WARNING:** [`aws-eks` auth has been deprecated](https://github.com/kedacore/keda/discussions/5343) and support for it will be removed from KEDA in v3. We strongly encourage migrating to [`aws` auth](./aws.md).

You can tell KEDA to use the EKS Pod Identity Webhook via `podIdentity.provider`.

```yaml
podIdentity:
provider: aws-eks # Optional. Default: none
```
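For reference, a sketch of the service-account side of this setup; the annotation key is the standard IRSA one, while the namespace, account ID, and role name are hypothetical:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: keda-operator
  namespace: keda
  annotations:
    # Role assumed by pods running under this service account
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/keda-operator
```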
14 changes: 14 additions & 0 deletions content/docs/2.15/authentication-providers/aws-kiam.md
@@ -0,0 +1,14 @@
+++
title = "AWS Kiam Pod Identity"
+++

[**Kiam**](https://github.com/uswitch/kiam/) lets you bind an AWS IAM Role to a pod using an annotation on the pod.

> ⚠️ **WARNING:** `aws-kiam` auth has been deprecated because [AWS Kiam is no longer maintained](https://github.com/uswitch/kiam/#-%EF%B8%8Fthis-project-is-now-being-abandoned-%EF%B8%8F-). As a result, [support for it will be removed from KEDA in v2.15](https://github.com/kedacore/keda/discussions/5342). We strongly encourage migrating to [`aws` auth](./aws.md).

You can tell KEDA to use Kiam via `podIdentity.provider`.

```yaml
podIdentity:
provider: aws-kiam # Optional. Default: none
```
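For reference, a minimal sketch of the pod annotation that Kiam reads; the role name is hypothetical:

```yaml
# Pod (or pod template) metadata consumed by the Kiam agent
metadata:
  annotations:
    iam.amazonaws.com/role: keda-operator-role   # hypothetical role name
```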
@@ -0,0 +1,40 @@
+++
title = "AWS Secret Manager"
+++

You can integrate AWS Secrets Manager secrets into your trigger by configuring the `awsSecretManager` key in your KEDA scaling specification.

The `podIdentity` section configures the usage of AWS pod identity with the provider set to AWS.

The `credentials` section specifies AWS credentials, including the `accessKey` and `accessSecretKey`.

- **accessKey:** Configuration for the AWS access key.
- **accessSecretKey:** Configuration for the AWS secret access key.

The `region` parameter is optional and represents the AWS region where the secret resides; if not specified, the default region is used.

The `secrets` list within `awsSecretManager` maps AWS Secrets Manager secrets to the authentication parameters used in your application. Each entry specifies the parameter name, the AWS secret name, and an optional version, which defaults to the latest version if unspecified.

### Configuration

```yaml
awsSecretManager:
podIdentity: # Optional.
provider: aws # Required.
credentials: # Optional.
accessKey: # Required.
valueFrom: # Required.
secretKeyRef: # Required.
name: {k8s-secret-with-aws-credentials} # Required.
key: {key-in-k8s-secret} # Required.
accessSecretKey: # Required.
valueFrom: # Required.
secretKeyRef: # Required.
name: {k8s-secret-with-aws-credentials} # Required.
key: {key-in-k8s-secret} # Required.
region: {aws-region} # Optional.
secrets: # Required.
- parameter: {param-name-used-for-auth} # Required.
name: {aws-secret-name} # Required.
version: {aws-secret-version} # Optional.
```
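For context, a minimal sketch of how this block sits inside a `TriggerAuthentication` resource using pod identity; the resource name, region, and secret name are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: aws-secrets-auth              # hypothetical name
spec:
  awsSecretManager:
    podIdentity:
      provider: aws
    region: eu-west-1                 # hypothetical region
    secrets:
      - parameter: connection         # parameter name the scaler reads
        name: prod/connection-string  # hypothetical secret in AWS Secrets Manager
```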
140 changes: 140 additions & 0 deletions content/docs/2.15/authentication-providers/aws.md
@@ -0,0 +1,140 @@
+++
title = "AWS (IRSA) Pod Identity Webhook"
+++

[**AWS IAM Roles for Service Accounts (IRSA) Pod Identity Webhook**](https://github.com/aws/amazon-eks-pod-identity-webhook) ([documentation](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/)) allows you to provide the role name using an annotation on a service account associated with your pod.

You can tell KEDA to use AWS Pod Identity Webhook via `podIdentity.provider`.

```yaml
podIdentity:
provider: aws
roleArn: <role-arn> # Optional.
identityOwner: keda|workload # Optional. If not set, 'keda' is default value. Mutually exclusive with 'roleArn' (if set)
```

**Parameter list:**

- `roleArn` - Role ARN to be used by KEDA. If not set, the IAM role used by the KEDA operator is used. Mutually exclusive with `identityOwner: workload`.
- `identityOwner` - Owner of the identity to be used. (Values: `keda`, `workload`, Default: `keda`, Optional)

> ⚠️ **NOTE:** `podIdentity.roleArn` and `podIdentity.identityOwner` are mutually exclusive, setting both is not supported.

## How to use

AWS IRSA grants access to pods whose service accounts carry the appropriate annotations ([official docs](https://aws.amazon.com/es/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/)). You can set these annotations on the KEDA operator service account.

This can be done for you at deployment time with Helm using the following flags (an equivalent `values.yaml` fragment is sketched after the list):

1. `--set podIdentity.aws.irsa.enabled=true`
2. `--set podIdentity.aws.irsa.roleArn={aws-arn-role}`
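Assuming the official KEDA Helm chart, the same settings expressed as a `values.yaml` fragment would look roughly like this (the account ID and role name are hypothetical):

```yaml
podIdentity:
  aws:
    irsa:
      enabled: true
      roleArn: arn:aws:iam::123456789012:role/keda-operator   # hypothetical role ARN
```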

You can override the default KEDA operator IAM role by specifying a `roleArn` parameter under the `podIdentity` field. This lets end users use different roles to access various resources, allowing more granular access than a single IAM role with access to multiple resources.

If you would like to use the same IAM credentials as your workload, set `podIdentity.identityOwner` to `workload`; KEDA will inspect the workload's service account for the IRSA annotation and assume that role.
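A minimal sketch of that setup (the resource name is hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: workload-identity-auth    # hypothetical name
spec:
  podIdentity:
    provider: aws
    # KEDA inspects the scaled workload's service account for the
    # IRSA annotation and assumes that role
    identityOwner: workload
```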

## AssumeRole or AssumeRoleWithWebIdentity?

This authentication automatically uses both, falling back from [AssumeRoleWithWebIdentity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html) to [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) if the first fails. This extends the capabilities: KEDA doesn't need the `sts:AssumeRole` permission if you are already working with [WebIdentities](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html); you just need to add the KEDA service account to the role's trusted relations.

## Setting up KEDA role and policy

The [official AWS docs](https://aws.amazon.com/es/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/) explain how to set up a basic configuration for an IRSA role. The policy changes depend on whether you are using the KEDA role (`podIdentity.roleArn` is not set) or a workload role (`podIdentity.roleArn` is set to a role ARN, or `podIdentity.identityOwner` is set to `workload`).

### Using KEDA role to access infrastructure

This is the easiest case: you just need to attach the desired policy or policies to KEDA's role, granting the access permissions you want to provide. For example, this could be a policy to use with SQS:

```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sqs:GetQueueAttributes",
"Resource": "arn:aws:sqs:*:YOUR_ACCOUNT:YOUR_QUEUE"
}
]
}
```

### Using KEDA role to assume workload role using AssumeRoleWithWebIdentity

In this case, KEDA will use its own (k8s) service account to assume the workload role (and to use the policies attached to that role). This scenario requires that the KEDA service account is trusted to request the role using AssumeRoleWithWebIdentity.

This is an example of what the workload role's trust policy could look like:

```json
{
"Version": "2012-10-17",
"Statement": [
{
... YOUR WORKLOAD TRUSTED RELATION ...
},
{
"Effect": "Allow",
"Principal": {
"Federated": "YOUR_OIDC_ARN"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"YOUR_OIDC:sub": "system:serviceaccount:keda:keda-operator",
"YOUR_OIDC:aud": "sts.amazonaws.com"
}
}
}
]
}
```

### Using KEDA role to assume workload role using AssumeRole

In this case, KEDA will use its own role to assume the workload role (and to use the policies attached to that role). This scenario is a bit more complex: we need to establish a trusted relationship between both roles, and we need to grant KEDA's role permission to assume other roles.

This is an example of what KEDA's role policy could look like:

```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": [
"arn:aws:iam::ACCOUNT_1:role/ROLE_NAME"
]
},
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": [
"arn:aws:iam::ACCOUNT_2:role/*"
]
}
]
}
```

This can be extended so that KEDA can assume multiple workload roles, either as an explicit array of role ARNs or with a wildcard. With this policy attached, KEDA can assume other roles; you now have to allow each workload role you want to use to be assumed by the KEDA operator role. To achieve this, add a trusted relation to the workload role:

```json
{
"Version": "2012-10-17",
"Statement": [
{
// Your already existing relations
"Sid": "",
"Effect": "Allow",
// ...
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNT:role/KEDA_ROLE_NAME"
},
"Action": "sts:AssumeRole"
}
]
}
```