connectors and optimize sm docs review (#3119)
* connectors and sm docs review

* resolve comments
christinaausley authored Jan 4, 2024
1 parent e725412 commit 6501ffe
Showing 14 changed files with 179 additions and 80 deletions.
76 changes: 59 additions & 17 deletions docs/self-managed/connectors-deployment/connectors-configuration.md
@@ -3,6 +3,9 @@ id: connectors-configuration
title: Configuration
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

You can configure the Connector runtime environment in the following ways:

- The Zeebe instance to connect to.
@@ -13,7 +16,15 @@ You can configure the Connector runtime environment in the following ways:

In general, the Connector Runtime will respect all properties known to [Spring Zeebe](https://github.com/camunda-community-hub/spring-zeebe).

### SaaS
<Tabs groupId="configuration" defaultValue="saas" queryString values={
[
{label: 'SaaS', value: 'saas' },
{label: 'Local installation', value: 'local' },
{label: 'Disable Operate connectivity', value: 'operate' }
]
}>

<TabItem value='saas'>

To use Camunda 8 SaaS specify the connection properties:

@@ -33,7 +44,9 @@ CAMUNDA_OPERATE_CLIENT_CLIENT-SECRET=xxx

If you are connecting a local Connector runtime to a SaaS cluster, you may want to check out our [guide to using Connectors in hybrid mode](/guides/use-connectors-in-hybrid-mode.md).
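The full set of SaaS connection properties is collapsed in the hunk above. As a hedged sketch, a typical configuration following Spring Zeebe's environment-variable naming looks roughly like this — all `xxx` values, the region, and the Operate URL are placeholders, not values from this commit:

```shell
# Zeebe SaaS connection — all values below are placeholders
ZEEBE_CLIENT_CLOUD_CLUSTER-ID=xxx
ZEEBE_CLIENT_CLOUD_CLIENT-ID=xxx
ZEEBE_CLIENT_CLOUD_CLIENT-SECRET=xxx
ZEEBE_CLIENT_CLOUD_REGION=bru-2

# Operate SaaS connection, needed for inbound Connectors
CAMUNDA_OPERATE_CLIENT_URL=https://bru-2.operate.camunda.io/xxx
CAMUNDA_OPERATE_CLIENT_CLIENT-ID=xxx
CAMUNDA_OPERATE_CLIENT_CLIENT-SECRET=xxx
```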

### Local installation
</TabItem>

<TabItem value='local'>

Zeebe:

@@ -59,7 +72,9 @@ CAMUNDA_OPERATE_CLIENT_KEYCLOAK-URL=http://localhost:18080
CAMUNDA_OPERATE_CLIENT_KEYCLOAK-REALM=camunda-platform
```

### Disable Operate connectivity
</TabItem>

<TabItem value='operate'>

Disabling Operate polling makes it impossible to use inbound (e.g., webhook) Connector capabilities.
If you still wish to do so, start your Connector runtime with the following environment variables:
@@ -71,6 +86,9 @@ SPRING_MAIN_WEB-APPLICATION-TYPE=none
OPERATE_CLIENT_ENABLED=false
```

</TabItem>
</Tabs>

## Manual discovery of Connectors

By default, the Connector runtime picks up outbound Connectors available on the classpath automatically.
@@ -96,7 +114,16 @@ CONNECTOR_HTTPJSON_TYPE=non-default-httpjson-task-type

Providing secrets to the runtime environment can be achieved in different ways, depending on your setup.

### Default secret provider
<Tabs groupId="connectorTemplateInbound" defaultValue="default" queryString values={
[
{label: 'Default secret provider', value: 'default' },
{label: 'Secrets in Docker images', value: 'docker' },
{label: 'Secrets in manual installations', value: 'manual' },
{label: 'Custom secret provider', value: 'custom' },
]
}>

<TabItem value='default'>

:::caution
By default, all environment variables can be used as Connector secrets.
@@ -116,7 +143,9 @@ The following environment variables can be used to configure the default secret
| `CAMUNDA_CONNECTOR_SECRETPROVIDER_ENVIRONMENT_ENABLED` | Whether the default secret provider is enabled. | `true` |
| `CAMUNDA_CONNECTOR_SECRETPROVIDER_ENVIRONMENT_PREFIX` | The prefix applied to the secret name before looking up the environment. | `""` |
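For example, to keep the provider enabled but resolve secrets only from variables carrying a prefix (the `SECRET_` value here is illustrative), the two options from the table combine as:

```shell
CAMUNDA_CONNECTOR_SECRETPROVIDER_ENVIRONMENT_ENABLED=true
# With this prefix, a secret referenced as {{secrets.MY_SECRET}}
# is looked up as the environment variable SECRET_MY_SECRET
CAMUNDA_CONNECTOR_SECRETPROVIDER_ENVIRONMENT_PREFIX=SECRET_
```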

### Secrets in Docker images
</TabItem>

<TabItem value='docker'>

To inject secrets into the [Docker images of the runtime](../platform-deployment/docker.md#connectors), they must be available in the environment of the Docker container.

@@ -137,7 +166,9 @@ current shell environment when `docker run` is executed. The `--env-file`
option allows using a single file with the format `NAME=VALUE` per line
to inject multiple secrets at once.
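Both options can be sketched as follows; the image name, secret names, and file name are illustrative assumptions — check the Docker installation guide linked above for the exact image:

```shell
# Pass one secret through from the current shell environment
export MY_SECRET='foo'
docker run --rm \
  -e MY_SECRET \
  camunda/connectors-bundle:latest

# Or inject several secrets at once from a NAME=VALUE file
printf 'MY_SECRET=foo\nOTHER_SECRET=bar\n' > connector-secrets.txt
docker run --rm \
  --env-file connector-secrets.txt \
  camunda/connectors-bundle:latest
```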

### Secrets in manual installations
</TabItem>

<TabItem value='manual'>

In the [manual setup](../platform-deployment/manual.md#run-connectors), inject secrets during Connector execution by providing
them as environment variables before starting the runtime environment. You can, for example, export them beforehand as follows:
Expand All @@ -148,7 +179,9 @@ export MY_SECRET='foo'

Reference the secret in the Connector's input in the prefixed style `{{secrets.MY_SECRET}}`.

### Custom secret provider
</TabItem>

<TabItem value='custom'>

Create your own implementation of the `io.camunda.connector.api.secret.SecretProvider` interface that
[comes with the SDK](https://github.com/camunda/connectors/blob/main/connector-sdk/core/src/main/java/io/camunda/connector/api/secret/SecretProvider.java).
@@ -175,15 +208,18 @@ java -cp 'connector-runtime-application-VERSION-with-dependencies.jar:...:my-sec
io.camunda.connector.runtime.ConnectorRuntimeApplication
```

## Multi-Tenancy
</TabItem>
</Tabs>

## Multi-tenancy

The Connector Runtime supports multiple tenants for Inbound and Outbound Connectors.
The Connector Runtime supports multiple tenants for inbound and outbound Connectors.
A single Connector Runtime can serve a single tenant or can be configured to serve
multiple tenants. By default, the runtime uses the `<default>` tenant id for all
Zeebe-related operations, such as handling jobs and publishing messages.

:::info
Support for **Outbound Connectors** with multiple tenants requires a dedicated
Support for **outbound Connectors** with multiple tenants requires a dedicated
tenant job worker config (described below). **Inbound Connectors** will automatically work for all tenants
the configured Connector Runtime client has access to. This can be configured in Identity via
the application assignment.
@@ -209,11 +245,11 @@ zeebe.client.default-job-worker-tenant-ids=t1,<default>

### Outbound Connector config

The Connector Runtime uses the `<default>` tenant for Outbound Connector related features.
The Connector Runtime uses the `<default>` tenant for outbound Connector related features.
To enable support for a different tenant or for multiple tenants, the tenants need
to be configured individually using the following environment variables.

If you want to use Outbound Connectors for a single tenant that is different
If you want to use outbound Connectors for a single tenant that is different
from the `<default>` tenant you can specify a different default tenant id using:

```bash
@@ -224,12 +260,12 @@ This will change the default tenant id used for fetching jobs and publishing mes
to the tenant id `tenant1`.
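The code block above is collapsed in this view; under Spring Zeebe's relaxed environment-variable naming, the setting it describes is presumably a single variable of the following shape — a sketch, so verify the exact name against your runtime version:

```shell
ZEEBE_CLIENT_DEFAULT-TENANT-ID=tenant1
```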

:::note
Please keep in mind that Inbound Connectors will still be enabled for
Please keep in mind that inbound Connectors will still be enabled for
all tenants that the Connector Runtime client has access to.
:::

If you want to run the Connector Runtime in a setup where a single runtime
serves multiple tenants you have add each tenant id to the list of the default job workers:
serves multiple tenants you have to add each tenant id to the list of the default job workers:

```bash
ZEEBE_CLIENT_DEFAULT-JOB-WORKER-TENANT-IDS=tenant1, tenant2
@@ -240,13 +276,19 @@ configuration of job workers.

### Inbound Connector config

The Connector Runtime will fetch and execute all Inbound Connectors it receives from
Operate independently of the Outbound Connector configuration without any additional
The Connector Runtime will fetch and execute all inbound Connectors it receives from
Operate independently of the outbound Connector configuration without any additional
configuration required from the user.

If you want to restrict the Connector Runtime Inbound Connector feature to a single tenant or multiple tenants
If you want to restrict the Connector Runtime inbound Connector feature to a single tenant or multiple tenants
you have to use Identity and assign the tenants the Connector application should have access to.

### Troubleshooting

To ensure seamless integration and functionality, the multi-tenancy feature must also be enabled across **all** associated components [if not configured in Helm](/self-managed/concepts/multi-tenancy.md) so users can view any data from tenants for which they have authorizations configured in Identity.

Find more information (including links to individual component configuration) on the [multi-tenancy concepts page](/self-managed/concepts/multi-tenancy.md).

## Logging

### Google Stackdriver (JSON) logging
8 changes: 4 additions & 4 deletions docs/self-managed/connectors-deployment/install-and-start.md
@@ -14,7 +14,7 @@ In a [Self-Managed](/self-managed/about-self-managed.md) environment, you manage
Using our [Connector runtime environments](/components/connectors/custom-built-connectors/connector-sdk.md#runtime-environments), you can consume any set of Connectors,
including the [out-of-the-box Connectors](/components/connectors/out-of-the-box-connectors/available-connectors-overview.md) and custom Connectors developed using the **[Connector SDK](/components/connectors/custom-built-connectors/connector-sdk.md)** and [Connector templates](/components/connectors/custom-built-connectors/connector-templates.md).

You can find a list of Connectors developed by Camunda, Partners, and the community in our
You can find a list of Connectors developed by Camunda, partners, and the community in our
[Camunda Connectors Awesome List](https://github.com/camunda-community-hub/camunda-8-connectors#readme).

:::note
@@ -29,8 +29,8 @@ Currently, we support an installation of Connectors with [Docker](/self-managed/
[Docker Compose](/self-managed/platform-deployment/docker.md#docker-compose), [Helm charts](/self-managed/platform-deployment/helm-kubernetes/overview.md), and the [manual setup](/self-managed/platform-deployment/manual.md#run-connectors).

:::note
Inbound Connectors require Operate to be deployed as part of your Camunda Self-Managed installation.
If you don't use Operate with your cluster, you can still use Outbound Connectors.
[Inbound Connectors](/components/connectors/use-connectors/inbound.md) require [Operate](/self-managed/operate-deployment/install-and-start.md) to be deployed as part of your Camunda Self-Managed installation.
If you don't use Operate with your cluster, you can still use [outbound Connectors](/components/connectors/use-connectors/outbound.md).
:::

## Connector templates
@@ -41,7 +41,7 @@ For the [out-of-the-box Connectors](/components/connectors/out-of-the-box-connec
the Connectors Bundle project provides a set of all Connector templates related to one [release version](https://github.com/camunda/connectors-bundle/releases).
If you use the [Docker Compose](/self-managed/platform-deployment/docker.md#docker-compose) installation, you can thus fetch all Connector templates that match the versions of the Connectors used in the backend.

Alternatively, you can fetch the JSON templates from the respective Connector's releases at respective connectors folder in the [bundle repository](https://github.com/camunda/connectors-bundle)
Alternatively, you can fetch the JSON templates from the respective Connector's releases in the respective Connectors folder in the [bundle repository](https://github.com/camunda/connectors-bundle)
at `connectors/{connector name}/element-templates`.

You can use the Connector templates as provided or modify them to your needs as described in our [Connector templates guide](/components/connectors/custom-built-connectors/connector-templates.md).
@@ -46,7 +46,7 @@ There are entities that only exist in Camunda Optimize and authorizations to the

[Collections](components/userguide/collections-dashboards-reports.md) are the only way to share Camunda Optimize reports and dashboards with other users. Access to them is directly managed via the UI of collections; see the corresponding user guide section on [Collection - User Permissions](components/userguide/collections-dashboards-reports.md#user-permissions).

### Event based processes
### Event-based processes

<span class="badge badge--platform">Camunda 7 only</span>

@@ -25,7 +25,7 @@ The configuration property [`engines.${engineAlias}.importEnabled`](./system-con

Given a simple failover cluster consisting of two instances connected to one engine, the engine configurations in the `environment-config.yaml` would look like the following:

Instance 1 (import from engine `default` enabled):
**Instance 1 (import from engine `default` enabled):**

```
...
@@ -43,7 +43,7 @@ historyCleanup:
...
```

Instance 2 (import from engine `camunda-bpm` disabled):
**Instance 2 (import from engine `camunda-bpm` disabled):**

```
...
@@ -59,7 +59,7 @@ engines:
The importing instance has the [history cleanup enabled](./system-configuration.md#history-cleanup-settings). It is strongly recommended all non-importing Optimize instances in the cluster do not enable history cleanup to prevent any conflicts when the [history cleanup](../history-cleanup/) is performed.
:::

### 1.1 Import - event based process import
### 1.1 Import - event-based process import

<span class="badge badge--platform">Camunda 7 only</span>

@@ -6,9 +6,9 @@ description: "How to configure event-based processes in Optimize."

<span class="badge badge--platform">Camunda 7 only</span>

Configuration of the Optimize event based process feature.
Configuration of the Optimize event-based process feature.

| YAML Path | Default Value | Description |
| YAML path | Default value | Description |
| -------------------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| eventBasedProcess.authorizedUserIds | [ ] | A list of userIds that are authorized to manage (Create, Update, Publish & Delete) event based processes. |
| eventBasedProcess.authorizedGroupIds | [ ] | A list of groupIds that are authorized to manage (Create, Update, Publish & Delete) event based processes. |
@@ -17,13 +17,13 @@ Configuration of the Optimize event based process feature.
| eventBasedProcess.eventIndexRollover.scheduleIntervalInMinutes | 10 | The interval in minutes at which to check whether the conditions for a rollover of eligible indices are met, triggering one if required. This value should be greater than 0. |
| eventBasedProcess.eventIndexRollover.maxIndexSizeGB | 50 | Specifies the maximum total index size for events (excluding replicas). When shards get too large, query performance can slow down and rolling over an index can bring an improvement. Using this configuration, a rollover will occur when triggered and the current event index size matches or exceeds the maxIndexSizeGB threshold. |

## Event Ingestion REST API Configuration
## Event ingestion REST API configuration

<span class="badge badge--platform">Camunda 7 only</span>

Configuration of the Optimize [Event Ingestion REST API](../../../apis-tools/optimize-api/event-ingestion.md) for [event-based processes](components/userguide/additional-features/event-based-processes.md).
Configuration of the Optimize [event ingestion REST API](../../../apis-tools/optimize-api/event-ingestion.md) for [event-based processes](components/userguide/additional-features/event-based-processes.md).

| YAML Path | Default Value | Description |
| YAML path | Default value | Description |
| ----------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| eventBasedProcess.eventIngestion.maxBatchRequestBytes | 10485760 | Content length limit for an ingestion REST API bulk request in bytes. Requests will be rejected when exceeding that limit. Defaults to 10MB. In case this limit is raised you should carefully tune the heap memory accordingly, see Adjust Optimize heap size on how to do that. |
| eventBasedProcess.eventIngestion.maxRequests | 5 | The maximum number of event ingestion requests that can be serviced at any given time. |
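Expressed as a YAML fragment using the defaults from the table above (nesting inferred from the dotted paths):

```yaml
eventBasedProcess:
  eventIngestion:
    # 10 MB content-length limit per bulk ingestion request
    maxBatchRequestBytes: 10485760
    # maximum concurrent ingestion requests
    maxRequests: 5
```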
@@ -4,6 +4,9 @@ title: "History cleanup"
description: "Make sure that old data is automatically removed from Optimize."
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

To satisfy data protection laws or just for general storage management purposes, Optimize provides an automated cleanup functionality.

There are four types of history cleanup:
@@ -41,7 +44,15 @@ For details on the notation, see the [Configuration Description](./system-config

All the remaining settings are entity type specific and will be explained in the following subsections.

### Process data cleanup
<Tabs groupId="cleanup" defaultValue="processdata" queryString values={
[
{label: 'Process data', value: 'processdata' },
{label: 'Decision data', value: 'decisiondata' },
{label: 'Ingested event', value: 'ingestedevent' }
]
}>

<TabItem value='processdata'>

The age of process instance data is determined by the `endTime` field of each process instance. Running instances are never cleaned up.

@@ -66,7 +77,9 @@ historyCleanup:
processDataCleanupMode: 'variables'
```

### Decision data cleanup
</TabItem>

<TabItem value='decisiondata'>

The age of decision instance data is determined by the `evaluationTime` field of each decision instance.

@@ -84,7 +97,9 @@ historyCleanup:
ttl: 'P3M'
```

### Ingested event cleanup
</TabItem>

<TabItem value='ingestedevent'>

The age of ingested event data is determined by the [`time`](../../../apis-tools/optimize-api/event-ingestion.md#request-body) field provided for each event at the time of ingestion.

@@ -101,6 +116,9 @@ The ingested event cleanup does not cascade down to potentially existing [event-
The ingested event cleanup does not cascade down to potentially existing [event-based processes](components/userguide/additional-features/event-based-processes.md) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
:::

</TabItem>
</Tabs>

## Example

Here is an example of what a complete cleanup configuration might look like: