docs: update AWS config properties
fhussonnois committed Apr 19, 2022
1 parent 1bb3c9b commit d8b176b
Showing 3 changed files with 40 additions and 13 deletions.
@@ -90,7 +90,7 @@ public class AmazonS3ClientConfig extends AbstractConfig {

public static final int AWS_S3_RETRY_BACKOFF_MAX_RETRIES_DEFAULT = PredefinedRetryPolicies.DEFAULT_MAX_ERROR_RETRY;

-public static final String AWS_S3_OBJECT_STORAGE_CLASS_CONFIG = "aws.default.object.storage.class";
+public static final String AWS_S3_OBJECT_STORAGE_CLASS_CONFIG = "aws.s3.default.object.storage.class";
public static final String AWS_S3_OBJECT_STORAGE_CLASS_DOC = "The AWS storage class to associate with an S3 object when it is copied by the connector (e.g., during a move operation).";

/**
25 changes: 24 additions & 1 deletion docs/content/en/docs/Developer Guide/cleaning-completed-files.md
@@ -16,7 +16,7 @@ The cleanup policy can be configured with the following connector property:
| `fs.cleanup.policy.class` | The fully qualified name of the class which is used to clean up files | class | *-* | high |


-## Available Cleanup Policies
+## Generic Cleanup Policies

### `DeleteCleanPolicy`

@@ -38,6 +38,8 @@ To enable this policy, the property `fs.cleanup.policy.class` must be configured to:

```
io.streamthoughts.kafka.connect.filepulse.fs.clean.LogCleanupPolicy
```
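
For illustration, a minimal connector configuration enabling this policy might look like the following sketch in Kafka Connect standalone `.properties` format. The connector class, topic, and task settings shown here are assumptions for the example, not part of this commit:

```properties
# Hypothetical FilePulse source connector sketch (standalone mode)
name=filepulse-log-cleanup-example
connector.class=io.streamthoughts.kafka.connect.filepulse.source.FilePulseSourceConnector
tasks.max=1
topic=connect-file-pulse-example
# Log completed files instead of deleting or moving them
fs.cleanup.policy.class=io.streamthoughts.kafka.connect.filepulse.fs.clean.LogCleanupPolicy
```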

## Cleanup Policies: Local Filesystem

### `LocalMoveCleanupPolicy`

This policy attempts to atomically move files to configurable target directories.
@@ -59,4 +61,25 @@ This policy only works when using the `LocalFSDirectoryListing`.
| `cleaner.output.failed.path`  | Target directory for files processed with failure | string | *.failure* | high |
| `cleaner.output.succeed.path` | Target directory for files processed successfully | string | *.success* | high |
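
The table above translates into a configuration like this minimal sketch; the directory paths are assumptions for the example:

```properties
# Hypothetical sketch: move completed local files into per-outcome directories
fs.cleanup.policy.class=io.streamthoughts.kafka.connect.filepulse.fs.clean.LocalMoveCleanupPolicy
# Files processed successfully are moved here (default: .success)
cleaner.output.succeed.path=/data/archive/success
# Files that failed processing are moved here (default: .failure)
cleaner.output.failed.path=/data/archive/failure
```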

## Cleanup Policies: Amazon

### `AmazonMoveCleanupPolicy`

This policy moves S3 objects to configurable target buckets and prefixes.

To enable this policy, the property `fs.cleanup.policy.class` must be configured to:

```
io.streamthoughts.kafka.connect.filepulse.fs.clean.AmazonS3MoveCleanupPolicy
```


| Configuration | Description | Type | Default | Importance |
|---------------|-------------|------|---------|------------|
| `fs.cleanup.policy.move.success.aws.bucket.name` | The name of the destination S3 bucket for successfully processed objects (optional) | `string` | *Bucket name of the source S3 object* | HIGH |
| `fs.cleanup.policy.move.success.aws.prefix.path` | The prefix used to build the destination key of a successfully processed S3 object | `string` | - | HIGH |
| `fs.cleanup.policy.move.failure.aws.bucket.name` | The name of the destination S3 bucket for objects whose processing failed (optional) | `string` | *Bucket name of the source S3 object* | HIGH |
| `fs.cleanup.policy.move.failure.aws.prefix.path` | The prefix used to build the destination key of a failed S3 object | `string` | - | HIGH |
| `aws.s3.default.object.storage.class` | The AWS storage class to associate with an S3 object when it is copied by the connector (e.g., during a move operation). Accepted values are: `STANDARD`, `GLACIER`, `REDUCED_REDUNDANCY`, `STANDARD_IA`, `ONEZONE_IA`, `INTELLIGENT_TIERING`, `DEEP_ARCHIVE` | `string` | - | LOW |
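
Putting these properties together, a configuration archiving processed objects might look like the sketch below; the bucket name, prefixes, and storage class are assumptions for the example:

```properties
# Hypothetical sketch: archive S3 objects after processing
fs.cleanup.policy.class=io.streamthoughts.kafka.connect.filepulse.fs.clean.AmazonS3MoveCleanupPolicy
# Destination bucket for successfully processed objects
# (defaults to the bucket of the source object if omitted)
fs.cleanup.policy.move.success.aws.bucket.name=my-archive-bucket
fs.cleanup.policy.move.success.aws.prefix.path=processed/
fs.cleanup.policy.move.failure.aws.prefix.path=failed/
# Storage class applied when the connector copies the object
aws.s3.default.object.storage.class=STANDARD_IA
```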

## Implementing your own policy
26 changes: 15 additions & 11 deletions docs/content/en/docs/Developer Guide/file-system-listing.md
@@ -53,17 +53,21 @@ The `AmazonS3FileSystemListing` class can be used for listing objects that exist

#### Configuration

-| Configuration | Description | Type | Default | Importance |
-|---------------|-------------|------|---------|------------|
-| `aws.access.key.id` | AWS Access Key ID AWS | `string` | - | HIGH |
-| `aws.secret.access.key` | AWS Secret Access Key | `string` | - | HIGH |
-| `aws.secret.session.token` | AWS Secret Session Token | `string` | - | HIGH |
-| `aws.s3.region` | The AWS S3 Region, e.g. us-east-1 | `string` | `Regions.DEFAULT_REGION.getName()` | MEDIUM |
-| `aws.s3.service.endpoint` | AWS S3 custom service endpoint. | `string` | - | MEDIUM |
-| `aws.s3.path.style.access.enabled` | Configures the client to use path-style access for all requests. | `string` | - | MEDIUM |
-| `aws.s3.bucket.name` | The name of the Amazon S3 bucket. | `string` | - | HIGH |
-| `aws.s3.bucket.prefix` | The prefix to be used for restricting the listing of the objects in the bucket | `string` | - | MEDIUM |
-| `aws.credentials.provider.class` | The AWSCredentialsProvider to use if no access key id and secret access key is configured. | `class` | `com.amazonaws.auth.EnvironmentVariableCredentialsProvider` | LOW |
+| Configuration | Description | Type | Default | Importance |
+|---------------|-------------|------|---------|------------|
+| `aws.access.key.id` | AWS Access Key ID | `string` | - | HIGH |
+| `aws.secret.access.key` | AWS Secret Access Key | `string` | - | HIGH |
+| `aws.secret.session.token` | AWS Secret Session Token | `string` | - | HIGH |
+| `aws.credentials.provider.class` | The AWSCredentialsProvider to use if no access key id and secret access key is configured. | `class` | `com.amazonaws.auth.EnvironmentVariableCredentialsProvider` | LOW |
+| `aws.s3.region` | The AWS S3 Region, e.g. us-east-1 | `string` | `Regions.DEFAULT_REGION.getName()` | MEDIUM |
+| `aws.s3.service.endpoint` | AWS S3 custom service endpoint. | `string` | - | MEDIUM |
+| `aws.s3.path.style.access.enabled` | Configures the client to use path-style access for all requests. | `string` | - | MEDIUM |
+| `aws.s3.bucket.name` | The name of the Amazon S3 bucket. | `string` | - | HIGH |
+| `aws.s3.bucket.prefix` | The prefix used to restrict the listing of objects in the bucket | `string` | - | MEDIUM |
+| `aws.s3.default.object.storage.class` | The AWS storage class to associate with an S3 object when it is copied by the connector (e.g., during a move operation). Accepted values are: `STANDARD`, `GLACIER`, `REDUCED_REDUNDANCY`, `STANDARD_IA`, `ONEZONE_IA`, `INTELLIGENT_TIERING`, `DEEP_ARCHIVE` | `string` | - | LOW |
+| `aws.s3.backoff.delay.ms` | The base back-off time (in milliseconds) before retrying a request. | `int` | `100` | MEDIUM |
+| `aws.s3.backoff.max.delay.ms` | The maximum back-off time (in milliseconds) before retrying a request. | `int` | `20_000` | MEDIUM |
+| `aws.s3.backoff.max.retries` | The maximum number of retry attempts for failed retryable requests. | `int` | `3` | MEDIUM |
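
As a usage sketch, the new table maps onto a listing configuration like the one below. The `fs.listing.class` property and its fully qualified class name are assumed from the surrounding docs rather than shown in this diff, and the credentials, bucket, and prefix values are placeholders:

```properties
# Hypothetical sketch: list source objects from an S3 bucket
fs.listing.class=io.streamthoughts.kafka.connect.filepulse.fs.AmazonS3FileSystemListing
aws.access.key.id=my-access-key-id
aws.secret.access.key=my-secret-access-key
aws.s3.region=us-east-1
aws.s3.bucket.name=my-source-bucket
# Only list objects under this key prefix
aws.s3.bucket.prefix=incoming/
# Retry tuning for failed retryable S3 requests
aws.s3.backoff.delay.ms=100
aws.s3.backoff.max.delay.ms=20000
aws.s3.backoff.max.retries=3
```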

### Google Cloud Storage

