Docs update: metaclient version #6291

Merged · 3 commits · Jul 31, 2023
2 changes: 1 addition & 1 deletion docs/howto/export.md
@@ -56,7 +56,7 @@ The complete `spark-submit` command would look as follows:
spark-submit --conf spark.hadoop.lakefs.api.url=https://<LAKEFS_ENDPOINT>/api/v1 \
--conf spark.hadoop.lakefs.api.access_key=<LAKEFS_ACCESS_KEY_ID> \
--conf spark.hadoop.lakefs.api.secret_key=<LAKEFS_SECRET_ACCESS_KEY> \
- --packages io.lakefs:lakefs-spark-client-301_2.12:0.9.0 \
+ --packages io.lakefs:lakefs-spark-client-301_2.12:0.9.1 \
--class io.treeverse.clients.Main export-app example-repo s3://example-bucket/prefix \
--branch=example-branch
```
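As a side note for readers of this diff, the same export job can also be launched from the assembled jar rather than the package coordinates. A sketch, assuming the 0.9.1 assembly published under the same `treeverse-clients-us-east` path pattern that the GC pages below use:

```bash
# Sketch: export via the assembled jar instead of --packages.
# Jar URL assumed from the pattern used in the GC examples in this PR.
spark-submit --conf spark.hadoop.lakefs.api.url=https://<LAKEFS_ENDPOINT>/api/v1 \
    --conf spark.hadoop.lakefs.api.access_key=<LAKEFS_ACCESS_KEY_ID> \
    --conf spark.hadoop.lakefs.api.secret_key=<LAKEFS_SECRET_ACCESS_KEY> \
    --class io.treeverse.clients.Main \
    http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-301/0.9.1/lakefs-spark-client-301-assembly-0.9.1.jar \
    export-app example-repo s3://example-bucket/prefix \
    --branch=example-branch
```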
15 changes: 8 additions & 7 deletions docs/howto/garbage-collection-committed.md
@@ -11,8 +11,9 @@ has_children: false

{: .warning-title }
> Deprecation notice
>
- > This page describes a deprecated feature. Please visit the new [garbage collection documentation](./garbage-collection.md).
+ > This feature will be available up to version 0.9.1 of the lakeFS metadata client. It will be discontinued in subsequent versions.
+ > Please visit the new [garbage collection documentation](./garbage-collection.md).

By default, lakeFS keeps all your objects forever. This allows you to travel back in time to previous versions of your data.
However, sometimes you may want to hard-delete your objects - namely, delete them from the underlying storage.
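Not part of this diff, but useful context for the page being edited: which objects get hard-deleted is governed by per-branch retention rules. A minimal sketch of defining them, assuming `lakectl`'s `gc set-config` command and the documented `default_retention_days`/`branches` rules format:

```bash
# Sketch: set retention rules before running the GC job below
# (assumes lakectl is installed and configured; verify flag names).
cat > gc-rules.json <<'EOF'
{
  "default_retention_days": 21,
  "branches": [
    {"branch_id": "main", "retention_days": 28}
  ]
}
EOF
lakectl gc set-config lakefs://example-repo --filename gc-rules.json
```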
@@ -115,7 +116,7 @@ spark-submit --class io.treeverse.clients.GarbageCollector \
-c spark.hadoop.lakefs.api.secret_key=<LAKEFS_SECRET_KEY> \
-c spark.hadoop.fs.s3a.access.key=<S3_ACCESS_KEY> \
-c spark.hadoop.fs.s3a.secret.key=<S3_SECRET_KEY> \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.0/lakefs-spark-client-312-hadoop3-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar \
example-repo us-east-1
```
</div>
@@ -128,7 +129,7 @@ spark-submit --class io.treeverse.clients.GarbageCollector \
-c spark.hadoop.lakefs.api.secret_key=<LAKEFS_SECRET_KEY> \
-c spark.hadoop.fs.s3a.access.key=<S3_ACCESS_KEY> \
-c spark.hadoop.fs.s3a.secret.key=<S3_SECRET_KEY> \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-301/0.9.0/lakefs-spark-client-301-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-301/0.9.1/lakefs-spark-client-301-assembly-0.9.1.jar \
example-repo us-east-1
```
</div>
@@ -144,7 +145,7 @@ spark-submit --class io.treeverse.clients.GarbageCollector \
-c spark.hadoop.lakefs.api.access_key=<LAKEFS_ACCESS_KEY> \
-c spark.hadoop.lakefs.api.secret_key=<LAKEFS_SECRET_KEY> \
-c spark.hadoop.fs.azure.account.key.<AZURE_STORAGE_ACCOUNT>.dfs.core.windows.net=<AZURE_STORAGE_ACCESS_KEY> \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.0/lakefs-spark-client-312-hadoop3-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar \
example-repo
```

@@ -161,7 +162,7 @@ spark-submit --class io.treeverse.clients.GarbageCollector \
-c spark.hadoop.fs.azure.account.oauth2.client.id.<AZURE_STORAGE_ACCOUNT>.dfs.core.windows.net=<application-id> \
-c spark.hadoop.fs.azure.account.oauth2.client.secret.<AZURE_STORAGE_ACCOUNT>.dfs.core.windows.net=<service-credential-key> \
-c spark.hadoop.fs.azure.account.oauth2.client.endpoint.<AZURE_STORAGE_ACCOUNT>.dfs.core.windows.net=https://login.microsoftonline.com/<directory-id>/oauth2/token \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.0/lakefs-spark-client-312-hadoop3-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar \
example-repo
```

@@ -189,7 +190,7 @@ spark-submit --class io.treeverse.clients.GarbageCollector \
-c spark.hadoop.fs.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem \
-c spark.hadoop.fs.AbstractFileSystem.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS \
-c spark.hadoop.lakefs.gc.do_sweep=false \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.0/lakefs-spark-client-312-hadoop3-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar \
example-repo
```

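The GCS example above runs the collector in mark-only mode (`lakefs.gc.do_sweep=false`). A sweep-only companion pass, consuming the mark ID reported by that run, would look roughly as follows; this is a sketch, and the `do_mark`/`mark_id` option names are assumed from the GC documentation rather than taken from this diff:

```bash
# Sketch: sweep-only run over a previous mark (option names assumed;
# verify against the garbage collection docs before use).
spark-submit --class io.treeverse.clients.GarbageCollector \
  -c spark.hadoop.lakefs.api.url=https://<LAKEFS_ENDPOINT>/api/v1 \
  -c spark.hadoop.lakefs.api.access_key=<LAKEFS_ACCESS_KEY> \
  -c spark.hadoop.lakefs.api.secret_key=<LAKEFS_SECRET_KEY> \
  -c spark.hadoop.lakefs.gc.do_mark=false \
  -c spark.hadoop.lakefs.gc.mark_id=<MARK_ID> \
  http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar \
  example-repo
```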
5 changes: 3 additions & 2 deletions docs/howto/garbage-collection-uncommitted.md
@@ -11,8 +11,9 @@ has_children: false

{: .warning-title }
> Deprecation notice
>
- > This page describes a deprecated feature. Please visit the new [garbage collection documentation](./garbage-collection.md).
+ > This feature will be available up to version 0.9.1 of the lakeFS metadata client. It will be discontinued in subsequent versions.
+ > Please visit the new [garbage collection documentation](./garbage-collection.md).

Deletion of objects that were never committed was always a difficulty for lakeFS, see
[#1933](https://github.com/treeverse/lakeFS/issues/1933) for more details. Examples for
10 changes: 5 additions & 5 deletions docs/howto/garbage-collection.md
@@ -116,7 +116,7 @@ spark-submit --class io.treeverse.gc.GarbageCollection \
-c spark.hadoop.lakefs.api.secret_key=<LAKEFS_SECRET_KEY> \
-c spark.hadoop.fs.s3a.access.key=<S3_ACCESS_KEY> \
-c spark.hadoop.fs.s3a.secret.key=<S3_SECRET_KEY> \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.0/lakefs-spark-client-312-hadoop3-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar \
example-repo us-east-1
```
</div>
@@ -129,7 +129,7 @@ spark-submit --class io.treeverse.gc.GarbageCollection \
-c spark.hadoop.lakefs.api.secret_key=<LAKEFS_SECRET_KEY> \
-c spark.hadoop.fs.s3a.access.key=<S3_ACCESS_KEY> \
-c spark.hadoop.fs.s3a.secret.key=<S3_SECRET_KEY> \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-301/0.9.0/lakefs-spark-client-301-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-301/0.9.1/lakefs-spark-client-301-assembly-0.9.1.jar \
example-repo us-east-1
```
</div>
@@ -145,7 +145,7 @@ spark-submit --class io.treeverse.gc.GarbageCollection \
-c spark.hadoop.lakefs.api.access_key=<LAKEFS_ACCESS_KEY> \
-c spark.hadoop.lakefs.api.secret_key=<LAKEFS_SECRET_KEY> \
-c spark.hadoop.fs.azure.account.key.<AZURE_STORAGE_ACCOUNT>.dfs.core.windows.net=<AZURE_STORAGE_ACCESS_KEY> \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.0/lakefs-spark-client-312-hadoop3-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar \
example-repo
```

@@ -162,7 +162,7 @@ spark-submit --class io.treeverse.gc.GarbageCollection \
-c spark.hadoop.fs.azure.account.oauth2.client.id.<AZURE_STORAGE_ACCOUNT>.dfs.core.windows.net=<application-id> \
-c spark.hadoop.fs.azure.account.oauth2.client.secret.<AZURE_STORAGE_ACCOUNT>.dfs.core.windows.net=<service-credential-key> \
-c spark.hadoop.fs.azure.account.oauth2.client.endpoint.<AZURE_STORAGE_ACCOUNT>.dfs.core.windows.net=https://login.microsoftonline.com/<directory-id>/oauth2/token \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.0/lakefs-spark-client-312-hadoop3-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar \
example-repo
```

@@ -190,7 +190,7 @@ spark-submit --class io.treeverse.gc.GarbageCollection \
-c spark.hadoop.fs.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem \
-c spark.hadoop.fs.AbstractFileSystem.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS \
-c spark.hadoop.lakefs.gc.do_sweep=false \
- http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.0/lakefs-spark-client-312-hadoop3-assembly-0.9.0.jar \
+ http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar \
example-repo
```

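A quick sanity check for a version-bump PR like this one is that every new artifact URL actually resolves. A small sketch over the two jar paths referenced above:

```bash
# Sketch: confirm the 0.9.1 assemblies referenced in this PR are published.
base=http://treeverse-clients-us-east.s3-website-us-east-1.amazonaws.com
for jar in lakefs-spark-client-301/0.9.1/lakefs-spark-client-301-assembly-0.9.1.jar \
           lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar; do
  curl -sfI "$base/$jar" > /dev/null && echo "OK      $jar" || echo "MISSING $jar"
done
```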
8 changes: 4 additions & 4 deletions docs/reference/spark-client.md
@@ -33,23 +33,23 @@ Start Spark Shell / PySpark with the `--packages` flag:
higher versions.

```bash
- spark-shell --packages io.lakefs:lakefs-spark-client-301_2.12:0.9.0
+ spark-shell --packages io.lakefs:lakefs-spark-client-301_2.12:0.9.1
```

Alternatively an assembled jar is available on S3, at
- `s3://treeverse-clients-us-east/lakefs-spark-client-301/0.9.0/lakefs-spark-client-301-assembly-0.9.0.jar`
+ `s3://treeverse-clients-us-east/lakefs-spark-client-301/0.9.1/lakefs-spark-client-301-assembly-0.9.1.jar`
</div>

<div markdown="1" id="packages-3-hadoop3">
This client is compiled for Spark 3.1.2 with Hadoop 3.2.1, but can work for other Spark
versions and higher Hadoop versions.

```bash
- spark-shell --packages io.lakefs:lakefs-spark-client-312-hadoop3-assembly_2.12:0.9.0
+ spark-shell --packages io.lakefs:lakefs-spark-client-312-hadoop3-assembly_2.12:0.9.1
```

Alternatively an assembled jar is available on S3, at
- `s3://treeverse-clients-us-east/lakefs-spark-client-312-hadoop3/0.9.0/lakefs-spark-client-312-hadoop3-assembly-0.9.0.jar`
+ `s3://treeverse-clients-us-east/lakefs-spark-client-312-hadoop3/0.9.1/lakefs-spark-client-312-hadoop3-assembly-0.9.1.jar`
</div>
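Since this section covers both Spark Shell and PySpark, the PySpark form of the same flag uses the identical coordinates updated in this PR:

```bash
# PySpark equivalent of the spark-shell invocation above.
pyspark --packages io.lakefs:lakefs-spark-client-312-hadoop3-assembly_2.12:0.9.1
```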

## Configuration