diff --git a/docs/enterprise/index.md b/docs/enterprise/index.md
index 462fd4283d2..f11ad8632ea 100644
--- a/docs/enterprise/index.md
+++ b/docs/enterprise/index.md
@@ -9,7 +9,7 @@ has_toc: false
 # lakeFS Enterprise
 
 lakeFS Enterprise is an enterprise-ready lakeFS solution that provides a support SLA and additional features to the open-source version of lakeFS. The additional features are:
 
-* [RBAC]({{ site.baseurl }}/reference/rbac.html)
-* [SSO]({{ site.baseurl }}/enterprise/sso.html)
+* [RBAC]({% link reference/rbac.md %})
+* [SSO]({% link enterprise/sso.md %})
 * Support SLA
diff --git a/docs/enterprise/sso.md b/docs/enterprise/sso.md
index f288d9af25c..5f64fca5a85 100644
--- a/docs/enterprise/sso.md
+++ b/docs/enterprise/sso.md
@@ -101,7 +101,7 @@ auth:
 In order for Fluffy to work, the following values must be configured. Update (or override) the following attributes in the chart's `values.yaml` file.
 1. Replace `lakefsConfig.friendly_name_claim_name` with the right claim name.
-1. Replace `lakefsConfig.default_initial_groups` with desired claim name (See [pre-configured]({{ site.baseurl }}/reference/rbac.md#preconfigured-groups) groups for enterprise)
+1. Replace `lakefsConfig.default_initial_groups` with desired claim name (See [pre-configured][rbac-preconfigured] groups for enterprise)
 2. Replace `fluffyConfig.auth.logout_redirect_url` with your full OIDC logout URL (e.g `https://oidc-provider-url.com/logout/path`)
 3. Replace `fluffyConfig.auth.oidc.url` with your OIDC provider URL (e.g `https://oidc-provider-url.com`)
 4. Replace `fluffyConfig.auth.oidc.logout_endpoint_query_parameters` with parameters you'd like to pass to the OIDC provider for logout.
@@ -218,3 +218,5 @@ Notes:
 * Change the `ingress.hosts[0]` from `lakefs.company.com` to a real host (usually same as lakeFS), also update additional references in the file (note: URL path after host if provided should stay unchanged).
 * Update the `ingress` configuration with other optional fields if used
 * Fluffy docker image: replace the `fluffy.image.privateRegistry.secretToken` with real token to dockerhub for the fluffy docker image.
+
+[rbac-preconfigured]: {% link reference/rbac.md %}#preconfigured-groups
diff --git a/docs/howto/deploy/aws.md b/docs/howto/deploy/aws.md
index 28a0153da7a..d4479e3acf1 100644
--- a/docs/howto/deploy/aws.md
+++ b/docs/howto/deploy/aws.md
@@ -93,7 +93,7 @@ Connect to your EC2 instance using SSH:
    blockstore:
      type: s3
    ```
-1. [Download the binary]({% link index.md %}#downloads}) to the EC2 instance.
+1. [Download the binary][downloads] to the EC2 instance.
 1. Run the `lakefs` binary on the EC2 instance:
 
    ```sh
@@ -278,3 +278,5 @@ lakeFS can authenticate with your AWS account using an AWS user, using an access
    ```
 {% include_relative includes/setup.md %}
+
+[downloads]: {% link index.md %}#downloads
diff --git a/docs/howto/deploy/azure.md b/docs/howto/deploy/azure.md
index cfb702bf9bf..706b421c826 100644
--- a/docs/howto/deploy/azure.md
+++ b/docs/howto/deploy/azure.md
@@ -144,7 +144,7 @@ lakeFS stores metadata in a database for its versioning engine. This is done via
 1. Create a new container in the database and select type `partitionKey` as the Partition key (case sensitive).
 1. Pass the endpoint, database name and container name to lakeFS as
-   described in the [configuration guide]({% link reference/configuration.md %}#example--azure-blob-storage).
+   described in the [configuration guide][config-reference-azure-block].
    You can either pass the CosmosDB's account read-write key to lakeFS, or use a managed identity to authenticate to CosmosDB, as described [earlier](#identity-based-credentials).
@@ -293,3 +293,5 @@ Checkout Nginx [documentation](https://kubernetes.github.io/ingress-nginx/user-g
 
 {% include_relative includes/setup.md %}
+
+[config-reference-azure-block]: {% link reference/configuration.md %}#example--azure-blob-storage
diff --git a/docs/howto/import.md b/docs/howto/import.md
index 20d98827ef2..e837295392d 100644
--- a/docs/howto/import.md
+++ b/docs/howto/import.md
@@ -86,7 +86,7 @@ the following policy needs to be attached to the lakeFS S3 service-account to al
-See [Azure deployment]({% link howto/deploy/azure.md %}#storage-account-credentials) on limitations when using account credentials.
+See [Azure deployment][deploy-azure-storage-account-creds] on limitations when using account credentials.
 
 #### Azure Data Lake Gen2
@@ -190,6 +190,9 @@ Another way of getting existing data into a lakeFS repository is by copying it.
 To copy data into lakeFS you can use the following tools:
 
-1. The `lakectl` command line tool - see the [reference]({% link reference/cli.md %}#lakectl-fs-upload) to learn more about using it to copy local data into lakeFS. Using `lakectl fs upload --recursive` you can upload multiple objects together from a given directory.
+1. The `lakectl` command line tool - see the [reference][lakectl-fs-upload] to learn more about using it to copy local data into lakeFS. Using `lakectl fs upload --recursive` you can upload multiple objects together from a given directory.
 1. Using [rclone](./copying.md#using-rclone)
 1. Using Hadoop's [DistCp](./copying.md#using-distcp)
+
+[deploy-azure-storage-account-creds]: {% link howto/deploy/azure.md %}#storage-account-credentials
+[lakectl-fs-upload]: {% link reference/cli.md %}#lakectl-fs-upload
diff --git a/docs/understand/faq.md b/docs/understand/faq.md
index 7e14fa30860..efc8f3e4fd5 100644
--- a/docs/understand/faq.md
+++ b/docs/understand/faq.md
@@ -19,7 +19,7 @@ lakeFS uses a copy-on-write mechanism to avoid data duplication. For example, cr
 We are extremely responsive on our Slack channel, and we make sure to prioritize the most pressing issues for the community. For SLA-based support, please contact us at [support@treeverse.io](mailto:support@treeverse.io).
 
 ### 4. Do you collect data from your active installations?
-We collect anonymous usage statistics to understand the patterns of use and to detect product gaps we may have so we can fix them. This is optional and may be turned off by setting `stats.enabled` to `false`. See the [configuration reference]({% link reference/configuration %}#reference) for more details.
+We collect anonymous usage statistics to understand the patterns of use and to detect product gaps we may have so we can fix them. This is optional and may be turned off by setting `stats.enabled` to `false`. See the [configuration reference][config-ref] for more details.
 
 The data we gather is limited to the following:
@@ -40,3 +40,5 @@ The [Axolotl](https://en.wikipedia.org/wiki/Axolotl){:target="_blank"} – a spe
 
 [copyright](https://en.wikipedia.org/wiki/Axolotl#/media/File:AxolotlBE.jpg)
+
+[config-ref]: {% link reference/configuration.md %}#reference
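
The change above follows one pattern throughout: replace hard-coded `{{ site.baseurl }}` URLs with Jekyll's `{% raw %}{% link %}{% endraw %}` tag, wired through reference-style Markdown links. Because `{% raw %}{% link %}{% endraw %}` takes the *source* file path (`reference/configuration.md`, not the rendered `.html` URL) and fails the site build if that file does not exist, link rot is caught at build time instead of shipping as a dead link. A minimal before/after sketch of the pattern (file names illustrative):

```markdown
<!-- Before: baseurl-prefixed URL; a typo here ships as a 404 -->
See the [configuration guide]({{ site.baseurl }}/reference/configuration.html#reference).

<!-- After: reference-style link resolved by Jekyll at build time -->
See the [configuration guide][config-ref].

[config-ref]: {% link reference/configuration.md %}#reference
```

Collecting the `[label]: …` definitions at the bottom of each file, as these diffs do, keeps long Liquid tags out of the prose and lets several sentences reuse the same target.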