Fix links to use Jekyll %link syntax
That one understands `.md` suffixes, and is arguably nicer to use.

Absolute links in aws.md
arielshaqed committed Aug 2, 2023
1 parent ea3f640 commit 2ec1ae2
Showing 6 changed files with 20 additions and 20 deletions.
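For context, the pattern applied across these files looks roughly like the sketch below (using one of the links actually changed in docs/cloud/index.md). The `{{ site.baseurl }}` form pastes the path into the page verbatim, so the rendered href keeps the `.md` suffix; Jekyll's `{% link %}` tag instead resolves the source file to the URL of the page it generates, and fails the build if the target file is missing.

```liquid
<!-- Before: literal concatenation; the .md suffix leaks into the rendered href -->
* [Role-Based Access Control]({{ site.baseurl }}/reference/rbac.md)

<!-- After: Jekyll resolves reference/rbac.md to its generated URL at build time -->
* [Role-Based Access Control]({% link reference/rbac.md %})
```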
2 changes: 1 addition & 1 deletion docs/cloud/index.md
@@ -11,7 +11,7 @@ has_toc: false
[lakeFS Cloud](https://lakefs.cloud) is a fully-managed lakeFS solution provided by Treeverse, implemented using our best practices, providing high availability, auto-scaling, support and enterprise-ready features.

## lakeFS Cloud Features
-* [Role-Based Access Control]({{ site.baseurl }}/reference/rbac.md)
+* [Role-Based Access Control]({% link reference/rbac.md %})
* [Auditing](./auditing.md)
* [Single-Sign-On](./sso.md) (including support for SAML, OIDC, AD FS, Okta, and Azure AD)
* [Managed Garbage Collection](./managed-gc.md)
8 changes: 4 additions & 4 deletions docs/howto/deploy/aws.md
@@ -64,7 +64,7 @@ By default, lakeFS will create the required DynamoDB table if it does not alread
}
```

-💡 You can also use lakeFS with PostgreSQL instead of DynamoDB! See the [configuration reference]({{ site.baseurl }}/reference/configuration.md) for more information.
+💡 You can also use lakeFS with PostgreSQL instead of DynamoDB! See the [configuration reference]({% link reference/configuration.md %}) for more information.
{: .note }

## Run the lakeFS server
@@ -93,9 +93,9 @@ Connect to your EC2 instance using SSH:
blockstore:
type: s3
```
-1. [Download the binary]({{ site.baseurl }}/index.md#downloads) to the EC2 instance.
+1. [Download the binary]({% link index.md %}#downloads}) to the EC2 instance.
1. Run the `lakefs` binary on the EC2 instance:

```sh
lakefs --config config.yaml run
```
Expand Down Expand Up @@ -131,7 +131,7 @@ To install lakeFS with Helm:
```
1. Fill in the missing values and save the file as `conf-values.yaml`. For more configuration options, see our Helm chart [README](https://github.com/treeverse/charts/blob/master/charts/lakefs/README.md#custom-configuration){:target="_blank"}.

-The `lakefsConfig` parameter is the lakeFS configuration documented [here](https://docs.lakefs.io/reference/configuration.html) but without sensitive information.
+The `lakefsConfig` parameter is the lakeFS configuration documented [here]({% link reference/configuration.md%}) but without sensitive information.
Sensitive information like `databaseConnectionString` is given through separate parameters, and the chart will inject it into Kubernetes secrets.
{: .note }

4 changes: 2 additions & 2 deletions docs/howto/deploy/azure.md
@@ -144,7 +144,7 @@ lakeFS stores metadata in a database for its versioning engine. This is done via
1. Create a new container in the database and select type
`partitionKey` as the Partition key (case sensitive).
1. Pass the endpoint, database name and container name to lakeFS as
-   described in the [configuration guide]({{ site.baseurl }}/reference/configuration.md#example--azure-blob-storage).
+   described in the [configuration guide]({% link reference/configuration.md %}#example--azure-blob-storage).
You can either pass the CosmosDB's account read-write key to lakeFS, or
use a managed identity to authenticate to CosmosDB, as described
[earlier](#identity-based-credentials).
@@ -292,4 +292,4 @@ Checkout Nginx [documentation](https://kubernetes.github.io/ingress-nginx/user-g



-{% include_relative includes/setup.md %}
+{% include_relative includes/setup.md %}
20 changes: 10 additions & 10 deletions docs/howto/import.md
@@ -23,10 +23,10 @@ To avoid copying the data, lakeFS offers [Zero-copy import](#zero-copy-import).

To run import you need the following permissions:
`fs:WriteObject`, `fs:CreateMetaRange`, `fs:CreateCommit`, `fs:ImportFromStorage` and `fs:ImportCancel`.
-The first 3 permissions are available by default to users in the default Developers group ([RBAC]({{ site.baseurl }}/reference/rbac.md)) or the
-Writers group ([ACL]({{ site.baseurl }}/reference/access-control-lists.md)). The `Import*` permissions enable the user to import data from any location of the storage
+The first 3 permissions are available by default to users in the default Developers group ([RBAC]({% link reference/rbac.md %})) or the
+Writers group ([ACL]({% link reference/access-control-lists.md %})). The `Import*` permissions enable the user to import data from any location of the storage
provider that lakeFS has access to and cancel the operation if needed.
-Thus, it's only available to users in group Supers ([ACL]({{ site.baseurl }}/reference/access-control-lists.md)) or SuperUsers([RBAC]({{ site.baseurl }}/reference/rbac.md)).
+Thus, it's only available to users in group Supers ([ACL]({% link reference/access-control-lists.md %})) or SuperUsers([RBAC]({% link reference/rbac.md %})).
RBAC installations can modify policies to add that permission to any group, such as Developers.


@@ -86,7 +86,7 @@ the following policy needs to be attached to the lakeFS S3 service-account to al

</div>
<div markdown="1" id="azure-storage">
-See [Azure deployment]({{ site.baseurl }}/howto/deploy/azure.md#storage-account-credentials) on limitations when using account credentials.
+See [Azure deployment]({% link howto/deploy/azure.md %}#storage-account-credentials) on limitations when using account credentials.

#### Azure Data Lake Gen2

@@ -115,7 +115,7 @@ To import using the UI, lakeFS must have permissions to list the objects in the
1. In your repository's main page, click the _Import_ button to open the import dialog:
-![img.png]({{ site.baseurl }}/assets/img/UI-Import-Dialog.png)
+![img.png]({% link assets/img/UI-Import-Dialog.png %})
2. Under _Import from_, fill in the location on your object store you would like to import from.
3. Fill in the import destination in lakeFS
@@ -131,7 +131,7 @@ Once the import is complete, the changes are merged into the destination branch.
### _lakectl import_
-Prerequisite: have [lakectl]({{ site.baseurl }}/reference/cli.html) installed.
+Prerequisite: have [lakectl]({% link reference/cli.md %}) installed.
The _lakectl import_ command acts the same as the UI import wizard. It commits the changes to a dedicated branch, with an optional
flag to merge the changes to `<branch_name>`.
@@ -170,7 +170,7 @@ lakectl import \
1. Importing is only possible from the object storage service in which your installation stores its data. For example, if lakeFS is configured to use S3, you cannot import data from Azure.
2. Import is available for S3, GCP and Azure.
3. For security reasons, if you are running lakeFS on top of your local disk, you need to enable the import feature explicitly.
-To do so, set the `blockstore.local.import_enabled` to `true` and specify the allowed import paths in `blockstore.local.allowed_external_prefixes` (see [configuration reference]({{ site.baseurl }}/reference/configuration.md)).
+To do so, set the `blockstore.local.import_enabled` to `true` and specify the allowed import paths in `blockstore.local.allowed_external_prefixes` (see [configuration reference]({% link reference/configuration.md %})).
Since there are some differences between object-stores and file-systems in the way directories/prefixes are treated, local import is allowed only for directories.

### Working with imported data
@@ -190,6 +190,6 @@ Another way of getting existing data into a lakeFS repository is by copying it.

To copy data into lakeFS you can use the following tools:

-1. The `lakectl` command line tool - see the [reference]({{ site.baseurl }}/reference/cli.html#lakectl-fs-upload) to learn more about using it to copy local data into lakeFS. Using `lakectl fs upload --recursive` you can upload multiple objects together from a given directory.
-1. Using [rclone]({{ site.baseurl }}/howto/copying.md#using-rclone)
-1. Using Hadoop's [DistCp]({{ site.baseurl }}/howto/copying.md#using-distcp)
+1. The `lakectl` command line tool - see the [reference]({% link reference/cli.md %}#lakectl-fs-upload) to learn more about using it to copy local data into lakeFS. Using `lakectl fs upload --recursive` you can upload multiple objects together from a given directory.
+1. Using [rclone](./copying.md#using-rclone)
+1. Using Hadoop's [DistCp](./copying.md#using-distcp)
2 changes: 1 addition & 1 deletion docs/project/contributing.md
@@ -36,7 +36,7 @@ Our [Go release workflow](https://github.com/treeverse/lakeFS/blob/master/.githu

1. Install the required dependencies for your OS:
1. [Git](https://git-scm.com/downloads)
-1. [GNU make](https://www.gnu.org/software/make/) (probably best to install from your OS package manager such as [apt](https://en.wikipedia.org/wiki/APT_(software)) or [brew](https://brew.sh/))
+1. [GNU make](https://www.gnu.org/software/make) (probably best to install from your OS package manager such as [apt](https://en.wikipedia.org/wiki/APT_(software)) or [brew](https://brew.sh/))
1. [Docker](https://docs.docker.com/get-docker/)
1. [Go](https://golang.org/doc/install)
1. [Node.js & npm](https://www.npmjs.com/get-npm)
4 changes: 2 additions & 2 deletions docs/understand/faq.md
@@ -13,13 +13,13 @@ redirect_from:
lakeFS is completely free, open-source, and licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) License. We maintain a public [product roadmap]({{ site.baseurl }}/project/index.html#roadmap) and a [Slack channel](https://lakefs.io/slack) for open discussions.

### 2. How does lakeFS data versioning work?
-lakeFS uses a copy-on-write mechanism to avoid data duplication. For example, creating a new branch is a metadata-only operation: no objects are actually copied. Only when an object changes does lakeFS create another [version of the data](https://lakefs.io/data-versioning/) in the storage. For more information, see [Versioning internals]({{ site.baseurl }}/understand/how/versioning-internals.md).
+lakeFS uses a copy-on-write mechanism to avoid data duplication. For example, creating a new branch is a metadata-only operation: no objects are actually copied. Only when an object changes does lakeFS create another [version of the data](https://lakefs.io/data-versioning/) in the storage. For more information, see [Versioning internals]({% link understand/how/versioning-internals.md%}).

### 3. How do I get support for my lakeFS installation?
We are extremely responsive on our Slack channel, and we make sure to prioritize the most pressing issues for the community. For SLA-based support, please contact us at [[email protected]](mailto:[email protected]).

### 4. Do you collect data from your active installations?
-We collect anonymous usage statistics to understand the patterns of use and to detect product gaps we may have so we can fix them. This is optional and may be turned off by setting `stats.enabled` to `false`. See the [configuration reference]({{ site.baseurl }}/reference/configuration.md#reference) for more details.
+We collect anonymous usage statistics to understand the patterns of use and to detect product gaps we may have so we can fix them. This is optional and may be turned off by setting `stats.enabled` to `false`. See the [configuration reference]({% link reference/configuration.md %}#reference) for more details.


The data we gather is limited to the following:
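A quick way to confirm that every converted link resolves is to build the docs locally; assuming the docs directory carries a standard Jekyll setup (Gemfile and _config.yml under docs/, not shown in this diff), something like:

```sh
# From the repository root: build the site. Jekyll aborts with an error
# if any {% link %} tag points at a file that does not exist.
cd docs
bundle exec jekyll build
```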
