[doc] Backups and DDLs (#23840)
* Backups and DDLs

* typo

* copy to stable

* icons

* icons
ddhodge authored Sep 12, 2024
1 parent e791c40 commit e69d8cb
Showing 19 changed files with 221 additions and 159 deletions.
4 changes: 1 addition & 3 deletions docs/content/preview/manage/_index.md
@@ -5,8 +5,6 @@ linkTitle: Manage
description: Managing YugabyteDB deployments
image: /images/section_icons/quick_start/sample_apps.png
headcontent: Manage your YugabyteDB deployment
aliases:
- /preview/manage/
menu:
preview:
identifier: manage
@@ -20,7 +18,7 @@ type: indexpage
title="Back up and restore"
body="Back up and restore data in YugabyteDB."
href="backup-restore/"
icon="/images/section_icons/manage/backup.png">}}
icon="fa-light fa-life-ring">}}

{{<index/item
title="Migrate data"
25 changes: 18 additions & 7 deletions docs/content/preview/manage/backup-restore/_index.md
@@ -3,7 +3,7 @@ title: Back up and restore data
headerTitle: Backup and restore
linkTitle: Backup and restore
description: Back up and restore YugabyteDB
image: /images/section_icons/manage/enterprise.png
image: fa-light fa-life-ring
headcontent: Create backups and restore your data
aliases:
- /manage/backup-restore/
@@ -20,28 +20,39 @@ Backup and restoration is the process of creating and storing copies of your data

Unlike traditional single-instance databases, YugabyteDB is designed for fault tolerance. By maintaining at least three copies of your data across multiple data regions or multiple clouds, it ensures that no data is lost if a single node or a single data region becomes unavailable. Thus, with YugabyteDB, you mainly use backups to:

* Recover from a user or software error, such as accidental table removal.
* Recover from a disaster scenario, like a full cluster failure or a simultaneous outage of multiple data regions. Even though such scenarios are extremely unlikely, it's still a best practice to maintain a way to recover from them.
* Maintain a remote copy of data, as required by data protection regulations.
- Recover from a user or software error, such as accidental table removal.
- Recover from a disaster scenario, like a full cluster failure or a simultaneous outage of multiple data regions. Even though such scenarios are extremely unlikely, it's still a best practice to maintain a way to recover from them.
- Maintain a remote copy of data, as required by data protection regulations.

## Best practices

- Don't perform cluster operations at the same time as your scheduled backup.
- Configure your maintenance window and backup schedule so that they do not conflict.
- Performing a backup or restore places load on the cluster. Perform backup operations when the cluster isn't experiencing heavy traffic. Backing up during times of heavy traffic can temporarily degrade application performance and increase how long the backup takes.
- Avoid running a backup during, or immediately before, a scheduled maintenance window.

{{< warning title="Backups and high DDL activity" >}}
In some circumstances, a backup can fail during high DDL activity. Avoid performing major DDL operations during scheduled backups or while a backup is in progress.
{{< /warning >}}
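For example, before starting a manual backup you can check for in-flight DDL from the YSQL shell. The following is a minimal sketch (the node address is a placeholder; `pg_stat_activity` is available because YSQL is PostgreSQL-compatible):

```sh
# List sessions currently executing DDL statements (adjust the filter to your workload).
./bin/ysqlsh -h <node-ip> -c "SELECT pid, state, query FROM pg_stat_activity WHERE query ~* '^(CREATE|ALTER|DROP|TRUNCATE)';"
```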

{{<index/block>}}

{{<index/item
title="Export and import"
body="Export and import data using SQL or CQL scripts."
href="export-import-data/"
icon="/images/section_icons/manage/export_import.png">}}
icon="fa-light fa-file-import">}}

{{<index/item
title="Distributed snapshots"
body="Back up and restore data using distributed snapshots."
href="snapshot-ysql/"
icon="/images/section_icons/manage/backup.png">}}
icon="fa-light fa-camera">}}

{{<index/item
title="Point-in-time recovery"
body="Restore data to a particular point in time."
href="point-in-time-recovery/"
icon="/images/section_icons/manage/pitr.png">}}
icon="fa-light fa-timeline-arrow">}}

{{</index/block>}}
@@ -53,6 +53,10 @@ You can't restore a backup to a cluster with a version of YugabyteDB that is earlier

Backups are not supported for Sandbox clusters.

{{< warning title="Backups and high DDL activity" >}}
In some circumstances, a backup can fail during high DDL activity. Avoid performing major DDL operations during scheduled backups or while a backup is in progress.
{{< /warning >}}

## Recommendations

- Don't perform cluster operations at the same time as your scheduled backup.
@@ -3,7 +3,7 @@ title: Back up and restore universes
headerTitle: Back up and restore universes
linkTitle: Back up universes
description: Use YugabyteDB Anywhere to back up and restore YugabyteDB universe data.
image: /images/section_icons/manage/backup.png
image: fa-light fa-life-ring
headcontent: Use YugabyteDB Anywhere to back up and restore YugabyteDB universes and data
aliases:
- /preview/manage/enterprise-edition/backup-restore
@@ -32,42 +32,53 @@ You can use YugabyteDB to schedule and manage backups of your universe data. This
Configurable performance parameters and incremental backups are mediated using the yb-controller process, which is only available in YugabyteDB Anywhere v2.16 or later for universes with YugabyteDB version 2.16 or later.
{{< /note >}}

## Best practices

- Don't perform cluster operations at the same time as your scheduled backup.
- Configure your maintenance window and backup schedule so that they do not conflict.
- Performing a backup or restore places load on the cluster. Perform backup operations when the cluster isn't experiencing heavy traffic. Backing up during times of heavy traffic can temporarily degrade application performance and increase how long the backup takes.
- Avoid running a backup during, or immediately before, a scheduled maintenance window.

{{< warning title="Backups and high DDL activity" >}}
In some circumstances, a backup can fail during high DDL activity. Avoid performing major DDL operations during scheduled backups or while a backup is in progress. To view active tasks, navigate to **Tasks**.
{{< /warning >}}

{{<index/block>}}

{{<index/item
title="Configure backup storage"
body="Configure the storage location for your backups."
href="configure-backup-storage/"
icon="/images/section_icons/manage/backup.png">}}
icon="fa-light fa-bucket">}}

{{<index/item
title="Schedule universe data backups"
body="Create backup schedules to regularly back up universe data."
href="schedule-data-backups/"
icon="/images/section_icons/explore/high_performance.png">}}
icon="fa-light fa-calendar">}}

{{<index/item
title="Back up universe data"
body="Back up universes and create incremental backups."
href="back-up-universe-data/"
icon="/images/section_icons/manage/backup.png">}}
icon="fa-light fa-down-to-bracket">}}

{{<index/item
title="Restore universe data"
body="Restore from full and incremental backups."
href="restore-universe-data/"
icon="/images/section_icons/manage/backup.png">}}
icon="fa-light fa-up-to-bracket">}}

{{<index/item
title="Perform point-in-time recovery"
body="Recover universe data from a specific point in time."
href="pitr/"
icon="/images/section_icons/manage/pitr.png">}}
icon="fa-light fa-timeline-arrow">}}

{{<index/item
title="Disaster recovery"
body="Fail over to a backup universe in case of unplanned outages."
href="disaster-recovery/"
icon="/images/section_icons/manage/pitr.png">}}
icon="fa-light fa-sun-cloud">}}

{{</index/block>}}
@@ -134,9 +134,9 @@ s3://user_bucket

| Component | Description |
| :-------- | :---------- |
| Storage address | The name of the bucket as specified in the [backup configuration](../configure-backup-storage/) that was used for the backup. |
| Storage address | The name of the bucket as specified in the [storage configuration](../configure-backup-storage/) that was used for the backup. |
| Sub-directories | The path of the sub-folders (if any) in a bucket. |
| Universe UUID | The UUID of the universe that was backed up. You can move this folder to a different location, but to successfully restore, do not modify this folder or any of its contents. |
| Universe UUID | The UUID of the universe that was backed up. You can move this folder to a different location, but to successfully restore, do not modify this folder, or any of its contents. |
| Backup series name and UUID | The name of the backup series and YBA-generated UUID. The UUID ensures that YBA can correctly identify the appropriate folder. |
| Backup type | `full` or `incremental`. Indicates whether the subfolders contain full or incremental backups. |
| Creation time | The time the backup was started. |
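As an illustration, a full backup location such as the following hypothetical example breaks down into those components:

```sh
# s3://user_bucket                                 <- storage address (bucket from the storage configuration)
#   /some/sub/folders                              <- sub-directories
#   /univ-a85b5b01-6e0b-4a24-b088-478dafff94e4     <- universe UUID
#   /ybc_backup-92317948b8e444ba150616bf182a061    <- backup series name and UUID
#   /full                                          <- backup type (full or incremental)
#   /2024-01-04T12:11:03                           <- creation time
#   /...                                           <- further levels (for example, the keyspace) follow
```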
@@ -18,26 +18,6 @@ Before you can back up universes, you need to configure a storage location for your backups.

Depending on your environment, you can save your YugabyteDB universe data to a variety of storage solutions.

## Local storage

If your YugabyteDB universe has one node, you can create a local directory on a T-Server to which to back up, as follows:

1. Navigate to **Universes**, select your universe, and then select **Nodes**.

2. Click **Connect**.

3. Take note of the services and endpoints information displayed in the **Connect** dialog, as shown in the following illustration:

![Connect dialog](/images/yp/cloud-provider-local-backup1.png)

4. While connected using `ssh`, create a directory `/backup` and then change the owner to `yugabyte`, as follows:

```sh
sudo mkdir /backup; sudo chown yugabyte /backup
```

If there is more than one node, you should consider using a network file system mounted on each server.

## Amazon S3

You can configure Amazon S3 as your backup target, as follows:
@@ -48,7 +28,7 @@ You can configure Amazon S3 as your backup target, as follows:

![S3 Backup](/images/yp/cloud-provider-configuration-backup-aws.png)

3. Use the **Configuration Name** field to provide a meaningful name for your backup configuration.
3. Use the **Configuration Name** field to provide a meaningful name for your storage configuration.

4. Enable **IAM Role** to use the YugabyteDB Anywhere instance's Identity Access Management (IAM) role for the S3 backup. See [Required S3 IAM permissions](#required-s3-iam-permissions).

@@ -80,33 +60,17 @@ The following S3 IAM permissions are required:
"s3:GetBucketLocation"
```
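To sanity-check that a set of credentials has this access before saving the configuration, you can run a quick test with the AWS CLI. This is a hedged sketch; the bucket name is hypothetical:

```sh
# Verify location, list, write, and delete access against the backup bucket.
aws s3api get-bucket-location --bucket my-backup-bucket
aws s3api list-objects-v2 --bucket my-backup-bucket --max-items 1
echo ok | aws s3 cp - s3://my-backup-bucket/yba-permission-check.txt
aws s3 rm s3://my-backup-bucket/yba-permission-check.txt
```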

## Network File System

You can configure Network File System (NFS) as your backup target, as follows:

1. Navigate to **Integrations > Backup > Network File System**.

2. Click **Create NFS Backup** to access the configuration form shown in the following illustration:

![NFS Configuration](/images/yp/cloud-provider-configuration-backup-nfs.png)

3. Use the **Configuration Name** field to provide a meaningful name for your backup configuration.

4. Complete the **NFS Storage Path** field by entering `/backup` or another directory that provides read, write, and access permissions to the SSH user of the YugabyteDB Anywhere instance.

5. Click **Save**.

## Google Cloud Storage

You can configure Google Cloud Storage (GCS) as your backup target, as follows:

1. Navigate to **Integrations > Backup > Google Cloud Storage**.

1. Click **Create GCS Backup** to access the configuration form shown in the following illustration:
1. Click **Create GCS Backup**.

![GCS Configuration](/images/yp/cloud-provider-configuration-backup-gcs-stable.png)

1. Use the **Configuration Name** field to provide a meaningful name for your backup configuration.
1. Use the **Configuration Name** field to provide a meaningful name for your storage configuration.

1. Enter the URI of your GCS bucket in the **GCS Bucket** field. For example, `gs://gcp-bucket/test_backups`.

@@ -149,9 +113,27 @@ To enable GCP IAM during universe creation, refer to [Configure Helm overrides](

To upgrade an existing universe with GCP IAM, refer to [Upgrade universes for GKE service account-based IAM support](../../manage-deployments/edit-helm-overrides/#upgrade-universes-for-gke-service-account-based-iam).

## Network File System

You can configure Network File System (NFS) as your backup target, as follows:

1. Navigate to **Integrations > Backup > Network File System**.

2. Click **Create NFS Backup** to access the configuration form shown in the following illustration:

![NFS Configuration](/images/yp/cloud-provider-configuration-backup-nfs.png)

3. Use the **Configuration Name** field to provide a meaningful name for your storage configuration.

4. Complete the **NFS Storage Path** field by entering `/backup` or another directory that provides read, write, and access permissions to the SSH user of the YugabyteDB Anywhere instance (a quick verification sketch follows these steps).

5. Click **Save**.
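Before saving, you can confirm that the path is actually writable by that user. The following is a minimal sketch, assuming the export is mounted at `/backup` and the SSH user is `yugabyte`:

```sh
# Run on the YugabyteDB Anywhere host (and typically on each database node as well).
sudo chown yugabyte /backup
sudo -u yugabyte touch /backup/.yba_write_test && sudo -u yugabyte rm /backup/.yba_write_test && echo "backup path is writable"
```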

## Azure Storage

You can configure Azure as your backup target, as follows:
You can configure Azure as your backup target.

### Configure storage on Azure

1. Create a storage account in Azure, as follows:

@@ -179,12 +161,38 @@ You can configure Azure as your backup target, as follows:

![Azure Shared Access Signature page](/images/yp/cloud-provider-configuration-backup-azure-generate-token.png)

1. On your YugabyteDB Anywhere instance, provide the container URL and SAS token for creating a backup, as follows:
### Create an Azure storage configuration

In YugabyteDB Anywhere:

1. Navigate to **Integrations > Backup > Azure Storage**.

1. Click **Create AZ Backup**.

![Azure Configuration](/images/yp/cloud-provider-configuration-backup-azure.png)

1. Use the **Configuration Name** field to provide a meaningful name for your storage configuration.

- Navigate to **Integrations** > **Backup** > **Azure Storage**.
- Click **Create AZ Backup** to access the configuration form shown in the following illustration:
1. Enter values for the **Container URL** and **SAS Token** fields.

![Azure Configuration](/images/yp/cloud-provider-configuration-backup-azure.png)
1. Click **Save**.

## Local storage

If your YugabyteDB universe has one node, you can create a local directory on a YB-TServer to which to back up, as follows:

1. Navigate to **Universes**, select your universe, and then select **Nodes**.

2. Click **Connect**.

3. Take note of the services and endpoints information displayed in the **Connect** dialog, as shown in the following illustration:

![Connect dialog](/images/yp/cloud-provider-local-backup1.png)

4. While connected using `ssh`, create a directory `/backup` and then change the owner to `yugabyte`, as follows:

```sh
sudo mkdir /backup; sudo chown yugabyte /backup
```

- Use the **Configuration Name** field to provide a meaningful name for your backup configuration.
- Enter values for the **Container URL** and **SAS Token** fields, and then click **Save**.
If there is more than one node, you should consider using a [network file system](#network-file-system) mounted on each server.
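A hedged sketch of mounting such a shared export at `/backup` on a node follows; the NFS server name and export path are hypothetical:

```sh
sudo mkdir -p /backup
sudo mount -t nfs nfs.example.com:/exports/yb-backups /backup
# Persist the mount across reboots.
echo "nfs.example.com:/exports/yb-backups /backup nfs defaults 0 0" | sudo tee -a /etc/fstab
sudo chown yugabyte /backup
```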
@@ -24,7 +24,7 @@ Ensure the universes have the following characteristics:

- Both universes are running the same version of YugabyteDB (v2.18.0.0 or later).
- Both universes have the same [encryption in transit](../../../security/enable-encryption-in-transit/) settings. Encryption in transit is recommended, and you should create the DR primary and DR replica universes with TLS enabled.
- They can be backed up and restored using the same backup configuration.
- They can be backed up and restored using the same [storage configuration](../../configure-backup-storage/).
- They have enough disk space to support storage of write-ahead logs (WALs) in case of a network partition or a temporary outage of the DR replica universe. During these cases, WALs will continue to write until replication is restored. Consider sizing your disk according to your ability to respond and recover from network or other infrastructure outages.
- DR enables [Point-in-time-recovery](../../pitr/) (PITR) on the DR replica, requiring additional disk space for the replica.

@@ -156,11 +156,11 @@ To perform an advanced restore, on the YugabyteDB Anywhere installation where you
s3://user_bucket/some/sub/folders/univ-a85b5b01-6e0b-4a24-b088-478dafff94e4/ybc_backup-92317948b8e444ba150616bf182a061/incremental/2024-01-04T12:11:03/multi-table-postgres_40522fc46c69404893392b7d92039b9e
```

1. Select the **Backup config** that corresponds to the location of the backup. The storage could be on Google Cloud, Amazon S3, Azure, or Network File System.
1. Select the **Backup config** that corresponds to the storage configuration that was used for the backup. The storage could be on Google Cloud, Amazon S3, Azure, or Network File System.

Note that the backup config bucket takes precedence over the bucket specified in the backup location.
Note that the storage configuration bucket takes precedence over the bucket specified in the backup location.

For example, if the backup config you provide is for the following S3 Bucket:
For example, if the storage configuration you select is for the following S3 Bucket:

```output
s3://test_bucket/test
@@ -37,7 +37,7 @@ Before scheduling a backup of your universe data, create a policy, as follows:

1. Select the API type for the backup.

1. Select the backup storage configuration. The available configurations depend on your existing backup storage configurations. For more information, see [Configure backup storage](../configure-backup-storage/).
1. Select the storage configuration. For more information, see [Configure backup storage](../configure-backup-storage/).

1. Select the database/keyspace to back up.

@@ -78,11 +78,11 @@ The following permissions are required:
"s3:GetBucketLocation"
```

The Access key ID and Secret Access Key for the service account are used when creating a [backup storage configuration](../../../back-up-restore-universes/configure-backup-storage/#amazon-s3) for S3.
The Access key ID and Secret Access Key for the service account are used when creating a backup [storage configuration](../../../back-up-restore-universes/configure-backup-storage/#amazon-s3) for S3.

| Save for later | To configure |
| :--- | :--- |
| Service account Access key ID and Secret Access Key | [Backup storage configuration](../../../back-up-restore-universes/configure-backup-storage/#amazon-s3) for S3 |
| Service account Access key ID and Secret Access Key | [Storage configuration](../../../back-up-restore-universes/configure-backup-storage/#amazon-s3) for S3 |
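A hedged sketch of creating a dedicated IAM user and access key for this purpose with the AWS CLI (the user and policy names are illustrative, and the policy file is assumed to contain the permissions listed above):

```sh
aws iam create-user --user-name yba-backup-svc
aws iam put-user-policy --user-name yba-backup-svc --policy-name yba-backup-s3 --policy-document file://yba-backup-s3-policy.json
# Returns the AccessKeyId and SecretAccessKey to save for the storage configuration.
aws iam create-access-key --user-name yba-backup-svc
```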

</div>

@@ -96,11 +96,11 @@ To grant the required access, create a GCP service account with [IAM roles for c
roles/storage.admin
```

The credentials for this account (in JSON format) are used when creating a [backup storage configuration](../../../back-up-restore-universes/configure-backup-storage/#google-cloud-storage) for GCS.
The credentials for this account (in JSON format) are used when creating a backup [storage configuration](../../../back-up-restore-universes/configure-backup-storage/#google-cloud-storage) for GCS.
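A minimal sketch of creating such a service account and downloading its JSON key with the gcloud CLI (the project and account names are hypothetical):

```sh
gcloud iam service-accounts create yba-backups --project=my-project
gcloud projects add-iam-policy-binding my-project --member="serviceAccount:yba-backups@my-project.iam.gserviceaccount.com" --role="roles/storage.admin"
# Download the JSON key to upload when creating the GCS storage configuration.
gcloud iam service-accounts keys create yba-backups-key.json --iam-account=yba-backups@my-project.iam.gserviceaccount.com
```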

| Save for later | To configure |
| :--- | :--- |
| Storage service account JSON credentials | [Backup storage configuration](../../../back-up-restore-universes/configure-backup-storage/#google-cloud-storage) for GCS |
| Storage service account JSON credentials | [Storage configuration](../../../back-up-restore-universes/configure-backup-storage/#google-cloud-storage) for GCS |

For database clusters deployed to GKE, you can alternatively assign the appropriate IAM roles to the YugabyteDB Anywhere VM and the YugabyteDB nodes.

@@ -114,11 +114,11 @@ To grant the required access, create a [Shared Access Signature (SAS)](https://l

![Azure Shared Access Signature page](/images/yp/cloud-provider-configuration-backup-azure-generate-token.png)

The Connection string and SAS token are used when creating a [backup storage configuration](../../../back-up-restore-universes/configure-backup-storage/#azure-storage) for Azure.
The Connection string and SAS token are used when creating a backup [storage configuration](../../../back-up-restore-universes/configure-backup-storage/#azure-storage) for Azure.

| Save for later | To configure |
| :--- | :--- |
| Azure storage Connection string and SAS token | [Backup storage configuration](../../../back-up-restore-universes/configure-backup-storage/#azure-storage) for Azure |
| Azure storage Connection string and SAS token | [Storage configuration](../../../back-up-restore-universes/configure-backup-storage/#azure-storage) for Azure |
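A hedged Azure CLI sketch of retrieving the connection string and generating a container SAS token (the account, container, resource group, and key are placeholders):

```sh
az storage account show-connection-string --name mystorageacct --resource-group my-rg --output tsv
az storage container generate-sas --account-name mystorageacct --name yb-backups --account-key "<storage-account-key>" --permissions acdlrw --expiry 2025-12-31T00:00Z --output tsv
```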

</div>
