add-br-ent (#1609)
* add-br-ent

* Update mkdocs.yml

* Update 1.br-ent-overview.md
abby-cyber authored Sep 15, 2022
1 parent 48c7173 commit 77f1ae1
Showing 5 changed files with 460 additions and 5 deletions.
39 changes: 39 additions & 0 deletions docs-2.0/backup-and-restore/nebula-br-ent/1.br-ent-overview.md
# What is Backup Restore (Enterprise Edition)

Backup Restore (BR for short) Enterprise Edition is a Command-Line Interface (CLI) tool. With BR Enterprise Edition, you can back up and restore NebulaGraph data.

!!! enterpriseonly

The BR Enterprise Edition tool is for NebulaGraph Enterprise Edition only.

## Features

- Backup and restoration of NebulaGraph data with a single command:
- Support full and incremental backups.
- Support the restoration of a full backup.
- Support restoration across NebulaGraph clusters.
- Cloud and on-premises backup and restoration:
- Local disks (SSD or HDD). We recommend using local disks together with the shared storage service NFS.
- Cloud services compatible with the Amazon S3 interface, such as Alibaba Cloud OSS, MinIO, Ceph RGW, etc.

## Limits

- The version of NebulaGraph Enterprise Edition clusters must be equal to or greater than v{{ nebula.release }}.
- [Listener](../../4.deployment-and-installation/6.deploy-text-based-index/3.deploy-listener.md) backups are not supported.
- Backups of full-text index data are not supported.
- If you back up data to a local disk, the backup files are saved in the local storage path of each server in the NebulaGraph cluster. You can also mount NFS on your servers so that the backup data can be restored to a different server (see the mount sketch after this list).
- During the backup process, both DDL and DML statements in the specified graph space are blocked. We recommend that you perform the operation during off-peak hours, for example, from 2:00 AM to 5:00 AM.
- Restoration requires that the target cluster has the same number of storage servers as the original cluster.
- Backing up data of a specified graph space is not supported.
- Using BR Enterprise Edition in a container-based NebulaGraph cluster is not supported.
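For local backups across multiple machines, a common setup is to mount the same NFS export at the same path on every machine before backing up. A minimal sketch, assuming a hypothetical NFS server and export path (replace them with your own, and make sure the NFS client package is installed):

```
# Run on every machine that hosts the Meta service, the Storage service, or BR.
# <nfs_server_ip> and the paths are placeholders for your environment.
sudo mkdir -p /backup
sudo mount -t nfs <nfs_server_ip>:/export/backup /backup
```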

## Usage process

To use BR Enterprise Edition, follow these steps:

1. [Install BR (Enterprise Edition)](2.install-tools.md)
2. [Back up data](3.backup-data.md)
3. [Restore data](4.restore-data.md)



94 changes: 94 additions & 0 deletions docs-2.0/backup-and-restore/nebula-br-ent/2.install-tools.md
# Install BR (Enterprise Edition)

The BR Enterprise Edition tool is used to back up and restore NebulaGraph Enterprise Edition data. This topic describes how to install this tool.

## Notes

To use the BR (Enterprise Edition) tool, you need to install the NebulaGraph Agent plug-in. Agent runs as a daemon on each machine in the cluster; it starts and stops the NebulaGraph services and uploads and downloads backup files. The BR (Enterprise Edition) tool and the Agent plug-in are installed as described below.

## Version compatibility

|NebulaGraph Enterprise Edition|BR Enterprise Edition|Agent |
|:---|:---|:---|
|{{nebula.release}}|{{br_ent.release}}|{{agent.release}}|

## Install BR (Enterprise Edition)

BR (Enterprise Edition) is a command-line interface (CLI) tool that backs up NebulaGraph data and restores data from backup files.

Follow these steps:

1. Obtain the RPM package.

!!! enterpriseonly

Contact our sales team via email ([email protected]) to obtain the BR Enterprise Edition installation package.


2. Run `sudo rpm -i <rpm>` to install the RPM package.
<!-- Make sure the correctness of steps and package name. -->
For example, install BR Enterprise Edition with the following command. The default installation path is `/usr/local/br-ent/`:

`sudo rpm -i nebula-br-ent-<version>.x86_64.rpm`

In the BR Enterprise Edition installation directory, run `./br version` to view version information. The following information is returned:

```
[br-ent]$ ./br version
Nebula Backup And Restore Utility Tool,V-{{br_ent.release}}
```

## Install Agent

NebulaGraph Agent is installed as a binary file on each machine and communicates with the BR tool over RPC.

On **each machine**, follow these steps:

1. Download Agent.

```
wget https://github.com/vesoft-inc/nebula-agent/releases/download/v{{agent.release}}/agent-{{agent.release}}-linux-amd64
```

2. Rename the Agent file to `agent`.

```
sudo mv agent-{{agent.release}}-linux-amd64 agent
```

3. Add execute permission to Agent.

```
sudo chmod +x agent
```

4. Start Agent.

!!! note

Before starting Agent, make sure that the Meta service has been started and Agent has read and write access to the corresponding NebulaGraph cluster directory and backup directory.

```
sudo nohup ./agent --agent="<agent_node_ip>:8888" --meta="<metad_node_ip>:9559" > nebula_agent.log 2>&1 &
```

- `--agent`: The IP address and port number of Agent.
- `--meta`: The IP address and access port of any Meta service in the cluster.
- `--ratelimit`: (Optional) Limits the speed of file uploads and downloads to prevent bandwidth from being filled up and making other services unavailable. Unit: Bytes.

For example:

```
sudo nohup ./agent --agent="192.168.8.129:8888" --meta="192.168.8.129:9559" --ratelimit=1048576 > nebula_agent.log 2>&1 &
```

5. Log into NebulaGraph and then run the following command to view the status of Agent.

```
nebula> SHOW HOSTS AGENT;
+-----------------+------+----------+---------+--------------+---------+
| Host | Port | Status | Role | Git Info Sha | Version |
+-----------------+------+----------+---------+--------------+---------+
| "192.168.8.129" | 8888 | "ONLINE" | "AGENT" | "96646b8" | |
+-----------------+------+----------+---------+--------------+---------+
```
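If the Agent is not shown as `ONLINE`, a quick check (a sketch based on the start command and log file name used above) is to confirm that the process is running and inspect its log:

```
# List any running Agent process and review the latest log output.
pgrep -af agent
tail -n 50 nebula_agent.log
```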
196 changes: 196 additions & 0 deletions docs-2.0/backup-and-restore/nebula-br-ent/3.backup-data.md
# Back up data with BR (Enterprise Edition)

You can use BR (Enterprise Edition) to back up NebulaGraph data. Both full and incremental backups are supported, and data can be backed up to local disks or to cloud storage services compatible with the Amazon S3 interface. This topic introduces how to back up NebulaGraph data.

## Background

- A full backup is a backup of the entire data of your NebulaGraph database.
- An incremental backup covers all files that have changed or been modified since the last backup was made. The last backup can be either a full backup or an incremental backup.
- For the data directory structure of NebulaGraph, see the (default) path `/usr/local/nebula-ent/data`.

## Notes

- A full backup may take a long time if the amount of data to be backed up is large.
- During data backup, DDL and DML statements in the specified graph space are blocked. We recommend that you perform the backup during off-peak hours.
- An incremental backup must be performed on the same cluster as the previous backup, and its storage path must be the same as that of the previous backup.
- Make sure that the time between each incremental backup and the previous backup is less than one [`wal_ttl`](../../5.configurations-and-logs/1.configurations/4.storage-config.md) (see the check sketch after this list).
- Make sure that Agent has read and write permissions for the corresponding NebulaGraph cluster directory and backup directory.
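To check the current `wal_ttl` value, you can look it up in the Storage configuration file. A sketch, assuming a default Enterprise Edition installation path (adjust the path for your deployment):

```
# wal_ttl is defined in the Storage service configuration; the value is in seconds.
grep wal_ttl /usr/local/nebula-ent/etc/nebula-storaged.conf
```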

## Prerequisites

- The NebulaGraph services are running.

- You have installed [BR Enterprise Edition and Agent](2.install-tools.md) and started the Agent on each machine.

- You have created a directory with the same absolute path on the machines where the Meta service, the Storage service, and the BR Enterprise Edition tool are located, and your account has write privileges for this directory (see the sketch after this list).
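A minimal sketch for creating that directory on every machine, assuming SSH access and a `/backup` path; the host list is hypothetical and must be replaced with your own Meta, Storage, and BR machines:

```
# Create the same backup directory on each machine and make it writable.
HOSTS="192.168.8.129 192.168.8.130 192.168.8.131"   # placeholder host list
for host in $HOSTS; do
  ssh "$host" "sudo mkdir -p /backup && sudo chown \$(id -un) /backup"
done
```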

## Full backups

### To a cloud storage service

!!! note

You can back up data to the cloud storage services that are compatible with the Amazon S3 interface.

In the BR Enterprise Edition installation directory, run the following command to back up all data of your NebulaGraph database to a cloud storage service:

```
./br backup full --meta <ip_address:port> --s3.access_key <access_key> --s3.secret_key <secret_key> --s3.region <region_name> --storage s3://<storage_path> --s3.endpoint <endpoint_url>
```

For example, perform a full backup of the entire cluster where one of the meta service addresses is `192.168.8.129:9559`, and back up data to the `/` directory in the `nebula-br-test` bucket of Amazon S3.

```
./br backup full --meta 192.168.8.129:9559 --s3.access_key QImbbGDjfQExxx --s3.secret_key dVSJZfl7tnoFq7Z5zt6sfxxxx --s3.region us-east-1 --storage s3://nebula-br-test/ --s3.endpoint http://192.168.8.xxx:9000/
```
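To confirm that the backup was written to the bucket, you can list it with any S3-compatible client. A sketch using the AWS CLI, assuming it is installed and configured with the same access key, secret key, and region:

```
# List the backup directories in the target bucket through the S3-compatible endpoint.
aws s3 ls s3://nebula-br-test/ --endpoint-url <endpoint_url>
```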

### To a local disk

!!! caution

In the production environment, we recommend that you mount NFS (Network File System) on the machines where the Meta service, the Storage service, and the BR Enterprise Edition tool are located for local backups, or use cloud storage such as Alibaba Cloud OSS or Amazon S3 for cloud backups. Otherwise, you must manually move the backup files to the specified directory before restoring the data.

For local backups, only the data of the leader metad and the leader partitions is backed up. If the shared storage service is not used and there are multiple metads or the partition replica number is greater than 1, you need to manually copy the backed-up leader metad directory (`<storage_path>/meta`) to overwrite the corresponding directories of the other follower metads, and copy the corresponding partition data from the leader partition directories (`<storage_path>/<partition_id>`) to the corresponding directories of the other follower partitions (a copy sketch is shown after the local backup example below). This manual copy operation is not recommended.

In the BR Enterprise Edition installation directory, run the following command to back up all data of your NebulaGraph database to a local disk:

!!! note

Make sure you have created the directory where the data is to be backed up.

```
./br backup full --meta <ip_address:port> --storage local://<storage_path>
```

For example, perform a full backup of the entire cluster where one of the meta server addresses is `192.168.8.129:9559`, and back up data to the `/backup/` directory on the local disk.


```
./br backup full --meta "192.168.8.129:9559" --storage "local:///backup/"
```
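As noted in the caution above, when no shared storage is used, the leader metad backup must be copied to the other metad hosts manually (and the leader partition data to the follower partition hosts). A rough sketch of the metad part only, run on the host that holds the leader metad backup; the follower host is a placeholder and the backup name is taken from the example structure below:

```
# Overwrite a follower metad host's copy of the backup metadata with the leader's.
rsync -a /backup/BACKUP_2022_08_12_08_41_19/meta/ \
  <follower_metad_host>:/backup/BACKUP_2022_08_12_08_41_19/meta/
```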

### Directory structure

Full backups back up all the data of the leader metad and leader partitions. The structure of a backed-up directory is as follows:

```
├── BACKUP_2022_08_12_08_41_19.meta(Backup of metadata, such as host information and the storage path of partitions)
├── data(Backup of the storage data)
│ ├── 1(Graph space ID)
│ │ ├── 1(Partition ID)
│ │ │ ├── data
│ │ │ │ ├── 000033.sst
│ │ │ │ ├── 000035.sst
│ │ │ │ ├── 000037.sst
│ │ │ │ ├── 000039.sst
│ │ │ │ ├── CURRENT
│ │ │ │ ├── MANIFEST-000004
│ │ │ │ └── OPTIONS-000007
│ │ │ └── wal
│ │ │ ├── 0000000000000000001.wal
│ │ │ ├── 0000000000000000267.wal
│ │ │ ├── 0000000000000000324.wal
│ │ │ └── commitlog.id
│ │ ├── 10(Partition ID)
...
└── meta(Backup of the meta data)
...
```

## Incremental backups

!!! caution

Before performing an incremental backup, make sure that the data of the previous full backup or incremental backup is not changed. Otherwise, the incremental backup may fail.

### To a cloud storage service

!!! note

You can back up data to the cloud storage services that are compatible with the Amazon S3 interface.

In the BR Enterprise Edition installation directory, run the following command to back up the incremental data of your NebulaGraph database to a cloud storage service:

```
./br backup incr --meta <ip_address:port> --s3.access_key <access_key> --s3.secret_key <secret_key> --s3.region <region_name> --storage s3://<storage_path> --s3.endpoint <endpoint_url> --base <backup_file_name>
```

For example, based on the file `BACKUP_2022_08_11_09_11_07`, perform an incremental backup for the cluster where one of the meta server addresses is `192.168.8.129:9559`, and back up data to the `/` directory in the `nebula-br-test` bucket of Amazon S3.

```
./br backup incr --meta 192.168.8.129:9559 --s3.access_key QImbbGDjfQExxx --s3.secret_key dVSJZfl7tnoFq7Z5zt6sfxxxx --s3.region us-east-1 --storage s3://nebula-br-test/ --s3.endpoint http://192.168.8.xxx:9000/ --base BACKUP_2022_08_11_09_11_07
```

### To a local disk

In the BR Enterprise Edition installation directory, run the following command to back up the incremental data of your NebulaGraph database to a local disk:

!!! note

Make sure that the local path where the backup file is stored exists.

```
./br backup incr --meta <ip_address:port> --storage local://<storage_path> --base <backup_file_name>
```

For example, based on the file `BACKUP_2022_08_11_09_11_07`, perform an incremental backup for the cluster where one of the meta server addresses is `192.168.8.129:9559`, and back up data to the `/backup/` directory on the local disk.


```
./br backup incr --meta "192.168.8.129:9559" --storage "local:///backup/" --base BACKUP_2022_08_11_09_11_07
```

### Directory structure

Incremental backups back up the leader meta data. For the leader partitions of an existing graph space (graph space ID `1` in the following code block), only the `wal` directory is backed up; for the leader partitions of a newly added graph space (graph space ID `4` in the following code block), the full `data` and `wal` directories are backed up. The directory structure of an incremental backup may therefore differ from that of a full backup.

The structure of an incremental backup is as follows:

```
├── BACKUP_2022_08_12_08_58_23.meta(Backup of metadata, such as host information and the storage path of partitions)
├── data(Backup of the storage data)
│ ├── 1 (Graph space ID)
│ │ ├── 1 (Partition ID)
│ │ │ └── wal (wal directory)
│ │ │ ├── 0000000000000000671.wal
│ │ │ ├── 0000000000000000700.wal
│ │ │ └── commitlog.id
...
│ └── 4 (Graph space ID)
│ ├── 1
│ │ ├── data(data directory)
│ │ │ ├── 000009.sst
│ │ │ ├── CURRENT
│ │ │ ├── MANIFEST-000004
│ │ │ └── OPTIONS-000007
│ │ └── wal (wal directory)
│ │ ├── 0000000000000000001.wal
│ │ └── commitlog.id
└── meta(Backup of the meta data)
...
```

## Options

| Option | Data type | Required | Default value | Description |
| --- | --- | --- | --- | --- |
| `-h`,`--help` | - | No | None | View more information about BR commands. |
| `--debug` | - | No | None | View more information for BR logs. |
| `--log` | string | No | `"br.log"` | The path of BR logs. |
| `--concurrency` | int | No | `5` | The number of concurrent file uploads during data backup. |
| `--meta` | string | Yes | None | The IP address and port number of any Meta service in the cluster. |
| `--storage` | string | Yes | None | The storage path of the data to be backed up, in the form of `<schema>://<storage_path>`.<br>`schema`: Two options, `s3` and `local`.<br>When `schema` is `s3`, you also need to set the `--s3.access_key`, `--s3.endpoint`, `--s3.region`, and `--s3.secret_key` options.<br>`<storage_path>`: The storage path.|
| `--s3.access_key` | string | No | None | The Access Key ID that is used to identify a user. |
| `--s3.endpoint` | string | No | None | The S3 endpoint URL. You need to specify the HTTP or HTTPS scheme explicitly. |
| `--s3.region` | string | No | None | The physical location of a data center. |
| `--s3.secret_key`| string | No | None | The Access Key Secret that is used to verify the identity of the user. |
| `--base` | string | Yes | None | The name of the directory of any previous backup. Based on this backup, perform an incremental backup. |
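The full and incremental commands above can be combined into a simple routine, for example a nightly job that takes a full backup when no previous backup exists and an incremental backup otherwise. A minimal sketch for local-disk backups, assuming BR is installed in `/usr/local/br-ent`, `/backup` is a shared (for example NFS-mounted) path visible from the BR machine, and backup directories follow the `BACKUP_*` naming shown above; all of these are assumptions to adapt to your environment:

```
#!/usr/bin/env bash
# Sketch: take an incremental backup based on the most recent BACKUP_* directory,
# or a full backup if no previous backup exists.
set -euo pipefail

BR_DIR=/usr/local/br-ent          # assumed BR installation path
META=192.168.8.129:9559           # any Meta service address in the cluster
STORAGE=/backup                   # assumed shared local backup path

# Name of the most recent backup directory, if any.
BASE=$(ls -1d "${STORAGE}"/BACKUP_* 2>/dev/null | sort | tail -n 1 | xargs -r -n1 basename || true)

cd "${BR_DIR}"
if [ -z "${BASE}" ]; then
  ./br backup full --meta "${META}" --storage "local://${STORAGE}/"
else
  ./br backup incr --meta "${META}" --storage "local://${STORAGE}/" --base "${BASE}"
fi
```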


## What is next?

After your NebulaGraph data is backed up, you can restore the data to NebulaGraph. For details, see [Restore data](4.restore-data.md).

!!! caution

DO NOT change the name and path of the backed-up files. Otherwise, data restoration will fail.