diff --git a/docs-2.0-en/.DS_Store b/docs-2.0-en/.DS_Store deleted file mode 100644 index 5008ddfcf53..00000000000 Binary files a/docs-2.0-en/.DS_Store and /dev/null differ diff --git a/docs-2.0-en/1.introduction/.DS_Store b/docs-2.0-en/1.introduction/.DS_Store deleted file mode 100644 index 2c83d62dfc5..00000000000 Binary files a/docs-2.0-en/1.introduction/.DS_Store and /dev/null differ diff --git a/docs-2.0-en/14.client/1.nebula-client.md b/docs-2.0-en/14.client/1.nebula-client.md index a03943f3d2f..5b706ea5377 100644 --- a/docs-2.0-en/14.client/1.nebula-client.md +++ b/docs-2.0-en/14.client/1.nebula-client.md @@ -2,7 +2,7 @@ NebulaGraph supports multiple types of clients for users to connect to and manage the NebulaGraph database. -- [NebulaGraph Console](../nebula-console.md): the native CLI client +- [NebulaGraph Console](nebula-console.md): the native CLI client - [NebulaGraph CPP](3.nebula-cpp-client.md): the NebulaGraph client for C++ diff --git a/docs-2.0-en/nebula-console.md b/docs-2.0-en/14.client/nebula-console.md similarity index 90% rename from docs-2.0-en/nebula-console.md rename to docs-2.0-en/14.client/nebula-console.md index 48324a6d6f3..70727fb324e 100644 --- a/docs-2.0-en/nebula-console.md +++ b/docs-2.0-en/14.client/nebula-console.md @@ -26,7 +26,22 @@ To connect to NebulaGraph with the `nebula-console` file, use the following synt -addr -port -u -p ``` -`path_of_console` indicates the storage path of the NebulaGraph Console binary file. +- `path_of_console` indicates the storage path of the NebulaGraph Console binary file. +- If SSL encryption is enabled and two-way authentication is required, you also need to specify the SSL-related parameters when connecting. + +For example: + +- Connect to NebulaGraph directly + + ```bash + ./nebula-console -addr 192.168.8.100 -port 9669 -u root -p nebula + ``` + +- Connect with SSL encryption enabled and two-way authentication required + + ```bash + ./nebula-console -addr 192.168.8.100 -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /home/xxx/cert/root.crt -ssl_cert_path /home/xxx/cert/client.crt -ssl_private_key_path /home/xxx/cert/client.key + ``` Parameter descriptions are as follows: @@ -44,15 +59,10 @@ Parameter descriptions are as follows: | `-ssl_root_ca_path` | Sets the storage path of the certification authority file. | | `-ssl_cert_path` | Sets the storage path of the certificate file. | | `-ssl_private_key_path` | Sets the storage path of the private key file. | +| `-ssl_insecure_skip_verify` | Specifies whether the client skips verifying the server's certificate chain and hostname. The default is `false`. If set to `true`, any certificate chain and hostname provided by the server is accepted. | For information on more parameters, see the [project repository](https://github.com/vesoft-inc/nebula-console/tree/{{console.branch}}). -For example, to connect to the Graph Service deployed on 192.168.10.8, run the following command: - -```bash -./nebula-console -addr 192.168.10.8 -port 9669 -u root -p thisisapassword -``` - ### Manage parameters You can save parameters for parameterized queries.
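Before connecting with two-way authentication, it can help to verify that the certificate files are consistent. A minimal sketch, assuming the placeholder certificate paths from the example above and a local OpenSSL installation:

```bash
# A sketch only: the paths are the placeholders used in the example above.
# Check that the client certificate chains to the CA the server trusts.
openssl verify -CAfile /home/xxx/cert/root.crt /home/xxx/cert/client.crt

# Check that the private key matches the client certificate
# (the two digests must be identical).
openssl x509 -noout -modulus -in /home/xxx/cert/client.crt | openssl md5
openssl rsa  -noout -modulus -in /home/xxx/cert/client.key | openssl md5

# For a quick test against a server with a self-signed certificate, the
# -ssl_insecure_skip_verify flag documented above skips server verification
# (insecure; use for testing only).
./nebula-console -addr 192.168.8.100 -port 9669 -u root -p nebula \
  -enable_ssl -ssl_insecure_skip_verify
```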
diff --git a/docs-2.0-en/2.quick-start/1.quick-start-workflow.md b/docs-2.0-en/2.quick-start/1.quick-start-workflow.md index c6c0771d64b..22be3f5f94c 100644 --- a/docs-2.0-en/2.quick-start/1.quick-start-workflow.md +++ b/docs-2.0-en/2.quick-start/1.quick-start-workflow.md @@ -123,15 +123,15 @@ You can quickly get started with NebulaGraph by deploying NebulaGraph with Docke ```bash [nebula-docker-compose]$ docker-compose up -d - Creating nebuladockercompose_metad0_1 ... done - Creating nebuladockercompose_metad2_1 ... done - Creating nebuladockercompose_metad1_1 ... done - Creating nebuladockercompose_graphd2_1 ... done - Creating nebuladockercompose_graphd_1 ... done - Creating nebuladockercompose_graphd1_1 ... done - Creating nebuladockercompose_storaged0_1 ... done - Creating nebuladockercompose_storaged2_1 ... done - Creating nebuladockercompose_storaged1_1 ... done + Creating nebula-docker-compose_metad0_1 ... done + Creating nebula-docker-compose_metad2_1 ... done + Creating nebula-docker-compose_metad1_1 ... done + Creating nebula-docker-compose_graphd2_1 ... done + Creating nebula-docker-compose_graphd_1 ... done + Creating nebula-docker-compose_graphd1_1 ... done + Creating nebula-docker-compose_storaged0_1 ... done + Creating nebula-docker-compose_storaged2_1 ... done + Creating nebula-docker-compose_storaged1_1 ... done ``` !!! compatibility @@ -156,7 +156,7 @@ You can quickly get started with NebulaGraph by deploying NebulaGraph with Docke $ docker-compose ps Name Command State Ports -------------------------------------------------------------------------------------------- - nebuladockercompose_console_1 sh -c sleep 3 && Up + nebula-docker-compose_console_1 sh -c sleep 3 && Up nebula-co ... ...... ``` @@ -164,7 +164,7 @@ You can quickly get started with NebulaGraph by deploying NebulaGraph with Docke 2. Run the following command to enter the NebulaGraph Console docker container. ```bash - docker exec -it nebuladockercompose_console_1 /bin/sh + docker exec -it nebula-docker-compose_console_1 /bin/sh / # ``` @@ -203,35 +203,35 @@ You can quickly get started with NebulaGraph by deploying NebulaGraph with Docke ```bash $ docker-compose ps - nebuladockercompose_console_1 sh -c sleep 3 && Up + nebula-docker-compose_console_1 sh -c sleep 3 && Up nebula-co ... - nebuladockercompose_graphd1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49174->19669/tcp,:::49174->19669/tcp, 0.0.0.0:49171->19670/tcp,:::49171->19670/tcp, 0.0.0.0:49177->9669/tcp,:::49177->9669/tcp - nebuladockercompose_graphd2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49175->19669/tcp,:::49175->19669/tcp, 0.0.0.0:49172->19670/tcp,:::49172->19670/tcp, 0.0.0.0:49178->9669/tcp,:::49178->9669/tcp - nebuladockercompose_graphd_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49180->19669/tcp,:::49180->19669/tcp, 0.0.0.0:49179->19670/tcp,:::49179->19670/tcp, 0.0.0.0:9669->9669/tcp,:::9669->9669/tcp - nebuladockercompose_metad0_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49157->19559/tcp,:::49157->19559/tcp, 0.0.0.0:49154->19560/tcp,:::49154->19560/tcp, 0.0.0.0:49160->9559/tcp,:::49160->9559/tcp, 9560/tcp - nebuladockercompose_metad1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49156->19559/tcp,:::49156->19559/tcp, 0.0.0.0:49153->19560/tcp,:::49153->19560/tcp, 0.0.0.0:49159->9559/tcp,:::49159->9559/tcp, 9560/tcp - nebuladockercompose_metad2_1 /usr/local/nebula/bin/nebu ... 
Up 0.0.0.0:49158->19559/tcp,:::49158->19559/tcp, 0.0.0.0:49155->19560/tcp,:::49155->19560/tcp, 0.0.0.0:49161->9559/tcp,:::49161->9559/tcp, 9560/tcp - nebuladockercompose_storaged0_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49166->19779/tcp,:::49166->19779/tcp, 0.0.0.0:49163->19780/tcp,:::49163->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49169->9779/tcp,:::49169->9779/tcp, 9780/tcp - nebuladockercompose_storaged1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49165->19779/tcp,:::49165->19779/tcp, 0.0.0.0:49162->19780/tcp,:::49162->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49168->9779/tcp,:::49168->9779/tcp, 9780/tcp - nebuladockercompose_storaged2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49167->19779/tcp,:::49167->19779/tcp, 0.0.0.0:49164->19780/tcp,:::49164->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49170->9779/tcp,:::49170->9779/tcp, 9780/tcp + nebula-docker-compose_graphd1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49174->19669/tcp,:::49174->19669/tcp, 0.0.0.0:49171->19670/tcp,:::49171->19670/tcp, 0.0.0.0:49177->9669/tcp,:::49177->9669/tcp + nebula-docker-compose_graphd2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49175->19669/tcp,:::49175->19669/tcp, 0.0.0.0:49172->19670/tcp,:::49172->19670/tcp, 0.0.0.0:49178->9669/tcp,:::49178->9669/tcp + nebula-docker-compose_graphd_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49180->19669/tcp,:::49180->19669/tcp, 0.0.0.0:49179->19670/tcp,:::49179->19670/tcp, 0.0.0.0:9669->9669/tcp,:::9669->9669/tcp + nebula-docker-compose_metad0_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49157->19559/tcp,:::49157->19559/tcp, 0.0.0.0:49154->19560/tcp,:::49154->19560/tcp, 0.0.0.0:49160->9559/tcp,:::49160->9559/tcp, 9560/tcp + nebula-docker-compose_metad1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49156->19559/tcp,:::49156->19559/tcp, 0.0.0.0:49153->19560/tcp,:::49153->19560/tcp, 0.0.0.0:49159->9559/tcp,:::49159->9559/tcp, 9560/tcp + nebula-docker-compose_metad2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49158->19559/tcp,:::49158->19559/tcp, 0.0.0.0:49155->19560/tcp,:::49155->19560/tcp, 0.0.0.0:49161->9559/tcp,:::49161->9559/tcp, 9560/tcp + nebula-docker-compose_storaged0_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49166->19779/tcp,:::49166->19779/tcp, 0.0.0.0:49163->19780/tcp,:::49163->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49169->9779/tcp,:::49169->9779/tcp, 9780/tcp + nebula-docker-compose_storaged1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49165->19779/tcp,:::49165->19779/tcp, 0.0.0.0:49162->19780/tcp,:::49162->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49168->9779/tcp,:::49168->9779/tcp, 9780/tcp + nebula-docker-compose_storaged2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49167->19779/tcp,:::49167->19779/tcp, 0.0.0.0:49164->19780/tcp,:::49164->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49170->9779/tcp,:::49170->9779/tcp, 9780/tcp ``` - If the service is abnormal, you can first confirm the abnormal container name (such as `nebuladockercompose_graphd2_1`). + If the service is abnormal, you can first confirm the abnormal container name (such as `nebula-docker-compose_graphd2_1`). Then you can execute `docker ps` to view the corresponding `CONTAINER ID` (such as `2a6c56c405f5`). 
```bash [nebula-docker-compose]$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 2a6c56c405f5 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49230->9669/tcp, 0.0.0.0:49229->19669/tcp, 0.0.0.0:49228->19670/tcp nebuladockercompose_graphd2_1 - 7042e0a8e83d vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49227->9779/tcp, 0.0.0.0:49226->19779/tcp, 0.0.0.0:49225->19780/tcp nebuladockercompose_storaged2_1 - 18e3ea63ad65 vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49219->9779/tcp, 0.0.0.0:49218->19779/tcp, 0.0.0.0:49217->19780/tcp nebuladockercompose_storaged0_1 - 4dcabfe8677a vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49224->9669/tcp, 0.0.0.0:49223->19669/tcp, 0.0.0.0:49222->19670/tcp nebuladockercompose_graphd1_1 - a74054c6ae25 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:9669->9669/tcp, 0.0.0.0:49221->19669/tcp, 0.0.0.0:49220->19670/tcp nebuladockercompose_graphd_1 - 880025a3858c vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49216->9779/tcp, 0.0.0.0:49215->19779/tcp, 0.0.0.0:49214->19780/tcp nebuladockercompose_storaged1_1 - 45736a32a23a vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49213->9559/tcp, 0.0.0.0:49212->19559/tcp, 0.0.0.0:49211->19560/tcp nebuladockercompose_metad0_1 - 3b2c90eb073e vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49207->9559/tcp, 0.0.0.0:49206->19559/tcp, 0.0.0.0:49205->19560/tcp nebuladockercompose_metad2_1 - 7bb31b7a5b3f vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49210->9559/tcp, 0.0.0.0:49209->19559/tcp, 0.0.0.0:49208->19560/tcp nebuladockercompose_metad1_1 + 2a6c56c405f5 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49230->9669/tcp, 0.0.0.0:49229->19669/tcp, 0.0.0.0:49228->19670/tcp nebula-docker-compose_graphd2_1 + 7042e0a8e83d vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49227->9779/tcp, 0.0.0.0:49226->19779/tcp, 0.0.0.0:49225->19780/tcp nebula-docker-compose_storaged2_1 + 18e3ea63ad65 vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49219->9779/tcp, 0.0.0.0:49218->19779/tcp, 0.0.0.0:49217->19780/tcp nebula-docker-compose_storaged0_1 + 4dcabfe8677a vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49224->9669/tcp, 0.0.0.0:49223->19669/tcp, 0.0.0.0:49222->19670/tcp nebula-docker-compose_graphd1_1 + a74054c6ae25 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:9669->9669/tcp, 0.0.0.0:49221->19669/tcp, 0.0.0.0:49220->19670/tcp nebula-docker-compose_graphd_1 + 880025a3858c vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49216->9779/tcp, 0.0.0.0:49215->19779/tcp, 0.0.0.0:49214->19780/tcp nebula-docker-compose_storaged1_1 + 45736a32a23a vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 
minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49213->9559/tcp, 0.0.0.0:49212->19559/tcp, 0.0.0.0:49211->19560/tcp nebula-docker-compose_metad0_1 + 3b2c90eb073e vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49207->9559/tcp, 0.0.0.0:49206->19559/tcp, 0.0.0.0:49205->19560/tcp nebula-docker-compose_metad2_1 + 7bb31b7a5b3f vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49210->9559/tcp, 0.0.0.0:49209->19559/tcp, 0.0.0.0:49208->19560/tcp nebula-docker-compose_metad1_1 ``` Use the `CONTAINER ID` to log in the container and troubleshoot. @@ -280,27 +280,27 @@ You can quickly get started with NebulaGraph by deploying NebulaGraph with Docke The following information indicates you have successfully stopped the NebulaGraph services: ```bash - Stopping nebuladockercompose_console_1 ... done - Stopping nebuladockercompose_graphd1_1 ... done - Stopping nebuladockercompose_graphd_1 ... done - Stopping nebuladockercompose_graphd2_1 ... done - Stopping nebuladockercompose_storaged1_1 ... done - Stopping nebuladockercompose_storaged0_1 ... done - Stopping nebuladockercompose_storaged2_1 ... done - Stopping nebuladockercompose_metad2_1 ... done - Stopping nebuladockercompose_metad0_1 ... done - Stopping nebuladockercompose_metad1_1 ... done - Removing nebuladockercompose_console_1 ... done - Removing nebuladockercompose_graphd1_1 ... done - Removing nebuladockercompose_graphd_1 ... done - Removing nebuladockercompose_graphd2_1 ... done - Removing nebuladockercompose_storaged1_1 ... done - Removing nebuladockercompose_storaged0_1 ... done - Removing nebuladockercompose_storaged2_1 ... done - Removing nebuladockercompose_metad2_1 ... done - Removing nebuladockercompose_metad0_1 ... done - Removing nebuladockercompose_metad1_1 ... done - Removing network nebuladockercompose_nebula-net + Stopping nebula-docker-compose_console_1 ... done + Stopping nebula-docker-compose_graphd1_1 ... done + Stopping nebula-docker-compose_graphd_1 ... done + Stopping nebula-docker-compose_graphd2_1 ... done + Stopping nebula-docker-compose_storaged1_1 ... done + Stopping nebula-docker-compose_storaged0_1 ... done + Stopping nebula-docker-compose_storaged2_1 ... done + Stopping nebula-docker-compose_metad2_1 ... done + Stopping nebula-docker-compose_metad0_1 ... done + Stopping nebula-docker-compose_metad1_1 ... done + Removing nebula-docker-compose_console_1 ... done + Removing nebula-docker-compose_graphd1_1 ... done + Removing nebula-docker-compose_graphd_1 ... done + Removing nebula-docker-compose_graphd2_1 ... done + Removing nebula-docker-compose_storaged1_1 ... done + Removing nebula-docker-compose_storaged0_1 ... done + Removing nebula-docker-compose_storaged2_1 ... done + Removing nebula-docker-compose_metad2_1 ... done + Removing nebula-docker-compose_metad0_1 ... done + Removing nebula-docker-compose_metad1_1 ... done + Removing network nebula-docker-compose_nebula-net ``` !!! danger diff --git a/docs-2.0-en/20.appendix/6.eco-tool-version.md b/docs-2.0-en/20.appendix/6.eco-tool-version.md index eee5b6ca5ef..e3fccb5739b 100644 --- a/docs-2.0-en/20.appendix/6.eco-tool-version.md +++ b/docs-2.0-en/20.appendix/6.eco-tool-version.md @@ -131,7 +131,7 @@ NebulaGraph Analytics is an application that integrates the open-source Plato Gr ## NebulaGraph Console -NebulaGraph Console is the native CLI client of NebulaGraph. For how to use it, see [NebulaGraph Console](../nebula-console.md). 
+NebulaGraph Console is the native CLI client of NebulaGraph. For how to use it, see [NebulaGraph Console](../14.client/nebula-console.md). |NebulaGraph version|Console version| |:---|:---| diff --git a/docs-2.0-en/20.appendix/release-notes/.DS_Store b/docs-2.0-en/20.appendix/release-notes/.DS_Store deleted file mode 100644 index 5008ddfcf53..00000000000 Binary files a/docs-2.0-en/20.appendix/release-notes/.DS_Store and /dev/null differ diff --git a/docs-2.0-en/20.appendix/release-notes/dashboard-comm-release-note.md b/docs-2.0-en/20.appendix/release-notes/dashboard-comm-release-note.md new file mode 100644 index 00000000000..a1b2f26f601 --- /dev/null +++ b/docs-2.0-en/20.appendix/release-notes/dashboard-comm-release-note.md @@ -0,0 +1,12 @@ +# NebulaGraph Dashboard Community Edition {{ nebula.release }} release notes + +## Community Edition 3.4.0 + +- Feature + - Support the built-in [dashboard.service](../../nebula-dashboard/2.deploy-dashboard.md) script to manage the Dashboard services with one click and view the Dashboard version. + - Support viewing the configuration of Meta services. + +- Enhancement + - Adjust the directory structure and simplify the [deployment steps](../../nebula-dashboard/2.deploy-dashboard.md). + - Display the names of the monitoring metrics on the overview page of `machine`. + - Optimize the calculation of monitoring metrics such as `num_queries`, and adjust the display to time series aggregation. diff --git a/docs-2.0-en/20.appendix/release-notes/dashboard-ent-release-note.md b/docs-2.0-en/20.appendix/release-notes/dashboard-ent-release-note.md index 1596bfd8ba9..0621cad9b9b 100644 --- a/docs-2.0-en/20.appendix/release-notes/dashboard-ent-release-note.md +++ b/docs-2.0-en/20.appendix/release-notes/dashboard-ent-release-note.md @@ -8,7 +8,7 @@ - [Back up and restore](../../nebula-dashboard-ent/4.cluster-operator/operator/backup-and-restore.md) support full backup to local. - Add [Slow query analyst](../../nebula-dashboard-ent/4.cluster-operator/analysis-diagnosis/slow-query-analyst.md) function. - The [Cluster diagnostics](../../nebula-dashboard-ent/4.cluster-operator/analysis-diagnosis/cluster-diagnosis.md) formula supports configuration. - - [Config Management](../../nebula-dashboard-ent/4.cluster-operator/operator/config-management.md) support **Add Config**, view the **Effective value** of the current configuration, and **View inconsistent configurations**. + - [Config Management](../../nebula-dashboard-ent/4.cluster-operator/operator/update-config.md) supports **Add Config**, viewing the **Effective value** of the current configuration, and **View inconsistent configurations**. - In the [Notification endpoint](../../nebula-dashboard-ent/system-settings/notification-endpoint.md), the webhook supports configuring the **Webhook request body**. - Support [custom monitoring panel](../../nebula-dashboard-ent/4.cluster-operator/2.monitor.md). diff --git a/docs-2.0-en/20.appendix/release-notes/explorer-release-note.md b/docs-2.0-en/20.appendix/release-notes/explorer-release-note.md index 61bbeecfb04..a0e5bdc7939 100644 --- a/docs-2.0-en/20.appendix/release-notes/explorer-release-note.md +++ b/docs-2.0-en/20.appendix/release-notes/explorer-release-note.md @@ -1,5 +1,30 @@ # NebulaGraph Explorer release notes +## v3.6.0 + +- Features +- Enhancements + - Compatibility + Since the database table structure has changed, you need to set `DB.AutoMigrate` to `true` in the configuration file, and the system will automatically upgrade and adapt the existing historical data.
+ + If the tables were created manually after you consulted our after-sales staff, please modify these tables manually: `task_infos`, `task_effects`, `sketches`, `schema_snapshots`, `favorites`, `files`, `datasources`, `snapshots`, `templates`, `icon_groups`, and `icon_items`. + + For example: + + ```mysql + ALTER TABLE `task_infos` ADD COLUMN `b_id` CHAR(32) NOT NULL DEFAULT ''; + UPDATE `task_infos` SET `b_id` = `id`; + CREATE UNIQUE INDEX `idx_task_infos_id` ON `task_infos`(`b_id`); + + ALTER TABLE `task_effects` ADD COLUMN `b_id` CHAR(32) NOT NULL DEFAULT ''; + UPDATE `task_effects` SET `b_id` = `id`; + CREATE UNIQUE INDEX `idx_task_effects_id` ON `task_effects`(`b_id`); + ... + ``` + +- Bug fixes +- Deprecated + ## v3.5.1 - Bugfix @@ -56,7 +81,7 @@ - Optimize guidances. - Optimize error messages. -- Bugfix +- Bug fixes - Fix the bug that can not be able to view the import task log. - Fix the bug that some data of the edges in the `demo_basketballplayer` dataset is missing. diff --git a/docs-2.0-en/20.appendix/release-notes/nebula-comm-release-note.md b/docs-2.0-en/20.appendix/release-notes/nebula-comm-release-note.md new file mode 100644 index 00000000000..9b28f536600 --- /dev/null +++ b/docs-2.0-en/20.appendix/release-notes/nebula-comm-release-note.md @@ -0,0 +1,39 @@ +# NebulaGraph {{ nebula.release }} release notes + +## Features + +- Enhance the full-text index. [#5567](https://github.com/vesoft-inc/nebula/pull/5567) [#5575](https://github.com/vesoft-inc/nebula/pull/5575) [#5577](https://github.com/vesoft-inc/nebula/pull/5577) [#5580](https://github.com/vesoft-inc/nebula/pull/5580) [#5584](https://github.com/vesoft-inc/nebula/pull/5584) [#5587](https://github.com/vesoft-inc/nebula/pull/5587) + +## Optimizations + +- Support variables when querying vertex id or property index in a `MATCH` clause. [#5486](https://github.com/vesoft-inc/nebula/pull/5486) [#5553](https://github.com/vesoft-inc/nebula/pull/5553) +- Support parallel startup of RocksDB instances to speed up the startup of the Storage service. [#5521](https://github.com/vesoft-inc/nebula/pull/5521) +- Optimize the prefix search performance of the RocksDB iterator after the `DeleteRange` operation. [#5525](https://github.com/vesoft-inc/nebula/pull/5525) +- Optimize the appendLog sending logic to avoid impacting write performance when a follower is down. [#5571](https://github.com/vesoft-inc/nebula/pull/5571) +- Optimize the performance of the `MATCH` statement when querying for non-existent properties. [#5634](https://github.com/vesoft-inc/nebula/pull/5634) + +## Bug fixes + +- Fix the bug of meta data inconsistency. [#5517](https://github.com/vesoft-inc/nebula/pull/5517) +- Fix the bug that RocksDB ingest causes the leader lease to be invalid. [#5534](https://github.com/vesoft-inc/nebula/pull/5534) +- Fix the error in the statistics logic of storage. [#5547](https://github.com/vesoft-inc/nebula/pull/5547) +- Fix the bug that causes the web service to crash if a flag is set for an invalid request parameter. [#5566](https://github.com/vesoft-inc/nebula/pull/5566) +- Fix the bug that too many logs are printed when listing sessions. [#5618](https://github.com/vesoft-inc/nebula/pull/5618) +- Fix the crash of the Graph service when executing a single big query. [#5619](https://github.com/vesoft-inc/nebula/pull/5619) +- Fix the crash of the Graph service when executing the `Find All Path` statement. 
[#5621](https://github.com/vesoft-inc/nebula/pull/5621) [#5640](https://github.com/vesoft-inc/nebula/pull/5640) +- Fix the bug that some expired data is not recycled at the bottom level. [#5447](https://github.com/vesoft-inc/nebula/pull/5447) [#5622](https://github.com/vesoft-inc/nebula/pull/5622) +- Fix the bug that adding a path variable in the `MATCH` statement causes the `all()` function push-down optimization to fail. [#5631](https://github.com/vesoft-inc/nebula/pull/5631) +- Fix the bug in the `MATCH` statement that returns incorrect results when querying the self-loop by the shortest path. [#5636](https://github.com/vesoft-inc/nebula/pull/5636) +- Fix the bug that deleting edges by pipe causes the Graph service to crash. [#5645](https://github.com/vesoft-inc/nebula/pull/5645) +- Fix the bug in the `MATCH` statement that returns missing properties of edges when matching multiple hops. [#5646](https://github.com/vesoft-inc/nebula/pull/5646) + +## Changes + +Enhance full-text index features with the following changes: + +- The original full-text indexing function has been changed from calling Elasticsearch's Term-level queries to Full text queries. +- In addition to supporting wildcards, regular expressions, fuzzy matches, etc. (but the syntax has been changed), support for tokenization (relying on Elasticsearch's own tokenizers) has been added, and the query results include relevance scores. For more syntax, see [official Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/full-text-queries.html). + +## Legacy versions + +[Release notes of legacy versions](https://nebula-graph.io/posts/) diff --git a/docs-2.0-en/20.appendix/release-notes/nebula-ent-release-note.md b/docs-2.0-en/20.appendix/release-notes/nebula-ent-release-note.md index ef9898f25f4..682a2db4f82 100644 --- a/docs-2.0-en/20.appendix/release-notes/nebula-ent-release-note.md +++ b/docs-2.0-en/20.appendix/release-notes/nebula-ent-release-note.md @@ -1,61 +1,31 @@ -# NebulaGraph {{ nebula.release }} release notes - -## Features - -- Support managing licenses through License Center and License Manager. -- Support full table scan without index. -- Support expressions like `v.tag` in return statements. -- Support `json_extract` function in UPDATE statements. -- Support TCK format in EXPLAIN output. -- DML supports parameters. -- Enhance full-text index. - -## Optimizations - -- Support TTL in milliseconds. -- Enhance attribute trimming in aggregation functions. -- Improve the performance of traversal executor. -- Optimize FIND ALL PATH performance. -- Removes some Raft locks to improve performance. -- Optimize predicate function filtering for variable-length edges. -- Parallel traversal executor. -- MATCH supports ID collection. -- Refactor the GO planner. -- Add some Graph performance options in the configuration file. -- Add maximum connection number flag. -- Support variable when seeking vertex id or property index in match clause. - -## Bug fixes - -- Fix the defect where RocksDB data import invalidates the leader lease. -- Fix the error message when `DESC USER` does not exist. -- Fix the defect where `CREATE IF NOT EXIST` fails when SPACE exists. -- Fix the incorrect edge direction in GetNeighbors plan. -- Fix the client IP format in the `SHOW SESSIONS` command. -- Fix the defect where attributes are pruned in USE and MATCH. -- Fix the defect where the filter is not pushed down in some cases. -- Fix the defect where the filter is incorrectly filtered in some cases. 
-Fix the incorrect handling of internal variables in pattern expressions. -Fix defects involving EMPTY comparisons. -Fix the defect where duplicate columns are returned when all columns are requested in MATCH. -Fix the error in comparing paths involving reflexive edges. -Fix the defect of redefining aliases in a MATCH path. -Fix the type check defect when inserting geographical location values. -Fix the crash in a shortest path. -Fix the crash in GEO. -Fix the bug that caused storage crash during logical expression evaluation. -Fix the error in `MATCH...contains`. -Fix the bug of incorrect session count in concurrency. -Fix the defect of SUBGRAPH and PATH parameters. -Fix the defect in regular expressions. -Fix the issue with non-expression pushing down. -Fixed the bug of slaving cluster. - -## Changes - -- Disable `edge list join`, not supporting the use of edge list in multiple patterns. -- Remove GLR parser, needs to change `YIELD 1–-1` to `YIELD 1– -1`. - -## Legacy versions - -[Release notes of legacy versions](https://www.nebula-graph.io/tags/release-notes) +# NebulaGraph release notes + +## v3.6.0 + +- Features + - Added support for [zone](../../4.deployment-and-installation/5.zone.md). The zone is a logical rack of storage nodes in NebulaGraph that separates multiple Storage nodes into manageable logical zones for resource isolation. + - Supported the [HTTP2](../../5.configurations-and-logs/1.configurations/3.graph-config.md) protocol. + - Supported SSL two-way authentication ([mTLS](../../7.data-security/4.ssl.md)). + - Supported [automatic monitoring](../../7.data-security/4.ssl.md) of SSL certificate updates. + - Supported join queries using [INNER JOIN](../../3.ngql-guide/8.clauses-and-options/joins.md). + - Supported single shortest path using [FIND SINGLE SHORTEST PATH](../../3.ngql-guide/16.subgraph-and-path/2.find-path.md). + - Supported logging slow queries (excluding DML) using the [enable_record_slow_query](../../5.configurations-and-logs/1.configurations/3.graph-config.md) parameter. + +- Enhancements + - Performance + - Optimized performance for deep queries. + - Optimized performance of the Aggregate operator. + - High availability + - Added the monitoring metric `resp_part_completeness` for monitoring partial success. + - Supported recording the duration of the last successful access to LM, so that you can easily check the time when LM is down. + - When the hard disk of a node fails to write, it triggers a re-election to ensure that the cluster can provide services normally. + - Usability + - When modifying users, you can change the password or whitelist individually. + +- Bug fixes + - Fixed the bug of meta data inconsistency. + - Fixed the bug where some expired data would not be recycled at the bottom level. + - Fixed the bug where incorrect results were returned when querying all paths from a self-loop vertex. + - Fixed the logging error of requests sent to the follower of a meta service. + - Fixed the OOM bug when explaining statements with multiple variables. + - Fixed the bug that caused the graph service to crash when executing multiple MATCH statements with an empty filter. 
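For the single-shortest-path feature listed above, the following is a minimal sketch of how such a query might be run through NebulaGraph Console. The space `basketballplayer`, the VIDs, and the `follow` edge type are sample-dataset names used elsewhere in these docs, and the exact statement syntax is defined in the linked `FIND SINGLE SHORTEST PATH` topic:

```bash
# A sketch only: address, credentials, space, and VIDs are assumptions;
# the statement syntax follows the linked FIND SINGLE SHORTEST PATH topic.
./nebula-console -addr 127.0.0.1 -port 9669 -u root -p nebula \
  -e 'USE basketballplayer;
      FIND SINGLE SHORTEST PATH FROM "player100" TO "player101"
      OVER follow YIELD path AS p;'
```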
diff --git a/docs-2.0-en/20.appendix/release-notes/studio-release-note.md b/docs-2.0-en/20.appendix/release-notes/studio-release-note.md new file mode 100644 index 00000000000..c3d3b0b5489 --- /dev/null +++ b/docs-2.0-en/20.appendix/release-notes/studio-release-note.md @@ -0,0 +1,53 @@ +# NebulaGraph Studio release notes + + + +## v3.7.0 + +- Enhancements + + - Supported importing data files from SFTP and Amazon S3. + - Supported configuring more import parameters on the import page, such as concurrency and retries. + - Supported re-running tasks. + - Supported saving tasks as drafts. + - Supported ARM architecture. + +## v3.6.0 + +- Feature + - Support viewing the [creation statements](../../nebula-studio/manage-schema/st-ug-view-schema.md) of the schema. + - Add a product feedback page. + +- Enhancement + - Remove the timeout limit for slow queries. + - Display browser compatibility hints. + - Optimize the login page. + - Support adding comments with `#` on the console page. + - Optimize the console page. + +- Bugfix + + - Fix the bug that the list has not been refreshed after uploading files. + - Fix the invalid error message of the schema drafting. + - Fix the bug that the **view schema** data has not been cleared after switching the login user. + - Fix the presentation problem of the thumbnail in the schema drafting. diff --git a/docs-2.0-en/3.ngql-guide/3.data-types/1.numeric.md b/docs-2.0-en/3.ngql-guide/3.data-types/1.numeric.md index b0e1b3ce8c9..7dc229d0ae9 100644 --- a/docs-2.0-en/3.ngql-guide/3.data-types/1.numeric.md +++ b/docs-2.0-en/3.ngql-guide/3.data-types/1.numeric.md @@ -32,7 +32,7 @@ Scientific notation is also supported, such as `1e2`, `1.1e2`, `.3e4`, `1.e4`, a When writing and reading different types of data, nGQL complies with the following rules: -| Data type | Set as VID | Set as property | Actual type of data stored| +| Data type | Set as VID | Set as property | Resulting data type| |-|-|-|-| | INT64 | Supported | Supported | INT64 | | INT32 | Not supported | Supported | INT64 | | INT16 | Not supported | Supported | INT64 | | INT8 | Not supported | Supported | INT64 | | FLOAT | Not supported | Supported | DOUBLE | | DOUBLE | Not supported | Supported | DOUBLE | -For example, nGQL does not support setting [VID](../../1.introduction/3.vid.md) as INT8, but supports setting a certain property type of [TAG](../10.tag-statements/1.create-tag.md) or [Edge type](../11.edge-type-statements/1.create-edge.md) as INT8. When using the nGQL statement to insert the property of INT8, the actual type of value stored is INT64, and the type of value received when read is also INT64. +For example, nGQL does not support setting [VID](../../1.introduction/3.vid.md) as INT8, but supports setting a certain property type of [TAG](../10.tag-statements/1.create-tag.md) or [Edge type](../11.edge-type-statements/1.create-edge.md) as INT8. When using the nGQL statement to read the property of INT8, the resulting type is INT64. 
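To see the conversion rule from the table above in action, here is a minimal sketch using NebulaGraph Console. The space `test` and tag `t_int8` are hypothetical, and the `sleep` gives the schema change time to take effect (two heartbeat cycles by default):

```bash
# A sketch only: the space "test" and the tag "t_int8" are hypothetical.
./nebula-console -addr 127.0.0.1 -port 9669 -u root -p nebula \
  -e 'USE test; CREATE TAG IF NOT EXISTS t_int8(score int8);'

# Wait for the schema change to propagate (two heartbeat cycles by default).
sleep 20

# The INT8 value is stored as INT64 and is also returned as INT64.
./nebula-console -addr 127.0.0.1 -port 9669 -u root -p nebula \
  -e 'USE test; INSERT VERTEX t_int8(score) VALUES "101":(10);
      FETCH PROP ON t_int8 "101" YIELD properties(vertex).score;'
```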
- Multiple formats are supported: diff --git a/docs-2.0-en/3.ngql-guide/4.job-statements.md b/docs-2.0-en/3.ngql-guide/4.job-statements.md index a006f771fdf..da79f8fd487 100644 --- a/docs-2.0-en/3.ngql-guide/4.job-statements.md +++ b/docs-2.0-en/3.ngql-guide/4.job-statements.md @@ -242,7 +242,7 @@ The Meta Service parses a `SUBMIT JOB` request into multiple tasks and assigns t For example: ```ngql -nebula> SHOW JOB 9; +nebula> SHOW JOB 8; +----------------+-----------------+------------+----------------------------+----------------------------+-------------+ | Job Id(TaskId) | Command(Dest) | Status | Start Time | Stop Time | Error Code | +----------------+-----------------+------------+----------------------------+----------------------------+-------------+ diff --git a/docs-2.0-en/3.ngql-guide/4.variable-and-composite-queries/2.user-defined-variables.md b/docs-2.0-en/3.ngql-guide/4.variable-and-composite-queries/2.user-defined-variables.md index aeef106bf1f..5f3fb87e786 100644 --- a/docs-2.0-en/3.ngql-guide/4.variable-and-composite-queries/2.user-defined-variables.md +++ b/docs-2.0-en/3.ngql-guide/4.variable-and-composite-queries/2.user-defined-variables.md @@ -48,3 +48,18 @@ nebula> $var = GO FROM "player100" OVER follow YIELD dst(edge) AS id; \ | "Spurs" | "Manu Ginobili" | +-----------+-----------------+ ``` + +## Set operations and scope of user-defined variables + +When assigning a variable in a compound statement that involves set operations, enclose the statements whose result is assigned in parentheses. In the example below, `$var` is assigned the intersection (`INTERSECT`) of the outputs of two `GO` statements. + +```ngql +$var = ( \ + GO FROM "player100" OVER follow \ + YIELD dst(edge) AS id \ + INTERSECT \ + GO FROM "player100" OVER follow \ + YIELD dst(edge) AS id \ + ); \ + GO FROM $var.id OVER follow YIELD follow.degree AS degree +``` \ No newline at end of file diff --git a/docs-2.0-en/3.ngql-guide/7.general-query-statements/6.show/.3.show-configs.md b/docs-2.0-en/3.ngql-guide/7.general-query-statements/6.show/.3.show-configs.md deleted file mode 100644 index d0568a64a38..00000000000 --- a/docs-2.0-en/3.ngql-guide/7.general-query-statements/6.show/.3.show-configs.md +++ /dev/null @@ -1,51 +0,0 @@ -# SHOW CONFIGS - -The `SHOW CONFIGS` statement lists the mutable configurations of the Graph Service, Meta Service, or Storage Service. - - - -## Syntax - -```ngql -SHOW CONFIGS [GRAPH|META|STORAGE] -``` - -|Option|Description| -|-|-| -|`GRAPH`|Shows the configuration of the Graph Service.| -|`META`|Shows the configuration of the Meta Service.| -|`STORAGE`|Shows the configuration of the Meta Service.| - -If no service name is set in the statement, NebulaGraph shows the mutable configurations of all services. 
- -## Example - -```ngql -nebula> SHOW CONFIGS GRAPH; -+---------+---------------------------+-------+-----------+-------+ -| module | name | type | mode | value | -+---------+---------------------------+-------+-----------+-------+ -| "GRAPH" | "v" | "int" | "MUTABLE" | 0 | -+---------+---------------------------+-------+-----------+-------+ -| "GRAPH" | "minloglevel" | "int" | "MUTABLE" | 0 | -+---------+---------------------------+-------+-----------+-------+ -| "GRAPH" | "slow_op_threshhold_ms" | "int" | "MUTABLE" | 50 | -+---------+---------------------------+-------+-----------+-------+ -| "GRAPH" | "heartbeat_interval_secs" | "int" | "MUTABLE" | 3 | -+---------+---------------------------+-------+-----------+-------+ -| "GRAPH" | "meta_client_retry_times" | "int" | "MUTABLE" | 3 | -+---------+---------------------------+-------+-----------+-------+ -Got 6 rows (time spent 1216/1880 us) -``` - -The output of `SHOW CONFIGS` is explained as follows: - -|Column|Description| -|-|-| -|`module`|The NebulaGraph service name.| -|`name`|The parameter name.| -|`type`|The data type of the value.| -|`mode`|Shows whether the parameter can be modified or not.| -|`value`|The value of the parameter.| - -For more information about the NebulaGraph configurations, see [Configuration](../../../5.configurations-and-logs/1.configurations/1.configurations.md). diff --git a/docs-2.0-en/3.ngql-guide/8.clauses-and-options/ttl-options.md b/docs-2.0-en/3.ngql-guide/8.clauses-and-options/ttl-options.md index 3c9222ebe9c..bc2b7566ecc 100644 --- a/docs-2.0-en/3.ngql-guide/8.clauses-and-options/ttl-options.md +++ b/docs-2.0-en/3.ngql-guide/8.clauses-and-options/ttl-options.md @@ -1,6 +1,6 @@ # TTL -TTL (Time To Live) specifies a timeout for a property. Once timed out, the property expires. +TTL (Time To Live) is a mechanism in NebulaGraph that defines the lifespan of data. Once the data reaches its predefined lifespan, it is automatically deleted from the database. This feature is particularly suitable for data that only needs temporary storage, such as temporary sessions or cached data. ## OpenCypher Compatibility @@ -22,7 +22,7 @@ The native nGQL TTL feature has the following options. |Option|Description| |:---|:---| -|`ttl_col`|Specifies the property to set a timeout on. The data type of the property must be `int` or `timestamp`.| +|`ttl_col`|Specifies an existing property to set a lifespan on. The data type of the property must be `int` or `timestamp`.| |`ttl_duration`|Specifies the timeout adds-on value in seconds. The value must be a non-negative int64 number. A property expires if the sum of its value and the `ttl_duration` value is smaller than the current timestamp. If the `ttl_duration` value is `0`, the property never expires.
You can set `ttl_use_ms` to `true` in the configuration file `nebula-storaged.conf` (default path: `/usr/local/nightly/etc/`) to set the default unit to milliseconds.| !!! caution @@ -31,39 +31,13 @@ The native nGQL TTL feature has the following options. - After setting `ttl_use_ms` to `true`, which sets the default TTL unit to milliseconds, the data type of the property specified by `ttl_col` must be `int`, and the property value needs to be manually converted to milliseconds. For example, when setting `ttl_col` to `a`, you need to convert the value of `a` to milliseconds, such as when the value of `a` is `now()`, you need to set the value of `a` to `now() * 1000`. +## Use TTL options -## Data expiration and deletion - -!!! caution - - - When the TTL options are set for a property of a tag or an edge type and the property's value is `NULL`, the property never expires. - - If a property with a default value of `now()` is added to a tag or an edge type and the TTL options are set for the property, the history data related to the tag or the edge type will never expire because the value of that property for the history data is the current timestamp. - -### Vertex property expiration - -Vertex property expiration has the following impact. - -* If a vertex has only one tag, once a property of the vertex expires, the vertex expires. - -* If a vertex has multiple tags, once a property of the vertex expires, properties bound to the same tag with the expired property also expire, but the vertex does not expire and other properties of it remain untouched. - -### Edge property expiration - -Since an edge can have only one edge type, once an edge property expires, the edge expires. - -### Data deletion - -The expired data are still stored on the disk, but queries will filter them out. - -NebulaGraph automatically deletes the expired data and reclaims the disk space during the next [compaction](../../8.service-tuning/compaction.md). - -!!! note - - If TTL is [disabled](#remove_a_timeout), the corresponding data deleted after the last compaction can be queried again. +You must use the TTL options together to set a lifespan on a property. -## Use TTL options +Before using the TTL feature, you must first create a timestamp or integer property and specify it in the TTL options. NebulaGraph will not automatically create or manage this timestamp property for you. -You must use the TTL options together to set a valid timeout on a property. +When inserting the value of the timestamp or integer property, it is recommended to use the `now()` function or the current timestamp to represent the present time. ### Set a timeout if a tag or an edge type exists @@ -91,6 +65,34 @@ nebula> CREATE TAG IF NOT EXISTS t2(a int, b int, c string) TTL_DURATION= 100, T # Insert a vertex with tag t2. The timeout timestamp is 1648197238 (1648197138 + 100). nebula> INSERT VERTEX t2(a, b, c) VALUES "102":(1648197138, 30, "Hello"); ``` +## Data expiration and deletion + +!!! caution + + - When the TTL options are set for a property of a tag or an edge type and the property's value is `NULL`, the property never expires. + - If a property with a default value of `now()` is added to a tag or an edge type and the TTL options are set for the property, the history data related to the tag or the edge type will never expire because the value of that property for the history data is the current timestamp. + +### Vertex property expiration + +Vertex property expiration has the following impact. 
+ +* If a vertex has only one tag, once a property of the vertex expires, the vertex expires. + +* If a vertex has multiple tags, once a property of the vertex expires, properties bound to the same tag with the expired property also expire, but the vertex does not expire and other properties of it remain untouched. + +### Edge property expiration + +Since an edge can have only one edge type, once an edge property expires, the edge expires. + +### Data deletion + +The expired data are still stored on the disk, but queries will filter them out. + +NebulaGraph automatically deletes the expired data and reclaims the disk space during the next [compaction](../../8.service-tuning/compaction.md). + +!!! note + + If TTL is [disabled](#remove_a_timeout), the corresponding data deleted after the last compaction can be queried again. ## Remove a timeout diff --git a/docs-2.0-en/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md b/docs-2.0-en/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md index 05d4af57ab4..d60de764cea 100644 --- a/docs-2.0-en/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md +++ b/docs-2.0-en/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md @@ -93,15 +93,15 @@ Using Docker Compose can quickly deploy NebulaGraph services based on the prepar ```bash [nebula-docker-compose]$ docker-compose up -d - Creating nebuladockercompose_metad0_1 ... done - Creating nebuladockercompose_metad2_1 ... done - Creating nebuladockercompose_metad1_1 ... done - Creating nebuladockercompose_graphd2_1 ... done - Creating nebuladockercompose_graphd_1 ... done - Creating nebuladockercompose_graphd1_1 ... done - Creating nebuladockercompose_storaged0_1 ... done - Creating nebuladockercompose_storaged2_1 ... done - Creating nebuladockercompose_storaged1_1 ... done + Creating nebula-docker-compose_metad0_1 ... done + Creating nebula-docker-compose_metad2_1 ... done + Creating nebula-docker-compose_metad1_1 ... done + Creating nebula-docker-compose_graphd2_1 ... done + Creating nebula-docker-compose_graphd_1 ... done + Creating nebula-docker-compose_graphd1_1 ... done + Creating nebula-docker-compose_storaged0_1 ... done + Creating nebula-docker-compose_storaged2_1 ... done + Creating nebula-docker-compose_storaged1_1 ... done ``` !!! compatibility @@ -126,7 +126,7 @@ There are two ways to connect to NebulaGraph: $ docker-compose ps Name Command State Ports -------------------------------------------------------------------------------------------- - nebuladockercompose_console_1 sh -c sleep 3 && Up + nebula-docker-compose_console_1 sh -c sleep 3 && Up nebula-co ... ...... ``` @@ -134,7 +134,7 @@ There are two ways to connect to NebulaGraph: 2. Run the following command to enter the NebulaGraph Console docker container. ```bash - docker exec -it nebuladockercompose_console_1 /bin/sh + docker exec -it nebula-docker-compose_console_1 /bin/sh / # ``` @@ -173,35 +173,35 @@ Run `docker-compose ps` to list all the services of NebulaGraph and their status ```bash $ docker-compose ps -nebuladockercompose_console_1 sh -c sleep 3 && Up +nebula-docker-compose_console_1 sh -c sleep 3 && Up nebula-co ... -nebuladockercompose_graphd1_1 /usr/local/nebula/bin/nebu ... 
Up 0.0.0.0:49174->19669/tcp,:::49174->19669/tcp, 0.0.0.0:49171->19670/tcp,:::49171->19670/tcp, 0.0.0.0:49177->9669/tcp,:::49177->9669/tcp -nebuladockercompose_graphd2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49175->19669/tcp,:::49175->19669/tcp, 0.0.0.0:49172->19670/tcp,:::49172->19670/tcp, 0.0.0.0:49178->9669/tcp,:::49178->9669/tcp -nebuladockercompose_graphd_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49180->19669/tcp,:::49180->19669/tcp, 0.0.0.0:49179->19670/tcp,:::49179->19670/tcp, 0.0.0.0:9669->9669/tcp,:::9669->9669/tcp -nebuladockercompose_metad0_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49157->19559/tcp,:::49157->19559/tcp, 0.0.0.0:49154->19560/tcp,:::49154->19560/tcp, 0.0.0.0:49160->9559/tcp,:::49160->9559/tcp, 9560/tcp -nebuladockercompose_metad1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49156->19559/tcp,:::49156->19559/tcp, 0.0.0.0:49153->19560/tcp,:::49153->19560/tcp, 0.0.0.0:49159->9559/tcp,:::49159->9559/tcp, 9560/tcp -nebuladockercompose_metad2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49158->19559/tcp,:::49158->19559/tcp, 0.0.0.0:49155->19560/tcp,:::49155->19560/tcp, 0.0.0.0:49161->9559/tcp,:::49161->9559/tcp, 9560/tcp -nebuladockercompose_storaged0_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49166->19779/tcp,:::49166->19779/tcp, 0.0.0.0:49163->19780/tcp,:::49163->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49169->9779/tcp,:::49169->9779/tcp, 9780/tcp -nebuladockercompose_storaged1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49165->19779/tcp,:::49165->19779/tcp, 0.0.0.0:49162->19780/tcp,:::49162->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49168->9779/tcp,:::49168->9779/tcp, 9780/tcp -nebuladockercompose_storaged2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49167->19779/tcp,:::49167->19779/tcp, 0.0.0.0:49164->19780/tcp,:::49164->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49170->9779/tcp,:::49170->9779/tcp, 9780/tcp +nebula-docker-compose_graphd1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49174->19669/tcp,:::49174->19669/tcp, 0.0.0.0:49171->19670/tcp,:::49171->19670/tcp, 0.0.0.0:49177->9669/tcp,:::49177->9669/tcp +nebula-docker-compose_graphd2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49175->19669/tcp,:::49175->19669/tcp, 0.0.0.0:49172->19670/tcp,:::49172->19670/tcp, 0.0.0.0:49178->9669/tcp,:::49178->9669/tcp +nebula-docker-compose_graphd_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49180->19669/tcp,:::49180->19669/tcp, 0.0.0.0:49179->19670/tcp,:::49179->19670/tcp, 0.0.0.0:9669->9669/tcp,:::9669->9669/tcp +nebula-docker-compose_metad0_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49157->19559/tcp,:::49157->19559/tcp, 0.0.0.0:49154->19560/tcp,:::49154->19560/tcp, 0.0.0.0:49160->9559/tcp,:::49160->9559/tcp, 9560/tcp +nebula-docker-compose_metad1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49156->19559/tcp,:::49156->19559/tcp, 0.0.0.0:49153->19560/tcp,:::49153->19560/tcp, 0.0.0.0:49159->9559/tcp,:::49159->9559/tcp, 9560/tcp +nebula-docker-compose_metad2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49158->19559/tcp,:::49158->19559/tcp, 0.0.0.0:49155->19560/tcp,:::49155->19560/tcp, 0.0.0.0:49161->9559/tcp,:::49161->9559/tcp, 9560/tcp +nebula-docker-compose_storaged0_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49166->19779/tcp,:::49166->19779/tcp, 0.0.0.0:49163->19780/tcp,:::49163->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49169->9779/tcp,:::49169->9779/tcp, 9780/tcp +nebula-docker-compose_storaged1_1 /usr/local/nebula/bin/nebu ... 
Up 0.0.0.0:49165->19779/tcp,:::49165->19779/tcp, 0.0.0.0:49162->19780/tcp,:::49162->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49168->9779/tcp,:::49168->9779/tcp, 9780/tcp +nebula-docker-compose_storaged2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49167->19779/tcp,:::49167->19779/tcp, 0.0.0.0:49164->19780/tcp,:::49164->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49170->9779/tcp,:::49170->9779/tcp, 9780/tcp ``` -If the service is abnormal, you can first confirm the abnormal container name (such as `nebuladockercompose_graphd2_1`). +If the service is abnormal, you can first confirm the abnormal container name (such as `nebula-docker-compose_graphd2_1`). Then you can execute `docker ps` to view the corresponding `CONTAINER ID` (such as `2a6c56c405f5`). ```bash [nebula-docker-compose]$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -2a6c56c405f5 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49230->9669/tcp, 0.0.0.0:49229->19669/tcp, 0.0.0.0:49228->19670/tcp nebuladockercompose_graphd2_1 -7042e0a8e83d vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49227->9779/tcp, 0.0.0.0:49226->19779/tcp, 0.0.0.0:49225->19780/tcp nebuladockercompose_storaged2_1 -18e3ea63ad65 vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49219->9779/tcp, 0.0.0.0:49218->19779/tcp, 0.0.0.0:49217->19780/tcp nebuladockercompose_storaged0_1 -4dcabfe8677a vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49224->9669/tcp, 0.0.0.0:49223->19669/tcp, 0.0.0.0:49222->19670/tcp nebuladockercompose_graphd1_1 -a74054c6ae25 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:9669->9669/tcp, 0.0.0.0:49221->19669/tcp, 0.0.0.0:49220->19670/tcp nebuladockercompose_graphd_1 -880025a3858c vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49216->9779/tcp, 0.0.0.0:49215->19779/tcp, 0.0.0.0:49214->19780/tcp nebuladockercompose_storaged1_1 -45736a32a23a vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49213->9559/tcp, 0.0.0.0:49212->19559/tcp, 0.0.0.0:49211->19560/tcp nebuladockercompose_metad0_1 -3b2c90eb073e vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49207->9559/tcp, 0.0.0.0:49206->19559/tcp, 0.0.0.0:49205->19560/tcp nebuladockercompose_metad2_1 -7bb31b7a5b3f vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49210->9559/tcp, 0.0.0.0:49209->19559/tcp, 0.0.0.0:49208->19560/tcp nebuladockercompose_metad1_1 +2a6c56c405f5 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49230->9669/tcp, 0.0.0.0:49229->19669/tcp, 0.0.0.0:49228->19670/tcp nebula-docker-compose_graphd2_1 +7042e0a8e83d vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49227->9779/tcp, 0.0.0.0:49226->19779/tcp, 0.0.0.0:49225->19780/tcp nebula-docker-compose_storaged2_1 +18e3ea63ad65 vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49219->9779/tcp, 0.0.0.0:49218->19779/tcp, 0.0.0.0:49217->19780/tcp nebula-docker-compose_storaged0_1 
+4dcabfe8677a vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49224->9669/tcp, 0.0.0.0:49223->19669/tcp, 0.0.0.0:49222->19670/tcp nebula-docker-compose_graphd1_1 +a74054c6ae25 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:9669->9669/tcp, 0.0.0.0:49221->19669/tcp, 0.0.0.0:49220->19670/tcp nebula-docker-compose_graphd_1 +880025a3858c vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49216->9779/tcp, 0.0.0.0:49215->19779/tcp, 0.0.0.0:49214->19780/tcp nebula-docker-compose_storaged1_1 +45736a32a23a vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49213->9559/tcp, 0.0.0.0:49212->19559/tcp, 0.0.0.0:49211->19560/tcp nebula-docker-compose_metad0_1 +3b2c90eb073e vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49207->9559/tcp, 0.0.0.0:49206->19559/tcp, 0.0.0.0:49205->19560/tcp nebula-docker-compose_metad2_1 +7bb31b7a5b3f vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49210->9559/tcp, 0.0.0.0:49209->19559/tcp, 0.0.0.0:49208->19560/tcp nebula-docker-compose_metad1_1 ``` Use the `CONTAINER ID` to log in the container and troubleshoot. @@ -250,27 +250,27 @@ $ docker-compose down The following information indicates you have successfully stopped the NebulaGraph services: ```bash -Stopping nebuladockercompose_console_1 ... done -Stopping nebuladockercompose_graphd1_1 ... done -Stopping nebuladockercompose_graphd_1 ... done -Stopping nebuladockercompose_graphd2_1 ... done -Stopping nebuladockercompose_storaged1_1 ... done -Stopping nebuladockercompose_storaged0_1 ... done -Stopping nebuladockercompose_storaged2_1 ... done -Stopping nebuladockercompose_metad2_1 ... done -Stopping nebuladockercompose_metad0_1 ... done -Stopping nebuladockercompose_metad1_1 ... done -Removing nebuladockercompose_console_1 ... done -Removing nebuladockercompose_graphd1_1 ... done -Removing nebuladockercompose_graphd_1 ... done -Removing nebuladockercompose_graphd2_1 ... done -Removing nebuladockercompose_storaged1_1 ... done -Removing nebuladockercompose_storaged0_1 ... done -Removing nebuladockercompose_storaged2_1 ... done -Removing nebuladockercompose_metad2_1 ... done -Removing nebuladockercompose_metad0_1 ... done -Removing nebuladockercompose_metad1_1 ... done -Removing network nebuladockercompose_nebula-net +Stopping nebula-docker-compose_console_1 ... done +Stopping nebula-docker-compose_graphd1_1 ... done +Stopping nebula-docker-compose_graphd_1 ... done +Stopping nebula-docker-compose_graphd2_1 ... done +Stopping nebula-docker-compose_storaged1_1 ... done +Stopping nebula-docker-compose_storaged0_1 ... done +Stopping nebula-docker-compose_storaged2_1 ... done +Stopping nebula-docker-compose_metad2_1 ... done +Stopping nebula-docker-compose_metad0_1 ... done +Stopping nebula-docker-compose_metad1_1 ... done +Removing nebula-docker-compose_console_1 ... done +Removing nebula-docker-compose_graphd1_1 ... done +Removing nebula-docker-compose_graphd_1 ... done +Removing nebula-docker-compose_graphd2_1 ... done +Removing nebula-docker-compose_storaged1_1 ... done +Removing nebula-docker-compose_storaged0_1 ... done +Removing nebula-docker-compose_storaged2_1 ... done +Removing nebula-docker-compose_metad2_1 ... done +Removing nebula-docker-compose_metad0_1 ... 
done +Removing nebula-docker-compose_metad1_1 ... done +Removing network nebula-docker-compose_nebula-net ``` !!! danger diff --git a/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-ent-from-3.x-3.4.md b/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-ent-from-3.x-3.4.md deleted file mode 100644 index 4de0e82b5a4..00000000000 --- a/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-ent-from-3.x-3.4.md +++ /dev/null @@ -1,115 +0,0 @@ -# Upgrade NebulaGraph Enterprise Edition from version 3.x to {{nebula.release}} - -This topic takes the enterprise edition of NebulaGraph v3.1.0 as an example and describes how to upgrade to v{{nebula.release}}. - -## Notes - -- This upgrade is only applicable for upgrading the enterprise edition of NebulaGraph v3.x (x < 4) to v{{nebula.release}}. For upgrading from version 3.4.0 and above to {{nebula.release}}, you can directly replace the binary files for an upgrade. For more information, see [Upgrade NebulaGraph to {{nebula.release}}](https://docs.nebula-graph.com.cn/{{nebula.release}}/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest/). - - !!! note - - If your version is below 3.0.0, please upgrade to enterprise edition 3.1.0 before upgrading to v{{nebula.release}}. For details, see [Upgrade NebulaGraph Enterprise Edition 2.x to 3.1.0](https://docs.nebula-graph.io/3.1.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest/). - -- The IP address of the machine performing the upgrade operation must be the same as the original machine. - -- The remaining disk space on the machine must be at least 1.5 times the size of the original data directory. - -- Before upgrading a NebulaGraph cluster with full-text indexes deployed, you must manually delete the full-text indexes in Elasticsearch, and then run the `SIGN IN` command to log into ES and recreate the indexes after the upgrade is complete. - - !!! note - - To manually delete the full-text indexes in Elasticsearch, you can use the curl command `curl -XDELETE -u : ':/'`, for example, `curl -XDELETE -u elastic:elastic 'http://192.168.8.223:9200/nebula_index_2534'`. If no username and password are set for Elasticsearch, you can omit the `-u :` part. - -## Steps - -1. [Contact us](https://www.nebula-graph.io/contact) to obtain the installation package of the enterprise edition of NebulaGraph v{{nebula.release}} and install it. - - !!! note - - The upgrade steps are the same for different installation packages. This article uses the RPM package and the installation directory `/usr/local/nebulagraph-ent-{{nebula.release}}` as an example. See [Install with RPM packages](../2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) for specific operations. - - !!! caution - - Please ensure that the number of storage paths set for the `--data_path` parameter in the Meta and Storage service configuration files of the {{nebula.release}} cluster is the same as that for the `--data_path` parameter in the configuration files of the 3.x cluster. Otherwise, the upgraded cluster will not start. - -1. Stop the enterprise edition of v3.x services. For details see [Manage NebulaGraph services](https://docs.nebula-graph.io/3.5.0-sc/4.deployment-and-installation/manage-service/). - - Run the `nebula.service status all` command to confirm that all services have been stopped after running the command. - -3. 
In the installation directory of the Enterprise Edition NebulaGraph v{{nebula.release}}, run the following commands to upgrade the Storage and Meta services. - - - Upgrade the Storage service: - - Syntax: - - ```bash - sudo ./bin/db_upgrader --max_concurrent_parts= --src_db_path= --dst_db_path= - ``` - - | Parameter | Description | - | :-------------- | :--------------------------- | - | `--max_concurrent_parts` | Specify the number of partitions to upgrade simultaneously, with the default value being 1.
It is recommended to increase the value appropriately based on disk performance. | - | `--src_db_path` | Specify the absolute path to the source data directory. The following takes the source data directory `/usr/local/nebula-ent-3.1.0/data/storage` as an example. | - | `--dst_db_path` | Specify the absolute path to the target data directory. The example target data directory is `/usr/local/nebula-ent-{{nebula.release}}/data/storage`.| - - Example: - - ```bash - sudo ./bin/db_upgrader --max_concurrent_parts=20 --src_db_path=/usr/local/nebula-ent-3.1.0/data/storage --dst_db_path=/usr/local/nebula-ent-{{nebula.release}}/data/storage - ``` - - If there are multiple source data directories, specify each source data directory and target data directory and run the corresponding command. For example, there are two source data directories `/usr/local/nebula-ent-3.1.0/data/storage` and `/usr/local/nebula-ent-3.1.0/data2/storage`, run the following commands: - - ```bash - sudo ./bin/db_upgrader --src_db_path=/usr/local/nebula-ent-3.1.0/data/storage --dst_db_path=/usr/local/nebula-ent-{{nebula.release}}/data/storage - - sudo ./bin/db_upgrader --src_db_path=/usr/local/nebula-ent-3.1.0/data2/storage --dst_db_path=/usr/local/nebula-ent-{{nebula.release}}/data2/storage - ``` - - - Upgrade the Meta service: - - Syntax: - - ```bash - sudo ./bin/meta_upgrader --src_meta_path= --dst_meta_path= - ``` - - | Parameter | Description | - | :-------------- | :--------------------------- | - | `--src_meta_path` | Specify the absolute path to the source meta data directory. The following takes the source data directory `/usr/local/nebula-ent-3.1.0/data/meta` as an example. | - | `--dst_meta_path` | Specify the absolute path to the target meta data directory. The example target data directory is `/usr/local/nebula-ent-{{nebula.release}}/data/meta`.| - - Example: - - ```bash - sudo ./bin/meta_upgrader --src_meta_path=/usr/local/nebula-ent-3.1.0/data/meta --dst_meta_path=/usr/local/nebula-ent-{{nebula.release}}/data/meta - ``` - - If there are multiple source meta data directories, specify each source meta data directory and target meta data directory and run the corresponding command. - - After the upgrade, a `data` directory will be generated in the v{{nebula.release}} installation directory, containing the upgraded data files. - -4. Start and connect to the NebulaGraph v{{nebula.release}} enterprise edition service and verify that the data is correct. The following commands can be used as reference: - - ``` - nebula> SHOW HOSTS; - nebula> SHOW HOSTS storage; - nebula> SHOW SPACES; - nebula> USE - nebula> SHOW PARTS; - nebula> SUBMIT JOB STATS; - nebula> SHOW STATS; - nebula> MATCH (v) RETURN v LIMIT 5; - ``` - -## Docker Compose Deployment - -!!! caution - - For NebulaGraph deployed using Docker Compose, it is recommended to redeploy the new version and import data. - - - - - - diff --git a/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-ent.md .md b/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-ent.md .md new file mode 100644 index 00000000000..4bab871ed47 --- /dev/null +++ b/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-ent.md .md @@ -0,0 +1,244 @@ +# Upgrade NebulaGraph Enterprise Edition from version 3.x to {{nebula.release}} + +This topic takes the enterprise edition of NebulaGraph v3.1.0 as an example and describes how to upgrade to v{{nebula.release}}. + +## Notes + +- Rolling Upgrade is not supported. 
You must stop all the NebulaGraph services before the upgrade. + +- There is no upgrade script. You have to manually upgrade each server in the cluster. + +- The IP address of the machine performing the upgrade operation must be the same as the original machine. +- You must have the sudo privileges to complete the steps in this topic. + +- The remaining disk space on the machine must be at least 1.5 times the size of the original data directory. + +## Upgrade influences + + + +- Client compatibility + + After the upgrade, you will not be able to connect to NebulaGraph from old clients. You will need to upgrade all clients to a version compatible with NebulaGraph {{nebula.release}}. + +- Configuration changes + + A few configuration parameters have been changed. For more information, see the release notes and configuration docs. + +- nGQL compatibility + + The nGQL syntax is partially incompatible: + + - Disable the `YIELD` clause to return custom variables. + + - The `YIELD` clause is required in the `FETCH`, `GO`, `LOOKUP`, `FIND PATH` and `GET SUBGRAPH` statements. + + - It is required to specify a tag to query properties of a vertex in a `MATCH` statement. For example, from `return v.name` to `return v.player.name`. + +- Full-text indexes + + Before upgrading a NebulaGraph cluster with full-text indexes deployed, you must manually delete the full-text indexes in Elasticsearch, and then run the `SIGN IN` command to log into ES and recreate the indexes after the upgrade is complete. To manually delete the full-text indexes in Elasticsearch, you can use the curl command `curl -XDELETE -u : ':/'`, for example, `curl -XDELETE -u elastic:elastic 'http://192.168.8.xxx:9200/nebula_index_2534'`. If no username and password are set for Elasticsearch, you can omit the `-u :` part. + + !!! note + + For upgrades from version 3.5.0 and later to {{nebula.release}}, there's no need to manually delete the full-text indexes. + +## Upgrading to {{nebula.release}} from version 3.4.0 and above + + +1. Stop all {{nebula.name}} services. + + ``` + /scripts/nebula.service stop all + ``` + + Replace `install_path` with the installation directory of the {{nebula.name}} instance you want to upgrade. + + Allow approximately 1 minute for the `storaged` process to flush data. You can continue by running the `nebula.service status all` command to confirm that all services have stopped. For detailed instructions on starting and stopping services, refer to [Managing Services](../manage-service.md). + + !!! note + + If services cannot be stopped within 20 minutes, abandon the upgrade, and contact customer support. + + !!! caution + + Starting from version 3.0.0, {{nebula.name}} supports inserting points without tags. To retain points without tags, add the `--graph_use_vertex_key=true` flag to the configuration file (`nebula-graphd.conf`) of all Graph services within the cluster and add the `--use_vertex_key=true` flag to the configuration file (`nebula-storaged.conf`) of all Storage services. + +2. Prepare the installation package for {{nebula.name}} {{nebula.release}} and extract it. You can specify any installation directory. + +3. In the directory of {{nebula.release}}, use the new binary files from its `bin` directory to replace the old binary files in the `bin` directory of the {{nebula.name}} installation path. + + !!! note + Update the binary files for each machine where {{nebula.name}} services are deployed. + + + +4. 
In the `nebula-metad.conf` configuration file of NebulaGraph, add the `license_manager_url` parameter and set it to the LM's path. + + The LM is used to verify {{nebula.name}}'s licensing information. For details, see [LM Configuration](../../9.about-license/2.license-management-suite/3.license-manager.md). + + !!! note + Starting from version 3.5.0, {{nebula.name}} enables license validation, so it's necessary to install and configure LM. + +5. Start all Meta services. + + ``` + /scripts/nebula-metad.service start + ``` + + After starting, the Meta services will elect a leader. This process takes a few seconds. + + Once started, you can start any Graph service node and connect to it using {{nebula.name}}. Run [`SHOW HOSTS meta`](../../3.ngql-guide/7.general-query-statements/6.show/6.show-hosts.md) and [`SHOW META LEADER`](../../3.ngql-guide/7.general-query-statements/6.show/19.show-meta-leader.md). If they return the status of the Meta node correctly, the Meta service has started successfully. + + !!! note + + If there are any exceptions during startup, abandon the upgrade, and contact customer support. + +5. Start all Graph and Storage services. + + !!! note + + If there are any exceptions during startup, abandon the upgrade, and contact customer support. + +6. Connect to the new version of {{nebula.name}} and verify that the service is operational and that the data is intact. For information on connecting to the service, refer to [Connecting to Nebula Graph](../connect-to-nebula-graph.md). + + Some reference commands to test the upgrade are as follows: + + ```ngql + nebula> SHOW HOSTS; + nebula> SHOW HOSTS storage; + nebula> SHOW SPACES; + nebula> USE + nebula> SHOW PARTS; + nebula> SUBMIT JOB STATS; + nebula> SHOW STATS; + nebula> MATCH (v) RETURN v LIMIT 5; + ``` + +## Upgrade 3.x(x < 4)to {{nebula.release}} + +1. [Contact us](https://www.nebula-graph.io/contact) to obtain the installation package of the enterprise edition of NebulaGraph v{{nebula.release}} and install it. + + !!! note + + The upgrade steps are the same for different installation packages. This article uses the RPM package and the installation directory `/usr/local/nebulagraph-ent-{{nebula.release}}` as an example. See [Install with RPM packages](../2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) for specific operations. + + !!! caution + + Please ensure that the number of storage paths set for the `--data_path` parameter in the Meta and Storage service configuration files of the {{nebula.release}} cluster is the same as that for the `--data_path` parameter in the configuration files of the 3.x cluster. Otherwise, the upgraded cluster will not start. + +2. Back up the data and the binary files in the subdirectory `bin` of the 3.x cluster. + + !!! note + + The backup is used for rollback in case of upgrade failure. The backup files are not used in the upgrade process. + +3. Stop the enterprise edition of v3.x services. For details see [Manage NebulaGraph services](https://docs.nebula-graph.io/3.5.0-sc/4.deployment-and-installation/manage-service/). + + Run the `nebula.service status all` command to confirm that all services have been stopped after running the command. + + +4. In the subdirectory `etc` of the {{nebula.name}} v{{nebula.release}} installation directory, update the configuration files (if there have been any configuration updates previously). + + !!! note + + If there were no configuration updates previously, you can skip this step. + +5. 
In the `nebula-metad.conf` file of {{nebula.name}} v{{nebula.release}}, set `license_manager_url` to the URL of [LM](../../9.about-license/2.license-management-suite/3.license-manager.md). + + !!! note + For the Enterprise Edition of NebulaGraph v3.5.0 or later, you need to install and configure LM to verify the license used to start NebulaGraph. + +6. In the installation directory of the Enterprise Edition NebulaGraph v{{nebula.release}}, run the following commands to upgrade the Storage and Meta services. + + - Upgrade the Storage service: + + Syntax: + + ```bash + sudo ./bin/db_upgrader --max_concurrent_parts= --src_db_path= --dst_db_path= + ``` + + | Parameter | Description | + | :-------------- | :--------------------------- | + | `--max_concurrent_parts` | Specify the number of partitions to upgrade simultaneously, with the default value being 1.
It is recommended to increase the value appropriately based on disk performance. |
+      | `--src_db_path` | Specify the absolute path to the source data directory. The following takes the source data directory `/usr/local/nebula-ent-3.1.0/data/storage` as an example. |
+      | `--dst_db_path` | Specify the absolute path to the target data directory. The example target data directory is `/usr/local/nebula-ent-{{nebula.release}}/data/storage`.|
+
+      Example:
+
+      ```bash
+      sudo ./bin/db_upgrader --max_concurrent_parts=20 --src_db_path=/usr/local/nebula-ent-3.1.0/data/storage --dst_db_path=/usr/local/nebula-ent-{{nebula.release}}/data/storage
+      ```
+
+      If there are multiple source data directories, specify each source data directory and target data directory and run the corresponding command. For example, there are two source data directories `/usr/local/nebula-ent-3.1.0/data/storage` and `/usr/local/nebula-ent-3.1.0/data2/storage`, run the following commands:
+
+      ```bash
+      sudo ./bin/db_upgrader --src_db_path=/usr/local/nebula-ent-3.1.0/data/storage --dst_db_path=/usr/local/nebula-ent-{{nebula.release}}/data/storage
+
+      sudo ./bin/db_upgrader --src_db_path=/usr/local/nebula-ent-3.1.0/data2/storage --dst_db_path=/usr/local/nebula-ent-{{nebula.release}}/data2/storage
+      ```
+
+    - Upgrade the Meta service:
+
+      Syntax:
+
+      ```bash
+      sudo ./bin/meta_upgrader --src_meta_path= --dst_meta_path=
+      ```
+
+      | Parameter | Description |
+      | :-------------- | :--------------------------- |
+      | `--src_meta_path` | Specify the absolute path to the source meta data directory. The following takes the source data directory `/usr/local/nebula-ent-3.1.0/data/meta` as an example. |
+      | `--dst_meta_path` | Specify the absolute path to the target meta data directory. The example target data directory is `/usr/local/nebula-ent-{{nebula.release}}/data/meta`.|
+
+      Example:
+
+      ```bash
+      sudo ./bin/meta_upgrader --src_meta_path=/usr/local/nebula-ent-3.1.0/data/meta --dst_meta_path=/usr/local/nebula-ent-{{nebula.release}}/data/meta
+      ```
+
+      If there are multiple source meta data directories, specify each source meta data directory and target meta data directory and run the corresponding command.
+
+    After the upgrade, a `data` directory will be generated in the v{{nebula.release}} installation directory, containing the upgraded data files.
+
+
+7. Start and connect to the NebulaGraph v{{nebula.release}} enterprise edition service and verify that the data is correct. The following commands can be used as reference:
+
+    ```
+    nebula> SHOW HOSTS;
+    nebula> SHOW HOSTS storage;
+    nebula> SHOW SPACES;
+    nebula> USE
+    nebula> SHOW PARTS;
+    nebula> SUBMIT JOB STATS;
+    nebula> SHOW STATS;
+    nebula> MATCH (v) RETURN v LIMIT 5;
+    ```
+
+
+## Upgrading from previous versions to {{nebula.release}}
+
+If your NebulaGraph database version is lower than 3.0.0, to upgrade to {{nebula.release}}, see the above section **Upgrade 3.x(x < 4)to {{nebula.release}}**.
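+
+For reference, the rollback backup described in step 2 of the **Upgrade 3.x(x < 4)to {{nebula.release}}** section above can be done with plain file copies. A minimal sketch, assuming the 3.x cluster is installed under `/usr/local/nebula-ent-3.1.0` and `/backup` has enough free space (both paths are illustrative):
+
+```bash
+# Back up the binaries and the data directories of the 3.x cluster.
+# These copies are only used for rollback if the upgrade fails;
+# the upgrade process itself does not read them.
+sudo mkdir -p /backup/nebula-ent-3.1.0
+sudo cp -r /usr/local/nebula-ent-3.1.0/bin /backup/nebula-ent-3.1.0/bin
+sudo cp -r /usr/local/nebula-ent-3.1.0/data /backup/nebula-ent-3.1.0/data
+```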
+ + + + + + + + diff --git a/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-300-to-latest.md b/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-300-to-latest.md deleted file mode 100644 index 29ed7ff7a2b..00000000000 --- a/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-300-to-latest.md +++ /dev/null @@ -1,59 +0,0 @@ -# Upgrade NebulaGraph v3.x to v{{nebula.release}} - -To upgrade NebulaGraph v3.x to v{{nebula.release}}, you only need to use the RPM/DEB package of v{{nebula.release}} for the upgrade, or [compile it](../2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md) and then reinstall. - -!!! caution - - Before upgrading a NebulaGraph cluster with full-text indexes deployed, you must manually delete the full-text indexes in Elasticsearch, and then run the `SIGN IN` command to log into ES and recreate the indexes after the upgrade is complete. To manually delete the full-text indexes in Elasticsearch, you can use the curl command `curl -XDELETE -u : ':/'`, for example, `curl -XDELETE -u elastic:elastic 'http://192.168.8.223:9200/nebula_index_2534'`. If no username and password are set for Elasticsearch, you can omit the `-u :` part. - -## Upgrade steps with RPM/DEB packages - -1. Download the [RPM/DEB package](https://www.nebula-graph.io/download). - -2. Stop all NebulaGraph services. For details, see [Manage NebulaGraph Service](../../2.quick-start/5.start-stop-service.md). It is recommended to back up the configuration file before updating. - - !!! caution - - If you want to use the vertex without tags, add `--graph_use_vertex_key=true` to the configuration files (`nebula-graphd.conf`) of all Graph services in the cluster, add `--use_vertex_key=true` to the configuration files (`nebula-storaged.conf`) of all Storage services in the cluster. - -3. Execute the following command to upgrade: - - - RPM package - - ```bash - $ sudo rpm -Uvh - ``` - - If you specify the path during installation, you also need to specify the path during upgrade. - - ```bash - $ sudo rpm -Uvh --prefix= - ``` - - - DEB package - - ```bash - $ sudo dpkg -i - ``` - -4. Start the required services on each server. For details, see [Manage NebulaGraph Service](../../2.quick-start/5.start-stop-service.md). - -## Upgrade steps by compiling the new source code - -1. Back up the old version of the configuration file. The configuration file is saved in the `etc` directory of the NebulaGraph installation path. - -2. Update the repository and compile the source code. For details, see [Install NebulaGraph by compiling the source code](../2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md). - - !!! note - - When compiling, set the installation path, which is the same as the installation path of the old version. - -## Upgrade steps by deploying Docker Compose - -1. Modify the file `docker-compose.yaml` in the directory `nebula-docker-compose`, and modify all versions after `image` to `{{nebula.branch}}`. - -2. Execute the command `docker-compose pull` in the directory `nebula-docker-compose` to update the images of all services. - -3. Execute the command `docker-compose down` to stop the NebulaGraph service. - -4. Execute the command `docker-compose up -d` to start the NebulaGraph service. 
diff --git a/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md b/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md deleted file mode 100644 index 2eb2688130a..00000000000 --- a/docs-2.0-en/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md +++ /dev/null @@ -1,211 +0,0 @@ -# Upgrade NebulaGraph to {{nebula.release}} - -This topic describes how to upgrade NebulaGraph from version 2.x and 3.x to {{nebula.release}}, taking upgrading from version 2.6.1 to {{nebula.release}} as an example. - -## Applicable source versions - -This topic applies to upgrading NebulaGraph from 2.5.0 and later 2.x, and 3.x versions to {{nebula.release}}. It does not apply to historical versions earlier than 2.5.0, including the 1.x versions. - -To upgrade NebulaGraph from historical versions to {{nebula.release}}: - -1. Upgrade it to the latest 2.5 version according to the docs of that version. -2. Follow this topic to upgrade it to {{nebula.release}}. - -!!! caution - - To upgrade NebulaGraph from versions earlier than 2.0.0 (including the 1.x versions) to {{nebula.release}}, you need to find the `date_time_zonespec.csv` in the `share/resources` directory of {{nebula.release}} files, and then copy it to the same directory in the NebulaGraph installation path. - -## Limitations - -* Rolling Upgrade is not supported. You must stop all the NebulaGraph services before the upgrade. - -* There is no upgrade script. You have to manually upgrade each server in the cluster. - -* This topic does not apply to scenarios where NebulaGraph is deployed with Docker, including Docker Swarm, Docker Compose, and K8s. - -* You must upgrade the old NebulaGraph services on the same machines they are deployed. **DO NOT** change the IP addresses, configuration files of the machines, and **DO NOT** change the cluster topology. - - - -* Known issues that could cause data loss are listed on [GitHub known issues](https://github.com/vesoft-inc/nebula-graph/issues/857). The issues are all related to altering schema or default values. - -* **DO NOT** use soft links to switch the data directories. - -* You must have the sudo privileges to complete the steps in this topic. - -## Upgrade influences - - - -- Client compatibility - - After the upgrade, you will not be able to connect to NebulaGraph from old clients. You will need to upgrade all clients to a version compatible with NebulaGraph {{nebula.release}}. - -- Configuration changes - - A few configuration parameters have been changed. For more information, see the release notes and configuration docs. - -- nGQL compatibility - - The nGQL syntax is partially incompatible: - - - Disable the `YIELD` clause to return custom variables. - - - The `YIELD` clause is required in the `FETCH`, `GO`, `LOOKUP`, `FIND PATH` and `GET SUBGRAPH` statements. - - - It is required to specify a tag to query properties of a vertex in a `MATCH` statement. For example, from `return v.name` to `return v.player.name`. - -- Full-text indexes - - Before upgrading a NebulaGraph cluster with full-text indexes deployed, you must manually delete the full-text indexes in Elasticsearch, and then run the `SIGN IN` command to log into ES and recreate the indexes after the upgrade is complete. To manually delete the full-text indexes in Elasticsearch, you can use the curl command `curl -XDELETE -u : ':/'`, for example, `curl -XDELETE -u elastic:elastic 'http://192.168.8.xxx:9200/nebula_index_2534'`. 
If no username and password are set for Elasticsearch, you can omit the `-u :` part. - -!!! caution - - There may be other undiscovered influences. Before the upgrade, we recommend that you read the release notes and user manual carefully, and keep an eye on the [posts](https://github.com/vesoft-inc/nebula/discussions) on the forum and [issues](https://github.com/vesoft-inc/nebula/issues) on Github. - -## Preparations before the upgrade - -- Download the package of NebulaGraph {{nebula.release}} according to your operating system and system architecture. You need the binary files during the upgrade. Find the package on [the download page](https://nebula-graph.io/download/). - - !!! note - You can also get the new binaries from the source code or the RPM/DEB package. - -- Locate the data files based on the value of the `data_path` parameters in the Storage and Meta configurations, and backup the data files. The default paths are `nebula/data/storage` and `nebula/data/meta`. - - !!! danger - The old data will not be automatically backed up during the upgrade. You must manually back up the data to avoid data loss. - -- Backup the configuration files. - -- Collect the statistics of all graph spaces before the upgrade. After the upgrade, you can collect again and compare the results to make sure that no data is lost. To collect the statistics: - - 1. Run `SUBMIT JOB STATS`. - 2. Run `SHOW JOBS` and record the result. - -## Upgrade steps - -1. Stop all NebulaGraph services. - - ``` - /scripts/nebula.service stop all - ``` - - `nebula_install_path` indicates the installation path of NebulaGraph. - - The storaged progress needs around 1 minute to flush data. You can run `nebula.service status all` to check if all services are stopped. For more information about starting and stopping services, see [Manage services](../manage-service.md). - - !!! note - - If the services are not fully stopped in 20 minutes, stop upgrading and ask for help on [the forum](https://github.com/vesoft-inc/nebula/discussions) or [Github](https://github.com/vesoft-inc/nebula/issues). - - !!! caution - - Starting from version 3.0.0, it is possible to insert vertices without tags. If you need to keep vertices without tags, add `--graph_use_vertex_key=true` in the configuration file (`nebula-graphd.conf`) of all Graph services within the cluster; and add `--use_vertex_key=true` in the configuration file (`nebula-storaged.conf`) of all Storage services." - -2. In the target path where you unpacked the package, use the binaries in the `bin` directory to replace the old binaries in the `bin` directory in the NebulaGraph installation path. - - !!! note - Update the binary of the corresponding service on each NebulaGraph server. - -3. Modify the following parameters in all Graph configuration files to accommodate the value range of the new version. If the parameter values are within the specified range, skip this step. - - - Set a value in [1,604800] for `session_idle_timeout_secs`. The recommended value is 28800. - - Set a value in [1,604800] for `client_idle_timeout_secs`. The recommended value is 28800. - - The default values of these parameters in the 2.x versions are not within the range of the new version. If you do not change the default values, the upgrade will fail. For detailed parameter description, see [Graph Service Configuration](../../5.configurations-and-logs/1.configurations/3.graph-config.md). - -4. Start all Meta services. 
- - ``` - /scripts/nebula-metad.service start - ``` - - Once started, the Meta services take several seconds to elect a leader. - - To verify that Meta services are all started, you can start any Graph server, connect to it through NebulaGraph Console, and run [`SHOW HOSTS meta`](../../3.ngql-guide/7.general-query-statements/6.show/6.show-hosts.md) and [`SHOW META LEADER`](../../3.ngql-guide/7.general-query-statements/6.show/19.show-meta-leader.md). If the status of Meta services are correctly returned, the services are successfully started. - - !!! note - If the operation fails, stop the upgrade and ask for help on [the forum](https://discuss.nebula-graph.com.cn/) or [GitHub](https://github.com/vesoft-inc/nebula/issues). - -5. Start all the Graph and Storage services. - - !!! note - If the operation fails, stop the upgrade and ask for help on [the forum](https://discuss.nebula-graph.com.cn/) or [GitHub](https://github.com/vesoft-inc/nebula/issues). - -6. Connect to the new version of NebulaGraph to verify that services are available and data are complete. For how to connect, see [Connect to NebulaGraph](../connect-to-nebula-graph.md). - - Currently, there is no official way to check whether the upgrade is successful. You can run the following reference statements to test the upgrade: - - ```ngql - nebula> SHOW HOSTS; - nebula> SHOW HOSTS storage; - nebula> SHOW SPACES; - nebula> USE - nebula> SHOW PARTS; - nebula> SUBMIT JOB STATS; - nebula> SHOW STATS; - nebula> MATCH (v) RETURN v LIMIT 5; - ``` - - You can also test against [new features](../../20.appendix/release-notes/nebula-comm-release-note.md) in version {{nebula.release}}. - -## Upgrade failure and rollback - -If the upgrade fails, stop all NebulaGraph services of the new version, recover the old configuration files and binaries, and start the services of the old version. - -All NebulaGraph clients in use must be switched to the old version. - -## FAQ - -### Can I write through the client during the upgrade? - -A: No. You must stop all NebulaGraph services during the upgrade. - - -### The `Space 0 not found` warning message during the upgrade process - -When the `Space 0 not found` warning message appears during the upgrade process, you can ignore it. The space `0` is used to store meta information about the Storage service and does not contain user data, so it will not affect the upgrade. - -### How to upgrade if a machine has only the Graph Service, but not the Storage Service? - -A: You only need to update the configuration files and binaries of the Graph Service. - -### How to resolve the error `Permission denied`? - -A: Try again with the sudo privileges. - -### Is there any change in gflags? - -A: Yes. For more information, see the release notes and configuration docs. - -### Is there a tool or solution for verifying data consistency after the upgrade? - -A: No. But if you only want to check the number of vertices and edges, run `SUBMIT JOB STATS` and `SHOW STATS` after the upgrade, and compare the result with the result that you recorded before the upgrade. - -### How to solve the issue that Storage is `OFFLINE` and `Leader count` is `0`? - -A: Run the following statement to add the Storage hosts into the cluster manually. - -```ngql -ADD HOSTS :[, : ...]; -``` - -For example: - -```ngql -ADD HOSTS 192.168.10.100:9779, 192.168.10.101:9779, 192.168.10.102:9779; -``` - -If the issue persists, ask for help on [the forum](https://discuss.nebula-graph.com.cn/) or [GitHub](https://github.com/vesoft-inc/nebula/issues). 
-
-### Why the job type changed after the upgrade, but job ID remains the same?
-
-A: `SHOW JOBS` depends on an internal ID to identify job types, but in NebulaGraph 2.5.0 the internal ID changed in [this pull request](https://github.com/vesoft-inc/nebula-common/pull/562/files), so this issue happens after upgrading from a version earlier than 2.5.0.
diff --git a/docs-2.0-en/4.deployment-and-installation/5.zone.md b/docs-2.0-en/4.deployment-and-installation/5.zone.md
index 73d371c3b95..822ce224f02 100644
--- a/docs-2.0-en/4.deployment-and-installation/5.zone.md
+++ b/docs-2.0-en/4.deployment-and-installation/5.zone.md
@@ -123,7 +123,7 @@ nebula> DESC ZONE az1
 +-----------------+------+
 | Hosts           | Port |
 +-----------------+------+
-| "192.168.8.111" | 7779 |
+| "192.168.8.111" | 9779 |
 | "192.168.8.112" | 9779 |
 +-----------------+------+
 ```
diff --git a/docs-2.0-en/5.configurations-and-logs/1.configurations/1.configurations.md b/docs-2.0-en/5.configurations-and-logs/1.configurations/1.configurations.md
index 49a99ce84e4..fff96314b3c 100644
--- a/docs-2.0-en/5.configurations-and-logs/1.configurations/1.configurations.md
+++ b/docs-2.0-en/5.configurations-and-logs/1.configurations/1.configurations.md
@@ -59,6 +59,12 @@ curl 127.0.0.1:19669/flags
 curl 127.0.0.1:19779/flags
 ```
 
+Using the `-s` or `--silent` option hides the progress bar and error messages. For example:
+
+```bash
+curl -s 127.0.0.1:19559/flags
+```
+
 !!! Note
 
     In an actual environment, use the real host IP address instead of `127.0.0.1` in the above example.
 
diff --git a/docs-2.0-en/5.configurations-and-logs/1.configurations/3.graph-config.md b/docs-2.0-en/5.configurations-and-logs/1.configurations/3.graph-config.md
index 5dea83bba35..62f83af97ae 100644
--- a/docs-2.0-en/5.configurations-and-logs/1.configurations/3.graph-config.md
+++ b/docs-2.0-en/5.configurations-and-logs/1.configurations/3.graph-config.md
@@ -54,7 +54,7 @@ For all parameters and their current values, see [Configurations](1.configuratio
 | Name | Predefined value | Description |Whether supports runtime dynamic modifications|
 | ----------------------------- | ------------------------ | ------------------------------------------ |------------------|
-|`accept_partial_success` |`false` | When set to `false`, the process treats partial success as an error. This configuration only applies to read-only requests. Write requests always treat partial success as an error. | Yes|
+|`accept_partial_success` |`false` | When set to `false`, the process treats partial success as an error. This configuration only applies to read-only requests. Write requests always treat partial success as an error. A partial success query will prompt `Got partial result`.| Yes|
 |`session_reclaim_interval_secs`|`60` | Specifies the interval that the Session information is sent to the Meta service. This configuration is measured in seconds. | Yes|
 |`max_allowed_query_size` |`4194304` | Specifies the maximum length of queries. Unit: bytes. The default value is `4194304`, namely 4MB.| Yes|
@@ -78,9 +78,9 @@ For all parameters and their current values, see [Configurations](1.configuratio
 | `ws_http_port` | `19669` | Specifies the port for the HTTP service. | No|
 |`heartbeat_interval_secs` | `10` | Specifies the default heartbeat interval. Make sure the `heartbeat_interval_secs` values for all services are the same, otherwise NebulaGraph **CANNOT** work normally. This configuration is measured in seconds.
| Yes| |`storage_client_timeout_ms` |-| Specifies the RPC connection timeout threshold between the Graph Service and the Storage Service. This parameter is not predefined in the initial configuration files. You can manually set it if you need it. The system default value is `60000` ms. | No| -|`enable_record_slow_query`|`true`|Whether to record slow queries.
Only available in NebulaGraph Enterprise Edition.| No| +|`slow_query_threshold_us`|`200000`|When the execution time of a query exceeds the value, the query is called a slow query. Unit: Microsecond.
**Note**: Even if the execution time of DML statements exceeds this value, they will not be recorded as slow queries.| No| +|`enable_record_slow_query`|`true`|Whether to record slow queries.
Only available in NebulaGraph Enterprise Edition.
When set to `true`, if a query's execution time exceeds the duration defined by `slow_query_threshold_us`, NebulaGraph logs that query to a log file.
Additionally, the graphd process caches the most recent slow queries in memory, and the number of queries cached is determined by the `slow_query_limit` setting.
Cached slow query records can be retrieved through an HTTP interface.| No| |`slow_query_limit`|`100`|The maximum number of slow queries that can be recorded.
Only available in NebulaGraph Enterprise Edition.| No| -|`slow_query_threshold_us`|`200000`|When the execution time of a query exceeds the value, the query is called a slow query. Unit: Microsecond.| No| |`ws_meta_http_port` |`19559`| Specifies the Meta service listening port used by the HTTP protocol. It must be consistent with the `ws_http_port` in the Meta service configuration file.| No| !!! caution @@ -166,7 +166,7 @@ For more information about audit log, see [Audit log](../2.log-management/audit- | Name | Predefined value | Description |Whether supports runtime dynamic modifications| | :------------------- | :------------------------ | :------------------------------------------ |:------------------| -|`memory_tracker_limit_ratio` |`0.8` | The percentage of free memory. When the free memory is lower than this value, NebulaGraph stops accepting queries.
Calculated as follows:
`Free memory / (Total memory - Reserved memory)`
**Note**: For clusters with a mixed-used environment, the value of `memory_tracker_limit_ratio` should be set to a **lower** value. For example, when Graphd is expected to occupy only 50% of memory, the value can be set to less than `0.5`.| Yes|
+|`memory_tracker_limit_ratio` |`0.8` | The value of this parameter can be set to `(0, 1]`, `2`, or `3`.
**Caution: When setting this parameter, ensure that the value of `system_memory_high_watermark_ratio` is not set to `1`, otherwise the value of this parameter will not take effect.**
`(0, 1]`: The percentage of free memory. When the free memory is lower than this value, NebulaGraph stops accepting queries.
Calculated as follows:
`Free memory / (Total memory - Reserved memory)`
**Note**: For clusters with a mixed-use environment, the value of `memory_tracker_limit_ratio` should be set to a **lower** value. For example, when Graphd is expected to occupy only 50% of memory, the value can be set to less than `0.5`.
`2`: Dynamic Self Adaptive mode. MemoryTracker dynamically adjusts the memory limit based on the system's currently available memory.
**Note**: This feature is experimental. As memory usage cannot be monitored in real time in dynamic adaptive mode, an OOM error may still occur when handling large memory allocations.
`3`: Disable MemoryTracker. MemoryTracker only logs memory usage and does not interfere with executions even if the limit is exceeded.| Yes| |`memory_tracker_untracked_reserved_memory_mb` |`50`| The reserved memory that is not tracked by the memory tracker. Unit: MB.| Yes| |`memory_tracker_detail_log` |`false` | Whether to enable the memory tracker log. When the value is `true`, the memory tracker log is generated.| Yes| |`memory_tracker_detail_log_interval_ms` |`60000`| The time interval for generating the memory tracker log. Unit: Millisecond. `memory_tracker_detail_log` is `true` when this parameter takes effect.| Yes| @@ -197,7 +197,7 @@ For more information about audit log, see [Audit log](../2.log-management/audit- | Name | Default Value | Description | Runtime Dynamic Modification Supported | | :-------------------------------- | :------------ | :-------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------- | | `assigned_zone` | Empty | When the Zone feature is enabled, set the Zone where the graphd to be located. See [Managing Zones](../../4.deployment-and-installation/5.zone.md) for details. | No | -| `prioritize_intra_zone_reading` | `false` | When set to `true`, prioritize to send queries to the Storage services in the same Zone. If reading fails, it depends on the value of `stick_to_intra_zone_on_failure` to determine whether to send requests to the leader partition replicas.
When set to `false`, data is read from the leader partition replicas. | No | -| `stick_to_intra_zone_on_failure` | `false` | When set to `true`, stick to intra-zone routing if unable to find the storaged hosting the requested partition replica in the same Zone.
When set to `false`, sending requests to leader partition replicas. | No |
+| `prioritize_intra_zone_reading` | `false` | When set to `true`, queries are preferentially sent to the Storage services in the same Zone. If reading fails, the value of `stick_to_intra_zone_on_failure` determines whether to send requests to the leader partition replicas.
When set to `false`, data is read from the leader partition replicas. | Yes |
+| `stick_to_intra_zone_on_failure` | `false` | When set to `true`, requests stick to intra-zone routing if the storaged hosting the requested partition replica cannot be found in the same Zone.
When set to `false`, requests are sent to the leader partition replicas. | Yes |
 
 {{ ent.ent_end }}
 
diff --git a/docs-2.0-en/5.configurations-and-logs/1.configurations/4.storage-config.md b/docs-2.0-en/5.configurations-and-logs/1.configurations/4.storage-config.md
index 4b2bfa6259e..0d29418cfa9 100644
--- a/docs-2.0-en/5.configurations-and-logs/1.configurations/4.storage-config.md
+++ b/docs-2.0-en/5.configurations-and-logs/1.configurations/4.storage-config.md
@@ -115,10 +115,10 @@ For all parameters and their current values, see [Configurations](1.configuratio
 | :-- | :----- | :--- |:------------------|
 | `query_concurrently` |`true`| Whether to turn on multi-threaded queries. Enabling it can improve the latency performance of individual queries, but it will reduce the overall throughput under high pressure. | Yes|
 | `auto_remove_invalid_space` | `true` |After executing `DROP SPACE`, the specified graph space will be deleted. This parameter sets whether to delete all the data in the specified graph space at the same time. When the value is `true`, all the data in the specified graph space will be deleted at the same time.| Yes|
-| `num_io_threads` | `16` | The number of network I/O threads used to send RPC requests and receive responses. | Yes|
+| `num_io_threads` | `16` | The number of network I/O threads used to send RPC requests and receive responses. | No|
 |`num_max_connections` |`0` |Max active connections for all networking threads. 0 means no limit.
Max connections for each networking thread = num_max_connections / num_netio_threads|No| | `num_worker_threads` | `32` | The number of worker threads for one RPC-based Storage service. | No| -| `max_concurrent_subtasks` | `10` | The maximum number of concurrent subtasks to be executed by the task manager. | Yes| +| `max_concurrent_subtasks` | `10` | The maximum number of concurrent subtasks to be executed by the task manager. | No| | `snapshot_part_rate_limit` | `10485760` | The rate limit when the Raft leader synchronizes the stock data with other members of the Raft group. Unit: bytes/s. | Yes| | `snapshot_batch_size` | `1048576` | The amount of data sent in each batch when the Raft leader synchronizes the stock data with other members of the Raft group. Unit: bytes. | Yes| | `rebuild_index_part_rate_limit` | `4194304` | The rate limit when the Raft leader synchronizes the index data rate with other members of the Raft group during the index rebuilding process. Unit: bytes/s. | Yes| @@ -128,9 +128,9 @@ For all parameters and their current values, see [Configurations](1.configuratio | Name | Predefined value | Description |Whether supports runtime dynamic modifications| | :----------- | :------------------------ | :------------------------ |:------------------| -| `rocksdb_db_options` | `{}` | Specifies the RocksDB database options. | Yes| -| `rocksdb_column_family_options` | `{"write_buffer_size":"67108864",`
`"max_write_buffer_number":"4",`
`"max_bytes_for_level_base":"268435456"}` | Specifies the RocksDB column family options. | Yes| -| `rocksdb_block_based_table_options` | `{"block_size":"8192"}` | Specifies the RocksDB block based table options. | Yes| +| `rocksdb_db_options` | `{}` | Specifies the RocksDB database options. | No| +| `rocksdb_column_family_options` | `{"write_buffer_size":"67108864",`
`"max_write_buffer_number":"4",`
`"max_bytes_for_level_base":"268435456"}` | Specifies the RocksDB column family options. | No| +| `rocksdb_block_based_table_options` | `{"block_size":"8192"}` | Specifies the RocksDB block based table options. | No| The format of the RocksDB option is `{"":""}`. Multiple options are separated with commas. diff --git a/docs-2.0-en/5.configurations-and-logs/2.log-management/audit-log.md b/docs-2.0-en/5.configurations-and-logs/2.log-management/audit-log.md index f2cd2371ef3..06b10bf5982 100644 --- a/docs-2.0-en/5.configurations-and-logs/2.log-management/audit-log.md +++ b/docs-2.0-en/5.configurations-and-logs/2.log-management/audit-log.md @@ -100,3 +100,115 @@ The fields of audit logs are the same for different handlers and formats. For ex |`QUERY`| The query statement.| |`QUERY_STATUS`| The status of the query. `0` indicates success, and other numbers indicate different error messages.| |`QUERY_MESSAGE`| An error message is displayed when the query fails.| + + +## Rotate audit logs using logrotate + +You can use the [logrotate](https://github.com/logrotate/logrotate) tool available in Linux systems to rotate audit logs, ensuring regular archiving and removal of old audit logs to prevent excessively large log files. + +Here are the steps to regularly clean NebulaGraph audit logs using `logrotate`: + +!!! note + + You need to use a root user or a user with sudo privileges to install or run logrotate. + +1. Install logrotate. + + - Debian/Ubuntu: + + ```bash + sudo apt-get install logrotate + ``` + + - CentOS/RHEL: + + ```bash + sudo yum install logrotate + ``` + +2. Create a logrotate configuration file. + + In the `/etc/logrotate.d` directory, create a new logrotate configuration file for audit logs. For example, create a file named `audit`. + + ```bash + # Create the audit file + sudo vim /etc/logrotate.d/audit + ``` + + And add the following content to the file: + + ```bash + # Add configurations to the audit file to set log rotation rules + /usr/local/nebula/logs/audit/audit.log { + daily + rotate 5 + copytruncate + nocompress + missingok + notifempty + create 644 root root + dateext + dateformat .%Y-%m-%d-%s + maxsize 1k + } + ``` + + In this example, `/usr/local/nebula/logs/audit/audit.log` is the path to the default audit log file (`audit.log`) for NebulaGraph. If your log path is different, modify the path in the configuration file accordingly. Here's an explanation of the parameters in the sample configuration file: + + | Parameter | Description | + | --------------- | ------------------------------------------------------------ | + | `daily` | Rotate the log daily. Other available time units include `hourly`, `daily`, `weekly`, `monthly`, and `yearly`. | + | `rotate 5` | Keep the most recent 5 log files before deleting the older one. | + | `copytruncate` | Copy the current log file and then truncate it, ensuring no disruption to the logging process. | + | `nocompress` | Do not compress the old log files. | + | `missingok` | Do not report errors if the log file is missing. | + | `notifempty` | Do not rotate the log file if it's empty. | + | `create 644 root root` | Create a new log file with the specified permissions and ownership. | + | `dateext` | Add a date extension to the log file name.
The default is the current date in the format `-%Y%m%d`.
You can extend this using the `dateformat` option. | + | `dateformat .%Y-%m-%d-%s` | This must follow immediately after `dateext` and defines the file name after log rotation.
Before V3.9.0, only `%Y`, `%m`, `%d`, and `%s` parameters were supported.
Starting from V3.9.0, the `%H` parameter is also supported.| + | `maxsize 1k` | Rotate the log when it exceeds 1 kilobyte (`1024` bytes) in size or when the specified time unit (e.g., `daily`) has passed.
You can use size units like `k` and `M`, with the default unit being bytes. | + + Users can modify the parameters in the configuration file to suit their specific requirements. For details on more parameters and their meanings, refer to the [logrotate documentation](https://man7.org/linux/man-pages/man8/logrotate.8.html). + +3. Test the logrotate configuration. + + To verify that the logrotate configuration is correct, you can use the following command for testing: + + ```bash + sudo logrotate --debug /etc/logrotate.d/audit + ``` + +4. Run logrotate. + + Although `logrotate` is typically executed automatically by cron jobs, you can manually run the following command to immediately rotate the logs: + + ```bash + sudo logrotate -fv /etc/logrotate.d/audit + ``` + + `-fv`: `f` stands for force execution, and `v` stands for verbose mode. + +5. Check the log rotation results. + + After log rotation, you will see new log files in the `/usr/local/nebula/logs/audit` directory, such as `audit.log.2022-04-07-1649298693`. The original log content will be cleared, but the file will be retained for new log entries. When the number of new log files exceeds the `rotate` value, the oldest log file will be deleted. + + For example, if you have `rotate 5`, it means that the five most recent log files will be retained, and when the number of new log files exceeds 5, the oldest log file will be deleted. + + Here's an example directory listing after log rotation: + + ```bash + [test@test audit]$ ll + -rw-r--r-- 1 root root 0 10OCT 12 11:15 audit.log + -rw-r--r-- 1 root root 1436 10OCT 11 19:38 audit.log-202310111697024305 # The oldest log file among the retained ones. When the number of log files exceeds the configured value of 5, this file will be deleted. + -rw-r--r-- 1 root root 286 10OCT 12 11:05 audit.log-202310121697079901 + -rw-r--r-- 1 root root 571 10OCT 12 11:05 audit.log-202310121697079940 + -rw-r--r-- 1 root root 571 10OCT 12 11:14 audit.log-202310121697080478 + -rw-r--r-- 1 root root 571 10OCT 12 11:15 audit.log-202310121697080536 + [test@test audit]$ ll + -rw-r--r-- 1 root root 571 10OCT 12 11:18 audit.log + -rw-r--r-- 1 root root 286 10OCT 12 11:05 audit.log-202310121697079901 + -rw-r--r-- 1 root root 571 10OCT 12 11:05 audit.log-202310121697079940 + -rw-r--r-- 1 root root 571 10OCT 12 11:14 audit.log-202310121697080478 + -rw-r--r-- 1 root root 571 10OCT 12 11:15 audit.log-202310121697080536 + -rw-r--r-- 1 root root 571 10OCT 12 11:17 audit.log-202310121697080677 # Newly generated log file. + ``` diff --git a/docs-2.0-en/7.data-security/1.authentication/2.management-user.md b/docs-2.0-en/7.data-security/1.authentication/2.management-user.md index 3aca0e95f61..7bb9ef96e8b 100644 --- a/docs-2.0-en/7.data-security/1.authentication/2.management-user.md +++ b/docs-2.0-en/7.data-security/1.authentication/2.management-user.md @@ -30,7 +30,7 @@ The `root` user with the **GOD** role can run `CREATE USER` to create a new user ```ngql CREATE USER [IF NOT EXISTS] [WITH PASSWORD ''][WITH IP WHITELIST ]; ``` - - `ip_list`: Sets the IP address whitelist. The user can connect to NebulaGraph only from IP addresses in the list. Use commas to separate multiple IP addresses. + - `ip_list`: Sets the IP address whitelist. Any IP can connect to the database without this option. When this option is used, only IPs in the list can connect to the database. Use commas to separate multiple IP addresses. 
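+
+For example, the following statement creates a user that can connect only from two whitelisted IP addresses. A minimal sketch; the user name, password, and IP addresses are illustrative:
+
+```ngql
+nebula> CREATE USER user1 WITH PASSWORD 'nebula' WITH IP WHITELIST 192.168.10.10,192.168.10.11;
+```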
{{ ent.ent_end }} @@ -172,17 +172,32 @@ The `root` user with the **GOD** role can run `ALTER USER` to set a new password - Syntax with enterprise edition ```ngql - ALTER USER WITH PASSWORD '' [WITH IP WHITELIST ]; + ALTER USER [WITH PASSWORD ''] [WITH IP WHITELIST { | % }]; ``` -- Example with enterprise edition - - !!! enterpriseonly + - `ip_list`: Sets the IP address whitelist. Any IP can connect to the database without this option. When this option is used, only IPs in the list can connect to the database. Use commas to separate multiple IP addresses. + - `%`: Cancel the whitelist. Users can connect to the database using any IP. - When `WITH IP WHITELIST` is not used, the IP address whitelist is removed and the user can connect to the NebulaGraph by any IP address. +- Example with enterprise edition ```ngql + nebula> ALTER USER user2 WITH PASSWORD 'change_password'; nebula> ALTER USER user2 WITH PASSWORD 'nebula' WITH IP WHITELIST 192.168.10.10; + nebula> SHOW USERS; + +---------+-----------------+ + | Account | IP Whitelist | + +---------+-----------------+ + | "root" | "" | + | "user2" | "192.168.10.10" | + +---------+-----------------+ + nebula> ALTER USER user2 WITH IP WHITELIST %; + nebula> SHOW USERS; + +---------+--------------+ + | Account | IP Whitelist | + +---------+--------------+ + | "root" | "" | + | "user2" | "" | + +---------+--------------+ ``` {{ ent.ent_end }} diff --git a/docs-2.0-en/backup-and-restore/.DS_Store b/docs-2.0-en/backup-and-restore/.DS_Store deleted file mode 100644 index 5008ddfcf53..00000000000 Binary files a/docs-2.0-en/backup-and-restore/.DS_Store and /dev/null differ diff --git a/docs-2.0-en/backup-and-restore/nebula-br/1.what-is-br.md b/docs-2.0-en/backup-and-restore/nebula-br/1.what-is-br.md new file mode 100644 index 00000000000..270b75cbd6f --- /dev/null +++ b/docs-2.0-en/backup-and-restore/nebula-br/1.what-is-br.md @@ -0,0 +1,148 @@ +# What is Backup & Restore + +Backup & Restore (BR for short) is a Command-Line Interface (CLI) tool to back up data of graph spaces of NebulaGraph and to restore data from the backup files. + +## Features + +The BR has the following features. It supports: + +- Backing up and restoring data in a one-click operation. +- Restoring data in the following backup file types: + - Local Disk (SSD or HDD). It is recommend to use local disk in test environment only. + - Amazon S3 compatible interface, such as Alibaba Cloud OSS, MinIO,Ceph RGW, etc. +- Backing up and restoring the entire NebulaGraph cluster. +- Backing up data of specified graph spaces (experimental). + +## Limitations + +- Supports NebulaGraph v3.x only. +- Supports full backup, but not incremental backup. +- Currently, NebulaGraph Listener and full-text indexes do not support backup. +- If you back up data to the local disk, the backup files will be saved in the local path of each server. You can also mount the NFS on your host to restore the backup data to a different host. +- Restoration requires that the number of the storage servers in the original cluster is the same as that of the storage servers in the target cluster and storage server IPs must be the same. Restoring the specified space will clear all the remaining spaces in the cluster. +- During the backup process, both DDL and DML statements in any specified graph spaces are blocked. We recommend that you do the operation within the low peak period of the business, for example, from 2:00 AM to 5:00 AM. +- During the restoration process, there is a time when NebulaGraph stops running. 
+- Using BR in a container-based NebulaGraph cluster is not supported. + + + + + +## How to use BR + +To use the BR, follow these steps: + +1. [Install BR](2.compile-br.md). +2. [Use BR to back up data](3.br-backup-data.md). +3. [Use BR to restore data from backup files](4.br-restore-data.md). diff --git a/docs-2.0-en/backup-and-restore/nebula-br/2.compile-br.md b/docs-2.0-en/backup-and-restore/nebula-br/2.compile-br.md new file mode 100644 index 00000000000..12a2a608a31 --- /dev/null +++ b/docs-2.0-en/backup-and-restore/nebula-br/2.compile-br.md @@ -0,0 +1,142 @@ +# Install BR + +This topic introduces the installation of BR in bare-metal deployment scenarios. + +## Notes + +To use the BR (Community Edition) tool, you need to install the NebulaGraph Agent service, which is taken as a daemon for each machine in the cluster that starts and stops the NebulaGraph service, and uploads and downloads backup files. The BR (Community Edition) tool and the Agent plug-in are installed as described below. + + +## Version compatibility + +|NebulaGraph|BR |Agent | +|:---|:---|:---| +|3.5.x|3.5.0|0.2.0 ~ 3.4.0| +|3.3.0 ~ 3.4.1|3.3.0|0.2.0 ~ 3.4.0| +|3.0.x ~ 3.2.x|0.6.1|0.1.0 ~ 0.2.0| + +## Install BR with a binary file + +1. Install BR. + + ``` + wget https://github.com/vesoft-inc/nebula-br/releases/download/{{br.tag}}/br-{{br.release}}-linux-amd64 + ``` + +2. Change the binary file name to `br`. + + ``` + sudo mv br-{{br.release}}-linux-amd64 br + ``` + +3. Grand execute permission to BR. + + ``` + sudo chmod +x br + ``` + +4. Run `./br version` to check BR version. + + ``` + [nebula-br]$ ./br version + Nebula Backup And Restore Utility Tool,V-{{br.release}} + ``` + + +## Install BR with the source code + +Before compiling the BR, do a check of these: + +- [Go](https://github.com/golang/go "Click to go to GitHub") 1.14.x or a later version is installed. +- make is installed. + + +To compile the BR, follow these steps: + +1. Clone the `nebula-br` repository to your machine. + + ```bash + git clone https://github.com/vesoft-inc/nebula-br.git + ``` + +2. Change to the `br` directory. + + ```bash + cd nebula-br + ``` + +3. Compile the BR. + + ```bash + make + ``` + +Users can enter `bin/br version` on the command line. If the following results are returned, the BR is compiled successfully. + +```bash +[nebula-br]$ bin/br version +NebulaGraph Backup And Restore Utility Tool,V-{{br.release}} +``` + +## Install Agent + +NebulaGraph Agent is installed as a binary file in each machine and serves the BR tool with the RPC protocol. + +In **each machine**, follow these steps: + +1. Install Agent. + + ``` + wget https://github.com/vesoft-inc/nebula-agent/releases/download/v{{agent.release}}/agent-{{agent.release}}-linux-amd64 + ``` + +2. Rename the Agent file to `agent`. + + ``` + sudo mv agent-{{agent.release}}-linux-amd64 agent + ``` + +3. Add execute permission to Agent. + + ``` + sudo chmod +x agent + ``` + +4. Start Agent. + + !!! note + + Before starting Agent, make sure that the Meta service has been started and Agent has read and write access to the corresponding NebulaGraph cluster directory and backup directory. + + ``` + sudo nohup ./agent --agent=":8888" --meta=":9559" --ratelimit= > nebula_agent.log 2>&1 & + ``` + + - `--agent`: The IP address and port number of Agent. + - `--meta`: The IP address and access port of any Meta service in the cluster. + - `--ratelimit`: (Optional) Limits the speed of file uploads and downloads to prevent bandwidth from being filled up and making other services unavailable. 
Unit: Bytes.
+
+    For example:
+
+    ```
+    sudo nohup ./agent --agent="192.168.8.129:8888" --meta="192.168.8.129:9559" --ratelimit=1048576 > nebula_agent.log 2>&1 &
+    ```
+    !!! caution
+
+        The IP address format for `--agent` should be the same as that of Meta and Storage services set in the [configuration files](../../5.configurations-and-logs/1.configurations/1.configurations.md). That is, use the real IP addresses or use `127.0.0.1`. Otherwise Agent does not run.
+
+5. Log into NebulaGraph and then run the following command to view the status of Agent.
+
+    ```
+    nebula> SHOW HOSTS AGENT;
+    +-----------------+------+----------+---------+--------------+---------+
+    | Host            | Port | Status   | Role    | Git Info Sha | Version |
+    +-----------------+------+----------+---------+--------------+---------+
+    | "192.168.8.129" | 8888 | "ONLINE" | "AGENT" | "96646b8"    |         |
+    +-----------------+------+----------+---------+--------------+---------+
+    ```
+
+## FAQ
+
+### The error `E_LIST_CLUSTER_NO_AGENT_FAILURE`
+
+If you encounter the `E_LIST_CLUSTER_NO_AGENT_FAILURE` error, the Agent service may not be started, or it may not be registered to the Meta service. First, execute `SHOW HOSTS AGENT` to check the status of the Agent service on all nodes in the cluster. If the status shows `OFFLINE`, the registration of Agent failed. In that case, check whether the value of the `--meta` option in the command that starts the Agent service is correct.
diff --git a/docs-2.0-en/backup-and-restore/nebula-br/3.br-backup-data.md b/docs-2.0-en/backup-and-restore/nebula-br/3.br-backup-data.md
new file mode 100644
index 00000000000..bbbdbe39ce1
--- /dev/null
+++ b/docs-2.0-en/backup-and-restore/nebula-br/3.br-backup-data.md
@@ -0,0 +1,70 @@
+# Use BR to back up data
+
+After the BR is installed, you can back up data of the entire graph space. This topic introduces how to use the BR to back up data.
+
+## Prerequisites
+
+To back up data with the BR, do a check of these:
+
+- [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster.
+
+- The NebulaGraph services are running.
+
+- If you store the backup files locally, create a directory with the same absolute path on the meta servers, the storage servers, and the BR machine for the backup files and get the absolute path. Make sure the account has write privileges for this directory.
+
+    !!! note
+
+        In the production environment, we recommend that you mount Network File System (NFS) storage to the meta servers, the storage servers, and the BR machine for local backup, or use Amazon S3 or Alibaba Cloud OSS for remote backup. When you restore the data from local files, you must manually move these backup files to a specified directory, which causes redundant data and troubles. For more information, see [Restore data from backup files](4.br-restore-data.md).
+
+## Procedure
+
+In the BR installation directory (the default path of the compiled BR is `./bin/br`), run the following command to perform a full backup for the entire cluster.
+
+!!! Note
+
+    Make sure that the local path where the backup file is stored exists.
+
+```bash
+$ ./br backup full --meta --storage
+```
+
+For example:
+
+- Run the following command to perform a full backup for the entire cluster whose meta service address is `192.168.8.129:9559`, and save the backup file to `/home/nebula/backup/`.
+
+    !!! caution
+
+        If there are multiple metad addresses, you can use any one of them.
+
+    !!!
diff --git a/docs-2.0-en/backup-and-restore/nebula-br/3.br-backup-data.md b/docs-2.0-en/backup-and-restore/nebula-br/3.br-backup-data.md
new file mode 100644
index 00000000000..bbbdbe39ce1
--- /dev/null
+++ b/docs-2.0-en/backup-and-restore/nebula-br/3.br-backup-data.md
@@ -0,0 +1,70 @@
+# Use BR to back up data
+
+After the BR is installed, you can back up the data of the entire graph space. This topic introduces how to use the BR to back up data.
+
+## Prerequisites
+
+To back up data with the BR, check the following:
+
+- [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster.
+
+- The NebulaGraph services are running.
+
+- If you store the backup files locally, create a directory with the same absolute path on the meta servers, the storage servers, and the BR machine for the backup files, and note the absolute path. Make sure the account has write privileges for this directory.
+
+    !!! note
+
+        In the production environment, we recommend that you mount Network File System (NFS) storage to the meta servers, the storage servers, and the BR machine for local backup, or use Amazon S3 or Alibaba Cloud OSS for remote backup. When you restore the data from local files, you must manually move these backup files to a specified directory, which causes redundant data and extra work. For more information, see [Restore data from backup files](4.br-restore-data.md).
+
+## Procedure
+
+In the BR installation directory (the default path of the compiled BR is `./bin/br`), run the following command to perform a full backup for the entire cluster.
+
+!!! Note
+
+    Make sure that the local path where the backup file is stored exists.
+
+```bash
+$ ./br backup full --meta <ip_address> --storage <storage_path>
+```
+
+For example:
+
+- Run the following command to perform a full backup for the entire cluster whose meta service address is `192.168.8.129:9559`, and save the backup file to `/home/nebula/backup/`.
+
+    !!! caution
+
+        If there are multiple metad addresses, you can use any one of them.
+
+    !!! caution
+
+        If you back up data to a local disk, only the data of the leader metad is backed up by default. So if there are multiple metad processes, you need to manually copy the directory of the leader metad (path `<storage_path>/meta`) and overwrite the corresponding directory of other follower metad processes.
+
+    ```bash
+    $ ./br backup full --meta "192.168.8.129:9559" --storage "local:///home/nebula/backup/"
+    ```
+
+- Run the following command to perform a full backup for the entire cluster whose meta service address is `192.168.8.129:9559`, and save the backup file to `backup` in the `br-test` bucket of the object storage service compatible with the S3 protocol.
+
+    ```bash
+    $ ./br backup full --meta "192.168.8.129:9559" --s3.endpoint "http://192.168.8.129:9000" --storage="s3://br-test/backup/" --s3.access_key=minioadmin --s3.secret_key=minioadmin --s3.region=default
+    ```
+
+The parameters are as follows.
+
+| Parameter | Data type | Required | Default value | Description |
+| --- | --- | --- | --- | --- |
+| `-h,-help` | - | No | None | Checks help for backup. |
+| `--debug` | - | No | None | Checks for more log information. |
+| `--log` | string | No | `"br.log"` | Specifies the detailed log path for restoration and backup. |
+| `--meta` | string | Yes | None | The IP address and port of the meta service. |
+| `--spaces` | string | No | None | (Experimental feature) Specifies the names of the spaces to be backed up. All spaces will be backed up if not specified. Multiple spaces can be specified, and the format is `--spaces nba_01 --spaces nba_02`.|
+| `--storage` | string | Yes | None | The target storage URL of BR backup data. The format is: \<schema\>://\<path\>.<br>
Schema: Optional values are `local` and `s3`.
When selecting s3, you need to fill in `s3.access_key`, `s3.endpoint`, `s3.region`, and `s3.secret_key`.
PATH: The path of the storage location. |
+| `--s3.access_key` | string | No | None | Sets AccessKey ID. |
+| `--s3.endpoint` | string | No | None | Sets the S3 endpoint URL, please specify the HTTP or HTTPS scheme explicitly. |
+| `--s3.region` | string | No | None | Sets the region or location to upload or download the backup. |
+| `--s3.secret_key` | string | No | None | Sets SecretKey for AccessKey ID. |
+
+## Next to do
+
+After the backup files are generated, you can use the BR to restore the data to NebulaGraph. For more information, see [Use BR to restore data](4.br-restore-data.md).
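+
+In practice, full backups are often run on a schedule rather than by hand. The following is a minimal sketch using cron; the installation directory `/usr/local/nebula-br` and the log path are assumptions, while the meta address and backup path are reused from the examples above:
+
+```bash
+# Crontab entry: run a full backup every day at 02:00.
+# Assumes BR is installed in /usr/local/nebula-br, Agent is running on every
+# host, and the NFS-backed directory /home/nebula/backup/ exists everywhere.
+0 2 * * * cd /usr/local/nebula-br && ./br backup full --meta "192.168.8.129:9559" --storage "local:///home/nebula/backup/" >> /var/log/br-backup.log 2>&1
+```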
diff --git a/docs-2.0-en/backup-and-restore/nebula-br/4.br-restore-data.md b/docs-2.0-en/backup-and-restore/nebula-br/4.br-restore-data.md
new file mode 100644
index 00000000000..20824f60f07
--- /dev/null
+++ b/docs-2.0-en/backup-and-restore/nebula-br/4.br-restore-data.md
@@ -0,0 +1,118 @@
+# Use BR to restore data
+
+If you use the BR to back up data, you can use it to restore the data to NebulaGraph. This topic introduces how to use the BR to restore data from backup files.
+
+!!! caution
+
+    During the restoration process, the data on the target NebulaGraph cluster is removed and then replaced with the data from the backup files. If necessary, back up the data on the target cluster.
+
+!!! caution
+
+    The restoration process is performed OFFLINE.
+
+## Prerequisites
+
+- [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster.
+
+- No application is connected to the target NebulaGraph cluster.
+
+- Make sure that the target and the source NebulaGraph clusters have the same topology, which means that they have exactly the same number of hosts and that the data folders on each host are distributed in the same way.
+
+## Procedures
+
+In the BR installation directory (the default path of the compiled BR is `./bin/br`), follow the steps below to restore data from the backup files.
+
+1. Run the following command to list the existing backup information:
+
+    ```bash
+    $ ./br show --storage <storage_path>
+    ```
+
+    For example, run the following command to list the backup information in the local `/home/nebula/backup` path.
+
+    ```bash
+    $ ./br show --storage "local:///home/nebula/backup"
+    +----------------------------+---------------------+------------------------+-------------+------------+
+    | NAME                       | CREATE TIME         | SPACES                 | FULL BACKUP | ALL SPACES |
+    +----------------------------+---------------------+------------------------+-------------+------------+
+    | BACKUP_2022_02_10_07_40_41 | 2022-02-10 07:40:41 | basketballplayer       | true        | true       |
+    | BACKUP_2022_02_11_08_26_43 | 2022-02-11 08:26:47 | basketballplayer,foesa | true        | true       |
+    +----------------------------+---------------------+------------------------+-------------+------------+
+    ```
+
+    Or, you can run the following command to list the backup information stored in the S3 URL `s3://192.168.8.129:9000/br-test/backup`.
+
+    ```bash
+    $ ./br show --s3.endpoint "http://192.168.8.129:9000" --storage="s3://br-test/backup/" --s3.access_key=minioadmin --s3.secret_key=minioadmin --s3.region=default
+    ```
+
+    | Parameter | Data type | Required | Default value | Description |
+    | --- | --- | --- | --- | --- |
+    | `-h,-help` | - | No | None | Checks help for restoration. |
+    | `--debug` | - | No | None | Checks for more log information. |
+    | `--log` | string | No | `"br.log"` | Specifies the detailed log path for restoration and backup. |
+    | `--storage` | string | Yes | None | The target storage URL of BR backup data. The format is: \<schema\>://\<path\>.<br>
Schema: Optional values are `local` and `s3`.<br>
When selecting s3, you need to fill in `s3.access_key`, `s3.endpoint`, `s3.region`, and `s3.secret_key`.
PATH: The path of the storage location. |
+    | `--s3.access_key` | string | No | None | Sets AccessKey ID. |
+    | `--s3.endpoint` | string | No | None | Sets the S3 endpoint URL, please specify the HTTP or HTTPS scheme explicitly. |
+    | `--s3.region` | string | No | None | Sets the region or location to upload or download the backup. |
+    | `--s3.secret_key` | string | No | None | Sets SecretKey for AccessKey ID. |
+
+2. Run the following command to restore data.
+
+    ```
+    $ ./br restore full --meta <ip_address> --storage <storage_path> --name <backup_name>
+    ```
+
+    For example, run the following command to upload the backup files from the local `/home/nebula/backup/` to the cluster where the meta service's address is `192.168.8.129:9559`.
+
+    ```
+    $ ./br restore full --meta "192.168.8.129:9559" --storage "local:///home/nebula/backup/" --name BACKUP_2021_12_08_18_38_08
+    ```
+
+    Or, you can run the following command to upload the backup files from the S3 URL `s3://192.168.8.129:9000/br-test/backup`.
+
+    ```bash
+    $ ./br restore full --meta "192.168.8.129:9559" --s3.endpoint "http://192.168.8.129:9000" --storage="s3://br-test/backup/" --s3.access_key=minioadmin --s3.secret_key=minioadmin --s3.region="default" --name BACKUP_2021_12_08_18_38_08
+    ```
+
+    If the following information is returned, the data is restored successfully.
+
+    ```bash
+    Restore succeed.
+    ```
+
+    !!! caution
+
+        If the hosts' IPs of your new cluster are not all the same as those of the backup cluster, after the restoration, you should run `ADD HOSTS` to add the Storage host IPs in the new cluster one by one.
+
+    The parameters are as follows.
+
+    | Parameter | Data type | Required | Default value | Description |
+    | --- | --- | --- | --- | --- |
+    | `-h,-help` | - | No | None | Checks help for restoration. |
+    | `--debug` | - | No | None | Checks for more log information. |
+    | `--log` | string | No | `"br.log"` | Specifies the detailed log path for restoration and backup. |
+    | `--meta` | string | Yes | None | The IP address and port of the meta service. |
+    | `--name` | string | Yes | None | The name of the backup. |
+    | `--storage` | string | Yes | None | The target storage URL of BR backup data. The format is: \<schema\>://\<path\>.<br>
Schema: Optional values are `local` and `s3`.
When selecting s3, you need to fill in `s3.access_key`, `s3.endpoint`, `s3.region`, and `s3.secret_key`.
PATH: The path of the storage location. |
+    | `--s3.access_key` | string | No | None | Sets AccessKey ID. |
+    | `--s3.endpoint` | string | No | None | Sets the S3 endpoint URL, please specify the HTTP or HTTPS scheme explicitly. |
+    | `--s3.region` | string | No | None | Sets the region or location to upload or download the backup. |
+    | `--s3.secret_key` | string | No | None | Sets SecretKey for AccessKey ID. |
+
+3. Run the following command to clean up temporary files if any error occurred during backup. It cleans up the files in both the cluster and the external storage. You can also use it to clean up old backup files in the external storage.
+
+    ```bash
+    $ ./br cleanup --meta <ip_address> --storage <storage_path> --name <backup_name>
+    ```
+
+    The parameters are as follows.
+
+    | Parameter | Data type | Required | Default value | Description |
+    | --- | --- | --- | --- | --- |
+    | `-h,-help` | - | No | None | Checks help for restoration. |
+    | `--debug` | - | No | None | Checks for more log information. |
+    | `--log` | string | No | `"br.log"` | Specifies the detailed log path for restoration and backup. |
+    | `--meta` | string | Yes | None | The IP address and port of the meta service. |
+    | `--name` | string | Yes | None | The name of the backup. |
+    | `--storage` | string | Yes | None | The target storage URL of BR backup data. The format is: \<schema\>://\<path\>.<br>
Schema: Optional values are `local` and `s3`.
When selecting s3, you need to fill in `s3.access_key`, `s3.endpoint`, `s3.region`, and `s3.secret_key`.
PATH: The path of the storage location. | + | `--s3.access_key` | string | No | None | Sets AccessKey ID. | + | `--s3.endpoint` | string | No | None | Sets the S3 endpoint URL, please specify the HTTP or HTTPS scheme explicitly. | + | `--s3.region` | string | No | None | Sets the region or location to upload or download the backup. | + | `--s3.secret_key` | string | No | None | Sets SecretKey for AccessKey ID. | diff --git a/docs-2.0-en/basketballplayer-2.X.ngql b/docs-2.0-en/basketballplayer-2.X.ngql new file mode 100644 index 00000000000..dbb8cc817a2 --- /dev/null +++ b/docs-2.0-en/basketballplayer-2.X.ngql @@ -0,0 +1,326 @@ +drop space basketballplayer; +create space basketballplayer(partition_num=10,replica_factor=1,vid_type=fixed_string(32)); +:sleep 20 +use basketballplayer; +create tag player(name string,age int); +create tag team(name string); +create edge serve(start_year int,end_year int); +create edge follow(degree int); +:sleep 20 +create tag index player_index_0 on player(); +create tag index player_index_1 on player(name(20)); +:sleep 20 +insert vertex player(name,age) values "player100":("Tim Duncan", 42); +insert vertex player(name,age) values "player101":("Tony Parker", 36); +insert vertex player(name,age) values "player102":("LaMarcus Aldridge", 33); +insert vertex player(name,age) values "player103":("Rudy Gay", 32); +insert vertex player(name,age) values "player104":("Marco Belinelli", 32); +insert vertex player(name,age) values "player105":("Danny Green", 31); +insert vertex player(name,age) values "player106":("Kyle Anderson", 25); +insert vertex player(name,age) values "player107":("Aron Baynes", 32); +insert vertex player(name,age) values "player108":("Boris Diaw", 36); +insert vertex player(name,age) values "player109":("Tiago Splitter", 34); +insert vertex player(name,age) values "player110":("Cory Joseph", 27); +insert vertex player(name,age) values "player111":("David West", 38); +insert vertex player(name,age) values "player112":("Jonathon Simmons", 29); +insert vertex player(name,age) values "player113":("Dejounte Murray", 29); +insert vertex player(name,age) values "player114":("Tracy McGrady", 39); +insert vertex player(name,age) values "player115":("Kobe Bryant", 40); +insert vertex player(name,age) values "player116":("LeBron James", 34); +insert vertex player(name,age) values "player117":("Stephen Curry", 31); +insert vertex player(name,age) values "player118":("Russell Westbrook", 30); +insert vertex player(name,age) values "player119":("Kevin Durant", 30); +insert vertex player(name,age) values "player120":("James Harden", 29); +insert vertex player(name,age) values "player121":("Chris Paul", 33); +insert vertex player(name,age) values "player122":("DeAndre Jordan", 30); +insert vertex player(name,age) values "player123":("Ricky Rubio", 28); +insert vertex player(name,age) values "player124":("Rajon Rondo", 33); +insert vertex player(name,age) values "player125":("Manu Ginobili", 41); +insert vertex player(name,age) values "player126":("Kyrie Irving", 26); +insert vertex player(name,age) values "player127":("Vince Carter", 42); +insert vertex player(name,age) values "player128":("Carmelo Anthony", 34); +insert vertex player(name,age) values "player129":("Dwyane Wade", 37); +insert vertex player(name,age) values "player130":("Joel Embiid", 25); +insert vertex player(name,age) values "player131":("Paul George", 28); +insert vertex player(name,age) values "player132":("Giannis Antetokounmpo", 24); +insert vertex player(name,age) values "player133":("Yao 
Ming", 38); +insert vertex player(name,age) values "player134":("Blake Griffin", 30); +insert vertex player(name,age) values "player135":("Damian Lillard", 28); +insert vertex player(name,age) values "player136":("Steve Nash", 45); +insert vertex player(name,age) values "player137":("Dirk Nowitzki", 40); +insert vertex player(name,age) values "player138":("Paul Gasol", 38); +insert vertex player(name,age) values "player139":("Marc Gasol", 34); +insert vertex player(name,age) values "player140":("Grant Hill", 46); +insert vertex player(name,age) values "player141":("Ray Allen", 43); +insert vertex player(name,age) values "player142":("Klay Thompson", 29); +insert vertex player(name,age) values "player143":("Kristaps Porzingis", 23); +insert vertex player(name,age) values "player144":("Shaquille O'Neal", 47); +insert vertex player(name,age) values "player145":("JaVale McGee", 31); +insert vertex player(name,age) values "player146":("Dwight Howard", 33); +insert vertex player(name,age) values "player147":("Amar'e Stoudemire", 36); +insert vertex player(name,age) values "player148":("Jason Kidd", 45); +insert vertex player(name,age) values "player149":("Ben Simmons", 22); +insert vertex player(name,age) values "player150":("Luka Doncic", 20); +insert vertex team(name) values "team200":("Warriors"); +insert vertex team(name) values "team201":("Nuggets"); +insert vertex team(name) values "team202":("Rockets"); +insert vertex team(name) values "team203":("Trail Blazers"); +insert vertex team(name) values "team204":("Spurs"); +insert vertex team(name) values "team205":("Thunders"); +insert vertex team(name) values "team206":("Jazz"); +insert vertex team(name) values "team207":("Clippers"); +insert vertex team(name) values "team208":("Kings"); +insert vertex team(name) values "team209":("Timberwolves"); +insert vertex team(name) values "team210":("Lakers"); +insert vertex team(name) values "team211":("Pelicans"); +insert vertex team(name) values "team212":("Grizzlies"); +insert vertex team(name) values "team213":("Mavericks"); +insert vertex team(name) values "team214":("Suns"); +insert vertex team(name) values "team215":("Hornets"); +insert vertex team(name) values "team216":("Cavaliers"); +insert vertex team(name) values "team217":("Celtics"); +insert vertex team(name) values "team218":("Raptors"); +insert vertex team(name) values "team219":("76ers"); +insert vertex team(name) values "team220":("Pacers"); +insert vertex team(name) values "team221":("Bulls"); +insert vertex team(name) values "team222":("Hawks"); +insert vertex team(name) values "team223":("Knicks"); +insert vertex team(name) values "team224":("Pistons"); +insert vertex team(name) values "team225":("Bucks"); +insert vertex team(name) values "team226":("Magic"); +insert vertex team(name) values "team227":("Nets"); +insert vertex team(name) values "team228":("Wizards"); +insert vertex team(name) values "team229":("Heat"); +insert edge follow(degree) values "player100"->"player101":(95); +insert edge follow(degree) values "player100"->"player125":(95); +insert edge follow(degree) values "player101"->"player100":(95); +insert edge follow(degree) values "player101"->"player125":(95); +insert edge follow(degree) values "player101"->"player102":(90); +insert edge follow(degree) values "player125"->"player100":(90); +insert edge follow(degree) values "player102"->"player101":(75); +insert edge follow(degree) values "player102"->"player100":(75); +insert edge follow(degree) values "player103"->"player102":(70); +insert edge follow(degree) 
values "player104"->"player101":(50); +insert edge follow(degree) values "player104"->"player100":(55); +insert edge follow(degree) values "player104"->"player105":(60); +insert edge follow(degree) values "player105"->"player104":(83); +insert edge follow(degree) values "player105"->"player100":(70); +insert edge follow(degree) values "player105"->"player116":(80); +insert edge follow(degree) values "player107"->"player100":(80); +insert edge follow(degree) values "player108"->"player101":(80); +insert edge follow(degree) values "player108"->"player100":(80); +insert edge follow(degree) values "player109"->"player100":(80); +insert edge follow(degree) values "player109"->"player125":(90); +insert edge follow(degree) values "player113"->"player100":(99); +insert edge follow(degree) values "player113"->"player101":(99); +insert edge follow(degree) values "player113"->"player125":(99); +insert edge follow(degree) values "player113"->"player104":(99); +insert edge follow(degree) values "player113"->"player105":(99); +insert edge follow(degree) values "player113"->"player116":(99); +insert edge follow(degree) values "player113"->"player118":(99); +insert edge follow(degree) values "player113"->"player121":(99); +insert edge follow(degree) values "player113"->"player106":(99); +insert edge follow(degree) values "player113"->"player119":(99); +insert edge follow(degree) values "player113"->"player120":(99); +insert edge follow(degree) values "player114"->"player115":(90); +insert edge follow(degree) values "player114"->"player140":(90); +insert edge follow(degree) values "player114"->"player103":(90); +insert edge follow(degree) values "player116"->"player141":(100); +insert edge follow(degree) values "player118"->"player131":(90); +insert edge follow(degree) values "player118"->"player120":(90); +insert edge follow(degree) values "player120"->"player118":(80); +insert edge follow(degree) values "player121"->"player116":(90); +insert edge follow(degree) values "player121"->"player128":(90); +insert edge follow(degree) values "player121"->"player129":(90); +insert edge follow(degree) values "player124"->"player141":(-1); +insert edge follow(degree) values "player126"->"player116":(13); +insert edge follow(degree) values "player127"->"player114":(90); +insert edge follow(degree) values "player127"->"player148":(70); +insert edge follow(degree) values "player128"->"player116":(90); +insert edge follow(degree) values "player128"->"player121":(90); +insert edge follow(degree) values "player128"->"player129":(90); +insert edge follow(degree) values "player129"->"player116":(90); +insert edge follow(degree) values "player129"->"player121":(90); +insert edge follow(degree) values "player129"->"player128":(90); +insert edge follow(degree) values "player130"->"player149":(80); +insert edge follow(degree) values "player131"->"player118":(95); +insert edge follow(degree) values "player133"->"player114":(90); +insert edge follow(degree) values "player133"->"player144":(90); +insert edge follow(degree) values "player134"->"player121":(-1); +insert edge follow(degree) values "player135"->"player102":(80); +insert edge follow(degree) values "player136"->"player147":(90); +insert edge follow(degree) values "player136"->"player137":(88); +insert edge follow(degree) values "player136"->"player117":(90); +insert edge follow(degree) values "player136"->"player148":(85); +insert edge follow(degree) values "player137"->"player136":(80); +insert edge follow(degree) values "player137"->"player148":(80); +insert edge 
follow(degree) values "player137"->"player129":(10); +insert edge follow(degree) values "player138"->"player115":(90); +insert edge follow(degree) values "player138"->"player139":(99); +insert edge follow(degree) values "player139"->"player138":(99); +insert edge follow(degree) values "player140"->"player114":(90); +insert edge follow(degree) values "player141"->"player124":(9); +insert edge follow(degree) values "player142"->"player117":(90); +insert edge follow(degree) values "player143"->"player150":(90); +insert edge follow(degree) values "player144"->"player145":(100); +insert edge follow(degree) values "player144"->"player100":(80); +insert edge follow(degree) values "player147"->"player136":(90); +insert edge follow(degree) values "player148"->"player127":(80); +insert edge follow(degree) values "player148"->"player136":(90); +insert edge follow(degree) values "player148"->"player137":(85); +insert edge follow(degree) values "player149"->"player130":(80); +insert edge follow(degree) values "player150"->"player137":(90); +insert edge follow(degree) values "player150"->"player143":(90); +insert edge follow(degree) values "player150"->"player120":(80); +insert edge serve(start_year,end_year) values "player100"->"team204":(1997, 2016); +insert edge serve(start_year,end_year) values "player101"->"team204":(1999, 2018); +insert edge serve(start_year,end_year) values "player101"->"team215":(2018, 2019); +insert edge serve(start_year,end_year) values "player102"->"team203":(2006, 2015); +insert edge serve(start_year,end_year) values "player102"->"team204":(2015, 2019); +insert edge serve(start_year,end_year) values "player103"->"team212":(2006, 2013); +insert edge serve(start_year,end_year) values "player103"->"team218":(2013, 2013); +insert edge serve(start_year,end_year) values "player103"->"team208":(2013, 2017); +insert edge serve(start_year,end_year) values "player103"->"team204":(2017, 2019); +insert edge serve(start_year,end_year) values "player104"->"team200":(2007, 2009); +insert edge serve(start_year,end_year) values "player104"->"team218":(2009, 2010); +insert edge serve(start_year,end_year) values "player104"->"team215"@20102012:(2010, 2012); +insert edge serve(start_year,end_year) values "player104"->"team221":(2012, 2013); +insert edge serve(start_year,end_year) values "player104"->"team204"@20132015:(2013, 2015); +insert edge serve(start_year,end_year) values "player104"->"team208":(2015, 2016); +insert edge serve(start_year,end_year) values "player104"->"team215"@20162017:(2016, 2017); +insert edge serve(start_year,end_year) values "player104"->"team222":(2017, 2018); +insert edge serve(start_year,end_year) values "player104"->"team219":(2018, 2018); +insert edge serve(start_year,end_year) values "player104"->"team204"@20182019:(2018, 2019); +insert edge serve(start_year,end_year) values "player105"->"team216":(2009, 2010); +insert edge serve(start_year,end_year) values "player105"->"team204":(2010, 2018); +insert edge serve(start_year,end_year) values "player105"->"team218":(2018, 2019); +insert edge serve(start_year,end_year) values "player106"->"team204":(2014, 2018); +insert edge serve(start_year,end_year) values "player106"->"team212":(2018, 2019); +insert edge serve(start_year,end_year) values "player107"->"team204":(2013, 2015); +insert edge serve(start_year,end_year) values "player107"->"team224":(2015, 2017); +insert edge serve(start_year,end_year) values "player107"->"team217":(2017, 2019); +insert edge serve(start_year,end_year) values 
"player108"->"team222":(2003, 2005); +insert edge serve(start_year,end_year) values "player108"->"team214":(2005, 2008); +insert edge serve(start_year,end_year) values "player108"->"team215":(2008, 2012); +insert edge serve(start_year,end_year) values "player108"->"team204":(2012, 2016); +insert edge serve(start_year,end_year) values "player108"->"team206":(2016, 2017); +insert edge serve(start_year,end_year) values "player109"->"team204":(2010, 2015); +insert edge serve(start_year,end_year) values "player109"->"team222":(2015, 2017); +insert edge serve(start_year,end_year) values "player109"->"team219":(2017, 2017); +insert edge serve(start_year,end_year) values "player110"->"team204":(2011, 2015); +insert edge serve(start_year,end_year) values "player110"->"team218":(2015, 2017); +insert edge serve(start_year,end_year) values "player110"->"team220":(2017, 2019); +insert edge serve(start_year,end_year) values "player111"->"team215":(2003, 2011); +insert edge serve(start_year,end_year) values "player111"->"team220":(2011, 2015); +insert edge serve(start_year,end_year) values "player111"->"team204":(2015, 2016); +insert edge serve(start_year,end_year) values "player111"->"team200":(2016, 2018); +insert edge serve(start_year,end_year) values "player112"->"team204":(2015, 2017); +insert edge serve(start_year,end_year) values "player112"->"team226":(2017, 2019); +insert edge serve(start_year,end_year) values "player112"->"team219":(2019, 2019); +insert edge serve(start_year,end_year) values "player113"->"team204":(2016, 2019); +insert edge serve(start_year,end_year) values "player114"->"team218":(1997, 2000); +insert edge serve(start_year,end_year) values "player114"->"team226":(2000, 2004); +insert edge serve(start_year,end_year) values "player114"->"team202":(2004, 2010); +insert edge serve(start_year,end_year) values "player114"->"team204":(2013, 2013); +insert edge serve(start_year,end_year) values "player115"->"team210":(1996, 2016); +insert edge serve(start_year,end_year) values "player116"->"team216"@20032010:(2003, 2010); +insert edge serve(start_year,end_year) values "player116"->"team229":(2010, 2014); +insert edge serve(start_year,end_year) values "player116"->"team216"@20142018:(2014, 2018); +insert edge serve(start_year,end_year) values "player116"->"team210":(2018, 2019); +insert edge serve(start_year,end_year) values "player117"->"team200":(2009, 2019);; +insert edge serve(start_year,end_year) values "player118"->"team205":(2008, 2019); +insert edge serve(start_year,end_year) values "player119"->"team205":(2007, 2016); +insert edge serve(start_year,end_year) values "player119"->"team200":(2016, 2019); +insert edge serve(start_year,end_year) values "player120"->"team205":(2009, 2012); +insert edge serve(start_year,end_year) values "player120"->"team202":(2012, 2019); +insert edge serve(start_year,end_year) values "player121"->"team215":(2005, 2011); +insert edge serve(start_year,end_year) values "player121"->"team207":(2011, 2017); +insert edge serve(start_year,end_year) values "player121"->"team202":(2017, 2021); +insert edge serve(start_year,end_year) values "player122"->"team207":(2008, 2018); +insert edge serve(start_year,end_year) values "player122"->"team213":(2018, 2019); +insert edge serve(start_year,end_year) values "player122"->"team223":(2019, 2019); +insert edge serve(start_year,end_year) values "player123"->"team209":(2011, 2017); +insert edge serve(start_year,end_year) values "player123"->"team206":(2017, 2019); +insert edge serve(start_year,end_year) values 
"player124"->"team217":(2006, 2014); +insert edge serve(start_year,end_year) values "player124"->"team213":(2014, 2015); +insert edge serve(start_year,end_year) values "player124"->"team208":(2015, 2016); +insert edge serve(start_year,end_year) values "player124"->"team221":(2016, 2017); +insert edge serve(start_year,end_year) values "player124"->"team211":(2017, 2018); +insert edge serve(start_year,end_year) values "player124"->"team210":(2018, 2019); +insert edge serve(start_year,end_year) values "player125"->"team204":(2002, 2018); +insert edge serve(start_year,end_year) values "player126"->"team216":(2011, 2017); +insert edge serve(start_year,end_year) values "player126"->"team217":(2017, 2019); +insert edge serve(start_year,end_year) values "player127"->"team218":(1998, 2004); +insert edge serve(start_year,end_year) values "player127"->"team227":(2004, 2009); +insert edge serve(start_year,end_year) values "player127"->"team226":(2009, 2010); +insert edge serve(start_year,end_year) values "player127"->"team214":(2010, 2011); +insert edge serve(start_year,end_year) values "player127"->"team213":(2011, 2014); +insert edge serve(start_year,end_year) values "player127"->"team212":(2014, 2017); +insert edge serve(start_year,end_year) values "player127"->"team208":(2017, 2018); +insert edge serve(start_year,end_year) values "player127"->"team222":(2018, 2019); +insert edge serve(start_year,end_year) values "player128"->"team201":(2003, 2011); +insert edge serve(start_year,end_year) values "player128"->"team223":(2011, 2017); +insert edge serve(start_year,end_year) values "player128"->"team205":(2017, 2018); +insert edge serve(start_year,end_year) values "player128"->"team202":(2018, 2019); +insert edge serve(start_year,end_year) values "player129"->"team229"@20032016:(2003, 2016); +insert edge serve(start_year,end_year) values "player129"->"team221":(2016, 2017); +insert edge serve(start_year,end_year) values "player129"->"team216":(2017, 2018); +insert edge serve(start_year,end_year) values "player129"->"team229"@20182019:(2018, 2019); +insert edge serve(start_year,end_year) values "player130"->"team219":(2014, 2019); +insert edge serve(start_year,end_year) values "player131"->"team220":(2010, 2017); +insert edge serve(start_year,end_year) values "player131"->"team205":(2017, 2019); +insert edge serve(start_year,end_year) values "player132"->"team225":(2013, 2019); +insert edge serve(start_year,end_year) values "player133"->"team202":(2002, 2011); +insert edge serve(start_year,end_year) values "player134"->"team207":(2009, 2018); +insert edge serve(start_year,end_year) values "player134"->"team224":(2018, 2019); +insert edge serve(start_year,end_year) values "player135"->"team203":(2012, 2019); +insert edge serve(start_year,end_year) values "player136"->"team214"@19961998:(1996, 1998); +insert edge serve(start_year,end_year) values "player136"->"team213":(1998, 2004); +insert edge serve(start_year,end_year) values "player136"->"team214"@20042012:(2004, 2012); +insert edge serve(start_year,end_year) values "player136"->"team210":(2012, 2015); +insert edge serve(start_year,end_year) values "player137"->"team213":(1998, 2019); +insert edge serve(start_year,end_year) values "player138"->"team212":(2001, 2008); +insert edge serve(start_year,end_year) values "player138"->"team210":(2008, 2014); +insert edge serve(start_year,end_year) values "player138"->"team221":(2014, 2016); +insert edge serve(start_year,end_year) values "player138"->"team204":(2016, 2019); +insert edge 
serve(start_year,end_year) values "player138"->"team225":(2019, 2020); +insert edge serve(start_year,end_year) values "player139"->"team212":(2008, 2019); +insert edge serve(start_year,end_year) values "player139"->"team218":(2019, 2019); +insert edge serve(start_year,end_year) values "player140"->"team224":(1994, 2000); +insert edge serve(start_year,end_year) values "player140"->"team226":(2000, 2007); +insert edge serve(start_year,end_year) values "player140"->"team214":(2007, 2012); +insert edge serve(start_year,end_year) values "player140"->"team207":(2012, 2013); +insert edge serve(start_year,end_year) values "player141"->"team225":(1996, 2003); +insert edge serve(start_year,end_year) values "player141"->"team205":(2003, 2007); +insert edge serve(start_year,end_year) values "player141"->"team217":(2007, 2012); +insert edge serve(start_year,end_year) values "player141"->"team229":(2012, 2014); +insert edge serve(start_year,end_year) values "player142"->"team200":(2011, 2019); +insert edge serve(start_year,end_year) values "player143"->"team223":(2015, 2019); +insert edge serve(start_year,end_year) values "player143"->"team213":(2019, 2020); +insert edge serve(start_year,end_year) values "player144"->"team226":(1992, 1996); +insert edge serve(start_year,end_year) values "player144"->"team210":(1996, 2004); +insert edge serve(start_year,end_year) values "player144"->"team229":(2004, 2008); +insert edge serve(start_year,end_year) values "player144"->"team214":(2008, 2009); +insert edge serve(start_year,end_year) values "player144"->"team216":(2009, 2010); +insert edge serve(start_year,end_year) values "player144"->"team217":(2010, 2011); +insert edge serve(start_year,end_year) values "player145"->"team228":(2008, 2012); +insert edge serve(start_year,end_year) values "player145"->"team201":(2012, 2015); +insert edge serve(start_year,end_year) values "player145"->"team213":(2015, 2016); +insert edge serve(start_year,end_year) values "player145"->"team200":(2016, 2018); +insert edge serve(start_year,end_year) values "player145"->"team210":(2018, 2019); +insert edge serve(start_year,end_year) values "player146"->"team226":(2004, 2012); +insert edge serve(start_year,end_year) values "player146"->"team210":(2012, 2013); +insert edge serve(start_year,end_year) values "player146"->"team202":(2013, 2016); +insert edge serve(start_year,end_year) values "player146"->"team222":(2016, 2017); +insert edge serve(start_year,end_year) values "player146"->"team215":(2017, 2018); +insert edge serve(start_year,end_year) values "player146"->"team228":(2018, 2019); +insert edge serve(start_year,end_year) values "player147"->"team214":(2002, 2010); +insert edge serve(start_year,end_year) values "player147"->"team223":(2010, 2015); +insert edge serve(start_year,end_year) values "player147"->"team229":(2015, 2016); +insert edge serve(start_year,end_year) values "player148"->"team213"@19941996:(1994, 1996); +insert edge serve(start_year,end_year) values "player148"->"team214":(1996, 2001); +insert edge serve(start_year,end_year) values "player148"->"team227":(2001, 2008); +insert edge serve(start_year,end_year) values "player148"->"team213"@20082012:(2008, 2012); +insert edge serve(start_year,end_year) values "player148"->"team223":(2012, 2013); +insert edge serve(start_year,end_year) values "player149"->"team219":(2016, 2019); +insert edge serve(start_year,end_year) values "player150"->"team213":(2018, 2019); \ No newline at end of file diff --git a/docs-2.0-en/nebula-exchange/about-exchange/ex-ug-limitations.md 
b/docs-2.0-en/import-export/nebula-exchange/about-exchange/ex-ug-limitations.md similarity index 100% rename from docs-2.0-en/nebula-exchange/about-exchange/ex-ug-limitations.md rename to docs-2.0-en/import-export/nebula-exchange/about-exchange/ex-ug-limitations.md diff --git a/docs-2.0-en/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md b/docs-2.0-en/import-export/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md similarity index 100% rename from docs-2.0-en/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md rename to docs-2.0-en/import-export/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md diff --git a/docs-2.0-en/nebula-exchange/ex-ug-FAQ.md b/docs-2.0-en/import-export/nebula-exchange/ex-ug-FAQ.md similarity index 100% rename from docs-2.0-en/nebula-exchange/ex-ug-FAQ.md rename to docs-2.0-en/import-export/nebula-exchange/ex-ug-FAQ.md diff --git a/docs-2.0-en/nebula-exchange/ex-ug-compile.md b/docs-2.0-en/import-export/nebula-exchange/ex-ug-compile.md similarity index 100% rename from docs-2.0-en/nebula-exchange/ex-ug-compile.md rename to docs-2.0-en/import-export/nebula-exchange/ex-ug-compile.md diff --git a/docs-2.0-en/nebula-exchange/parameter-reference/ex-ug-para-import-command.md b/docs-2.0-en/import-export/nebula-exchange/parameter-reference/ex-ug-para-import-command.md similarity index 100% rename from docs-2.0-en/nebula-exchange/parameter-reference/ex-ug-para-import-command.md rename to docs-2.0-en/import-export/nebula-exchange/parameter-reference/ex-ug-para-import-command.md diff --git a/docs-2.0-en/nebula-exchange/parameter-reference/ex-ug-parameter.md b/docs-2.0-en/import-export/nebula-exchange/parameter-reference/ex-ug-parameter.md similarity index 100% rename from docs-2.0-en/nebula-exchange/parameter-reference/ex-ug-parameter.md rename to docs-2.0-en/import-export/nebula-exchange/parameter-reference/ex-ug-parameter.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-export-from-nebula.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-export-from-nebula.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-export-from-nebula.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-export-from-nebula.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-csv.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-csv.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-csv.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-csv.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-hbase.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-hbase.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-hbase.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-hbase.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-hive.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-hive.md similarity index 100% rename from 
docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-hive.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-hive.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-jdbc.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-jdbc.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-jdbc.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-jdbc.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-json.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-json.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-json.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-json.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-kafka.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-kafka.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-kafka.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-kafka.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-mysql.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-mysql.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-mysql.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-mysql.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-oracle.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-oracle.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-oracle.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-oracle.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-orc.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-orc.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-orc.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-orc.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md 
b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md diff --git a/docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-sst.md b/docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-sst.md similarity index 100% rename from docs-2.0-en/nebula-exchange/use-exchange/ex-ug-import-from-sst.md rename to docs-2.0-en/import-export/nebula-exchange/use-exchange/ex-ug-import-from-sst.md diff --git a/docs-2.0-en/nebula-flink-connector.md b/docs-2.0-en/import-export/nebula-flink-connector.md similarity index 100% rename from docs-2.0-en/nebula-flink-connector.md rename to docs-2.0-en/import-export/nebula-flink-connector.md diff --git a/docs-2.0-en/nebula-spark-connector.md b/docs-2.0-en/import-export/nebula-spark-connector.md similarity index 100% rename from docs-2.0-en/nebula-spark-connector.md rename to docs-2.0-en/import-export/nebula-spark-connector.md diff --git a/docs-2.0-en/nebula-importer/use-importer.md b/docs-2.0-en/import-export/use-importer.md similarity index 77% rename from docs-2.0-en/nebula-importer/use-importer.md rename to docs-2.0-en/import-export/use-importer.md index b58faf7db35..92450b98464 100644 --- a/docs-2.0-en/nebula-importer/use-importer.md +++ b/docs-2.0-en/import-export/use-importer.md @@ -1,14 +1,17 @@ # NebulaGraph Importer -NebulaGraph Importer (Importer) is a standalone tool for importing data from CSV files into NebulaGraph. Importer can read and import CSV file data from multiple data sources. +NebulaGraph Importer (Importer) is a standalone tool for importing data from CSV files into NebulaGraph. Importer can read and batch import CSV file data from multiple data sources, and also supports batch update and delete operations. ## Features -- Support multiple data sources, including local, S3, OSS, HDFS, FTP, and SFTP. +- Support multiple data sources, including local, S3, OSS, HDFS, FTP, SFTP, and GCS. - Support importing data from CSV format files. A single file can contain multiple tags, multiple edge types or a mix of both. +- Support filtering the data from source. +- Support batch operation, including insert, update, delete. - Support connecting to multiple Graph services simultaneously for importing and dynamic load balancing. - Support reconnect or retry after failure. - Support displaying statistics in multiple dimensions, including import time, import percentage, etc. Support for printing statistics in Console or logs. +- Support SSL. ## Advantage @@ -154,6 +157,12 @@ client: address: "192.168.1.100:9669,192.168.1.101:9669" user: root password: nebula + ssl: + enable: true + certPath: "/home/xxx/cert/importer.crt" + keyPath: "/home/xxx/cert/importer.key" + caPath: "/home/xxx/cert/root.crt" + insecureSkipVerify: false concurrencyPerAddress: 10 reconnectInitialInterval: 1s retry: 3 @@ -166,6 +175,11 @@ client: |`client.address`|`"127.0.0.1:9669"`|Yes| Specifies the address of the NebulaGraph. Multiple addresses are separated by commas.| |`client.user`|`root`|No| NebulaGraph user name.| |`client.password`|`nebula`|No| The password for the NebulaGraph user name.| +|`client.ssl.enable`|`false`|No| Specifies whether to enable SSL authentication.| +|`client.ssl.certPath`|-|No| Specifies the storage path for the SSL public key certificate.
This parameter is required when SSL authentication is enabled.| +|`client.ssl.keyPath`|-|No| Specifies the storage path for the SSL key.<br>
This parameter is required when SSL authentication is enabled.| +|`client.ssl.caPath`|-|No| Specifies the storage path for the CA root certificate.
This parameter is required when SSL authentication is enabled.| +|`client.ssl.insecureSkipVerify`|`false`|No|Specifies whether the client skips verifying the server's certificate chain and hostname. If set to `true`, any certificate chain and hostname provided by the server is accepted.| |`client.concurrencyPerAddress`|`10`|No| The number of concurrent client connections for a single graph service.| |`client.retryInitialInterval`|`1s`|No| Reconnect interval time.| |`client.retry`|`3`|No| The number of retries for failed execution of the nGQL statement.| @@ -226,7 +240,7 @@ log: level: INFO console: true files: - - logs/nebula-importer.log + - logs/nebula-importer.log ``` |Parameter|Default value|Required|Description| @@ -275,7 +289,33 @@ sources: # - hdfs: # address: "127.0.0.1:8020" # Required. The address of HDFS service. # user: "hdfs" # Optional. The user of HDFS service. -# path: "/events/20190918.export.csv" # Required. The path of file in the HDFS service. +# servicePrincipalName: # Optional. The name of the Kerberos service instance for the HDFS service when Kerberos authentication is enabled. +# krb5ConfigFile: # Optional. The path to the Kerberos configuration file for the HDFS service when Kerberos authentication is enabled. Defaults to `/etc/krb5.conf`. +# ccacheFile: # Optional. The path to the Kerberos ccache file for the HDFS service when Kerberos authentication is enabled. +# keyTabFile: # Optional. The path to the Kerberos keytab file for the HDFS service when Kerberos authentication is enabled. +# password: # Optional. The Kerberos password for the HDFS service when Kerberos authentication is enabled. +# dataTransferProtection: # Optional. The type of transport encryption when Kerberos authentication is enabled. Optional values are `authentication`, `integrity`, `privacy`. +# disablePAFXFAST: false # Optional. Whether to disable the use of PA_FX_FAST for clients. +# path: "/events/20190918.export.csv" # Required. The path to the file in the HDFS service. Wildcard filenames are also supported, e.g. `/events/*.export.csv`, make sure all matching files have the same schema. +# - gcs: # Google Cloud Storage +# bucket: chicago-crime-sample # Required. The name of the bucket in the GCS service. +# key: stats/000000000000.csv # Required. The path to the file in the GCS service. +# withoutAuthentication: false # Optional. Whether to anonymize access. Defaults to false, which means access with credentials. +# # When using credentials access, one of the credentialsFile and credentialsJSON parameters is sufficient. +# credentialsFile: "/path/to/your/credentials/file" # Optional. The path to the credentials file for the GCS service. +# credentialsJSON: '{ # Optional. The JSON content of the credentials for the GCS service. 
+# "type": "service_account", +# "project_id": "your-project-id", +# "private_key_id": "key-id", +# "private_key": "-----BEGIN PRIVATE KEY-----\nxxxxx\n-----END PRIVATE KEY-----\n", +# "client_email": "your-client@your-project-id.iam.gserviceaccount.com", +# "client_id": "client-id", +# "auth_uri": "https://accounts.google.com/o/oauth2/auth", +# "token_uri": "https://oauth2.googleapis.com/token", +# "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", +# "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-client%40your-project-id.iam.gserviceaccount.com", +# "universe_domain": "googleapis.com" +# }' batch: 256 csv: delimiter: "|" @@ -283,6 +323,9 @@ sources: lazyQuotes: false tags: - name: Person +# mode: INSERT +# filter: +# expr: Record[1] == "XXX" id: type: "STRING" function: "hash" @@ -321,6 +364,9 @@ sources: batch: 256 edges: - name: KNOWS # person_knows_person +# mode: INSERT +# filter: +# expr: Record[1] == "XXX" src: id: type: "STRING" @@ -356,10 +402,12 @@ The configuration mainly includes the following parts: |:---|:---|:---|:---| |`sources.path`
`sources.s3`
`sources.oss`
`sources.ftp`
`sources.sftp`
`sources.hdfs` |-| No | Specifies the data source information, such as a local file, HDFS, or S3. Only one source can be configured in each `sources` entry; to read from multiple sources, configure multiple entries under `sources`.<br>
See the comments in the example for configuration items for different data sources. | |`sources.batch` |`256`| No | The batch size for executing statements when importing this data source. The priority is higher than `manager.batch`. | -|`sources.csv.delimiter` |`,`| No | Specifies the delimiter for the CSV file. Only 1-character string separators are supported. When using special characters as separators, they need to be escaped. For example, when the delimiter is `0x03` in hexadecimal, i.e. `Ctrl+C`, the escape is written as `"\x03"` or `"\u0003"`. For details on escaping special characters in yaml format, see [Escaped Characters](https://yaml.org/spec/1.2.2/#escaped-characters).| | +|`sources.csv.delimiter` |`,`| No | Specifies the delimiter for the CSV file. Only 1-character string separators are supported. Special characters like tabs (`\t`) and hexadecimal values (e.g., `0x03` or `Ctrl+C`) must be properly escaped and enclosed in double quotes, such as `"\t"` for tabs and `"\x03"` or `"\u0003"` for hexadecimal values, instead of using single quotes. For details on escaping special characters in yaml format, see [Escaped Characters](https://yaml.org/spec/1.2.2/#escaped-characters).| | |`sources.csv.withHeader` |`false`| No | Whether to ignore the first record in the CSV file. | |`sources.csv.lazyQuotes` |`false`| No | Whether to allow lazy quotes. If `lazyQuotes` is true, a quote may appear in an unquoted field and a non-doubled quote may appear in a quoted field. | |`sources.tags.name` |-| Yes | The tag name. | +|`sources.tags.mode` |`INSERT`| No | Batch operation types, including insert, update and delete. Optional values are `INSERT`, `UPDATE` and `DELETE`. | +|`sources.tags.filter.expr` |-| No | Filter the data and only import if the filter conditions are met.
Supported comparison operators are `==`, `!=`, `<`, `>`, `<=`, and `>=`.<br>
Supported logical operators are `not` (`!`), `and` (`&&`), and `or` (`\|\|`).<br>
For example `(Record[0] == "Mahinda" or Record[0] == "Michael") and Record[3] == "male"`. | |`sources.tags.id.type` |`STRING`| No | The type of the VID. | |`sources.tags.id.function` |-| No | Functions to generate the VID. Currently, only function `hash` are supported. | |`sources.tags.id.index` |-| No | The column number corresponding to the VID in the data file. If `sources.tags.id.concatItems` is not configured, this parameter must be configured. | @@ -373,6 +421,8 @@ The configuration mainly includes the following parts: |`sources.tags.props.alternativeIndices` |-| No | Ignored when `nullable` is `false`. The property is fetched from records according to the indices in order until not equal to `nullValue`. | |`sources.tags.props.defaultValue` |-| No | Ignored when `nullable` is `false`. The property default value, when all the values obtained by `index` and `alternativeIndices` are `nullValue`. | |`sources.edges.name` |-| Yes | The edge type name. | +|`sources.edges.mode` |`INSERT`| No | Batch operation types, including insert, update and delete. Optional values are `INSERT`, `UPDATE` and `DELETE`. | +|`sources.edges.filter.expr` |-| No | Filter the data and only import if the filter conditions are met.
Supported comparison operators are `==`, `!=`, `<`, `>`, `<=`, and `>=`.<br>
Supported logical operators are `not` (`!`), `and` (`&&`), and `or` (`\|\|`).<br>
For example `(Record[0] == "Mahinda" or Record[0] == "Michael") and Record[3] == "male"`. | |`sources.edges.src.id.type` |`STRING`| No | The data type of the VID at the starting vertex on the edge. | |`sources.edges.src.id.index` |-| Yes | The column number in the data file corresponding to the VID at the starting vertex on the edge. | |`sources.edges.dst.id.type` |`STRING`| No | The data type of the VID at the destination vertex on the edge. | diff --git a/docs-2.0-en/20.appendix/write-tools.md b/docs-2.0-en/import-export/write-tools.md similarity index 100% rename from docs-2.0-en/20.appendix/write-tools.md rename to docs-2.0-en/import-export/write-tools.md diff --git a/docs-2.0-en/nebula-dashboard-ent/10.tasks.md b/docs-2.0-en/nebula-dashboard-ent/10.tasks.md index 74d46ec2fcb..8c2f7538a80 100644 --- a/docs-2.0-en/nebula-dashboard-ent/10.tasks.md +++ b/docs-2.0-en/nebula-dashboard-ent/10.tasks.md @@ -26,7 +26,7 @@ At the top navigation bar of the Dashboard Enterprise Edition page, click Task C Click the tab **Running Task** to view the progress of the running tasks. -- Click a task name to view the ID, node name, type, create time, and operator of the running task. +- Click a task name to view the ID, node name, type, create time, and operator of the running task. - Clink **Task information** to view task details. ## Task history @@ -34,4 +34,4 @@ Click the tab **Running Task** to view the progress of the running tasks. Click **Task History** to view all ended tasks. - You can filter historical tasks by status, type, date, and time. -- On the right side of the target historical task, click **Task information** to view task details, and click **Logs** to view task execution logs. +- On the right side of the target historical task, click **Task information** to view task details, click **Logs** to view task execution logs, and click **Retry** to re-execute the failed task. diff --git a/docs-2.0-en/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md b/docs-2.0-en/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md index 96ce258ffcc..8ff4d916c60 100644 --- a/docs-2.0-en/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md +++ b/docs-2.0-en/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md @@ -47,75 +47,13 @@ Before deploying Dashboard Enterprise Edition, you must do a check of these: tar -xzvf nebula-dashboard-ent-{{dashboard_ent.release}}.linux-amd64.tar.gz -C /usr/local/ ``` -3. Enter the extracted folder and modify the `config.yaml` file in the `etc` directory to set the `LicenseManagerURL` parameter to the host IP and port number `9119` where the License Manager ([LM](../9.about-license/2.license-management-suite/3.license-manager.md)) is located and to set the relevant configuration. +3. Enter the extracted folder. - ```bash - Name: dashboard-api - Host: 0.0.0.0 # Specifies the address segment that can access Dashboard. - Port: 7005 # The default port used to access Dashboard Enterprise Edition. - MaxBytes: 1073741824 # The maximum content length of an Http request that can be accepted. The default value is 1048576. Value range: 0 ~ 8388608. - Timeout: 60000 # Timeout duration of the access. - Log: # Dashboard run log settings. - KeepDays: 7 # The number of days for keeping log. - Mode: file # The save mode of logs, including console and file. console means the service logs are logged in webserver.log. file means the service logs are logged in access.log, error.log, severe.log, slow.log, and stat.log respectively. 
diff --git a/docs-2.0-en/20.appendix/write-tools.md b/docs-2.0-en/import-export/write-tools.md
similarity index 100%
rename from docs-2.0-en/20.appendix/write-tools.md
rename to docs-2.0-en/import-export/write-tools.md
diff --git a/docs-2.0-en/nebula-dashboard-ent/10.tasks.md b/docs-2.0-en/nebula-dashboard-ent/10.tasks.md
index 74d46ec2fcb..8c2f7538a80 100644
--- a/docs-2.0-en/nebula-dashboard-ent/10.tasks.md
+++ b/docs-2.0-en/nebula-dashboard-ent/10.tasks.md
@@ -26,7 +26,7 @@ At the top navigation bar of the Dashboard Enterprise Edition page, click Task C
Click the tab **Running Task** to view the progress of the running tasks.
-- Click a task name to view the ID, node name, type, create time, and operator of the running task. 
+- Click a task name to view the ID, node name, type, create time, and operator of the running task.
- Click **Task information** to view task details.
## Task history
@@ -34,4 +34,4 @@ Click the tab **Running Task** to view the progress of the running tasks.
Click **Task History** to view all ended tasks.
- You can filter historical tasks by status, type, date, and time.
-- On the right side of the target historical task, click **Task information** to view task details, and click **Logs** to view task execution logs.
+- On the right side of the target historical task, click **Task information** to view task details, click **Logs** to view task execution logs, and click **Retry** to re-execute the failed task.
diff --git a/docs-2.0-en/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md b/docs-2.0-en/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md
index 96ce258ffcc..8ff4d916c60 100644
--- a/docs-2.0-en/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md
+++ b/docs-2.0-en/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md
@@ -47,75 +47,13 @@ Before deploying Dashboard Enterprise Edition, you must do a check of these:
tar -xzvf nebula-dashboard-ent-{{dashboard_ent.release}}.linux-amd64.tar.gz -C /usr/local/
```
-3. Enter the extracted folder and modify the `config.yaml` file in the `etc` directory to set the `LicenseManagerURL` parameter to the host IP and port number `9119` where the License Manager ([LM](../9.about-license/2.license-management-suite/3.license-manager.md)) is located and to set the relevant configuration.
-
- ```bash
- Name: dashboard-api
- Host: 0.0.0.0 # Specifies the address segment that can access Dashboard.
- Port: 7005 # The default port used to access Dashboard Enterprise Edition.
- MaxBytes: 1073741824 # The maximum content length of an Http request that can be accepted. The default value is 1048576. Value range: 0 ~ 8388608.
- Timeout: 60000 # Timeout duration of the access.
- Log: # Dashboard run log settings.
- KeepDays: 7 # The number of days for keeping log.
- Mode: file # The save mode of logs, including console and file. console means the service logs are logged in webserver.log. file means the service logs are logged in access.log, error.log, severe.log, slow.log, and stat.log respectively.
- Encoding: plain # Log encoding method, plain and json are supported.
- Database:
- Dialect: sqlite # The database type used to store metadata. Only support SQLite and MySQL currently. The default value is SQLite.
- AutoMigrate: true # Whether to automatically create a database table. Defaults to true.
- Host: 127.0.0.1 # The IP address of the connected MySQL database.
- Port: 3306 # The port of the connected MySQL database.
- Username: root # The username to log in MySQL.
- Password: nebula # The password to log in MySQL.
- Name: dashboard # The name of the corresponding database.
-
- # Information about the exporter port
- Exporter:
- NodePort: 9100 # The port of the node-exporter service.
- NebulaPort: 9200 # The port of the nebula-stats-exporter service.
-
- # Information of services
- Proxy:
- PrometheusAddr: 127.0.0.1:9091 # The IP address and port of the prometheus service.
- AlertmanagerAddr: 127.0.0.1:9093 # The IP address and port of the Alertmanager service.
-
- # Self-built Prometheus service configuration. If you are using the Prometheus service included with the software, you do not need to set these parameters.
- # PrometheusConfig:
- # Auth: # If authentication is enabled in Prometheus, the following parameters need to be configured. AccessToken is preferred.
- # Username: "" # The account of the Prometheus HTTP API service.
- # Password: "" # The password for the account.
- # AccessToken: "" # The access token of the Prometheus HTTP API service.
- # RuleImport: # Configuration for remotely updating alert rules to the Prometheus service.
- # Enable: false # Whether to enable remote updating.
- # URL: "https://xxxx/prometheus/import" # The interface address for updating the alert rule.
- # Method: POST # Update method. Only POST operations are supported.
- # Auth: # If the authentication is enabled on the interface of the URL, the following parameters need to be configured. AccessToken is preferred. AccessToken is preferred.
- # AccessToken: "" # The access token of the URL interface.
- # Username: "" # The authentication account of the URL interface.
- # Password: "" # The password for the account.
-
- # Information of the sender's Email used to invite LDAP accounts.
- Mail:
- Host: smtp.office365.com # The SMTP server address.
- Port: 587 # The port number of the SMTP server.
- Username: "" # The SMTP server account name.
- Password: "" # The SMTP server password.
-
- # SlowQuery
- SlowQuery:
- Enable: true # The switch of slowquery data polling.
- MessageStore: 14 # Slow queary data store time (day).
- ScrapeInterval: 2m # The interval time for slow query data pulling, eg: 1s, 10s, 2m, 3h
-
- # System information
- System:
- WebAddress: http://127.0.0.1:7005 # The external access for Dashboard. It can be set as a hostname, used for interface callbacks. For example, the invitee who is invited by mail can use this link to access Dashboard.
- MessageStore: 90 # It sets the number of days to keep alert messages, the value of which is 90 by default.
-
- CloudProvider: "" # cloud provider name, used for aliyun now.
- LicenseManagerURL: "" # license manager url.
- ```
-
-4. Start Dashboard.
+3. Enter the extracted folder.
+
+4. (Optional) If the LM service has been deployed, modify the `config.yaml` file in the `etc` directory. Set the value of `LicenseManagerURL` to the IP of the host where LM is located and the port number to `9119`, for example `192.168.8.100:9119`. If the LM service is not deployed, you can deploy it on the visualization page after connecting to Dashboard Enterprise Edition.
+
+   For more configuration descriptions, see the **Configuration file description** section at the end of the topic.
+
+5. Start Dashboard.

    You can use the following command to start the Dashboard with one click.
@@ -163,75 +101,13 @@ The Linux version used is CentOS and lsof has been installed.
    sudo rpm -i nebula-dashboard-ent-xxx.rpm --prefix=<installation_path>
    ```

-3. Enter the extracted folder and modify the `config.yaml` file in the `etc` directory to set the `LicenseManagerURL` parameter to the host IP and port number `9119` where the License Manager ([LM](../9.about-license/2.license-management-suite/3.license-manager.md)) is located and to set the relevant configuration.
+3. Enter the extracted folder.

-   ```bash
-   Name: dashboard-api
-   Host: 0.0.0.0 # Specifies the address segment that can access Dashboard.
-   Port: 7005 # The default port used to access Dashboard Enterprise Edition.
-   MaxBytes: 1073741824 # The maximum content length of an Http request that can be accepted. The default value is 1048576. Value range: 0 ~ 8388608.
-   Timeout: 60000 # Timeout duration of the access.
-   Log: # Dashboard run log settings.
-     KeepDays: 7 # The number of days for keeping log.
-     Mode: file # The save mode of logs, including console and file. console means the service logs are logged in webserver.log. file means the service logs are logged in access.log, error.log, severe.log, slow.log, and stat.log respectively.
-     Encoding: plain # Log encoding method, plain and json are supported.
-   Database:
-     Dialect: sqlite # The database type used to store metadata. Only support SQLite and MySQL currently. The default value is SQLite.
-     AutoMigrate: true # Whether to automatically create a database table. Defaults to true.
-     Host: 127.0.0.1 # The IP address of the connected MySQL database.
-     Port: 3306 # The port of the connected MySQL database.
-     Username: root # The username to log in MySQL.
-     Password: nebula # The password to log in MySQL.
-     Name: dashboard # The name of the corresponding database.
-
-   # Information about the exporter port
-   Exporter:
-     NodePort: 9100 # The port of the node-exporter service.
-     NebulaPort: 9200 # The port of the nebula-stats-exporter service.
-
-   # Information of services
-   Proxy:
-     PrometheusAddr: 127.0.0.1:9091 # The IP address and port of the prometheus service.
-     AlertmanagerAddr: 127.0.0.1:9093 # The IP address and port of the Alertmanager service.
-
-   # Self-built Prometheus service configuration. If you are using the Prometheus service included with the software, you do not need to set these parameters.
-   # PrometheusConfig:
-   #  Auth: # If authentication is enabled in Prometheus, the following parameters need to be configured. AccessToken is preferred.
-   #    Username: "" # The account of the Prometheus HTTP API service.
-   #    Password: "" # The password for the account.
-   #    AccessToken: "" # The access token of the Prometheus HTTP API service.
-   #  RuleImport: # Configuration for remotely updating alert rules to the Prometheus service.
-   #    Enable: false # Whether to enable remote updating.
-   #    URL: "https://xxxx/prometheus/import" # The interface address for updating the alert rule.
-   #    Method: POST # Update method. Only POST operations are supported.
-   #    Auth: # If the authentication is enabled on the interface of the URL, the following parameters need to be configured. AccessToken is preferred. AccessToken is preferred.
-   #      AccessToken: "" # The access token of the URL interface.
-   #      Username: "" # The authentication account of the URL interface.
-   #      Password: "" # The password for the account.
-
-   # Information of the sender's Email used to invite LDAP accounts.
-   Mail:
-     Host: smtp.office365.com # The SMTP server address.
-     Port: 587 # The port number of the SMTP server.
-     Username: "" # The SMTP server account name.
-     Password: "" # The SMTP server password.
-
-   # SlowQuery
-   SlowQuery:
-     Enable: true # The switch of slowquery data polling.
-     MessageStore: 14 # Slow queary data store time (day).
-     ScrapeInterval: 2m # The interval time for slow query data pulling, eg: 1s, 10s, 2m, 3h
-
-   # System information
-   System:
-     WebAddress: http://127.0.0.1:7005 # The external access for Dashboard. It can be set as a hostname, used for interface callbacks. For example, the invitee who is invited by mail can use this link to access Dashboard.
-     MessageStore: 90 # It sets the number of days to keep alert messages, the value of which is 90 by default.
-
-   CloudProvider: "" # cloud provider name, used for aliyun now.
-   LicenseManagerURL: "" # license manager url.
-   ```
-
-4. Run the following commands to view the status of and start all the services.
+4. (Optional) If the LM service has been deployed, modify the `config.yaml` file in the `etc` directory. Set the value of `LicenseManagerURL` to the IP address of the host where LM is located and the port number `9119`, for example `192.168.8.100:9119`. If the LM service is not deployed, you can deploy it on the visualization page after connecting to Dashboard Enterprise Edition.
+
+   For more configuration descriptions, see the **Configuration file description** section at the end of the topic.
+
+5. Run the following commands to view the status of all the services and start them.

 ```
 sudo systemctl list-dependencies nebula-dashboard.target # View the status of all the services.
@@ -274,76 +150,13 @@ sudo rpm -e
 sudo dpkg -i nebula-dashboard-ent-{{dashboard_ent.release}}.ubuntu1804.amd64.deb
 ```

-3. Enter the extracted folder and modify the `config.yaml` file in the `etc` directory to set the `LicenseManagerURL` parameter to the host IP and port number `9119` where the License Manager ([LM](../9.about-license/2.license-management-suite/3.license-manager.md)) is located and to set the relevant configuration.
-
+3. Enter the extracted folder.

-   ```bash
-   Name: dashboard-api
-   Host: 0.0.0.0 # Specifies the address segment that can access Dashboard.
-   Port: 7005 # The default port used to access Dashboard Enterprise Edition.
-   MaxBytes: 1073741824 # The maximum content length of an Http request that can be accepted. The default value is 1048576. Value range: 0 ~ 8388608.
-   Timeout: 60000 # Timeout duration of the access.
-   Log: # Dashboard run log settings.
-     KeepDays: 7 # The number of days for keeping log.
-     Mode: file # The save mode of logs, including console and file. console means the service logs are logged in webserver.log. file means the service logs are logged in access.log, error.log, severe.log, slow.log, and stat.log respectively.
-     Encoding: plain # Log encoding method, plain and json are supported.
-   Database:
-     Dialect: sqlite # The database type used to store metadata. Only support SQLite and MySQL currently. The default value is SQLite.
-     AutoMigrate: true # Whether to automatically create a database table. Defaults to true.
-     Host: 127.0.0.1 # The IP address of the connected MySQL database.
-     Port: 3306 # The port of the connected MySQL database.
-     Username: root # The username to log in MySQL.
-     Password: nebula # The password to log in MySQL.
-     Name: dashboard # The name of the corresponding database.
-
-   # Information about the exporter port
-   Exporter:
-     NodePort: 9100 # The port of the node-exporter service.
-     NebulaPort: 9200 # The port of the nebula-stats-exporter service.
-
-   # Information of services
-   Proxy:
-     PrometheusAddr: 127.0.0.1:9091 # The IP address and port of the prometheus service.
-     AlertmanagerAddr: 127.0.0.1:9093 # The IP address and port of the Alertmanager service.
-
-   # Self-built Prometheus service configuration. If you are using the Prometheus service included with the software, you do not need to set these parameters.
-   # PrometheusConfig:
-   #  Auth: # If authentication is enabled in Prometheus, the following parameters need to be configured. AccessToken is preferred.
-   #    Username: "" # The account of the Prometheus HTTP API service.
-   #    Password: "" # The password for the account.
-   #    AccessToken: "" # The access token of the Prometheus HTTP API service.
-   #  RuleImport: # Configuration for remotely updating alert rules to the Prometheus service.
-   #    Enable: false # Whether to enable remote updating.
-   #    URL: "https://xxxx/prometheus/import" # The interface address for updating the alert rule.
-   #    Method: POST # Update method. Only POST operations are supported.
-   #    Auth: # If the authentication is enabled on the interface of the URL, the following parameters need to be configured. AccessToken is preferred. AccessToken is preferred.
-   #      AccessToken: "" # The access token of the URL interface.
-   #      Username: "" # The authentication account of the URL interface.
-   #      Password: "" # The password for the account.
-
-   # Information of the sender's Email used to invite LDAP accounts.
-   Mail:
-     Host: smtp.office365.com # The SMTP server address.
-     Port: 587 # The port number of the SMTP server.
-     Username: "" # The SMTP server account name.
-     Password: "" # The SMTP server password.
-
-   # SlowQuery
-   SlowQuery:
-     Enable: true # The switch of slowquery data polling.
-     MessageStore: 14 # Slow queary data store time (day).
-     ScrapeInterval: 2m # The interval time for slow query data pulling, eg: 1s, 10s, 2m, 3h
-
-   # System information
-   System:
-     WebAddress: http://127.0.0.1:7005 # The external access for Dashboard. It can be set as a hostname, used for interface callbacks. For example, the invitee who is invited by mail can use this link to access Dashboard.
-     MessageStore: 90 # It sets the number of days to keep alert messages, the value of which is 90 by default.
-
-   CloudProvider: "" # cloud provider name, used for aliyun now.
-   LicenseManagerURL: "" # license manager url.
-   ```
-
-4. Run the following commands to view the status of and start all the services.
+4. (Optional) If the LM service has been deployed, modify the `config.yaml` file in the `etc` directory. Set the value of `LicenseManagerURL` to the IP address of the host where LM is located and the port number `9119`, for example `192.168.8.100:9119`. If the LM service is not deployed, you can deploy it on the visualization page after connecting to Dashboard Enterprise Edition.
+
+   For more configuration descriptions, see the **Configuration file description** section at the end of the topic.
+
+5. Run the following commands to view the status of all the services and start them.

 ```
 sudo systemctl list-dependencies nebula-dashboard.target # View the status of all the services.
@@ -416,6 +229,31 @@ The following section presents two methods for managing the Dashboard service.
I sudo systemctl stop nbd-prometheus.service ``` +## Directory structure + +The structure of the Dashboard Enterprise Edition is as follows: + +```bash +├── assets # Static resource files +│   └── ... +├── bin # System executables +│   └──... +├── CMakeLists.txt # CMake configuration files +| +├── data # Database data files +│   └──... +├── download # Dependency packages +│   └──... +├── etc # Configuration files +│   └──... +├── logs # Log files +│   └──... +├── pids # Service process files +│   └──... +└── scripts # Scripts for managing services + └──... +``` + ## View logs - Users who manage services using `dashboard.service` can view the Dashboard Enterprise Edition logs in the `logs` directory. @@ -435,11 +273,11 @@ The following section presents two methods for managing the Dashboard service. I |`prometheus.log`| Prometheus service log. | |`br`| Backup and restore service log. | |`webserver.log`| Dashboard service log.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `console`. | - |`access.log`| Access log.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. | - |`error.log`| Error log.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. | - |`severe.log`| Severe log.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. | - |`slow.log`| Slow log.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. | - |`stat.log`| Statistic log.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. | + |`access.log`| Access log. Records all request messages for accessing the services, including request time, source address, requested URL, HTTP method, returned HTTP status code, etc.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. | + |`error.log`| Error log. Records error messages that occur during service running. This may include runtime errors, system errors, service logic errors, etc.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. | + |`severe.log`| Severe log. Records error messages that could cause the system to crash, or seriously affect the correct functioning of the system. This may include runtime errors, system errors, serious service logic errors, etc.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. | + |`slow.log`| Slow log. Records requests or operations whose execution time exceeds a preset threshold, helping users identify performance bottlenecks.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. | + |`stat.log`| Statistic log. Records statistical information about the service, the content of which depends on the needs of the application and may include a variety of performance metrics, usage statistics, etc.
It takes effect only when the `Log.Mode` in the Dashboard configuration is `file`. |

- Users managing services with systemd can access the logs for each Dashboard Enterprise Edition service through `journalctl`.
@@ -453,6 +291,74 @@ The following section presents two methods for managing the Dashboard service. I
   journalctl -u nbd-prometheus.service -b
   ```

+## Configuration file description
+
+```bash
+Name: dashboard-api
+Host: 0.0.0.0 # Specifies the address segment that can access Dashboard.
+Port: 7005 # The default port used to access Dashboard Enterprise Edition.
+MaxBytes: 1073741824 # The maximum content length of an HTTP request that can be accepted. The default value is 1048576. Value range: 0 ~ 8388608.
+Timeout: 60000 # Timeout duration of the access.
+Log: # Dashboard run log settings.
+  KeepDays: 7 # The number of days for keeping logs.
+  Mode: file # The save mode of logs, including console and file. console means the service logs are logged in webserver.log. file means the service logs are logged in access.log, error.log, severe.log, slow.log, and stat.log respectively.
+  Encoding: plain # Log encoding method, plain and json are supported.
+Database:
+  Dialect: sqlite # The database type used to store metadata. Only SQLite and MySQL are currently supported. The default value is SQLite.
+  AutoMigrate: true # Whether to automatically create a database table. Defaults to true.
+  Host: 127.0.0.1 # The IP address of the connected MySQL database.
+  Port: 3306 # The port of the connected MySQL database.
+  Username: root # The username to log in MySQL.
+  Password: nebula # The password to log in MySQL.
+  Name: dashboard # The name of the corresponding database.
+
+# Information about the exporter port
+Exporter:
+  NodePort: 9100 # The port of the node-exporter service.
+  NebulaPort: 9200 # The port of the nebula-stats-exporter service.
+
+# Information of services
+Proxy:
+  PrometheusAddr: 127.0.0.1:9091 # The IP address and port of the prometheus service.
+  AlertmanagerAddr: 127.0.0.1:9093 # The IP address and port of the Alertmanager service.
+
+# Self-built Prometheus service configuration. If you are using the Prometheus service included with the software, you do not need to set these parameters.
+# PrometheusConfig:
+#  Auth: # If authentication is enabled in Prometheus, the following parameters need to be configured. AccessToken is preferred.
+#    Username: "" # The account of the Prometheus HTTP API service.
+#    Password: "" # The password for the account.
+#    AccessToken: "" # The access token of the Prometheus HTTP API service.
+#  RuleImport: # Configuration for remotely updating alert rules to the Prometheus service.
+#    Enable: false # Whether to enable remote updating.
+#    URL: "https://xxxx/prometheus/import" # The interface address for updating the alert rule.
+#    Method: POST # Update method. Only POST operations are supported.
+#    Auth: # If authentication is enabled on the interface of the URL, the following parameters need to be configured. AccessToken is preferred.
+#      AccessToken: "" # The access token of the URL interface.
+#      Username: "" # The authentication account of the URL interface.
+#      Password: "" # The password for the account.
+
+# Information of the sender's email used to invite LDAP accounts.
+Mail:
+  Host: smtp.office365.com # The SMTP server address.
+  Port: 587 # The port number of the SMTP server.
+  Username: "" # The SMTP server account name.
+  Password: "" # The SMTP server password.
+
+# SlowQuery
+SlowQuery:
+  Enable: true # The switch of slow query data polling.
+  MessageStore: 14 # Slow query data store time (days).
+  ScrapeInterval: 2m # The interval time for slow query data pulling, e.g. 1s, 10s, 2m, 3h
+
+# System information
+System:
+  WebAddress: http://127.0.0.1:7005 # The external access address for Dashboard. It can be set as a hostname, used for interface callbacks. For example, an invitee who is invited by mail can use this link to access Dashboard.
+  MessageStore: 90 # The number of days to keep alert messages, 90 by default.
+
+CloudProvider: "" # Cloud provider name, currently only used for aliyun.
+LicenseManagerURL: "" # License manager URL.
+```
+
 ## Next to do

 [Connect to Dashboard](3.connect-dashboard.md)
diff --git a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/1.overview.md b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/1.overview.md
index 897852d429b..0f534601aec 100644
--- a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/1.overview.md
+++ b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/1.overview.md
@@ -1,4 +1,4 @@
-# Cluster Overview
+# Cluster overview

 This topic introduces the **Cluster Overview** page of Dashboard, which contains the following parts:
diff --git a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/10.database-user.md b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/10.database-user.md
new file mode 100644
index 00000000000..7e332538b2a
--- /dev/null
+++ b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/10.database-user.md
@@ -0,0 +1,35 @@
+# Database member
+
+Users can manage account permissions for databases within the cluster, including managing database members and authorizing graph space permissions.
+
+!!! note
+
+    For managing account privileges in each cluster on the platform, see [Platform member management](../5.account-management.md).
+
+## Entry
+
+1. At the top navigation bar of the Dashboard Enterprise Edition page, click **Cluster Management**.
+2. On the right side of the target cluster, click **Detail**.
+3. On the left-side navigation bar of the page, click **Database user**.
+
+## Managing database members
+
+1. Select the **Database user** tab.
+2. Click **Create database user** and fill in the username, password, and IP whitelist.
+
+    !!! note
+
+        To create database users in batches, click **Add** in the upper left corner to add a new line of configuration items.
+
+3. Click **Confirm**.
+
+## Authorizing graph space permissions
+
+1. Select the **Authorization** tab.
+2. Click **Grant Role** and select the username, the graph space to be authorized, and the role to be authorized. For details on role permissions, see [Roles and privileges](../../7.data-security/1.authentication/3.role-list.md).
+
+    !!! note
+
+        To authorize in batches, click **Add** in the upper left corner to add a new line of configuration items.
+
+3. Click **Confirm**.
\ No newline at end of file
diff --git a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/2.monitor.md b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/2.monitor.md
index 7604aa0eb15..ae9d828d9c1 100644
--- a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/2.monitor.md
+++ b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/2.monitor.md
@@ -47,7 +47,7 @@ Clicking the ![watch](https://docs-cdn.nebula-graph.com.cn/figures/watch.png) bu

 !!! Caution

-    Before using Graph Space Metrics, users need to set `enable_space_level_metrics` to `true` in the Graph service.
For specific operations, see [Update config](operator/config-management.md). + Before using Graph Space Metrics, users need to set `enable_space_level_metrics` to `true` in the Graph service. For specific operations, see [Update config](operator/update-config.md). ### Managing Monitoring diff --git a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/7.data-synchronization.md b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/7.data-synchronization.md index f035b9c6121..20f526b807f 100644 --- a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/7.data-synchronization.md +++ b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/7.data-synchronization.md @@ -1,4 +1,4 @@ -# Data Synchronization +# Data synchronization The **Data Synchronization** function of Dashboard Enterprise Edition is used to realize data synchronization between clusters. diff --git a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/9.notification.md b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/9.notification.md index ba90c2139b9..f4c3de2a142 100644 --- a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/9.notification.md +++ b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/9.notification.md @@ -47,7 +47,7 @@ Follow the below steps to create a custom rule. | Parameter | Description | | -------- | ------------------------------------------------------------ | - | Metric Type | Set a metric type. Metric type includes the node metric type and the service type (graphd,storaged,metad). | + | Metric Type | Set a metric type. Metric type includes the node metric type, service metric type (graphd,storaged,metad), and graph space metric type. | | Metric Rule | Click **+ Add condition** to set metric rules for a node or a service. It supports adding composite conditions (like the usage of AND). For more information, see [Monitoring metrics](../7.monitor-parameter.md).| | Alert duration | Set how long an alert lasts before the alert message is triggered. Unit: Minute (Min). | diff --git a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/operator/member-management.md b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/operator/member-management.md deleted file mode 100644 index a0bd43ed120..00000000000 --- a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/operator/member-management.md +++ /dev/null @@ -1,21 +0,0 @@ -# Member management - -**Member Management** page shows only the cluster creator account (`owner` role) by default. The account with the `owner` role can add and delete the cluster administrator (`operator` role). - -## Entry - -1. At the top navigation bar of the Dashboard Enterprise Edition page, click **Cluster Management**. -2. On the right side of the target cluster, click **Detail**. -3. On the left-side navigation bar of the page, click **Operation**->**Member Management**. - -## Steps - -- Add the cluster administrator: Click the search box at the top left. Select the target account that you want to add to be the administrator of the cluster in the drop-down list, and then click **Add**. - - !!! note - - The accounts of cluster members must be included in Dashboard accounts. For information about how to create an account, see [Authority management](../../5.account-management.md). - -- Delete the cluster administrator: Click ![delete](https://docs-cdn.nebula-graph.com.cn/figures/alert_delete.png) in the operation column on the right of the cluster administrator account, and then click **Confirm**. 
- -- Transfer the `owner` role: Click **Transfer** in the operation column on the right of the `owner` role. Select the target account that you want to be transferred, and then click **Confirm**. \ No newline at end of file diff --git a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/operator/config-management.md b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/operator/update-config.md similarity index 84% rename from docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/operator/config-management.md rename to docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/operator/update-config.md index 459b3498534..6c978799d5f 100644 --- a/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/operator/config-management.md +++ b/docs-2.0-en/nebula-dashboard-ent/4.cluster-operator/operator/update-config.md @@ -2,10 +2,6 @@ On **Config Management** page, you can view and update the service configuration files. -## Precautions - -You need to restart the corresponding service in the **Service** page after the configuration modification. For details, see [Service](service.md). - ## Entry 1. At the top navigation bar of the Dashboard Enterprise Edition page, click **Cluster Management**. @@ -18,9 +14,13 @@ You need to restart the corresponding service in the **Service** page after the 2. Locate the configuration to be modified and click **Edit** in the **Operation** column. 3. In the pop-up dialog box, you can modify the **Value** individually. They can also be modified uniformly at the top, and you need to click **Apply To All Services** after modification. + !!! note + + The top of the dialog box displays whether the modification of the parameter requires a restart to take effect. Please restart the corresponding service on the **Services** page. For details, see [Service](service.md). + A screenshot that shows the configuration of dashboard -4. Click **Confirm** after the modification is complete. +4. Click **Confirm** or **Save and Apply** after the modification is complete. ## Add configuration diff --git a/docs-2.0-en/nebula-dashboard-ent/5.account-management.md b/docs-2.0-en/nebula-dashboard-ent/5.account-management.md index 2452095bb5b..894dd5f5c84 100644 --- a/docs-2.0-en/nebula-dashboard-ent/5.account-management.md +++ b/docs-2.0-en/nebula-dashboard-ent/5.account-management.md @@ -1,68 +1,68 @@ -# Authority management +# Platform member management You can log into NebulaGraph Dashboard Enterprise Edition with different types of accounts. Different accounts have different permissions. This article introduces account types, roles, and permissions. !!! note - You need to configure the related protocols before using LDAP accounts or OAuth2.0 accounts. For details, see [Single sign-on](system-settings/single-sign-on.md). + For managing account privileges on databases within a cluster, see [Database member](4.cluster-operator/10.database-user.md). ## Account types -Once you log into Dashboard Enterprise Edition using the initialized account name `nebula` and password `nebula`, you can create different types of accounts: LDAP accounts, OAuth2.0 accounts and general accounts. +Once you log into Dashboard Enterprise Edition using the initialized account name `nebula` and password `nebula`, you can create different types of accounts: general accounts and SSO accounts. -### LDAP accounts +### General accounts -Dashboard Enterprise Edition enables you to log into it with your enterprise account by accessing [LDAP (Lightweight Directory Access Protocol)](https://ldap.com/). 
+
+Dashboard Enterprise Edition enables you to create local accounts.

-### OAuth2.0 accounts
+### SSO accounts

-!!! caution
+!!! note

-    The feature is still in beta. It will continue to be optimized.
+    SSO (Single Sign On) supports LDAP, OAuth2.0 and CAS. You need to configure the related protocols before using it. For details, see [Single sign-on](system-settings/single-sign-on.md).

-Dashboard Enterprise Edition enables you to use access_token to authorize the third-party applications to access the protected information based on [OAuth2.0](https://oauth.net/2/).
+- LDAP accounts

-### General accounts
+  Dashboard Enterprise Edition enables you to log into it with your enterprise account by accessing [LDAP (Lightweight Directory Access Protocol)](https://ldap.com/).

-Dashboard Enterprise Edition enables you to create local accounts.
+- OAuth2.0 accounts

-## Account roles
+  !!! caution

-You can set different roles for your accounts. Roles are different in permissions. There are two types of account roles in Dashboard Enterprise Edition: system roles (`admin` and `user`) and cluster roles (`owner` and `operator`).
+      The feature is still in beta. It will continue to be optimized.

-The relationship between system roles and cluster roles and their descriptions are as follows.
+  Dashboard Enterprise Edition enables you to use access_token to authorize the third-party applications to access the protected information based on [OAuth2.0](https://oauth.net/2/).

-![roles](https://docs-cdn.nebula-graph.com.cn/figures/ds_roles_en.png)
+- CAS accounts

-**System roles**:
+  Dashboard Enterprise Edition enables you to verify your identity based on the [CAS (Central Authentication Service)](https://apereo.github.io/cas) 2.0 protocol.

-| Roles | Permission | Description |
-| ------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| admin | 1. Create accounts.
2. Modify the role of an existing account.
3. Perform platform settings, system-level alert settings.
4. Delete accounts. | 1. There can be multiple `admin` roles, i.e. system administrators.
2. An `admin` is the `operator` of all clusters by default, i.e. an `admin` can manage all clusters.
3. An `admin` can assign a `user` to be the `operator` of a cluster.
4. Displayed in the cluster member list by default. An `owner` cannot remove an `admin` unless the `admin` is converted to `user`, and the system will automatically remove the `admin` from the cluster member list. | -| user | 1. Has read-only permissions for the system dimension.
2. After an `admin` creates a new account with the `user` role, the `user` account cannot view any clusters if the corresponding cluster is not assigned to the account.
3. Can create clusters and become the `owner` of the clusters. | 1. General role.
2. There can be multiple `user` roles. | +## Account roles +You can set different roles for your accounts. Roles have different permissions. There are two types of account roles in Dashboard Enterprise Edition. -**Cluster roles**: +![roles](https://docs-cdn.nebula-graph.com.cn/figures/eo_dash_role_231007_en.png) -| Roles | Permission | Description | -| ---------- | ------------------------------------------------------------ | ---------------------------------------------------------- | -| `operator` | 1. Scale clusters.
2. Set cluster alerts.
3. Manage cluster nodes.
4. Manage cluster services. | 1. The cluster operator.
2. There can be multiple `operator` roles in a cluster. | -| `owner` | 1. Have all the permissions of `operator`.
2. Unbind and delete clusters.
3. Add and remove accounts with `operator` roles.
4. Transfer the `owner` role. | 1. The cluster owner.
2. There can only be one `owner` in a cluster. |
+- Platform roles: `admin` and `user`.
+  - The `admin` role is equivalent to the administrator of the platform, who can manage the platform roles of all accounts and perform daily operation and maintenance operations on all clusters.
+  - The `user` role is equivalent to a general user of the platform, who can only manage the clusters that the user has created or been authorized to manage.
+- Cluster roles: `owner` and `operator`.
+  - The `owner` role represents the owner of a cluster; you can authorize other accounts to manage your clusters. You become the `owner` when you create a cluster, and you can transfer the `owner` role to other accounts.
+  - The `operator` role means that you can perform daily operations on the cluster, but you cannot transfer the `owner` role, change the cluster database password, unbind the cluster, or delete the cluster.

 ## Create accounts

 Accounts with `admin` roles can create other accounts. The steps are as follows:

-1. At the top navigation bar of the Dashboard Enterprise Edition page, click **Authority**, and click **Create**.
+1. At the top navigation bar of the Dashboard Enterprise Edition page, click **Members**, and click **Add**.
 2. Select one method and input information to create an account, and click **OK**.

-  - Invite (LDAP or OAuth2.0 accounts): Set the invitee's account type, enterprise email and role. After the invitee clicks the **Accept** button in the email to activate the account, the invitee needs to click **Login** to automatically jump to the Dashboard Enterprise Edition login page. The invitee can log into Dashboard with his/her enterprise email account and password.
+  - Invite (LDAP or OAuth2.0 accounts): Set the invitee's account type, enterprise email, role, and authorized clusters. After the invitee clicks the **Accept** button in the email to activate the account, the invitee needs to click **Login** to automatically jump to the Dashboard Enterprise Edition login page. The invitee can log into Dashboard with his/her enterprise email account and password.

    !!! note

        Automatic registration is also supported after LDAP is enabled. When you enter an unregistered account in LDAP mode on the login page, the Dashboard automatically registers the account, but the role permission is `user`.

-  - Create Account (general accounts): Set the login name, password, and role for the new account. For information about roles, see the above content.
+  - Create Account (general accounts): Set the login name, password, role, and authorized clusters for the new account. For information about roles, see the **Account roles** section above.

 ## View accounts

@@ -70,14 +70,20 @@ The created accounts are displayed on the **Authority** page.

 - You can view the username, account type, role, associated cluster, and create time of accounts.

-  - **Account Type**: Includes **ldap**, **oauth2.0** and **platform**. **platform** is a general account.
-  - **Role**: Displays the role of an account, including **admin** and **user**. For more information about roles, see the above content.
+  - **Account Type**: Includes **ldap**, **oauth2.0**, **cas** and **platform**. **platform** is a general account.
+  - **Role**: Displays the role of an account, including **admin** and **user**. For more information about roles, see the **Account roles** section above.
  - **Associated Clusters**: Displays all the clusters that can be operated by an account.
If the cluster was created by the account, the associated cluster has the `owner` tag.
- You can search for accounts in the search box, and filter accounts by selecting an associated cluster.

 ## Other operations

-- In the **Action** column on the **Authority** page, click ![alert-edit](https://docs-cdn.nebula-graph.com.cn/figures/alert_edit.png) to edit account information.
+Performing the following operations requires the account to have the associated role permissions. For details on roles, see the **Account Roles** section above.
+
+- Edit account
+
+  In the **Action** column, click ![alert-edit](https://docs-cdn.nebula-graph.com.cn/figures/alert_edit.png) to edit account information. This includes modifying the account platform role, or authorizing the `operator` role to the account for clusters where you have the `owner` role.
+
+- Delete account

-- In the **Action** column on the **Authority** page, click ![alert-delete](https://docs-cdn.nebula-graph.com.cn/figures/alert_delete.png) to delete an account.
+  In the **Action** column, click ![alert-delete](https://docs-cdn.nebula-graph.com.cn/figures/alert_delete.png) to delete an account. Accounts without the `owner` role can be deleted.
diff --git a/docs-2.0-en/nebula-dashboard-ent/8.faq.md b/docs-2.0-en/nebula-dashboard-ent/8.faq.md
index dec8583c379..48af8926cac 100644
--- a/docs-2.0-en/nebula-dashboard-ent/8.faq.md
+++ b/docs-2.0-en/nebula-dashboard-ent/8.faq.md
@@ -60,4 +60,14 @@ When importing a cluster, you need to access the path where the NebulaGraph serv

If **Service Host** shows `127.0.0.1`, and your Dashboard and NebulaGraph are deployed on the same machine when authorizing service hosts, the system will prompt "SSH connection error". You need to change the Host IP of each service to the real machine IP in the configuration files of all NebulaGraph services. For more information, see [Configuration management](../5.configurations-and-logs/1.configurations/1.configurations.md).

-If you import a cluster deployed with Docker, it also prompts "SSH connection error". Dashboard does not support importing a cluster deployed with Docker.
\ No newline at end of file
+If you import a cluster deployed with Docker, it also prompts "SSH connection error". Dashboard does not support importing a cluster deployed with Docker.
+
+## How to implement a highly available architecture
+
+Users can use third-party high-availability software (e.g. [HAProxy](https://www.haproxy.org/)) to implement a high-availability architecture for both Dashboard and LM.
+
+For example, you can deploy the Dashboard service, database service, Prometheus service, and LM service on multiple machines, and then use HAProxy to load-balance each of them, as sketched below.
+
+Then fill in the external access addresses of these services (the database service, Prometheus service, and LM service) in the Dashboard configuration to complete the highly available architecture.
+
+For detailed solutions, contact the after-sales staff.
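As a rough illustration of that load-balancing layer, the fragment below sketches one HAProxy frontend/backend pair for the Dashboard service itself. The backend names and the replica addresses `192.168.8.101`/`192.168.8.102` are hypothetical, and a real setup would add equivalent stanzas for the database, Prometheus, and LM services.

```bash
# haproxy.cfg fragment (sketch) -- balance two hypothetical Dashboard replicas.
frontend dashboard_front
    bind *:7005                            # The external Dashboard port.
    mode tcp
    default_backend dashboard_back

backend dashboard_back
    mode tcp
    balance roundrobin
    server dash1 192.168.8.101:7005 check  # Example replica 1.
    server dash2 192.168.8.102:7005 check  # Example replica 2.
```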
\ No newline at end of file
diff --git a/docs-2.0-en/nebula-dashboard-ent/system-settings/single-sign-on.md b/docs-2.0-en/nebula-dashboard-ent/system-settings/single-sign-on.md
index 60d5ea465c7..6762368a5e1 100644
--- a/docs-2.0-en/nebula-dashboard-ent/system-settings/single-sign-on.md
+++ b/docs-2.0-en/nebula-dashboard-ent/system-settings/single-sign-on.md
@@ -1,11 +1,11 @@
 # Single sign-on

-NebulaGraph Dashboard Enterprise Edition supports general accounts, LDAP accounts, and OAuth2.0 accounts. This article introduces how to configure the protocols of LDAP and OAuth2.0.
+NebulaGraph Dashboard Enterprise Edition supports general accounts, LDAP accounts, OAuth2.0 accounts, and CAS accounts. This article introduces how to configure the protocols of LDAP, OAuth2.0 and CAS.

 !!! note

     - After the configuration is complete, you can create the account and activate the invitation. For details, see [Authority management](../5.account-management.md).
-    - You can quickly switch on or off LDAP or OAuth2.0 in the left navigation bar.
+    - You can quickly switch on or off a login method in the left navigation bar.

 ## LDAP configuration

@@ -61,3 +61,24 @@ After LDAP is enabled, you can register an LDAP account in two ways:
 ### Instruction

 After OAuth2.0 is enabled, you can invite others to register by [email](../5.account-management.md).
+
+## CAS configuration
+
+### Entry
+
+1. At the top navigation bar of the Dashboard Enterprise Edition page, click **System Settings**.
+2. On the left-side navigation bar of the page, click **Single Sign-on**->**CAS**.
+
+### Configuration description
+
+|Parameter|Example|Description|
+|:--|:--|:--|
+|`CAS server address` | `https://192.168.8.100:8080/cas`| CAS server address. |
+|`Organization` | `yueshu` | The name of the organization displayed on the login page. |
+
+### Instruction
+
+After enabling CAS, select SSO login on the login page.
+
+- If the login ticket is already saved in the browser cache, you can log in to NebulaGraph Dashboard Enterprise Edition directly.
+- If there is no login ticket in the browser cache, you will be redirected to the central server for login verification.
\ No newline at end of file
diff --git a/docs-2.0-en/nebula-dashboard/1.what-is-dashboard.md b/docs-2.0-en/nebula-dashboard/1.what-is-dashboard.md
new file mode 100644
index 00000000000..8afbbc461b3
--- /dev/null
+++ b/docs-2.0-en/nebula-dashboard/1.what-is-dashboard.md
@@ -0,0 +1,55 @@
+# What is NebulaGraph Dashboard Community Edition
+
+NebulaGraph Dashboard Community Edition (Dashboard for short) is a visualization tool that monitors the status of machines and services in NebulaGraph clusters.
+
+!!! enterpriseonly
+
+    Dashboard Enterprise Edition adds features such as visual cluster creation, batch import of clusters, fast scaling, etc. For more information, see [Pricing](https://nebula-graph.io/pricing/).
+
+## Features
+
+Dashboard monitors:
+
+- The status of all the machines in clusters, including CPU, memory, load, disk, and network.
+
+- The information of all the services in clusters, including the IP addresses, versions, and monitoring metrics (such as the number of queries, the latency of queries, the latency of heartbeats, and so on).
+
+- The information of clusters, including the information of services, partitions, configurations, and long-term tasks.
+
+- The refresh frequency of the metrics page, which you can set.
+
+## Scenarios
+
+You can use Dashboard in one of the following scenarios:
+
+- You want to monitor key metrics conveniently and quickly, and present key information of the business to ensure the business operates normally.
+
+- You want to monitor clusters from multiple dimensions (such as time, aggregation rules, and metrics).
+
+- After a failure occurs, you need to review it and confirm its occurrence time and unexpected phenomena.
+
+## Precautions
+
+The monitoring data will be retained for 14 days by default, that is, only the monitoring data within the last 14 days can be queried.
+
+!!! note
+
+    The monitoring service is supported by Prometheus. The update frequency and retention intervals can be modified. For details, see [Prometheus](https://prometheus.io/docs/prometheus/latest/configuration/configuration/).
+
+## Version compatibility
+
+The version correspondence between NebulaGraph and Dashboard Community Edition is as follows.
+
+|NebulaGraph version|Dashboard version|
+|:---|:---|
+|3.5.0 |3.4.0|
+|3.4.0 ~ 3.4.1|3.4.0, 3.2.0|
+|3.3.0 |3.2.0|
+|2.5.0 ~ 3.2.0|3.1.0|
+|2.5.x ~ 3.1.0|1.1.1|
+|2.0.1~2.5.1|1.0.2|
+|2.0.1~2.5.1|1.0.1|
+
+## Release note
+
+[Release](https://github.com/vesoft-inc/nebula-dashboard/releases/tag/{{dashboard.tag}})
diff --git a/docs-2.0-en/nebula-dashboard/2.deploy-dashboard.md b/docs-2.0-en/nebula-dashboard/2.deploy-dashboard.md
new file mode 100644
index 00000000000..0191f62df36
--- /dev/null
+++ b/docs-2.0-en/nebula-dashboard/2.deploy-dashboard.md
@@ -0,0 +1,145 @@
+# Deploy Dashboard Community Edition
+
+This topic will describe how to deploy NebulaGraph Dashboard in detail.
+
+To download and compile the latest source code of Dashboard, follow the instructions on the [nebula dashboard GitHub page](https://github.com/vesoft-inc/nebula-dashboard).
+
+## Prerequisites
+
+Before you deploy Dashboard, you must confirm that:
+
+- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../2.quick-start/1.quick-start-workflow.md).
+
+- Before the installation starts, the following ports are not occupied.
+
+  - 9200
+
+  - 9100
+
+  - 9090
+
+  - 8090
+
+  - 7003
+
+- The node-exporter is installed on the machines to be monitored. For details on installation, see [Prometheus document](https://prometheus.io/docs/guides/node-exporter/).
+
+## Steps
+
+1. Download the tar package [nebula-dashboard-{{ dashboard.release }}.x86_64.tar.gz](https://oss-cdn.nebula-graph.com.cn/nebula-graph-dashboard/{{ dashboard.release }}/nebula-dashboard-{{ dashboard.release }}.x86_64.tar.gz) as needed.
+
+2. Run `tar -xvf nebula-dashboard-{{ dashboard.release }}.x86_64.tar.gz` to decompress the installation package.
+
+3. Modify the `config.yaml` file in `nebula-dashboard`.
+
+   The configuration file contains the configurations of four dependent services and configurations of clusters. The descriptions of the dependent services are as follows.
+
+   |Service|Default port| Description|
+   |:---|:---|:---|
+   |nebula-http-gateway |8090| Provides HTTP ports for cluster services to execute nGQL statements to interact with the NebulaGraph database. |
+   |nebula-stats-exporter |9200| Collects the performance metrics in the cluster, including the IP addresses, versions, and monitoring metrics (such as the number of queries, the latency of queries, the latency of heartbeats, and so on). |
+   |node-exporter |9100| Collects the resource information of nodes in the cluster, including the CPU, memory, load, disk, and network.
|
   |prometheus |9090| The time series database that stores monitoring data. |

   The descriptions of the configuration file are as follows.

   ```yaml
   port: 7003 # Web service port.
   gateway:
     ip: hostIP # The IP of the machine where the Dashboard is deployed.
     port: 8090
     https: false # Whether to enable HTTPS.
     runmode: dev # Program running mode, including dev, test, and prod. It is used to distinguish between different running environments generally.
   stats-exporter:
     ip: hostIP # The IP of the machine where the Dashboard is deployed.
     nebulaPort: 9200
     https: false # Whether to enable HTTPS.
   node-exporter:
     - ip: nebulaHostIP_1 # The IP of the machine where the NebulaGraph is deployed.
       port: 9100
       https: false # Whether to enable HTTPS.
     # - ip: nebulaHostIP_2
     #   port: 9100
     #   https: false
   prometheus:
     ip: hostIP # The IP of the machine where the Dashboard is deployed.
     prometheusPort: 9090
     https: false # Whether to enable HTTPS.
     scrape_interval: 5s # The interval for collecting the monitoring data, which is 1 minute by default.
     evaluation_interval: 5s # The interval for running alert rules, which is 1 minute by default.
   # Cluster node info
   nebula-cluster:
     name: 'default' # Cluster name
     metad:
       - name: metad0
         endpointIP: nebulaMetadIP # The IP of the machine where the Meta service is deployed.
         port: 9559
         endpointPort: 19559
     # - name: metad1
     #   endpointIP: nebulaMetadIP
     #   port: 9559
     #   endpointPort: 19559
     graphd:
       - name: graphd0
         endpointIP: nebulaGraphdIP # The IP of the machine where the Graph service is deployed.
         port: 9669
         endpointPort: 19669
     # - name: graphd1
     #   endpointIP: nebulaGraphdIP
     #   port: 9669
     #   endpointPort: 19669
     storaged:
       - name: storaged0
         endpointIP: nebulaStoragedIP # The IP of the machine where the Storage service is deployed.
         port: 9779
         endpointPort: 19779
     # - name: storaged1
     #   endpointIP: nebulaStoragedIP
     #   port: 9779
     #   endpointPort: 19779
   ```

4. Run `./dashboard.service start all` to start the services.

### Deploy Dashboard with Docker Compose

If you are deploying Dashboard using Docker, you should also modify the configuration file `config.yaml`, and then run `docker-compose up -d` to start the container.

!!! note

    If you change the port number in `config.yaml`, the port number in `docker-compose.yaml` needs to be consistent as well.

Run `docker-compose stop` to stop the container.

## Manage services in Dashboard

You can use the `dashboard.service` script to start, restart, stop, and check the Dashboard services.

```bash
sudo <dashboard_path>/dashboard.service
[-v] [-h]
<start|restart|stop|status> <prometheus|webserver|exporter|gateway|all>
```

| Parameter | Description |
| :------------------------- | :------------------- |
| `dashboard_path` | Dashboard installation path. |
| `-v` | Display detailed debugging information. |
| `-h` | Display help information. |
| `start` | Start the target services. |
| `restart` | Restart the target services. |
| `stop` | Stop the target services. |
| `status` | Check the status of the target services. |
| `prometheus` | Set the prometheus service as the target service. |
| `webserver` | Set the webserver Service as the target service. |
| `exporter` | Set the exporter Service as the target service. |
| `gateway` | Set the gateway Service as the target service. |
| `all` | Set all the Dashboard services as the target services. |

!!! note

    To view the Dashboard version, run the command `./dashboard.service -version`.
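For instance, assuming Dashboard was unpacked to `/usr/local/nebula-dashboard` (a placeholder path, not a product default), the script can be driven like this:

```bash
# Check the status of every Dashboard service, then restart only the web server.
# /usr/local/nebula-dashboard is a hypothetical installation path.
sudo /usr/local/nebula-dashboard/dashboard.service status all
sudo /usr/local/nebula-dashboard/dashboard.service restart webserver

# Print detailed debugging information while stopping the exporter service.
sudo /usr/local/nebula-dashboard/dashboard.service -v stop exporter
```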
+
+## Next to do
+
+[Connect to Dashboard](3.connect-dashboard.md)
diff --git a/docs-2.0-en/nebula-dashboard/3.connect-dashboard.md b/docs-2.0-en/nebula-dashboard/3.connect-dashboard.md
new file mode 100644
index 00000000000..a3e121a67b5
--- /dev/null
+++ b/docs-2.0-en/nebula-dashboard/3.connect-dashboard.md
@@ -0,0 +1,25 @@
+# Connect Dashboard
+
+After Dashboard is deployed, you can log in and use Dashboard in the browser.
+
+## Prerequisites
+
+- The Dashboard services are started. For more information, see [Deploy Dashboard](2.deploy-dashboard.md).
+
+- We recommend that you use Chrome version 89 or later. Otherwise, there may be compatibility issues.
+
+## Procedures
+
+1. Confirm the IP address of the machine where the Dashboard service is installed. Enter `<IP address>:7003` in the browser to open the login page.
+
+2. Enter the username and password of the NebulaGraph database.
+
+   - If authentication is enabled, you can log in with the created accounts.
+
+   - If authentication is not enabled, you can only log in using `root` as the username and random characters as the password.
+
+   To enable authentication, see [Authentication](../7.data-security/1.authentication/1.authentication.md).
+
+3. Select the NebulaGraph version to be used.
+
+4. Click **Login**.
\ No newline at end of file
diff --git a/docs-2.0-en/nebula-dashboard/4.use-dashboard.md b/docs-2.0-en/nebula-dashboard/4.use-dashboard.md
new file mode 100644
index 00000000000..86530bea577
--- /dev/null
+++ b/docs-2.0-en/nebula-dashboard/4.use-dashboard.md
@@ -0,0 +1,143 @@
+# Dashboard
+
+NebulaGraph Dashboard consists of three parts: Machine, Service, and Management. This topic will describe them in detail.
+
+## Overview
+
+A screenshot that shows the overview of dashboard
+
+## Machine
+
+Click **Machine**->**Overview** to enter the machine overview page.
+
+On this page, you can view the variation of CPU, Memory, Load, Disk, and Network In/Out quickly.
+
+- By default, you can view the monitoring data for a maximum of 14 days. You can also select a time range or quickly select the latest 1 hour, 6 hours, 12 hours, 1 day, 3 days, 7 days, or 14 days.
+- By default, you can view the monitoring data of all the instances in clusters. You can select the instances you want to view in the **instance** box.
+- By default, the monitoring information page will not be updated automatically. You can set the update frequency of the monitoring information page globally or click the ![setup](https://docs-cdn.nebula-graph.com.cn/figures/refresh-220616.png) button to update the page manually.
+- To set a base line, click the ![setup](https://docs-cdn.nebula-graph.com.cn/figures/Setup.png) button.
+- To view the detailed monitoring information, click the ![watch](https://docs-cdn.nebula-graph.com.cn/figures/watch.png) button. In this example, select `Load` for details. The figure is as follows.
+
+  A screenshot that shows the load of dashboard
+
+  - You can set the monitoring time range, instance, update frequency and base line.
+  - You can search for or select the target metric. For details about monitoring metrics, see [Metrics](6.monitor-parameter.md).
+  - You can temporarily hide nodes that you do not need to view.
+  - You can click the ![watch](https://docs-cdn.nebula-graph.com.cn/figures/watch.png) button to view the detailed monitoring information.
+
+## Service
+
+Click **Service**->**Overview** to enter the service overview page.
+
+On this page, you can view the information of Graph, Meta, and Storage services quickly.
In the upper right corner, the number of normal services and abnormal services will be displayed.

!!! note

    In the **Service** page, only two monitoring metrics can be set for each service, which can be adjusted by clicking the **Set up** button.

- By default, you can view the monitoring data for a maximum of 14 days. You can also select a time range or quickly select the latest 1 hour, 6 hours, 12 hours, 1 day, 3 days, 7 days, or 14 days.
- By default, you can view the monitoring data of all the instances in clusters. You can select the instances you want to view in the **instance** box.
- By default, the monitoring information page will not be updated automatically. You can set the update frequency of the monitoring information page globally or click the ![setup](https://docs-cdn.nebula-graph.com.cn/figures/refresh-220616.png) button to update the page manually.
- You can view the status of all the services in a cluster.
- To view the detailed monitoring information, click the ![watch](https://docs-cdn.nebula-graph.com.cn/figures/watch.png) button. In this example, select `Graph` for details. The figure is as follows.

  A screenshot that shows the graph service of dashboard

  - You can set the monitoring time range, instance, update frequency, period, aggregation and base line.
  - You can search for or select the target metric. For details of monitoring metrics, see [Monitor parameter](6.monitor-parameter.md).
  - You can temporarily hide nodes that you do not need to view.
  - You can click the ![watch](https://docs-cdn.nebula-graph.com.cn/figures/watch.png) button to view the detailed monitoring information.
  - The Graph service supports a set of space-level metrics. For more information, see the following section **Graph space**.

### Graph space

!!! note

    Before using graph space metrics, you need to set `enable_space_level_metrics` to `true` in the Graph service. For details, see [Graph Service configuration](../5.configurations-and-logs/1.configurations/3.graph-config.md).

!!! compatibility "Space-level metric incompatibility"

    If a graph space name contains special characters, the corresponding metric data of that graph space may not be displayed.

The service monitoring page can also monitor graph space level metrics. **Only when the behavior of a graph space metric is triggered can you specify the graph space to view information about the corresponding graph space metric**.

Graph space metrics record the information of different graph spaces separately. Currently, only the Graph service supports a set of space-level metrics.

For information about the graph space metrics, see [Graph space](6.monitor-parameter.md).

![graph-metrics](https://docs-cdn.nebula-graph.com.cn/figures/space_level_metrics.png)

## Management

### Overview info

On the **Overview Info** page, you can see the information of the NebulaGraph cluster, including Storage leader distribution, Storage service details, versions and hosts information of each NebulaGraph service, and partition distribution and details.

A screenshot that shows the cluster information of dashboard

#### Storage Leader Distribution

In this section, the number of Leaders and the Leader distribution will be shown.

- Click the **Balance Leader** button in the upper right corner to distribute Leaders evenly and quickly in the NebulaGraph cluster. For details about the Leader, see [Storage Service](../1.introduction/3.nebula-graph-architecture/4.storage-service.md).
+ +- Click **Detail** in the upper right corner to view the details of the Leader distribution. + + +#### Version + +In this section, the version and host information of each NebulaGraph service will be shown. Click **Detail** in the upper right corner to view the details of the version and host information. + + +#### Service information + +In this section, the information on Storage services will be shown. The parameter description is as follows: + +| Parameter | Description | +| :--- | :--- | +| `Host` | The IP address of the host. | +| `Port` | The port of the host. | +| `Status` | The host status. | +| `Git Info Sha` | The commit ID of the current version. | +| `Leader Count` | The number of Leaders. | +| `Partition Distribution` | The distribution of partitions. | +| `Leader Distribution` | The distribution of Leaders. | + +Click **Detail** in the upper right corner to view the details of the Storage service information. + +#### Partition Distribution + +Select the specified graph space in the upper left corner, you can view the distribution of partitions in the specified graph space. You can see the IP addresses and ports of all Storage services in the cluster, and the number of partitions in each Storage service. + +Click **Detail** in the upper right corner to view more details. + +#### Partition information + +In this section, the information on partitions will be shown. Before viewing the partition information, you need to select a graph space in the upper left corner. The parameter description is as follows: + +|Parameter|Description| +|:---|:---| +|`Partition ID`|The ID of the partition.| +|`Leader`|The IP address and port of the leader.| +|`Peers`|The IP addresses and ports of all the replicas.| +|`Losts`|The IP addresses and ports of faulty replicas.| + +Click **Detail** in the upper right corner to view details. You can also enter the partition ID into the input box in the upper right corner of the details page to filter the shown data. + +### Config + +It shows the configuration of the NebulaGraph service. NebulaGraph Dashboard Community Edition does not support online modification of configurations for now. + +## Others + +In the lower left corner of the page, you can: + +- Sign out + +- Switch between Chinese and English + +- View the current Dashboard release + +- View the user manual and forum + +- Fold the sidebar diff --git a/docs-2.0-en/nebula-dashboard/6.monitor-parameter.md b/docs-2.0-en/nebula-dashboard/6.monitor-parameter.md new file mode 100644 index 00000000000..08bf0ce58ad --- /dev/null +++ b/docs-2.0-en/nebula-dashboard/6.monitor-parameter.md @@ -0,0 +1,88 @@ +# Metrics + +This topic will describe the monitoring metrics in NebulaGraph Dashboard. + +## Machine + +!!! note + + - All the machine metrics listed below are for the Linux operating system. + - The default unit in **Disk** and **Network** is byte. The unit will change with the data magnitude as the page displays. For example, when the flow is less than 1 KB/s, the unit will be Bytes/s. + - For versions of Dashboard Community Edition greater than v1.0.2, the memory occupied by Buff and Cache will not be counted in the memory usage. + +### CPU + +|Parameter|Description| +|:---|:---| +|`cpu_utilization`| The percentage of used CPU. | +|`cpu_idle`| The percentage of idled CPU. | +|`cpu_wait`| The percentage of CPU waiting for IO operations. | +|`cpu_user`| The percentage of CPU used by users. | +|`cpu_system`| The percentage of CPU used by the system. 
| + +### Memory + +|Parameter| Description| +|:---|:---| +|`memory_utilization`| The percentage of used memory. | +|`memory_used`| The memory space used (not including caches). | +|`memory_free`| The memory space available. | + +### Load + +|Parameter| Description| +|:---|:---| +|`load_1m`| The average load of the system in the last 1 minute. | +|`load_5m`| The average load of the system in the last 5 minutes. | +|`load_15m`| The average load of the system in the last 15 minutes. | + +### Disk + +|Parameter| Description| +|:---|:---| +|`disk_used_percentage`| The disk utilization percentage.| +|`disk_used`| The disk space used. | +|`disk_free`| The disk space available. | +|`disk_readbytes`| The number of bytes that the system reads in the disk per second. | +|`disk_writebytes`| The number of bytes that the system writes in the disk per second. | +|`disk_readiops`| The number of read queries that the disk receives per second. | +|`disk_writeiops`| The number of write queries that the disk receives per second. | +|`inode_utilization`| The percentage of used inode. | + +### Network + +|Parameter| Description| +|:---|:---| +|`network_in_rate`| The number of bytes that the network card receives per second. | +|`network_out_rate`| The number of bytes that the network card sends out per second. | +|`network_in_errs`| The number of wrong bytes that the network card receives per second. | +|`network_out_errs`| The number of wrong bytes that the network card sends out per second. | +|`network_in_packets`| The number of data packages that the network card receives per second. | +|`network_out_packets`| The number of data packages that the network card sends out per second. | + +## Service + +### Period + +The period is the time range of counting metrics. It currently supports 5 seconds, 60 seconds, 600 seconds, and 3600 seconds, which respectively represent the last 5 seconds, the last 1 minute, the last 10 minutes, and the last 1 hour. + +### Metric methods + +|Parameter|Description| +|:---|:---| +|`rate`| The average rate of operations per second in a period. | +|`sum`| The sum of operations in the period. | +|`avg`| The average latency in the cycle. | +|`P75`| The 75th percentile latency. | +|`P95`| The 95th percentile latency. | +|`P99`| The 99th percentile latency. | +|`P999`| The 99.9th percentile latency. | + + +!!! note + + Dashboard collects the following metrics from the NebulaGraph core, but only shows the metrics that are important to it. + +{% include "/source-monitoring-metrics.md" %} + + diff --git a/docs-2.0-en/nebula-explorer/db-management/dbuser_management.md b/docs-2.0-en/nebula-explorer/db-management/dbuser_management.md index 8f50abe1fcf..ecf1d619251 100644 --- a/docs-2.0-en/nebula-explorer/db-management/dbuser_management.md +++ b/docs-2.0-en/nebula-explorer/db-management/dbuser_management.md @@ -14,7 +14,8 @@ At the top navigation bar, click ![db_user_management](https://docs-cdn.nebula-g !!! note - Only the `root` user can create users. + - Only the `root` user can create users. + - Since there is a compatibility issue with the nGQL syntax, the IP whitelist modification function is disabled when connecting to the graph database of version 3.5.x and below. You can modify the IP whitelist by executing the nGQL statement for the corresponding database version. 1. In the tab **User list**, click **Create User** and set the following parameters. 
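+
+    For reference, the same user management operations can also be performed with standard nGQL statements, as in the minimal sketch below. The user name, password, and space name are illustrative assumptions, and the Enterprise-specific IP whitelist clause is omitted because its syntax depends on the database version:
+
+    ```ngql
+    // Create a user. Only the root user can do this.
+    CREATE USER IF NOT EXISTS user1 WITH PASSWORD 'nebula';
+    // Grant the USER role on a graph space to the new account.
+    GRANT ROLE USER ON basketballplayer TO user1;
+    // Change the password. The old password is required in nGQL.
+    CHANGE PASSWORD user1 FROM 'nebula' TO 'nebula123';
+    ```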
@@ -50,7 +51,7 @@ At the top navigation bar, click ![db_user_management](https://docs-cdn.nebula-g Only the `root` user can view the **User List**. - View: View the user permissions in each space. -- Edit: Change the password and IP whitelist of the user. You do not need to provide the old password when changing the password. If the user is not `root`, you can change the password in ![clear_connection](https://docs-cdn.nebula-graph.com.cn/figures/session_221024.png) on the upper right corner of the page. +- Edit: Change the password or IP whitelist of the user. You do not need to provide the old password when changing the password. If the user is not `root`, you can change the password in ![clear_connection](https://docs-cdn.nebula-graph.com.cn/figures/session_221024.png) on the upper right corner of the page. - Delete User: Only the `root` user can delete other users. - Search user: Search for the account by keyword. diff --git a/docs-2.0-en/nebula-explorer/db-management/explorer-console.md b/docs-2.0-en/nebula-explorer/db-management/explorer-console.md index 3e06da92141..ae3d237e0fd 100644 --- a/docs-2.0-en/nebula-explorer/db-management/explorer-console.md +++ b/docs-2.0-en/nebula-explorer/db-management/explorer-console.md @@ -21,7 +21,7 @@ The following table lists the functions on the console page. | 5 | run | After inputting the nGQL statement in the input box, click ![run](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-play.png) button to indicate the operation to start running the statement. | | 6 | save as template | Save the nGQL statement entered in the input box as a template. For details, see [nGQL template](ngql-template.md). | | 7 | input box | After inputting the nGQL statements, click the ![run](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-play.png) button to run the statement. You can input multiple statements and run them at the same time by using the separator `;`, and also use the symbol `//` to add comments. | -| 8 | custom parameters display | Click the ![Query](https://docs-cdn.nebula-graph.com.cn/figures/down.png) button to expand the custom parameters for parameterized query. For details, see [Manage parameters](../../nebula-console.md).| +| 8 | custom parameters display | Click the ![Query](https://docs-cdn.nebula-graph.com.cn/figures/down.png) button to expand the custom parameters for parameterized query. For details, see [Manage parameters](../../14.client/nebula-console.md).| | 9 | statement running status | After running the nGQL statement, the statement running status is displayed. If the statement runs successfully, the statement is displayed in green. If the statement fails, the statement is displayed in red. | | 10 | add to favorites | Click the ![save](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-save.png) button to save the statement as a favorite, the button for the favorite statement is colored in yellow exhibit.| | 11 | export CSV file or PNG file | After running the nGQL statement to return the result, when the result is in **Table** window, click the ![download](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png) button to export as a CSV file.
Switch to the **Graph** window and click the ![download](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png) button to save the results as a CSV file or PNG image export. | diff --git a/docs-2.0-en/nebula-explorer/deploy-connect/ex-ug-connect.md b/docs-2.0-en/nebula-explorer/deploy-connect/ex-ug-connect.md index 0cc9cc822fa..4d07ade68d8 100644 --- a/docs-2.0-en/nebula-explorer/deploy-connect/ex-ug-connect.md +++ b/docs-2.0-en/nebula-explorer/deploy-connect/ex-ug-connect.md @@ -1,6 +1,10 @@ # Connect to NebulaGraph -After successfully launching Explorer, you need to configure to connect to NebulaGraph. You can connect directly to NebulaGraph by default. To ensure data security, OAuth2.0 authentication is also supported. You can connect to NebulaGraph only after the authentication is passed. +After successfully launching Explorer, you can enter database credentials to connect to the database. You can connect directly to NebulaGraph by default. + +!!! note + + To ensure data security, OAuth2.0 and CAS authentication are also supported. You can connect to NebulaGraph only after the authentication is passed. For detailed configuration, see [Deploy Explorer](ex-ug-deploy.md). ## Prerequisites @@ -14,37 +18,6 @@ Before connecting to the NebulaGraph database, you need to confirm the following - We recommend you to use the Chrome browser of the version above 89. Otherwise, there may be compatibility issues. -## OAuth2.0 Configuration - -!!! caution - - The feature is still in beta. It will continue to be optimized. - -!!! note - - If you want to connect directly to NebulaGraph, see **Procedure** below. - -To enable OAuth2.0 authentication, modify the configuration file in the Explorer installation directory. The path is `config/app-config.yaml`. - -The descriptions of the OAuth configuration are as follows. - -|Parameter|Example|Description| -|:--|:--|:--| -|`Enable`|`false`| Enable or disable OAuth2.0 authentication. | -|`ClientID` | `4953xxx-mmnoge13xx.apps.googleusercontent.com`| The application's ClientId. | -|`ClientSecret` | `GOCxxx-xaytomFexxx` | The application's ClientSecret. | -|`RedirectURL` | `http://dashboard.vesoft-inc.com/login` |The URL that redirects to Dashboard. | -|`AuthURL` | `https://accounts.google.com/o/oauth2/auth` | The URL used for authentication. | -|`TokenURL` | `https://oauth2.googleapis.com/token`| The URL used to get the access_token. | -|`UserInfoURL` | `https://www.googleapis.com/oauth2/v1/userinfo`| The URL used to get the user information. | -|`UsernameKey` | `email`| The key of the user name. | -|`Organization` | `vesoft company` | The organization name. | -|`TokenName`|`oauth_token`| The name of the token in the cookie.| -|`Scope`| `email`| Scope of OAuth permissions. The scope of permissions needs to be a subset of the scope configured by the vendor's OAuth2.0 platform, otherwise, the request will fail. Make sure the `UsernameKey` is accessible within the requested scope. | -|`AvatarKey`|`picture`| The key of the avatar in the user information.| - -After the configuration is complete, restart the Explorer service. The OAuth authentication is displayed on the login page. You can continue to connect to NebulaGraph only after the authentication is passed. 
-
 
 ## Procedure
 
 To connect Explorer to NebulaGraph, follow these steps:
diff --git a/docs-2.0-en/nebula-explorer/deploy-connect/ex-ug-deploy.md b/docs-2.0-en/nebula-explorer/deploy-connect/ex-ug-deploy.md
index f9815e4bbd1..9a5aac47e65 100644
--- a/docs-2.0-en/nebula-explorer/deploy-connect/ex-ug-deploy.md
+++ b/docs-2.0-en/nebula-explorer/deploy-connect/ex-ug-deploy.md
@@ -44,19 +44,21 @@ In addition, if you need to use **Workflow** for complex graph computing, you ne
 
 2. Use `sudo rpm -i <rpm>` to install the RPM package.
 
-    For example, use the following command to install Explorer. The default installation path is `/usr/local/nebula-explorer`.
+   For example, use the following command to install Explorer. The default installation path is `/usr/local/nebula-explorer`.
 
-    ```bash
-    sudo rpm -i nebula-explorer-<version>.x86_64.rpm
-    ```
+   ```bash
+   sudo rpm -i nebula-explorer-<version>.x86_64.rpm
+   ```
 
-    You can also install it to the specified path using the following command:
-    ```bash
-    sudo rpm -i nebula-explorer-<version>.x86_64.rpm --prefix=<path>
-    ```
+   You can also install it to the specified path using the following command:
+   ```bash
+   sudo rpm -i nebula-explorer-<version>.x86_64.rpm --prefix=<path>
+   ```
 
 3. Enter the extracted folder, and modify the `app-config.yaml` file in the `config` directory, setting the value of `LicenseManagerURL` to the host IP of LM and the port number `9119`, for example, `192.168.8.100:9119`.
 
+    For more configuration descriptions, see the **Configuration file description** section at the end of the topic.
+
 4. (Optional) Configure the Dag Controller. See the **Configure Dag Controller** section below.
 
 5. Enter the folder `nebula-explorer`, and start the service using the following command.
@@ -117,6 +119,8 @@ sudo rpm -e nebula-graph-explorer-<version>.x86_64
 
 3. Enter the extracted folder, and modify the `app-config.yaml` file in the `config` directory, setting the value of `LicenseManagerURL` to the host IP of LM and the port number `9119`, for example, `192.168.8.100:9119`.
 
+    For more configuration descriptions, see the **Configuration file description** section at the end of the topic.
+
 4. (Optional) Configure the Dag Controller. See the **Configure Dag Controller** section below.
 
 5. Enter the folder `nebula-explorer`, and start the service using the following command.
@@ -164,6 +168,8 @@ sudo dpkg -r nebula-explorer
 
 3. Enter the extracted folder, and modify the `app-config.yaml` file in the `config` directory, setting the value of `LicenseManagerURL` to the host IP of LM and the port number `9119`, for example, `192.168.8.100:9119`.
 
+    For more configuration descriptions, see the **Configuration file description** section at the end of the topic.
+
 4. (Optional) Configure the Dag Controller. See the **Configure Dag Controller** section below.
 
 5. Enter the folder `nebula-explorer`, and start the service using the following command.
@@ -271,6 +277,137 @@ The Dag Controller can perform complex graph computing with NebulaGraph Analytic
     exec_file: /home/xxx/nebula-analytics/scripts/run_algo.sh
 ```
 
+## Directory structure
+
+The structure of the Explorer Enterprise Edition is as follows:
+
+```bash
+├── CMakeLists.txt # CMake configuration file
+|
+├── config # Configuration files
+│
+├── dag-ctrl # Dag Controller installation directory
+│
+├── scripts # Scripts for managing services
+│
+├── tmp # Temporary files
+|
+└── nebula-explorer-server # Explorer service application
+```
+
+## View logs
+
+Users can view the Explorer Enterprise Edition logs in the `logs` directory.
+ +For example: + +``` +cat logs/access.log +``` + +The descriptions of the log files are as follows. + +|Log file| Description | +|:--|:--| +|`access.log`| Access log. Records all request messages for accessing the services, including request time, source address, requested URL, HTTP method, returned HTTP status code, etc.
It takes effect only when the `Log.Mode` in the Explorer configuration is `file`. | +|`error.log`| Error log. Records error messages that occur during service running. This may include runtime errors, system errors, service logic errors, etc.
It takes effect only when the `Log.Mode` in the Explorer configuration is `file`. | +|`severe.log`| Severe log. Records error messages that could cause the system to crash, or seriously affect the correct functioning of the system. This may include runtime errors, system errors, serious service logic errors, etc.
It takes effect only when the `Log.Mode` in the Explorer configuration is `file`. | +|`slow.log`| Slow log. Records requests or operations whose execution time exceeds a preset threshold, helping users identify performance bottlenecks.
It takes effect only when the `Log.Mode` in the Explorer configuration is `file`. | +|`stat.log`| Statistic log. Records statistical information about the service, the content of which depends on the needs of the application and may include a variety of performance metrics, usage statistics, etc.
It takes effect only when the `Log.Mode` in the Explorer configuration is `file`. |
+
+## Configuration file description
+
+```yaml
+Name: explorer
+Version: {{explorer.release}}
+Database: nebula
+Host: 0.0.0.0 # Specifies the address where explorer can be accessed.
+Port: 7002 # The default access port for explorer.
+
+# The following parameters need to be configured when using SSL encrypted access or iframes. Currently only self-signed certificates are supported. For how to do this, see the iframe section.
+# CertFile: "./config/Explorer.crt" # The path to the SSL public key certificate.
+# KeyFile: "./config/Explorer.key" # The path to the SSL key.
+
+MaxBytes: 1073741824 # The maximum ContentLength that HTTP can accept, default is 1048576. Range: 0 ~ 8388608.
+Timeout: 30000 # Access timeout.
+
+# The deployment mode of explorer, which supports single and multiple instances. The optional values are single and multi.
+# In multi-instance mode, local storage service (data import) will be disabled to ensure data consistency between instances.
+# AppInstance: "multi"
+Log: # explorer run log settings. See https://go-zero.dev/en/docs/tutorials/go-zero/configuration/log/
+  Mode: file # Log saving method. The optional values are: console and file. console means the service log will be recorded in webserver.log; file means the service log will be recorded in access.log, error.log, severe.log, slow.log, and stat.log respectively.
+  Level: error # Log output level. The optional values are: debug, info, error, and severe.
+  KeepDays: 7 # The number of days the log is retained.
+Env: "local"
+Debug:
+  Enable: false # Whether to enable Debug mode.
+Auth:
+  TokenName: "explorer_token" # The name of the token after login.
+  AccessSecret: "login_secret" # The secret of the token after login.
+  AccessExpire: 259200 # The validity period of the token after login, in seconds.
+File:
+  UploadDir: "./data/upload/" # The path where the uploaded files are stored when importing data.
+  TasksDir: "./data/tasks" # Task file storage path. Includes imported tasks, workflow tasks, etc.
+# SqliteDbFilePath # Deprecated.
+# TaskIdPath: "./data/taskId.data" # Deprecated. Please use DB.SqliteDbFilePath instead.
+DB:
+  Enable: true
+  LogLevel: 4 # Database runtime log levels. 1, 2, 3, and 4 correspond to Silent, ERROR, Warn, and INFO, respectively.
+  IgnoreRecordNotFoundError: false
+  AutoMigrate: true # Whether or not to automatically create database tables. The default is true.
+  Type: "sqlite3" # The type of database used in the backend. Supports mysql and sqlite3.
+  Host: "127.0.0.1:3306" # The IP and port of the database.
+  Name: "nebula" # Database name.
+  User: "root" # Database username.
+  Password: "123456" # Database password.
+  SqliteDbFilePath: "./data/tasks.db" # This parameter is required for sqlite3 only. The address of the database file.
+  MaxOpenConns: 30 # Maximum number of active connections in the connection pool.
+  MaxIdleConns: 10 # Maximum number of free connections in the connection pool.
+Analytics:
+  Host: "http://127.0.0.1:9002" # The address of the DAG service for the workflow.
+  # RPC_HDFS_PASSWORD: "password" # The password for the HDFS RPC service.
+# OAuth: # Deprecated. Continues to be compatible with version 3.x. Please use SSO instead.
+#   Enable: false
+#   ClientID: "10274xxxx-v2kn8oe6xxxxx.apps.googleusercontent.com" # The client ID of the OAuth service.
+#   ClientSecret: "GOCSPX-8Enxxxxx" # The client secret for the OAuth service.
+#   AuthURL: "https://accounts.google.com/o/oauth2/v2/auth" # The URL of the OAuth service.
+#   TokenURL: "https://oauth2.googleapis.com/token" # The URL to get the access token.
+#   Scopes: "https://www.googleapis.com/auth/userinfo.email" # The scope of the OAuth service.
+#   UserInfoURL: "https://www.googleapis.com/oauth2/v1/userinfo" # The URL to get the user information.
+#   UsernameKey: "email" # Username field.
+#   Organization: "vesoft" # OAuth vendor name.
+#   TokenName: "oauth_token" # The name of the token in the cookie.
+#   RedirectURL: "http://127.0.0.1:7002/login" # The redirect URL for the OAuth service.
+#   AvatarKey: "picture" # The key for the avatar in the user information.
+SSO:
+  Enable: false # Whether to enable single sign-on.
+  Type: "CAS" # Single sign-on service type. The available values are OAuth2 and CAS. Configure this parameter and then configure the corresponding OAuthConfig or CASConfig below.
+  OAuthConfig:
+    ClientID: "1039194xxxxx-taufdxxxxx.apps.googleusercontent.com" # The client ID of the OAuth service.
+    ClientSecret: "GOCSPX-F_xBzfitifMU7acySxxxxx" # The client secret for the OAuth service.
+    AuthURL: "https://accounts.google.com/o/oauth2/v2/auth" # The URL of the OAuth service.
+    TokenURL: "https://oauth2.googleapis.com/token" # The URL to get the access token.
+    Scopes: "https://www.googleapis.com/auth/userinfo.email" # The scope of the OAuth service.
+    UserInfoURL: "https://www.googleapis.com/oauth2/v1/userinfo" # The URL to get the user information.
+    UsernameKey: "email" # Username field.
+    Organization: "vesoft" # OAuth vendor name. It will be displayed on the login page.
+    TokenName: "oauth_token" # The name of the token in the cookie.
+    RedirectURL: "http://127.0.0.1:7002/login" # The redirect URL for the OAuth service.
+    AvatarKey: "picture" # The key for the avatar in the user information.
+  CASConfig:
+    Address: "" # The address of the CAS service.
+    Organization: "vesoft" # CAS vendor name. It will be displayed on the login page.
+    AvatarKey: "avatar" # The key for the avatar in the user information.
+    TokenName: "cas_token" # The name of the token in the cookie.
+IframeMode:
+  Enable: false # Whether to enable iframe mode. Any source is allowed by default.
+  # Origins: # The source whitelist of iframe. Any source is allowed by default.
+  #   - "http://192.168.8.8"
+LicenseManagerURL: http://192.168.8.100:9119 # License manager url.
+CorsOrigins: [] # The list of domains that are allowed to initiate cross-domain requests.
+```
+
 ## Next to do
 
 [Connect to Explorer](ex-ug-connect.md)
diff --git a/docs-2.0-en/nebula-explorer/faq.md b/docs-2.0-en/nebula-explorer/faq.md
index a78fd63fc2e..e608f5b7e27 100644
--- a/docs-2.0-en/nebula-explorer/faq.md
+++ b/docs-2.0-en/nebula-explorer/faq.md
@@ -84,3 +84,37 @@ If the port is not opened, an error similar to the following may be reported:
 
 ## How to resolve the error `broadcast.hpp:193] Check failed: (size_t)recv_bytes >= sizeof(chunk_tail_t) recv message too small: 0`?
 
 The amount of data to be processed is too small, but the number of compute nodes and processes is too large. Smaller `clusterSize` and `processes` need to be set when submitting jobs.
+
+## How to implement a highly available architecture
+
+Users can use third-party high-availability software (e.g. [HAProxy](https://www.haproxy.org/)) to implement the high-availability architecture for Explorer and the high-availability architecture for LM.
+
+For example, you can deploy the Explorer service and the database service on multiple machines.
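+
+As a minimal sketch, an HAProxy frontend/backend pair for two Explorer instances might look like the following. The bind port `7002` matches Explorer's default port, while the two backend addresses are illustrative assumptions, and a standard `defaults` section with `mode http` is assumed:
+
+```
+frontend explorer_front
+    bind *:7002
+    default_backend explorer_back
+
+backend explorer_back
+    balance roundrobin
+    server explorer1 192.168.8.101:7002 check
+    server explorer2 192.168.8.102:7002 check
+```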
+Then use HAProxy, as sketched above, to implement load balancing for each service.
+
+Next, fill in the external interface of the database service in the Explorer configuration, as shown in the example below:
+
+```yaml
+# The deployment mode of explorer, which supports single and multiple instances. The optional values are single and multi.
+# In multi-instance mode, local storage service (data import) will be disabled to ensure data consistency between instances.
+AppInstance: "multi"
+
+# Database configuration
+DB:
+  Enable: true
+  LogLevel: 4 # Database runtime log levels. 1, 2, 3, and 4 correspond to Silent, ERROR, Warn, and INFO, respectively.
+  IgnoreRecordNotFoundError: false
+  AutoMigrate: true # Whether or not to automatically create database tables. The default is true.
+  Type: "mysql" # The type of database used in the backend. Supports mysql and sqlite3. PolarDB is fully compatible with MySQL; if it is PolarDB, just fill in mysql.
+  Host: "192.168.8.200:3306" # The external IP and port for the high availability database service.
+  Name: "nebula" # Database name.
+  User: "root" # Database username.
+  Password: "123456" # Database password.
+  # SqliteDbFilePath: "./data/tasks.db" # This parameter is required for sqlite3 only. The address of the database file.
+  MaxOpenConns: 30 # Maximum number of active connections in the connection pool.
+  MaxIdleConns: 10 # Maximum number of free connections in the connection pool.
+LicenseManagerURL: http://192.168.8.100:9119 # License manager url.
+```
+
+Finally, access the external interface of Explorer provided by HAProxy.
+
+For detailed solutions, you can contact the after-sales staff for consultation.
diff --git a/docs-2.0-en/nebula-explorer/iframe.md b/docs-2.0-en/nebula-explorer/iframe.md
index 72c05866ae9..20788cbf355 100644
--- a/docs-2.0-en/nebula-explorer/iframe.md
+++ b/docs-2.0-en/nebula-explorer/iframe.md
@@ -45,23 +45,79 @@ The Explorer has been installed.
 
 3. Embed the Explorer page by using iframe on a third-party page. The work needs to be developed by yourself.
 
-4. On the parent page, pass the login message through the postMessage method in the following format:
+4. On the parent page, pass the request through the postMessage method as shown in the following example:
 
     ```json
-    { type: 'NebulaGraphExploreLogin',
-      data: {
-        authorization: 'WyJyb290IiwibmVidWxhIl0=',
-        host: '192.168.8.240:9669',
-        space: 'basketballplayer'
-      } }
+    const links = [
+      {
+        source: 'player150',
+        target: 'player143',
+        id: 'follow player150->player143 @0',
+        rank: 0,
+        edgeType: 'follow',
+        properties: {
+          degree: 90,
+        },
+        color: '#d40e0e',
+      },
+      {
+        source: 'player143',
+        target: 'player150',
+        id: 'follow player143->player150 @0',
+        rank: 0,
+        edgeType: 'follow',
+        properties: {
+          degree: 90,
+        },
+      },
+    ];
+
+    const nodes = [
+      {
+        id: 'player150',
+        tags: ['player'],
+        properties: {
+          player: {
+            age: 20,
+            name: 'Luka Doncic',
+          },
+        },
+        color: '#20eb14',
+      },
+      {
+        id: 'player143',
+        tags: ['player'],
+        properties: {
+          player: {
+            age: 23,
+            name: 'Kristaps Porzingis',
+          },
+        },
+        color: '#3713ed',
+      },
+    ];
+
+    // login
+    iframeEle.contentWindow.postMessage(
+      {
+        // The `NebulaGraphExploreLogin` type has been deprecated and `ExplorerLogin` is used instead, but it remains compatible with version 3.x.
+        type: 'ExplorerLogin',
+        data: {
+          authorization: 'WyJyb290IiwibmVidWxhIl0=', // The NebulaGraph account and password are formed into an array, serialized, and then Base64-encoded. The array format is `['account', 'password']`.
The example is['root', 'nebula']. The encoded result is `WyJyb290IiwibmVidWxhIl0=`. + host: '192.168.8.240:9669', // The graph service address of NebulaGraph. + space: 'demo_basketball', // The name of the target graph space. + }, + }, + '*' + ); + + // add vertexes or edges + iframeEle.contentWindow.postMessage({ type: 'ExplorerAddCanvasElements', data: { nodes, links } }, '*') + + // Clear canvas + iframeEle.contentWindow.postMessage({ type: 'ExplorerClearCanvas' }, '*') ``` - - type: The method type must be `NebulaGraphExploreLogin`. - - data: - - `authorization`: NebulaGraph accounts and passwords were formed into an array and serialized, then Base64 encoded. The array format is `['account', 'password']`. The example is['root', 'nebula']. The encoded result is `WyJyb290IiwibmVidWxhIl0=`. - - `host`: The graph service address of NebulaGraph. - - `space`: The name of the target graph space. - 5. Start the Explorer service. !!! note diff --git a/docs-2.0-en/nebula-importer/config-with-header.md b/docs-2.0-en/nebula-importer/config-with-header.md deleted file mode 100644 index 4112bade2ed..00000000000 --- a/docs-2.0-en/nebula-importer/config-with-header.md +++ /dev/null @@ -1,182 +0,0 @@ -# Configuration with Header - -For a CSV file with header, you need to set `withHeader` to `true` in the configuration file, indicating that the first behavior in the CSV file is the header. The header content has special meanings. - -!!! caution - - If the CSV file contains headers, the Importer will parse the Schema of each row of data according to the headers and ignore the vertex or edge settings in the YAML file. - -## Sample files - -The following is an example of a CSV file with header: - -- sample of vertex - - Example data for `student_with_header.csv`: - - ```csv - :VID(string),student.name:string,student.age:int,student.gender:string - student100,Monica,16,female - student101,Mike,18,male - student102,Jane,17,female - ``` - - The first column is the vertex ID, followed by the properties `name`, `age`, and `gender`. - -- sample of edge - - Example data for `follow_with_header.csv`: - - ```csv - :SRC_VID(string),:DST_VID(string),:RANK,follow.degree:double - student100,student101,0,92.5 - student101,student100,1,85.6 - student101,student102,2,93.2 - student100,student102,1,96.2 - ``` - - The first two columns are the start vertex ID and destination vertex ID, respectively. The third column is rank, and the fourth column is property `degree`. - -## Header format description - -The header defines the start vertex, the destination vertex, the rank, and some special functions by keywords as follows: - -- `:VID`(mandatory): Vertex ID. Need to use `:VID(type)` form to set data type, for example `:VID(string)` or `:VID(int)`. - -- `:SRC_VID`(mandatory): The start vertex ID of the edge. The data type needs to be set in the form `:SRC_VID(type)`. - -- `:DST_VID`(mandatory): The destination vertex ID of the edge. The data type needs to be set in the form `:DST_VID(type)`. - -- `:RANK`(optional): The rank value of the edge. - -- `:IGNORE`(optional): Ignore this column when inserting data. - -- `:LABEL`(optional): Insert (+) or delete (-) the row. Must be column 1. For example: - - ```csv - :LABEL, - +, - -, - ``` - -!!! note - All columns except the `:LABEL` column can be sorted in any order, so for larger CSV files, the user has the flexibility to set the header to select the desired column. - -For Tag or Edge type properties, the format is `.:`, described as follows: - -- ``: Tag or Edge type name. 
- -- ``: property name. - -- ``: property type. Support `bool`, `int`, `float`, `double`, `timestamp` and `string`, default `string`. - -Such as `student.name:string`, `follow.degree:double`. - -## Sample configuration - -```yaml -version: v2 - -description: example - -# Whether to delete temporarily generated logs and error data files. -removeTempFiles: false - -clientSettings: - - # Retry times of nGQL statement execution failures. - retry: 3 - - # Number of NebulaGraph client concurrency. - concurrency: 10 - - # Cache queue size per NebulaGraph client. - channelBufferSize: 128 - - # Specifies the NebulaGraph space to import the data into. - space: student - - # Connection information. - connection: - user: root - password: nebula - address: 192.168.*.13:9669 - - postStart: - # Configure some of the operations to perform after connecting to the NebulaGraph server, and before inserting data. - commands: | - DROP SPACE IF EXISTS student; - CREATE SPACE IF NOT EXISTS student(partition_num=5, replica_factor=1, vid_type=FIXED_STRING(20)); - USE student; - CREATE TAG student(name string, age int,gender string); - CREATE EDGE follow(degree int); - - # The interval between the execution of the above command and the execution of the insert data command. - afterPeriod: 15s - - preStop: - # Configure some of the actions you performed before disconnecting from the NebulaGraph server. - commands: | - -# Path of the error log file. -logPath: ./err/test.log - -# CSV file Settings. -files: - - # Path for storing data files. If a relative path is used, the path is merged with the current configuration file directory. The first data file in this example is vertex data. - - path: ./student_with_header.csv - - # Insert the failed data file storage path, so that data can be written later. - failDataPath: ./err/studenterr - - # The number of statements inserting data in a batch. - batchSize: 10 - - # Limit on the number of rows of read data. - limit: 10 - - # Whether to insert rows in the file in order. If the value is set to false, the import rate decreases due to data skew. - inOrder: true - - # File type. Currently, only CSV files are supported. - type: csv - - csv: - # Whether there is a header. - withHeader: true - - # Whether there is a LABEL. - withLabel: false - - # Specifies the delimiter for the CSV file. A string delimiter that supports only one character. - delimiter: "," - - schema: - # Schema type. Possible values are vertex and edge. - type: vertex - - # The second data file in this example is edge data. - - path: ./follow_with_header.csv - failDataPath: ./err/followerr - batchSize: 10 - limit: 10 - inOrder: true - type: csv - csv: - withHeader: true - withLabel: false - schema: - # The type of Schema is edge. - type: edge - edge: - # Edge type name. - name: follow - - # Whether to include rank. - withRanking: true -``` - -!!! Note - - The data type of the vertex ID must be the same as the data type of the statement in `clientSettings.postStart.commands` that creates the graph space. diff --git a/docs-2.0-en/nebula-importer/config-without-header.md b/docs-2.0-en/nebula-importer/config-without-header.md deleted file mode 100644 index cda9718a9df..00000000000 --- a/docs-2.0-en/nebula-importer/config-without-header.md +++ /dev/null @@ -1,211 +0,0 @@ -# Configuration without Header - -For CSV files without header, you need to set `withHeader` to `false` in the configuration file, indicating that the CSV file contains only data (excluding the header of the first row). 
You may also need to set the data type and corresponding columns. - -## Sample files - -The following is an example of a CSV file without header: - -- sample of vertex - - Example data for `student_without_header.csv`: - - ```csv - student100,Monica,16,female - student101,Mike,18,male - student102,Jane,17,female - ``` - - The first column is the vertex ID, followed by the properties `name`, `age`, and `gender`. - -- sample of edge - - Example data for `follow_without_header.csv`: - - ```csv - student100,student101,0,92.5 - student101,student100,1,85.6 - student101,student102,2,93.2 - student100,student102,1,96.2 - ``` - - The first two columns are the start vertex ID and destination vertex ID, respectively. The third column is rank, and the fourth column is property `degree`. - -## Sample configuration - -```yaml -version: v2 - -description: example - -# Whether to delete temporarily generated logs and error data files. -removeTempFiles: false - -clientSettings: - - # Retry times of nGQL statement execution failures. - retry: 3 - - # Number of NebulaGraph client concurrency. - concurrency: 10 - - # Cache queue size per NebulaGraph client. - channelBufferSize: 128 - - # Specifies the NebulaGraph space to import the data into. - space: student - - # Connection information. - connection: - user: root - password: nebula - address: 192.168.*.13:9669 - - postStart: - # Configure some of the operations to perform after connecting to the NebulaGraph server, and before inserting data. - commands: | - DROP SPACE IF EXISTS student; - CREATE SPACE IF NOT EXISTS student(partition_num=5, replica_factor=1, vid_type=FIXED_STRING(20)); - USE student; - CREATE TAG student(name string, age int,gender string); - CREATE EDGE follow(degree int); - - # The interval between the execution of the above command and the execution of the insert data command. - afterPeriod: 15s - - preStop: - # Configure some of the actions you performed before disconnecting from the NebulaGraph server. - commands: | - -# Path of the error log file. -logPath: ./err/test.log - -# CSV file Settings. -files: - - # Path for storing data files. If a relative path is used, the path is merged with the current configuration file directory. The first data file in this example is vertex data. - - path: ./student_without_header.csv - - # Insert the failed data file storage path, so that data can be written later. - failDataPath: ./err/studenterr - - # The number of statements inserting data in a batch. - batchSize: 10 - - # Limit on the number of rows of read data. - limit: 10 - - # Whether to insert rows in the file in order. If the value is set to false, the import rate decreases due to data skew. - inOrder: true - - # File type. Currently, only CSV files are supported. - type: csv - - csv: - # Whether there is a header. - withHeader: false - - # Whether there is a LABEL. - withLabel: false - - # Specifies the delimiter for the CSV file. A string delimiter that supports only one character. - delimiter: "," - - schema: - # Schema type. Possible values are vertex and edge. - type: vertex - - vertex: - - # Vertex ID Settings. - vid: - # The vertex ID corresponds to the column number in the CSV file. Columns in the CSV file are numbered from 0. - index: 0 - - # The data type of the vertex ID. The optional values are int and string, corresponding to INT64 and FIXED_STRING in the NebulaGraph, respectively. - type: string - - # Tag Settings. - tags: - # Tag name. - - name: student - - # property Settings in the Tag. - props: - # property name. 
- - name: name - - # Property data type. - type: string - - # Property corresponds to the sequence number of the column in the CSV file. - index: 1 - - - name: age - type: int - index: 2 - - name: gender - type: string - index: 3 - - # The second data file in this example is edge data. - - path: ./follow_without_header.csv - failDataPath: ./err/followerr - batchSize: 10 - limit: 10 - inOrder: true - type: csv - csv: - withHeader: false - withLabel: false - schema: - # The type of Schema is edge. - type: edge - edge: - # Edge type name. - name: follow - - # Whether to include rank. - withRanking: true - - # Start vertex ID setting. - srcVID: - # Data type. - type: string - - # The start vertex ID corresponds to the sequence number of a column in the CSV file. - index: 0 - - # Destination vertex ID. - dstVID: - type: string - index: 1 - - # rank setting. - rank: - # Rank Indicates the rank number of a column in the CSV file. If index is not set, be sure to set the rank value in the third column. Subsequent columns set each property in turn. - index: 2 - - # Edge Type property Settings. - props: - # property name. - - name: degree - - # Data type. - type: double - - # Property corresponds to the sequence number of the column in the CSV file. - index: 3 -``` - -!!! Note - - - The sequence numbers of the columns in the CSV file start from 0, that is, the sequence numbers of the first column are 0, and the sequence numbers of the second column are 1. - - - The data type of the vertex ID must be the same as the data type of the statement in `clientSettings.postStart.commands` that creates the graph space. - - - If the index field is not specified, the CSV file must comply with the following rules: - - + In the vertex data file, the first column must be the vertex ID, followed by the properties, and must correspond to the order in the configuration file. - - + In the side data file, the first column must be the start vertex ID, the second column must be the destination vertex ID, if `withRanking` is `true`, the third column must be the rank value, and the following columns must be properties, and must correspond to the order in the configuration file. diff --git a/docs-2.0-en/nebula-operator/1.introduction-to-nebula-operator.md b/docs-2.0-en/nebula-operator/1.introduction-to-nebula-operator.md index df6814fc62a..7361aad3d0e 100644 --- a/docs-2.0-en/nebula-operator/1.introduction-to-nebula-operator.md +++ b/docs-2.0-en/nebula-operator/1.introduction-to-nebula-operator.md @@ -41,7 +41,7 @@ NebulaGraph Operator does not support the v1.x version of NebulaGraph. NebulaGra | NebulaGraph | NebulaGraph Operator | | ------------- | -------------------- | -| 3.5.x ~ 3.6.0 | 1.5.0, 1.6.0 | +| 3.5.x ~ 3.6.0 | 1.5.0, 1.6.x | | 3.0.0 ~ 3.4.1 | 1.3.0, 1.4.0 ~ 1.4.2 | | 3.0.0 ~ 3.3.x | 1.0.0, 1.1.0, 1.2.0 | | 2.5.x ~ 2.6.x | 0.9.0 | diff --git a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index 5bc7fe3a54a..ad315dbcfb7 100644 --- a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -44,53 +44,15 @@ The following example shows how to create a NebulaGraph cluster by creating a cl - `DOCKER_REGISTRY_SERVER`: Specify the server address of the private repository from which the image will be pulled, such as `reg.example-inc.com`. 
- `DOCKER_USER`: The username for the image repository. - `DOCKER_PASSWORD`: The password for the image repository. + {{ent.ent_end}} 3. Create a file named `apps_v1alpha1_nebulacluster.yaml`. - - For a NebulaGraph Community cluster - - For the file content, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). - - ??? Info "Expand to show sample parameter descriptions" - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - | `metadata.name` | - | The name of the created NebulaGraph cluster. | - |`spec.console`|-| Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).| - | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | - | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. | - | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | - | `spec.graphd.service` | - | The Service configurations for the Graphd service. | - | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | - | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | - | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. | - | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | - | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | - | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| - | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | - | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. | - | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | - | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| - | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. | - | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| - | `spec.storaged.enableAutoBalance` | `true` |Whether to balance data automatically. | - |`spec.agent`|`{}`| Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration will be used.| - | `spec.reference.name` | - | The name of the dependent controller. | - | `spec.schedulerName` | - | The scheduler name. | - | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | - |`spec.logRotate`| - |Log rotation configuration. 
For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md).| - |`spec.enablePVReclaim`|`false`|Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md).| {{ ent.ent_begin }} - - For a NebulaGraph Enterprise cluster - - Contact our sales team to get a complete NebulaGraph Enterprise Edition cluster YAML example. - - !!! enterpriseonly - - Make sure that you have access to NebulaGraph Enterprise Edition images before pulling the image. + - To create a NebulaGraph Enterprise cluster === "Cluster without Zones" @@ -99,7 +61,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl | Parameter | Default value | Description | | :---- | :--- | :--- | - | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | + | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.xxx:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| @@ -107,74 +69,8 @@ The following example shows how to create a NebulaGraph cluster by creating a cl === "Cluster with Zones" NebulaGraph Operator supports creating a cluster with [Zones](../../4.deployment-and-installation/5.zone.md). - - You must set the following parameters for creating a cluster with Zones. Other parameters can be changed as needed. For more information on other parameters, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | - |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| - |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| - |`spec.alpineImage`|`reg.vesoft-inc.com/nebula-alpine:latest`|The Alpine Linux image, used to obtain the Zone information where nodes are located.| - |`spec.metad.config.zone_list`|-|A list of zone names, split by comma. For example: zone1,zone2,zone3.
**Zone names CANNOT be modified once be set.**| - |`spec.graphd.config.prioritize_intra_zone_reading`|`false`|Specifies whether to prioritize sending queries to the storage nodes in the same zone.
When set to `true`, the query is sent to the storage nodes in the same zone. If reading fails in that Zone, it will decide based on `stick_to_intra_zone_on_failure` whether to read the leader partition replica data from other Zones. | - |`spec.graphd.config.stick_to_intra_zone_on_failure`|`false`|Specifies whether to stick to intra-zone routing if unable to find the requested partitions in the same zone. When set to `true`, if unable to find the partition replica in that Zone, it does not read data from other Zones.| - - ???+ note "Learn more about Zones in NebulaGraph Operator" - - **Understanding NebulaGraph's Zone Feature** - - NebulaGraph utilizes a feature called Zones to efficiently manage its distributed architecture. Each Zone represents a logical grouping of Storage pods and Graph pods, responsible for storing the complete graph space data. The data within NebulaGraph's spaces is partitioned, and replicas of these partitions are evenly distributed across all available Zones. The utilization of Zones can significantly reduce inter-Zone network traffic costs and boost data transfer speeds. Moreover, intra-zone-reading allows for increased availability, because replicas of a partition spread out among different zones. - - **Configuring NebulaGraph Zones** - - To make the most of the Zone feature, you first need to determine the actual Zone where your cluster nodes are located. Typically, nodes deployed on cloud platforms are labeled with their respective Zones. Once you have this information, you can configure it in your cluster's configuration file by setting the `spec.metad.config.zone_list` parameter. This parameter should be a list of Zone names, separated by commas, and should match the actual Zone names where your nodes are located. For example, if your nodes are in Zones `az1`, `az2`, and `az3`, your configuration would look like this: - - ```yaml - spec: - metad: - config: - zone_list: az1,az2,az3 - ``` - - **Operator's Use of Zone Information** - - NebulaGraph Operator leverages Kubernetes' [TopoloySpread](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) feature to manage the scheduling of Storage and Graph pods. Once the `zone_list` is configured, Storage services are automatically assigned to their respective Zones based on the `topology.kubernetes.io/zone` label. - - For intra-zone data access, the Graph service dynamically assigns itself to a Zone using the `--assigned_zone=$NODE_ZONE` parameter. It identifies the Zone name of the node where the Graph service resides by utilizing an init-container to fetch this information. The Alpine Linux image specified in `spec.alpineImage` (default: `reg.vesoft-inc.com/nebula-alpine:latest`) plays a role in obtaining Zone information. - - **Prioritizing Intra-Zone Data Access** - - By setting `spec.graphd.config.prioritize_intra_zone_reading` to `true` in the cluster configuration file, you enable the Graph service to prioritize sending queries to Storage services within the same Zone. In the event of a read failure within that Zone, the behavior depends on the value of `spec.graphd.config.stick_to_intra_zone_on_failure`. If set to `true`, the Graph service avoids reading data from other Zones and returns an error. Otherwise, it reads data from leader partition replicas in other Zones. 
- - ```yaml - spec: - alpineImage: reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest - graphd: - config: - prioritize_intra_zone_reading: "true" - stick_to_intra_zone_on_failure: "false" - ``` - - **Zone Mapping for Resilience** - - Once Storage and Graph services are assigned to Zones, the mapping between the pod and its corresponding Zone is stored in a configmap named `-graphd|storaged-zone`. This mapping facilitates pod scheduling during rolling updates and pod restarts, ensuring that services return to their original Zones as needed. - - !!! caution - - DO NOT manually modify the configmaps created by NebulaGraph Operator. Doing so may cause unexpected behavior. - - - Other optional parameters for the enterprise edition are as follows: - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - |`spec.storaged.enableAutoBalance`| `false`| Specifies whether to enable automatic data balancing. For more information, see [Balance storage data after scaling out](../8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md).| - |`spec.enableBR`|`false`|Specifies whether to enable the BR tool. For more information, see [Backup and restore](../10.backup-restore-using-operator.md).| - |`spec.graphd.enable_graph_ssl`|`false`| Specifies whether to enable SSL for the Graph service. For more details, see [Enable mTLS](../8.custom-cluster-configurations/8.5.enable-ssl.md). | - - ??? info "Expand to view sample cluster configurations" + ??? info "Expand to view sample configurations of a cluster with Zones" ```yaml apiVersion: apps.nebula-graph.io/v1alpha1 @@ -183,90 +79,34 @@ The following example shows how to create a NebulaGraph cluster by creating a cl name: nebula namespace: default spec: - alpineImage: "reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest" + # Used to obtain the Zone information where nodes are located. + alpineImage: "reg.vesoft-inc.com/xxx/xxx:latest" + # Used for backup and recovery as well as log cleanup functions. + # If you do not customize this configuration, + # the default configuration will be used. agent: - image: reg.vesoft-inc.com/cloud-dev/nebula-agent + image: reg.vesoft-inc.com/xxx/xxx version: v3.6.0-sc exporter: image: vesoft/nebula-stats-exporter replicas: 1 maxRequests: 20 + # Used to create a console container, + # which is used to connect to the NebulaGraph cluster. console: version: "nightly" graphd: config: + # The following parameters are required for creating a cluster with Zones. accept_partial_success: "true" - ca_client_path: certs/root.crt - ca_path: certs/root.crt - cert_path: certs/server.crt - key_path: certs/server.key - enable_graph_ssl: "true" prioritize_intra_zone_reading: "true" - stick_to_intra_zone_on_failure: "true" + sync_meta_when_use_space: "true" + stick_to_intra_zone_on_failure: "false" + session_reclaim_interval_secs: "300" + # The following parameters are required for collecting logs. 
logtostderr: "1" redirect_stdout: "false" stderrthreshold: "0" - initContainers: - - name: init-auth-sidecar - imagePullPolicy: IfNotPresent - image: 496756745489.dkr.ecr.us-east-1.amazonaws.com/auth-sidecar:v1.60.0 - env: - - name: AUTH_SIDECAR_CONFIG_FILENAME - value: sidecar-init - volumeMounts: - - name: credentials - mountPath: /credentials - - name: auth-sidecar-config - mountPath: /etc/config - sidecarContainers: - - name: auth-sidecar - image: 496756745489.dkr.ecr.us-east-1.amazonaws.com/auth-sidecar:v1.60.0 - imagePullPolicy: IfNotPresent - resources: - requests: - cpu: 100m - memory: 500Mi - env: - - name: LOCAL_POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - - name: LOCAL_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: LOCAL_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - readinessProbe: - httpGet: - path: /ready - port: 8086 - initialDelaySeconds: 5 - periodSeconds: 10 - successThreshold: 1 - failureThreshold: 3 - livenessProbe: - httpGet: - path: /live - port: 8086 - initialDelaySeconds: 5 - periodSeconds: 10 - successThreshold: 1 - failureThreshold: 3 - volumeMounts: - - name: credentials - mountPath: /credentials - - name: auth-sidecar-config - mountPath: /etc/config - volumes: - - name: credentials - emptyDir: - medium: Memory - volumeMounts: - - name: credentials - mountPath: /usr/local/nebula/certs resources: requests: cpu: "2" @@ -275,7 +115,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl cpu: "2" memory: "2Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-graphd-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc metad: config: @@ -285,6 +125,8 @@ The following example shows how to create a NebulaGraph cluster by creating a cl # Zone names CANNOT be modified once set. # It's suggested to set an odd number of Zones. zone_list: az1,az2,az3 + validate_session_timestamp: "false" + # LM access address and port number. licenseManagerURL: "192.168.8.xxx:9119" resources: requests: @@ -294,7 +136,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl cpu: "1" memory: "1Gi" replicas: 3 - image: reg.vesoft-inc.com/rc/nebula-metad-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaim: resources: @@ -314,13 +156,14 @@ The following example shows how to create a NebulaGraph cluster by creating a cl cpu: "2" memory: "2Gi" replicas: 3 - image: reg.vesoft-inc.com/rc/nebula-storaged-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaims: - resources: requests: storage: 2Gi storageClassName: local-path + # Automatically balance storage data after scaling out. enableAutoBalance: true reference: name: statefulsets.apps @@ -331,14 +174,123 @@ The following example shows how to create a NebulaGraph cluster by creating a cl imagePullPolicy: Always imagePullSecrets: - name: nebula-image + # Evenly distribute storage Pods across Zones. + # Must be set when using Zones. topologySpreadConstraints: - topologyKey: "topology.kubernetes.io/zone" whenUnsatisfiable: "DoNotSchedule" + ``` + + !!! caution + + Make sure storage Pods are evenly distributed across zones before ingesting data by running `SHOW ZONES` in nebula-console. For zone-related commands, see [Zones](../../4.deployment-and-installation/5.zone.md). + + You must set the following parameters for creating a cluster with Zones. Other parameters can be changed as needed. 
+
+    | Parameter | Default value | Description |
+    | :---- | :--- | :--- |
+    | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.xxx:9119`. **You must configure this parameter to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** |
+    |`spec.<graphd|metad|storaged>.image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.|
+    |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.|
+    |`spec.alpineImage`|-|The Alpine Linux image, used to obtain the Zone information where nodes are located.|
+    |`spec.metad.config.zone_list`|-|A list of Zone names, separated by commas. For example: zone1,zone2,zone3.<br>**Zone names CANNOT be modified once set.**|
+    |`spec.graphd.config.prioritize_intra_zone_reading`|`false`|Specifies whether to prioritize sending queries to the storage pods in the same Zone.<br>When set to `true`, the query is sent to the storage pods in the same Zone. If reading fails in that Zone, `stick_to_intra_zone_on_failure` determines whether to read the leader partition replica data from other Zones.|
+    |`spec.graphd.config.stick_to_intra_zone_on_failure`|`false`|Specifies whether to stick to intra-Zone routing if the requested partitions cannot be found in the same Zone.<br>When set to `true`, the Graph service does not read data from other Zones if the partition replica cannot be found in that Zone.|
+    |`spec.schedulerName`|`kube-scheduler`|Schedules the restarted Graph and Storage pods to the same Zone. The value must be set to `nebula-scheduler`.|
+    |`spec.topologySpreadConstraints`|-|A Kubernetes field that controls the distribution of storage Pods, ensuring that the Pods are evenly spread across Zones.<br>**To use the Zone feature, you must set the value of `topologySpreadConstraints[0].topologyKey` to `topology.kubernetes.io/zone` and the value of `topologySpreadConstraints[0].whenUnsatisfiable` to `DoNotSchedule`**. Run `kubectl get node --show-labels` to check the key. For more information, see [TopologySpread](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#example-multiple-topologyspreadconstraints).|
+
+    ???+ note "Learn more about Zones in NebulaGraph Operator"
+
+        **Understanding NebulaGraph's Zone Feature**
+
+        NebulaGraph utilizes a feature called Zones to efficiently manage its distributed architecture. Each Zone represents a logical grouping of Storage pods and Graph pods, responsible for storing the complete graph space data. The data within NebulaGraph's spaces is partitioned, and replicas of these partitions are evenly distributed across all available Zones. The utilization of Zones can significantly reduce inter-Zone network traffic costs and boost data transfer speeds. Moreover, intra-Zone reading increases availability, because the replicas of a partition are spread across different Zones.
+
+        **Configuring NebulaGraph Zones**
+
+        To make the most of the Zone feature, you first need to determine the actual Zone where your cluster nodes are located. Typically, nodes deployed on cloud platforms are labeled with their respective Zones. Once you have this information, you can configure it in your cluster's configuration file by setting the `spec.metad.config.zone_list` parameter. This parameter should be a list of Zone names, separated by commas, and should match the actual Zone names where your nodes are located. For example, if your nodes are in Zones `az1`, `az2`, and `az3`, your configuration would look like this:
+
+        ```yaml
+        spec:
+          metad:
+            config:
+              zone_list: az1,az2,az3
+        ```
+
+        **Operator's Use of Zone Information**
+
+        NebulaGraph Operator leverages Kubernetes' [TopologySpread](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) feature to manage the scheduling of Storage and Graph pods. Once the `zone_list` is configured, Storage services are automatically assigned to their respective Zones based on the `topology.kubernetes.io/zone` label.
+
+        For intra-zone data access, the Graph service dynamically assigns itself to a Zone using the `--assigned_zone=$NODE_ZONE` parameter. It identifies the Zone name of the node where the Graph service resides by utilizing an init-container to fetch this information. The Alpine Linux image specified in `spec.alpineImage` (default: `reg.vesoft-inc.com/nebula-alpine:latest`) plays a role in obtaining Zone information.
+
+        **Prioritizing Intra-Zone Data Access**
+
+        By setting `spec.graphd.config.prioritize_intra_zone_reading` to `true` in the cluster configuration file, you enable the Graph service to prioritize sending queries to Storage services within the same Zone. In the event of a read failure within that Zone, the behavior depends on the value of `spec.graphd.config.stick_to_intra_zone_on_failure`. If set to `true`, the Graph service avoids reading data from other Zones and returns an error. Otherwise, it reads data from leader partition replicas in other Zones.
+
+        ```yaml
+        spec:
+          alpineImage: reg.vesoft-inc.com/xxx/xxx:latest
+          graphd:
+            config:
+              prioritize_intra_zone_reading: "true"
+              stick_to_intra_zone_on_failure: "false"
+        ```
+
+        **Zone Mapping for Resilience**
+
+        Once Storage and Graph services are assigned to Zones, the mapping between the pod and its corresponding Zone is stored in a configmap named `<cluster_name>-graphd|storaged-zone`. This mapping facilitates pod scheduling during rolling updates and pod restarts, ensuring that services return to their original Zones as needed.
+
+    !!! caution
+
+        DO NOT manually modify the configmaps created by NebulaGraph Operator. Doing so may cause unexpected behavior.
+
+
+    Other optional parameters for the enterprise edition are as follows:
+
+    | Parameter | Default value | Description |
+    | :---- | :--- | :--- |
+    |`spec.storaged.enableAutoBalance`| `false`| Specifies whether to enable automatic data balancing. For more information, see [Balance storage data after scaling out](../8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md).|
+    |`spec.enableBR`|`false`|Specifies whether to enable the BR tool. For more information, see [Backup and restore](../10.backup-restore-using-operator.md).|
+    |`spec.graphd.enable_graph_ssl`|`false`| Specifies whether to enable SSL for the Graph service. For more details, see [Enable mTLS](../8.custom-cluster-configurations/8.5.enable-ssl.md). |
+
    {{ ent.ent_end }}
 
-1. Create a NebulaGraph cluster.
+    - To create a NebulaGraph Community cluster
+
+      See [community cluster configurations](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml).
+
+      ??? info "Expand to show parameter descriptions of community clusters"
+
+        | Parameter | Default value | Description |
+        | :---- | :--- | :--- |
+        | `metadata.name` | - | The name of the created NebulaGraph cluster. |
+        |`spec.console`|-| Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).|
+        | `spec.graphd.replicas` | `1` | The number of replicas of the Graphd service. |
+        | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. |
+        | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. |
+        | `spec.graphd.service` | - | The Service configurations for the Graphd service. |
+        | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. |
+        | `spec.metad.replicas` | `1` | The number of replicas of the Metad service. |
+        | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. |
+        | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. |
+        | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. |
+        | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.|
+        | `spec.storaged.replicas` | `3` | The number of replicas of the Storaged service. |
+        | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. |
+        | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. |
+        | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data.<br>When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.|
+        | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. |
+        | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.|
+        | `spec.storaged.enableAutoBalance` | `true` | Whether to balance data automatically. |
+        |`spec.<graphd|metad|storaged>.securityContext`|`{}`|Defines privilege and access control settings for NebulaGraph service containers. For details, see [SecurityContext](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/doc/user/security_context.md). |
+        |`spec.agent`|`{}`| Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration is used.|
+        | `spec.reference.name` | - | The name of the dependent controller. |
+        | `spec.schedulerName` | - | The scheduler name. |
+        | `spec.imagePullPolicy` | `Always` | The image pull policy in Kubernetes. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). |
+        |`spec.logRotate`| - |Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md).|
+        |`spec.enablePVReclaim`|`false`|Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md).|
+
+
+4. Create a NebulaGraph cluster.
 
    ```bash
    kubectl create -f apps_v1alpha1_nebulacluster.yaml
@@ -446,10 +398,9 @@ In the process of downsizing the cluster, if the operation is not complete succe
 
 !!! caution
 
-    - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scale Meta services.
-    {{ent.ent_begin}}
+    - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scaling Meta services.
     - If you scale in a cluster with Zones, make sure that the number of remaining storage pods is not less than the number of Zones specified in the `spec.metad.config.zone_list` field. Otherwise, the cluster will fail to start.
-    {{ent.ent_end}}
+    {{ ent.ent_end }}
diff --git a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md
index 03b46388b3b..c076404970e 100644
--- a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md
+++ b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md
@@ -129,9 +129,18 @@
     --set nebula.metad.config.zone_list= \
     --set nebula.graphd.config.prioritize_intra_zone_reading=true \
     --set nebula.graphd.config.stick_to_intra_zone_on_failure=false \
+    # Evenly distribute the Pods of the Storage service across Zones.
+    --set nebula.topologySpreadConstraints[0].topologyKey=topology.kubernetes.io/zone \
+    --set nebula.topologySpreadConstraints[0].whenUnsatisfiable=DoNotSchedule \
+    # Used to schedule restarted Graph or Storage Pods to the specified Zone.
+    --set nebula.schedulerName=nebula-scheduler \
     --namespace="${NEBULA_CLUSTER_NAMESPACE}" \
 ```
 
+    !!! caution
+
+        Make sure storage Pods are evenly distributed across Zones before ingesting data. You can check the distribution by running `SHOW ZONES` in nebula-console. For zone-related commands, see [Zones](../../4.deployment-and-installation/5.zone.md).
+
 {{ent.ent_end}}
 
 To view all configuration parameters of the NebulaGraph cluster, run the `helm show values nebula-operator/nebula-cluster` command or click [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/charts/nebula-cluster/values.yaml).
@@ -183,14 +192,13 @@ helm upgrade "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \
 
 Similarly, you can scale in a NebulaGraph cluster by setting the value of the `replicas` corresponding to the different services in the cluster smaller than the original value.
 
-In the process of downsizing the cluster, if the operation is not complete successfully and seems to be stuck, you may need to check the status of the job using the `nebula-console` client specified in the `spec.console` field. Analyzing the logs and manually intervening can help ensure that the Job runs successfully. For information on how to check jobs, see [Job statements](../../3.ngql-guide/4.job-statements.md).
+In the process of downsizing the cluster, if the operation does not complete successfully and seems to be stuck, you may need to check the status of the job using the `nebula-console` client specified in the `nebula.console` field. Analyzing the logs and manually intervening can help ensure that the Job runs successfully. For information on how to check jobs, see [Job statements](../../3.ngql-guide/4.job-statements.md).
 
 !!! caution
 
-    - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scale Meta services.
-    {{ent.ent_begin}}
-    - If you scale in a cluster with Zones, make sure that the number of remaining storage pods is not less than the number of Zones specified in the `spec.metad.config.zone_list` field. Otherwise, the cluster will fail to start.
-    {{ent.ent_end}}
+    - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scaling Meta services.
+    - If you scale in a cluster with Zones, make sure that the number of remaining storage pods is not less than the number of Zones specified in the `nebula.metad.config.zone_list` field. Otherwise, the cluster will fail to start.
+
 You can click on [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.tag}}/charts/nebula-cluster/values.yaml) to see more configurable parameters of the nebula-cluster chart. For more information about the descriptions of configurable parameters, see **Configuration parameters of the nebula-cluster Helm chart** below.
 
 {{ ent.ent_end }}
diff --git a/docs-2.0-en/nebula-operator/4.connect-to-nebula-graph-service.md b/docs-2.0-en/nebula-operator/4.connect-to-nebula-graph-service.md
index f6dfbd97f2f..8c2696fcb39 100644
--- a/docs-2.0-en/nebula-operator/4.connect-to-nebula-graph-service.md
+++ b/docs-2.0-en/nebula-operator/4.connect-to-nebula-graph-service.md
@@ -70,6 +70,7 @@ You can also create a `ClusterIP` type Service to provide an access point to the
 
    ```bash
    kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft
+   ```
 
   - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases.
  - ``: The custom Pod name.
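
As a quick aid for the step above, the `-addr` and `-port` values can be read directly from the Service object instead of being copied by hand. This is a minimal sketch, assuming the `ClusterIP` Service is named `nebula-graphd-svc` in the `default` namespace as in the preceding example; the Pod name `nebula-console-test` is a placeholder:

```bash
# Read the ClusterIP and the Thrift port (the port named "thrift" in the Service definition).
GRAPHD_ADDR=$(kubectl get svc nebula-graphd-svc -n default -o jsonpath='{.spec.clusterIP}')
GRAPHD_PORT=$(kubectl get svc nebula-graphd-svc -n default -o jsonpath='{.spec.ports[?(@.name=="thrift")].port}')

# Start a console Pod against the discovered address and port.
kubectl run nebula-console-test -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- \
  nebula-console -addr "${GRAPHD_ADDR}" -port "${GRAPHD_PORT}" -u root -p vesoft
```
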
@@ -86,13 +87,27 @@ You can also create a `ClusterIP` type Service to provide an access point to the (root@nebula) [(none)]> ``` -You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. + You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. -```bash -kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p -``` + ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p + ``` + + `service_port` is the port to connect to Graphd services, the default port of which is `9669`. -`service_port` is the port to connect to Graphd services, the default port of which is `9669`. + !!! note + + If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command: + + ```bash + # Enter the nebula-console Pod. + kubectl exec -it nebula-console -- /bin/sh + + # Connect to NebulaGraph databases. + nebula-console -addr nebula-graphd-svc.default.svc.cluster.local -port 9669 -u -p + ``` + + For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). ## Connect to NebulaGraph databases from outside a NebulaGraph cluster via `NodePort` @@ -197,109 +212,6 @@ Steps: For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). -## Connect to NebulaGraph databases from within a NebulaGraph cluster - -You can also create a `ClusterIP` type Service to provide an access point to the NebulaGraph database for other Pods within the cluster. By using the Service's IP and the Graph service's port number (9669), you can connect to the NebulaGraph database. For more information, see [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/). - -1. Create a file named `graphd-clusterip-service.yaml`. The file contents are as follows: - - ```yaml - apiVersion: v1 - kind: Service - metadata: - labels: - app.kubernetes.io/cluster: nebula - app.kubernetes.io/component: graphd - app.kubernetes.io/managed-by: nebula-operator - app.kubernetes.io/name: nebula-graph - name: nebula-graphd-svc - namespace: default - spec: - externalTrafficPolicy: Local - ports: - - name: thrift - port: 9669 - protocol: TCP - targetPort: 9669 - - name: http - port: 19669 - protocol: TCP - targetPort: 19669 - selector: - app.kubernetes.io/cluster: nebula - app.kubernetes.io/component: graphd - app.kubernetes.io/managed-by: nebula-operator - app.kubernetes.io/name: nebula-graph - type: ClusterIP # Set the type to ClusterIP. - ``` - - - NebulaGraph uses port `9669` by default. `19669` is the HTTP port of the Graph service in a NebulaGraph cluster. - - `targetPort` is the port mapped to the database Pods, which can be customized. - -2. Create a ClusterIP Service. - - ```bash - kubectl create -f graphd-clusterip-service.yaml - ``` - -3. Check the IP of the Service: - - ```bash - $ kubectl get service -l app.kubernetes.io/cluster= # is the name of your NebulaGraph cluster. 
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - nebula-graphd-svc ClusterIP 10.98.213.34 9669/TCP,19669/TCP,19670/TCP 23h - ... - ``` - -4. Run the following command to connect to the NebulaGraph database using the IP of the `-graphd-svc` Service above: - - ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p - ``` - - For example: - - ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft - ``` - - - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. - - ``: The custom Pod name. - - `-addr`: The IP of the `ClusterIP` Service, used to connect to Graphd services. - - `-port`: The port to connect to Graphd services, the default port of which is `9669`. - - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. - - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. - - A successful connection to the database is indicated if the following is returned: - - ```bash - If you don't see a command prompt, try pressing enter. - - (root@nebula) [(none)]> - ``` - - You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. - - ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p - ``` - - `service_port` is the port to connect to Graphd services, the default port of which is `9669`. - - !!! note - - If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command: - - ```bash - # Enter the nebula-console Pod. - kubectl exec -it nebula-console -- /bin/sh - - # Connect to NebulaGraph databases. - nebula-console -addr nebula-graphd-svc.default.svc.cluster.local -port 9669 -u -p - ``` - - For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). -s ## Connect to NebulaGraph databases from outside a NebulaGraph cluster via Ingress When dealing with multiple pods in a cluster, managing services for each pod separately is not a good practice. Ingress is a Kubernetes resource that provides a unified entry point for accessing multiple services. Ingress can be used to expose multiple services under a single IP address. @@ -401,7 +313,7 @@ Steps are as follows. kubectl exec -it nebula-console -- /bin/sh # Connect to NebulaGraph databases. - nebula-console -addr -port -u -p + nebula-console -addr -port -u -p ``` For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). 
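
When a connection through Ingress fails, it can help to first rule out the Graph service itself. The sketch below probes the Graph service's HTTP port (`19669`, as in the Service definitions above) from a machine with `kubectl` access; the `/status` endpoint is assumed to be available, as in standard NebulaGraph deployments:

```bash
# Forward the Graph service's HTTP port to the local machine.
kubectl port-forward service/nebula-graphd-svc 19669:19669 &

# A healthy graphd process answers with a small JSON payload describing its status.
curl -s http://127.0.0.1:19669/status
```
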
diff --git a/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md b/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md index 88d79153796..8ca11e3f7e8 100644 --- a/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md +++ b/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md @@ -70,7 +70,5 @@ It should be noted that only when all configuration items in `config` are the pa For information about the parameters that can be dynamically modified for each service, see the parameter table column of **Whether supports runtime dynamic modifications** in [Meta service configuration parameters](../../5.configurations-and-logs/1.configurations/2.meta-config.md), [Storage service configuration parameters](../../5.configurations-and-logs/1.configurations/4.storage-config.md), and [Graph service configuration parameters](../../5.configurations-and-logs/1.configurations/3.graph-config.md), respectively. -## Learn more -For more information about the configuration parameters of Meta, Storage, and Graph services, see [Meta service configuration parameters](../../5.configurations-and-logs/1.configurations/2.meta-config.md), [Storage service configuration parameters](../../5.configurations-and-logs/1.configurations/4.storage-config.md), and [Graph service configuration parameters](../../5.configurations-and-logs/1.configurations/3.graph-config.md). diff --git a/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md b/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md index 11fbfc9a8ee..b8cda0a2aad 100644 --- a/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md +++ b/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md @@ -4,13 +4,12 @@ NebulaGraph Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume C You can also define the automatic deletion of PVCs to release data by setting the parameter `spec.enablePVReclaim` to `true` in the configuration file of the cluster instance. As for whether PV will be deleted automatically after PVC is deleted, you need to customize the PV reclaim policy. See [reclaimPolicy in StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/#reclaim-policy) and [PV Reclaiming](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming) for details. -## Notes - -The NebulaGraph Operator currently does not support the resizing of Persistent Volume Claims (PVCs), but this feature is expected to be supported in version 1.6.1. Additionally, the Operator does not support dynamically adding or mounting storage volumes to a running storaged instance. - ## Prerequisites You have created a cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). +## Notes + +NebulaGraph Operator does not support dynamically adding or mounting storage volumes to a running storaged instance. 
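
To illustrate the reclaim-policy customization mentioned above, the following is a sketch using standard kubectl commands; `<pv_name>` is a placeholder for an actual PV bound to one of the cluster's PVCs:

```bash
# List PVs together with their current reclaim policies.
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy

# Switch a PV to the Delete policy so that deleting its PVC also releases the underlying volume.
kubectl patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```

With the `Retain` policy, the PV and its data survive PVC deletion and must be cleaned up manually.
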
## Steps diff --git a/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md b/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md index b5f265c9845..9a97f56fac9 100644 --- a/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md +++ b/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md @@ -14,31 +14,28 @@ In the NebulaGraph environment running in Kubernetes, mutual TLS (mTLS) is used In the cluster created using Operator, the client and server use the same CA root certificate by default. -## Encryption policies +## Encryption scenarios -NebulaGraph provides three encryption policies for mTLS: +The following two scenarios are commonly used for encryption: -- Encryption of data transmission between the client and the Graph service. - - This policy only involves encryption between the client and the Graph service and does not encrypt data transmission between other services in the cluster. +- Encrypting communication between the client and the Graph service. -- Encrypt the data transmission between clients, the Graph service, the Meta service, and the Storage service. - - This policy encrypts data transmission between the client, Graph service, Meta service, and Storage service in the cluster. +- Encrypting communication between services, such as communication between the Graph service and the Meta service, communication between the Graph service and the Storage service, and communication between the Meta service and the Storage service. -- Encryption of data transmission related to the Meta service within the cluster. - - This policy only involves encrypting data transmission related to the Meta service within the cluster and does not encrypt data transmission between other services or the client. + !!! note + + - The Graph service in NebulaGraph is the entry point for all client requests. The Graph service communicates with the Meta service and the Storage service to complete the client requests. Therefore, the Graph service needs to be able to communicate with the Meta service and the Storage service. + - The Storage and Meta services in NebulaGraph communicate with each other through heartbeat messages to ensure their availability and health. Therefore, the Storage service needs to be able to communicate with the Meta service and vice versa. -For different encryption policies, you need to configure different fields in the cluster configuration file. For more information, see [Authentication policies](../../7.data-security/4.ssl.md#authentication_policies). +For all encryption scenarios, see [Authentication policies](../../7.data-security/4.ssl.md#authentication_policies). ## mTLS with certificate hot-reloading -NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The following provides an example of the configuration file to enable mTLS between the client and the Graph service. +NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. -### Sample configurations +The following provides examples of the configuration file to enable mTLS between the client and the Graph service, and between services. -??? info "Expand to view the sample configurations of mTLS" +??? info "View sample configurations of mTLS between the client and the Graph service" ```yaml apiVersion: apps.nebula-graph.io/v1alpha1 @@ -52,18 +49,152 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. 
The
       maxRequests: 20
     graphd:
       config:
-        accept_partial_success: "true"
+        # The following parameters are used to enable mTLS between the client and the Graph service.
         ca_client_path: certs/root.crt
         ca_path: certs/root.crt
         cert_path: certs/server.crt
+        key_path: certs/server.key
         enable_graph_ssl: "true"
-        enable_intra_zone_routing: "true"
+        # The following parameters are required for creating a cluster with Zones.
+        accept_partial_success: "true"
+        prioritize_intra_zone_reading: "true"
+        sync_meta_when_use_space: "true"
+        stick_to_intra_zone_on_failure: "false"
+        session_reclaim_interval_secs: "300"
+      initContainers:
+      - name: init-auth-sidecar
+        command:
+        - /bin/sh
+        - -c
+        args:
+        - cp /certs/* /credentials/
+        imagePullPolicy: Always
+        image: reg.vesoft-inc.com/xxx/xxx:latest
+        volumeMounts:
+        - name: credentials
+          mountPath: /credentials
+      sidecarContainers:
+      - name: auth-sidecar
+        imagePullPolicy: Always
+        image: reg.vesoft-inc.com/xxx/xxx:latest
+        volumeMounts:
+        - name: credentials
+          mountPath: /credentials
+      volumes:
+      - name: credentials
+        emptyDir:
+          medium: Memory
+      volumeMounts:
+      - name: credentials
+        mountPath: /usr/local/nebula/certs
+      logVolumeClaim:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: local-path
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "500Mi"
+        limits:
+          cpu: "1"
+          memory: "1Gi"
+      replicas: 1
+      image: reg.vesoft-inc.com/xxx/xxx
+      version: v3.5.0-sc
+    metad:
+      config:
+        # Zone names CANNOT be modified once set.
+        # It's suggested to set an odd number of Zones.
+        zone_list: az1,az2,az3
+        validate_session_timestamp: "false"
+      licenseManagerURL: "192.168.8.xxx:9119"
+      resources:
+        requests:
+          cpu: "300m"
+          memory: "500Mi"
+        limits:
+          cpu: "1"
+          memory: "1Gi"
+      replicas: 1
+      image: reg.vesoft-inc.com/xxx/xxx
+      version: v3.5.0-sc
+      dataVolumeClaim:
+        resources:
+          requests:
+            storage: 2Gi
+        storageClassName: local-path
+      logVolumeClaim:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: local-path
+    storaged:
+      resources:
+        requests:
+          cpu: "300m"
+          memory: "500Mi"
+        limits:
+          cpu: "1"
+          memory: "1Gi"
+      replicas: 1
+      image: reg.vesoft-inc.com/xxx/xxx
+      version: v3.5.0-sc
+      dataVolumeClaims:
+      - resources:
+          requests:
+            storage: 2Gi
+        storageClassName: local-path
+      logVolumeClaim:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: local-path
+      enableAutoBalance: true
+    reference:
+      name: statefulsets.apps
+      version: v1
+    schedulerName: nebula-scheduler
+    imagePullPolicy: Always
+    imagePullSecrets:
+    - name: nebula-image
+    enablePVReclaim: true
+    topologySpreadConstraints:
+    - topologyKey: "topology.kubernetes.io/zone"
+      whenUnsatisfiable: "DoNotSchedule"
+    ```
+
+??? info "View sample configurations of mTLS between services"
+
+    ```yaml
+    apiVersion: apps.nebula-graph.io/v1alpha1
+    kind: NebulaCluster
+    metadata:
+      name: nebula
+    spec:
+      exporter:
+        image: vesoft/nebula-stats-exporter
+        replicas: 1
+        maxRequests: 20
+      # The certificate files for NebulaGraph Operator to access Storage and Meta services.
+      sslCerts:
+        clientSecret: "client-cert"
+        caSecret: "ca-cert"
+        caCert: "root.crt"
+      graphd:
+        config:
+          # The following parameters are used to enable mTLS between services.
+ ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt key_path: certs/server.key - logtostderr: "1" - redirect_stdout: "false" - stderrthreshold: "0" - stick_to_intra_zone_on_failure: "true" - timestamp_in_logfile_name: "false" + enable_meta_ssl: "true" + enable_storage_ssl: "true" + # The following parameters are required for creating a cluster with Zones. + accept_partial_success: "true" + prioritize_intra_zone_reading: "true" + sync_meta_when_use_space: "true" + stick_to_intra_zone_on_failure: "false" + session_reclaim_interval_secs: "300" initContainers: - name: init-auth-sidecar command: @@ -72,14 +203,14 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The args: - cp /certs/* /credentials/ imagePullPolicy: Always - image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + image: reg.vesoft-inc.com/xxx/xxx:latest volumeMounts: - name: credentials mountPath: /credentials sidecarContainers: - name: auth-sidecar imagePullPolicy: Always - image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + image: reg.vesoft-inc.com/xxx/xxx:latest volumeMounts: - name: credentials mountPath: /credentials @@ -103,10 +234,48 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-graphd-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc metad: - licenseManagerURL: "192.168.8.53:9119" + config: + # Zone names CANNOT be modified once set. + # It's suggested to set an odd number of Zones. + zone_list: az1,az2,az3 + validate_session_timestamp: "false" + # The following parameters are used to enable mTLS between services. + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + key_path: certs/server.key + enable_meta_ssl: "true" + enable_storage_ssl: "true" + initContainers: + - name: init-auth-sidecar + command: + - /bin/sh + - -c + args: + - cp /certs/* /credentials/ + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + sidecarContainers: + - name: auth-sidecar + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + volumes: + - name: credentials + emptyDir: + medium: Memory + volumeMounts: + - name: credentials + mountPath: /usr/local/nebula/certs + licenseManagerURL: "192.168.8.xx:9119" resources: requests: cpu: "300m" @@ -115,7 +284,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-metad-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaim: resources: @@ -128,6 +297,40 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The storage: 1Gi storageClassName: local-path storaged: + config: + # The following parameters are used to enable mTLS between services. 
+          ca_client_path: certs/root.crt
+          ca_path: certs/root.crt
+          cert_path: certs/server.crt
+          key_path: certs/server.key
+          enable_meta_ssl: "true"
+          enable_storage_ssl: "true"
+        initContainers:
+        - name: init-auth-sidecar
+          command:
+          - /bin/sh
+          - -c
+          args:
+          - cp /certs/* /credentials/
+          imagePullPolicy: Always
+          image: reg.vesoft-inc.com/xxx/xxx:latest
+          volumeMounts:
+          - name: credentials
+            mountPath: /credentials
+        sidecarContainers:
+        - name: auth-sidecar
+          imagePullPolicy: Always
+          image: reg.vesoft-inc.com/xxx/xxx:latest
+          volumeMounts:
+          - name: credentials
+            mountPath: /credentials
+        volumes:
+        - name: credentials
+          emptyDir:
+            medium: Memory
+        volumeMounts:
+        - name: credentials
+          mountPath: /usr/local/nebula/certs
       resources:
         requests:
           cpu: "300m"
@@ -136,7 +339,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The
           cpu: "1"
           memory: "1Gi"
       replicas: 1
-      image: reg.vesoft-inc.com/rc/nebula-storaged-ent
+      image: reg.vesoft-inc.com/xxx/xxx
       version: v3.5.0-sc
       dataVolumeClaims:
       - resources:
@@ -148,23 +351,26 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The
         requests:
           storage: 1Gi
       storageClassName: local-path
+      # Automatically balance storage data after scaling out.
       enableAutoBalance: true
     reference:
       name: statefulsets.apps
       version: v1
-    schedulerName: default-scheduler
+    schedulerName: nebula-scheduler
     imagePullPolicy: Always
     imagePullSecrets:
     - name: nebula-image
+    # Whether to automatically delete PVCs when deleting a cluster.
     enablePVReclaim: true
+    # Used to evenly distribute Pods across Zones.
     topologySpreadConstraints:
-    - topologyKey: "kubernetes.io/hostname"
-      whenUnsatisfiable: "ScheduleAnyway"
+    - topologyKey: "topology.kubernetes.io/zone"
+      whenUnsatisfiable: "DoNotSchedule"
     ```
 
 ### Configure `spec.<graphd|metad|storaged>.config`
 
-To enable mTLS between the client and the Graph service, configure the `spec.graphd.config` field in the cluster configuration file. The paths specified in fields with `*_path` correspond to file paths relative to `/user/local/nebula`. **It's important to avoid using absolute paths to prevent path recognition errors.**
+To enable mTLS between the client and the Graph service, add the following fields under `spec.graphd.config` in the cluster configuration file. The paths specified in fields with `*_path` correspond to file paths relative to `/usr/local/nebula`. **It's important to avoid using absolute paths to prevent path recognition errors.**
 
 ```yaml
 spec:
   graphd:
     config:
       ca_client_path: certs/root.crt
       ca_path: certs/root.crt
       cert_path: certs/server.crt
-      enable_graph_ssl: "true"
       key_path: certs/server.key
+      enable_graph_ssl: "true"
 ```
 
-For the configurations of the other two authentication policies:
-
-- To enable mTLS between the client, the Graph service, the Meta service, and the Storage service:
-
-  Configure the `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` fields in the cluster configuration file.
+To enable mTLS between services (Graph, Meta, and Storage), add the following fields under `spec.graphd.config`, `spec.metad.config`, and `spec.storaged.config` respectively in the cluster configuration file.
 
 ```yaml
 spec:
   graph:
     config:
       ca_client_path: certs/root.crt
       ca_path: certs/root.crt
       cert_path: certs/server.crt
-      enable_ssl: "true"
       key_path: certs/server.key
-  metad:
-    config:
-      ca_client_path: certs/root.crt
-      ca_path: certs/root.crt
-      cert_path: certs/server.crt
-      enable_ssl: "true"
-      key_path: certs/server.key
-  storaged:
-    config:
-      ca_client_path: certs/root.crt
-      ca_path: certs/root.crt
-      cert_path: certs/server.crt
-      enable_ssl: "true"
-      key_path: certs/server.key
-  ```
-
-- To enable mTLS related to the Meta service:
-
-  Configure the `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` fields in the cluster configuration file.
-
-  ```yaml
-  spec:
-    graph:
-      config:
-        ca_client_path: certs/root.crt
-        ca_path: certs/root.crt
-        cert_path: certs/server.crt
       enable_meta_ssl: "true"
-        key_path: certs/server.key
+      enable_storage_ssl: "true"
   metad:
     config:
       ca_client_path: certs/root.crt
       ca_path: certs/root.crt
       cert_path: certs/server.crt
-      enable_meta_ssl: "true"
       key_path: certs/server.key
+      enable_meta_ssl: "true"
+      enable_storage_ssl: "true"
   storaged:
     config:
       ca_client_path: certs/root.crt
       ca_path: certs/root.crt
       cert_path: certs/server.crt
-      enable_meta_ssl: "true"
       key_path: certs/server.key
-  ```
+      enable_meta_ssl: "true"
+      enable_storage_ssl: "true"
+  ```
 
 ### Configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts`
 
-`initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` fields are essential for implementing mTLS certificate online hot-reloading. For the encryption scenario where only the Graph service needs to be encrypted, you need to configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` under `spec.graph.config`.
+The `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` fields are essential for implementing mTLS certificate online hot-reloading.
+
+- For the encryption scenario where only the Graph service needs to be encrypted, configure these fields under `spec.graphd.config`.
+- For the encryption scenario where the Graph service, Meta service, and Storage service need to be encrypted, configure these fields under `spec.graphd.config`, `spec.metad.config`, and `spec.storaged.config` respectively.
 
 #### `initContainers`
 
-The `initContainers` field is utilized to configure an init-container responsible for generating certificate files. Note that the `volumeMounts` field specifies how the `credentials` volume, shared with the NebulaGraph container, is mounted, providing read and write access.
+The `initContainers` field is utilized to configure an init-container responsible for generating certificate files. Note that the `volumeMounts` field specifies how the volume defined in `volumes` and shared with the NebulaGraph container is mounted, providing read and write access.
 
 In the following example, `init-auth-sidecar` performs the task of copying files from the `certs` directory within the image to `/credentials`. After this task is completed, the init-container exits.
@@ -258,7 +437,7 @@ initContainers:
   args:
   - cp /certs/* /credentials/
   imagePullPolicy: Always
-  image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest
+  image: reg.vesoft-inc.com/xxx/xxx:latest
   volumeMounts:
   - name: credentials
     mountPath: /credentials
@@ -266,7 +445,7 @@
 
 #### `sidecarContainers`
 
-The `sidecarContainers` field is dedicated to periodically monitoring the expiration time of certificates and, when they are near expiration, generating new certificates to replace the existing ones. This process ensures seamless online certificate hot-reloading without any service interruptions. The `volumeMounts` field specifies how the `credentials` volume is mounted, and this volume is shared with the NebulaGraph container.
+The `sidecarContainers` field is dedicated to periodically monitoring the expiration time of certificates and, when they are near expiration, generating new certificates to replace the existing ones. This process ensures seamless online certificate hot-reloading without any service interruptions. The `volumeMounts` field specifies how the volume is mounted; this volume is shared with the NebulaGraph container.
 
 In the example provided, the `auth-sidecar` container employs the `crond` process, which runs a crontab script every minute. This script checks the certificate's expiration status using the `openssl x509 -noout -enddate` command.
 
@@ -276,7 +455,7 @@ Example:
 sidecarContainers:
 - name: auth-sidecar
   imagePullPolicy: Always
-  image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest
+  image: reg.vesoft-inc.com/xxx/xxx:latest
   volumeMounts:
   - name: credentials
     mountPath: /credentials
@@ -309,9 +488,9 @@ volumeMounts:
 
 ### Configure `sslCerts`
 
-The `spec.sslCerts` field specifies the encrypted certificates for NebulaGraph Operator and the [nebula-agent](https://github.com/vesoft-inc/nebula-agent) client (if you do not use the default nebula-agent image in Operator).
+When you enable mTLS between services, you still need to set `spec.sslCerts`, because NebulaGraph Operator communicates with the Meta service and the Storage service.
 
-For the other two scenarios where the Graph service, Meta service, and Storage service need to be encrypted, and where only the Meta service needs to be encrypted, you not only need to configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` under `spec.graph.config`, `spec.storage.config`, and `spec.meta.config`, but also configure `spec.sslCerts`.
+The `spec.sslCerts` field specifies the encrypted certificates for NebulaGraph Operator and the [nebula-agent](https://github.com/vesoft-inc/nebula-agent) client (if you do not use the default nebula-agent image in Operator).
 
 ```yaml
 spec:
   sslCerts:
@@ -357,255 +536,255 @@ nebula-console -addr nebula-graphd-svc.default.svc.cluster.local -port 9669 -u r
 
 ## mTLS without hot-reloading
 
-??? info "If you don't need to perform TLS certificate hot-reloading and prefer to use TLS certificates stored in a Secret when deploying Kubernetes applications, expand to follow these steps"
+If you don't need to perform TLS certificate hot-reloading and prefer to use TLS certificates stored in a Secret when deploying Kubernetes applications, you can follow the steps below to enable mTLS in NebulaGraph.
 
-    ### Create a TLS-type Secret
+### Create a TLS-type Secret
 
-    In a K8s cluster, you can create Secrets to store sensitive information, such as passwords, OAuth tokens, and TLS certificates. In NebulaGraph, you can create a Secret to store TLS certificates and private keys. When creating a Secret, the type `tls` should be specified. A `tls` Secret is used to store TLS certificates.
+In a K8s cluster, you can create Secrets to store sensitive information, such as passwords, OAuth tokens, and TLS certificates. In NebulaGraph, you can create a Secret to store TLS certificates and private keys. When creating a Secret, the type `tls` should be specified. A `tls` Secret is used to store TLS certificates.
 
+For example, to create a Secret for storing server certificates and private keys:
+
+```bash
+kubectl create secret tls <server_cert_secret_name> --key=<server_key_path> --cert=<server_cert_path> --namespace=<namespace>
+```
+
+- `<server_cert_secret_name>`: The name of the Secret storing the server certificate and private key.
+- `<server_key_path>`: The path to the server private key file.
+- `<server_cert_path>`: The path to the server certificate file.
+- `<namespace>`: The namespace where the Secret is located. If `--namespace` is not specified, it defaults to the `default` namespace.
+
+You can follow the above steps to create Secrets for the client certificate and private key, and the CA certificate.
+
+To view the created Secrets:
+
+```bash
+kubectl get secret --namespace=<namespace>
+```
+
+### Configure certifications
+
+Operator provides the `sslCerts` field to specify the encrypted certificates. The `sslCerts` field contains four subfields. The three fields `serverSecret`, `clientSecret`, and `caSecret` are used to specify the Secret names of the NebulaGraph server certificate, client certificate, and CA certificate, respectively.
+When you specify these three fields, Operator reads the certificate content from the corresponding Secret and mounts it into the cluster's Pod. The `autoMountServerCerts` must be set to `true` if you want to automatically mount the server certificate and private key into the Pod. The default value is `false`.
+
+```yaml
+sslCerts:
+  autoMountServerCerts: "true"  # Automatically mount the server certificate and private key into the Pod.
+  serverSecret: "server-cert"   # The name of the server certificate Secret.
+  serverCert: ""                # The key name of the certificate in the server certificate Secret, default is tls.crt.
+  serverKey: ""                 # The key name of the private key in the server certificate Secret, default is tls.key.
+ clientSecret: "client-cert" # The name of the client certificate Secret. + clientCert: "" # The key name of the certificate in the client certificate Secret, default is tls.crt. + clientKey: "" # The key name of the private key in the client certificate Secret, default is tls.key. + caSecret: "ca-cert" # The name of the CA certificate Secret. + caCert: "" # The key name of the certificate in the CA certificate Secret, default is ca.crt. +``` + +The `serverCert` and `serverKey`, `clientCert` and `clientKey`, and `caCert` are used to specify the key names of the certificate and private key of the server Secret, the key names of the certificate and private key of the client Secret, and the key name of the CA Secret certificate. If you do not customize these field values, Operator defaults `serverCert` and `clientCert` to `tls.crt`, `serverKey` and `clientKey` to `tls.key`, and `caCert` to `ca.crt`. However, in the K8s cluster, the TLS type Secret uses `tls.crt` and `tls.key` as the default key names for the certificate and private key. Therefore, after creating the NebulaGraph cluster, you need to manually change the `caCert` field from `ca.crt` to `tls.crt` in the cluster configuration, so that the Operator can correctly read the content of the CA certificate. Before you customize these field values, you need to specify the key names of the certificate and private key in the Secret when creating it. For how to create a Secret with the key name specified, run the `kubectl create secret generic -h` command for help. + +You can use the `insecureSkipVerify` field to decide whether the client will verify the server's certificate chain and hostname. In production environments, it is recommended to set this to `false` to ensure the security of communication. If set to `true`, the client will not verify the server's certificate chain and hostname. + +```yaml +sslCerts: + # Determines whether the client needs to verify the server's certificate chain and hostname when establishing an SSL connection. + insecureSkipVerify: false +``` + +!!! caution + + Make sure that you have added the hostname or IP of the server to the server's certificate's `subjectAltName` field before the `insecureSkipVerify` is set to `false`. If the hostname or IP of the server is not added, an error will occur when the client verifies the server's certificate chain and hostname. For details, see [openssl](https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl). + +When the certificates are approaching expiration, they can be automatically updated by installing [cert-manager](https://cert-manager.io/docs/installation/supported-releases/). NebulaGraph will monitor changes to the certificate directory files, and once a change is detected, it will load the new certificate content into memory. + +### Encryption strategies + +NebulaGraph offers three encryption strategies that you can choose and configure according to your needs. + +- Encryption of client-graph and all inter-service communications + + If you want to encrypt all data transmission between the client, Graph service, Meta service, and Storage service, you need to add the `enable_ssl = true` field to each service. + + Here is an example configuration: + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: sslCerts: - autoMountServerCerts: "true" # Automatically mount the server certificate and private key into the Pod. 
- serverSecret: "server-cert" # The name of the server certificate Secret. - serverCert: "" # The key name of the certificate in the server certificate Secret, default is tls.crt. - serverKey: "" # The key name of the private key in the server certificate Secret, default is tls.key. - clientSecret: "client-cert" # The name of the client certificate Secret. - clientCert: "" # The key name of the certificate in the client certificate Secret, default is tls.crt. - clientKey: "" # The key name of the private key in the client certificate Secret, default is tls.key. - caSecret: "ca-cert" # The name of the CA certificate Secret. - caCert: "" # The key name of the certificate in the CA certificate Secret, default is ca.crt. - ``` + autoMountServerCerts: "true" # Automatically mount the server certificate and private key into the Pod. + serverSecret: "server-cert" # The Secret name of the server certificate and private key. + clientSecret: "client-cert" # The Secret name of the client certificate and private key. + caSecret: "ca-cert" # The Secret name of the CA certificate. + graphd: + config: + enable_ssl: "true" + metad: + config: + enable_ssl: "true" + storaged: + config: + enable_ssl: "true" + ``` - The `serverCert` and `serverKey`, `clientCert` and `clientKey`, and `caCert` are used to specify the key names of the certificate and private key of the server Secret, the key names of the certificate and private key of the client Secret, and the key name of the CA Secret certificate. If you do not customize these field values, Operator defaults `serverCert` and `clientCert` to `tls.crt`, `serverKey` and `clientKey` to `tls.key`, and `caCert` to `ca.crt`. However, in the K8s cluster, the TLS type Secret uses `tls.crt` and `tls.key` as the default key names for the certificate and private key. Therefore, after creating the NebulaGraph cluster, you need to manually change the `caCert` field from `ca.crt` to `tls.crt` in the cluster configuration, so that the Operator can correctly read the content of the CA certificate. Before you customize these field values, you need to specify the key names of the certificate and private key in the Secret when creating it. For how to create a Secret with the key name specified, run the `kubectl create secret generic -h` command for help. - You can use the `insecureSkipVerify` field to decide whether the client will verify the server's certificate chain and hostname. In production environments, it is recommended to set this to `false` to ensure the security of communication. If set to `true`, the client will not verify the server's certificate chain and hostname. +- Encryption of only Graph service communication - ```yaml + If the K8s cluster is deployed in the same data center and only the port of the Graph service is exposed externally, you can choose to encrypt only data transmission between the client and the Graph service. In this case, other services can communicate internally without encryption. Just add the `enable_graph_ssl = true` field to the Graph service. + + Here is an example configuration: + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: sslCerts: - # Determines whether the client needs to verify the server's certificate chain and hostname when establishing an SSL connection. - insecureSkipVerify: false - ``` + autoMountServerCerts: "true" + serverSecret: "server-cert" + caSecret: "ca-cert" + graphd: + config: + enable_graph_ssl: "true" + ``` + + !!! 
note
+
+        Because Operator doesn't need to call the Graph service through an interface, it's not necessary to set `clientSecret` in `sslCerts`.
+
+- Encryption of only Meta service communication
+
+  If you need to transmit confidential information to the Meta service, you can choose to encrypt data transmission related to the Meta service. In this case, you need to add the `enable_meta_ssl = true` configuration to each component.
+
+  Here is an example configuration:
+
+  ```yaml
+  apiVersion: apps.nebula-graph.io/v1alpha1
+  kind: NebulaCluster
+  metadata:
+    name: nebula
+    namespace: default
+  spec:
+    sslCerts:
+      autoMountServerCerts: "true"
+      serverSecret: "server-cert"
+      clientSecret: "client-cert"
+      caSecret: "ca-cert"
+    graphd:
+      config:
+        enable_meta_ssl: "true"
+    metad:
+      config:
+        enable_meta_ssl: "true"
+    storaged:
+      config:
+        enable_meta_ssl: "true"
+  ```
+
+After setting up the encryption policy, when an external [client](../../14.client/1.nebula-client.md) needs to connect to the Graph service with mutual TLS, you still need to set the relevant TLS fields according to the different clients. See the Use NebulaGraph Console to connect to Graph service section below for examples.
+
+### Example of enabling mTLS without hot-reloading
+
+1. Use the pre-generated server and client certificates and private keys, and the CA certificate to create a Secret for each.
+
+   ```bash
+   kubectl create secret tls <secret_name> --key=<key_file_path> --cert=<cert_file_path>
+   ```
+
+   - `tls`: Indicates that the type of secret being created is TLS, which is used to store TLS certificates.
+   - `<secret_name>`: Specifies the name of the new secret being created, which can be customized.
+   - `--key=<key_file_path>`: Specifies the path to the private key file of the TLS certificate to be stored in the secret.
+   - `--cert=<cert_file_path>`: Specifies the path to the public key certificate file of the TLS certificate to be stored in the secret.
+
+
+2. Add the server certificate, client certificate, and CA certificate configurations, as well as the encryption policy configuration, in the corresponding cluster instance YAML file. For details, see [Encryption strategies](#encryption_strategies).
+
+   For example, add the encryption configuration for data transmission between the client, the Graph service, the Meta service, and the Storage service.
+
+   ```yaml
+   apiVersion: apps.nebula-graph.io/v1alpha1
+   kind: NebulaCluster
+   metadata:
+     name: nebula
+     namespace: default
+   spec:
+     sslCerts:
+       autoMountServerCerts: "true"
+       serverSecret: "server-cert"  # The name of the server Certificate Secret.
+       clientSecret: "client-cert"  # The name of the client Certificate Secret.
+       caSecret: "ca-cert"          # The name of the CA Certificate Secret.
+     graphd:
+       config:
+         enable_ssl: "true"
+     metad:
+       config:
+         enable_ssl: "true"
+     storaged:
+       config:
+         enable_ssl: "true"
+   ```
+
+3. Use `kubectl apply -f` to apply the file to the Kubernetes cluster.
+
+4. Verify that the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration match the key names of the certificates and private keys stored in the created Secret.
+
+   ```bash
+   # Check the key names of the certificate and private key stored in the Secret. For example, check the key name of the CA certificate stored in the Secret.
+   kubectl get secret ca-cert -o yaml
+   ```
-
-    !!! caution
-
-        Make sure that you have added the hostname or IP of the server to the server's certificate's `subjectAltName` field before the `insecureSkipVerify` is set to `false`.
If the hostname or IP of the server is not added, an error will occur when the client verifies the server's certificate chain and hostname. For details, see [openssl](https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl). - - When the certificates are approaching expiration, they can be automatically updated by installing [cert-manager](https://cert-manager.io/docs/installation/supported-releases/). NebulaGraph will monitor changes to the certificate directory files, and once a change is detected, it will load the new certificate content into memory. - - ### Encryption strategies - - NebulaGraph offers three encryption strategies that you can choose and configure according to your needs. - - - Encryption of client-graph and all inter-service communications - - If you want to encrypt all data transmission between the client, Graph service, Meta service, and Storage service, you need to add the `enable_ssl = true` field to each service. - - Here is an example configuration: - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - autoMountServerCerts: "true" # Automatically mount the server certificate and private key into the Pod. - serverSecret: "server-cert" # The Secret name of the server certificate and private key. - clientSecret: "client-cert" # The Secret name of the client certificate and private key. - caSecret: "ca-cert" # The Secret name of the CA certificate. - graphd: - config: - enable_ssl: "true" - metad: - config: - enable_ssl: "true" - storaged: - config: - enable_ssl: "true" - ``` - - - - Encryption of only Graph service communication - - If the K8s cluster is deployed in the same data center and only the port of the Graph service is exposed externally, you can choose to encrypt only data transmission between the client and the Graph service. In this case, other services can communicate internally without encryption. Just add the `enable_graph_ssl = true` field to the Graph service. - - Here is an example configuration: - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - autoMountServerCerts: "true" - serverSecret: "server-cert" - caSecret: "ca-cert" - graphd: - config: - enable_graph_ssl: "true" - ``` - - !!! note - - Because Operator doesn't need to call the Graph service through an interface, it's not necessary to set `clientSecret` in `sslCerts`. - - - Encryption of only Meta service communication - - If you need to transmit confidential information to the Meta service, you can choose to encrypt data transmission related to the Meta service. In this case, you need to add the `enable_meta_ssl = true` configuration to each component. - - Here is an example configuration: - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - autoMountServerCerts: "true" - serverSecret: "server-cert" - clientSecret: "client-cert" - caSecret: "ca-cert" - graphd: - config: - enable_meta_ssl: "true" - metad: - config: - enable_meta_ssl: "true" - storaged: - config: - enable_meta_ssl: "true" - ``` - - After setting up the encryption policy, when an external [client](../../14.client/1.nebula-client.md) needs to connect to the Graph service with mutual TLS, you still need to set the relevant TLS fields according to the different clients. 
See the Use NebulaGraph Console to connect to Graph service section below for examples. - - ### Example of enabling mTLS without hot-reloading - - 1. Use the pre-generated server and client certificates and private keys, and the CA certificate to create a Secret for each. - - ```yaml - kubectl create secret tls --key= --cert= - ``` - - - `tls`: Indicates that the type of secret being created is TLS, which is used to store TLS certificates. - - ``: Specifies the name of the new secret being created, which can be customized. - - `--key=`: Specifies the path to the private key file of the TLS certificate to be stored in the secret. - - `--cert=`: Specifies the path to the public key certificate file of the TLS certificate to be stored in the secret. - - - 2. Add server certificate, client certificate, CA certificate configuration, and encryption policy configuration in the corresponding cluster instance YAML file. For details, see [Encryption strategies](#encryption_strategies). - - For example, add encryption configuration for transmission data between client, Graph service, Meta service, and Storage service. - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - autoMountServerCerts: "true" - serverSecret: "server-cert" // The name of the server Certificate Secret. - clientSecret: "client-cert" // The name of the client Certificate Secret. - caSecret: "ca-cert" // The name of the CA Certificate Secret. - graphd: - config: - enable_ssl: "true" - metad: - config: - enable_ssl: "true" - storaged: - config: - enable_ssl: "true" - ``` - - 3. Use `kubectl apply -f` to apply the file to the Kubernetes cluster. - - 4. Verify that the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration match the key names of the certificates and private keys stored in the created Secret. - - ```bash - # Check the key names of the certificate and private key stored in the Secret. For example, check the key name of the CA certificate stored in the Secret. - kubectl get secret ca-cert -o yaml - ``` - - ```bash - # Check the cluster configuration file. - kubectl get nebulacluster nebula -o yaml - ``` - - Example output: - - ``` - ... - spec: - sslCerts: - autoMountServerCerts: "true" - serverSecret: server-cert - serverCert: tls.crt - serverKey: tls.key - clientSecret: client-cert - clientCert: tls.crt - clientKey: tls.key - caSecret: ca-cert - caCert: ca.crt - ... - ``` - - If the key names of the certificates and private keys stored in the Secret are different from the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration, you need to execute `kubectl edit nebulacluster ` to manually modify the cluster configuration file. - - In the example output, the key name of the CA certificate in the TLS-type Secret is `tls.crt`, so you need to change the value of caCert from `ca.crt` to `tls.crt`. - - 5. Use NebulaGraph Console to connect to the Graph service and establish a secure TLS connection. - - Example: - - ``` - kubectl run -ti --image vesoft/nebula-console:v{{console.release}} --restart=Never -- nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key - ``` - - - `-enable_ssl`: Use mTLS when connecting to NebulaGraph. 
- - `-ssl_root_ca_path`: Specify the storage path of the CA root certificate. - - `-ssl_cert_path`: Specify the storage path of the TLS public key certificate. - - `-ssl_private_key_path`: Specify the storage path of the TLS private key. - - For details on using NebulaGraph Console to connect to the Graph service, see [Connect to NebulaGraph](../4.connect-to-nebula-graph-service.md). - - !!! note - - If you set `spec.console` to start a NebulaGraph Console container in the cluster, you can enter the console container and run the following command to connect to the Graph service. - - ```bash - nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key - ``` - - At this point, you can enable mTLS in NebulaGraph. + ```bash + # Check the cluster configuration file. + kubectl get nebulacluster nebula -o yaml + ``` + + Example output: + + ``` + ... + spec: + sslCerts: + autoMountServerCerts: "true" + serverSecret: server-cert + serverCert: tls.crt + serverKey: tls.key + clientSecret: client-cert + clientCert: tls.crt + clientKey: tls.key + caSecret: ca-cert + caCert: ca.crt + ... + ``` + + If the key names of the certificates and private keys stored in the Secret are different from the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration, you need to execute `kubectl edit nebulacluster ` to manually modify the cluster configuration file. + + In the example output, the key name of the CA certificate in the TLS-type Secret is `tls.crt`, so you need to change the value of caCert from `ca.crt` to `tls.crt`. + +5. Use NebulaGraph Console to connect to the Graph service and establish a secure TLS connection. + + Example: + + ``` + kubectl run -ti --image vesoft/nebula-console:v{{console.release}} --restart=Never -- nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key + ``` + + - `-enable_ssl`: Use mTLS when connecting to NebulaGraph. + - `-ssl_root_ca_path`: Specify the storage path of the CA root certificate. + - `-ssl_cert_path`: Specify the storage path of the TLS public key certificate. + - `-ssl_private_key_path`: Specify the storage path of the TLS private key. + - For details on using NebulaGraph Console to connect to the Graph service, see [Connect to NebulaGraph](../4.connect-to-nebula-graph-service.md). + + !!! note + + If you set `spec.console` to start a NebulaGraph Console container in the cluster, you can enter the console container and run the following command to connect to the Graph service. + + ```bash + nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key + ``` + +At this point, you can enable mTLS in NebulaGraph. diff --git a/docs-2.0-en/nebula-studio/about-studio/st-ug-limitations.md b/docs-2.0-en/nebula-studio/about-studio/st-ug-limitations.md new file mode 100644 index 00000000000..ebec57d204a --- /dev/null +++ b/docs-2.0-en/nebula-studio/about-studio/st-ug-limitations.md @@ -0,0 +1,49 @@ +# Limitations + +This topic introduces the limitations of Studio. + +## Architecture + +For now, Studio v3.x supports x86_64 architecture only. 
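+
+Since Studio v3.x runs on x86_64 only, it can help to confirm the host architecture before deploying. A minimal check:
+
+```bash
+# Print the machine hardware name; for Studio v3.x the expected output is "x86_64".
+uname -m
+```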
+
+## Upload data
+
+Only CSV files without headers can be uploaded, and there are no limits on the size or storage period of a single file. The maximum data volume depends on the storage capacity of your machine.
+
+## Data backup
+
+For now, Studio only supports exporting query results in CSV format on the **Console** page; other data backup methods are not supported.
+
+## nGQL statements
+
+On the **Console** page of Docker-based and RPM-based Studio v3.x, all nGQL syntax is supported except the following:
+
+- `USE <space_name>`: You cannot run such a statement on the **Console** page to choose a graph space. As an alternative, you can click a graph space name in the drop-down list of **Current Graph Space**.
+- You cannot use line breaks (\\). As an alternative, you can use the Enter key to split a line.
+
+## Browser
+
+We recommend that you use the latest version of Chrome to access Studio. Otherwise, some features may not work properly.
diff --git a/docs-2.0-en/nebula-studio/about-studio/st-ug-what-is-graph-studio.md b/docs-2.0-en/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
new file mode 100644
index 00000000000..72db798e6df
--- /dev/null
+++ b/docs-2.0-en/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
@@ -0,0 +1,74 @@
+# What is NebulaGraph Studio
+
+NebulaGraph Studio (Studio in short) is a browser-based visualization tool to manage NebulaGraph. It provides you with a graphical user interface to manipulate graph schemas, import data, and run nGQL statements to retrieve data. With Studio, you can quickly become a graph exploration expert from scratch. You can view the latest source code in the NebulaGraph GitHub repository; see [nebula-studio](https://github.com/vesoft-inc/nebula-studio) for details.
+
+!!! Note
+
+    You can also try some functions of Studio [online](https://playground.nebula-graph.io/explorer).
+
+## Deployment
+
+In addition to deploying Studio with RPM-based, DEB-based, or Tar-based packages, or with Docker, you can also deploy Studio with Helm in a Kubernetes cluster. For more information, see [Deploy Studio](../deploy-connect/st-ug-deploy.md).
+
+The functions of these deployment methods are the same, and usage may be restricted. For more information, see [Limitations](../about-studio/st-ug-limitations.md).
+
+## Features
+
+Studio makes it easy to manage NebulaGraph data, providing the following functions:
+
+- On the **Schema** page, you can use the graphical user interface to create spaces, Tags, Edge Types, and Indexes, and view the statistics on the graph. It helps you quickly get started with NebulaGraph.
+
+- On the **Import** page, you can batch import vertex and edge data with a few clicks and view a real-time import log.
+
+- On the **Console** page, you can run nGQL statements and read the results in a human-friendly way.
+
+## Scenarios
+
+You can use Studio in one of these scenarios:
+
+- You have a dataset, and you want to explore and analyze data in a visualized way. You can use Docker Compose to deploy NebulaGraph and then use Studio to explore or analyze data visually.
+
+- You are new to nGQL (NebulaGraph Query Language) and you prefer to use a GUI rather than a command-line interface (CLI) to learn the language.
+
+## Authentication
+
+Authentication is not enabled in NebulaGraph by default. Users can log in to Studio with the `root` account and any password.
+
+When NebulaGraph enables authentication, users can only sign in to Studio with the specified account.
For more information, see [Authentication](../../7.data-security/1.authentication/1.authentication.md).
+
+## Version compatibility
+
+!!! Note
+
+    The Studio version is released independently of the NebulaGraph core. The correspondence between Studio versions and NebulaGraph core versions is shown in the table below.
+
+| NebulaGraph version | Studio version |
+| --- | --- |
+| 3.6.0 | 3.7.0 |
+| 3.5.0 | 3.7.0 |
+| 3.4.0 ~ 3.4.1 | 3.7.0, 3.6.0, 3.5.1, 3.5.0 |
+| 3.3.0 | 3.5.1, 3.5.0 |
+| 3.0.0 ~ 3.2.0 | 3.4.1, 3.4.0 |
+| 3.1.0 | 3.3.2 |
+| 3.0.0 | 3.2.x |
+| 2.6.x | 3.1.x |
+| 2.0 & 2.0.1 | 2.x |
+| 1.x | 1.x |
+
+## Check updates
+
+Studio is under continuous development. Users can view the latest released features through the [Changelog](../../20.appendix/release-notes/studio-release-note.md).
+
+To view the Changelog, on the upper-right corner of the page, click the version and then **New version**.
+
+![On the upper right corner of the page, click Version and then New Version](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-001-en.png)
diff --git a/docs-2.0-en/nebula-studio/deploy-connect/st-ug-connect.md b/docs-2.0-en/nebula-studio/deploy-connect/st-ug-connect.md
new file mode 100644
index 00000000000..a7e007da702
--- /dev/null
+++ b/docs-2.0-en/nebula-studio/deploy-connect/st-ug-connect.md
@@ -0,0 +1,73 @@
+# Connect to NebulaGraph
+
+After successfully launching Studio, you need to configure the connection to NebulaGraph. This topic describes how Studio connects to the NebulaGraph database.
+
+## Prerequisites
+
+Before connecting to the NebulaGraph database, you need to confirm the following information:
+
+- The NebulaGraph services and Studio are started. For more information, see [Deploy Studio](st-ug-deploy.md).
+
+- You have the local IP address and the port used by the Graph service of NebulaGraph. The default port is `9669`.
+
+- You have a NebulaGraph account and its password.
+
+## Procedure
+
+To connect Studio to NebulaGraph, follow these steps:
+
+1. Type `http://<ip_address>:7001` in the address bar of your browser.
+
+   The following login page shows that Studio starts successfully.
+
+   A screenshot that shows the login UI of studio
+
+2. On the **Config Server** page of Studio, configure these fields:
+
+   - **Graphd IP address**: Enter the IP address of the Graph service of NebulaGraph. For example, `192.168.10.100`.
+
+     !!! note
+
+        - When NebulaGraph and Studio are deployed on the same machine, you must enter the IP address of the machine, instead of `127.0.0.1` or `localhost`.
+        - When connecting to a NebulaGraph database on a new browser tab, a new session will overwrite the sessions of the old tab. If you need to log in to multiple NebulaGraph databases simultaneously, you can use a different browser or incognito (private) mode.
+
+   - **Port**: The port of the Graph service. The default port is `9669`.
+
+   - **Username** and **Password**: Fill in the login account according to the authentication settings of NebulaGraph.
+
+     - If authentication is not enabled, you can use `root` and any password as the username and its password.
+
+     - If authentication is enabled and no account information has been created, you can only log in with the GOD role, using `root` and `nebula` as the username and its password.
+
+     - If authentication is enabled and different users are created and assigned roles, users in different roles log in with their accounts and passwords.
+
+3. After the configuration, click the **Connect** button.
+
+   !!! note
+
+      One session continues for up to 30 minutes.
If you do not operate Studio within 30 minutes, the active session will time out and you must connect to NebulaGraph again.
+
+A welcome page is displayed on your first login. It introduces the relevant functions following the usage workflow, and the test datasets can be automatically downloaded and imported.
+
+To visit the welcome page, click ![help](https://docs-cdn.nebula-graph.com.cn/figures/navbar-help.png).
+
+## Next to do
+
+When Studio is successfully connected to NebulaGraph, you can do these operations:
+
+- Create a schema on the **[Console](../quick-start/st-ug-create-schema.md)** page or on the **[Schema](../manage-schema/st-ug-crud-space.md)** page.
+- Batch import data on the **[Import](../quick-start/st-ug-import-data.md)** page.
+- Execute nGQL statements on the **Console** page.
+- Design the schema visually on the **Schema drafting** page.
+
+!!! note
+
+    The permissions of an account determine the operations that can be performed. For details, see [Roles and privileges](../../7.data-security/1.authentication/3.role-list.md).
+
+### Log out
+
+If you want to reconnect to NebulaGraph, you can log out and reconfigure the database.
+
+Click the user profile picture in the upper right corner, and choose **Log out**.
\ No newline at end of file
diff --git a/docs-2.0-en/nebula-studio/deploy-connect/st-ug-deploy.md b/docs-2.0-en/nebula-studio/deploy-connect/st-ug-deploy.md
new file mode 100644
index 00000000000..03c3bc2c414
--- /dev/null
+++ b/docs-2.0-en/nebula-studio/deploy-connect/st-ug-deploy.md
@@ -0,0 +1,329 @@
+# Deploy Studio
+
+This topic describes how to deploy Studio locally with RPM, DEB, or tar packages, or with Docker. It also covers deploying Studio with Helm in the last section.
+
+## RPM-based Studio
+
+### Prerequisites
+
+Before you deploy RPM-based Studio, you must confirm that:
+
+- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../../2.quick-start/1.quick-start-workflow.md).
+
+- The Linux distribution is CentOS and `lsof` is installed.
+
+- Before the installation starts, the following ports are not occupied.
+
+  | Port | Description |
+  | ---- | ---- |
+  | 7001 | Web service provided by Studio. |
+
+### Install
+
+1. Select and download the RPM package according to your needs. It is recommended to select the latest version. Common links are as follows:
+
+   | Installation package | Checksum | NebulaGraph version |
+   | ----- | ----- | ----- |
+   | [nebula-graph-studio-{{studio.release}}.x86_64.rpm](https://oss-cdn.nebula-graph.io/nebula-graph-studio/{{studio.release}}/nebula-graph-studio-{{studio.release}}.x86_64.rpm) | [nebula-graph-studio-{{studio.release}}.x86_64.rpm.sha256](https://oss-cdn.nebula-graph.io/nebula-graph-studio/{{studio.release}}/nebula-graph-studio-{{studio.release}}.x86_64.rpm.sha256) | {{nebula.release}} |
+
+2. Use `sudo rpm -i <rpm_name>` to install the RPM package.
+
+   For example, to install Studio {{studio.release}}, use the following command. The default installation path is `/usr/local/nebula-graph-studio`.
+
+   ```bash
+   $ sudo rpm -i nebula-graph-studio-{{studio.release}}.x86_64.rpm
+   ```
+
+   You can also install it to a specified path using the following command:
+
+   ```bash
+   $ sudo rpm -i nebula-graph-studio-{{studio.release}}.x86_64.rpm --prefix=<path>
+   ```
+
+   When the screen returns the following message, it means that RPM-based Studio has been installed and started successfully.
+
+   ```bash
+   Start installing NebulaGraph Studio now...
+   NebulaGraph Studio has been installed.
+   NebulaGraph Studio started automatically.
+   ```
+
+3.
When Studio is started, use `http://<ip_address>:7001` to access Studio.
+
+   If you can see the **Config Server** page in the browser, Studio is started successfully.
+
+   A screenshot that shows the login UI of studio
+
+### Uninstall
+
+You can uninstall Studio using the following command:
+
+```bash
+$ sudo rpm -e nebula-graph-studio-{{studio.release}}.x86_64
+```
+
+If these lines are returned, RPM-based Studio has been uninstalled.
+
+```bash
+NebulaGraph Studio removed, bye~
+```
+
+### Exception handling
+
+If the automatic start fails during the installation process, or you want to manually start or stop the service, use the following commands:
+
+- Start the service manually
+
+  ```bash
+  $ bash /usr/local/nebula-graph-studio/scripts/rpm/start.sh
+  ```
+
+- Stop the service manually
+
+  ```bash
+  $ bash /usr/local/nebula-graph-studio/scripts/rpm/stop.sh
+  ```
+
+If you encounter an error `bind EADDRINUSE 0.0.0.0:7001` when starting the service, you can use the following command to check port 7001 usage.
+
+```bash
+$ lsof -i:7001
+```
+
+If the port is occupied and the process on that port cannot be terminated, you can modify the startup port in the Studio configuration and restart the service.
+
+```bash
+# Modify the Studio service configuration. The default path to the configuration file is `/usr/local/nebula-graph-studio`.
+$ vi etc/studio-api.yaml
+
+# Modify this port number and change it to any available port.
+Port: 7001
+
+# Restart the service.
+$ systemctl restart nebula-graph-studio.service
+```
+
+## DEB-based Studio
+
+### Prerequisites
+
+Before you deploy DEB-based Studio, you must confirm that:
+
+- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../../2.quick-start/1.quick-start-workflow.md).
+
+- The Linux distribution is Ubuntu.
+
+- Before the installation starts, the following ports are not occupied.
+
+  | Port | Description |
+  | ---- | ---- |
+  | 7001 | Web service provided by Studio |
+
+- The path `/usr/lib/systemd/system` exists in the system. If not, create it manually.
+
+### Install
+
+1. Select and download the DEB package according to your needs. It is recommended to select the latest version. Common links are as follows:
+
+   | Installation package | Checksum | NebulaGraph version |
+   | ----- | ----- | ----- |
+   | [nebula-graph-studio-{{studio.release}}.x86_64.deb](https://oss-cdn.nebula-graph.io/nebula-graph-studio/{{studio.release}}/nebula-graph-studio-{{studio.release}}.x86_64.deb) | [nebula-graph-studio-{{studio.release}}.x86_64.deb.sha256](https://oss-cdn.nebula-graph.io/nebula-graph-studio/{{studio.release}}/nebula-graph-studio-{{studio.release}}.x86_64.deb.sha256) | {{ nebula.release }} |
+
+2. Use `sudo dpkg -i <deb_name>` to install the DEB package.
+
+   For example, to install Studio {{studio.release}}, use the following command:
+
+   ```bash
+   $ sudo dpkg -i nebula-graph-studio-{{ studio.release }}.x86_64.deb
+   ```
+
+3. When Studio is started, use `http://<ip_address>:7001` to access Studio.
+
+   If you can see the **Config Server** page in the browser, Studio is started successfully.
+
+   A screenshot that shows the login UI of studio
+
+### Uninstall
+
+You can uninstall Studio using the following command:
+
+```bash
+$ sudo dpkg -r nebula-graph-studio
+```
+
+## tar-based Studio
+
+### Prerequisites
+
+Before you deploy tar-based Studio, you must confirm that:
+
+- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../../2.quick-start/1.quick-start-workflow.md).
+
+- Before the installation starts, the following ports are not occupied.
+
+  | Port | Description |
+  | ---- | ---- |
+  | 7001 | Web service provided by Studio |
+
+### Install and deploy
+
+1. Select and download the tar package according to your needs. It is recommended to select the latest version. Common links are as follows:
+
+   | Installation package | Studio version |
+   | --- | --- |
+   | [nebula-graph-studio-{{studio.release}}.x86_64.tar.gz](https://oss-cdn.nebula-graph.io/nebula-graph-studio/{{studio.release}}/nebula-graph-studio-{{studio.release}}.x86_64.tar.gz) | {{studio.release}} |
+
+2. Use `tar -xvf` to decompress the tar package.
+
+   ```bash
+   $ tar -xvf nebula-graph-studio-{{studio.release}}.x86_64.tar.gz
+   ```
+
+3. Deploy and start nebula-graph-studio.
+
+   ```bash
+   $ cd nebula-graph-studio
+   $ ./server
+   ```
+
+4. When Studio is started, use `http://<ip_address>:7001` to access Studio.
+
+   If you can see the **Config Server** page in the browser, Studio is started successfully.
+
+   A screenshot that shows the login UI of studio
+
+### Stop service
+
+You can use `kill <pid>` to stop the service:
+
+```bash
+$ kill $(lsof -t -i :7001) # stop nebula-graph-studio
+```
+
+## Docker-based Studio
+
+### Prerequisites
+
+Before you deploy Docker-based Studio, you must confirm that:
+
+- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../../2.quick-start/1.quick-start-workflow.md).
+
+- On the machine where Studio will run, Docker Compose is installed and started. For more information, see [Docker Compose Documentation](https://docs.docker.com/compose/install/ "Click to go to Docker Documentation").
+
+- Before the installation starts, the following ports are not occupied.
+
+  | Port | Description |
+  | ---- | ---- |
+  | 7001 | Web service provided by Studio |
+
+### Procedure
+
+To deploy and start Docker-based Studio, run the following commands. Here we use NebulaGraph v{{nebula.release}} for demonstration:
+
+1. Download the configuration files for the deployment.
+
+   | Installation package | NebulaGraph version |
+   | ----- | ----- |
+   | [nebula-graph-studio-{{studio.release}}.tar.gz](https://oss-cdn.nebula-graph.io/nebula-graph-studio/{{studio.release}}/nebula-graph-studio-{{studio.release}}.tar.gz) | {{nebula.release}} |
+
+2. Create the `nebula-graph-studio-{{studio.release}}` directory and decompress the installation package into it.
+
+   ```bash
+   $ mkdir nebula-graph-studio-{{studio.release}} && tar -zxvf nebula-graph-studio-{{studio.release}}.tar.gz -C nebula-graph-studio-{{studio.release}}
+   ```
+
+3. Change to the `nebula-graph-studio-{{studio.release}}` directory.
+
+   ```bash
+   $ cd nebula-graph-studio-{{studio.release}}
+   ```
+
+4. Pull the Docker image of Studio.
+
+   ```bash
+   $ docker-compose pull
+   ```
+
+5. Build and start Docker-based Studio. In this command, `-d` runs the containers in the background.
+
+   ```bash
+   $ docker-compose up -d
+   ```
+
+   If these lines are returned, Docker-based Studio v3.x is deployed and started.
+
+   ```bash
+   Creating docker_web_1 ... done
+   ```
+
+6. When Docker-based Studio is started, use `http://<ip_address>:7001` to access Studio.
+
+   !!! note
+
+      Run `ifconfig` or `ipconfig` to get the IP address of the machine where Docker-based Studio is running. On the machine running Docker-based Studio, you can use `http://localhost:7001` to access Studio.
+
+   If you can see the **Config Server** page in the browser, Docker-based Studio is started successfully.
+
+   A screenshot that shows the login UI of studio
+
+## Helm-based Studio
+
+This section describes how to deploy Studio with Helm.
+
+### Prerequisites
+
+Before installing Studio, you need to install the following software and ensure that the versions are correct:
+
+| Software | Requirement |
+| ------------------------------------------------------------ | --------- |
+| [Kubernetes](https://kubernetes.io) | \>= 1.14 |
+| [Helm](https://helm.sh) | \>= 3.2.0 |
+
+### Install
+
+1. Use Git to clone the source code of Studio to the host.
+
+   ```bash
+   $ git clone https://github.com/vesoft-inc/nebula-studio.git
+   ```
+
+2. Make the `nebula-studio` directory the current working directory.
+
+   ```bash
+   $ cd nebula-studio
+   ```
+
+3. Install Studio with the Helm chart. In this example, the release name is `my-studio`.
+
+   ```bash
+   $ helm upgrade --install my-studio --set service.type=NodePort --set service.port=30070 deployment/helm
+   ```
+
+   The configuration parameters of the Helm Chart are described below.
+
+   | Parameter | Default value | Description |
+   |-----------|-------------|---------|
+   | replicaCount | 0 | The number of replicas for the Deployment. |
+   | image.nebulaStudio.name | vesoft/nebula-graph-studio | The image name of nebula-graph-studio. |
+   | image.nebulaStudio.version | {{studio.tag}} | The image version of nebula-graph-studio. |
+   | service.type | ClusterIP | The service type, which should be one of `NodePort`, `ClusterIP`, and `LoadBalancer`. |
+   | service.port | 7001 | The exposed port of the nebula-graph-studio web service. |
+   | service.nodePort | 32701 | The proxy port for accessing nebula-studio from outside the Kubernetes cluster. |
+   | resources.nebulaStudio | {} | The resource limits/requests for nebula-studio. |
+   | persistent.storageClassName | "" | The name of the storageClass. The default value will be used if not specified. |
+   | persistent.size | 5Gi | The persistent volume size. |
+
+4. When Studio is started, use `http://<node_ip>:30070/` to access Studio.
+
+   If you can see the **Config Server** page in the browser, Studio is started successfully.
+
+   A screenshot that shows the login UI of studio
+
+### Uninstall
+
+```bash
+$ helm uninstall my-studio
+```
+
+## Next to do
+
+On the **Config Server** page, connect Studio to NebulaGraph. For more information, see [Connect to NebulaGraph](st-ug-connect.md).
diff --git a/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-edge-type.md b/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-edge-type.md
new file mode 100644
index 00000000000..505a755550c
--- /dev/null
+++ b/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-edge-type.md
@@ -0,0 +1,98 @@
+# Manage edge types
+
+After a graph space is created in NebulaGraph, you can create edge types. With Studio, you can choose to use the **Console** page or the **Schema** page to create, retrieve, update, or delete edge types. This topic introduces only how to use the **Schema** page to operate edge types in a graph space.
+
+## Prerequisites
+
+To operate an edge type on the **Schema** page of Studio, you must confirm the following:
+
+- Studio is connected to NebulaGraph.
+- A graph space is created.
+- Your account has the authority of GOD, ADMIN, or DBA.
+
+## Create an edge type
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space.
If necessary, you can choose another name to change the graph space.
+
+4. Click the **Edge Type** tab and click the **+ Create** button.
+
+5. On the **Create Edge Type** page, do these settings:
+
+   - **Name**: Specify an appropriate name for the edge type. In this example, `serve` is used.
+
+   - **Comment** (Optional): Enter the description for the edge type.
+
+   - **Define Properties** (Optional): If necessary, click **+ Add Property** to do these settings:
+
+     - Enter a property name.
+
+     - Select a data type.
+
+     - Select whether to allow null values.
+
+     - (Optional) Enter the default value.
+
+     - (Optional) Enter the description.
+
+   - **Set TTL (Time To Live)** (Optional): If no index is set for the edge type, you can set the TTL configuration: In the upper left corner of the **Set TTL** panel, click the check box to expand the panel, and configure `TTL_COL` and `TTL_DURATION` (in seconds). For more information about both parameters, see [TTL configuration](../../3.ngql-guide/8.clauses-and-options/ttl-options.md "Click to go to NebulaGraph website").
+
+6. When the preceding settings are completed, in the **Equivalent to the following nGQL statement** panel, you can see the nGQL statement equivalent to these settings.
+
+   ![Define properties of the `action` edge type](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-004-en.png "Define an edge type")
+
+7. Confirm the settings and then click the **+ Create** button.
+
+When the edge type is created successfully, the **Define Properties** panel shows all its properties on the list.
+
+## Edit an edge type
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space. If necessary, you can choose another name to change the graph space.
+
+4. Click the **Edge Type** tab, find an edge type and then click the button ![Icon of edit](https://docs-cdn.nebula-graph.com.cn/figures/Setup.png) in the **Operations** column.
+
+5. On the **Edit** page, do these operations:
+
+   - To edit a comment: Click **Edit** on the right of `Comment`.
+
+   - To edit a property: On the **Define Properties** panel, find a property, click **Edit**, and then change the data type or the default value.
+
+   - To delete a property: On the **Define Properties** panel, find a property, click **Delete**.
+
+   - To add more properties: On the **Define Properties** panel, click the **Add Property** button to add a new property.
+
+   - To set the TTL configuration: In the upper left corner of the **Set TTL** panel, click the check box and then set TTL.
+
+   - To delete the TTL configuration: When the **Set TTL** panel is expanded, in the upper left corner of the panel, click the check box to delete the configuration.
+
+   - To edit the TTL configuration: On the **Set TTL** panel, click **Edit** and then change the configuration of `TTL_COL` and `TTL_DURATION` (in seconds).
+
+   !!! note
+
+      For information about the coexistence problem of TTL and index, see [TTL](../../3.ngql-guide/8.clauses-and-options/ttl-options.md).
+
+## Delete an edge type
+
+!!! danger
+
+   Confirm the [impact](../../3.ngql-guide/11.edge-type-statements/2.drop-edge.md) before deleting the edge type. Deleted data cannot be restored if it is not [backed up](../../backup-and-restore/nebula-br/1.what-is-br.md).
+
+1. In the toolbar, click the **Schema** tab.
+
+2.
In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space. If necessary, you can choose another name to change the graph space.
+
+4. Click the **Edge Type** tab, find an edge type and then click the button ![Icon of deletion](https://docs-cdn.nebula-graph.com.cn/figures/alert-delete.png) in the **Operations** column.
+
+5. Click **OK** to confirm in the pop-up dialog box.
+
+## Next to do
+
+After the edge type is created, you can use the **Console** page to insert edge data one by one manually or use the **Import** page to bulk import edge data.
diff --git a/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-index.md b/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-index.md
new file mode 100644
index 00000000000..b1313f3b2e4
--- /dev/null
+++ b/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-index.md
@@ -0,0 +1,89 @@
+# Manage indexes
+
+You can create an index for a Tag and/or an Edge type. An index lets a traversal start from vertices or edges that have the same property value, and it can make a query more efficient. With Studio, you can use the **Console** page or the **Schema** page to create, retrieve, and delete indexes. This topic introduces only how to use the **Schema** page to operate an index.
+
+!!! note
+
+    You can create an index when a Tag or an Edge Type is created. However, an index can decrease the write speed during data import. We recommend that you import data first and then create and rebuild an index. For more information, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md "Click to go to the NebulaGraph website").
+
+## Prerequisites
+
+To operate an index on the **Schema** page of Studio, you must confirm the following:
+
+- Studio is connected to NebulaGraph.
+- A graph space, Tags, and Edge Types are created.
+- Your account has the authority of GOD, ADMIN, or DBA.
+
+## Create an index
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space. If necessary, you can choose another name to change the graph space.
+
+4. Click the **Index** tab and then click the **+ Create** button.
+
+5. On the **Create** page, do these settings:
+
+   - **Index Type**: Choose to create an index for a tag or for an edge type. In this example, **Edge Type** is chosen.
+
+   - **Associated tag name**: Choose a tag name or an edge type name. In this example, **follow** is chosen.
+
+   - **Index Name**: Specify a name for the new index. In this example, **follow_index** is used.
+
+   - **Comment** (Optional): Enter the description for the index.
+
+   - **Indexed Properties** (Optional): Click **Add property**, and then, in the dialog box, choose a property. If necessary, repeat this step to choose more properties. You can drag the properties to sort them. In this example, `degree` is chosen.
+
+     !!! note
+
+        The order of the indexed properties has an effect on the result of the `LOOKUP` statement. For more information, see [nGQL Manual](../../3.ngql-guide/7.general-query-statements/5.lookup.md).
+
+6. When the settings are done, the **Equivalent to the following nGQL statement** panel shows the statement equivalent to the settings.
+
+   ![A page for index creation](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-005-en.png)
+
+7.
Confirm the settings and then click the **+ Create** button. When an index is created, the index list shows the new index.
+
+## View indexes
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space. If necessary, you can choose another name to change the graph space.
+
+4. Click the **Index** tab and, in the upper left corner, choose an index type: **Tag** or **Edge Type**.
+
+5. In the list, find an index and click its row. All its details are shown in the expanded row.
+
+## Rebuild indexes
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space. If necessary, you can choose another name to change the graph space.
+
+4. Click the **Index** tab and, in the upper left corner, choose an index type: **Tag** or **Edge Type**.
+
+5. Find an index and then click the **Rebuild** button in the **Operations** column.
+
+!!! note
+
+    For more information, see [REBUILD INDEX](../../3.ngql-guide/14.native-index-statements/4.rebuild-native-index.md).
+
+## Delete an index
+
+To delete an index on **Schema**, follow these steps:
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space. If necessary, you can choose another name to change the graph space.
+
+4. Click the **Index** tab, find an index and then click the button ![Icon of deletion](https://docs-cdn.nebula-graph.com.cn/figures/alert-delete.png) in the **Operations** column.
+
+5. Click **OK** to confirm in the pop-up dialog box.
diff --git a/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-space.md b/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-space.md
new file mode 100644
index 00000000000..9920237cc89
--- /dev/null
+++ b/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-space.md
@@ -0,0 +1,58 @@
+# Manage graph spaces
+
+When Studio is connected to NebulaGraph, you can create or delete a graph space. You can use the **Console** page or the **Schema** page to do these operations. This article only introduces how to use the **Schema** page to operate graph spaces in NebulaGraph.
+
+## Prerequisites
+
+To operate a graph space on the **Schema** page of Studio, you must confirm the following:
+
+- Studio is connected to NebulaGraph.
+- Your account has the authority of GOD. It means that:
+  - If the authentication is disabled in NebulaGraph, you can use `root` and any password to sign in to Studio.
+  - If the authentication is enabled in NebulaGraph, you must use `root` and its password to sign in to Studio.
+
+## Create a graph space
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, click **Create Space**, do these settings:
+
+   - **Name**: Specify a name for the new graph space. In this example, `basketballplayer` is used. The name must be unique in the database.
+
+   - **Vid Type**: The data types of VIDs are restricted to `FIXED_STRING(<N>)` or `INT64`. A graph space can only select one VID type. In this example, `FIXED_STRING(32)` is used.
For more information, see [VID](../../1.introduction/3.vid.md).
+
+   - **Comment**: Enter the description for the graph space. The maximum length is 256 bytes. By default, a space has no comment. But in this example, `Statistics of basketball players` is used.
+
+   - **Optional Parameters**: Set the values of `partition_num` and `replica_factor` respectively. In this example, these parameters are set to `100` and `1` respectively. For more information, see [`CREATE SPACE` syntax](../../3.ngql-guide/9.space-statements/1.create-space.md "Click to go to the NebulaGraph website").
+
+   In the **Equivalent to the following nGQL statement** panel, you can see the statement equivalent to the preceding settings.
+
+   ```ngql
+   CREATE SPACE basketballplayer (partition_num = 100, replica_factor = 1, vid_type = FIXED_STRING(32)) COMMENT = "Statistics of basketball players"
+   ```
+
+3. Confirm the settings and then click the **+ Create** button. If the graph space is created successfully, you can see it on the graph space list.
+
+![The Create page with settings for a graph space](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-006-en.png)
+
+## Delete a graph space
+
+!!! danger
+
+    Deleting the space will delete all the data in it, and the deleted data cannot be restored if it is not [backed up](../../backup-and-restore/3.manage-snapshot.md).
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List**, find the space you want to delete, and click **Delete Graph Space** in the **Operation** column.
+
+   ![Graph space list with the graph space to be deleted](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-007-en.png)
+
+3. In the dialog box, confirm the information and then click **OK**.
+
+## Next to do
+
+After a graph space is created, you can create or edit a schema, including:
+
+- [Operate tags](st-ug-crud-tag.md)
+- [Operate edge types](st-ug-crud-edge-type.md)
+- [Operate indexes](st-ug-crud-index.md)
diff --git a/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-tag.md b/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-tag.md
new file mode 100644
index 00000000000..3b865f3bc0a
--- /dev/null
+++ b/docs-2.0-en/nebula-studio/manage-schema/st-ug-crud-tag.md
@@ -0,0 +1,100 @@
+# Manage tags
+
+After a graph space is created in NebulaGraph, you can create tags. With Studio, you can use the **Console** page or the **Schema** page to create, retrieve, update, or delete tags. This topic introduces only how to use the **Schema** page to operate tags in a graph space.
+
+## Prerequisites
+
+To operate a tag on the **Schema** page of Studio, you must confirm the following:
+
+- Studio is connected to NebulaGraph.
+- A graph space is created.
+- Your account has the authority of GOD, ADMIN, or DBA.
+
+## Create a tag
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space. If necessary, you can choose another name to change the graph space.
+
+4. Click the **Tag** tab and click the **+ Create** button.
+
+5. On the **Create** page, do these settings:
+
+   - **Name**: Specify an appropriate name for the tag. In this example, `course` is specified.
+
+   - **Comment** (Optional): Enter the description for the tag.
+
+   - **Define Properties** (Optional): If necessary, click **+ Add Property** to do these settings:
+
+     - Enter a property name.
+
+     - Select a data type.
- Select whether to allow null values.
+
+     - (Optional) Enter the default value.
+
+     - (Optional) Enter the description.
+
+   - **Set TTL (Time To Live)** (Optional): If no index is set for the tag, you can set the TTL configuration: In the upper left corner of the **Set TTL** panel, click the check box to expand the panel, and configure `TTL_COL` and `TTL_DURATION` (in seconds). For more information about both parameters, see [TTL configuration](../../3.ngql-guide/8.clauses-and-options/ttl-options.md "Click to go to NebulaGraph website").
+
+6. When the preceding settings are completed, in the **Equivalent to the following nGQL statement** panel, you can see the nGQL statement equivalent to these settings.
+
+   ![Define properties of the `course` tag](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-008-en.png)
+
+7. Confirm the settings and then click the **+ Create** button.
+
+When the tag is created successfully, the **Define Properties** panel shows all its properties on the list.
+
+## Edit a tag
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space. If necessary, you can choose another name to change the graph space.
+
+4. Click the **Tag** tab, find a tag and then click the button ![Icon of edit](https://docs-cdn.nebula-graph.com.cn/figures/Setup.png) in the **Operations** column.
+
+5. On the **Edit** page, do these operations:
+
+   - To edit a comment: Click **Edit** on the right of `Comment`.
+
+   - To edit a property: On the **Define Properties** panel, find a property, click **Edit**, and then change the data type or the default value.
+
+   - To delete a property: On the **Define Properties** panel, find a property, click **Delete**.
+
+   - To add more properties: On the **Define Properties** panel, click the **Add Property** button to add a new property.
+
+   - To set the TTL configuration: In the upper left corner of the **Set TTL** panel, click the check box and then set TTL.
+
+   - To delete the TTL configuration: When the **Set TTL** panel is expanded, in the upper left corner of the panel, click the check box to delete the configuration.
+
+   - To edit the TTL configuration: On the **Set TTL** panel, click **Edit** and then change the configuration of `TTL_COL` and `TTL_DURATION` (in seconds).
+
+   !!! note
+
+      For information about the coexistence problem of TTL and index, see [TTL](../../3.ngql-guide/8.clauses-and-options/ttl-options.md).
+
+## Delete a tag
+
+!!! danger
+
+   Confirm the [impact](../../3.ngql-guide/10.tag-statements/2.drop-tag.md) before deleting the tag. Deleted data cannot be restored if it is not [backed up](../../backup-and-restore/nebula-br/1.what-is-br.md).
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. In the **Current Graph Space** field, confirm the name of the graph space. If necessary, you can choose another name to change the graph space.
+
+4. Click the **Tag** tab, find a tag and then click the button ![Icon of deletion](https://docs-cdn.nebula-graph.com.cn/figures/alert-delete.png) in the **Operations** column.
+
+5. Click **OK** in the pop-up dialog box to confirm deleting the tag.
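+
+For reference, the same deletion can also be performed as an nGQL statement on the **Console** page or through nebula-console. A minimal sketch, assuming the `course` tag from the example above lives in the `basketballplayer` space, and that the address and credentials are placeholders for your own deployment:
+
+```bash
+# Switch to the graph space and drop the tag; see the DROP TAG link above for the impact.
+./nebula-console -addr 192.168.8.100 -port 9669 -u root -p nebula \
+  -e 'USE basketballplayer; DROP TAG course;'
+```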
+
+## Next to do
+
+After the tag is created, you can use the **Console** page to insert vertex data one by one manually or use the **Import** page to bulk import vertex data.
diff --git a/docs-2.0-en/nebula-studio/manage-schema/st-ug-view-schema.md b/docs-2.0-en/nebula-studio/manage-schema/st-ug-view-schema.md
new file mode 100644
index 00000000000..9dc1a4c11a3
--- /dev/null
+++ b/docs-2.0-en/nebula-studio/manage-schema/st-ug-view-schema.md
@@ -0,0 +1,19 @@
+# View Schema
+
+Users can visually view schemas in NebulaGraph Studio.
+
+## Steps
+
+1. In the toolbar, click the **Schema** tab.
+
+2. In the **Graph Space List** page, find a graph space and then click its name or click **Schema** in the **Operations** column.
+
+3. Click the **View Schema** tab and click the **Get Schema** button.
+
+## Other operations
+
+In the **Graph Space List** page, find a graph space and then perform the following operations in the **Operations** column:
+
+- View Schema DDL: Displays the schema creation statements for the graph space, including the graph space, tags, edge types, and indexes.
+- Clone Graph Space: Clones the schema of the graph space to a new graph space.
+- Delete Graph Space: Deletes the graph space, including the schema and all vertices and edges.
\ No newline at end of file
diff --git a/docs-2.0-en/nebula-studio/quick-start/draft.md b/docs-2.0-en/nebula-studio/quick-start/draft.md
new file mode 100644
index 00000000000..d176b246f31
--- /dev/null
+++ b/docs-2.0-en/nebula-studio/quick-start/draft.md
@@ -0,0 +1,53 @@
+# Schema drafting
+
+Studio supports the schema drafting function. Users can design their schemas on the canvas to visually display the relationships between vertices and edges, and apply the schema to a specified graph space after the design is completed.
+
+## Features
+
+- Design schemas visually.
+- Apply a schema to a specified graph space.
+- Export a schema as a PNG image.
+
+## Entry
+
+At the top navigation bar, click ![Template](https://docs-cdn.nebula-graph.com.cn/figures/sketch_cion_221018.png).
+
+## Design schema
+
+The following steps take designing the schema of the `basketballplayer` dataset as an example to demonstrate how to use the schema drafting function.
+
+1. At the upper left corner of the page, click **New**.
+2. Create a tag by selecting the appropriate color tag under the canvas. You can hold down the left mouse button and drag the tag onto the canvas.
+3. Click the tag. On the right side of the page, fill in the name of the tag as `player`, and add two properties `name` and `age`.
+4. Create a tag again. The name of the tag is `team`, and the property is `name`.
+5. Connect from the anchor point of the tag `player` to the anchor point of the tag `team`. Click the generated edge, fill in the name of the edge type as `serve`, and add two properties `start_year` and `end_year`.
+6. Connect from an anchor point of the tag `player` to another one of its own. Click the generated edge, fill in the name of the edge type as `follow`, and add a property `degree`.
+7. After the design is complete, click ![setup](https://docs-cdn.nebula-graph.com.cn/figures/setup-220916.png) at the top of the page to change the name of the draft, and then click ![save](https://docs-cdn.nebula-graph.com.cn/figures/workflow-saveAs-220623.png) at the top right corner to save the draft.
+
+A screenshot that shows the draft UI of studio
+
+## Apply schema
+
+1.
Select the draft that you want to import from the **Draft list** on the left side of the page, and then click **Apply to Space** at the upper right corner. +2. Import the schema to a new or existing space, and click **Confirm**. + + !!! note + + - For more information about the parameters for creating a graph space, see [CREATE SPACE](../../3.ngql-guide/9.space-statements/1.create-space.md). + - If the same schema exists in the graph space, the import operation fails, and the system prompts you to modify the name or change the graph space. + +## Modify schema + +Select the schema draft that you want to modify from the **Draft list** on the left side of the page. Click ![save](https://docs-cdn.nebula-graph.com.cn/figures/workflow-saveAs-220623.png) at the upper right corner after the modification. + +!!! note + + The graph space to which the schema has been applied will not be modified synchronously. + +## Delete schema + +Select the schema draft that you want to delete from the **Draft list** on the left side of the page, click **X** at the upper right corner of the thumbnail, and confirm to delete it. + +## Export Schema + +Click ![data_output](https://docs-cdn.nebula-graph.com.cn/figures/explorer-btn-output.png) at the upper right corner to export the schema as a PNG image. diff --git a/docs-2.0-en/nebula-studio/quick-start/st-ug-console.md b/docs-2.0-en/nebula-studio/quick-start/st-ug-console.md new file mode 100644 index 00000000000..793beb434c2 --- /dev/null +++ b/docs-2.0-en/nebula-studio/quick-start/st-ug-console.md @@ -0,0 +1,25 @@ +# Console + +Studio console interface is shown as follows. + +![console](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-015-en.png) + +The following table lists various functions on the console interface. + +| number | function | descriptions | +| :-- | :--| :-- | +| 1 | toolbar | Click the **Console** tab to enter the console page. | +| 2 | select a space | Select a space in the Current Graph Space list.
**Note**: Studio does not support running the `USE <space_name>` statement directly in the input box. |
+| 3 | favorites | Click the ![save](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-save.png) button to expand the favorites, click one of the statements, and the input box will automatically enter the statement. |
+| 4 | history list | Click the ![history](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-history.png) button to view the statement running records. In the list, click one of the statements, and the statement will be automatically entered in the input box. The list provides the record of the last 15 statements. |
+| 5 | clean input box | Click the ![clean](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-clear.png) button to clear the content entered in the input box. |
+| 6 | run | After inputting the nGQL statement in the input box, click the ![run](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-play.png) button to run the statement. |
+| 7 | custom parameters display | Click the ![Query](https://docs-cdn.nebula-graph.com.cn/figures/down.png) button to expand the custom parameters for parameterized queries. For details, see [Manage parameters](../../14.client/nebula-console.md). |
+| 8 | input box | After inputting the nGQL statements, click the ![run](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-play.png) button to run the statements. You can input multiple statements and run them at the same time by using the separator `;`, and also use the symbol `//` to add comments. |
+| 9 | statement running status | After running the nGQL statement, the statement running status is displayed. If the statement runs successfully, the statement is displayed in green. If the statement fails, the statement is displayed in red. |
+| 10 | add to favorites | Click the ![save](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-save.png) button to save the statement as a favorite; the button of a favorite statement is colored yellow. |
+| 11 | export CSV file or PNG file | After running the nGQL statement and getting the result, when the result is in the **Table** window, click the ![download](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png) button to export it as a CSV file. Switch to the **Graph** window and click the ![download](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png) button to export the result as a CSV file or a PNG image. |
+| 12 | expand/hide execution results | Click the ![up](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-up.png) button to hide the result or click the ![down](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-down.png) button to expand the result. |
+| 13 | close execution results | Click the ![close](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-close.png) button to close the result returned by this nGQL statement. |
+| 14 | **Table** window | Displays the result of running the nGQL statement. If the statement returns results, the window displays the results in a table. |
+| 15 | **Graph** window | Displays the result of running the nGQL statement. If the statement returns the complete vertex-edge result, the window displays the result as a graph. Click the ![expand](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-back.png) button on the right to view the overview panel.
| diff --git a/docs-2.0-en/nebula-studio/quick-start/st-ug-create-schema.md b/docs-2.0-en/nebula-studio/quick-start/st-ug-create-schema.md new file mode 100644 index 00000000000..c97df31e5aa --- /dev/null +++ b/docs-2.0-en/nebula-studio/quick-start/st-ug-create-schema.md @@ -0,0 +1,76 @@ +# Create a schema + +To batch import data into NebulaGraph, you must have a graph schema. You can create a schema on the **Console** page or on the **Schema** page of Studio. + +!!! note + + - Users can use nebula-console to create a schema. For more information, see [NebulaGraph Manual](../../README.md) and [Get started with NebulaGraph](../../2.quick-start/1.quick-start-workflow.md). + - Users can use the Schema drafting function to design schemas visually. For more information, see [Schema drafting](draft.md). + +## Prerequisites + +To create a graph schema on Studio, confirm the following: + +- Studio is connected to NebulaGraph. + +- Your account has the privilege of GOD, ADMIN, or DBA. + +- The schema is designed. + +- A graph space is created. + +!!! note + + If no graph space exists and your account has the GOD privilege, you can create a graph space on the **Console** page. For more information, see [CREATE SPACE](../../3.ngql-guide/9.space-statements/1.create-space.md). + +## Create a schema with Schema + +1. Create tags. For more information, see [Operate tags](../manage-schema/st-ug-crud-tag.md). + +2. Create edge types. For more information, see [Operate edge types](../manage-schema/st-ug-crud-edge-type.md). + +## Create a schema with Console + +1. In the toolbar, click the **Console** tab. + +2. In the **Current Graph Space** field, choose a graph space name. In this example, **basketballplayer** is used. + + ![Choose a graph space name for the Current Graph Space field](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-009-en.png "Choose a graph space") + +3. In the input box, enter these statements one by one and click the **Run** button. + + ```ngql + // To create a tag named "player", with two properties + nebula> CREATE TAG player(name string, age int); + + // To create a tag named "team", with one property + nebula> CREATE TAG team(name string); + + // To create an edge type named "follow", with one property + nebula> CREATE EDGE follow(degree int); + + // To create an edge type named "serve", with two properties + nebula> CREATE EDGE serve(start_year int, end_year int); + ``` + +If the preceding statements are executed successfully, the schema is created. You can run the statements as follows to view the schema. + +```ngql +// To list all the tags in the current graph space +nebula> SHOW TAGS; + +// To list all the edge types in the current graph space +nebula> SHOW EDGES; + +// To view the definition of the tags and edge types +nebula> DESCRIBE TAG player; +nebula> DESCRIBE TAG team; +nebula> DESCRIBE EDGE follow; +nebula> DESCRIBE EDGE serve; +``` + +If the schema is created successfully, you can see the definition of the tags and edge types in the result window. + +## Next to do + +When a schema is created, you can [import data](st-ug-import-data.md). diff --git a/docs-2.0-en/nebula-studio/quick-start/st-ug-import-data.md b/docs-2.0-en/nebula-studio/quick-start/st-ug-import-data.md new file mode 100644 index 00000000000..615cbd7dfbd --- /dev/null +++ b/docs-2.0-en/nebula-studio/quick-start/st-ug-import-data.md @@ -0,0 +1,67 @@ +# Import data + +Studio supports importing data in CSV format into NebulaGraph through an interface.
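For reference, the following is a minimal sketch of what a CSV source file for the `player` tag from the preceding article might look like. The file name, VIDs, and property values here are hypothetical; each row carries a VID followed by the `name` and `age` properties, with no header row.

```bash
# Hypothetical example: create a two-row CSV source file for the player tag.
# Column order: VID, name property, age property.
cat > player.csv <<'EOF'
player100,Tim Duncan,42
player101,Tony Parker,36
EOF
```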
+ +## Prerequisites + +To batch import data, confirm the following: + +- The schema has been created in NebulaGraph. + +- The CSV files meet the requirements of the schema. + +- The account has GOD, ADMIN, or DBA permissions. For details, see [Built-in Roles](../../7.data-security/1.authentication/3.role-list.md). + +## Entry + +In the top navigation bar, click ![download](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png). + +Importing data involves two steps: creating a new data source and creating an import task. Both are described in detail below. + +## Create a new data source + +Click **New Data Source** in the upper right corner of the page to set the data source and its related settings. Currently, three types of data sources are supported. + +| Type of data source | Description | +| :--- | :--- | +| Cloud storage | Add cloud storage as the CSV file source. Only cloud services compatible with the Amazon S3 interface are supported. | +| SFTP | Add SFTP as the CSV file source. | +| Local file | Upload a local CSV file. The file size cannot exceed 200 MB. Put files exceeding the limit into other types of data sources. | + +!!! Note + + - When uploading a local CSV file, you can select more than one CSV file at one time. + - After adding a data source, you can click **Data Source Management** at the top of the page and switch tabs to view the details of different types of data sources, and you can also edit or delete data sources. + +## Create an import task + +1. Click **New Import** at the top left corner of the page to complete the following settings: + + !!! caution + + Users can also click **Import Template** to download the sample configuration file `example.yaml`, configure it, and then upload the configuration file. The configuration method is the same as for [NebulaGraph Importer](../../nebula-importer/use-importer.md) (see the sketch after this section). + + - **Space**: The name of the graph space where the data needs to be imported. + - **Task Name**: Automatically generated by default; can be modified. + - (Optional) **More configuration**: You can customize the concurrency, batch size, retry times, read concurrency, and import concurrency. + - **Map Tags**: + + 1. Click **Add Tag**, and then select a tag from the added tags below. + 2. Click **Add source file**, select **Data Source Type** and **File Path** in **Data source file**, find the file you need to import, and then click **Add**. + 3. On the preview page, set the file separator and whether the file has a header row, and then click **Confirm**. + 4. Select the corresponding column for the VID in **VID Columns**. You can select multiple columns to be merged into a VID, and you can also add a prefix or suffix to the VID. + 5. Select the corresponding column for each property in the properties box. For properties that can be `NULL` or have `DEFAULT` set, you can leave the corresponding column unspecified. + 6. Repeat steps 2 to 5 to import all the data files of the selected tag. + 7. Repeat steps 1 to 6 to import the data of all tags. + + - **Map Edges**: The operations are the same as for mapping tags. + + ![import](https://docs-cdn.nebula-graph.com.cn/figures/explorer_import_230830.png) + +2. After completing the settings, click **Import**, enter the password for the NebulaGraph account, and confirm. + +After the import task is created, you can view its progress in the **Import Data** tab, which supports operations such as editing the task, viewing logs, downloading logs, reimporting, downloading configuration files, and deleting tasks.
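The caution above notes that an `example.yaml` configured for NebulaGraph Importer can be uploaded in place of the form settings. As a rough sketch only, assuming you have the Importer binary and a finished configuration file (the paths here are hypothetical), the same configuration can also be run outside Studio:

```bash
# Hypothetical example: run NebulaGraph Importer directly with the completed
# configuration template downloaded from Studio.
./nebula-importer --config ./example.yaml
```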
+ +## Next + +After completing the data import, users can access the [Console](st-ug-console.md) page. diff --git a/docs-2.0-en/nebula-studio/quick-start/st-ug-plan-schema.md b/docs-2.0-en/nebula-studio/quick-start/st-ug-plan-schema.md new file mode 100644 index 00000000000..dffd1fb336f --- /dev/null +++ b/docs-2.0-en/nebula-studio/quick-start/st-ug-plan-schema.md @@ -0,0 +1,24 @@ +# Design a schema + +To manipulate graph data in NebulaGraph with Studio, you must have a graph schema. This article introduces how to design a graph schema for NebulaGraph. + +A graph schema for NebulaGraph must have these essential elements: + +- Tags (namely vertex types) and their properties. + +- Edge types and their properties. + +In this article, you can install the sample data set [basketballplayer](https://docs-cdn.nebula-graph.com.cn/dataset/dataset.zip) and use it to explore a pre-designed schema. + +This table gives all the essential elements of the schema. + +| Element | Name | Property name (Data type) | Description | +| :--- | :--- | :--- | :--- | +| Tag | **player** | - `name` (`string`)
- `age` (`int`) | Represents the player. | +| Tag | **team** | - `name` (`string`) | Represents the team. | +| Edge type | **serve** | - `start_year` (`int`)
- `end_year` (`int`) | Represents the player's behavior.
This behavior connects the player to the team, and the direction is from player to team. | +| Edge type | **follow** | - `degree` (`int`) | Represents the player's behavior.
This behavior connects one player to another player, and the direction is from one player to another. | + +This figure shows the relationship (**serve**/**follow**) between a **player** and a **team**. + +![The relationship between players and between players and teams](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-013-cn.png "Relationship between players and teams in the example dataset") diff --git a/docs-2.0-en/nebula-studio/troubleshooting/st-ug-config-server-errors.md b/docs-2.0-en/nebula-studio/troubleshooting/st-ug-config-server-errors.md new file mode 100644 index 00000000000..7032d6f7f0e --- /dev/null +++ b/docs-2.0-en/nebula-studio/troubleshooting/st-ug-config-server-errors.md @@ -0,0 +1,45 @@ +# Error connecting to the database + +## Problem description + +After following the steps in [connect Studio](../deploy-connect/st-ug-connect.md), the connection prompts **failed**. + +## Possible causes and solutions + +You can troubleshoot the problem by following the steps below. + +### Step 1: Confirm that the format of the **Host** field is correct + +You must fill in the IP address (`graph_server_ip`) and port of the Graph service of the NebulaGraph database. If no changes are made, the port defaults to `9669`. Even if NebulaGraph and Studio are deployed on the same machine, you must use the local IP address instead of `127.0.0.1`, `localhost`, or `0.0.0.0`. + +### Step 2: Confirm that the **username** and **password** are correct + +If authentication is not enabled, you can log in with the username root and any password. + +If authentication is enabled and different users are created and assigned roles, each user logs in with their own account and password. + +### Step 3: Confirm that the NebulaGraph services are normal + +Check the status of the NebulaGraph services: + +- If you compile and deploy NebulaGraph on a Linux server, refer to the [NebulaGraph service](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md). + +- If you use NebulaGraph deployed with Docker Compose or RPM, refer to the [NebulaGraph service status and ports](../deploy-connect/st-ug-deploy.md). + +If the NebulaGraph services are normal, proceed to Step 4 to continue troubleshooting. Otherwise, restart the NebulaGraph services. + +!!! Note + + If you used `docker-compose up -d` to start NebulaGraph before, you must run `docker-compose down` to stop NebulaGraph. + +### Step 4: Confirm that the network connection of the Graph service is normal + +Run a command (for example, `telnet <graph_server_ip> 9669`) on the Studio machine to confirm whether the network connection to the Graph service of NebulaGraph is normal (see the sketch below). + +If the connection fails, check the following: + +- If Studio and NebulaGraph are on the same machine, check if the port is exposed. + +- If Studio and NebulaGraph are not on the same machine, check the network configuration of the NebulaGraph server, such as firewall, gateway, and port. + +If you cannot connect to the NebulaGraph service after troubleshooting with the above steps, please go to the [NebulaGraph forum](https://discuss.nebula-graph.io) for consultation. \ No newline at end of file
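To make Step 4 above concrete, here is a minimal sketch of the connectivity check, assuming a hypothetical Graph service address of `192.168.8.100`:

```bash
# Check from the Studio machine whether the Graph service port is reachable.
# Replace 192.168.8.100 with your actual graph_server_ip.
telnet 192.168.8.100 9669

# If telnet is unavailable, nc performs an equivalent check.
nc -zv 192.168.8.100 9669
```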
diff --git a/docs-2.0-en/nebula-studio/troubleshooting/st-ug-connection-errors.md b/docs-2.0-en/nebula-studio/troubleshooting/st-ug-connection-errors.md new file mode 100644 index 00000000000..bb24e42ab01 --- /dev/null +++ b/docs-2.0-en/nebula-studio/troubleshooting/st-ug-connection-errors.md @@ -0,0 +1,59 @@ +# Cannot access Studio + +## Problem description + +After starting Studio as described in the documentation, why can't I open the page at `127.0.0.1:7001` or `0.0.0.0:7001`? + +## Possible causes and solutions + +You can troubleshoot the problem by following the steps below. + +### Step 1: Confirm the system architecture + +Confirm that the machine where the Studio service is deployed uses the x86_64 architecture. Currently, Studio supports only x86_64. + +### Step 2: Check whether the Studio service starts normally + +- For Studio deployed with RPM or DEB packages, use `systemctl status nebula-graph-studio` to check the running status. + +- For Studio deployed with the tar package, use `sudo lsof -i:7001` to check the port status. + +- For Studio deployed with Docker, run `docker-compose ps` to check whether the services have started normally. + + If the services are normal, the following result is returned, and the `State` column displays `Up` for every service. + + ```bash + Name Command State Ports + ------------------------------------------------------------------------------------------------------ + nebula-web-docker_client_1 ./nebula-go-api Up 0.0.0.0:32782->8080/tcp + nebula-web-docker_importer_1 nebula-importer --port=569 ... Up 0.0.0.0:32783->5699/tcp + nebula-web-docker_nginx_1 /docker-entrypoint.sh ngin ... Up 0.0.0.0:7001->7001/tcp, 80/tcp + nebula-web-docker_web_1 docker-entrypoint.sh npm r ... Up 0.0.0.0:32784->7001/tcp + ``` + +If the above result is not returned, stop Studio and restart it first. For details, refer to [Deploy Studio](../deploy-connect/st-ug-deploy.md). + + !!! note + + If you used `docker-compose up -d` to start Studio before, you must run `docker-compose down` to stop Studio. + +### Step 3: Confirm the access address + +If Studio and the browser are on the same machine, you can use `localhost:7001`, `127.0.0.1:7001`, or `0.0.0.0:7001` in the browser to access Studio. + +If Studio and the browser are not on the same machine, you must enter `<studio_server_ip>:7001` in the browser, where `studio_server_ip` refers to the IP address of the machine where the Studio service is deployed. + +### Step 4: Confirm the network connection + +Run `curl -I <studio_server_ip>:7001` to confirm whether the service responds normally (see the sketch below). If it returns `HTTP/1.1 200 OK`, the network is connected normally. + +If the connection is refused, check the following: + +- If Studio and NebulaGraph are on the same machine, check if the port is exposed. + +- If Studio and NebulaGraph are not on the same machine, check the network configuration of the NebulaGraph server, such as firewall, gateway, and port. + +If you still cannot access Studio after troubleshooting with the above steps, please go to the [NebulaGraph forum](https://discuss.nebula-graph.io) for consultation. \ No newline at end of file
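As a concrete sketch of the network check in Step 4 above, assuming a hypothetical Studio address of `192.168.8.100` (replace it with your actual `studio_server_ip`):

```bash
# Confirm that the Studio web service responds; expect "HTTP/1.1 200 OK".
curl -I http://192.168.8.100:7001
```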
diff --git a/docs-2.0-en/nebula-studio/troubleshooting/st-ug-faq.md b/docs-2.0-en/nebula-studio/troubleshooting/st-ug-faq.md new file mode 100644 index 00000000000..ed752fce256 --- /dev/null +++ b/docs-2.0-en/nebula-studio/troubleshooting/st-ug-faq.md @@ -0,0 +1,13 @@ +# FAQ + +!!! faq "Why can't I use a function?" + + If you find that a function cannot be used, it is recommended to troubleshoot the problem according to the following steps: + + 1. Confirm that NebulaGraph is the latest version. If you use Docker Compose to deploy the NebulaGraph database, it is recommended to run `docker-compose pull && docker-compose up -d` to pull the latest Docker image and start the container. + + 2. Confirm that Studio is the latest version. For more information, refer to [check updates](../../20.appendix/release-notes/studio-release-note.md). + + 3. Search the [nebula forum](https://github.com/vesoft-inc/nebula/discussions) and the [nebula](https://github.com/vesoft-inc/nebula) and [nebula-studio](https://github.com/vesoft-inc/nebula-studio) projects on GitHub to check whether similar problems have already been reported. + + 4. If none of the above steps solves the problem, you can post the problem on the forum. diff --git a/docs-2.0-en/reuse/source-monitoring-metrics.md b/docs-2.0-en/reuse/source-monitoring-metrics.md index 2e6b4cbb96f..c71f9203511 100644 --- a/docs-2.0-en/reuse/source-monitoring-metrics.md +++ b/docs-2.0-en/reuse/source-monitoring-metrics.md @@ -26,6 +26,7 @@ | `query_latency_us` | The latency of queries. | | `slow_query_latency_us` | The latency of slow queries. | | `num_queries_hit_memory_watermark` | The number of queries reached the memory watermark. | +| `resp_part_completeness` | The completeness of partially successful responses. You need to set `accept_partial_success` to `true` in the Graph service configuration first. | ### Meta @@ -167,4 +168,4 @@ Graph, Meta, and Storage services all have their own single process metrics. | `read_bytes_total` | The number of bytes read. | | `write_bytes_total` | The number of bytes written.
| -{{ent.ent_end}} \ No newline at end of file +{{ent.ent_end}} diff --git a/docs-2.0-zh/14.client/1.nebula-client.md b/docs-2.0-zh/14.client/1.nebula-client.md index 20e59f10cad..90c2cb4ff7f 100644 --- a/docs-2.0-zh/14.client/1.nebula-client.md +++ b/docs-2.0-zh/14.client/1.nebula-client.md @@ -2,7 +2,7 @@ {{nebula.name}}提供多种类型客户端,便于用户连接、管理{{nebula.name}}图数据库。 -- [NebulaGraph Console](../nebula-console.md):原生 CLI 客户端 +- [NebulaGraph Console](nebula-console.md):原生 CLI 客户端 - [NebulaGraph CPP](3.nebula-cpp-client.md):C++ 客户端 diff --git a/docs-2.0-zh/nebula-console.md b/docs-2.0-zh/14.client/nebula-console.md similarity index 100% rename from docs-2.0-zh/nebula-console.md rename to docs-2.0-zh/14.client/nebula-console.md diff --git a/docs-2.0-zh/20.appendix/6.eco-tool-version.md b/docs-2.0-zh/20.appendix/6.eco-tool-version.md index d7a00c2bfe5..391fd9df86b 100644 --- a/docs-2.0-zh/20.appendix/6.eco-tool-version.md +++ b/docs-2.0-zh/20.appendix/6.eco-tool-version.md @@ -93,7 +93,7 @@ NebulaGraph Algorithm(简称 Algorithm)是一款基于 [GraphX](https://spar ## NebulaGraph Console -NebulaGraph Console 是{{nebula.name}}的原生 CLI 客户端。如何使用请参见 [NebulaGraph Console](../nebula-console.md)。 +NebulaGraph Console 是{{nebula.name}}的原生 CLI 客户端。如何使用请参见 [NebulaGraph Console](../14.client/nebula-console.md)。 |{{nebula.name}}版本|Console 版本| |:---|:---| diff --git a/docs-2.0-zh/nebula-explorer/db-management/explorer-console.md b/docs-2.0-zh/nebula-explorer/db-management/explorer-console.md index 89b3553fc6b..9b3b406e3cb 100644 --- a/docs-2.0-zh/nebula-explorer/db-management/explorer-console.md +++ b/docs-2.0-zh/nebula-explorer/db-management/explorer-console.md @@ -21,7 +21,7 @@ | 5 | 运行 | 在输入框中输入 nGQL 语句后,点击 ![play](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-play.png) 按钮即开始运行语句。 | | 6 | 保存为模板 | 将输入框中输入的 nGQL 语句保存为模板。详情参见[查询语句模板](ngql-template.md)。 | | 7 | 输入框 | 输入 nGQL 语句的区域。可以同时输入多个语句按顺序执行,语句之间以 `;` 分隔。支持用`//`添加注释。 | -| 8 | 自定义参数展示 | 点击 ![查询](https://docs-cdn.nebula-graph.com.cn/figures/down.png)按钮可展开查看自定义参数,用于参数化查询。详情信息可见[管理参数](../../nebula-console.md)。| +| 8 | 自定义参数展示 | 点击 ![查询](https://docs-cdn.nebula-graph.com.cn/figures/down.png)按钮可展开查看自定义参数,用于参数化查询。详情信息可见[管理参数](../../14.client/nebula-console.md)。| | 9 | 语句运行状态 | 运行 nGQL 语句后,这里显示语句运行状态。如果语句运行成功,语句以绿色显示。如果语句运行失败,语句以红色显示。 | | 10 | 添加到收藏夹 | 点击![save](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-save.png) 按钮,将语句存入收藏夹中,已收藏的语句该按钮以黄色展示。| | 11 | 导出 CSV 文件或 PNG 格式图片 | 运行 nGQL 语句返回结果后,返回结果为表格形式时,点击 ![download](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png) 按钮即能将结果以 CSV 文件的形式导出。
切换到可视化窗口,点击 ![download](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png) 按钮即能将结果以 CSV 文件或 PNG 图片的形式导出。 | diff --git a/mkdocs.yml b/mkdocs.yml index 41bf6328347..828a3490ee9 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -499,7 +499,7 @@ nav: - 管理 Storage 主机: 4.deployment-and-installation/manage-storage-host.md - 管理 Zone: 4.deployment-and-installation/5.zone.md - 升级版本: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-ent.md - - 卸载悦数图数据库: 4.deployment-and-installation/4.uninstall-nebula-graph.md + - 卸载图数据库: 4.deployment-and-installation/4.uninstall-nebula-graph.md - 配置与日志: - 配置: @@ -584,7 +584,7 @@ nav: - 客户端: - 客户端介绍: 14.client/1.nebula-client.md - - NebulaGraph Console: nebula-console.md + - NebulaGraph Console: 14.client/nebula-console.md - NebulaGraph CPP: 14.client/3.nebula-cpp-client.md - NebulaGraph Java: 14.client/4.nebula-java-client.md - NebulaGraph Python: 14.client/5.nebula-python-client.md @@ -746,14 +746,11 @@ nav: nav: - About: README.md - Introduction: - - Introduction to graphs: 1.introduction/0-0-graph.md - - Graph databases: 1.introduction/0-1-graph-database.md - - Related technologies: 1.introduction/0-2.relates.md - What is NebulaGraph: 1.introduction/1.what-is-nebula-graph.md - Data model: 1.introduction/2.data-model.md - Path: 1.introduction/2.1.path.md - VID: 1.introduction/3.vid.md - - NebulaGraph architecture: + - Architecture: - Architecture overview: 1.introduction/3.nebula-graph-architecture/1.architecture-overview.md - Meta Service: 1.introduction/3.nebula-graph-architecture/2.meta-service.md - Graph Service: 1.introduction/3.nebula-graph-architecture/3.graph-service.md @@ -825,7 +822,7 @@ nav: - Conditional expressions: 3.ngql-guide/6.functions-and-expressions/5.conditional-expressions.md - Predicate functions: 3.ngql-guide/6.functions-and-expressions/8.predicate.md - Geography functions: 3.ngql-guide/6.functions-and-expressions/14.geo.md - - User-defined functions: 3.ngql-guide/6.functions-and-expressions/9.user-defined-functions.md +# - User-defined functions: 3.ngql-guide/6.functions-and-expressions/9.user-defined-functions.md - General queries statements: - MATCH: 3.ngql-guide/7.general-query-statements/2.match.md @@ -836,7 +833,6 @@ nav: - SHOW: - SHOW CHARSET: 3.ngql-guide/7.general-query-statements/6.show/1.show-charset.md - SHOW COLLATION: 3.ngql-guide/7.general-query-statements/6.show/2.show-collation.md - - SHOW CONFIGS: 3.ngql-guide/7.general-query-statements/6.show/3.show-configs.md - SHOW CREATE SPACE: 3.ngql-guide/7.general-query-statements/6.show/4.show-create-space.md - SHOW CREATE TAG/EDGE: 3.ngql-guide/7.general-query-statements/6.show/5.show-create-tag-edge.md - SHOW HOSTS: 3.ngql-guide/7.general-query-statements/6.show/6.show-hosts.md @@ -913,7 +909,7 @@ nav: - DROP INDEX: 3.ngql-guide/14.native-index-statements/6.drop-native-index.md - Full-text index statements: - - Index overview: 3.ngql-guide/14.native-index-statements/README.md +# - Index overview: 3.ngql-guide/14.native-index-statements/README.md - Full-text restrictions: 4.deployment-and-installation/6.deploy-text-based-index/1.text-based-index-restrictions.md - Deploy Elasticsearch cluster: 4.deployment-and-installation/6.deploy-text-based-index/2.deploy-es.md - Deploy Raft Listener cluster: 4.deployment-and-installation/6.deploy-text-based-index/3.deploy-listener.md @@ -932,24 +928,16 @@ nav: - Deploy and install: - Resource preparations: 4.deployment-and-installation/1.resource-preparations.md - - Compile and install: - - Compile the source: 
4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md - - Compile using Docker: 4.deployment-and-installation/2.compile-and-install-nebula-graph/7.compile-using-docker.md - Local single-node installation: - Install using RPM or DEB package: 4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md - Install using TAR package: 4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md - - Install standalone NebulaGraph: 4.deployment-and-installation/standalone-deployment.md - Local multi-node installation: 4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md - - Install using Docker Compose: 4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md - Install with ecosystem tools: 4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md - Manage services: 4.deployment-and-installation/manage-service.md - Connect to services: 4.deployment-and-installation/connect-to-nebula-graph.md - Manage Storage hosts: 4.deployment-and-installation/manage-storage-host.md - Manage Zones: 4.deployment-and-installation/5.zone.md - - Upgrade: - - Upgrade NebulaGraph Community to the latest version: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md - - Upgrade NebulaGraph from v3.x to v3.4 (Community Edition): 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-300-to-latest.md - - Upgrade NebulaGraph Enterprise to the latest version: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-ent-from-3.x-3.4.md + - Upgrade: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-ent.md - Uninstall NebulaGraph: 4.deployment-and-installation/4.uninstall-nebula-graph.md - Configure and log: @@ -961,10 +949,10 @@ nav: - Kernel configurations: 5.configurations-and-logs/1.configurations/6.kernel-config.md - Log management: - Runtime logs: 5.configurations-and-logs/2.log-management/logs.md - - Audit logs(Enterprise): 5.configurations-and-logs/2.log-management/audit-log.md + - Audit logs: 5.configurations-and-logs/2.log-management/audit-log.md - Monitor: - - Query NebulaGraph metrics: 6.monitor-and-metrics/1.query-performance-metrics.md + - Query metrics: 6.monitor-and-metrics/1.query-performance-metrics.md - RocksDB Statistics: 6.monitor-and-metrics/2.rocksdb-statistics.md - Black-box monitoring: - What is black-box monitoring: 6.monitor-and-metrics/3.bbox/3.1.bbox.md @@ -978,18 +966,50 @@ nav: - OpenLDAP authentication: 7.data-security/1.authentication/4.ldap.md - SSL: 7.data-security/4.ssl.md - - Backup and restore: - - NebulaGraph BR Enterprise: - - What is BR Enterprise: backup-and-restore/nebula-br-ent/1.br-ent-overview.md - - Install BR: backup-and-restore/nebula-br-ent/2.install-tools.md - - Back up data with BR: backup-and-restore/nebula-br-ent/3.backup-data.md - - Restore data with BR: backup-and-restore/nebula-br-ent/4.restore-data.md - - Manage snapshots: backup-and-restore/3.manage-snapshot.md +# - Backup and restore: +# - NebulaGraph BR Enterprise: +# - What is BR Enterprise: backup-and-restore/nebula-br-ent/1.br-ent-overview.md +# - Install BR: backup-and-restore/nebula-br-ent/2.install-tools.md +# - Back up data with BR: backup-and-restore/nebula-br-ent/3.backup-data.md +# - Restore data with BR: 
backup-and-restore/nebula-br-ent/4.restore-data.md +# - Manage snapshots: backup-and-restore/3.manage-snapshot.md - Synchronize and migrate: - Load balance: synchronization-and-migration/2.balance-syntax.md - Synchronize between two clusters: synchronization-and-migration/replication-between-clusters.md + - Import and export: + - Overview: import-export/write-tools.md + - Use NebulaGraph Importer: import-export/use-importer.md + - NebulaGraph Exchange: + - Introduction: + - What is NebulaGraph Exchange: import-export/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md + - Limitations: import-export/nebula-exchange/about-exchange/ex-ug-limitations.md + - Get Exchange: import-export/nebula-exchange/ex-ug-compile.md + - Exchange configurations: + - Options for import: import-export/nebula-exchange/parameter-reference/ex-ug-para-import-command.md + - Parameters in the configuration file: import-export/nebula-exchange/parameter-reference/ex-ug-parameter.md + - Use NebulaGraph Exchange: + - Import data from CSV files: import-export/nebula-exchange/use-exchange/ex-ug-import-from-csv.md + - Import data from JSON files: import-export/nebula-exchange/use-exchange/ex-ug-import-from-json.md + - Import data from ORC files: import-export/nebula-exchange/use-exchange/ex-ug-import-from-orc.md + - Import data from Parquet files: import-export/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md + - Import data from HBase: import-export/nebula-exchange/use-exchange/ex-ug-import-from-hbase.md + - Import data from MySQL/PostgreSQL: import-export/nebula-exchange/use-exchange/ex-ug-import-from-mysql.md + - Import data from Oracle: import-export/nebula-exchange/use-exchange/ex-ug-import-from-oracle.md + - Import data from ClickHouse: import-export/nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md + - Import data from Neo4j: import-export/nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md + - Import data from Hive: import-export/nebula-exchange/use-exchange/ex-ug-import-from-hive.md + - Import data from MaxCompute: import-export/nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md + - Import data from Pulsar: import-export/nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md + - Import data from Kafka: import-export/nebula-exchange/use-exchange/ex-ug-import-from-kafka.md + - Import data from JDBC: import-export/nebula-exchange/use-exchange/ex-ug-import-from-jdbc.md + - Import data from SST files: import-export/nebula-exchange/use-exchange/ex-ug-import-from-sst.md + - Export data from NebulaGraph: import-export/nebula-exchange/use-exchange/ex-ug-export-from-nebula.md + - Exchange FAQ: import-export/nebula-exchange/ex-ug-FAQ.md + - Spark Connector: nebula-spark-connector.md + - Flink Connector: nebula-flink-connector.md + - Best practices: - Compaction: 8.service-tuning/compaction.md - Storage load balance: 8.service-tuning/load-balance.md @@ -1002,7 +1022,7 @@ nav: - Clients: - Clients overview: 14.client/1.nebula-client.md - - NebulaGraph Console: nebula-console.md + - NebulaGraph Console: 14.client/nebula-console.md - NebulaGraph CPP: 14.client/3.nebula-cpp-client.md - NebulaGraph Java: 14.client/4.nebula-java-client.md - NebulaGraph Python: 14.client/5.nebula-python-client.md @@ -1025,157 +1045,120 @@ nav: # - Privacy policy: nebula-cloud/8.privacy-policy.md - - Dashboard: - - What is NebulaGraph Dashboard Enterprise Edition: nebula-dashboard-ent/1.what-is-dashboard-ent.md - - Deploy Dashboard Enterprise Edition: nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md - - 
Connect to Dashboard: nebula-dashboard-ent/3.connect-dashboard.md - - Create and import clusters: - - Create clusters: nebula-dashboard-ent/3.create-import-dashboard/1.create-cluster.md - - Import clusters: nebula-dashboard-ent/3.create-import-dashboard/2.import-cluster.md - - Cluster management: - - Cluster overview: nebula-dashboard-ent/4.cluster-operator/1.overview.md - - Cluster monitoring: nebula-dashboard-ent/4.cluster-operator/2.monitor.md - - Operation: - - Node: nebula-dashboard-ent/4.cluster-operator/operator/node.md - - Scale: nebula-dashboard-ent/4.cluster-operator/operator/scale.md - - Service: nebula-dashboard-ent/4.cluster-operator/operator/service.md - - Config Management: nebula-dashboard-ent/4.cluster-operator/operator/config-management.md - - Member management: nebula-dashboard-ent/4.cluster-operator/operator/member-management.md - - Version upgrade: nebula-dashboard-ent/4.cluster-operator/operator/version-upgrade.md - - Backup and restore: nebula-dashboard-ent/4.cluster-operator/operator/backup-and-restore.md - - Analysis: - - Slow query analyst: nebula-dashboard-ent/4.cluster-operator/analysis-diagnosis/slow-query-analyst.md - - Cluster diagnostics: nebula-dashboard-ent/4.cluster-operator/analysis-diagnosis/cluster-diagnosis.md - - Information: - - Information overview: nebula-dashboard-ent/4.cluster-operator/cluster-information/overview-info.md - - Job management: nebula-dashboard-ent/4.cluster-operator/cluster-information/job-management.md - - Audit log: nebula-dashboard-ent/4.cluster-operator/cluster-information/audit-log.md - - Runtime log: nebula-dashboard-ent/4.cluster-operator/cluster-information/runtime-log.md - - Notification: nebula-dashboard-ent/4.cluster-operator/9.notification.md - - Data Synchronization: nebula-dashboard-ent/4.cluster-operator/7.data-synchronization.md - - Operation records: nebula-dashboard-ent/4.cluster-operator/5.operation-record.md - - Other settings: nebula-dashboard-ent/4.cluster-operator/6.settings.md - - Authority management: nebula-dashboard-ent/5.account-management.md - - Task center: nebula-dashboard-ent/10.tasks.md - - NebulaGraph Dashboard Enterprise Edition LM: nebula-dashboard-ent/11.license-manager.md - - System settings: - - System settings: nebula-dashboard-ent/system-settings/system-settings.md - - Notification endpoint: nebula-dashboard-ent/system-settings/notification-endpoint.md - - Single sign-on: nebula-dashboard-ent/system-settings/single-sign-on.md - - Package management: nebula-dashboard-ent/system-settings/manage-package.md - - Monitoring metrics: nebula-dashboard-ent/7.monitor-parameter.md - - FAQ: nebula-dashboard-ent/8.faq.md +# - Dashboard: +# - What is NebulaGraph Dashboard Enterprise Edition: nebula-dashboard-ent/1.what-is-dashboard-ent.md +# - Deploy Dashboard Enterprise Edition: nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md +# - Connect to Dashboard: nebula-dashboard-ent/3.connect-dashboard.md +# - Create and import clusters: +# - Create clusters: nebula-dashboard-ent/3.create-import-dashboard/1.create-cluster.md +# - Import clusters: nebula-dashboard-ent/3.create-import-dashboard/2.import-cluster.md +# - Cluster management: +# - Cluster overview: nebula-dashboard-ent/4.cluster-operator/1.overview.md +# - Cluster monitoring: nebula-dashboard-ent/4.cluster-operator/2.monitor.md +# - Operation: +# - Node: nebula-dashboard-ent/4.cluster-operator/operator/node.md +# - Scale: nebula-dashboard-ent/4.cluster-operator/operator/scale.md +# - Service: 
nebula-dashboard-ent/4.cluster-operator/operator/service.md +# - Config Management: nebula-dashboard-ent/4.cluster-operator/operator/config-management.md +# - Member management: nebula-dashboard-ent/4.cluster-operator/operator/member-management.md +# - Version upgrade: nebula-dashboard-ent/4.cluster-operator/operator/version-upgrade.md +# - Backup and restore: nebula-dashboard-ent/4.cluster-operator/operator/backup-and-restore.md +# - Analysis: +# - Slow query analyst: nebula-dashboard-ent/4.cluster-operator/analysis-diagnosis/slow-query-analyst.md +# - Cluster diagnostics: nebula-dashboard-ent/4.cluster-operator/analysis-diagnosis/cluster-diagnosis.md +# - Information: +# - Information overview: nebula-dashboard-ent/4.cluster-operator/cluster-information/overview-info.md +# - Job management: nebula-dashboard-ent/4.cluster-operator/cluster-information/job-management.md +# - Audit log: nebula-dashboard-ent/4.cluster-operator/cluster-information/audit-log.md +# - Runtime log: nebula-dashboard-ent/4.cluster-operator/cluster-information/runtime-log.md +# - Notification: nebula-dashboard-ent/4.cluster-operator/9.notification.md +# - Data Synchronization: nebula-dashboard-ent/4.cluster-operator/7.data-synchronization.md +# - Operation records: nebula-dashboard-ent/4.cluster-operator/5.operation-record.md +# - Other settings: nebula-dashboard-ent/4.cluster-operator/6.settings.md +# - Authority management: nebula-dashboard-ent/5.account-management.md +# - Task center: nebula-dashboard-ent/10.tasks.md +# - NebulaGraph Dashboard Enterprise Edition LM: nebula-dashboard-ent/11.license-manager.md +# - System settings: +# - System settings: nebula-dashboard-ent/system-settings/system-settings.md +# - Notification endpoint: nebula-dashboard-ent/system-settings/notification-endpoint.md +# - Single sign-on: nebula-dashboard-ent/system-settings/single-sign-on.md +# - Package management: nebula-dashboard-ent/system-settings/manage-package.md +# - Monitoring metrics: nebula-dashboard-ent/7.monitor-parameter.md +# - FAQ: nebula-dashboard-ent/8.faq.md - - Explorer: - - What is NebulaGraph Explorer: nebula-explorer/about-explorer/ex-ug-what-is-explorer.md - - Deploy and connect: - - Deploy Explorer: nebula-explorer/deploy-connect/ex-ug-deploy.md - - Connect to NebulaGraph: nebula-explorer/deploy-connect/ex-ug-connect.md - - Page overview: nebula-explorer/ex-ug-page-overview.md - - Database management: - - Schema drafting: nebula-explorer/db-management/draft.md - - Schema management: nebula-explorer/db-management/10.create-schema.md - - Data import: nebula-explorer/db-management/11.import-data.md - - Console: nebula-explorer/db-management/explorer-console.md - - nGQL template: nebula-explorer/db-management/ngql-template.md - - Database user management: nebula-explorer/db-management/dbuser_management.md - - Graph explorer: - - Choose graph space: nebula-explorer/graph-explorer/13.choose-graphspace.md - - Start querying: nebula-explorer/graph-explorer/ex-ug-query-exploration.md - - Vertex Filter: nebula-explorer/graph-explorer/node-filtering.md - - Graph exploration: nebula-explorer/graph-explorer/ex-ug-graph-exploration.md - - Graph computing: nebula-explorer/graph-explorer/graph-algorithm.md - - Property calculation: nebula-explorer/graph-explorer/property-calculation.md - - Visual query: nebula-explorer/12.query-visually.md - - Canvas: - - Canvas overview: nebula-explorer/canvas-operations/canvas-overview.md - - Visualization modes: nebula-explorer/canvas-operations/visualization-mode.md - - Canvas snapshots: 
nebula-explorer/canvas-operations/canvas-snapshot.md - - Workflow: - - Workflow overview: nebula-explorer/workflow/workflows.md - - Resource preparations: nebula-explorer/workflow/1.prepare-resources.md - - Workflow example: nebula-explorer/workflow/2.create-workflow.md - - Workflow management: nebula-explorer/workflow/3.workflow-management.md - - Job management: nebula-explorer/workflow/4.jobs-management.md - - Workflow API: - - API overview: nebula-explorer/workflow/workflow-api/workflow-api-overview.md - - Add a new job: nebula-explorer/workflow/workflow-api/api-post-jobs.md - - Get a list of all jobs: nebula-explorer/workflow/workflow-api/api-get-jobs.md - - Get a list of jobs for a specified workflow: nebula-explorer/workflow/workflow-api/api-get-workflow-jobs.md - - Query details for a specified job: nebula-explorer/workflow/workflow-api/api-desc-job.md - - Cancel a running job: nebula-explorer/workflow/workflow-api/api-cancel-job.md - - Get the result data of a specified task: nebula-explorer/workflow/workflow-api/api-desc-task.md - - Inline frame: nebula-explorer/iframe.md - - System settings: nebula-explorer/system-settings.md - - Basic operations and shortcuts: nebula-explorer/ex-ug-shortcuts.md - - FAQ: nebula-explorer/faq.md - - - - Importer: - - Use NebulaGraph Importer: nebula-importer/use-importer.md -# - Configuration with Header: nebula-importer/config-with-header.md -# - Configuration without Header: nebula-importer/config-without-header.md - - - Exchange: - - Introduction: - - What is NebulaGraph Exchange: nebula-exchange/about-exchange/ex-ug-what-is-exchange.md - - Limitations: nebula-exchange/about-exchange/ex-ug-limitations.md - - Get Exchange: nebula-exchange/ex-ug-compile.md - - Exchange configurations: - - Options for import: nebula-exchange/parameter-reference/ex-ug-para-import-command.md - - Parameters in the configuration file: nebula-exchange/parameter-reference/ex-ug-parameter.md - - Use NebulaGraph Exchange: - - Import data from CSV files: nebula-exchange/use-exchange/ex-ug-import-from-csv.md - - Import data from JSON files: nebula-exchange/use-exchange/ex-ug-import-from-json.md - - Import data from ORC files: nebula-exchange/use-exchange/ex-ug-import-from-orc.md - - Import data from Parquet files: nebula-exchange/use-exchange/ex-ug-import-from-parquet.md - - Import data from HBase: nebula-exchange/use-exchange/ex-ug-import-from-hbase.md - - Import data from MySQL/PostgreSQL: nebula-exchange/use-exchange/ex-ug-import-from-mysql.md - - Import data from Oracle: nebula-exchange/use-exchange/ex-ug-import-from-oracle.md - - Import data from ClickHouse: nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md - - Import data from Neo4j: nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md - - Import data from Hive: nebula-exchange/use-exchange/ex-ug-import-from-hive.md - - Import data from MaxCompute: nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md - - Import data from Pulsar: nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md - - Import data from Kafka: nebula-exchange/use-exchange/ex-ug-import-from-kafka.md - - Import data from JDBC: nebula-exchange/use-exchange/ex-ug-import-from-jdbc.md - - Import data from SST files: nebula-exchange/use-exchange/ex-ug-import-from-sst.md - - Export data from NebulaGraph: nebula-exchange/use-exchange/ex-ug-export-from-nebula.md - - Exchange FAQ: nebula-exchange/ex-ug-FAQ.md - - - NebulaGraph Operator: - - What is NebulaGraph Operator: nebula-operator/1.introduction-to-nebula-operator.md - - Overview of 
using NebulaGraph Operator: nebula-operator/6.get-started-with-operator.md - - Deploy NebulaGraph Operator: nebula-operator/2.deploy-nebula-operator.md - - Deploy clusters: - - Deploy LM: nebula-operator/3.deploy-nebula-graph-cluster/3.0.deploy-lm.md - - Deploy clusters with Kubectl: nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md - - Deploy clusters with Helm: nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md - - Connect to NebulaGraph databases: nebula-operator/4.connect-to-nebula-graph-service.md - - Configure clusters: - - Custom configuration parameters for a NebulaGraph cluster: nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md - - Reclaim PVs: nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md - - Balance storage data after scaling out: nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md - - Manage cluster logs: nebula-operator/8.custom-cluster-configurations/8.4.manage-running-logs.md - - Enable mTLS: nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md - - Upgrade NebulaGraph clusters: nebula-operator/9.upgrade-nebula-cluster.md - - Specify a rolling update strategy: nebula-operator/11.rolling-update-strategy.md - - Backup and restore: nebula-operator/10.backup-restore-using-operator.md - - Self-healing: nebula-operator/5.operator-failover.md - - FAQ: nebula-operator/7.operator-faq.md +# - Explorer: +# - What is NebulaGraph Explorer: nebula-explorer/about-explorer/ex-ug-what-is-explorer.md +# - Deploy and connect: +# - Deploy Explorer: nebula-explorer/deploy-connect/ex-ug-deploy.md +# - Connect to NebulaGraph: nebula-explorer/deploy-connect/ex-ug-connect.md +# - Page overview: nebula-explorer/ex-ug-page-overview.md +# - Database management: +# - Schema drafting: nebula-explorer/db-management/draft.md +# - Schema management: nebula-explorer/db-management/10.create-schema.md +# - Data import: nebula-explorer/db-management/11.import-data.md +# - Console: nebula-explorer/db-management/explorer-console.md +# - nGQL template: nebula-explorer/db-management/ngql-template.md +# - Database user management: nebula-explorer/db-management/dbuser_management.md +# - Graph explorer: +# - Choose graph space: nebula-explorer/graph-explorer/13.choose-graphspace.md +# - Start querying: nebula-explorer/graph-explorer/ex-ug-query-exploration.md +# - Vertex Filter: nebula-explorer/graph-explorer/node-filtering.md +# - Graph exploration: nebula-explorer/graph-explorer/ex-ug-graph-exploration.md +# - Graph computing: nebula-explorer/graph-explorer/graph-algorithm.md +# - Property calculation: nebula-explorer/graph-explorer/property-calculation.md +# - Visual query: nebula-explorer/12.query-visually.md +# - Canvas: +# - Canvas overview: nebula-explorer/canvas-operations/canvas-overview.md +# - Visualization modes: nebula-explorer/canvas-operations/visualization-mode.md +# - Canvas snapshots: nebula-explorer/canvas-operations/canvas-snapshot.md +# - Workflow: +# - Workflow overview: nebula-explorer/workflow/workflows.md +# - Resource preparations: nebula-explorer/workflow/1.prepare-resources.md +# - Workflow example: nebula-explorer/workflow/2.create-workflow.md +# - Workflow management: nebula-explorer/workflow/3.workflow-management.md +# - Job management: nebula-explorer/workflow/4.jobs-management.md +# - Workflow API: +# - API overview: nebula-explorer/workflow/workflow-api/workflow-api-overview.md +# - Add a new job: 
nebula-explorer/workflow/workflow-api/api-post-jobs.md +# - Get a list of all jobs: nebula-explorer/workflow/workflow-api/api-get-jobs.md +# - Get a list of jobs for a specified workflow: nebula-explorer/workflow/workflow-api/api-get-workflow-jobs.md +# - Query details for a specified job: nebula-explorer/workflow/workflow-api/api-desc-job.md +# - Cancel a running job: nebula-explorer/workflow/workflow-api/api-cancel-job.md +# - Get the result data of a specified task: nebula-explorer/workflow/workflow-api/api-desc-task.md +# - Inline frame: nebula-explorer/iframe.md +# - System settings: nebula-explorer/system-settings.md +# - Basic operations and shortcuts: nebula-explorer/ex-ug-shortcuts.md +# - FAQ: nebula-explorer/faq.md - Graph computing: - - Algorithm overview: graph-computing/algorithm-description.md +# - Algorithm overview: graph-computing/algorithm-description.md - NebulaGraph Algorithm: graph-computing/nebula-algorithm.md - - NebulaGraph Analytics: graph-computing/nebula-analytics.md - - NebulaGraph Explorer workflow: graph-computing/use-explorer.md - - - Spark Connector: nebula-spark-connector.md - - - Flink Connector: nebula-flink-connector.md +# - NebulaGraph Analytics: graph-computing/nebula-analytics.md +# - NebulaGraph Explorer workflow: graph-computing/use-explorer.md + +# - NebulaGraph Operator: +# - What is NebulaGraph Operator: nebula-operator/1.introduction-to-nebula-operator.md +# - Overview of using NebulaGraph Operator: nebula-operator/6.get-started-with-operator.md +# - Deploy NebulaGraph Operator: nebula-operator/2.deploy-nebula-operator.md +# - Deploy clusters: +# - Deploy LM: nebula-operator/3.deploy-nebula-graph-cluster/3.0.deploy-lm.md +# - Deploy clusters with Kubectl: nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +# - Deploy clusters with Helm: nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +# - Connect to NebulaGraph databases: nebula-operator/4.connect-to-nebula-graph-service.md +# - Configure clusters: +# - Custom configuration parameters for a NebulaGraph cluster: nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md +# - Reclaim PVs: nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md +# - Balance storage data after scaling out: nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md +# - Manage cluster logs: nebula-operator/8.custom-cluster-configurations/8.4.manage-running-logs.md +# - Enable mTLS: nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md +# - Upgrade NebulaGraph clusters: nebula-operator/9.upgrade-nebula-cluster.md +# - Specify a rolling update strategy: nebula-operator/11.rolling-update-strategy.md +# - Backup and restore: nebula-operator/10.backup-restore-using-operator.md +# - Self-healing: nebula-operator/5.operator-failover.md +# - FAQ: nebula-operator/7.operator-faq.md - Bench: nebula-bench.md - FAQ: 20.appendix/0.FAQ.md @@ -1185,12 +1168,9 @@ nav: - NebulaGraph Enterprise: 20.appendix/release-notes/nebula-ent-release-note.md - NebulaGraph Dashboard Enterprise: 20.appendix/release-notes/dashboard-ent-release-note.md - NebulaGraph Explorer: 20.appendix/release-notes/explorer-release-note.md - - Learning path: 20.appendix/learning-path.md - - Ecosystem tools: 20.appendix/6.eco-tool-version.md +# - Learning path: 20.appendix/learning-path.md - Port guide for company products: 20.appendix/port-guide.md - - Write tools: 20.appendix/write-tools.md - - How to contribute: 
15.contribution/how-to-contribute.md - - History timeline: 20.appendix/history.md + - Ecosystem tools: 20.appendix/6.eco-tool-version.md - Error code: 20.appendix/error-code.md # nav.pdf.begin