From 6142d03d566b4f56e4ed5f4b297e2917d73bf0c6 Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Tue, 18 Apr 2023 10:59:02 +0800 Subject: [PATCH 1/5] change-default-service-type-to-nodeport --- .../4.connect-to-nebula-graph-service.md | 162 +++++++++++------- 1 file changed, 101 insertions(+), 61 deletions(-) diff --git a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md index efadcbb058f..934e59fd906 100644 --- a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md +++ b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md @@ -6,62 +6,14 @@ After creating a NebulaGraph cluster with NebulaGraph Operator on Kubernetes, yo Create a NebulaGraph cluster with NebulaGraph Operator on Kubernetes. For more information, see [Deploy NebulaGraph clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). -## Connect to NebulaGraph databases from within a NebulaGraph cluster - -When a NebulaGraph cluster is created, NebulaGraph Operator automatically creates a Service named `-graphd-svc` with the type `ClusterIP` under the same namespace. With the IP of the Service and the port number of the NebulaGraph database, you can connect to the NebulaGraph database. - -1. Run the following command to check the IP of the Service: - - ```bash - $ kubectl get service -l app.kubernetes.io/cluster= # is a variable value. Replace it with the desired name. - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - nebula-graphd-svc ClusterIP 10.98.213.34 9669/TCP,19669/TCP,19670/TCP 23h - nebula-metad-headless ClusterIP None 9559/TCP,19559/TCP,19560/TCP 23h - nebula-storaged-headless ClusterIP None 9779/TCP,19779/TCP,19780/TCP,9778/TCP 23h - ``` - - Services of the `ClusterIP` type only can be accessed by other applications in a cluster. For more information, see [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/). - -2. Run the following command to connect to the NebulaGraph database using the IP of the `-graphd-svc` Service above: - - ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p - ``` - - For example: - - ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft - - - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. - - ``: The custom Pod name. - - `-addr`: The IP of the `ClusterIP` Service, used to connect to Graphd services. - - `-port`: The port to connect to Graphd services, the default port of which is 9669. - - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. - - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. - - A successful connection to the database is indicated if the following is returned: - - ```bash - If you don't see a command prompt, try pressing enter. - - (root@nebula) [(none)]> - ``` - -You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. 
The domain format is `-graphd..svc.`: - -```bash -kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p -``` - -The default value of `CLUSTER_DOMAIN` is `cluster.local`. - ## Connect to NebulaGraph databases from outside a NebulaGraph cluster via `NodePort` -You can create a Service of type `NodePort` to connect to NebulaGraph databases from outside a NebulaGraph cluster with a node IP and an exposed node port. You can also use load balancing software provided by cloud providers (such as Azure, AWS, etc.) and set the Service of type `LoadBalancer`. +You can create a `NodePort` type Service to access internal cluster services from outside the cluster using any node IP and the exposed node port. You can also utilize load balancing services provided by cloud vendors (such as Azure, AWS, etc.) by setting the Service type to `LoadBalancer`. This allows external access to internal cluster services through the public IP and port of the load balancer provided by the cloud vendor. The Service of type `NodePort` forwards the front-end requests via the label selector `spec.selector` to Graphd pods with labels `app.kubernetes.io/cluster: ` and `app.kubernetes.io/component: graphd`. +After creating a NebulaGraph cluster based on the [example template](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml), where `spec.graphd.service.type=NodePort`, the NebulaGraph Operator will automatically create a NodePort type Service named `-graphd-svc` in the same namespace. You can directly connect to the NebulaGraph database through any node IP and the exposed node port (see step 4 below). You can also create a custom Service according to your needs. + Steps: 1. Create a YAML file named `graphd-nodeport-service.yaml`. The file contents are as follows: @@ -93,10 +45,10 @@ Steps: app.kubernetes.io/component: graphd app.kubernetes.io/managed-by: nebula-operator app.kubernetes.io/name: nebula-graph - type: NodePort + type: NodePort # Set the type to NodePort. ``` - - NebulaGraph uses port `9669` by default. `19669` is the port of the Graph service in a NebulaGraph cluster. + - NebulaGraph uses port `9669` by default. `19669` is the HTTP port of the Graph service in a NebulaGraph cluster. - The value of `targetPort` is the port mapped to the database Pods, which can be customized. 2. Run the following command to create a NodePort Service. @@ -108,17 +60,15 @@ Steps: 3. Check the port mapped on all of your cluster nodes. ```bash - kubectl get services + kubectl get services -l app.kubernetes.io/cluster= # is the name of your NebulaGraph cluster. ``` Output: ```bash NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - nebula-graphd-svc ClusterIP 10.98.213.34 9669/TCP,19669/TCP,19670/TCP 23h nebula-graphd-svc-nodeport NodePort 10.107.153.129 9669:32236/TCP,19669:31674/TCP,19670:31057/TCP 24h - nebula-metad-headless ClusterIP None 9559/TCP,19559/TCP,19560/TCP 23h - nebula-storaged-headless ClusterIP None 9779/TCP,19779/TCP,19780/TCP,9778/TCP 23h + ... ``` As you see, the mapped port of NebulaGraph databases on all cluster nodes is `32236`. 
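    If you prefer not to copy the node IP and mapped port by hand, both can be looked up with `kubectl`. The sketch below is illustrative only; it assumes the Service name `nebula-graphd-svc-nodeport` and the port name `thrift` from the example manifest above, and it picks the first node's internal IP.

    ```bash
    # Look up a node IP and the NodePort mapped to the thrift (9669) port, then connect.
    # Service and port names follow the example manifest on this page; adjust as needed.
    NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
    NODE_PORT=$(kubectl get service nebula-graphd-svc-nodeport -o jsonpath='{.spec.ports[?(@.name=="thrift")].nodePort}')
    kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- \
      nebula-console -addr "${NODE_IP}" -port "${NODE_PORT}" -u root -p vesoft
    ```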
@@ -132,26 +82,116 @@ Steps: For example: ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console2 -addr 192.168.8.24 -port 32236 -u root -p vesoft + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 192.168.8.24 -port 32236 -u root -p vesoft If you don't see a command prompt, try pressing enter. (root@nebula) [(none)]> ``` - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. - - ``: The custom Pod name. The above example uses `nebula-console2`. + - ``: The custom Pod name. The above example uses `nebula-console`. - `-addr`: The IP of any node in a NebulaGraph cluster. The above example uses `192.168.8.24`. - `-port`: The mapped port of NebulaGraph databases on all cluster nodes. The above example uses `32236`. - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. + +## Connect to NebulaGraph databases from within a NebulaGraph cluster + +You can also create a `ClusterIP` type Service to provide an access point to the NebulaGraph database for other Pods within the cluster. By using the Service's IP and the Graph service's port number (9669), you can connect to the NebulaGraph database. For more information, see [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/). + +1. Create a file named `graphd-clusterip-service.yaml`. The file contents are as follows: + + ```yaml + apiVersion: v1 + kind: Service + metadata: + labels: + app.kubernetes.io/cluster: nebula + app.kubernetes.io/component: graphd + app.kubernetes.io/managed-by: nebula-operator + app.kubernetes.io/name: nebula-graph + name: nebula-graphd-svc + namespace: default + spec: + externalTrafficPolicy: Local + ports: + - name: thrift + port: 9669 + protocol: TCP + targetPort: 9669 + - name: http + port: 19669 + protocol: TCP + targetPort: 19669 + selector: + app.kubernetes.io/cluster: nebula + app.kubernetes.io/component: graphd + app.kubernetes.io/managed-by: nebula-operator + app.kubernetes.io/name: nebula-graph + type: ClusterIP # Set the type to ClusterIP. + ``` + - NebulaGraph uses port `9669` by default. `19669` is the HTTP port of the Graph service in a NebulaGraph cluster. + - `targetPort` is the port mapped to the database Pods, which can be customized. + +2. Create a ClusterIP Service. + + ```bash + kubectl create -f graphd-clusterip-service.yaml + ``` + +3. Check the IP of the Service: + + ```bash + $ kubectl get service -l app.kubernetes.io/cluster= # is the name of your NebulaGraph cluster. + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + nebula-graphd-svc ClusterIP 10.98.213.34 9669/TCP,19669/TCP,19670/TCP 23h + ... + ``` + +4. Run the following command to connect to the NebulaGraph database using the IP of the `-graphd-svc` Service above: + + ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p + ``` + + For example: + + ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft + + - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. + - ``: The custom Pod name. 
+ - `-addr`: The IP of the `ClusterIP` Service, used to connect to Graphd services. + - `-port`: The port to connect to Graphd services, the default port of which is `9669`. + - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. + - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. + + A successful connection to the database is indicated if the following is returned: + + ```bash + If you don't see a command prompt, try pressing enter. + + (root@nebula) [(none)]> + ``` + +You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. + +```bash +kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p +``` + +`service_port` is the port to connect to Graphd services, the default port of which is `9669`. + ## Connect to NebulaGraph databases from outside a NebulaGraph cluster via Ingress +When dealing with multiple pods in a cluster, managing services for each pod separately is not a good practice. Ingress is a Kubernetes resource that provides a unified entry point for accessing multiple services. Ingress can be used to expose multiple services under a single IP address. + Nginx Ingress is an implementation of Kubernetes Ingress. Nginx Ingress watches the Ingress resource of a Kubernetes cluster and generates the Ingress rules into Nginx configurations that enable Nginx to forward 7 layers of traffic. -You can use Nginx Ingress to connect to a NebulaGraph cluster from outside the cluster using a combination of the HostNetwork and DaemonSet pattern. +You can use Nginx Ingress to connect to a NebulaGraph cluster from outside the cluster using a combination of the host network and DaemonSet pattern. -As HostNetwork is used, the Nginx Ingress pod cannot be scheduled to the same node. To avoid listening port conflicts, some nodes can be selected and labeled as edge nodes in advance, which are specially used for the Nginx Ingress deployment. Nginx Ingress is then deployed on these nodes in a DaemonSet mode. +As the host network is used, the Nginx Ingress pod cannot be scheduled to the same node. To avoid listening port conflicts, some nodes can be selected and labeled as edge nodes in advance, which are specially used for the Nginx Ingress deployment. Nginx Ingress is then deployed on these nodes in a DaemonSet mode. Ingress does not support TCP or UDP services. For this reason, the nginx-ingress-controller pod uses the flags `--tcp-services-configmap` and `--udp-services-configmap` to point to an existing ConfigMap where the key refers to the external port to be used and the value refers to the format of the service to be exposed. The format of the value is `:`. 
From cdcb3554a065cacb546d3fe093758171f84a8051 Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Tue, 18 Apr 2023 15:31:56 +0800 Subject: [PATCH 2/5] opt operator --- .../2.deploy-nebula-operator.md | 49 +++-- .../3.1create-cluster-with-kubectl.md | 196 ++++-------------- .../3.2create-cluster-with-helm.md | 6 +- .../8.1.custom-conf-parameter.md | 3 +- .../8.4.manage-running-logs.md | 5 +- 5 files changed, 81 insertions(+), 178 deletions(-) diff --git a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md index 4c972ec93f7..42f7297dcd1 100644 --- a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md +++ b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md @@ -32,7 +32,7 @@ Before installing NebulaGraph Operator, you need to install the following softwa helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts ``` -1. Update information of available charts locally from repositories. +2. Update information of available charts locally from repositories. ```bash helm repo update @@ -40,7 +40,22 @@ Before installing NebulaGraph Operator, you need to install the following softwa For more information about `helm repo`, see [Helm Repo](https://helm.sh/docs/helm/helm_repo/). -3. Install NebulaGraph Operator. +3. Create a namespace for NebulaGraph Operator. + + ```bash + kubectl create namespace + ``` + + For example, run the following command to create a namespace named `nebula-operator-system`. + + ```bash + kubectl create namespace nebula-operator-system + ``` + + - All the resources of NebulaGraph Operator are deployed in this namespace. + - You can also use a different name. + +4. Install NebulaGraph Operator. ```bash helm install nebula-operator nebula-operator/nebula-operator --namespace= --version=${chart_version} @@ -54,13 +69,18 @@ Before installing NebulaGraph Operator, you need to install the following softwa - `nebula-operator-system` is a user-created namespace name. If you have not created this namespace, run `kubectl create namespace nebula-operator-system` to create one. You can also use a different name. - - `{{operator.release}}` is the version of the NebulaGraph Operator chart. When not specifying `--version`, the latest version of the nebula-operator chart is used by default. Run `helm search repo -l nebula-operator` to see chart versions. + - `{{operator.release}}` is the version of the nebula-operator chart. When not specifying `--version`, the latest version of the nebula-operator chart is used by default. Run `helm search repo -l nebula-operator` to see chart versions. You can customize the configuration items of the NebulaGraph Operator chart before running the installation command. For more information, see **Customize Helm charts** below. ### Customize Helm charts -Run `helm show values [CHART] [flags]` to see configurable options. +When executing the `helm install [NAME] [CHART] [flags]` command to install a chart, you can specify the chart configuration. For more information, see [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). + +View the related configuration options in the [nebula-operator chart](https://github.com/vesoft-inc/nebula-operator/blob/v{{operation.release}}/charts/nebula-operator/values.yaml) configuration file. + +Alternatively, you can view the configurable options through the command `helm show values nebula-operator/nebula-operator`, as shown below. 
+ For example: @@ -74,7 +94,7 @@ image: image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0 imagePullPolicy: Always kubeScheduler: - image: k8s.gcr.io/kube-scheduler:v1.18.8 + image: k8s.gcr.io/kube-scheduler:v1.22.12 imagePullPolicy: Always imagePullSecrets: [] @@ -93,7 +113,7 @@ controllerManager: memory: 100Mi admissionWebhook: - create: true + create: false scheduler: create: true @@ -119,17 +139,17 @@ Part of the above parameters are described as follows: | `kubernetesClusterDomain` | `cluster.local` | The cluster domain. | | `controllerManager.create` | `true` | Whether to enable the controller-manager component. | | `controllerManager.replicas` | `2` | The numeric value of controller-manager replicas. | -| `admissionWebhook.create` | `true` | Whether to enable Admission Webhook. | +| `admissionWebhook.create` | `false` | Whether to enable Admission Webhook. This option is disabled. To enable it, set the value to `true` and you will need to install [cert-manager](https://cert-manager.io/docs/installation/helm/). | | `shceduler.create` | `true` | Whether to enable Scheduler. | | `shceduler.schedulerName` | `nebula-scheduler` | The Scheduler name. | | `shceduler.replicas` | `2` | The numeric value of nebula-scheduler replicas. | You can run `helm install [NAME] [CHART] [flags]` to specify chart configurations when installing a chart. For more information, see [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). -The following example shows how to specify the NebulaGraph Operator's AdmissionWebhook mechanism to be turned off when you install NebulaGraph Operator (AdmissionWebhook is enabled by default): +The following example shows how to specify the NebulaGraph Operator's AdmissionWebhook mechanism to be turned on when you install NebulaGraph Operator (AdmissionWebhook is disabled by default): ```bash -helm install nebula-operator nebula-operator/nebula-operator --namespace= --set admissionWebhook.create=false +helm install nebula-operator nebula-operator/nebula-operator --namespace= --set admissionWebhook.create=true ``` For more information about `helm install`, see [Helm Install](https://helm.sh/docs/helm/helm_install/). @@ -142,17 +162,14 @@ For more information about `helm install`, see [Helm Install](https://helm.sh/do helm repo update ``` -2. Update NebulaGraph Operator by passing configuration parameters via `-set` or `-values` flag. - - - `--set`:Overrides values using the command line. - - `--values` (or `-f`):Overrides values using YAML files. +1. Update NebulaGraph Operator by passing configuration parameters via `--set`. - For configurable items, see the above-mentioned section **Customize Helm charts**. + - `--set`:Overrides values using the command line. For configurable items, see the above-mentioned section **Customize Helm charts**. - For example, to disable the AdmissionWebhook ( AdmissionWebhook is enabled by default), run the following command: + For example, to enable the AdmissionWebhook, run the following command: ```bash - helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} --set admissionWebhook.create=false + helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} --set admissionWebhook.create=true ``` For more information, see [Helm upgrade](https://helm.sh/docs/helm/helm_update/). 
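The same override can also be kept in a values file and passed with `-f`, which is easier to track in version control. The sketch below uses the assumptions from the commands above (namespace `nebula-operator-system`); the file name `operator-values.yaml` is arbitrary.

```bash
# Equivalent to --set admissionWebhook.create=true, but driven by a values file.
cat > operator-values.yaml <<EOF
admissionWebhook:
  create: true
EOF
helm upgrade nebula-operator nebula-operator/nebula-operator \
  --namespace=nebula-operator-system --version={{operator.release}} -f operator-values.yaml
```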
diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index 760d97aacd0..585389fc1e0 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -6,7 +6,9 @@ ## Prerequisites -- [Install NebulaGraph Operator](../2.deploy-nebula-operator.md) +- [You have installed NebulaGraph Operator](../2.deploy-nebula-operator.md) + +- [You have created StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) {{ ent.ent_begin }} - You have prepared the license file for NebulaGraph Enterprise Edition clusters. @@ -21,163 +23,49 @@ The following example shows how to create a NebulaGraph cluster by creating a cl 1. Create a file named `apps_v1alpha1_nebulacluster.yaml`. - - The file contents for a NebulaGraph Community cluster are as follows: + - For a NebulaGraph Community cluster - ``` - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - spec: - graphd: - resources: - requests: - cpu: "500m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-graphd - version: {{nebula.tag}} - logVolumeClaim: - resources: - requests: - storage: 2Gi - storageClassName: fast-disks - metad: - resources: - requests: - cpu: "500m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-metad - version: {{nebula.tag}} - logVolumeClaim: - resources: - requests: - storage: 2Gi - storageClassName: fast-disks - dataVolumeClaim: - resources: - requests: - storage: 2Gi - storageClassName: fast-disks - storaged: - resources: - requests: - cpu: "500m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-storaged - version: {{nebula.tag}} - logVolumeClaim: - resources: - requests: - storage: 2Gi - storageClassName: fast-disks - dataVolumeClaims: // You can mount multiple disks starting from NebulaGraph Operator 1.3.0. - - resources: - requests: - storage: 2Gi - storageClassName: fast-disks - - resources: - requests: - storage: 2Gi - storageClassName: fast-disks - enableAutoBalance: true - reference: - name: statefulsets.apps - version: v1 - schedulerName: default-scheduler - nodeSelector: - nebula: cloud - imagePullPolicy: Always - unsatisfiableAction: ScheduleAnyway - ``` + Create a file named `apps_v1alpha1_nebulacluster.yaml`. For the file content, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). + + The parameters in the file are described as follows: + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + | `metadata.name` | - | The name of the created NebulaGraph cluster. | + | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | + | `spec.graphd.images` | `vesoft/nebula-graphd` | The container image of the Graphd service. | + | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | + | `spec.graphd.service` | - | The Service configurations for the Graphd service. | + | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | + | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. 
| + | `spec.metad.images` | `vesoft/nebula-metad` | The container image of the Metad service. | + | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | + | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | + | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| + | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | + | `spec.storaged.images` | `vesoft/nebula-storaged` | The container image of the Storaged service. | + | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | + | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| + | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. | + | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| + | `spec.storaged.enableAutoBalance` | `true` |Whether to balance data automatically. | + | `spec.reference.name` | - | The name of the dependent controller. | + | `spec.schedulerName` | - | The scheduler name. | + | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | + |`spec.logRotate`| - |Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md).| + |`spec.enablePVReclaim`|`false`|Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md).| {{ ent.ent_begin }} - - The file contents for a NebulaGraph Enterprise cluster are as follows: - - ```yaml - # Contact our sales team to get a complete NebulaGraph Enterprise Edition cluster YAML example. - - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - annotations: - nebula-graph.io/owner: test - name: nebula - spec: - graphd: - readinessProbe: - failureThreshold: 3 - httpGet: - path: /status - port: 19669 - scheme: HTTP - initialDelaySeconds: 40 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 10 - image: reg.vesoft-inc.com/vesoft-ent/nebula-graphd - logVolumeClaim: - resources: - requests: - storage: 2Gi - storageClassName: fast-disks - replicas: 1 - resources: - limits: - cpu: "1" - memory: 1Gi - requests: - cpu: 500m - memory: 500Mi - version: {{nebula.tag}} - imagePullPolicy: Always - imagePullSecrets: - - name: vesoft - metad: - license: - secretName: nebula-license - licenseKey: nebula.license - ... - ``` - - The parameters in the file are described as follows: - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - | `metadata.name` | - | The name of the created NebulaGraph cluster. | - | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | - | `spec.graphd.images` | `vesoft/nebula-graphd` | The container image of the Graphd service. | - | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. 
| - | `spec.graphd.service` | - | The Service configurations for the Graphd service. | - | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | - | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | - | `spec.metad.images` | `vesoft/nebula-metad` | The container image of the Metad service. | - | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | - | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | - | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| - | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | - | `spec.storaged.images` | `vesoft/nebula-storaged` | The container image of the Storaged service. | - | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | - | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| - | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. | - | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| - | `spec.storaged.enableAutoBalance` | `true` |Whether to balance data automatically. | - | `spec.reference.name` | - | The name of the dependent controller. | - | `spec.schedulerName` | - | The scheduler name. | - | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | - | `spec.metad.license` | - | The configuration of the license for creating a NebulaGraph Enterprise Edition cluster. | + - For a NebulaGraph Enterprise cluster + + Create a file named `apps_v1alpha1_nebulacluster.yaml`. Contact our sales team to get a complete NebulaGraph Enterprise Edition cluster YAML example. + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + | `spec.metad.license` | - | The configuration of the license for creating a NebulaGraph Enterprise Edition cluster. | + |`spec.storaged.enableAutoBalance`| `false`| Specifies whether to enable automatic data balancing. For more information, see [Balance storage data after scaling out](../8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md).| + |`spec.enableBR`|`false`|Specifies whether to enable the BR tool. For more information, see [Backup and restore](../10.backup-restore-using-operator.md).| !!! 
enterpriseonly diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md index aaf8b948a1b..968f9c1b0ac 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md @@ -6,7 +6,9 @@ ## Prerequisite -- [Install NebulaGraph Operator](../2.deploy-nebula-operator.md) +- [You have installed NebulaGraph Operator](../2.deploy-nebula-operator.md) + +- [You have created StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) {{ ent.ent_begin }} - You have prepared the license file for NebulaGraph Enterprise Edition clusters. @@ -35,7 +37,7 @@ ```bash export NEBULA_CLUSTER_NAME=nebula # The desired NebulaGraph cluster name. export NEBULA_CLUSTER_NAMESPACE=nebula # The desired namespace where your NebulaGraph cluster locates. - export STORAGE_CLASS_NAME=fast-disks # The desired StorageClass name in your NebulaGraph cluster. + export STORAGE_CLASS_NAME=fast-disks # The name of the StorageClass that has been created. ``` 4. Create a namespace for your NebulaGraph cluster (If you have created one, skip this step). diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md index 2f2c24c8270..b20af9d3794 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md @@ -1,6 +1,6 @@ # Customize configuration parameters for a NebulaGraph cluster -Meta, Storage, and Graph services in a NebulaGraph Cluster have their configurations, which are defined as `config` in the YAML file of the CR instance (NebulaGraph cluster) you created. The settings in `config` are mapped and loaded into the ConfigMap of the corresponding service in Kubernetes. +Meta, Storage, and Graph services in a NebulaGraph Cluster have their own configuration settings, which are defined in the YAML file of the NebulaGraph cluster instance as `config`. These settings are mapped and loaded into the corresponding service's ConfigMap in Kubernetes. !!! note @@ -15,7 +15,6 @@ Config map[string]string `json:"config,omitempty"` You have created a NebulaGraph cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). - ## Steps The following example uses a cluster named `nebula` and the cluster's configuration file named `nebula_cluster.yaml` to show how to set `config` for the Graph service in a NebulaGraph cluster. diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.4.manage-running-logs.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.4.manage-running-logs.md index 921ea0c9cb7..5469992ef9b 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.4.manage-running-logs.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.4.manage-running-logs.md @@ -10,11 +10,8 @@ For example, to view the running logs of the Storage service: ```bash // View the name of the Storage service Pod, nebula-storaged-0. 
-$ kubectl get pods +$ kubectl get pods -l app.kubernetes.io/component=storaged NAME READY STATUS RESTARTS AGE -nebula-exporter-84b6974497-cr54d 1/1 Running 0 43h -nebula-graphd-0 1/1 Running 0 22h -nebula-metad-0 1/1 Running 0 45h nebula-storaged-0 1/1 Running 0 45h ... From e23da96732b9933f9bbcb4c565adfce90462dfc0 Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Tue, 18 Apr 2023 15:34:24 +0800 Subject: [PATCH 3/5] Update 3.2create-cluster-with-helm.md --- .../3.2create-cluster-with-helm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md index 968f9c1b0ac..1f68757192b 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md @@ -76,7 +76,7 @@ --set nameOverride=${NEBULA_CLUSTER_NAME} \ --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ # Specify the version of the NebulaGraph cluster. - --set nebula.version={{nebula.release}} \ + --set nebula.version=v{{nebula.release}} \ # Specify the version of the nebula-cluster chart. If not specified, the latest version of the chart is installed by default. --version={{operator.release}} ``` From 29f3ae52335c275e7ccc6aeabbffa68ae696b22a Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Wed, 19 Apr 2023 13:52:04 +0800 Subject: [PATCH 4/5] image changes --- docs-2.0/nebula-operator/2.deploy-nebula-operator.md | 4 ++-- .../8.1.custom-conf-parameter.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md index 42f7297dcd1..f2dd87b59ce 100644 --- a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md +++ b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md @@ -91,10 +91,10 @@ image: image: vesoft/nebula-operator:{{operator.tag}} imagePullPolicy: Always kubeRBACProxy: - image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0 + image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0 imagePullPolicy: Always kubeScheduler: - image: k8s.gcr.io/kube-scheduler:v1.22.12 + image: registry.k8s.io/kube-scheduler:v1.24.11 imagePullPolicy: Always imagePullSecrets: [] diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md index b20af9d3794..88d79153796 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md @@ -1,6 +1,6 @@ # Customize configuration parameters for a NebulaGraph cluster -Meta, Storage, and Graph services in a NebulaGraph Cluster have their own configuration settings, which are defined in the YAML file of the NebulaGraph cluster instance as `config`. These settings are mapped and loaded into the corresponding service's ConfigMap in Kubernetes. +Meta, Storage, and Graph services in a NebulaGraph Cluster have their own configuration settings, which are defined in the YAML file of the NebulaGraph cluster instance as `config`. These settings are mapped and loaded into the corresponding service's ConfigMap in Kubernetes. 
At the time of startup, the configuration present in the ConfigMap is mounted onto the directory `/usr/local/nebula/etc/` for every service. !!! note From b1c96690d488277d0dd088aeab440b886c1605ec Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Wed, 19 Apr 2023 14:07:43 +0800 Subject: [PATCH 5/5] Update 4.connect-to-nebula-graph-service.md --- docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md index 934e59fd906..41500ce3afc 100644 --- a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md +++ b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md @@ -131,6 +131,7 @@ You can also create a `ClusterIP` type Service to provide an access point to the app.kubernetes.io/name: nebula-graph type: ClusterIP # Set the type to ClusterIP. ``` + - NebulaGraph uses port `9669` by default. `19669` is the HTTP port of the Graph service in a NebulaGraph cluster. - `targetPort` is the port mapped to the database Pods, which can be customized.
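As a quick check of the ClusterIP ports discussed just above, the Graph service exposes a status endpoint on its HTTP port. The sketch below is illustrative only; it assumes the Service name `nebula-graphd-svc` in the `default` namespace and uses a throwaway curl Pod, so adjust the names to your deployment.

```bash
# Probe the Graph service status endpoint (HTTP port 19669) from inside the cluster.
kubectl run curl-check -ti --rm --restart=Never --image=curlimages/curl -- \
  curl -s http://nebula-graphd-svc.default.svc.cluster.local:19669/status
```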