refactor: rename operations to administration #1200

Merged 4 commits on Sep 30, 2024
2 changes: 1 addition & 1 deletion docs/contributor-guide/frontend/table-sharding.md
@@ -4,7 +4,7 @@ The sharding of stored data is essential to any distributed database. This docum

## Partition

-For the syntax of creating a partitioned table, please refer to the [Table Sharding](/user-guide/operations/data-management/table-sharding.md) section in the User Guide.
+For the syntax of creating a partitioned table, please refer to the [Table Sharding](/user-guide/administration/manage-data/table-sharding.md) section in the User Guide.

## Region

4 changes: 2 additions & 2 deletions docs/faq-and-others/faq.md
@@ -57,7 +57,7 @@ Please check out our initial version on [GitHub Repo](https://github.com/Greptim

Yes, GreptimeDB is a schemaless database that does not require creating tables in advance. Tables and columns are created automatically when writing data via the gRPC, InfluxDB, OpenTSDB, or Prometheus remote write protocols.

-For more information, refer to [this document](/user-guide/operations/data-management/basic-table-operations.md#create-table).
+For more information, refer to [this document](/user-guide/administration/manage-data/basic-table-operations.md#create-table).

### How do you measure the passing rate of PromQL compatibility tests? Is there any testing framework?

@@ -183,7 +183,7 @@ A minimum of 3 nodes is required, with each node running the 3 services: metasrv

It is not necessary to deploy all three services on each node. A small-sized cluster can be set up with 3 nodes dedicated to metasrv. Frontend and datanode can be deployed on equal nodes, with one container running two processes.

-For more general advice for deployment, please read [Capacity Plan](/user-guide/operations/capacity-plan.md).
+For more general advice for deployment, please read [Capacity Plan](/user-guide/administration/capacity-plan.md).

### Does GreptimeDB support inverted indexes, and does it use Tantivy?

2 changes: 1 addition & 1 deletion docs/reference/command-lines.md
@@ -197,5 +197,5 @@ greptime flownode start --node-id=0 --rpc-addr=127.0.0.1:6800 --metasrv-addrs=12

### Upgrade GreptimeDB version

-Please refer to [the upgrade steps](/user-guide/operations/upgrade.md)
+Please refer to [the upgrade steps](/user-guide/administration/upgrade.md)

4 changes: 2 additions & 2 deletions docs/reference/sql/admin.md
@@ -13,9 +13,9 @@ GreptimeDB provides some administration functions to manage the database and dat

* `flush_table(table_name)` to flush a table's memtables into SST file by table name.
* `flush_region(region_id)` to flush a region's memtables into SST file by region id. Find the region id through [PARTITIONS](./information-schema/partitions.md) table.
-* `compact_table(table_name, [type], [options])` to schedule a compaction task for a table by table name, read [compaction](/user-guide/operations/data-management/compaction.md#strict-window-compaction-strategy-swcs-and-manual-compaction) for more details.
+* `compact_table(table_name, [type], [options])` to schedule a compaction task for a table by table name, read [compaction](/user-guide/administration/manage-data/compaction.md#strict-window-compaction-strategy-swcs-and-manual-compaction) for more details.
* `compact_region(region_id)` to schedule a compaction task for a region by region id.
-* `migrate_region(region_id, from_peer, to_peer, [timeout])` to migrate regions between datanodes, please read the [Region Migration](/user-guide/operations/data-management/region-migration.md).
+* `migrate_region(region_id, from_peer, to_peer, [timeout])` to migrate regions between datanodes, please read the [Region Migration](/user-guide/administration/manage-data/region-migration.md).
* `procedure_state(procedure_id)` to query a procedure state by its id.
* `flush_flow(flow_name)` to flush a flow's output into the sink table.
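
These functions are invoked through SQL. A minimal sketch of the `ADMIN` statement form, assuming a table named `monitor` already exists; the procedure id below is a hypothetical placeholder:

```sql
-- Flush the memtables of the `monitor` table into SST files.
ADMIN flush_table('monitor');

-- Schedule a compaction task for the same table.
ADMIN compact_table('monitor');

-- Query the state of a procedure by its id (hypothetical id).
ADMIN procedure_state('538b7476-9f79-4e50-aa9c-b1de90710839');
```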

@@ -13,7 +13,7 @@ there are several key considerations:
- Data retention policy
- Hardware costs

-To monitor the various metrics of GreptimeDB, please refer to [Monitoring](/user-guide/operations/monitoring/export-metrics.md).
+To monitor the various metrics of GreptimeDB, please refer to [Monitoring](/user-guide/administration/monitoring/export-metrics.md).

## CPU

@@ -6,5 +6,5 @@
* [Expire Data by Setting TTL](/user-guide/manage-data/overview.md#manage-data-retention-with-ttl-policies)
* [Table Sharding](table-sharding.md): Partition tables by regions
* [Region Migration](region-migration.md): Migrate regions for load balancing
-* [Region Failover](/user-guide/operations/data-management/region-failover.md)
+* [Region Failover](/user-guide/administration/manage-data/region-failover.md)
* [Compaction](compaction.md)
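
As a quick sketch of the TTL item above, a retention policy can be attached when a table is created; the table name and schema here are hypothetical:

```sql
-- Rows older than seven days become eligible for expiration.
CREATE TABLE temperatures (
  ts TIMESTAMP TIME INDEX,
  temperature DOUBLE
) WITH (ttl = '7d');
```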
@@ -1,6 +1,6 @@
# Region Failover

-Region Failover provides the ability to recover regions from region failures without losing data. This is implemented via [Region Migration](/user-guide/operations/data-management/region-migration.md).
+Region Failover provides the ability to recover regions from region failures without losing data. This is implemented via [Region Migration](/user-guide/administration/manage-data/region-migration.md).

## Enable the Region Failover

14 changes: 14 additions & 0 deletions docs/user-guide/administration/overview.md
@@ -0,0 +1,14 @@
# Overview

This document addresses strategies and practices used in the administration of GreptimeDB.

* [Installation](/getting-started/installation/overview.md) for GreptimeDB and the [gtctl](/reference/gtctl.md) command line tool
* [Capacity Plan](/user-guide/administration/capacity-plan.md) for GreptimeDB based on your workload
* [Manage Data](/user-guide/administration/manage-data/overview.md) to avoid data loss, lower costs, and improve performance
* Database configuration: please read the [Configuration](/user-guide/deployments/configuration.md) reference
* GreptimeDB [Disaster Recovery](/user-guide/administration/disaster-recovery/overview.md)
* Cluster Failover for GreptimeDB by [Setting Remote WAL](./remote-wal/quick-start.md)
* [Monitoring metrics](/user-guide/administration/monitoring/export-metrics.md) and [Tracing](/user-guide/administration/monitoring/tracing.md) for GreptimeDB
* [Performance Tuning Tips](/user-guide/administration/performance-tuning-tips.md)
* [Upgrade](/user-guide/administration/upgrade.md) GreptimeDB to a new version
* Get the [runtime information](/user-guide/administration/runtime-info.md) of the cluster
26 changes: 26 additions & 0 deletions docs/user-guide/administration/runtime-info.md
@@ -0,0 +1,26 @@
# Runtime Information

The `INFORMATION_SCHEMA` database provides access to system metadata, such as the name of a database or table, the data type of a column, etc.

* Find the topology information of the cluster through the [CLUSTER_INFO](/reference/sql/information-schema/cluster-info.md) table.
* Find the table region distribution through the [PARTITIONS](/reference/sql/information-schema/partitions.md) and [REGION_PEERS](/reference/sql/information-schema/region-peers.md) tables.

For example, to find all the region ids of a table:

```sql
SELECT greptime_partition_id FROM PARTITIONS WHERE table_name = 'monitor'
```

Find the distribution of all regions in a table:

```sql
SELECT b.peer_id as datanode_id,
a.greptime_partition_id as region_id
FROM information_schema.partitions a LEFT JOIN information_schema.region_peers b
ON a.greptime_partition_id = b.region_id
WHERE a.table_name='monitor'
ORDER BY datanode_id ASC
```

For more information about the `INFORMATION_SCHEMA` database, please read the [reference](/reference/sql/information-schema/overview.md).
4 changes: 2 additions & 2 deletions docs/user-guide/concepts/data-model.md
@@ -78,7 +78,7 @@ CREATE TABLE access_logs (
- `remote_addr`, `http_status`, `http_method`, `http_refer` and `user_agent` are tags.
- `request` is a field that enables full-text index by the [`FULLTEXT` column option](/reference/sql/create.md#fulltext-column-option).

-To learn how to indicate `Tag`, `Timestamp`, and `Field` columns, Please refer to [table management](/user-guide/operations/data-management/basic-table-operations.md#create-a-table) and [CREATE statement](/reference/sql/create.md).
+To learn how to indicate `Tag`, `Timestamp`, and `Field` columns, please refer to [table management](/user-guide/administration/manage-data/basic-table-operations.md#create-a-table) and [CREATE statement](/reference/sql/create.md).

Of course, you can place metrics and logs in a single table at any time, which is also a key capability provided by GreptimeDB.
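
The column roles described above are declared directly in SQL; a minimal sketch, assuming a hypothetical `monitor` table:

```sql
CREATE TABLE monitor (
  host STRING,              -- Tag: part of the primary key
  ts TIMESTAMP TIME INDEX,  -- Timestamp: the time index column
  cpu DOUBLE,               -- Field
  memory DOUBLE,            -- Field
  PRIMARY KEY (host)
);
```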

@@ -95,4 +95,4 @@ GreptimeDB is designed on top of Table for the following reasons:
The multi-value model is used to model data sources, where a metric can have multiple values represented by fields.
The advantage of the multi-value model is that it can write or read multiple values to the database at once, reducing transfer traffic and simplifying queries. In contrast, the single-value model requires splitting the data into multiple records. Read the [blog](https://greptime.com/blogs/2024-05-09-prometheus) for more detailed benefits of the multi-value model.

-GreptimeDB uses SQL to manage table schema. Please refer to [table management](/user-guide/operations/data-management/basic-table-operations.md) for more information. However, our definition of schema is not mandatory and leans towards a **schemaless** approach, similar to MongoDB. For more details, see [Automatic Schema Generation](/user-guide/ingest-data/overview.md#automatic-schema-generation).
+GreptimeDB uses SQL to manage table schema. Please refer to [table management](/user-guide/administration/manage-data/basic-table-operations.md) for more information. However, our definition of schema is not mandatory and leans towards a **schemaless** approach, similar to MongoDB. For more details, see [Automatic Schema Generation](/user-guide/ingest-data/overview.md#automatic-schema-generation).
2 changes: 1 addition & 1 deletion docs/user-guide/concepts/features-that-you-concern.md
@@ -58,4 +58,4 @@ Please read the performance benchmark reports:

## Does GreptimeDB have disaster recovery solutions?

-Yes. Please refer to [disaster recovery](/user-guide/operations/disaster-recovery/overview.md).
+Yes. Please refer to [disaster recovery](/user-guide/administration/disaster-recovery/overview.md).
2 changes: 1 addition & 1 deletion docs/user-guide/concepts/storage-location.md
@@ -33,7 +33,7 @@ The storage file structure of GreptimeDB includes the following:

The `data` directory in the file structure can be stored in cloud storage. Please refer to [Storage option](../deployments/configuration.md#storage-options) for more details.

-Please note that only storing the data directory in object storage is not sufficient to ensure data reliability and disaster recovery. The `wal` and `metadata` also need to be considered for disaster recovery. Please refer to the [disaster recovery documentation](/user-guide/operations/disaster-recovery/overview.md).
+Please note that only storing the data directory in object storage is not sufficient to ensure data reliability and disaster recovery. The `wal` and `metadata` also need to be considered for disaster recovery. Please refer to the [disaster recovery documentation](/user-guide/administration/disaster-recovery/overview.md).

## Multiple storage engines

6 changes: 3 additions & 3 deletions docs/user-guide/deployments/configuration.md
@@ -386,9 +386,9 @@ default_ratio = 1.0
- `enable_otlp_tracing`: whether to turn on tracing, not turned on by default.
- `otlp_endpoint`: Export the target endpoint of tracing using gRPC-based OTLP protocol, the default value is `localhost:4317`.
- `append_stdout`: Whether to append logs to stdout. Defaults to `true`.
-- `tracing_sample_ratio`: This field can configure the sampling rate of tracing. How to use `tracing_sample_ratio`, please refer to [How to configure tracing sampling rate](/user-guide/operations/monitoring/tracing.md#guide-how-to-configure-tracing-sampling-rate).
+- `tracing_sample_ratio`: This field configures the sampling rate of tracing. For usage details, please refer to [How to configure tracing sampling rate](/user-guide/administration/monitoring/tracing.md#guide-how-to-configure-tracing-sampling-rate).

-How to use distributed tracing, please reference [Tracing](/user-guide/operations/monitoring/tracing.md#tutorial-use-jaeger-to-trace-greptimedb)
+For how to use distributed tracing, please refer to [Tracing](/user-guide/administration/monitoring/tracing.md#tutorial-use-jaeger-to-trace-greptimedb)
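
Put together, the logging options above might appear in the TOML configuration file as follows; a minimal sketch with assumed values:

```toml
[logging]
# Export traces via the gRPC-based OTLP protocol (off by default).
enable_otlp_tracing = true
otlp_endpoint = "localhost:4317"
append_stdout = true

[logging.tracing_sample_ratio]
default_ratio = 1.0
```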

### Region engine options

@@ -482,7 +482,7 @@ The `meta_client` configures the Metasrv client, including:
### Monitor metrics options

These options are used to save system metrics to GreptimeDB itself.
-For instructions on how to use this feature, please refer to the [Monitoring](/user-guide/operations/monitoring/export-metrics.md) guide.
+For instructions on how to use this feature, please refer to the [Monitoring](/user-guide/administration/monitoring/export-metrics.md) guide.

```toml
[export_metrics]
2 changes: 1 addition & 1 deletion docs/user-guide/deployments/overview.md
@@ -18,7 +18,7 @@ Learn how to [run GreptimeDB on Android devices](run-on-android.md).

## Capacity plan

-Understand how to [plan for capacity](/user-guide/operations/capacity-plan.md) to ensure your GreptimeDB deployment can handle your workload.
+Understand how to [plan for capacity](/user-guide/administration/capacity-plan.md) to ensure your GreptimeDB deployment can handle your workload.

## GreptimeCloud

2 changes: 1 addition & 1 deletion docs/user-guide/ingest-data/for-iot/sql.md
@@ -32,7 +32,7 @@ The above statement will create a table with the following schema:
```

For more information about the `CREATE TABLE` statement,
-please refer to [table management](/user-guide/operations/data-management/basic-table-operations.md#create-a-table).
+please refer to [table management](/user-guide/administration/manage-data/basic-table-operations.md#create-a-table).

## Insert data

2 changes: 1 addition & 1 deletion docs/user-guide/manage-data/overview.md
@@ -317,5 +317,5 @@ For more information about TTL policies, please refer to the [CREATE](/reference

## More data management operations

-For more advanced data management operations, such as basic table operations, table sharding and region migration, please refer to the [Data Management](/user-guide/operations/data-management/overview.md) in the administration section.
+For more advanced data management operations, such as basic table operations, table sharding and region migration, please refer to the [Data Management](/user-guide/administration/manage-data/overview.md) in the administration section.

39 changes: 0 additions & 39 deletions docs/user-guide/operations/admin.md

This file was deleted.

9 changes: 0 additions & 9 deletions docs/user-guide/operations/overview.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/user-guide/overview.md
@@ -66,5 +66,5 @@ Having understood these features, you can now go directly to exploring the featu
* [Integrations](./integrations/overview.md)
* [Protocols](./protocols/overview.md)
* [Continuous Aggregation](./continuous-aggregation/overview.md)
-* [Operations](./operations/overview.md)
+* [Operations](./administration/overview.md)

2 changes: 1 addition & 1 deletion docs/user-guide/protocols/mysql.md
@@ -14,7 +14,7 @@ mysql -h <host> -P 4002 -u <username> -p

## Table management

-Please refer to [Table Management](/user-guide/operations/data-management/basic-table-operations.md).
+Please refer to [Table Management](/user-guide/administration/manage-data/basic-table-operations.md).

## Ingest data

2 changes: 1 addition & 1 deletion docs/user-guide/protocols/postgresql.md
@@ -15,7 +15,7 @@ psql -h <host> -p 4003 -U <username> -d public

## Table management

-Please refer to [Table Management](/user-guide/operations/data-management/basic-table-operations.md).
+Please refer to [Table Management](/user-guide/administration/manage-data/basic-table-operations.md).

## Ingest data

4 changes: 2 additions & 2 deletions docs/user-guide/query-data/sql.md
@@ -4,7 +4,7 @@ GreptimeDB supports full SQL for querying data from a database.

In this document, we will use the `monitor` table to demonstrate how to query data.
For instructions on creating the `monitor` table and inserting data into it,
-Please refer to [table management](/user-guide/operations/data-management/basic-table-operations.md#create-a-table) and [Ingest Data](/user-guide/ingest-data/for-iot/sql.md).
+please refer to [table management](/user-guide/administration/manage-data/basic-table-operations.md#create-a-table) and [Ingest Data](/user-guide/ingest-data/for-iot/sql.md).

## Basic query

@@ -265,7 +265,7 @@ SELECT DISTINCT ON (host) * FROM monitor ORDER BY host, ts DESC;

GreptimeDB supports [Range Query](/reference/sql/range.md) to aggregate data by time window.
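
A minimal sketch of the range syntax, assuming the `monitor` table from earlier; the window sizes here are arbitrary:

```sql
-- Average CPU per host over 10-second windows, aligned every 5 seconds.
SELECT
  ts,
  host,
  avg(cpu) RANGE '10s'
FROM monitor
ALIGN '5s' BY (host);
```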

-Suppose we have the following data in the [`monitor` table](/user-guide/operations/data-management/basic-table-operations.md#create-a-table):
+Suppose we have the following data in the [`monitor` table](/user-guide/administration/manage-data/basic-table-operations.md#create-a-table):

```sql
+-----------+---------------------+------+--------+
8 changes: 2 additions & 6 deletions i18n/zh/docusaurus-plugin-content-docs/current.json
@@ -43,8 +43,8 @@
"message": "客户端库",
"description": "The label for category Client Libraries in sidebar docs"
},
-"sidebar.docs.category.Operations": {
-"message": "运维操作",
+"sidebar.docs.category.Administration": {
+"message": "管理",
"description": "The label for category Operations in sidebar docs"
},
"sidebar.docs.category.Deployments": {
@@ -171,10 +171,6 @@
"message": "管理数据",
"description": "The label for category Manage Data in sidebar docs"
},
-"sidebar.docs.category.Data Management": {
-"message": "管理数据",
-"description": "The label for category Manage Data in sidebar docs"
-},
"sidebar.docs.category.Protocols": {
"message": "协议",
"description": "The label for category Manage Data in sidebar docs"
@@ -4,7 +4,7 @@

## 分区

-有关创建分区表的语法,请参阅用户指南中的[表分片](/user-guide/operations/data-management/table-sharding.md)部分。
+有关创建分区表的语法,请参阅用户指南中的[表分片](/user-guide/administration/manage-data/table-sharding.md)部分。

## Region
