From cda9c969ff2f9fbc6c42b5d33845bc61c3dc290b Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Tue, 31 Jan 2023 17:39:42 +0800
Subject: [PATCH 001/135] add 6.6.0 release notes

---
 releases/release-6.6.0.md | 409 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 409 insertions(+)
 create mode 100644 releases/release-6.6.0.md

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
new file mode 100644
index 000000000000..665b29b49d31
--- /dev/null
+++ b/releases/release-6.6.0.md
@@ -0,0 +1,409 @@
+---
+title: TiDB 6.6.0 Release Notes
+---
+
+# TiDB 6.6.0 Release Notes
+
+Release date: x x, 2023
+
+TiDB version: 6.6.0
+
+Quick access: [Quick start](https://docs.pingcap.com/zh/tidb/v6.6/quick-start-with-tidb) | [Download offline packages](https://cn.pingcap.com/product-community/)
+
+In v6.6.0, the key new features and improvements are as follows:
+
+- MySQL 8.0-compatible multi-valued indexes (experimental)
+- Resource control based on resource groups (experimental)
+- Stable wake-up model for pessimistic lock queues
+- Batch aggregation of data requests
+
+## New features
+
+### SQL
+
+* Support dynamic resource control for DDL operations (experimental) [#issue](link) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang**
+
+    TiDB v6.6.0 introduces dynamic resource control for DDL operations. By automatically controlling the CPU and memory usage of DDL tasks, TiDB minimizes the impact of DDL change tasks on online workloads.
+
+    For more information, see [documentation](link).
+
+* Support MySQL-compatible foreign key constraints (experimental) [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt**
+
+    TiDB v6.6.0 introduces MySQL-compatible foreign key constraints, which support data referencing and constraint validation within a table and across tables, as well as cascading operations. This feature helps maintain data consistency, improve data quality, and facilitate data modeling.
+
+    For more information, see [documentation](/sql-statements/sql-statement-foreign-key.md).
+
+* Support flashing back DDL operations with the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang**
+
+    The [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement quickly rolls back the entire cluster to a specified point in time within the Garbage Collection (GC) lifetime. In TiDB v6.6.0, this statement additionally supports undoing DDL operations. It can be used to quickly revert erroneous DML or DDL operations on a cluster, roll back a cluster within minutes, and roll back on the timeline multiple times to determine when a specific data change occurred.
+
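+    For example, assuming a mistaken DDL or DML change happened shortly after a known point in time, the rollback could look like this (the timestamp is illustrative and must be within the GC lifetime):
+
+    ```sql
+    -- Roll the whole cluster back to a point before the mistaken operation:
+    FLASHBACK CLUSTER TO TIMESTAMP '2023-01-30 10:00:00';
+    ```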
+    For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md).
+
+* Support a distributed parallel execution framework for DDL operations (experimental) [#issue](link) @[zimulala](https://github.com/zimulala) **tw@ran-huang**
+
+    In previous versions, only one TiDB instance in the entire cluster, the DDL Owner, was allowed to handle schema change tasks. To further improve DDL concurrency, TiDB v6.6.0 introduces a distributed parallel execution framework for DDL, which allows all TiDB instances in the cluster to concurrently execute the subtasks of the same schema change task as Owners, thus speeding up DDL execution.
+
+    For more information, see [documentation](link).
+
+* MySQL-compatible multi-valued indexes (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn**
+
+    TiDB v6.6.0 introduces MySQL-compatible multi-valued indexes. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot speed up such filtering. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in a JSON column has a multi-valued index, retrieval conditions using the `MEMBER OF()`, `JSON_CONTAINS()`, or `JSON_OVERLAPS()` functions can use the multi-valued index for filtering, which reduces a lot of I/O consumption and improves execution speed.
+
+    The introduction of multi-valued indexes further enhances the JSON type and also improves TiDB's compatibility with MySQL 8.0.
+
+    For more information, see [documentation](/sql-statements/sql-statement-create-index.md#多值索引).
+
+* Creating bindings according to historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn**
+
+    In v6.5, TiDB extended the binding targets of the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statement to support creating bindings according to historical execution plans. In v6.6, this feature becomes GA. The selection of execution plans is no longer limited to the current TiDB node: a historical execution plan generated by any TiDB node can be selected as the target of a [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves usability.
+
+    For more information, see [documentation](/sql-plan-management.md#根据历史执行计划创建绑定).
+
+* Support `ALTER TABLE…REORGANIZE PARTITION` [#15000](https://github.com/pingcap/tidb/issues/15000) @[mjonss](https://github.com/mjonss) **tw@qiancai**
+
+    TiDB supports the `ALTER TABLE…REORGANIZE PARTITION` syntax, which reorganizes some or all partitions of a table into a new partition structure without losing data.
+
+    For more information, see [documentation](/partitioned-table.md#重组分区).
+
+* [Placement Rules in SQL](https://docs.pingcap.com/zh/tidb/dev/placement-rules-in-sql) supports specifying `SURVIVAL_PREFERENCE` [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai**
+
+    By specifying `SURVIVAL_PREFERENCE`, you can control the following:
+    1. 
For a TiDB cluster deployed across regions, when one region fails, the specified databases or tables can still be served from another region.
+    2. For a TiDB cluster deployed in a single region, when one availability zone fails, the specified databases or tables can still be served from another availability zone.
+
+    For more information, see [documentation](/placement-rules-in-sql.md#生存偏好).
+
+### Security
+
+* TiFlash supports TLS certificate hot reload @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai**
+
+    For a TiDB cluster with encrypted data transmission between components enabled, when the TLS certificate of TiFlash expires and a new one is reissued to TiFlash, the new TiFlash TLS certificate can be loaded automatically without restarting the TiDB cluster. The rotation of expired TLS certificates between components within a TiDB cluster does not affect the normal use of the cluster, which ensures the high availability of the cluster.
+
+    For more information, see [documentation](https://docs.pingcap.com/tidb/stable/enable-tls-between-components).
+
+### Observability
+
+* Quick execution plan binding [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) **tw@ran-huang**
+
+    The quick execution plan binding feature allows you to bind a SQL statement to a specific plan in TiDB Dashboard within one minute.
+
+    By providing a user-friendly interface, this feature simplifies the process of binding plans in TiDB, reduces the complexity of the binding process, and improves the efficiency and user experience of plan binding.
+
+    For more information, see [documentation](/dashboard/dashboard-statement-details.md).
+
+* Add warnings for the execution plan cache [#issue](link) @[qw4990](https://github.com/qw4990) **tw@TomShawn**
+
+    When an execution plan cannot enter the execution plan cache, TiDB returns a warning explaining why it cannot be cached, which makes diagnostics easier. For example:
+
+    ```sql
+    mysql> prepare st from 'select * from t where a<?';
+    Query OK, 0 rows affected (0.00 sec)
+
+    mysql> set @a='1';
+    Query OK, 0 rows affected (0.00 sec)
+
+    mysql> execute st using @a;
+    Empty set, 1 warning (0.01 sec)
+
+    mysql> show warnings;
+    +---------+------+----------------------------------------------+
+    | Level   | Code | Message                                      |
+    +---------+------+----------------------------------------------+
+    | Warning | 1105 | skip plan-cache: '1' may be converted to INT |
+    +---------+------+----------------------------------------------+
+    ```
+
+    In the preceding example, the optimizer converts a non-INT type to the INT type. The resulting plan might become risky as the parameter changes, so TiDB does not cache the plan.
+
+    For more information, see [documentation](/sql-prepared-plan-cache.md#prepared-plan-cache-诊断).
+
+* Add a warning field to slow query logs [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt**
+
+    A new field `Warning` is added to slow query logs. It records, in JSON format, the warnings generated during the execution of a slow query, to help diagnose query performance issues.
+
+    You can also view these warnings on the slow query page of TiDB Dashboard.
+
+    For more information, see [documentation](/identify-slow-queries.md).
+
+* 
Support automatically capturing the generation of execution plans [#38779](https://github.com/pingcap/tidb/issues/38779) @[Yisaer](https://github.com/Yisaer) **tw@ran-huang**
+
+    When troubleshooting execution plan issues, `PLAN REPLAYER` helps preserve the scene and improves diagnostic efficiency. However, in some scenarios, the generation of certain execution plans cannot be reproduced at will, which makes diagnostics harder. To address such issues, `PLAN REPLAYER` extends its capability to automatic capture. With the `PLAN REPLAYER CAPTURE` command, you can register the target SQL statement in advance and also specify the target execution plan. When TiDB detects that an executed SQL statement and its execution plan match the registered targets, it automatically generates and packages the `PLAN REPLAYER` information, which improves the efficiency of diagnosing unstable execution plans.
+
+    To enable this feature, set the system variable [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) to `ON`.
+
+    For more information, see [documentation](/sql-plan-replayer.md#使用-plan-replayer-capture-抓取目标计划).
+
+* Statements summary persistence (experimental) [#40812](https://github.com/pingcap/tidb/issues/40812) @[mornyx](https://github.com/mornyx) **tw@shichun-0415**
+
+    Previously, the statements summary was maintained only in memory, so all of its data got lost once TiDB restarted. After the persistence configuration is enabled, historical data is periodically written to disks, the data source of queries on the related system tables changes from memory to disks, and historical data survives TiDB restarts.
+
+    For more information, see [documentation](/statement-summary-tables.md#持久化-statements-summary).
+
+### Performance
+
+* Use Witness to save costs [#12876](https://github.com/tikv/tikv/issues/12876) [@Connor1996](https://github.com/Connor1996) [@ethercflow](https://github.com/ethercflow) **tw@Oreoxmt**
+
+    In cloud environments, when TiKV uses storage such as AWS EBS or GCP Persistent Disk as single-node storage, the durability provided is higher than that of physical disks. In this case, using 3 Raft replicas in TiKV is feasible but not necessary. To reduce costs, TiKV introduces the Witness feature, that is, the 2 Replicas With 1 Log Only mechanism. The 1 Log Only replica stores only Raft logs without applying data, while data consistency is still guaranteed by the Raft protocol. Compared with the standard 3-replica architecture, Witness saves storage resources and CPU usage.
+
+    For more information, see [documentation](/use-witness-to-save-costs.md).
+
+* The TiFlash engine supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai**
+
+    The Stale Read feature was officially released in v5.1.1, allowing TiDB to read historical data at a specified point in time or within a specified time range. Stale Read lets the storage engine read data from local replicas directly, which reduces read latency and improves query performance. However, only the TiKV engine supported Stale Read; even if a queried table had TiFlash replicas, TiDB could only use TiKV replicas to query historical data. In v6.6.0, the TiFlash engine also supports Stale Read. When you query historical data at a specified point in time or within a specified time range using the `AS OF TIMESTAMP` syntax or the `tidb_read_staleness` system variable, if the queried table has TiFlash replicas, the optimizer can choose to read the historical data from the TiFlash replicas.
+
+    For more information, see [documentation](/stale-read.md).
+
+* Support pushing down the following string functions to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@shichun-0415**
+
+    * `regexp_replace`
+
+* The TiFlash engine supports an independent MVCC bitmap filter [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin](https://github.com/JinheLin) **tw@qiancai**
+
+    The data scanning process of the TiFlash engine includes operations such as MVCC filtering and scanning column data. Because MVCC filtering was highly coupled with other data scanning operations, the data scanning process could not be optimized. In v6.6.0, TiFlash decouples the MVCC filtering operations from the overall data scanning process and provides an independent MVCC bitmap filter, which lays a foundation for subsequent optimization of the data scanning process.
+
+* Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn**
+
+    When TiDB sends a data request to TiKV, it compiles the request into different subtasks according to the Regions where the data is located, and each subtask processes requests for only a single Region. When the accessed data is highly discrete, even if the volume of data is not large, many subtasks are generated, which in turn produce many RPC requests and consume extra time. In v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of subtasks and the overhead of RPC requests. When data is highly discrete and the gRPC thread pool resources are tight, batching requests can improve performance by more than 50%.
+
+    This feature is enabled by default. You can set the batch size of requests using the system variable [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size).
+
+* Add a series of optimizer hints [#39964](https://github.com/pingcap/tidb/issues/39964) @[Reminiscent](https://github.com/Reminiscent) **tw@TomShawn**
+
+    TiDB adds a series of optimizer hints in this version to control the execution plan selection of `LIMIT` operations and some behaviors during MPP execution, including:
+
+    - [`KEEP_ORDER()`](/optimizer-hints.md#keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index and keep the order of the index when reading data, generating plans similar to `Limit + IndexScan(keep order: true)`.
+    - [`NO_KEEP_ORDER()`](/optimizer-hints.md#no_keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index without keeping the order when reading data, generating plans similar to `TopN + IndexScan(keep order: false)`.
+    - [`SHUFFLE_JOIN()`](/optimizer-hints.md#shuffle_joint1_name--tl_name-): takes effect on MPP. It tells the optimizer to use the Shuffle Join algorithm for the specified tables.
+    - [`BROADCAST_JOIN()`](/optimizer-hints.md#broadcast_joint1_name--tl_name-): takes effect on MPP. It tells the optimizer to use the Broadcast Join algorithm for the specified tables.
+    - [`MPP_1PHASE_AGG()`](/optimizer-hints.md#mpp_1phase_agg): takes effect on MPP. It tells the optimizer to use the one-phase aggregation algorithm for all aggregate functions in the specified query block.
+    - [`MPP_2PHASE_AGG()`](/optimizer-hints.md#mpp_2phase_agg): takes effect on MPP. It tells the optimizer to use the two-phase aggregation algorithm for all aggregate functions in the specified query block.
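+
+    As a sketch of how such hints are written (table and index names are hypothetical):
+
+    ```sql
+    -- Keep the index order so that the plan becomes Limit + IndexScan(keep order: true):
+    SELECT /*+ KEEP_ORDER(t, idx_a) */ * FROM t ORDER BY a LIMIT 10;
+
+    -- In MPP mode, use the Shuffle Join algorithm for the specified tables:
+    SELECT /*+ SHUFFLE_JOIN(t1, t2) */ * FROM t1, t2 WHERE t1.id = t2.id;
+    ```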
+
+    The continuous introduction of optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance.
+
+* Remove the execution plan cache restriction on `LIMIT` clauses [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415**
+
+    TiDB removes a restriction of the execution plan cache: `LIMIT` clauses with variables, such as `Limit ?` or `Limit 10, ?`, can now enter the execution plan cache. This allows more SQL statements to benefit from the plan cache and improves execution efficiency.
+
+    For more information, see [documentation](/sql-prepared-plan-cache.md).
+
+* Stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn**
+
+    If an application encounters frequent single-point pessimistic lock conflicts, the original wake-up mechanism cannot guarantee the time for a transaction to acquire the lock, which causes high long-tail latency or even lock acquisition timeouts. In v6.6.0, you can enable the stable wake-up model for pessimistic locks by setting the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-从-v660-版本开始引入) to `ON`. In the new wake-up model, the wake-up order of a queue is strictly controlled, which avoids resource waste caused by invalid wake-ups. In scenarios with severe lock conflicts, this reduces long-tail latency and the P99 response time.
+
+    For more information, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-从-v660-版本开始引入).
+
+### Transactions
+
+* Feature title [#issue](link) @[Contributor GitHub ID](link)
+
+    Feature description (what the feature is, its value for users in which scenarios, and how to use it)
+
+    For more information, see [documentation](link).
+
+### Stability
+
+* Resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd**
+
+    A TiDB cluster supports creating resource groups, mapping different database users to corresponding resource groups, and setting quotas for each resource group based on actual needs. When cluster resources are limited, all resources used by sessions in the same resource group are limited to the quota, which prevents one resource group from over-consuming resources and suppressing the normal operation of sessions in other resource groups. Built-in system views show the actual resource usage, helping you configure resources more appropriately.
+
+    The introduction of resource control is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not completely crowd out the resources needed by other units. With this technology, you can combine multiple small and medium-sized applications from different systems into a single TiDB cluster: a workload increase in one application does not affect the normal operation of other applications, and when the system workload is low, busy applications can still be allocated the required system resources even if they exceed their quotas, achieving maximum resource utilization. Likewise, you can combine all test environments into a single cluster, or group batch tasks with heavy resource consumption into a single resource group: while guaranteeing that important applications obtain the necessary resources, this improves hardware utilization and reduces operating costs. In addition, proper use of resource control can reduce the number of clusters, the difficulty of operations and maintenance, and management costs.
+
+    In v6.6, to enable resource control, you need to turn on both the TiDB global variable
[`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) and the TiKV configuration item [`resource_control.enabled`](/tikv-configuration-file.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5). The currently supported quota method is based on "[usage](/tidb-RU.md)" (that is, Request Unit, or RU). RU is TiDB's unified abstraction unit for system resources such as CPU and I/O.
+
+    For more information, see [documentation](/tidb-resource-control.md).
+
+* Use temporary Witness replicas to speed up replica recovery [#12876](https://github.com/tikv/tikv/issues/12876) [@Connor1996](https://github.com/Connor1996) [@ethercflow](https://github.com/ethercflow) **tw@Oreoxmt**
+
+    The Witness feature can be used to speed up failover recovery and improve system availability. For example, when only 2 of 3 replicas survive, the majority requirement is still met, but the system is fragile, and fully recovering a new member usually takes a long time (a snapshot must be copied first and then the latest logs applied), especially when the Region snapshot is large. Moreover, copying replicas might put more pressure on the unhealthy replicas. Therefore, adding a Witness first can quickly remove the unhealthy node and guarantee log safety during data recovery. Afterwards, the PD rule checker turns the Witness replica into a normal Voter.
+
+    For more information, see [documentation](/use-witness-to-speed-up-failover.md).
+
+### Usability
+
+* Support dynamically modifying the `store-io-pool-size` parameter [#13964](https://github.com/tikv/tikv/issues/13964) @[LykxSassinator](https://github.com/LykxSassinator) **tw@shichun-0415**
+
+    The TiKV parameter `raftstore.store-io-pool-size` specifies the number of threads in the thread pool that processes Raft I/O tasks, and it needs to be adjusted when tuning TiKV performance. Before v6.6.0, this parameter could not be modified dynamically. v6.6.0 supports dynamically modifying this parameter, which improves the flexibility of TiKV performance tuning.
+
+    For more information, see [documentation](/dynamic-config.md).
+
+* Support specifying an initialization SQL script, via a command-line parameter or configuration item, to be executed when a TiDB cluster starts for the first time [#35625](https://github.com/pingcap/tidb/pull/35625) @[morgo](https://github.com/morgo) **tw@TomShawn**
+
+    The command-line parameter `--initialize-sql-file` specifies a SQL script to be executed when a TiDB cluster starts for the first time. It can be used to modify the values of system variables, create users, and grant privileges.
+
+    For more information, see [configuration item `initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-从-v660-版本开始引入).
+
+### MySQL compatibility
+
+* Feature title [#issue](link) @[Contributor GitHub ID](link)
+
+    Feature description (what the feature is, its value for users in which scenarios, and how to use it)
+
+    For more information, see [documentation](link).
+
+### Data migration
+
+* Data Migration (DM) integrates the Physical Import Mode of TiDB Lightning, improving full migration performance by up to 10 times @[lance6716](https://github.com/lance6716) **tw@ran-huang**
+
+    Feature description: Data Migration
(DM)'s full data migration capability integrates TiDB Lightning's Physical Import Mode, which improves the performance of full data migration by up to 10 times and greatly shortens the migration time for large volumes of data. Previously, when the data volume was large, users had to configure a separate TiDB Lightning task in Physical Import Mode for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. With this capability integrated, you no longer need to configure a TiDB Lightning task to migrate a large volume of data: one DM task can accomplish the migration.
+
+    For more information, see [documentation](https://github.com/pingcap/docs-cn/pull/12296).
+
+### Data sharing and subscription
+
+* The TiKV-CDC tool is GA, supporting Change Data Capture for RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt**
+
+    TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. TiKV can work independently of TiDB and form a KV database together with PD; this product form is called RawKV. TiKV-CDC subscribes to RawKV data changes and replicates them to a downstream TiKV cluster in real time, which enables cross-cluster replication for RawKV.
+
+    For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc-cn/).
+
+* Changefeeds that replicate to downstream Kafka can distribute the replication tasks of a single upstream table to multiple TiCDC nodes, which scales out single-table replication performance horizontally [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt**
+
+    Feature description: a changefeed whose downstream is Kafka can schedule the replication tasks of a single upstream table to multiple TiCDC nodes. Before this feature was released, when a single upstream table had a large volume of written data, the replication capability of that table could not be scaled out, and replication latency increased. With this feature, you can solve the single-table replication performance issue by scaling out horizontally.
+
+    For more information, see [documentation](https://github.com/pingcap/docs-cn/pull/12693).
+
+### Deployment and maintenance
+
+* Feature title [#issue](link) @[Contributor GitHub ID](link)
+
+    Feature description (what the feature is, its value for users in which scenarios, and how to use it)
+
+    For more information, see [documentation](link).
+
+## Compatibility changes
+
+### System variables
+
+| Variable name | Change type (new, modified, or deleted) | Description |
+|--------|------------------------------|------|
+| [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) | New | This variable is the switch of the resource control feature. After it is set to `ON`, the cluster supports resource isolation of applications based on resource groups. |
+| [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | Modified | This variable is now available for production environments. It sets the batch size of Coprocessor tasks when the `IndexLookUp` operator performs table lookups. `0` means batching is disabled. When the `IndexLookUp` operator has a very large number of table lookup tasks and extremely long slow queries occur, you can increase this value to speed up queries. |
+| 
[`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-从-v660-版本开始引入) | New | Controls whether to enable the enhanced wake-up model for pessimistic locks. |
+| [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | New | This variable controls whether to enable [`PLAN REPLAYER CAPTURE`](/sql-plan-replayer.md#使用-plan-replayer-capture-抓取目标计划). The default value `OFF` means `PLAN REPLAYER CAPTURE` is disabled. |
+
+### Configuration file parameters
+
+| Configuration file | Configuration item | Change type | Description |
+| -------- | -------- | -------- | -------- |
+| TiKV | [`resource_control.enabled`](/tikv-configuration-file.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) | New | Controls whether to schedule requests according to resource group quotas. The default value is `false`, which means scheduling based on resource group quotas is disabled. |
+| | | | |
+| | | | |
+| | | | |
+
+### Others
+
+## Deprecated features
+
+## Improvements
+
++ TiDB
+
+    - note [#issue](link) @[Contributor GitHub ID](link)
+    - note [#issue](link) @[Contributor GitHub ID](link)
+
++ TiKV
+
+    - note [#issue](link) @[Contributor GitHub ID](link)
+    - note [#issue](link) @[Contributor GitHub ID](link)
+
++ PD
+
+    - note [#issue](link) @[Contributor GitHub ID](link)
+    - note [#issue](link) @[Contributor GitHub ID](link)
+
++ TiFlash
+
+    - note [#issue](link) @[Contributor GitHub ID](link)
+    - note [#issue](link) @[Contributor GitHub ID](link)
+
++ Tools
+
+    + Backup & Restore (BR)
+
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+    + TiCDC
+
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+    + TiDB Data Migration (DM)
+
+        - Optimize DM alert rules and content [#7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd**
+
+            Previously, `DM_XXX_process_exits_with_error`-type alerts were raised whenever an error occurred. Some of these alerts were actually caused by a database connection being idle for too long and could be recovered after reconnecting. To reduce such false alarms, errors are now divided into automatically recoverable errors and unrecoverable errors:
+
+            - For an unrecoverable error, the old behavior remains: an alert is raised immediately.
+            - For an automatically recoverable error, an alert is raised only when the error occurs more than 3 times within 2 minutes.
+
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+    + TiDB Lightning
+
+        - Fix a bug that TiDB Lightning might hang with a timeout when TiDB restarts in some scenarios [#33714](https://github.com/pingcap/tidb/issues/33714) @[lichunzhu](https://github.com/lichunzhu)
**tw@shichun-0415**
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+    + TiUP
+
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+    + Sync-diff-inspector
+
+        - Add a new parameter that you can configure to skip the check for inconsistent numbers of tables between the upstream and downstream when a table in the downstream database does not exist in the upstream, instead of interrupting and exiting the task. @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya94) **tw@shichun-0415**
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+## Bug fixes
+
++ TiDB
+
+    - note [#issue](link) @[Contributor GitHub ID](link)
+    - note [#issue](link) @[Contributor GitHub ID](link)
+
++ TiKV
+
+    - note [#issue](link) @[Contributor GitHub ID](link)
+    - note [#issue](link) @[Contributor GitHub ID](link)
+
++ PD
+
+    - note [#issue](link) @[Contributor GitHub ID](link)
+    - note [#issue](link) @[Contributor GitHub ID](link)
+
++ TiFlash
+
+    - note [#issue](link) @[Contributor GitHub ID](link)
+    - note [#issue](link) @[Contributor GitHub ID](link)
+
++ Tools
+
+    + Backup & Restore (BR)
+
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+    + TiCDC
+
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+    + TiDB Data Migration (DM)
+
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+    + TiDB Lightning
+
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+    + TiUP
+
+        - note [#issue](link) @[Contributor GitHub ID](link)
+        - note [#issue](link) @[Contributor GitHub ID](link)
+
+## Contributors
+
+We would like to thank the following contributors from the TiDB community:
+
+- [Contributor GitHub ID]()

From e6887c79a36ae5a418b6741e63afc73098b5cef7 Mon Sep 17 00:00:00 2001
From: qiancai
Date: Fri, 3 Feb 2023 18:44:39 +0800
Subject: [PATCH 002/135] add feature description translations

---
 releases/release-6.6.0.md | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 665b29b49d31..2a828b5b4206 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -65,21 +65,22 @@ TiDB 版本:6.6.0
 更多信息,请参考[用户文档](/partitioned-table.md#重组分区)
 
-* [Placement Rules in SQL](https://docs.pingcap.com/zh/tidb/dev/placement-rules-in-sql) 支持指定 `SURVIVAL_PREFERENCE` [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai**
+* Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai**
 
-    通过指定 `SURVIVAL_PREFERENCE`,用户可以:
-    1. 跨区域部署的 TiDB 集群,用户可以控制指定数据库或表在某个区域故障时,也能在另一个区域提供服务。
-    2. 单区域部署的 TiDB 集群,用户可以控制指定数据库或表在某个可用区故障时,也能在另一个可用区提供服务。
+    `SURVIVAL_PREFERENCES` provides data survival preference settings to increase the disaster survivability of data. By specifying `SURVIVAL_PREFERENCE`, you can control the following:
+
+    - For TiDB clusters deployed across regions, when a region with the specified databases or tables fails, another region can provide the service.
+    - For TiDB clusters deployed in a single region, when an availability zone with the specified databases or tables fails, another availability zone can provide the service.
-### 安全 + For more information,see [documentation](/placement-rules-in-sql.md#survival-preference)。 -* TiFlash 支持 TLS certificate hot reload @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai** +### Security - TiFlash TLS 证书自动轮换指在开启组件间加密传输的 TiDB Cluster 上,当 TiFlash 的 TLS 证书过期,重新签发一个新 TLS 证书给 TiFlash 时,可以不用重启 TiDB Cluster,自动加载新 TiFlash TLS 证书。TiDB Cluster 内部组件之间 TLS 过期轮换不影响 TiDB Cluster 的正常使用,保障了 TiDB 集群高可用性。 +* TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai** - 更多信息,请参考:https://docs.pingcap.com/tidb/stable/enable-tls-between-components + For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. The rotation of a TLS certificate between componets within a TiDB cluster does not affect the normal use of the TiDB cluster, which ensures the cluster high availability. + + For more information, see [documentation](/enable-tls-between-components.md). 
### 可观测性 @@ -146,15 +147,15 @@ TiDB 版本:6.6.0 更多信息,请参考[用户文档](/use-witness-to-save-costs.md)。 -* TiFlash 引擎支持 Stale Read 功能 [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai** +* TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai** - 在 v5.1.1 中,TiDB 正式发布了 Stale Read 功能,TiDB 可以读取指定的时间点或时间范围内的历史数据。Stale Read 允许引擎直接读取本地副本数据,降低读取延迟,提升查询性能。但是,只有 TiKV 引擎支持 Stale Read 功能,TiFlash 引擎并不支持 Stale Read 功能,即使查询的表包含 TiFlash 副本,TiDB 也只能使用 TiKV 副本进行指定时间点或时间范围内的历史数据查询。在 v6.6.0 中,TiFlash 引擎实现了对 Stale Read 功能的支持。使用语法 `AS OF TIMESTAMP` 或系统变量 `tidb_read_staleness` 等方式进行指定时间点或时间范围内的历史数据查询时,如果查询的表包含 TiFlash 副本,优化器可以选择 TiFlash 引擎读取指定的时间点或时间范围内的历史数据。 + The Stale Read feature has been generally available (GA) since v5.1.1, which allows you to read historical data at a specific timestamp or within a specified time range. Stale read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash does not support Stale Read. Even if a table has TiFlash replicas, Stale Read can only read its TiKV replicas. - 更多信息,请参考[用户文档](/stale-read.md)。 + Staring from v6.6.0, TiFlash supports the Stale Read feature. When you query historical data of a table using the `AS OF TIMESTAMP` syntax or the `tidb_read_staleness` system variable, if the table has a TiFlash replica, the optimizer now can choose to read the corresponding data from the TiFlash replica, thus further improving query performance. -* 新增支持下推以下字符串函数至 TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@shichun-0415** + For more information, see [documentation](/stale-read.md). 
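+    As an illustration of the syntax involved (the table name `t` is hypothetical):
+
+    ```sql
+    -- Read table data as it was 5 minutes ago. If the table has a TiFlash
+    -- replica, the optimizer can now read this historical data from TiFlash.
+    SELECT * FROM t AS OF TIMESTAMP NOW() - INTERVAL 5 MINUTE;
+
+    -- Alternatively, allow reads in the session to be up to 5 seconds stale:
+    SET @@tidb_read_staleness = -5;
+    ```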
- * `regexp_replace` +* Support pushing down the `regexp_replace` string function to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** * TiFlash 引擎支持独立的 MVCC 位图过滤器 [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin](https://github.com/JinheLin) **tw@qiancai** @@ -312,7 +313,7 @@ TiDB 版本:6.6.0 + TiFlash - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides a foundation for subsequent optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin] **tw@qiancai** - note [#issue](链接) @[贡献者 GitHub ID](链接) + Tools From 596c177cac81f60d73666fe9929ecebf10479eb0 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Fri, 3 Feb 2023 18:46:42 +0800 Subject: [PATCH 003/135] Apply suggestions from code review --- releases/release-6.6.0.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 2a828b5b4206..1b8920a075fd 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -157,10 +157,6 @@ TiDB 版本:6.6.0 * Support pushing down the `regexp_replace` string function to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** -* TiFlash 引擎支持独立的 MVCC 位图过滤器 [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin](https://github.com/JinheLin) **tw@qiancai** - - TiFlash 引擎的数据扫描流程包含 MVCC 过滤和扫描列数据等操作。由于 MVCC 过滤和其他数据扫描操作具有较高的耦合性,导致无法对数据扫描流程进行优化改进。在 v6.6.0 中,TiFlash 将整体数据扫描流程中的 MVCC 过滤操作进行解耦,提供独立的 MVCC 位图过滤器,为后续优化数据扫描流程提供基础。 - * 批量聚合数据请求 [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn** 当 TiDB 向 TiKV 发送数据请求时, 会根据数据所在的 Region 将请求编入不同的子任务,每个子任务只处理单个 Region 的请求。 
当访问的数据离散度很高时, 即使数据量不大,也会生成众多的子任务,进而产生大量 RPC 请求,消耗额外的时间。 在 v6.6.0 中,TiDB 支持将发送到相同 TiKV 实例的数据请求部分合并,减少子任务的数量和 RPC 请求的开销。 在数据离散度高且 gRPC 线程池资源紧张的情况下,批量化请求能够将性能提升 50% 以上。 From 258a557111a70e1e14ffb7e67fd843cc01999e69 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Fri, 3 Feb 2023 18:54:05 +0800 Subject: [PATCH 004/135] Apply suggestions from code review --- releases/release-6.6.0.md | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 1b8920a075fd..0501fcdce624 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -59,12 +59,6 @@ TiDB 版本:6.6.0 更多信息,请参考[用户文档](/sql-plan-management.md#根据历史执行计划创建绑定)。 -* 支持 `ALTER TABLE…REORGANIZE PARTITION` [#15000](https://github.com/pingcap/tidb/issues/15000) @[mjonss](https://github.com/mjonss) **tw@qiancai** - - TiDB 支持 `ALTER TABLE…REORGANIZE PARTITION` 语法。此语法用于对表的部分分区、全部分区重新组织分区结构,并且不丢失数据。 - - 更多信息,请参考[用户文档](/partitioned-table.md#重组分区) - * Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @nolouch[https://github.com/nolouch] **tw@qiancai** `SURVIVAL_PREFERENCES` provides data survival preference settings to increase the disaster survivability of data. By specifying `SURVIVAL_PREFERENCE`, you can control the following: @@ -72,13 +66,13 @@ TiDB 版本:6.6.0 - For TiDB clusters deployed across regions, when a region with the specified databases or tables fails, another region can provide the service. - For TiDB clusters deployed in a single region, when an availability zone with the specified databases or tables fails, another availability zone can provide the service. - For more information,see [documentation](/placement-rules-in-sql.md#survival-preference)。 + For more information, see [documentation](/placement-rules-in-sql.md#survival-preference). 
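+    As a sketch of the syntax (the policy name, labels, and table are illustrative):
+
+    ```sql
+    -- Prefer surviving the loss of an entire region first, then an
+    -- availability zone, then a host:
+    CREATE PLACEMENT POLICY multiregion
+        SURVIVAL_PREFERENCES="[region, zone, host]";
+
+    CREATE TABLE t1 (a INT) PLACEMENT POLICY=multiregion;
+    ```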
### Security * TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai** - For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. The rotation of a TLS certificate between componets within a TiDB cluster does not affect the normal use of the TiDB cluster, which ensures the cluster high availability. + For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. The rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures the cluster high availability. For more information, see [documentation](/enable-tls-between-components.md). @@ -151,7 +145,7 @@ TiDB 版本:6.6.0 The Stale Read feature has been generally available (GA) since v5.1.1, which allows you to read historical data at a specific timestamp or within a specified time range. Stale read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash does not support Stale Read. Even if a table has TiFlash replicas, Stale Read can only read its TiKV replicas. - Staring from v6.6.0, TiFlash supports the Stale Read feature. When you query historical data of a table using the `AS OF TIMESTAMP` syntax or the `tidb_read_staleness` system variable, if the table has a TiFlash replica, the optimizer now can choose to read the corresponding data from the TiFlash replica, thus further improving query performance. + Starting from v6.6.0, TiFlash supports the Stale Read feature. 
When you query the historical data of a table using the `AS OF TIMESTAMP` syntax or the `tidb_read_staleness` system variable, if the table has a TiFlash replica, the optimizer now can choose to read the corresponding data from the TiFlash replica, thus further improving query performance. For more information, see [documentation](/stale-read.md). From 99950190b5d76577ae82fe68367a5323fa77843c Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Fri, 3 Feb 2023 20:10:23 +0800 Subject: [PATCH 005/135] translate features of shichun --- releases/release-6.6.0.md | 29 +++++++++++++++-------------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 0501fcdce624..97e150a345e4 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -128,11 +128,12 @@ TiDB 版本:6.6.0 更多信息,请参考[用户文档](/sql-plan-replayer.md#使用-plan-replayer-capture-抓取目标计划)。 -* Statements Summary 持久化(实验特性) [#40812](https://github.com/pingcap/tidb/issues/40812) @[mornyx](https://github.com/mornyx) **tw@shichun-0415** +* Support persisting statements summary (experimental) [#40812](https://github.com/pingcap/tidb/issues/40812) @[mornyx](https://github.com/mornyx) **tw@shichun-0415** - Statements Summary 过去只在内存中维护,一旦 TiDB 发生重启数据便会全部丢失。开启持久化配置后历史数据将会定期被写入磁盘,相关系统表的查询数据源也将由内存变为磁盘,TiDB 发生重启后历史数据将依然保持存在。 + Before v6.6.0, statements summary data is maintained in memory. Once the TiDB server restarts, all the statements summary data gets lost. Starting from v6.6.0, TiDB supports enabling statements summary persistence, which allows historical data to be written to disks on a regular basis. In the meantime, the result of queries on system tables will derive from disks, instead of memory. After TiDB restarts, all historical data is still available. + + For more information, see [documentation](/statement-summary-tables.md#persist-statements-summary). 
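+    As a sketch, persistence is controlled through the TiDB configuration file. The option names below are from our reading of the statements summary documentation and should be verified there; values are illustrative:
+
+    ```toml
+    [instance]
+    # Enable writing statements summary data to disk periodically.
+    tidb_stmt_summary_enable_persistent = true
+    # Optional tuning of the on-disk files (file name, retention days, size in MiB):
+    # tidb_stmt_summary_filename = "tidb-statements.log"
+    # tidb_stmt_summary_file_max_days = 3
+    # tidb_stmt_summary_file_max_size = 64
+    ```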
- 更多信息,请参考[用户文档](/statement-summary-tables.md#持久化-statements-summary)。 ### 性能 * 使用 Witness 节约成本 [#12876](https://github.com/tikv/tikv/issues/12876) [@Connor1996](https://github.com/Connor1996) [@ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** @@ -170,11 +171,11 @@ TiDB 版本:6.6.0 优化器 Hint 的持续引入,为用户提供了更多的干预手段,有助于 SQL 性能问题的解决,并提升了整体性能的稳定性。 -* 解除执行计划缓存对 `LIMIT` 子句的限制 [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415** +* Remove the limit on `LIMIT` statements [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415** - TiDB 移除了执行计划缓存的限制,`LIMIT` 后带有变量的子句可进入执行计划缓存, 如 `Limit ?` 或者 `Limit 10, ?`。这使得更多的 SQL 能够从计划缓存中获益,提升执行效率。 + Starting from v6.6.0, TiDB plan cache supports caching queries containing `?` after `Limit`, such as `Limit ?` or `Limit 10, ?`. This feature allows more SQL statements to benefit from plan cache, thus improving execution efficiency. - 更多信息,请参考[用户文档](/sql-prepared-plan-cache.md)。 + For more information, see [documentation](/sql-prepared-plan-cache.md). 
* 悲观锁队列的稳定唤醒模型 [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn**

@@ -210,17 +211,17 @@ TiDB 版本:6.6.0

 ### 易用性

-* 支持动态修改参数 store-io-pool-size [#13964](https://github.com/tikv/tikv/issues/13964) @[LykxSassinator](https://github.com/LykxSassinator) **tw@shichun-0415**

-    TiKV 中的 raftstore.store-io-pool-size 参数用于设定处理 Raft I/O 任务的线程池中线程的数量,需要在 TiKV 性能调优时进行修改调整。在 v6.6.0 版本之前,这个参数无法动态修改。v6.6.0 支持对该参数的动态修改功能,提高了 TiKV 性能调优的灵活性。
+* Support dynamically modifying `store-io-pool-size` [#13964](https://github.com/tikv/tikv/issues/13964) @[LykxSassinator](https://github.com/LykxSassinator) **tw@shichun-0415**

+    The TiKV configuration item [`raftstore.store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530) specifies the allowable number of threads that process Raft I/O tasks, which can be adjusted when tuning TiKV performance. Before v6.6.0, this configuration item could not be modified dynamically. Starting from v6.6.0, you can modify this configuration without restarting the server, which allows for more flexible performance tuning.

-    更多信息,请参考[用户文档](/dynamic-config.md)。
+    For more information, see [documentation](/dynamic-config.md).

-* 可通过命令行参数或者配置项在 TiDB 集群初次启动时指定执行的初始化 SQL 脚本 [#35625](https://github.com/pingcap/tidb/pull/35625) @[morgo](https://github.com/morgo) **tw@TomShawn**

-    命令行参数 `--initialize-sql-file` 用于指定 TiDB 集群初次启动时执行的 SQL 脚本,可用于修改系统变量的值,或者创建用户、分配权限等。
+* Support specifying the SQL script executed upon TiDB cluster initialization [#35624](https://github.com/pingcap/tidb/issues/35624) @[morgo](https://github.com/morgo) **tw@shichun-0415**

+    When you start a TiDB cluster for the first time, you can specify the SQL script to be executed by configuring the CLI parameter `--initialize-sql-file`. You can use this feature when you need to perform operations such as modifying the value of a system variable, creating a user, or granting privileges.
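For example, a first-boot script might look like this sketch (the user name, password, and privileges are illustrative):

```sql
-- init.sql: executed only when the cluster is bootstrapped for the first time
SET GLOBAL tidb_txn_mode = 'pessimistic';
CREATE USER 'app'@'%' IDENTIFIED BY 'app_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app'@'%';
```

You would then start the server with `tidb-server --initialize-sql-file=init.sql`, or set the corresponding configuration item.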
- 更多信息,请参考[配置项 `initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-从-v660-版本开始引入)。 + For more information, see the [configuration item `initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660). ### MySQL 兼容性 @@ -330,7 +331,6 @@ TiDB 版本:6.6.0 - note [#issue](链接) @[贡献者 GitHub ID](链接) + TiDB Lightning -- 修复了一个在部分场景下 TiDB 重启导致 Lightning timeout 卡主的 bug。[33714](https://github.com/pingcap/tidb/issues/33714) @[lichunzhu](https://github.com/lichunzhu) **tw@shichun-0415** - note [#issue](链接) @[贡献者 GitHub ID](链接) - note [#issue](链接) @[贡献者 GitHub ID](链接) @@ -341,7 +341,7 @@ TiDB 版本:6.6.0 + Sync-diff-inspector - - 新增一个参数,当下游数据库的表在上游不存在时,可配置该参数跳过对上下游数据库表数量不一致场景的校验,而不是任务中断退出。 @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya9) **tw@shichun-0415** + - Add a new parameter `skip-non-existing-table` to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream [#692](https://github.com/pingcap/tidb-tools/issues/692) @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya9) **tw@shichun-0415** - note [#issue](链接) @[贡献者 GitHub ID](链接) ## 错误修复 @@ -385,6 +385,7 @@ TiDB 版本:6.6.0 + TiDB Lightning + - Fix the issue that TiDB Lightning timeout hangs due to TiDB restart in some scenarios [#33714](https://github.com/pingcap/tidb/issues/33714) @[lichunzhu](https://github.com/lichunzhu) **tw@shichun-0415** - note [#issue](链接) @[贡献者 GitHub ID](链接) - note [#issue](链接) @[贡献者 GitHub ID](链接) From d9a12893e219c8beae7fb6b1a6783deaf3ed068e Mon Sep 17 00:00:00 2001 From: shichun-0415 Date: Sun, 5 Feb 2023 17:54:47 +0800 Subject: [PATCH 006/135] update compatibility changes --- releases/release-6.6.0.md | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 97e150a345e4..ff83c09c7132 100644 --- a/releases/release-6.6.0.md +++ 
b/releases/release-6.6.0.md @@ -267,6 +267,7 @@ TiDB 版本:6.6.0 | 变量名 | 修改类型(包括新增/修改/删除) | 描述 | |--------|------------------------------|------| +| [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | New | Controls whether Prepared Plan Cache caches execution plans that contain `count` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `count` that is greater than 10000. | | [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) | 新增 | 该变量是资源管控特性的开关。该变量设置为 `ON` 后,集群支持应用按照资源组做资源隔离。 | | [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | 修改 | 此变量可用于生产环境。 设置 `IndexLookUp` 算子回表时多个 Coprocessor Task 的 batch 大小。`0` 代表不使用 batch。当 `IndexLookUp` 算子的回表 Task 数量特别多,出现极长的慢查询时,可以适当调大该参数以加速查询。 | | [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-从-v660-版本开始引入) | 新增 | 是否对悲观锁启用加强的悲观锁唤醒模型。 | @@ -277,11 +278,21 @@ TiDB 版本:6.6.0 | 配置文件 | 配置项 | 修改类型 | 描述 | | -------- | -------- | -------- | -------- | | TiKV | [`resource_control.enabled`](/tikv-configuration-file.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) | 新增 | 是否支持按照资源组配额调度。 默认 `false` ,即关闭按照资源组配额调度。 | +| TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from 0 to 0.8, which means the limit is 80% of the total memory.| +| TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | Added to value options: GCS and Azure. 
| +| TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | New | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. | +| TiDB | [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660) | New | Controls whether to enable statements summary persistence. The default value is `false`, which means this feature is not enabled by default. | +| TiDB | [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the file to which persistent data is written. | +| TiDB | [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of days to keep persistent data files. | +| TiDB | [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum size of a persistent data file (in MiB). | +| TiDB | [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of data files that can be persisted. `0` means no limit on the number of files. | +| sync-diff-inspector | [`skip-non-existing-table`](/sync-diff-inspector/sync-diff-inspector-overview.md#configuration-file-description) | New | Controls whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream. | | | | | | -| | | | | -| | | | | -### 其他 +### Others + +- Support dynamically modifying `store-io-pool-size`. 
This facilitates more flexible TiKV performance tuning.
- Remove the plan cache restriction on `LIMIT` clauses, thus improving execution performance.

## 废弃功能

From 0b6aea3498103c1384d606f55a1b96a3e444c69c Mon Sep 17 00:00:00 2001
From: Aolin
Date: Sun, 5 Feb 2023 23:52:48 +0800
Subject: [PATCH 007/135] update new features

Signed-off-by: Aolin
---
 releases/release-6.6.0.md | 44 +++++++++++++++++++++------------------
 1 file changed, 24 insertions(+), 20 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index ff83c09c7132..2e19abb5e483 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -27,11 +27,11 @@ TiDB 版本:6.6.0

     更多信息,请参考[用户文档](链接)。

-* 支持 MySQL 语法兼容的外键约束 (实验特性)[#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt**

-    TiDB 在 v6.6.0 引入了 MySQL 语法兼容的外键约束特性,支持表内,表间的数据关联和约束校验能力,支持集联操作。该特性有助于保持数据一致性,提升数据质量,也方便客户进行数据建模。

-    更多信息,请参考[用户文档](/sql-statements/sql-statement-foreign-key.md)。
+* Support the foreign key constraint that is compatible with MySQL (experimental) [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt**
+
+    TiDB v6.6.0 introduces the foreign key constraint feature compatible with MySQL. This feature supports referencing within a table or between tables, constraint validation, and cascade operations. It helps maintain data consistency, improve data quality, and facilitate data modeling.
+
+    For more information, see [documentation](/sql-statements/sql-statement-foreign-key.md).
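A minimal sketch of the new capability (table and column names are illustrative):

```sql
CREATE TABLE parent (id INT PRIMARY KEY);
CREATE TABLE child (
    id INT PRIMARY KEY,
    parent_id INT,
    FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE
);
INSERT INTO parent VALUES (1);
INSERT INTO child VALUES (1, 1);
-- Rejected by constraint validation: parent 2 does not exist.
INSERT INTO child VALUES (2, 2);
-- Cascades: the child row referencing parent 1 is deleted as well.
DELETE FROM parent WHERE id = 1;
```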
* 支持通过`FLASHBACK CLUSTER TO TIMESTAMP` 命令闪回 DDL 操作 [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang** @@ -112,13 +112,11 @@ TiDB 版本:6.6.0 更多信息,请参考[用户文档](/sql-prepared-plan-cache.md#prepared-plan-cache-诊断)。 -* 在慢查询中增加告警字段 [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt** +* Add the `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt** - 向慢查询日志中增加一个新的字段 `Warning` ,以 JSON 格式记录该慢查询语句在执行过程中产生的警告,用来协助查询性能问题的诊断。 + The `Warnings` field is added to the slow query log in JSON format to record the warnings generated during the execution of the slow query to help diagnose performance issues. You can also view this in the slow query page of TiDB Dashboard. - 用户也可以在 TiDB Dashboard 中的慢查询页面中查看。 - - 更多信息,请参考[用户文档](/identify-slow-queries.md)。 + For more information, see [documentation](/identify-slow-queries.md). 
* 自动捕获执行计划的生成 [#38779](https://github.com/pingcap/tidb/issues/38779) @[Yisaer](https://github.com/Yisaer) **tw@ran-huang** @@ -136,11 +134,11 @@ TiDB 版本:6.6.0 ### 性能 -* 使用 Witness 节约成本 [#12876](https://github.com/tikv/tikv/issues/12876) [@Connor1996](https://github.com/Connor1996) [@ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** +* Use Witness to Save Costs in a highly reliable storage environment [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** - 在云环境中,当 TiKV 使用如 AWS EBS 或 GCP 的 Persistent Disk 作为单节点存储时,它们提供的持久性相比物理磁盘更高。此时,TiKV 使用 3 个 Raft 副本虽然可行,但并不必要。为了降低成本,TiKV 引入了 Witness 功能,即 2 Replicas With 1 Log Only 机制。其中 1 Log Only 副本仅存储 Raft 日志但不进行数据 apply,依然可以通过 Raft 协议保证数据一致性。与标准的 3 副本架构相比,Witness 可以节省存储资源及 CPU 使用率。 + In cloud environments, it is recommended to use Amazon Elastic Block Store or Persistent Disk of Google Cloud Platform as the storage of each TiKV node. In this case, it is not necessary to use three Raft replicas. To reduce costs, TiKV introduces the Witness feature, which is the "2 Replicas With 1 Log Only" mechanism. The 1 Log Only replica only stores Raft logs but does not apply data, and data consistency is still guaranteed through the Raft protocol. Compared with the standard three replica architecture, Witness can save storage resources and CPU usage. - 更多信息,请参考[用户文档](/use-witness-to-save-costs.md)。 + For more information, see [documentation](/use-witness-to-save-costs.md). 
* TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai**

@@ -203,11 +201,17 @@ TiDB 版本:6.6.0

     更多信息,请参考[用户文档](/tidb-resource-control.md)。

-* 使用临时 Witness 副本来加速副本恢复 [#12876](https://github.com/tikv/tikv/issues/12876) [@Connor1996](https://github.com/Connor1996) [@ethercflow](https://github.com/ethercflow) **tw@Orexmt**
+* Use a temporary Witness replica to speed up failover [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt**
+
+    The Witness feature can be used to speed up failover and thus improve system availability and data durability. For example, in a 3-out-of-4 scenario, although the majority requirement is met, the system is fragile, and fully recovering a new member often takes a long time (the snapshot must be copied first and then the latest logs applied), especially when the Region snapshot is relatively large. In addition, the process of copying replicas might put more pressure on unhealthy Group members. Therefore, adding a Witness first can quickly bring down an unhealthy node and ensure the safety of logs during recovery.
+
+    For more information, see [documentation](/use-witness-to-speed-up-failover.md).
+
+* Support configuring read-only storage nodes for resource-consuming tasks [#issue号](链接) @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt**

-    Witness 功能可用于快速恢复 failover,以提高系统可用性。例如在 3 缺 1 的情况下,虽然满足多数派要求,但是系统很脆弱,而完整恢复一个新成员的时间通常很长(需要先拷贝 snapshot 然后 apply 最新的日志),特别是 Region snapshot 比较大的情况。而且拷贝副本的过程可能会对不健康的副本造成更多的压力。因此,先添加一个 Witness 可以快速下掉不健康的节点,保证恢复数据的过程中日志的安全性,后续再由 PD 的 rule checker 将 Witness 副本变为普通的 Voter。
+
+    In production environments, some read-only operations might consume a large amount of resources regularly, which might affect the performance of the entire cluster, such as backups and large-scale data analysis.
TiDB v6.6.0 supports configuring read-only storage nodes to execute resource-consuming read-only tasks to reduce the impact on the online application. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#操作步骤) and specify where to read data through a system variable or client parameter to ensure the stability of cluster performance. - 更多信息,请参考[用户文档](/use-witness-to-speed-up-failover.md)。 + For more information, see [documentation](/best-practices/readonly-nodes.md). ### 易用性 @@ -241,17 +245,17 @@ TiDB 版本:6.6.0 ### 数据共享与订阅 -* TiKV-CDC 工具 GA,支持 RawKV 的 Change Data Capture [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt** +* The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt** - TiKV-CDC 是一个 TiKV 集群的 CDC (Change Data Capture) 工具。TiKV 可以独立于 TiDB,与 PD 构成 KV 数据库,此时的产品形态为 RawKV。TiKV-CDC 支持订阅 RawKV 的数据变更,并实时同步到下游 TiKV 集群,从而实现 RawKV 的跨集群复制能力。 + TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. TiKV can operate independently of TiDB and form a KV database with PD. In this case, the product is called RawKV. TiKV-CDC supports subscribing to data changes of RawKV and replicating them to a downstream TiKV cluster in real time, thus enabling cross-cluster replication of RawKV. - 更多信息,请参考[用户文档](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc-cn/)。 + For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc-cn/). 
-* 同步到下游 Kafka 的 Changefeed 可将上游单表的同步任务下发到多个 TiCDC Nodes 执行,实现单表同步性能的水平扩展 [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt**

-    功能描述:下游为 Kafka 的 Changefeed 可将上游单表的复制任务调度到多个 TiCDC Nodes 执行,实现单张表同步性能的水平扩展。在这个功能发布之前,上游单表写入数据量较大时,无法水平扩展单表的复制能力,导致同步延迟增加。该功能发布后,就可以通过水平扩展,解决单表同步性能的问题。

-    更多信息,请参考[用户文档](https://github.com/pingcap/docs-cn/pull/12693)。
+* TiCDC supports scaling out the replication of a single table to multiple TiCDC nodes in Kafka changefeeds [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt**
+
+    Before v6.6.0, when the write throughput of an upstream table was large, the replication capability of a single table could not be scaled out, resulting in an increase in replication latency. Starting from TiCDC v6.6.0, the changefeed of an upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which enables scaling out the replication capability of a single table.
+
+    For more information, see [documentation](/ticdc/ticdc-sink-to-kafka.md#scale-out-the-load-of-a-single-large-table-to-multiple-ticdc-nodes).
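As a sketch, the corresponding changefeed configuration fragment might look like the following (assumption: the `scheduler` section and its option names follow the documentation linked above; the Region threshold is illustrative):

```toml
[scheduler]
# Distribute a large table across multiple TiCDC nodes once it
# exceeds the given number of Regions.
enable-table-across-nodes = true
region-threshold = 100000
```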
### 部署及运维 From 3b8784c231a283d3610fa4f929a2a2794da93321 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Mon, 6 Feb 2023 09:57:42 +0800 Subject: [PATCH 008/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 2e19abb5e483..f80461fdce4b 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -191,15 +191,22 @@ TiDB 版本:6.6.0 ### 稳定性 -* 基于资源组的资源管控 (实验特性) #[38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** +* Resource control based on resource groups (experimental) #[38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** - TiDB 集群支持创建资源组,将不用的数据库用户映射到对应的资源组中,根据实际需要设置每个资源组的配额。当集群资源紧张时,来自同一个资源组的会话所使用的全部资源将被限制在配额内,避免其中一个资源组过度消耗从而抑制其他资源组中的会话正常运行。系统内置视图会对资源的实际使用情况进行反馈和展示,协助用户更合理地配置资源。 + TiDB clusters support creating resource groups, binding different database users to corresponding resource groups, and setting quotas for each resource group according to actual needs. 
When the cluster resources are limited, all resources used by sessions from the same resource group will be limited to the quota, so that one resource group will not be over-consumed and affect the normal operation of sessions in other resource groups. The built-in view of the system will display the actual usage of resources, assisting you to allocate resources more rationally. - 资源管控技术的引入对 TiDB 具有里程碑的意义,它能够将一个分布式数据库集群中划分成多个逻辑单元,即使个别单元对资源过度使用,也不会完全挤占其他单元所需的资源。利用这个技术,你可以将数个来自不同系统的中小型应用合入一个 TiDB 集群中,个别应用的负载提升,不会影响其他业务的正常运行;而在系统负载较低的时候,繁忙的应用即使超过限额,也仍旧可以被分配到所需的系统资源,达到资源的最大化利用。 同样的,你可以选择将所有测试环境合入一个集群,或者将消耗较大的批量任务编入一个单独的资源组,在保证重要应用获得必要资源的同时,提升硬件利用率,降低运行成本。另外,合理利用资源管控技术可以减少集群数量,降低运维难度及管理成本。 + The introduction of the resource control feature is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not crowd out the resources needed by other units. - 在 v6.6 中, 启用资源管控技术需要同时打开 TiDB 的全局变量 [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) 及 TiKV 的配置项 [`resource_control.enabled`](/tikv-configuration-file.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5)。 当前支持的限额方式是基于"[用量](/tidb-RU.md)" (即 Request Unit 或 RU ),RU 是 TiDB 对 CPU、IO 等系统资源的统一抽象单位。 + With this feature, you can: - 更多信息,请参考[用户文档](/tidb-resource-control.md)。 + - Combine multiple small and medium-sized applications from different systems into one TiDB cluster. If the load of an individual application grows larger, it does not affect the normal operation of other businesses. When the system load is low, busy applications can still be allocated the required system resources even if they exceed the set read and write quotas, so as to achieve the maximum utilization of resources. 
+ - Choose to combine all test environments into a single cluster, or group the batch tasks that consume more resources into a single resource group. It can improve hardware utilization and reduce operating costs while ensuring that critical applications can still get the necessary resources. + + In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs. + + In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource_control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. The currently supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO. + + For more information, see [documentation](/tidb-resource-control.md). 
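As a sketch, creating a resource group and binding a user to it might look like this (the group name, user, and quota are illustrative):

```sql
SET GLOBAL tidb_enable_resource_control = ON;
CREATE RESOURCE GROUP IF NOT EXISTS rg_batch RU_PER_SEC = 500;
ALTER USER 'batch_user' RESOURCE GROUP rg_batch;
-- Sessions opened by batch_user afterwards are limited to 500 RU/s.
```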
* Use a temporary Witness replica to speed up failover [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt**

From 538f65861b05d7a85871d500d7a354d6cf0eb443 Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Mon, 6 Feb 2023 10:09:54 +0800
Subject: [PATCH 009/135] Update releases/release-6.6.0.md

---
 releases/release-6.6.0.md | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index f80461fdce4b..9e028a143da6 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -343,11 +343,12 @@ TiDB 版本:6.6.0

 + TiDB Data Migration (DM)

-    - 优化了 DM 的告警规则和内容。 之前 DM_XXX_process_exits_with_error 类告警是遇到错误就报警,有些告警实际是由于 db conn 长时间 idle 导致,重连后即可恢复,为了降低这类 false alerm,现在细分为可自动恢复错误和不可恢复错误 - 对不可自动恢复错误,维持旧的行为,立即 alert - 对可自动回复错误,只有在 2m 内发生超过 3 次时才报警 - [7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd**
+    Optimize DM alert rules and content. [#7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd**

+    Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever an error occurred. But some alerts are actually caused by idle database connections, which can be recovered after reconnecting. To reduce such alerts, they are divided into two types: automatically recoverable errors and unrecoverable errors.

+    - For errors that are automatically recoverable, report the alert only if the error occurs more than 3 times within 2 minutes.
+    - For errors that are not automatically recoverable, maintain the original behavior and report the alert immediately.
- note [#issue](链接) @[贡献者 GitHub ID](链接)
    - note [#issue](链接) @[贡献者 GitHub ID](链接)

From 922439a8b094ef43f29b1eb9ea558fd8235d526b Mon Sep 17 00:00:00 2001
From: Ran
Date: Mon, 6 Feb 2023 11:05:08 +0800
Subject: [PATCH 010/135] Apply suggestions from code review

---
 releases/release-6.6.0.md | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 9e028a143da6..66525e7f8464 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -33,17 +33,17 @@ TiDB 版本:6.6.0

     For more information, see [documentation](/sql-statements/sql-statement-foreign-key.md).

-* 支持通过`FLASHBACK CLUSTER TO TIMESTAMP` 命令闪回 DDL 操作 [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang**

-    [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) 语句支持在 Garbage Collection (GC) life time 内快速回退整个集群到指定的时间点,该功能在 TiDB v6.6.0 版本新增支持撤销 DDL 操作,适用于快速撤消集群的 DML 或 DDL 误操作、支持集群分钟级别的快速回退、支持在时间线上多次回退以确定特定数据更改发生的时间。
+* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang**
+
+    The [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement supports restoring the entire cluster to a specified point in time within the Garbage Collection (GC) lifetime. In TiDB v6.6.0, this feature adds support for rolling back DDL operations. This can be used to quickly undo a DML or DDL misoperation on a cluster, roll back a cluster within minutes, and roll back a cluster multiple times on the timeline to determine when specific data changes occurred.
- 更多信息,请参考[用户文档](/sql-statements/sql-statement-flashback-to-timestamp.md)。 + For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md). -* 支持 DDL 分布式并行执行框架(实验性特性) [#issue](链接) @[zimulala](https://github.com/zimulala) **tw@ran-huang** +* Support the distributed parallel execution framework for DDL (experimental) [#issue](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** - 在过去的版本中,整个 TiDB 集群中仅允许一个 TiDB 实例作为 DDL Owner 有权处理 Schema 变更任务,为了进一步提升 DDL 的并发性,TiDB v6.6.0 版本引入了 DDL 分布式并行执行框架,支持集群中所有的 TiDB 实例都作为 Owner 并发执行同一个 Schema 变更子任务,加速 DDL 的执行。 + In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. To further improve DDL concurrency, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations. - 更多信息,请参考[用户文档](链接)。 + For more information, see [documentation](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660). 
* MySQL 兼容的多值索引(Multi-Valued Index) (实验特性) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** @@ -78,13 +78,13 @@ TiDB 版本:6.6.0 ### 可观测性 -* 快速绑定执行计划 [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) **tw@ran-huang** +* Support quickly creating SQL binding on TiDB Dashboard [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) **tw@ran-huang** - TiDB 的执行计划快速绑定功能:允许用户在 TiDB Dashboard 中一分钟内完成 SQL 与特定计划的绑定。 + TiDB v6.6.0 supports creating SQL binding from statement history, which allows you to bind a SQL statement to a specific plan on TiDB Dashboard within a minute. - 通过提供友好的界面简化在 TiDB 上绑定计划的过程,减少计划绑定过程的复杂性提高用户体验,提高计划绑定过程的效率。 + By providing a user-friendly interface, this feature simplifies the process of binding plans in TiDB, reduces the operation complexity, and improves the efficiency and user experience of the plan binding process. - 更多信息,请参考[用户文档](/dashboard/dashboard-statement-details.md)。 + For more information, see [documentation](/dashboard/dashboard-statement-details.md#create-sql-binding). * 为执行计划缓存增加告警 [#issue号](链接) @[qw4990](https://github.com/qw4990) **tw@TomShawn** @@ -118,13 +118,15 @@ TiDB 版本:6.6.0 For more information, see [documentation](/identify-slow-queries.md). -* 自动捕获执行计划的生成 [#38779](https://github.com/pingcap/tidb/issues/38779) @[Yisaer](https://github.com/Yisaer) **tw@ran-huang** +* Automatically capture the generation of SQL execution plans [#38779](https://github.com/pingcap/tidb/issues/38779) @[Yisaer](https://github.com/Yisaer) **tw@ran-huang** + + In the process of troubleshooting execution plan issues, `PLAN REPLAYER` can help preserve the scene and improve the efficiency of diagnosis. However, in some scenarios, the generation of some execution plans cannot be reproduced freely, which makes the diagnosis work more difficult. 
- 在执行计划问题的排查过程中,`PLAN REPLAYER` 能够协助保存现场,提升诊断的效率。 但在个别场景中,一些执行计划的生成无法任意重现,给诊断工作增加了难度。 针对这类问题, `PLAN REPLAYER` 扩展了自动捕获的能力。 通过 `PLAN REPLAYER CAPTURE` 命令字,用户可提前注册目标 SQL,也可以同时指定目标执行计划, 当 TiDB 检测到执行的 SQL 和执行计划与注册目标匹配时, 会自动生成并打包 `PLAN REPLAYER` 的信息,提升执行计划不稳定问题的诊断效率。 + To address such issues, in TiDB v6.6.0, `PLAN REPLAYER` extends the capability of automatic capture. With the `PLAN REPLAYER CAPTURE` command, you can register the target SQL statement in advance and also specify the target execution plan at the same time. When TiDB detects the SQL statement or the execution plan that matches the registered target, it automatically generates and packages the `PLAN REPLAYER` information. When the execution plan is unstable, this feature can improve diagnostic efficiency. - 启用这个功能需要设置系统变量 [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) 为 `ON`。 + To use this feature, set the value of [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) to `ON`. 
- 更多信息,请参考[用户文档](/sql-plan-replayer.md#使用-plan-replayer-capture-抓取目标计划)。
+    For more information, see [documentation](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans).

### 数据迁移

-* Data Migration(DM) 集成了 Lightning 的 Physical Import Mode ,全量迁移性能最高提升 10 倍 @[lance6716](https://github.com/lance6716) **tw@ran-huang**

-    功能描述 :Data Migration (DM)的全量迁移能力,集成了 Lightning 的 Physical Import Mode ,使得 DM 做全量数据迁移时的性能最高可提升 10 倍,大大缩短了大数据量场景下的迁移时间。原先客户数据量较多时,客户得单独配置 Lightning 的 Physical Import Mode 的任务来做快速的全量数据迁移,之后再用 DM 来做增量数据迁移,配置复杂。现在集成该能力后,用户迁移大数据量的场景,无需再配置 Lightning 的任务,在一个 DM 任务里就可以搞定了。

-    更多信息,请参考[用户文档](https://github.com/pingcap/docs-cn/pull/12296)。
+* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration @[lance6716](https://github.com/lance6716) **tw@ran-huang**
+
+    In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios.
+
+    Prior to v6.6.0, for high data volume scenarios, you were required to configure TiDB Lightning's physical import task separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning's tasks; one DM task can accomplish the migration.
+
+    For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items).
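In a DM task file, opting in to physical import can be sketched as follows (assumption: the `import-mode` option under the loader configuration selects TiDB Lightning's physical import mode; the configuration group name is illustrative):

```yaml
loaders:
  global:
    # "physical" uses TiDB Lightning's physical import mode for the
    # full migration phase; "logical" is the previous behavior.
    import-mode: "physical"
```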
### 数据共享与订阅 From 6e16c3ea82c8d80331f75f4e12359234dbecae18 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 6 Feb 2023 11:39:38 +0800 Subject: [PATCH 011/135] translate new feature desc of tomshawn --- releases/release-6.6.0.md | 62 ++++++++++++++++++++------------------- 1 file changed, 32 insertions(+), 30 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 66525e7f8464..457d152c8442 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -45,19 +45,19 @@ TiDB 版本:6.6.0 For more information, see [documentation](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660). -* MySQL 兼容的多值索引(Multi-Valued Index) (实验特性) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** +* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** - TiDB 在 v6.6.0 引入了 MySQL 兼容的多值索引 (Multi-Valued Index)。 过滤 JSON 类型中某个数组的值是一个常见操作, 但普通索引对这类操作起不到加速作用,而在数组上创建多值索引能够大幅提升过滤的性能。 如果 JSON 类型中的某个数组上存在多值索引, 带有`MEMBER OF()`,`JSON_CONTAINS()`,`JSON_OVERLAPS()` 这几个函数的检索条件可以利用多值索引进行过滤,减少大量的 I/O 消耗,提升运行速度。 + TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column type is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON type has a multi-valued index, you can use the multi-value index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed. 
- 多值索引的引入, 是对 JSON 类型的进一步增强, 同时也提升了 TiDB 对 MySQL 8.0 的兼容性。
+ Introducing multi-valued indexes further enhances the JSON type and also improves TiDB's compatibility with MySQL 8.0.

- 更多信息,请参考[用户文档](/sql-statements/sql-statement-create-index.md#多值索引)。
+ For details, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index).

-* 绑定历史执行计划 GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn**
+* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn**

- 在 v6.5 中,TiDB 扩展了 [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) 语句中的绑定对象,支持根据历史执行计划创建绑定。在 v6.6 中这个功能 GA, 执行计划的选择不仅限在当前 TiDB 节点,任意 TiDB 节点产生的历史执行计划都可以被选为 [SQL Binding]((/sql-statements/sql-statement-create-binding.md)) 的目标,进一步提升了功能的易用性。
+ In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability.

- 更多信息,请参考[用户文档](/sql-plan-management.md#根据历史执行计划创建绑定)。
+ For more information, see [documentation](/sql-plan-management.md#create-a-binding-according-to-a-historical-execution-plan).

* Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai**

@@ -86,21 +86,21 @@ TiDB 版本:6.6.0

 For more information, see [documentation](/dashboard/dashboard-statement-details.md#create-sql-binding).
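The multi-valued index described in this section can be sketched as follows. The table, column names, and values below are hypothetical and assume a TiDB v6.6.0 cluster:

```sql
-- A multi-valued index on the array stored in the JSON column (hypothetical schema).
CREATE TABLE customers (
    id INT PRIMARY KEY,
    info JSON,
    KEY idx_zips ((CAST(info->'$.zipcode' AS UNSIGNED ARRAY)))
);

-- Retrieval conditions with these functions can use the multi-valued index:
SELECT * FROM customers WHERE 94043 MEMBER OF (info->'$.zipcode');
SELECT * FROM customers WHERE JSON_CONTAINS(info->'$.zipcode', '[94043, 94301]');
SELECT * FROM customers WHERE JSON_OVERLAPS(info->'$.zipcode', '[94043, 94301]');
```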
-* 为执行计划缓存增加告警 [#issue号](链接) @[qw4990](https://github.com/qw4990) **tw@TomShawn**
+* Add warnings for execution plans that cannot be cached @[qw4990](https://github.com/qw4990) **tw@TomShawn**

- 当执行计划无法进入执行计划缓存时, TiDB 会通过 warning 的方式说明其无法被缓存的原因, 降低诊断的难度。例如:
+ When an execution plan cannot be cached, TiDB indicates the reason in a warning to make diagnostics easier. For example:

    ```sql
    - mysql> prepare st from 'select * from t where a<?';
    + mysql> PREPARE st FROM 'SELECT * FROM t WHERE a<?';
    Query OK, 0 rows affected (0.00 sec)

    - mysql> set @a='1';
    + mysql> SET @a='1';
    Query OK, 0 rows affected (0.00 sec)

    - mysql> execute st using @a;
    + mysql> EXECUTE st USING @a;
    Empty set, 1 warning (0.01 sec)

    - mysql> show warnings;
    + mysql> SHOW WARNINGS;
    +---------+------+----------------------------------------------+
    | Level | Code | Message |
    +---------+------+----------------------------------------------+
@@ -108,9 +108,9 @@ TiDB 版本:6.6.0
    +---------+------+----------------------------------------------+
    ```

- 上述例子中, 优化器进行了非 INT 类型到 INT 类型的转换,产生的计划可能随着参数变化有风险,因此不缓存。
+ In the preceding example, the optimizer converts a non-INT type to an INT type, and the execution plan might change with the change of the parameter, so TiDB does not cache the plan.

- 更多信息,请参考[用户文档](/sql-prepared-plan-cache.md#prepared-plan-cache-诊断)。
+ For more information, see [documentation](/sql-prepared-plan-cache.md#diagnostics-of-prepared-plan-cache).
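As a follow-up sketch to the example above: passing a parameter whose type matches the column avoids the implicit conversion, so the plan may be eligible for caching. The prepared statement `st` and the hypothetical table `t` (with an `INT` column `a`) are the same as above:

```sql
-- Use an INT value instead of the string '1'; no type conversion is needed,
-- so no plan-cache warning is expected for this execution.
SET @b = 1;
EXECUTE st USING @b;
```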
* Add the `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt** @@ -152,24 +152,20 @@ TiDB 版本:6.6.0 * Support pushing down the `regexp_replace` string function to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** -* 批量聚合数据请求 [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn** +* Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn** - 当 TiDB 向 TiKV 发送数据请求时, 会根据数据所在的 Region 将请求编入不同的子任务,每个子任务只处理单个 Region 的请求。 当访问的数据离散度很高时, 即使数据量不大,也会生成众多的子任务,进而产生大量 RPC 请求,消耗额外的时间。 在 v6.6.0 中,TiDB 支持将发送到相同 TiKV 实例的数据请求部分合并,减少子任务的数量和 RPC 请求的开销。 在数据离散度高且 gRPC 线程池资源紧张的情况下,批量化请求能够将性能提升 50% 以上。 + When TiDB sends a data request to TiKV, TiDB will compile the request into different sub-tasks according to the Region where the data is located, and each sub-task only processes the request of a single Region. When the data to be accessed is highly dispersed, even if the size of the data is not large, many sub-tasks will be generated, which in turn will generate many RPC requests and consume extra time. Starting from v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of sub-tasks and the overhead of RPC requests. In the case of high data dispersion and insufficient gRPC thread pool resources, batching requests can improve performance by more than 50%. - 此特性默认打开, 通过系统变量 [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) 设置批量请求的大小。 + This feature is enabled by default. You can set the batch size of requests using the system variable [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size). 
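The batching behavior described above is controlled per session by a system variable; a sketch of inspecting and tuning it (the appropriate value depends on the workload):

```sql
-- Inspect the current batch size of coprocessor sub-tasks per TiKV instance.
SHOW VARIABLES LIKE 'tidb_store_batch_size';

-- Adjust it for the current session only.
SET SESSION tidb_store_batch_size = 8;
```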
-* 新增一系列优化器 Hint [#39964](https://github.com/pingcap/tidb/issues/39964) @[Reminiscent](https://github.com/Reminiscent) **tw@TomShawn**
+* Add several optimizer hints [#39964](https://github.com/pingcap/tidb/issues/39964) @[Reminiscent](https://github.com/Reminiscent) **tw@TomShawn**

- TiDB 在新版本中增加了一系列优化器 Hint, 用来控制 `LIMIT` 操作的执行计划选择,以及 MPP 执行过程中的部分行为。 其中包括:
+ TiDB adds several optimizer hints in v6.6.0 to control the selection of `LIMIT` operations in execution plans.

- - [`KEEP_ORDER()`](/optimizer-hints.md#keep_ordert1_name-idx1_name--idx2_name-): 提示优化器使用指定的索引,读取时保持索引的顺序。 生成类似 `Limit + IndexScan(keep order: true)` 的计划。
- - [`NO_KEEP_ORDER()`](/optimizer-hints.md#no_keep_ordert1_name-idx1_name--idx2_name-): 提示优化器使用指定的索引,读取时不保持顺序。 生成类似 `TopN + IndexScan(keep order: false)` 的计划。
- - [`SHUFFLE_JOIN()`](/optimizer-hints.md#shuffle_joint1_name--tl_name-): 针对 MPP 生效。 提示优化器对指定表使用 Shuffle Join 算法。
- - [`BROADCAST_JOIN()`](/optimizer-hints.md#broadcast_joint1_name--tl_name-): 针对 MPP 生效。提示优化器对指定表使用 Broadcast Join 算法。
- - [`MPP_1PHASE_AGG()`](/optimizer-hints.md#mpp_1phase_agg): 针对 MPP 生效。提示优化器对指定查询块中所有聚合函数使用一阶段聚合算法。
- - [`MPP_2PHASE_AGG()`](/optimizer-hints.md#mpp_2phase_agg): 针对 MPP 生效。 提示优化器对指定查询块中所有聚合函数使用二阶段聚合算法。
+ - [`ORDER_INDEX()`](/optimizer-hints.md#order_indext1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, to keep the order of the index when reading data, and to generate plans similar to `Limit + IndexScan(keep order: true)`.
+ - [`NO_ORDER_INDEX()`](/optimizer-hints.md#no_order_indext1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, not to keep the order of the index when reading data, and to generate plans similar to `TopN + IndexScan(keep order: false)`.

- 优化器 Hint 的持续引入,为用户提供了更多的干预手段,有助于 SQL 性能问题的解决,并提升了整体性能的稳定性。
+ The continuous introduction of optimizer hints gives users more ways to intervene, which helps resolve SQL performance issues and improves the stability of overall performance.
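A minimal sketch of the two hints above, assuming a hypothetical table `t` with an index `idx_a` on column `a`:

```sql
-- Read the index in order and push the limit down:
-- expected plan shape is Limit + IndexScan(keep order: true).
SELECT /*+ ORDER_INDEX(t, idx_a) */ * FROM t WHERE a > 10 ORDER BY a LIMIT 10;

-- Read the index out of order and sort afterwards:
-- expected plan shape is TopN + IndexScan(keep order: false).
SELECT /*+ NO_ORDER_INDEX(t, idx_a) */ * FROM t WHERE a > 10 ORDER BY a LIMIT 10;
```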
* Remove the limit on `LIMIT` statements [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415** @@ -177,11 +173,17 @@ TiDB 版本:6.6.0 For more information, see [documentation](/sql-prepared-plan-cache.md). -* 悲观锁队列的稳定唤醒模型 [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn** +* Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn** - 如果业务场景存在单点悲观锁冲突频繁的情况,原有的唤醒机制无法保证事务获取锁的时间,造成长尾延迟高,甚至获取超时。 在 v6.6.0 中,通过设置系统变量 [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-从-v660-版本开始引入) 为 `ON` 可以开启悲观锁的稳定唤醒模型。 在新的唤醒模型下, 队列的唤醒顺序可被严格控制,避免无效的唤醒造成的资源浪费,在锁冲突严重的场景中,能够减少长尾延时,降低 P99 响应时间。 + If an application scenario encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, long-tail latency and the P99 response time can be reduced. - 更多信息,请参考[用户文档](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-从-v660-版本开始引入)。 + For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). 
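The wake-up model above is toggled through the system variable mentioned in this section, for example:

```sql
-- Enable the stable wake-up model for pessimistic lock queues for the whole cluster.
SET GLOBAL tidb_pessimistic_txn_aggressive_locking = ON;
```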
+ +* TiFlash supports data exchange with compression [#6620](https://github.com/pingcap/tiflash/issues/6620) @[solotzg](https://github.com/solotzg) **tw@TomShawn** + + To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. + + For details, see [documentation](). ### 事务 From 8a110cf974c4297d559fc98a75499d785c9885f8 Mon Sep 17 00:00:00 2001 From: Aolin Date: Mon, 6 Feb 2023 13:40:01 +0800 Subject: [PATCH 012/135] translate new features Signed-off-by: Aolin --- releases/release-6.6.0.md | 18 ++++++++---------- 1 file changed, 8 insertions(+), 10 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 457d152c8442..fc666c06a7bc 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -27,9 +27,9 @@ TiDB 版本:6.6.0 更多信息,请参考[用户文档](链接)。 -* Support the foreign key constraint that is compatible with MySQL (experimental) [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** +* Support the MySQL-compatible foreign key constraint [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** - TiDB v6.6.0 introduces the foreign key constraint feature compatible with MySQL. This feature supports data correlation in a table or between tables, constraint validation, and supports cascade operations. This feature helps to maintain data consistency, improve data quality, and facilitate data modeling. + TiDB v6.6.0 introduces the foreign key constraint feature, which is compatible with MySQL. 
This feature supports data correlation in a table or between tables, constraint validation, and cascade operations. This feature helps to maintain data consistency, improve data quality, and facilitate data modeling.

  For more information, see [documentation](/sql-statements/sql-statement-foreign-key.md).

@@ -114,7 +114,7 @@ TiDB 版本:6.6.0

 * Add the `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt**

   The `Warnings` field is added to the slow query log in JSON format to record the warnings generated during the execution of the slow query to help diagnose performance issues. You can also view this on the slow query page of TiDB Dashboard.

   For more information, see [documentation](/identify-slow-queries.md).

@@ -214,13 +214,13 @@ TiDB 版本:6.6.0

 * Use a temporary Witness replica to speed up failover [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt**

   The Witness feature can be used to quickly recover a failover to improve system availability and data durability. For example, in a 3-out-of-4 scenario, although it meets the majority requirement, the system is fragile and the time to completely recover a new member is often long (requires copying the snapshot first and then applying the latest log), especially when the Region snapshot is relatively large. In addition, the process of copying replicas might cause more pressure on unhealthy Group members. Therefore, adding a Witness can quickly bring down an unhealthy node and enmsure the security of logs during recovery.
+ The Witness feature can be used to quickly recover a failover to improve system availability and data durability. For example, in a 3-out-of-4 scenario, although it meets the majority requirement, the system is fragile and the time to completely recover a new member is often long (requires copying the snapshot first and then applying the latest log), especially when the Region snapshot is relatively large. In addition, the process of copying replicas might cause more pressure on unhealthy Group members. Therefore, adding a Witness can quickly bring down an unhealthy node and ensure the security of logs during recovery. For more information, see [documentation](/use-witness-to-speed-up-failover.md)。 * Support configuring read-only storage nodes for resource-consuming tasks [#issue号](链接) @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** - In production environments, some read-only operations might consume a large amount of resources regularly, which might affect the performance of the entire cluster, such as backups and large-scale data analysis. TiDB v6.6.0 supports configuring read-only storage nodes to execute resource-consuming read-only tasks to reduce the impact on the online application. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#操作步骤) and specify where to read data through a system variable or client parameter to ensure the stability of cluster performance. + In production environments, some read-only operations might consume a large number of resources regularly, which might affect the performance of the entire cluster, such as backups and large-scale data analysis. TiDB v6.6.0 supports configuring read-only storage nodes to execute resource-consuming read-only tasks to reduce the impact on the online application. 
You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where to read data through a system variable or client parameter to ensure the stability of cluster performance. For more information, see [documentation](/best-practices/readonly-nodes.md). @@ -240,11 +240,9 @@ TiDB 版本:6.6.0 ### MySQL 兼容性 -* 功能标题 [#issue号](链接) @[贡献者 GitHub ID](链接) - - 功能描述(需要包含这个功能是什么、在什么场景下对用户有什么价值、怎么用) +* Support the MySQL-compatible foreign key constraint [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** - 更多信息,请参考[用户文档](链接)。 + For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-foreign-key.md). ### 数据迁移 @@ -266,7 +264,7 @@ TiDB 版本:6.6.0 * TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt** - Before v6.6.0, when the write throughput of the upstream table is large, the replication capability of a single table could not be scaled out, resulting in an increase in replication latency. Starting from TiCDC v6.6.0. the changefeed of a upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which enables scaling out the replication capability of a single table. + Before v6.6.0, when the write throughput of the upstream table is large, the replication capability of a single table could not be scaled out, resulting in an increase in replication latency. Starting from TiCDC v6.6.0. the changefeed of an upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which enables scaling out the replication capability of a single table. For more information, see [documentation](/ticdc/ticdc-sink-to-kafka.md#scale-out-the-load-of-a-single-large-table-to-multiple-ticdc-nodes). 
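The MySQL-compatible foreign key constraint covered by this patch can be sketched with hypothetical tables as follows:

```sql
CREATE TABLE parent (
    id INT PRIMARY KEY
);

CREATE TABLE child (
    id INT PRIMARY KEY,
    pid INT,
    CONSTRAINT fk_pid FOREIGN KEY (pid) REFERENCES parent (id) ON DELETE CASCADE
);

-- With ON DELETE CASCADE, deleting a parent row also removes the referencing child rows.
DELETE FROM parent WHERE id = 1;
```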
From b0bd6164abd5815ca236a9145db037021cf80a5b Mon Sep 17 00:00:00 2001 From: shichun-0415 Date: Mon, 6 Feb 2023 13:50:17 +0800 Subject: [PATCH 013/135] explain ticdc consistent.storage --- releases/release-6.6.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index fc666c06a7bc..cb0a75f43ac2 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -199,7 +199,7 @@ TiDB 版本:6.6.0 TiDB clusters support creating resource groups, binding different database users to corresponding resource groups, and setting quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions from the same resource group will be limited to the quota, so that one resource group will not be over-consumed and affect the normal operation of sessions in other resource groups. The built-in view of the system will display the actual usage of resources, assisting you to allocate resources more rationally. - The introduction of the resource control feature is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not crowd out the resources needed by other units. + The introduction of the resource control feature is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not crowd out the resources needed by other units. 
With this feature, you can: @@ -248,7 +248,7 @@ TiDB 版本:6.6.0 * TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration @[lance6716](https://github.com/lance6716) **tw@ran-huang** - In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. + In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. Prior to v6.6.0, for high data volume scenarios, you were required to configure TiDB Lightning's physical import task separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning's tasks; one DM task can accomplish the migration. @@ -294,7 +294,7 @@ TiDB 版本:6.6.0 | -------- | -------- | -------- | -------- | | TiKV | [`resource_control.enabled`](/tikv-configuration-file.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) | 新增 | 是否支持按照资源组配额调度。 默认 `false` ,即关闭按照资源组配额调度。 | | TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from 0 to 0.8, which means the limit is 80% of the total memory.| -| TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | Added to value options: GCS and Azure. 
| +| TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options are added for `scheme`, GCS and Azure. | | TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | New | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. | | TiDB | [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660) | New | Controls whether to enable statements summary persistence. The default value is `false`, which means this feature is not enabled by default. | | TiDB | [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the file to which persistent data is written. | From 3ab87af035a2159328b0d01c1f839e58a54d804a Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 6 Feb 2023 17:20:19 +0800 Subject: [PATCH 014/135] translate some titles to english --- releases/release-6.6.0.md | 62 +++++++++++++++------------------------ 1 file changed, 23 insertions(+), 39 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index cb0a75f43ac2..abe9c3c93da4 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -4,20 +4,20 @@ title: TiDB 6.6.0 Release Notes # TiDB 6.6.0 Release Notes -发版日期:2023 年 x 月 x 日 +Release date: xx, 2023 -TiDB 版本:6.6.0 +TiDB version: 6.6.0-DMR -试用链接:[快速体验](https://docs.pingcap.com/zh/tidb/v6.6/quick-start-with-tidb) | [下载离线包](https://cn.pingcap.com/product-community/) +Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.6/quick-start-with-tidb) | [Installation package](https://cn.pingcap.com/product-community/) -在 6.6.0 版本中,你可以获得以下关键特性: +In v6.6.0-DMR, the key new features and improvements are 
as follows: - MySQL 8.0 兼容的多值索引 (Multi-Valued Index) (实验特性) - 基于资源组的资源管控 (实验特性) - 悲观锁队列的稳定唤醒模型 - 数据请求的批量聚合 -## 新功能 +## New features ### SQL @@ -76,7 +76,7 @@ TiDB 版本:6.6.0 For more information, see [documentation](/enable-tls-between-components.md). -### 可观测性 +### Observability * Support quickly creating SQL binding on TiDB Dashboard [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) **tw@ran-huang** @@ -134,7 +134,7 @@ TiDB 版本:6.6.0 For more information, see [documentation](/statement-summary-tables.md#persist-statements-summary). -### 性能 +### Performance * Use Witness to Save Costs in a highly reliable storage environment [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** @@ -185,15 +185,7 @@ TiDB 版本:6.6.0 For details, see [documentation](). -### 事务 - -* 功能标题 [#issue号](链接) @[贡献者 GitHub ID](链接) - - 功能描述(需要包含这个功能是什么、在什么场景下对用户有什么价值、怎么用) - - 更多信息,请参考[用户文档](链接)。 - -### 稳定性 +### Stability * Resource control based on resource groups (experimental) #[38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** @@ -224,7 +216,7 @@ TiDB 版本:6.6.0 For more information, see [documentation](/best-practices/readonly-nodes.md). 
-### 易用性 +### Ease of use * Support dynamically modifying `store-io-pool-size` [#13964](https://github.com/tikv/tikv/issues/13964) @[LykxSassinator](https://github.com/LykxSassinator) **tw@shichun-0415** @@ -238,13 +230,13 @@ TiDB 版本:6.6.0 For more information, see the [configuration item `initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660). -### MySQL 兼容性 +### MySQL compatibility * Support the MySQL-compatible foreign key constraint [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-foreign-key.md). -### 数据迁移 +### Data migration * TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration @[lance6716](https://github.com/lance6716) **tw@ran-huang** @@ -254,13 +246,13 @@ TiDB 版本:6.6.0 For more information, see [documentation]/dm/dm-precheck.md#physical-import-check-items). -### 数据共享与订阅 +### TiDB data share subscription * The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt** TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. TiKV can operate independently of TiDB and form a KV database with PD. In this case, the product is called RawKV. TiKV-CDC supports subscribing to data changes of RawKV and replicating them to a downstream TiKV cluster in real time, thus enabling cross-cluster replication of RawKV. - For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc-cn/). + For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc/). 
* TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt** @@ -268,19 +260,11 @@ TiDB 版本:6.6.0 For more information, see [documentation](/ticdc/ticdc-sink-to-kafka.md#scale-out-the-load-of-a-single-large-table-to-multiple-ticdc-nodes). -### 部署及运维 - -* 功能标题 [#issue号](链接) @[贡献者 GitHub ID](链接) +## Compatibility changes - 功能描述(需要包含这个功能是什么、在什么场景下对用户有什么价值、怎么用) +### System variables - 更多信息,请参考[用户文档](链接)。 - -## 兼容性变更 - -### 系统变量 - -| 变量名 | 修改类型(包括新增/修改/删除) | 描述 | +| Variable name | Change type | Description | |--------|------------------------------|------| | [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | New | Controls whether Prepared Plan Cache caches execution plans that contain `count` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `count` that is greater than 10000. 
| | [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) | 新增 | 该变量是资源管控特性的开关。该变量设置为 `ON` 后,集群支持应用按照资源组做资源隔离。 | @@ -288,7 +272,7 @@ TiDB 版本:6.6.0 | [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-从-v660-版本开始引入) | 新增 | 是否对悲观锁启用加强的悲观锁唤醒模型。 | | [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | 新增 | 这个变量用来控制是否开启 [`PLAN REPLAYER CAPTURE`](/sql-plan-replayer.md#使用-plan-replayer-capture-抓取目标计划)。默认值 `OFF`, 代表关闭 `PLAN REPLAYER CAPTURE`。 | -### 配置文件参数 +### Configuration file parameters | 配置文件 | 配置项 | 修改类型 | 描述 | | -------- | -------- | -------- | -------- | @@ -309,9 +293,9 @@ TiDB 版本:6.6.0 - Support dynamically modifying `store-io-pool-size`. This facilitate more flexible TiKV performance tuning. - Remove the limit on `LIMIT` statements, thus improving the execution performance. -## 废弃功能 +## Deprecated feature -## 改进提升 +## Improvements + TiDB @@ -348,9 +332,9 @@ TiDB 版本:6.6.0 + TiDB Data Migration (DM) Optimize DM alert rules and content. [7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd** - + Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever an error occured. But some alerts are actually caused by idle database connections, which can be recovered after reconnecting. To reduce this kind of alerts, the alerts are divided into two types: automatically recoverable errors and unrecoverable errors. - + - For errors that are automatically recoverable, report the alert only if the error occurs more than 3 times within 2 minutes. - For errors that are not automatically recoverable, maintain the original behavior and report the alert immediately. 
@@ -371,7 +355,7 @@ TiDB 版本:6.6.0 - Add a new parameter `skip-non-existing-table` to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream [#692](https://github.com/pingcap/tidb-tools/issues/692) @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya9) **tw@shichun-0415** - note [#issue](链接) @[贡献者 GitHub ID](链接) -## 错误修复 +## Bug fixes + TiDB @@ -421,7 +405,7 @@ TiDB 版本:6.6.0 - note [#issue](链接) @[贡献者 GitHub ID](链接) - note [#issue](链接) @[贡献者 GitHub ID](链接) -## 贡献者 +## Contributors 感谢来自 TiDB 社区的贡献者们: From 9117c5cf8b164db13b6d3b6d42857bf025b8d75b Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Tue, 7 Feb 2023 10:14:20 +0800 Subject: [PATCH 015/135] Apply suggestions from code review Co-authored-by: Ran --- releases/release-6.6.0.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index abe9c3c93da4..3505aad5d701 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -47,9 +47,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** - TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column type is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON type has a multi-valued index, you can use the multi-value index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed. 
+ TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-value index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed. - Introducing multi-valued indexes further enhances the JSON type and also improves TiDB's compatibility with MySQL 8.0. + Introducing multi-valued indexes further enhances TiDB's support for the JSON data type and also improves TiDB's compatibility with MySQL 8.0. For details, see [documentation]((/sql-statements/sql-statement-create-index.md#multi-valued-index) @@ -154,13 +154,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn** - When TiDB sends a data request to TiKV, TiDB will compile the request into different sub-tasks according to the Region where the data is located, and each sub-task only processes the request of a single Region. When the data to be accessed is highly dispersed, even if the size of the data is not large, many sub-tasks will be generated, which in turn will generate many RPC requests and consume extra time. Starting from v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of sub-tasks and the overhead of RPC requests. In the case of high data dispersion and insufficient gRPC thread pool resources, batching requests can improve performance by more than 50%. 
+ When TiDB sends a data request to TiKV, TiDB compiles the request into different sub-tasks according to the Region where the data is located, and each sub-task only processes the request of a single Region. When the data to be accessed is highly dispersed, even if the size of the data is not large, many sub-tasks will be generated, which in turn will generate many RPC requests and consume extra time. Starting from v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of sub-tasks and the overhead of RPC requests. In the case of high data dispersion and insufficient gRPC thread pool resources, batching requests can improve performance by more than 50%. This feature is enabled by default. You can set the batch size of requests using the system variable [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size). * Add several optimizer hints [#39964](https://github.com/pingcap/tidb/issues/39964) @[Reminiscent](https://github.com/Reminiscent) **tw@TomShawn** - TiDB adds several optimizer hints in v6.6.0 to control the selection of `LIMIT` operations in execution plans. + TiDB adds several optimizer hints in v6.6.0 to control the execution plan selection of `LIMIT` operations. - [`ORDER_INDEX()`](/optimizer-hints.md#keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, to keep the order of the index when reading data, and generates plans similar to `Limit + IndexScan(keep order: true)`. - [`NO_ORDER_INDEX()`](/optimizer-hints.md#no_keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, not to keep the order of the index when reading data, and generates plans similar to `TopN + IndexScan(keep order: false)`. 
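As a rough sketch of how the two hints above might be used (the table `t`, column `a`, and index `idx_a` below are hypothetical names, for illustration only):

```sql
-- Hypothetical table with a secondary index.
CREATE TABLE t (id INT PRIMARY KEY, a INT, KEY idx_a (a));

-- Read idx_a in index order; the optimizer tends to produce a
-- Limit + IndexScan(keep order: true) style plan.
SELECT /*+ ORDER_INDEX(t, idx_a) */ a FROM t ORDER BY a LIMIT 10;

-- Read idx_a without keeping order; the optimizer tends to produce a
-- TopN + IndexScan(keep order: false) style plan.
SELECT /*+ NO_ORDER_INDEX(t, idx_a) */ a FROM t ORDER BY a LIMIT 10;
```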
@@ -175,7 +175,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn** - If an application scenario encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, long-tail latency and the P99 response time can be reduced. + If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time. For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). 
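The stable wake-up model described above is toggled through the system variable, for example (a sketch; apply it globally or per session as needed):

```sql
-- Enable the stable wake-up model for pessimistic lock queues cluster-wide.
SET GLOBAL tidb_pessimistic_txn_aggressive_locking = ON;
```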
From b4dc1d7073db9a582f2059d6ff52213cef2e727e Mon Sep 17 00:00:00 2001 From: Aolin Date: Tue, 7 Feb 2023 11:04:44 +0800 Subject: [PATCH 016/135] update links and apply suggestions Signed-off-by: Aolin --- releases/release-6.6.0.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 3505aad5d701..20a4b6994711 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -27,11 +27,11 @@ In v6.6.0-DMR, the key new features and improvements are as follows: 更多信息,请参考[用户文档](链接)。 -* Support the MySQL-compatible foreign key constraint [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** +* Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** - TiDB v6.6.0 introduces the foreign key constraint feature, which is compatible with MySQL. This feature supports data correlation in a table or between tables, constraint validation, and cascade operations. This feature helps to maintain data consistency, improve data quality, and facilitate data modeling. + TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports data association in a table or between tables, constraints validation, and cascade operations. This feature helps to maintain data consistency, improve data quality, and facilitate data modeling. - For more information, see [documentation](/sql-statements/sql-statement-foreign-key.md). + For more information, see [documentation](/foreign-key.md). 
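A minimal sketch of the MySQL-compatible foreign key syntax described above (table and column names are illustrative):

```sql
CREATE TABLE parent (
    id INT PRIMARY KEY
);

CREATE TABLE child (
    id INT PRIMARY KEY,
    parent_id INT,
    -- The foreign key requires parent_id to reference an existing parent row.
    FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE
);

-- With ON DELETE CASCADE, removing a parent row also removes its child rows.
DELETE FROM parent WHERE id = 1;
```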
* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang** @@ -112,9 +112,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/sql-prepared-plan-cache.md#diagnostics-of-prepared-plan-cache). -* Add the `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt** +* Add a `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt** - The `Warnings` field is added to the slow query log in JSON format to record the warnings generated during the execution of the slow query to help diagnose performance issues. You can also view this on the slow query page of TiDB Dashboard. + TiDB v6.6.0 adds a `Warnings` field to the slow query log to help diagnose performance issues. The field records warnings generated during the execution of the slow query. You can also view this on the slow query page of TiDB Dashboard. For more information, see [documentation](/identify-slow-queries.md). 
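The `FLASHBACK CLUSTER TO TIMESTAMP` statement covered earlier in this section takes a point in time within the GC lifetime, for example (the timestamp is illustrative):

```sql
-- Roll the entire cluster back to the given point in time,
-- undoing both DML and DDL changes made after it.
FLASHBACK CLUSTER TO TIMESTAMP '2023-01-30 10:00:00';
```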
@@ -136,9 +136,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Performance -* Use Witness to Save Costs in a highly reliable storage environment [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** +* Use a Witness replica to save costs in a highly reliable storage environment [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** - In cloud environments, it is recommended to use Amazon Elastic Block Store or Persistent Disk of Google Cloud Platform as the storage of each TiKV node. In this case, it is not necessary to use three Raft replicas. To reduce costs, TiKV introduces the Witness feature, which is the "2 Replicas With 1 Log Only" mechanism. The 1 Log Only replica only stores Raft logs but does not apply data, and data consistency is still guaranteed through the Raft protocol. Compared with the standard three replica architecture, Witness can save storage resources and CPU usage. + In cloud environments, when you use the Amazon Elastic Block Store or Persistent Disk of Google Cloud Platform as the storage of each TiKV node, the durability is higher than that of physical disks. In this case, using three Raft replicas with TiKV is possible but not necessary. To reduce costs, TiKV introduces the Witness feature, which is the "2 Replicas With 1 Log Only" mechanism. The 1 Log Only replica only stores Raft logs but does not apply data, and still ensures data consistency through the Raft protocol. Compared with the standard three replica architecture, Witness can save storage resources and CPU usage. For more information, see [documentation](/use-witness-to-save-costs.md). @@ -204,15 +204,15 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/tidb-resource-control.md). 
-* Use a temporary Witness replica to spped up failover [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** +* Use a temporary Witness replica to speed up failover [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** - The Witness feature can be used to quickly recover a failover to improve system availability and data durability. For example, in a 3-out-of-4 scenario, although it meets the majority requirement, the system is fragile and the time to completely recover a new member is often long (requires copying the snapshot first and then applying the latest log), especially when the Region snapshot is relatively large. In addition, the process of copying replicas might cause more pressure on unhealthy Group members. Therefore, adding a Witness can quickly bring down an unhealthy node and ensure the security of logs during recovery. + The Witness feature can be used to quickly recover from a failover to improve system availability and data durability. For example, in a Raft group of three replicas, if one replica fails, the system is fragile although it meets the majority requirement. It takes a long time to recover a new member (the process requires copying the snapshot first and then applying the latest logs), especially when the Region snapshot is large. In addition, the process of copying replicas might cause more pressure on unhealthy Group members. Therefore, adding a Witness replica can quickly remove the unhealthy node, reduce the risk of the Raft group being unavailable due to another node failure during recovering a new member (the Learner replica cannot participate in the election and submission), and ensure the security of logs during recovery. 
For more information, see [documentation](/use-witness-to-speed-up-failover.md)。 * Support configuring read-only storage nodes for resource-consuming tasks [#issue号](链接) @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** - In production environments, some read-only operations might consume a large number of resources regularly, which might affect the performance of the entire cluster, such as backups and large-scale data analysis. TiDB v6.6.0 supports configuring read-only storage nodes to execute resource-consuming read-only tasks to reduce the impact on the online application. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where to read data through a system variable or client parameter to ensure the stability of cluster performance. + In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--read-only`, to ensure the stability of cluster performance. For more information, see [documentation](/best-practices/readonly-nodes.md). 
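As a sketch of directing reads in TiDB, assuming the read-only storage nodes are deployed as learner replicas as described in the linked steps:

```sql
-- Route reads of the current session to the read-only (learner) storage nodes.
SET SESSION tidb_replica_read = 'learner';
```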
From f6af1055f4ce0d7e75227c5fdb723fb062cf10e2 Mon Sep 17 00:00:00 2001 From: Ran Date: Tue, 7 Feb 2023 11:30:07 +0800 Subject: [PATCH 017/135] Apply suggestions from code review Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.6.0.md | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 20a4b6994711..9dcdf4109f6c 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -33,18 +33,16 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/foreign-key.md). -* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang** +* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang** - The [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement supports restoring the entire cluster to a specified point in time within the Garbage Collection (GC) lifetime. In TiDB v6.6.0, this feature adds support for rolling back DDL operations. This can be used to quickly undo a DML or DDL misoperation on a cluster, fall back a cluster within minutes, and fall back a cluster multiple times on the timeline to determine when specific data changes occurred. + The [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement supports restoring the entire cluster to a specified point in time within the Garbage Collection (GC) lifetime. In TiDB v6.6.0, this feature adds support for rolling back DDL operations. 
This can be used to quickly undo a DML or DDL misoperation on a cluster, roll back a cluster within minutes, and roll back a cluster multiple times on the timeline to determine when specific data changes occurred. For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md). -* Support the distributed parallel execution framework for DDL (experimental) [#issue](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** +* Support the distributed parallel execution framework for DDL operations (experimental) [#issue](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. To further improve DDL concurrency, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations. - For more information, see [documentation](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660). - * Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. 
If an array in the JSON column has a multi-valued index, you can use the multi-valued index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed.

@@ -80,7 +78,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 * Support quickly creating SQL binding on TiDB Dashboard [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) **tw@ran-huang**

-   TiDB v6.6.0 supports creating SQL binding from statement history, which allows you to bind a SQL statement to a specific plan on TiDB Dashboard within a minute.
+   TiDB v6.6.0 supports creating SQL binding from statement history, which allows you to quickly bind a SQL statement to a specific plan on TiDB Dashboard.

    By providing a user-friendly interface, this feature simplifies the process of binding plans in TiDB, reduces the operation complexity, and improves the efficiency and user experience of the plan binding process.

@@ -242,7 +240,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios.

-   Prior to v6.6.0, for high data volume scenarios, you were required to configure TiDB Lightning's physical import task separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration.
Before v6.6.0, for high data volume scenarios, you were required to configure TiDB Lightning's physical import task separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning's tasks; one DM task can accomplish the migration.

   For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items).

From c40b0d3f9d1289e6a46e7cde1a4ae2199e058fa Mon Sep 17 00:00:00 2001
From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com>
Date: Tue, 7 Feb 2023 15:05:43 +0800
Subject: [PATCH 018/135] Apply suggestions from code review

Co-authored-by: Aolin
---
 releases/release-6.6.0.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 9dcdf4109f6c..b6b96b6668d3 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -128,7 +128,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 * Support persisting statements summary (experimental) [#40812](https://github.com/pingcap/tidb/issues/40812) @[mornyx](https://github.com/mornyx) **tw@shichun-0415**

-   Before v6.6.0, statements summary data is maintained in memory. Once the TiDB server restarts, all the statements summary data gets lost. Starting from v6.6.0, TiDB supports enabling statements summary persistence, which allows historical data to be written to disks on a regular basis. In the meantime, the result of queries on system tables will derive from disks, instead of memory. After TiDB restarts, all historical data is still available.
+   Before v6.6.0, statements summary data is kept in memory and would be lost upon a TiDB server restart. Starting from v6.6.0, TiDB supports enabling statements summary persistence, which allows historical data to be written to disks on a regular basis.
In the meantime, the result of queries on system tables will derive from disks, instead of memory. After TiDB restarts, all historical data remains available.

    For more information, see [documentation](/statement-summary-tables.md#persist-statements-summary).

@@ -165,7 +165,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance.

-* Remove the limit on `LIMIT` statements [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415**
+* Remove the limit on `LIMIT` clauses [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415**

    Starting from v6.6.0, TiDB plan cache supports caching queries containing `?` after `Limit`, such as `Limit ?` or `Limit 10, ?`. This feature allows more SQL statements to benefit from plan cache, thus improving execution efficiency.

@@ -224,9 +224,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 * Support specifying the SQL script executed upon TiDB cluster initialization [#35624](https://github.com/pingcap/tidb/issues/35624) @[morgo](https://github.com/morgo) **tw@shichun-0415**

-   When you start a TiDB cluster for the first time, you can specify the SQL script to be executed by configuring the CLI parameter `--initialize-sql-file`. You can use this feature when you need to perform such operations as modifying the value of a system variable, creating a user, or granting privileges.
+   When you start a TiDB cluster for the first time, you can specify the SQL script to be executed by configuring the command line parameter `--initialize-sql-file`. You can use this feature when you need to perform such operations as modifying the value of a system variable, creating a user, or granting privileges.
- For more information, see the [configuration item `initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660). + For more information, see [documentation](/tidb-configuration-file.md#initialize-sql-file-new-in-v660). ### MySQL compatibility @@ -350,7 +350,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + Sync-diff-inspector - - Add a new parameter `skip-non-existing-table` to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream [#692](https://github.com/pingcap/tidb-tools/issues/692) @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya9) **tw@shichun-0415** + - Add a new parameter `skip-non-existing-table` to control whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream [#692](https://github.com/pingcap/tidb-tools/issues/692) @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya94) **tw@shichun-0415** - note [#issue](链接) @[贡献者 GitHub ID](链接) ## Bug fixes From b2c341ae50b823cd49c012482a4a36a09a3fffb3 Mon Sep 17 00:00:00 2001 From: Ran Date: Tue, 7 Feb 2023 16:39:20 +0800 Subject: [PATCH 019/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index b6b96b6668d3..4024c8e2d001 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -284,7 +284,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | TiDB | [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum size of a persistent data file (in MiB). 
|
| TiDB | [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of data files that can be persisted. `0` means no limit on the number of files. |
| sync-diff-inspector | [`skip-non-existing-table`](/sync-diff-inspector/sync-diff-inspector-overview.md#configuration-file-description) | New | Controls whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream. |
-| | | | |
+| DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. |
+| DM | `on-duplicate` | Deleted | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. |
+| DM | [`on-duplicate-logical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls how DM resolves conflicting data in the logical import mode. The default value is `"replace"`, which means using the new data to replace the existing data. |
+| DM | [`on-duplicate-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls how DM resolves conflicting data in the physical import mode. The default value is `"none"`, which means not resolving conflicting data. `"none"` has the best performance, but might lead to inconsistent data in the downstream database. |
+| DM | [`sorting-dir-physical`](/dm/task-configuration-file-full.md) | Newly added | The directory used for local KV sorting in the physical import mode.
The default value is the same as the `dir` configuration. |
+| DM | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item sets the disk quota. It corresponds to the [`disk-quota` configuration of TiDB Lightning](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620). |
+| DM | [`checksum-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE ` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. |

### Others

From acd9201596828ec4dafe00112d544a50c58a6d19 Mon Sep 17 00:00:00 2001
From: Grace Cai
Date: Tue, 7 Feb 2023 16:44:50 +0800
Subject: [PATCH 020/135] Update releases/release-6.6.0.md

---
 releases/release-6.6.0.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 4024c8e2d001..b577ddc8c56f 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -61,8 +61,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    `SURVIVAL_PREFERENCES` provides data survival preference settings to increase the disaster survivability of data. By specifying `SURVIVAL_PREFERENCE`, you can control the following:

-    - For TiDB clusters deployed across regions, when a region with the specified databases or tables fails, another region can provide the service.
-    - For TiDB clusters deployed in a single region, when an availability zone with the specified databases or tables fails, another availability zone can provide the service.
+    - For TiDB clusters deployed across cloud regions, when a cloud region fails, the specified databases or tables can survive in another cloud region.
+    - For TiDB clusters deployed in a single cloud region, when an availability zone fails, the specified databases or tables can survive in another availability zone.

    For more information, see [documentation](/placement-rules-in-sql.md#survival-preference).

From d663466d177278e0a116d4bd1914c161bd80cc87 Mon Sep 17 00:00:00 2001
From: Grace Cai
Date: Tue, 7 Feb 2023 16:45:27 +0800
Subject: [PATCH 021/135] Update releases/release-6.6.0.md

Co-authored-by: xixirangrang
---
 releases/release-6.6.0.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index b577ddc8c56f..db0b959c5d42 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -70,7 +70,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 * TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai**

-   For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. The rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures the cluster high availability.
+   For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. The rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures high availability of the cluster.

    For more information, see [documentation](/enable-tls-between-components.md).
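The survival preferences shown above can be sketched with a placement policy (the policy name and label order are illustrative):

```sql
-- Prefer that replicas survive the loss of a cloud region first,
-- then the loss of an availability zone.
CREATE PLACEMENT POLICY multiregion SURVIVAL_PREFERENCES="[region, zone]";

-- Apply the policy to a table.
CREATE TABLE t1 (id INT) PLACEMENT POLICY=multiregion;
```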
From 66ba064cd5d189da3b14244118f666f425e7c19f Mon Sep 17 00:00:00 2001 From: Aolin Date: Tue, 7 Feb 2023 16:47:30 +0800 Subject: [PATCH 022/135] Apply suggestions from code review Co-authored-by: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> --- releases/release-6.6.0.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index db0b959c5d42..d5b2178d1310 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -29,7 +29,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** - TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports data association in a table or between tables, constraints validation, and cascade operations. This feature helps to maintain data consistency, improve data quality, and facilitate data modeling. + TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to maintain data consistency, improve data quality, and facilitate data modeling. For more information, see [documentation](/foreign-key.md). @@ -112,7 +112,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Add a `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt** - TiDB v6.6.0 adds a `Warnings` field to the slow query log to help diagnose performance issues. The field records warnings generated during the execution of the slow query. You can also view this on the slow query page of TiDB Dashboard. 
+ TiDB v6.6.0 adds a `Warnings` field to the slow query log to help diagnose performance issues. This field records warnings generated during the execution of a slow query. You can also view the warnings on the slow query page of TiDB Dashboard. For more information, see [documentation](/identify-slow-queries.md). @@ -136,7 +136,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Use a Witness replica to save costs in a highly reliable storage environment [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** - In cloud environments, when you use the Amazon Elastic Block Store or Persistent Disk of Google Cloud Platform as the storage of each TiKV node, the durability is higher than that of physical disks. In this case, using three Raft replicas with TiKV is possible but not necessary. To reduce costs, TiKV introduces the Witness feature, which is the "2 Replicas With 1 Log Only" mechanism. The 1 Log Only replica only stores Raft logs but does not apply data, and still ensures data consistency through the Raft protocol. Compared with the standard three replica architecture, Witness can save storage resources and CPU usage. + In cloud environments, when you use the Amazon Elastic Block Store or Persistent Disk of Google Cloud Platform as the storage of each TiKV node, the durability is higher than that of physical disks. In this case, using three Raft replicas with TiKV is possible but not necessary. To reduce costs, TiKV introduces the Witness feature, which is the "2 Replicas With 1 Log Only" mechanism. The 1 Log Only replica only stores Raft logs and does not apply data, and still ensures data consistency through the Raft protocol. Compared with the standard three replica architecture, Witness can save storage resources and CPU usage. For more information, see [documentation](/use-witness-to-save-costs.md). 
@@ -204,9 +204,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Use a temporary Witness replica to speed up failover [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** - The Witness feature can be used to quickly recover from a failover to improve system availability and data durability. For example, in a Raft group of three replicas, if one replica fails, the system is fragile although it meets the majority requirement. It takes a long time to recover a new member (the process requires copying the snapshot first and then applying the latest logs), especially when the Region snapshot is large. In addition, the process of copying replicas might cause more pressure on unhealthy Group members. Therefore, adding a Witness replica can quickly remove the unhealthy node, reduce the risk of the Raft group being unavailable due to another node failure during recovering a new member (the Learner replica cannot participate in the election and submission), and ensure the security of logs during recovery. + The Witness feature can be used to quickly recover from any failure to improve system availability and data durability. For example, in a Raft group of three replicas, if one replica fails, the system is fragile although it meets the majority requirement. It takes a long time to recover a new member (the process requires copying the snapshot first and then applying the latest logs), especially when the Region snapshot is large. In addition, the process of copying replicas might cause more pressure on unhealthy Group members. Therefore, adding a Witness replica can quickly remove the unhealthy node, reduce the risk of the Raft group being unavailable due to another node failure when recovering a new member (the Learner replica cannot participate in the election and submission), and ensure the security of logs during recovery. 
- For more information, see [documentation](/use-witness-to-speed-up-failover.md)。 + For more information, see [documentation](/use-witness-to-speed-up-failover.md). * Support configuring read-only storage nodes for resource-consuming tasks [#issue号](链接) @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** @@ -248,13 +248,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt** - TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. TiKV can operate independently of TiDB and form a KV database with PD. In this case, the product is called RawKV. TiKV-CDC supports subscribing to data changes of RawKV and replicating them to a downstream TiKV cluster in real time, thus enabling cross-cluster replication of RawKV. + TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-CDC supports subscribing to data changes of RawKV and replicating them to a downstream TiKV cluster in real time, thus enabling cross-cluster replication of RawKV. For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc/). * TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt** - Before v6.6.0, when the write throughput of the upstream table is large, the replication capability of a single table could not be scaled out, resulting in an increase in replication latency. Starting from TiCDC v6.6.0. 
the changefeed of an upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which enables scaling out the replication capability of a single table.
+    Before v6.6.0, when a table in the upstream accepts a large amount of writes, the replication capability of this table cannot be scaled out, resulting in an increase in the replication latency. Starting from TiCDC v6.6.0, the changefeed of an upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which means the replication capability of a single table is scaled out.

     For more information, see [documentation](/ticdc/ticdc-sink-to-kafka.md#scale-out-the-load-of-a-single-large-table-to-multiple-ticdc-nodes).

From f0c95c431556d8984cb458be887a77b66d130a3a Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Tue, 7 Feb 2023 19:19:45 +0800
Subject: [PATCH 023/135] add 2 pd gc tuner notes

---
 releases/release-6.6.0.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index d5b2178d1310..d7680d478fe8 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -315,6 +315,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows:
     - note [#issue](链接) @[贡献者 GitHub ID](链接)
     - note [#issue](链接) @[贡献者 GitHub ID](链接)
+    - Support limiting the global memory to alleviate the OOM problem [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes)
+    - Add the GC Tuner to alleviate the GC pressure [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes)

 + TiFlash

From 1b0593a091bad98b0976ce980d7be54178e7885b Mon Sep 17 00:00:00 2001
From: Grace Cai
Date: Tue, 7 Feb 2023 19:30:03 +0800
Subject: [PATCH 024/135] Apply suggestions from code review

Co-authored-by: xixirangrang
---
 releases/release-6.6.0.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index
d7680d478fe8..42d7c640906c 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -70,7 +70,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai** - For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. The rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures high availability of the cluster. + In v6.6.0, TiDB supports automatic rotations of TiFlash TLS certificates. For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. In addition, the rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures high availability of the cluster. For more information, see [documentation](/enable-tls-between-components.md). @@ -144,7 +144,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: The Stale Read feature has been generally available (GA) since v5.1.1, which allows you to read historical data at a specific timestamp or within a specified time range. Stale read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash does not support Stale Read. Even if a table has TiFlash replicas, Stale Read can only read its TiKV replicas. - Starting from v6.6.0, TiFlash supports the Stale Read feature. 
When you query the historical data of a table using the `AS OF TIMESTAMP` syntax or the `tidb_read_staleness` system variable, if the table has a TiFlash replica, the optimizer now can choose to read the corresponding data from the TiFlash replica, thus further improving query performance.
+    Starting from v6.6.0, TiFlash supports the Stale Read feature. When you query the historical data of a table using the [`AS OF TIMESTAMP`](/as-of-timestamp.md) syntax or the [`tidb_read_staleness`](/tidb-read-staleness.md) system variable, if the table has a TiFlash replica, the optimizer now can choose to read the corresponding data from the TiFlash replica, thus further improving query performance.

     For more information, see [documentation](/stale-read.md).

@@ -320,7 +320,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 + TiFlash

-    - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides a foundation for subsequent optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin] **tw@qiancai**
+    - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides the foundation for future optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin](https://github.com/JinheLin) **tw@qiancai**
     - note [#issue](链接) @[贡献者 GitHub ID](链接)

 + Tools

From 5b22c6cf79987d0453670749208978909653a708 Mon Sep 17 00:00:00 2001
From: Grace Cai
Date: Tue, 7 Feb 2023 19:30:20 +0800
Subject: [PATCH 025/135] Update releases/release-6.6.0.md

---
 releases/release-6.6.0.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 42d7c640906c..556089b27b27 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -62,7 +62,7 @@ In v6.6.0-DMR, the key new features and
improvements are as follows: `SURVIVAL_PREFERENCES` provides data survival preference settings to increase the disaster survivability of data. By specifying `SURVIVAL_PREFERENCE`, you can control the following: - For TiDB clusters deployed across cloud regions, when a cloud region fails, the specified databases or tables can survive in another cloud region. - - For TiDB clusters deployed in a single cloud region, when an availability fails, the specified databases or tables can survive in another availability zone. + - For TiDB clusters deployed in a single cloud region, when an availability zone fails, the specified databases or tables can survive in another availability zone. For more information, see [documentation](/placement-rules-in-sql.md#survival-preference). From 799dfaf42ad6ddf92e2b4130bf283a66d9817707 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Wed, 8 Feb 2023 09:04:36 +0800 Subject: [PATCH 026/135] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 556089b27b27..a95566ea6ce5 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -185,20 +185,20 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Stability -* Resource control based on resource groups (experimental) #[38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** +* Support resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) 
@[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** - TiDB clusters support creating resource groups, binding different database users to corresponding resource groups, and setting quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions from the same resource group will be limited to the quota, so that one resource group will not be over-consumed and affect the normal operation of sessions in other resource groups. The built-in view of the system will display the actual usage of resources, assisting you to allocate resources more rationally. + Now you can create resource groups for a TiDB cluster, bind different database users to corresponding resource groups, and set quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions in the same resource group will be limited to the quota. In this way, even if a resource group is over-consumed, the sessions in other resource groups are not affected. TiDB provides a built-in view of the actual usage of resources on Grafana dashboards, assisting you to allocate resources more rationally. The introduction of the resource control feature is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not crowd out the resources needed by other units. With this feature, you can: - - Combine multiple small and medium-sized applications from different systems into one TiDB cluster. 
If the load of an individual application grows larger, it does not affect the normal operation of other businesses. When the system load is low, busy applications can still be allocated the required system resources even if they exceed the set read and write quotas, so as to achieve the maximum utilization of resources. - - Choose to combine all test environments into a single cluster, or group the batch tasks that consume more resources into a single resource group. It can improve hardware utilization and reduce operating costs while ensuring that critical applications can still get the necessary resources. + - Combine multiple small and medium-sized applications from different systems into a single TiDB cluster. When the workload of an application grows larger, it does not affect the normal operation of other applications. When the system workload is low, busy applications can still be allocated the required system resources even if they exceed the set read and write quotas, so as to achieve the maximum utilization of resources. + - Choose to combine all test environments into a single TiDB cluster, or group the batch tasks that consume more resources into a single resource group. It can improve hardware utilization and reduce operating costs while ensuring that critical applications can always get the necessary resources. In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs. - In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource_control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. The currently supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". 
RU is TiDB's unified abstraction unit for system resources such as CPU and IO. + In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource_control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO. For more information, see [documentation](/tidb-resource-control.md). From ffdb009d04dc812b78abf0c3f35baac1f30744d1 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Wed, 8 Feb 2023 09:53:48 +0800 Subject: [PATCH 027/135] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index a95566ea6ce5..a817d79be3b6 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -265,7 +265,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | Variable name | Change type | Description | |--------|------------------------------|------| | [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | New | Controls whether Prepared Plan Cache caches execution plans that contain `count` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `count` that is greater than 10000. 
|
-| [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) | 新增 | 该变量是资源管控特性的开关。该变量设置为 `ON` 后,集群支持应用按照资源组做资源隔离。 |
+| [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) | New | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. |
 | [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | 修改 | 此变量可用于生产环境。 设置 `IndexLookUp` 算子回表时多个 Coprocessor Task 的 batch 大小。`0` 代表不使用 batch。当 `IndexLookUp` 算子的回表 Task 数量特别多,出现极长的慢查询时,可以适当调大该参数以加速查询。 |
 | [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-从-v660-版本开始引入) | 新增 | 是否对悲观锁启用加强的悲观锁唤醒模型。 |
 | [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | 新增 | 这个变量用来控制是否开启 [`PLAN REPLAYER CAPTURE`](/sql-plan-replayer.md#使用-plan-replayer-capture-抓取目标计划)。默认值 `OFF`, 代表关闭 `PLAN REPLAYER CAPTURE`。 |

@@ -274,7 +274,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 | 配置文件 | 配置项 | 修改类型 | 描述 |
 | -------- | -------- | -------- | -------- |
-| TiKV | [`resource_control.enabled`](/tikv-configuration-file.md#tidb_enable_resource_control-%E4%BB%8E-v660-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) | 新增 | 是否支持按照资源组配额调度。 默认 `false` ,即关闭按照资源组配额调度。 |
+| TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | New | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups.
The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups.|
 | TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from 0 to 0.8, which means the limit is 80% of the total memory.|
 | TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options are added for `scheme`, GCS and Azure. |
 | TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | New | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. |

@@ -337,12 +337,12 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 + TiDB Data Migration (DM)

-    - Optimize DM alert rules and content. [7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd**
+    - Optimize DM alert rules and content [#7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd**

-      Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever an error occured. But some alerts are actually caused by idle database connections, which can be recovered after reconnecting. To reduce this kind of alerts, the alerts are divided into two types: automatically recoverable errors and unrecoverable errors.
+      Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever a related error occurred. But some alerts are caused by idle database connections, which can be recovered after reconnecting.
To reduce this kind of alerts, DM divides errors into two types: automatically recoverable errors and unrecoverable errors: - - For errors that are automatically recoverable, report the alert only if the error occurs more than 3 times within 2 minutes. - - For errors that are not automatically recoverable, maintain the original behavior and report the alert immediately. + - For an error that is automatically recoverable, DM reports the alert only if the error occurs more than 3 times within 2 minutes. + - For an error that is not automatically recoverable, DM maintains the original behavior and reports the alert immediately. - note [#issue](链接) @[贡献者 GitHub ID](链接) - note [#issue](链接) @[贡献者 GitHub ID](链接) From 017c86ea864d096cc90dac56e64e1f95b13a48dd Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Wed, 8 Feb 2023 14:10:39 +0800 Subject: [PATCH 028/135] add more and translate compatibility info from docs --- releases/release-6.6.0.md | 50 +++++++++++++++++++++++++++++---------- 1 file changed, 37 insertions(+), 13 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index a817d79be3b6..ddcc2f1f58d1 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -39,7 +39,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md). -* Support the distributed parallel execution framework for DDL operations (experimental) [#issue](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** +* Support the distributed parallel execution framework for DDL operations (experimental) [#37125](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. 
To further improve DDL concurrency, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations.

@@ -51,13 +51,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

     For details, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index).

-* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678)  **tw@TomShawn**
+* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn**

     In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability.

     For more information, see [documentation](/sql-plan-management.md#create-a-binding-according-to-a-historical-execution-plan).
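    The workflow can be sketched in SQL as follows. This is only an illustration: the table name `t` is hypothetical, and `<plan_digest>` is a placeholder that you must replace with a real digest obtained from `INFORMATION_SCHEMA.STATEMENTS_SUMMARY_HISTORY`:

    ```sql
    -- Find the digest of a historical execution plan recorded for a statement.
    -- The table name `t` here is illustrative only.
    SELECT query_sample_text, plan_digest
    FROM information_schema.statements_summary_history
    WHERE query_sample_text LIKE 'SELECT%FROM t%';

    -- Bind that historical plan so matching statements keep using it,
    -- no matter which TiDB node originally generated the plan.
    CREATE GLOBAL BINDING FROM HISTORY USING PLAN DIGEST '<plan_digest>';

    -- Verify that the binding is in effect.
    SHOW GLOBAL BINDINGS;
    ```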
-* Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @nolouch[https://github.com/nolouch] **tw@qiancai** +* Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai** `SURVIVAL_PREFERENCES` provides data survival preference settings to increase the disaster survivability of data. By specifying `SURVIVAL_PREFERENCE`, you can control the following: @@ -208,7 +208,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/use-witness-to-speed-up-failover.md). -* Support configuring read-only storage nodes for resource-consuming tasks [#issue号](链接) @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** +* Support configuring read-only storage nodes for resource-consuming tasks @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--read-only`, to ensure the stability of cluster performance. 
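    As a minimal sketch of the read-routing switch described above (assuming the read-only storage nodes are already deployed and labeled following the referenced steps), a session can choose where its reads go through `tidb_replica_read`:

    ```sql
    -- Route this session's reads to learner replicas on read-only storage nodes.
    SET SESSION tidb_replica_read = 'learner';

    -- Alternatively, prefer the leader replica and let TiDB fall back to
    -- follower replicas when the leader's performance degrades.
    SET SESSION tidb_replica_read = 'prefer-leader';
    ```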
@@ -264,17 +264,30 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 | Variable name | Change type | Description |
 |--------|------------------------------|------|
| `tidb_enable_amend_pessimistic_txn` | Deleted | Starting from v6.5.0, this variable is deprecated. Starting from v6.6.0, this feature and the `AMEND TRANSACTION` feature are deleted. TiDB will use the [meta lock](/metadata-lock.md) mechanism to resolve the `Information schema is changed` error. |
| `tidb_enable_concurrent_ddl` | Deleted | This variable controls whether to allow TiDB to use concurrent DDL statements. When this variable is disabled, TiDB uses the old DDL execution framework, which provides limited support for concurrent DDL execution. Starting from v6.6.0, this variable is deleted and TiDB no longer supports the old DDL execution framework. |
| `tidb_ttl_job_run_interval` | Deleted | This variable is used to control the scheduling interval of TTL jobs in the background. Starting from v6.6.0, this variable is deleted, because TiDB provides the `TTL_JOB_INTERVAL` attribute for every table to control the TTL runtime, which is more flexible than `tidb_ttl_job_run_interval`. |
| [`foreign_key_checks`](/system-variables.md#foreign_key_checks) | Modified | This variable controls whether to enable the foreign key constraint check. The default value changes from `OFF` to `ON`, which means to enable the foreign key check by default.
|
| [`tidb_enable_foreign_key`](/system-variables.md#tidb_enable_foreign_key-new-in-v630) | Modified | This variable controls whether to enable the foreign key feature. The default value changes from `OFF` to `ON`, which means to enable foreign key by default. |
| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `prefer-leader` is added for this variable to improve the overall read availability of TiDB clusters. When this option is set, TiDB prefers to select the leader replica to perform read operations. When the performance of the leader replica significantly decreases, TiDB automatically transfers the read operations to follower replicas. |
| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `learner` is added for this variable to specify the learner replicas with which TiDB reads data from read-only nodes. |
| [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | Modified | This variable is used to control the batch size of the Coprocessor Tasks of the `IndexLookUp` operator. `0` means to disable batch. Starting from v6.6.0, the default value is adjusted from `0` to `4`, which means 4 Coprocessor Tasks will be batched into one task for each batch of requests. |
| [`mpp_exchange_compression_mode`](/system-variables.md#mpp_exchange_compression_mode-new-in-v660) | New | This variable is used to specify the data compression mode of the MPP Exchange operator. This variable takes effect when TiDB selects the MPP execution plan with the version number `1`. The default value `UNSPECIFIED` means that TiDB automatically selects the `FAST` compression mode. |
| [`mpp_version`](/system-variables.md#mpp_version-new-in-v660) | New | This variable is used to specify different versions of the MPP execution plan. After a version is specified, TiDB selects the specified version of the MPP execution plan.
The default value `UNSPECIFIED` means that TiDB automatically selects the latest version `1`. |
| [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | New | This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. |
| [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | New | Controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. |
| [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | New | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. |
| [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) | New | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups.
|
| [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | New | Determines whether to use the enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. |

### Configuration file parameters

| Configuration file | Configuration parameter | Change type | Description |
| -------- | -------- | -------- | -------- |
| TiKV | `enable-statistics` | Deleted | This configuration item specifies whether to enable RocksDB statistics. Starting from v6.6.0, this item is deleted. RocksDB statistics are enabled for all clusters by default to help diagnostics. For details, see [tikv/tikv#13942](https://github.com/tikv/tikv/pull/13942). |
| TiKV | `storage.block-cache.shared` | Deleted | Starting from v6.6.0, this configuration item is deleted, and the block cache is enabled by default and cannot be disabled.
For details, see [#12936](https://github.com/tikv/tikv/issues/12936). |
+| TiKV | `storage.block-cache.block-cache-size` | Modified | Starting from v6.6.0, this configuration item is only used for calculating the default value of `storage.block-cache.capacity`. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). |
| TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from 0 to 0.8, which means the limit is 80% of the total memory. |
| TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options are added for `scheme`, GCS and Azure. |
| TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | New | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. |

@@ -283,7 +296,15 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

| TiDB | [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of days to keep persistent data files. |
| TiDB | [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum size of a persistent data file (in MiB). |
| TiDB | [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of data files that can be persisted.
`0` means no limit on the number of files. |
-| sync-diff-inspector | [`skip-non-existing-table`](/sync-diff-inspector/sync-diff-inspector-overview.md#configuration-file-description) | New | Controls whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream. |
+| TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | New | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups. The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups. |
+| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | New | The memory usage limit for a PD instance. The default value `0.8` means that the memory usage of a PD instance is limited to 80% of the total memory by default. |
+| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | New | The threshold at which PD tries to trigger GC. The default value is `0.7`. |
+| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | New | Controls whether to enable the GOGC tuner, which is enabled by default. |
+| PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | New | The maximum memory threshold for tuning GOGC. The default value is `0.6`. |
+| PD | [`schedule.enable-witness`](/pd-configuration-file.md#enable-witness-new-in-v660) | New | Controls whether to enable the Witness replica feature, which is disabled by default. |
+| PD | [`schedule.switch-witness-interval`](/pd-configuration-file.md#switch-witness-interval-new-in-v660) | New | Controls the time interval in switching between [Witness](/glossary.md#witness) and non-Witness operations on the same Region. That means a Region newly switched to non-Witness cannot be switched to Witness for a while. The default value is 1 hour. |
+| PD | [`schedule.witness-schedule-limit`](/pd-configuration-file.md#witness-schedule-limit-new-in-v660) | New | Controls the concurrency of Witness scheduling tasks, which defaults to `4`. |
+| TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameter) | New | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. |
| DM | Modified | [`import-mode`](/dm/task-configuration-file-full.md) | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. |
| DM | Deleted | `on-duplicate` | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. |
| DM | Newly added | [`on-duplicate-logical`](/dm/task-configuration-file-full.md) | This configuration item controls how DM resolves conflicting data in the logical import mode. The default value is `"replace"`, which means using the new data to replace the existing data. |

@@ -291,6 +312,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

| DM | Newly added | [`sorting-dir-physical`](/dm/task-configuration-file-full.md) | The directory used for local KV sorting in the physical import mode. The default value is the same as the `dir` configuration. |
| DM | Newly added | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | This configuration item sets the disk quota.
It corresponds to the [`disk-quota` configuration of TiDB Lightning](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620). |
| DM | Newly added | [`checksum-physical`](/dm/task-configuration-file-full.md) | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. |
+| sync-diff-inspector | [`skip-non-existing-table`](/sync-diff-inspector/sync-diff-inspector-overview.md#configuration-file-description) | New | Controls whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream. |
+| TiSpark | [`spark.tispark.replica_read`](/tispark-overview.md#tispark-configurations) | New | Controls the type of replicas to be read. The value options are `leader`, `follower`, and `learner`. |
+| TiSpark | [`spark.tispark.replica_read.label`](/tispark-overview.md#tispark-configurations) | New | Sets labels for the target TiKV node. |

### Others

@@ -337,7 +361,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

+ TiDB Data Migration (DM)

-    - Optimize DM alert rules and content [#7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd**
+    - Optimize DM alert rules and content [#7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd**

        Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever a related error occurred. But some alerts are caused by idle database connections, which can be recovered after reconnecting.
To reduce this kind of alerts, DM divides errors into two types: automatically recoverable errors and unrecoverable errors:

@@ -413,6 +437,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

## Contributors

-感谢来自 TiDB 社区的贡献者们:
+We would like to thank the following contributors from the TiDB community:

- [贡献者 GitHub ID]()

From 2b828c562f59ce5e5ee1454e4a03ee630e63b53e Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Wed, 8 Feb 2023 14:24:15 +0800
Subject: [PATCH 029/135] Update releases/release-6.6.0.md

Co-authored-by: Aolin
---
releases/release-6.6.0.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index ddcc2f1f58d1..442d68af2f80 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -242,7 +242,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    Before v6.6.0, for high data volume scenarios, you were required to configure TiDB Lightning's physical import task separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning's tasks; one DM task can accomplish the migration.

-    For more information, see [documentation]/dm/dm-precheck.md#physical-import-check-items).
+    For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items).
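The DM configuration items introduced in the tables earlier in this series (`import-mode`, `sorting-dir-physical`, `disk-quota-physical`, `checksum-physical`) could appear together in a task file roughly as in the following sketch. This is only an illustration: the surrounding section layout and all values are assumptions, not quotes from the patches.

```yaml
# Illustrative DM task fragment (hypothetical layout); the item names come
# from the configuration tables above, and the values are placeholders.
loaders:
  global:
    import-mode: "physical"              # use TiDB Lightning's physical import mode
    sorting-dir-physical: "./sorted-kv"  # directory for local KV sorting
    disk-quota-physical: "10GB"          # mirrors TiDB Lightning's disk-quota
    checksum-physical: "required"        # run ADMIN CHECKSUM TABLE after import
```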
### TiDB data share subscription

From 93cfd8b874440a5a9311a447d97a6ec0c7f759dc Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Wed, 8 Feb 2023 17:28:23 +0800
Subject: [PATCH 030/135] Update releases/release-6.6.0.md

---
releases/release-6.6.0.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 442d68af2f80..75c7bc7b159f 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -411,7 +411,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

+ Backup & Restore (BR)

-    - note [#issue](链接) @[贡献者 GitHub ID](链接)
+    - Fix the issue that when restoring log backup, hot Regions cause the restore to fail [#37207](https://github.com/pingcap/tidb/issues/37207) @[Leavrth](https://github.com/Leavrth)
    - note [#issue](链接) @[贡献者 GitHub ID](链接)

+ TiCDC

From e9fc6d593a594f4f6f2e1c3ccd3e3197aa5d88df Mon Sep 17 00:00:00 2001
From: Ran
Date: Wed, 8 Feb 2023 18:04:47 +0800
Subject: [PATCH 031/135] Update releases/release-6.6.0.md

Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com>
---
releases/release-6.6.0.md | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 75c7bc7b159f..118edf77a60e 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -21,11 +21,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

### SQL

-* 支持 DDL 动态资源管控(实验性特性) [#issue](链接) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang**
+* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang**

-    TiDB v6.6.0 版本引入了 DDL 动态资源管控, 通过自动控制 DDL 的 CPU 和内存使用量,尽量降低 DDL 变更任务对线上业务的影响。
-
-    更多信息,请参考[用户文档](链接)。
+    TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled.

* Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt**

From 55525f206b54f98a5fb894bdc6b4588143f32282 Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Thu, 9 Feb 2023 10:44:11 +0800
Subject: [PATCH 032/135] Apply suggestions from code review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Co-authored-by: xixirangrang
Co-authored-by: Daniël van Eeden
---
releases/release-6.6.0.md | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 118edf77a60e..8bbbbf45a9a3 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -196,7 +196,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs.

-    In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource_control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO.
+    In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource-control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO.

    For more information, see [documentation](/tidb-resource-control.md).

@@ -234,7 +234,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

### Data migration

-* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration @[lance6716](https://github.com/lance6716) **tw@ran-huang**
+* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang**

@@ -262,7 +262,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

| Variable name | Change type | Description |
|--------|------------------------------|------|
-|--------|------------------------------|------|
| `tidb_enable_amend_pessimistic_txn` | Deleted | Starting from v6.5.0, this variable is deprecated. Starting from v6.6.0, this feature and the `AMEND TRANSACTION` feature are deleted. TiDB will use the [meta lock](/metadata-lock.md) mechanism to resolve the `Information schema is changed` error. |
| `tidb_enable_concurrent_ddl` | Deleted | This variable controls whether to allow TiDB to use concurrent DDL statements. When this variable is disabled, TiDB uses the old DDL execution framework, which provides limited support for concurrent DDL execution. Starting from v6.6.0, this variable is deleted and TiDB no longer supports the old DDL execution framework. |
| `tidb_ttl_job_run_interval` | Deleted | This variable is used to control the scheduling interval of TTL jobs in the background. Starting from v6.6.0, this variable is deleted, because TiDB provides the `TTL_JOB_INTERVAL` attribute for every table to control the TTL runtime, which is more flexible than `tidb_ttl_job_run_interval`. |

@@ -276,7 +275,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

| [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | New | This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. |
| [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | New | Controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. |
| [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | New | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. |
-| [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660 | New | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. |
+| [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660) | New | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. |
| [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | New | Determines whether to use the enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. |

### Configuration file parameters

@@ -319,7 +318,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    - Support dynamically modifying `store-io-pool-size`. This facilitates more flexible TiKV performance tuning.
    - Remove the limit on `LIMIT` statements, thus improving the execution performance.
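For readers following the resource control changes in these patches, here is a minimal sketch of how the switch described above could be used. The `CREATE RESOURCE GROUP` statement is an assumption for illustration, based on the Request Unit (RU) description in this series, and is not quoted from the patches.

```sql
-- Enable the resource control feature; the variable is documented in the
-- table above and defaults to OFF.
SET GLOBAL tidb_enable_resource_control = ON;

-- Hypothetical resource group quota expressed in Request Units (RU);
-- the exact syntax is an assumption, not text from this patch series.
CREATE RESOURCE GROUP rg1 RU_PER_SEC = 500;
```

On the TiKV side, the corresponding `resource-control.enabled` configuration item listed in the tables above also needs to be set to `true` for scheduling by RU to take effect.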
-## Deprecated feature
+## Deprecated features

## Improvements

From 31ddfd704cb36a7138add71c69c3b6ee6fc658f8 Mon Sep 17 00:00:00 2001
From: Sen Han <00hnes@gmail.com>
Date: Thu, 9 Feb 2023 11:01:40 +0800
Subject: [PATCH 033/135] Update releases/release-6.6.0.md

---
releases/release-6.6.0.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 8bbbbf45a9a3..a582bc582b93 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -294,7 +294,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

| TiDB | [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum size of a persistent data file (in MiB). |
| TiDB | [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of data files that can be persisted. `0` means no limit on the number of files. |
| TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | New | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups. The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups. |
-| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | New | The memory usage limit for a PD instance. The default value `0.8` means that the memory usage of a PD instance is limited to 80% of the total memory by default. |
+| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | New | The memory usage limit for a PD instance, which is disabled by default. |
| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | New | The threshold at which PD tries to trigger GC. The default value is `0.7`. |
| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | New | Controls whether to enable the GOGC tuner, which is enabled by default. |

From 1de937dc9120b63a22b54520ed93d91c4da1816e Mon Sep 17 00:00:00 2001
From: Sen Han <00hnes@gmail.com>
Date: Thu, 9 Feb 2023 11:02:00 +0800
Subject: [PATCH 034/135] Update releases/release-6.6.0.md

---
releases/release-6.6.0.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index a582bc582b93..2aec2b9fc53d 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -296,7 +296,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

| TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | New | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups. The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups. |
| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | New | The memory usage limit for a PD instance, which is disabled by default. |
| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | New | The threshold at which PD tries to trigger GC. The default value is `0.7`. |
-| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | New | Controls whether to enable the GOGC tuner, which is enabled by default. |
+| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | New | Controls whether to enable the GOGC tuner, which is disabled by default. |
| PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | New | The maximum memory threshold for tuning GOGC. The default value is `0.6`. |
| PD | [`schedule.enable-witness`](/pd-configuration-file.md#enable-witness-new-in-v660) | New | Controls whether to enable the Witness replica feature, which is disabled by default. |
| PD | [`schedule.switch-witness-interval`](/pd-configuration-file.md#switch-witness-interval-new-in-v660) | New | Controls the time interval in switching between [Witness](/glossary.md#witness) and non-Witness operations on the same Region. That means a Region newly switched to non-Witness cannot be switched to Witness for a while. The default value is 1 hour. |

From 77f77aedc60112fe0e6f467ce2905de82e5e0235 Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Thu, 9 Feb 2023 11:09:17 +0800
Subject: [PATCH 035/135] Update releases/release-6.6.0.md

---
releases/release-6.6.0.md | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 2aec2b9fc53d..13000db5336e 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -232,6 +232,10 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-foreign-key.md).
+* Support the MySQL-compatible multi-valued index [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn**
+
+    For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index).
+
### Data migration

* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang**

From 982650b202b930855ea43e6791213df84dc13578 Mon Sep 17 00:00:00 2001
From: shichun-0415
Date: Thu, 9 Feb 2023 11:17:05 +0800
Subject: [PATCH 036/135] fix gogc config

---
releases/release-6.6.0.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 13000db5336e..856399b1aa59 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -298,10 +298,10 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

| TiDB | [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum size of a persistent data file (in MiB). |
| TiDB | [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of data files that can be persisted. `0` means no limit on the number of files. |
| TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | New | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups. The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups. |
-| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | New | The memory usage limit for a PD instance, which is disabled by default. |
-| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | New | The threshold at which PD tries to trigger GC. The default value is `0.7`. |
-| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | New | Controls whether to enable the GOGC tuner, which is disabled by default. |
-| PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | New | The maximum memory threshold for tuning GOGC. The default value is `0.6`. |
+| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | New | The memory limit ratio for a PD instance. The value `0` means no memory limit. |
+| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | New | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. |
+| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | New | Controls whether to enable the GOGC tuner, which is disabled by default. |
+| PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | New | The maximum memory threshold ratio for tuning GOGC. The default value is `0.6`. |
| PD | [`schedule.enable-witness`](/pd-configuration-file.md#enable-witness-new-in-v660) | New | Controls whether to enable the Witness replica feature, which is disabled by default. |
| PD | [`schedule.switch-witness-interval`](/pd-configuration-file.md#switch-witness-interval-new-in-v660) | New | Controls the time interval in switching between [Witness](/glossary.md#witness) and non-Witness operations on the same Region. That means a Region newly switched to non-Witness cannot be switched to Witness for a while. The default value is 1 hour. |
| PD | [`schedule.witness-schedule-limit`](/pd-configuration-file.md#witness-schedule-limit-new-in-v660) | New | Controls the concurrency of Witness scheduling tasks, which defaults to `4`. |

From a00403767256cde700db465b1cfcd0ba063d152f Mon Sep 17 00:00:00 2001
From: Aolin
Date: Thu, 9 Feb 2023 11:43:51 +0800
Subject: [PATCH 037/135] Apply suggestions from code review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Co-authored-by: xixirangrang
Co-authored-by: Daniël van Eeden
---
releases/release-6.6.0.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 856399b1aa59..3c7a13645a94 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -14,6 +14,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

- MySQL 8.0 兼容的多值索引 (Multi-Valued Index) (实验特性)
- 基于资源组的资源管控 (实验特性)
+- Support the MySQL-compatible foreign key constraints
- 悲观锁队列的稳定唤醒模型
- 数据请求的批量聚合

## 新功能

@@ -27,7 +28,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

* Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt**

-    TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to maintain data consistency, improve data quality, and facilitate data modeling.
From 4efa1c30aace60468d13bc8812d6965c1aa2d6a6 Mon Sep 17 00:00:00 2001 From: Aolin Date: Thu, 9 Feb 2023 13:59:09 +0800 Subject: [PATCH 038/135] Apply suggestions from code review --- releases/release-6.6.0.md | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 3c7a13645a94..4dbe61cf85d6 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -203,7 +203,14 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Use a temporary Witness replica to speed up failover [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** - The Witness feature can be used to quickly recover from any failure to improve system availability and data durability. For example, in a Raft group of three replicas, if one replica fails, the system is fragile although it meets the majority requirement. It takes a long time to recover a new member (the process requires copying the snapshot first and then applying the latest logs), especially when the Region snapshot is large. In addition, the process of copying replicas might cause more pressure on unhealthy Group members. Therefore, adding a Witness replica can quickly remove the unhealthy node, reduce the risk of the Raft group being unavailable due to another node failure when recovering a new member (the Learner replica cannot participate in the election and submission), and ensure the security of logs during recovery. + The Witness feature can be used to quickly recover from any failure to improve system availability and data durability. For example, in a Raft group of three replicas, if one replica fails, the following issues might occur: + + - The system is fragile although it meets the majority requirement. 
+ - It takes a long time to recover a new member because the process requires copying the snapshot first and then applying the latest logs. + - It takes more time especially when the Region snapshot is large. + - The process of copying replicas might cause more pressure on unhealthy Group members. + + Therefore, adding a Witness replica can quickly remove the unhealthy node, reduce the risk of the Raft group being unavailable due to another node failure when recovering a new member (the Learner replica cannot participate in the election and submission), and ensure the security of logs during recovery. For more information, see [documentation](/use-witness-to-speed-up-failover.md). From 7a58cdd461d73d9e8ea50d7d0aa9b4a6c2650d04 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Thu, 9 Feb 2023 14:06:21 +0800 Subject: [PATCH 039/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 4dbe61cf85d6..d9b5e0440e68 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -348,8 +348,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - note [#issue](链接) @[贡献者 GitHub ID](链接) - note [#issue](链接) @[贡献者 GitHub ID](链接) - - Support limiting the global memory to alleviate the OOM problem [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes) - - Add the GC Tuner to alleviate the GC pressure [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes) + - Support limiting the global memory to alleviate the OOM problem (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes) + - Add the GC Tuner to alleviate the GC pressure (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes) + TiFlash From e10d70741aafee7e92e72fadf094af92ddbc4645 Mon Sep 
17 00:00:00 2001 From: qiancai Date: Thu, 9 Feb 2023 18:53:00 +0800 Subject: [PATCH 040/135] translate feature descriptions for #4075 and #41163 --- releases/release-6.6.0.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index d9b5e0440e68..de2da8572f7e 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -254,6 +254,20 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). +* TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens [#40750](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) **tw@qiancai** + + Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and a secret access key), so you cannot use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning also supports accessing S3 data via AWS IAM **role's access keys + session tokens** to improve data security. + + For more information, see [documentation](/tidb-lightning/tidb-lightning-data-source.md#import-data-from-amazon-s3). + +* TiDB Lightning supports enabling compressed transfers when sending key-value pairs to TiKV [#41163](https://github.com/pingcap/tidb/issues/41163) @[gozssky](https://github.com/gozssky) **tw@qiancai** + + Starting from v6.6.0, TiDB Lightning supports compressing locally encoded and sorted key-value pairs for network transfer when sending them to TiKV, thus reducing the amount of data transferred over the network and lowering the network bandwidth overhead. Before this feature was introduced, TiDB Lightning required relatively high network bandwidth and incurred high traffic charges for large data volumes. + + This feature is disabled by default.
To enable it, you can set the `compress-kv-pairs` configuration item of TiDB Lightning to "gzip" or "gz". + + For more information, see [documentation](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task). + ### TiDB data share subscription * The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt** From bf3bc1bbb7a310604914bb9f0deaccb15088d6b7 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Thu, 9 Feb 2023 19:25:20 +0800 Subject: [PATCH 041/135] remove witness --- releases/release-6.6.0.md | 22 ---------------------- 1 file changed, 22 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index de2da8572f7e..54364940e889 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -133,12 +133,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Performance -* Use a Witness replica to save costs in a highly reliable storage environment [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** - - In cloud environments, when you use the Amazon Elastic Block Store or Persistent Disk of Google Cloud Platform as the storage of each TiKV node, the durability is higher than that of physical disks. In this case, using three Raft replicas with TiKV is possible but not necessary. To reduce costs, TiKV introduces the Witness feature, which is the "2 Replicas With 1 Log Only" mechanism. The 1 Log Only replica only stores Raft logs and does not apply data, and still ensures data consistency through the Raft protocol. Compared with the standard three replica architecture, Witness can save storage resources and CPU usage.
- - For more information, see [documentation](/use-witness-to-save-costs.md). - * TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai** The Stale Read feature has been generally available (GA) since v5.1.1, which allows you to read historical data at a specific timestamp or within a specified time range. Stale read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash does not support Stale Read. Even if a table has TiFlash replicas, Stale Read can only read its TiKV replicas. @@ -201,19 +195,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/tidb-resource-control.md). -* Use a temporary Witness replica to speed up failover [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) @[ethercflow](https://github.com/ethercflow) **tw@Oreoxmt** - - The Witness feature can be used to quickly recover from any failure to improve system availability and data durability. For example, in a Raft group of three replicas, if one replica fails, the following issues might occur: - - - The system is fragile although it meets the majority requirement. - - It takes a long time to recover a new member because the process requires copying the snapshot first and then applying the latest logs. - - It takes more time especially when the Region snapshot is large. - - The process of copying replicas might cause more pressure on unhealthy Group members. - - Therefore, adding a Witness replica can quickly remove the unhealthy node, reduce the risk of the Raft group being unavailable due to another node failure when recovering a new member (the Learner replica cannot participate in the election and submission), and ensure the security of logs during recovery. 
- - For more information, see [documentation](/use-witness-to-speed-up-failover.md). - * Support configuring read-only storage nodes for resource-consuming tasks @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--read-only`, to ensure the stability of cluster performance. @@ -324,9 +305,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | New | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. | | PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | New | Controls whether to enable the GOGC tuner, which is disabled by default. | | PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | New | The maximum memory threshold ratio for tuning GOGC. The default value is `0.6`. | -| PD | [`schedule.enable-witness`](/pd-configuration-file.md#enable-witness-new-in-v660) | New | Controls whether to enable the Witness replica feature, which is disabled by default. 
| -| PD | [`schedule.switch-witness-interval`](/pd-configuration-file.md#switch-witness-interval-new-in-v660) | New | Controls the time interval in switching between [Witness](/glossary.md#witness) and non-Witness operations on the same Region. That means a Region newly switched to non-Witness cannot be switched to Witness for a while. The default value is 1 hour. | -| PD | [`schedule.witness-schedule-limit`](/pd-configuration-file.md#witness-schedule-limit-new-in-v660) | New | Controls the concurrency of Witness scheduling tasks, which defaults to `4`. | | TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameter) | New | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. | | DM | Modified | [`import-mode`](/dm/task-configuration-file-full.md) | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | | DM | Deleted | `on-duplicate` | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. 
| From 248869ca0b923864a6bfeb5988991589134c8fc5 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 10 Feb 2023 11:23:19 +0800 Subject: [PATCH 042/135] Apply suggestions from code review Co-authored-by: xixirangrang Co-authored-by: Aolin --- releases/release-6.6.0.md | 30 ++++++++++++++++++------------ 1 file changed, 18 insertions(+), 12 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 54364940e889..1b682441c2a5 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -235,6 +235,12 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). +* TiDB Lightning adds a new configuration parameter "header-schema-match" to address the issue of mismatched column names between the source file and the target table @[dsdashun](https://github.com/dsdashun) + + In v6.6.0, TiDB Lightning adds a new profile parameter `header-schema-match`. The default value is `true`, which means the first row of the source CSV file is treated as the column name, and consistent with that in the target table. The value of `false` means the headers do not match. In previous versions, this scenario caused import errors. In v6.6.0, when you set this parameter to `false`, if the column order of the source file is the same as that of the target table, TiDB Lightning will ignore the column names in the source file and import the data directly in the order of the columns in the target table. + + For more information, see [TiDB Lightning (Task)](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task).
+ +* TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens [#40750](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) **tw@qiancai** Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and a secret access key), so you cannot use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning also supports accessing S3 data via AWS IAM **role's access keys + session tokens** to improve data security. @@ -272,35 +278,35 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | `tidb_enable_amend_pessimistic_txn` | Deleted | Starting from v6.5.0, this variable is deprecated. Starting from v6.6.0, this feature and the `AMEND TRANSACTION` feature are deleted. TiDB will use the [meta lock](/metadata-lock.md) mechanism to resolve the `Information schema is changed` error. | | `tidb_enable_concurrent_ddl` | Deleted | This variable controls whether to allow TiDB to use concurrent DDL statements. When this variable is disabled, TiDB uses the old DDL execution framework, which provides limited support for concurrent DDL execution. Starting from v6.6.0, this variable is deleted and TiDB no longer supports the old DDL execution framework. | | `tidb_ttl_job_run_interval` | Deleted | This variable is used to control the scheduling interval of TTL jobs in the background. Starting from v6.6.0, this variable is deleted, because TiDB provides the `TTL_JOB_INTERVAL` attribute for every table to control the TTL runtime, which is more flexible than `tidb_ttl_job_run_interval`. | -| [`foreign_key_checks`](/system-variables.md#foreign_key_checks) | Modified | This variables controls whether to enable the foreign key constraint check. The default value changes from `OFF` to `ON`, which means to enable the foreign key check by default.
| -| [`tidb_enable_foreign_key`](/system-variables.md#tidb_enable_foreign_key-new-in-v630) | Modified | This variable controls whether to enable the foreign key feature. The default value changes from `OFF` to `ON`, which means to enable foreign key by default. | -| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `prefer-leader` is added for this variable to improve the overall read availablity of TiDB clusters. When this option is set, TiDB prefers to select the leader replica to perform read operations. When the performance of the leader replica significantly decreases, TiDB automatically transfers the read operations to follower replicas. | +| [`foreign_key_checks`](/system-variables.md#foreign_key_checks) | Modified | This variable controls whether to enable the foreign key constraint check. The default value changes from `OFF` to `ON`, which means enabling the foreign key check by default. | +| [`tidb_enable_foreign_key`](/system-variables.md#tidb_enable_foreign_key-new-in-v630) | Modified | This variable controls whether to enable the foreign key feature. The default value changes from `OFF` to `ON`, which means enabling foreign key by default. | +| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `prefer-leader` is added for this variable to improve the overall read availability of TiDB clusters. When this option is set, TiDB prefers to select the leader replica to perform read operations. When the performance of the leader replica significantly decreases, TiDB automatically transfers the read operations to follower replicas. | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `learner` is added for this variable to specify the learner replicas with which TiDB reads data from read-only nodes. 
| -| [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | Modified | This variable is used to control the batch size of the Coprocessor Tasks of the `IndexLookUp` operator. `0` means to disable batch. Starting from v6.6.0, the default value is adjusted from `0` to `4`, which means 4 Coprocessor Tasks will be batched into one task for each batch of requests. | +| [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | Modified | This variable is used to control the batch size of the Coprocessor Tasks of the `IndexLookUp` operator. `0` means to disable batch. Starting from v6.6.0, the default value is changed from `0` to `4`, which means 4 Coprocessor Tasks will be batched into one task for each batch of requests. | | [`mpp_exchange_compression_mode`](/system-variables.md#mpp_exchange_compression_mode-new-in-v660) | New | This variable is used to specify the data compression mode of the MPP Exchange operator. This variable takes effect when TiDB selects the MPP execution plan with the version number `1`. The default value `UNSPECIFIED` means that TiDB automatically selects the `FAST` compression mode. | -| [`mpp_version`](/system-variables.md#mpp_version-new-in-v660) | New | This variable is used to specify different versions of the MPP execution plan. After a version is specified, TiDB selects the specified version of the MPP execution plan. The default value `UNSPECIFIED` means that TiDB automically selects the latest version `1`. | -| [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | New | This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. The default value `OFF` means not to enable enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. 
| +| [`mpp_version`](/system-variables.md#mpp_version-new-in-v660) | New | This variable is used to specify different versions of the MPP execution plan. After a version is specified, TiDB selects the specified version of the MPP execution plan. The default value `UNSPECIFIED` means that TiDB automatically selects the latest version `1`. | +| [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | New | This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. | | [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | New | Controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. | | [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | New | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. | | [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660) | New | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. 
| -| [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | New | Determines whether to use enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for for pessimistic transactions by default. | +| [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | New | Controls whether to use enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. | ### Configuration file parameters | Configuration file | Configuration parameter | Change type | Description | | -------- | -------- | -------- | -------- | -| TiKV | `enable-statistics` | Deleted | This configuration items specifies whether to enable RocksDB statistics. Starting from v6.6.0, this item is deleted. RocksDB statistics are enabled for all clusters by default to help diagnostics. For details, see [tikv/tikv#13942](https://github.com/tikv/tikv/pull/13942). | +| TiKV | `enable-statistics` | Deleted | This configuration item specifies whether to enable RocksDB statistics. Starting from v6.6.0, this item is deleted. RocksDB statistics are enabled for all clusters by default to help diagnostics. For details, see [tikv/tikv#13942](https://github.com/tikv/tikv/pull/13942). | | TiKV | `storage.block-cache.shared` | Deleted | Starting from v6.6.0, this configuration item is deleted, and the block cache is enabled by default and cannot be disabled. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). | | TiKV | `storage.block-cache.block-cache-size` | Modified | Starting from v6.6.0, this configuration item is only used for calculating the default value of `storage.block-cache.capacity`. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). 
| -| TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from 0 to 0.8, which means the limit is 80% of the total memory.| -| TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options are added for `scheme`, GCS and Azure. | +| TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from `0` to `0.8`, which means the limit is 80% of the total memory. | +| TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options are added for `scheme`, GCS, and Azure. | | TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | New | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. | | TiDB | [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660) | New | Controls whether to enable statements summary persistence. The default value is `false`, which means this feature is not enabled by default. | | TiDB | [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the file to which persistent data is written. 
| | TiDB | [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of days to keep persistent data files. | | TiDB | [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum size of a persistent data file (in MiB). | | TiDB | [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of data files that can be persisted. `0` means no limit on the number of files. | -| TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | New | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups. The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups.| +| TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | New | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups. The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups. | | PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | New | The memory limit ratio for a PD instance. The value `0` means no memory limit. | | PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | New | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. 
| | PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | New | Controls whether to enable the GOGC tuner, which is disabled by default. | @@ -309,7 +315,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | DM | Modified | [`import-mode`](/dm/task-configuration-file-full.md) | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | | DM | Deleted | `on-duplicate` | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. | | DM | Newly added | [`on-duplicate-logical`](/dm/task-configuration-file-full.md) | This configuration item controls how DM resolves conflicting data in the logical import mode. The default value is `"replace"`, which means using the new data to replace the existing data. | -| DM | Newly added | [`on-duplicate-physical`](/dm/task-configuration-file-full.md) | This configuration item controls how DM resolves conflicting data in the physical import mode. The default value is `"none"`, which means not resolving onflicting data. "none" has the best performance, but might lead to inconsistent data in the downstream database. | +| DM | Newly added | [`on-duplicate-physical`](/dm/task-configuration-file-full.md) | This configuration item controls how DM resolves conflicting data in the physical import mode. The default value is `"none"`, which means not resolving conflicting data. "none" has the best performance, but might lead to inconsistent data in the downstream database. | | DM | Newly added | [`sorting-dir-physical`](/dm/task-configuration-file-full.md) | The directory used for local KV sorting in the physical import mode. 
The default value is the same as the `dir` configuration. | | DM | Newly added | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | This configuration item sets the disk quota. It corresponds to the [`disk-quota` configuration of TiDB Lightning](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620). | | DM | Newly added | [`checksum-physical`](/dm/task-configuration-file-full.md) | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE
` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. | From d6a5a9abc6b1c194c6c1238663fe88c37f7eaee9 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 10 Feb 2023 11:56:09 +0800 Subject: [PATCH 043/135] add 2 compatibility items --- releases/release-6.6.0.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 1b682441c2a5..972dfcab04da 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -280,6 +280,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | `tidb_ttl_job_run_interval` | Deleted | This variable is used to control the scheduling interval of TTL jobs in the background. Starting from v6.6.0, this variable is deleted, because TiDB provides the `TTL_JOB_INTERVAL` attribute for every table to control the TTL runtime, which is more flexible than `tidb_ttl_job_run_interval`. | | [`foreign_key_checks`](/system-variables.md#foreign_key_checks) | Modified | This variable controls whether to enable the foreign key constraint check. The default value changes from `OFF` to `ON`, which means enabling the foreign key check by default. | | [`tidb_enable_foreign_key`](/system-variables.md#tidb_enable_foreign_key-new-in-v630) | Modified | This variable controls whether to enable the foreign key feature. The default value changes from `OFF` to `ON`, which means enabling foreign key by default. | +| `tidb_enable_general_plan_cache` | Modified | This variable controls whether to enable General Plan Cache. Starting from v6.6.0, this variable is renamed to [`tidb_enable_non_prepared_plan_cache`](/system-variables.md#tidb_enable_non_prepared_plan_cache). 
| +| `tidb_general_plan_cache_size` | Modified | This variable controls the maximum number of execution plans that can be cached by General Plan Cache. Starting from v6.6.0, this variable is renamed to [`tidb_non_prepared_plan_cache_size`](/system-variables.md#tidb_non_prepared_plan_cache_size). | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `prefer-leader` is added for this variable to improve the overall read availability of TiDB clusters. When this option is set, TiDB prefers to select the leader replica to perform read operations. When the performance of the leader replica significantly decreases, TiDB automatically transfers the read operations to follower replicas. | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `learner` is added for this variable to specify the learner replicas with which TiDB reads data from read-only nodes. | | [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | Modified | This variable is used to control the batch size of the Coprocessor Tasks of the `IndexLookUp` operator. `0` means to disable batch. Starting from v6.6.0, the default value is changed from `0` to `4`, which means 4 Coprocessor Tasks will be batched into one task for each batch of requests. 
| From 50dc52f8a8b94deb776fa8aefd29ec483a8f1638 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Fri, 10 Feb 2023 14:06:38 +0800 Subject: [PATCH 044/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 972dfcab04da..21b2c2ee1802 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -237,7 +237,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * TiDB Lightning adds a new configuration parameter "header-schema-match" to address the issue of mismatched column names between the source file and the target table @[dsdashun](https://github.com/dsdashun) - In v6.6.0, TiDB Lightning adds a new profile parameter `header-schema-match`. The default value is `true`, which means the first row of the source CSV file is treated as the column name, and consistent with that in the target table. The value of `false` means the headers do not match. In previous versions, this scenario caused import errors. In v6.6.0, when you set this parameter to `false`, if the column order of the source file is the same as that of the target table, TiDB Lightning will ignore the column names in the source file and import the data directly in the order of the columns in the target table. + In v6.6.0, TiDB Lightning adds a new configuration parameter `header-schema-match`. The default value is `true`, which means the first row of the source CSV file is treated as the column names and must be consistent with those in the target table. If the field names in the CSV table header do not match the column names of the target table, you can set this configuration item to `false`. TiDB Lightning then ignores the mismatch and continues to import the data in the order of the columns in the target table. For more information, see [TiDB Lightning (Task)](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task).
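As an illustration only, the two TiDB Lightning options described in the patches above could be sketched together in a task configuration file. This is a hedged sketch based solely on the option names mentioned in the text (`header-schema-match`, `compress-kv-pairs`); the section placement and surrounding values are assumptions, not verified against the TiDB Lightning configuration reference:

```toml
[mydumper.csv]
# The source CSV files have a header row.
header = true
# Assumption per the note above: set to false when the header field names
# do not match the target table's column names, so rows are imported by
# column order instead of by header name.
header-schema-match = false

[tikv-importer]
# Described above: compress locally encoded and sorted key-value pairs
# before sending them to TiKV; the note gives "gzip" or "gz" as values,
# and the feature is disabled by default.
compress-kv-pairs = "gzip"
```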
From 5f69d0a1a13d45f54dc381b4c2f95221a00c95eb Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 10 Feb 2023 17:35:26 +0800 Subject: [PATCH 045/135] refine layout according to yiwen92 --- releases/release-6.6.0.md | 272 ++++++++++++++++++++++---------------- 1 file changed, 156 insertions(+), 116 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 21b2c2ee1802..300721f2e196 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -12,37 +12,61 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.6/quick-start-with- In v6.6.0-DMR, the key new features and improvements are as follows: -- MySQL 8.0 兼容的多值索引 (Multi-Valued Index) (实验特性) -- 基于资源组的资源管控 (实验特性) -- Support the MySQL-compatible foreign key constraints -- 悲观锁队列的稳定唤醒模型 -- 数据请求的批量聚合 +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CategoryFeatureDescription
SQL

SEAMLESS to use
Foreign KeySupport MySQL-compatible foreign key constraints to maintain data consistency and improve data quality.
Multi-valued index (experimental)Introduce MySQL-compatible multi-valued indexes and enhance the JSON type to improve TiDB's compatibility with MySQL 8.0.
DB Operations

SMOOTH to use
Resource group (experimental)Support resource management based on resource groups, mapping database users to the corresponding resource groups and setting quotas for each resource group based on actual needs.
Stability

RELIABLE to use
Historical SQL bindingSupport binding historical execution plans and quickly binding execution plans on TiDB Dashboard.
Performance

POWERFUL to use
TiFlash supports compressed data exchangeTiFlash supports data compression to improve the efficiency of parallel data exchange.
TiFlash supports stale readTiFlash supports the Stale Read feature, which can improve query performance in scenarios where high data freshness is not strictly required.
DM supports physical import (experimental)TiDB Data Migration (DM) integrates TiDB Lightning's physical import mode to improve the performance of full data migration, which can be up to 10 times faster.
## New features ### SQL -* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang** - - TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled. - -* Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** +* Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling. For more information, see [documentation](/foreign-key.md). -* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang** - - The [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement supports restoring the entire cluster to a specified point in time within the Garbage Collection (GC) lifetime. In TiDB v6.6.0, this feature adds support for rolling back DDL operations. 
This can be used to quickly undo a DML or DDL misoperation on a cluster, roll back a cluster within minutes, and roll back a cluster multiple times on the timeline to determine when specific data changes occurred. - - For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md). - -* Support the distributed parallel execution framework for DDL operations (experimental) [#37125](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** - - In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. To further improve DDL concurrency, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations. - -* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** +* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. 
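As a sketch of how such an index is created and used (the `customers` table and the `zips` index below are hypothetical examples that follow the MySQL-compatible syntax):

```sql
-- A multi-valued index is defined on a JSON array inside the column `custinfo`.
CREATE TABLE customers (
    id BIGINT PRIMARY KEY,
    custinfo JSON,
    INDEX zips ((CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)))
);

-- Filters on the array can use the multi-valued index `zips`:
SELECT * FROM customers WHERE 94507 MEMBER OF (custinfo->'$.zipcode');
SELECT * FROM customers WHERE JSON_OVERLAPS(custinfo->'$.zipcode', '[94507, 94582]');
```

You can run `EXPLAIN` on such queries to confirm whether the index is chosen.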
If an array in the JSON column has a multi-valued index, you can use the multi-valued index to filter retrieval conditions that use the `MEMBER OF()`, `JSON_CONTAINS()`, or `JSON_OVERLAPS()` functions, thereby greatly reducing I/O consumption and improving operation speed. @@ -50,90 +74,66 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For details, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index). -* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn** - - In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability. - - For more information, see [documentation](/sql-plan-management.md#create-a-binding-according-to-a-historical-execution-plan). - -* Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai** - - `SURVIVAL_PREFERENCES` provides data survival preference settings to increase the disaster survivability of data. By specifying `SURVIVAL_PREFERENCE`, you can control the following: - - - For TiDB clusters deployed across cloud regions, when a cloud region fails, the specified databases or tables can survive in another cloud region. - - For TiDB clusters deployed in a single cloud region, when an availability zone fails, the specified databases or tables can survive in another availability zone.
- - For more information, see [documentation](/placement-rules-in-sql.md#survival-preference). - -### Security +* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang** -* TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai** + The [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement supports restoring the entire cluster to a specified point in time within the Garbage Collection (GC) lifetime. In TiDB v6.6.0, this feature adds support for rolling back DDL operations. This can be used to quickly undo a DML or DDL misoperation on a cluster, roll back a cluster within minutes, and roll back a cluster multiple times on the timeline to determine when specific data changes occurred. - In v6.6.0, TiDB supports automatic rotations of TiFlash TLS certificates. For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. In addition, the rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures high availability of the cluster. + For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md). - For more information, see [documentation](/enable-tls-between-components.md). 
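As an illustration of the `FLASHBACK CLUSTER TO TIMESTAMP` statement described above, a minimal example follows; the timestamp is a placeholder and must fall within the GC lifetime of your cluster:

```sql
FLASHBACK CLUSTER TO TIMESTAMP '2023-01-30 11:00:00';
```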
+### Stability -### Observability +* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn** -* Support quickly creating SQL binding on TiDB Dashboard [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) **tw@ran-huang** + In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability. - TiDB v6.6.0 supports creating SQL binding from statement history, which allows you to quickly bind a SQL statement to a specific plan on TiDB Dashboard. + For more information, see [documentation](/sql-plan-management.md#create-a-binding-according-to-a-historical-execution-plan). - By providing a user-friendly interface, this feature simplifies the process of binding plans in TiDB, reduces the operation complexity, and improves the efficiency and user experience of the plan binding process. +* Add several optimizer hints [#39964](https://github.com/pingcap/tidb/issues/39964) @[Reminiscent](https://github.com/Reminiscent) **tw@TomShawn** - For more information, see [documentation](/dashboard/dashboard-statement-details.md#create-sql-binding). + TiDB adds several optimizer hints in v6.6.0 to control the execution plan selection of `LIMIT` operations. 
-* Add warning for caching execution plans @[qw4990](https://github.com/qw4990) **tw@TomShawn** + - [`ORDER_INDEX()`](/optimizer-hints.md#keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, to keep the order of the index when reading data, and generates plans similar to `Limit + IndexScan(keep order: true)`. + - [`NO_ORDER_INDEX()`](/optimizer-hints.md#no_keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, not to keep the order of the index when reading data, and generates plans similar to `TopN + IndexScan(keep order: false)`. - When an execution plan cannot be cached, TiDB indicates the reason in warning to make diagnostics easier. For example: + Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance. - ```sql - mysql> PREPARE st FROM 'SELECT * FROM t WHERE a SET @a='1'; - Query OK, 0 rows affected (0.00 sec) + If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time. - mysql> EXECUTE st USING @a; - Empty set, 1 warning (0.01 sec) + For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). 
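The stable wake-up model described above is controlled by a system variable, so enabling it is a one-line change (a sketch using the variable name from this page):

```sql
-- Enable the stable wake-up model for pessimistic lock queues
SET GLOBAL tidb_pessimistic_txn_aggressive_locking = ON;
```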
- mysql> SHOW WARNINGS; - +---------+------+----------------------------------------------+ - | Level | Code | Message | - +---------+------+----------------------------------------------+ - | Warning | 1105 | skip plan-cache: '1' may be converted to INT | - +---------+------+----------------------------------------------+ - ``` +* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang** - In the preceding example, the optimizer converts a non-INT type to an INT type, and the execution plan might change with the change of the parameter, so TiDB does not cache the plan. + TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled. - For more information, see [documentation](/sql-prepared-plan-cache.md#diagnostics-of-prepared-plan-cache). +### Performance -* Add a `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt** +* Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn** - TiDB v6.6.0 adds a `Warnings` field to the slow query log to help diagnose performance issues. This field records warnings generated during the execution of a slow query. You can also view the warnings on the slow query page of TiDB Dashboard. + When TiDB sends a data request to TiKV, TiDB compiles the request into different sub-tasks according to the Region where the data is located, and each sub-task only processes the request of a single Region. 
When the data to be accessed is highly dispersed, even if the size of the data is not large, many sub-tasks will be generated, which in turn will generate many RPC requests and consume extra time. Starting from v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of sub-tasks and the overhead of RPC requests. In the case of high data dispersion and insufficient gRPC thread pool resources, batching requests can improve performance by more than 50%. - For more information, see [documentation](/identify-slow-queries.md). + This feature is enabled by default. You can set the batch size of requests using the system variable [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size). -* Automatically capture the generation of SQL execution plans [#38779](https://github.com/pingcap/tidb/issues/38779) @[Yisaer](https://github.com/Yisaer) **tw@ran-huang** +* Remove the limit on `LIMIT` clauses [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415** - In the process of troubleshooting execution plan issues, `PLAN REPLAYER` can help preserve the scene and improve the efficiency of diagnosis. However, in some scenarios, the generation of some execution plans cannot be reproduced freely, which makes the diagnosis work more difficult. + Starting from v6.6.0, TiDB plan cache supports caching queries containing `?` after `Limit`, such as `Limit ?` or `Limit 10, ?`. This feature allows more SQL statements to benefit from plan cache, thus improving execution efficiency. - To address such issues, in TiDB v6.6.0, `PLAN REPLAYER` extends the capability of automatic capture. With the `PLAN REPLAYER CAPTURE` command, you can register the target SQL statement in advance and also specify the target execution plan at the same time. 
When TiDB detects the SQL statement or the execution plan that matches the registered target, it automatically generates and packages the `PLAN REPLAYER` information. When the execution plan is unstable, this feature can improve diagnostic efficiency. + For more information, see [documentation](/sql-prepared-plan-cache.md). - To use this feature, set the value of [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) to `ON`. +* Support the distributed parallel execution framework for DDL operations (experimental) [#37125](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** - For more information, see [documentation](/sql-plan-replayer.md#使用-plan-replayer-capture-抓取目标计划)。 + In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. To further improve DDL concurrency, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations. -* Support persisting statements summary (experimental) [#40812](https://github.com/pingcap/tidb/issues/40812) @[mornyx](https://github.com/mornyx) **tw@shichun-0415** +### HTAP - Before v6.6.0, statements summary data is kept in memory and would be lost upon a TiDB server restart. Starting from v6.6.0, TiDB supports enabling statements summary persistence, which allows historical data to be written to disks on a regular basis. In the meantime, the result of queries on system tables will derive from disks, instead of memory. After TiDB restarts, all historical data remains available. 
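The distributed parallel execution framework for DDL described above is also gated by a system variable; a minimal sketch of trying it (the table `t` and the index name are hypothetical, and currently only `ADD INDEX` benefits):

```sql
-- Enable the distributed parallel execution framework for DDL (experimental)
SET GLOBAL tidb_ddl_distribute_reorg = ON;

-- The StateWriteReorganization phase of this ADD INDEX can now run on all TiDB instances
ALTER TABLE t ADD INDEX idx_a (a);
```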
+* TiFlash supports data exchange with compression [#6620](https://github.com/pingcap/tiflash/issues/6620) @[solotzg](https://github.com/solotzg) **tw@TomShawn** - For more information, see [documentation](/statement-summary-tables.md#persist-statements-summary). + To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary before performing the exchange, thereby improving the efficiency of data exchange. -### Performance + For details, see [documentation](). -* TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai** +* TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai** The Stale Read feature has been generally available (GA) since v5.1.1. It allows you to read historical data at a specific timestamp or within a specified time range. Stale Read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash did not support Stale Read: even if a table had TiFlash replicas, Stale Read could only read its TiKV replicas.
@@ -143,42 +143,34 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support pushing down the `regexp_replace` string function to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** -* Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn** - - When TiDB sends a data request to TiKV, TiDB compiles the request into different sub-tasks according to the Region where the data is located, and each sub-task only processes the request of a single Region. When the data to be accessed is highly dispersed, even if the size of the data is not large, many sub-tasks will be generated, which in turn will generate many RPC requests and consume extra time. Starting from v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of sub-tasks and the overhead of RPC requests. In the case of high data dispersion and insufficient gRPC thread pool resources, batching requests can improve performance by more than 50%. - - This feature is enabled by default. You can set the batch size of requests using the system variable [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size). - -* Add several optimizer hints [#39964](https://github.com/pingcap/tidb/issues/39964) @[Reminiscent](https://github.com/Reminiscent) **tw@TomShawn** - - TiDB adds several optimizer hints in v6.6.0 to control the execution plan selection of `LIMIT` operations. +### High availability - - [`ORDER_INDEX()`](/optimizer-hints.md#keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, to keep the order of the index when reading data, and generates plans similar to `Limit + IndexScan(keep order: true)`. 
- - [`NO_ORDER_INDEX()`](/optimizer-hints.md#no_keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, not to keep the order of the index when reading data, and generates plans similar to `TopN + IndexScan(keep order: false)`. +* Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai** - Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance. + `SURVIVAL_PREFERENCES` provides data survival preference settings to increase the disaster survivability of data. By specifying `SURVIVAL_PREFERENCE`, you can control the following: -* Remove the limit on `LIMIT` clauses [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415** + - For TiDB clusters deployed across cloud regions, when a cloud region fails, the specified databases or tables can survive in another cloud region. + - For TiDB clusters deployed in a single cloud region, when an availability zone fails, the specified databases or tables can survive in another availability zone. - Starting from v6.6.0, TiDB plan cache supports caching queries containing `?` after `Limit`, such as `Limit ?` or `Limit 10, ?`. This feature allows more SQL statements to benefit from plan cache, thus improving execution efficiency. + For more information, see [documentation](/placement-rules-in-sql.md#survival-preference). - For more information, see [documentation](/sql-prepared-plan-cache.md). 
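A sketch of the `SURVIVAL_PREFERENCES` syntax described above; the policy name and table are hypothetical, and the `region` and `zone` labels must match how stores are actually labeled in your deployment:

```sql
-- Prefer surviving a cloud-region failure first, then an availability-zone failure
CREATE PLACEMENT POLICY multiregion SURVIVAL_PREFERENCES="[region, zone]";

-- Bind a table to the policy
CREATE TABLE t1 (a INT) PLACEMENT POLICY=multiregion;
```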
+### Security -* Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn** +* TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai** - If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time. + In v6.6.0, TiDB supports automatic rotations of TiFlash TLS certificates. For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. In addition, the rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures high availability of the cluster. - For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). + For more information, see [documentation](/enable-tls-between-components.md). 
-* TiFlash supports data exchange with compression [#6620](https://github.com/pingcap/tiflash/issues/6620) @[solotzg](https://github.com/solotzg) **tw@TomShawn** +* TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens [#4075](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) **tw@qiancai** - To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. + Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and secret access key) so you cannot use a temporary session token to acess S3 data. Staring from v6.6.0, TiDB Lightning supports accessing S3 data via AWS IAM **role's access keys + session tokens** as well to improve the data security. - For details, see [documentation](). + For more information, see [documentation](/tidb-lightning-data-source#import-data-from-amazon-s3). 
-### Stability +### DB operations -* Support resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** +* Support resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** Now you can create resource groups for a TiDB cluster, bind different database users to corresponding resource groups, and set quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions in the same resource group will be limited to the quota. In this way, even if a resource group is over-consumed, the sessions in other resource groups are not affected. TiDB provides a built-in view of the actual usage of resources on Grafana dashboards, assisting you to allocate resources more rationally. @@ -201,8 +193,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/best-practices/readonly-nodes.md). 
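The resource control workflow described above (create a resource group, set its quota, and bind users to it) can be sketched as follows; the group name, quota value, and user are hypothetical:

```sql
-- Create a resource group with a request-unit (RU) quota
CREATE RESOURCE GROUP IF NOT EXISTS rg1 RU_PER_SEC = 500;

-- Bind an existing database user to the resource group
ALTER USER usr1 RESOURCE GROUP rg1;
```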
-### Ease of use - * Support dynamically modifying `store-io-pool-size` [#13964](https://github.com/tikv/tikv/issues/13964) @[LykxSassinator](https://github.com/LykxSassinator) **tw@shichun-0415** The TiKV configuration item [`raftstore.store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530) specifies the allowable number of threads that process Raft I/O tasks, which can be adjusted when tuning TiKV performance. Before v6.6.0, this configuration item cannot be modified dynamically. Starting from v6.6.0, you can modify this configuration without restarting the server, which means more flexible performance tuning. @@ -215,17 +205,65 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/tidb-configuration-file.md#initialize-sql-file-new-in-v660). -### MySQL compatibility +### Observability -* Support the MySQL-compatible foreign key constraint [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** +* Support quickly creating SQL binding on TiDB Dashboard [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) **tw@ran-huang** - For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-foreign-key.md). + TiDB v6.6.0 supports creating SQL binding from statement history, which allows you to quickly bind a SQL statement to a specific plan on TiDB Dashboard. -* Support the MySQL-compatible multi-valued index [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** + By providing a user-friendly interface, this feature simplifies the process of binding plans in TiDB, reduces the operation complexity, and improves the efficiency and user experience of the plan binding process. 
- For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index). + For more information, see [documentation](/dashboard/dashboard-statement-details.md#create-sql-binding). + +* Add warning for caching execution plans @[qw4990](https://github.com/qw4990) **tw@TomShawn** + + When an execution plan cannot be cached, TiDB indicates the reason in warning to make diagnostics easier. For example: + + ```sql + mysql> PREPARE st FROM 'SELECT * FROM t WHERE a SET @a='1'; + Query OK, 0 rows affected (0.00 sec) + + mysql> EXECUTE st USING @a; + Empty set, 1 warning (0.01 sec) + + mysql> SHOW WARNINGS; + +---------+------+----------------------------------------------+ + | Level | Code | Message | + +---------+------+----------------------------------------------+ + | Warning | 1105 | skip plan-cache: '1' may be converted to INT | + +---------+------+----------------------------------------------+ + ``` + + In the preceding example, the optimizer converts a non-INT type to an INT type, and the execution plan might change with the change of the parameter, so TiDB does not cache the plan. + + For more information, see [documentation](/sql-prepared-plan-cache.md#diagnostics-of-prepared-plan-cache). + +* Add a `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt** + + TiDB v6.6.0 adds a `Warnings` field to the slow query log to help diagnose performance issues. This field records warnings generated during the execution of a slow query. You can also view the warnings on the slow query page of TiDB Dashboard. + + For more information, see [documentation](/identify-slow-queries.md). 
+
+* Automatically capture the generation of SQL execution plans [#38779](https://github.com/pingcap/tidb/issues/38779) @[Yisaer](https://github.com/Yisaer) **tw@ran-huang**
+
+    In the process of troubleshooting execution plan issues, `PLAN REPLAYER` can help preserve the scene and improve the efficiency of diagnosis. However, in some scenarios, the generation of some execution plans cannot be reproduced freely, which makes the diagnosis work more difficult.
+
+    To address such issues, in TiDB v6.6.0, `PLAN REPLAYER` extends the capability of automatic capture. With the `PLAN REPLAYER CAPTURE` command, you can register the target SQL statement in advance and also specify the target execution plan at the same time. When TiDB detects the SQL statement or the execution plan that matches the registered target, it automatically generates and packages the `PLAN REPLAYER` information. When the execution plan is unstable, this feature can improve diagnostic efficiency.
+
+    To use this feature, set the value of [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) to `ON`.
-### Data migration
+
+    For more information, see [documentation](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans).
+
+* Support persisting statements summary (experimental) [#40812](https://github.com/pingcap/tidb/issues/40812) @[mornyx](https://github.com/mornyx) **tw@shichun-0415**
+
+    Before v6.6.0, statements summary data is kept in memory and would be lost upon a TiDB server restart. Starting from v6.6.0, TiDB supports enabling statements summary persistence, which allows historical data to be written to disks on a regular basis. In the meantime, the results of queries on system tables are derived from disks, instead of memory. After TiDB restarts, all historical data remains available.
+
+    For more information, see [documentation](/statement-summary-tables.md#persist-statements-summary).
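+
+    A minimal sketch of what persistence enables, using the existing statements summary system table:
+
+    ```sql
+    -- Top 5 statement digests by execution count across the persisted history.
+    -- With persistence enabled, this query reads from data files on disk,
+    -- so the records survive TiDB server restarts.
+    SELECT digest_text, SUM(exec_count) AS total_exec_count
+    FROM information_schema.statements_summary_history
+    GROUP BY digest_text
+    ORDER BY total_exec_count DESC
+    LIMIT 5;
+    ```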
+
+### Ecosystem

* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang**

@@ -239,13 +277,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    In v6.6.0, TiDB Lightning adds a new configuration parameter `header-schema-match`. The default value is `true`, which means the first row of the source CSV file is treated as column names, which must be consistent with those in the target table. If the field name in the CSV table header does not match the column name of the target table, you can set this configuration to `false`. TiDB Lightning will ignore the error and continue to import the data in the order of the columns in the target table.

-    For more information, see [TiDB Lightning (Task)](tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task).
-
-* TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens [#4075](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) **tw@qiancai**
-
-    Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and secret access key) so you cannot use a temporary session token to acess S3 data. Staring from v6.6.0, TiDB Lightning supports accessing S3 data via AWS IAM **role's access keys + session tokens** as well to improve the data security.
-
-    For more information, see [documentation](/tidb-lightning-data-source#import-data-from-amazon-s3).
+    For more information, see [TiDB Lightning (Task)](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task).
* TiDB Lightning supports enabling compressed transfers when sending key-value pairs to TiKV [#41163](https://github.com/pingcap/tidb/issues/41163) @[gozssky](https://github.com/gozssky) **tw@qiancai**

@@ -255,8 +287,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    For more information, see [documentation](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task).

-### TiDB data share subscription
-
* The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt**

    TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-CDC supports subscribing to data changes of RawKV and replicating them to a downstream TiKV cluster in real time, thus enabling cross-cluster replication of RawKV.

@@ -271,6 +301,16 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

## Compatibility changes

+### MySQL compatibility
+
+* Support the MySQL-compatible foreign key constraint [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt**
+
+    For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-foreign-key.md).
+
+* Support the MySQL-compatible multi-valued index [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn**
+
+    For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index).
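+
+    As an illustration of the compatible syntax (the table and values below are made up for demonstration):
+
+    ```sql
+    -- A multi-valued index on a JSON array, using the MySQL 8.0-compatible syntax.
+    CREATE TABLE customers (
+        id BIGINT PRIMARY KEY,
+        custinfo JSON,
+        INDEX zips ((CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)))
+    );
+
+    -- Filter conditions using MEMBER OF(), JSON_CONTAINS(), or JSON_OVERLAPS()
+    -- on the array can use the index instead of scanning the whole table.
+    SELECT * FROM customers WHERE 96372 MEMBER OF (custinfo->'$.zipcode');
+    ```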
+ ### System variables | Variable name | Change type | Description | From 16b87001c87261c8cb41187c077d55fe29fe206a Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 10 Feb 2023 17:42:21 +0800 Subject: [PATCH 046/135] add more anchors --- releases/release-6.6.0.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 300721f2e196..7959a1368af1 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -37,20 +37,20 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Stability

RELIABLE to use - Historical SQL binding + Historical SQL binding Support binding historical execution plans and quickly binding execution plans on TiDB Dashboard. Performance

POWERFUL to use - TiFlash supports compression exchange + TiFlash supports compression exchange TiFlash supports data compression to improve the efficiency of parallel data exchange. - TiFlash supports stale read + TiFlash supports stale read TiFlash supports the Stale Read feature, which can improve query performance in scenarios where real-time requirements are not restricted. - DM support physical import (experimental) + DM support physical import (experimental) TiDB Data Migration (DM) integrates TiDB Lightning's Physical Import mode to improve the performance of full data migration, with performance being up to 10 times faster. @@ -265,7 +265,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Ecosystem -* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang** +* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang** In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. 
From 41d1842f2053508625d05fd4ecd1bc3dfc50eedd Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 10 Feb 2023 17:44:45 +0800 Subject: [PATCH 047/135] Apply suggestions from code review Co-authored-by: Aolin --- releases/release-6.6.0.md | 80 +++++++++++++++++++-------------------- 1 file changed, 40 insertions(+), 40 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 7959a1368af1..4817cba09924 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -315,60 +315,60 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | Variable name | Change type | Description | |--------|------------------------------|------| -| `tidb_enable_amend_pessimistic_txn` | Deleted | Starting from v6.5.0, this variable is deprecated. Starting from v6.6.0, this feature and the `AMEND TRANSACTION` feature are deleted. TiDB will use the [meta lock](/metadata-lock.md) mechanism to resolve the `Information schema is changed` error. | -| `tidb_enable_concurrent_ddl` | Deleted | This variable controls whether to allow TiDB to use concurrent DDL statements. When this variable is disabled, TiDB uses the old DDL execution framework, which provides limited support for concurrent DDL execution. Starting from v6.6.0, this variable is deleted and TiDB no longer supports the old DDL execution framework. | -| `tidb_ttl_job_run_interval` | Deleted | This variable is used to control the scheduling interval of TTL jobs in the background. Starting from v6.6.0, this variable is deleted, because TiDB provides the `TTL_JOB_INTERVAL` attribute for every table to control the TTL runtime, which is more flexible than `tidb_ttl_job_run_interval`. | -| [`foreign_key_checks`](/system-variables.md#foreign_key_checks) | Modified | This variable controls whether to enable the foreign key constraint check. The default value changes from `OFF` to `ON`, which means enabling the foreign key check by default. 
| +| `tidb_enable_amend_pessimistic_txn` | Deleted | Starting from v6.5.0, this variable is deprecated. Starting from v6.6.0, this variable and the `AMEND TRANSACTION` feature are deleted. TiDB will use [meta lock](/metadata-lock.md) to avoid the `Information schema is changed` error. | +| `tidb_enable_concurrent_ddl` | Deleted | This variable controls whether to allow TiDB to use concurrent DDL statements. When this variable is disabled, TiDB uses the old DDL execution framework, which provides limited support for concurrent DDL execution. Starting from v6.6.0, this variable is deleted and TiDB no longer supports the old DDL execution framework. | +| `tidb_ttl_job_run_interval` | Deleted | This variable is used to control the scheduling interval of TTL jobs in the background. Starting from v6.6.0, this variable is deleted, because TiDB provides the `TTL_JOB_INTERVAL` attribute for every table to control the TTL runtime, which is more flexible than `tidb_ttl_job_run_interval`. | +| [`foreign_key_checks`](/system-variables.md#foreign_key_checks) | Modified | This variable controls whether to enable the foreign key constraint check. The default value changes from `OFF` to `ON`, which means enabling the foreign key check by default. | | [`tidb_enable_foreign_key`](/system-variables.md#tidb_enable_foreign_key-new-in-v630) | Modified | This variable controls whether to enable the foreign key feature. The default value changes from `OFF` to `ON`, which means enabling foreign key by default. | -| `tidb_enable_general_plan_cache` | Modified | This variable controls whether to enable General Plan Cache. Starting from v6.6.0, this variable is renamed to [`tidb_enable_non_prepared_plan_cache`](/system-variables.md#tidb_enable_non_prepared_plan_cache). | -| `tidb_general_plan_cache_size` | Modified | This variable controls the maximum number of execution plans that can be cached by General Plan Cache. 
Starting from v6.6.0, this variable is renamed to [`tidb_non_prepared_plan_cache_size`](/system-variables.md#tidb_non_prepared_plan_cache_size). | -| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `prefer-leader` is added for this variable to improve the overall read availability of TiDB clusters. When this option is set, TiDB prefers to select the leader replica to perform read operations. When the performance of the leader replica significantly decreases, TiDB automatically transfers the read operations to follower replicas. | -| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `learner` is added for this variable to specify the learner replicas with which TiDB reads data from read-only nodes. | -| [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | Modified | This variable is used to control the batch size of the Coprocessor Tasks of the `IndexLookUp` operator. `0` means to disable batch. Starting from v6.6.0, the default value is changed from `0` to `4`, which means 4 Coprocessor Tasks will be batched into one task for each batch of requests. | -| [`mpp_exchange_compression_mode`](/system-variables.md#mpp_exchange_compression_mode-new-in-v660) | New | This variable is used to specify the data compression mode of the MPP Exchange operator. This variable takes effect when TiDB selects the MPP execution plan with the version number `1`. The default value `UNSPECIFIED` means that TiDB automatically selects the `FAST` compression mode. | -| [`mpp_version`](/system-variables.md#mpp_version-new-in-v660) | New | This variable is used to specify different versions of the MPP execution plan. After a version is specified, TiDB selects the specified version of the MPP execution plan. The default value `UNSPECIFIED` means that TiDB automatically selects the latest version `1`. 
| -| [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | New | This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. | -| [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | New | Controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. | -| [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | New | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. | +| `tidb_enable_general_plan_cache` | Modified | This variable controls whether to enable General Plan Cache. Starting from v6.6.0, this variable is renamed to [`tidb_enable_non_prepared_plan_cache`](/system-variables.md#tidb_enable_non_prepared_plan_cache). | +| `tidb_general_plan_cache_size` | Modified | This variable controls the maximum number of execution plans that can be cached by General Plan Cache. Starting from v6.6.0, this variable is renamed to [`tidb_non_prepared_plan_cache_size`](/system-variables.md#tidb_non_prepared_plan_cache_size). | +| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `learner` is added for this variable to specify the learner replicas with which TiDB reads data from read-only nodes. 
| +| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `prefer-leader` is added for this variable to improve the overall read availability of TiDB clusters. When this option is set, TiDB prefers to read from the leader replica. When the performance of the leader replica significantly decreases, TiDB automatically reads from follower replicas. | +| [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | Modified | This variable is used to control the batch size of the Coprocessor Tasks of the `IndexLookUp` operator. `0` means to disable batch. Starting from v6.6.0, the default value is changed from `0` to `4`, which means 4 Coprocessor tasks will be batched into one task for each batch of requests. | +| [`mpp_exchange_compression_mode`](/system-variables.md#mpp_exchange_compression_mode-new-in-v660) | Newly added | This variable is used to specify the data compression mode of the MPP Exchange operator. This variable takes effect when TiDB selects the MPP execution plan with the version number `1`. The default value `UNSPECIFIED` means that TiDB automatically selects the `FAST` compression mode. | +| [`mpp_version`](/system-variables.md#mpp_version-new-in-v660) | Newly added | This variable is used to specify different versions of the MPP execution plan. After a version is specified, TiDB selects the specified version of the MPP execution plan. The default value `UNSPECIFIED` means that TiDB automatically selects the latest version `1`. | +| [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | Newly added | This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. 
|
| [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | Newly added | Controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. |
| [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | Newly added | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. |
| [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) | Newly added | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. |
| [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | Newly added | Controls whether to use the enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default.
|

### Configuration file parameters

| Configuration file | Configuration parameter | Change type | Description |
| -------- | -------- | -------- | -------- |
| TiKV | `enable-statistics` | Deleted | This configuration item specifies whether to enable RocksDB statistics. Starting from v6.6.0, this item is deleted. RocksDB statistics are enabled for all clusters by default to help diagnostics. For details, see [#13942](https://github.com/tikv/tikv/pull/13942). |
| TiKV | `storage.block-cache.shared` | Deleted | Starting from v6.6.0, this configuration item is deleted, and the block cache is enabled by default and cannot be disabled. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). |
| TiKV | `storage.block-cache.block-cache-size` | Modified | Starting from v6.6.0, this configuration item is only used for calculating the default value of `storage.block-cache.capacity`. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). |
| TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from `0` to `0.8`, which means the limit is 80% of the total memory. |
| TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options, GCS and Azure, are added for `scheme`.
| -| TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | New | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. | -| TiDB | [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660) | New | Controls whether to enable statements summary persistence. The default value is `false`, which means this feature is not enabled by default. | -| TiDB | [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the file to which persistent data is written. | -| TiDB | [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of days to keep persistent data files. | -| TiDB | [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum size of a persistent data file (in MiB). | -| TiDB | [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660) | New | When statements summary persistence is enabled, this configuration specifies the maximum number of data files that can be persisted. `0` means no limit on the number of files. | -| TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | New | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups. The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups. 
| -| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | New | The memory limit ratio for a PD instance. The value `0` means no memory limit. | -| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | New | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. | -| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | New | Controls whether to enable the GOGC tuner, which is disabled by default. | -| PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | New | The maximum memory threshold ratio for tuning GOGC. The default value is `0.6`. | -| TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameter) | New | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. | -| DM | Modified | [`import-mode`](/dm/task-configuration-file-full.md) | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | -| DM | Deleted | `on-duplicate` | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. | -| DM | Newly added | [`on-duplicate-logical`](/dm/task-configuration-file-full.md) | This configuration item controls how DM resolves conflicting data in the logical import mode. The default value is `"replace"`, which means using the new data to replace the existing data. 
| -| DM | Newly added | [`on-duplicate-physical`](/dm/task-configuration-file-full.md) | This configuration item controls how DM resolves conflicting data in the physical import mode. The default value is `"none"`, which means not resolving conflicting data. "none" has the best performance, but might lead to inconsistent data in the downstream database. | -| DM | Newly added | [`sorting-dir-physical`](/dm/task-configuration-file-full.md) | The directory used for local KV sorting in the physical import mode. The default value is the same as the `dir` configuration. | -| DM | Newly added | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | This configuration item sets the disk quota. It corresponds to the [`disk-quota` configuration of TiDB Lightning](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620). | -| DM | Newly added | [`checksum-physical`](/dm/task-configuration-file-full.md) | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE ` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. | -| sync-diff-inspector | [`skip-non-existing-table`](/sync-diff-inspector/sync-diff-inspector-overview.md#configuration-file-description) | New | Controls whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream. | -| TiSpark | [`spark.tispark.replica_read`](/tispark-overview.md#tispark-configurations) | New | Controls the type of replicas to be read. The value options are `leader`, `follower`, and `learner`. | -| TiSpark | [`spark.tispark.replica_read.label`](/tispark-overview.md#tispark-configurations) | New | Sets labels for the target TiKV node. 
| +| TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | Newly added | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. | +| TiDB | [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660) | Newly added | Controls whether to enable statements summary persistence. The default value is `false`, which means this feature is not enabled by default. | +| TiDB | [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660) | Newly added | When statements summary persistence is enabled, this configuration specifies the maximum number of data files that can be persisted. `0` means no limit on the number of files. | +| TiDB | [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660) | Newly added | When statements summary persistence is enabled, this configuration specifies the maximum number of days to keep persistent data files. | +| TiDB | [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660) | Newly added | When statements summary persistence is enabled, this configuration specifies the maximum size of a persistent data file (in MiB). | +| TiDB | [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660) | Newly added | When statements summary persistence is enabled, this configuration specifies the file to which persistent data is written. | +| TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | Newly added | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups. The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups. 
|
| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | Newly added | Controls whether to enable the GOGC tuner, which is disabled by default. |
| PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | Newly added | The maximum memory threshold ratio for tuning GOGC. The default value is `0.6`. |
| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | Newly added | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. |
| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | Newly added | The memory limit ratio for a PD instance. The value `0` means no memory limit. |
| TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameter) | Newly added | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. |
| DM | `on-duplicate` | Deleted | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. |
| DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. |
| DM | [`checksum-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE <table>` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. |
| DM | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item sets the disk quota. It corresponds to the [`disk-quota` configuration](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620) of TiDB Lightning. |
| DM | [`on-duplicate-logical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls how DM resolves conflicting data in the logical import mode. The default value is `"replace"`, which means using the new data to replace the existing data. |
| DM | [`on-duplicate-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls how DM resolves conflicting data in the physical import mode. The default value is `"none"`, which means not resolving conflicting data. `"none"` has the best performance, but might lead to inconsistent data in the downstream database. |
| DM | [`sorting-dir-physical`](/dm/task-configuration-file-full.md) | Newly added | The directory used for local KV sorting in the physical import mode. The default value is the same as the `dir` configuration. |
| sync-diff-inspector | [`skip-non-existing-table`](/sync-diff-inspector/sync-diff-inspector-overview.md#configuration-file-description) | Newly added | Controls whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream. |
| TiSpark | [`spark.tispark.replica_read`](/tispark-overview.md#tispark-configurations) | Newly added | Controls the type of replicas to be read. The value options are `leader`, `follower`, and `learner`.
| +| TiSpark | [`spark.tispark.replica_read.label`](/tispark-overview.md#tispark-configurations) | Newly added | Sets labels for the target TiKV node. |

### Others

-- Support dynamically modifying `store-io-pool-size`. This facilitate more flexible TiKV performance tuning.
-- Remove the limit on `LIMIT` statements, thus improving the execution performance.
+- Support dynamically modifying [`store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530). This facilitates more flexible TiKV performance tuning.
+- Remove the limit on `LIMIT` clauses, thus improving the execution performance.

## Deprecated features

From dfdc0bbe9931fbb11c2cc3fe677a933859e8c8dd Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Mon, 13 Feb 2023 10:43:10 +0800
Subject: [PATCH 048/135] Apply suggestions from code review

Co-authored-by: Aolin
Co-authored-by: Grace Cai
---
 releases/release-6.6.0.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 4817cba09924..1326f569200e 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -72,7 +72,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    Introducing multi-valued indexes further enhances TiDB's support for the JSON data type and also improves TiDB's compatibility with MySQL 8.0.
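    A minimal sketch of the usage described above (the `player` table, its `info` JSON column, and the index name are hypothetical; the syntax follows the MySQL 8.0-compatible multi-valued index form):

    ```sql
    -- Create a multi-valued index over the values of a JSON array.
    CREATE TABLE player (
        id INT PRIMARY KEY,
        info JSON,
        INDEX idx_badges ((CAST(info->'$.badges' AS UNSIGNED ARRAY)))
    );

    -- Retrieval conditions using MEMBER OF(), JSON_CONTAINS(), or
    -- JSON_OVERLAPS() on the indexed array can use idx_badges for filtering.
    SELECT id FROM player WHERE 30 MEMBER OF (info->'$.badges');
    SELECT id FROM player WHERE JSON_CONTAINS(info->'$.badges', '[30, 50]');
    SELECT id FROM player WHERE JSON_OVERLAPS(info->'$.badges', '[30, 50]');
    ```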
- For details, see [documentation]((/sql-statements/sql-statement-create-index.md#multi-valued-index)
+ For details, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index)

* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang**

@@ -354,6 +354,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:
| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | Newly added | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. |
| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | Newly added | The memory limit ratio for a PD instance. The value `0` means no memory limit. |
| TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameter) | Newly added | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. |
+| TiDB Lightning | [`compress-kv-pairs`](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task) | Newly added | Controls whether to enable compression when sending KV pairs to TiKV in the physical import mode. The default value is empty, meaning that compression is not enabled. |
| DM | `on-duplicate` | Deleted | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. |
| DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`.
The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | | DM | [`checksum-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE
` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. | From b31ab1651a928b45d2fce883e311521e53c0eee0 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 13 Feb 2023 14:08:08 +0800 Subject: [PATCH 049/135] Apply suggestions from code review Co-authored-by: yiwen92 <34636520+yiwen92@users.noreply.github.com> --- releases/release-6.6.0.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 1326f569200e..de1610ed634a 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -22,7 +22,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - + @@ -31,17 +31,17 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - - + + - + - + From fbfda3ce00b9e4992706f0a452d94832fafdc9b6 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 13 Feb 2023 15:18:33 +0800 Subject: [PATCH 050/135] update markdown links to html links --- releases/release-6.6.0.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index de1610ed634a..e744f472a3ba 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -60,13 +60,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### SQL -* Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** +* Support the MySQL-compatible foreign key constraints#18209@crazycs520 **tw@Oreoxmt** TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. 
This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling.

    For more information, see [documentation](/foreign-key.md).

-* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn**
+* Support the MySQL-compatible multi-valued index (experimental) <a href="https://github.com/pingcap/tidb/issues/39592">#39592</a> <a href="https://github.com/xiongjiwei">@xiongjiwei</a> <a href="https://github.com/qw4990">@qw4990</a> **tw@TomShawn**

    TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-valued index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, and `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed.

@@ -82,7 +82,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

### Stability

-* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn**
+* Binding historical execution plans is GA <a href="https://github.com/pingcap/tidb/issues/39199">#39199</a> <a href="https://github.com/fzzf678">@fzzf678</a> **tw@TomShawn**

    In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node.
Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability.

@@ -127,13 +127,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

### HTAP

-* TiFlash supports data exchange with compression [#6620](https://github.com/pingcap/tiflash/issues/6620) @[solotzg](https://github.com/solotzg) **tw@TomShawn**
+* TiFlash supports data exchange with compression <a href="https://github.com/pingcap/tiflash/issues/6620">#6620</a> <a href="https://github.com/solotzg">@solotzg</a> **tw@TomShawn**

    To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange.

    For details, see [documentation]().

-* TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai**
+* TiFlash supports the Stale Read feature <a href="https://github.com/pingcap/tiflash/issues/4483">#4483</a> <a href="https://github.com/hehechen">@hehechen</a> **tw@qiancai**

    The Stale Read feature has been generally available (GA) since v5.1.1, which allows you to read historical data at a specific timestamp or within a specified time range. Stale Read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash does not support Stale Read. Even if a table has TiFlash replicas, Stale Read can only read its TiKV replicas.
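    The Stale Read behavior described here can be sketched as follows (the table `t` and its TiFlash replica are assumed for illustration; `AS OF TIMESTAMP` is TiDB's existing Stale Read syntax):

    ```sql
    -- Assume t already has a TiFlash replica:
    -- ALTER TABLE t SET TIFLASH REPLICA 1;

    -- Prefer the TiFlash engine for analytical reads.
    SET SESSION tidb_isolation_read_engines = 'tiflash,tidb';

    -- Read data as it was 10 seconds ago; starting from v6.6.0, such a
    -- Stale Read query can be served by the TiFlash replica as well.
    SELECT COUNT(*) FROM t AS OF TIMESTAMP NOW() - INTERVAL 10 SECOND;
    ```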
@@ -170,7 +170,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### DB operations -* Support resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** +* Support resource control based on resource groups (experimental)#38825@nolouch@BornChanger@glorv@tiancaiamao@Connor1996@JmPotato@hnes@CabinfeverB@HuSharp **tw@hfxsd** Now you can create resource groups for a TiDB cluster, bind different database users to corresponding resource groups, and set quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions in the same resource group will be limited to the quota. In this way, even if a resource group is over-consumed, the sessions in other resource groups are not affected. TiDB provides a built-in view of the actual usage of resources on Grafana dashboards, assisting you to allocate resources more rationally. @@ -187,7 +187,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/tidb-resource-control.md). -* Support configuring read-only storage nodes for resource-consuming tasks @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** +* Support configuring read-only storage nodes for resource-consuming tasks @v01dstar **tw@Oreoxmt** In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. 
TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--read-only`, to ensure the stability of cluster performance. @@ -265,7 +265,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Ecosystem -* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang** +* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental)@lance6716 **tw@ran-huang** In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. @@ -273,7 +273,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). -* TiDB Lightning adds a new configuration parameter "header-schema-match" to address the issue of mismatched column names between the source file and the target table @[dsdashun](https://github.com/dsdashun) +* TiDB Lightning adds a new configuration parameter "header-schema-match" to address the issue of mismatched column names between the source file and the target table @dsdashun In v6.6.0, TiDB Lightning adds a new profile parameter `header-schema-match`. 
The default value is `true`, which means the first row of the source CSV file is treated as the column name, and consistent with that in the target table. If the field name in the CSV table header does not match the column name of the target table, you can set this configuration to `false`. TiDB Lightning will ignore the error and continue to import the data in the order of the columns in the target table. From 6ed9617e74848f33ecfa51b93e6ecf16f66a83fa Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 13 Feb 2023 17:06:10 +0800 Subject: [PATCH 051/135] add a few compatibility info --- releases/release-6.6.0.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index e744f472a3ba..096ef5f42b92 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -263,6 +263,12 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/statement-summary-tables.md#persist-statements-summary). +### Telemetry + +Starting from v6.6.0, the [telemetry](/telemetry.md) is disabled by default for TiDB and TiDB Dashboard. + +Starting from TiUP v1.11.3, the telemetry is disabled by default for newly deployed TiUP. If you upgrade your TiUP version to v1.11.3 or later, the telemetry keeps the setting of the old TiUP versions. + ### Ecosystem * TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental)@lance6716 **tw@ran-huang** @@ -299,8 +305,19 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/ticdc/ticdc-sink-to-kafka.md#scale-out-the-load-of-a-single-large-table-to-multiple-ticdc-nodes). +* GORM adds TiDB integration tests. Now TiDB is the default database supported by GORM. 
+ + - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) adapts to the `AUTO_RANDOM` attribute of TiDB [#104](https://github.com/go-gorm/mysql/pull/104) @[Icemap](https://github.com/Icemap) + - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field cannot be modified during `AutoMigrate` [#105](https://github.com/go-gorm/mysql/pull/104) @[Icemap](https://github.com/Icemap) + - [GORM](https://github.com/go-gorm/gorm) adds TiDB as the default database [#6014](https://github.com/go-gorm/gorm/pull/6014) @[Icemap](https://github.com/Icemap) + - [GORM documentation](https://github.com/go-gorm/gorm.io) mentions TiDB as the default database [#638](https://github.com/go-gorm/gorm.io/pull/638) @[Icemap](https://github.com/Icemap) + ## Compatibility changes +> **Note:** +> +> If you are upgrading from v6.4 or earlier versions to v6.6, you might also need to check the compatibility changes introduced in the intermediate versions. + ### MySQL compatibility * Support the MySQL-compatible foreign key constraint [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** @@ -370,6 +387,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Support dynamically modifying [`store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530). This facilitate more flexible TiKV performance tuning. - Remove the limit on `LIMIT` clauses, thus improving the execution performance. +- Starting from v6.6.0, BR does not support restoring data to clusters earlier than v6.5.0. 
## Deprecated features

From 2f179eb0321f02bbad23647f6081c97b524e43d4 Mon Sep 17 00:00:00 2001
From: Grace Cai
Date: Mon, 13 Feb 2023 18:13:01 +0800
Subject: [PATCH 052/135] wording updates

---
 releases/release-6.6.0.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 096ef5f42b92..306371d46f5e 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -164,7 +164,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

* TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens [#40750](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) **tw@qiancai**

- Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and secret access key) so you cannot use a temporary session token to acess S3 data. Staring from v6.6.0, TiDB Lightning supports accessing S3 data via AWS IAM **role's access keys + session tokens** as well to improve the data security.
+ Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and secret access key) so you cannot use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning supports accessing S3 data via AWS IAM **role's access keys + session tokens** as well to improve the data security.

    For more information, see [documentation](/tidb-lightning/tidb-lightning-data-source.md#import-data-from-amazon-s3).
From cccb7cd9c1ed3fd1a370782fcee60b6d8db8a798 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Tue, 14 Feb 2023 16:52:52 +0800 Subject: [PATCH 053/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 306371d46f5e..ba323c695416 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -189,7 +189,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support configuring read-only storage nodes for resource-consuming tasks @v01dstar **tw@Oreoxmt** - In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--read-only`, to ensure the stability of cluster performance. + In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. 
You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--replica-read-label`, to ensure the stability of cluster performance. For more information, see [documentation](/best-practices/readonly-nodes.md). From c0da06b82f006e6ab19af313629115836ba700df Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Tue, 14 Feb 2023 17:58:01 +0800 Subject: [PATCH 054/135] add notes --- releases/release-6.6.0.md | 188 +++++++++++++++++++++++++++++--------- 1 file changed, 144 insertions(+), 44 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index ba323c695416..84a07b5dd793 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -389,43 +389,56 @@ Starting from TiUP v1.11.3, the telemetry is disabled by default for newly deplo - Remove the limit on `LIMIT` clauses, thus improving the execution performance. - Starting from v6.6.0, BR does not support restoring data to clusters earlier than v6.5.0. 
-## Deprecated features - ## Improvements + TiDB - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + + - 改进了 TTL 后台清理任务的调度机制。允许将单个表的清理任务拆分成若干子任务并调度到多个 TiDB 节点同时运行。 [#40361](https://github.com/pingcap/tidb/issues/40361) @[YangKeao](https://github.com/YangKeao) + - 优化了在设置了非默认的 delimiter 后运行 multi-statement 返回结果的列名的显示 [#39662](https://github.com/pingcap/tidb/issues/39662) @[mjonss](https://github.com/mjonss) + - 优化了生成警告信息时的执行效率 [#39702](https://github.com/pingcap/tidb/issues/39702) @[tiancaiamao](https://github.com/tiancaiamao) + - 为 ADD INDEX 支持分布式数据回填 (实验特性) [#37119](https://github.com/pingcap/tidb/issues/37119) @[zimulala](https://github.com/zimulala) + - 允许使用 CURDATE() 作为列的默认值 [#38356](https://github.com/pingcap/tidb/issues/38356) @[CbcWestwolf](https://github.com/CbcWestwolf) + + + - 增加了 partial order prop push down 对 LIST 类型的分区表的支持 [#40273](https://github.com/pingcap/tidb/issues/40273) @[winoros](https://github.com/winoros) + - 增加了 hint 和 binding 冲突时的 warning 信息 [#40910](https://github.com/pingcap/tidb/issues/40910) @[Reminiscent](https://github.com/Reminiscent) + - 优化了 Plan Cache 策略避免在一些场景使用 Plan Cache 时产生不优的计划 [#40312](https://github.com/pingcap/tidb/pull/40312) [#40218](https://github.com/pingcap/tidb/pull/40218) [#40280](https://github.com/pingcap/tidb/pull/40280) [#41136](https://github.com/pingcap/tidb/pull/41136) [#40686](https://github.com/pingcap/tidb/pull/40686) @[qw4990](https://github.com/qw4990) + TiKV - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + + - 优化一些参数的默认值,当partitioned-raft-kv开启时block-cache调整为0.3可用内存(原来是0.45), region-split-size调整为10GB。当沿用raft-kv时且enable-region-bucket为true时,region-split-size默认调整为1GB [#12842](https://github.com/tikv/tikv/issues/12842) @[tonyxuqqi](https://github.com/tonyxuqqi) + - 支持在Raftstore异步写入中的优先级调度[#13730] (https://github.com/tikv/tikv/issues/13730) @[Connor1996](https://github.com/Connor1996) + - 支持TiKV在小于1 core的CPU下启动 [#13586] [#13752] 
[#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db)
    - Change the default values of `rocksdb.defaultcf.block-size` and `rocksdb.writecf.block-size` to 16 KB [#14052](https://github.com/tikv/tikv/issues/14052) @[tonyxuqqi](https://github.com/tonyxuqqi)
    - raftstore: optimize the new detection mechanism of slow score and add the new `evict-slow-trend-scheduler` [#14131](https://github.com/tikv/tikv/issues/14131) @[innerr](https://github.com/innerr)
    - Force the block cache of RocksDB to be shared and no longer support setting the block cache separately by CF [#12936](https://github.com/tikv/tikv/issues/12936) @[busyjay](https://github.com/busyjay)

+ PD

    - Support limiting the global memory to alleviate the OOM problem (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes)
    - Add the GC Tuner to alleviate the GC pressure (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes)
    - Add the `balance-witness-scheduler` scheduler to schedule witnesses [#5763](https://github.com/tikv/pd/pull/5763) @[ethercflow](https://github.com/ethercflow)
    - Add the `evict-slow-trend-scheduler` scheduler to detect and schedule abnormal nodes [#5808](https://github.com/tikv/pd/pull/5808) @[innerr](https://github.com/innerr)
    - Add the keyspace manager to support managing keyspaces [#5293](https://github.com/tikv/pd/issues/5293) @[AmoebaProtozoa](https://github.com/AmoebaProtozoa)

+ TiFlash

    - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides the foundation for future optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin](https://github.com/JinheLin) **tw@qiancai**
    - Reduce the memory usage of TiFlash by up to 30% when there is no query [#6589](https://github.com/pingcap/tiflash/pull/6589) @[hongyunyan](https://github.com/hongyunyan)

+ Tools

    + Backup & Restore (BR)

        - Optimize the concurrency of downloading log backup files from TiKV, which improves the performance of PITR restore in regular scenarios [#14206](https://github.com/tikv/tikv/issues/14206) @[YuJuncen](https://github.com/YuJuncen)

    + TiCDC

        - Support batching `UPDATE` DML statements to improve the TiCDC replication performance of batch `UPDATE` DMLs [#8084](https://github.com/pingcap/tiflow/issues/8084)
        (dup: release-6.3.0.md > Improvements > Tools > TiCDC)- Implement the MQ sink and MySQL sink in the asynchronous mode to improve the sink throughput [#5928](https://github.com/pingcap/tiflow/issues/5928) @[hicqu](https://github.com/hicqu) @[hi-rustin](https://github.com/hi-rustin)

    + TiDB Data Migration (DM)

        - Optimize the DM alert rules and content: alerts are now reported depending on whether an error can be automatically recovered:

            - For an error that is automatically recoverable, DM reports the alert only if the error occurs more than 3 times within 2 minutes.
            - For an error that is not automatically recoverable, DM maintains the original behavior and reports the alert immediately.

        - Optimize the relay performance [#4287](https://github.com/pingcap/tiflow/issues/4287) @[GMHDBJD](https://github.com/GMHDBJD)

    + TiDB Lightning

        - Support keyspaces in the physical import mode [#40531](https://github.com/pingcap/tidb/issues/40531) @[iosmanthus](https://github.com/iosmanthus)
        - Support setting the maximum number of conflicts via `lightning.max-error` [#40743](https://github.com/pingcap/tidb/issues/40743) @[dsdashun](https://github.com/dsdashun)
        - Support importing data files with BOM headers [#40744](https://github.com/pingcap/tidb/issues/40744) @[dsdashun](https://github.com/dsdashun)
        - Optimize the processing logic when encountering TiKV flow-limiting errors: TiDB Lightning now tries other Regions that are not busy [#40205](https://github.com/pingcap/tidb/issues/40205) @[lance6716](https://github.com/lance6716)
        - Disable the foreign key checks of tables during import [#40027](https://github.com/pingcap/tidb/issues/40027) @[gozssky](https://github.com/gozssky)

    + Dumpling

        - Support exporting foreign-key-related settings [#39913](https://github.com/pingcap/tidb/issues/39913) @[lichunzhu](https://github.com/lichunzhu)

    + Sync-diff-inspector

Starting from TiUP v1.11.3, the telemetry is disabled by
default for newly deplo + TiDB - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + + - 修复了收集统计信息任务因为错误的 datetime 值而失败的问题 [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) + - 修复了 stats meta 没有创建的问题 [#38189](https://github.com/pingcap/tidb/issues/38189) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) + - 优化了删除分区表所依赖的列时的错误提示 [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust) + - 修复了 DDL 回填数据时频繁发生的事务写冲突问题 [#24427](https://github.com/pingcap/tidb/issues/24427) @[mjonss](https://github.com/mjonss) + - 增加了 FLASHBACK CLUSTER 在检查 `min-resolved-ts` 失败后的重试机制 [#39836](https://github.com/pingcap/tidb/issues/39836) @[Defined2014](https://github.com/Defined2014) + - 修复了部分情况下空表不能使用 ingest 模式添加索引的问题 [#39641](https://github.com/pingcap/tidb/issues/39641) @[tangenta](https://github.com/tangenta) + - 修复了同一个事务中不同 SQL 的慢日志 `wait_ts` 相同的问题 [#39713](https://github.com/pingcap/tidb/issues/39713) @[TonsnakeLin](https://github.com/TonsnakeLin) + - 修复了添加列的过程中删除行记录报 "Assertion Failed" 错误的问题 [#39570](https://github.com/pingcap/tidb/issues/39570) @[wjhuang2016](https://github.com/wjhuang2016) + - 修复了修改列类型时报 "not a DDL owner" 错误的问题 [#39643](https://github.com/pingcap/tidb/issues/39643) @[zimulala](https://github.com/zimulala) + - 修复了 AUTO_INCREMENT 列自动分配值耗尽后插入一行不报错的问题 [#38950](https://github.com/pingcap/tidb/issues/38950) @[Dousir9](https://github.com/Dousir9) + - 修复了创建表达式索引时报 "Unknown column" 错误的问题 [#39784](https://github.com/pingcap/tidb/issues/39784) @[Defined2014](https://github.com/Defined2014) + - 修复了生成列表达式包含表名时,重命名表后无法插入数据的问题 [#39826](https://github.com/pingcap/tidb/issues/39826) @[Defined2014](https://github.com/Defined2014) + - 修复了列在 write-only 状态下 INSERT IGNORE 语句无法正确填充默认值的问题 [#40192](https://github.com/pingcap/tidb/issues/40192) @[YangKeao](https://github.com/YangKeao) + - 修复了资源管控模块关闭时未能释放资源的问题 
[#40546](https://github.com/pingcap/tidb/issues/40546) @[zimulala](https://github.com/zimulala) + - 不支持在分区表上执行 MODIFY COLUMN [#39915](https://github.com/pingcap/tidb/issues/39915) @[wjhuang2016](https://github.com/wjhuang2016) + - 禁止重命名分区表所依赖的列 [#40150](https://github.com/pingcap/tidb/issues/40150) @[mjonss](https://github.com/mjonss) + - 修复了 TTL 任务不能及时触发统计信息更新的问题 [#40109](https://github.com/pingcap/tidb/issues/40109) @[YangKeao](https://github.com/YangKeao) + - 修复了 TiDB 构造 key 范围时对 null 值处理不当导致读到预期外数据的行为 [#40158](https://github.com/pingcap/tidb/issues/40158) @[tiancaiamao](https://github.com/tiancaiamao) + - 修复了 MODIFY COLUMN 同时修改列默认值导致写入非法值的问题 [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016) + - 修复了表 region 比较多时因 region 缓存失效导致加索引效率低下的问题 [#38436](https://github.com/pingcap/tidb/issues/38436) @[tangenta](https://github.com/tangenta) + - 修复了分配自增 ID 时的数据竞争问题 [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9) + - 修复了 JSON 的 not 表达式实现与 MySQL 实现不兼容的问题 [#40683](https://github.com/pingcap/tidb/issues/40683) @[YangKeao](https://github.com/YangKeao) + - 修复了并发视图时可能会造成 DDL 操作卡住的问题 [#40352](https://github.com/pingcap/tidb/issues/40352) @[zeminzhou](https://github.com/zeminzhou) + (dup: release-6.1.4.md > 兼容性变更> TiDB)- 由于可能存在正确性问题,分区表目前不再支持修改列类型 [#40620](https://github.com/pingcap/tidb/issues/40620) @[mjonss](https://github.com/mjonss) @[mjonss](https://github.com/mjonss) + - 修复了使用 `caching_sha2_password` 方式进行认证时如果不指定的密码会报错 "Malformed packet" 的问题 [#40831](https://github.com/pingcap/tidb/issues/40831) @[dveeden](https://github.com/dveeden) + - 修复了在执行 TTL 任务时,如果表的主键包含 `ENUM` 类型的列任务会失败的问题 [#40456](https://github.com/pingcap/tidb/issues/40456) @[lcwangchao](https://github.com/lcwangchao) + - 修复了某些被 MDL 阻塞的 DDL 操作无法在 `mysql.tidb_mdl_view` 中查询到的问题 [#40838](https://github.com/pingcap/tidb/issues/40838) @[YangKeao](https://github.com/YangKeao) + - 修复了 DDL 在 ingest 过程中可能会发生数据竞争的问题 
[#40970](https://github.com/pingcap/tidb/issues/40970) @[tangenta](https://github.com/tangenta) + - 修复了在改变时区后 TTL 任务可能会错误删除某些数据的问题 [41043](https://github.com/pingcap/tidb/issues/41043) @[lcwangchao](https://github.com/lcwangchao) + - 修复了 `JSON_OBJECT` 在某些情况下会报错的问题 [#39997](https://github.com/pingcap/tidb/pull/39997) @[YangKeao](https://github.com/YangKeao) + - 修复了 TiDB 在初始化时有可能死锁的问题 [#40408](https://github.com/pingcap/tidb/issues/40408) @[Defined2014](https://github.com/Defined2014) + - 修复了内存重用导致的在某些情况下系统变量的值会被错误修改的问题 [#40979](https://github.com/pingcap/tidb/issues/40979) @[lcwangchao](https://github.com/lcwangchao) + - 修复了 ingest 模式下创建唯一索引可能会导致数据和索引不一致的问题 [#40464](https://github.com/pingcap/tidb/issues/40464) @[tangenta](https://github.com/tangenta) + - 修复了并发 truncate 同一张表时部分 truncate 操作无法被 MDL 阻塞的问题 [#40484](https://github.com/pingcap/tidb/issues/40484) @[wjhuang2016](https://github.com/wjhuang2016) + - 修复了 `SHOW PRIVILEGES` 命令显示的权限列表不完整的问题 [#40591](https://github.com/pingcap/tidb/issues/40591) @[CbcWestwolf](https://github.com/CbcWestwolf) + - 修复了在 ADD UNIQUE INDEX 时有可能会 PANIC 的问题 [#40592](https://github.com/pingcap/tidb/issues/40592) @[tangenta](https://github.com/tangenta) + - 修复了 ADMIN RECOVER 操作可能会造成索引数据损坏的问题 [#40430](https://github.com/pingcap/tidb/issues/40430) @[xiongjiwei](https://github.com/xiongjiwei) + - 修复了表达式索引中含有 CAST 时对表进行查询可能出错的问题 [#40129](https://github.com/pingcap/tidb/pull/40129) @[xiongjiwei](https://github.com/xiongjiwei) + - 修复了某些情况下唯一索引仍然可能产生重复数据的问题 [#40217](https://github.com/pingcap/tidb/issues/40217) @[tangenta](https://github.com/tangenta) + - 修复了使用 Prepare/Execute 查询某些虚拟表时无法将表 ID 下推导致在大量 Region 的情况下 PD OOM 的问题 [#39605](https://github.com/pingcap/tidb/issues/39605) @[djshow832](https://github.com/djshow832) + - 修复了添加索引时可能导致数据竞争的问题 [#40879](https://github.com/pingcap/tidb/issues/40879) @[tangenta](https://github.com/tangenta) + + + - 修复了非法的 datetime 值导致 analyze 失败的问题 [#39336](https://github.com/pingcap/tidb/issues/39336) 
@[xuyifangreeneyes](https://github.com/xuyifangreeneyes)
+    - Fix the "can't find proper physical plan" issue caused by virtual columns [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid)
+    - Fix the issue that TiDB fails to restart when a partitioned table in dynamic pruning mode has a global binding [#40368](https://github.com/pingcap/tidb/issues/40368) @[Yisaer](https://github.com/Yisaer)
+    - Fix the issue that auto analyze makes graceful shutdown take a long time [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes)
+
+    - Fix the issue that the IndexMerge operator might cause tidb-server to crash when the memory limit is triggered [#41036](https://github.com/pingcap/tidb/pull/41036) @[guo-shaoge](https://github.com/guo-shaoge)
+    - Fix the issue that the `SELECT * FROM t LIMIT 1` query runs slowly on partitioned tables [#40741](https://github.com/pingcap/tidb/pull/40741) @[solotzg](https://github.com/solotzg)
+
+    - Fix the memory leak and performance degradation issue caused by leftover expired Region cache [#40461](https://github.com/pingcap/tidb/issues/40461) @[sticnarf](https://github.com/sticnarf)
+
+ TiKV
+
+    - Fix the error that occurs when a `const Enum` value is cast to other types [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12)
+    - Reduce the network traffic brought by resolved-ts [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus)
+    - Support starting TiKV on a CPU with less than 1 core [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db)
+    - copr: Fix the behavior of the `_` pattern in `LIKE` in old collations [#13785](https://github.com/tikv/tikv/pull/13785) @[YangKeao](https://github.com/YangKeao)
+    (dup: release-6.1.4.md > Bug fixes > TiKV)- Fix the issue that when a transaction executes other DML statements after a pessimistic DML fails, data inconsistency might occur if there is a network failure between TiDB and TiKV [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta)
+
+ PD
+
+    - Fix the issue that the Region Scatter task generates redundant and unexpected replicas
[#5920](https://github.com/tikv/pd/pull/5920) @[HunDunDM](https://github.com/HunDunDM)
+    - Fix the issue that online-unsafe-recovery gets stuck and times out in auto-detect mode [#5754](https://github.com/tikv/pd/pull/5754) @[Connor1996](https://github.com/Connor1996)
+    - Fix the issue that replacing down peers runs slowly under certain conditions [#5789](https://github.com/tikv/pd/pull/5789) @[HunDunDM](https://github.com/HunDunDM)
+    - Fix the issue that PD might OOM when `ReportMinResolvedTS` is called too frequently [#5965](https://github.com/tikv/pd/issues/5965) @[HunDunDM](https://github.com/HunDunDM)
+
+ TiFlash
+
+    - Fix the issue that querying TiFlash-related system tables might get stuck [#6745](https://github.com/pingcap/tiflash/pull/6745) @[lidezhu](https://github.com/lidezhu)
+    - Fix the issue that semi-joins use excessive memory when calculating Cartesian products [#6730](https://github.com/pingcap/tiflash/issues/6730) @[gengliqi](https://github.com/gengliqi)
+    - Fix the issue that the results of decimal division are not rounded [#6393](https://github.com/pingcap/tiflash/issues/6393) @[LittleFall](https://github.com/LittleFall)
+
+ Tools
+
+ Backup & Restore (BR)
+
+    - Fix the issue that when restoring log backup, hot Regions cause the restore to fail [#37207](https://github.com/pingcap/tidb/issues/37207) @[Leavrth](https://github.com/Leavrth)
+    - Fix the issue that restoring data to a cluster on which the log backup is running causes the log backup files to be unrecoverable [#40797](https://github.com/pingcap/tidb/issues/40797) @[Leavrth](https://github.com/Leavrth)
+    - Fix the issue that PITR does not support CA-bundle authentication [#38775](https://github.com/pingcap/tidb/issues/38775) @[YuJuncen](https://github.com/YuJuncen)
+    - Fix the panic issue caused by duplicate temporary tables during restore [#40797](https://github.com/pingcap/tidb/issues/40797) @[joccau](https://github.com/joccau)
+    - Fix the issue that PITR does not support PD cluster configuration changes [#14165](https://github.com/tikv/tikv/issues/14165) @[YuJuncen](https://github.com/YuJuncen)
+    - Fix the issue that the PITR backup progress does not advance when there is a connection failure between PD and tidb-server [#41082](https://github.com/pingcap/tidb/issues/41082) @[YuJuncen](https://github.com/YuJuncen)
+    - Fix the issue that TiKV cannot listen to PITR tasks due to a connection failure between PD and TiKV [#14159](https://github.com/tikv/tikv/issues/14159)
@[YuJuncen](https://github.com/YuJuncen)
+    - Fix the issue that `resolve lock` runs too frequently when the TiDB cluster has no PITR backup task [#40759](https://github.com/pingcap/tidb/issues/40759) @[joccau](https://github.com/joccau)
+    - Fix the issue that backup information left over from a deleted PITR backup task causes data inconsistency in new tasks [#40403](https://github.com/pingcap/tidb/issues/40403) @[joccau](https://github.com/joccau)
+
+ TiCDC
+
+    (dup: release-6.1.4.md > Bug fixes > Tools > TiCDC)- Fix the issue that `transaction_atomicity` and `protocol` cannot be modified via the configuration file [#7935](https://github.com/pingcap/tiflow/issues/7935) @[CharlesCheung96](https://github.com/CharlesCheung96)
+    - Fix the issue that the permissions of the redo log storage path are not checked in advance [#6335](https://github.com/pingcap/tiflow/issues/6335)
+    - Fix the issue that the tolerance period of redo log for S3 storage failures is too short [#8089](https://github.com/pingcap/tiflow/issues/8089)
+    - Fix the issue that a changefeed might get stuck in special scenarios when TiKV or TiCDC nodes are scaled in or out [#8197](https://github.com/pingcap/tiflow/issues/8197)
+    - Fix the issue of excessive traffic between TiKV nodes introduced in v6.5 [#14092](https://github.com/tikv/tikv/issues/14092)
+    - Fix several performance issues in terms of CPU usage, memory control, and throughput when the pull-based sink is enabled [#8142](https://github.com/pingcap/tiflow/issues/8142) [#8157](https://github.com/pingcap/tiflow/issues/8157) [#8001](https://github.com/pingcap/tiflow/issues/8001) [#5928](https://github.com/pingcap/tiflow/issues/5928)
+
+ TiDB Data Migration (DM)
+
+    - Fix the issue that the binlog-schema delete operation fails [#7373](https://github.com/pingcap/tiflow/issues/7373) @[liumengya94](https://github.com/liumengya94)
+    - Fix the issue that the checkpoint does not advance when the last binlog is a skipped DDL [#8175](https://github.com/pingcap/tiflow/issues/8175) @[D3Hunter](https://github.com/D3Hunter)
+    (dup: release-6.1.4.md > Bug fixes > Tools > TiDB Data Migration (DM))- Fix the issue that all `UPDATE` operations are skipped when both `UPDATE` and non-`UPDATE` expression filter rules (`expression-filter`) are specified for one table [#7831](https://github.com/pingcap/tiflow/issues/7831) @[lance6716](https://github.com/lance6716)
+    (dup: release-6.1.4.md > Bug fixes > Tools > TiDB Data Migration (DM))- Fix the issue that the filter rule does not take effect or DM panics when only `update-old-value-expr` or
`update-new-value-expr` is specified for a table [#7774](https://github.com/pingcap/tiflow/issues/7774) @[lance6716](https://github.com/lance6716)
+
+ TiDB Lightning
+
+    - Fix the issue that TiDB Lightning timeout hangs due to TiDB restart in some scenarios [#33714](https://github.com/pingcap/tidb/issues/33714) @[lichunzhu](https://github.com/lichunzhu) **tw@shichun-0415**
+    - Fix the issue that during parallel import, TiDB Lightning might skip conflict resolution when all TiDB Lightning instances except the last one encounter local duplicate records [#40923](https://github.com/pingcap/tidb/issues/40923) @[lichunzhu](https://github.com/lichunzhu)
+    - Fix the issue that precheck cannot accurately detect whether a running TiCDC exists in the target cluster [#41040](https://github.com/pingcap/tidb/issues/41040) @[lance6716](https://github.com/lance6716)
+    - Fix the issue that TiDB Lightning panics in the split-region phase (next key) [#40934](https://github.com/pingcap/tidb/issues/40934) @[lance6716](https://github.com/lance6716)
+    - Fix the issue that the duplicate-record removal logic might lead to inconsistent checksums [#40657](https://github.com/pingcap/tidb/issues/40657) @[gozssky](https://github.com/gozssky)
+    - Fix the issue that an unclosed delimiter in a data file might cause OOM [#40400](https://github.com/pingcap/tidb/issues/40400) @[buchuitoudegou](https://github.com/buchuitoudegou)
+    - Fix the issue that the file offset in an error message exceeds the file size [#40034](https://github.com/pingcap/tidb/issues/40034) @[buchuitoudegou](https://github.com/buchuitoudegou)
+    - Fix the issue that a new version of PDClient might cause parallel import to fail [#40493](https://github.com/pingcap/tidb/issues/40493) @[AmoebaProtozoa](https://github.com/AmoebaProtozoa)
+    (dup: release-6.1.4.md > Bug fixes > Tools > TiDB Lightning)- Fix the issue that precheck sometimes cannot detect dirty data left by a previously failed import [#39477](https://github.com/pingcap/tidb/issues/39477) @[dsdashun](https://github.com/dsdashun)
+
+ TiUP
+
 ## Contributors

 We would like to thank the following contributors from the TiDB community:

-- [贡献者 GitHub ID]()
+- [morgo](https://github.com/morgo)
+- [jiyfhust](https://github.com/jiyfhust)
+- [b41sh](https://github.com/b41sh)
+- [sourcelliu](https://github.com/sourcelliu)
+- [songzhibin97](https://github.com/songzhibin97)
+- [mamil](https://github.com/mamil)
+- [Dousir9](https://github.com/Dousir9)
+-
[hihihuhu](https://github.com/hihihuhu)
+- [mychoxin](https://github.com/mychoxin)
+- [xuning97](https://github.com/xuning97)
+- [andreid-db](https://github.com/andreid-db)

From 31da409e0167c0584363080287eed8418c197cbe Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Tue, 14 Feb 2023 18:07:13 +0800
Subject: [PATCH 055/135] Update release-6.6.0.md

---
 releases/release-6.6.0.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 84a07b5dd793..8beff1db5816 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -299,7 +299,7 @@ Starting from TiUP v1.11.3, the telemetry is disabled by default for newly deplo

 For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc/).

-* TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt**
+* TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes (experimental) [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt**

   Before v6.6.0, when a table in the upstream accepts a large amount of writes, the replication capability of this table cannot be scaled out, resulting in an increase in the replication latency. Starting from TiCDC v6.6.0, the changefeed of an upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which means the replication capability of a single table is scaled out.
From 1c003f58b2d081dca4d7082761b40cbb692df97b Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Tue, 14 Feb 2023 22:46:33 +0800 Subject: [PATCH 056/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 8beff1db5816..3a80edcea733 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -347,7 +347,7 @@ Starting from TiUP v1.11.3, the telemetry is disabled by default for newly deplo | [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | Newly added | This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. | | [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | Newly added | Controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. | | [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | Newly added | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. | -| [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660) | New | Controls whether to enable the resource control feature. 
The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. | +| [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660) | Newly added | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. | | [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | Newly added | Controls whether to use enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. | ### Configuration file parameters From 0e064f7c866ec828cc3cc26b82d1277a127d104b Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Wed, 15 Feb 2023 10:15:55 +0800 Subject: [PATCH 057/135] add telemetry configs --- releases/release-6.6.0.md | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 3a80edcea733..ce372d37d9f9 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -60,13 +60,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### SQL -* Support the MySQL-compatible foreign key constraints #18209 @crazycs520 **tw@Oreoxmt** +* Support the MySQL-compatible foreign key constraints #18209 @crazycs520 **tw@Oreoxmt** TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling. 
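
  The constraint validation and cascade behavior described above can be sketched with a minimal example (the table and column names here are hypothetical, for illustration only):

  ```sql
  -- A minimal sketch of the foreign key constraints described above.
  -- The table and column names are hypothetical.
  CREATE TABLE parent (
      id INT PRIMARY KEY
  );
  CREATE TABLE child (
      id INT PRIMARY KEY,
      parent_id INT,
      CONSTRAINT fk_parent FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE
  );

  INSERT INTO parent VALUES (1);
  INSERT INTO child VALUES (10, 1);     -- passes the constraint check
  -- INSERT INTO child VALUES (11, 2);  -- would fail: parent row 2 does not exist
  DELETE FROM parent WHERE id = 1;      -- cascades and also deletes child row 10
  ```

  `ON DELETE CASCADE` is one of the cascade operations mentioned above; the clause follows the MySQL foreign key syntax.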
For more information, see [documentation](/foreign-key.md). -* Support the MySQL-compatible multi-valued index (experimental) #39592 @xiongjiwei @qw4990 **tw@TomShawn** +* Support the MySQL-compatible multi-valued index (experimental) #39592 @xiongjiwei @qw4990 **tw@TomShawn** TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-value index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed. @@ -82,7 +82,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Stability -* Binding historical execution plans is GA #39199 @fzzf678 **tw@TomShawn** +* Binding historical execution plans is GA #39199 @fzzf678 **tw@TomShawn** In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability. @@ -127,7 +127,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### HTAP -* TiFlash supports data exchange with compression #6620 @solotzg **tw@TomShawn** +* TiFlash supports data exchange with compression #6620 @solotzg **tw@TomShawn** To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. 
When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. @@ -265,9 +265,12 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Telemetry -Starting from v6.6.0, the [telemetry](/telemetry.md) is disabled by default for TiDB and TiDB Dashboard. +- 在 v6.6.0 及之后发布的里程碑版本 (DMR) 和长期支持版本 (LTS) 中,TiDB 和 TiDB Dashboard 默认关闭遥测功能,即默认不再收集使用情况信息。如果升级前使用默认的遥测配置,则升级后遥测功能处于关闭状态。 +- 从 v1.11.3 起,新部署的 TiUP 默认关闭遥测功能,即默认不再收集使用情况信息。如果从 v1.11.3 之前的 TiUP 版本升级至 v1.11.3 或更高 TiUP 版本,遥测保持升级前的开启或关闭状态。 -Starting from TiUP v1.11.3, the telemetry is disabled by default for newly deployed TiUP. If you upgrade your TiUP version to v1.11.3 or later, the telemetry keeps the setting of the old TiUP versions. +> **注意:** +> +> 除了 v6.6.0 及之后发布的 DMR 和 LTS 版本默认关闭遥测外,2023 年 2 月 20 日后,为 TiDB LTS 版本发布的补丁版本也默认关闭遥测功能,默认不再收集使用情况信息分享给 PingCAP。具体的版本可参考 [TiDB 版本发布时间线](/releases/release-timeline.md)。 ### Ecosystem @@ -338,6 +341,7 @@ Starting from TiUP v1.11.3, the telemetry is disabled by default for newly deplo | [`foreign_key_checks`](/system-variables.md#foreign_key_checks) | Modified | This variable controls whether to enable the foreign key constraint check. The default value changes from `OFF` to `ON`, which means enabling the foreign key check by default. | | [`tidb_enable_foreign_key`](/system-variables.md#tidb_enable_foreign_key-new-in-v630) | Modified | This variable controls whether to enable the foreign key feature. The default value changes from `OFF` to `ON`, which means enabling foreign key by default. | | `tidb_enable_general_plan_cache` | Modified | This variable controls whether to enable General Plan Cache. 
Starting from v6.6.0, this variable is renamed to [`tidb_enable_non_prepared_plan_cache`](/system-variables.md#tidb_enable_non_prepared_plan_cache). | +| [`tidb_enable_telemetry`](/system-variables.md#tidb_enable_telemetry-new-in-v402) | Modified | The default value changes from `ON` to `OFF`, which means that telemetry is disabled by default in TiDB. | | `tidb_general_plan_cache_size` | Modified | This variable controls the maximum number of execution plans that can be cached by General Plan Cache. Starting from v6.6.0, this variable is renamed to [`tidb_non_prepared_plan_cache_size`](/system-variables.md#tidb_non_prepared_plan_cache_size). | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `learner` is added for this variable to specify the learner replicas with which TiDB reads data from read-only nodes. | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `prefer-leader` is added for this variable to improve the overall read availability of TiDB clusters. When this option is set, TiDB prefers to read from the leader replica. When the performance of the leader replica significantly decreases, TiDB automatically reads from follower replicas. | @@ -356,7 +360,9 @@ Starting from TiUP v1.11.3, the telemetry is disabled by default for newly deplo | -------- | -------- | -------- | -------- | | TiKV | `enable-statistics` | Deleted | This configuration item specifies whether to enable RocksDB statistics. Starting from v6.6.0, this item is deleted. RocksDB statistics are enabled for all clusters by default to help diagnostics. For details, see [#13942](https://github.com/tikv/tikv/pull/13942). | | TiKV | `storage.block-cache.shared` | Deleted | Starting from v6.6.0, this configuration item is deleted, and the block cache is enabled by default and cannot be disabled. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). 
|
+| TiDB | [`enable-telemetry`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB. |
 | TiKV | `storage.block-cache.block-cache-size` | Modified | Starting from v6.6.0, this configuration item is only used for calculating the default value of `storage.block-cache.capacity`. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). |
+| PD | [`enable-telemetry`](/pd-configuration-file.md#enable-telemetry) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB Dashboard. |
 | TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from `0` to `0.8`, which means the limit is 80% of the total memory. |
 | TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options, GCS and Azure, are added for `scheme`. |
 | TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | Newly added | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty.
| From a1160b504faadebc7c75e4e8d08952848c0e701a Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Wed, 15 Feb 2023 10:22:49 +0800 Subject: [PATCH 058/135] Update release-6.6.0.md --- releases/release-6.6.0.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index ce372d37d9f9..963a2ce4ba8a 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -60,13 +60,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### SQL -* Support the MySQL-compatible foreign key constraints #18209 @crazycs520 **tw@Oreoxmt** +* #18209 @crazycs520 **tw@Oreoxmt** TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling. For more information, see [documentation](/foreign-key.md). -* Support the MySQL-compatible multi-valued index (experimental) #39592 @xiongjiwei @qw4990 **tw@TomShawn** +* #39592 @xiongjiwei @qw4990 **tw@TomShawn** TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-value index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed. 
@@ -82,7 +82,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Stability -* Binding historical execution plans is GA #39199 @fzzf678 **tw@TomShawn** +* #39199 @fzzf678 **tw@TomShawn** In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability. @@ -127,7 +127,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### HTAP -* TiFlash supports data exchange with compression #6620 @solotzg **tw@TomShawn** +* #6620 @solotzg **tw@TomShawn** To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. 
From 9bf10959cd5a497a4392550a346718ec614fd5ac Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Wed, 15 Feb 2023 11:01:01 +0800 Subject: [PATCH 059/135] wrap html notes --- releases/release-6.6.0.md | 29 +++++++++++++---------------- 1 file changed, 13 insertions(+), 16 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 963a2ce4ba8a..5f2d840d093a 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -399,21 +399,21 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB - + `` - 改进了 TTL 后台清理任务的调度机制。允许将单个表的清理任务拆分成若干子任务并调度到多个 TiDB 节点同时运行。 [#40361](https://github.com/pingcap/tidb/issues/40361) @[YangKeao](https://github.com/YangKeao) - 优化了在设置了非默认的 delimiter 后运行 multi-statement 返回结果的列名的显示 [#39662](https://github.com/pingcap/tidb/issues/39662) @[mjonss](https://github.com/mjonss) - 优化了生成警告信息时的执行效率 [#39702](https://github.com/pingcap/tidb/issues/39702) @[tiancaiamao](https://github.com/tiancaiamao) - 为 ADD INDEX 支持分布式数据回填 (实验特性) [#37119](https://github.com/pingcap/tidb/issues/37119) @[zimulala](https://github.com/zimulala) - 允许使用 CURDATE() 作为列的默认值 [#38356](https://github.com/pingcap/tidb/issues/38356) @[CbcWestwolf](https://github.com/CbcWestwolf) - + `` - 增加了 partial order prop push down 对 LIST 类型的分区表的支持 [#40273](https://github.com/pingcap/tidb/issues/40273) @[winoros](https://github.com/winoros) - 增加了 hint 和 binding 冲突时的 warning 信息 [#40910](https://github.com/pingcap/tidb/issues/40910) @[Reminiscent](https://github.com/Reminiscent) - 优化了 Plan Cache 策略避免在一些场景使用 Plan Cache 时产生不优的计划 [#40312](https://github.com/pingcap/tidb/pull/40312) [#40218](https://github.com/pingcap/tidb/pull/40218) [#40280](https://github.com/pingcap/tidb/pull/40280) [#41136](https://github.com/pingcap/tidb/pull/41136) [#40686](https://github.com/pingcap/tidb/pull/40686) @[qw4990](https://github.com/qw4990) + TiKV - + `` - 
优化一些参数的默认值,当partitioned-raft-kv开启时block-cache调整为0.3可用内存(原来是0.45), region-split-size调整为10GB。当沿用raft-kv时且enable-region-bucket为true时,region-split-size默认调整为1GB [#12842](https://github.com/tikv/tikv/issues/12842) @[tonyxuqqi](https://github.com/tonyxuqqi) - 支持在Raftstore异步写入中的优先级调度[#13730] (https://github.com/tikv/tikv/issues/13730) @[Connor1996](https://github.com/Connor1996) - 支持TiKV在小于1 core的CPU下启动 [#13586] [#13752] [#14017](https://github.com/tikv/tikv/issues/13586) @[andreid-db](https://github.com/andreid-db) @@ -442,7 +442,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiCDC - - 支持 batch update dml 语句,提升 TiCDC 同步批量 update DML 的性能 [#8084](https://github.com/pingcap/tiflow/issues/8084) (dup: release-6.3.0.md > 改进提升> Tools> TiCDC)- 采用异步的模式实现 MQ sink 和 MySQL sink,提升 sink 的吞吐能力 [#5928](https://github.com/pingcap/tiflow/issues/5928) @[hicqu](https://github.com/hicqu) @[hi-rustin](https://github.com/hi-rustin) @@ -478,7 +477,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB - + `` - 修复了收集统计信息任务因为错误的 datetime 值而失败的问题 [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - 修复了 stats meta 没有创建的问题 [#38189](https://github.com/pingcap/tidb/issues/38189) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - 优化了删除分区表所依赖的列时的错误提示 [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust) @@ -520,22 +519,20 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - 修复了某些情况下唯一索引仍然可能产生重复数据的问题 [#40217](https://github.com/pingcap/tidb/issues/40217) @[tangenta](https://github.com/tangenta) - 修复了使用 Prepare/Execute 查询某些虚拟表时无法将表 ID 下推导致在大量 Region 的情况下 PD OOM 的问题 [#39605](https://github.com/pingcap/tidb/issues/39605) @[djshow832](https://github.com/djshow832) - 修复了添加索引时可能导致数据竞争的问题 [#40879](https://github.com/pingcap/tidb/issues/40879) @[tangenta](https://github.com/tangenta) - - + `` - 修复了非法的 datetime 值导致 analyze 失败的问题 
[#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - 修复了由 virtual column 引发的 can't find proper physical plan 问题 [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid) - 修复了当动态裁剪模式下的分区表有 global binding 时,TiDB 重启失败的问题 [#40368](https://github.com/pingcap/tidb/issues/40368) @[Yisaer](https://github.com/Yisaer) - 修复了 auto analyze 导致 graceful shutdown 耗时的问题 [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - - + `` - 修复了 IndexMerge 算子在触发内存限制行为时可能导致 tidb-server 崩溃的问题[#41036](https://github.com/pingcap/tidb/pull/41036) @[guo-shaoge](https://github.com/guo-shaoge) - 修复了在分区表上执行查询 `select * from t limit 1` 时,执行速度慢的问题[#40741](https://github.com/pingcap/tidb/pull/40741)@[solotzg](https://github.com/solotzg) - - + `` - 修复了过期的 region cache 可能残留导致的内存泄漏和性能下降问题 [#40461](https://github.com/pingcap/tidb/issues/40461) @[sticnarf](https://github.com/sticnarf) + TiKV - + + `` - 修复cast const Enum 到其他类型时的错误 [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) - 减少resolve-ts带来的网络流量 [#14098](https://github.com/tikv/tikv/issues/14092) @[overvenus] (https://github.com/overvenus) - 支持TiKV在小于1 core的CPU下启动 [#13586] [#13752] [#14017](https://github.com/tikv/tikv/issues/13586) @[andreid-db](https://github.com/andreid-db) @@ -543,14 +540,14 @@ In v6.6.0-DMR, the key new features and improvements are as follows: (dup: release-6.1.4.md > Bug 修复> TiKV)- 修复 TiDB 中事务在执行悲观 DML 失败后,再执行其他 DML 时,如果 TiDB 和 TiKV 之间存在网络故障,可能会造成数据不一致的问题 [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) @[myonkeminta](https://github.com/myonkeminta) + PD - + `` - 修复 region scatter 任务会生产多余非预期副本的问题 [#5920](https://github.com/tikv/pd/pull/5920) @[HundunDM](https://github.com/HunDunDM) - 修复 online-unsafe-recovery 在 auto-detect 模式下卡住并超时的问题 [#5754](https://github.com/tikv/pd/pull/5754) 
@[Connor1996](https://github.com/Connor1996) - 修复 replace down peer 在特定条件下执行变慢的问题 [#5789](https://github.com/tikv/pd/pull/5789)@[HundunDM](https://github.com/HunDunDM) - 修复调用 ReportMinResolvedTS 过高的情况下造成 PD OOM 的问题 [#5965](https://github.com/tikv/pd/issues/5965) @[HundunDM](https://github.com/HunDunDM) + TiFlash - + `` - 修复查询 TiFlash 相关的系统表可能会卡住的问题 [#6745](https://github.com/pingcap/tiflash/pull/6745) @[lidezhu](https://github.com/lidezhu) - 修复半连接在计算笛卡尔积时,使用内存过量的问题 [#6730](https://github.com/pingcap/tiflash/issues/6730) @[gengliqi](https://github.com/gengliqi) - 修复了 decimal 进行除法运算时不舍入的问题 [#6393](https://github.com/pingcap/tiflash/issues/6393) @[LittleFall](https://github.com/LittleFall) @@ -571,7 +568,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiCDC - + `` (dup: release-6.1.4.md > Bug 修复> Tools> TiCDC)- 修复不能通过配置文件修改 `transaction_atomicity` 和 `protocol` 参数的问题 [#7935](https://github.com/pingcap/tiflow/issues/7935) @[CharlesCheung96](https://github.com/CharlesCheung96) - 修复 redo log 存储路径没做权限预检查的问题。 [#6335](https://github.com/pingcap/tiflow/issues/6335) - 修复 redo log 容忍S3存储故障过短的问题。 [#8089](https://github.com/pingcap/tiflow/issues/8089) @@ -581,7 +578,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB Data Migration (DM) - + `` - 修复 binlog-schema delete 失败的问题[#7373](https://github.com/pingcap/tiflow/issues/7373) @[liumengya94] - 修复最后一个 binlog 为被 skip 的 ddl 会导致 checkpoint 不推进的问题[#8175](https://github.com/pingcap/tiflow/issues/8175) @[D3Hunter] (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- 修复当在某个表上同时指定 `UPDATE` 和非 `UPDATE` 类型的表达式过滤规则 `expression-filter` 时,所有 `UPDATE` 操作被跳过的问题 [#7831](https://github.com/pingcap/tiflow/issues/7831) @[lance6716](https://github.com/lance6716) @[lance6716] From ad044e1e4c5c6c6d7ffeb558725f614985313f99 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Wed, 15 Feb 2023 11:07:18 +0800 Subject: [PATCH 060/135] add telemetry 
notes

---
 releases/release-6.6.0.md | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 5f2d840d093a..943a8a182b14 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -265,12 +265,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 ### Telemetry

-- 在 v6.6.0 及之后发布的里程碑版本 (DMR) 和长期支持版本 (LTS) 中,TiDB 和 TiDB Dashboard 默认关闭遥测功能,即默认不再收集使用情况信息。如果升级前使用默认的遥测配置,则升级后遥测功能处于关闭状态。
-- 从 v1.11.3 起,新部署的 TiUP 默认关闭遥测功能,即默认不再收集使用情况信息。如果从 v1.11.3 之前的 TiUP 版本升级至 v1.11.3 或更高 TiUP 版本,遥测保持升级前的开启或关闭状态。
-
-> **注意:**
->
-> 除了 v6.6.0 及之后发布的 DMR 和 LTS 版本默认关闭遥测外,2023 年 2 月 20 日后,为 TiDB LTS 版本发布的补丁版本也默认关闭遥测功能,默认不再收集使用情况信息分享给 PingCAP。具体的版本可参考 [TiDB 版本发布时间线](/releases/release-timeline.md)。
+- Starting from February 20, 2023, the telemetry feature is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0). If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see [TiDB Release Timeline](/releases/release-timeline.md).
+- Starting from v1.11.3, the telemetry feature is disabled by default in newly deployed TiUP. If you upgrade from a previous version of TiUP to v1.11.3 or a later version, the telemetry feature keeps the same status as before the upgrade.
### Ecosystem

From 9317e4bd6edb14c0d4a730a770f6b834bb76bdf4 Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Wed, 15 Feb 2023 11:25:20 +0800
Subject: [PATCH 061/135] remove div

---
 releases/release-6.6.0.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 943a8a182b14..51d355190f23 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -60,13 +60,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 ### SQL

-* #18209 @crazycs520 **tw@Oreoxmt**
+* Support the MySQL-compatible foreign key constraints #18209 @crazycs520 **tw@Oreoxmt**

    TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraint validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling.

    For more information, see [documentation](/foreign-key.md).

-* #39592 @xiongjiwei @qw4990 **tw@TomShawn**
+* Support the MySQL-compatible multi-valued index (experimental) #39592 @xiongjiwei @qw4990 **tw@TomShawn**

    TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-valued index to filter the retrieval conditions with the `MEMBER OF()`, `JSON_CONTAINS()`, and `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed.
@@ -82,7 +82,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Stability -* #39199 @fzzf678 **tw@TomShawn** +* Binding historical execution plans is GA #39199 @fzzf678 **tw@TomShawn** In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability. @@ -127,7 +127,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### HTAP -* #6620 @solotzg **tw@TomShawn** +* TiFlash supports data exchange with compression #6620 @solotzg **tw@TomShawn** To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. 
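As a quick sketch of the multi-valued index usage described in the SQL section above: this is a minimal illustration only, not taken from the release notes themselves. The table, column, and index names are hypothetical, and the syntax follows the MySQL 8.0 form that the feature is stated to be compatible with.

```sql
-- Hypothetical table: a JSON column whose $.zipcode field holds an array.
CREATE TABLE customers (
    id BIGINT PRIMARY KEY,
    custinfo JSON,
    -- Multi-valued index over the array elements at $.zipcode.
    INDEX zips ((CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)))
);

-- Filters like these can be served by the multi-valued index
-- instead of a full table scan:
SELECT * FROM customers WHERE 94507 MEMBER OF (custinfo->'$.zipcode');
SELECT * FROM customers WHERE JSON_CONTAINS(custinfo->'$.zipcode', '[94507, 94582]');
SELECT * FROM customers WHERE JSON_OVERLAPS(custinfo->'$.zipcode', '[94507, 94582]');
```

Whether the optimizer actually picks the index for a given query can be checked with `EXPLAIN`; see the linked user documentation for the exact conditions under which a multi-valued index is usable.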
From b7e1fa493d32157ad72e9f1b1f83679ed337172d Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Wed, 15 Feb 2023 12:28:03 +0800 Subject: [PATCH 062/135] Apply suggestions from code review --- releases/release-6.6.0.md | 44 +++++++++++++++++++-------------------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 51d355190f23..1c1e65e0378c 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -434,35 +434,35 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + Backup & Restore (BR) - - 优化 TiKV 端下载日志备份文件的并发度,提升常规场景下 PITR 恢复的性能。[#14206](https://github.com/tikv/tikv/issues/14206) @[YuJuncen](https://github.com/YuJuncen) + - Optimize the concurrency of downloading log backup files on the TiKV side to improve the performance of PITR recovery in regular scenarios [#14206](https://github.com/tikv/tikv/issues/14206) @[YuJuncen](https://github.com/YuJuncen) + TiCDC - - 支持 batch update dml 语句,提升 TiCDC 同步批量 update DML 的性能 [#8084](https://github.com/pingcap/tiflow/issues/8084) - (dup: release-6.3.0.md > 改进提升> Tools> TiCDC)- 采用异步的模式实现 MQ sink 和 MySQL sink,提升 sink 的吞吐能力 [#5928](https://github.com/pingcap/tiflow/issues/5928) @[hicqu](https://github.com/hicqu) @[hi-rustin](https://github.com/hi-rustin) + - Support batch UPDATE DML statements to improve TiCDC replication performance [#8084](https://github.com/pingcap/tiflow/issues/8084) + - Implement MQ sink and MySQL sink in the asynchronous mode to improve the sink throughput [#5928](https://github.com/pingcap/tiflow/issues/5928) @[hicqu](https://github.com/hicqu) @[hi-rustin](https://github.com/hi-rustin) + TiDB Data Migration (DM) - Optimize DM alert rules and content [#7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd** - Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever a related error occured. 
But some alerts are caused by idle database connections, which can be recovered after reconnecting. To reduce this kind of alerts, DM divides errors into two types: automatically recoverable errors and unrecoverable errors: + Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever a related error occurred. But some alerts are caused by idle database connections, which can be recovered after reconnecting. To reduce this kind of alerts, DM divides errors into two types: automatically recoverable errors and unrecoverable errors:

- - For an error that is automatically recoverable, DM reports the alert only if the error occurs more than 3 times within 2 minutes.
- - For an error that is not automatically recoverable, DM maintains the original behavior and reports the alert immediately.
- - 优化 relay 性能[#4287](https://github.com/pingcap/tiflow/issues/4287) @[GMHDBJD] + - Optimize relay performance by adding the async/batch relay writer [#4287](https://github.com/pingcap/tiflow/issues/4287) @[GMHDBJD](https://github.com/GMHDBJD) + TiDB Lightning - - physical 导入模式支持 keyspace [#40531](https://github.com/pingcap/tidb/issues/40531) @[iosmanthus] - - 支持通过 lightning.max-error 设置最大冲突个数 [#40743](https://github.com/pingcap/tidb/issues/40743) @[dsdashun] - - 支持带有 BOM header 的数据文件 [#40744](https://github.com/pingcap/tidb/issues/40744) @[dsdashun] - - 优化遇到 tikv 限流错误时处理逻辑,改为尝试其他不繁忙的 region [#40205](https://github.com/pingcap/tidb/issues/40205) @[lance6716] - - 导入时关闭对表外键的检查 [#40027](https://github.com/pingcap/tidb/issues/40027) @[gozssky] + - Physical Import Mode supports Keyspace [#40531](https://github.com/pingcap/tidb/issues/40531) @[iosmanthus](https://github.com/iosmanthus) + - Support setting the maximum number of conflicts by `lightning.max-error` [#40743](https://github.com/pingcap/tidb/issues/40743) @[dsdashun](https://github.com/dsdashun) + - Support importing CSV data files with BOM headers [#40744](https://github.com/pingcap/tidb/issues/40744) @[dsdashun](https://github.com/dsdashun) + - Optimize the processing logic when encountering TiKV flow-limiting errors and try other available regions instead [#40205](https://github.com/pingcap/tidb/issues/40205) @[lance6716](https://github.com/lance6716) + - Disable checking the table foreign keys during import [#40027](https://github.com/pingcap/tidb/issues/40027) @[gozssky](https://github.com/gozssky) + Dumpling - - 支持导出外键相关设置 [#39913](https://github.com/pingcap/tidb/issues/39913) @[lichunzhu] + - Support exporting settings for foreign keys [#39913](https://github.com/pingcap/tidb/issues/39913) @[lichunzhu](https://github.com/lichunzhu) + Sync-diff-inspector @@ -552,15 +552,15 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + Backup & Restore (BR) - - Fix the issue that when restoring 
log backup, hot Regions cause the restore to fail [#37207](https://github.com/pingcap/tidb/issues/37207) @[Leavrth](https://github.com/Leavrth) - - 修复了恢复数据到正在运行日志备份的集群,导致日志备份文件无法恢复的问题 [#40797](https://github.com/pingcap/tidb/issues/40797) @[Leavrth](https://github.com/Leavrth) - - 修复了 PITR 功能不支持 CA-bundle 认证的问题 [#38775](https://github.com/pingcap/tidb/issues/38775) @[YuJuncen](https://github.com/YuJuncen) - - 修复了恢复时重复的临时表导致的 Panic 问题 [#40797](https://github.com/pingcap/tidb/issues/40797) @[joccau](https://github.com/joccau) - - 修复了 PITR 不支持 PD 集群配置变更的问题 [#14165](https://github.com/tikv/tikv/issues/14165) @[YuJuncen](https://github.com/YuJuncen) - - 修复了 PD 与 tidb-server 的连接故障导致 PITR 备份进度不推进的问题 [#41082](https://github.com/pingcap/tidb/issues/41082) @[YuJuncen](https://github.com/YuJuncen) - - 修复了 PD 与 TiKV 的连接故障导致 TiKV 不能监听 PITR 任务的问题 [#14159](https://github.com/tikv/tikv/issues/14159) @[YuJuncen](https://github.com/YuJuncen) - - 修复了当 TiDB 集群不存在 PITR 备份任务时,`resolve lock` 频率过高的问题 [#40759](https://github.com/pingcap/tidb/issues/40759) @[joccau](https://github.com/joccau) - - 修复了 PITR 备份任务被删除时存在备份信息残留导致新任务出现数据不一致的问题 [#40403](https://github.com/pingcap/tidb/issues/40403) @[joccau](https://github.com/joccau) + - Fix the issue that when restoring log backup, hot regions cause the restore to fail [#37207](https://github.com/pingcap/tidb/issues/37207) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that restoring data to a cluster on which the log backup is running causes the log backup file to be unrecoverable [#40797](https://github.com/pingcap/tidb/issues/40797) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that the PITR feature does not support CA-bundles [#38775](https://github.com/pingcap/tidb/issues/38775) @[YuJuncen](https://github.com/YuJuncen) + - Fix the panic issue caused by duplicate temporary tables during recovery [#40797](https://github.com/pingcap/tidb/issues/40797) @[joccau](https://github.com/joccau) + - Fix the issue that PITR does not 
support configuration changes for PD clusters [#14165](https://github.com/tikv/tikv/issues/14165) @[YuJuncen](https://github.com/YuJuncen) + - Fix the issue that the connection failure between PD and tidb-server causes PITR backup progress not to advance [#41082](https://github.com/pingcap/tidb/issues/41082) @[YuJuncen](https://github.com/YuJuncen) + - Fix the issue that TiKV cannot listen to PITR tasks due to the connection failure between PD and TiKV [#14159](https://github.com/tikv/tikv/issues/14159) @[YuJuncen](https://github.com/YuJuncen) + - Fix the issue that the frequency of `resolve lock` is too high when there is no PITR backup task in the TiDB cluster [#40759](https://github.com/pingcap/tidb/issues/40759) @[joccau](https://github.com/joccau) + - Fix the issue that when a PITR backup task is deleted, the residual backup data causes data inconsistency in new tasks [#40403](https://github.com/pingcap/tidb/issues/40403) @[joccau](https://github.com/joccau) + TiCDC From 2e86fb19799b10cca4425e6302d877c5ecf3b2ff Mon Sep 17 00:00:00 2001 From: Ran Date: Wed, 15 Feb 2023 14:40:13 +0800 Subject: [PATCH 063/135] Apply suggestions from code review --- releases/release-6.6.0.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 1c1e65e0378c..865d602276a1 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -564,13 +564,12 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiCDC - `` - (dup: release-6.1.4.md > Bug 修复> Tools> TiCDC)- 修复不能通过配置文件修改 `transaction_atomicity` 和 `protocol` 参数的问题 [#7935](https://github.com/pingcap/tiflow/issues/7935) @[CharlesCheung96](https://github.com/CharlesCheung96) - - 修复 redo log 存储路径没做权限预检查的问题。 [#6335](https://github.com/pingcap/tiflow/issues/6335) - - 修复 redo log 容忍S3存储故障过短的问题。 [#8089](https://github.com/pingcap/tiflow/issues/8089) + (dup: release-6.1.4.md > Bug 修复> Tools> TiCDC)- Fix the issue that 
`transaction_atomicity` and `protocol` cannot be updated via the configuration file [#7935](https://github.com/pingcap/tiflow/issues/7935) @[CharlesCheung96](https://github.com/CharlesCheung96)
- Fix the issue that precheck is not performed on the storage path of redo log [#6335](https://github.com/pingcap/tiflow/issues/6335)
- Fix the issue of insufficient duration that redo log can tolerate for S3 storage failure [#8089](https://github.com/pingcap/tiflow/issues/8089)
- Fix the issue that changefeed might get stuck in special scenarios such as when scaling in or scaling out TiKV or TiCDC nodes [#8197](https://github.com/pingcap/tiflow/issues/8197)
- Fix the issue of too high traffic among TiKV nodes that was introduced in v6.5 [#14092](https://github.com/tikv/tikv/issues/14092)
- Fix the performance issues of TiCDC in terms of CPU usage, memory control, and throughput when the pull-based sink is enabled [#8142](https://github.com/pingcap/tiflow/issues/8142) [#8157](https://github.com/pingcap/tiflow/issues/8157) [#8001](https://github.com/pingcap/tiflow/issues/8001) [#5928](https://github.com/pingcap/tiflow/issues/5928)

From d6f433c338a966781926e5155d23c0ebe01b7aef Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Wed, 15 Feb 2023 14:48:38 +0800
Subject: [PATCH 064/135] remove highlight links and anchors

---
 releases/release-6.6.0.md | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 865d602276a1..69ce97b4f474 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -23,34 +23,34 @@ In v6.6.0-DMR, the key new features and
improvements are as follows: - + - + - + - + - + - + - + @@ -60,13 +60,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### SQL -* Support the MySQL-compatible foreign key constraints#18209@crazycs520 **tw@Oreoxmt** +* Support the MySQL-compatible foreign key constraints #18209@crazycs520 **tw@Oreoxmt** TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling. For more information, see [documentation](/foreign-key.md). -* Support the MySQL-compatible multi-valued index (experimental)#39592@xiongjiwei@qw4990 **tw@TomShawn** +* Support the MySQL-compatible multi-valued index (experimental) #39592@xiongjiwei@qw4990 **tw@TomShawn** TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-value index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed. @@ -82,7 +82,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Stability -* Binding historical execution plans is GA#39199@fzzf678 **tw@TomShawn** +* Binding historical execution plans is GA #39199@fzzf678 **tw@TomShawn** In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. 
The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability. @@ -127,13 +127,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### HTAP -* TiFlash supports data exchange with compression#6620@solotzg **tw@TomShawn** +* TiFlash supports data exchange with compression #6620@solotzg **tw@TomShawn** To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. For details, see [documentation](). -* TiFlash supports the Stale Read feature#4483@hehechen **tw@qiancai** +* TiFlash supports the Stale Read feature #4483@hehechen **tw@qiancai** The Stale Read feature has been generally available (GA) since v5.1.1, which allows you to read historical data at a specific timestamp or within a specified time range. Stale read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash does not support Stale Read. Even if a table has TiFlash replicas, Stale Read can only read its TiKV replicas. 
@@ -170,7 +170,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### DB operations -* Support resource control based on resource groups (experimental)#38825@nolouch@BornChanger@glorv@tiancaiamao@Connor1996@JmPotato@hnes@CabinfeverB@HuSharp **tw@hfxsd** +* Support resource control based on resource groups (experimental) #38825@nolouch@BornChanger@glorv@tiancaiamao@Connor1996@JmPotato@hnes@CabinfeverB@HuSharp **tw@hfxsd** Now you can create resource groups for a TiDB cluster, bind different database users to corresponding resource groups, and set quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions in the same resource group will be limited to the quota. In this way, even if a resource group is over-consumed, the sessions in other resource groups are not affected. TiDB provides a built-in view of the actual usage of resources on Grafana dashboards, assisting you to allocate resources more rationally. @@ -270,7 +270,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Ecosystem -* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental)@lance6716 **tw@ran-huang** +* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @lance6716 **tw@ran-huang** In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. 
From 0c5d3eb937c79267a5a1fe9c46e3356e0a5078f2 Mon Sep 17 00:00:00 2001 From: Ran Date: Wed, 15 Feb 2023 14:57:43 +0800 Subject: [PATCH 065/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 69ce97b4f474..8b96e94a136a 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -574,8 +574,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB Data Migration (DM) `` - - 修复 binlog-schema delete 失败的问题[#7373](https://github.com/pingcap/tiflow/issues/7373) @[liumengya94] - - 修复最后一个 binlog 为被 skip 的 ddl 会导致 checkpoint 不推进的问题[#8175](https://github.com/pingcap/tiflow/issues/8175) @[D3Hunter] + - Fix the issue that the `binlog-schema delete` command fails to execute [#7373](https://github.com/pingcap/tiflow/issues/7373) @[liumengya94] + - Fix the issue that the checkpoint does not advance when the last binlog is a skipped DDL [#8175](https://github.com/pingcap/tiflow/issues/8175) @[D3Hunter] (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- 修复当在某个表上同时指定 `UPDATE` 和非 `UPDATE` 类型的表达式过滤规则 `expression-filter` 时,所有 `UPDATE` 操作被跳过的问题 [#7831](https://github.com/pingcap/tiflow/issues/7831) @[lance6716](https://github.com/lance6716) @[lance6716] (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- 修复当某个表上仅指定 `update-old-value-expr` 或 `update-new-value-expr` 时,过滤规则不生效或 DM 发生 panic 的问题 [#7774](https://github.com/pingcap/tiflow/issues/7774) @[lance6716](https://github.com/lance6716) @[lance6716] From c22331d115b2b5a60bf085cabd89f87f8ae81ebe Mon Sep 17 00:00:00 2001 From: Ran Date: Wed, 15 Feb 2023 14:58:51 +0800 Subject: [PATCH 066/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 8b96e94a136a..fb98bbdf5aae 100644 --- 
a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -576,8 +576,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

     ``
     - Fix the issue that the `binlog-schema delete` command fails to execute [#7373](https://github.com/pingcap/tiflow/issues/7373) @[liumengya94]
     - Fix the issue that the checkpoint does not advance when the last binlog is a skipped DDL [#8175](https://github.com/pingcap/tiflow/issues/8175) @[D3Hunter]
- (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- 修复当在某个表上同时指定 `UPDATE` 和非 `UPDATE` 类型的表达式过滤规则 `expression-filter` 时,所有 `UPDATE` 操作被跳过的问题 [#7831](https://github.com/pingcap/tiflow/issues/7831) @[lance6716](https://github.com/lance6716) @[lance6716]
- (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- 修复当某个表上仅指定 `update-old-value-expr` 或 `update-new-value-expr` 时,过滤规则不生效或 DM 发生 panic 的问题 [#7774](https://github.com/pingcap/tiflow/issues/7774) @[lance6716](https://github.com/lance6716) @[lance6716]
+ (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- Fix a bug that when the expression filters of both "update" and "non-update" types are specified in one table, all `UPDATE` statements are skipped [#7831](https://github.com/pingcap/tiflow/issues/7831) @[lance6716](https://github.com/lance6716)
+ (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- Fix a bug that when only one of `update-old-value-expr` or `update-new-value-expr` is set for a table, the filter rule does not take effect or DM panics [#7774](https://github.com/pingcap/tiflow/issues/7774) @[lance6716](https://github.com/lance6716)

+ TiDB Lightning

From 395906872489330aee510ce65578ab66e78a3cc9 Mon Sep 17 00:00:00 2001
From: Ran
Date: Wed, 15 Feb 2023 16:15:28 +0800
Subject: [PATCH 067/135] Update releases/release-6.6.0.md

---
 releases/release-6.6.0.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/releases/release-6.6.0.md
b/releases/release-6.6.0.md
index fb98bbdf5aae..5d6f73790661 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -582,14 +582,14 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 + TiDB Lightning

     - Fix the issue that TiDB Lightning timeout hangs due to TiDB restart in some scenarios [#33714](https://github.com/pingcap/tidb/issues/33714) @[lichunzhu](https://github.com/lichunzhu) **tw@shichun-0415**
- - 修复并行导入时,当除最后一个外的 lightning 实例都遇到本地重复时,lightning 可能会跳过冲突处理的问题 [#40923](https://github.com/pingcap/tidb/issues/40923) @[lichunzhu]
- - 修复 precheck 无法准确检测目标集群是否存在运行中的 CDC 的问题 [#41040](https://github.com/pingcap/tidb/issues/41040) @[lance6716]
- - 修复 lightning 在 split-region 阶段 panic 问题(next key) [#40934](https://github.com/pingcap/tidb/issues/40934) @[lance6716]
- - 修复踢重逻辑可能导致 checksum 不一致的问题 [#40657](https://github.com/pingcap/tidb/issues/40657) @[gozssky]
- - 修复当数据文件中存在未闭合的 delimiter 可能导致 OOM 的问题 [#40400](https://github.com/pingcap/tidb/issues/40400) @[buchuitoudegou]
- - 修复报错中的文件 offset 超过文件大小的问题 [#40034](https://github.com/pingcap/tidb/issues/40034) @[buchuitoudegou]
- - 修复新版 PDClient 可能导致并行导入失败的问题 [#40493](https://github.com/pingcap/tidb/issues/40493) @[AmoebaProtozoa]
- (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Lightning)- 修复 precheck 检查项有时无法监测到之前的导入失败遗留的脏数据的问题 [#39477](https://github.com/pingcap/tidb/issues/39477) @[dsdashun](https://github.com/dsdashun) @[dsdashun]
+ - Fix the issue that TiDB Lightning might incorrectly skip conflict resolution when all but the last TiDB Lightning instance encounters a local duplicate record during a parallel import [#40923](https://github.com/pingcap/tidb/issues/40923) @[lichunzhu](https://github.com/lichunzhu)
+ - Fix the issue that precheck cannot accurately detect the presence of a running TiCDC in the target cluster [#41040](https://github.com/pingcap/tidb/issues/41040) @[lance6716](https://github.com/lance6716)
+ - Fix the issue that TiDB Lightning panics in the split-region phase [#40934](https://github.com/pingcap/tidb/issues/40934)
@[lance6716](https://github.com/lance6716)
+ - Fix the issue that the conflict resolution logic (`duplicate-resolution`) might lead to inconsistent checksums [#40657](https://github.com/pingcap/tidb/issues/40657) @[gozssky](https://github.com/gozssky)
+ - Fix a possible OOM problem when there is an unclosed delimiter in the data file [#40400](https://github.com/pingcap/tidb/issues/40400) @[buchuitoudegou](https://github.com/buchuitoudegou)
+ - Fix the issue that the file offset in the error report exceeds the file size [#40034](https://github.com/pingcap/tidb/issues/40034) @[buchuitoudegou](https://github.com/buchuitoudegou)
+ - Fix an issue with the new version of PDClient that might cause parallel import to fail [#40493](https://github.com/pingcap/tidb/issues/40493) @[AmoebaProtozoa](https://github.com/AmoebaProtozoa)
+ (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Lightning)- Fix the issue that TiDB Lightning prechecks cannot find dirty data left by previously failed imports [#39477](https://github.com/pingcap/tidb/issues/39477) @[dsdashun](https://github.com/dsdashun)

 ## Contributors

From 9b5e93d9ea47dd0d955832406b9e563e58b781d8 Mon Sep 17 00:00:00 2001
From: shichun-0415
Date: Wed, 15 Feb 2023 18:35:07 +0800
Subject: [PATCH 068/135] translate 20 sql-infra bug fix notes

---
 releases/release-6.6.0.md | 42 ++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 20 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 5d6f73790661..628491c21905 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -390,6 +390,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 - Support dynamically modifying [`store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530). This facilitates more flexible TiKV performance tuning.
 - Remove the limit on `LIMIT` clauses, thus improving the execution performance.
 - Starting from v6.6.0, BR does not support restoring data to clusters earlier than v6.5.0.
+- TiDB no longer supports modifying column types on partitioned tables because of potential correctness issues.
## Improvements @@ -495,31 +496,32 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - 修复了 MODIFY COLUMN 同时修改列默认值导致写入非法值的问题 [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016) - 修复了表 region 比较多时因 region 缓存失效导致加索引效率低下的问题 [#38436](https://github.com/pingcap/tidb/issues/38436) @[tangenta](https://github.com/tangenta) - 修复了分配自增 ID 时的数据竞争问题 [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9) - - 修复了 JSON 的 not 表达式实现与 MySQL 实现不兼容的问题 [#40683](https://github.com/pingcap/tidb/issues/40683) @[YangKeao](https://github.com/YangKeao) - - 修复了并发视图时可能会造成 DDL 操作卡住的问题 [#40352](https://github.com/pingcap/tidb/issues/40352) @[zeminzhou](https://github.com/zeminzhou) - (dup: release-6.1.4.md > 兼容性变更> TiDB)- 由于可能存在正确性问题,分区表目前不再支持修改列类型 [#40620](https://github.com/pingcap/tidb/issues/40620) @[mjonss](https://github.com/mjonss) @[mjonss](https://github.com/mjonss) - - 修复了使用 `caching_sha2_password` 方式进行认证时如果不指定的密码会报错 "Malformed packet" 的问题 [#40831](https://github.com/pingcap/tidb/issues/40831) @[dveeden](https://github.com/dveeden) - - 修复了在执行 TTL 任务时,如果表的主键包含 `ENUM` 类型的列任务会失败的问题 [#40456](https://github.com/pingcap/tidb/issues/40456) @[lcwangchao](https://github.com/lcwangchao) - - 修复了某些被 MDL 阻塞的 DDL 操作无法在 `mysql.tidb_mdl_view` 中查询到的问题 [#40838](https://github.com/pingcap/tidb/issues/40838) @[YangKeao](https://github.com/YangKeao) - - 修复了 DDL 在 ingest 过程中可能会发生数据竞争的问题 [#40970](https://github.com/pingcap/tidb/issues/40970) @[tangenta](https://github.com/tangenta) - - 修复了在改变时区后 TTL 任务可能会错误删除某些数据的问题 [41043](https://github.com/pingcap/tidb/issues/41043) @[lcwangchao](https://github.com/lcwangchao) - - 修复了 `JSON_OBJECT` 在某些情况下会报错的问题 [#39997](https://github.com/pingcap/tidb/pull/39997) @[YangKeao](https://github.com/YangKeao) - - 修复了 TiDB 在初始化时有可能死锁的问题 [#40408](https://github.com/pingcap/tidb/issues/40408) @[Defined2014](https://github.com/Defined2014) - - 修复了内存重用导致的在某些情况下系统变量的值会被错误修改的问题 
[#40979](https://github.com/pingcap/tidb/issues/40979) @[lcwangchao](https://github.com/lcwangchao)
- 修复了 ingest 模式下创建唯一索引可能会导致数据和索引不一致的问题 [#40464](https://github.com/pingcap/tidb/issues/40464) @[tangenta](https://github.com/tangenta)
- 修复了并发 truncate 同一张表时部分 truncate 操作无法被 MDL 阻塞的问题 [#40484](https://github.com/pingcap/tidb/issues/40484) @[wjhuang2016](https://github.com/wjhuang2016)
- 修复了 `SHOW PRIVILEGES` 命令显示的权限列表不完整的问题 [#40591](https://github.com/pingcap/tidb/issues/40591) @[CbcWestwolf](https://github.com/CbcWestwolf)
- 修复了在 ADD UNIQUE INDEX 时有可能会 PANIC 的问题 [#40592](https://github.com/pingcap/tidb/issues/40592) @[tangenta](https://github.com/tangenta)
- 修复了 ADMIN RECOVER 操作可能会造成索引数据损坏的问题 [#40430](https://github.com/pingcap/tidb/issues/40430) @[xiongjiwei](https://github.com/xiongjiwei)
- 修复了表达式索引中含有 CAST 时对表进行查询可能出错的问题 [#40129](https://github.com/pingcap/tidb/pull/40129) @[xiongjiwei](https://github.com/xiongjiwei)
- 修复了某些情况下唯一索引仍然可能产生重复数据的问题 [#40217](https://github.com/pingcap/tidb/issues/40217) @[tangenta](https://github.com/tangenta)
- 修复了使用 Prepare/Execute 查询某些虚拟表时无法将表 ID 下推导致在大量 Region 的情况下 PD OOM 的问题 [#39605](https://github.com/pingcap/tidb/issues/39605) @[djshow832](https://github.com/djshow832)
- 修复了添加索引时可能导致数据竞争的问题 [#40879](https://github.com/pingcap/tidb/issues/40879) @[tangenta](https://github.com/tangenta)
- Fix the issue that the implementation of the JSON `not` expression is incompatible with that of MySQL [#40683](https://github.com/pingcap/tidb/issues/40683) @[YangKeao](https://github.com/YangKeao)
- Fix the issue that creating views concurrently might cause DDL operations to be blocked [#40352](https://github.com/pingcap/tidb/issues/40352) @[zeminzhou](https://github.com/zeminzhou)
- Fix the issue that "Malformed packet" is reported when using `caching_sha2_password` for authentication without specifying a password [#40831](https://github.com/pingcap/tidb/issues/40831) @[dveeden](https://github.com/dveeden)
+ - 
Fix the issue that a TTL task fails if the primary key of the table contains an `ENUM` column [#40456](https://github.com/pingcap/tidb/issues/40456) @[lcwangchao](https://github.com/lcwangchao)
    - Fix the issue that some DDL operations blocked by MDL cannot be queried in `mysql.tidb_mdl_view` [#40838](https://github.com/pingcap/tidb/issues/40838) @[YangKeao](https://github.com/YangKeao)
    - Fix the issue that data race might occur during DDL ingestion [#40970](https://github.com/pingcap/tidb/issues/40970) @[tangenta](https://github.com/tangenta)
    - Fix the issue that TTL task might delete some data incorrectly after the time zone changes [#41043](https://github.com/pingcap/tidb/issues/41043) @[lcwangchao](https://github.com/lcwangchao)
    - Fix the issue that `JSON_OBJECT` might report an error in some cases [#39806](https://github.com/pingcap/tidb/issues/39806) @[YangKeao](https://github.com/YangKeao)
    - Fix the issue that TiDB might deadlock during initialization [#40408](https://github.com/pingcap/tidb/issues/40408) @[Defined2014](https://github.com/Defined2014)
    - Fix the issue that the value of system variables might be incorrectly modified in some cases due to memory reuse [#40979](https://github.com/pingcap/tidb/issues/40979) @[lcwangchao](https://github.com/lcwangchao)
    - Fix the issue that data might be inconsistent with the index after creating a unique index in the ingest mode [#40464](https://github.com/pingcap/tidb/issues/40464) @[tangenta](https://github.com/tangenta)
    - Fix the issue that some truncate operations cannot be blocked by MDL when truncating the same table concurrently [#40484](https://github.com/pingcap/tidb/issues/40484) @[wjhuang2016](https://github.com/wjhuang2016)
    - Fix the issue that the privilege list returned by the `SHOW PRIVILEGES` command is incomplete [#40591](https://github.com/pingcap/tidb/issues/40591) @[CbcWestwolf](https://github.com/CbcWestwolf)
    - Fix the issue that TiDB panics when adding a unique index 
[#40592](https://github.com/pingcap/tidb/issues/40592) @[tangenta](https://github.com/tangenta) + - Fix the issue that executing the `ADMIN RECOVER` statement might cause the index data to be corrupted [#40430](https://github.com/pingcap/tidb/issues/40430) @[xiongjiwei](https://github.com/xiongjiwei) + - Fix the issue that a query might fail when the queried table contains a `CAST` expression in the expression index [#40130](https://github.com/pingcap/tidb/issues/40130) @[xiongjiwei](https://github.com/xiongjiwei) + - Fix the issue that a unique index might still produce duplicate data in some cases [#40217](https://github.com/pingcap/tidb/issues/40217) @[tangenta](https://github.com/tangenta) + - Fix PD OOM when there is a large number of Regions but the table ID cannot be pushed down when querying some virtual tables using `Prepare` or `Execute` [#39605](https://github.com/pingcap/tidb/issues/39605) @[djshow832](https://github.com/djshow832) + - Fix the issue that data race might occur when an index is added [#40879](https://github.com/pingcap/tidb/issues/40879) @[tangenta](https://github.com/tangenta) + `` - 修复了非法的 datetime 值导致 analyze 失败的问题 [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - 修复了由 virtual column 引发的 can't find proper physical plan 问题 [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid) - 修复了当动态裁剪模式下的分区表有 global binding 时,TiDB 重启失败的问题 [#40368](https://github.com/pingcap/tidb/issues/40368) @[Yisaer](https://github.com/Yisaer) - 修复了 auto analyze 导致 graceful shutdown 耗时的问题 [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) + `` - 修复了 IndexMerge 算子在触发内存限制行为时可能导致 tidb-server 崩溃的问题[#41036](https://github.com/pingcap/tidb/pull/41036) @[guo-shaoge](https://github.com/guo-shaoge) - 修复了在分区表上执行查询 `select * from t limit 1` 
时,执行速度慢的问题[#40741](https://github.com/pingcap/tidb/pull/40741)@[solotzg](https://github.com/solotzg) From e1a8f5dd6d48d8b94a258f616191910b83708068 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Wed, 15 Feb 2023 18:49:57 +0800 Subject: [PATCH 069/135] Apply suggestions from code review --- releases/release-6.6.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 628491c21905..ebf3fec62801 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -497,16 +497,16 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - 修复了表 region 比较多时因 region 缓存失效导致加索引效率低下的问题 [#38436](https://github.com/pingcap/tidb/issues/38436) @[tangenta](https://github.com/tangenta) - 修复了分配自增 ID 时的数据竞争问题 [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9) - Fix the issue that the implementation of the not expression of JSON is incompatible with the implementation of MySQL [#40683](https://github.com/pingcap/tidb/issues/40683) @[YangKeao](https://github.com/YangKeao) - - Fix the issue that DDL operations might be blocked in the concurrent view mode [#40352](https://github.com/pingcap/tidb/issues/40352) @[zeminzhou](https://github.com/zeminzhou) + - Fix the issue that concurrent view might cause DDL operations to be blocked [#40352](https://github.com/pingcap/tidb/issues/40352) @[zeminzhou](https://github.com/zeminzhou) - Fix the issue that "Malformed packet" is reported when using `caching_sha2_password` for authentication without specifying a password [#40831](https://github.com/pingcap/tidb/issues/40831) @[dveeden](https://github.com/dveeden) - Fix the issue that a TTL task fails if the primary key of the table contains an `ENUM` column [#40456](https://github.com/pingcap/tidb/issues/40456) @[lcwangchao](https://github.com/lcwangchao) - Fix the issue that some DDL operations blocked by 
MDL cannot be queried in `mysql.tidb_mdl_view` [#40838](https://github.com/pingcap/tidb/issues/40838) @[YangKeao](https://github.com/YangKeao)
    - Fix the issue that data race might occur during DDL ingestion [#40970](https://github.com/pingcap/tidb/issues/40970) @[tangenta](https://github.com/tangenta)
    - Fix the issue that TTL tasks might delete some data incorrectly after the time zone changes [#41043](https://github.com/pingcap/tidb/issues/41043) @[lcwangchao](https://github.com/lcwangchao)
    - Fix the issue that `JSON_OBJECT` might report an error in some cases [#39806](https://github.com/pingcap/tidb/issues/39806) @[YangKeao](https://github.com/YangKeao)
    - Fix the issue that TiDB might deadlock during initialization [#40408](https://github.com/pingcap/tidb/issues/40408) @[Defined2014](https://github.com/Defined2014)
    - Fix the issue that the value of system variables might be incorrectly modified in some cases due to memory reuse [#40979](https://github.com/pingcap/tidb/issues/40979) @[lcwangchao](https://github.com/lcwangchao)
    - Fix the issue that data might be inconsistent with the index when a unique index is created in the ingest mode [#40464](https://github.com/pingcap/tidb/issues/40464) @[tangenta](https://github.com/tangenta)
    - Fix the issue that some truncate operations cannot be blocked by MDL when truncating the same table concurrently [#40484](https://github.com/pingcap/tidb/issues/40484) @[wjhuang2016](https://github.com/wjhuang2016)
    - Fix the issue that the privilege list returned by the `SHOW PRIVILEGES` command is incomplete 
[#40591](https://github.com/pingcap/tidb/issues/40591) @[CbcWestwolf](https://github.com/CbcWestwolf)
    - Fix the issue that TiDB panics when adding a unique index [#40592](https://github.com/pingcap/tidb/issues/40592) @[tangenta](https://github.com/tangenta)

From 9db730854396f480222a8ef4afc1cbf6b7af3a51 Mon Sep 17 00:00:00 2001
From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com>
Date: Wed, 15 Feb 2023 19:11:47 +0800
Subject: [PATCH 070/135] add back 40620

---
 releases/release-6.6.0.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index ebf3fec62801..abf22cc442cb 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -498,6 +498,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:
     - 修复了分配自增 ID 时的数据竞争问题 [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9)
     - Fix the issue that the implementation of the not expression of JSON is incompatible with the implementation of MySQL [#40683](https://github.com/pingcap/tidb/issues/40683) @[YangKeao](https://github.com/YangKeao)
     - Fix the issue that concurrent view might cause DDL operations to be blocked [#40352](https://github.com/pingcap/tidb/issues/40352) @[zeminzhou](https://github.com/zeminzhou)
+    - Fix data inconsistency caused by concurrently executing DDL statements to modify columns of partitioned tables [#40620](https://github.com/pingcap/tidb/issues/40620) @[mjonss](https://github.com/mjonss)
    - Fix the issue that "Malformed packet" is reported when using `caching_sha2_password` for authentication without specifying a password [#40831](https://github.com/pingcap/tidb/issues/40831) @[dveeden](https://github.com/dveeden)
    - Fix the issue that a TTL task fails if the primary key of the table contains an `ENUM` column [#40456](https://github.com/pingcap/tidb/issues/40456) @[lcwangchao](https://github.com/lcwangchao)
    - Fix the issue that 
some DDL operations blocked by MDL cannot be queried in `mysql.tidb_mdl_view` [#40838](https://github.com/pingcap/tidb/issues/40838) @[YangKeao](https://github.com/YangKeao) From 9ae8082909861e276c212598e994c5fbb053d782 Mon Sep 17 00:00:00 2001 From: Aolin Date: Thu, 16 Feb 2023 09:42:55 +0800 Subject: [PATCH 071/135] bug fix: translate sql-infra --- releases/release-6.6.0.md | 42 +++++++++++++++++++-------------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index abf22cc442cb..e15cc8f95e08 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -475,27 +475,27 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB `` - - 修复了收集统计信息任务因为错误的 datetime 值而失败的问题 [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - - 修复了 stats meta 没有创建的问题 [#38189](https://github.com/pingcap/tidb/issues/38189) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - - 优化了删除分区表所依赖的列时的错误提示 [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust) - - 修复了 DDL 回填数据时频繁发生的事务写冲突问题 [#24427](https://github.com/pingcap/tidb/issues/24427) @[mjonss](https://github.com/mjonss) - - 增加了 FLASHBACK CLUSTER 在检查 `min-resolved-ts` 失败后的重试机制 [#39836](https://github.com/pingcap/tidb/issues/39836) @[Defined2014](https://github.com/Defined2014) - - 修复了部分情况下空表不能使用 ingest 模式添加索引的问题 [#39641](https://github.com/pingcap/tidb/issues/39641) @[tangenta](https://github.com/tangenta) - - 修复了同一个事务中不同 SQL 的慢日志 `wait_ts` 相同的问题 [#39713](https://github.com/pingcap/tidb/issues/39713) @[TonsnakeLin](https://github.com/TonsnakeLin) - - 修复了添加列的过程中删除行记录报 "Assertion Failed" 错误的问题 [#39570](https://github.com/pingcap/tidb/issues/39570) @[wjhuang2016](https://github.com/wjhuang2016) - - 修复了修改列类型时报 "not a DDL owner" 错误的问题 [#39643](https://github.com/pingcap/tidb/issues/39643) @[zimulala](https://github.com/zimulala) - - 
修复了 AUTO_INCREMENT 列自动分配值耗尽后插入一行不报错的问题 [#38950](https://github.com/pingcap/tidb/issues/38950) @[Dousir9](https://github.com/Dousir9) - - 修复了创建表达式索引时报 "Unknown column" 错误的问题 [#39784](https://github.com/pingcap/tidb/issues/39784) @[Defined2014](https://github.com/Defined2014) - - 修复了生成列表达式包含表名时,重命名表后无法插入数据的问题 [#39826](https://github.com/pingcap/tidb/issues/39826) @[Defined2014](https://github.com/Defined2014) - - 修复了列在 write-only 状态下 INSERT IGNORE 语句无法正确填充默认值的问题 [#40192](https://github.com/pingcap/tidb/issues/40192) @[YangKeao](https://github.com/YangKeao) - - 修复了资源管控模块关闭时未能释放资源的问题 [#40546](https://github.com/pingcap/tidb/issues/40546) @[zimulala](https://github.com/zimulala) - - 不支持在分区表上执行 MODIFY COLUMN [#39915](https://github.com/pingcap/tidb/issues/39915) @[wjhuang2016](https://github.com/wjhuang2016) - - 禁止重命名分区表所依赖的列 [#40150](https://github.com/pingcap/tidb/issues/40150) @[mjonss](https://github.com/mjonss) - - 修复了 TTL 任务不能及时触发统计信息更新的问题 [#40109](https://github.com/pingcap/tidb/issues/40109) @[YangKeao](https://github.com/YangKeao) - - 修复了 TiDB 构造 key 范围时对 null 值处理不当导致读到预期外数据的行为 [#40158](https://github.com/pingcap/tidb/issues/40158) @[tiancaiamao](https://github.com/tiancaiamao) - - 修复了 MODIFY COLUMN 同时修改列默认值导致写入非法值的问题 [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016) - - 修复了表 region 比较多时因 region 缓存失效导致加索引效率低下的问题 [#38436](https://github.com/pingcap/tidb/issues/38436) @[tangenta](https://github.com/tangenta) - - 修复了分配自增 ID 时的数据竞争问题 [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9) + - Fix the issue that the statistics collection task fails due to an incorrect `DATETIME` value [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) + - Fix the issue that `stats_meta` is not created after creating a table [#38189](https://github.com/pingcap/tidb/issues/38189) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) + 
- Optimize the error message when deleting a column that a partitioned table depends on [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust)
    - Fix the issue of frequent write conflicts in transactions when performing DDL data backfill [#24427](https://github.com/pingcap/tidb/issues/24427) @[mjonss](https://github.com/mjonss)
    - Add a retry mechanism for `FLASHBACK CLUSTER` after failing to check the `min-resolved-ts` [#39836](https://github.com/pingcap/tidb/issues/39836) @[Defined2014](https://github.com/Defined2014)
    - Fix the issue that an index cannot be created on an empty table using ingest mode [#39641](https://github.com/pingcap/tidb/issues/39641) @[tangenta](https://github.com/tangenta)
    - Fix the issue that `wait_ts` in the slow query log is the same for different SQL statements within the same transaction [#39713](https://github.com/pingcap/tidb/issues/39713) @[TonsnakeLin](https://github.com/TonsnakeLin)
    - Fix the issue that the `Assertion Failed` error is reported when adding a column during the process of deleting a row record [#39570](https://github.com/pingcap/tidb/issues/39570) @[wjhuang2016](https://github.com/wjhuang2016)
    - Fix the issue that the `not a DDL owner` error is reported when modifying a column type [#39643](https://github.com/pingcap/tidb/issues/39643) @[zimulala](https://github.com/zimulala)
    - Fix the issue that an error is not reported when inserting a row after exhaustion of the auto-increment values of the `AUTO_INCREMENT` column [#38950](https://github.com/pingcap/tidb/issues/38950) @[Dousir9](https://github.com/Dousir9)
    - Fix the issue that the `Unknown column` error is reported when creating an expression index [#39784](https://github.com/pingcap/tidb/issues/39784) @[Defined2014](https://github.com/Defined2014)
    - Fix the issue that data cannot be inserted into a renamed table when the generated expression includes the table name 
[#39826](https://github.com/pingcap/tidb/issues/39826) @[Defined2014](https://github.com/Defined2014)
    - Fix the issue that `INSERT IGNORE` statements cannot correctly fill in default values when the column is write-only [#40192](https://github.com/pingcap/tidb/issues/40192) @[YangKeao](https://github.com/YangKeao)
    - Fix the issue that resources are not released when disabling the resource management module [#40546](https://github.com/pingcap/tidb/issues/40546) @[zimulala](https://github.com/zimulala)
    - `MODIFY COLUMN` is not supported on partitioned tables [#39915](https://github.com/pingcap/tidb/issues/39915) @[wjhuang2016](https://github.com/wjhuang2016)
    - Disable renaming of columns that partitioned tables depend on [#40150](https://github.com/pingcap/tidb/issues/40150) @[mjonss](https://github.com/mjonss)
    - Fix the issue that TTL tasks cannot trigger statistics updates in time [#40109](https://github.com/pingcap/tidb/issues/40109) @[YangKeao](https://github.com/YangKeao)
    - Fix the issue that TiDB improperly handles `NULL` values when constructing key ranges, resulting in unexpected data being read [#40158](https://github.com/pingcap/tidb/issues/40158) @[tiancaiamao](https://github.com/tiancaiamao)
    - Fix the issue that illegal values are written to the table when the `MODIFY COLUMN` statement modifies the default value of a column [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016)
    - Fix the issue that the adding index operation is inefficient due to the invalid Region cache when there are many Regions in a table [#38436](https://github.com/pingcap/tidb/issues/38436) @[tangenta](https://github.com/tangenta)
    - Fix the data race issue when allocating auto-increment IDs [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9)
    - Fix the issue that the implementation of the not expression of JSON is incompatible with the implementation of MySQL 
[#40683](https://github.com/pingcap/tidb/issues/40683) @[YangKeao](https://github.com/YangKeao) - Fix the issue that concurrent view might cause DDL operations to be blocked [#40352](https://github.com/pingcap/tidb/issues/40352) @[zeminzhou](https://github.com/zeminzhou) - Fix data inconsistency caused by concurrently executing DDL statements to modify columns of partitioned tables [#40620](https://github.com/pingcap/tidb/issues/40620) @[mjonss](https://github.com/mjonss) @[mjonss](https://github.com/mjonss) From f7345883abe173147e1bcdae5b664163b832ad46 Mon Sep 17 00:00:00 2001 From: qiancai Date: Thu, 16 Feb 2023 10:06:02 +0800 Subject: [PATCH 072/135] sync with source changes --- releases/release-6.6.0.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index e15cc8f95e08..bfeacc315efe 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -519,31 +519,31 @@ In v6.6.0-DMR, the key new features and improvements are as follows: `` - 修复了非法的 datetime 值导致 analyze 失败的问题 [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - - 修复了由 virtual column 引发的 can't find proper physical plan 问题 [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid) + - 修复了由虚拟列引发的 `can't find proper physical plan` 问题 [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid) - 修复了当动态裁剪模式下的分区表有 global binding 时,TiDB 重启失败的问题 [#40368](https://github.com/pingcap/tidb/issues/40368) @[Yisaer](https://github.com/Yisaer) - 修复了 auto analyze 导致 graceful shutdown 耗时的问题 [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) `` - - 修复了 IndexMerge 算子在触发内存限制行为时可能导致 tidb-server 崩溃的问题[#41036](https://github.com/pingcap/tidb/pull/41036) @[guo-shaoge](https://github.com/guo-shaoge) - - 修复了在分区表上执行查询 `select * from 
t limit 1` 时,执行速度慢的问题[#40741](https://github.com/pingcap/tidb/pull/40741)@[solotzg](https://github.com/solotzg) + - 修复了 IndexMerge 算子在触发内存限制行为时可能导致 TiDB server 崩溃的问题[#41036](https://github.com/pingcap/tidb/pull/41036) @[guo-shaoge](https://github.com/guo-shaoge) + - 修复了在分区表上执行 `SELECT * FROM table_name LIMIT 1` 查询时,执行速度慢的问题 [#40741](https://github.com/pingcap/tidb/pull/40741) @[solotzg](https://github.com/solotzg) `` - - 修复了过期的 region cache 可能残留导致的内存泄漏和性能下降问题 [#40461](https://github.com/pingcap/tidb/issues/40461) @[sticnarf](https://github.com/sticnarf) + - 修复了过期的 region 缓存可能残留导致的内存泄漏和性能下降问题 [#40461](https://github.com/pingcap/tidb/issues/40461) @[sticnarf](https://github.com/sticnarf) + TiKV `` - - 修复cast const Enum 到其他类型时的错误 [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) - - 减少resolve-ts带来的网络流量 [#14098](https://github.com/tikv/tikv/issues/14092) @[overvenus] (https://github.com/overvenus) - - 支持TiKV在小于1 core的CPU下启动 [#13586] [#13752] [#14017](https://github.com/tikv/tikv/issues/13586) @[andreid-db](https://github.com/andreid-db) + - 修复转换 `const Enum` 类型到其他类型时报错的问题 [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) + - 修复 Resolved TS 导致网络流量升高的问题 [#14098](https://github.com/tikv/tikv/issues/14092) @[overvenus] (https://github.com/overvenus) + - 修复 TiKV 在 CPU 核数小于 1 时无法启动的问题 [#13586] [#13752] [#14017](https://github.com/tikv/tikv/issues/13586) @[andreid-db](https://github.com/andreid-db) - copr: 修复old collation时Like中的 _ pattern的行为 [#13785](https://github.com/tikv/tikv/pull/13785) @[Yangkeao](https://github.com/Yangkeao) (dup: release-6.1.4.md > Bug 修复> TiKV)- 修复 TiDB 中事务在执行悲观 DML 失败后,再执行其他 DML 时,如果 TiDB 和 TiKV 之间存在网络故障,可能会造成数据不一致的问题 [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) @[myonkeminta](https://github.com/myonkeminta) + PD `` - - 修复 region scatter 任务会生产多余非预期副本的问题 [#5920](https://github.com/tikv/pd/pull/5920) 
@[HundunDM](https://github.com/HunDunDM) - - 修复 online-unsafe-recovery 在 auto-detect 模式下卡住并超时的问题 [#5754](https://github.com/tikv/pd/pull/5754) @[Connor1996](https://github.com/Connor1996) - - 修复 replace down peer 在特定条件下执行变慢的问题 [#5789](https://github.com/tikv/pd/pull/5789)@[HundunDM](https://github.com/HunDunDM) - - 修复调用 ReportMinResolvedTS 过高的情况下造成 PD OOM 的问题 [#5965](https://github.com/tikv/pd/issues/5965) @[HundunDM](https://github.com/HunDunDM) + - 修复 Region Scatter 任务会生成非预期的多余副本的问题 [#5909](https://github.com/tikv/pd/issues/5909) @[HundunDM](https://github.com/HunDunDM) + - 修复 Online Unsafe Recovery 功能在 `auto-detect` 模式下卡住并超时的问题 [#5753](https://github.com/tikv/pd/issues/5753) @[Connor1996](https://github.com/Connor1996) + - 修复 `replace-down-peer` 在特定条件下执行变慢的问题 [#5788](https://github.com/tikv/pd/issues/5788) @[HundunDM](https://github.com/HunDunDM) + - 修复调用 `ReportMinResolvedTS` 过高的情况下造成 PD OOM 的问题 [#5965](https://github.com/tikv/pd/issues/5965) @[HundunDM](https://github.com/HunDunDM) + TiFlash `` From e11ee7a81cb4bf20aa807874e1cba922500fdb3e Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Thu, 16 Feb 2023 10:41:30 +0800 Subject: [PATCH 073/135] Update releases/release-6.6.0.md Co-authored-by: xixirangrang --- releases/release-6.6.0.md | 1 + 1 file changed, 1 insertion(+) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index bfeacc315efe..0fb12b5678d8 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -1,5 +1,6 @@ --- title: TiDB 6.6.0 Release Notes +summary: Learn about the new features, compatibility changes, improvements, and bug fixes in TiDB 6.6.0. 
--- # TiDB 6.6.0 Release Notes From 5f5355b9e77c0d4ecc5795262956e50ccaab585b Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Thu, 16 Feb 2023 10:59:29 +0800 Subject: [PATCH 074/135] add five persist stmt system variables --- releases/release-6.6.0.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 0fb12b5678d8..c8eb30c3c5b4 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -350,6 +350,11 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | Newly added | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. | | [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660) | Newly added | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. | | [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | Newly added | Controls whether to use enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. | +| [`tidb_stmt_summary_enable_persistent`](/system-variables.md#tidb_stmt_summary_enable_persistent-new-in-v660) | Newly added | This variable is read-only. It controls whether to enable [statements summary persistence](/statement-summary-tables.md#persist-statements-summary). 
The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660). | +| [`tidb_stmt_summary_filename`](/system-variables.md#tidb_stmt_summary_filename-new-in-v660) | Newly added | This variable is read-only. It specifies the file to which persistent data is written when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660). | +| [`tidb_stmt_summary_file_max_backups`](/system-variables.md#tidb_stmt_summary_file_max_backups-new-in-v660) | Newly added | This variable is read-only. It specifies the maximum number of data files that can be persisted when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660). | +| [`tidb_stmt_summary_file_max_days`](/system-variables.md#tidb_stmt_summary_file_max_days-new-in-v660) | Newly added | This variable is read-only. It specifies the maximum number of days to keep persistent data files when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660). | +| [`tidb_stmt_summary_file_max_size`](/system-variables.md#tidb_stmt_summary_file_max_size-new-in-v660) | Newly added | This variable is read-only. 
It specifies the maximum size of a persistent data file when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660). | ### Configuration file parameters From 3d12e125329b2d2fe7b76849e7e0febdae52733d Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Thu, 16 Feb 2023 14:35:54 +0800 Subject: [PATCH 075/135] bump version for 1 place --- upgrade-tidb-using-tiup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 45064b69b008..fbf21046dcc0 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -47,7 +47,7 @@ This section introduces the preparation works needed before upgrading your TiDB ### Step 1: Review compatibility changes -Review [the compatibility changes](/releases/release-6.5.0.md#compatibility-changes) and [deprecated features](/releases/release-6.5.0.md#deprecated-feature) in TiDB v6.5.0 release notes. If any changes affect your upgrade, take actions accordingly. +Review [the compatibility changes](/releases/release-6.6.0.md#compatibility-changes) in TiDB v6.6.0 release notes. If any changes affect your upgrade, take actions accordingly. 
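Before starting the upgrade, it can help to skim only the section of the release notes you need. The following is an optional sketch, not part of the official upgrade procedure: the sample file path, the section title, and the `awk` one-liner are illustrative assumptions. It prints a single `## ...` section of a Markdown file; you could point it at `releases/release-6.6.0.md` in a local docs checkout to isolate the compatibility changes for review.

```shell
# Create a small sample release-notes file to demonstrate on.
# In practice, replace /tmp/sample-notes.md with your local copy of the
# release notes and adjust the section heading as needed.
cat > /tmp/sample-notes.md <<'EOF'
## New features
feature text
## Compatibility changes
compat item 1
compat item 2
## Improvements
other text
EOF

# Print only the lines between "## Compatibility changes" and the next "## " heading.
awk '/^## Compatibility changes/{f=1;next} /^## /{f=0} f' /tmp/sample-notes.md
```

On the sample input this prints only the two `compat item` lines, which is usually enough context to decide whether a listed change affects your upgrade.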
### Step 2: Upgrade TiUP or TiUP offline mirror From 3a8bea1c8d34381c4e60208506f7d866b1d39989 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Thu, 16 Feb 2023 14:58:21 +0800 Subject: [PATCH 076/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index c8eb30c3c5b4..05702280ad79 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -256,7 +256,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: To use this feature, set the value of [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) to `ON`. - For more information, see [documentation](/sql-plan-replayer.md#使用-plan-replayer-capture-抓取目标计划)。 + For more information, see [documentation](/sql-plan-replayer.md#use-plan-replayer-capture). * Support persisting statements summary (experimental) [#40812](https://github.com/pingcap/tidb/issues/40812) @[mornyx](https://github.com/mornyx) **tw@shichun-0415** From 83af3e74d4301bc327dcb5f42503a261460d9d3b Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Thu, 16 Feb 2023 14:59:11 +0800 Subject: [PATCH 077/135] Apply suggestions from code review Co-authored-by: xixirangrang Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 05702280ad79..d632c17c3660 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -275,7 +275,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume 
scenarios. - Before v6.6.0, for high data volume scenarios, you were required to configure TiDB Lightning's physical import task separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning's tasks; one DM task can accomplish the migration. + Before v6.6.0, for high data volume scenarios, you were required to configure physical import tasks in TiDB Lightning separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning tasks; one DM task can accomplish the migration. For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). @@ -287,7 +287,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * TiDB Lightning supports enabling compressed transfers when sending key-value pairs to TiKV [#41163](https://github.com/pingcap/tidb/issues/41163) @[gozssky](https://github.com/gozssky) **tw@qiancai** - Starting from v6.6.0, TiDB Lightning supports compressing locally encoded and sorted key-value pairs for network transfer when sending them to TiKV, thus reducing the amount of data transferred over the network and lowering the network bandwidth overhead. In the earlier TiDB versions before this feature is supported, TiDB Lightning requires relatively high network bandwidth and incurs high traffic charges in case of large data volume. + Starting from v6.6.0, TiDB Lightning supports compressing locally encoded and sorted key-value pairs for network transfer when sending them to TiKV, thus reducing the amount of data transferred over the network and lowering the network bandwidth overhead. 
In the earlier TiDB versions before this feature is supported, TiDB Lightning requires relatively high network bandwidth and incurs high traffic charges in case of large data volumes. This feature is disabled by default. To enable it, you can set the `compress-kv-pairs` configuration item of TiDB Lightning to "gzip" or "gz". @@ -434,7 +434,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiFlash - - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides the foundation for future optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin] **tw@qiancai** + - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides the foundation for future optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin](https://github.com/JinheLin) **tw@qiancai** - 减少 TiFlash 在没有查询的情况下的内存使用,最高减少 30% [#6589](https://github.com/pingcap/tiflash/pull/6589) @[hongyunyan](https://github.com/hongyunyan) + Tools From c4b676ad3763e13ec764bfc7428d3bbde187be44 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Thu, 16 Feb 2023 15:25:04 +0800 Subject: [PATCH 078/135] html -> markdown --- releases/release-6.6.0.md | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index d632c17c3660..d309ae711150 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -61,13 +61,14 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### SQL -* Support the MySQL-compatible foreign key constraints #18209 @crazycs520 **tw@Oreoxmt** +* Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) 
[@crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling. For more information, see [documentation](/foreign-key.md). -* Support the MySQL-compatible multi-valued index (experimental) #39592 @xiongjiwei @qw4990 **tw@TomShawn** +* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) + **tw@TomShawn** TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-value index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed. @@ -83,7 +84,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Stability -* Binding historical execution plans is GA #39199 @fzzf678 **tw@TomShawn** +* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn** In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. 
Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability. @@ -128,13 +129,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### HTAP -* TiFlash supports data exchange with compression #6620 @solotzg **tw@TomShawn** +* TiFlash supports data exchange with compression [#6620](https://github.com/pingcap/tiflash/issues/6620) @[solotzg](https://github.com/solotzg) **tw@TomShawn** To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. For details, see [documentation](). -* TiFlash supports the Stale Read feature #4483 @hehechen **tw@qiancai** +* TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai** The Stale Read feature has been generally available (GA) since v5.1.1, which allows you to read historical data at a specific timestamp or within a specified time range. Stale read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash does not support Stale Read. Even if a table has TiFlash replicas, Stale Read can only read its TiKV replicas. 
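The Stale Read syntax itself does not change in v6.6.0; what is new is that a query on a table with TiFlash replicas can now have its stale read served by TiFlash. A minimal sketch (the table name `orders` is only an illustration):

```sql
-- Read a snapshot of the data as it was 10 seconds ago.
-- Starting from v6.6.0, if `orders` has TiFlash replicas and the
-- optimizer chooses them, this stale read is served by TiFlash.
SELECT AVG(amount) FROM orders AS OF TIMESTAMP NOW() - INTERVAL 10 SECOND;

-- Alternatively, run a whole read-only transaction at a historical timestamp.
START TRANSACTION READ ONLY AS OF TIMESTAMP NOW() - INTERVAL 10 SECOND;
SELECT COUNT(*) FROM orders;
COMMIT;
```

Either form reads from the closest replica within the GC life time, so it trades a bounded amount of staleness for lower read latency.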
@@ -171,7 +172,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### DB operations -* Support resource control based on resource groups (experimental) #38825 @nolouch @BornChanger @glorv @tiancaiamao @Connor1996 @JmPotato @hnes @CabinfeverB @HuSharp **tw@hfxsd** +* Support resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** Now you can create resource groups for a TiDB cluster, bind different database users to corresponding resource groups, and set quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions in the same resource group will be limited to the quota. In this way, even if a resource group is over-consumed, the sessions in other resource groups are not affected. TiDB provides a built-in view of the actual usage of resources on Grafana dashboards, assisting you to allocate resources more rationally. @@ -188,7 +189,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/tidb-resource-control.md). -* Support configuring read-only storage nodes for resource-consuming tasks @v01dstar **tw@Oreoxmt** +* Support configuring read-only storage nodes for resource-consuming tasks @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. 
TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--replica-read-label`, to ensure the stability of cluster performance. @@ -271,7 +272,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Ecosystem -* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @lance6716 **tw@ran-huang** +* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang** In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. @@ -279,7 +280,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). -* TiDB Lightning adds a new configuration parameter "header-schema-match" to address the issue of mismatched column names between the source file and the target table @dsdashun +* TiDB Lightning adds a new configuration parameter "header-schema-match" to address the issue of mismatched column names between the source file and the target table @[dsdashun](https://github.com/dsdashun) In v6.6.0, TiDB Lightning adds a new profile parameter `header-schema-match`. 
The default value is `true`, which means that the first row of the source CSV file is treated as the column names, which must be consistent with those in the target table. If the field name in the CSV table header does not match the column name of the target table, you can set this configuration to `false`. TiDB Lightning will ignore the error and continue to import the data in the order of the columns in the target table.

From 75ad35c17d9df0776e84416645f6d1acfc8f8297 Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Thu, 16 Feb 2023 15:26:56 +0800
Subject: [PATCH 079/135] Apply suggestions from code review

Co-authored-by: xixirangrang
---
 releases/release-6.6.0.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index d309ae711150..ba90e351f11f 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -274,7 +274,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 * TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang**

-    In v6.6.0, DM's full migration capability integrates with TiDB Lightning's physical import mode, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios.
+    In v6.6.0, DM full migration capability integrates with physical import mode of TiDB Lightning, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios.

     Before v6.6.0, for high data volume scenarios, you were required to configure physical import tasks in TiDB Lightning separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration.
Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning tasks; one DM task can accomplish the migration. From db1fbe6ce13a1cb3947ef71bc7dce0400d99f898 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Thu, 16 Feb 2023 17:11:36 +0800 Subject: [PATCH 080/135] translate notes and add 2 tikv config changes --- releases/release-6.6.0.md | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index ba90e351f11f..afe151511ae2 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -364,6 +364,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | TiKV | `enable-statistics` | Deleted | This configuration item specifies whether to enable RocksDB statistics. Starting from v6.6.0, this item is deleted. RocksDB statistics are enabled for all clusters by default to help diagnostics. For details, see [#13942](https://github.com/tikv/tikv/pull/13942). | | TiKV | `storage.block-cache.shared` | Deleted | Starting from v6.6.0, this configuration item is deleted, and the block cache is enabled by default and cannot be disabled. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). | | TiDB | [`enable-telemetry`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB. | +| TiKV | [`rocksdb.defaultcf.block-size`](/tikv-configuration-file.md#block-size) and [`rocksdb.writecf.block-size`](/tikv-configuration-file.md#block-size) | Modified | The default values change from `64K` to `16K`. | | TiKV | `storage.block-cache.block-cache-size` | Modified | Starting from v6.6.0, this configuration item is only used for calculating the default value of `storage.block-cache.capacity`. 
For details, see [#12936](https://github.com/tikv/tikv/issues/12936). |
| PD | [`enable-telemetry`](/pd-configuration-file.md#enable-telemetry) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB Dashboard. |
| TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from `0` to `0.8`, which means the limit is 80% of the total memory. |

@@ -397,46 +398,45 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

    - Support dynamically modifying [`store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530). This facilitates more flexible TiKV performance tuning.
    - Remove the limit on `LIMIT` clauses, thus improving the execution performance.
 - Starting from v6.6.0, BR does not support restoring data to clusters earlier than v6.5.0.
-- TiDB no longer supports modifying column types on partitioned tables because of potential correctness issues.
+- Starting from v6.6.0, TiDB no longer supports modifying column types on partitioned tables because of potential correctness issues.
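Among the compatibility changes above, the `rocksdb.defaultcf.block-size` and `rocksdb.writecf.block-size` defaults change from `64K` to `16K`. Clusters whose scan-heavy workloads were tuned around the old value can pin it explicitly after the upgrade. A sketch of the relevant `tikv.toml` fragment (the values shown are simply the pre-v6.6.0 defaults, not a recommendation):

```toml
# Keep the pre-v6.6.0 RocksDB block size instead of the new 16 KiB default.
[rocksdb.defaultcf]
block-size = "64KB"

[rocksdb.writecf]
block-size = "64KB"
```

A smaller block size generally favors point reads at the cost of a larger block index, which is why the new default only makes sense to override for workloads dominated by large sequential scans.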
## Improvements + TiDB `` - - 改进了 TTL 后台清理任务的调度机制。允许将单个表的清理任务拆分成若干子任务并调度到多个 TiDB 节点同时运行。 [#40361](https://github.com/pingcap/tidb/issues/40361) @[YangKeao](https://github.com/YangKeao) - - 优化了在设置了非默认的 delimiter 后运行 multi-statement 返回结果的列名的显示 [#39662](https://github.com/pingcap/tidb/issues/39662) @[mjonss](https://github.com/mjonss) - - 优化了生成警告信息时的执行效率 [#39702](https://github.com/pingcap/tidb/issues/39702) @[tiancaiamao](https://github.com/tiancaiamao) - - 为 ADD INDEX 支持分布式数据回填 (实验特性) [#37119](https://github.com/pingcap/tidb/issues/37119) @[zimulala](https://github.com/zimulala) - - 允许使用 CURDATE() 作为列的默认值 [#38356](https://github.com/pingcap/tidb/issues/38356) @[CbcWestwolf](https://github.com/CbcWestwolf) + - Improve the scheduling mechanism of TTL background cleaning tasks to allow the cleaning task of a single table to be split into several sub-tasks and scheduled to run on multiple TiDB nodes simultaneously [#40361](https://github.com/pingcap/tidb/issues/40361) @[YangKeao](https://github.com/YangKeao) + - Optimize the column name display of the result returned by running multi-statements after setting a non-default delimiter [#39662](https://github.com/pingcap/tidb/issues/39662) @[mjonss](https://github.com/mjonss) + - Optimize the execution efficiency of statements after warning messages are generated [#39702](https://github.com/pingcap/tidb/issues/39702) @[tiancaiamao](https://github.com/tiancaiamao) + - Support distributed data backfill for `ADD INDEX` (experimental) [#37119](https://github.com/pingcap/tidb/issues/37119) @[zimulala](https://github.com/zimulala) + - Support using `CURDATE()` as the default value of a column [#38356](https://github.com/pingcap/tidb/issues/38356) @[CbcWestwolf](https://github.com/CbcWestwolf) `` - - 增加了 partial order prop push down 对 LIST 类型的分区表的支持 [#40273](https://github.com/pingcap/tidb/issues/40273) @[winoros](https://github.com/winoros) - - 增加了 hint 和 binding 冲突时的 warning 信息 
[#40910](https://github.com/pingcap/tidb/issues/40910) @[Reminiscent](https://github.com/Reminiscent) - - 优化了 Plan Cache 策略避免在一些场景使用 Plan Cache 时产生不优的计划 [#40312](https://github.com/pingcap/tidb/pull/40312) [#40218](https://github.com/pingcap/tidb/pull/40218) [#40280](https://github.com/pingcap/tidb/pull/40280) [#41136](https://github.com/pingcap/tidb/pull/41136) [#40686](https://github.com/pingcap/tidb/pull/40686) @[qw4990](https://github.com/qw4990) + - `partial order prop push down` now supports the LIST-type partitioned tables [#40273](https://github.com/pingcap/tidb/issues/40273) @[winoros](https://github.com/winoros) + - Add error messages for conflicts between optimizer hints and execution plan bindings [#40910](https://github.com/pingcap/tidb/issues/40910) @[Reminiscent](https://github.com/Reminiscent) + - Optimize the plan cache strategy to avoid non-optimal plans when using plan cache in some scenarios [#40312](https://github.com/pingcap/tidb/pull/40312) [#40218](https://github.com/pingcap/tidb/pull/40218) [#40280](https://github.com/pingcap/tidb/pull/40280) [#41136](https://github.com/pingcap/tidb/pull/41136) [#40686](https://github.com/pingcap/tidb/pull/40686) @[qw4990](https://github.com/qw4990) + TiKV `` - - 优化一些参数的默认值,当partitioned-raft-kv开启时block-cache调整为0.3可用内存(原来是0.45), region-split-size调整为10GB。当沿用raft-kv时且enable-region-bucket为true时,region-split-size默认调整为1GB [#12842](https://github.com/tikv/tikv/issues/12842) @[tonyxuqqi](https://github.com/tonyxuqqi) - - 支持在Raftstore异步写入中的优先级调度[#13730] (https://github.com/tikv/tikv/issues/13730) @[Connor1996](https://github.com/Connor1996) - - 支持TiKV在小于1 core的CPU下启动 [#13586] [#13752] [#14017](https://github.com/tikv/tikv/issues/13586) @[andreid-db](https://github.com/andreid-db) - - 修改rocksdb.defaultcf.block-size以及rocksdb.writecf.block-size的默认参数为16KB [#14052](https://github.com/tikv/tikv/issues/14052) @[tonyxuqqi](https://github.com/tonyxuqqi) - - raftstore: 优化slow score探测的新机制,加入新的`evict-slow-trend-scheduler` 
[#14131](https://github.com/tikv/tikv/issues/14131) @[innerr](https://github.com/innerr)
-    - rocksdb的block cache强制为共享的。不支持按照CF单独设置Block cache [#12936](https://github.com/tikv/tikv/issues/12936) @[busyjay](https://github.com/busyjay)
+    - Optimize the default values of some parameters in partitioned-raft-kv mode: the default value of the TiKV configuration item `storage.block-cache.capacity` is adjusted from 45% to 30%, and the default value of `region-split-size` is adjusted from `96MiB` to `10GiB`. When using raft-kv mode and `enable-region-bucket` is `true`, `region-split-size` is adjusted to 1GiB by default. [#12842](https://github.com/tikv/tikv/issues/12842) @[tonyxuqqi](https://github.com/tonyxuqqi)
+    - Support priority scheduling in Raftstore asynchronous writes [#13730](https://github.com/tikv/tikv/issues/13730) @[Connor1996](https://github.com/Connor1996)
+    - Support starting TiKV on a CPU with less than 1 core [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db)
+    - Optimize the new detection mechanism of Raftstore slow score and add `evict-slow-trend-scheduler` [#14131](https://github.com/tikv/tikv/issues/14131) @[innerr](https://github.com/innerr)
+    - Force the block cache of RocksDB to be shared and no longer support setting the block cache separately according to CF [#12936](https://github.com/tikv/tikv/issues/12936) @[busyjay](https://github.com/busyjay)

+ PD

    - Support limiting the global memory to alleviate the OOM problem (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes)
    - Add the GC Tuner to alleviate the GC pressure (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes)
-    - 新增 `balance-witness-scheduler` 调度器用于调度 witness [#5763](https://github.com/tikv/pd/pull/5763)
@[ethercflow](https://github.com/ethercflow) - - 新加 `evict-slow-trend-scheduler` 调度器用于异常节点检测和调度 [#5808](https://github.com/tikv/pd/pull/5808) @[innerr](https://github.com/innerr) - - 新加 keyspace manager,支持对 keyspace 的管理 [#5293](https://github.com/tikv/pd/issues/5293) @[AmoebaProtozoa](https://github.com/AmoebaProtozoa) + - Add the `balance-witness-scheduler` scheduler to schedule witness [#5763](https://github.com/tikv/pd/pull/5763) @[ethercflow](https://github.com/ethercflow) + - Add the `evict-slow-trend-scheduler` scheduler to detect and schedule abnormal nodes [#5808](https://github.com/tikv/pd/pull/5808) @[innerr](https://github.com/innerr) + - Add the keyspace manager to manage keyspace [#5293](https://github.com/tikv/pd/issues/5293) @[AmoebaProtozoa](https://github.com/AmoebaProtozoa) + TiFlash - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides the foundation for future optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin](https://github.com/JinheLin) **tw@qiancai** - - 减少 TiFlash 在没有查询的情况下的内存使用,最高减少 30% [#6589](https://github.com/pingcap/tiflash/pull/6589) @[hongyunyan](https://github.com/hongyunyan) + - Reduce the memory usage of TiFlash up to 30% when there is no query [#6589](https://github.com/pingcap/tiflash/pull/6589) @[hongyunyan](https://github.com/hongyunyan) + Tools From e1b8f059521e3976bdede62f96047c2bec37c39f Mon Sep 17 00:00:00 2001 From: Aolin Date: Thu, 16 Feb 2023 17:37:59 +0800 Subject: [PATCH 081/135] Apply suggestions from code review Co-authored-by: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> --- releases/release-6.6.0.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index afe151511ae2..7210d8869ac3 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -487,22 
+487,22 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

     - Optimize the error message when deleting a column that a partitioned table depends on [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust)
     - Fix the issue of frequent write conflicts in transactions when performing DDL data backfill [#24427](https://github.com/pingcap/tidb/issues/24427) @[mjonss](https://github.com/mjonss)
     - Add a retry mechanism for `FLASHBACK CLUSTER` after failing to check the `min-resolved-ts` [#39836](https://github.com/pingcap/tidb/issues/39836) @[Defined2014](https://github.com/Defined2014)
-    - Fix the issue that an index cannot be created into an empty table using ingest mode [#39641](https://github.com/pingcap/tidb/issues/39641) @[tangenta](https://github.com/tangenta)
+    - Fix the issue that sometimes an index cannot be created for an empty table using ingest mode [#39641](https://github.com/pingcap/tidb/issues/39641) @[tangenta](https://github.com/tangenta)
     - Fix the issue that `wait_ts` in the slow query log is the same for different SQL statements within the same transaction [#39713](https://github.com/pingcap/tidb/issues/39713) @[TonsnakeLin](https://github.com/TonsnakeLin)
     - Fix the issue that the `Assertion Failed` error is reported when adding a column during the process of deleting a row record [#39570](https://github.com/pingcap/tidb/issues/39570) @[wjhuang2016](https://github.com/wjhuang2016)
     - Fix the issue that the `not a DDL owner` error is reported when modifying a column type [#39643](https://github.com/pingcap/tidb/issues/39643) @[zimulala](https://github.com/zimulala)
-    - Fix the issue that an error is not reported when inserting a row after exhaustion of the auto-increment values of the `AUTO_INCREMENT` column [#38950](https://github.com/pingcap/tidb/issues/38950) @[Dousir9](https://github.com/Dousir9)
+    - Fix the issue that no error is reported when inserting a row after exhaustion of the
auto-increment values of the `AUTO_INCREMENT` column [#38950](https://github.com/pingcap/tidb/issues/38950) @[Dousir9](https://github.com/Dousir9)
     - Fix the issue that the `Unknown column` error is reported when creating an expression index [#39784](https://github.com/pingcap/tidb/issues/39784) @[Defined2014](https://github.com/Defined2014)
-    - Fix the issue that data cannot be inserted into a renamed table when the generated expression includes the table name [#39826](https://github.com/pingcap/tidb/issues/39826) @[Defined2014](https://github.com/Defined2014)
-    - Fix the issue that `INSERT IGNORE` statements cannot correctly fill in default values when the column is write-only [#40192](https://github.com/pingcap/tidb/issues/40192) @[YangKeao](https://github.com/YangKeao)
+    - Fix the issue that data cannot be inserted into a renamed table when the generated expression includes the name of this table [#39826](https://github.com/pingcap/tidb/issues/39826) @[Defined2014](https://github.com/Defined2014)
+    - Fix the issue that the `INSERT IGNORE` statement cannot fill in default values when the column is write-only [#40192](https://github.com/pingcap/tidb/issues/40192) @[YangKeao](https://github.com/YangKeao)
     - Fix the issue that resources are not released when disabling the resource management module [#40546](https://github.com/pingcap/tidb/issues/40546) @[zimulala](https://github.com/zimulala)
     - `MODIFY COLUMN` is not supported on partitioned tables [#39915](https://github.com/pingcap/tidb/issues/39915) @[wjhuang2016](https://github.com/wjhuang2016)
     - Disable renaming of columns that partition tables depend on [#40150](https://github.com/pingcap/tidb/issues/40150) @[mjonss](https://github.com/mjonss)
     - Fix the issue that TTL tasks cannot trigger statistics updates in time [#40109](https://github.com/pingcap/tidb/issues/40109) @[YangKeao](https://github.com/YangKeao)
     - Fix the issue that TiDB improperly handles `NULL` values when constructing key ranges, resulting in
unexpected data being read [#40158](https://github.com/pingcap/tidb/issues/40158) @[tiancaiamao](https://github.com/tiancaiamao)
-    - Fix the issue that illegal values are written to the table when the `MODIFY COLUMN` statement modifies the default value of a column [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016)
-    - Fix the issue that the adding index operation is inefficient due to the invalid Region cache when there are many Regions in a table [#38436](https://github.com/pingcap/tidb/issues/38436) @[tangenta](https://github.com/tangenta)
-    - Fix the data race issue when allocating auto-increment IDs [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9)
+    - Fix the issue that unexpected data is read because TiDB improperly handles `NULL` values when constructing key ranges [#40158](https://github.com/pingcap/tidb/issues/40158) @[tiancaiamao](https://github.com/tiancaiamao)
+    - Fix the issue that illegal values are written to a table when the `MODIFY COLUMN` statement also changes the default value of a column [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016)
+    - Fix the issue that the adding index operation is inefficient due to invalid Region cache when there are many Regions in a table [#38436](https://github.com/pingcap/tidb/issues/38436) @[tangenta](https://github.com/tangenta)
+    - Fix the data race that occurs when allocating auto-increment IDs [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9)
    - Fix the issue that the implementation of the `NOT` expression of JSON is incompatible with the implementation of MySQL [#40683](https://github.com/pingcap/tidb/issues/40683) @[YangKeao](https://github.com/YangKeao)
    - Fix the issue that concurrent views might cause DDL operations to be blocked [#40352](https://github.com/pingcap/tidb/issues/40352) @[zeminzhou](https://github.com/zeminzhou)
    - 
Fix data inconsistency caused by concurrently executing DDL statements to modify columns of partitioned tables [#40620](https://github.com/pingcap/tidb/issues/40620) @[mjonss](https://github.com/mjonss)

From b9c1474f25a99cc3a2b7a5bc1404a88595853c48 Mon Sep 17 00:00:00 2001
From: Aolin
Date: Thu, 16 Feb 2023 17:40:57 +0800
Subject: [PATCH 082/135] Apply suggestions from code review

Co-authored-by: shichun-0415 <89768198+shichun-0415@users.noreply.github.com>
---
 releases/release-6.6.0.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 7210d8869ac3..7dfaefb8a94b 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -482,11 +482,11 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

 + TiDB

``

-    - Fix the issue that the statistics collection task fails due to an incorrect `DATETIME` value [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes)
-    - Fix the issue that `stats_meta` is not created after creating a table [#38189](https://github.com/pingcap/tidb/issues/38189) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes)
-    - Optimize the error message when deleting a column that a partitioned table depends on [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust)
-    - Fix the issue of frequent write conflicts in transactions when performing DDL data backfill [#24427](https://github.com/pingcap/tidb/issues/24427) @[mjonss](https://github.com/mjonss)
-    - Add a retry mechanism for `FLASHBACK CLUSTER` after failing to check the `min-resolved-ts` [#39836](https://github.com/pingcap/tidb/issues/39836) @[Defined2014](https://github.com/Defined2014)
+    - Fix the issue that a statistics collection task fails due to an incorrect `datetime` value [#39336](https://github.com/pingcap/tidb/issues/39336)
@[xuyifangreeneyes](https://github.com/xuyifangreeneyes)
+    - Fix the issue that `stats_meta` is not created following table creation [#38189](https://github.com/pingcap/tidb/issues/38189) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes)
+    - Refine the error message reported when a column that a partitioned table depends on is deleted [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust)
+    - Fix frequent write conflicts in transactions when performing DDL data backfill [#24427](https://github.com/pingcap/tidb/issues/24427) @[mjonss](https://github.com/mjonss)
+    - Add a mechanism that `FLASHBACK CLUSTER` retries when it fails to check the `min-resolved-ts` [#39836](https://github.com/pingcap/tidb/issues/39836) @[Defined2014](https://github.com/Defined2014)
     - Fix the issue that sometimes an index cannot be created for an empty table using ingest mode [#39641](https://github.com/pingcap/tidb/issues/39641) @[tangenta](https://github.com/tangenta)
     - Fix the issue that `wait_ts` in the slow query log is the same for different SQL statements within the same transaction [#39713](https://github.com/pingcap/tidb/issues/39713) @[TonsnakeLin](https://github.com/TonsnakeLin)
     - Fix the issue that the `Assertion Failed` error is reported when adding a column during the process of deleting a row record [#39570](https://github.com/pingcap/tidb/issues/39570) @[wjhuang2016](https://github.com/wjhuang2016)

From 8339c0f6ea4d89c7eba69f27d5a20c82de55ce29 Mon Sep 17 00:00:00 2001
From: qiancai
Date: Thu, 16 Feb 2023 17:44:17 +0800
Subject: [PATCH 083/135] translate bug fixes

---
 releases/release-6.6.0.md | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index bfeacc315efe..020219519c88 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -519,37 +519,37 @@ In v6.6.0-DMR, the key new features and improvements are as
follows: `` - 修复了非法的 datetime 值导致 analyze 失败的问题 [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - - 修复了由虚拟列引发的 `can't find proper physical plan` 问题 [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid) - - 修复了当动态裁剪模式下的分区表有 global binding 时,TiDB 重启失败的问题 [#40368](https://github.com/pingcap/tidb/issues/40368) @[Yisaer](https://github.com/Yisaer) - - 修复了 auto analyze 导致 graceful shutdown 耗时的问题 [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) + - Fix the `can't find proper physical plan` issue caused by virtual columns [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid) + - Fix the issue that TiDB cannot restart after global bindings are created for partitioned tables in dynamic pruning mode [#40368](https://github.com/pingcap/tidb/issues/40368) @[Yisaer](https://github.com/Yisaer) + - Fix the issue that auto analyze causes graceful shutdown to take a long time [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) `` - - 修复了 IndexMerge 算子在触发内存限制行为时可能导致 TiDB server 崩溃的问题[#41036](https://github.com/pingcap/tidb/pull/41036) @[guo-shaoge](https://github.com/guo-shaoge) - - 修复了在分区表上执行 `SELECT * FROM table_name LIMIT 1` 查询时,执行速度慢的问题 [#40741](https://github.com/pingcap/tidb/pull/40741) @[solotzg](https://github.com/solotzg) + - Fix the panic of the TiDB server when the IndexMerge operator triggering memory limiting behaviors [#41036](https://github.com/pingcap/tidb/pull/41036) @[guo-shaoge](https://github.com/guo-shaoge) + - Fix the issue that the `SELECT * FROM table_name LIMIT 1` query on partitioned tables is slow [#40741](https://github.com/pingcap/tidb/pull/40741) @[solotzg](https://github.com/solotzg) `` - - 修复了过期的 region 缓存可能残留导致的内存泄漏和性能下降问题 [#40461](https://github.com/pingcap/tidb/issues/40461)
@[sticnarf](https://github.com/sticnarf) - Clear expired region cache regularly to avoid memory leak and performance degradation [#40461](https://github.com/pingcap/tidb/issues/40461) @[sticnarf](https://github.com/sticnarf) + TiKV `` - - 修复转换 `const Enum` 类型到其他类型时报错的问题 [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) - - 修复 Resolved TS 导致网络流量升高的问题 [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - - 修复 TiKV 在 CPU 核数小于 1 时无法启动的问题 [#13586] [#13752] [#14017](https://github.com/tikv/tikv/issues/13586) @[andreid-db](https://github.com/andreid-db) + - Fix an error that occurs when casting the `const Enum` type to other types [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) + - Fix the issue that Resolved TS causes higher network traffic [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) + - Fix the issue that TiKV cannot restart when the number of CPU cores is less than 1 [#13586] [#13752] [#14017](https://github.com/tikv/tikv/issues/13586) @[andreid-db](https://github.com/andreid-db) - copr: 修复old collation时Like中的 _ pattern的行为 [#13785](https://github.com/tikv/tikv/pull/13785) @[Yangkeao](https://github.com/Yangkeao) - (dup: release-6.1.4.md > Bug 修复> TiKV)- 修复 TiDB 中事务在执行悲观 DML 失败后,再执行其他 DML 时,如果 TiDB 和 TiKV 之间存在网络故障,可能会造成数据不一致的问题 [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) (dup: release-6.1.4.md > Bug 修复> TiKV)- Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) + PD `` - - 修复 Region Scatter 任务会生成非预期的多余副本的问题 [#5909](https://github.com/tikv/pd/issues/5909)
@[HundunDM](https://github.com/HunDunDM) - 修复 Online Unsafe Recovery 功能在 `auto-detect` 模式下卡住并超时的问题 [#5753](https://github.com/tikv/pd/issues/5753) @[Connor1996](https://github.com/Connor1996) - 修复 `replace-down-peer` 在特定条件下执行变慢的问题 [#5788](https://github.com/tikv/pd/issues/5788) @[HundunDM](https://github.com/HunDunDM) - 修复调用 `ReportMinResolvedTS` 过高的情况下造成 PD OOM 的问题 [#5965](https://github.com/tikv/pd/issues/5965) @[HundunDM](https://github.com/HunDunDM) + - Fix the issue that the Region Scatter task generates redundant replicas unexpectedly [#5909](https://github.com/tikv/pd/issues/5909) @[HundunDM](https://github.com/HunDunDM) - Fix the issue that the Online Unsafe Recovery feature would get stuck and time out in `auto-detect` mode [#5753](https://github.com/tikv/pd/issues/5753) @[Connor1996](https://github.com/Connor1996) - Fix the issue that the execution of `replace-down-peer` slows down under certain conditions [#5788](https://github.com/tikv/pd/issues/5788) @[HundunDM](https://github.com/HunDunDM) - Fix the PD OOM issue that occurs when the calls of `ReportMinResolvedTS` are too frequent [#5965](https://github.com/tikv/pd/issues/5965) @[HundunDM](https://github.com/HunDunDM) + TiFlash `` - - 修复查询 TiFlash 相关的系统表可能会卡住的问题 [#6745](https://github.com/pingcap/tiflash/pull/6745) @[lidezhu](https://github.com/lidezhu) - - 修复半连接在计算笛卡尔积时,使用内存过量的问题 [#6730](https://github.com/pingcap/tiflash/issues/6730) @[gengliqi](https://github.com/gengliqi) - - 修复了 decimal 进行除法运算时不舍入的问题 [#6393](https://github.com/pingcap/tiflash/issues/6393) @[LittleFall](https://github.com/LittleFall) + - Fix the issue that querying TiFlash-related system tables might get stuck [#6745](https://github.com/pingcap/tiflash/pull/6745) @[lidezhu](https://github.com/lidezhu) - Fix the issue that semi-joins use excessive memory when calculating Cartesian products [#6730](https://github.com/pingcap/tiflash/issues/6730) @[gengliqi](https://github.com/gengliqi) - Fix the issue that the result of
division operation on the DECIMAL data type is not rounded [#6393](https://github.com/pingcap/tiflash/issues/6393) @[LittleFall](https://github.com/LittleFall) + Tools From 37f7e6d4adaa7458ef29aacd69ea74568de5ba06 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Thu, 16 Feb 2023 18:05:09 +0800 Subject: [PATCH 084/135] Apply suggestions from code review Co-authored-by: Aolin --- releases/release-6.6.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 000b7f528f36..96b222c789f7 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -503,7 +503,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that illegal values are written to a table when the `MODIFY COLUMN` statement also changes the default value of a column [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016) - Fix the issue that the adding index operation is inefficient due to invalid Region cache when there are many Regions in a table [#38436](https://github.com/pingcap/tidb/issues/38436) @[tangenta](https://github.com/tangenta) - Fix the data race that occurs in allocating auto-increment IDs [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9) - Fix the issue that the implementation of the not operator in JSON is incompatible with the implementation in MySQL [#40683](https://github.com/pingcap/tidb/issues/40683) @[YangKeao](https://github.com/YangKeao) - Fix the issue that concurrent views might cause DDL operations to be blocked [#40352](https://github.com/pingcap/tidb/issues/40352)
@[zeminzhou](https://github.com/zeminzhou) - Fix data inconsistency caused by concurrently executing DDL statements to modify columns of partitioned tables [#40620](https://github.com/pingcap/tidb/issues/40620) @[mjonss](https://github.com/mjonss) - Fix the issue that "Malformed packet" is reported when using `caching_sha2_password` for authentication without specifying a password [#40831](https://github.com/pingcap/tidb/issues/40831) @[dveeden](https://github.com/dveeden) @@ -516,12 +516,12 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that the value of system variables might be incorrectly modified in some cases due to memory reuse [#40979](https://github.com/pingcap/tidb/issues/40979) @[lcwangchao](https://github.com/lcwangchao) - Fix the issue that data might be inconsistent with the index when a unique index is created in the ingest mode [#40464](https://github.com/pingcap/tidb/issues/40464) @[tangenta](https://github.com/tangenta) - Fix the issue that some truncate operations cannot be blocked by MDL when truncating the same table concurrently [#40484](https://github.com/pingcap/tidb/issues/40484) @[wjhuang2016](https://github.com/wjhuang2016) - Fix the issue that the `SHOW PRIVILEGES` statement returns an incomplete privilege list [#40591](https://github.com/pingcap/tidb/issues/40591) @[CbcWestwolf](https://github.com/CbcWestwolf) - Fix the issue that TiDB panics when adding a unique index [#40592](https://github.com/pingcap/tidb/issues/40592) @[tangenta](https://github.com/tangenta) - Fix the issue that executing the `ADMIN RECOVER` statement might cause the index data to be corrupted [#40430](https://github.com/pingcap/tidb/issues/40430)
@[xiongjiwei](https://github.com/xiongjiwei) - Fix the issue that a query might fail when the queried table contains a `CAST` expression in the expression index [#40130](https://github.com/pingcap/tidb/issues/40130) @[xiongjiwei](https://github.com/xiongjiwei) - Fix the issue that a unique index might still produce duplicate data in some cases [#40217](https://github.com/pingcap/tidb/issues/40217) @[tangenta](https://github.com/tangenta) - - Fix PD OOM when there is a large number of Regions but the table ID cannot be pushed down when querying some virtual tables using `Prepare` or `Execute` [#39605](https://github.com/pingcap/tidb/issues/39605) @[djshow832](https://github.com/djshow832) + - Fix the PD OOM issue when there is a large number of Regions but the table ID cannot be pushed down when querying some virtual tables using `Prepare` or `Execute` [#39605](https://github.com/pingcap/tidb/issues/39605) @[djshow832](https://github.com/djshow832) - Fix the issue that data race might occur when an index is added [#40879](https://github.com/pingcap/tidb/issues/40879) @[tangenta](https://github.com/tangenta) `` From b9920459546f23b229a68d45bfd2c8f5f766e356 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 16 Feb 2023 18:07:33 +0800 Subject: [PATCH 085/135] Apply suggestions from code review --- releases/release-6.6.0.md | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 96b222c789f7..daa96b8fbb8f 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -415,6 +415,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - `partial order prop push down` now supports the LIST-type partitioned tables [#40273](https://github.com/pingcap/tidb/issues/40273) @[winoros](https://github.com/winoros) - Add error messages for conflicts between optimizer hints and execution plan bindings [#40910](https://github.com/pingcap/tidb/issues/40910) 
@[Reminiscent](https://github.com/Reminiscent) - Optimize the plan cache strategy to avoid non-optimal plans when using plan cache in some scenarios [#40312](https://github.com/pingcap/tidb/pull/40312) [#40218](https://github.com/pingcap/tidb/pull/40218) [#40280](https://github.com/pingcap/tidb/pull/40280) [#41136](https://github.com/pingcap/tidb/pull/41136) [#40686](https://github.com/pingcap/tidb/pull/40686) @[qw4990](https://github.com/qw4990) - Clear expired region cache regularly to avoid memory leak and performance degradation [#40461](https://github.com/pingcap/tidb/issues/40461) @[sticnarf](https://github.com/sticnarf) + TiKV @@ -525,24 +526,21 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that data race might occur when an index is added [#40879](https://github.com/pingcap/tidb/issues/40879) @[tangenta](https://github.com/tangenta) `` - - 修复了非法的 datetime 值导致 analyze 失败的问题 [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - Fix the `can't find proper physical plan` issue caused by virtual columns [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid) - Fix the issue that TiDB cannot restart after global bindings are created for partitioned tables in dynamic pruning mode [#40368](https://github.com/pingcap/tidb/issues/40368) @[Yisaer](https://github.com/Yisaer) - Fix the issue that auto analyze causes graceful shutdown to take a long time [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) `` - Fix the panic of the TiDB server when the IndexMerge operator triggers memory limiting behaviors [#41036](https://github.com/pingcap/tidb/pull/41036)
@[guo-shaoge](https://github.com/guo-shaoge) - Fix the issue that the `SELECT * FROM table_name LIMIT 1` query on partitioned tables is slow [#40741](https://github.com/pingcap/tidb/pull/40741) @[solotzg](https://github.com/solotzg) `` - - Clear expired region cache regularly to avoid memory leak and performance degradation [#40461](https://github.com/pingcap/tidb/issues/40461) @[sticnarf](https://github.com/sticnarf) + TiKV `` - Fix an error that occurs when casting the `const Enum` type to other types [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) - Fix the issue that Resolved TS causes higher network traffic [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - Fix the issue that TiKV cannot restart when the number of CPU cores is less than 1 [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db) (dup: release-6.1.4.md > Bug 修复> TiKV)- Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) + PD From e0da4d23f40b2e5ae24583cb58df374b42ae18ec Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Thu, 16 Feb 2023 19:07:19 +0800 Subject: [PATCH 086/135] refine `LIMIT` --- releases/release-6.6.0.md | 2 +- 1 file changed, 1
insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index daa96b8fbb8f..7a4e4862fac2 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -119,7 +119,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Remove the limit on `LIMIT` clauses [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415** - Starting from v6.6.0, TiDB plan cache supports caching queries containing `?` after `Limit`, such as `Limit ?` or `Limit 10, ?`. This feature allows more SQL statements to benefit from plan cache, thus improving execution efficiency. + Starting from v6.6.0, TiDB plan cache supports caching execution plans with a variable as the `LIMIT` parameter, such as `LIMIT ?` or `LIMIT 10, ?`. This feature allows more SQL statements to benefit from plan cache, thus improving execution efficiency. For more information, see [documentation](/sql-prepared-plan-cache.md). 
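The parameterized `LIMIT` plan-cache behavior refined in the hunk above can be illustrated with a short SQL sketch. This is a hypothetical session, not taken from the release notes; the table `t` and the user variable names are assumptions for illustration:

```sql
-- Assumes TiDB v6.6.0 or later with the prepared plan cache enabled.
CREATE TABLE t (a INT, KEY idx_a (a));

PREPARE stmt FROM 'SELECT * FROM t WHERE a > ? LIMIT ?';
SET @val = 1, @n = 10;
EXECUTE stmt USING @val, @n;   -- the first execution optimizes the query and caches the plan
EXECUTE stmt USING @val, @n;   -- subsequent executions can reuse the cached plan

-- The session variable below reports whether the previous statement hit the plan cache.
SELECT @@last_plan_from_cache;
```

Before this change, a `LIMIT` clause containing a placeholder disqualified the statement from the plan cache, so every `EXECUTE` was re-optimized.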
From c734f4a3783cc397740a5a9e97d71155a44d15cb Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Thu, 16 Feb 2023 19:20:21 +0800 Subject: [PATCH 087/135] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 7a4e4862fac2..98e9fe3c1632 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -540,7 +540,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: `` - Fix an error that occurs when casting the `const Enum` type to other types [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) - Fix the issue that Resolved TS causes higher network traffic [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - - Fix the issue that TiKV cannot restart when the number of CPU cores is less than 1 [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db) + - Fix the issue that TiKV cannot restart when the number of CPU cores is less than 1 [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db) (dup: release-6.1.4.md > Bug 修复> TiKV)- Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) + PD From 1fe08a822a5795c3ab7b0e1be3d1da3c9baa01de Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Fri, 17 Feb 2023 09:27:40 +0800 Subject: [PATCH 088/135] remove a
duplicated release notes for #13568 --- releases/release-6.6.0.md | 1 - 1 file changed, 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 98e9fe3c1632..ab8655b06083 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -540,7 +540,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: `` - Fix an error that occurs when casting the `const Enum` type to other types [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) - Fix the issue that Resolved TS causes higher network traffic [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - - Fix the issue that TiKV cannot restart when the number of CPU cores is less than 1 [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db) (dup: release-6.1.4.md > Bug 修复> TiKV)- Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) + PD From 72c46e2dc3e923d764ec921f67d848faa2eef943 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Fri, 17 Feb 2023 09:51:45 +0800 Subject: [PATCH 089/135] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index ab8655b06083..4719322c890f 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -447,7 +447,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiCDC - - Support batch UPDATE DML statements to improve TiCDC replication performance
[#8084](https://github.com/pingcap/tiflow/issues/8084) + - Support batch UPDATE DML statements to improve TiCDC replication performance [#8084](https://github.com/pingcap/tiflow/issues/8084) @[amyangfei](https://github.com/amyangfei) - Implement MQ sink and MySQL sink in the asynchronous mode to improve the sink throughput [#5928](https://github.com/pingcap/tiflow/issues/5928) @[hicqu](https://github.com/hicqu) @[hi-rustin](https://github.com/hi-rustin) + TiDB Data Migration (DM) @@ -463,7 +463,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB Lightning - - Physical Import Mode supports Keyspace [#40531](https://github.com/pingcap/tidb/issues/40531) @[iosmanthus](https://github.com/iosmanthus) + - Physical Import Mode supports keyspace [#40531](https://github.com/pingcap/tidb/issues/40531) @[iosmanthus](https://github.com/iosmanthus) - Support setting the maximum number of conflicts by `lightning.max-error` [#40743](https://github.com/pingcap/tidb/issues/40743) @[dsdashun](https://github.com/dsdashun) - Support importing CSV data files with BOM headers [#40744](https://github.com/pingcap/tidb/issues/40744) @[dsdashun](https://github.com/dsdashun) - Optimize the processing logic when encountering TiKV flow-limiting errors and try other available regions instead [#40205](https://github.com/pingcap/tidb/issues/40205) @[lance6716](https://github.com/lance6716) @@ -559,7 +559,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + Backup & Restore (BR) - - Fix the issue that when restoring log backup, hot regions cause the restore to fail [#37207](https://github.com/pingcap/tidb/issues/37207) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that when restoring log backup, hot Regions cause the restore to fail [#37207](https://github.com/pingcap/tidb/issues/37207) @[Leavrth](https://github.com/Leavrth) - Fix the issue that restoring data to a cluster on which the log backup is running causes the log backup file 
to be unrecoverable [#40797](https://github.com/pingcap/tidb/issues/40797) @[Leavrth](https://github.com/Leavrth) - Fix the issue that the PITR feature does not support CA-bundles [#38775](https://github.com/pingcap/tidb/issues/38775) @[YuJuncen](https://github.com/YuJuncen) - Fix the panic issue caused by duplicate temporary tables during recovery [#40797](https://github.com/pingcap/tidb/issues/40797) @[joccau](https://github.com/joccau) From f6315b67e87039f0545d17f99ab8113ef9da1dce Mon Sep 17 00:00:00 2001 From: yiwen92 <34636520+yiwen92@users.noreply.github.com> Date: Fri, 17 Feb 2023 10:56:27 +0800 Subject: [PATCH 090/135] Update release-6.6.0.md change highlight accordingly --- releases/release-6.6.0.md | 31 +++++++++++++++++-------------- 1 file changed, 17 insertions(+), 14 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 4719322c890f..36fdb8bb93d8 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -23,34 +23,37 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - - - + + + - - + + - - - + + - + + + + - - - + + + - - + + + From 3bfaf2411206364936c1012f6bc00ea54a2ecede Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 17 Feb 2023 10:57:09 +0800 Subject: [PATCH 091/135] Update releases/release-6.6.0.md Co-authored-by: xixirangrang --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 36fdb8bb93d8..df3dcd0c179c 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -273,7 +273,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Starting from February 20, 2023, the telemetry feature is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0).
If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see [TiDB Release Timeline](/releases/release-timeline.md). - Starting from v1.11.3, the telemetry feature is disabled by default in newly deployed TiUP. If you upgrade from a previous version of TiUP to v1.11.3 or a later version, the telemetry feature keeps the same status as before the upgrade. -### Ecosystem +### TiDB tools * TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang** From e5a2c30d1b787b9c1b3741eea29e2c2bc6b8993e Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 17 Feb 2023 10:58:37 +0800 Subject: [PATCH 092/135] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 35 +++++++++++++++++------------------ 1 file changed, 17 insertions(+), 18 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index df3dcd0c179c..335e08c6ba55 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -425,7 +425,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: `` - Optimize the default values of some parameters in partitioned-raft-kv mode: the default value of the TiKV configuration item `storage.block-cache.capacity` is adjusted from 45% to 30%, and the default value of `region-split-size` is adjusted from `96MiB` to `10GiB`. When using raft-kv mode and `enable-region-bucket` is `true`, `region-split-size` is adjusted to 1GiB by default.
[#12842](https://github.com/tikv/tikv/issues/12842) @[tonyxuqqi](https://github.com/tonyxuqqi) - Support priority scheduling in Raftstore asynchronous writes [#13730] (https://github.com/tikv/tikv/issues/13730) @[Connor1996](https://github.com/Connor1996) - - Support starting TiKV on a CPU with less than 1 core [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/13586) @[andreid-db](https://github.com/andreid-db) + - Support starting TiKV on a CPU with less than 1 core [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db) - Optimize the new detection mechanism of Raftstore slow score and add `evict-slow-trend-scheduler` [#14131](https://github.com/tikv/tikv/issues/14131) @[innerr](https://github.com/innerr) - Force the block cache of RocksDB to be shared and no longer support setting the block cache separately according to CF [#12936](https://github.com/tikv/tikv/issues/12936) @[busyjay](https://github.com/busyjay) @@ -479,7 +479,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + Sync-diff-inspector - Add a new parameter `skip-non-existing-table` to control whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream [#692](https://github.com/pingcap/tidb-tools/issues/692) @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya94) **tw@shichun-0415** - - note [#issue](链接) @[贡献者 GitHub ID](链接) ## Bug fixes @@ -575,30 +574,30 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiCDC (dup: release-6.1.4.md > Bug 修复> Tools> TiCDC)- Fix the issue that `transaction_atomicity` and `protocol` cannot be updated via the configuration file [#7935](https://github.com/pingcap/tiflow/issues/7935) 
@[CharlesCheung96](https://github.com/CharlesCheung96) - Fix the issue that precheck is not performed on the storage path of redo log [#6335](https://github.com/pingcap/tiflow/issues/6335) @[CharlesCheung96](https://github.com/CharlesCheung96) - Fix the issue of insufficient duration that redo log can tolerate for S3 storage failure [#8089](https://github.com/pingcap/tiflow/issues/8089) @[CharlesCheung96](https://github.com/CharlesCheung96) - Fix the issue that the changefeed gets stuck in special scenarios such as when scaling in or scaling out TiKV or TiCDC nodes [#8197](https://github.com/pingcap/tiflow/issues/8197) @[hicqu](https://github.com/hicqu) - Fix the issue of too high traffic among TiKV nodes [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - Fix the performance issues of TiCDC in terms of CPU usage, memory control, and throughput when the pull-based sink is enabled [#8142](https://github.com/pingcap/tiflow/issues/8142) [#8157](https://github.com/pingcap/tiflow/issues/8157) [#8001](https://github.com/pingcap/tiflow/issues/8001) [#5928](https://github.com/pingcap/tiflow/issues/5928) @[hicqu](https://github.com/hicqu)
@[hi-rustin](https://github.com/hi-rustin) + TiDB Data Migration (DM) `` - - Fix the issue that the `binlog-schema delete` command fails to execute [#7373](https://github.com/pingcap/tiflow/issues/7373) @[liumengya94] - - Fix the issue that the checkpoint does not advance when the last binlog is a skipped DDL [#8175](https://github.com/pingcap/tiflow/issues/8175) @[D3Hunter] - (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- Fix a bug that when the expression filters of both "update" and "non-update" types are specified in one table, all `UPDATE` statements are skipped [#7831](https://github.com/pingcap/tiflow/issues/7831) @[lance6716](https://github.com/lance6716) @[lance6716] - (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- Fix a bug that when only one of `update-old-value-expr` or `update-new-value-expr` is set for a table, the filter rule does not take effect or DM panics [#7774](https://github.com/pingcap/tiflow/issues/7774) @[lance6716](https://github.com/lance6716) @[lance6716] + - Fix the issue that the `binlog-schema delete` command fails to execute [#7373](https://github.com/pingcap/tiflow/issues/7373) @[liumengya94](https://github.com/liumengya94) + - Fix the issue that the checkpoint does not advance when the last binlog is a skipped DDL [#8175](https://github.com/pingcap/tiflow/issues/8175) @[D3Hunter](https://github.com/D3Hunter) + (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- Fix a bug that when the expression filters of both "update" and "non-update" types are specified in one table, all `UPDATE` statements are skipped [#7831](https://github.com/pingcap/tiflow/issues/7831) @[lance6716](https://github.com/lance6716) + (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- Fix a bug that when only one of `update-old-value-expr` or `update-new-value-expr` is set for a table, the filter rule does not take effect or DM panics [#7774](https://github.com/pingcap/tiflow/issues/7774) 
@[lance6716](https://github.com/lance6716) + TiDB Lightning - Fix the issue that TiDB Lightning timeout hangs due to TiDB restart in some scenarios [#33714](https://github.com/pingcap/tidb/issues/33714) @[lichunzhu](https://github.com/lichunzhu) **tw@shichun-0415** - - Fix the issue that TiDB Lightning might incorrectly skip conflict resolution when all but the last TiDB Lightning instance encounters a local duplicate record during a parallel import [#40923](https://github.com/pingcap/tidb/issues/40923) @[lichunzhu] - - Fix the issue that precheck cannot accurately detect the presence of a running TiCDC in the target cluster [#41040](https://github.com/pingcap/tidb/issues/41040) @[lance6716] - - Fix the issue that TiDB Lightning panics in the split-region phase [#40934](https://github.com/pingcap/tidb/issues/40934) @[lance6716] - - Fix the issue that the conflict resolution logic (`duplicate-resolution`) might lead to inconsistent checksums [#40657](https://github.com/pingcap/tidb/issues/40657) @[gozssky] - - Fix a possible OOM problem when there is an unclosed delimiter in the data file [#40400](https://github.com/pingcap/tidb/issues/40400) @[buchuitoudegou] - - Fix the issue that the file offset in the error report exceeds the file size [#40034](https://github.com/pingcap/tidb/issues/40034) @[buchuitoudegou] - - Fix an issue with the new version of PDClient that might cause parallel import to fail [#40493](https://github.com/pingcap/tidb/issues/40493) @[AmoebaProtozoa] + - Fix the issue that TiDB Lightning might incorrectly skip conflict resolution when all but the last TiDB Lightning instance encounters a local duplicate record during a parallel import [#40923](https://github.com/pingcap/tidb/issues/40923) @[lichunzhu](https://github.com/lichunzhu) + - Fix the issue that precheck cannot accurately detect the presence of a running TiCDC in the target cluster [#41040](https://github.com/pingcap/tidb/issues/41040) @[lance6716](https://github.com/lance6716) + - Fix 
the issue that TiDB Lightning panics in the split-region phase [#40934](https://github.com/pingcap/tidb/issues/40934) @[lance6716](https://github.com/lance6716) + - Fix the issue that the conflict resolution logic (`duplicate-resolution`) might lead to inconsistent checksums [#40657](https://github.com/pingcap/tidb/issues/40657) @[gozssky](https://github.com/gozssky) + - Fix a possible OOM problem when there is an unclosed delimiter in the data file [#40400](https://github.com/pingcap/tidb/issues/40400) @[buchuitoudegou](https://github.com/buchuitoudegou) + - Fix the issue that the file offset in the error report exceeds the file size [#40034](https://github.com/pingcap/tidb/issues/40034) @[buchuitoudegou](https://github.com/buchuitoudegou) + - Fix an issue with the new version of PDClient that might cause parallel import to fail [#40493](https://github.com/pingcap/tidb/issues/40493) @[AmoebaProtozoa](https://github.com/AmoebaProtozoa) (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Lightning)- Fix the issue that TiDB Lightning prechecks cannot find dirty data left by previously failed imports [#39477](https://github.com/pingcap/tidb/issues/39477) @[dsdashun](https://github.com/dsdashun) @[dsdashun] ## Contributors From dbeb8a5fc7da44183b6b806b45d5d8bba3e11c27 Mon Sep 17 00:00:00 2001 From: Aolin Date: Fri, 17 Feb 2023 11:07:00 +0800 Subject: [PATCH 093/135] fix link --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 335e08c6ba55..244a1e9e20ac 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -194,7 +194,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support configuring read-only storage nodes for resource-consuming tasks @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** - In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of 
the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#steps) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--replica-read-label`, to ensure the stability of cluster performance. + In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes according to [steps](/readonly-nodes.md#procedures) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--replica-read-label`, to ensure the stability of cluster performance. For more information, see [documentation](/best-practices/readonly-nodes.md). 
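A minimal sketch of routing a session's reads to the read-only storage nodes mentioned above. The variable name comes from this note; the `learner` option and the table name are assumptions based on the read-only nodes guide, so verify them against your version:

```sql
-- Route reads from the current session to read-only (learner) storage nodes.
-- The 'learner' value is an assumption taken from the read-only nodes guide.
SET SESSION tidb_replica_read = 'learner';

-- Subsequent heavy read-only queries in this session, for example large
-- analytical scans, are then served by the read-only nodes:
SELECT COUNT(*) FROM orders;
```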
From 9fa398395b76d0390c86d9fb8b81f3d858fc2c7a Mon Sep 17 00:00:00 2001
From: yiwen92 <34636520+yiwen92@users.noreply.github.com>
Date: Fri, 17 Feb 2023 11:51:29 +0800
Subject: [PATCH 094/135] Update release-6.6.0.md

Reorg domains

---
 releases/release-6.6.0.md | 220 +++++++++++++++++++------------------
 1 file changed, 108 insertions(+), 112 deletions(-)

In v6.6.0-DMR, the key new features and improvements are as follows:
| Class | Feature | Description |
| :--- | :--- | :--- |
| Scalability and Performance | TiKV supports batch aggregating data requests | This enhancement significantly reduces total RPCs in TiKV batch-get operations. In situations where data is highly dispersed and the gRPC thread pool is stretched, batching coprocessor requests can improve performance by more than 50%. |
| Scalability and Performance | TiFlash supports compression exchange | TiFlash supports data compression to improve the efficiency of parallel data exchange. Overall TPC-H performance improves by 10%, and more than 50% of network usage can be saved. |
| Scalability and Performance | TiFlash supports stale read | TiFlash supports the Stale Read feature, which can improve query performance in scenarios where real-time requirements are less strict. |
| Reliability and Availability | Resource control (experimental) | Support resource management based on resource groups, mapping database users to the corresponding resource groups and setting quotas for each resource group based on actual needs. |
| Reliability and Availability | Historical SQL binding | Support binding historical execution plans and quickly binding execution plans on TiDB Dashboard. |
| SQL Functionality | Foreign key | Support MySQL-compatible foreign key constraints to maintain data consistency and improve data quality. |
| SQL Functionality | Multi-valued index (experimental) | Introduce MySQL-compatible multi-valued indexes and enhance the JSON type to improve TiDB's compatibility with MySQL 8.0. |
| DB Operations and Observability | DM supports physical import (experimental) | TiDB Data Migration (DM) integrates TiDB Lightning's physical import mode to improve the performance of full data migration, with performance being up to 10 times faster. |
## Feature Details

### Scalability

* Support the distributed parallel execution framework for DDL operations (experimental) [#37125](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang**

    In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as the DDL owner. To further improve DDL concurrency, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and currently supports only `ADD INDEX` operations.
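A minimal sketch of trying the distributed framework described above. The variable and the `ADD INDEX` restriction come from this note; the table and index names are illustrative:

```sql
-- Enable the experimental distributed reorg framework (new in v6.6.0).
SET GLOBAL tidb_ddl_distribute_reorg = ON;

-- The reorganization phase of this ADD INDEX can then be executed
-- concurrently by all TiDB instances in the cluster.
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);
```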
### Performance

* Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn**

    If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeouts. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time.

    For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660).
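A sketch of enabling the stable wake-up model described above; the variable name is from this note, and the contending transaction is an illustrative assumption:

```sql
-- Enable the stable wake-up model for pessimistic lock queues (v6.6.0).
SET GLOBAL tidb_pessimistic_txn_aggressive_locking = ON;

-- Sessions contending on the same row, such as concurrent copies of the
-- following pessimistic transaction, are then woken up in a strictly
-- controlled order, reducing long-tail latency:
BEGIN PESSIMISTIC;
UPDATE counters SET v = v + 1 WHERE id = 1;
COMMIT;
```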
* Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn**

    When TiDB sends a data request to TiKV, TiDB compiles the request into different sub-tasks according to the Region where the data is located, and each sub-task processes the request of only a single Region. When the data to be accessed is highly dispersed, even if the size of the data is not large, many sub-tasks are generated, which in turn generate many RPC requests and consume extra time. Starting from v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of sub-tasks and the overhead of RPC requests. In the case of high data dispersion and insufficient gRPC thread pool resources, batching requests can improve performance by more than 50%.

For more information, see [documentation](/sql-prepared-plan-cache.md).
* TiFlash supports data exchange with compression [#6620](https://github.com/pingcap/tiflash/issues/6620) @[solotzg](https://github.com/solotzg) **tw@TomShawn**

    To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange.

* Support pushing down the `regexp_replace` string function to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai**

### Reliability

* Support resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd**

    Now you can create resource groups for a TiDB cluster, bind different database users to corresponding resource groups, and set quotas for each resource group according to actual needs. When cluster resources are limited, all resources used by sessions in the same resource group are limited to the quota. In this way, even if a resource group is over-consumed, the sessions in other resource groups are not affected.
    TiDB provides a built-in view of the actual usage of resources on Grafana dashboards, assisting you to allocate resources more rationally.

    The introduction of the resource control feature is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not crowd out the resources needed by other units.

    With this feature, you can:

    - Combine multiple small and medium-sized applications from different systems into a single TiDB cluster. When the workload of an application grows larger, it does not affect the normal operation of other applications. When the system workload is low, busy applications can still be allocated the required system resources even if they exceed the set read and write quotas, so as to achieve the maximum utilization of resources.
    - Choose to combine all test environments into a single TiDB cluster, or group the batch tasks that consume more resources into a single resource group. It can improve hardware utilization and reduce operating costs while ensuring that critical applications can always get the necessary resources.

    In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs.

    In v6.6.0, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource-control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and I/O.

    For more information, see [documentation](/tidb-resource-control.md).
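A sketch of the resource-group workflow described above. The variable and statement names follow the resource control documentation for v6.6.0; the group names, RU quotas, and user accounts are illustrative assumptions:

```sql
-- Enable resource control in TiDB (the TiKV side additionally requires
-- the resource-control.enabled configuration item).
SET GLOBAL tidb_enable_resource_control = ON;

-- Create resource groups with illustrative Request Unit (RU) quotas.
CREATE RESOURCE GROUP IF NOT EXISTS rg_oltp RU_PER_SEC = 2000;
CREATE RESOURCE GROUP IF NOT EXISTS rg_batch RU_PER_SEC = 500;

-- Bind database users to the corresponding resource groups.
CREATE USER 'app_user'@'%' IDENTIFIED BY 'app_password' RESOURCE GROUP rg_oltp;
ALTER USER 'report_user'@'%' RESOURCE GROUP rg_batch;
```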
* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang**

    TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature takes effect only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled.

* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn**

    In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statement and supports creating bindings according to historical execution plans. In v6.6.0, this feature becomes GA. The selection of execution plans is not limited to the current TiDB node: any historical execution plan generated by any TiDB node can be selected as the target of a [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the usability of the feature.

    For more information, see [documentation](/sql-plan-management.md#create-a-binding-according-to-a-historical-execution-plan).

* Add several optimizer hints [#39964](https://github.com/pingcap/tidb/issues/39964) @[Reminiscent](https://github.com/Reminiscent) **tw@TomShawn**

    TiDB adds several optimizer hints in v6.6.0 to control the execution plan selection of `LIMIT` operations.

    - [`ORDER_INDEX()`](/optimizer-hints.md#keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, to keep the order of the index when reading data, and to generate plans similar to `Limit + IndexScan(keep order: true)`.
    - [`NO_ORDER_INDEX()`](/optimizer-hints.md#no_keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, not to keep the order of the index when reading data, and to generate plans similar to `TopN + IndexScan(keep order: false)`.

    Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance.

### Availability

* Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai**

    For more information, see [documentation](/placement-rules-in-sql.md#survival-preference).

* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang**

    The [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement supports restoring the entire cluster to a specified point in time within the Garbage Collection (GC) lifetime. In TiDB v6.6.0, this feature adds support for rolling back DDL operations. It can be used to quickly undo a DML or DDL misoperation on a cluster, roll back a cluster within minutes, and roll back a cluster multiple times on the timeline to determine when specific data changes occurred.

    For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md).

### Security

* TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai**

    In v6.6.0, TiDB supports automatic rotations of TiFlash TLS certificates. For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. In addition, the rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the cluster, which ensures high availability.

    For more information, see [documentation](/enable-tls-between-components.md).

* TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens [#40750](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) **tw@qiancai**

    Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and a secret access key), so you cannot use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning also supports accessing S3 data via AWS IAM **role's access keys + session tokens** to improve data security.

    For more information, see [documentation](/tidb-lightning/tidb-lightning-data-source.md#import-data-from-amazon-s3).
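A minimal sketch of the `FLASHBACK CLUSTER TO TIMESTAMP` usage described in the Availability section above; the timestamp is illustrative and must fall within the GC lifetime:

```sql
-- Roll the whole cluster back to a point before the misoperation,
-- provided that point is still within the GC lifetime.
FLASHBACK CLUSTER TO TIMESTAMP '2023-02-20 11:00:00';
```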
-### DB operations +### SQL -* Support resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** +* Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) [@crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** - Now you can create resource groups for a TiDB cluster, bind different database users to corresponding resource groups, and set quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions in the same resource group will be limited to the quota. In this way, even if a resource group is over-consumed, the sessions in other resource groups are not affected. TiDB provides a built-in view of the actual usage of resources on Grafana dashboards, assisting you to allocate resources more rationally. + TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling. - The introduction of the resource control feature is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not crowd out the resources needed by other units. + For more information, see [documentation](/foreign-key.md). 
- With this feature, you can: +* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) + **tw@TomShawn** - - Combine multiple small and medium-sized applications from different systems into a single TiDB cluster. When the workload of an application grows larger, it does not affect the normal operation of other applications. When the system workload is low, busy applications can still be allocated the required system resources even if they exceed the set read and write quotas, so as to achieve the maximum utilization of resources. - - Choose to combine all test environments into a single TiDB cluster, or group the batch tasks that consume more resources into a single resource group. It can improve hardware utilization and reduce operating costs while ensuring that critical applications can always get the necessary resources. + TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-value index to filter the retrieval conditions with `MEMBER OF()`, `JSON_CONTAINS()`, `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed. - In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs. + Introducing multi-valued indexes further enhances TiDB's support for the JSON data type and also improves TiDB's compatibility with MySQL 8.0. 
- In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource-control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO. + For details, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index) - For more information, see [documentation](/tidb-resource-control.md). +### DB Operations * Support configuring read-only storage nodes for resource-consuming tasks @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** @@ -210,6 +193,47 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/tidb-configuration-file.md#initialize-sql-file-new-in-v660). +* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang** + + In v6.6.0, DM full migration capability integrates with physical import mode of TiDB Lightning, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. + + Before v6.6.0, for high data volume scenarios, you were required to configure physical import tasks in TiDB Lightning separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning tasks; one DM task can accomplish the migration. + + For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). 
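As a sketch only — the exact key names should be verified against the DM task configuration documentation — selecting the physical import mode for the full-migration phase is a matter of DM task configuration rather than a separate TiDB Lightning task:

```yaml
# Hypothetical fragment of a DM task file; the loader keys below are
# assumptions based on the v6.6.0 loader configuration, not verbatim
# from the release notes.
loaders:
  global:
    # "physical" delegates full migration to TiDB Lightning's
    # physical import mode; "logical" is the previous behavior.
    import-mode: "physical"
```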
+ +* TiDB Lightning adds a new configuration parameter `header-schema-match` to address the issue of mismatched column names between the source file and the target table @[dsdashun](https://github.com/dsdashun) + + In v6.6.0, TiDB Lightning adds a new configuration parameter `header-schema-match`. The default value is `true`, which means the first row of the source CSV file is treated as column names, which must be consistent with those in the target table. If the field names in the CSV table header do not match the column names of the target table, you can set this configuration to `false`. TiDB Lightning will ignore the error and continue to import the data in the order of the columns in the target table. + + For more information, see [TiDB Lightning (Task)](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task). + +* TiDB Lightning supports enabling compressed transfers when sending key-value pairs to TiKV [#41163](https://github.com/pingcap/tidb/issues/41163) @[gozssky](https://github.com/gozssky) **tw@qiancai** + + Starting from v6.6.0, TiDB Lightning supports compressing locally encoded and sorted key-value pairs for network transfer when sending them to TiKV, thus reducing the amount of data transferred over the network and lowering the network bandwidth overhead. In earlier TiDB versions before this feature was supported, TiDB Lightning required relatively high network bandwidth and incurred high traffic charges in case of large data volumes. + + This feature is disabled by default. To enable it, you can set the `compress-kv-pairs` configuration item of TiDB Lightning to "gzip" or "gz". + + For more information, see [documentation](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task).
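The two TiDB Lightning switches above are set in the task configuration file; a hedged sketch (the section placement follows the Lightning configuration reference and should be double-checked there):

```toml
# Hypothetical fragment of a TiDB Lightning task configuration.
[mydumper.csv]
header = true
# Set to false when the CSV header names do not match the target
# table's column names; columns are then imported by position.
header-schema-match = false

[tikv-importer]
# Compress key-value pairs sent to TiKV ("gzip" or "gz").
compress-kv-pairs = "gzip"
```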
+ +* The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt** + + TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-CDC supports subscribing to data changes of RawKV and replicating them to a downstream TiKV cluster in real time, thus enabling cross-cluster replication of RawKV. + + For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc/). + +* TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes (experimental) [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt** + + Before v6.6.0, when a table in the upstream accepts a large amount of writes, the replication capability of this table cannot be scaled out, resulting in an increase in the replication latency. Starting from TiCDC v6.6.0, the changefeed of an upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which means the replication capability of a single table is scaled out. + + For more information, see [documentation](/ticdc/ticdc-sink-to-kafka.md#scale-out-the-load-of-a-single-large-table-to-multiple-ticdc-nodes). + +* GORM adds TiDB integration tests. Now TiDB is the default database supported by GORM.
+ + - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) adapts to the `AUTO_RANDOM` attribute of TiDB [#104](https://github.com/go-gorm/mysql/pull/104) @[Icemap](https://github.com/Icemap) + - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field cannot be modified during `AutoMigrate` [#105](https://github.com/go-gorm/mysql/pull/105) @[Icemap](https://github.com/Icemap) + - [GORM](https://github.com/go-gorm/gorm) adds TiDB as the default database [#6014](https://github.com/go-gorm/gorm/pull/6014) @[Icemap](https://github.com/Icemap) + - [GORM documentation](https://github.com/go-gorm/gorm.io) mentions TiDB as the default database [#638](https://github.com/go-gorm/gorm.io/pull/638) @[Icemap](https://github.com/Icemap) + ### Observability * Support quickly creating SQL binding on TiDB Dashboard [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) **tw@ran-huang** @@ -268,53 +292,25 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/statement-summary-tables.md#persist-statements-summary). -### Telemetry - -- Starting from Februray 20, 2023, the telemetry feature is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0). If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see [TiDB Release Timeline](/releases/release-timeline.md). -- Starting from v1.11.3, the telemetry feature is disabled by default in newly deployed TiUP. If you upgrade from a previous version of TiUP to v1.11.3 or a later version, the telemetry feature keeps the same status as before the upgrade.
- -### TiDB tools - -* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang** - - In v6.6.0, DM full migration capability integrates with physical import mode of TiDB Lightning, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. - - Before v6.6.0, for high data volume scenarios, you were required to configure physical import tasks in TiDB Lightning separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning tasks; one DM task can accomplish the migration. - - For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). - -* TiDB Lightning adds a new configuration parameter "header-schema-match" to address the issue of mismatched column names between the source file and the target table @[dsdashun](https://github.com/dsdashun) - - In v6.6.0, TiDB Lightning adds a new profile parameter `header-schema-match`. The default value is `true`, which means the first row of the source CSV file is treated as the column name, and consistent with that in the target table. If the field name in the CSV table header does not match the column name of the target table, you can set this configuration to `false`. TiDB Lightning will ignore the error and continue to import the data in the order of the columns in the target table. - - For more information, see [TiDB Lightning (Task)](tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task). 
- -* TiDB Lightning supports enabling compressed transfers when sending key-value pairs to TiKV [#41163](https://github.com/pingcap/tidb/issues/41163) @[gozssky](https://github.com/gozssky) **tw@qiancai** - - Starting from v6.6.0, TiDB Lightning supports compressing locally encoded and sorted key-value pairs for network transfer when sending them to TiKV, thus reducing the amount of data transferred over the network and lowering the network bandwidth overhead. In the earlier TiDB versions before this feature is supported, TiDB Lightning requires relatively high network bandwidth and incurs high traffic charges in case of large data volumes. - - This feature is disabled by default. To enable it, you can set the `compress-kv-pairs` configuration item of TiDB Lightning to "gzip" or "gz". +### Security - For more information, see [documentation](/tidb-lightning-configuration#tidb-lightning-task). +* TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai** -* The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt** + In v6.6.0, TiDB supports automatic rotations of TiFlash TLS certificates. For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. In addition, the rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures high availability of the cluster. - TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. 
TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-CDC supports subscribing to data changes of RawKV and replicating them to a downstream TiKV cluster in real time, thus enabling cross-cluster replication of RawKV. + For more information, see [documentation](/enable-tls-between-components.md). - For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc/). +* TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens [#40750](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) **tw@qiancai** -* TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes (experimental) [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt** + Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and secret access key), so you cannot use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning supports accessing S3 data via AWS IAM **role's access keys + session tokens** as well to improve data security. - Before v6.6.0, when a table in the upstream accepts a large amount of writes, the replication capability of this table cannot be scaled out, resulting in an increase in the replication latency. Starting from TiCDC v6.6.0. the changefeed of an upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which means the replication capability of a single table is scaled out. + For more information, see [documentation](/tidb-lightning-data-source.md#import-data-from-amazon-s3). - For more information, see [documentation](/ticdc/ticdc-sink-to-kafka.md#scale-out-the-load-of-a-single-large-table-to-multiple-ticdc-nodes). +### Telemetry -* GORM adds TiDB integration tests.
Now TiDB is the default database supported by GORM. +- Starting from February 20, 2023, the telemetry feature is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0). If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see [TiDB Release Timeline](/releases/release-timeline.md). +- Starting from v1.11.3, the telemetry feature is disabled by default in newly deployed TiUP. If you upgrade from a previous version of TiUP to v1.11.3 or a later version, the telemetry feature keeps the same status as before the upgrade. - - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) adapts to the `AUTO_RANDOM` attribute of TiDB [#104](https://github.com/go-gorm/mysql/pull/104) @[Icemap](https://github.com/Icemap) - - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field cannot be modified during `AutoMigrate` [#105](https://github.com/go-gorm/mysql/pull/104) @[Icemap](https://github.com/Icemap) - [GORM](https://github.com/go-gorm/gorm) adds TiDB as the default database [#6014](https://github.com/go-gorm/gorm/pull/6014) @[Icemap](https://github.com/Icemap) - [GORM documentation](https://github.com/go-gorm/gorm.io) mentions TiDB as the default database [#638](https://github.com/go-gorm/gorm.io/pull/638) @[Icemap](https://github.com/Icemap) ## Compatibility changes
b/releases/release-6.6.0.md @@ -61,6 +61,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ## Feature Details + ### Scalability * Support the distributed parallel execution framework for DDL operations (experimental) [#37125](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** @@ -68,12 +69,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. To further improve DDL concurrency, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations. ### Performance + * Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn** If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time. 
For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). - + * Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn** When TiDB sends a data request to TiKV, TiDB compiles the request into different sub-tasks according to the Region where the data is located, and each sub-task only processes the request of a single Region. When the data to be accessed is highly dispersed, even if the size of the data is not large, many sub-tasks will be generated, which in turn will generate many RPC requests and consume extra time. Starting from v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of sub-tasks and the overhead of RPC requests. In the case of high data dispersion and insufficient gRPC thread pool resources, batching requests can improve performance by more than 50%. @@ -123,7 +125,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang** TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled. 
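Both of the capabilities above are gated by system variables, so they can be switched on from any SQL client; a minimal sketch using the variable names documented above:

```sql
-- Enable the stable wake-up model for pessimistic lock queues.
SET GLOBAL tidb_pessimistic_txn_aggressive_locking = ON;

-- Enable the DDL distributed parallel execution framework, which the
-- dynamic DDL resource management described above depends on.
SET GLOBAL tidb_ddl_distribute_reorg = ON;
```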
- + * Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn** In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability. @@ -138,7 +140,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - [`NO_ORDER_INDEX()`](/optimizer-hints.md#no_keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, not to keep the order of the index when reading data, and generates plans similar to `TopN + IndexScan(keep order: false)`. Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance. - + ### Availability * Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai** @@ -164,8 +166,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/foreign-key.md). 
-* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) - **tw@TomShawn** +* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-valued index to filter retrieval conditions with the `MEMBER OF()`, `JSON_CONTAINS()`, and `JSON_OVERLAPS()` functions, thereby reducing much I/O consumption and improving operation speed. @@ -173,7 +174,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For details, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index) -### DB Operations +### DB operations * Support configuring read-only storage nodes for resource-consuming tasks @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** @@ -311,7 +312,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Starting from February 20, 2023, the telemetry feature is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0). If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see [TiDB Release Timeline](/releases/release-timeline.md). - Starting from v1.11.3, the telemetry feature is disabled by default in newly deployed TiUP.
If you upgrade from a previous version of TiUP to v1.11.3 or a later version, the telemetry feature keeps the same status as before the upgrade. - ## Compatibility changes > **Note:** @@ -351,7 +351,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) | Newly added | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. | | [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | Newly added | Controls whether to use the enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. | | [`tidb_stmt_summary_enable_persistent`](/system-variables.md#tidb_stmt_summary_enable_persistent-new-in-v660) | Newly added | This variable is read-only. It controls whether to enable [statements summary persistence](/statement-summary-tables.md#persist-statements-summary). The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660). | -| [`tidb_stmt_summary_filename`](/system-variables.md#tidb_stmt_summary_filename-new-in-v660) | Newly added | This variable is read-only. It specifies the file to which persistent data is written when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660).
| +| [`tidb_stmt_summary_filename`](/system-variables.md#tidb_stmt_summary_filename-new-in-v660) | Newly added | This variable is read-only. It specifies the file to which persistent data is written when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660). | | [`tidb_stmt_summary_file_max_backups`](/system-variables.md#tidb_stmt_summary_file_max_backups-new-in-v660) | Newly added | This variable is read-only. It specifies the maximum number of data files that can be persisted when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660). | | [`tidb_stmt_summary_file_max_days`](/system-variables.md#tidb_stmt_summary_file_max_days-new-in-v660) | Newly added | This variable is read-only. It specifies the maximum number of days to keep persistent data files when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660). | | [`tidb_stmt_summary_file_max_size`](/system-variables.md#tidb_stmt_summary_file_max_size-new-in-v660) | Newly added | This variable is read-only. It specifies the maximum size of a persistent data file when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. 
The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660). | @@ -403,14 +403,11 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB - `` - Improve the scheduling mechanism of TTL background cleaning tasks to allow the cleaning task of a single table to be split into several sub-tasks and scheduled to run on multiple TiDB nodes simultaneously [#40361](https://github.com/pingcap/tidb/issues/40361) @[YangKeao](https://github.com/YangKeao) - Optimize the column name display of the result returned by running multi-statements after setting a non-default delimiter [#39662](https://github.com/pingcap/tidb/issues/39662) @[mjonss](https://github.com/mjonss) - Optimize the execution efficiency of statements after warning messages are generated [#39702](https://github.com/pingcap/tidb/issues/39702) @[tiancaiamao](https://github.com/tiancaiamao) - Support distributed data backfill for `ADD INDEX` (experimental) [#37119](https://github.com/pingcap/tidb/issues/37119) @[zimulala](https://github.com/zimulala) - Support using `CURDATE()` as the default value of a column [#38356](https://github.com/pingcap/tidb/issues/38356) @[CbcWestwolf](https://github.com/CbcWestwolf) - - `` - `partial order prop push down` now supports the LIST-type partitioned tables [#40273](https://github.com/pingcap/tidb/issues/40273) @[winoros](https://github.com/winoros) - Add error messages for conflicts between optimizer hints and execution plan bindings [#40910](https://github.com/pingcap/tidb/issues/40910) @[Reminiscent](https://github.com/Reminiscent) - Optimize the plan cache strategy to avoid non-optimal plans when using plan cache in some scenarios [#40312](https://github.com/pingcap/tidb/pull/40312) [#40218](https://github.com/pingcap/tidb/pull/40218) [#40280](https://github.com/pingcap/tidb/pull/40280) 
[#41136](https://github.com/pingcap/tidb/pull/41136) [#40686](https://github.com/pingcap/tidb/pull/40686) @[qw4990](https://github.com/qw4990) @@ -418,9 +415,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiKV - `` - Optimize the default values of some parameters in partitioned-raft-kv mode: the default value of the TiKV configuration item `storage.block-cache.capacity` is adjusted from 45% to 30%, and the default value of `region-split-size` is adjusted from `96MiB` to `10GiB`. When using raft-kv mode and `enable-region-bucket` is `true`, `region-split-size` is adjusted to 1GiB by default. [#12842](https://github.com/tikv/tikv/issues/12842) @[tonyxuqqi](https://github.com/tonyxuqqi) - - Support priority scheduling in Raftstore asynchronous writes [#13730] (https://github.com/tikv/tikv/issues/13730) @[Connor1996](https://github.com/Connor1996) + - Support priority scheduling in Raftstore asynchronous writes [#13730](https://github.com/tikv/tikv/issues/13730) @[Connor1996](https://github.com/Connor1996) - Support starting TiKV on a CPU with less than 1 core [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db) - Optimize the new detection mechanism of Raftstore slow score and add `evict-slow-trend-scheduler` [#14131](https://github.com/tikv/tikv/issues/14131) @[innerr](https://github.com/innerr) - Force the block cache of RocksDB to be shared and no longer support setting the block cache separately according to CF [#12936](https://github.com/tikv/tikv/issues/12936) @[busyjay](https://github.com/busyjay) @@ -472,7 +468,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Support exporting settings for foreign keys [#39913](https://github.com/pingcap/tidb/issues/39913) @[lichunzhu](https://github.com/lichunzhu) - + Sync-diff-inspector + + sync-diff-inspector - Add
a new parameter `skip-non-existing-table` to control whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream [#692](https://github.com/pingcap/tidb-tools/issues/692) @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya94) **tw@shichun-0415** @@ -480,7 +476,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB - `` - Fix the issue that a statistics collection task fails due to an incorrect `datetime` value [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - Fix the issue that `stats_meta` is not created following table creation [#38189](https://github.com/pingcap/tidb/issues/38189) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - Refine the error message reported when a column that a partitioned table depends on is deleted [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust) @@ -522,33 +517,27 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that a unique index might still produce duplicate data in some cases [#40217](https://github.com/pingcap/tidb/issues/40217) @[tangenta](https://github.com/tangenta) - Fix the PD OOM issue when there is a large number of Regions but the table ID cannot be pushed down when querying some virtual tables using `Prepare` or `Execute` [#39605](https://github.com/pingcap/tidb/issues/39605) @[djshow832](https://github.com/djshow832) - Fix the issue that data race might occur when an index is added [#40879](https://github.com/pingcap/tidb/issues/40879) @[tangenta](https://github.com/tangenta) - - `` - Fix the `can't find proper physical plan` issue caused by virtual columns [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid) - Fix the issue that TiDB cannot restart after global bindings are created for partition 
tables in dynamic pruning mode [#40368](https://github.com/pingcap/tidb/issues/40368) @[Yisaer](https://github.com/Yisaer) - Fix the issue that auto analyze causes graceful shutdown to take a long time [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - - `` - Fix the panic of the TiDB server when the IndexMerge operator triggers memory limiting behaviors [#41036](https://github.com/pingcap/tidb/pull/41036) @[guo-shaoge](https://github.com/guo-shaoge) - Fix the issue that the `SELECT * FROM table_name LIMIT 1` query on partitioned tables is slow [#40741](https://github.com/pingcap/tidb/pull/40741) @[solotzg](https://github.com/solotzg) - `` + TiKV - `` - Fix an error that occurs when casting the `const Enum` type to other types [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) - Fix the issue that Resolved TS causes higher network traffic [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - (dup: release-6.1.4.md > Bug 修复> TiKV)- Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) @[myonkeminta](https://github.com/myonkeminta) + - Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) + PD - `` + - Fix the issue that the Region Scatter task generates redundant replicas unexpectedly [#5909](https://github.com/tikv/pd/issues/5909) @[HunDunDM](https://github.com/HunDunDM) - Fix the issue that the Online Unsafe Recovery feature would get stuck and time out in `auto-detect` mode
[#5753](https://github.com/tikv/pd/issues/5753) @[Connor1996](https://github.com/Connor1996) - Fix the issue that the execution `replace-down-peer` slows down under certain conditions [#5788](https://github.com/tikv/pd/issues/5788) @[HundunDM](https://github.com/HunDunDM) - Fix the PD OOM issue that occurs when the calls of `ReportMinResolvedTS` is too frequent [#5965](https://github.com/tikv/pd/issues/5965) @[HundunDM](https://github.com/HunDunDM) + TiFlash - `` + - Fix the issue that querying TiFlash-related system tables might get stuck [#6745](https://github.com/pingcap/tiflash/pull/6745) @[lidezhu](https://github.com/lidezhu) - Fix the issue that semi-joins use excessive memory when calculating Cartesian products [#6730](https://github.com/pingcap/tiflash/issues/6730) @[gengliqi](https://github.com/gengliqi) - Fix the issue that the result of division operation on the DECIMAL data type is not rounded [#6393](https://github.com/pingcap/tiflash/issues/6393) @[LittleFall](https://github.com/LittleFall) @@ -569,7 +558,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiCDC - (dup: release-6.1.4.md > Bug 修复> Tools> TiCDC)- Fix the issue that `transaction_atomicity` and `protocol` cannot be updated via the configuration file [#7935](https://github.com/pingcap/tiflow/issues/7935) @[CharlesCheung96](https://github.com/CharlesCheung96) + - Fix the issue that `transaction_atomicity` and `protocol` cannot be updated via the configuration file [#7935](https://github.com/pingcap/tiflow/issues/7935) @[CharlesCheung96](https://github.com/CharlesCheung96) - Fix the issue that precheck is not performed on the storage path of redo log [#6335](https://github.com/pingcap/tiflow/issues/6335) @[CharlesCheung96](https://github.com/CharlesCheung96) - Fix the issue of insufficient duration that redo log can tolerate for S3 storage failure [#8089](https://github.com/pingcap/tiflow/issues/8089) @[CharlesCheung96](https://github.com/CharlesCheung96)
- Fix the issue that changefeed gets stuck in special scenarios of scaling in or scaling out TiKV or TiCDC nodes [#8197](https://github.com/pingcap/tiflow/issues/8197) @[hicqu](https://github.com/hicqu) @@ -578,11 +567,10 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB Data Migration (DM) - `` - Fix the issue that the `binlog-schema delete` command fails to execute [#7373](https://github.com/pingcap/tiflow/issues/7373) @[liumengya94](https://github.com/liumengya94) - Fix the issue that the checkpoint does not advance when the last binlog is a skipped DDL [#8175](https://github.com/pingcap/tiflow/issues/8175) @[D3Hunter](https://github.com/D3Hunter) - (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- Fix a bug that when the expression filters of both "update" and "non-update" types are specified in one table, all `UPDATE` statements are skipped [#7831](https://github.com/pingcap/tiflow/issues/7831) @[lance6716](https://github.com/lance6716) - (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Data Migration (DM))- Fix a bug that when only one of `update-old-value-expr` or `update-new-value-expr` is set for a table, the filter rule does not take effect or DM panics [#7774](https://github.com/pingcap/tiflow/issues/7774) @[lance6716](https://github.com/lance6716) + - Fix a bug that when the expression filters of both "update" and "non-update" types are specified in one table, all `UPDATE` statements are skipped [#7831](https://github.com/pingcap/tiflow/issues/7831) @[lance6716](https://github.com/lance6716) + - Fix a bug that when only one of `update-old-value-expr` or `update-new-value-expr` is set for a table, the filter rule does not take effect or DM panics [#7774](https://github.com/pingcap/tiflow/issues/7774) @[lance6716](https://github.com/lance6716) + TiDB Lightning @@ -594,7 +582,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix a possible OOM problem when there is an unclosed delimiter in the data file [#40400](https://github.com/pingcap/tidb/issues/40400)
@[buchuitoudegou](https://github.com/buchuitoudegou) - Fix the issue that the file offset in the error report exceeds the file size [#40034](https://github.com/pingcap/tidb/issues/40034) @[buchuitoudegou](https://github.com/buchuitoudegou) - Fix an issue with the new version of PDClient that might cause parallel import to fail [#40493](https://github.com/pingcap/tidb/issues/40493) @[AmoebaProtozoa](https://github.com/AmoebaProtozoa) - (dup: release-6.1.4.md > Bug 修复> Tools> TiDB Lightning)- Fix the issue that TiDB Lightning prechecks cannot find dirty data left by previously failed imports [#39477](https://github.com/pingcap/tidb/issues/39477) @[dsdashun](https://github.com/dsdashun) @[dsdashun] + - Fix the issue that TiDB Lightning prechecks cannot find dirty data left by previously failed imports [#39477](https://github.com/pingcap/tidb/issues/39477) @[dsdashun](https://github.com/dsdashun) ## Contributors From 6427720d79c21d4cd9dc6660e981145965f26721 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 17 Feb 2023 13:01:07 +0800 Subject: [PATCH 096/135] add a few sysvars --- releases/release-6.6.0.md | 8 ++++++-- system-variables.md | 16 ++++++++++------ 2 files changed, 16 insertions(+), 8 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index b700a830b118..51c033fa696b 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -338,6 +338,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | [`foreign_key_checks`](/system-variables.md#foreign_key_checks) | Modified | This variable controls whether to enable the foreign key constraint check. The default value changes from `OFF` to `ON`, which means enabling the foreign key check by default. | | [`tidb_enable_foreign_key`](/system-variables.md#tidb_enable_foreign_key-new-in-v630) | Modified | This variable controls whether to enable the foreign key feature. 
The default value changes from `OFF` to `ON`, which means enabling foreign key by default. | | `tidb_enable_general_plan_cache` | Modified | This variable controls whether to enable General Plan Cache. Starting from v6.6.0, this variable is renamed to [`tidb_enable_non_prepared_plan_cache`](/system-variables.md#tidb_enable_non_prepared_plan_cache). | +| [`tidb_enable_historical_stats`](/system-variables.md#tidb_enable_historical_stats) | Modified | This variable controls whether to enable historical statistics. The default value changes from `OFF` to `ON`, which means that historical statistics are enabled by default. | | [`tidb_enable_telemetry`](/system-variables.md#tidb_enable_telemetry-new-in-v402) | Modified | The default value changes from `ON` to `OFF`, which means that telemetry is disabled by default in TiDB. | | `tidb_general_plan_cache_size` | Modified | This variable controls the maximum number of execution plans that can be cached by General Plan Cache. Starting from v6.6.0, this variable is renamed to [`tidb_non_prepared_plan_cache_size`](/system-variables.md#tidb_non_prepared_plan_cache_size). | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `learner` is added for this variable to specify the learner replicas with which TiDB reads data from read-only nodes. | @@ -346,9 +347,12 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | [`mpp_exchange_compression_mode`](/system-variables.md#mpp_exchange_compression_mode-new-in-v660) | Newly added | This variable is used to specify the data compression mode of the MPP Exchange operator. This variable takes effect when TiDB selects the MPP execution plan with the version number `1`. The default value `UNSPECIFIED` means that TiDB automatically selects the `FAST` compression mode. 
| | [`mpp_version`](/system-variables.md#mpp_version-new-in-v660) | Newly added | This variable is used to specify different versions of the MPP execution plan. After a version is specified, TiDB selects the specified version of the MPP execution plan. The default value `UNSPECIFIED` means that TiDB automatically selects the latest version `1`. | | [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | Newly added | This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. | +| [`tidb_enable_historical_stats_for_capture`](/system-variables.md#tidb_enable_historical_stats_for_capture) | Newly added | This variable controls whether the information captured by `PLAN REPLAYER CAPTURE` includes historical statistics by default. The default value `OFF` means that historical statistics are not included by default. | | [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | Newly added | Controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. | | [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | Newly added | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. 
| | [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660) | Newly added | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. | +| [`tidb_historical_stats_duration`](/system-variables.md#tidb_historical_stats_duration-new-in-v660) | Newly added | This variable controls how long the historical statistics are retained in storage. The default value is 7 days. | +| [`tidb_index_join_double_read_penalty_cost_rate`](/system-variables.md#tidb_index_join_double_read_penalty_cost_rate-new-in-v660) | Newly added | This variable controls whether to add some penalty cost to the selection of index join. The default value `0` means that this feature is disabled by default. | | [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | Newly added | Controls whether to use enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. | | [`tidb_stmt_summary_enable_persistent`](/system-variables.md#tidb_stmt_summary_enable_persistent-new-in-v660) | Newly added | This variable is read-only. It controls whether to enable [statements summary persistence](/statement-summary-tables.md#persist-statements-summary). The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660). | | [`tidb_stmt_summary_filename`](/system-variables.md#tidb_stmt_summary_filename-new-in-v660) | Newly added | This variable is read-only. It specifies the file to which persistent data is written when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. 
The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660). | @@ -362,10 +366,12 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | -------- | -------- | -------- | -------- | | TiKV | `enable-statistics` | Deleted | This configuration item specifies whether to enable RocksDB statistics. Starting from v6.6.0, this item is deleted. RocksDB statistics are enabled for all clusters by default to help diagnostics. For details, see [#13942](https://github.com/tikv/tikv/pull/13942). | | TiKV | `storage.block-cache.shared` | Deleted | Starting from v6.6.0, this configuration item is deleted, and the block cache is enabled by default and cannot be disabled. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). | +| DM | `on-duplicate` | Deleted | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. | | TiDB | [`enable-telemetry`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB. | | TiKV | [`rocksdb.defaultcf.block-size`](/tikv-configuration-file.md#block-size) and [`rocksdb.writecf.block-size`](/tikv-configuration-file.md#block-size) | Modified | The default values change from `64K` to `16K`. | | TiKV | `storage.block-cache.block-cache-size` | Modified | Starting from v6.6.0, this configuration item is only used for calculating the default value of `storage.block-cache.capacity`. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). 
| | PD | [`enable-telemetry`](/pd-configuration-file.md#enable-telemetry) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB Dashboard. | +| DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | | TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from `0` to `0.8`, which means the limit is 80% of the total memory. | | TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options, GCS and Azure, are added for `scheme`. | | TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | Newly added | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. | @@ -381,8 +387,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | Newly added | The memory limit ratio for a PD instance. The value `0` means no memory limit. | | TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameter) | Newly added | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`.
| | TiDB Lightning | [`compress-kv-pairs`](/tidb-lightning-configuration#tidb-lightning-task) | Newly added | Controls whether to enable compression when sending KV pairs to TiKV in the physical import mode. The default value is empty, meaning that the compression is not enabled. | -| DM | `on-duplicate` | Deleted | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. | -| DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | | DM | [`checksum-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE ` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. | | DM | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item sets the disk quota. It corresponds to the [`disk-quota` configuration](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620) of TiDB Lightning. | | DM | [`on-duplicate-logical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls how DM resolves conflicting data in the logical import mode. The default value is `"replace"`, which means using the new data to replace the existing data. 
| diff --git a/system-variables.md b/system-variables.md index 31fe36ea35fa..21629abbf85f 100644 --- a/system-variables.md +++ b/system-variables.md @@ -1535,16 +1535,20 @@ MPP is a distributed computing framework provided by the TiFlash engine, which a - Scope: GLOBAL - Persists to cluster: Yes - Type: Boolean -- Default value: `OFF` -- This variable is used for an unreleased feature. **Do not change the variable value**. +- Default value: `ON` +- This variable controls whether to enable historical statistics. The default value changes from `OFF` to `ON`, which means that historical statistics are enabled by default. -### `tidb_enable_historical_stats_for_capture` +### tidb_enable_historical_stats_for_capture + +> **Warning:** +> +> The feature controlled by this variable is not fully functional in the current TiDB version. Do not change the default value. - Scope: GLOBAL - Persists to cluster: Yes - Type: Boolean - Default value: `OFF` -- This variable is used for an unreleased feature. **Do not change the variable value**. +- This variable controls whether the information captured by `PLAN REPLAYER CAPTURE` includes historical statistics by default. The default value `OFF` means that historical statistics are not included by default. ### tidb_enable_index_merge New in v4.0 @@ -1749,8 +1753,8 @@ MPP is a distributed computing framework provided by the TiFlash engine, which a - Scope: SESSION | GLOBAL - Persists to cluster: Yes - Type: Boolean -- Default value: `OFF` -- This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. +- Default value: `ON` +- This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `ON` means to enable the `PLAN REPLAYER CAPTURE` feature. 
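As a quick sanity check of the defaults documented in this patch, the new variables can be inspected from any SQL client connected to a v6.6.0 instance. This is only a sketch; the variable names are the ones listed in the tables above, and the statements are read-only:

```sql
-- Documented default: ON (historical statistics enabled)
SHOW VARIABLES LIKE 'tidb_enable_historical_stats';

-- Documented default: OFF (PLAN REPLAYER CAPTURE does not include historical statistics)
SHOW VARIABLES LIKE 'tidb_enable_historical_stats_for_capture';

-- Retention window for historical statistics; documented default is 7 days
SHOW VARIABLES LIKE 'tidb_historical_stats_duration';
```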
From 8775d194e3a58d86730d51835007d745d7c80e86 Mon Sep 17 00:00:00 2001 From: yiwen92 <34636520+yiwen92@users.noreply.github.com> Date: Fri, 17 Feb 2023 13:35:28 +0800 Subject: [PATCH 097/135] Update release-6.6.0.md --- releases/release-6.6.0.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 51c033fa696b..efac827c7f73 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -73,6 +73,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn** If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time. + + Tests indicate that this reduces tail latency by 40% to 60%. For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660).
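For readers who want to try the stable wake-up model described in this patch, enabling it is a single statement using the system variable named in the note above (a sketch only; the change can be reverted the same way):

```sql
-- Enable the stable wake-up model for pessimistic lock queues (v6.6.0+)
SET GLOBAL tidb_pessimistic_txn_aggressive_locking = ON;

-- Revert to the default wake-up behavior
SET GLOBAL tidb_pessimistic_txn_aggressive_locking = OFF;
```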
From df68d1869a63a1dfbe6054e2c0542319f4b717ba Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Fri, 17 Feb 2023 13:52:54 +0800 Subject: [PATCH 098/135] Apply suggestions from code review --- releases/release-6.6.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index efac827c7f73..6ad2b7aa80cc 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -532,7 +532,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiKV - Fix an error that occurs when casting the `const Enum` type to other types [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) - - Fix the issue that Resolved TS causes higher network traffic [#14098](https://github.com/tikv/tikv/issues/14092) @[overvenus] (https://github.com/overvenus) + - Fix the issue that Resolved TS causes higher network traffic [#14098](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) @[myonkeminta](https://github.com/myonkeminta) + PD @@ -540,7 +540,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that the Region Scatter task generates redundant replicas unexpectedly [#5909](https://github.com/tikv/pd/issues/5909) @[HundunDM](https://github.com/HunDunDM) - Fix the issue that the Online Unsafe Recovery feature would get stuck and time out in `auto-detect` mode [#5753](https://github.com/tikv/pd/issues/5753) @[Connor1996](https://github.com/Connor1996) - Fix the issue that the execution `replace-down-peer` slows down under certain conditions [#5788](https://github.com/tikv/pd/issues/5788) @[HundunDM](https://github.com/HunDunDM) - - Fix the PD OOM issue that 
occurs when the calls of `ReportMinResolvedTS` is too frequent [#5965](https://github.com/tikv/pd/issues/5965) @[HundunDM](https://github.com/HunDunDM) + - Fix the PD OOM issue that occurs when the calls of `ReportMinResolvedTS` are too frequent [#5965](https://github.com/tikv/pd/issues/5965) @[HundunDM](https://github.com/HunDunDM) + TiFlash From 93d4b19c270159acb36f14eb699de73012972830 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 17 Feb 2023 15:18:23 +0800 Subject: [PATCH 099/135] Update release-6.6.0.md --- releases/release-6.6.0.md | 1 + 1 file changed, 1 insertion(+) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 6ad2b7aa80cc..3d4a7f66f0b4 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -40,6 +40,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + From 0da3825f6372398fa2d2fbf7772f12b270fb78e5 Mon Sep 17 00:00:00 2001 From: yiwen92 <34636520+yiwen92@users.noreply.github.com> Date: Fri, 17 Feb 2023 15:27:34 +0800 Subject: [PATCH 100/135] Update release-6.6.0.md minor change wordings --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 3d4a7f66f0b4..38e4a04d42f6 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -67,7 +67,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support the distributed parallel execution framework for DDL operations (experimental) [#37125](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** - In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. 
To further improve DDL concurrency, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations. + In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. To further improve DDL concurrency for DDL operations on large tables, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations. ### Performance From 31894a3c90d41cb1ca75a16f30e3324c4f92f613 Mon Sep 17 00:00:00 2001 From: yiwen92 <34636520+yiwen92@users.noreply.github.com> Date: Fri, 17 Feb 2023 15:31:54 +0800 Subject: [PATCH 101/135] Update release-6.6.0.md typo --- releases/release-6.6.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 38e4a04d42f6..27d59ddc0a1c 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -37,7 +37,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: -
Resource Control (experimental) Support resource management based on resource groups, mapping database users to the corresponding resource groups and setting quotas for each resource group based on actual needs.
Historical SQL binding Support binding historical execution plans and quickly binding execution plans on TiDB Dashboard.
Reliability and Availability
Resource Control (experimental)Resource control (experimental) Support resource management based on resource groups, mapping database users to the corresponding resource groups and setting quotas for each resource group based on actual needs.
SQL Functionality
Foreign KeyForeign key Support MySQL-compatible foreign key constraints to maintain data consistency and improve data quality.
DB Operations and Observability
DM support physical import (experimental)TiDB Data Migration (DM) integrates TiDB Lightning's Physical Import mode to improve the performance of full data migration, with performance being up to 10 times faster.TiDB Data Migration (DM) integrates TiDB Lightning's physical import mode to improve the performance of full data migration, with performance being up to 10 times faster.
From c50a8356ab16015a4fe822d8dc6001822afbb7ea Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Fri, 17 Feb 2023 16:55:03 +0800 Subject: [PATCH 102/135] Apply suggestions from code review Co-authored-by: xixirangrang Co-authored-by: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> --- releases/release-6.6.0.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 27d59ddc0a1c..80ed4a1f2275 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -526,15 +526,15 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that data race might occur when an index is added [#40879](https://github.com/pingcap/tidb/issues/40879) @[tangenta](https://github.com/tangenta) - Fix the `can't find proper physical plan` issue caused by virtual columns [#41014](https://github.com/pingcap/tidb/issues/41014) @[AilinKid](https://github.com/AilinKid) - Fix the issue that TiDB cannot restart after global bindings are created for partition tables in dynamic trimming mode [#40368](https://github.com/pingcap/tidb/issues/40368) @[Yisaer](https://github.com/Yisaer) - - Fix the issue that auto analyze causes graceful shutdown to take a long time [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) + - Fix the issue that `auto analyze` causes graceful shutdown to take a long time [#40038](https://github.com/pingcap/tidb/issues/40038) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - Fix the panic of the TiDB server when the IndexMerge operator triggers memory limiting behaviors [#41036](https://github.com/pingcap/tidb/pull/41036) @[guo-shaoge](https://github.com/guo-shaoge) - Fix the issue that the `SELECT * FROM table_name LIMIT 1` query on partitioned tables is slow [#40741](https://github.com/pingcap/tidb/pull/40741) @[solotzg](https://github.com/solotzg) + TiKV - Fix an error that occurs 
when casting the `const Enum` type to other types [#14156](https://github.com/tikv/tikv/issues/14156) @[wshwsh12](https://github.com/wshwsh12) - - Fix the issue that Resolved TS causes higher network traffic [#14098](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - - Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) @[myonkeminta](https://github.com/myonkeminta) + - Fix the issue that Resolved TS causes higher network traffic [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) + - Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML [#14038](https://github.com/tikv/tikv/issues/14038) @[MyonKeminta](https://github.com/MyonKeminta) + PD @@ -547,7 +547,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that querying TiFlash-related system tables might get stuck [#6745](https://github.com/pingcap/tiflash/pull/6745) @[lidezhu](https://github.com/lidezhu) - Fix the issue that semi-joins use excessive memory when calculating Cartesian products [#6730](https://github.com/pingcap/tiflash/issues/6730) @[gengliqi](https://github.com/gengliqi) - - Fix the issue that the result of division operation on the DECIMAL data type is not rounded [#6393](https://github.com/pingcap/tiflash/issues/6393) @[LittleFall](https://github.com/LittleFall) + - Fix the issue that the result of the division operation on the DECIMAL data type is not rounded [#6393](https://github.com/pingcap/tiflash/issues/6393) @[LittleFall](https://github.com/LittleFall) + Tools From eba7d2e7b06d0cd02fe75dfafa5bf2618c277a90 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 17 
Feb 2023 17:50:24 +0800 Subject: [PATCH 103/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 80ed4a1f2275..39dad884942f 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -95,7 +95,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. - For details, see [documentation](). + For details, see [documentation](/explain-mpp.md#mpp-version-and-exchange-data-compression). 
* TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai** From f6303a02e3d03e86bc2bc16b48a35a02b58532b4 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 17 Feb 2023 19:33:07 +0800 Subject: [PATCH 104/135] Apply suggestions from code review Co-authored-by: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Co-authored-by: Ran --- releases/release-6.6.0.md | 44 ++++++++++++++++++++++----------------- 1 file changed, 25 insertions(+), 19 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 39dad884942f..dd12adb29e15 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -65,6 +65,12 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Scalability +* Support the next generation Partitioned-Raft-KV storage engine (experimental) [#11515](https://github.com/tikv/tikv/issues/11515) [#12842](https://github.com/tikv/tikv/issues/12842) @[busyjay](https://github.com/busyjay) @[tonyxuqqi](https://github.com/tonyxuqqi) @[tabokie](https://github.com/tabokie) @[bufferflies](https://github.com/bufferflies) @[5kbpers](https://github.com/5kbpers) @[SpadeA-Tang](https://github.com/SpadeA-Tang) @[nolouch](https://github.com/nolouch) + + Before TiDB v6.6.0, TiKV's Raft-based storage engine used a single RocksDB instance to store the data of all Regions of the TiKV instance. To support larger clusters more smoothly, starting from TiDB v6.6.0, a new TiKV storage engine is introduced, which uses multiple RocksDB instances to store TiKV Region data, and the data of each Region is independently stored in a separate RocksDB instance. The new engine can better control the number and level of files in the RocksDB instance, achieve physical isolation of data operations between Regions, and support smoothly managing more data. 
You can see it as TiKV managing multiple RocksDB instances through partitioning, which is why the feature is named as Partitioned-Raft-KV. The main advantage of this feature is better write performance, faster scaling, and larger volume of data supported with the same hardware. It can also support larger cluster scale. + + Currently, this feature is experimental and not recommended for use in production environments. + * Support the distributed parallel execution framework for DDL operations (experimental) [#37125](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala) **tw@ran-huang** In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. To further improve DDL concurrency for large table's DDL operations, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations. @@ -163,7 +169,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### SQL -* Support the MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) [@crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** +* Support MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) [@crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. 
This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling. @@ -175,7 +181,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Introducing multi-valued indexes further enhances TiDB's support for the JSON data type and also improves TiDB's compatibility with MySQL 8.0. - For details, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index) + For details, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index). ### DB operations @@ -234,7 +240,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * GORM adds TiDB integration tests. Now TiDB is the default database supported by GORM. - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) adapts to the `AUTO_RANDOM` attribute of TiDB [#104](https://github.com/go-gorm/mysql/pull/104) @[Icemap](https://github.com/Icemap) - - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field cannot be modified during `AutoMigrate` [#105](https://github.com/go-gorm/mysql/pull/104) @[Icemap](https://github.com/Icemap) + - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field cannot be modified during `AutoMigrate` [#105](https://github.com/go-gorm/mysql/pull/105) @[Icemap](https://github.com/Icemap) - [GORM](https://github.com/go-gorm/gorm) adds TiDB as the default database [#6014](https://github.com/go-gorm/gorm/pull/6014) @[Icemap](https://github.com/Icemap) - [GORM documentation](https://github.com/go-gorm/gorm.io) mentions TiDB as the default database [#638](https://github.com/go-gorm/gorm.io/pull/638) @[Icemap](https://github.com/Icemap) @@ -306,7 +312,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * TiDB Lightning supports 
accessing Amazon S3 data via AWS IAM role keys and session tokens [#40750](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) **tw@qiancai** - Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and secret access key) so you cannot use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning supports accessing S3 data via AWS IAM **role's access keys + session tokens** as well to improve the data security. + Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and a secret access key) so you cannot use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning supports accessing S3 data via AWS IAM **role's access keys + session tokens** as well to improve the data security. For more information, see [documentation](/tidb-lightning-data-source#import-data-from-amazon-s3). @@ -319,17 +325,17 @@ In v6.6.0-DMR, the key new features and improvements are as follows: > **Note:** > -> If you are upgrading from v6.4 or earlier versions to v6.6, you might also need to check the compatibility changes introduced in the intermediate versions. +> This section provides compatibility changes you need to know when you upgrade from v6.5.0 to the current version. If you are upgrading from v6.4.0 or earlier versions to the current version, you might also need to check the compatibility changes introduced in intermediate versions. 
### MySQL compatibility -* Support the MySQL-compatible foreign key constraint [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** +* Support MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** - For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-foreign-key.md). + For more information, see the [SQL](#sql) section in this document and [documentation](/sql-statements/sql-statement-foreign-key.md). * Support the MySQL-compatible multi-valued index [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** - For more information, see the [SQL](#sql) section in v6.6.0 Release Notes and [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index). + For more information, see the [SQL](#sql) section in this document and [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index). ### System variables @@ -346,17 +352,17 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | `tidb_general_plan_cache_size` | Modified | This variable controls the maximum number of execution plans that can be cached by General Plan Cache. Starting from v6.6.0, this variable is renamed to [`tidb_non_prepared_plan_cache_size`](/system-variables.md#tidb_non_prepared_plan_cache_size). | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `learner` is added for this variable to specify the learner replicas with which TiDB reads data from read-only nodes. 
| | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | A new value option `prefer-leader` is added for this variable to improve the overall read availability of TiDB clusters. When this option is set, TiDB prefers to read from the leader replica. When the performance of the leader replica significantly decreases, TiDB automatically reads from follower replicas. | -| [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | Modified | This variable is used to control the batch size of the Coprocessor Tasks of the `IndexLookUp` operator. `0` means to disable batch. Starting from v6.6.0, the default value is changed from `0` to `4`, which means 4 Coprocessor tasks will be batched into one task for each batch of requests. | -| [`mpp_exchange_compression_mode`](/system-variables.md#mpp_exchange_compression_mode-new-in-v660) | Newly added | This variable is used to specify the data compression mode of the MPP Exchange operator. This variable takes effect when TiDB selects the MPP execution plan with the version number `1`. The default value `UNSPECIFIED` means that TiDB automatically selects the `FAST` compression mode. | -| [`mpp_version`](/system-variables.md#mpp_version-new-in-v660) | Newly added | This variable is used to specify different versions of the MPP execution plan. After a version is specified, TiDB selects the specified version of the MPP execution plan. The default value `UNSPECIFIED` means that TiDB automatically selects the latest version `1`. | -| [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | Newly added | This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. 
| +| [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size) | Modified | This variable controls the batch size of the Coprocessor Tasks of the `IndexLookUp` operator. `0` means to disable batch. Starting from v6.6.0, the default value is changed from `0` to `4`, which means 4 Coprocessor tasks will be batched into one task for each batch of requests. | +| [`mpp_exchange_compression_mode`](/system-variables.md#mpp_exchange_compression_mode-new-in-v660) | Newly added | This variable specifies the data compression mode of the MPP Exchange operator. It takes effect when TiDB selects the MPP execution plan with the version number `1`. The default value `UNSPECIFIED` means that TiDB automatically selects the `FAST` compression mode. | +| [`mpp_version`](/system-variables.md#mpp_version-new-in-v660) | Newly added | This variable specifies the version of the MPP execution plan. After a version is specified, TiDB selects the specified version of the MPP execution plan. The default value `UNSPECIFIED` means that TiDB automatically selects the latest version `1`. | +| [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | Newly added | This variable controls whether to enable distributed execution of the DDL reorg phase to accelerate this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. | | [`tidb_enable_historical_stats_for_capture`](/system-variables.md#tidb_enable_historical_stats_for_capture) | Newly added | This variable controls whether the information captured by `PLAN REPLAYER CAPTURE` includes historical statistics by default. The default value `OFF` means that historical statistics are not included by default. 
| -| [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | Newly added | Controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. | +| [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | Newly added | This variable controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. | | [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | Newly added | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. | -| [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660) | Newly added | Controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. | +| [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660) | Newly added | This variable controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. 
| | [`tidb_historical_stats_duration`](/system-variables.md#tidb_historical_stats_duration-new-in-v660) | Newly added | This variable controls how long the historical statistics are retained in storage. The default value is 7 days. | | [`tidb_index_join_double_read_penalty_cost_rate`](/system-variables.md#tidb_index_join_double_read_penalty_cost_rate-new-in-v660) | Newly added | This variable controls whether to add some penalty cost to the selection of index join. The default value `0` means that this feature is disabled by default. | -| [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | Newly added | Controls whether to use enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. | +| [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | Newly added | This variable controls whether to use the enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. | | [`tidb_stmt_summary_enable_persistent`](/system-variables.md#tidb_stmt_summary_enable_persistent-new-in-v660) | Newly added | This variable is read-only. It controls whether to enable [statements summary persistence](/statement-summary-tables.md#persist-statements-summary). The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660). | | [`tidb_stmt_summary_filename`](/system-variables.md#tidb_stmt_summary_filename-new-in-v660) | Newly added | This variable is read-only. 
It specifies the file to which persistent data is written when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660). | | [`tidb_stmt_summary_file_max_backups`](/system-variables.md#tidb_stmt_summary_file_max_backups-new-in-v660) | Newly added | This variable is read-only. It specifies the maximum number of data files that can be persisted when [statements summary persistence](/statement-summary-tables.md#persist-statements-summary) is enabled. The value of this variable is the same as that of the configuration item [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660). | @@ -422,7 +428,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiKV - - Optimize the default values of some parameters in partitioned-raft-kv mode: the default value of the TiKV configuration item `storage.block-cache.capacity` is adjusted from 45% to 30%, and the default value of `region-split-size` is adjusted from `96MiB` adjusted to `10GiB`. When using raft-kv mode and `enable-region-bucket` is `true`, `region-split-size` is adjusted to 1GiB by default. [#12842](https://github.com/tikv/tikv/issues/12842) @[tonyxuqqi](https://github.com/tonyxuqqi) + - Optimize the default values of some parameters in partitioned-raft-kv mode: the default value of the TiKV configuration item `storage.block-cache.capacity` is adjusted from 45% to 30%, and the default value of `region-split-size` is adjusted from `96MiB` to `10GiB`. When using raft-kv mode and `enable-region-bucket` is `true`, `region-split-size` is adjusted to 1 GiB by default. 
[#12842](https://github.com/tikv/tikv/issues/12842) @[tonyxuqqi](https://github.com/tonyxuqqi) - Support priority scheduling in Raftstore asynchronous writes [#13730](https://github.com/tikv/tikv/issues/13730) @[Connor1996](https://github.com/Connor1996) - Support starting TiKV on a CPU with less than 1 core [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db) - Optimize the new detection mechanism of Raftstore slow score and add `evict-slow-trend-scheduler` [#14131](https://github.com/tikv/tikv/issues/14131) @[innerr](https://github.com/innerr) @@ -430,7 +436,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + PD - - Support limiting the global memory to alleviate the OOM problem (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes) + - Support managing the global memory threshold to alleviate the OOM problem (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes) - Add the GC Tuner to alleviate the GC pressure (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes) - Add the `balance-witness-scheduler` scheduler to schedule witness [#5763](https://github.com/tikv/pd/pull/5763) @[ethercflow](https://github.com/ethercflow) - Add the `evict-slow-trend-scheduler` scheduler to detect and schedule abnormal nodes [#5808](https://github.com/tikv/pd/pull/5808) @[innerr](https://github.com/innerr) @@ -439,7 +445,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiFlash - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides the foundation for future optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) 
@[JinheLin](https://github.com/JinheLin) **tw@qiancai** - - Reduce the memory usage of TiFlash up to 30% when there is no query [#6589](https://github.com/pingcap/tiflash/pull/6589) @[hongyunyan](https://github.com/hongyunyan) + - Reduce the memory usage of TiFlash by up to 30% when there is no query [#6589](https://github.com/pingcap/tiflash/pull/6589) @[hongyunyan](https://github.com/hongyunyan) + Tools @@ -449,7 +455,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiCDC - - Support batch UPDATE DML statements to improve TiCDC replication performance [#8084](https://github.com/pingcap/tiflow/issues/8084) @[amyangfei](https://github.com/amyangfei) + - Support batch `UPDATE` DML statements to improve TiCDC replication performance [#8084](https://github.com/pingcap/tiflow/issues/8084) @[amyangfei](https://github.com/amyangfei) - Implement MQ sink and MySQL sink in the asynchronous mode to improve the sink throughput [#5928](https://github.com/pingcap/tiflow/issues/5928) @[hicqu](https://github.com/hicqu) @[hi-rustin](https://github.com/hi-rustin) + TiDB Data Migration (DM) From 3fe072f543ac61e5c2ecddf2748bd9d9e0a699b9 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 17 Feb 2023 19:44:37 +0800 Subject: [PATCH 105/135] Apply suggestions from code review --- releases/release-6.6.0.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index dd12adb29e15..04cda6db090d 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -27,6 +27,10 @@ In v6.6.0-DMR, the key new features and improvements are as follows: TiKV support batch aggregate data requests This enhancement significantly reduces total RPCs in TiKV batch get operations. In situations where data is highly dispersed and the gRPC thread pool is stretched, batching coprocessor requests can improve performance by 50+%. 
+ + TiKV supports the Partitioned-Raft-KV storage engine + TiKV introduces the next-generation storage engine Partitioned-Raft-KV, and each Region uses an independent RocksDB instance, which can easily expand the storage capacity of the cluster from TB to PB and provide more stable write latency and stronger expansion capabilities. + TiFlash supports compression exchange TiFlash supports data compression to improve the efficiency of parallel data exchange, overall performance for TPCH improves 10%, and can save 50+% network usage. @@ -65,9 +69,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Scalability -* Support the next generation Partitioned-Raft-KV storage engine (experimental) [#11515](https://github.com/tikv/tikv/issues/11515) [#12842](https://github.com/tikv/tikv/issues/12842) @[busyjay](https://github.com/busyjay) @[tonyxuqqi](https://github.com/tonyxuqqi) @[tabokie](https://github.com/tabokie) @[bufferflies](https://github.com/bufferflies) @[5kbpers](https://github.com/5kbpers) @[SpadeA-Tang](https://github.com/SpadeA-Tang) @[nolouch](https://github.com/nolouch) +* Support the next-generation Partitioned-Raft-KV storage engine (experimental) [#11515](https://github.com/tikv/tikv/issues/11515) [#12842](https://github.com/tikv/tikv/issues/12842) @[busyjay](https://github.com/busyjay) @[tonyxuqqi](https://github.com/tonyxuqqi) @[tabokie](https://github.com/tabokie) @[bufferflies](https://github.com/bufferflies) @[5kbpers](https://github.com/5kbpers) @[SpadeA-Tang](https://github.com/SpadeA-Tang) @[nolouch](https://github.com/nolouch) - Before TiDB v6.6.0, TiKV's Raft-based storage engine used a single RocksDB instance to store the data of all Regions of the TiKV instance. To support larger clusters more smoothly, starting from TiDB v6.6.0, a new TiKV storage engine is introduced, which uses multiple RocksDB instances to store TiKV Region data, and the data of each Region is independently stored in a separate RocksDB instance. 
The new engine can better control the number and level of files in the RocksDB instance, achieve physical isolation of data operations between Regions, and support smoothly managing more data. You can see it as TiKV managing multiple RocksDB instances through partitioning, which is why the feature is named as Partitioned-Raft-KV. The main advantage of this feature is better write performance, faster scaling, and larger volume of data supported with the same hardware. It can also support larger cluster scale. + Before TiDB v6.6.0, TiKV's Raft-based storage engine used a single RocksDB instance to store the data of all Regions of the TiKV instance. To support larger clusters more stably, starting from TiDB v6.6.0, a new TiKV storage engine is introduced, which uses multiple RocksDB instances to store TiKV Region data, and the data of each Region is independently stored in a separate RocksDB instance. The new engine can better control the number and level of files in the RocksDB instance, achieve physical isolation of data operations between Regions, and support stably managing more data. You can see it as TiKV managing multiple RocksDB instances through partitioning, which is why the feature is named Partitioned-Raft-KV. The main advantage of this feature is better write performance, faster scaling, and larger volume of data supported with the same hardware. It can also support larger cluster scales. Currently, this feature is experimental and not recommended for use in production environments. 
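Based on the description above, the Partitioned-Raft-KV engine appears to be selected per cluster when TiKV starts. A hedged configuration sketch follows; the key name `storage.engine` is an assumption for illustration and applies only to newly created clusters:

```toml
# Sketch of a TiKV configuration fragment enabling the experimental engine.
# The key name below is an assumption for illustration; verify it against
# the TiKV configuration reference before use.
[storage]
engine = "partitioned-raft-kv"
```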
From 9038f868bf52ebf361f91f46dbba8d18f93acb9f Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 17 Feb 2023 19:51:27 +0800 Subject: [PATCH 106/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 04cda6db090d..a00cee0fd2b8 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -23,7 +23,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Scalability and Performance
+ Scalability and Performance
TiKV support batch aggregate data requests This enhancement significantly reduces total RPCs in TiKV batch get operations. In situations where data is highly dispersed and the gRPC thread pool is stretched, batching coprocessor requests can improve performance by 50+%. From 11c66c6884a0ceb6daedfee3f60a727965fc2f1d Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 17 Feb 2023 20:45:10 +0800 Subject: [PATCH 107/135] Update release-6.6.0.md --- releases/release-6.6.0.md | 35 ++++++++++++++++++----------------- 1 file changed, 18 insertions(+), 17 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index a00cee0fd2b8..ed423a6cb1e0 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -24,48 +24,48 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Scalability and Performance
- TiKV support batch aggregate data requests - This enhancement significantly reduces total RPCs in TiKV batch get operations. In situations where data is highly dispersed and the gRPC thread pool is stretched, batching coprocessor requests can improve performance by 50+%. + TiKV supports batch aggregating data requests + This enhancement significantly reduces total RPC requests in TiKV batch-get operations. In situations where data is highly dispersed and the gRPC thread pool has insufficient resources, batching coprocessor requests can improve performance by more than 50%. TiKV supports the Partitioned-Raft-KV storage engine - TiKV introduces the next-generation storage engine Partitioned-Raft-KV, and each Region uses an independent RocksDB instance, which can easily expand the storage capacity of the cluster from TB to PB and provide more stable write latency and stronger expansion capabilities. + TiKV introduces the next-generation storage engine Partitioned-Raft-KV. Each Region uses an independent RocksDB instance, which can easily expand the storage capacity of the cluster from TB to PB and provide more stable write latency and stronger expansion capabilities. - TiFlash supports compression exchange - TiFlash supports data compression to improve the efficiency of parallel data exchange, overall performance for TPCH improves 10%, and can save 50+% network usage. + TiFlash supports compression exchange + TiFlash supports data compression to improve the efficiency of parallel data exchange, and the overall TPC-H performance improves by roughly 10%, which can save more than 50% of network usage. - TiFlash supports stale read + TiFlash supports stale read TiFlash supports the Stale Read feature, which can improve query performance in scenarios where real-time requirements are not restricted. Reliability and Availability
- Resource control (experimental) + Resource control (experimental) Support resource management based on resource groups, mapping database users to the corresponding resource groups and setting quotas for each resource group based on actual needs. - Historical SQL binding + Historical SQL binding Support binding historical execution plans and quickly binding execution plans on TiDB Dashboard. - SQL Functionality
- Foreign key + SQL Functionalities
+ Foreign key Support MySQL-compatible foreign key constraints to maintain data consistency and improve data quality. - Multi-valued index (experimental) + Multi-valued index (experimental) Introduce MySQL-compatible multi-valued indexes and enhance the JSON type to improve TiDB's compatibility with MySQL 8.0. DB Operations and Observability
- DM support physical import (experimental) + DM supports physical import (experimental) TiDB Data Migration (DM) integrates TiDB Lightning's physical import mode to improve the performance of full data migration, with performance being up to 10 times faster. -## Feature Details +## Feature details ### Scalability @@ -84,7 +84,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn** If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time. - + Tests indicate this reduces tail latency 40-60%. For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). @@ -135,9 +135,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource-control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". 
RU is TiDB's unified abstraction unit for system resources such as CPU and IO. For more information, see [documentation](/tidb-resource-control.md). -* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang** - - TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled. * Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn** @@ -154,6 +151,10 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance. +* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang** + + TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled. 
+ ### Availability * Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai** From d3437c1060e75bada3cd9220aa48ac575a9f9c7d Mon Sep 17 00:00:00 2001 From: yiwen92 <34636520+yiwen92@users.noreply.github.com> Date: Fri, 17 Feb 2023 20:48:06 +0800 Subject: [PATCH 108/135] Update release-6.6.0.md --- releases/release-6.6.0.md | 43 +++++++++++++++++++-------------------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index ed423a6cb1e0..bd1acba49f47 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -23,55 +23,55 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Scalability and Performance
- TiKV support batch aggregating data requests - This enhancement significantly reduces total RPC requests in TiKV batch-get operations. In situations where data is highly dispersed and the gRPC thread pool has insufficient resource, batching coprocessor requests can improve performance by more than 50%. + Scalability and Performance
+ TiKV supports Partitioned-Raft-KV storage engine (experimental) + TiKV introduces the next-generation storage engine Partitioned-Raft-KV, and each 'Region' uses an independent RocksDB instance, which can easily expand the storage capacity of the cluster from TB to PB and provide more stable write latency and stronger scalability. - TiKV supports the Partitioned-Raft-KV storage engine - TiKV introduces the next-generation storage engine Partitioned-Raft-KV. Each Region uses an independent RocksDB instance, which can easily expand the storage capacity of the cluster from TB to PB and provide more stable write latency and stronger expansion capabilities. + TiKV support batch aggregate data requests + This enhancement significantly reduces total RPCs in TiKV batch get operations. In situations where data is highly dispersed and the gRPC thread pool is stretched, batching coprocessor requests can improve performance by 50+%. - TiFlash supports compression exchange - TiFlash supports data compression to improve the efficiency of parallel data exchange, and the overall TPC-H performance improves by roughly 10%, which can save more than 50% of network usage. + TiFlash supports compression exchange + TiFlash supports data compression to improve the efficiency of parallel data exchange, overall performance for TPCH improves 10%, and can save 50+% network usage. - TiFlash supports stale read + TiFlash supports stale read TiFlash supports the Stale Read feature, which can improve query performance in scenarios where real-time requirements are not restricted. Reliability and Availability
- Resource control (experimental) + Resource control (experimental) Support resource management based on resource groups, mapping database users to the corresponding resource groups and setting quotas for each resource group based on actual needs. - Historical SQL binding + Historical SQL binding Support binding historical execution plans and quickly binding execution plans on TiDB Dashboard. - SQL Functionalities
- Foreign key + SQL Functionality
+ Foreign key Support MySQL-compatible foreign key constraints to maintain data consistency and improve data quality. - Multi-valued index (experimental) + Multi-valued index (experimental) Introduce MySQL-compatible multi-valued indexes and enhance the JSON type to improve TiDB's compatibility with MySQL 8.0. DB Operations and Observability
- DM support physical import (experimental) + DM support physical import (experimental) TiDB Data Migration (DM) integrates TiDB Lightning's physical import mode to improve the performance of full data migration, with performance being up to 10 times faster. -## Feature details +## Feature Details ### Scalability -* Support the next-generation Partitioned-Raft-KV storage engine (experimental) [#11515](https://github.com/tikv/tikv/issues/11515) [#12842](https://github.com/tikv/tikv/issues/12842) @[busyjay](https://github.com/busyjay) @[tonyxuqqi](https://github.com/tonyxuqqi) @[tabokie](https://github.com/tabokie) @[bufferflies](https://github.com/bufferflies) @[5kbpers](https://github.com/5kbpers) @[SpadeA-Tang](https://github.com/SpadeA-Tang) @[nolouch](https://github.com/nolouch) +* Support Partitioned-Raft-KV storage engine (experimental) [#11515](https://github.com/tikv/tikv/issues/11515) [#12842](https://github.com/tikv/tikv/issues/12842) @[busyjay](https://github.com/busyjay) @[tonyxuqqi](https://github.com/tonyxuqqi) @[tabokie](https://github.com/tabokie) @[bufferflies](https://github.com/bufferflies) @[5kbpers](https://github.com/5kbpers) @[SpadeA-Tang](https://github.com/SpadeA-Tang) @[nolouch](https://github.com/nolouch) - Before TiDB v6.6.0, TiKV's Raft-based storage engine used a single RocksDB instance to store the data of all Regions of the TiKV instance. To support larger clusters more stably, starting from TiDB v6.6.0, a new TiKV storage engine is introduced, which uses multiple RocksDB instances to store TiKV Region data, and the data of each Region is independently stored in a separate RocksDB instance. The new engine can better control the number and level of files in the RocksDB instance, achieve physical isolation of data operations between Regions, and support stably managing more data. You can see it as TiKV managing multiple RocksDB instances through partitioning, which is why the feature is named Partitioned-Raft-KV. 
The main advantage of this feature is better write performance, faster scaling, and larger volume of data supported with the same hardware. It can also support larger cluster scales. + Before TiDB v6.6.0, TiKV's Raft-based storage engine used a single RocksDB instance to store the data of all 'Regions' of the TiKV instance. To support larger clusters more stably, starting from TiDB v6.6.0, a new TiKV storage engine is introduced, which uses multiple RocksDB instances to store TiKV Region data, and the data of each Region is independently stored in a separate RocksDB instance. The new engine can better control the number and level of files in the RocksDB instance, achieve physical isolation of data operations between Regions, and support stably managing more data. You can see it as TiKV managing multiple RocksDB instances through partitioning, which is why the feature is named Partitioned-Raft-KV. The main advantage of this feature is better write performance, faster scaling, and larger volume of data supported with the same hardware. It can also support larger cluster scales. Currently, this feature is experimental and not recommended for use in production environments. @@ -84,7 +84,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn** If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. 
In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time. - + Tests indicate this reduces tail latency 40-60%. For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). @@ -135,6 +135,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows: In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource-control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO. For more information, see [documentation](/tidb-resource-control.md). +* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang** + + TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled. 
* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn** @@ -151,10 +154,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance. -* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang** - - TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled. - ### Availability * Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai** From 2f36c8f8fb97dcd0a24293a310281c92631f9a7f Mon Sep 17 00:00:00 2001 From: yiwen92 <34636520+yiwen92@users.noreply.github.com> Date: Fri, 17 Feb 2023 20:50:03 +0800 Subject: [PATCH 109/135] Update release-6.6.0.md --- releases/release-6.6.0.md | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index bd1acba49f47..d10f9633501d 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -32,12 +32,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows: This enhancement significantly reduces total RPCs in TiKV batch get operations. In situations where data is highly dispersed and the gRPC thread pool is stretched, batching coprocessor requests can improve performance by 50+%. 
- TiFlash supports compression exchange - TiFlash supports data compression to improve the efficiency of parallel data exchange, overall performance for TPCH improves 10%, and can save 50+% network usage. - - - TiFlash supports stale read - TiFlash supports the Stale Read feature, which can improve query performance in scenarios where real-time requirements are not restricted. + TiFlash supports stale read and compression exchange + TiFlash supports the stale read feature, which can improve query performance in scenarios where real-time requirements are not restricted. TiFlash supports data compression to improve the efficiency of parallel data exchange, overall performance for TPCH improves 10%, and can save 50+% network usage. Reliability and Availability
From 069ec2adf6f1ccee74d4ca8e6f87ecf0bea442a5 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Sun, 19 Feb 2023 20:44:20 +0800 Subject: [PATCH 110/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index d10f9633501d..fcf60a748d7b 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -207,7 +207,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: In v6.6.0, DM full migration capability integrates with physical import mode of TiDB Lightning, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. - Before v6.6.0, for high data volume scenarios, you were required to configure physical import tasks in TiDB Lightning separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning tasks; one DM task can accomplish the migration. + Before v6.6.0, for large data volume scenarios, you were required to configure physical import tasks in TiDB Lightning separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning tasks; one DM task can accomplish the migration. For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). 
From fdf24d1b59f2c1fa7630685505cefb4499bfe6d6 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Sun, 19 Feb 2023 20:46:44 +0800 Subject: [PATCH 111/135] Apply suggestions from code review --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index fcf60a748d7b..680f0120e4be 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -377,7 +377,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | TiKV | `storage.block-cache.shared` | Deleted | Starting from v6.6.0, this configuration item is deleted, and the block cache is enabled by default and cannot be disabled. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). | | DM | `on-duplicate` | Deleted | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. | | TiDB | [`enable-telemetry`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB. | -| TiKV | [`rocksdb.defaultcf.block-size`](/tikv-configuration-file.md#block-size) and [`rocksdb.writecf.block-size`](/tikv-configuration-file.md#block-size) | Modified | The default values change from `64K` to `16K`. | +| TiKV | [`rocksdb.defaultcf.block-size`](/tikv-configuration-file.md#block-size) and [`rocksdb.writecf.block-size`](/tikv-configuration-file.md#block-size) | Modified | The default values change from `64K` to `32K`. | | TiKV | `storage.block-cache.block-cache-size` | Modified | Starting from v6.6.0, this configuration item is only used for calculating the default value of `storage.block-cache.capacity`. 
For details, see [#12936](https://github.com/tikv/tikv/issues/12936). |
| PD | [`enable-telemetry`](/pd-configuration-file.md#enable-telemetry) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB Dashboard. |
| DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. |

From d062c1661c6f9454975e81009f1626e0ea7de161 Mon Sep 17 00:00:00 2001
From: Grace Cai
Date: Mon, 20 Feb 2023 09:57:33 +0800
Subject: [PATCH 112/135] Apply suggestions from code review

---
 releases/release-6.6.0.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 680f0120e4be..0e381cbacbbb 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -148,7 +148,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:
     - [`ORDER_INDEX()`](/optimizer-hints.md#keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, to keep the order of the index when reading data, and generates plans similar to `Limit + IndexScan(keep order: true)`.
     - [`NO_ORDER_INDEX()`](/optimizer-hints.md#no_keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, not to keep the order of the index when reading data, and generates plans similar to `TopN + IndexScan(keep order: false)`.

-    Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance. 
+    Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance.
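The two hints covered in the hunk above are written inline in the query text. As a minimal sketch (the table `t` and index `idx_a` are invented for illustration, not taken from the release notes):

```sql
-- Read idx_a in index order; the optimizer generates a plan
-- similar to Limit + IndexScan(keep order: true):
SELECT /*+ ORDER_INDEX(t, idx_a) */ a FROM t ORDER BY a LIMIT 10;

-- Read idx_a without keeping order; the optimizer generates a plan
-- similar to TopN + IndexScan(keep order: false):
SELECT /*+ NO_ORDER_INDEX(t, idx_a) */ a FROM t ORDER BY a LIMIT 10;
```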
### Availability @@ -159,7 +159,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - For TiDB clusters deployed across cloud regions, when a cloud region fails, the specified databases or tables can survive in another cloud region. - For TiDB clusters deployed in a single cloud region, when an availability zone fails, the specified databases or tables can survive in another availability zone. - For more information, see [documentation](/placement-rules-in-sql.md#survival-preference). + For more information, see [documentation](/placement-rules-in-sql.md#survival-preference). * Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang** @@ -221,7 +221,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Starting from v6.6.0, TiDB Lightning supports compressing locally encoded and sorted key-value pairs for network transfer when sending them to TiKV, thus reducing the amount of data transferred over the network and lowering the network bandwidth overhead. In the earlier TiDB versions before this feature is supported, TiDB Lightning requires relatively high network bandwidth and incurs high traffic charges in case of large data volumes. - This feature is disabled by default. To enable it, you can set the `compress-kv-pairs` configuration item of TiDB Lightning to "gzip" or "gz". + This feature is disabled by default. To enable it, you can set the `compress-kv-pairs` configuration item of TiDB Lightning to `"gzip"` or `"gz"`. For more information, see [documentation](/tidb-lightning-configuration#tidb-lightning-task). 
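As a sketch of the `FLASHBACK CLUSTER TO TIMESTAMP` usage described in this patch (the timestamp literal is illustrative; it must fall within the Garbage Collection life time):

```sql
-- Roll the entire cluster, including the effects of mistaken DML or
-- DDL operations, back to the specified point in time:
FLASHBACK CLUSTER TO TIMESTAMP '2023-02-20 10:00:00';
```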
From bc3e775c7da5d0d62270b667e1d38fe9fbb9a1a8 Mon Sep 17 00:00:00 2001
From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com>
Date: Mon, 20 Feb 2023 10:52:25 +0800
Subject: [PATCH 113/135] Update releases/release-6.6.0.md

---
 releases/release-6.6.0.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 0e381cbacbbb..58402be18e09 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -409,7 +409,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows:

- Support dynamically modifying [`store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530). This facilitates more flexible TiKV performance tuning.
- Remove the limit on `LIMIT` clauses, thus improving the execution performance.
-- Starting from v6.6.0, BR does not support restoring data to clusters earlier than v6.5.0.
+- Starting from v6.6.0, BR does not support restoring data to clusters of v6.1.0 or earlier versions.
- Starting from v6.6.0, TiDB no longer supports modifying column types on partitioned tables because of potential correctness issues.
## Improvements

From 3a20003c5767d5f5e10cee11121e013dd01f5556 Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Mon, 20 Feb 2023 11:16:25 +0800
Subject: [PATCH 114/135] remove tw, add toc, fix format

---
 TOC.md                       |   4 +-
 releases/release-6.6.0.md    | 145 ++++++++++++++++++-----------------
 releases/release-notes.md    |   4 +
 releases/release-timeline.md |   1 +
 4 files changed, 81 insertions(+), 73 deletions(-)

diff --git a/TOC.md b/TOC.md
index 25f67af6e744..2757c804f003 100644
--- a/TOC.md
+++ b/TOC.md
@@ -4,7 +4,7 @@
- [Docs Home](https://docs.pingcap.com/)
- About TiDB
  - [TiDB Introduction](/overview.md)
-  - [TiDB 6.5 Release Notes](/releases/release-6.5.0.md)
+  - [TiDB 6.6 Release Notes](/releases/release-6.6.0.md)
  - [Features](/basic-features.md)
  - [MySQL Compatibility](/mysql-compatibility.md)
  - [TiDB Limitations](/tidb-limitations.md)
@@ -939,6 +939,8 @@
  - [Release Timeline](/releases/release-timeline.md)
  - [TiDB Versioning](/releases/versioning.md)
  - [TiDB Installation Packages](/binary-package.md)
+  - v6.6
+    - [6.6.0](/releases/release-6.6.0.md)
  - v6.5
    - [6.5.0](/releases/release-6.5.0.md)
  - v6.4
diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md
index 58402be18e09..6be64b6f3633 100644
--- a/releases/release-6.6.0.md
+++ b/releases/release-6.6.0.md
@@ -5,11 +5,11 @@ summary: Learn about the new features, compatibility changes, improvements, and

# TiDB 6.6.0 Release Notes

-Release date: xx, 2023
+Release date: February 20, 2023

-TiDB version: 6.6.0-DMR
+TiDB version: 6.6.0-[DMR](/releases/versioning.md#development-milestone-releases)

-Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.6/quick-start-with-tidb) | [Installation package](https://cn.pingcap.com/product-community/)
+Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.6/quick-start-with-tidb) | [Installation package](https://www.pingcap.com/download/?version=v6.6.0#version-list)

In v6.6.0-DMR, the key new features and improvements
are as follows: @@ -25,43 +25,43 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Scalability and Performance
TiKV supports Partitioned-Raft-KV storage engine (experimental)
TiKV introduces the next-generation storage engine Partitioned-Raft-KV, and each Region uses an independent RocksDB instance, which can easily expand the storage capacity of the cluster from TB to PB and provide more stable write latency and stronger scalability.


TiKV supports batch aggregating data requests
This enhancement significantly reduces total RPC requests in TiKV batch-get operations. In situations where data is highly dispersed and the gRPC thread pool has insufficient resources, batching coprocessor requests can improve performance by more than 50%.


TiFlash supports Stale Read and compression exchange
TiFlash supports the Stale Read feature, which can improve query performance in scenarios where real-time requirements are not restricted. TiFlash supports data compression to improve the efficiency of parallel data exchange, and the overall TPC-H performance improves by 10%, which can save more than 50% of the network usage.


Reliability and availability
+ Resource control (experimental) + Support resource management based on resource groups, which maps database users to the corresponding resource groups and sets quotas for each resource group based on actual needs. - Historical SQL binding + Historical SQL binding Support binding historical execution plans and quickly binding execution plans on TiDB Dashboard. - SQL Functionality
- Foreign key + SQL functionalities
+ Foreign key Support MySQL-compatible foreign key constraints to maintain data consistency and improve data quality. - Multi-valued index (experimental) + Multi-valued index (experimental) Introduce MySQL-compatible multi-valued indexes and enhance the JSON type to improve TiDB's compatibility with MySQL 8.0. - DB Operations and Observability
- DM support physical import (experimental) + DB operations and observability
DM supports physical import (experimental)
TiDB Data Migration (DM) integrates TiDB Lightning's physical import mode to improve the performance of full data migration, with performance being up to 10 times faster.

## Feature details

### Scalability

* Support the next-generation Partitioned-Raft-KV storage engine (experimental) [#11515](https://github.com/tikv/tikv/issues/11515) [#12842](https://github.com/tikv/tikv/issues/12842) @[busyjay](https://github.com/busyjay) @[tonyxuqqi](https://github.com/tonyxuqqi) @[tabokie](https://github.com/tabokie) @[bufferflies](https://github.com/bufferflies) @[5kbpers](https://github.com/5kbpers) @[SpadeA-Tang](https://github.com/SpadeA-Tang) @[nolouch](https://github.com/nolouch)

  Before TiDB v6.6.0, TiKV's Raft-based storage engine used a single RocksDB instance to store the data of all Regions of the TiKV instance. To support larger clusters more stably, starting from TiDB v6.6.0, a new TiKV storage engine is introduced, which uses multiple RocksDB instances to store TiKV Region data, and the data of each Region is independently stored in a separate RocksDB instance. The new engine can better control the number and level of files in the RocksDB instance, achieve physical isolation of data operations between Regions, and support stably managing more data. You can see it as TiKV managing multiple RocksDB instances through partitioning, which is why the feature is named Partitioned-Raft-KV. The main advantage of this feature is better write performance, faster scaling, and larger volume of data supported with the same hardware. It can also support larger cluster scales.

  Currently, this feature is experimental and not recommended for use in production environments.

* Support the distributed parallel execution framework for DDL operations (experimental) [#37125](https://github.com/pingcap/tidb/issues/37125) @[zimulala](https://github.com/zimulala)

  In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as a DDL owner. To further improve DDL concurrency for DDL operations on large tables, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) and is currently only supported for `Add Index` operations.
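A hedged sketch of how the framework described above is turned on (the table and index names are invented for illustration):

```sql
-- Enable the DDL distributed parallel execution framework
-- (experimental; currently effective only for ADD INDEX):
SET GLOBAL tidb_ddl_distribute_reorg = ON;

-- A subsequent ADD INDEX can then have its StateWriteReorganization
-- phase executed concurrently by all TiDB instances in the cluster:
ALTER TABLE orders ADD INDEX idx_customer (customer_id);
```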
### Performance -* Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) **tw@TomShawn** +* Support a stable wake-up model for pessimistic lock queues [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeout. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the value of the system variable [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time. - + Tests indicate this reduces tail latency 40-60%. For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). -* Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) **tw@TomShawn** +* Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) When TiDB sends a data request to TiKV, TiDB compiles the request into different sub-tasks according to the Region where the data is located, and each sub-task only processes the request of a single Region. 
When the data to be accessed is highly dispersed, even if the size of the data is not large, many sub-tasks will be generated, which in turn will generate many RPC requests and consume extra time. Starting from v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of sub-tasks and the overhead of RPC requests. In the case of high data dispersion and insufficient gRPC thread pool resources, batching requests can improve performance by more than 50%. This feature is enabled by default. You can set the batch size of requests using the system variable [`tidb_store_batch_size`](/system-variables.md#tidb_store_batch_size). -* Remove the limit on `LIMIT` clauses [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) **tw@shichun-0415** +* Remove the limit on `LIMIT` clauses [#40219](https://github.com/pingcap/tidb/issues/40219) @[fzzf678](https://github.com/fzzf678) Starting from v6.6.0, TiDB plan cache supports caching execution plans with a variable as the `LIMIT` parameter, such as `LIMIT ?` or `LIMIT 10, ?`. This feature allows more SQL statements to benefit from plan cache, thus improving execution efficiency. For more information, see [documentation](/sql-prepared-plan-cache.md). -* TiFlash supports data exchange with compression [#6620](https://github.com/pingcap/tiflash/issues/6620) @[solotzg](https://github.com/solotzg) **tw@TomShawn** +* TiFlash supports data exchange with compression [#6620](https://github.com/pingcap/tiflash/issues/6620) @[solotzg](https://github.com/solotzg) To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. 
In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. For details, see [documentation](/explain-mpp.md#mpp-version-and-exchange-data-compression). -* TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) **tw@qiancai** +* TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) The Stale Read feature, which has been generally available (GA) since v5.1.1, allows you to read historical data at a specific timestamp or within a specified time range. Stale read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash does not support Stale Read. Even if a table has TiFlash replicas, Stale Read can only read its TiKV replicas. @@ -111,11 +111,11 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/stale-read.md). 
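The Stale Read syntax itself is unchanged; what is new is that the read can now be served by a TiFlash replica. A minimal sketch (the table name `orders` and the staleness values are illustrative):

```sql
-- Read data as of 10 seconds ago; starting from v6.6.0, the read can be
-- served by a TiFlash replica as well as a TiKV replica:
SELECT COUNT(*) FROM orders AS OF TIMESTAMP NOW() - INTERVAL 10 SECOND;

-- Alternatively, allow up to 5 seconds of staleness for the whole session:
SET SESSION tidb_read_staleness = -5;
SELECT COUNT(*) FROM orders;
```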
-* Support pushing down the `regexp_replace` string function to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** +* Support pushing down the `regexp_replace` string function to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) ### Reliability -* Support resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) **tw@hfxsd** +* Support resource control based on resource groups (experimental) [#38825](https://github.com/pingcap/tidb/issues/38825) @[nolouch](https://github.com/nolouch) @[BornChanger](https://github.com/BornChanger) @[glorv](https://github.com/glorv) @[tiancaiamao](https://github.com/tiancaiamao) @[Connor1996](https://github.com/Connor1996) @[JmPotato](https://github.com/JmPotato) @[hnes](https://github.com/hnes) @[CabinfeverB](https://github.com/CabinfeverB) @[HuSharp](https://github.com/HuSharp) Now you can create resource groups for a TiDB cluster, bind different database users to corresponding resource groups, and set quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions in the same resource group will be limited to the quota. In this way, even if a resource group is over-consumed, the sessions in other resource groups are not affected. TiDB provides a built-in view of the actual usage of resources on Grafana dashboards, assisting you to allocate resources more rationally. 
@@ -126,22 +126,19 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Combine multiple small and medium-sized applications from different systems into a single TiDB cluster. When the workload of an application grows larger, it does not affect the normal operation of other applications. When the system workload is low, busy applications can still be allocated the required system resources even if they exceed the set read and write quotas, so as to achieve the maximum utilization of resources. - Choose to combine all test environments into a single TiDB cluster, or group the batch tasks that consume more resources into a single resource group. It can improve hardware utilization and reduce operating costs while ensuring that critical applications can always get the necessary resources. - In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs. - - In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource-control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO. + In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs. - For more information, see [documentation](/tidb-resource-control.md). 
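The workflow described above can be sketched as follows; the resource group name, user name, and RU quota are illustrative:

```sql
-- Enable resource control on the TiDB side; the TiKV configuration item
-- resource-control.enabled must also be set to true:
SET GLOBAL tidb_enable_resource_control = ON;

-- Create a resource group with a quota of 500 Request Units (RU) per second:
CREATE RESOURCE GROUP IF NOT EXISTS rg_app RU_PER_SEC = 500;

-- Bind a database user to the resource group; when cluster resources are
-- tight, sessions of this user are limited to the group's quota:
ALTER USER 'app_user'@'%' RESOURCE GROUP rg_app;
```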
-* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) **tw@ran-huang** + In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource-control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO. - TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled. + For more information, see [documentation](/tidb-resource-control.md). -* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@TomShawn** +* Binding historical execution plans is GA [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) In v6.5.0, TiDB extends the binding targets in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature is GA. The selection of execution plans is not limited to the current TiDB node. Any historical execution plan generated by any TiDB node can be selected as the target of [SQL binding](/sql-statements/sql-statement-create-binding.md), which further improves the feature usability. 
For more information, see [documentation](/sql-plan-management.md#create-a-binding-according-to-a-historical-execution-plan). -* Add several optimizer hints [#39964](https://github.com/pingcap/tidb/issues/39964) @[Reminiscent](https://github.com/Reminiscent) **tw@TomShawn** +* Add several optimizer hints [#39964](https://github.com/pingcap/tidb/issues/39964) @[Reminiscent](https://github.com/Reminiscent) TiDB adds several optimizer hints in v6.6.0 to control the execution plan selection of `LIMIT` operations. @@ -150,9 +147,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance. +* Support dynamically managing the resource usage of DDL operations (experimental) [#38025](https://github.com/pingcap/tidb/issues/38025) @[hawkingrei](https://github.com/hawkingrei) + + TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the [DDL distributed parallel execution framework](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) is enabled. + ### Availability -* Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) **tw@qiancai** +* Support configuring `SURVIVAL_PREFERENCE` for [placement rules in SQL](/placement-rules-in-sql.md) [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) `SURVIVAL_PREFERENCES` provides data survival preference settings to increase the disaster survivability of data. 
By specifying `SURVIVAL_PREFERENCE`, you can control the following: @@ -161,7 +162,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/placement-rules-in-sql.md#survival-preference). -* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) **tw@ran-huang** +* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) The [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement supports restoring the entire cluster to a specified point in time within the Garbage Collection (GC) lifetime. In TiDB v6.6.0, this feature adds support for rolling back DDL operations. This can be used to quickly undo a DML or DDL misoperation on a cluster, roll back a cluster within minutes, and roll back a cluster multiple times on the timeline to determine when specific data changes occurred. @@ -169,13 +170,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### SQL -* Support MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) [@crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** +* Support MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) [@crazycs520](https://github.com/crazycs520) TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling. 
For more information, see [documentation](/foreign-key.md). -* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** +* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) TiDB introduces the MySQL-compatible multi-valued index in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot help speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in the JSON column has a multi-valued index, you can use the multi-valued index to filter retrieval conditions with the `MEMBER OF()`, `JSON_CONTAINS()`, and `JSON_OVERLAPS()` functions, thereby reducing I/O consumption and improving operation speed. @@ -185,25 +186,25 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### DB operations -* Support configuring read-only storage nodes for resource-consuming tasks @[v01dstar](https://github.com/v01dstar) **tw@Oreoxmt** +* Support configuring read-only storage nodes for resource-consuming tasks @[v01dstar](https://github.com/v01dstar) - In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. 
You can configure read-only storage nodes according to [steps](/readonly-nodes.md#procedures) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--replica-read-label`, to ensure the stability of cluster performance. + In production environments, some read-only operations might consume a large number of resources regularly and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes according to [steps](/best-practices/readonly-nodes.md#procedures) and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the br command line argument `--replica-read-label`, to ensure the stability of cluster performance. For more information, see [documentation](/best-practices/readonly-nodes.md). -* Support dynamically modifying `store-io-pool-size` [#13964](https://github.com/tikv/tikv/issues/13964) @[LykxSassinator](https://github.com/LykxSassinator) **tw@shichun-0415** +* Support dynamically modifying `store-io-pool-size` [#13964](https://github.com/tikv/tikv/issues/13964) @[LykxSassinator](https://github.com/LykxSassinator) The TiKV configuration item [`raftstore.store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530) specifies the allowable number of threads that process Raft I/O tasks, which can be adjusted when tuning TiKV performance. Before v6.6.0, this configuration item cannot be modified dynamically. 
Starting from v6.6.0, you can modify this configuration without restarting the server, which means more flexible performance tuning. For more information, see [documentation](/dynamic-config.md). -* Support specifying the SQL script executed upon TiDB cluster intialization [#35624](https://github.com/pingcap/tidb/issues/35624) @[morgo](https://github.com/morgo) **tw@shichun-0415** +* Support specifying the SQL script executed upon TiDB cluster initialization [#35624](https://github.com/pingcap/tidb/issues/35624) @[morgo](https://github.com/morgo) When you start a TiDB cluster for the first time, you can specify the SQL script to be executed by configuring the command line parameter `--initialize-sql-file`. You can use this feature when you need to perform such operations as modifying the value of a system variable, creating a user, or granting privileges. For more information, see [documentation](/tidb-configuration-file.md#initialize-sql-file-new-in-v660). -* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) **tw@ran-huang** +* TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @[lance6716](https://github.com/lance6716) In v6.6.0, the DM full migration capability integrates with the physical import mode of TiDB Lightning, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios. @@ -211,33 +212,33 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). 
-* TiDB Lightning adds a new configuration parameter "header-schema-match" to address the issue of mismatched column names between the source file and the target table @[dsdashun](https://github.com/dsdashun) +* TiDB Lightning adds a new configuration parameter `header-schema-match` to address the issue of mismatched column names between the source file and the target table @[dsdashun](https://github.com/dsdashun) - In v6.6.0, TiDB Lightning adds a new profile parameter `header-schema-match`. The default value is `true`, which means the first row of the source CSV file is treated as the column name, and consistent with that in the target table. If the field name in the CSV table header does not match the column name of the target table, you can set this configuration to `false`. TiDB Lightning will ignore the error and continue to import the data in the order of the columns in the target table. + In v6.6.0, TiDB Lightning adds a new configuration parameter `header-schema-match`. The default value is `true`, which means the first row of the source CSV file is treated as column names that are consistent with those in the target table. If the field names in the CSV table header do not match the column names of the target table, you can set this configuration item to `false`. TiDB Lightning will then ignore the error and continue to import the data in the order of the columns in the target table. - For more information, see [TiDB Lightning (Task)](tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task). + For more information, see [documentation](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task). 
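The setting described above can be sketched as the following fragment of a TiDB Lightning task configuration file; placement under the CSV settings section is assumed here:

```toml
[mydumper.csv]
# The first row of the source CSV file is treated as a header.
header = true
# Set to false when the header's field names do not match the target
# table's column names; TiDB Lightning then imports the data in the
# order of the columns in the target table.
header-schema-match = false
```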
-* TiDB Lightning supports enabling compressed transfers when sending key-value pairs to TiKV [#41163](https://github.com/pingcap/tidb/issues/41163) @[gozssky](https://github.com/gozssky) **tw@qiancai** +* TiDB Lightning supports enabling compressed transfers when sending key-value pairs to TiKV [#41163](https://github.com/pingcap/tidb/issues/41163) @[gozssky](https://github.com/gozssky) Starting from v6.6.0, TiDB Lightning supports compressing locally encoded and sorted key-value pairs for network transfer when sending them to TiKV, thus reducing the amount of data transferred over the network and lowering the network bandwidth overhead. In earlier TiDB versions without this feature, TiDB Lightning requires relatively high network bandwidth and incurs high traffic charges when the data volume is large. This feature is disabled by default. To enable it, you can set the `compress-kv-pairs` configuration item of TiDB Lightning to `"gzip"` or `"gz"`. - For more information, see [documentation](/tidb-lightning-configuration#tidb-lightning-task). + For more information, see [documentation](/tidb-lightning-configuration.md#tidb-lightning-task). -* The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) **tw@Oreoxmt** +* The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. 
TiKV-CDC supports subscribing to data changes of RawKV and replicating them to a downstream TiKV cluster in real time, thus enabling cross-cluster replication of RawKV. For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/cdc/cdc/). -* TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes (experimental) [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) **tw@Oreoxmt** +* TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes (experimental) [#7720](https://github.com/pingcap/tiflow/issues/7720) @[overvenus](https://github.com/overvenus) Before v6.6.0, when a table in the upstream accepts a large amount of writes, the replication capability of this table cannot be scaled out, resulting in an increase in the replication latency. Starting from TiCDC v6.6.0, the changefeed of an upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which means the replication capability of a single table is scaled out. For more information, see [documentation](/ticdc/ticdc-sink-to-kafka.md#scale-out-the-load-of-a-single-large-table-to-multiple-ticdc-nodes). -* GORM adds TiDB integration tests. Now TiDB is the default database supported by GORM. +* [GORM](https://github.com/go-gorm/gorm) adds TiDB integration tests. Now TiDB is the default database supported by GORM. 
[#6014](https://github.com/go-gorm/gorm/pull/6014) @[Icemap](https://github.com/Icemap) - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) adapts to the `AUTO_RANDOM` attribute of TiDB [#104](https://github.com/go-gorm/mysql/pull/104) @[Icemap](https://github.com/Icemap) - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field cannot be modified during `AutoMigrate` [#105](https://github.com/go-gorm/mysql/pull/105) @[Icemap](https://github.com/Icemap) @@ -246,7 +247,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Observability -* Support quickly creating SQL binding on TiDB Dashboard [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) **tw@ran-huang** +* Support quickly creating SQL binding on TiDB Dashboard [#781](https://github.com/pingcap/tidb-dashboard/issues/781) @[YiniXu9506](https://github.com/YiniXu9506) TiDB v6.6.0 supports creating SQL binding from statement history, which allows you to quickly bind a SQL statement to a specific plan on TiDB Dashboard. @@ -254,7 +255,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/dashboard/dashboard-statement-details.md#create-sql-binding). -* Add warning for caching execution plans @[qw4990](https://github.com/qw4990) **tw@TomShawn** +* Add warning for caching execution plans @[qw4990](https://github.com/qw4990) When an execution plan cannot be cached, TiDB indicates the reason in a warning to make diagnostics easier. For example: @@ -280,13 +281,13 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/sql-prepared-plan-cache.md#diagnostics-of-prepared-plan-cache). 
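A minimal sketch of inspecting such a warning; the table, the statement, and the exact warning text are illustrative:

```sql
CREATE TABLE t (a INT);
PREPARE st FROM 'SELECT * FROM t WHERE a < ?';
SET @v = '1';
EXECUTE st USING @v;
-- If the plan cannot be cached, the reason is reported as a warning,
-- for example: "skip plan-cache: '1' may be converted to INT"
SHOW WARNINGS;
```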
-* Add a `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) **tw@Oreoxmt** +* Add a `Warnings` field to the slow query log [#39893](https://github.com/pingcap/tidb/issues/39893) @[time-and-fate](https://github.com/time-and-fate) TiDB v6.6.0 adds a `Warnings` field to the slow query log to help diagnose performance issues. This field records warnings generated during the execution of a slow query. You can also view the warnings on the slow query page of TiDB Dashboard. For more information, see [documentation](/identify-slow-queries.md). -* Automatically capture the generation of SQL execution plans [#38779](https://github.com/pingcap/tidb/issues/38779) @[Yisaer](https://github.com/Yisaer) **tw@ran-huang** +* Automatically capture the generation of SQL execution plans [#38779](https://github.com/pingcap/tidb/issues/38779) @[Yisaer](https://github.com/Yisaer) In the process of troubleshooting execution plan issues, `PLAN REPLAYER` can help preserve the scene and improve the efficiency of diagnosis. However, in some scenarios, the generation of some execution plans cannot be reproduced freely, which makes the diagnosis work more difficult. @@ -296,7 +297,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/sql-plan-replayer.md#use-plan-replayer-capture). -* Support persisting statements summary (experimental) [#40812](https://github.com/pingcap/tidb/issues/40812) @[mornyx](https://github.com/mornyx) **tw@shichun-0415** +* Support persisting statements summary (experimental) [#40812](https://github.com/pingcap/tidb/issues/40812) @[mornyx](https://github.com/mornyx) Before v6.6.0, statements summary data is kept in memory and is lost upon a TiDB server restart. Starting from v6.6.0, TiDB supports enabling statements summary persistence, which allows historical data to be written to disks on a regular basis. 
In the meantime, the results of queries on system tables will be derived from disks instead of memory. After TiDB restarts, all historical data remains available. @@ -304,36 +305,36 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Security -* TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) **tw@qiancai** +* TiFlash supports automatic rotations of TLS certificates [#5503](https://github.com/pingcap/tiflash/issues/5503) @[ywqzzy](https://github.com/ywqzzy) In v6.6.0, TiDB supports automatic rotations of TiFlash TLS certificates. For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. In addition, the rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures high availability of the cluster. For more information, see [documentation](/enable-tls-between-components.md). -* TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens [#4075](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) **tw@qiancai** +* TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens [#40750](https://github.com/pingcap/tidb/issues/40750) @[okJiang](https://github.com/okJiang) Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access keys** (each access key consists of an access key ID and a secret access key) so you cannot use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning supports accessing S3 data via AWS IAM **role's access keys + session tokens** as well to improve data security. 
- For more information, see [documentation](/tidb-lightning-data-source#import-data-from-amazon-s3). + For more information, see [documentation](/tidb-lightning-data-source.md#import-data-from-amazon-s3). ### Telemetry -- Starting from Februray 20, 2023, the telemetry feature is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0). If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see [TiDB Release Timeline](/releases/release-timeline.md). +- Starting from February 20, 2023, the [telemetry feature](/telemetry.md) is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0). If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see [TiDB Release Timeline](/releases/release-timeline.md). - Starting from v1.11.3, the telemetry feature is disabled by default in newly deployed TiUP. If you upgrade from a previous version of TiUP to v1.11.3 or a later version, the telemetry feature keeps the same status as before the upgrade. ## Compatibility changes > **Note:** > -> This section provides compatibility changes you need to know when you upgrade from v6.5.0 to the current version. If you are upgrading from v6.4.0 or earlier versions to the current version, you might also need to check the compatibility changes introduced in intermediate versions. +> This section provides compatibility changes you need to know when you upgrade from v6.5.0 to the current version (v6.6.0). If you are upgrading from v6.4.0 or earlier versions to the current version, you might also need to check the compatibility changes introduced in intermediate versions. 
### MySQL compatibility -* Support MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) **tw@Oreoxmt** +* Support MySQL-compatible foreign key constraints [#18209](https://github.com/pingcap/tidb/issues/18209) @[crazycs520](https://github.com/crazycs520) - For more information, see the [SQL](#sql) section in this document and [documentation](/sql-statements/sql-statement-foreign-key.md). + For more information, see the [SQL](#sql) section in this document and [documentation](/foreign-key.md). -* Support the MySQL-compatible multi-valued index [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) **tw@TomShawn** +* Support the MySQL-compatible multi-valued index (experimental) [#39592](https://github.com/pingcap/tidb/issues/39592) @[xiongjiwei](https://github.com/xiongjiwei) @[qw4990](https://github.com/qw4990) For more information, see the [SQL](#sql) section in this document and [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index). @@ -378,7 +379,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | DM | `on-duplicate` | Deleted | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. | | TiDB | [`enable-telemetry`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB. | | TiKV | [`rocksdb.defaultcf.block-size`](/tikv-configuration-file.md#block-size) and [`rocksdb.writecf.block-size`](/tikv-configuration-file.md#block-size) | Modified | The default values change from `64K` to `32K`. 
| -| TiKV | `storage.block-cache.block-cache-size` | Modified | Starting from v6.6.0, this configuration item is only used for calculating the default value of `storage.block-cache.capacity`. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). | +| TiKV | [`rocksdb.defaultcf.block-cache-size`](/tikv-configuration-file.md#block-cache-size), [`rocksdb.writecf.block-cache-size`](/tikv-configuration-file.md#block-cache-size), [`rocksdb.lockcf.block-cache-size`](/tikv-configuration-file.md#block-cache-size) | Modified | Starting from v6.6.0, these configuration items are only used for calculating the default value of `storage.block-cache.capacity`. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). | | PD | [`enable-telemetry`](/pd-configuration-file.md#enable-telemetry) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB Dashboard. | | DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | | TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from `0` to `0.8`, which means the limit is 80% of the total memory. | @@ -395,7 +396,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | Newly added | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. 
| | PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | Newly added | The memory limit ratio for a PD instance. The value `0` means no memory limit. | | TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameter) | Newly added | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. | -| TiDB Lightning | [`compress-kv-pairs`](/tidb-lightning-configuration#tidb-lightning-task) | Newly added | Controls whether to enable compression when sending KV pairs to TiKV in the physical import mode. The default value is empty, meaning that the compression is not enabled. | +| TiDB Lightning | [`compress-kv-pairs`](/tidb-lightning-configuration.md#tidb-lightning-task) | Newly added | Controls whether to enable compression when sending KV pairs to TiKV in the physical import mode. The default value is empty, meaning that the compression is not enabled. | | DM | [`checksum-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE ` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. | | DM | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item sets the disk quota. It corresponds to the [`disk-quota` configuration](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620) of TiDB Lightning. | | DM | [`on-duplicate-logical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls how DM resolves conflicting data in the logical import mode. 
The default value is `"replace"`, which means using the new data to replace the existing data. | @@ -409,7 +410,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Support dynamically modifying [`store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530). This facilitate more flexible TiKV performance tuning. - Remove the limit on `LIMIT` clauses, thus improving the execution performance. -- Starting from v6.6.0, BR does not support restoring data to clusters of v6.1.0 or earlier versions. +- Starting from v6.6.0, BR does not support restoring data to clusters earlier than v6.1.0. - Starting from v6.6.0, TiDB no longer supports modifying column types on partitioned tables because of potential correctness issues. ## Improvements @@ -444,7 +445,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiFlash - - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides the foundation for future optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin](https://github.com/JinheLin) **tw@qiancai** + - Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides the foundation for future optimization of the data scanning process [#6296](https://github.com/pingcap/tiflash/issues/6296) @[JinheLin](https://github.com/JinheLin) - Reduce the memory usage of TiFlash by up to 30% when there is no query [#6589](https://github.com/pingcap/tiflash/pull/6589) @[hongyunyan](https://github.com/hongyunyan) + Tools @@ -460,7 +461,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB Data Migration (DM) - - Optimize DM alert rules and content [#7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd** + - Optimize DM alert rules and 
content [#7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever a related error occurred. But some alerts are caused by idle database connections, which can be recovered after reconnecting. To reduce such alerts, DM divides errors into two types: automatically recoverable errors and unrecoverable errors: @@ -483,7 +484,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + sync-diff-inspector - - Add a new parameter `skip-non-existing-table` to control whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream [#692](https://github.com/pingcap/tidb-tools/issues/692) @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya94) **tw@shichun-0415** + - Add a new parameter `skip-non-existing-table` to control whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream [#692](https://github.com/pingcap/tidb-tools/issues/692) @[lichunzhu](https://github.com/lichunzhu) @[liumengya94](https://github.com/liumengya94) ## Bug fixes @@ -587,7 +588,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: + TiDB Lightning - - Fix the issue that TiDB Lightning timeout hangs due to TiDB restart in some scenarios [#33714](https://github.com/pingcap/tidb/issues/33714) @[lichunzhu](https://github.com/lichunzhu) **tw@shichun-0415** + - Fix the issue that TiDB Lightning timeout hangs due to TiDB restart in some scenarios [#33714](https://github.com/pingcap/tidb/issues/33714) @[lichunzhu](https://github.com/lichunzhu) - Fix the issue that TiDB Lightning might incorrectly skip conflict resolution when all but the last TiDB Lightning instance encounters a local duplicate record during a parallel import [#40923](https://github.com/pingcap/tidb/issues/40923)
@[lichunzhu](https://github.com/lichunzhu) - Fix the issue that precheck cannot accurately detect the presence of a running TiCDC in the target cluster [#41040](https://github.com/pingcap/tidb/issues/41040) @[lance6716](https://github.com/lance6716) - Fix the issue that TiDB Lightning panics in the split-region phase [#40934](https://github.com/pingcap/tidb/issues/40934) @[lance6716](https://github.com/lance6716) diff --git a/releases/release-notes.md b/releases/release-notes.md index 5cd429a23e46..916638eac353 100644 --- a/releases/release-notes.md +++ b/releases/release-notes.md @@ -5,6 +5,10 @@ aliases: ['/docs/dev/releases/release-notes/','/docs/dev/releases/rn/'] # TiDB Release Notes +## 6.6 + +- [6.6.0](/releases/release-6.6.0.md): 2023-02-20 + ## 6.5 - [6.5.0](/releases/release-6.5.0.md): 2022-12-29 diff --git a/releases/release-timeline.md b/releases/release-timeline.md index d149933b9ade..e184da4ce4ba 100644 --- a/releases/release-timeline.md +++ b/releases/release-timeline.md @@ -9,6 +9,7 @@ This document shows all the released TiDB versions in reverse chronological orde | Version | Release Date | | :--- | :--- | +| [6.6.0](/releases/release-6.6.0.md) | 2023-02-20 | | [6.1.4](/releases/release-6.1.4.md) | 2023-02-08 | | [6.5.0](/releases/release-6.5.0.md) | 2022-12-29 | | [5.1.5](/releases/release-5.1.5.md) | 2022-12-28 | From 219be2ee4ecedf26e1913d59be00e70e9d2c24ce Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 11:23:26 +0800 Subject: [PATCH 115/135] fix anchor --- releases/release-6.6.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 6be64b6f3633..5c92808d880c 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -315,7 +315,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Before v6.6.0, TiDB Lightning only supports accessing S3 data via AWS IAM **user's access 
keys** (each access key consists of an access key ID and a secret access key) so you cannot use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning supports accessing S3 data via AWS IAM **role's access keys + session tokens** as well to improve the data security. - For more information, see [documentation](/tidb-lightning-data-source.md#import-data-from-amazon-s3). + For more information, see [documentation](/tidb-lightning/tidb-lightning-data-source.md#import-data-from-amazon-s3). ### Telemetry @@ -396,7 +396,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | Newly added | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. | | PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | Newly added | The memory limit ratio for a PD instance. The value `0` means no memory limit. | | TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameter) | Newly added | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. | -| TiDB Lightning | [`compress-kv-pairs`](/tidb-lightning-configuration.md#tidb-lightning-task) | Newly added | Controls whether to enable compression when sending KV pairs to TiKV in the physical import mode. The default value is empty, meaning that the compression is not enabled. | +| TiDB Lightning | [`compress-kv-pairs`](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task) | Newly added | Controls whether to enable compression when sending KV pairs to TiKV in the physical import mode. The default value is empty, meaning that the compression is not enabled. 
| | DM | [`checksum-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE
` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. | | DM | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item sets the disk quota. It corresponds to the [`disk-quota` configuration](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620) of TiDB Lightning. | | DM | [`on-duplicate-logical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls how DM resolves conflicting data in the logical import mode. The default value is `"replace"`, which means using the new data to replace the existing data. | From 7ef6e1023f435402cc54354f014e5029241c4d0a Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 11:24:18 +0800 Subject: [PATCH 116/135] Update release-6.6.0.md --- releases/release-6.6.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 5c92808d880c..a694bd053df0 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -83,7 +83,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Tests indicate this reduces tail latency 40-60%. - For details, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). + For more information, see [documentation](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660). 
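The stable wake-up model above is controlled by a system variable. A minimal sketch of switching it on, using the `tidb_pessimistic_txn_aggressive_locking` variable from the system variable table later in these notes (applying it at `GLOBAL` scope is an assumption):

```sql
-- Enable the enhanced pessimistic lock wake-up model (default is OFF in v6.6.0)
SET GLOBAL tidb_pessimistic_txn_aggressive_locking = ON;

-- Confirm the new value
SHOW VARIABLES LIKE 'tidb_pessimistic_txn_aggressive_locking';
```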
* Batch aggregate data requests [#39361](https://github.com/pingcap/tidb/issues/39361) @[cfzjywxk](https://github.com/cfzjywxk) @[you06](https://github.com/you06) @@ -101,7 +101,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange. - For details, see [documentation](/explain-mpp.md#mpp-version-and-exchange-data-compression). + For more information, see [documentation](/explain-mpp.md#mpp-version-and-exchange-data-compression). * TiFlash supports the Stale Read feature [#4483](https://github.com/pingcap/tiflash/issues/4483) @[hehechen](https://github.com/hehechen) @@ -182,7 +182,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Introducing multi-valued indexes further enhances TiDB's support for the JSON data type and also improves TiDB's compatibility with MySQL 8.0. - For details, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index). + For more information, see [documentation](/sql-statements/sql-statement-create-index.md#multi-valued-index). 
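As a sketch of the multi-valued index usage described above (the table, column, and index names are hypothetical):

```sql
-- Create a multi-valued index on a JSON array (experimental in v6.6.0)
CREATE TABLE customers (
    id BIGINT PRIMARY KEY,
    info JSON,
    INDEX idx_tags ((CAST(info->'$.tags' AS UNSIGNED ARRAY)))
);

-- Filter conditions with MEMBER OF(), JSON_CONTAINS(), and JSON_OVERLAPS()
-- can use idx_tags instead of scanning the whole table
SELECT * FROM customers WHERE 10 MEMBER OF (info->'$.tags');
SELECT * FROM customers WHERE JSON_CONTAINS(info->'$.tags', '[2, 3]');
SELECT * FROM customers WHERE JSON_OVERLAPS(info->'$.tags', '[4, 5]');
```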
### DB operations From f20bfe01b36edb685b107b71f8f4496f80e0b1e0 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 11:27:46 +0800 Subject: [PATCH 117/135] refine wording --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index a694bd053df0..ed864bae1057 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -25,7 +25,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - + From 0d4001db3f8887e2f60ea5204b8e7966f78b9338 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 11:40:55 +0800 Subject: [PATCH 118/135] update a config desc --- releases/release-6.6.0.md | 4 ++-- tikv-configuration-file.md | 6 +++++- 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index ed864bae1057..c6341bc8b22c 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -224,7 +224,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: This feature is disabled by default. To enable it, you can set the `compress-kv-pairs` configuration item of TiDB Lightning to `"gzip"` or `"gz"`. - For more information, see [documentation](/tidb-lightning-configuration.md#tidb-lightning-task). + For more information, see [documentation](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task). 
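A sketch of a TiDB Lightning task configuration with KV-pair compression enabled; placing `compress-kv-pairs` in the `[tikv-importer]` section alongside the physical import backend is an assumption:

```toml
[tikv-importer]
# Physical import mode
backend = "local"
# Compress KV pairs sent to TiKV with gzip ("gz" is accepted as an alias);
# leaving the value empty keeps compression disabled
compress-kv-pairs = "gzip"
```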
* The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV [#48](https://github.com/tikv/migration/issues/48) @[zeminzhou](https://github.com/zeminzhou) @[haojinming](https://github.com/haojinming) @[pingyu](https://github.com/pingyu) @@ -379,7 +379,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | DM | `on-duplicate` | Deleted | This configuration item controls the methods to resolve conflicts during the full import phase. In v6.6.0, new configuration items `on-duplicate-logical` and `on-duplicate-physical` are introduced to replace `on-duplicate`. | | TiDB | [`enable-telemetry`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB. | | TiKV | [`rocksdb.defaultcf.block-size`](/tikv-configuration-file.md#block-size) and [`rocksdb.writecf.block-size`](/tikv-configuration-file.md#block-size) | Modified | The default values change from `64K` to `32K`. | -| TiKV | [`rocksdb.defaultcf.block-cache-size`](/tikv-configuration-file.md#block-cache-size), [`rocksdb.writecf.block-cache-size`](/tikv-configuration-file.md#block-cache-size), [`rocksdb.lockcf.block-cache-size`](/tikv-configuration-file.md#block-cache-size) | Modified | Starting from v6.6.0, these configuration items are only used for calculating the default value of `storage.block-cache.capacity`. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). | +| TiKV | [`rocksdb.defaultcf.block-cache-size`](/tikv-configuration-file.md#block-cache-size), [`rocksdb.writecf.block-cache-size`](/tikv-configuration-file.md#block-cache-size), [`rocksdb.lockcf.block-cache-size`](/tikv-configuration-file.md#block-cache-size) | Deprecated | Starting from v6.6.0, these configuration items are deprecated. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). 
| | PD | [`enable-telemetry`](/pd-configuration-file.md#enable-telemetry) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB Dashboard. | | DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | | TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from `0` to `0.8`, which means the limit is 80% of the total memory. | diff --git a/tikv-configuration-file.md b/tikv-configuration-file.md index 2202e431a059..6754223fa930 100644 --- a/tikv-configuration-file.md +++ b/tikv-configuration-file.md @@ -1262,7 +1262,11 @@ Configuration items related to `rocksdb.defaultcf`, `rocksdb.writecf`, and `rock ### `block-cache-size` -+ The cache size of a RocksDB block. Starting from v6.6.0, this configuration is only used to calculate the default value of `storage.block-cache.capacity`. +> **Warning:** +> +> Starting from v6.6.0, this configuration is deprecated. + ++ The cache size of a RocksDB block.
+ Default value for `defaultcf`: `Total machine memory * 25%` + Default value for `writecf`: `Total machine memory * 15%` + Default value for `lockcf`: `Total machine memory * 2%` From 5a825b3f6a932f32d2b41e76f5f508c5998d3431 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 11:54:28 +0800 Subject: [PATCH 119/135] fix anchors --- releases/release-6.6.0.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index c6341bc8b22c..e782ba1df319 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -128,7 +128,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs. - In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource-control.enabled`](/tikv-configuration-file.md#resource_control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO. + In v6.6, you need to enable both TiDB's global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) and the TiKV configuration item [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) to enable resource control. Currently, the supported quota method is based on "[Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO. For more information, see [documentation](/tidb-resource-control.md). 
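A minimal sketch of the RU-based resource control workflow described above. The group and user names are hypothetical, the exact statement syntax is an assumption based on the RU quota model, and, as noted, the TiKV configuration item `resource-control.enabled` must also be turned on:

```sql
-- Enable resource control on the TiDB side
SET GLOBAL tidb_enable_resource_control = ON;

-- Create a resource group with an RU quota and bind a user to it
CREATE RESOURCE GROUP IF NOT EXISTS app_rg RU_PER_SEC = 500;
ALTER USER 'app_user' RESOURCE GROUP app_rg;
```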
@@ -142,8 +142,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows: TiDB adds several optimizer hints in v6.6.0 to control the execution plan selection of `LIMIT` operations. - - [`ORDER_INDEX()`](/optimizer-hints.md#keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, to keep the order of the index when reading data, and generates plans similar to `Limit + IndexScan(keep order: true)`. - - [`NO_ORDER_INDEX()`](/optimizer-hints.md#no_keep_ordert1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, not to keep the order of the index when reading data, and generates plans similar to `TopN + IndexScan(keep order: false)`. + - [`ORDER_INDEX()`](/optimizer-hints.md#order_indext1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, to keep the order of the index when reading data, and generates plans similar to `Limit + IndexScan(keep order: true)`. + - [`NO_ORDER_INDEX()`](/optimizer-hints.md#no_order_indext1_name-idx1_name--idx2_name-): tells the optimizer to use the specified index, not to keep the order of the index when reading data, and generates plans similar to `TopN + IndexScan(keep order: false)`. Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance. @@ -160,7 +160,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - For TiDB clusters deployed across cloud regions, when a cloud region fails, the specified databases or tables can survive in another cloud region. - For TiDB clusters deployed in a single cloud region, when an availability zone fails, the specified databases or tables can survive in another availability zone. - For more information, see [documentation](/placement-rules-in-sql.md#survival-preference). + For more information, see [documentation](/placement-rules-in-sql.md#survival-preferences). 
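As a sketch of specifying `SURVIVAL_PREFERENCES` through Placement Rules in SQL (the policy and table names are hypothetical, and the exact label list is an assumption):

```sql
-- Prefer that replicas survive the loss of a cloud region first, then a zone
CREATE PLACEMENT POLICY multiaz SURVIVAL_PREFERENCES="[region, zone]";

-- Attach the policy to a table
CREATE TABLE orders (id BIGINT PRIMARY KEY) PLACEMENT POLICY=multiaz;
```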
* Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement [#14088](https://github.com/tikv/tikv/pull/14088) @[Defined2014](https://github.com/Defined2014) @[JmPotato](https://github.com/JmPotato) @@ -210,7 +210,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: Before v6.6.0, for large data volume scenarios, you were required to configure physical import tasks in TiDB Lightning separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning tasks; one DM task can accomplish the migration. - For more information, see [documentation](/dm/dm-precheck.md#physical-import-check-items). + For more information, see [documentation](/dm/dm-precheck.md#check-items-for-physical-import). * TiDB Lightning adds a new configuration parameter `"header-schema-match"` to address the issue of mismatched column names between the source file and the target table @[dsdashun](https://github.com/dsdashun) @@ -253,7 +253,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: By providing a user-friendly interface, this feature simplifies the process of binding plans in TiDB, reduces the operation complexity, and improves the efficiency and user experience of the plan binding process. - For more information, see [documentation](/dashboard/dashboard-statement-details.md#create-sql-binding). + For more information, see [documentation](/dashboard/dashboard-statement-details.md#fast-plan-binding). * Add warning for caching execution plans @[qw4990](https://github.com/qw4990) @@ -358,9 +358,9 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | [`mpp_version`](/system-variables.md#mpp_version-new-in-v660) | Newly added | This variable specifies the version of the MPP execution plan. 
After a version is specified, TiDB selects the specified version of the MPP execution plan. The default value `UNSPECIFIED` means that TiDB automatically selects the latest version `1`. | | [`tidb_ddl_distribute_reorg`](/system-variables.md#tidb_ddl_distribute_reorg-new-in-v660) | Newly added | This variable controls whether to enable distributed execution of the DDL reorg phase to accelerate this phase. The default value `OFF` means not to enable distributed execution of the DDL reorg phase by default. Currently, this variable takes effect only for `ADD INDEX`. | | [`tidb_enable_historical_stats_for_capture`](/system-variables.md#tidb_enable_historical_stats_for_capture) | Newly added | This variable controls whether the information captured by `PLAN REPLAYER CAPTURE` includes historical statistics by default. The default value `OFF` means that historical statistics are not included by default. | -| [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit--new-in-v660) | Newly added | This variable controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. | +| [`tidb_enable_plan_cache_for_param_limit`](/system-variables.md#tidb_enable_plan_cache_for_param_limit-new-in-v660) | Newly added | This variable controls whether Prepared Plan Cache caches execution plans that contain `COUNT` after `Limit`. The default value is `ON`, which means Prepared Plan Cache supports caching such execution plans. Note that Prepared Plan Cache does not support caching execution plans with a `COUNT` condition that counts a number greater than 10000. 
| | [`tidb_enable_plan_replayer_capture`](/system-variables.md#tidb_enable_plan_replayer_capture) | Newly added | This variable controls whether to enable the [`PLAN REPLAYER CAPTURE` feature](/sql-plan-replayer.md#use-plan-replayer-capture-to-capture-target-plans). The default value `OFF` means to disable the `PLAN REPLAYER CAPTURE` feature. | -| [`tidb_enable_resource_control`](/system-variables.md#tidb-tidb_enable_resource_control-new-in-v660) | Newly added | This variable controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. | +| [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) | Newly added | This variable controls whether to enable the resource control feature. The default value is `OFF`. When this variable is set to `ON`, the TiDB cluster supports resource isolation of applications based on resource groups. | | [`tidb_historical_stats_duration`](/system-variables.md#tidb_historical_stats_duration-new-in-v660) | Newly added | This variable controls how long the historical statistics are retained in storage. The default value is 7 days. | | [`tidb_index_join_double_read_penalty_cost_rate`](/system-variables.md#tidb_index_join_double_read_penalty_cost_rate-new-in-v660) | Newly added | This variable controls whether to add some penalty cost to the selection of index join. The default value `0` means that this feature is disabled by default. | | [`tidb_pessimistic_txn_aggressive_locking`](/system-variables.md#tidb_pessimistic_txn_aggressive_locking-new-in-v660) | Newly added | This variable controls whether to use enhanced pessimistic locking wake-up model for pessimistic transactions. The default value `OFF` means not to use such a wake-up model for pessimistic transactions by default. 
| @@ -395,7 +395,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | Newly added | The maximum memory threshold ratio for tuning GOGC. The default value is `0.6`. | | PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | Newly added | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. | | PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | Newly added | The memory limit ratio for a PD instance. The value `0` means no memory limit. | -| TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameter) | Newly added | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. | +| TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. | | TiDB Lightning | [`compress-kv-pairs`](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task) | Newly added | Controls whether to enable compression when sending KV pairs to TiKV in the physical import mode. The default value is empty, meaning that the compression is not enabled. | | DM | [`checksum-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE
` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. | | DM | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item sets the disk quota. It corresponds to the [`disk-quota` configuration](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620) of TiDB Lightning. | From 086d09449d4559cad65b16fee65d6bd952eacb2f Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 13:00:02 +0800 Subject: [PATCH 120/135] Apply suggestions from code review Co-authored-by: Aolin --- releases/release-6.6.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index e782ba1df319..25e1ee927703 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -28,7 +28,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - + @@ -408,7 +408,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Others -- Support dynamically modifying [`store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530). This facilitate more flexible TiKV performance tuning. +- Support dynamically modifying [`store-io-pool-size`](/tikv-configuration-file.md#store-io-pool-size-new-in-v530). This facilitates more flexible TiKV performance tuning. - Remove the limit on `LIMIT` clauses, thus improving the execution performance. - Starting from v6.6.0, BR does not support restoring data to clusters earlier than v6.1.0. - Starting from v6.6.0, TiDB no longer supports modifying column types on partitioned tables because of potential correctness issues. 
From 5c27329b38cdad551a389a227684dda148417454 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 13:17:56 +0800 Subject: [PATCH 121/135] update flashback doc --- sql-statements/sql-statement-flashback-to-timestamp.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sql-statements/sql-statement-flashback-to-timestamp.md b/sql-statements/sql-statement-flashback-to-timestamp.md index 3044b7e70144..39df0337b7c3 100644 --- a/sql-statements/sql-statement-flashback-to-timestamp.md +++ b/sql-statements/sql-statement-flashback-to-timestamp.md @@ -45,6 +45,7 @@ FlashbackToTimestampStmt ::= * At the time specified in the `FLASHBACK` statement, there cannot be a DDL statement that is not completely executed. If such a DDL exists, TiDB will reject it. * Before executing `FLASHBACK CLUSTER TO TIMESTAMP`, TiDB disconnects all related connections and prohibits read and write operations on these tables until the `FLASHBACK CLUSTER` statement is completed. * The `FLASHBACK CLUSTER TO TIMESTAMP` statement cannot be canceled after being executed. TiDB will keep retrying until it succeeds. +* During the execution of `FLASHBACK CLUSTER`, if you need to back up data, you can only use [Backup & Restore](/br/br-snapshot-guide.md) and specify a `BackupTS` that is earlier than the start time of `FLASHBACK CLUSTER`. In addition, during the execution of `FLASHBACK CLUSTER`, enabling [log backup](/br/br-pitr-guide.md) will fail. Therefore, try to enable log backup after `FLASHBACK CLUSTER` is completed. * If the `FLASHBACK CLUSTER` statement causes the rollback of metadata (table structure, database structure), the related modifications will **not** be replicated by TiCDC. Therefore, you need to pause the task manually, wait for the completion of `FLASHBACK CLUSTER`, and manually replicate the schema definitions of the upstream and downstream to make sure that they are consistent. After that, you need to recreate the TiCDC changefeed. 
## Example From e068d528343a9803576e84318150828f45888a2e Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 13:23:07 +0800 Subject: [PATCH 122/135] Update sql-statement-flashback-to-timestamp.md --- .../sql-statement-flashback-to-timestamp.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/sql-statements/sql-statement-flashback-to-timestamp.md b/sql-statements/sql-statement-flashback-to-timestamp.md index 39df0337b7c3..422e3a5be16c 100644 --- a/sql-statements/sql-statement-flashback-to-timestamp.md +++ b/sql-statements/sql-statement-flashback-to-timestamp.md @@ -40,6 +40,8 @@ FlashbackToTimestampStmt ::= SELECT * FROM mysql.tidb WHERE variable_name = 'tikv_gc_safe_point'; ``` + + * Only a user with the `SUPER` privilege can execute the `FLASHBACK CLUSTER` SQL statement. * `FLASHBACK CLUSTER` does not support rolling back DDL statements that modify PD-related information, such as `ALTER TABLE ATTRIBUTE`, `ALTER TABLE REPLICA`, and `CREATE PLACEMENT POLICY`. * At the time specified in the `FLASHBACK` statement, there cannot be a DDL statement that is not completely executed. If such a DDL exists, TiDB will reject it. @@ -48,6 +50,18 @@ FlashbackToTimestampStmt ::= * During the execution of `FLASHBACK CLUSTER`, if you need to back up data, you can only use [Backup & Restore](/br/br-snapshot-guide.md) and specify a `BackupTS` that is earlier than the start time of `FLASHBACK CLUSTER`. In addition, during the execution of `FLASHBACK CLUSTER`, enabling [log backup](/br/br-pitr-guide.md) will fail. Therefore, try to enable log backup after `FLASHBACK CLUSTER` is completed. * If the `FLASHBACK CLUSTER` statement causes the rollback of metadata (table structure, database structure), the related modifications will **not** be replicated by TiCDC. 
Therefore, you need to pause the task manually, wait for the completion of `FLASHBACK CLUSTER`, and manually replicate the schema definitions of the upstream and downstream to make sure that they are consistent. After that, you need to recreate the TiCDC changefeed. + + + +* Only a user with the `SUPER` privilege can execute the `FLASHBACK CLUSTER` SQL statement. +* `FLASHBACK CLUSTER` does not support rolling back DDL statements that modify PD-related information, such as `ALTER TABLE ATTRIBUTE`, `ALTER TABLE REPLICA`, and `CREATE PLACEMENT POLICY`. +* At the time specified in the `FLASHBACK` statement, there cannot be a DDL statement that is not completely executed. If such a DDL exists, TiDB will reject it. +* Before executing `FLASHBACK CLUSTER TO TIMESTAMP`, TiDB disconnects all related connections and prohibits read and write operations on these tables until the `FLASHBACK CLUSTER` statement is completed. +* The `FLASHBACK CLUSTER TO TIMESTAMP` statement cannot be canceled after being executed. TiDB will keep retrying until it succeeds. +* If the `FLASHBACK CLUSTER` statement causes the rollback of metadata (table structure, database structure), the related modifications will **not** be replicated by TiCDC. Therefore, you need to pause the task manually, wait for the completion of `FLASHBACK CLUSTER`, and manually replicate the schema definitions of the upstream and downstream to make sure that they are consistent. After that, you need to recreate the TiCDC changefeed. 
+ + + ## Example The following example shows how to restore the newly inserted data: From 7f4d8cf09db1dd49842edc39b4d248fdd5db0ea0 Mon Sep 17 00:00:00 2001 From: Ran Date: Mon, 20 Feb 2023 13:37:32 +0800 Subject: [PATCH 123/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 25e1ee927703..c97ad53bba89 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -240,10 +240,11 @@ In v6.6.0-DMR, the key new features and improvements are as follows: * [GORM](https://github.com/go-gorm/gorm) adds TiDB integration tests. Now TiDB is the default database supported by GORM. [#6014](https://github.com/go-gorm/gorm/pull/6014) @[Icemap](https://github.com/Icemap) - - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) adapts to the `AUTO_RANDOM` attribute of TiDB [#104](https://github.com/go-gorm/mysql/pull/104) @[Icemap](https://github.com/Icemap) - - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field cannot be modified during `AutoMigrate` [#105](https://github.com/go-gorm/mysql/pull/105) @[Icemap](https://github.com/Icemap) - - [GORM](https://github.com/go-gorm/gorm) adds TiDB as the default database [#6014](https://github.com/go-gorm/gorm/pull/6014) @[Icemap](https://github.com/Icemap) - - [GORM documentation](https://github.com/go-gorm/gorm.io) mentions TiDB as the default database [#638](https://github.com/go-gorm/gorm.io/pull/638) @[Icemap](https://github.com/Icemap) + - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) adapts to the `AUTO_RANDOM` attribute of TiDB [#104](https://github.com/go-gorm/mysql/pull/104) + - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field 
cannot be modified during `AutoMigrate` [#105](https://github.com/go-gorm/mysql/pull/105) + - [GORM documentation](https://github.com/go-gorm/gorm.io) mentions TiDB as the default database [#638](https://github.com/go-gorm/gorm.io/pull/638) + + For more information, see [GORM documentation](https://gorm.io/docs/index.html) ### Observability From a6d7911e8d406128bf0de3f06133c77256eb0b39 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 13:46:05 +0800 Subject: [PATCH 124/135] Update releases/release-6.6.0.md Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index c97ad53bba89..52119d9faf09 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -519,7 +519,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that a TTL task fails if the primary key of the table contains an `ENUM` column [#40456](https://github.com/pingcap/tidb/issues/40456) @[lcwangchao](https://github.com/lcwangchao) - Fix the issue that some DDL operations blocked by MDL cannot be queried in `mysql.tidb_mdl_view` [#40838](https://github.com/pingcap/tidb/issues/40838) @[YangKeao](https://github.com/YangKeao) - Fix the issue that data race might occur during DDL ingestion [#40970](https://github.com/pingcap/tidb/issues/40970) @[tangenta](https://github.com/tangenta) - - Fix the issue that TTL tasks might delete some data incorrectly after the time zone changes [41043](https://github.com/pingcap/tidb/issues/41043) @[lcwangchao](https://github.com/lcwangchao) + - Fix the issue that TTL tasks might delete some data incorrectly after the time zone changes [#41043](https://github.com/pingcap/tidb/issues/41043) @[lcwangchao](https://github.com/lcwangchao) - Fix the issue that `JSON_OBJECT` might report an error in some cases 
[#39806](https://github.com/pingcap/tidb/issues/39806) @[YangKeao](https://github.com/YangKeao) - Fix the issue that TiDB might deadlock during initialization [#40408](https://github.com/pingcap/tidb/issues/40408) @[Defined2014](https://github.com/Defined2014) - Fix the issue that the value of system variables might be incorrectly modified in some cases due to memory reuse [#40979](https://github.com/pingcap/tidb/issues/40979) @[lcwangchao](https://github.com/lcwangchao) From b30410c530cfb942677c5af7f5245456db102bba Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 13:49:48 +0800 Subject: [PATCH 125/135] Update releases/release-6.6.0.md Co-authored-by: Ran --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 52119d9faf09..368b82acb2d9 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -576,7 +576,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that `transaction_atomicity` and `protocol` cannot be updated via the configuration file [#7935](https://github.com/pingcap/tiflow/issues/7935) @[CharlesCheung96](https://github.com/CharlesCheung96) - Fix the issue that precheck is not performed on the storage path of redo log [#6335](https://github.com/pingcap/tiflow/issues/6335) @[CharlesCheung96](https://github.com/CharlesCheung96) - Fix the issue of insufficient duration that redo log can tolerate for S3 storage failure [#8089](https://github.com/pingcap/tiflow/issues/8089) @[CharlesCheung96](https://github.com/CharlesCheung96) - - 修复 changefeed 在 tikv、CDC 节点扩缩容特殊场景下卡住的问题。 [#8197](https://github.com/pingcap/tiflow/issues/8197) @[hicqu](https://github.com/hicqu) + - Fix the issue that changefeed might get stuck in special scenarios such as when scaling in or scaling out TiKV or TiCDC nodes https://github.com/pingcap/tiflow/issues/8174 
@[hicqu](https://github.com/hicqu) - Fix the issue of too high traffic among TiKV nodes [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - Fix the performance issues of TiCDC in terms of CPU usage, memory control, and throughput when the pull-based sink is enabled [#8142](https://github.com/pingcap/tiflow/issues/8142) [#8157](https://github.com/pingcap/tiflow/issues/8157) [#8001](https://github.com/pingcap/tiflow/issues/8001) [#5928](https://github.com/pingcap/tiflow/issues/5928) @[hicqu](https://github.com/hicqu) @[hi-rustin](https://github.com/hi-rustin) From abf3b44a6d6e545541c6cdc5807f5dfe991ea72f Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 13:52:28 +0800 Subject: [PATCH 126/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 368b82acb2d9..46e038fc4f5a 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -576,7 +576,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that `transaction_atomicity` and `protocol` cannot be updated via the configuration file [#7935](https://github.com/pingcap/tiflow/issues/7935) @[CharlesCheung96](https://github.com/CharlesCheung96) - Fix the issue that precheck is not performed on the storage path of redo log [#6335](https://github.com/pingcap/tiflow/issues/6335) @[CharlesCheung96](https://github.com/CharlesCheung96) - Fix the issue of insufficient duration that redo log can tolerate for S3 storage failure [#8089](https://github.com/pingcap/tiflow/issues/8089) @[CharlesCheung96](https://github.com/CharlesCheung96) - - Fix the issue that changefeed might get stuck in special scenarios such as when scaling in or scaling out TiKV or TiCDC nodes https://github.com/pingcap/tiflow/issues/8174 @[hicqu](https://github.com/hicqu) + - 
Fix the issue that changefeed might get stuck in special scenarios such as when scaling in or scaling out TiKV or TiCDC nodes [#8174](https://github.com/pingcap/tiflow/issues/8174) @[hicqu](https://github.com/hicqu) - Fix the issue of too high traffic among TiKV nodes [#14092](https://github.com/tikv/tikv/issues/14092) @[overvenus](https://github.com/overvenus) - Fix the performance issues of TiCDC in terms of CPU usage, memory control, and throughput when the pull-based sink is enabled [#8142](https://github.com/pingcap/tiflow/issues/8142) [#8157](https://github.com/pingcap/tiflow/issues/8157) [#8001](https://github.com/pingcap/tiflow/issues/8001) [#5928](https://github.com/pingcap/tiflow/issues/5928) @[hicqu](https://github.com/hicqu) @[hi-rustin](https://github.com/hi-rustin) From ceadb67491b416e28f1e73c603bac771f46fd70d Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 13:53:41 +0800 Subject: [PATCH 127/135] Update releases/release-6.6.0.md Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 46e038fc4f5a..cee66578450f 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -5,7 +5,7 @@ summary: Learn about the new features, compatibility changes, improvements, and # TiDB 6.6.0 Release Notes -Release date: Februrary 20, 2023 +Release date: February 20, 2023 TiDB version: 6.6.0-[DMR](/releases/versioning.md#development-milestone-releases) From ac10b9a77866249b76dba86b5d9906e78b7943aa Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 13:57:12 +0800 Subject: [PATCH 128/135] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 
cee66578450f..8f060ef6d79e 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -320,7 +320,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: ### Telemetry -- Starting from Februray 20, 2023, the [telemetry feature](/telemetry.md) is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0). If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see [TiDB Release Timeline](/releases/release-timeline.md). +- Starting from February 20, 2023, the [telemetry feature](/telemetry.md) is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0). If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see [TiDB Release Timeline](/releases/release-timeline.md). - Starting from v1.11.3, the telemetry feature is disabled by default in newly deployed TiUP. If you upgrade from a previous version of TiUP to v1.11.3 or a later version, the telemetry feature keeps the same status as before the upgrade. ## Compatibility changes @@ -381,7 +381,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | TiDB | [`enable-telemetry`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB. | | TiKV | [`rocksdb.defaultcf.block-size`](/tikv-configuration-file.md#block-size) and [`rocksdb.writecf.block-size`](/tikv-configuration-file.md#block-size) | Modified | The default values change from `64K` to `32K`. 
| | TiKV | [`rocksdb.defaultcf.block-cache-size`](/tikv-configuration-file.md#block-cache-size), [`rocksdb.writecf.block-cache-size`](/tikv-configuration-file.md#block-cache-size), [`rocksdb.lockcf.block-cache-size`](/tikv-configuration-file.md#block-cache-size) | Deprecated | Starting from v6.6.0, these configuration items are deprecated. For details, see [#12936](https://github.com/tikv/tikv/issues/12936). | -| PD | [`enable-telemetry`](/pd-configuration-file.md#enable-telemetry) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB Dasboard. | +| PD | [`enable-telemetry`](/pd-configuration-file.md#enable-telemetry) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB Dashboard. | | DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | | TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from `0` to `0.8`, which means the limit is 80% of the total memory. | | TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options are added for `scheme`, GCS, and Azure. 
| From 6c4f1113804ff84c47829ad6ff2097ec68adc5e6 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Mon, 20 Feb 2023 14:17:28 +0800 Subject: [PATCH 129/135] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.6.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 8f060ef6d79e..0dba8f4203a1 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -198,7 +198,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: For more information, see [documentation](/dynamic-config.md). -* Support specifying the SQL script executed upon TiDB cluster intialization [#35624](https://github.com/pingcap/tidb/issues/35624) @[morgo](https://github.com/morgo) +* Support specifying the SQL script executed upon TiDB cluster initialization [#35624](https://github.com/pingcap/tidb/issues/35624) @[morgo](https://github.com/morgo) When you start a TiDB cluster for the first time, you can specify the SQL script to be executed by configuring the command line parameter `--initialize-sql-file`. You can use this feature when you need to perform such operations as modifying the value of a system variable, creating a user, or granting privileges. @@ -464,7 +464,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Optimize DM alert rules and content [#7376](https://github.com/pingcap/tiflow/issues/7376) @[D3Hunter](https://github.com/D3Hunter) - Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever a related error occured. But some alerts are caused by idle database connections, which can be recovered after reconnecting. 
To reduce this kind of alerts, DM divides errors into two types: automatically recoverable errors and unrecoverable errors: + Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever a related error occurred. But some alerts are caused by idle database connections, which can be recovered after reconnecting. To reduce these kinds of alerts, DM divides errors into two types: automatically recoverable errors and unrecoverable errors: - For an error that is automatically recoverable, DM reports the alert only if the error occurs more than 3 times within 2 minutes. - For an error that is not automatically recoverable, DM maintains the original behavior and reports the alert immediately. @@ -496,7 +496,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Refine the error message reported when a column that a partitioned table depends on is deleted [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust) - Fix frequent write conflicts in transactions when performing DDL data backfill [#24427](https://github.com/pingcap/tidb/issues/24427) @[mjonss](https://github.com/mjonss) - Add a mechanism that `FLASHBACK CLUSTER` retries when it fails to check the `min-resolved-ts` [#39836](https://github.com/pingcap/tidb/issues/39836) @[Defined2014](https://github.com/Defined2014) - - Fix the issue that sometimes an index cannot be created for an empty table using injest mode [#39641](https://github.com/pingcap/tidb/issues/39641) @[tangenta](https://github.com/tangenta) + - Fix the issue that sometimes an index cannot be created for an empty table using ingest mode [#39641](https://github.com/pingcap/tidb/issues/39641) @[tangenta](https://github.com/tangenta) - Fix the issue that `wait_ts` in the slow query log is the same for different SQL statements within the same transaction [#39713](https://github.com/pingcap/tidb/issues/39713) @[TonsnakeLin](https://github.com/TonsnakeLin) - Fix the issue 
that the `Assertion Failed` error is reported when adding a column during the process of deleting a row record [#39570](https://github.com/pingcap/tidb/issues/39570) @[wjhuang2016](https://github.com/wjhuang2016) - Fix the issue that the `not a DDL owner` error is reported when modifying a column type [#39643](https://github.com/pingcap/tidb/issues/39643) @[zimulala](https://github.com/zimulala) From cd5bc9321f1841b648af9a13eea9abc49935a461 Mon Sep 17 00:00:00 2001 From: Aolin Date: Mon, 20 Feb 2023 14:32:28 +0800 Subject: [PATCH 130/135] move to improvements Signed-off-by: Aolin --- releases/release-6.6.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 0dba8f4203a1..0f340ab880dd 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -427,6 +427,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Add error messages for conflicts between optimizer hints and execution plan bindings [#40910](https://github.com/pingcap/tidb/issues/40910) @[Reminiscent](https://github.com/Reminiscent) - Optimize the plan cache strategy to avoid non-optimal plans when using plan cache in some scenarios [#40312](https://github.com/pingcap/tidb/pull/40312) [#40218](https://github.com/pingcap/tidb/pull/40218) [#40280](https://github.com/pingcap/tidb/pull/40280) [#41136](https://github.com/pingcap/tidb/pull/41136) [#40686](https://github.com/pingcap/tidb/pull/40686) @[qw4990](https://github.com/qw4990) - Clear expired region cache regularly to avoid memory leak and performance degradation [#40461](https://github.com/pingcap/tidb/issues/40461) @[sticnarf](https://github.com/sticnarf) + - `MODIFY COLUMN` is not supported on partitioned tables [#39915](https://github.com/pingcap/tidb/issues/39915) @[wjhuang2016](https://github.com/wjhuang2016) + - Disable renaming of columns that partition tables depend on [#40150](https://github.com/pingcap/tidb/issues/40150) 
@[mjonss](https://github.com/mjonss) + TiKV @@ -505,8 +507,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that data cannot be inserted into a renamed table when the generated expression includes the name of this table [#39826](https://github.com/pingcap/tidb/issues/39826) @[Defined2014](https://github.com/Defined2014) - Fix the issue that the `INSERT ignore` statement cannot fill in default values when the column is write-only [#40192](https://github.com/pingcap/tidb/issues/40192) @[YangKeao](https://github.com/YangKeao) - Fix the issue that resources are not released when disabling the resource management module [#40546](https://github.com/pingcap/tidb/issues/40546) @[zimulala](https://github.com/zimulala) - - `MODIFY COLUMN` is not supported on partitioned tables [#39915](https://github.com/pingcap/tidb/issues/39915) @[wjhuang2016](https://github.com/wjhuang2016) - - Disable renaming of columns that partition tables depend on [#40150](https://github.com/pingcap/tidb/issues/40150) @[mjonss](https://github.com/mjonss) - Fix the issue that TTL tasks cannot trigger statistics updates in time [#40109](https://github.com/pingcap/tidb/issues/40109) @[YangKeao](https://github.com/YangKeao) - Fix the issue that unexpected data is read because TiDB improperly handles `NULL` values when constructing key ranges [#40158](https://github.com/pingcap/tidb/issues/40158) @[tiancaiamao](https://github.com/tiancaiamao) - Fix the issue that illegal values are written to a table when the `MODIFT COLUMN` statement also changes the default value of a column [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016) From bd5c01c337b125e35bffed990063b72bc3770a9b Mon Sep 17 00:00:00 2001 From: Aolin Date: Mon, 20 Feb 2023 14:40:18 +0800 Subject: [PATCH 131/135] move to improvements Signed-off-by: Aolin --- releases/release-6.6.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git 
a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 0f340ab880dd..fc0fe310263a 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -429,6 +429,8 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Clear expired region cache regularly to avoid memory leak and performance degradation [#40461](https://github.com/pingcap/tidb/issues/40461) @[sticnarf](https://github.com/sticnarf) - `MODIFY COLUMN` is not supported on partitioned tables [#39915](https://github.com/pingcap/tidb/issues/39915) @[wjhuang2016](https://github.com/wjhuang2016) - Disable renaming of columns that partition tables depend on [#40150](https://github.com/pingcap/tidb/issues/40150) @[mjonss](https://github.com/mjonss) + - Refine the error message reported when a column that a partitioned table depends on is deleted [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust) + - Add a mechanism that `FLASHBACK CLUSTER` retries when it fails to check the `min-resolved-ts` [#39836](https://github.com/pingcap/tidb/issues/39836) @[Defined2014](https://github.com/Defined2014) + TiKV @@ -495,9 +497,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that a statistics collection task fails due to an incorrect `datetime` value [#39336](https://github.com/pingcap/tidb/issues/39336) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - Fix the issue that `stats_meta` is not created following table creation [#38189](https://github.com/pingcap/tidb/issues/38189) @[xuyifangreeneyes](https://github.com/xuyifangreeneyes) - - Refine the error message reported when a column that a partitioned table depends on is deleted [#38739](https://github.com/pingcap/tidb/issues/38739) @[jiyfhust](https://github.com/jiyfhust) - Fix frequent write conflicts in transactions when performing DDL data backfill [#24427](https://github.com/pingcap/tidb/issues/24427) @[mjonss](https://github.com/mjonss) - - 
Add a mechanism that `FLASHBACK CLUSTER` retries when it fails to check the `min-resolved-ts` [#39836](https://github.com/pingcap/tidb/issues/39836) @[Defined2014](https://github.com/Defined2014) - Fix the issue that sometimes an index cannot be created for an empty table using ingest mode [#39641](https://github.com/pingcap/tidb/issues/39641) @[tangenta](https://github.com/tangenta) - Fix the issue that `wait_ts` in the slow query log is the same for different SQL statements within the same transaction [#39713](https://github.com/pingcap/tidb/issues/39713) @[TonsnakeLin](https://github.com/TonsnakeLin) - Fix the issue that the `Assertion Failed` error is reported when adding a column during the process of deleting a row record [#39570](https://github.com/pingcap/tidb/issues/39570) @[wjhuang2016](https://github.com/wjhuang2016) From 2f468c55e56c8d19403928fac4a61e141a6520c4 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Mon, 20 Feb 2023 16:12:52 +0800 Subject: [PATCH 132/135] Apply suggestions from code review --- releases/release-6.6.0.md | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index fc0fe310263a..9f2985334e4f 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -244,7 +244,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - In v1.4.6, [GORM MySQL driver](https://github.com/go-gorm/mysql) fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field cannot be modified during `AutoMigrate` [#105](https://github.com/go-gorm/mysql/pull/105) - [GORM documentation](https://github.com/go-gorm/gorm.io) mentions TiDB as the default database [#638](https://github.com/go-gorm/gorm.io/pull/638) - For more information, see [GORM documentation](https://gorm.io/docs/index.html) + For more information, see [GORM 
documentation](https://gorm.io/docs/index.html). ### Observability @@ -384,28 +384,28 @@ In v6.6.0-DMR, the key new features and improvements are as follows: | PD | [`enable-telemetry`](/pd-configuration-file.md#enable-telemetry) | Modified | Starting from v6.6.0, the default value changes from `true` to `false`, which means that telemetry is disabled by default in TiDB Dashboard. | | DM | [`import-mode`](/dm/task-configuration-file-full.md) | Modified | The possible values of this configuration item are changed from `"sql"` and `"loader"` to `"logical"` and `"physical"`. The default value is `"logical"`, which means using TiDB Lightning's logical import mode to import data. | | TiFlash | [`profile.default.max_memory_usage_for_all_queries`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Specifies the memory usage limit for the generated intermediate data in all queries. Starting from v6.6.0, the default value changes from `0` to `0.8`, which means the limit is 80% of the total memory. | -| TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | The path under which redo log backup is stored. Two more value options are added for `scheme`, GCS, and Azure. | -| TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | Newly added | Specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. | -| TiDB | [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660) | Newly added | Controls whether to enable statements summary persistence. The default value is `false`, which means this feature is not enabled by default. | +| TiCDC | [`consistent.storage`](/ticdc/ticdc-sink-to-mysql.md#prerequisites) | Modified | This configuration item specifies the path under which redo log backup is stored. Two more value options are added for `scheme`, GCS, and Azure. 
| +| TiDB | [`initialize-sql-file`](/tidb-configuration-file.md#initialize-sql-file-new-in-v660) | Newly added | This configuration item specifies the SQL script to be executed when the TiDB cluster is started for the first time. The default value is empty. | +| TiDB | [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660) | Newly added | This configuration item controls whether to enable statements summary persistence. The default value is `false`, which means this feature is not enabled by default. | | TiDB | [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660) | Newly added | When statements summary persistence is enabled, this configuration specifies the maximum number of data files that can be persisted. `0` means no limit on the number of files. | | TiDB | [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660) | Newly added | When statements summary persistence is enabled, this configuration specifies the maximum number of days to keep persistent data files. | | TiDB | [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660) | Newly added | When statements summary persistence is enabled, this configuration specifies the maximum size of a persistent data file (in MiB). | | TiDB | [`tidb_stmt_summary_filename`](/tidb-configuration-file.md#tidb_stmt_summary_filename-new-in-v660) | Newly added | When statements summary persistence is enabled, this configuration specifies the file to which persistent data is written. | | TiKV | [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) | Newly added | Whether to enable scheduling for user foreground read/write requests according to the Request Unit (RU) of the corresponding resource groups. 
The default value is `false`, which means to disable scheduling according to the RU of the corresponding resource groups. | -| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | Newly added | Controls whether to enable the GOGC tuner, which is disabled by default. | -| PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | Newly added | The maximum memory threshold ratio for tuning GOGC. The default value is `0.6`. | -| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | Newly added | The threshold ratio at which PD tries to trigger GC. The default value is `0.7`. | -| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | Newly added | The memory limit ratio for a PD instance. The value `0` means no memory limit. | -| TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. | -| TiDB Lightning | [`compress-kv-pairs`](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task) | Newly added | Controls whether to enable compression when sending KV pairs to TiKV in the physical import mode. The default value is empty, meaning that the compression is not enabled. | +| PD | [`pd-server.enable-gogc-tuner`](/pd-configuration-file.md#enable-gogc-tuner-new-in-v660) | Newly added | This configuration item controls whether to enable the GOGC tuner, which is disabled by default. | +| PD | [`pd-server.gc-tuner-threshold`](/pd-configuration-file.md#gc-tuner-threshold-new-in-v660) | Newly added | This configuration item specifies the maximum memory threshold ratio for tuning GOGC. The default value is `0.6`. 
| +| PD | [`pd-server.server-memory-limit-gc-trigger`](/pd-configuration-file.md#server-memory-limit-gc-trigger-new-in-v660) | Newly added | This configuration item specifies the threshold ratio at which PD tries to trigger GC. The default value is `0.7`. | +| PD | [`pd-server.server-memory-limit`](/pd-configuration-file.md#server-memory-limit-new-in-v660) | Newly added | This configuration item specifies the memory limit ratio for a PD instance. The value `0` means no memory limit. | +| TiCDC | [`scheduler.region-per-span`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | This configuration item controls whether to split a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes. The default value is `50000`. | +| TiDB Lightning | [`compress-kv-pairs`](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task) | Newly added | This configuration item controls whether to enable compression when sending KV pairs to TiKV in the physical import mode. The default value is empty, meaning that the compression is not enabled. | | DM | [`checksum-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls whether DM performs `ADMIN CHECKSUM TABLE
TiKV introduces the Partitioned-Raft-KV storage engine, in which each Region uses an independent RocksDB instance. This engine can easily expand the storage capacity of the cluster from TB-level to PB-level, and provides more stable write latency and stronger scalability.
TiKV supports batch aggregating data requests. This enhancement significantly reduces total RPCs in TiKV batch-get operations. In situations where data is highly dispersed and the gRPC thread pool has insufficient resources, batching coprocessor requests can improve performance by more than 50%.
` for each table to verify data integrity after the import. The default value is `"required"`, which performs admin checksum after the import. If checksum fails, DM pauses the task and you need to manually handle the failure. | | DM | [`disk-quota-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item sets the disk quota. It corresponds to the [`disk-quota` configuration](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620) of TiDB Lightning. | | DM | [`on-duplicate-logical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls how DM resolves conflicting data in the logical import mode. The default value is `"replace"`, which means using the new data to replace the existing data. | | DM | [`on-duplicate-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item controls how DM resolves conflicting data in the physical import mode. The default value is `"none"`, which means not resolving conflicting data. `"none"` has the best performance, but might lead to inconsistent data in the downstream database. | -| DM | [`sorting-dir-physical`](/dm/task-configuration-file-full.md) | Newly added | The directory used for local KV sorting in the physical import mode. The default value is the same as the `dir` configuration. | -| sync-diff-inspector | [`skip-non-existing-table`](/sync-diff-inspector/sync-diff-inspector-overview.md#configuration-file-description) | Newly added | Controls whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream. | -| TiSpark | [`spark.tispark.replica_read`](/tispark-overview.md#tispark-configurations) | Newly added | Controls the type of replicas to be read. The value options are `leader`, `follower`, and `learner`. 
| -| TiSpark | [`spark.tispark.replica_read.label`](/tispark-overview.md#tispark-configurations) | Newly added | Sets labels for the target TiKV node. | +| DM | [`sorting-dir-physical`](/dm/task-configuration-file-full.md) | Newly added | This configuration item specifies the directory used for local KV sorting in the physical import mode. The default value is the same as the `dir` configuration. | +| sync-diff-inspector | [`skip-non-existing-table`](/sync-diff-inspector/sync-diff-inspector-overview.md#configuration-file-description) | Newly added | This configuration item controls whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream. | +| TiSpark | [`spark.tispark.replica_read`](/tispark-overview.md#tispark-configurations) | Newly added | This configuration item controls the type of replicas to be read. The value options are `leader`, `follower`, and `learner`. | +| TiSpark | [`spark.tispark.replica_read.label`](/tispark-overview.md#tispark-configurations) | Newly added | This configuration item is used to set labels for the target TiKV node. 
| ### Others From 1f667e4a831a6cdad5e1cacf7a89c7fc6e8b8949 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 16:21:14 +0800 Subject: [PATCH 133/135] Update sql-statements/sql-statement-flashback-to-timestamp.md Co-authored-by: Grace Cai --- sql-statements/sql-statement-flashback-to-timestamp.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sql-statements/sql-statement-flashback-to-timestamp.md b/sql-statements/sql-statement-flashback-to-timestamp.md index 422e3a5be16c..3843499c35fc 100644 --- a/sql-statements/sql-statement-flashback-to-timestamp.md +++ b/sql-statements/sql-statement-flashback-to-timestamp.md @@ -51,6 +51,7 @@ FlashbackToTimestampStmt ::= * If the `FLASHBACK CLUSTER` statement causes the rollback of metadata (table structure, database structure), the related modifications will **not** be replicated by TiCDC. Therefore, you need to pause the task manually, wait for the completion of `FLASHBACK CLUSTER`, and manually replicate the schema definitions of the upstream and downstream to make sure that they are consistent. After that, you need to recreate the TiCDC changefeed. + * Only a user with the `SUPER` privilege can execute the `FLASHBACK CLUSTER` SQL statement. 
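The `FLASHBACK CLUSTER` workflow above (pause the changefeed, wait for the flashback to finish, then recreate the changefeed) and the `min-resolved-ts` retry fix listed earlier both reduce to the same pattern: polling a precondition with backoff. The sketch below is a hypothetical illustration of that pattern only, not TiDB's actual implementation; the function names and timings are assumptions:

```python
import time

def retry_until(check, max_retries=5, backoff_s=0.01):
    """Retry a precondition check with linear backoff.

    `check` is any callable returning True once the precondition
    (for example, a min-resolved-ts check succeeding) holds.
    Returns True if the check passed within `max_retries` attempts,
    False otherwise.
    """
    for attempt in range(1, max_retries + 1):
        if check():
            return True
        time.sleep(backoff_s * attempt)  # back off before the next attempt
    return False

# Usage: a hypothetical check that fails twice before succeeding.
attempts = {"n": 0}
def flaky_check():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(retry_until(flaky_check))  # → True (succeeds on the third attempt)
```

A bounded retry like this is why the operation fails fast with an error, instead of hanging, when the precondition never becomes true.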
From b552fe3a6580c47ec3cff0df2f2968b4e0970923 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 16:23:58 +0800 Subject: [PATCH 134/135] Update _index.md --- _index.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/_index.md b/_index.md index 22e9dbed9af0..9160aae81354 100644 --- a/_index.md +++ b/_index.md @@ -71,9 +71,7 @@ hide_commit: true [Scale a Cluster](https://docs.pingcap.com/tidb/dev/scale-tidb-using-tiup) -[Back Up Cluster Data](https://docs.pingcap.com/tidb/dev/br-usage-backup) - -[Restore Cluster Data](https://docs.pingcap.com/tidb/dev/br-usage-restore) +[Back Up and Restore Cluster Data](https://docs.pingcap.com/tidb/dev/backup-and-restore-overview) [Daily Check](https://docs.pingcap.com/tidb/dev/daily-check) From 314d69e20d491621fda006038f5fc6c19a246113 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 20 Feb 2023 16:24:39 +0800 Subject: [PATCH 135/135] Update releases/release-6.6.0.md --- releases/release-6.6.0.md | 1 - 1 file changed, 1 deletion(-) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index 9f2985334e4f..055e253c0b55 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -444,7 +444,6 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Support managing the global memory threshold to alleviate the OOM problem (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes) - Add the GC Tuner to alleviate the GC pressure (experimental) [#5827](https://github.com/tikv/pd/issues/5827) @[hnes](https://github.com/hnes) - - Add the `balance-witness-scheduler` scheduler to schedule witness [#5763](https://github.com/tikv/pd/pull/5763) @[ethercflow](https://github.com/ethercflow) - Add the `evict-slow-trend-scheduler` scheduler to detect and schedule abnormal nodes [#5808](https://github.com/tikv/pd/pull/5808) 
@[innerr](https://github.com/innerr)
    - Add the keyspace manager to manage keyspace [#5293](https://github.com/tikv/pd/issues/5293) @[AmoebaProtozoa](https://github.com/AmoebaProtozoa)
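The `resource-control.enabled` item in the configuration table above schedules requests according to the Request Unit (RU) quota of their resource group. One minimal way to picture RU admission is a token bucket per group: each request consumes RUs, and the bucket refills at the group's configured rate. The sketch below is a simplified model under assumed semantics, not TiKV's implementation:

```python
class ResourceGroup:
    """Minimal RU token bucket: `fill_rate` RUs accrue per second, capped at `burst`."""

    def __init__(self, fill_rate, burst):
        self.fill_rate = fill_rate
        self.burst = burst
        self.tokens = burst  # start with a full bucket
        self.last = 0.0

    def try_consume(self, ru_cost, now):
        # Refill based on elapsed time, then admit the request only if
        # enough RUs are available; otherwise the caller must wait or queue.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= ru_cost:
            self.tokens -= ru_cost
            return True
        return False

g = ResourceGroup(fill_rate=100, burst=50)  # hypothetical: 100 RU/s, 50 RU burst
print(g.try_consume(40, now=0.0))  # → True  (burst covers it)
print(g.try_consume(40, now=0.0))  # → False (only 10 RU left, no time elapsed)
print(g.try_consume(40, now=1.0))  # → True  (refilled back to the 50 RU cap)
```

Here `fill_rate` plays the role of a group's RU-per-second quota; real RU accounting also weighs CPU and I/O consumption, which this sketch deliberately ignores.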