
tidb_enable_async_merge_global_stats cannot work correctly #53972

Closed
Rustin170506 opened this issue Jun 12, 2024 · 3 comments · Fixed by #53977
@Rustin170506
Member

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

  1. Start TiDB with the following logging patch applied:
diff --git a/pkg/statistics/handle/globalstats/global_stats.go b/pkg/statistics/handle/globalstats/global_stats.go
index 78ee277659..9a814ca4ca 100644
--- a/pkg/statistics/handle/globalstats/global_stats.go
+++ b/pkg/statistics/handle/globalstats/global_stats.go
@@ -103,6 +103,7 @@ func MergePartitionStats2GlobalStats(
        histIDs []int64,
 ) (globalStats *GlobalStats, err error) {
        if sc.GetSessionVars().EnableAsyncMergeGlobalStats {
+               logutil.BgLogger().Info("use async merge global stats", zap.String("table", globalTableInfo.Name.L))
                worker, err := NewAsyncMergePartitionStats2GlobalStats(statsHandle, globalTableInfo, histIDs, is)
                if err != nil {
                        return nil, errors.Trace(err)
@@ -113,6 +114,7 @@ func MergePartitionStats2GlobalStats(
                }
                return worker.Result(), nil
        }
+       logutil.BgLogger().Info("use blocking merge global stats", zap.String("table", globalTableInfo.Name.L))
        return blockingMergePartitionStats2GlobalStats(sc, statsHandle.GPool(), opts, is, globalTableInfo, isIndex, histIDs, nil, statsHandle)
 }
  2. Create a database and a partitioned table:
CREATE DATABASE IF NOT EXISTS partition_example;
USE partition_example;
CREATE TABLE example_table (
    id INT NOT NULL,
    name VARCHAR(50),
    PRIMARY KEY (id)
)
PARTITION BY RANGE (id) (
    PARTITION p0 VALUES LESS THAN (2),
    PARTITION p1 VALUES LESS THAN (4),
    PARTITION p2 VALUES LESS THAN (6),
    PARTITION p3 VALUES LESS THAN (8),
    PARTITION p4 VALUES LESS THAN MAXVALUE
);
  3. Insert 3000 rows:
import { Client } from "https://deno.land/x/mysql/mod.ts";

const client = await new Client().connect({
  hostname: "127.0.0.1",
  port: 4000,
  username: "root",
  db: "partition_example",
  password: "",
});

async function insertData() {
  for (let i = 1; i <= 3000; i++) {
    const name = `Name${i}`;
    await client.execute(`INSERT INTO example_table (id, name) VALUES (?, ?)`, [
      i,
      name,
    ]);
  }
}

await insertData();

await client.close();

console.log("Inserted 3000 rows into example_table");
  4. Check the logs.

2. What did you expect to see? (Required)

use async merge global stats

3. What did you see instead (Required)

[2024/06/12 19:38:12.833 +08:00] [INFO] [save.go:221] ["directly update count"] [category=stats] [tableID=109] [results.Count=2993] [count=2993]
[2024/06/12 19:38:12.960 +08:00] [INFO] [analyze.go:744] ["analyze table `partition_example`.`example_table` has finished"] [partition=p4] ["job info"="auto analyze table all columns with 256 buckets, 500 topn, 1 samplerate"] ["start time"=2024/06/12 19:38:12.820 +08:00] ["end time"=2024/06/12 19:38:12.958 +08:00] [cost=137.905ms] ["sample rate reason"="use min(1, 110000/2993) as the sample-rate=1"]
[2024/06/12 19:38:12.963 +08:00] [INFO] [global_stats.go:117] ["use blocking merge global stats"] [table=example_table]

4. What is your TiDB version? (Required)

v7.5.1 with patch

@hawkingrei
Member

tidb_analyze_partition_concurrency has the same problem.

@Rustin170506
Member Author

Rustin170506 commented Jun 12, 2024

A quick fix:

diff --git a/pkg/executor/analyze_global_stats.go b/pkg/executor/analyze_global_stats.go
index 59d0163852..5e5e65efbc 100644
--- a/pkg/executor/analyze_global_stats.go
+++ b/pkg/executor/analyze_global_stats.go
@@ -76,8 +76,8 @@ func (e *AnalyzeExec) handleGlobalStats(ctx context.Context, globalStatsMap glob
                                        }
                                }
                                globalStatsI, err := statsHandle.MergePartitionStats2GlobalStatsByTableID(
-                                       e.Ctx(),
-                                       globalOpts, e.Ctx().GetInfoSchema().(infoschema.InfoSchema),
+                                       globalOpts,
+                                       e.Ctx().GetInfoSchema().(infoschema.InfoSchema),
                                        globalStatsID.tableID,
                                        info.isIndex == 1,
                                        info.histIDs,
diff --git a/pkg/statistics/handle/globalstats/global_stats.go b/pkg/statistics/handle/globalstats/global_stats.go
index 78ee277659..2a180d8add 100644
--- a/pkg/statistics/handle/globalstats/global_stats.go
+++ b/pkg/statistics/handle/globalstats/global_stats.go
@@ -47,13 +47,25 @@ func NewStatsGlobal(statsHandler util.StatsHandle) util.StatsGlobal {
 }
 
 // MergePartitionStats2GlobalStatsByTableID merge the partition-level stats to global-level stats based on the tableID.
-func (sg *statsGlobalImpl) MergePartitionStats2GlobalStatsByTableID(sc sessionctx.Context,
-       opts map[ast.AnalyzeOptionType]uint64, is infoschema.InfoSchema,
+func (sg *statsGlobalImpl) MergePartitionStats2GlobalStatsByTableID(
+       opts map[ast.AnalyzeOptionType]uint64,
+       is infoschema.InfoSchema,
        physicalID int64,
        isIndex bool,
        histIDs []int64,
 ) (globalStats interface{}, err error) {
-       return MergePartitionStats2GlobalStatsByTableID(sc, sg.statsHandler, opts, is, physicalID, isIndex, histIDs)
+       err = util.CallWithSCtx(sg.statsHandler.SPool(), func(sctx sessionctx.Context) error {
+               g, err := MergePartitionStats2GlobalStatsByTableID(sctx, sg.statsHandler, opts, is, physicalID, isIndex, histIDs)
+               if err != nil {
+                       return err
+               }
+               globalStats = g
+               return nil
+       })
+       if err != nil {
+               return nil, err
+       }
+       return
 }
 
 // UpdateGlobalStats will trigger the merge of global-stats when we drop table partition
@@ -103,6 +115,7 @@ func MergePartitionStats2GlobalStats(
        histIDs []int64,
 ) (globalStats *GlobalStats, err error) {
        if sc.GetSessionVars().EnableAsyncMergeGlobalStats {
+               logutil.BgLogger().Info("use async merge global stats", zap.String("table", globalTableInfo.Name.L))
                worker, err := NewAsyncMergePartitionStats2GlobalStats(statsHandle, globalTableInfo, histIDs, is)
                if err != nil {
                        return nil, errors.Trace(err)
@@ -113,6 +126,7 @@ func MergePartitionStats2GlobalStats(
                }
                return worker.Result(), nil
        }
+       logutil.BgLogger().Info("use blocking merge global stats", zap.String("table", globalTableInfo.Name.L))
        return blockingMergePartitionStats2GlobalStats(sc, statsHandle.GPool(), opts, is, globalTableInfo, isIndex, histIDs, nil, statsHandle)
 }
 
diff --git a/pkg/statistics/handle/util/interfaces.go b/pkg/statistics/handle/util/interfaces.go
index 6086a330a2..a062b973f8 100644
--- a/pkg/statistics/handle/util/interfaces.go
+++ b/pkg/statistics/handle/util/interfaces.go
@@ -300,8 +300,9 @@ type StatsReadWriter interface {
 // StatsGlobal is used to manage partition table global stats.
 type StatsGlobal interface {
        // MergePartitionStats2GlobalStatsByTableID merges partition stats to global stats by table ID.
-       MergePartitionStats2GlobalStatsByTableID(sctx sessionctx.Context,
-               opts map[ast.AnalyzeOptionType]uint64, is infoschema.InfoSchema,
+       MergePartitionStats2GlobalStatsByTableID(
+               opts map[ast.AnalyzeOptionType]uint64,
+               is infoschema.InfoSchema,
                physicalID int64,
                isIndex bool,
                histIDs []int64,
diff --git a/pkg/statistics/handle/util/util.go b/pkg/statistics/handle/util/util.go
index 61adcf8e21..f77c5fb197 100644
--- a/pkg/statistics/handle/util/util.go
+++ b/pkg/statistics/handle/util/util.go
@@ -114,6 +114,12 @@ func CallWithSCtx(pool SessionPool, f func(sctx sessionctx.Context) error, flags
 
 // UpdateSCtxVarsForStats updates all necessary variables that may affect the behavior of statistics.
 func UpdateSCtxVarsForStats(sctx sessionctx.Context) error {
+       // async
+       enableAsyncMergeGlobalStats, err := sctx.GetSessionVars().GlobalVarsAccessor.GetGlobalSysVar(variable.TiDBEnableAsyncMergeGlobalStats)
+       if err != nil {
+               return err
+       }
+       sctx.GetSessionVars().EnableAsyncMergeGlobalStats = variable.TiDBOptOn(enableAsyncMergeGlobalStats)
        // analyzer version
        verInString, err := sctx.GetSessionVars().GlobalVarsAccessor.GetGlobalSysVar(variable.TiDBAnalyzeVersion)
        if err != nil {

But I don't know why it cannot work with the old approach.

@Rustin170506
Member Author

But I don't know why it cannot work with the old approach.

#45202 (comment) explains why.

// HandleAutoAnalyze analyzes the newly created table or index.
func (sa *statsAnalyze) HandleAutoAnalyze(is infoschema.InfoSchema) (analyzed bool) {
	_ = statsutil.CallWithSCtx(sa.statsHandle.SPool(), func(sctx sessionctx.Context) error {
		enabled := sctx.GetSessionVars().EnableAsyncMergeGlobalStats
		logutil.BgLogger().Info("EnableAsyncMergeGlobalStats", zap.Bool("EnableAsyncMergeGlobalStats", enabled))
		analyzed = HandleAutoAnalyze(sctx, sa.statsHandle, is)
		return nil
	})
	return
}

We used this pooled session context to execute the auto-analyze, so we missed this variable: the session's snapshot was never refreshed from the global sysvar.
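The root cause can be sketched as follows. This is a minimal, self-contained illustration (the names `sessionVars`, `asyncMergeEnabled`, and `refreshStatsVars` are invented for this sketch, not TiDB's real API): a pooled internal session keeps whatever variable snapshot it was created with, so a global sysvar value is invisible to it until something explicitly re-reads the global, which is what adding `tidb_enable_async_merge_global_stats` to `UpdateSCtxVarsForStats` does.

```go
package main

import "fmt"

// globalSysVars stands in for the cluster-wide system-variable store.
var globalSysVars = map[string]string{
	"tidb_enable_async_merge_global_stats": "ON",
}

// sessionVars mimics a per-session snapshot of variables.
type sessionVars struct {
	EnableAsyncMergeGlobalStats bool
}

// asyncMergeEnabled reads the flag from a global-variable map, the way
// the patched UpdateSCtxVarsForStats re-reads the sysvar on each call.
func asyncMergeEnabled(globals map[string]string) bool {
	return globals["tidb_enable_async_merge_global_stats"] == "ON"
}

// refreshStatsVars mimics the fix: refresh the session snapshot from
// the global store before running a stats operation.
func refreshStatsVars(sv *sessionVars) {
	sv.EnableAsyncMergeGlobalStats = asyncMergeEnabled(globalSysVars)
}

func main() {
	// A pooled internal session never had the flag populated, so it is
	// false and the blocking merge path is chosen even though the
	// global sysvar is ON.
	sess := &sessionVars{}
	fmt.Println("before refresh:", sess.EnableAsyncMergeGlobalStats) // before refresh: false

	// With the fix, each stats call refreshes the snapshot first, so
	// the async merge path is chosen.
	refreshStatsVars(sess)
	fmt.Println("after refresh:", sess.EnableAsyncMergeGlobalStats) // after refresh: true
}
```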
