
Commit

Merge branch 'master' of github.com:pingcap/tidb
yuqi1129 committed Jul 9, 2021
2 parents 5c2c38c + f0f7091 commit f0ed397
Showing 142 changed files with 4,550 additions and 1,261 deletions.
12 changes: 6 additions & 6 deletions CHANGELOG.md
@@ -959,12 +959,12 @@ Enable hash partition by default; and enable range columns partition when there
- Control whether to open the `general log`
- Support modifying the log level online
- Check the TiDB cluster information
* [Add the `auto_analyze_ratio` system variable to control the ratio of Analyze](https://pingcap.com/docs/FAQ/#whats-the-trigger-strategy-for-auto-analyze-in-tidb)
* [Add the `tidb_retry_limit` system variable to control the number of automatic transaction retries](https://pingcap.com/docs/sql/tidb-specific/#tidb-retry-limit)
* [Add the `tidb_disable_txn_auto_retry` system variable to control whether transactions retry automatically](https://pingcap.com/docs/sql/tidb-specific/#tidb-disable-txn-auto-retry)
* [Support using the `admin show slow` statement to obtain slow queries](https://pingcap.com/docs/sql/slow-query/#admin-show-slow-command)
* [Add the `tidb_slow_log_threshold` system variable to set the slow log threshold dynamically](https://pingcap.com/docs/sql/tidb-specific/#tidb_slow_log_threshold)
* [Add the `tidb_query_log_max_len` system variable to dynamically set the length at which SQL statements are truncated in the log](https://pingcap.com/docs/sql/tidb-specific/#tidb_query_log_max_len)
* [Add the `auto_analyze_ratio` system variable to control the ratio of Analyze](https://docs.pingcap.com/tidb/stable/sql-faq#whats-the-trigger-strategy-for-auto-analyze-in-tidb)
* [Add the `tidb_retry_limit` system variable to control the number of automatic transaction retries](https://docs.pingcap.com/tidb/stable/system-variables#tidb_retry_limit)
* [Add the `tidb_disable_txn_auto_retry` system variable to control whether transactions retry automatically](https://docs.pingcap.com/tidb/stable/system-variables/#tidb_disable_txn_auto_retry)
* [Support using the `admin show slow` statement to obtain slow queries](https://docs.pingcap.com/tidb/stable/identify-slow-queries/#admin-show-slow-command)
* [Add the `tidb_slow_log_threshold` system variable to set the slow log threshold dynamically](https://docs.pingcap.com/tidb/stable/system-variables#tidb_slow_log_threshold)
* [Add the `tidb_query_log_max_len` system variable to dynamically set the length at which SQL statements are truncated in the log](https://docs.pingcap.com/tidb/stable/system-variables#tidb_query_log_max_len)

### DDL
Support the parallel execution of the add index statement and other statements to avoid the time-consuming add index operation blocking other operations
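
For quick reference (an editorial addition, not part of this commit's diff): the system variables in the CHANGELOG entries above are ordinary session variables. A minimal, illustrative sketch of how a few of them might be adjusted, and how `admin show slow` is invoked; the values are arbitrary examples, not recommendations:

```sql
-- Illustrative values only; tune thresholds to your workload.
SET tidb_retry_limit = 10;              -- cap automatic transaction retries
SET tidb_disable_txn_auto_retry = 1;    -- 1 disables automatic retry, 0 enables it
SET tidb_slow_log_threshold = 300;      -- log statements slower than 300 ms
SET tidb_query_log_max_len = 2048;      -- truncate logged SQL text at 2048 bytes
ADMIN SHOW SLOW RECENT 10;              -- list the 10 most recent slow queries
```
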
2 changes: 1 addition & 1 deletion Makefile
@@ -241,7 +241,7 @@ tools/bin/errdoc-gen: tools/check/go.mod
$(GO) build -o ../bin/errdoc-gen github.com/pingcap/errors/errdoc-gen

tools/bin/golangci-lint:
curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b ./tools/bin v1.29.0
curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b ./tools/bin v1.41.1

# Usage:
#
33 changes: 33 additions & 0 deletions SECURITY.md
@@ -0,0 +1,33 @@
# Security Vulnerability Disclosure and Response Process

TiDB is a fast-growing open-source database. To ensure its security, we have adopted the following security vulnerability disclosure and response process.

The primary goal of this process is to reduce the total exposure time of users to publicly known vulnerabilities. To quickly fix vulnerabilities of TiDB products, the security team is responsible for the entire vulnerability management process, including internal communication and external disclosure.

If you find a vulnerability or encounter a security incident involving vulnerabilities of TiDB products, please report it as soon as possible to the TiDB security team ([email protected]).

Please provide as much information about the vulnerability as possible, in the following format:

- Issue title*:

- Overview*:

- Affected components and version number*:

- CVE number (if any):

- Vulnerability verification process*:

- Contact information*:

The asterisk (*) indicates a required field.

# Response Time

The TiDB security team will confirm the vulnerabilities and contact you within 2 working days after your submission.

We will publicly thank you after fixing the security vulnerability. To avoid negative impact, please keep the vulnerability confidential until we fix it. We would appreciate it if you could follow this code of conduct:

The vulnerability will not be disclosed until TiDB releases a patch for it.

The details of the vulnerability, for example, exploit code, will not be disclosed.
12 changes: 12 additions & 0 deletions cmd/explaintest/r/explain.result
@@ -38,6 +38,18 @@ StreamAgg 8000.00 root group by:Column#7, funcs:group_concat(Column#5, Column#6
└─Projection 10000.00 root cast(test.t.a, var_string(20))->Column#5, cast(test.t.b, var_string(20))->Column#6, test.t.id
└─TableReader 10000.00 root data:TableFullScan
└─TableFullScan 10000.00 cop[tikv] table:t keep order:true, stats:pseudo
explain format = TRADITIONAL select group_concat(a, b) from t group by id;
id estRows task access object operator info
StreamAgg_8 8000.00 root group by:Column#7, funcs:group_concat(Column#5, Column#6 separator ",")->Column#4
└─Projection_18 10000.00 root cast(test.t.a, var_string(20))->Column#5, cast(test.t.b, var_string(20))->Column#6, test.t.id
└─TableReader_15 10000.00 root data:TableFullScan_14
└─TableFullScan_14 10000.00 cop[tikv] table:t keep order:true, stats:pseudo
explain format = 'row' select group_concat(a, b) from t group by id;
id estRows task access object operator info
StreamAgg_8 8000.00 root group by:Column#7, funcs:group_concat(Column#5, Column#6 separator ",")->Column#4
└─Projection_18 10000.00 root cast(test.t.a, var_string(20))->Column#5, cast(test.t.b, var_string(20))->Column#6, test.t.id
└─TableReader_15 10000.00 root data:TableFullScan_14
└─TableFullScan_14 10000.00 cop[tikv] table:t keep order:true, stats:pseudo
drop table t;
drop view if exists v;
create view v as select cast(replace(substring_index(substring_index("",',',1),':',-1),'"','') as CHAR(32)) as event_id;
38 changes: 38 additions & 0 deletions cmd/explaintest/r/explain_generate_column_substitute.result
@@ -414,3 +414,41 @@ StreamAgg 1.00 root funcs:count(Column#6)->Column#4
select count(*) from tbl1 use index() where md5(s) like '02e74f10e0327ad868d138f2b4fdd6f%';
count(*)
64
drop table if exists t;
create table t(a int, b varchar(10), key((lower(b)), (a+1)), key((upper(b))));
insert into t values (1, "A"), (2, "B"), (3, "C"), (4, "D"), (5, "E"), (6, "F");
analyze table t;
desc format = 'brief' select * from t where (lower(b) = "a" and a+1 = 2) or (lower(b) = "b" and a+1 = 5);
id estRows task access object operator info
Projection 1.00 root test.t.a, test.t.b
└─IndexLookUp 1.00 root
├─IndexRangeScan(Build) 1.00 cop[tikv] table:t, index:expression_index(lower(`b`), `a` + 1) range:["a" 2,"a" 2], ["b" 5,"b" 5], keep order:false
└─TableRowIDScan(Probe) 1.00 cop[tikv] table:t keep order:false
desc format = 'brief' select * from t where not (lower(b) >= "a");
id estRows task access object operator info
Projection 0.00 root test.t.a, test.t.b
└─IndexLookUp 0.00 root
├─IndexRangeScan(Build) 0.00 cop[tikv] table:t, index:expression_index(lower(`b`), `a` + 1) range:[-inf,"a"), keep order:false
└─TableRowIDScan(Probe) 0.00 cop[tikv] table:t keep order:false
desc format = 'brief' select count(upper(b)) from t group by upper(b);
id estRows task access object operator info
StreamAgg 4.80 root group by:EMPTY_NAME, funcs:count(EMPTY_NAME)->Column#7
└─IndexReader 6.00 root index:IndexFullScan
└─IndexFullScan 6.00 cop[tikv] table:t, index:expression_index_2(upper(`b`)) keep order:true
desc format = 'brief' select max(upper(b)) from t group by upper(b);
id estRows task access object operator info
StreamAgg 4.80 root group by:EMPTY_NAME, funcs:max(EMPTY_NAME)->Column#7
└─IndexReader 6.00 root index:IndexFullScan
└─IndexFullScan 6.00 cop[tikv] table:t, index:expression_index_2(upper(`b`)) keep order:true
desc format = 'brief' select count(upper(b)) from t use index() group by upper(b);
id estRows task access object operator info
HashAgg 6.00 root group by:Column#9, funcs:count(Column#8)->Column#7
└─Projection 6.00 root upper(test.t.b)->Column#8, upper(test.t.b)->Column#9
└─TableReader 6.00 root data:TableFullScan
└─TableFullScan 6.00 cop[tikv] table:t keep order:false
desc format = 'brief' select max(upper(b)) from t use index() group by upper(b);
id estRows task access object operator info
HashAgg 6.00 root group by:Column#9, funcs:max(Column#8)->Column#7
└─Projection 6.00 root upper(test.t.b)->Column#8, upper(test.t.b)->Column#9
└─TableReader 6.00 root data:TableFullScan
└─TableFullScan 6.00 cop[tikv] table:t keep order:false
2 changes: 2 additions & 0 deletions cmd/explaintest/t/explain.test
@@ -13,6 +13,8 @@ set session tidb_hashagg_partial_concurrency = 1;
set session tidb_hashagg_final_concurrency = 1;
explain format = 'brief' select group_concat(a) from t group by id;
explain format = 'brief' select group_concat(a, b) from t group by id;
explain format = TRADITIONAL select group_concat(a, b) from t group by id;
explain format = 'row' select group_concat(a, b) from t group by id;
drop table t;

drop view if exists v;
10 changes: 10 additions & 0 deletions cmd/explaintest/t/explain_generate_column_substitute.test
@@ -182,3 +182,13 @@ update tbl1 set s=id%32;
explain format = 'brief' select count(*) from tbl1 where md5(s) like '02e74f10e0327ad868d138f2b4fdd6f%';
select count(*) from tbl1 use index() where md5(s) like '02e74f10e0327ad868d138f2b4fdd6f%';

drop table if exists t;
create table t(a int, b varchar(10), key((lower(b)), (a+1)), key((upper(b))));
insert into t values (1, "A"), (2, "B"), (3, "C"), (4, "D"), (5, "E"), (6, "F");
analyze table t;
desc format = 'brief' select * from t where (lower(b) = "a" and a+1 = 2) or (lower(b) = "b" and a+1 = 5);
desc format = 'brief' select * from t where not (lower(b) >= "a");
desc format = 'brief' select count(upper(b)) from t group by upper(b);
desc format = 'brief' select max(upper(b)) from t group by upper(b);
desc format = 'brief' select count(upper(b)) from t use index() group by upper(b);
desc format = 'brief' select max(upper(b)) from t use index() group by upper(b);
2 changes: 1 addition & 1 deletion ddl/column.go
@@ -1145,7 +1145,7 @@ func BuildElements(changingCol *model.ColumnInfo, changingIdxs []*model.IndexInf

func (w *worker) updatePhysicalTableRow(t table.PhysicalTable, oldColInfo, colInfo *model.ColumnInfo, reorgInfo *reorgInfo) error {
logutil.BgLogger().Info("[ddl] start to update table row", zap.String("job", reorgInfo.Job.String()), zap.String("reorgInfo", reorgInfo.String()))
return w.writePhysicalTableRecord(t.(table.PhysicalTable), typeUpdateColumnWorker, nil, oldColInfo, colInfo, reorgInfo)
return w.writePhysicalTableRecord(t, typeUpdateColumnWorker, nil, oldColInfo, colInfo, reorgInfo)
}

// TestReorgGoroutineRunning is only used in test to indicate the reorg goroutine has been started.
59 changes: 57 additions & 2 deletions ddl/db_integration_test.go
@@ -2790,7 +2790,62 @@ func (s *testIntegrationSuite3) TestCreateTemporaryTable(c *C) {
// Follow the behaviour of the old version TiDB: parse and ignore the 'temporary' keyword.
tk.MustGetErrCode("create temporary table t(id int)", errno.ErrNotSupportedYet)

// Create local temporary table.
tk.MustExec("set @@tidb_enable_noop_functions = 1")
tk.MustExec("create temporary table t (id int)")
tk.MustQuery("show warnings").Check(testutil.RowsWithSep("|", "Warning 1105 local TEMPORARY TABLE is not supported yet, TEMPORARY will be parsed but ignored"))
tk.MustExec("create database tmp_db")
tk.MustExec("use tmp_db")
tk.MustExec("create temporary table t1 (id int)")
// Creating a normal table with the same name is OK.
tk.MustExec("create table t1 (id int)")
tk.MustExec("create temporary table tmp_db.t2 (id int)")
tk.MustQuery("select * from t1") // No error
tk.MustExec("drop database tmp_db")
_, err := tk.Exec("select * from t1")
c.Assert(err, NotNil)
// As in MySQL, dropping the database does not really drop the local temporary table; it comes back.
tk.MustExec("create database tmp_db")
tk.MustExec("use tmp_db")
tk.MustQuery("select * from t1") // No error

// When a local temporary table overlaps with a normal table, the temporary table takes priority.
tk.MustExec("create table overlap (id int)")
tk.MustExec("create temporary table overlap (a int, b int)")
_, err = tk.Exec("insert into overlap values (1)") // column not match
c.Assert(err, NotNil)
tk.MustExec("insert into overlap values (1, 1)")

// Check create local temporary table does not auto commit the transaction.
// Normal DDL implies a commit, but create temporary does not.
tk.MustExec("create table check_data (id int)")
tk.MustExec("begin")
tk.MustExec("insert into check_data values (1)")
tk.MustExec("create temporary table a_local_temp_table (id int)")
// Although "begin" takes an infoschema snapshot, a local temporary table created inside the txn should always be visible.
tk.MustExec("show create table tmp_db.a_local_temp_table")
tk.MustExec("rollback")
tk.MustQuery("select * from check_data").Check(testkit.Rows())

// Check create temporary table with if not exists.
tk.MustExec("create temporary table b_local_temp_table (id int)")
_, err = tk.Exec("create temporary table b_local_temp_table (id int)")
c.Assert(infoschema.ErrTableExists.Equal(err), IsTrue)
tk.MustExec("create temporary table if not exists b_local_temp_table (id int)")

// Stale read sees the local temporary table but cannot read from it.
tk.MustExec("START TRANSACTION READ ONLY AS OF TIMESTAMP NOW(3)")
tk.MustGetErrMsg("select * from overlap", "can not stale read temporary table")
tk.MustExec("rollback")

// For mocktikv, safe point is not initialized, we manually insert it for snapshot to use.
safePointName := "tikv_gc_safe_point"
safePointValue := "20060102-15:04:05 -0700"
safePointComment := "All versions after safe point can be accessed. (DO NOT EDIT)"
updateSafePoint := fmt.Sprintf(`INSERT INTO mysql.tidb VALUES ('%[1]s', '%[2]s', '%[3]s')
ON DUPLICATE KEY
UPDATE variable_value = '%[2]s', comment = '%[3]s'`, safePointName, safePointValue, safePointComment)
tk.MustExec(updateSafePoint)

// Even with a snapshot read, the local temporary table is always visible.
tk.MustExec("set @@tidb_snapshot = '2016-01-01 15:04:05.999999'")
tk.MustExec("select * from overlap")
}
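
The test above covers the local temporary table support introduced by this commit (see the `setTemporaryType` change in `ddl/ddl_api.go` below). A condensed, SQL-only sketch of the same session flow, assuming `tidb_enable_noop_functions` must be enabled first as in the test:

```sql
-- Condensed from the test above; illustrative only.
SET @@tidb_enable_noop_functions = 1;           -- required before creating local temporary tables
CREATE TABLE overlap (id INT);
CREATE TEMPORARY TABLE overlap (a INT, b INT);  -- shadows the normal table in this session
INSERT INTO overlap VALUES (1, 1);              -- resolves to the temporary table's schema

CREATE TABLE check_data (id INT);
BEGIN;
INSERT INTO check_data VALUES (1);
CREATE TEMPORARY TABLE a_local_temp_table (id INT);  -- unlike normal DDL, does not commit the txn
ROLLBACK;                                            -- the insert above is rolled back
```
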
2 changes: 1 addition & 1 deletion ddl/db_partition_test.go
@@ -2992,7 +2992,7 @@ func getPartitionTableRecordsNum(c *C, ctx sessionctx.Context, tbl table.Partiti
info := tbl.Meta().GetPartitionInfo()
for _, def := range info.Definitions {
pid := def.ID
partition := tbl.(table.PartitionedTable).GetPartition(pid)
partition := tbl.GetPartition(pid)
c.Assert(ctx.NewTxn(context.Background()), IsNil)
err := tables.IterRecords(partition, ctx, partition.Cols(),
func(_ kv.Handle, data []types.Datum, cols []*table.Column) (bool, error) {
9 changes: 8 additions & 1 deletion ddl/db_test.go
@@ -29,6 +29,7 @@ import (
"github.com/pingcap/errors"
"github.com/pingcap/failpoint"
"github.com/pingcap/parser/ast"
"github.com/pingcap/parser/auth"
"github.com/pingcap/parser/model"
"github.com/pingcap/parser/mysql"
"github.com/pingcap/parser/terror"
@@ -5340,13 +5341,19 @@ func (s *testSerialDBSuite) TestSetTableFlashReplicaForSystemTable(c *C) {
memOrSysDB := []string{"MySQL", "INFORMATION_SCHEMA", "PERFORMANCE_SCHEMA", "METRICS_SCHEMA"}
for _, db := range memOrSysDB {
tk.MustExec("use " + db)
tk.Se.Auth(&auth.UserIdentity{Username: "root", Hostname: "%"}, nil, nil)
rows := tk.MustQuery("show tables").Rows()
for i := 0; i < len(rows); i++ {
sysTables = append(sysTables, rows[i][0].(string))
}
for _, one := range sysTables {
_, err := tk.Exec(fmt.Sprintf("alter table `%s` set tiflash replica 1", one))
c.Assert(err.Error(), Equals, "[ddl:8200]ALTER table replica for tables in system database is currently unsupported")
if db == "MySQL" {
c.Assert(err.Error(), Equals, "[ddl:8200]ALTER table replica for tables in system database is currently unsupported")
} else {
c.Assert(err.Error(), Equals, fmt.Sprintf("[planner:1142]ALTER command denied to user 'root'@'%%' for table '%s'", strings.ToLower(one)))
}

}
sysTables = sysTables[:0]
}
12 changes: 5 additions & 7 deletions ddl/ddl_api.go
@@ -1682,12 +1682,12 @@ func buildTableInfoWithLike(ctx sessionctx.Context, ident ast.Ident, referTblInf
// BuildTableInfoFromAST builds model.TableInfo from a SQL statement.
// Note: TableID and PartitionID are left as uninitialized value.
func BuildTableInfoFromAST(s *ast.CreateTableStmt) (*model.TableInfo, error) {
return buildTableInfoWithCheck(mock.NewContext(), s, mysql.DefaultCharset, "")
return BuildTableInfoWithCheck(mock.NewContext(), s, mysql.DefaultCharset, "")
}

// buildTableInfoWithCheck builds model.TableInfo from a SQL statement.
// BuildTableInfoWithCheck builds model.TableInfo from a SQL statement.
// Note: TableID and PartitionIDs are left as uninitialized value.
func buildTableInfoWithCheck(ctx sessionctx.Context, s *ast.CreateTableStmt, dbCharset, dbCollate string) (*model.TableInfo, error) {
func BuildTableInfoWithCheck(ctx sessionctx.Context, s *ast.CreateTableStmt, dbCharset, dbCollate string) (*model.TableInfo, error) {
tbInfo, err := buildTableInfoWithStmt(ctx, s, dbCharset, dbCollate)
if err != nil {
return nil, err
@@ -1829,9 +1829,7 @@ func setTemporaryType(ctx sessionctx.Context, tbInfo *model.TableInfo, s *ast.Cr
return errors.Trace(errUnsupportedOnCommitPreserve)
}
case ast.TemporaryLocal:
// TODO: set "tbInfo.TempTableType = model.TempTableLocal" after local temporary table is supported.
tbInfo.TempTableType = model.TempTableNone
ctx.GetSessionVars().StmtCtx.AppendWarning(errors.New("local TEMPORARY TABLE is not supported yet, TEMPORARY will be parsed but ignored"))
tbInfo.TempTableType = model.TempTableLocal
default:
tbInfo.TempTableType = model.TempTableNone
}
@@ -5725,7 +5723,7 @@ func (d *ddl) RepairTable(ctx sessionctx.Context, table *ast.TableName, createSt
}

// It is necessary to specify the table.ID and partition.ID manually.
newTableInfo, err := buildTableInfoWithCheck(ctx, createStmt, oldTableInfo.Charset, oldTableInfo.Collate)
newTableInfo, err := BuildTableInfoWithCheck(ctx, createStmt, oldTableInfo.Charset, oldTableInfo.Collate)
if err != nil {
return errors.Trace(err)
}
4 changes: 2 additions & 2 deletions ddl/index.go
@@ -1167,7 +1167,7 @@ func (w *addIndexWorker) BackfillDataInTxn(handleRange reorgBackfillTask) (taskC

func (w *worker) addPhysicalTableIndex(t table.PhysicalTable, indexInfo *model.IndexInfo, reorgInfo *reorgInfo) error {
logutil.BgLogger().Info("[ddl] start to add table index", zap.String("job", reorgInfo.Job.String()), zap.String("reorgInfo", reorgInfo.String()))
return w.writePhysicalTableRecord(t.(table.PhysicalTable), typeAddIndexWorker, indexInfo, nil, nil, reorgInfo)
return w.writePhysicalTableRecord(t, typeAddIndexWorker, indexInfo, nil, nil, reorgInfo)
}

// addTableIndex handles the add index reorganization state for a table.
@@ -1358,7 +1358,7 @@ func (w *cleanUpIndexWorker) BackfillDataInTxn(handleRange reorgBackfillTask) (t
// cleanupPhysicalTableIndex handles the drop partition reorganization state for a non-partitioned table or a partition.
func (w *worker) cleanupPhysicalTableIndex(t table.PhysicalTable, reorgInfo *reorgInfo) error {
logutil.BgLogger().Info("[ddl] start to clean up index", zap.String("job", reorgInfo.Job.String()), zap.String("reorgInfo", reorgInfo.String()))
return w.writePhysicalTableRecord(t.(table.PhysicalTable), typeCleanUpIndexWorker, nil, nil, nil, reorgInfo)
return w.writePhysicalTableRecord(t, typeCleanUpIndexWorker, nil, nil, nil, reorgInfo)
}

// cleanupGlobalIndex handles the drop partition reorganization state to clean up index entries of partitions.