
planner, txn: select ... for update using Plan Cache can not lock data correctly in some cases #54652

Closed
qw4990 opened this issue Jul 16, 2024 · 1 comment · Fixed by #54661
Assignees
Labels
affects-5.4 This bug affects 5.4.x versions. affects-6.1 affects-6.5 affects-7.1 affects-7.5 affects-8.1 report/customer Customers have encountered this bug. severity/critical sig/planner SIG: Planner sig/transaction SIG:Transaction type/bug The issue is confirmed as a bug.

Comments

qw4990 commented Jul 16, 2024

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

mysql> select @@autocommit; -- confirm autocommit is enabled
+--------------+
| @@autocommit |
+--------------+
|            1 |
+--------------+

create table t (pk int, a int, primary key(pk));  -- create a table with PK
prepare st from 'select * from t where pk=? for update';   -- prepare a PointPlan statement
set @pk=1;                                                                             
execute st using @pk;    -- execute this statement to generate a PointPlan cached in Plan Cache

-- Plan of this exec-statement: the Lock operation for "for update" is optimized away under auto-commit.
+-------------+---------+---------+------+---------------+------------------------------------------------------------+---------------+--------+------+
| id          | estRows | actRows | task | access object | execution info                                             | operator info | memory | disk |
+-------------+---------+---------+------+---------------+------------------------------------------------------------+---------------+--------+------+
| Point_Get_1 | 1.00    | 0       | root | table:t       | time:94.1µs, loops:1, Get:{num_rpc:1, total_time:42.5µs}   | handle:2      | N/A    | N/A  |
+-------------+---------+---------+------+---------------+------------------------------------------------------------+---------------+--------+------+



begin;
set @pk=1;
execute st using @pk;   -- the optimizer reuses the prior PointPlan, which is incorrect inside an explicit transaction.

mysql> select @@last_plan_from_cache;
+------------------------+
| @@last_plan_from_cache |
+------------------------+
|                      1 |
+------------------------+

Reusing this PointPlan, which has no Lock operation, for the second exec-statement can cause wrong results.
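To illustrate how the missing lock can produce wrong results, here is a hypothetical two-session interleaving (not from the original report, only a sketch of the anomaly the missing pessimistic lock allows):

```sql
-- Session A (hits the bug: cached plan has no Lock operation)
begin;
set @pk = 1;
execute st using @pk;            -- reads row pk=1 but does NOT lock it
-- Session B (should block until A commits, but doesn't)
update t set a = a + 1 where pk = 1;   -- succeeds immediately
-- Session A now updates based on the stale value it read,
-- overwriting Session B's change (a lost update):
update t set a = 100 where pk = 1;
commit;
```

With a correct plan containing the Lock operation, Session B's UPDATE would wait for Session A's transaction to finish, preserving SELECT ... FOR UPDATE semantics.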

The correct plan for the second exec-statement should have a Lock operation:

+-------------+---------+---------+------+---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------+--------+------+
| id          | estRows | actRows | task | access object | execution info                                                                                                                                               | operator info  | memory | disk |
+-------------+---------+---------+------+---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------+--------+------+
| Point_Get_1 | 1.00    | 0       | root | table:t       | time:1.74ms, loops:1, lock_keys: {time:1.69ms, region:1, keys:1, slowest_rpc: {total: 0.000s, region_id: 93, store: store1, }, lock_rpc:165µs, rpc_count:1}  | handle:1, lock | N/A    | N/A  |
+-------------+---------+---------+------+---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------+--------+------+

2. What did you expect to see? (Required)

The second exec-statement should not reuse the first PointPlan; its plan should include a Lock operation:

+-------------+---------+---------+------+---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------+--------+------+
| id          | estRows | actRows | task | access object | execution info                                                                                                                                               | operator info  | memory | disk |
+-------------+---------+---------+------+---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------+--------+------+
| Point_Get_1 | 1.00    | 0       | root | table:t       | time:1.74ms, loops:1, lock_keys: {time:1.69ms, region:1, keys:1, slowest_rpc: {total: 0.000s, region_id: 93, store: store1, }, lock_rpc:165µs, rpc_count:1}  | handle:1, lock | N/A    | N/A  |
+-------------+---------+---------+------+---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------+--------+------+

3. What did you see instead (Required)

The second exec-statement's plan has no Lock operation:

+-------------+---------+---------+------+---------------+-------------------------------------------------------------+---------------+--------+------+
| id          | estRows | actRows | task | access object | execution info                                              | operator info | memory | disk |
+-------------+---------+---------+------+---------------+-------------------------------------------------------------+---------------+--------+------+
| Point_Get_1 | 1.00    | 0       | root | table:t       | time:123.7µs, loops:1, Get:{num_rpc:1, total_time:63.3µs}   | handle:1      | N/A    | N/A  |
+-------------+---------+---------+------+---------------+-------------------------------------------------------------+---------------+--------+------+

4. What is your TiDB version? (Required)

Master

@qw4990 qw4990 added type/bug The issue is confirmed as a bug. sig/planner SIG: Planner sig/transaction SIG:Transaction severity/critical labels Jul 16, 2024
@qw4990 qw4990 self-assigned this Jul 16, 2024
@qw4990 qw4990 added affects-6.1 and removed may-affects-5.4 This bug maybe affects 5.4.x versions. may-affects-6.1 labels Jul 16, 2024
@qw4990 qw4990 changed the title planner, txn: Plan Cache reuses wrong plan for select ... for update under auto-commit planner, txn: select ... for update using Plan Cache can not lock data correctly in some cases Jul 17, 2024
@seiya-annie commented:
/report customer

@ti-chi-bot ti-chi-bot bot added the report/customer Customers have encountered this bug. label Jul 26, 2024
ti-chi-bot bot pushed a commit to pingcap/docs-cn that referenced this issue Aug 6, 2024
ti-chi-bot bot pushed a commit to pingcap/docs that referenced this issue Aug 6, 2024