planner: wrong result when pushing Agg down through Union in MPP plans #45850

Closed
qw4990 opened this issue Aug 7, 2023 · 4 comments · Fixed by #46310
Labels: affects-5.3, affects-5.4, affects-6.1, affects-6.5, affects-7.1, severity/critical, sig/planner, type/bug

Comments


qw4990 commented Aug 7, 2023

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

create table t (a int, b int);
alter table t set tiflash replica 1;
insert into t values (1, 1);
insert into t values (1, 1);
insert into t values (1, 1);
insert into t values (1, 1);
insert into t values (1, 1);

set @@tidb_allow_mpp=1;
set @@tidb_enforce_mpp=1;
select a, count(*) from (
select a, b from t
union all
select a, b from t
) t group by a order by a limit 10;
+------+----------+
| a    | count(*) |
+------+----------+
|    1 |        2 |
+------+----------+
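
As a cross-check (my assumption, since the title scopes the bug to MPP plans and a later comment notes TiDB's own engine handles the Count-to-Sum step correctly), forcing the non-MPP path should return the correct count:

set @@tidb_allow_mpp=0;
set @@tidb_enforce_mpp=0;
select a, count(*) from (
select a, b from t
union all
select a, b from t
) t group by a order by a limit 10;
-- expected on the TiKV/TiDB path: a=1, count(*)=10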

2. What did you expect to see? (Required)

+------+----------+
| a    | count(*) |
+------+----------+
|    1 |       10 |
+------+----------+

3. What did you see instead (Required)

+------+----------+
| a    | count(*) |
+------+----------+
|    1 |        2 |
+------+----------+

4. What is your TiDB version? (Required)

+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tidb_version()                                                                                                                                                                                                                                                                               |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Release Version: v7.1.0
Edition: Community
Git Commit Hash: 635a4362235e8a3c0043542e629532e3c7bb2756
Git Branch: heads/refs/tags/v7.1.0
UTC Build Time: 2023-05-30 10:50:03
GoVersion: go1.20.3
Race Enabled: false
TiKV Min Version: 6.2.0-alpha
Check Table Before Drop: false
Store: tikv |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
@qw4990 qw4990 added type/bug The issue is confirmed as a bug. sig/planner SIG: Planner labels Aug 7, 2023
@ti-chi-bot ti-chi-bot bot added may-affects-5.2 This bug maybe affects 5.2.x versions. may-affects-5.3 This bug maybe affects 5.3.x versions. may-affects-5.4 This bug maybe affects 5.4.x versions. may-affects-6.1 may-affects-6.5 may-affects-7.1 labels Aug 7, 2023

qw4990 commented Aug 7, 2023

mysql> select a, count(*) from (
    -> select a, b from t
    -> union all
    -> select a, b from t
    -> ) t group by a;
+------+----------+
| a    | count(*) |
+------+----------+
|    1 |        2 |
+------+----------+
1 row in set (0.02 sec)

mysql> explain select a, count(*) from ( select a, b from t union all select a, b from t ) t group by a;
+----------------------------------------------+---------+--------------+---------------+-----------------------------------------------------------------------------------------------------------------------------------------+
| id                                           | estRows | task         | access object | operator info                                                                                                                           |
+----------------------------------------------+---------+--------------+---------------+-----------------------------------------------------------------------------------------------------------------------------------------+
| TableReader_98                               | 8.00    | root         |               | MppVersion: 1, data:ExchangeSender_97                                                                                                   |
| └─ExchangeSender_97                          | 8.00    | mpp[tiflash] |               | ExchangeType: PassThrough                                                                                                               |
|   └─Projection_14                            | 8.00    | mpp[tiflash] |               | Column#7, Column#9                                                                                                                      |
|     └─Projection_96                          | 8.00    | mpp[tiflash] |               | Column#9, Column#7                                                                                                                      |
|       └─HashAgg_95                           | 8.00    | mpp[tiflash] |               | group by:Column#7, funcs:count(Column#10)->Column#9, funcs:firstrow(Column#11)->Column#7, stream_count: 16                              |
|         └─ExchangeReceiver_91                | 8.00    | mpp[tiflash] |               | stream_count: 16                                                                                                                        |
|           └─ExchangeSender_90                | 8.00    | mpp[tiflash] |               | ExchangeType: HashPartition, Compression: FAST, Hash Cols: [name: Column#7, collate: binary], stream_count: 16                          |
|             └─Union_89                       | 8.00    | mpp[tiflash] |               |                                                                                                                                         |
|               ├─HashAgg_77                   | 4.00    | mpp[tiflash] |               | group by:test.t.a, funcs:count(1)->Column#10, funcs:firstrow(test.t.a)->Column#11, funcs:firstrow(test.t.a)->Column#7, stream_count: 16 |
|               │ └─ExchangeReceiver_37        | 5.00    | mpp[tiflash] |               | stream_count: 16                                                                                                                        |
|               │   └─ExchangeSender_36        | 5.00    | mpp[tiflash] |               | ExchangeType: HashPartition, Compression: FAST, Hash Cols: [name: test.t.a, collate: binary], stream_count: 16                          |
|               │     └─TableFullScan_35       | 5.00    | mpp[tiflash] | table:t       | keep order:false, stats:pseudo                                                                                                          |
|               └─HashAgg_82                   | 4.00    | mpp[tiflash] |               | group by:test.t.a, funcs:count(1)->Column#10, funcs:firstrow(test.t.a)->Column#11, funcs:firstrow(test.t.a)->Column#7, stream_count: 16 |
|                 └─ExchangeReceiver_65        | 5.00    | mpp[tiflash] |               | stream_count: 16                                                                                                                        |
|                   └─ExchangeSender_64        | 5.00    | mpp[tiflash] |               | ExchangeType: HashPartition, Compression: FAST, Hash Cols: [name: test.t.a, collate: binary], stream_count: 16                          |
|                     └─TableFullScan_63       | 5.00    | mpp[tiflash] | table:t       | keep order:false, stats:pseudo                                                                                                          |
+----------------------------------------------+---------+--------------+---------------+-----------------------------------------------------------------------------------------------------------------------------------------+
16 rows in set (0.00 sec)


qw4990 commented Aug 7, 2023

When splitting Count into two phases, we need to convert the second-phase Count into Sum to keep the result correct, so the expected plan is Phase1Agg-Count -> ... -> Phase2Agg-Sum.
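
The required split can be illustrated with a plain-SQL analogue (a minimal sketch; the real split happens on physical plan operators, not via SQL rewriting):

-- Phase 1 (per node / per partition): compute partial counts.
-- Phase 2 (after the data shuffle): SUM the partial counts; re-counting them would be wrong.
select a, sum(cnt)              -- Phase2Agg: Sum
from (
    select a, count(*) as cnt   -- Phase1Agg: Count
    from t group by a
) partials
group by a;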

But on TiDB, the optimizer keeps the Phase2Agg unchanged, since TiDB's own execution engine handles this conversion:

[screenshot]

So on TiDB the plan is Phase1Agg-Count -> ... -> Phase2Agg-Count (it's weird, but it's our current implementation):

[screenshot]

But on TiFlash, the execution engine assumes the optimizer has already done this conversion:

[screenshot]

When pushing Agg down through UnionAll, TiFlash is not taken into account, so we get two Counts here, Phase1Agg-Count -> ... -> Phase2Agg-Count, which breaks the TiFlash execution engine's assumption and causes the wrong result:

[screenshot]
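
With the data above, the arithmetic makes the failure concrete. Each Union branch's Phase1Agg emits one partial row (a=1, cnt=5); the unconverted Phase2Agg then counts those 2 partial rows instead of summing them, returning 2 instead of 5+5=10. A plain-SQL analogue of the broken plan (my reconstruction):

select a, count(*)              -- Phase2Agg left as Count: counts the 2 partial rows
from (
    select a, count(*) as cnt from t group by a
    union all
    select a, count(*) as cnt from t group by a
) partials
group by a;                     -- returns a=1, count=2, matching the bug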


qw4990 commented Aug 7, 2023

This issue is a little hard to fix: when pushing Agg down through UnionAll in the logical optimization phase, the optimizer doesn't yet know whether the plan will run on TiFlash or purely on TiKV.


qw4990 commented Aug 8, 2023

A workaround is to set tidb_opt_agg_push_down to false.

A quick fix is to avoid pushing Agg down through Union when tidb_allow_mpp is enabled.
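
For reference, the workaround as a statement (session scope shown; depending on the TiDB version the variable can also be set globally):

set @@tidb_opt_agg_push_down = 0;   -- disable pushing Agg down through Union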
