[pull] 8.0 from mysql:8.0 #4
Commits on Nov 3, 2022
Bug#34401798 close client connection when server connection is closed by server

When the server closes the router-server connection while the router waits for the client to send the next command, the router does not close its side of the router-server connection. The server side may be closed by:
- opening a 2nd connection and KILL <first-connection-id>
- wait-timeout expiring and closing the connection

Change
------
- When idling, wait for a read-event on both the client and the server connection.
- If the server sends something first or closes the connection, forward the error to the client and close both the client and the server connection.
- If the client sends first, stop waiting for input from the server and handle the client command.

Change-Id: Ie743d907ea45ae8fa501feca355a983fd685e1c1
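The idle-wait described above can be sketched with a plain readiness check over both sockets. This is a minimal illustrative model in Python, not the router's actual code; `idle_wait` and the socket names are assumptions for the sketch.

```python
import select
import socket

def idle_wait(client_side, server_side):
    """While the connection idles, watch BOTH sockets for read events
    (a sketch of the router fix; names are illustrative, not router API)."""
    readable, _, _ = select.select([client_side, server_side], [], [], 5.0)
    if server_side in readable:
        # An empty peek means the server closed its end: the router should
        # forward the error and close its client side too.
        if server_side.recv(1, socket.MSG_PEEK) == b"":
            return "server-closed"
        return "server-data"
    if client_side in readable:
        return "client-command"
    return "timeout"

# Model the router's two connections with socketpairs.
app_client, router_client_side = socket.socketpair()
router_server_side, backend_server = socket.socketpair()

backend_server.close()  # e.g. wait-timeout expired on the server
print(idle_wait(router_client_side, router_server_side))  # prints: server-closed
```

Checking the server side first matches the commit's priority: a server close or error is forwarded before any client command is handled.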
Commit: a0bbfe8
Bug#34699246 routertest_integration_routing_sharing fails after renames [postfix]

After the rename of the replication terms for source and replica in the error messages, the router tests started to fail because the expected error-message texts no longer matched.

Change
------
In the binlog-related tests, changed the expected error-message texts to contain the updated terms:
- allow 'master' and 'source'
- allow 'slave' and 'replica'

Change-Id: I6784fd8fccc287e5321330d5c6fa9611c6b336e8
Commit: d2bdce9
Commit: 1af383c
Commits on Nov 4, 2022
BUG#34698428: group_replication.gr_acf_group_member_maintenance failed on PB2 weekly 8.0

Test gr_acf_group_member_maintenance was failing due to an error log message, `failed registering replica on source`. The message is expected, though, since the test triggers source failures and the corresponding asynchronous replication channel reconnection and failover. To solve this, we now suppress the error log message.

Change-Id: I44906c06a84c7a2ad313a0af015832a4f665b84c
Commit: de9f0f4
Commit: 37ade96
Bug#34572040 Failure due to signal (table::move_tmp_key)
Bug#34634469 Signal (get_store_key at sql/sql_select.cc:2383)

These are two related but distinct problems manifested in the shrinkage of key definitions for derived tables or common table expressions, implemented in JOIN::finalize_derived_keys().

The problem in Bug#34572040 is that we have two references to one CTE, each with a valid key definition. The function will first loop over the first reference (cte_a) and move its used key from position 0 to position 1. Next, it will attempt to move the key for the second reference (cte_b) from position 4 to position 2. However, for each iteration, the function will calculate used key information. On the first iteration, the values are correct, but since key value #1 has been moved into position #0, the old information is invalid and provides wrong information. The problem is thus that for subsequent iterations we read data that has been invalidated by earlier key moves. The best solution is to move the keys for all references to the CTE in one operation. This way, we can calculate used key information safely, before any move operation has been performed.

The problem in Bug#34634469 is also related to having more than one reference to a CTE, but in this case the first reference (ref_3) has a key in position 5 which is moved to position 0, and the second reference (ref_4) has a key in position 3 that is moved to position 1. However, the key parts of the first key will overlap with the key parts of the second key after the first move, thus invalidating the key structure during the copy. The actual problem is that we move a higher-numbered key (5) before a lower-numbered key (3), which in this case makes it impossible to find an empty space for the moved key. The solution is to ensure that keys are moved in increasing key order.

The patch changes the algorithm as follows:
- When identifying a derived table/common table expression, move all its keys in one operation (at least those references from the same query block).
- First, collect information about all key uses: hash key, unique index keys and actual key references. For the key references, also populate a mapping array that enumerates table references with key references in order of increasing key number. Also clear used key information for references that do not use keys.
- For each table reference with a key reference, in increasing key order, move the used key into the lowest available position. This ensures that used entries are never overwritten.
- When all table references have been processed, remove unused key definitions.

Change-Id: I938099284e34a81886621f6a389f34abc51e78ba
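The ordering argument above can be sketched as a tiny model. This is illustrative only, not the actual JOIN::finalize_derived_keys() code; `plan_key_moves` is a hypothetical name.

```python
def plan_key_moves(used_keys):
    """Given the key numbers used by ALL references to one CTE, assign
    new positions in one operation, visiting keys in increasing order so
    each key moves to the lowest free slot and never lands on a slot
    still occupied by a not-yet-moved key."""
    moves = {}
    next_free = 0
    for key_no in sorted(used_keys):  # increasing key order is the point
        moves[key_no] = next_free     # new position is never above the old one
        next_free += 1
    return moves

# Bug#34634469 scenario: ref_3 uses key 5, ref_4 uses key 3.
# Moving 5 before 3 could clobber key 3; sorted order avoids that.
print(plan_key_moves({5, 3}))  # {3: 0, 5: 1}
```

Because keys are visited in increasing order and packed from position 0, every move goes to a position at or below the key's old one, so a pending key can never be overwritten.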
Commit: 5802152
Bug #33144829: ASAN FAILURE IN AUTH_LDAP_KERBEROS_GLOBAL
The GSS plugin for SASL appears to have leaks. This causes the LDAP SASL client plugin to fail with ASAN and valgrind. Fixed by:
1. making sure sasl_client_done is called by the client's deinit method.
2. adding valgrind and ASAN suppressions to cover the library leaks.

Change-Id: Iceb6fbb2d9483b2fcc51c2a0f004735b288bb4f0
Commit: c29e5ca
BUG#34673762: gr_primary_mode_group_operations_net_partition_4 failing on PB2 - Windows

Disable test gr_primary_mode_group_operations_net_partition_4 on Windows until the bug is fixed.

Change-Id: I32e247363eefab08372989c24670e5238c720f2d
Commit: e397e8b
Bug#34231798 Incorrect query result from pushed join with IN-subquery
The failing queries are 'semi-joins', which semantically are expected to eliminate join duplicates after the first match has been found, contrary to normal joins where all matching rows should be returned (in any order). Thus the different semi-join iterators in the MySQL server do some kind of skip-read after the first matching set of rows has been found for a semi-join nest of tables. This may also skip over result rows from other tables, depending on the table(s) being skip-read, i.e. tables in the same query-tree branch as the rows being skipped. That is fine when these tables are part of the same semi-join being skip-read; then this is intended behavior.

However, we sometimes end up with query plans where the semi-joined tables are evaluated first, and the inner-joined tables end up depending on the semi-joined parts. This is usually (only?) seen when the 'duplicate eliminate' iterators are used in the query plan. Note that this effectively turns the table order in the originating SQL query upside down. E.g. the pseudo SQL query:

select ... from t1 where <column1> in (select <column2> from t2 where <pred>)

might get the query plan:

duplicate eliminate (select <column2> from t2) join t1 on <pred>

Thus we have a plan where t1 depends on a semi-joined t2, without being part of the semi-join itself. However, t1 will have t2 as an ancestor in the SPJ query tree if the query is pushed, so t1 becomes a subject of the t2 duplicate elimination, effectively a skip-read operation.

Due to the finite size of the batch row buffers when returning SPJ results to the API, we might need to return t1 result rows over multiple batches, with the t2 result rows being reused/repeated. Thus they will appear as duplicates to the iterators and be skipped over, together with the t1 rows which should not have been skipped.

The patch identifies such query plans where non-semi-joined tables depend on semi-joined tables, _and_ both tables are scan operations subject to such batching mechanisms. We then reject pushing of dependent scan-tables that are not an intended part of the semi-join itself. Note that such query plans seem to be a rare corner case.

The patch also changes some test cases:
- Added two variants of existing test cases where coverage of duplicate-eliminating iterators was not sufficient.
- Added a SEMIJOIN(LOOSESCAN) hint to ensure that the intended plans were produced.
- Added two test cases for the bug itself. That smoked out a query plan which returned incorrect results after modification; with the patch, pushability was reduced and the result became correct.

Change-Id: Iae890ef702cac8a50564d5fb0e493a4715c4dafd
Ole John Aske committed Nov 4, 2022
Commit: acb6f91
Windows specific: Replaced the use of jemalloc for memory management within OpenSSL (on Windows) via the call to CRYPTO_set_mem_functions in mysqld.cc. The OpenSSL memory-management functions used on Windows now use std::malloc, std::free and std::realloc instead. The memory-management code in my_malloc.cc is refactored using function templates to avoid duplicating the performance schema instrumentation and debugging code.

Change-Id: I4df2d3974f215f3a8a9a7bd0fd82dd54c96fecb7
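The "write the instrumentation once, parameterize the raw allocator" idea behind that refactoring can be sketched with a higher-order function (Python closures standing in for C++ function templates). All names and counters here are illustrative, not the my_malloc.cc API.

```python
def make_instrumented_alloc(raw_alloc, stats):
    """Wrap any raw allocator with shared accounting code, written once,
    instead of copy-pasting it into every allocator variant (the role the
    function templates play in my_malloc.cc)."""
    def alloc(size):
        stats["calls"] += 1   # shared instrumentation, single copy
        stats["bytes"] += size
        return raw_alloc(size)
    return alloc

stats = {"calls": 0, "bytes": 0}
std_alloc = make_instrumented_alloc(lambda n: bytearray(n), stats)
dbg_alloc = make_instrumented_alloc(lambda n: bytearray(n + 8), stats)  # e.g. a debug variant

std_alloc(16)
dbg_alloc(16)
print(stats)  # {'calls': 2, 'bytes': 32}
```

Swapping the backend (std::malloc vs. an instrumented or debug allocator) then changes one parameter instead of duplicating the accounting code.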
Daniel Blanchard committed Nov 4, 2022
Commit: f180b1f
BUG#34240178: gr_member_actions_error_on_read_on_mpm_to_spm failing on PB2

Test gr_member_actions_error_on_read_on_mpm_to_spm tests how a group-mode switch handles a failure during the update of the member actions table, causing the member to leave the group. That is achieved by enabling a debug flag that returns an error when we close the member actions table. That flag is set on a common code path, though, so it can affect other steps of the group-mode switch, which will likewise fail and leave the group. The test was failing because the expected error message was not logged to the error log, which means the group-mode switch errored out before reaching the member-actions-table error. Given that the point at which the group-mode switch fails is not deterministic, we remove the error-log-message assert from the test.

Change-Id: I42c9e3564f79c15b80ae99a1c2edee634be0f524
Commit: 126eb47
BUG#34482530: gr_acf_start_failover_channels_error_on_bootstrap failed on weekly-trunk

Test gr_acf_start_failover_channels_error_on_bootstrap was failing due to an error log message:

[ERROR] [MY-013211] [Repl] Plugin group_replication reported: 'Error while sending message. Context: primary election process.'

The message is expected, though, since the test triggers group bootstrap errors, which include a primary election. To solve this, we now suppress the error log message.

Change-Id: I0eb504fec68189191dc0591effd56ba26f8b3283
Commit: 5101100
BUG#34527407: gr_parallel_start_uninstall daily-trunk PB2 fail
gr_parallel_start_uninstall forces a race condition between `UNINSTALL PLUGIN group_replication;` and `START GROUP_REPLICATION;`. Although the test first asynchronously executes the `UNINSTALL`, there is the possibility that the `START` is executed first. `START` enables `super_read_only`, disabling it after the member joins the group and is a primary. When the `UNINSTALL` is allowed to execute once the `START` is complete, that may happen before `super_read_only` is disabled. If that happens, the `UNINSTALL` hits the error:

ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

Since the above error is possible, we added it to the possible error statuses of `UNINSTALL PLUGIN group_replication;`.

Change-Id: I9847def076ec1236a2e273befbef52d3fcdf1376
Commit: 9f34e69
BUG#34718491: gr_parallel_stop_dml failed on weekly-8.0
gr_parallel_stop_dml forces the execution of `INSERT INTO t1 VALUES(1)` while `STOP GROUP_REPLICATION` is ongoing. The `INSERT` must fail, throwing one of the errors:
1) `Error on observer while running replication hook 'before_commit'`, when the plugin is stopping.
2) `The MySQL server is running with the --super-read-only option so it cannot execute this statement`, when the plugin has already stopped and enabled `super_read_only`.
The test was not considering the second error, hence we added it.

Change-Id: I1d4e539cea1a37c11c9e133f92add3615f7aabf0
Commit: cb184f0
Commit: 49d408f
Commits on Nov 5, 2022
Bug#34763860: CHECK TABLE to check the INSTANT/VERSION bit & report corruption if both are set

Issue: CHECK TABLE shall check whether both the version and instant bits are set for a compact/dynamic row. That is a corruption scenario, and CHECK TABLE shall report it.

Fix: CHECK TABLE now checks the INSTANT/VERSION bits in the records and reports corruption if both are set.

Change-Id: I551d6d6296d8df052bcca9450e7856a24a2c5416
Mayank Prasad committed Nov 5, 2022
Commit: 9ebcad7
Commits on Nov 7, 2022
Bug#34523475: Inconsequent type derivation when using @variable

When a table is first created with a reference to a non-existing variable, the derived type is text. The second time an identical table is created, the derived type is mediumblob. This is because an actual variable is created on the first table creation, and this variable is then used on the second table creation. The main problem is that the variable is created with a binary character set, whereas the first table creation is given the correct default character set. This is fixed by assigning the correct default character set to the source item when creating the user variable.

Even after this fix, there is still a minor difference between the two table creations: the first table gets a column with maximum length 262140 bytes, whereas the second table gets a column with maximum length 4294967295 bytes. This is because the first creation utilizes a default character type, whereas the second utilizes the created user variable, and those instances use different maximum lengths. Fixing this would require a large rewrite and is not deemed worthwhile at this time.

Change-Id: I8cd1f946dbf87047c261bfeca9d8ba7d23a9629c
Commit: 710e1ab
Bug#34231798 Incorrect query result from pushed join with IN-subquery
Post push fix, re-recorded spj_rqg_hyeprgraph.results Change-Id: Ifcb0cfabef31004b5aa2af32f24736810cc2ffec
Ole John Aske committed Nov 7, 2022
Commit: 8d66feb
Commit: 09466ad
BUG#34698376 MYSQL80 crashes. Post push fix
Post push fix: static inline functions std_realloc and redirecting_realloc are only used when USE_MALLOC_WRAPPER is not defined, so make these functions conditionally compiled to avoid build breakage when compiling in maintainer mode (-Werror). Change-Id: If98ef4bba95289fbdd92c9cf9808ab83e4fe1d42
Daniel Blanchard committed Nov 7, 2022
Commit: 9e2cee5
BUG#34730637 : More scenarios to be covered for materializing INSTANT DEFAULTS during UPDATE

Issue: This is a follow-up to bug 34558510, which fixed the cases in which, during UPDATE, we shall not materialize INSTANT ADD columns added in the earlier implementation. If a table has row versions, it has INSTANT ADD/DROP columns from the new implementation, and the new implementation ensures that the maximum possible row is within the permissible limit, otherwise INSTANT ADD is rejected.

Fix: While deciding whether to materialize, check if the table has an INSTANT ADD column added in a row version. If it does, we can be assured that if INSTANT DEFAULTs are materialized, the row will be within the permissible limit.

Change-Id: Ia22ab7a5aa96966741ee1b95833a5eb6705448d7
Mayank Prasad committed Nov 7, 2022
Commit: 2b786b3
Commit: ec0d825
Commit: ed6d69f
Commits on Nov 8, 2022
Bug #34378513 Assertion failure: dict0mem.h:2482:pos < n_def thread 140243300361984

Issue: When the user keeps adding and dropping columns instantly, n_def increases. When n_def is increased beyond REC_MAX_N_FIELDS, it rotates back to 0, causing the assertion.

Fix: The alter handler must know whether INSTANT is possible. Hence we must check the value of n_def and the number of columns being added before proceeding with ALGORITHM=INSTANT. Further, we must ensure that if we cannot use INSTANT, we:
1. Fall back to INPLACE if algorithm=DEFAULT or not specified.
2. Error out with ER_TOO_MANY_FIELDS (Too many columns) if algorithm=INSTANT.

Note: The current patch will not allow n_def to cross 1022, because when we add even 1 more column, n_def could become 1023 (which is equal to REC_MAX_N_FIELDS). Furthermore, this patch errors with ER_TOO_MANY_FIELDS only when ADDing a new column with INSTANT; we can still drop any number of columns instantly.

Thanks to Marcelo Altmann ([email protected]) and Percona for the contribution.

Change-Id: Iff5c7d6e45c294548d515458cddfb35c00aff43e
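The fall-back/error decision described above can be modeled in a few lines. This is a sketch of the decision only, not the actual InnoDB handler code; `pick_algorithm` and the exact cap semantics are assumptions for illustration.

```python
REC_MAX_N_FIELDS = 1023
N_DEF_CAP = 1022  # the patch refuses to let n_def cross this

def pick_algorithm(n_def, n_cols_added, requested="DEFAULT"):
    """Refuse INSTANT when the field count could reach REC_MAX_N_FIELDS:
    fall back to INPLACE for algorithm=DEFAULT/unspecified, error out for
    an explicit algorithm=INSTANT."""
    if n_def + n_cols_added <= N_DEF_CAP:
        return "INSTANT"
    if requested == "INSTANT":
        raise ValueError("ER_TOO_MANY_FIELDS: Too many columns")
    return "INPLACE"  # silent fall-back

print(pick_algorithm(1000, 5))   # INSTANT
print(pick_algorithm(1022, 1))   # INPLACE
```

The point of the cap is that the check happens before the assertion-triggering wrap-around of n_def can occur.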
Commit: 755fe28
Bug #28674694 - REDO LOG FRAGMENT FILE SIZE DOES NOT MATCH CONFIGURED ONE

Post-push fix: adding a wait to fsync.

Reviewed by: Mauritz Sundell <[email protected]>
Change-Id: I26a19b9c653fd9a46849a2a3af20b9d815fcccdc
Maitrayi Sabaratnam committed Nov 8, 2022
Commit: 4e911f2
Merge branch 'mysql-5.7-cluster-7.5' into mysql-5.7-cluster-7.6
Change-Id: I78a4c09a1790d8843b6ca14ba8856c88425966a4
Maitrayi Sabaratnam committed Nov 8, 2022
Commit: 56810d8
Change-Id: I67c36b3afcc0c1fea40efbea8c8a0b283ccbabd1
Maitrayi Sabaratnam committed Nov 8, 2022
Commit: adfdddf
Commit: cdecfa2
Bug#34776172 MacOS: silence deprecated use warnings from XCode >= 14
- We have much use of sprintf, which is now flagged by clang as unsafe. Silence this, since we have too many uses to rewrite easily.
- This version of Xcode also flags loss of precision from 64-bit to 32-bit integers, typically when x of type size_t is assigned to an int. Silence this as well.

Change-Id: I3e5f829c7fdb8ddb08c56149bc0db1a5dc277f34
Dag Wanvik committed Nov 8, 2022
Commit: 7a37c9a
Commit: 9de2988
Bug #33968442 Hypergraph gives too high row estimates for GROUP BY
This commit fixes the above bug by making better row estimates for "GROUP BY". We now use (non-hash) indexes and histograms to make row estimates where possible. Otherwise, we use rules-of-thumb based on table sizes and input set sizes. Change-Id: Ibfdd246f7251c29bb6a8b3a641ea067d65b72dbc
Jan Wedvik committed Nov 8, 2022
Commit: 8ac1455
Commit: 918de35
Commit: f5427ee
Bug #34640773 Imbalance in fragment distribution when ClassicFragmentation OFF

Post-push fix. Make test ndb.ndb_reorganize_partition deterministic. It failed on some runs:

CURRENT_TEST: ndb.ndb_reorganize_partition
@@ -176,12 +176,12 @@
 1 test/def/t1 Unique hash index 2
 2 test/def/t1 Unique hash index 1
 2 test/def/t1 Unique hash index 2
-3 test/def/t1 Unique hash index 2
 3 test/def/t1 Unique hash index 1
+3 test/def/t1 Unique hash index 2
 4 test/def/t1 Unique hash index 1
 4 test/def/t1 Unique hash index 2
-5 test/def/t1 Unique hash index 2
 5 test/def/t1 Unique hash index 1
+5 test/def/t1 Unique hash index 2
 6 test/def/t1 Unique hash index 1
 6 test/def/t1 Unique hash index 2
 7 test/def/t1 Unique hash index 1

Change-Id: I85d58c7b77104cc73fc03bf20675a4b85c5aaff2
Commit: e869782
Bug#34549189 ndb_config --diff-default does not work
ndb_config --diff-default should show which configured parameters differ from their default value. Since 8.0.29 it did not work for system, node, or connection sections:

$ ndb_config --diff-default --system
Segmentation fault (core dumped)
$ ndb_config --diff-default --nodes
config of node id 1 that is different from default
CONFIG_PARAMETER,ACTUAL_VALUE,DEFAULT_VALUE
.../ndb_config.cpp:463: require((InitConfigFileParser::convertStringToUint64(def_str, def_value))) failed
Aborted (core dumped)
$ ndb_config --diff-default --connections
<nothing>

Even before 8.0.29 there were some issues with --diff-default that are all now fixed:
- Mandatory parameters or parameters with no default value were never printed.
- The option --type only worked for --nodes; it was not possible to select TCP or SHM for --connections.
- For --nodes, one could specify ndbd or NDB for --type, but not DB, even though DB is a valid section name in config.ini; now DB is also a valid node type.
- Enumerated values in the configuration were shown as numbers. These numbers were neither documented for the user nor usable in config.ini. Now enumerated values are shown by name, not number.

Change-Id: I5aca113174bca431269dd517623dfbc33e36b396
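The intended --diff-default behavior can be sketched as a simple comparison against a defaults table. The parameter names and values below are made up for illustration; this models the output rule only, not ndb_config's code.

```python
def diff_default(configured, defaults):
    """Report every configured parameter whose value differs from its
    default. Parameters with no default at all are also reported (the
    pre-fix code skipped those entirely)."""
    rows = []
    for name, value in configured.items():
        default = defaults.get(name)
        if default is None or value != default:
            rows.append((name, value, default))
    return rows

defaults = {"DataMemory": "98M", "MaxNoOfTables": 128}           # illustrative
configured = {"DataMemory": "2G", "MaxNoOfTables": 128, "HostName": "ndb1"}
print(diff_default(configured, defaults))
# [('DataMemory', '2G', '98M'), ('HostName', 'ndb1', None)]
```

MaxNoOfTables matches its default and is therefore suppressed; HostName has no default and is printed anyway.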
Commit: 28bd68e
Bug#34773752 ndb_config can not read cached binary configuration
Adding ndb_config option --config-binary-file to read configuration from binary (cached) configuration file. Change-Id: Ia89acb338f9ec25e244d96960f9f4a777831d8d4
Commit: 21e4d8c
Commits on Nov 9, 2022
Bug#34702833 - Allow system thread to bind to user (THD) resource group.

Follow-up patch to fix RG alter handling when RG information is stored with a system thread. The original patch for this bug allows a system thread to bind to a user (THD) RG. But if a system thread remembers the THD RG on bind, then altering the RG's attributes will not be applied to the system thread. To fix this, a version number is introduced in the RG in-memory instance. The RG version is incremented on an alter operation. A system thread checks the version number before deciding to re-use the stored RG; if the version number mismatches, the altered THD RG is re-applied to the system thread.

Change-Id: I19af513e6cb49893b93cbce4143e565197f36b17
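The version-number mechanism can be sketched as a minimal model. Class and attribute names here are illustrative, not the server's actual resource-group classes.

```python
class ResourceGroup:
    """In-memory RG with a version that is bumped on every ALTER."""
    def __init__(self, name, priority=0):
        self.name, self.priority, self.version = name, priority, 0
    def alter(self, priority):
        self.priority = priority
        self.version += 1  # lets cached bindings detect the change

class SystemThread:
    def __init__(self):
        self.rg = None
        self.seen_version = None
    def bind(self, rg):
        self.rg, self.seen_version = rg, rg.version
    def current_priority(self, rg):
        if self.seen_version != rg.version:  # RG altered since bind: re-apply
            self.bind(rg)
        return self.rg.priority

rg = ResourceGroup("user_rg", priority=3)
t = SystemThread()
t.bind(rg)
rg.alter(priority=7)           # ALTER RESOURCE GROUP while the thread is bound
print(t.current_priority(rg))  # 7, not the stale 3
```

Without the version check, the thread would keep using the attributes captured at bind time.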
Commit: 1f94f74
WL#15147 - Thread Pool: Integrate Resource Groups feature with the Thread Pool

MySQL Resource Groups (RG) provide the ability to manage and assign resources to threads. But at present the RG feature is not supported when the thread pool (TP) is enabled. This WL integrates the RG feature with the TP. With RG, the TP works as below:
*) By default the "USR_default" resource group is assigned to user connections, the "SYS_internal" resource group to TP query worker threads, and "SYS_default" to other TP background threads.
*) An RG of type SYSTEM can be assigned to TP background threads other than TP query worker threads, and an RG of type USER to user connections. TP query worker threads are assigned the resource group "SYS_internal"; the user is not allowed to assign another resource group to these threads.
*) While attaching a tp_worker thread to a user connection to execute the connection's statements, the user connection's RG is bound to the tp_worker thread. The tp_worker thread's RG is unbound in the thread-detachment stage.

Change-Id: I45d4a6c205b9aa0aaa6ee8f53eaef1bcdeafd76f
Commit: 26c9413
Bug#34748973 - CPU AFFINITY IS NOT DROPPED ON RESOURCE GROUP DROP
When the resource group assigned to a thread is dropped, the USR_default resource group is assigned to it. USR_default has CPU priority "0" and no CPU affinity, so the thread should be able to use any CPU. But the thread stays bound to the CPUs of the dropped resource group. This issue is observed on Linux and FreeBSD only. While applying the default resource group, CPU affinity is dropped by setting the bits for all available CPUs in the system. To get the number of CPUs on Linux and FreeBSD, pthread_getaffinity_np() was invoked incorrectly: this function actually returns a CPU_SET with bits set only for the CPUs to which the invoking thread is bound. If the thread dropping the resource group is assigned to that same resource group, then all threads assigned to resource groups keep using the same narrow set of CPUs instead of all CPUs. To fix this, the method of obtaining the CPU count is changed so that it is correct.

Change-Id: I5def0582ff0a3c07f886b2a18e2c95c8a0c1e2bc
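The fix amounts to sizing the "all CPUs" mask from the system CPU count rather than from the calling thread's current affinity. A sketch, with Python's `os.cpu_count()` standing in for the corrected C-level call (the function name below is illustrative):

```python
import os

def full_affinity_mask():
    """Build the 'all CPUs' mask from the system CPU count. The bug was
    sizing this mask via pthread_getaffinity_np(), which reports only the
    CPUs the calling thread is already bound to, so a dropped group's
    narrow mask was perpetuated."""
    return set(range(os.cpu_count()))

restored = full_affinity_mask()  # covers every CPU, not just the old binding
print(sorted(restored)[:4])
```

On a thread previously pinned to, say, CPUs {2, 3}, the old approach would rebuild a mask of just {2, 3}; this one always covers CPUs 0..N-1.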
Commit: 699654a
Commit: 804b62d
Bug #33968442 Hypergraph gives too high row estimates for GROUP BY
post-push fix: Result of ndb.join_pushdown_default had to be updated to reflect the new algorithm for calculating GROUP BY row estimates. Change-Id: Ic5f061a61f25bad70df6e2f75c85c7804393db74
Jan Wedvik committed Nov 9, 2022
Commit: ff6d2f7
Bug#34781248 pooled connection not used if io.threads larger than 1
When connection pooling is used and a connection to a destination is picked from the pool, the connection is ignored if it belongs to another io-thread.
- By default, as many io-threads are started as there are CPU threads.
- Each new connection is assigned an io-thread round-robin.
E.g. a 24-core machine with 2 threads per core leads to 48 io-threads. If a server connection is taken from the pool for the current client connection, it must match the right:
- destination AND
- io-thread
With 3 possible destinations and 48 io-threads, the connection pool has to contain 3 * 48 connections to have a good chance that there is 1 matching connection for this client connection. The higher the io-thread count, the lower the cache-hit rate.

Change
------
- Ignore the io-thread when taking connections from the pool.
- Move pooled connections to the current client connection's io-thread when taken from the pool.

Change-Id: I586e7b5b1904a3a0020d6c8784928d549a1675e4
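The changed checkout behavior can be sketched as a pool keyed by destination only, re-homing the connection on checkout. The class and method names are illustrative, not the router's actual API.

```python
class ConnectionPool:
    """Pooled connections are keyed by destination alone; the io-thread
    is no longer a match criterion, and a checked-out connection is
    (conceptually) moved to the calling client's io-thread."""
    def __init__(self):
        self._by_destination = {}
    def stash(self, destination, conn, io_thread):
        self._by_destination.setdefault(destination, []).append((conn, io_thread))
    def take(self, destination, current_io_thread):
        entries = self._by_destination.get(destination)
        if not entries:
            return None
        conn, _old_io_thread = entries.pop()  # io-thread ignored on match
        return (conn, current_io_thread)      # re-homed to the caller's thread

pool = ConnectionPool()
pool.stash("db-a:3306", "conn-1", io_thread=41)
print(pool.take("db-a:3306", current_io_thread=7))  # ('conn-1', 7)
```

With destination as the only key, one pooled connection per destination suffices, instead of one per (destination, io-thread) pair.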
Commit: c89512b
Bug#34778017 authenticating over unix-socket fails
Connecting through mysql-router with a 'caching-sha2-password' account over a unix-socket fails:

$ mysql --socket=/tmp/router.sock
HY000 (2013) Lost connection to MySQL server at 'reading final connect information', system error: 95

Background
----------
As part of the caching-sha2-password authentication handshake, the router asks the client for the plaintext password and expects the client to respond with a 'get-public-key' request, as the connection to the client is not using TLS. But the client sends the plaintext password instead, because it treats the unix-socket as a secure transport.

Change
------
- treat unix-sockets as 'secure-transport'
- added tests

Change-Id: I72e01c604f4616ec998174a6aa6e2ae97d07088b
Commit: e31f9f2
Bug#34778746 use-after-free after async_send_tls() in mock-server
ASAN reports a use-after-free around async_send_tls() when running routertest_component_mock_server. async_send_tls() calls the completion function with net::defer() without an execution context, which leads to the net::system_executor being used; that starts a new thread to run the completion function. This is not intended and leads to a data race.

Change
------
- net::defer() the execution of the completion function in the socket's execution context instead, to properly synchronize.

Change-Id: I512c195adaa2a73210b541d18804b7d15a8d7f4e
Commit: 83701ef
Bug #34229520 : Ndb MTA unordered commits with log_replica_updates=1
Problem
-------
Commit ordering is not preserved for NDB transactions, even if replica_preserve_commit_order is turned on.

Analysis / Root-cause analysis
------------------------------
Commit ordering is implemented in several places:
1. binlog.cc, in the `ordered_commit` function
2. handler.cc, in the `ha_commit_low` function
Ordering is enabled when binlogging is turned on and the binlog has transaction information in its local caches, or when the binlogging option is disabled. NDB uses MYSQL_BINLOG as transaction coordinator, but it does not cache data locally. Therefore, commit ordering is never taken into account.

Solution
--------
The ordering condition in the `ha_commit_low` function is extended to take into account engines that do not use the binlog cache manager: the newly introduced flag 'is_low_level_ordering_enabled' is checked. The flag is set to true in the 'binlog::commit' function when the binlog caches are disabled or empty and binlogging is enabled in the current call to the function. This condition is checked by the implemented 'is_current_stmt_binlog_enabled_and_caches_empty' function, which may safely be called when binlogging is disabled or the binlog cache manager is not initialized. The check is done in 'binlog::commit' because there are cases in which the binlog caches were emptied after the thread entered the ordered_commit function in MYSQL_BIN_LOG. Please also note that it is important to check opt_bin_log in is_current_stmt_binlog_log_replica_updates_disabled, because of statements such as ALTER TABLE OPTIMIZE PARTITION, where the call to trans_commit_stmt in mysql_inplace_alter_table (Implicit_substatement_guard disabled) is not the last commit call.

Signed-off-by: Karolina Szczepankiewicz <[email protected]>
Change-Id: I942f0a42aa9b3a033279a58deb8d37345b887b90
Karolina Szczepankiewicz committed Nov 9, 2022. SHA: 7efe1ea

SHA: 1ad61e7

SHA: 937b897
Bug #34756282 mysql-8.0.31: build fails if building user cannot access mysqld's tempdir

When creating the file INFO_BIN, invoke mysqld with --no-defaults to avoid interfering with possible other installations of MySQL.

Change-Id: I25072ec674b5040f20d57da3a6ac70d7bebc9b1f
(cherry picked from commit 9b2f3d4dab7b1ca07e6a4f8b51b60e3de9270147)
Tor Didriksen committed Nov 9, 2022. SHA: 1e7727c
Bug #34776151 BISON_TARGET should use full path as <YaccInput>
Use the full source path for Bison and Flex source files. This simplifies debugging and gcov reports.

Change-Id: I10bd9966056a63c5f32b5ae2a5bd1a850bfeb8e1
(cherry picked from commit 4ec5dc61b099da1dbbf1473c82ea3be53f8ed18a)
Tor Didriksen committed Nov 9, 2022. SHA: 5850d37
Bug #29692047 INNODB.LOG_FIRST_REC_GROUP IS FAILING SPORADICALLY
Some InnoDB tests stop all log activity to allow the tests to create deterministic redo log changes. For this to work, all redo-producing threads must be stopped, but the GTID Persistor thread was not stopped: it kept producing redo records and interfered with the tests' assumptions. A new debug sync switch is added to effectively disable this thread. The new switch is used in log_disable_page_cleaners.inc and transitively in many tests that use it, including log_first_rec_group.test, which now uses the above include file instead of manual, incomplete snippets.

Change-Id: I1a0b2d727e9ef13a6fed6dcb5783f050ea81bad8
Marcin Babij committed Nov 9, 2022. SHA: 654a627
Commits on Nov 10, 2022
SHA: f17bfef
Bug #34275711: Assertion failure: arch0page.cc:350:file_index + 1 == count (weekly-trunk)

Issue: Mismatch in archive file count while loading the archiver.

Background: When an empty archive file was found during recovery, we would set the variable m_new_empty_file to true and delete that file before calling load_archiver, and the file count for that group would be decremented. The m_new_empty_file variable was set from false (the default) to true if the file size was 0.

Cause: After the changes in 5cad80198a2b77068b3806d13b162e4a32f7cbc4, we started creating new archive files with headers, which meant that the size of a newly created empty archive file would not be zero but the header size. Hence m_new_empty_file remained false, the file count was not decremented, and the assertion failed on the file count mismatch.

Fix: Modify the condition for setting m_new_empty_file from '!= 0' to '> header_len'.

Change-Id: Ie989ae67c249281ab29802818fedfc6d17fd87b2
Rachit Anklesaria committed Nov 10, 2022. SHA: dffef1d
SHA: b61c317
Bug#34783136 Add Result::Suspend state and resume() method
Add a Result::Suspend state and a resume() method to the Router's Processor and MysqlRoutingClassicConnection classes.

Change-Id: I8bb517172010e31835f8501e65a9354e95accd59
Thomas Nielsen committed Nov 10, 2022. SHA: 78e982d
Bug #34688683 Allow using 32-bit MIT Kerberos for client plugin on windows

WL #15336 added support for using MIT Kerberos to build authentication_kerberos_client on Windows. The cmake code assumed 64-bit libraries. Fix the cmake code to look for 32-bit libraries in 32-bit builds. Also fix some build warnings and a build break (redefined typedef) for 'typedef SSIZE_T ssize_t;'.

Change-Id: I30d0b9c434202c3fb51ce8d1c9329abd737038e2
Tor Didriksen committed Nov 10, 2022. SHA: c444ce7
BUG#24963738 - INCONSISTENT HANDLING OF NULL VALUES IN GTID FUNCTIONS
Problem
-------
To be consistent with other native functions, the GTID functions GTID_SUBSET and GTID_SUBTRACT should return NULL when passed NULL or erroneous arguments. The WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS and WAIT_FOR_EXECUTED_GTID_SET functions return an error when NULL is passed.

Analysis / Root-cause analysis
------------------------------
GTID_SUBSET and GTID_SUBTRACT need to first check whether their parameters are NULL and, in that case, return NULL. WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS and WAIT_FOR_EXECUTED_GTID_SET return an error when NULL is passed.

Solution
--------
Added guard code to check if parameters are NULL and, if so, return NULL or report an error.

Change-Id: Ib984ba04447fbbf3f390447516399aa9fe58f7c0
Slawomir Maludzinski committed Nov 10, 2022. SHA: a1a6d42
Bug #34654470 Ndb : Copying ALTER safety check false positives
The copying ALTER safety check uses the sum of per-fragment commit count values to determine whether any writes have committed to a table over a period of time. Different replicas of a fragment will not necessarily have the same commit count over time, as a fragment replica's commit count is reset during node restart.

ReadPrimary (RP) tables always route read requests to a table's primary fragment replicas. ReadBackup (RB) and FullyReplicated (FR) tables optimise reads by allowing CommittedRead operations to be routed to backup fragment replicas. This results in the set of commit counts read not always being stable for RB and FR tables, which can cause false-positive failures for the copying ALTER TABLE safety check.

This is solved by performing the copying ALTER TABLE safety check using a locking scan. Locked reads are routed to the same set of primary/main fragments every time, giving stable counts.

The testcase ndb_alter_table_copy_check is extended to test this functionality.

Approved by: Magnus Blaudd <[email protected]>
SHA: b267eac
BUG#34567285 Wait for pending IO writes on shutdown
In some cases with innodb-fast-shutdown=2, the page cleaner will shut down with buffer pool IO writes pending. Waiting for the writes to complete ensures a clean exit. Similarly, the startup code that waits for pending IO reads is changed to also wait for pending writes.

Change-Id: I40120071371157a7ec2399dbaa65da163d15e5a6
Andrzej Jarzabek committed Nov 10, 2022. SHA: e50a802
Bug #34616560 - Assertion failure: row0sel.cc:4708:prebuilt->sql_stat_start || trx->state.load(std::memory_order_relaxed) == TRX_STATE_ACTIVE

InnoDB detects a deadlock with just two transactions, choosing one as a victim to be rolled back. This transaction is stopped in the middle of a subquery, but would continue the query anyway, resulting in a failed assertion down the line.

This patch forces UpdateRowsIterator::DoImmediateUpdatesAndBufferRowIds to return true, thus setting the local_error variable in the reading loop in UpdateRowsIterator::Read, which stops it from continuing the query. The query ends with an error message about the deadlock instead.

Note: the main issue is that the code ignored the return value of fill_record, which can signal an error.

Change-Id: I3278a419d813eb1dcf5c5be1a451436afb2bc7a6
SHA: 4434adb
Commits on Nov 11, 2022
WL#15422: Deprecate starting unquoted identifiers with a dollar sign
Emit a deprecation warning if an SQL statement has an unquoted identifier that starts with a '$'. This is in preparation for future support for dollar-quoted strings, which would conflict with such unquoted identifiers. Quoted identifiers may continue to start with a dollar sign with no warning.

The class Admin_command_collection_handler defined in plugin code used such an identifier while generating internal SQL; that identifier is now quoted. The 'starting $' rule is added to require_quotes() so that internally generated SQL correctly quotes identifiers starting with $. Some mtr tests used identifiers with a dollar sign and had to be adjusted.

router/src/routing/src/sql_lexer.cc has a copy of the lex_one_token() function, so the changes were copied into that file as well.

Change-Id: I68cc581a7f443a1a36629d5ef51a9e6fce23192d
Amit Khandekar committed Nov 11, 2022. SHA: 950eb14
Bug#34336033 - LOAD DATA INFILE with a subquery returns incorrect 'Subquery returns more than 1 row'

Issue: When the LOAD DATA INFILE command is used with a subquery having a JOIN condition, the subquery returns more than one row where it is expected to return only one row.

Analysis: In the case of LOAD DATA, the local transformations are skipped, and hence the join condition is not set. So during execution the hash iterator doesn't have the join condition and returns multiple rows.

Fix: Make sure the local transformations are applied even in the case of the LOAD command.

Change-Id: If4295ef60573eb380837ff7043497dad07645337
Nimita Joshi committed Nov 11, 2022. SHA: 2954a82
Bug #33144829: ASAN FAILURE IN AUTH_LDAP_KERBEROS_GLOBAL
Post-push fix: fix for EL6 build.

Change-Id: I9700aca9601a1486efd75daf73a6ccfe1bd2b818
SHA: 206b58c
Bug #34699398 Row estimates for joins ignores histograms
There is a function EstimateFieldSelectivity() that estimates the selectivity of a given field (when there is an equi-join with some other field). As it was, this function looked for (non-hash) indexes starting with that field and used KEY::records_per_key() to estimate the selectivity. If no suitable index was found, a default selectivity of 10% was used.

This patch ensures that if there is no suitable index, we look for a histogram. If there is one, we use Histogram::get_num_distinct_values() to estimate the selectivity.

Change-Id: I83af46e04142aa0260e701cd07e03c246696b3be
Jan Wedvik committed Nov 11, 2022. SHA: 44cd1e2
Bug #33968442 Hypergraph gives too high row estimates for GROUP BY
Post-push fix for a broken build on Windows: %lu was used for an argument of type size_t.

Change-Id: I35635db84b095c17f52db27300460b889c990f0e
(cherry picked from commit 33e2f68365200f79b3db6d7526e8178ea62710f9)
Tor Didriksen committed Nov 11, 2022. SHA: 5713000

SHA: 7b75a5d
Bug#34751807 - Mysqld crash - Assertion !state.m_required_items.empty()

Problem: MySQL misses evaluation of a constant filter condition with masking functions, so we see an access path with an impossible or always-true filter condition.

Solution: If we have a filter with a constant condition, handle it. The following possibilities are handled, based on the value of the filter condition:
- If false, replace the access path with zero_rows.
- If true, ignore this access path.

Change-Id: Ic32c24c2da32430acfbc96d3a0f719ecadcb95c1
Keshav Singh committed Nov 11, 2022. SHA: 8219cde
Bug #34790413 Two projects with the same/similar name in Windows build
We have two targets: parser-t and Parser-t. This is problematic on systems with case-insensitive file systems, notably Windows. Rename all the test targets in unittest/gunit/components/keyring_common/CMakeLists.txt. (Note: these renamed targets are not built by default.)

Change-Id: Ied01f5233a3960089b8c3b03c32939b6e2705b2f
(cherry picked from commit b643e71e1fe9dddd997d1b8b634940576dab4bf9)
SHA: f8aa4f4
Bug#34787854 unfair scheduling of read/write on slow-machines
If the router is slower than the server (for example when running under strace or valgrind, or at a low nice level), it may keep succeeding at read() and write() without ever switching to another connection. async_read() executes:
- 'read() -> EAGAIN -> poll() -> read()' when there is no data, where the poll() may lead to other connections being handled
- 'read() -> ...' when there is data already

Change
------
- always 'poll() -> read()' to guarantee other connections are not starved.

Change-Id: I805a4311fded1361ebf488512fdec6987575fc0d
SHA: 2cf7c62
Bug#34787879 linux-epoll issues unnecessary epoll_ctl(MOD)
After an event fired for epoll_wait(), the managed set of events is updated according to the ONESHOT rules:
1. if multiple events were registered and one fired, the remaining ones must be added again with epoll_ctl(..., MOD, ...);
2. if only one event was registered and that one fired, the epoll_ctl() can be skipped.

Checking with strace, the one-event case still issues an epoll_ctl(..., MOD, {ONESHOT|ET}), which is bogus, even though a check for that case is in place.

Change
------
- fixed the check for "there are still tracked events" before calling epoll_ctl() in after_events_fired()

Change-Id: Ibb66d61e5278e00c53d154aeb1f4218a8391d45d
SHA: a351cb1
Bug#34788019 unnecessary wakeups after fd-event-add
strace shows lots of read()/write() calls on an eventfd after an fd-event add/remove, even though there is no need for them:

  write(6, "\1\0\0\0\0\0\0\0", 8) = 8
  <... epoll_wait resumed>[{events=EPOLLIN, data={u32=6, u64=6}}], 8192, -1) = 1
  <... read resumed>"\1\0\0\0\0\0\0\0", 8) = 8

After an fd-event or timer is added, modified or removed, the io-service must be interrupted to become aware of the change. That 'notify' is supposed to interrupt a blocking poll()/epoll_wait()/... running in other io-threads, but it is not needed when running in the same thread.

Change
------
- added a per-thread Callstack to check if the add/remove of an event runs in the same thread as the io-context it is added to.
- don't interrupt (notify) the io-service if the add/remove comes from the same thread.
- updated running_in_this_thread() to work according to spec.
- fixed Timer::wait() to not call dispatch(), as it would defer the execution.

Change-Id: I5a48a2af00cb522c4de5a702293a7bfa73c36026
SHA: 9ea3878
Bug#34787953 add missing AF_UNIX support on windows
Windows supports AF_UNIX stream sockets since Windows 10 (April 2018 Update). The low-level functions should allow using them.

Change
------
- updated the socketpair() emulation on Windows to allow AF_UNIX
- added tests for autobind and abstract unix-sockets

Change-Id: I0e564f3d7790cc8eb89bc8c795b4c9cbb0cb48b9
SHA: 1570bc6
Bug#34401798 close client connection when server connection is closed by server [postfix]

Fixed a test failure where the "Error 4031 disconnect due to inactivity" wasn't received in time before the socket was closed, which results in "Error 2013 Lost connection".

Change-Id: I1745d24692159509929e25ebc445d29d5f3a432f
SHA: ab62007

SHA: cfbc3b9
Bug #34763565 - FTS worker threads should not copy parent thread THD
Create new THD instances for the worker threads in ddl0fts.cc, to avoid problems caused by more than one thread using the same THD instance. Error messages from worker threads must be propagated to the parent thread.

Change-Id: I8fe4cfafa9c697591cc662250c7d691846b83247
SHA: 9b32e1a
Commits on Nov 12, 2022
-
Configuration menu - View commit details
-
Copy full SHA for 242fcab - Browse repository at this point
Copy the full SHA 242fcabView commit details
Commits on Nov 14, 2022
Bug#33100586 CRASH IN ITEM_SINGLEROW_SUBSELECT::STORE
Issue: If the subquery of an ORDER BY clause has a view reference, it is not cleaned up, because the subquery might be referred to again during execution. When such a prepared statement is executed, the server exits because the query expression from the ORDER BY clause is present but not marked as optimized.

Fix: Clean up the query expression of the subquery if it is marked as not used by the resolver.

Change-Id: I7ce2c6243e132339ad69c2c353f64c8d1c0485a2
Maheedhar PV committed Nov 14, 2022. SHA: 5aae7ac
Merge branch 'mysql-5.7' into mysql-8.0
Maheedhar PV committed Nov 14, 2022. SHA: 388793f
Bug #34787608 Missing DIH checks of START_MECONF and COPY_GCIREQ signal data

Change checks for array access to handle incorrect signal data that could cause Uint32 wrap-around.

Approved by: Mauritz Sundell <[email protected]>
Change-Id: I04e9990953d14ed208c7c0568dc8a6e3ca311a33
Martin Sköld committed Nov 14, 2022. SHA: be76e17
Merge branch 'mysql-5.6-cluster-7.4' into mysql-5.7-cluster-7.5
Change-Id: Iae607c54f14c5a56316f1c8d529b2c3743b4008b
Martin Sköld committed Nov 14, 2022. SHA: 2674cc5
Merge branch 'mysql-5.7-cluster-7.5' into mysql-5.7-cluster-7.6
Change-Id: I5390e8fdd25ba017b28d6e371aed39fd57bf1d84
Martin Sköld committed Nov 14, 2022. SHA: a8345d9
Merge branch 'mysql-5.7-cluster-7.6' into mysql-8.0
Change-Id: Ie9accae7c10ee6fde9dc546fba1df2a32d1f2705
Martin Sköld committed Nov 14, 2022. SHA: 633f895
Bug#34727088: Assertion `!OrderItemsReferenceUnavailableTables(path, tables)' failed.

A query hit an assertion in debug builds and returned wrong results in release builds when running with the hypergraph optimizer, because it tried to sort the results on a column that was not available.

The problem happened when doing a sort with duplicate removal for a DISTINCT clause while there was a streaming step somewhere below the sort access path. The sort path sorted the rows on a column that was not mentioned in the SELECT list, and the column was therefore not materialized in the streaming step before the sort. The sort used the column value that happened to be available from the base table scan, instead of getting it from the temporary table, and this caused wrong results.

The reason why the sort access path sorts on a column not in the SELECT list is that the hypergraph optimizer tries all available sort-ahead orderings for satisfying DISTINCT, in the hope that one of them might also satisfy the ORDER BY clause that is processed just after DISTINCT. It picks a sort-ahead ordering that satisfies the ordering required for DISTINCT (via functional dependencies) even though it doesn't reference the column in the SELECT list. This ordering is picked only because it is seen first, not because it is in any way better than sorting on the original column.

This patch changes how the sort for DISTINCT is chosen. Instead of proposing all sort-ahead orderings that would satisfy DISTINCT, it now proposes:
1) The sort-ahead ordering that is directly derived from the DISTINCT clause in the query.
2) If there is an ORDER BY clause, all sort-ahead orderings that satisfy both DISTINCT and ORDER BY.

This makes it prefer orderings that reference columns in the SELECT list or in ORDER BY, and such columns are materialized should there be a streaming step below the sort access path.

Change-Id: Iadd9cf7a43f95aaf6a650c1c400166a4adf8f198
SHA: 7817b15
Bug#34782909 ndb_opt.index_stat fails on solaris sparc
Description: The test ndb_opt.index_stat fails on Solaris SPARC. This is because the primary key hash value is calculated differently on little-endian and big-endian systems, which causes a slightly different distribution. The partition that is scanned for index statistics in this test happens to have 21 records on a little-endian system and 27 on a big-endian one (SPARC). Adjust the test to allow both 21 and 27 records.

Change-Id: Ic1168e83a88dbfdd7dc403cb58e8660ad9a16a3e
SHA: fc9c437
Bug#34556068 Encrypted backups can not be restored on system with other endian

Post-push fix. The test ndb.ndb_print_backup_file failed with old perl:

  CURRENT_TEST: ndb.ndb_print_backup_file
  Bareword found where operator expected at ... line 47, near "s|$ENV{MYSQL_TEST_DIR}/||r"
  syntax error at ... line 47, near "s|$ENV{MYSQL_TEST_DIR}/||r"

Replaced the use of the regex return-value option /r, which is not supported by perl versions before 5.14.

Change-Id: I736120b9bb0c0b8e7bc80acc245494a0b9c134c6
SHA: 2ce5b35
Bug#33910786 Scalar subquery transformation combined with WHERE clause optimization lead to reject_multiple_rows being ineffective

Pushing down the condition into the derived table made from the subquery leads to a wrong result here. If we rely on run-time checking of the cardinality, we should not push down the condition, since it will potentially change the cardinality to zero or one, which is semantically wrong. This query

  SELECT 1 AS one FROM t WHERE 1=(SELECT 1 UNION SELECT 2)

is transformed to

  SELECT 1 AS one FROM t JOIN (SELECT 1 AS col1 UNION SELECT 2) derived WHERE 1 = derived.col1

and pushing down '1 = derived.col1' into the union removes the contribution from SELECT 2, which is wrong.

Solution: don't push down in this case.

Change-Id: If214f5fa88ed79c186ca4b9806445524e8f37e5e
Dag Wanvik committed Nov 14, 2022. SHA: 5f3bc95
Bug #32762229 [ERROR] [MY-013183] [INNODB] ASSERTION FAILURE: ROW0MYSQL.CC:2928:PREBUILT->TEMP

A DELETE statement executed using an index skip scan fails an assertion on the row template type. The index skip scan initialisation sets a keyread flag to indicate that it is sufficient to read only the indexed fields. Based on the keyread flag, a template is built which specifies a subset of the fields instead of the whole row. While performing the row delete, this hits an assert, since the template type is required to specify the whole row for deletes.

This is fixed in the index skip scan initialisation by checking whether the query requires fields that are not included in the index, and setting the keyread flag accordingly. A test case is added for a skip-scan DELETE query.

Change-Id: Iaa0cbd380ae12519b3dadc7924b538432bf6a77d
Priyanka Sangam committed Nov 14, 2022. SHA: 23f942a

SHA: 366f117
Bug#34637153: Performance of Hypergraph is lower than the old optimizer for point selects [1/7, benchmark, noclose]

The point select microbenchmark in hypergraph_optimizer-t does not set up multiple equalities. Since a fair amount of time is spent on the multiple equalities in a point select query, the microbenchmark is changed to set up multiple equalities too, so that it more accurately simulates the work done in a real query.

Change-Id: I844d71422c4c495fc9be537fbebd0975fd642593
SHA: b85948a
Bug#34637153: Performance of Hypergraph is lower than the old optimizer for point selects [2/7, refactor fast path, noclose]

Factor out the single-table fast path in MakeJoinHypergraph() as a separate function.

Change-Id: If16e95b6af246a93dd1165f338b6210e2102c980
SHA: 1d81aa5
Bug#34637153: Performance of Hypergraph is lower than the old optimizer for point selects [3/7, refactor conjunctions, noclose]

Factor out the code for visiting all terms of a conjunction in ExtractConditions() as a function template called WalkConjunction(). The template will be reused in later patches in this series.

Change-Id: I34c7d51b31e67487b406d626c414d1b3bf311226
SHA: a3e4e72
Bug#34637153: Performance of Hypergraph is lower than the old optimizer for point selects [4/7, skip normalization, noclose]

A non-trivial amount of time is spent doing unnecessary normalization of the WHERE clause in point select queries. This includes common subexpression elimination and constant propagation, and a rerun of constant folding in case the previous steps made something foldable. In the point select case, the WHERE clause is a simple equality, which has already gone through constant folding, so none of these steps can make it any simpler.

This patch skips the normalization when a single-table query has a WHERE clause that consists of a multiple equality only, or a conjunction of multiple equalities, as these will all be expanded to table filters that cannot be simplified any further.

Also, the call to CanonicalizeConditions() is removed entirely when a query accesses a single table. CanonicalizeConditions()'s only purpose is to expand remaining multiple equalities that couldn't be entirely pushed down as join conditions or table filters. In the case of single-table queries, the multiple equalities can always be fully converted to table filters, so there's nothing for it to do.

Microbenchmark results:
BM_FindBestQueryPlanPointSelect 1906 ns/iter -> 1545 ns/iter [+23%]

Change-Id: Ib3137a3603861966339c3beb6049c884aada1818
SHA: 80598d3
Bug#34637153: Performance of Hypergraph is lower than the old optimizer for point selects [5/7, mem_root_unordered_map, noclose]

CostingReceiver stores the plan candidates in a std::unordered_map. This leads to dynamic memory allocation when adding candidates to it. This patch replaces it with a mem_root_unordered_map, which allocates memory on the execution MEM_ROOT instead. The AccessPath objects that are stored in the map, and which consume more space than the map itself, are already stored on the execution MEM_ROOT.

Microbenchmark results:
BM_FindBestQueryPlanPointSelect 1545 ns/iter -> 1471 ns/iter [+5.0%]

Change-Id: I38a14d8faefa9618e3b4ab9b51587e0f6aa31145
SHA: 5b81353
Bug#34637153: Performance of Hypergraph is lower than the old optimizer for point selects [6/7, sargable, noclose]

When analyzing equality predicates to see if they are sargable, some time is spent checking that the arguments have compatible types, so that the predicate may be used for an index lookup. When the equality predicate comes from a multiple equality, we already know that the arguments have compatible types, since multiple equalities are constructed only if the arguments have the exact same type. We now skip the check for compatible types if the equality comes from a multiple equality.

One bug in the multiple equality construction had to be fixed in order to get this to work: a multiple equality was created for a string column compared to a constant JSON expression, if the string column had the same collation as the JSON expression (utf8mb4_bin). The predicate would therefore be considered sargable, and a lookup was performed on the index for the string column. This made the predicate use string comparison semantics instead of JSON comparison semantics, and wrong results were returned. This was fixed by adding an extra check to check_simple_equality() to prevent the creation of multiple equalities mixing JSON and string types.

Microbenchmark results:
BM_FindBestQueryPlanPointSelect 1471 ns/iter -> 1432 ns/iter [+2.7%]

Change-Id: I21788c44668ef9c5d2ca2c7d7c6d0a7163be3c35
SHA: 25c1848
Bug#34637153: Performance of Hypergraph is lower than the old optimizer for point selects [7/7, final predicates]

After join enumeration, the hypergraph optimizer applies the final predicates (those that reference no tables or are non-deterministic) on top of all candidate plans, and creates FILTER nodes for all (both final and non-final) predicates in the access path tree below. None of this is necessary in a point select query, since it has no final predicates to apply and no FILTER nodes to create (its predicate is pushed down to an index lookup). This step unnecessarily copies the EQ_REF access path from MEM_ROOT to stack, and from stack back to MEM_ROOT, and adds it to a freshly allocated Mem_root_array. While it's not terribly expensive, the cost is noticeable for the cheapest queries, such as point selects. This patch therefore skips this step when it sees it has a point select plan.

Microbenchmark results:
BM_FindBestQueryPlanPointSelect 1432 ns/iter -> 1366 ns/iter [+4.8%]

Change-Id: I834091382499c43b8b8de27cfa7e0b6057063c7f
(commit 518ac45)
Bug#33910786 Scalar subquery transformation combined with WHERE claus…
…e optimization lead to reject_multiple_rows being ineffective [follow-up] Skip explain for hypergraph due to different plan from old optimizer. Change-Id: Ia07c20689473afc3ff486b362957b69173c37254
Dag Wanvik committed Nov 14, 2022 (commit 4c93324)
(commit e96cbac)
(commit 316daf8)
(commit c41ddc8)
(commit 10d1c14)
(commit b93dc49)
(commit 977c21f)
(commit a90b01e)
(commit 3145f09)
(commit 3b18f40)
(commit 1604f8b)
(commit 67a0d6b)
(commit 2cdb2b1)
(commit f24b6d2)
(commit 49479dd)
Commits on Nov 15, 2022
Bug #34638573 Compile MySQL with clang 15 [3rd party]
Add some warning CLI options to src files, instead of libs.

The ADD_CONVENIENCE_LIBRARY macro doesn't create the zlib_objlib library on macOS, which results in this CMake error:

-- Performing Test HAVE_CXX_W_missing_profile
-- Performing Test HAVE_CXX_W_missing_profile - Failed
-- Performing Test HAVE_CXX_W_deprecated_non_prototype
-- Performing Test HAVE_CXX_W_deprecated_non_prototype - Success
CMake Error at extra/zlib/zlib-1.2.12/CMakeLists.txt:200 (TARGET_COMPILE_OPTIONS):
  Cannot specify compile options for target "zlib_objlib" which is not built by this project.

See in cmake/libutils.cmake: "For APPLE, we create a STATIC library only,"

Change-Id: I637ef093f8a8202749ad47f4f343ad20ade7bc58
(commit 813cf98)
Bug #34638573 Compile MySQL with clang 15
Fixing two compile errors that are triggered when using libcxx from LLVM 15.

https://reviews.llvm.org/D104002
std::unary_function is not available in libcxx under C++17, see:
https://en.cppreference.com/w/cpp/utility/functional/unary_function
Boost uses std::unary_function, but it has a workaround for using Boost headers in C++17, triggered by the macro BOOST_NO_CXX98_FUNCTION_BASE. See:
https://www.boost.org/doc/libs/master/libs/config/doc/html/boost_config/boost_macro_reference.html#boost_config.boost_macro_reference.macros_that_describe_features_that_have_been_removed_from_the_standard_

https://reviews.llvm.org/D130538
A new assert in libcxx is triggered in include/varlen_sort.h: std::iterator_traits<varlen_iterator>::reference should match the return type of varlen_iterator::operator*().

include/c++/v1/__algorithm/iterator_operations.h:100:5: error: static assertion failed due to requirement 'is_same<varlen_element, varlen_element &>::value': It looks like your iterator's `iterator_traits<It>::reference` does not match the return type of dereferencing the iterator, i.e., calling `*it`. This is undefined behavior according to [input.iterators] and can lead to dangling reference issues at runtime, so we are flagging this.
static_assert(is_same<__deref_t<_Iter>, typename iterator_traits<__remove_cvref_t<_Iter> >::reference>::value,

Fix a few warnings: Remove some explicitly defined "= default" constructors and destructors:
warning: definition of implicit copy assignment operator for 'Row' is deprecated because it has a user-declared destructor [-Wdeprecated-copy-with-dtor]
Mark a variable as potentially unused in tests (unused when __aarch64__).

Change-Id: Iad346bd0cdb1d25d958377b9c7a0dd5da7a45fad
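The libc++ 15 requirement can be illustrated with a minimal iterator whose `iterator_traits<It>::reference` matches `operator*()` exactly (an illustration only, not the varlen_sort.h code):

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <type_traits>
#include <utility>

// A well-behaved forward iterator over an int array. The key property is
// that `reference` is exactly the type returned by operator*().
struct GoodIterator {
  int *p;
  using iterator_category = std::forward_iterator_tag;
  using value_type = int;
  using difference_type = std::ptrdiff_t;
  using pointer = int *;
  using reference = int &;  // must match the result of *it
  int &operator*() const { return *p; }
  GoodIterator &operator++() { ++p; return *this; }
  bool operator==(const GoodIterator &o) const { return p == o.p; }
  bool operator!=(const GoodIterator &o) const { return p != o.p; }
};

// The consistency rule libc++ now enforces, expressed directly:
static_assert(
    std::is_same<std::iterator_traits<GoodIterator>::reference,
                 decltype(*std::declval<GoodIterator>())>::value,
    "iterator_traits<It>::reference must match the return type of *it");
```

If `reference` were, say, `int` (by value) while `operator*()` returned `int &`, the static_assert above would fail, which is the class of mismatch the libc++ 15 diagnostic flags.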
(commit 69fb953)
Bug#34713142: ASAN: memory leaks when killing thread executing prepar…
…ed statement

A thread within test_bug25584097() in mysql_client_test was missing a mysql_thread_end() call to clean up its resources on exit.

Change-Id: Ib540f5c0cafa8104528cc454a094cbe46625836a
Miroslav Rajcic committed Nov 15, 2022 (commit e7a03ba)
(commit 0e3c9d3)
(commit 6420a94)
(commit 117a10c)
Bug#34668313: Fix binlog.binlog_mysqlbinlog_source_gipk_info
To make the test more precise about what it tests, some new binlog files were added and more test cases were introduced.

Change-Id: I1479fd0d0090c3b4c2394ecbd6927db51dccc289
(commit fa27b3b)
Commits on Nov 16, 2022
(commit 69c5428)
(commit 1bee8e3)
Bug#34800850 EXPECT/ASSERT for stdx::expected
In the past, stdx::expected<> was ASSERTed with:

ASSERT_TRUE(res) << res.error();

or

ASSERT_THAT(res, Truly([](const auto &v) { return bool(v); }))

Instead of these long patterns, it is better to have a direct ASSERT_NO_ERROR() which is stdx::expected<> aware.

Change
------
- added ASSERT_ERROR(), ASSERT_NO_ERROR(), EXPECT_ERROR() and EXPECT_NO_ERROR()

Change-Id: I7d6882830cff1fb4b74c648ff696a21afc6a23bd
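What an expected-aware assertion boils down to can be sketched with a minimal stand-in for stdx::expected<> (assumption: the real template in mysql/harness/stdx is more general; the macros themselves wrap gtest, which is omitted here):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Minimal expected-like type: either holds a value or an error.
template <class T, class E>
class expected {
 public:
  struct unexpected { E err; };
  explicit expected(T v) : has_value_(true), value_(std::move(v)) {}
  expected(unexpected u) : has_value_(false), error_(std::move(u.err)) {}
  explicit operator bool() const { return has_value_; }
  const T &value() const { return value_; }
  const E &error() const { return error_; }

 private:
  bool has_value_;
  T value_{};
  E error_{};
};

// The predicate an ASSERT_NO_ERROR()-style macro checks, instead of the
// verbose ASSERT_TRUE(res) << res.error() pattern.
template <class T, class E>
bool has_no_error(const expected<T, E> &res) {
  return static_cast<bool>(res);
}
```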
(commit f6b902d)
Bug#34800877 add stdx::views::enumerate()
stdx::views::enumerate() takes a range and returns a view of a sequence-number and the values of the range:

[1, 3, 5] -> [(0, 1), (1, 3), (2, 5)]

Change
------
- added enumerate() to mysql/harness/stdx/ranges.h
- added the needed type-traits from C++20/23 to mysql/harness/stdx/iterator.h
- added tests

Change-Id: I17751b9266c8dcb224ad4695ad2dc5a69920bd94
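A minimal, eager sketch of what enumerate() produces (assumption: the real stdx::views::enumerate is a lazy view; this version copies into a vector for simplicity):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Pair each element of a range with its 0-based sequence number:
// [1, 3, 5] -> [(0, 1), (1, 3), (2, 5)]
template <class Range>
std::vector<std::pair<std::size_t, typename Range::value_type>> enumerate(
    const Range &r) {
  std::vector<std::pair<std::size_t, typename Range::value_type>> out;
  std::size_t i = 0;
  for (const auto &v : r) out.emplace_back(i++, v);
  return out;
}
```

A lazy view (as in C++23 std::views::enumerate) avoids the copy by generating the (index, value) pairs on iteration instead.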
(commit 75d3e8d)
Bug#34791625 integration tests for sharing are slow
routertest_integration_sharing executes several thousand tests sequentially and takes 180 sec or more to complete.

Change
------
- split routertest_integration_sharing into multiple tests

After the split, the tests can be run in parallel:

$ ctest --output-on-failure -j 8 -R routertest_integration_
..._integration_routing_reuse ........ Passed 47.47 sec
..._integration_routing_sharing_restart Passed 59.68 sec
..._integration_routing_direct ....... Passed 64.52 sec
..._integration_routing_sharing_constrained_pools 76.47 sec
..._integration_routing_sharing ...... Passed 109.77 sec
Total Test time (real) = 109.77 sec

Change-Id: Id9c6f5b50e6edb20f1815ca24157dc85ca8bd51f
(commit 389de29)
Bug#34801356 Aborted_clients increases for router connections
A connection through mysqlrouter leads to 'aborted_clients' getting incremented.

$ mysql --host=127.0.0.1 --port=6446 -e 'SHOW STATUS LIKE "aborted_clients"'
| Aborted_clients | 11 |
$ mysql --host=127.0.0.1 --port=6446 -e 'SHOW STATUS LIKE "aborted_clients"'
| Aborted_clients | 12 |

Aborted_clients gets incremented by the server when the connection is not closed by a 'COM_QUIT'.

Change
------
- forward the client's COM_QUIT to the server instead of closing the connection directly if the connection-pool is full.

Change-Id: Iafdd8c95e59f07ccbb7b3ec3af234243ae3425cf
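The decision change can be sketched as a small policy function (names hypothetical, not the router's actual code):

```cpp
#include <cassert>

// What to do with the server connection when the client sends COM_QUIT.
enum class CloseAction { ForwardComQuitToServer, PoolServerConnection };

// If the pool has room, the server connection is kept for reuse. If the
// pool is full, forward the client's COM_QUIT so the server sees a clean
// shutdown instead of counting an aborted client.
CloseAction on_client_com_quit(bool pool_has_room) {
  if (pool_has_room) return CloseAction::PoolServerConnection;
  // Before the fix, the server connection was simply closed here, which
  // the server counted in Aborted_clients.
  return CloseAction::ForwardComQuitToServer;
}
```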
(commit fa196c6)
Bug#34801531 queries classified even if sharing is not possible
Queries need to be classified if connection sharing is active, to track SHOW WARNINGS, SELECTs with forbidden tokens, ... But if connection sharing isn't possible (because of no connection pool, PREFERRED/AS_CLIENT, connection_sharing=0, ...), the classification can be skipped.

Before: transactions: 1751829 (175128.21 per sec.)
After : transactions: 1953741 (195311.91 per sec.)

Change
------
- skip query classification if sharing isn't possible.

Change-Id: I329c490f2b7bef8cd8fc7572b0c016dda1ddff1b
(commit 598fd3e)
Bug#34801929 PREFERRED/AS_CLIENT with unix-socket leads to unencrypte…
…d tcp to server

The router treats the client_ssl_mode/server_ssl_mode PREFERRED/AS_CLIENT as: encrypt the client-connection and server-connection if the client requests it and the server supports it ... which leads to the server-connection being encrypted by default.

$ mysql --host=127.0.0.1 --port=6446 -e 'SHOW STATUS LIKE "ssl_version"'
| ssl_version | TLSv1.3 |

But not if the connection is over unix-sockets:

$ mysql --socket=router.sock -e 'SHOW STATUS LIKE "ssl_version"'
| ssl_version | |

As the client treats the unix-socket as secure, it does not request to encrypt the client-connection, and the router therefore doesn't encrypt the server-connection.

Change
------
For PREFERRED/AS_CLIENT, changed from: encrypt the server connection if the server supports it and the client connection is encrypted
To: encrypt the server connection if the server supports it and the client connection is secure (encrypted or unix-socket)

Change-Id: Ia7ddf82c3bf7c5b4ed7885a7efc689462da68878
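The changed rule can be expressed as a small predicate (names hypothetical, a sketch of the decision only):

```cpp
#include <cassert>

// PREFERRED/AS_CLIENT after the fix: the client connection counts as
// "secure" if it is TLS-encrypted OR runs over a unix-socket; the server
// connection is then encrypted whenever the server supports TLS.
bool encrypt_server_connection(bool server_supports_tls,
                               bool client_is_encrypted,
                               bool client_is_unix_socket) {
  const bool client_is_secure = client_is_encrypted || client_is_unix_socket;
  return server_supports_tls && client_is_secure;
}
```

Before the fix, only `client_is_encrypted` was considered, so unix-socket clients silently got an unencrypted TCP connection to the server.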
(commit 26c27df)
Bug#34806320 connection sharing fails if client via unix-socket
When connection sharing is enabled and a connection is made via unix-sockets, the server-connection isn't shared.

For connection sharing to work, the router must be able to fetch the client's password, but for unix-sockets the password is never requested. Currently, the password is requested after the TLS handshake, but in the case of unix-sockets, the transport is already secure without TLS.

Change
------
- request the client's password even without TLS if the connection is over unix-sockets

Change-Id: I85653cc620681f5384b3f179b0b2051523985a0f
(commit 42c0e8a)
Bug#34549189 ndb_config --diff-default does not work
Post push fix. Allow GMK suffix on PortNumber and ShmKey in test.

ndb_config --diff-default normalises the values in the configuration before printing them. That includes using a G, M, or K suffix for integer values that are even multiples of 1024.

In some test cases data node dynamic port numbers are shown as negative values (-1 to -65535) represented as 32-bit unsigned numbers. To have deterministic result files these are replaced with <DYNAMIC-PORT> in the result. But the replace_regex command did not take normalization of port numbers using the K suffix into account.

See example failure:

CURRENT_TEST: ndb.ndb_config_diff_default
@@ -275,7 +275,7 @@
 Checksum,1,false
 HostName1,localhost,(null)
 HostName2,localhost,(null)
-PortNumber,<DYNAMIC-PORT>,0
+PortNumber,4194253K,0

Change-Id: I75c0706487e9c9c1a86f82ce507b6e10c83ff934
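The normalisation that produced "4194253K" above can be sketched like this (function name hypothetical, a sketch of the described behaviour, not the ndb_config code):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Print integer config values with a K/M/G suffix when they are even
// multiples of 1024, as ndb_config --diff-default is described to do.
std::string normalize_value(std::uint64_t v) {
  static const char *const suffixes[] = {"", "K", "M", "G"};
  int i = 0;
  while (i < 3 && v != 0 && v % 1024 == 0) {
    v /= 1024;
    ++i;
  }
  return std::to_string(v) + suffixes[i];
}
```

A dynamic port such as -52224, seen as the 32-bit unsigned value 4294915072, is an even multiple of 1024 (4194253 * 1024), which is why it was printed as "4194253K" and slipped past the <DYNAMIC-PORT> replace_regex.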
(commit e9d07a4)
Bug#34549189 ndb_config --diff-default does not work
Post push fix.

std::sort takes a strict less-than compare function. The function paraminfo_order was implemented as a less-than-or-equal compare function, while it should have been a strict less-than function.

This showed up as a test failure on macOS:

CURRENT_TEST: ndb.ndb_config
@@ -99,8 +99,8 @@
 BackupDataDir,.,(null)
 DataMemory,45M,98M
 FileSystemPath,.,(null)
-LockExecuteThreadToCPU,0-65535,(null)
 Nodegroup,0,(null)
+LockExecuteThreadToCPU,0-65535,(null)
 NoOfReplicas,1,2
 ThreadConfig,,(null)

Change-Id: I6999cd1b8d36a0862e950fdd09ed70b9d9e13a0d
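The bug class can be illustrated directly (an illustration only, not the ndb_config code): std::sort requires a strict weak ordering, and a "less than or equal" comparator violates its irreflexivity requirement, which is undefined behavior and can give platform-dependent orderings.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Correct: strict less-than, as the fixed paraminfo_order now is.
bool less_than(const std::string &a, const std::string &b) { return a < b; }

// Broken (do NOT pass to std::sort): returns true for equal elements,
// so comp(x, x) == true, violating the strict-weak-ordering requirement.
bool less_or_equal(const std::string &a, const std::string &b) {
  return a <= b;
}
```

With the strict comparator the sort order is well-defined on every platform; with the broken one, the order of "equal-ranked" elements (and even memory safety, in some implementations) is not guaranteed.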
(commit 772dfe4)
Commits on Nov 17, 2022
(commit 84a0908)
Bug #34804820: Change error message when INSTANT ADD/DROP not possibl…
…e in max column count

Background: When a table with the maximum column count (1017) has repeated instant drops followed by instant adds on the same column, the n_def parameter increases. This parameter has a max limit of 1023. Hence when we cross this limit, the server reports "Too many columns".

Problem: The error message from the server is misleading: columns can't be added with ALGORITHM=INSTANT only; we can still add columns using the INPLACE or COPY algorithm.

Fix: Fix the error message to avoid confusion and ensure consistency both for innobase and innopart.

Change-Id: I393c3cd7a69e29d34e384deee5725aaf8eec96f9
(commit 04f378f)
Bug#34805922 infinite loop at authentication on unexpected packets
If the server sends an unexpected packet to the client while it is authenticating, the client may end up in an infinite loop.

13:43:36.475601 connect(10, {sa_family=AF_UNIX, sun_path="/tmp/router-XVRKTE/....sock"}, 110) = 0
13:43:36.475662 recvfrom(10, "Q\0\0\0\n8.0.33-router\0\0\0\0\0`\22j{[\371o4\0"..., 16384, 0, NULL, NULL) = 85
13:43:36.475780 sendto(10, "\226\0\0\1\205\242\276\31\0\0\0@\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 154, 0, NULL, 0) = 154
13:43:36.475805 recvfrom(10, "\2\0\0\2\1\4", 16384, 0, NULL, NULL) = 6
... infinite loop until killed ...
13:43:43.934174 --- SIGINT {si_signo=SIGINT, si_code=SI_KERNEL} ---

The infinite loop is in authsm_handle_change_user_result(), which treats the unexpected packet as "try again", using the same unexpected input.

Change
------
- fail authentication on unexpected packets

Change-Id: If2ce981322acce46c403cd2fde8ce91abc628d58
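The state-machine fix can be sketched as follows (names and packet kinds hypothetical, a simplified model of the handler's decision):

```cpp
#include <cassert>

enum class AuthResult { Ok, Again, Error };
enum class PacketType { AuthOk, AuthMoreData, Unexpected };

// Simplified model of authsm_handle_change_user_result(): an unexpected
// packet must terminate authentication with an error.
AuthResult handle_change_user_result(PacketType pkt) {
  switch (pkt) {
    case PacketType::AuthOk:
      return AuthResult::Ok;
    case PacketType::AuthMoreData:
      return AuthResult::Again;  // genuinely needs another round
    case PacketType::Unexpected:
      // Before the fix this path returned Again, re-processing the same
      // unexpected packet forever; now it fails.
      return AuthResult::Error;
  }
  return AuthResult::Error;
}
```

The invariant restored by the fix: "try again" may only be returned when new input will be consumed, so the driving loop is guaranteed to make progress.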
(commit 7335ab0)
(commit b9aee72)
Commits on Nov 18, 2022
Bug #34686140 ConfigInfo.cpp DiskPageBufferEntries in the wrong part …
…of the file

The DiskPageBufferEntries entry in ConfigInfo was moved next to the other data node disk data config parameters.

Change-Id: I7c67c2cf211e402848c24e52eb202f753c863a0e
(commit 9055422)
Bug #33764143 Missed init of m_nodes in fix_nodegroup
Added missing initialization of m_nodes array in fix_nodegroup. Using uninitialised memory can lead to a crash when no node exists for a bucket. Change-Id: I15237d42a2127bbb7b2897e4b1e44725b80b0b92
(commit 0c051b1)
Merge branch 'mysql-5.7-cluster-7.5' into mysql-5.7-cluster-7.6
Change-Id: I6d01af051e02160523709738764e1e9349c99626
(commit 3bbc31d)
Merge branch 'mysql-5.7-cluster-7.6' into mysql-8.0
Change-Id: Ibec89b3735c264af962f2f7219b9078e4fa3e289
(commit 07e5359)
Bug #34809802 Upgrade the bundled lz4 library to version 1.9.4
Upgrade to lz4-1.9.4. This patch is for 5.7.
- unpack tarball
- git add LICENCE lib/*.h lib/*.c
- change LZ4_VERSION in cmake/lz4.cmake
- remove old version

Change-Id: Id60237b028f31d15f360b638b34aaee15767de1d
Tor Didriksen committed Nov 18, 2022 (commit d29e756)
Bug#34716246: MySQL 8.0.31 another crash at
substitute_for_best_equal_field

When multiple lateral derived tables are present in a query, and if any of the derived tables get merged, we will have view references which could be outer references. But the outer reference information is present only as part of the Item_view_ref and not the underlying field that it points to. When pushing a condition having outer references to a derived table, the "depended_from" information is not copied from the view reference but from the underlying fields to the cloned condition, leading to problems later.

In the query that is causing problems, we have 3 derived tables, 2 of them being lateral derived tables. The last lateral derived table has a condition which has outer references. But this table gets merged into the outer query block, which makes it eligible for condition pushdown. This condition could be pushed down to the second derived table (lateral). However, the underlying fields in this table are view references that are outer references, because the first derived table was also merged into the outer query block. While cloning this condition, we clone the underlying derived table fields to replace the original fields. Information related to all the field items in the underlying expression is copied to the cloned condition. Since "depended_from" is part of the Item_view_ref and not the underlying field, we end up having an outer reference without it actually being marked as such, leading to problems later.

The solution is to copy the "depended_from" and "context" information from the reference instead of the underlying field when cloning the reference.

Change-Id: I756372bb649d5939ebc4f4da1a60783082dc0832
Chaithra Gopalareddy committed Nov 18, 2022 (commit 5d5188a)
Bug #34778646 crashing in Item_func_as_wkt::val_str_ascii when feedin…
…g bogus GIS data via window functions Add missing error handling in Item_sum_hybrid::val_str(). Change-Id: I6c5c905fc5ba9375e2b74ef2505a0367f7f84cb2
Tor Didriksen committed Nov 18, 2022 (commit 73a3b6d)
WL#14793: Load log-components such as the JSON-logger earlier (before…
… InnoDB) Post-push fix: LTO RelWithDebInfo build with gcc 11.2.1 says: mysys/my_malloc.cc:298: error: free called on unallocated object urn [-Werror=free-nonheap-object] Change-Id: I067faf2511280049461676ee707545e2ab29dae6
Tor Didriksen committed Nov 18, 2022 (commit 7804e14)
Bug #34809802 Upgrade the bundled lz4 library to version 1.9.4
Upgrade to lz4-1.9.4
- unpack tarball
- git add LICENCE lib/*.h lib/*.c
- change LZ4_VERSION in cmake/lz4.cmake
- remove old version

It seems that our earlier patches to suppress UBSAN warnings are no longer needed.

Change-Id: I274f20821d8d4fb90e2726a5e381dd27727ce0d1
(cherry picked from commit f1fbf9277d8133c2c5bdc10fd591eea725f1ceb7)
Tor Didriksen committed Nov 18, 2022 (commit 42c8ef1)
NULL Merge branch 'mysql-5.7' into mysql-8.0
Change-Id: Ibdf81c2cab3783778cea3c599e8332dab841af23
Tor Didriksen committed Nov 18, 2022 (commit 0925e8e)
Bug#34763860 : CHECK TABLE to check the INSTANT/VERSION bit & report
corruption if both are set.

Post-push fix. The datadir was recorded in 8.0.33, which won't work with 8.0.32, so the datadir was recreated.

Change-Id: Ief077c98aac92dc08e5718225608e32cc1de1a22
Mayank Prasad committed Nov 18, 2022 (commit 32ab3c0)
Bug#34798706 : server not comming up after kill and restart
Background: When we log DML operations in the REDO log, we need the index information to apply those changes during recovery. This index metadata is logged in the REDO log, read back at recovery, and the index data structure is regenerated from it.

Issue: While parsing index information from the REDO log, the number of dropped columns was determined later in the flow, and at that time n_fields and nullable_fields need to be adjusted for the index. That adjustment was missing.

Fix: Adjusted n_fields and nullable_fields for the index after the number of dropped columns is determined.

Change-Id: I0fc18b74fe7f94acc489d8526db1d27be1dc206b
Mayank Prasad committed Nov 18, 2022 (commit cade55b)
Commits on Nov 19, 2022
Bug#34772890: Assertion failure: dict0dd.cc:1821:dd_table_is_upgraded…
…_instant(dd_table) thread

Background: For an INPLACE alter, when the table is not being rebuilt, we need to keep the instant metadata information of the table as it is. This is done by copying the INSTANT metadata from the old table definition to the new table definition.

Issue: In the earlier implementation of INSTANT ADD, each partition also has INSTANT metadata. So for a partitioned table, this metadata also has to be copied, but it was missing. Thus partitions of an upgraded partitioned table having INSTANT ADD columns would lose their instant metadata post INPLACE ALTER.

Fix:
- Make sure the earlier-implementation INSTANT metadata for partitions, if any, is also copied for partitioned tables during INPLACE ALTER.
- Fixed a couple of test scripts in which the ROW_FORMAT clause was missing.

Change-Id: I7e8f18b697b6667630b12bba02a7190fdda8445c
Mayank Prasad committed Nov 19, 2022 (commit 93cb964)
Bug #33788578 After performing instant add column, online rebuild DDL…
… will crash

Background: When a table is being rebuilt (for example using OPTIMIZE TABLE or ALTER TABLE .. ADD PRIMARY KEY), any DML done on the table is logged in the row log. Once the DDL completes, the row log is applied on the new table.

Problem: In the online DDL code path, row_log_table_get_pk_col() does not handle a clustered index having instantly added columns. It was observed that a similar scenario from upgrade fails for the REDUNDANT format.

Fix:
- Ensure that the correct function is called in row_log_table_get_pk_col().
- Ensure that the REDUNDANT format is handled correctly when reading the length.

Change-Id: I658c5745045057d4e7414f719dfca02ed8823a7a
(commit 1070fe2)
Commits on Nov 20, 2022
Bug #33788578 After performing instant add column, online rebuild DDL…
… will crash

Post push fix. Changed the type cast when printing the error message.

Change-Id: I41aad1a4bbadd2a6adfe177f8dbcc2284a6f5430
(commit 8a57d03)
Bug#34763860 : CHECK TABLE to check the INSTANT/VERSION bit & report
corruption if both are set.

Post-push fix: created the datadir on 8.0.29.

Change-Id: I50972e8f4787a51f43c459c012d438ad854c97c5
Mayank Prasad committed Nov 20, 2022 (commit c7c1490)
Commits on Nov 21, 2022
Bug#34347116 - Range intersection is not available for a compound
index when joining a derived table

Issue: When merging the derived table, a nested join condition is added to the derived table and the underlying tables are added to this join nest. Also, the join condition is associated with the derived table and not the table in the join nest. Evaluating range access is skipped if the table is the inner table of an outer join, or if the table is inner and it is not a semi-join. In this case the underlying base table is of the latter kind, so the range analysis is skipped and we do not get the range access method.

Fix: If the embedded table is derived, use the join condition associated with the derived table to evaluate range access. Skip marking the table as "constant" if the range check evaluates to "impossible range" and the condition used for the range check is from the derived embedding table.

Change-Id: I31b2a739198d3e2e5d8aff3e0b049aa1549734a9
Maheedhar PV committed Nov 21, 2022 (commit 40c62c4)
Bug#34767607: Improve log message printed when TP terminates idle con…
…nections

Problem: The thread pool would print the same generic message about connections timing out when connections were terminated due to inactivity. This made analysis of situations where connections time out unexpectedly harder.

Solution: Add a new INFO_LEVEL message which makes it clear that connections are terminated due to inactivity in TP, and which timeout values are being used to determine inactivity.

Also move the checking for idle connections into the stall checker's main for loop and traverse the thread group's own list of connections while holding LOCK_group, rather than use for_all_thd() without a lock. This way it becomes safe to access THDs for information, and we avoid traversing THDs that aren't managed by TP.

Replace the global min expiry time with one for each thread group.

Change-Id: I4e41cbfa5b9c91b261ecc3320c3b96ada0accb7e
Dyre Tjeldvoll authored and committed Nov 21, 2022 (commit 10412c3)
Bug#34768216 SPJ should scan local fragments first
When the SPJ block submits SCAN_FRAGREQs to the LDMs, it will usually scan only a subset of the fragments in parallel, based on recsPrKeys statistics if 'valid', or just a guess if no statistics are available.

SPJ has logic which may take advantage of the result collected from the first fragments scanned:
- Parallelism statistics are collected after SCAN_FRAGCONFs
- firstMatch elimination may eliminate keys needed to scan in the next rounds.

As scanning the local fragments is expected to have less overhead than the non-local fragments, it is preferable to spend the scan-parallelism on the local fragments first. The same should be expected for 'firstMatch' as well (when implemented, see bug#34768191).

The patch introduces two rounds to be made over the fragments in ScanFrag_send():
- The first round will only allow sending SCAN_FRAGREQs to local fragments.
- The second round may send to any fragment expecting a SCAN_FRAGREQ.

Change-Id: I0b7e024edd0b909661ca8a4929e33d969c3a5233
Ole John Aske committed Nov 21, 2022 (commit 4ebb294)
Bug#34723413 A 'pushed join' should not start with a scan-table retur…
…ning very few rows

The patch introduces a heuristic step in the NDB pushed-join handler code, where we qualify whether a planned pushed join should be 'accepted', or whether there likely are better plans to be found.

To solve this particular bug we use the expected 'num_output_rows' and the total number of fragments for the root table as a metric. If the expected 'num_output_rows' is less than 3/4 of the number of fragments, the root table is a candidate for not being a root, iff:
- The next table is a scan, and returns > 4x the num_output_rows of this table.
- All other child tables in the planned pushed join can be pushed in a 'reduced_plan' with the next table as root.

The patch has been tested against TPC-H with a 2-node x 4-LDM config. Improvements for relevant queries, where we start with a 'small table' (usually 'regions' or 'nations') are:

Q7: 0.43s -> 0.29s
Q8: 1.45s -> 1.17s
Q21: 1.94s -> 1.56s

A config with more data nodes is expected to show even better results.

Change-Id: I9da99246f25a3f9470d6d985d08073f890d185c3
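The acceptance heuristic above can be sketched as a predicate (names hypothetical, heavily simplified from the handler code):

```cpp
#include <cassert>

// Demote the planned root of a pushed join when it is a "too small" scan:
// it outputs fewer rows than 3/4 of its fragment count, the next table is
// a scan returning more than 4x its rows, and the remaining child tables
// can still be pushed in a reduced plan below that next table.
bool prefer_child_as_root(double root_output_rows, int root_fragments,
                          bool next_is_scan, double next_output_rows,
                          bool rest_pushable_below_next) {
  const bool root_too_small = root_output_rows < 0.75 * root_fragments;
  const bool next_much_bigger =
      next_is_scan && next_output_rows > 4.0 * root_output_rows;
  return root_too_small && next_much_bigger && rest_pushable_below_next;
}
```

For example, a 1-row table at the root of a join over 8 fragments (the TPC-H "small table" case) triggers the demotion, while a root already producing more rows than its fragment count does not.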
Ole John Aske committed Nov 21, 2022 (commit 1f4d52d)
(commit a2348a6)
Bug#34776970 Assertion failure in SimulatedBlock::assembleFragments()
Description: Fragmented signals are not supported for virtual (V_QUERY) blocks, since the different signal fragments may end up at different block instances. When DBTC and DBSPJ send a LQHKEYREQ or SCAN_FRAGREQ signal that may end up using V_QUERY, they check whether the signal will be fragmented and in that case change the receiver to a DBLQH instance.

The function sendBatchedFragmentedSignal is intended to use the same check to decide whether to fragment a signal or not, but it had a wrong check: signals with sizes from ~20KB to ~30KB that were not expected to be fragmented were fragmented, and potentially sent using V_QUERY, in which case they were likely to fail when received.

Fix: Make the size check in sendFirstFragment, used by sendBatchedFragmentedSignal, match the check in DBTC and DBSPJ.

Change-Id: I76ce8729901443d826722cfbd01a46ab42ca5839
Commit f841fac
Commit e53c34c
Bug#34682561: Assertion `!eq_items.is_empty()' failed in
make_join_hypergraph.cc When canonicalizing WHERE conditions, the optimizer fails to find replacements for the multiple equality attached to the topmost semijoin. This is because all the fields in the multiple equality are from the inner tables of the semijoin, which are not visible. However, semijoin conditions are often moved to the WHERE condition, so it is only correct for the WHERE condition to see all tables, not just the outer side of a semijoin. A similar change was made in Bug#34534373. Change-Id: Ieb96798a9d0ca83556693474dbaf1e87e1ed6d04
Chaithra Gopalareddy committed Nov 21, 2022 (commit 7333ffc)
Bug#34717171: Hypergraph: Assertion `false' failed
in join_optimizer.cc When creating equalities from multiple equalities, the hypergraph optimizer finds replacements from tables which are not visible for a join, i.e. it finds replacements from the inner side of a semijoin, which results in seeing semijoin conditions that break the rules. Such semijoin conditions result in plans with different costing, leading to an assert failure. The solution is to find the correct replacements when concretizing multiple equalities. Change-Id: I0434f4bc15ab35bf670bd59e673f962199deb1a7
Chaithra Gopalareddy committed Nov 21, 2022 (commit 4dc13f2)
Commits on Nov 22, 2022
Commit 06f501b
Commit cfd0c57
Bug#34782389: regression: match against: assert_consistent_hidden_flags:
Assertion `item->hidden == hidden' failed. An assert failure was seen in a query that had an IN subquery which performed full-text search. The assertion was triggered when adding a hidden element in the SELECT list for full-text functions in the HAVING clause. The assertion detected that the item was already in the SELECT list. (The query had no explicit HAVING clause, but the in-to-exists transformation had added one implicitly.) Fixed by adding the full-text function to the SELECT list only if it is not already there. Change-Id: I71c1cb3cfabe4df6ac76d13a432386dc0abb45f1
Commit 25dbd4c
Commit d8bba3d
Bug#34823952 Query plans are not stable due to non-deterministic sort…
… of Key_use_array Regression caused by the patch for bug#25965593 REPLACE MY_QSORT WITH STD::SORT. That patch introduced std::sort() to sort the Key_use array. The documentation for std::sort is very clear: 'The order of equal elements is not guaranteed to be preserved'. The compare function used to sort the Key_use items considers items equal if they refer to the same table / field. Multiple such items may exist, referring to different values, yet comparing as equal. With the home-grown my_qsort we were guaranteed the same sort implementation across all platforms, which also ended up with the same sort order of these equal Key_use items. That is no longer true with std::sort(), which gives no such guarantee. The patch replaces this specific sort with std::stable_sort(), which claims: 'The order of equivalent elements is guaranteed to be preserved.' The patch also uncomments a couple of query plan explains in spj_rqg.test which were previously commented out due to such unstable query plans. Change-Id: Ifbb151ccef5dfa847095e52999ac6357ba1f663d
Ole John Aske committed Nov 22, 2022 (commit add1371)
Bug #34756282 mysql-8.0.31: build fails if building user cannot acces…
…s mysqld's tempdir Additional patch: use --no-defaults also for the show_build_id target. Change-Id: I79a5679a7aa537a46792c012af27e3dacfc3badb (cherry picked from commit 18d331e181bb0a9d84923b0736d1978a54f69bc6)
Tor Didriksen committed Nov 22, 2022 (commit 9dab2eb)
Commits on Nov 23, 2022
-
Bug #34813456: Server error seen when installing a validate_password …
…component The system variable registration service does not stop on errors while applying pre-existing persisted values to the newly registered variable. It continues the execution of the current SQL statement, eventually returning OK even when there are errors in the DA. This is a problem only for SQL operations resulting in variable registration, e.g. INSTALL COMPONENT. Fixed by downgrading the accumulated SQL errors when processing the persisted values. Also moved one of the relevant error codes to match its usage: from log file to SQL errors. Test case created. Change-Id: Ifdcd50965364ce6f469783c45ee2a18624986bbc
Commit 0cc365a
Addendum 1 to Bug #34813456: Server error seen when installing a vali…
…date_password component Fixed a gcc warning. Change-Id: Ifdcd50965364ce6f469783c45ee2a18624986bbc
Commit b508c13
Bug #34827945 FetchWholeTopologyConnections fails randomly
TSAN complains about the concurrent r/w access to the fetch_whole_topology_ boolean flag. When this is addressed, the test still sometimes fails because it does not wait long enough for the call to fetch_whole_topology() to propagate before doing the checks that rely on it. This patch makes the fetch_whole_topology_ flag atomic. It also fixes the test to wait long enough after the fetch_whole_topology() call to cover worst-case timing. Change-Id: I8c6d02624f5585abb865ad9b513a1d7c898d0b2b
Andrzej Religa committed Nov 23, 2022 (commit 9413d65)
Bug #34805489 Add RTLD_NODELETE when loading components for ASAN builds
For ASAN and LSAN builds: Do not unload the shared object during dlclose(). LeakSanitizer needs this in order to provide call stacks, and to match entries in lsan.supp. Change-Id: Iacc74423e5545afd46ad7e2650a308753fea0f9d (cherry picked from commit bfb6a6b66e3dc24b53539bcf7270c453e0bc674f)
Tor Didriksen committed Nov 23, 2022 (commit 11bedc0)
Bug#34808199: Assertion `!OrderItemsReferenceUnavailableTables(path,
tables)' failed. An assert failure was seen in a query ordered by an expression containing IS NULL when running with the hypergraph optimizer. The assertion was supposed to check that the order items referenced temporary tables instead of base tables if there was a materialization step before the sorting. It failed because the IS NULL expression seemed to access a column in a table that was not available in the sort node. In fact the column was never accessed: it is non-nullable, so IS NULL does not have to read it. The plan of the failing query was: -> Nested loop inner join -> Sort: ((t1.x is null) = t2.x) -> Table scan on t2 -> Table scan on t1 Notice that the sort-ahead on t2 seems to access t1.x, but in fact does not, because t1.x is declared as non-nullable in the table definition. So it is actually OK to push this sort down to t2. Fixed by making the assertion more lenient, only checking for tables that have a materialization step between the sort and the base table access. Change-Id: Ib81e2aa256bfede0e64042c8c96ecbd5fca4869e
Commit 0ad494f
Bug#33725530 : A bug in item_subselect.cc cause mysql-server
crash. Problem: -------- While attempting to convert an IN subquery predicate to a semi-join, a subquery that was not a simple query block was selected as a candidate. However, such subqueries cannot be used with semi-join, since the subquery's inner query block may have enclosed ORDER BY and/or LIMIT clauses that determine which rows to select from the query block. Solution: --------- Do not allow such subqueries to be picked as candidates for semi-join transformation. This patch does not fix the following case: often such ORDER BY clauses can be removed or merged with inner subqueries. If we can transform such query trees by removing parenthesized query blocks whose ORDER BY clauses have been moved, before calling optimize(), it opens up the semijoin materialization strategy for them. Change-Id: Id4b46c284db3daf269ff963af231a48acd46d5d2
Commit 4361b8e
Bug#33617591 Test spj_rqg failing in pb2
Query plans were different due to slightly different row distribution and table statistics for the test tables on big- vs. little-endian platforms. The patch adds optimizer hints for table join order and query algorithms to the affected queries. Care is taken to not change the query plan as such, just to enforce the plan which was expected. Change-Id: I014ad805b2351ed25aaf2baccb6f856814b17bb3
Ole John Aske committed Nov 23, 2022 (commit f76c3d1)
Bug#32003890 NDBINFO_LOCKS_PER_FRAGMENT FAILS IN PB2 DUE TO RESULT MI…
…SMATCH The test case failed due to the different endianness on Sparc vs. the other platforms we test on. That affected the distribution of rows in the test table, such that some fragments got more than a single row stored in them on Sparc, while other platforms had 0 or 1 rows stored in each fragment. (The test table contained 6 rows when the *.result mismatches were encountered.) The failing test cases had two connections against the database: - con1 takes and holds an exclusive lock on a row: begin; select * from test.t1 where a=1 for update; - Another connection reads and locks rows from the same table: select * from test.t1 where a % 1234 =1 for update; Lock conflicts are encountered at the *fragment* level, such that when reading the row/fragment holding the 'where a=1' lock, we will block and eventually time out on it. ---> Any rows stored after 'a=1' *in the same fragment* will never be read, nor locks taken. In the failing tests on Sparc, multiple rows were likely distributed to the same fragment storing 'a=1', and were never read as we blocked on 'a=1' first. Thus a different number of locks was reported on the Sparc platform. The patch works around this by creating the test table partitioned over 64 fragments instead of the default 8. Thus there will be either 0 or a single row stored in each fragment. Change-Id: I969324bdd5d4f1b47ae899166bc4fd877f4832ac
Ole John Aske committed Nov 23, 2022 (commit aa6b8c2)
Commit 5cb63b7
Commit b7e0400
Commit 4e12c1d
Commit 9cecd34
Commit f44ee1c
Commit 4394595
Commit dc13ed4
Commit a650b66
Commit 4df3e32
Bug#34449016 since 8.0.23+ CREATE USER became slower
Problem ======= Creation of 20k users with an IP in the hostname takes over 10 hours, compared to 12 minutes before WL#14074. Analysis ======== WL#14074 changed the way the ip_mask value of the ACL_HOST_AND_IP structure is set when an IP is used in the hostname part. Previously it was set only when a mask value was provided. Because of this bug, an IP provided with the host name was treated as a wildcarded hostname, and the algorithm for building the structure that keeps all wildcards is performance-costly. Fix === The host part of the user account is treated as a wildcard only if the mask is not 255.255.255.255. Change-Id: I6e07f0cc73911951485024bda152920e87c485f9
Commit 01b78ef
Commits on Nov 24, 2022
Commit 384220b
Bug#34556764: Contribution by Facebook: Fix sha256_password_auth_clie…
…nt_nonblocking Description: sha256_password_auth_client_nonblocking should make use of RSA public key (if available) if connection is not secured using TLS. Change-Id: Ie01ce0705df1cc57f75c9dbe57fc8f9045ce71a1
Commit 099e452
BUG #34122122: Add a specific column after instant drop column will
cause data error and crash PROBLEM : - When we DROP a column INSTANTly, it is renamed to col_name+_dropped_v_+<version> and made hidden. - On adding a column with the same name as the hidden INSTANTly dropped column, metadata is corrupted as the entry of the INSTANTly dropped column is lost. This later hits an assertion or gives an incorrect result. - An attempt to drop a column with a name >53 characters long results in an error. FIX : - ER_WRONG_COLUMN_NAME is raised when there is a conflict in column names. - The renaming format for a dropped column is changed to '''!hidden!_dropped_v+<version>+_p+<phy_pos>+_+col_name''' - The return type of some functions changed from void to bool in order to signal success/failure. - The maximum length of a renamed column is limited to 64 by cropping the tail. Change-Id: I6605cbbaad5718a0b7abbe7d8781d94bc86f5c03
Mohammad Tafzeel Shams committed Nov 24, 2022 (commit d4ecb24)
bug#34723119 -- MySQL client hangs after create index
Stop using the same THD instance in multiple threads in the DDL builder. Change-Id: I5aa71bc51018f0f238b507d3ea5e4e785e9f0d11
Commit 400f29c
Bug #34826194 Modify ndb(mt)d to ignore SIGCHLD
In some contexts ndb[mt]d may be sent SIGCHLD by other processes. Currently ndb[mt]d never starts child processes itself, so there is no need to take action in these cases. ndb[mt]d currently binds an error handling signal handler for SIGCHLD. This is modified to use SIG_IGN. Approved by : Mauritz Sundell <[email protected]>
Commit 772aa36
Commit b401a08
Commit 242e930
Commit ad35d61
Updated copyright year in user visible text
Approved-by: Bjorn Munch <[email protected]>
Commit ff3f07f
Updated copyright year in user visible text
Approved-by: Bjorn Munch <[email protected]>
Commit 6e4e344
Updated copyright year in user visible text
Approved-by: Bjorn Munch <[email protected]>
Commit fa2eddb
BUG#34828311 - Backport Bug#34555045 to 8.0.32
BUG#34555045 - join rejected in group bootstrapped with paxos single leader but runtime value 0 Problem ------------------- `performance_schema.replication_group_communication_information.WRITE_CONSENSUS_SINGLE_LEADER_CAPABLE` reflects the runtime value of the Paxos Single Leader setup in a group, and its main purpose is to let users know what the value of `group_replication_paxos_single_leader` must be on joining members. A group that was bootstrapped with single leader enabled but whose protocol version is downgraded to one that does not support it reports WRITE_CONSENSUS_SINGLE_LEADER_CAPABLE=0, as expected. However, attempting to join an instance to the group using group_replication_paxos_single_leader=0 fails. Analysis and Fix ------------------- We change the behaviour and make the value of `group_replication_paxos_single_leader` consistent with the Communication Version that the group is running. `group_replication_paxos_single_leader` was introduced in 8.0.27, and below that version it is not known or used. As such, we enforce the following rules: - When a node of version >= 8.0.27 joins a group that is running < 8.0.27, we must error out and state that `group_replication_paxos_single_leader` must be OFF before joining the group - When we try to run `set_communication_protocol` to a version < 8.0.27 and we are of a version >= 8.0.27, the UDF must error out if `group_replication_paxos_single_leader` is not OFF This patch also changes the value used to check whether we are allowed to change the group leader after running `set_communication_protocol`. Until today we would look at the runtime value of `group_replication_paxos_single_leader`. This is not correct since, as per the WL design, that value only takes effect after a group reboot. As such, when we run `set_communication_protocol`, we use the value shown in `performance_schema.replication_group_communication_information.WRITE_CONSENSUS_SINGLE_LEADER_CAPABLE` Change-Id: I4672243ef2ab31e8e2b4c943068abf304ca4753a
Commit 82edbf8
Bug#34823287 tsan backtraces on crash
When router is built with -DWITH_TSAN any segfaults will result in interleaved output of a mysql stacktrace and a tsan stacktrace. Change ------ - skip the my_print_stacktrace() when built with TSAN Change-Id: I03b740fcfce12ca9f57cd2c942772fb076b8f5e1
Commit 8f7456a
Bug#34823313 memleak in mysqlrouter_passwd with openssl 1.0.2
ASAN reports a memleak in mysqlrouter_passwd. It is triggered when Digest::reinit() is called, when openssl 1.0.x is used, but not when openssl 1.1.x is used. Change ------ - call EVP_MD_CTX_cleanup() before EVP_DigestInit() in .reinit() when openssl 1.0.x is used. Change-Id: Id5f2ec7abac474886bcb31efe5f1f52917e188f4
Commit 56f198a
Bug#34823366 possible data-race creating x-connections
TSAN reports a possible data-race between creating an x-connection and further handling of the connection. The connection is created in the acceptor's io-thread until it blocks, but then gets further handled in the io-ctx of the client-socket ... which may be another io-thread. This also blocks the acceptor until the x-connection would-block (which may take a long time if all data can be read from the socket directly), resulting in a slower accept-rate. Change ------ - defer handling the x-connection Change-Id: I1323283ad0f05cbc82ec85274545099705749d83
Commit 739bcdc
Bug#34824457 logs missing on unclean process exit
If the router's integration tests fail with an unexpected exit-code, they should dump logs on failure, but instead no logs are dumped. That is caused by the order of operations: 1. if the test failed, dump logs; 2. check for clean exit. That works as expected for normal test failures, but not if the monitored executable crashes after the test itself passed. Change ------ - flip the order: first check for clean exit, then dump logs on failure. - removed a not-needed .dump_logs() Change-Id: Iaa0cc56e64446444df789008a98d7c8e62e4e607
Commit 056a2df
Bug#34824469 stacktrace tests fail with TSAN
The stacktrace tests expect a debugger to be started, but TSAN intercepts signals and dumps its own stacktraces, leading to a test-failure. These tests are already disabled for ASAN and UBSAN. Change ------ - disable test for TSAN too. Change-Id: Ic76b59cee88e98a5ea623d642d9a3ae8d2dd1749
Commit e99ab36
Bug#34824515 concurrent_map::for_each does not support pass-by-value
ConnectionContainer's concurrent_map allows iterating over its members with .for_each(). for_each() takes the function by reference, so it currently must be placed on the stack first: auto f = [](auto &v){ ... }; m.for_each(..., f); This forbids passing the function as a lambda directly: m.for_each(..., [](auto &v){ ... }); Change ------ - take the function by value Change-Id: Icb8cc46b8af3247c14ef4d07a6b088ab7f3183bc
Commit 729783f
Bug#34824642 data-race on disconnect
TSAN reports a data-race between disconnect() and socket-close(). disconnect() may be called from another thread, while socket-close() is called from the io-thread that owns the socket's io-ctx. Change ------ - net::dispatch the socket-cancel to be executed in the io-ctx of the socket. - wrap the connections in a shared-pointer to ensure that the connection-object stays alive until net::dispatch() has finished, even if the connection itself is closed in the meantime. - moved disconnect() into the .cc file. Change-Id: I2133475d8e33839824ced150f9f7cf366e5144b6
Commit 905cf2e
Bug #34685386 Several unit tests are not build, remove or integrate w…
…ith cmake Fix testConfigValues Change-Id: I6a5d370ae0253b2c79cfa470e9f6240107a40b35
Commit eda56ac
Bug #34685386 Several unit tests are not build, remove or integrate w…
…ith cmake testProperties removed. It checked writing and reading properties to a file, which is no longer a useful use case. Some functions covered by the test are also no longer present in the code. The uucode.cpp and uucode.h files, used to encode data, are also removed since they are obsolete and not used anywhere besides testProperties. Change-Id: Ie27ef3274a4ffd98e12bedb97daad9e247f675b5
Commit 6a5a73d
Bug #34685386 Several unit tests are not build, remove or integrate w…
…ith cmake Fix testDataBuffer Change-Id: I4f2b465a49835006ead672eeddaa1c94c17ffb94
Commit 8729c06
Bug #34685386 Several unit tests are not build, remove or integrate w…
…ith cmake Fix test_cpcd Change-Id: Ib9fcee3eb061bd779d7ee2bef93942a13657d137
Commit 85df3c1
Bug #34685386 Several unit tests are not build, remove or integrate w…
…ith cmake Fix testCopy Change-Id: I41f3212a86e53b69a581734befad09d9814e17e7
Commit ac27430
Bug #34685386 Several unit tests are not build, remove or integrate w…
…ith cmake Fix testSimplePropertiesSection Change-Id: Id520c5eac787c168a6b5ae92757a3d4da56116c7
Commit 9bda69e
Commits on Nov 25, 2022
Commit aefe161
Bug #34646510 innodb.zlob_ddl_big failing on pb2 daily-trunk
This issue is specific to compressed row format. When the sub-trees are merged and a new root page is added, it was assumed that the compression of this new root page will always be successful. But for small page sizes the compression of this new page failed. When compression of the new root page fails, it needs to be split. Change-Id: I4bc4942e1cea11f103375d27bf5ce01cc4443302
Commit 80c4364
Bug#34833817 integration tests fail on solaris/macos/windows
1. Integration tests fail on solaris as ECONNRESET is returned instead of the expected ECONNABORT. 2. use-after-free in ~Procs() if it is used outside of a test. It calls testing::HasFatalFailure() which isn't allowed in Environments. 3. integration tests fail as ports are already in use. Change ------ - added ECONNRESET as expected error-code - don't call HasFatalFailure in ~Procs() - use a single port-pool for the whole process Change-Id: I8c2387b3e6266beb45c3279fb9f98517a19eba55
Commit 8b378f8
Commit 5b1fb2b
Commit 3713354
Commit 0a2e5e2
Commit 9e02926
Commit 6e15d53
Commit 0f32057
Commit 62cc657
Commit 4c5aad2
Commit 614521d
Commit ce3e2be
Commit 4136068
Commit f0b98df
Bug #34685386 Several unit tests are not build, remove or integrate w…
…ith cmake Post push fix: Removed #include of already removed uucode.h file (8.0 only) Change-Id: I0b282bc5e453d64f04677026e85d6742dc0013ce
Commit 337c459
Bug#34834787 Typo in error message when bootstrap servers list is empty
This patch fixes the typo in error message: list of metadata-servers is empty: 'bootstrap_server_addresses' is the configuration file is empty to list of metadata-servers is empty: 'bootstrap_server_addresses' in the configuration file is empty Change-Id: Iaaf1c7e232d381412c51b6062fa62dd4f194a55c
Andrzej Religa committed Nov 25, 2022 (commit 3028dac)
Commit 9227a84
Commit 468d724
Commits on Nov 28, 2022
Commit 9a84c53
Approved-by: Bjorn Munch <[email protected]>
Commit 190d282
Commits on Dec 7, 2022
Commit e1980c8
Bug #34828111 CURL update to 7.86.0 [add sources]
Unpack curl-7.86.0.tar.xz rm configure ltmain.sh config.guess config.sub Makefile rm -rf docs rm -rf m4 rm -rf packages rm -rf plan9 rm -rf projects rm -rf src rm -rf tests rm -rf winbuild git add curl-7.86.0 Change-Id: I9e5a4d27a1d064a5870dcb8ba269bc59ed08a50e Change-Id: I0e91991da09d6b3dee9653fce29f3c3b0ab08f78 (cherry picked from commit 9affac2f29690fc3af33cac78c0de7c0644eccba)
Commit: 3137bbb
Bug #34828111 CURL update to 7.86.0 [patches]
On Oracle Linux 7, we now support -DWITH_SSL=openssl11. This option will automatically set WITH_CURL=bundled. "bundled" curl is not supported for other platforms.

Disable in curl cmake files:
- cmake_minimum_required(VERSION)
- BUILD_SHARED_LIBS
- CMAKE_DEBUG_POSTFIX
- find_package(OpenSSL)
- install(...)
- set PICKY_COMPILER OFF by default

Change-Id: I3b9ec5048127589817e7917f564158364f0965f3
(cherry picked from commit 4c56fc4f53127cbc80bf3955b2f0747c59301c51)
Commit: 1b3b8f5
Bug #34828111 CURL update to 7.86.0 [remove old]
Remove all old source files. Change-Id: I82837b85aeafa1f80da66b5f34097be5648783be (cherry picked from commit f8a70670e8b58a2054bd3c26777bae8c00953393)
Commit: 7e1cfd2
Bug #34828111 CURL update to 7.86.0 [FILE protocol]
We have new functionality, implemented by WL#15131 and WL#15133 (Innodb: Support Bulk Load), so do not disable the FILE protocol in curl.
Change-Id: Ib05f4656c2d13c620756518638ef73fa373cf63f
(cherry picked from commit e85db298f4ba0a2de53baa978f452d1107c48f7a)
Commit: 6697bd5
Bug#34711762 Upgrade zlib to 1.2.13 [add tarball]
Unpack source tarball, git add everything. Change-Id: Ib6eb64f8e132ca59539208f7bf69245268804ee5
Commit: 944a699
Bug#34711762 Upgrade zlib to 1.2.13 [remove unneeded]
Remove things we do not need/want:
git rm -rf amiga/ contrib/ doc/ examples/ nintendods/ Makefile zconf.h
Change-Id: Ibd76884411c6596f2fcfcb6c3fe2f1f4aabadb73
Commit: a97f76d
Bug#34711762 Upgrade zlib to 1.2.13 [cmake changes]
Bump MIN_ZLIB_VERSION_REQUIRED to "1.2.13" and adjust paths to bundled zlib sources.

In extra/zlib/zlib-1.2.13/CMakeLists.txt:
- apply cumulative patches from the previous zlib upgrade
- apply fix for the macOS build (bug #34776172)

Change-Id: I1a0aeff115a96a0993f2f396c643eda1c1b4900b
Commit: 4eadbdb
Bug#34711762 Upgrade zlib to 1.2.13 [remove old]
Remove all old source files. Change-Id: I456635823feb21faa42b683f0bfb62d353cb80d4
Commit: 6797826
Commit: 761f865
Commit: 68cbc6b
Bug#33674644 EPOLLHUP|EPOLLERR leads to high CPU usage [1/3]
When a socket is shutdown() on both sides, but not closed, AND the socket is still monitored via epoll_wait(), epoll_wait() will return EPOLLHUP|EPOLLERR. It will be logged as:

after_event_fired(54, 00000000000000000000000000011000) not in 11000000000000000000000000000000

As EPOLLHUP and EPOLLERR are always watched for, even if they aren't explicitly requested, not handling them may lead to an infinite loop and high CPU usage until the socket gets closed.

Additionally, events may be reported for fds which are already closed, which may happen if:
1. io_context::poll_one() led to epoll_wait() fetching multiple events: [(1, IN|HUP), (2, IN)]
2. when the first event is processed, the event handler (for fd=1) closes fd=2 (which leads to epoll_ctl(DEL, fd=2) and close(2))
3. io_context::poll_one() processes the next event: (2, IN) ... but no handler for fd=2 exists.

This is more problematic if a new connection with fd=2 was opened in the meantime:
1. io_context::poll_one() led to epoll_wait() fetching multiple events: [(1, IN|HUP), (2, HUP)]
2. when the first event is processed, the event handler (for fd=1) closes fd=2 (which leads to epoll_ctl(DEL, fd=2) and close(2))
3. a new connection with fd=2 gets accepted.
4. io_context::poll_one() processes the next event: (2, HUP) ... and sends the event to fd=2, which gets closed even though the HUP event was for the old fd=2, not the current one.

Change
======
- expose EPOLLHUP and EPOLLERR as their own, separate events.
- if none of EPOLLHUP|EPOLLERR|EPOLLIN|EPOLLOUT is requested, don't pass the fd to epoll_wait().
- remove polled events when the fd is removed from the io-context

Change-Id: I145cacd457fa9876112789eb4bfd06fce1722c45
Commit: 2e347a4
Bug#33674644 EPOLLHUP|EPOLLERR leads to high CPU usage [2/3]
Change
======
Repeat the changes done for linux_epoll in [1/3]:
- expose POLLHUP and POLLERR as their own, separate events.
- if no interest for any of POLLHUP, POLLERR, POLLIN or POLLOUT is registered, don't pass that fd to poll()
- treat POLLHUP as POLLIN if only POLLIN is waited for, to handle the connection-close case nicely on Windows.
- remove queued events if a fd is removed from the registered set.
- added unittests for the poll io-service

Change-Id: I1311513492fe755d5f23432b34721e0ab1fc88a7
Commit: 0717153
Bug#33674644 EPOLLHUP|EPOLLERR leads to high CPU usage [3/3]
Change
======
Linux timestamping reports when a packet stepped through the layers of the Linux network stack on the send and receive side:
- kernel -> driver
- driver -> cable

Linux timestamping events are reported as EPOLLERR without EPOLLHUP and serve as a test-bed for the EPOLLERR handling.

Change-Id: I083e304d23c72880b974863d29c29aa9d25b8694
Commit: 58f1f4a
Bug #34847756 Revert WLs 14772, 15131 and 15133 from mysql-8.0 branch
Reverting the following WLs and bug fixes:
- WL#14772 InnoDB: Parallel Index Build
- WL#15131 Innodb: Support Bulk Load with Sorted data
- WL#15133 Innodb: Support Bulk Load from OCI Object Store
- Bug #34840684 Assertion failure: mtr0log.cc:175:!page || !page_zip || !fil_page_index_page_che
- Bug #34819343 Assertion failure: btr0btr.cc:731:ib::fatal triggered thread 140005531973376
- Bug #34646510 innodb.zlob_ddl_big failing on pb2 daily-trunk

Reverted Commit-Ids:
a8940134dd8d33e7fc25f641d627b640d56769b6
ae9fd03687486b5d01a7dbe766d73993d7c78efa
c4388545dc98e472b0f3d96db0e0d19d8231dc56
fd950026c1a4d11294b3448d8bfcd94631618611
ae9fd03687486b5d01a7dbe766d73993d7c78efa
226765401a5daa4a2443e1507343ed264f62f60f

Change-Id: I392bda99eeb825174d156fcd169caef7c4b712b0
Commit: ee43bc4
Commit: 92e40d0
Bug #34857411 : regression - slow connections/telnet block many other…
… statements for connect_timeout seconds, causing pileups

Description:
------------
This is a regression caused by the fix for Bug 34094706. When a connection stalls/blocks during the authentication phase, where a mutex is held, the other connections that are executing queries on I_S and P_S are blocked until the first connection releases the mutex.

Fix:
----
Instead of taking the mutex and checking thd->active_vio, we now check the value of the net.vio type in the is_secure_transport() check.

Change-Id: I02f50f7e90c6e683a7bbe0b5f99b932e819f1f08
Commit: a153bf5
Bug #34860923 : Timeout on cv in waiting_with_heartbeat cause dump th…
…read to stop

Problem
-------
In case a binary log dump thread waits for new events with a heartbeat configured and a new event arrives, it is possible that the binary log dump thread sends an EOF packet to the connected client (replica/mysqlbinlog/custom client...) before sending all of the events.

Analysis / Root-cause analysis
------------------------------
It happens in case the binary log dump thread exits with a timeout on the conditional variable just before the position gets updated. The function 'wait_with_heartbeat' exits with code 1, which is treated later on as the end of the execution.

Solution
--------
Ignore the code returned from the 'wait' function, since a timeout is not important information for the binary log dump thread. In case a timeout occurs, the binary log dump thread should continue execution, or abort in case the thread was stopped. Return 0 from wait_with_heartbeat, or 1 in case of a send/flush error.

Signed-off-by: Karolina Szczepankiewicz <[email protected]>
Change-Id: I027985aafc1234194f0798ba52b65cce36936f24
Commit: 948d83d
Commits on Dec 8, 2022
Commit: 0c5648d
Bug#33674644 EPOLLHUP|EPOLLERR leads to high CPU usage [3/3] - postfix
gcc12 reports:

harness/tests/linux_timestamping.cc:741:15: error: narrowing conversion of ‘attr_type’ from ‘size_t’ {aka ‘long unsigned int’} to ‘short unsigned int’ [-Werror=narrowing]
  741 | return {attr_type, {payload, payload_len}};

Change-Id: I28fb1a1ca32e6ffd1febe44c704a1ae438b414a2
Commit: b306693
Commit: f621691
Commits on Dec 15, 2022
Commit: 6c21f3d
Commits on Dec 16, 2022
-
Bug #34893684: Alter Table IMPORT TABLESPACE crashes after upgrade
PROBLEM:
- In the current version, the pattern for naming a hidden dropped column has changed.
- When the cfg file is taken from an older version, the hidden dropped column name follows the old pattern.
- When INSTANT operations are done in the current version in exactly the same order as done before creating the cfg file, the server crashes.

FIX:
- When searching for a dropped column with the older name version returns null, IMPORT fails with the error SCHEMA_MISMATCH.

Change-Id: Ifd93adafb78f0aa7b5ae1980b64a3230f94deae9
Commit: 7b6fb07
Commit: 1bfe02b