[pull] 8.0 from mysql:8.0 #2

Merged: 1,553 commits into Mu-L:8.0 from mysql:8.0, Jul 28, 2022

Conversation

@pull pull bot commented Jul 27, 2022

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

Tor Didriksen and others added 30 commits May 5, 2022 12:26
Change-Id: Ic601c5072bea537b018eb6ca735c2d0b627526aa
Post-push fix: remove unwanted files config.guess and config.sub

Fix linking of keyring_hashicorp.so, which depends on the correct
location of ZLIB::ZLIB

Change-Id: I47f1daa710042e51336bdcce31e6261d945cc91c
The failure here is due to a missing error return, which causes the pushing
and popping of name resolution objects during contextualization to go out of
sync; eventually we access a null pointer.

Fixed by adding a proper error return.

Change-Id: I0df999859440c52dd1352fcb9263e44916af4346
… least BACKUP_ADMIN

Made DO innodb_redo_log_consumer_register and DO innodb_redo_log_consumer_advance(lsn) require BACKUP_ADMIN privileges.

Change-Id: Ic6dc60c7ec606621119efe8c4809a2075da2a6d7
The metadata cache plugin exposes a cache_stop() API function that locks the
global g_metadata_cache mutex. While holding this lock it notifies the
metadata refresh thread to stop and waits for it to finish.
This can lead to deadlocks, since the refresh thread may also need
g_metadata_cache in order to progress and finish (directly or via some other
dependencies).

There should be no need for cache_stop() to lock g_metadata_cache, since the
two other API functions it needs to be synchronized with, cache_init() and
cache_start(), are called from the same thread as cache_stop().

This patch removes the locking of g_metadata_cache from cache_stop() to avoid
such deadlocks.

Change-Id: I2563e11884aa38ab33cd3ae0ec78ed1418b7db74
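
A minimal sketch of the idea, using hypothetical names rather than the actual
Router sources: cache_stop() signals the refresh thread and joins it without
holding g_metadata_cache, so the refresh thread is free to take that mutex
while it finishes.

    #include <atomic>
    #include <mutex>
    #include <thread>

    std::mutex g_metadata_cache;        // assumed global cache mutex
    std::atomic<bool> g_stop{false};    // stop flag needs no mutex
    std::thread g_refresh_thread;       // assumed refresh thread handle

    void cache_stop() {
      g_stop = true;                    // request stop without taking the lock
      if (g_refresh_thread.joinable())
        g_refresh_thread.join();        // the refresh thread may still lock
                                        // g_metadata_cache while finishing
    }
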
FROM_DAYS() development in HeatWave

This WL adds support for:

1. GREATEST/LEAST functions with TEMPORAL types
   - Temporal types are cast in QKRN by QkrnCastFuncArgToDateTime,
     which converts the types and compares the arguments of the
     GREATEST and LEAST functions.

2. FROM_DAYS()
   - Packed date/datetime/timestamp/time types are converted
     to MYSQL_LTIME, which is further converted to a date using
     existing MySQL functions.

MySQL bug 33996054 (Result mismatch with input column re-order
for GREATEST()) is noted; hence we are not offloading cases involving
mixed temporal and non-temporal types.

Change-Id: I394a101489038b3d2b417bc576af328d85fc765a
Post-push fix: ensure that we insert whitespace before appending
more flags to CMAKE_C_FLAGS and CMAKE_CXX_FLAGS.

Change-Id: Ib2e669f2aedbbdee4fb576f426aadc9c8c8fae17
…nts in global_status

Post-merge fix, 5.7 only.

Disable new test perfschema.misc_global_status when using the query cache.

Approved by: Georgi Kodinov <[email protected]>
Problem: While persisting system variables, a server crash during write
operations on the persist file can cause data loss.
Fix: Flush and fsync the contents to disk immediately during SET PERSIST;
this reduces the window for possible data loss.

RB#27839
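
A rough sketch of the flush-and-fsync idea (not the server's actual
persist-file code; write_persist_file is a hypothetical helper):

    #include <cstdio>
    #include <unistd.h>   // fsync, fileno (POSIX)

    bool write_persist_file(FILE *fp, const char *json, size_t len) {
      if (fwrite(json, 1, len, fp) != len) return false;
      if (fflush(fp) != 0) return false;          // push the stdio buffer to the OS
      if (fsync(fileno(fp)) != 0) return false;   // force the OS to write to disk
      return true;
    }
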
…gers are gone

Problem:
--------

NDB tables are skipped in the MySQL Server upgrade phase and are
instead migrated by the ndbcluster plugin at a later stage. As a
result of this, triggers associated with NDB tables are not
created during upgrades from 5.7 based versions.

It isn't possible to create such triggers when the NDB tables
are migrated by the ndbcluster plugin as metadata about the
triggers is lost in the 'finalize upgrade' phase of the Server
upgrade where all .TRG files are deleted.

Fix:
----

1. Migration of NDB tables with triggers is no longer deferred during
   the Server upgrade phase.
2. NDB tables with triggers are no longer removed from the DD during
   setup, even when an initial system start is detected.

Change-Id: Ic651545196f7228eca442e4acbcb30a185e6e903
When pushing a condition down to a derived table for prepared statements,
we clone a condition that also includes parameters when the derived table
has UNIONs. When a statement needs to be re-prepared during execution,
e.g. when the signedness of the specified value does not match the actual
data type, this cloning of the parameter does not happen correctly,
resulting in the errors described in the bug page.
Cloning fails because the value bound to the parameter, rather than "?"
itself, is used when printing the statement string for re-parsing.
The solution is to use the special flag "QT_NO_DATA_EXPANSION" when
printing parameters for re-parsing, so that "?" is printed instead of the
bound value.

Change-Id: I8aa928089725973f4e0e0a6068f8e44e8f986579
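
A self-contained illustration of why the parameter must print as "?" when the
statement text is regenerated for re-parsing; the types and function below are
hypothetical, while the server itself does this through Item printing with the
QT_NO_DATA_EXPANSION flag.

    #include <string>

    enum PrintMode { EXPAND_DATA, NO_DATA_EXPANSION };

    struct Param {
      bool has_value = false;
      long value = 0;
    };

    std::string print_param(const Param &p, PrintMode mode) {
      // Re-parsing the statement text must see a parameter marker again;
      // printing the bound value would bake it into the re-prepared statement.
      if (mode == NO_DATA_EXPANSION || !p.has_value) return "?";
      return std::to_string(p.value);
    }
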
Add an assert to see if the error still occurs

Change-Id: I85505e24dc1aeaf5b2d83bffc0f0734dd57645eb
Fix:
- Add the missing space in the error message.

Reviewed by: Sachin Z Agarwal ([email protected])
RB #27260
mangled

The @diafile Doxygen special command is confused by line breaks inserted
by clang-format in its caption parameter.

This patch removes the offending line breaks and surrounds vulnerable
comment blocks with
// clang-format off
and
// clang-format on

Change-Id: I7e5f2904114fc81c5d9bda21bddf3ecf660200fd
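
The shape of the workaround looks roughly like this (illustrative file name
and caption; the real comment blocks differ):

    // clang-format off
    /**
      @diafile some_diagram.dia "A caption that must stay on a single line"
    */
    // clang-format on
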
ndb[mt]d exits through ndbd_exit()

Stop the log thread in ndb(mt)d before ndbd_exit() in order to guarantee
that all log messages are printed out to the log file/console before the
data node terminates.

Change-Id: I41f2842c55ee0345f8dc8bcfc8e8ce6fa27a9026
…lose]

LogBuffer.cpp: In function int LogBuffer_test():
LogBuffer.cpp:689:20: warning: empty_ap may be used uninitialized in this function [-Wmaybe-uninitialized]
  689 |   OK(buf_t1->append("123456789", empty_ap, 9) == 9);
      |                    ^

Added a wrapper function append_fmt to call the va_list variant of
LogBuffer::append.

Change-Id: I6de71a78a584eaf7f7534cac273b460fc9f39748
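
A small sketch of the wrapper idea, with a stubbed-down LogBuffer rather than
the actual class: the variadic append_fmt() builds and tears down the va_list
itself, so callers never pass a hand-made (and possibly uninitialized) va_list.

    #include <cstdarg>
    #include <cstdio>

    struct LogBuffer {
      char data[256];
      size_t used = 0;
      // assumed va_list variant, loosely modeled on the commit message
      int append(const char *fmt, va_list ap, size_t /*len*/) {
        int n = vsnprintf(data + used, sizeof(data) - used, fmt, ap);
        if (n > 0) used += static_cast<size_t>(n);
        return n;
      }
    };

    int append_fmt(LogBuffer *buf, size_t len, const char *fmt, ...) {
      va_list ap;
      va_start(ap, fmt);                  // properly initialized va_list
      int ret = buf->append(fmt, ap, len);
      va_end(ap);
      return ret;
    }
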
Also refactor relevant code in cmake files.

Move the code that finds Java to its own cmake file.

Make WITH_NDB_JAVA default off for ASAN builds.

Remove unneeded HAVE_JAVA, HAVE_JDK, and ENV{WITH_NDB_JAVA_DEFAULT}; use
WITH_NDB_JAVA directly.

For multi config builds (on Windows) set JAVA_SUBDIR to $<CONFIG>
directly instead of by repeated conditionals.

Change-Id: Id0ef3eed9467c7e520b3c15ca5152edc01d3ee06
gkodinov and others added 22 commits June 17, 2022 13:37
Added a Tls_library_version status var for the SSL library version.

Change-Id: I9a6486dfd6d12825fbbafa76f4d0d311b690bbb3
…TED IF SLOW QUERY IS OFF

Post-push fix.

Approved by: Jens Even Blomsoy <[email protected]>

Change-Id: Ie72d558cc6a7f8d3b407ffe50eef06a7d836b453
Problem:
- With the upgrade to boost 1.77.0 in 8.0.29, the spherical part of the
  geographic area computation changed.
- In some places the area is still calculated the old way.

Fix:
- Update the remaining area calculations to the new way by using
  compute_area().

Change-Id: Ifd5d598d3fc241be28837af43cc1835c166c46a7
              where conditions while using inline view with union

When pushing a condition down to a derived table, the predicate count for
the derived table query block is not incremented accordingly. As a
result, while setting up keys, the server uses the wrong predicate
count for allocation, leading to problems later.
The fix is to set the correct predicate count for the derived table
query blocks.

Change-Id: I473deeb24eda3a0de4750309000a6c2c4df197b8
After a condition is pushed down to a derived table having a set
operation, folding an always-true boolean condition performs an
incorrect rewrite, resulting in an assert. The wrong rewrite is a
result of not telling the cloned condition to treat UNKNOWN as FALSE
when a copy is made during condition pushdown to derived tables with set
operations.
The solution is to set "abort_on_null" to true for the copied condition,
which triggers the correct rewrite during constant folding.

Change-Id: Id3c13adc43af46f867b2a0667e5aa7190477279a
…c) || !rec_get_instant_flag_new(rec)

Issue:
- In V1, when a record, say R1, is inserted after an INSTANT ADD
  column, the INSTANT bit in its info bits is set.

- After upgrade, if an existing record in this table is updated, it
  is always stored in V2 with version 0.

- So for the above row R1, after upgrade, when we update it, we store it
  in V2 format with version 0 and all INSTANT columns materialized,
  and the version bit is set in this updated record.

- But when we roll back, we still store the record in V2 with all
  INSTANT columns materialized. However, the old info bits were copied
  from the update vector, so the INSTANT bit also ended up set there.

Fix:
- When storing the version bit, make sure the INSTANT bit is not set, and
  vice versa. Added a new MTR test to reproduce the scenario.

Change-Id: I0e056e63756940fff0899ca4ec8007c1cfcc27c0
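
A toy illustration of the invariant the fix enforces; the mask names and
values below are assumptions, not the actual InnoDB definitions.

    #include <cassert>
    #include <cstdint>

    constexpr uint8_t REC_INFO_INSTANT_FLAG = 0x80;   // assumed mask values
    constexpr uint8_t REC_INFO_VERSION_FLAG = 0x40;

    void set_version_bit(uint8_t &info_bits) {
      info_bits &= ~REC_INFO_INSTANT_FLAG;   // clear INSTANT before setting VERSION
      info_bits |= REC_INFO_VERSION_FLAG;
      // the invariant: a record never carries both flags at once
      assert(!((info_bits & REC_INFO_INSTANT_FLAG) &&
               (info_bits & REC_INFO_VERSION_FLAG)));
    }
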
…ed(old_col)

Issue:
  During truncate, the table is dropped but the DD is retained. So if the
  table has INSTANT ADD/DROP columns, their metadata is retained. This
  metadata is cleared at the end of truncate to make sure there are no
  INSTANT ADD/DROP columns and the table is as good as new. However, the
  metadata of INSTANT DROP columns is still present, whereas post truncate
  this dropped-column metadata should not exist.

Fix:
  Remove the metadata of dropped columns from the DD::Table post truncate.

  NOTE: This needs changes in DD code, so the code changes need to be
  reviewed by the runtime team as well.

Change-Id: I4ef737894d1d0373b2d8c9621edb20f9a2fbfdef
…le corruption

Issue:
  When a table upgraded from 8.0.28 has INSTANT ADD columns, the
  nullable-column calculation was wrong after the table was further
  ALTERed to ADD/DROP with ALGORITHM=INSTANT. Because of this,
  existing rows were not interpreted correctly and "check table"
  reported corruption.

Fix:
  Corrected the nullable-column calculation for these rows so that
  they can be interpreted correctly.
  Added 4 test cases to
  - Test a table with INSTANT ADD columns before upgrade and, after
    upgrade, ADD/DROP at various places
  - Test the above scenario with recovery as well
  - Test a table with INSTANT ADD columns before upgrade with various
    datatypes and, after upgrade, ADD/DROP at various places with various datatypes
  - Test the above scenario with recovery as well

Change-Id: Idce52ce589c1616790ee4d21e9b0adc5fbcdd0c8
Problem:
    read_2_bytes may return nullptr if ptr exceeds end_ptr. Any call to
read_2_bytes must check whether nullptr is returned.

Solution:
    Updated all instances where this check is required. Verified other
call sites as well.

Change-Id: I5d9b2e737c94f83626b625342f36366838544021
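
A hedged sketch of the calling pattern the fix enforces; read_2_bytes here is
a stand-in with assumed behavior (return nullptr when fewer than two bytes
remain), not the actual InnoDB parser code.

    #include <cstdint>

    static const uint8_t *read_2_bytes(const uint8_t *ptr,
                                       const uint8_t *end_ptr, uint32_t *val) {
      if (ptr + 2 > end_ptr) return nullptr;      // not enough bytes buffered yet
      *val = static_cast<uint32_t>(ptr[0]) << 8 | ptr[1];
      return ptr + 2;
    }

    static const uint8_t *parse_record(const uint8_t *ptr,
                                       const uint8_t *end_ptr) {
      uint32_t field;
      ptr = read_2_bytes(ptr, end_ptr, &field);
      if (ptr == nullptr) return nullptr;         // the check every caller needs
      // ... continue parsing using 'field' ...
      return ptr;
    }
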
… for Bug#34160256

Problem: Using admin interface connections with enterprise TP would lead
to SEGV.

Solution: Check for admin_thread_group == nullptr to correctly handle
admin interface connections when there is no dedicated admin thread
group.

Testing: Moved main.admin_interface and main.admin_interface_ipv4_mapped
to inc files, so that they can be --sourced from both the main and
thread_pool suites.

Change-Id: I8a5a8f52c40c45f1d01bfb60cd0ddd72a9bc3533
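
A minimal sketch of the guard, with hypothetical structures rather than the
actual thread pool sources: admin connections fall back to a regular thread
group when no dedicated admin group exists.

    #include <cstddef>

    struct Thread_group {};

    Thread_group *admin_thread_group = nullptr;    // may legitimately be absent

    Thread_group *pick_group(bool is_admin_connection,
                             Thread_group *regular_groups, size_t count,
                             size_t round_robin) {
      if (is_admin_connection && admin_thread_group != nullptr)
        return admin_thread_group;                 // dedicated admin group
      return &regular_groups[round_robin % count]; // fall back, avoiding the SEGV
    }
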
@pull pull bot merged commit fbdaa4d into Mu-L:8.0 Jul 28, 2022
pull bot pushed a commit that referenced this pull request Jul 28, 2022
When creating an NdbEventOperationImpl, it needs a reference to an
NdbDictionary::Event. Creating an NdbDictionary::Event involves a
roundtrip to NDB in order to "open" the Event and return the Event
instance. This may fail and is not suitable for doing in a constructor.

Fix by moving the opening of the NdbDictionary::Event out of the
NdbEventOperationImpl constructor.

Change-Id: I5752f8b636ddd31672ac95f59b8f272a41cddfa9
pull bot pushed a commit that referenced this pull request Oct 11, 2022
Add various json fields in the new JSON format. Have the json field
"access_type" with value "index" for many scans that use some form of
index. Plans with "access_type=index" have additional fields such as
index_access_type, covering, lookup_condition, index_name, etc. The value
of index_access_type further tells us what specific type of index scan it
is, like Index range scan, Index lookup scan, etc.

Join plan nodes have access_type=join. Such plans will, again, have
additional json fields that tell us whether it's a hash join, merge
join, and whether it is an antijoin, semijoin, etc.

If a plan node is a root of a subquery subtree, it additionally
has the field 'subquery' with value "true". Such plan nodes will also
have fields like "location=projection", "dependent=true" corresponding
to the TREE format synopsis:
Select #2 (subquery in projection; dependent)

If a json field is absent, its value should be interpreted as either
0, empty, or false, depending on its type.

A side effect of this commit is that for AccessPath::REF, the phrase
"iterate backwards" is changed to "reverse".

New test file added to test format=JSON with hypergraph optimizer.

Change-Id: I816af3ec546c893d4fc0c77298ef17d49cff7427
pull bot pushed a commit that referenced this pull request Oct 11, 2022
Enh#34350907 - [Nvidia] Allow DDLs when tables are loaded to HeatWave
Bug#34433145 - WL#15129: mysqld crash Assertion `column_count == static_cast<int64_t>(cp_table-
Bug#34446287 - WL#15129: mysqld crash at rapid::data::RapidNetChunkCtx::consolidateEncodingsDic
Bug#34520634 - MYSQLD CRASH : Sql_cmd_secondary_load_unload::mysql_secondary_load_or_unload
Bug#34520630 - Failed Condition: "table_id != InvalidTableId"

Currently, DDL statements such as ALTER TABLE*, RENAME TABLE, and
TRUNCATE TABLE are not allowed if a table has a secondary engine
defined. The statements fail with the following error: "DDLs on a table
with a secondary engine defined are not allowed."

This worklog lifts this restriction for tables whose secondary engine is
RAPID.

A secondary engine hook is called at the beginning (pre-hook) and at the
end (post-hook) of a DDL statement execution. If the DDL statement
succeeds, the post-hook will direct the recovery framework to reload the
table in order to reflect that change in HeatWave.

Currently all DDL statements that were previously disallowed will
trigger a reload. This can be improved in the future by checking whether
the DDL operation has an impact on HeatWave or not. However, detecting
all edge cases in this behavior is not straightforward, so this
has been left as a future improvement.

Additionally, if a DDL modifies the table schema in a way that makes it
incompatible with HeatWave (e.g., dropping a primary key column) the
reload will fail silently. There is no easy way to recognize whether the
table schema will become incompatible with HeatWave in a pre-hook.

List of changes:
  1) [MySQL] Add new HTON_SECONDARY_ENGINE_SUPPORTS_DDL flag to indicate
whether a secondary engine supports DDLs.
  2) [MySQL] Add RAII hooks for RENAME TABLE and TRUNCATE TABLE, modeled
on the ALTER TABLE hook.
  3) Define HeatWave hooks for ALTER TABLE, RENAME TABLE, and TRUNCATE
TABLE statements.
  4) If a table reload is necessary, trigger it by marking the table as
stale (WL#14914).
  5) Move all change propagation & DDL hooks to ha_rpd_hooks.cc.
  6) Adjust existing tests to support table reload upon DDL execution.
  7) Extract code related to RapidOpSyncCtx into ha_rpd_sync_ctx.cc, and
the PluginState enum into ha_rpd_fsm.h.

* Note: ALTER TABLE statements related to secondary engine setting and
loading were allowed before:
    - ALTER TABLE <TABLE> SECONDARY_UNLOAD,
    - ALTER TABLE SECONDARY_ENGINE = NULL.

-- Bug#34433145 --
-- Bug#34446287 --

--Problem #1--
Crashes in Change Propagation when the CP thread tries to apply DMLs of
tables with new schema to the not-yet-reloaded table in HeatWave.

--Solution #1--
Remove table from Change Propagation before marking it as stale and
revert the original change from rpd_binlog_parser.cc where we were
checking if the table was stale before continuing with binlog parsing.
The original change is no longer necessary since the table is removed
from CP before being marked as stale.

--Problem #2--
In case of a failed reload, tables are not removed from Global State.

--Solution #2--
Keep track of whether the table was reloaded because it was marked as
STALE. In that case we do not want the Recovery Framework to retry the
reload and therefore we can remove the table from the Global State.

-- Bug#34520634 --

Problem:
Allowing the change of primary engine for tables with a defined
secondary engine hits an assertion in mysql_secondary_load_or_unload().

Example:
    CREATE TABLE t1 (col1 INT PRIMARY KEY) SECONDARY_ENGINE = RAPID;
    ALTER TABLE t1 ENGINE = BLACKHOLE;
    ALTER TABLE t1 SECONDARY_LOAD; <- assertion hit here

Solution:
Disallow changing the primary engine for tables with a defined secondary
engine.

-- Bug#34520630 --

Problem:
A debug assert is being hit in rapid_gs_is_table_reloading_from_stale
because the table was dropped in the meantime.

Solution:
Instead of asserting, just return false if table is not present in the
Global State.

This patch also changes rapid_gs_is_table_reloading_from_stale to a more
specific check (inlined the logic in load_table()). This check now also
covers the case when a table was dropped/unloaded before the Recovery
Framework marked it as INRECOVERY. In that case, if the reload fails we
should not have an entry for that table in the Global State.

The patch also adjusts the dict_types MTR test, where we no longer expect
tables to be in UNAVAIL state after a failed reload. Additionally,
recovery2_ddls.test is adjusted to not try to offload queries running on
Performance Schema.

Change-Id: I6ee390b1f418120925f5359d5e9365f0a6a415ee
pull bot pushed a commit that referenced this pull request Apr 27, 2023
Enable the NDB binlog injector to calculate transaction
dependencies for changes written to the binlog.

This is done by extending rpl_injector to populate the "THD::writeset"
array with 64-bit hash values representing the key(s) of each binlogged
row in the current transaction. At binlog commit time those values are
compared with the historical writeset to find the oldest transaction
for which there is no change to the same key(s).

The same mechanism has been available for other storage engines since
WL#9556, and the intention is that it should now work in an identical
fashion, and with the same limitations, for transactions binlogged for
changes done in NDB.

The transaction writeset hash value calculation is
enabled by using --ndb-log-transaction-dependency=[ON|OFF],
thus enabling the use of WRITESET dependency tracking mode when
the ndb_binlog thread writes to the binlog.

The expected result is basically that the "last_committed" value
for each binlogged transaction will be set to the sequence
number of the transaction that previously modified the
same row(s), or to the last serial synchronization point.

Change-Id: I7b50365ce13d26a70eb5e93a3d703c0fe82ba8a4
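
A rough sketch of writeset-based dependency tracking under the assumptions
above (hypothetical code, not rpl_injector itself): each row key is hashed to
64 bits, and at commit time the newest earlier transaction that touched any of
the same keys determines last_committed.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    std::unordered_map<uint64_t, int64_t> history;  // key hash -> sequence_number

    int64_t last_committed_for(const std::vector<uint64_t> &writeset,
                               int64_t last_serial_sync_point,
                               int64_t sequence_number) {
      int64_t last_committed = last_serial_sync_point;
      for (uint64_t h : writeset) {
        auto it = history.find(h);
        if (it != history.end() && it->second > last_committed)
          last_committed = it->second;              // depends on that transaction
        history[h] = sequence_number;               // record this transaction
      }
      return last_committed;
    }
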
pull bot pushed a commit that referenced this pull request Apr 27, 2023
Fix static analysis warnings for variables that are assigned but never
used.
storage/ndb/test/src/UtilTransactions.cpp:286:16: warning: Although the
value stored to 'eof' is used in the enclosing expression, the value is
never actually read from 'eof' [clang-analyzer-deadcode.DeadStores]

Change-Id: I6d7b1fbae691a8d9750f57a8a11c2b12ff65041d
pull bot pushed a commit that referenced this pull request Apr 27, 2023
  # This is the 1st commit message:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA

  Problem Statement
  -----------------
  Currently customers cannot enable the heatwave analytics service for their
  HA DBSystem, or enable HA if they are using a Heatwave-enabled DBSystem.
  In this change, we attempt to remove this limitation and provide
  failover support for heatwave in an HA-enabled DBSystem.

  High Level Overview
  -------------------
  To support heatwave with HA, we extended the existing feature of auto-
  reloading of tables to heatwave on MySQL server restart (WL-14396). To
  provide seamless failover functionality for tables loaded to heatwave,
  each node in the HA cluster (group replication) must have the latest
  view of the tables currently loaded to the heatwave cluster attached
  to the primary, i.e., the secondary_load flag should always be in sync.

  To achieve this, we made following changes -
    1. replicate secondary load/unload DDL statements to all the active
       secondary nodes by writing the DDL into the binlog, and
    2. Control how secondary load/unload is executed when heatwave cluster
       is not attached to node executing the command

  Implementation Details
  ----------------------
  Current implementation depends on two key assumptions -
   1. All MDS DBSystems will have RAPID plugin installed.
   2. No non-MDS system will have the RAPID plugin installed.

  Based on these assumptions, we made certain changes w.r.t. how server
  handles execution of secondary load/unload statements.
   1. If a secondary load/unload command is executed from a mysql client
      session on a system without the RAPID plugin installed (i.e., non-MDS),
      a warning message is shown to the user instead of an error,
      and the DDL is allowed to commit.
   2. If a secondary load/unload command is executed from a replication
      connection on an MDS system without a heatwave cluster attached,
      the DDL is allowed to commit instead of throwing an error.
   3. If no error is thrown from secondary engine, then the DDL will
      update the secondary_load metadata and write a binlog entry.

  Writing to the binlog implies that all consumers of the binlog now need to
  handle this DDL gracefully. This has an adverse effect on Point-in-time
  Recovery. If the PITR backup is taken from a DBSystem with heatwave, it
  may contain traces of secondary load/unload statements in its binlog.
  If such a backup is used to restore a new DBSystem, it will cause failures
  while trying to execute statements from its binlog because
   a) the DBSystem will not have a heatwave cluster attached at this time, and
   b) statements from the binlog are executed from a standard mysql client
      connection, thus making them indistinguishable from user-executed
      commands.
  Customers will be prevented (by control plane) from using PITR functionality
  on a heatwave enabled DBSystem until there is a solution for this.

  Testing
  -------
  This commit changes the behavior of secondary load/unload statements, so it
   - adjusts existing tests' expectations, and
   - adds a new test validating new DDL behavior under different scenarios

  Change-Id: Ief7e9b3d4878748b832c366da02892917dc47d83

  # This is the commit message #2:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (PITR SUPPORT)

  Problem
  -------
  A PITR backup taken from a heatwave enabled system could have traces
  of secondary load or unload statements in binlog. When such a backup
  is used to restore another system, it can cause failure because of
  following two reasons:

  1. Currently, even if the target system is heatwave enabled, heatwave
  cluster is attached only after PITR restore phase completes.
  2. When entries from binlogs are applied, a standard mysql client
  connection is used. This makes them indistinguishable from any other
  user session.

  Since secondary load (or unload) statements are meant to throw an error
  when they are executed by a user in the absence of a healthy heatwave
  cluster, the PITR restore workflow will fail if binlogs from the backup
  have any secondary load (or unload) statements in them.

  Solution
  --------
  To avoid PITR failure, we are introducing a new system variable
  rapid_enable_delayed_secondary_ops. It controls how load or unload
  commands are to be processed by rapid plugin.

    - When turned ON, the plugin silently skips the secondary engine
      operation (load/unload) and returns success to the caller. This
      allows secondary load (or unload) statements to be executed by the
      server in the absence of any heatwave cluster.
    - When turned OFF, it follows the existing behavior.
    - The default value is OFF.
    - The value can only be changed when rapid_bootstrap is IDLE or OFF.
    - This variable cannot be persisted.

  In PITR workflow, Control Plane would set the variable at the start of
  PITR restore and then reset it at the end of workflow. This allows the
  workflow to complete without failure even when heatwave cluster is not
  attached. Since metadata is always updated when secondary load/unload
  DDLs are executed, when heatwave cluster is attached at a later point
  in time, the respective tables get reloaded to heatwave automatically.

  Change-Id: I42e984910da23a0e416edb09d3949989159ef707

  # This is the commit message #3:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (TEST CHANGES)

  This commit adds new functional tests for the MDS HA + HW integration.

  Change-Id: Ic818331a4ca04b16998155efd77ac95da08deaa1

  # This is the commit message #4:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA
  BUG#34776485: RESTRICT DEFAULT VALUE FOR rapid_enable_delayed_secondary_ops

  This commit does two things:
  1. Add a basic test for newly introduced system variable
  rapid_enable_delayed_secondary_ops, which controls the behavior of
  alter table secondary load/unload ddl statements when rapid cluster
  is not available.

  2. It also restricts setting the DEFAULT value for the system variable.
  So the following is not allowed:
  SET GLOBAL rapid_enable_delayed_secondary_ops = default
  This variable is to be used in restricted scenarios, and the control plane
  only sets it to ON/OFF before and after the PITR apply. Allowing a set to
  default has no practical use.

  Change-Id: I85c84dfaa0f868dbfc7b1a88792a89ffd2e81da2

  # This is the commit message #5:

  Bug#34726490: ADD DIAGNOSTICS FOR SECONDARY LOAD / UNLOAD DDL

  Problem:
  --------
  If a secondary load or unload DDL gets rolled back due to some error after
  it has loaded / unloaded the table in the heatwave cluster, there is no undo
  of the secondary engine action. Only the secondary_load flag update is
  reverted and the binlog is not written. From the user's perspective, the
  table is loaded and can be seen in performance_schema. There are also no
  error messages printed to notify that the DDL didn't commit. This
  makes issues in this area hard to debug.

  Solution:
  ---------
  The partial undo of secondary load/unload ddl will be handled in
  bug#34592922. In this commit, we add diagnostics to reveal if the ddl
  failed to commit, and from what stage.

  Change-Id: I46c04dd5dbc07fc17beb8aa2a8d0b15ddfa171af

  # This is the commit message #6:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (TEST FIX)

  Since ALTER TABLE SECONDARY LOAD / UNLOAD DDL statements now write
  to binlog, from Heatwave's perspective, SCN is bumped up.

  In this commit, we are adjusting expected SCN values in certain
  tests which do secondary load/unload and expect the SCN to match.

  Change-Id: I9635b3cd588d01148d763d703c72cf50a0c0bb98

  # This is the commit message mysql#7:

  Adding MTR tests for ML in rapid group_replication suite

  Added MTR tests with Heatwave ML queries in
  an HA setup.

  Change-Id: I386a3530b5bbe6aea551610b6e739ab1cf366439

  # This is the commit message mysql#8:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (MTR TEST ADJUSTMENT)

  In this commit we have adjusted the existing tests to work with the
  new MTR test infrastructure, which extends the functionality to the
  HA landscape. With this change, a lot of manual settings have become
  redundant and have thus been removed.

  Change-Id: Ie1f4fcfdf047bfe8638feaa9f54313d509cbad7e

  # This is the commit message mysql#9:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (CLANG-TIDY FIX)

  Fix clang-tidy warnings found in previous change#16530, patch#20

  Change-Id: I15d25df135694c2f6a3a9146feebe2b981637662

Change-Id: I3f3223a85bb52343a4619b0c2387856b09438265
pull bot pushed a commit that referenced this pull request Apr 27, 2023
…::register_variable

Several problems stacked up together:
1. The component initialization, when failing, should clean up after
itself.

Fixed the validate_password component's init method to properly clean up
in case of failure.

2. The validate_password component had a REQUIRES_SERVICE(registry).

While this is not wrong per se, it collided with the implicit
REQUIRES_SERVICE(registry) done by the BEGIN_COMPONENT_REQUIRES() macro
in that it was using the same placeholder global variable.
So now the same service handle was released twice on error or component
unload.
Fixed by removing the second REQUIRES_SERVICE(registry).

3. The dynamic loader releases the newly acquired service references
for the required services on initialization error. However, after doing
that it was also setting the service handle placeholder to NULL.
This is not wrong by itself, but combined with problem #2 it caused a
reference to the registry service to be acquired twice, stored into the
same placeholder, and then released just once, since after the first
release the placeholder was set to null and thus the second release was
a no-op.

Fixed by not resetting the handle placeholder after releasing the
service reference.

4. The system variable registration service wouldn't release the
intermediate memory slots it was allocating on error.

Fixed by using std::unique_ptr to handle the proper releasing.

Change-Id: Ib2c7ae80736c591838af8c182fda1980be1e1f0e
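
A tiny sketch of fix #4, using hypothetical names: std::unique_ptr owns the
intermediate slot, so every early error return releases it automatically.

    #include <memory>

    struct sys_var_slot { /* intermediate registration data */ };

    // Returns the registered slot on success, or nullptr on failure; the
    // error paths free the slot via unique_ptr instead of leaking it.
    sys_var_slot *register_variable_checked(bool precheck_fails,
                                            bool register_fails) {
      auto slot = std::make_unique<sys_var_slot>();
      if (precheck_fails) return nullptr;   // slot freed here, no leak
      if (register_fails) return nullptr;   // and here
      return slot.release();                // success: caller now owns the slot
    }
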