Backup does not snapshot correctly while using parallel replication #2537
Comments
I think this was all a misunderstanding about how GTIDSets work. @sougou, can we close this?

Yes.
frouioui pushed a commit to planetscale/vitess that referenced this issue on Nov 21, 2023:

…ackup_pitr` into two distinct CI tests: builtin vs Xtrabackup (vitessio#2538)
* backport of 2537
* regenerate .github/workflows/cluster_endtoend_vreplication_partial_movetables*

Signed-off-by: Shlomi Noach <[email protected]>
Co-authored-by: Shlomi Noach <[email protected]>
frouioui pushed a commit to planetscale/vitess that referenced this issue on Mar 26, 2024:

…CI tests: builtin vs Xtrabackup (vitessio#2537)
* cherry pick of 13395
* remove backup_pitr_xtrabackup
* make generate_ci_workflows
* remove xtrabackup 5.7 workflow

Signed-off-by: Shlomi Noach <[email protected]>
Co-authored-by: Shlomi Noach <[email protected]>
Because parallel replication threads apply transactions to MySQL out of order, our current process of shutting down MySQL can leave gaps: transactions before the last saved binlog position may not yet have been applied.
After restore, since we resume replication from that last known position, those skipped transactions are never replayed, and the replica diverges from the master.
Either we need a way to ensure that no transactions up to the last position are missing on the replica, or we need to restart replication from an earlier point so that everything is replayed.
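As the resolving comment suggests, this is why GTID sets matter here: unlike a single file/position coordinate, a GTID set records exactly which transactions were applied, including any holes left by parallel appliers, so auto-positioned replication can fetch just the missing ones. A minimal sketch of the idea (the parsing below is a simplification of MySQL's GTID set syntax, and the function names are illustrative, not Vitess APIs):

```python
# Illustrative sketch: a single "last binlog position" loses information
# that a GTID set preserves when parallel appliers commit out of order.

def parse_gtid_set(gtid_set: str):
    """Parse a simplified GTID set like 'uuid:1-100:102-110'
    into (uuid, [(1, 100), (102, 110)])."""
    uuid, _, intervals = gtid_set.partition(":")
    ranges = []
    for part in intervals.split(":"):
        lo, _, hi = part.partition("-")
        ranges.append((int(lo), int(hi or lo)))
    return uuid, ranges

def missing_transactions(gtid_set: str):
    """Transaction numbers below the highest applied one that were
    never applied -- the 'gaps' a file/position restore would skip."""
    _, ranges = parse_gtid_set(gtid_set)
    applied = set()
    for lo, hi in ranges:
        applied.update(range(lo, hi + 1))
    return sorted(set(range(1, max(applied) + 1)) - applied)

# A parallel applier committed 102-110 before 101 when mysqld stopped.
# Resuming from position 110 alone would silently skip transaction 101;
# the GTID set makes the hole explicit.
gaps = missing_transactions(
    "3e11fa47-71ca-11e1-9e33-c80aa9429562:1-100:102-110")
```

Here `gaps` contains the out-of-order hole (transaction 101), which is exactly the information a single saved position cannot express.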