
sql: GC job may fail with error which should be retried #65000

Closed · ajwerner opened this issue May 11, 2021 · 2 comments · Fixed by #65910
Labels: A-jobs, A-schema-changes, C-bug (code not up to spec/doc, specs & docs deemed correct; solution expected to change code/behavior)

Comments

@ajwerner (Contributor) commented:

Describe the problem

The GC job should very rarely fail. There is a general fear that some errors may be permanent. Nevertheless, today essentially no errors get retried. That's madness. This can leave orphaned data in the keyspace.

To Reproduce

Kill nodes during the execution of a GC job. See an error.

Expected behavior

The job will get retried.

Additional context

Relates to and may replace #55740.

We could (should?) switch the default to retry everything once we have backoff (#44594).

In the schema changer we have some bespoke retry logic that we could mostly reuse.

// isPermanentSchemaChangeError returns true if the error results in
// a permanent failure of a schema change. This function is an allowlist
// instead of a blocklist: only known safe errors are confirmed to not be
// permanent errors. Anything unknown is assumed to be permanent.
func isPermanentSchemaChangeError(err error) bool {
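
For reference, a minimal standalone sketch of that allowlist shape (the specific error checks below are invented for illustration and are not the schema changer's actual list):

package main

import (
	"context"
	"errors"
	"fmt"
)

// errNodeUnavailable is a hypothetical stand-in for a known-transient error;
// the real schema changer checks CockroachDB-specific error types instead.
var errNodeUnavailable = errors.New("node unavailable")

// isPermanentError illustrates the allowlist shape of the function quoted
// above: only errors positively identified as transient are treated as
// retryable, and anything unrecognized is assumed to be permanent.
func isPermanentError(err error) bool {
	switch {
	case errors.Is(err, context.Canceled),
		errors.Is(err, context.DeadlineExceeded),
		errors.Is(err, errNodeUnavailable):
		// Known-transient errors: safe to retry.
		return false
	default:
		// Unknown errors are conservatively assumed to be permanent.
		return true
	}
}

func main() {
	fmt.Println(isPermanentError(context.Canceled))           // false: retry
	fmt.Println(isPermanentError(errors.New("syntax error"))) // true: terminal
}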

ajwerner added the C-bug, A-schema-changes, and A-jobs labels on May 11, 2021
@ajwerner (Contributor, Author) commented:

For the backport to 21.1 and 20.2, we should classify errors using the linked function (or some part of it); then, for the fuller solution, we can just retry everything with backoff.

@ajwerner (Contributor, Author) commented:

For the backport, something like:

--- a/pkg/sql/gcjob/gc_job.go
+++ b/pkg/sql/gcjob/gc_job.go
@@ -80,7 +80,12 @@ func performGC(
 }
 
 // Resume is part of the jobs.Resumer interface.
+func (r schemaChangeGCResumer) Resume(ctx context.Context, execCtx interface{}) (err error) {
+       defer func() {
+               if err != nil && !isPermanent(err) {
+                       err = jobs.NewRetryJobError(err.Error())
+               }
+       }()
        p := exe
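
Filled out as a self-contained sketch, the deferred classification above could look like the following. The retryJobError type and the toy isPermanent classifier are stand-ins for the real jobs package API, not its actual definitions:

package main

import (
	"context"
	"errors"
	"fmt"
)

// retryJobError plays the role of jobs.NewRetryJobError's return value: the
// job registry treats this error type as retryable rather than terminal.
type retryJobError struct{ msg string }

func (e *retryJobError) Error() string { return e.msg }

// isPermanent is a toy allowlist in the spirit of
// isPermanentSchemaChangeError: only context cancellation is known-transient.
func isPermanent(err error) bool {
	return !errors.Is(err, context.Canceled)
}

// resume wraps a job's body with the deferred classification from the diff:
// any non-permanent failure is rewrapped so the job gets retried instead of
// being marked failed.
func resume(ctx context.Context, body func(context.Context) error) (err error) {
	defer func() {
		if err != nil && !isPermanent(err) {
			err = &retryJobError{msg: err.Error()}
		}
	}()
	return body(ctx)
}

func main() {
	err := resume(context.Background(), func(context.Context) error {
		return context.Canceled // simulate a transient failure
	})
	var r *retryJobError
	fmt.Println(errors.As(err, &r)) // true: the registry would retry the job
}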

sajjadrizvi pushed a commit to sajjadrizvi/cockroach that referenced this issue Jun 1, 2021
In the previous implementation, failed GC jobs were not being retried, regardless
of whether the failure was permanent or transient. As a result, a GC job's failure
risked leaving orphaned data, which cannot be reclaimed.

This commit adds a mechanism to retry failed GC jobs whose failure is not
permanent. No limit is set on the number of retries. For the time being, the
failure type is determined using the failure categorization of schema-change
jobs. This behavior is expected to change once an exponential backoff mechanism
is implemented for failed jobs (cockroachdb#44594).

Release note: None

Fixes: cockroachdb#65000
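
A minimal sketch of what "retry everything with backoff" could look like once #44594 lands, using capped exponential backoff with jitter; the constants and the backoff function below are made up for illustration and do not reflect the eventual jobs-system design:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoff returns the delay before the given retry attempt using capped
// exponential backoff with jitter.
func backoff(attempt int, rng *rand.Rand) time.Duration {
	const (
		initialDelay = 30 * time.Second
		maxDelay     = 10 * time.Minute
	)
	delay := initialDelay
	for i := 0; i < attempt && delay < maxDelay; i++ {
		delay *= 2
	}
	if delay > maxDelay {
		delay = maxDelay
	}
	// Up to 25% jitter keeps many retrying jobs from synchronizing.
	return delay + time.Duration(rng.Int63n(int64(delay)/4))
}

func main() {
	rng := rand.New(rand.NewSource(42))
	for attempt := 0; attempt < 6; attempt++ {
		fmt.Printf("attempt %d: retry after ~%s\n", attempt, backoff(attempt, rng))
	}
}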
craig bot pushed a commit that referenced this issue Jun 1, 2021
65867: changefeedccl: Fix flaky tests. r=miretskiy a=miretskiy

Fix flaky test and re-enable it to run under stress.
The problem was that the transaction executed by the table feed can
be restarted. If that happens, we would see the same keys again, but
because we had side effects inside the transaction (marking the keys
seen), we would not emit those keys, causing the test to hang.
The stress race was failing because of both transaction restarts and
the 10ms resolved-timestamp frequency (with so many resolved timestamps
being generated, the table feed transaction was always getting
restarted).

Fixes #57754
Fixes #65168

Release Notes: None
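
A minimal sketch of the restart-safe pattern the fix implies: keep the transaction closure free of side effects, buffer results inside it, and apply effects only after commit. The runTxn helper below is a stand-in that simulates one restart, not the actual table feed code:

package main

import "fmt"

// runTxn simulates a client-side retryable transaction: the closure can run
// more than once when the transaction restarts, so any side effects inside
// it are replayed. This sketches the failure mode, not the real table feed.
func runTxn(fn func() error) error {
	if err := fn(); err != nil {
		return err
	}
	return fn() // simulate a restart: the closure runs a second time
}

func main() {
	seen := map[string]bool{}

	// Buffer inside the transaction; resetting per attempt discards the
	// work of any aborted attempt instead of leaking it as a side effect.
	var buffered []string
	if err := runTxn(func() error {
		buffered = buffered[:0]
		buffered = append(buffered, "k1", "k2")
		return nil
	}); err == nil {
		// Side effects (marking keys seen, emitting) happen only after the
		// transaction has committed, so restarts cannot suppress emission.
		for _, k := range buffered {
			if !seen[k] {
				seen[k] = true
				fmt.Println("emit", k)
			}
		}
	}
}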

65868: storage: expose pebble.IteratorStats through {MVCC,Engine}Iterator r=sumeerbhola a=sumeerbhola

These will potentially be aggregated before exposing in trace
statements, EXPLAIN ANALYZE etc.

Release note: None

65900: roachtest: fix ruby-pg test suite r=rafiss a=RichardJCai

Update blocklist with passing test.
The failure came from a test that is no longer failing: because it now
passes, it shows up under "not run" instead of matching the blocklist.

Release note: None

65910: sql/gcjob: retry failed GC jobs r=ajwerner a=sajjadrizvi

In the previous implementation, failed GC jobs were not being retried, regardless
of whether the failure was permanent or transient. As a result, a GC job's failure
risked leaving orphaned data, which cannot be reclaimed.

This commit adds a mechanism to retry failed GC jobs whose failure is not
permanent. No limit is set on the number of retries. For the time being, the
failure type is determined using the failure categorization of schema-change
jobs. This behavior is expected to change once an exponential backoff mechanism
is implemented for failed jobs (#44594).

Release note: None

Fixes: #65000


65925: ccl/importccl: skip TestImportPgDumpSchemas/inject-error-ensure-cleanup r=tbg a=adityamaru

Refs: #65878

Reason: flaky test

Generated by bin/skip-test.

Release justification: non-production code changes

Release note: None

65933: kv/kvserver: skip TestReplicateQueueDeadNonVoters under race r=sumeerbhola a=sumeerbhola

Refs: #65932

Reason: flaky test

Generated by bin/skip-test.

Release justification: non-production code changes

Release note: None

65934: kv/kvserver: skip TestReplicateQueueSwapVotersWithNonVoters under race r=sumeerbhola a=sumeerbhola

Refs: #65932

Reason: flaky test

Generated by bin/skip-test.

Release justification: non-production code changes

Release note: None

65936: jobs: fix flaky TestMetrics r=fqazi a=ajwerner

Fixes #65735

The test needed to wait for the job to be fully marked as paused.

Release note: None
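
A sketch of the wait-for-state pattern that fix suggests; succeedsSoon below mirrors the shape of CockroachDB's testutils.SucceedsSoon helper, but this standalone version is illustrative, not the real test utility:

package main

import (
	"errors"
	"fmt"
	"time"
)

// succeedsSoon retries fn until it returns nil or the timeout expires.
func succeedsSoon(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	// Simulate polling a job that only reaches the paused state after a few
	// status checks, as in the TestMetrics fix.
	checks := 0
	err := succeedsSoon(5*time.Second, func() error {
		checks++
		if checks < 4 {
			return errors.New("job not yet paused")
		}
		return nil
	})
	fmt.Println("paused:", err == nil, "after", checks, "checks")
}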

Co-authored-by: Yevgeniy Miretskiy <[email protected]>
Co-authored-by: sumeerbhola <[email protected]>
Co-authored-by: richardjcai <[email protected]>
Co-authored-by: Sajjad Rizvi <[email protected]>
Co-authored-by: Aditya Maru <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
craig (bot) closed this as completed in e532c2c on Jun 1, 2021
sajjadrizvi pushed further commits to sajjadrizvi/cockroach that referenced this issue on Jun 1 and Jun 2, 2021. These are backports of cockroachdb#65910 and carry the same commit message as above.