Try to recover automatically from the background error about sst corrupt #1667

Merged: 5 commits into apache:unstable on Aug 15, 2023

Conversation

@caipengbo (Contributor) commented Aug 11, 2023

When an SST file is corrupted, which is an unrecoverable error for RocksDB, RocksDB goes into read-only mode (https://github.com/facebook/rocksdb/wiki/Background-Error-Handling). The only way to try to recover from the error is to restart RocksDB.

When SST file corruption occurs, the error messages look like this:

```
1. Corruption: Corrupt or unsupported format_version: 1005 in /tmp/kvrocks/data/db/000038.sst
2. Corruption: Bad table magic number: expected 9863518390377041911, found 9863518390377041912 in /tmp/kvrocks_db/data/db/000038.sst
3. Corruption: block checksum mismatch: stored = 3308200672, computed = 51173877, type = 4 in /tmp/kvrocks_db/data/db/000038.sst offset 0 size 15715
```

The cause of the error is usually a hardware issue, or a problem with the network or the cloud disk (when a cloud disk is used).

The most common place we see this error is when a file has just been generated by `Compaction` or `Flush`, before the `Version` applies the result.

In this case, the result of the compaction is not actually applied, so we can ignore the error and avoid restarting RocksDB.

TiKV introduces this check when SST file corruption occurs; you can refer to:
- tikv/tikv#10578
- tikv/tikv#10961

Let's try it on Kvrocks (a rough sketch of the approach follows the steps below):

  1. Extract the SST file name from the background error message
  2. Determine whether it is a live file
  3. If not, ignore the error and force recovery from the background error
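
A minimal C++ sketch of the idea, assuming it is implemented as a `rocksdb::EventListener` that holds a pointer to the opened DB; the class name, the regex, and the overall structure here are illustrative, not the actual Kvrocks implementation:

```cpp
#include <regex>
#include <string>
#include <vector>

#include <rocksdb/db.h>
#include <rocksdb/listener.h>
#include <rocksdb/status.h>

// Hypothetical listener: clears a corruption background error when the
// corrupted SST file is NOT part of the current live file set.
class CorruptSSTRecoveryListener : public rocksdb::EventListener {
 public:
  void SetDB(rocksdb::DB *db) { db_ = db; }

  void OnBackgroundError(rocksdb::BackgroundErrorReason /*reason*/,
                         rocksdb::Status *bg_error) override {
    if (db_ == nullptr || !bg_error->IsCorruption()) return;

    // 1. Extract the SST file name from the error message, e.g.
    // "Corruption: block checksum mismatch: ... in /tmp/kvrocks/data/db/000038.sst offset 0 size 15715"
    const std::string msg = bg_error->ToString();
    std::smatch match;
    static const std::regex sst_re(R"(/(\d+\.sst))");
    if (!std::regex_search(msg, match, sst_re)) return;
    const std::string file_name = "/" + match[1].str();  // e.g. "/000038.sst"

    // 2. Determine whether it is a live file. GetLiveFiles() returns names
    // relative to the DB directory, also in the form "/000038.sst".
    std::vector<std::string> live_files;
    uint64_t manifest_size = 0;
    auto s = db_->GetLiveFiles(live_files, &manifest_size, /*flush_memtable=*/false);
    if (!s.ok()) return;
    for (const auto &f : live_files) {
      if (f == file_name) return;  // a live file is corrupted: keep the error
    }

    // 3. Not a live file: the corrupted compaction/flush output was never
    // applied to a Version, so clear the error instead of going read-only.
    *bg_error = rocksdb::Status::OK();
  }

 private:
  rocksdb::DB *db_ = nullptr;
};
```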

Regarding the RocksDB error message: before RocksDB v7.10.2 the message was incomplete, so we could only recover from a limited number of errors. Thanks to facebook/rocksdb#11009, the error message is enriched and we can recover in more scenarios.
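
To take effect, such a listener would be registered before opening the DB and handed the DB pointer afterwards; again, this is only a sketch of the wiring, not the exact Kvrocks code:

```cpp
#include <memory>
#include <string>

#include <rocksdb/db.h>
#include <rocksdb/options.h>

// Continues the hypothetical CorruptSSTRecoveryListener sketch above.
rocksdb::Status OpenDBWithRecoveryListener(const std::string &db_path, rocksdb::DB **db) {
  rocksdb::Options options;
  options.create_if_missing = true;

  auto listener = std::make_shared<CorruptSSTRecoveryListener>();
  options.listeners.emplace_back(listener);

  auto s = rocksdb::DB::Open(options, db_path, db);
  if (s.ok()) {
    listener->SetDB(*db);  // the listener needs the handle to call GetLiveFiles()
  }
  return s;
}
```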

@git-hulk (Member) commented:
@caipengbo Thanks for your solution; some users did suffer from this issue even though it's low probability. As for test cases, it's fine with me to go without them, since this is hard to mock without hijacking RocksDB.

@mapleFU (Member) commented Aug 11, 2023

I'm not sure it can be used. It seems that MyRocks didn't apply this rule: https://github.com/facebook/mysql-5.6/blob/fb-mysql-8.0.28/storage/rocksdb/event_listener.cc#L116

TiKV has replicas and it can transfer the leader to a replica, so I think there could be something different. I'll take a look into RocksDB later; does anyone else use this pattern to recover from corruption?

@caipengbo (Contributor, Author) replied:
> Seems that myrocks didn't apply the rule

Yes, MyRocks will abort. I've looked into MyRocks and other systems before, and only found TiKV with this special check.

> I'm not sure it can be used.

In the past, we had a NIC failure in production that made this error more frequent. So we added this mechanism: we monitor the background error when it appears, and we also archive the corrupted file so that we can locate the problem (a rough sketch of such a helper is below).

Since then, there have been a few similar errors, all of which have been recovered from safely. @mapleFU
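
For illustration, archiving the corrupted file before recovery could be a small best-effort helper like the following; `ArchiveCorruptSST` and the archive directory are hypothetical names, not something from this PR:

```cpp
#include <filesystem>
#include <string>
#include <system_error>

// Hypothetical helper: copy the corrupted SST file into an archive directory
// for later inspection before the background error is cleared.
inline void ArchiveCorruptSST(const std::string &sst_path, const std::string &archive_dir) {
  namespace fs = std::filesystem;
  std::error_code ec;
  fs::create_directories(archive_dir, ec);
  fs::copy_file(sst_path, fs::path(archive_dir) / fs::path(sst_path).filename(),
                fs::copy_options::overwrite_existing, ec);
  // Errors are intentionally ignored: archiving is best-effort and must not
  // block the recovery path if the copy fails.
}
```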

@mapleFU (Member) commented Aug 14, 2023

So the problem here is that we have background network IO, which might cause errors much more frequently than disk IO errors?
The idea looks OK, but I'd prefer a config for this that is disabled by default, because it might cause some unexpected behavior.

@caipengbo (Contributor, Author) replied:
IMHO, it will not affect the correctness of the data. The possible downside is that users may tend to overlook the background errors they encounter. What do you think, @git-hulk @PragmaTwice?

@git-hulk (Member) replied:
It should be fine since this only impacts the IO error scenario, and the instance would be broken anyway if it got those errors.

@mapleFU (Member) left a review comment:

LGTM

git-hulk merged commit 5cdbb19 into apache:unstable on Aug 15, 2023 (26 checks passed).
p1u3o pushed a commit to p1u3o/incubator-kvrocks that referenced this pull request Aug 15, 2023
Try to recover automatically from the background error about sst corrupt (apache#1667)
