It would be a good idea to document what we do know about the correctness of our metadata handling #44

Open · mulkieran opened this issue Jul 5, 2017 · 1 comment

@mulkieran (Member):
Right now, we know this:

  1. When a BDA is first initialized, it has no variable-length metadata; that is correct. We assume that after setup the BDA has at least one copy of variable-length metadata, and that that copy is correct.
  2. If saving of both areas succeeds, the MDAHeaders are correctly updated, i.e., the designated header is updated with the correct new information: timestamp, size of the metadata written, and metadata CRC.
  3. If the size of the data is too big for the metadata region, saving fails immediately, without attempting to write or to update the headers. The same is true when the save time is less recent than the timestamp of the data already recorded. In each of these cases an EngineError with value Invalid is returned, which makes these conditions distinguishable from I/O errors (see the sketch after this list). This gets back to the question of how to select which blockdevs to write the metadata to in the first place. It makes no sense to try to write data that is too big, but we have no reason to believe that the data will always grow in size; if we trim our pool right down, the data could actually shrink.
  4. If saving to the first area, or to both, fails, the corresponding MDAHeader is not updated and an error result is returned. This is a good, simple decision, but it has one interesting consequence: the next time metadata must be written to the disks, that region will be attempted again. If the failure was non-transient, that particular blockdev will fail to have its metadata written correctly, over and over. We should consider the consequences of this.
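
A minimal sketch of the pre-write validation described in point 3, assuming hypothetical `MDARegion` and `MDAHeader` types and a `SaveError` enum standing in for `EngineError`; none of these names are taken from stratisd's actual API:

```rust
use std::time::SystemTime;

// Hypothetical stand-ins for the on-disk header and region types;
// field names are illustrative, not stratisd's actual layout.
pub struct MDAHeader {
    pub last_updated: SystemTime, // timestamp of the last successful save
    pub used: usize,              // size of the metadata written
    pub data_crc: u32,            // CRC of the metadata payload
}

pub struct MDARegion {
    pub header: MDAHeader,
    pub capacity: usize, // fixed size of the metadata region on disk
}

// Stand-in for EngineError: Invalid marks a rejected request (no write
// was attempted), distinct from an I/O failure during the write itself.
#[derive(Debug)]
pub enum SaveError {
    Invalid(String),
    Io(std::io::Error),
}

impl MDARegion {
    /// Validate a save request before touching the disk; both failure
    /// cases return Invalid, so callers can tell them from I/O errors.
    pub fn check_save(&self, time: SystemTime, data: &[u8]) -> Result<(), SaveError> {
        if data.len() > self.capacity {
            return Err(SaveError::Invalid(format!(
                "metadata size {} exceeds region capacity {}",
                data.len(),
                self.capacity
            )));
        }
        if time < self.header.last_updated {
            return Err(SaveError::Invalid(
                "save time predates the timestamp of the recorded data".into(),
            ));
        }
        Ok(())
    }
}
```

Only after `check_save` succeeds would the actual write and header update proceed, matching points 2 and 4 above.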

What this leaves open is understanding the probability of getting wrong data when reading metadata during setup.
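
One way to approach that question is to make the read-time selection rule explicit. Here is a hedged sketch, assuming a "most recent header whose CRC matches its payload wins" rule; the rule is an assumption for illustration, not a transcription of stratisd's code:

```rust
use std::time::SystemTime;

// MDAHeader as in the earlier sketch (hypothetical, not stratisd's API).
struct MDAHeader {
    last_updated: SystemTime,
    data_crc: u32,
}

/// Choose which header/payload pair to trust at setup time: discard any
/// pair whose CRC does not verify, then take the most recent survivor.
fn choose_metadata(candidates: &[(MDAHeader, Vec<u8>)]) -> Option<&(MDAHeader, Vec<u8>)> {
    candidates
        .iter()
        .filter(|(hdr, data)| crc_of(data) == hdr.data_crc)
        .max_by_key(|(hdr, _)| hdr.last_updated)
}

/// Placeholder checksum; a real implementation would compute an actual
/// CRC32 over the metadata bytes.
fn crc_of(data: &[u8]) -> u32 {
    data.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
}
```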

@mulkieran (Member Author):

We actually ought to document how our metadata reading works in the first place. I think we can do that with a reasonably sized FSM encoding a regular language.
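
To make this concrete, here is an illustrative sketch of such an FSM; the states, events, and transitions are invented for this example and are not stratisd's actual design:

```rust
// Events observed while reading one blockdev's metadata areas.
enum Event {
    HeaderOk,  // a header parsed and its CRC matched
    HeaderBad, // a header was unreadable or its CRC mismatched
    DataOk,    // the metadata payload matched its header
    DataBad,   // the metadata payload failed verification
}

// States of the reader; the transition relation below encodes a
// regular language over Event sequences.
#[derive(Debug, PartialEq)]
enum State {
    ReadFirstHeader,
    ReadSecondHeader { first_ok: bool },
    ReadData,
    Done,
    Failed,
}

fn step(state: State, event: Event) -> State {
    use Event::*;
    use State::*;
    match (state, event) {
        (ReadFirstHeader, HeaderOk) => ReadSecondHeader { first_ok: true },
        (ReadFirstHeader, HeaderBad) => ReadSecondHeader { first_ok: false },
        // At least one good header lets us go on to read the payload.
        (ReadSecondHeader { .. }, HeaderOk) => ReadData,
        (ReadSecondHeader { first_ok: true }, HeaderBad) => ReadData,
        (ReadSecondHeader { first_ok: false }, HeaderBad) => Failed,
        (ReadData, DataOk) => Done,
        (ReadData, DataBad) => Failed,
        // No transition defined: stay in the current state.
        (s, _) => s,
    }
}
```

Driving `step` over the events produced while parsing a blockdev's metadata areas makes the accepted language, and hence the read behavior, explicit and documentable.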
