Remove a few hashes from the block header in favor of merkelizing them #7860
Comments
Very cool proposal. Thanks for thinking about this. Note that right now we have parity between the block header and the header sent in BeginBlock, so unless we introduce a new header that contains these hashes, apps wouldn't get this info. Though it's not entirely clear that apps need this info in the first place.
So a hash that was previously in the header is now only a leaf in a new merkle tree whose root is in the header. Can you clarify how this affects data availability? How is the new data (say, the ResultsHash and the internal nodes that prove it merkleizes to the BlockHash) different from other data we already don't have (e.g. the txs themselves)?
Will they not want to know when it does?
Can't assume this - evidence might not change voting power right away, for instance.
This is neat, so long as the EvidenceHash still goes into the BlockHash. The int makes sure that lite clients know whenever there is evidence in a block.
Oh cool, hadn't realized that. NumEvidence takes care of that then :)
I had misread which struct ConsensusHash was a hash of. It turns out it's a hash of BlockSize and NumEvidence, so I do agree, they would want to know. So we can pull the same trick as with evidence: use an int8 / bool to specify whether there was a change, while still keeping this merkelized.
The ResultsHash and the internal nodes would be known to all full nodes, and thus they could compare against the

Upon further thought, I think AppHash should be moved into the BlockHash as well. Not many lite clients will want to query AppHash across a range of blocks (the latest state when they want to query should suffice). However, they will want to query a range of blocks for tags, so it makes sense for the TagHash to remain at the top level. (Unless ResultsHash is supposed to be the thing which we query for tags; if so, I didn't realize that.)
I don't think apps really need these hashes in the first place, so we can maintain this parity for now.
Into this.
Mmm let's not do this. AppHash is a first class citizen in Tendermint design, since it's returned by Commit. Let's keep it in the header.
This is an open question: whether we split the results and tags hashes. It seems we should, so the results hash can be just for simple querying of results, while the tags hash can support proofs of existence/absence on tags. Also, I think we're expecting tags to be the primary mode of querying, so they should have some first-class status in the header... BTW, have you seen the new General Merkle Proof stuff? It would be able to capture these proofs, where we have merkle trees in merkle trees.
Sounds good. Might be worth revisiting querying needs once the network is live.
Agreed on both fronts. I think it should be split from the results hash, and that the TagHash should directly be in the header. (I see now that what I wrote previously was ambiguous, but I was in agreement :)).
I actually hadn't looked into it! You're right, it looks like it will be able to capture these proofs. However, if I understand it correctly, I think it requires a fair amount of restructuring #postlaunch for greater efficiency (smaller proof size). I'll read into it more this week and write up a follow-up issue. I think the block hash should be structured as:
Upon further thought, I still don't think we need it. The lite client only needs to have an in-sync copy of the validator set. This is handled via nextValidatorSetHash and querying for the new validator powers if the hash changes. If you trust that state transitions are executed properly, then evidence will be slashing validators at the correct height. It doesn't really matter that slashing could happen after an arbitrary delay, since that doesn't change how you validate consensus. (I do agree that the evidence hash needs to be in that merkelized tree of the BlockHash, just not at the top level for lite clients.) Also, on further thought, I'm not sure that a lite client needs to know about the consensus hash having changed.
Hrmm, another thought is what if we also merkelize
Data and cons are at the bottom since, if you're querying data, you can afford a few more hashes, and cons is at the bottom since I don't see why you'd query it as a lite client. Evidence and res can be switched depending on what we think will be queried more.
Removing this from the 0.34 release (slated for mid-to-late May) because it still needs further discussion and design. Please speak up if you disagree!
Note, we probably no longer have time to do this prelaunch, but (unless I'm missing something) I think it should be done at the next BlockVersion upgrade. (Hopefully soon.)
Currently the BlockHeader contains many hashes (see https://github.com/tendermint/tendermint/blob/master/types/block.go#L258). It's unclear to me why most of these are in here, as the header gets merkelized anyway. The importance for lite clients comes in only if there is a data-availability problem or a knowledge-of-change problem, AFAICT. Otherwise, you're just requesting a single additional internal node in a merkle proof, and only on the blocks you wish to query (saving bandwidth for everyone, including the consensus engine).
The hashes in the blockheader which I want to comment on are:
Below is what I think it should be:
accums, height, round
, they'd have to be able to derive accums. To derive accums, they'd have to watch every block / know when rounds are incremented, which is too high of a synchrony requirement.

Thoughts on this?