checkpointing: add keeper and core state #27
Conversation
- added checkpointing tx proto
- add query proto for checkpointing
- generated proto code
- wraps up the scaffolding of checkpointing
Nice work!
I suspect the schema design has been influenced by @vitsalis' BTC header keeper. I think it could be simplified a lot here. I see you left most validation as TODO; those would have been the places where you'd use this storage API, and they could have indicated what you will actually need and what you won't, but here's my intuition:
- You shouldn't need such a complex key, because since BM-32 (Update diagrams for out-of-sequence checkpoint handling, #25) we won't allow multiple raw checkpoints per epoch. The only one should be the one we create at the end of the epoch, and that's it. The epoch number is the unique key of the checkpoint; there is no need to even store the hash in the key, although a hash->epoch association may be needed (see the sketch after this list).
- But even that might be optional, since the BTC checkpoints submitted will have the epoch number in the OP_RETURN, not the hash of the checkpoint, so you never need to retrieve by hash.
- The natural order of epochs, with no out-of-sequence checkpointing, means you can simplify the by-status query, and get rid of the "tip" since it's just the last entry.
- I don't think you need to store the full BLS signature, it should be enough to aggregate it as soon as it's received, update the bitmap, and wait for more. This would make a lot of storage and queries go away as well.
- The hashes of the signatures are not as prominent as a block hash; they are not well-known identifiers, so I don't think "get by hash" and its various combinations will come up in practice.
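A minimal sketch of the simplified keying this implies (the prefix name is illustrative, not from the PR):

// With at most one raw checkpoint per epoch, the big-endian epoch number is
// enough as the storage key; no status or hash component is needed.
func CkptsObjectKey(epochNum uint64) []byte {
    key := make([]byte, 0, len(CheckpointsObjectPrefix)+8)
    key = append(key, CheckpointsObjectPrefix...)
    key = append(key, sdk.Uint64ToBigEndian(epochNum)...)
    return key
}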
It would be cool to add an ER diagram about the schema to the docs.
A follow-up PR with fuzz testing could catch simple typos, but again, generics would be super helpful to reduce the boilerplate as well.
Otherwise, awesome, keep up the good work 👍
// Epoching defines the expected interface needed to retrieve epoch info
type EpochingKeeper interface {
    GetCurrentEpoch(ctx sdk.Context) sdk.Uint
It would be nice to introduce a newtype type EpochNumber sdk.Uint. It would make it easier later to not mix up block height and epoch.
You are right. Shall we define this in our project-level types or in the checkpointing module?
That's what @vitsalis decided to do in #29
I'm not sure it's the best place actually, from the perspective that if we want to make modules independent of each other, it's strange that some of these types should live in the common project type namespace. I reckon some common types could live there (like the Int and Dec in the SDK) but module-specific ones should go into the module that requires them. So BTC-related ones in, for example, the BTC light-client module, and this one in the epoching module.
Yeah, I agree. cc @SebastianElvis, could you please add type EpochNumber sdk.Uint somewhere in epoching/types?
It would be nice to introduce a newtype type EpochNumber sdk.Uint. It would make it easier later to not mix up block height and epoch.
I can do this, but this seems to make things overcomplicated. The new type EpochNumber does not introduce any extra functions atop sdk.Uint. Normally one needs to introduce a new type wrapping an existing type in order to implement some new functions that the existing type does not have. I guess this is also the reason why Cosmos SDK's incomplete epoching module uses uint64 for the epoch number.
Maybe we don't need additional methods for this new type, but I guess the purpose of introducing EpochNumber is to avoid mixing up block height and epoch.
To me the situation seems to be the same even if we have the EpochNumber type, as EpochNumber and Height will share the same set of operations and we don't explicitly declare their types that often.
Perhaps let's see Akosh's follow-up comments so that we can reach a consensus there.
I don't want to force it on you, but if it were me, I would introduce a type for almost everything I can. Especially in Go and Rust, where methods don't take named arguments, it can help with documentation and type safety at the same time. Just by looking at a type signature that turns Height into EpochNumber I can guess what it's doing, and I know I can't pass an EpochNumber to a function that expects a Height. This benefit is more relevant to things that take multiple []byte parameters (like the signature checking function in the other PR).
I saw you like using short variable names, like sk for staking.Keeper, IIRC. This is still readable because of the type. Seeing e uint64 and h uint64 versus e Epoch and h Height could be more puzzling. It's also useful when we use these data out of context, like in database keys.
Regarding operations: these are still numbers, and Go's semantics here are quite convenient; see the snippet below. You can add a constant and it still preserves the type (so you don't need to re-wrap after incrementing by one, for example), and you can add values of the same type together - these, for example, in Scala would require defining the operations explicitly. But it forbids adding two different types together, which is what we want. And when something really does expect a plain number, a simple conversion like uint64(e) gets you there, no wrapper methods needed.
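A minimal sketch of the snippet this refers to (assuming plain uint64-based newtypes rather than sdk.Uint; names are illustrative):

package main

import "fmt"

type EpochNumber uint64
type Height uint64

// nextEpoch shows that adding an untyped constant preserves the defined type.
func nextEpoch(e EpochNumber) EpochNumber {
    return e + 1
}

func main() {
    var e EpochNumber = 10
    var h Height = 100

    e2 := nextEpoch(e) // still an EpochNumber, no re-wrapping needed
    sum := e + e2      // values of the same defined type can be added directly
    // _ = e + h        // compile error: mismatched types EpochNumber and Height
    // _ = nextEpoch(h) // compile error: cannot use h (Height) as EpochNumber
    raw := uint64(e2) // an explicit conversion yields the plain number when needed

    fmt.Println(e2, sum, raw, h)
}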
Another instance where I found it useful is timestamps and time intervals. Both are numeric but represent different things. You are right that some operations on these, like adding two timestamps, don't make any sense, which these simple wrappers don't prevent. Still, a lot of codebases use uint64 for timestamps.
In other languages like Rust, with a richer type system, these wrappers are the best tools to introduce arbitrary instance generation logic in fuzz testing, to be able to distinguish between random time and random duration in a simple way.
So at the very least it is for the expressiveness of the code, and peace of mind over not mixing up stuff. It doesn't cost a whole lot, but it's much more difficult to introduce later.
Agree. Your code snippet makes a very good point and now I'm convinced.
Yeah, I agree. cc @SebastianElvis, could you please add type EpochNumber sdk.Uint somewhere in epoching/types?
@gitferry Feel free to do this directly in this PR, so that it won't be a blocker for you. Later this week I will make another PR that converts all epoch numbers to the wrapped type.
Cool. Thanks @SebastianElvis and @aakoshh! I would prefer @SebastianElvis to do it in your latest PR if it does not trouble you much, because I prefer not to touch the epoching module. You know better than me where to put EpochNumber. I'll do an update in my future PRs. Thanks!
No problem, will do
- RawCheckpointStatus status = 5;
+ uint32 status = 5;
Why did you decide to drop the enum? It seems more expressive than a number on its own.
When I used the enum, the status was defined as an int32 type by proto-gen, but I want uint32 for the status. I don't know how to specify the type of the status as uint32. Otherwise, I would have to convert types in many places. Any suggestions? Thanks!
Since you use those statuses through constants in the code either way, you could keep the enum and use int in the code?
Just saw that the staking module uses enum for status in their proto. I guess I should follow them. See here.
}

// GetBlsSigEpoch retrieves the epoch of a bls sig
func (bs BlsSigsState) GetBlsSigEpoch(hash types.BlsSigHash) (uint64, error) {
Do you ever need this retrieval on its own? Don't you always have a BlsSig, from which you can get both the epoch and the hash, e.g. to check if something is a duplicate?
I see your point. I guess we only need GetBlsSigsByEpoch to generate the multi-sig.
}

// GetBlsSigByHash retrieves a bls sig by its hash
func (bs BlsSigsState) GetBlsSigByHash(hash types.BlsSigHash) (*types.BlsSig, error) {
Do you ever need to retrieve something by hash? Are you ever in a situation where you know the hash but not the epoch?
You are right. In our workflow, there's no one who would ask for a bls sig by its hash. In fact, bls sigs are internal to the checkpointing module; other modules would not be interested in querying them. I implemented these APIs for debugging; or maybe we don't need them at all?
}

// Exists checks whether a hash is maintained in storage
func (bs BlsSigsState) Exists(hash types.BlsSigHash) bool {
Same question: if this is for duplicate checking, don't we have the pre-image of the hash at hand?
Yes, we can use this method to prevent adding duplicate bls sigs in the mempool, can't we?
Yes, we could use it for that. What I meant was that you would have a full-blown signature in that case, so you can just pass it as-is and calculate its storage key. Nobody has just the hash without the full signature.
Whether we should check for this duplicate before the mempool: I'm not sure that's necessary. If the validators only send their signatures once, each their own, they would not be duplicated, because the mempool would reject them as identical transactions.
If they actually kept sending their signatures multiple times in different transactions, then we could charge them for it to discourage this behaviour, but only if we let the transactions into the mempool.
It makes sense. Thanks!
x/checkpointing/keeper/ckpt_state.go
if status == types.CONFIRMED {
    err := cs.UpdateLastConfirmedEpoch(ckpt.EpochNum)
Maybe worth leaving a comment that calling update with a confirmed status twice will result in an error.
x/checkpointing/keeper/ckpt_state.go
// GetTipEpoch returns the highest epoch that has created a raw checkpoint
func (cs CheckpointsState) GetTipEpoch() uint64 {
    if !cs.tipEpoch.Has(types.TipKey()) {
        return 0
    }
    bz := cs.tipEpoch.Get(types.TipKey())
    return sdk.BigEndianToUint64(bz)
}
Instead of storing this separately, you can use the store that stores by epoch number and a ReverseIterator, and only yield the last entry.
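A minimal sketch of that approach (assuming cs.checkpoints is the store keyed by the big-endian epoch number):

// GetTipEpoch derives the tip from the epoch-keyed store itself: the first
// entry of a reverse iterator over big-endian keys is the highest epoch.
func (cs CheckpointsState) GetTipEpoch() uint64 {
    iter := cs.checkpoints.ReverseIterator(nil, nil)
    defer iter.Close()
    if !iter.Valid() {
        return 0
    }
    return sdk.BigEndianToUint64(iter.Key())
}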
Good to know. Will consider it. Thanks!
x/checkpointing/types/blssig_set.go
validators types.Validators

sum          uint64
sigsBySigner map[string]*BlsSig
Do we need to remember this? Since we have the bitmap, we should be able to use that to deduplicate.
x/checkpointing/types/blssig_set.go
if bs == nil {
    panic("AddVote() on nil BlsSigSet")
}
Is this a common check in Go? Or does it come up in our case? What happens if you call a method on nil, wouldn't it panic on its own?
I learnt this from Tendermint (see here). Not sure if it is a common check in Go. But I agree with you that this panic is unnecessary.
Maybe in the case of VoteSet it can be nil initially, when it's empty.
x/checkpointing/types/utils.go
m.LastCommitHash,
m.BlsSig,
sdk.Uint64ToBigEndian(m.EpochNum),
Suggested change:
- m.LastCommitHash,
- m.BlsSig,
- sdk.Uint64ToBigEndian(m.EpochNum),
+ m.LastCommitHash,
+ sdk.Uint64ToBigEndian(m.EpochNum),
+ m.BlsSig,
Just because the other hash starts like this as well, a bit more intuitive. Perhaps even put the epoch number first, like in the database?
Nice work! I would suggest that you add a CLI interface as well, so that you're able to do some manual tests with sample queries to uncover potential errors. I have done a similar thing for btclightclient and it helped a lot.
x/checkpointing/keeper/ckpt_state.go
// CreateRawCkpt inserts the raw checkpoint into the hash->epoch, hash->status, and (epoch, status, hash)->ckpt storage
func (cs CheckpointsState) CreateRawCkpt(ckpt *types.RawCheckpoint) error {
    ckptHash := ckpt.Hash()
    ckptsKey := types.CkptsObjectKey(ckpt.EpochNum, ckpt.Status, ckptHash)
Still, if we have multiple unconfirmed checkpoints, then wouldn't the epoch-status-hash key be the same for them?
@aakoshh @vitsalis Thanks for your reviews. I have addressed them and replied. Since we have decided to store at most one checkpoint for each epoch, this significantly simplifies the checkpoint state. I also marked @KonradStaniec as a reviewer. Future PRs will include CLI implementation, fuzz tests, BLS sig verification, and raw checkpoint generation.
@@ -20,7 +20,7 @@ message RawCheckpoint {
  // bls_multi_sig defines the multi sig that is aggregated from individual bls sigs
  bytes bls_multi_sig = 4;
  // status defines the status of the checkpoint
- RawCheckpointStatus status = 5;
+ CkptStatus status = 5;
I wouldn't use abbreviations in type names. I think the corresponding type is RawCheckpoint, so CheckpointStatus seems readable. I thought the nested version was also good.
x/checkpointing/keeper/ckpt_state.go
if ckpt.Status == status {
    ckpts = append(ckpts, ckpt)
}
We have to be careful with these: some day there will be too many confirmed checkpoints to return at once. There will be 168 per week after all, and it's unlikely that in 2 years someone would want to retrieve over 18000 checkpoints in one go.
For example, it's much more likely that the BLS signatures of an epoch will fit in an array, yet there you use the iteration technique with the option to stop early. Most likely anyone who is interested in confirmed checkpoints will be interested in something like the last 12 or whatever, i.e. a single page in the GRPC query, which has a limit. It would be best if all these methods used iteration, to work with the pagination mechanism.
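A sketch of the callback-based iteration suggested here (the method name and store layout are illustrative); the caller can stop early, so nothing forces loading every matching checkpoint at once:

// IterateCkptsByStatus walks the checkpoint store and invokes cb for every
// checkpoint in the given status; returning true from cb stops the iteration.
func (cs CheckpointsState) IterateCkptsByStatus(status types.CkptStatus, cb func(ckpt *types.RawCheckpoint) (stop bool)) {
    iter := cs.checkpoints.Iterator(nil, nil)
    defer iter.Close()
    for ; iter.Valid(); iter.Next() {
        ckpt := cs.DeserializeCkpt(iter.Value())
        if ckpt.Status != status {
            continue
        }
        if cb(ckpt) {
            break
        }
    }
}

A paginated GRPC handler can then count results inside cb and stop as soon as a page is full.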
x/checkpointing/keeper/ckpt_state.go
func (cs CheckpointsState) UpdateCkptStatus(rawCkptBytes []byte, status types.CkptStatus) error {
    ckpt := cs.DeserializeCkpt(rawCkptBytes)
This rawCkptBytes []byte could use a custom type wrapper. That could signal to you that it has already been deserialised and checked for valid structure by the caller.
Now you are deserialising it here, but it's curious why there is no error handling. If any []byte can be passed, surely there is no guarantee that it is even a checkpoint.
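A minimal sketch of both ideas (the type and constructor names are illustrative, not from the PR):

// RawCkptBytes marks bytes that are expected to decode into a RawCheckpoint,
// making it explicit at call sites what the []byte is supposed to contain.
type RawCkptBytes []byte

// NewRawCheckpointFromBytes surfaces the decoding error instead of panicking,
// since arbitrary bytes (e.g. coming from BTC) may not be a checkpoint at all.
func NewRawCheckpointFromBytes(cdc codec.BinaryCodec, bz RawCkptBytes) (*RawCheckpoint, error) {
    ckpt := new(RawCheckpoint)
    if err := cdc.Unmarshal(bz, ckpt); err != nil {
        return nil, err
    }
    return ckpt, nil
}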
x/checkpointing/keeper/ckpt_state.go
func (cs CheckpointsState) DeserializeCkpt(rawCkptBytes []byte) *types.RawCheckpoint {
    ckpt := new(types.RawCheckpoint)
    cs.cdc.MustUnmarshal(rawCkptBytes, ckpt)
Looks like if someone sent unexpected bytes, the whole application would panic and crash.
You are right. I was following the Cosmos convention of using MustUnmarshal in many places without any error checking (see here). But I realized I can't follow them because the raw bytes from BTC could be random bytes. I should use Unmarshal to raise an error. Thanks for the catch!
x/checkpointing/keeper/ckpt_state.go
if !c.Hash().Equals(ckpt.Hash()) {
    return errors.New("hash not the same with existing checkpoint")
}
ckpt.Status = status
I am getting confused by the checkpoint design because it has this Status field in the data structure. I have argued against it many times, but here's why I find it confusing in this code:
- RawCheckpoint in my mind is just the epoch, the hash, the signatures and the bitmap; stuff that needs to be sent to BTC
- ckpt := cs.DeserializeCkpt(rawCkptBytes) looks like you acquired some of the bytes that came from BTC and now you are deserializing them, which seems fine (except for error handling)
- ckpt.Status = status seems to set that extra field, which I thought was not part of the bytes
- cs.cdc.MustMarshal(ckpt) writes it to the database - but if status is not part of the bytes, what gets written?
- and if it's not part of the bytes, then what does GetRawCkptsByStatus even retrieve?
- at this point I remember you commenting that it's only not part of the Hash()
- so it looks like you have some []bytes that may be missing the Status, but protobuf ignores that, and it's fine because we overwrite it with the status argument
- but what if the record had yet another meta-field that protobuf ignored, and now we are resetting that to its default value
- or maybe the input did include some other field that wasn't part of the Hash, and we are overwriting that with something an attacker wants to inject into our database
- why don't we overwrite the value in c, the record we retrieved from the database, so we don't lose anything we already have?

So I still maintain that it would be much better to have a RawCheckpoint record without Status, and another record that contains it and adds the Status and whatever else we might need in the future.
Thanks for stating your point! I agree that we should separate RawCheckpoint from Status. Would it make sense if I make the changes as follows:
- Remove status from the RawCheckpoint proto message.
- Define a const CheckpointStatus in types.go.
- checkpoints stores RawCheckpoint without Status.
- Add a new store called hashToStatus which maps a checkpoint hash to its status. As such, when I update the status, I only need to change the hashToStatus store.
Thanks, happy that our views are converging!
What I'd do is leave the status in the proto - it's good for API queries after all. Then define a wrapper type like CheckpointWithMeta or CheckpointWithStatus that has a raw_ckpt and a status field, and use this as the record to be stored in the database. Then your enumerations continue to work as well, without having to do a separate lookup in the hash-to-status collection, and you can always extend it should it need more fields, such as timestamps (e.g. when it was created, when it was observed on BTC, when it was confirmed, etc.). It's okay to retrieve a record, amend it, and write it back, and it's simpler than maintaining a myriad of collections.
Sounds great. Many thanks!
// TODO: aggregate bls sigs and try to build raw checkpoints
k.BlsSigsState(ctx).CreateBlsSig(sig)
Do you want to keep this design of storing lists of BLS checkpoints?
Yes. I don't have strong opinions either way. I just didn't see any projects that aggregate sigs one by one. I would love to hear it if you have any argument against this.
Against what you are doing? My only counter-argument would be not having to maintain and test code to store and retrieve all of this. The on-the-fly aggregation can be done entirely in memory by amending a checkpoint record. Perhaps it can even have more statuses, like "accumulating", starting with an empty signature. Its unique key is still the epoch, so it could be:
func (k *Keeper) addBlsSig(sig *BlsSig) error {
    ckpt, err := k.GetEpochCheckpoint(sig.EpochNumber())
    if err != nil {
        return err
    }
    validators := k.GetEpochValidators(ckpt.EpochNumber())
    if err := ckpt.Accumulate(validators, sig); err != nil {
        return err
    }
    k.saveEpochCheckpoint(ckpt)
    return nil
}
And we can put all the logic into Accumulate: status checking, status changes, validator eligibility checks, etc.
Something like
type CheckpointMeta struct {
    RawCheckpoint RawCheckpoint
    Status        CheckpointStatus
    TotalPower    Power
}
func (ckpt *CheckpointMeta) Accumulate(validators []ValidatorWithBlsKeyAndPower, sig BlsSig) error {
    if ckpt.Status != CKPT_ACCUMULATING {
        return CheckpointNoLongerAccumulating
    }
    validator, index := findValidator(validators, sig.BlsPublicKey)
    if validator == nil {
        return UneligibleValidator
    }
    if !IsValidSignature(sig, ckpt.RawCheckpoint) {
        return InvalidSignature
    }
    if ckpt.Bitmap[index] {
        return AlreadyVoted
    }
    ckpt.TotalPower += validator.Power
    ckpt.Signature = BlsAggregate(ckpt.Signature, sig.Signature)
    ckpt.Bitmap[index] = true
    if ckpt.TotalPower > totalPower(validators)/3 {
        ckpt.Status = CKPT_UNCHECKPOINTED
    }
    return nil
}
This looks neat. Thanks, @aakoshh! I agree that avoiding storing bls sigs separately is a big gain of on-the-fly aggregation. Since it includes bls aggregation and verification, I'll do this in a future PR.
x/checkpointing/keeper/keeper.go
// CheckpointEpoch verifies checkpoint from BTC and returns epoch number
func (k Keeper) CheckpointEpoch(ctx sdk.Context, rawCkptBytes []byte) (uint64, error) {
    ckpt := k.CheckpointsState(ctx).DeserializeCkpt(rawCkptBytes)
Should have its own type, like ckpt, err := raw_ckpt.NewFromBytes(bytes). This is pure data; there's no "state" involved, so there shouldn't be a need to delegate to the state handler for deserialisation.
// CONFIRMED indicates the checkpoint has sufficient confirmation depth on BTC
CONFIRMED = 2;
// CkptStatus is the status of a checkpoint.
enum CkptStatus{
Suggested change:
- enum CkptStatus{
+ enum CkptStatus {
@@ -45,7 +45,7 @@ service Query {
  // RPC method.
  message QueryRawCheckpointListRequest {
    // status defines the status of the raw checkpoints of the query
-   RawCheckpointStatus status = 1;
+   uint32 status = 1;
Is it possible to use the CkptStatus enum here?
for ; iter.Valid(); iter.Next() {
    rawBytes := iter.Value()
    blsSig := bs.DeserializeBlsSig(rawBytes)
Maybe you can define the serialization/de-serialization methods for types.BlsSig as methods on that type. You can check out what #29 introduced as a reference.
Suggesting this mostly because the SerializeBlsSig and DeserializeBlsSig methods don't belong that much inside the BlsSigsState interface.
Same for the corresponding methods of ckpt_state.
Some minor stuff.
}

// CkptStatus is the status of a checkpoint.
enum CheckpointStatus{
Suggested change:
- enum CheckpointStatus{
+ enum CheckpointStatus {
    return ckptWithMeta, err
}

func BlsSigToBytes(cdc codec.BinaryCodec, blsSig *BlsSig) []byte {
Why add a wrapper for the cdc.MustMarshal method? An argument would be that it abstracts away information, but given that cdc is already a parameter, I would not say that is the case.
TBH, I was following what Cosmos does in the staking module. See here.
ckptWithMeta := types.NewCheckpointWithMeta(ckpt, types.Uncheckpointed)
cs.checkpoints.Set(types.CkptsObjectKey(ckpt.EpochNum), types.CkptWithMetaToBytes(cs.cdc, ckptWithMeta))
It might be a good idea to return an error if this record already exists, just in case.
Or even to panic, as it would be a gross programming error in our case.
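A sketch of that check (the error name and helpers are illustrative; the branch could also panic, as suggested):

// CreateRawCkptWithMeta refuses to overwrite a checkpoint that already exists
// for the same epoch, which would indicate a programming error upstream.
func (cs CheckpointsState) CreateRawCkptWithMeta(ckpt *types.RawCheckpoint) error {
    key := types.CkptsObjectKey(ckpt.EpochNum)
    if cs.checkpoints.Has(key) {
        return types.ErrCkptAlreadyExists
    }
    ckptWithMeta := types.NewCheckpointWithMeta(ckpt, types.Uncheckpointed)
    cs.checkpoints.Set(key, types.CkptWithMetaToBytes(cs.cdc, ckptWithMeta))
    return nil
}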
ckptsKey := types.CkptsObjectKey(epoch)
rawBytes := cs.checkpoints.Get(ckptsKey)
if rawBytes == nil {
    return nil, types.ErrCkptDoesNotExist.Wrap("no raw checkpoint with provided epoch")
This Wrap doesn't seem to provide any extra info beyond what ErrCkptDoesNotExist already says.
Looks good! I'm not sure if you want to do anything more about the BLS discussion. If not, I don't see any blockers in this PR.
Thanks for separating the status and the raw checkpoint, it's easier to grasp this way I think 👍
* fix bug
* fix bug
* fix cmts
* update condition

Co-authored-by: Runchao Han <[email protected]>
This PR fixes BM-22.
In particular,
TODOs for future PRs on the checkpointing module: