
Ambiguity around rotating keys and deleting metadata #71

Open
erickt opened this issue Dec 4, 2019 · 7 comments
erickt commented Dec 4, 2019

In section 5.1.9, it states:

1.9. If the timestamp and / or snapshot keys have been rotated, then delete the trusted timestamp and snapshot metadata files. This is done in order to recover from fast-forward attacks after the repository has been compromised and recovered. A fast-forward attack happens when attackers arbitrarily increase the version numbers of: (1) the timestamp metadata, (2) the snapshot metadata, and / or (3) the targets, or a delegated targets, metadata file in the snapshot metadata. Please see the Mercury paper for more details.

There is some ambiguity here:

  • We should define what it means to rotate a key. Do we only remove the metadata if we add a new key and remove an old key at the same time in a root metadata? Or should we also delete the local metadata if we remove a key?
  • In the case where we have multiple keys and a threshold > 1 for a role, do we still delete the metadata if we only rotate one key? Theoretically, it should be safe to rotate less than the threshold number of keys in a given root metadata, since an attacker shouldn't be able to perform a fast-forward attack with less than the threshold number of compromised keys.
  • Why do we delete both the local timestamp and snapshot metadata if just the timestamp key is rotated, rather than just deleting the timestamp metadata? Similarly for just a snapshot key rotation. Does this protect against a known attack, or is this more about cleaning up attacker-controlled files that might contain other unknown attacks?
  • Why do we not delete the targets metadata if those keys are rotated? (I think this might be addressed though by @lukpueh in Remove problematic targets rollback attack check #65 or 84103fc if that gets merged in).
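To make the first ambiguity concrete, here is one possible reading of the quoted step 1.9 as a Python sketch. The helper names and metadata layout are hypothetical, not spec or reference-implementation API; this reading treats *any* change to a role's key set as a rotation, which is exactly the question raised above:

```python
# Hypothetical sketch of client step 1.9. `old_root`/`new_root` are parsed
# root metadata; `trusted` maps role names to locally cached metadata.

def keys_rotated(old_root: dict, new_root: dict, role: str) -> bool:
    """One possible definition: any change to the role's key set counts
    as a rotation (added key, removed key, or both)."""
    return set(old_root["roles"][role]["keyids"]) != set(
        new_root["roles"][role]["keyids"]
    )

def step_1_9(old_root: dict, new_root: dict, trusted: dict) -> None:
    """Delete trusted timestamp and snapshot metadata to recover from a
    possible fast-forward attack after a key rotation."""
    if keys_rotated(old_root, new_root, "timestamp") or keys_rotated(
        old_root, new_root, "snapshot"
    ):
        trusted.pop("timestamp", None)
        trusted.pop("snapshot", None)
```

Note that this reading ignores thresholds entirely, which is the second ambiguity: rotating a single key out of a multi-key role would already trigger deletion here.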

Thanks again!

mnm678 commented Dec 13, 2019

I opened #74 to address the first two points you made and clarify what rotation means in this context, please review/comment. I will leave the last point to @lukpueh in #65. If that does not answer the question @erickt, we can move the discussion there.

As for your third point, the trusted snapshot metadata will always need to be deleted after a fast-forward attack because it will list the fast-forwarded version of the timestamp or other metadata. I am not sure that this holds for the timestamp (if only snapshot is compromised, the timestamp metadata can still be trusted). However, if online keys are used for both of these roles, in practice it would make sense to rotate (revoke and replace) both timestamp and snapshot in the event of a compromise. @trishankatdatadog @JustinCappos is there another attack that is prevented by deleting both timestamp and snapshot? Either way, I think it would be safer to continue to delete both the timestamp and snapshot metadata as part of compromise recovery.

lukpueh commented Dec 16, 2019

@mnm678: Timestamp needs to be deleted, even if only snapshot was compromised, because it could list the fast-forwarded version number of snapshot. In the same way, snapshot needs to be deleted, even if only (delegated) targets were compromised.

lukpueh commented Dec 16, 2019

  • Why do we delete both the local timestamp and snapshot metadata if just the timestamp key is rotated, rather than just deleting the timestamp metadata?

IIUC it should be enough if the client only deletes the trusted timestamp when any of timestamp, snapshot, or targets had a threshold of keys removed via root. The client will then be forced via the new timestamp to fetch a new snapshot, and in turn forced via snapshot to fetch new targets metadata.

However, this approach requires a PR that does for snapshot what #65 does for targets, i.e. one that removes the old-to-new snapshot version comparison in favor of the snapshot version check via timestamp.

I prefer this strategy because it seems like a cleaner separation of roles -- only timestamp should be responsible for freshness -- and thus reduces complexity.
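The timestamp-only strategy described above could be sketched roughly as follows (helper names and metadata layout are illustrative assumptions, not the reference implementation); here a role counts as rotated only if at least a threshold of its keys were removed via root:

```python
# Sketch of the proposed timestamp-only deletion strategy. A role's keys
# count as rotated only when at least `threshold` of them were removed.

def threshold_of_keys_removed(old_root: dict, new_root: dict, role: str) -> bool:
    old = old_root["roles"][role]
    removed = set(old["keyids"]) - set(new_root["roles"][role]["keyids"])
    return len(removed) >= old["threshold"]

def maybe_delete_timestamp(old_root: dict, new_root: dict, trusted: dict) -> None:
    if any(
        threshold_of_keys_removed(old_root, new_root, role)
        for role in ("timestamp", "snapshot", "targets")
    ):
        # Deleting only timestamp is enough under this proposal: the new
        # timestamp forces a new snapshot, which forces new targets.
        trusted.pop("timestamp", None)
```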

lukpueh commented Dec 16, 2019

  • Why do we not delete the targets metadata if those keys are rotated?

As described above, I think it's enough to require the client to only delete the timestamp metadata autonomously upon key rotation, because the new timestamp allows an automatic chain of updates of the rest of the metadata.

Currently we only say that timestamp or snapshot "rotation" requires deleting old trusted metadata. I think we need to add targets as well. I suggest the following wording based on #74:

If a threshold of any top-level role keys other than root has been removed, then delete the trusted timestamp metadata file.

@mnm678, what do you think?

mnm678 commented Dec 18, 2019

IIUC it should be enough if the client only deletes the trusted timestamp when any of timestamp, snapshot, or targets had a threshold of keys removed via root. The client will then be forced via the new timestamp to fetch a new snapshot, and in turn forced via snapshot to fetch new targets metadata.

This would work to prevent the fast-forward attack, but I think it would make a rollback attack possible. Timestamp verification does not have the per-targets-file rollback check that is performed by snapshot, so if this check were removed from snapshot there would be no explicit rollback protection for targets files.

lukpueh commented Dec 19, 2019

[...] I think it would make a rollback attack possible.

In case no keys are compromised, there still is implicit rollback protection for target files. Because timestamp specifies the latest snapshot, and snapshot specifies the latest targets.

However, I realize now that my suggestion leaves rollback protection entirely to the timestamp role. That is, if we don't check directly on snapshot and targets metadata that a new version is higher than the last trusted version, then a compromise of timestamp would be enough to successfully launch rollback attacks. OTOH, if we leave these checks, a rollback attacker would have to compromise timestamp, snapshot and (delegated) targets.

So I propose,

  • to withdraw Remove problematic targets rollback attack check #65,
  • we require strict version number increment checks from last trusted to new metadata for every role metadata. (The spec already does this for all top-level roles and @erickt has a pending PR that adds it for delegated targets roles, see Clarify how delegated roles are downloaded #72), and
  • in order to recover from fast-forward attacks, we need the following deleting/untrusting strategies:
    (a) if a threshold of targets keys was removed in root, delete old targets and snapshot,
    (b) if a threshold of snapshot keys was removed in root, delete old snapshot and timestamp,
    (c) if a threshold of timestamp keys was removed in root, delete old timestamp,
    (d) if a threshold of delegated targets keys was removed in the delegating targets metadata, delete the delegated targets and snapshot.

(a), (b) and (c) should be clarified in section 1.9, and (d) probably between 4.4 and 4.5. of the client workflow.
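Strategies (a)-(c) for the top-level roles could look roughly like the following sketch (the helper and the `removed_threshold` set are hypothetical; (d) would hook into the delegating targets metadata instead of root):

```python
# Sketch of strategies (a)-(c): `removed_threshold` holds the names of
# roles that had a threshold of keys removed in the new root metadata,
# and `trusted` maps role names to locally cached metadata.

def fast_forward_recovery(removed_threshold: set, trusted: dict) -> None:
    if "targets" in removed_threshold:    # (a)
        trusted.pop("targets", None)
        trusted.pop("snapshot", None)
    if "snapshot" in removed_threshold:   # (b)
        trusted.pop("snapshot", None)
        trusted.pop("timestamp", None)
    if "timestamp" in removed_threshold:  # (c)
        trusted.pop("timestamp", None)
```

Each deletion removes exactly the metadata that could carry a fast-forwarded version number for the rotated role, while the strict version-increment checks on every role (kept per the first two points) preserve rollback protection.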

@mnm678, does this make sense?

mnm678 commented Dec 19, 2019

@lukpueh That makes sense. It adds a few extra steps to the fast-forward recovery, but I think having rollback protection outside of just the timestamp role is important.
