
[WIP] MSC3814: Dehydrated devices with SSSS #3814

Draft
wants to merge 16 commits into `main`

Conversation

uhoreg
Member

@uhoreg commented May 12, 2022

@uhoreg changed the title from MSCxxxx: Dehydrated devices with SSSS to MSC3814: Dehydrated devices with SSSS on May 12, 2022
@uhoreg changed the title from MSC3814: Dehydrated devices with SSSS to [WIP] MSC3814: Dehydrated devices with SSSS on May 12, 2022
@uhoreg added the labels e2e, proposal (A matrix spec change proposal), kind:feature (MSC for not-core and not-maintenance stuff) and needs-implementation (This MSC does not have a qualifying implementation for the SCT to review; the MSC cannot enter FCP) on May 12, 2022
/dehydrated_device/{device_id}/events` to obtain the next batch.

```
POST /dehydrated_device/{device_id}/events
```


Why is this a POST and not a GET like /sync and /messages?

Member Author

IIRC, the rationale was because the call has side-effects (deleting the device).


It's a bit weird that it doesn't follow the pattern of /messages, /events or /sync imo. I'll try implementing it as a GET without the device deletion first and see how that works out, I think.

Member

A GET endpoint with side-effects seems like a big no-no to me. Everyone expects a GET request to have approximately zero side-effects.

Member

oh, but we're also proposing removing the side-effects? SGTM in that case

Member Author

Yup, the current implementation no longer automatically deletes the device on the server side, but relies on the client to delete/create a new device. So we're going to try to make this a GET.
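
For illustration, a minimal client-side sketch of the GET-based flow being discussed, assuming the unstable prefix `org.matrix.msc3814.v1` and the `next_batch`/`events` response shape described elsewhere in the MSC; the exact path, prefix and field names here are assumptions, not settled API.

```python
import requests

# Assumed unstable prefix and path; both may change as the MSC evolves.
BASE = "https://matrix.example.org/_matrix/client/unstable/org.matrix.msc3814.v1"

def fetch_dehydrated_events(access_token: str, device_id: str) -> list:
    """Drain the to-device messages for a dehydrated device, batch by batch."""
    headers = {"Authorization": f"Bearer {access_token}"}
    events, next_batch = [], None
    while True:
        params = {"next_batch": next_batch} if next_batch else {}
        # GET has no side effects in this variant, so a failed request can simply be retried.
        resp = requests.get(f"{BASE}/dehydrated_device/{device_id}/events",
                            headers=headers, params=params)
        resp.raise_for_status()
        body = resp.json()
        events.extend(body.get("events", []))
        next_batch = body.get("next_batch")
        if not body.get("events") or next_batch is None:
            return events
```

Deleting the old dehydrated device and uploading its replacement would then happen in separate, explicit calls.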


Once a client calls `POST /dehydrated_device/{device_id}/events`, the server
can delete the device (though not necessarily its to-device messages). Once a


Why should the server delete the device? Shouldn't this rather be done by the client explicitly in a delete call?

Imo it is not quite obvious that fetching the events should "break" the device. A client might fail to properly restore and now you lost all the intermediate sessions. instead the client should replace the device once it is somewhat sure it restored successfully and has uploaded the megolm keys to online backup.

Member Author

The idea is that when the client starts getting events, it means that the client is signalling its intention to use the dehydrated device, and it has been "claimed", so it shouldn't be used by anyone else. At this point, there isn't much that can be done if the client, e.g. fails to decrypt some messages. If it fails to decrypt messages with the dehydrated device, it's unlikely that leaving the device around will fix anything in the future -- any future attempts would likely fail as well. So the best thing to do is to replace the dehydrated device with a new one anyways.

I'm not insistent on this endpoint deleting the dehydrated device, but I think that once you start using a dehydrated device, you'll want to create a new device no matter what.


It's more that if the device fetches the first few events but then the user closes the browser and it never gets to upload the devices, then you have effectively thrown the dried fish out the window without properly getting to use it. So in that case it should either only delete the device when it deletes the first few messages (i.e. when the client paginates with a next token), or just wait for the user to send a new device, since you CAN still use the same dried device from another device, I think. All of the messages will be PRE KEY messages, so you can decrypt them as long as you haven't deleted the one-time keys from the pickled device. So even if a client downloads the first batch of messages and then starts with the next batch and the first batch gets deleted, a different client should still be able to pick up from there.

I agree that you want to create a new device no matter what, but that can just be done by uploading a new one instead of implicitly doing it when receiving messages.

Member Author

Yeah, the issue of a client starting to load events and then dying somehow is possible, but seems like it would be extremely rare.

I think another consideration is that a client could forget to replace the dehydrated device. If the device gets deleted automatically, then it makes it obvious that the client didn't do that.

In any event, I think it's fine to try it out with GET and without automatically deleting the device, and see how it goes.

Contributor

There's also an issue if there's a connection problem during the POST request; the client presumably won't be able to re-try the request because the device will have been deleted.
We generally try to design our APIs so that they work / can be retried even if the request fails half-way through for some reason.

Member Author

Actually, the idea is that the dehydrated device is "deleted" in the sense that no other client can claim it, and if a client queries for the dehydrated device, it won't be returned. But the events associated with it are still there and can be retrieved (until the events get deleted as described elsewhere in the MSC), so if the client re-tries the POST request, it will still get the events.


Yes, the problem is just that if the client fails to replace the device, then a failed login attempt will break the device dehydration until the next successful login, since there is no way to receive messages in the meantime (while that would work fine if the device is just kept).


I actually ran into another race condition here in production. We currently have it implemented that the PUT of a new device removes the old device. However, since uploading a new device takes several requests (claim new device, upload keys, sign it, upload encrypted device), we run into a race condition, where the user closes the browser window during one of the steps and maybe only signs back in later. That means we have an unhydrateable device and we again lose messages over the gap. Ideally there would be some way to make this atomic to prevent this race condition.

Contributor

I agree that having this be a PUT is the wrong pattern here. If the whole idea here is for a Client to fetch to-device events and room keys then we should design the flows in a way that minimizes the risk of loss of such room keys.

As it stands the flows can't be resumed once you start fetching the to-device events, and the fetching of the to-device events will be by far the longest operation here.

What would allow perfect resumption is:

  1. The dehydrated device only gets deleted if the client requests so.
  2. The to-device events only get deleted¹ when the dehydrated device gets deleted, this is opposed to the current mechanism, where to-device events get deleted once the server sees a next_batch from a previous request.

No 2. would ensure that, even if a device that attempts a rehydration gets stopped and deleted mid-rehydration, another, new device can restart the rehydration process.

I actually ran into another race condition here in production. We currently have it implemented that the PUT of a new device removes the old device. However, since uploading a new device takes several requests (claim new device, upload keys, sign it, upload encrypted device),

Agree here as well, PUT of a new device should happen in a single request which should upload the dehydrated device, its device keys and any one-time keys. I implemented a draft version of this behavior in this patch: matrix-org/synapse@777b305
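
To make the shape of such an atomic upload concrete, here is a hedged sketch of a single `PUT` that carries the pickled device together with its keys; the body layout loosely mirrors `/keys/upload` and is illustrative, not the linked patch's exact API.

```python
import requests

# Assumed unstable prefix; field names below are illustrative, not final.
BASE = "https://matrix.example.org/_matrix/client/unstable/org.matrix.msc3814.v1"

def upload_dehydrated_device(access_token: str, device_id: str, pickle: str,
                             device_keys: dict, one_time_keys: dict, fallback_keys: dict) -> dict:
    """Create/replace the dehydrated device, its device keys and OTKs in one request,
    so a half-finished upload can never leave an unusable dehydrated device behind."""
    body = {
        "device_id": device_id,
        "device_data": {
            "algorithm": "org.matrix.msc3814.v1.olm",
            "device_pickle": pickle,        # encrypted device, as described in the MSC
        },
        "device_keys": device_keys,         # same shape as for /keys/upload (assumption)
        "one_time_keys": one_time_keys,
        "fallback_keys": fallback_keys,
    }
    resp = requests.put(f"{BASE}/dehydrated_device", json=body,
                        headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()
    return resp.json()
```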

dehydration algorithm `m.dehydration.v1.olm` will be called
`org.matrix.msc3814.v1.olm`. The SSSS name for the dehydration key will be
`org.matrix.msc3814` instead of `m.dehydrated_device`.


Client implementation: https://gitlab.com/famedly/company/frontend/famedlysdk/-/merge_requests/1111

Server implementation: matrix-org/synapse#13581

Both not merged yet and notably missing is the dehydrated device format.

losing any megolm keys that were sent to the dehydrated device, but the client
would likely have received those megolm keys itself.

Alternatively, the client could perform a `/sync` for the dehydrated device,

Member

Does this still work with v2? Can we still sync on the dehydrated device?


You can't sync as a different device in this proposal. You can fetch the events for that device, but this proposal implicitly deletes the device in that case, which means you can't keep the device alive after that. So imo your only option is to replace it (which is somewhat easy to do, but you might need to authenticate the new signature upload/device?).


If the client is able to decrypt the data and wants to use the dehydrated
device, the client retrieves the to-device messages sent to the dehydrated
device by calling `POST /dehydrated_device/{device_id}/events`, where

Member

Why include a device_id if you can only have a single dehydrated device? It has implications that you can provide more than one (and causes additional error checking of whether the provided device ID matches the dehydrated device ID).

Contributor

I agree that the device ID is redundant. However, since this MSC was written, a new use case for this endpoint has been found.

In the sliding sync world we have split out the fetching of to-device events into a separate sync loop. Namely one of the biggest problems of the existing /sync mechanism is that you get too much data all at once and the downloading and processing of that data prevents the UI from being updated.

To-device events are one of those things that are not directly related to the things that a client will want to display in a room or room list, so putting it into a separate sync loop allows the main loop to quickly send updates while to-device moves along in the background. More info here: matrix-org/matrix-rust-sdk#1928

I think that old sync could handle such a split as well, so I would suggest renaming the endpoint to /sync/to_device/{device_id}, where device_id might be optional and only used in the case of a dehydrated device.

Member Author

The reason for the device_id parameter is that, while one client is fetching events, another client could create a new dehydrated device. Without the device_id parameter, the server could think that the client wants to fetch the events for the new device which, if there are any, it won't be able to decrypt since it's for a device that it doesn't know about. With the device_id parameter, the server will at least be able to say that there are no more events (since the device has been replaced by a new one).

Member

This sounds quite racy to me -- how does the server know that one dehydrated device is claimed? How would the client know to make a new one instead of claim the old one?

Member Author

how does the server know that one dehydrated device is claimed?

It's OK for multiple clients to rehydrate the same device (unlike in the previous proposal), because it never becomes a real device. So the server can just wait until some client fetches all the events before dropping the device.

How would the client know to make a new one instead of claim the old one?

Making a new device and rehydrating an old one are two different use cases. Rehydration happens after you log in, and you're setting up encryption and trying to get keys. It only happens once in the device's lifetime. Creating a new dehydrated device would happen after you've already set up your encryption and already attempted to rehydrate a device.

Comment on lines 126 to 129
batches. For the last batch of messages, the server will still send a
`next_batch` token, and return an empty `events` array when called with that
token, so that it knows that the client has successfully received all the
messages and can clean up all the to-device messages for that device.

Member

I'm failing to track what's going on here, I think it is:

  1. Client requests with no next_batch; server returns some messages and a next_batch of A
  2. Client requests with next_batch=A; server returns some messages and a next_batch of B
  3. Client requests with next_batch=B; server returns an empty array of messages and a next_batch of C
  4. Client requests with next_batch=C and discards any response

I'm unsure about steps 3 & 4. Why would the server provide a next_batch if it is already out of messages?

Member Author

I think that in your example, the "last batch" of messages is 2. where the client requests with next_batch=A. So the server still returns a next_batch of B even though it knows that the next call will be empty. Then at 3. when the client requests with next_batch=B, the server discards all messages, and returns an empty array of messages and no next_batch. I agree that the text could be clearer.
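
Concretely, under the original design described above, the exchange from the example would look like this (a sketch; the token values are just the ones from the example):

```python
# (request next_batch, response) pairs under the original design:
exchange = [
    (None, {"events": ["<batch 1>"], "next_batch": "A"}),
    ("A",  {"events": ["<batch 2>"], "next_batch": "B"}),  # last real batch, token still returned
    ("B",  {"events": []}),  # empty, no next_batch; server may now delete the stored to-device messages
]
```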

yingziwu added a commit to yingziwu/synapse that referenced this pull request Aug 2, 2023
- Implement [MSC3814](matrix-org/matrix-spec-proposals#3814), dehydrated devices v2/shrivelled sessions and move [MSC2697](matrix-org/matrix-spec-proposals#2697) behind a config flag. Contributed by Nico from Famedly, H-Shay and poljar. ([\matrix-org#15929](matrix-org#15929))
Fizzadar added a commit to beeper/synapse-legacy-fork that referenced this pull request Aug 29, 2023
│Curve25519 key pair │ KeyPair │ 64 │
│Number of one-time keys │ u32 │ 4 │
│One-time keys │ [OneTimeKey] │ N * 69 │
│Fallback keys │ FallbackKeys │ 2 * 69 │

Member Author

I think we only need one fallback key. Normal clients store two fallback keys because they upload a new fallback key after the current fallback key is used. But with dehydrated devices, we will never upload a new fallback key.

│Number of one-time keys │ u32 │ 4 │
│One-time keys │ [OneTimeKey] │ N * 69 │
│Fallback keys │ FallbackKeys │ 2 * 69 │
│Next key ID │ u32 │ 4 │

Member Author

Likewise, I don't think we need this because we won't upload new OTKs after we upload the dehydrated device. It also makes assumptions about the structure of the OTK IDs, which may not be true for all olm implementations.

Comment on lines +223 to +224
│Key ID │ u32 │ 4 │
│Is published │ u8 │ 1 │

Member Author

I don't think we need these, because the key ID doesn't get used when decrypting events, and any OTKs that are included in the dehydrated device will surely have been published -- any unpublished OTKs should just be omitted.

├────────────────────────┼───────────────┼──────────────────┤
│Key ID │ u32 │ 4 │
│Is published │ u8 │ 1 │
│Curve 25519 key pair │ KeyPair │ 69 │

Member Author

Suggested change
│Curve 25519 key pair │ KeyPair │ 69
│Curve 25519 key pair │ KeyPair │ 64

If no dehydrated device is available, the server responds with an error code of
`M_NOT_FOUND`, HTTP code 404.

If the client is able to decrypt the data and wants to use the dehydrated

Member Author

I wonder if we can say something like: the server is allowed to discard any non-m.room.encrypted to-device message that it receives for the dehydrated device. There's no point in keeping key requests sent to the dehydrated device because it won't send anything back.
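
If that allowance were adopted, the server-side check could be as small as the following sketch (purely illustrative; whether servers may actually drop these messages is exactly the open question above):

```python
def should_store_for_dehydrated_device(event_type: str) -> bool:
    """Only encrypted to-device messages are worth queueing for a dehydrated device:
    it will never answer key requests or other plaintext protocol messages."""
    return event_type == "m.room.encrypted"
```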

will remove any previously-set dehydrated device.

The client *must* use the public [Curve25519] [identity key] of the device,
encoded as unpadded Base64, as the device ID.

Member Author

Since the device ID ends up in a path parameter, we should probably make this URL-safe Base64 to avoid issues with clients failing to URL-encode the ID.

Suggested change
encoded as unpadded Base64, as the device ID.
encoded as unpadded URL-safe Base64, as the device ID.
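
For comparison, a small sketch of the two encodings under discussion, applied to the same 32-byte Curve25519 public key (the key bytes here are a random placeholder):

```python
import base64
import os

identity_key = os.urandom(32)  # placeholder for the device's Curve25519 public identity key

device_id_std     = base64.b64encode(identity_key).rstrip(b"=").decode()          # unpadded Base64 (current text)
device_id_urlsafe = base64.urlsafe_b64encode(identity_key).rstrip(b"=").decode()  # unpadded URL-safe Base64 (suggestion)

# The two differ only in the '+'/'/' vs '-'/'_' alphabet; a client using the standard
# alphabet must still URL-encode the ID when it appears in a path segment.
```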

Member Author

This was previously considered, but was rejected as we would end up with two different representations of the same data (as device ID and as the key), which would be confusing.

"device_keys": {
"user_id": "<user_id>",
"device_id": "<device_id>",
"valid_until_ts": <millisecond_timestamp>,

Member Author

This property isn't defined anywhere

If the given `device_id` is not the dehydrated device ID, the server responds
with an error code of `M_FORBIDDEN`, HTTP code 403.

### Deleting a dehydrated device

Member Author

We should probably specify what happens when we use POST /delete_devices and DELETE /devices/{deviceId} on the dehydrated device. Also POST /logout/all. Presumably those would delete the dehydrated device. So maybe we don't need our own DELETE endpoint here? Though this endpoint allows you to delete the dehydrated device without knowing the device ID, so might still be useful.

TODO: Explain why the double derivation is necessary.

The encryption key used for the dehydrated device will be randomly generated
and stored/shared via SSSS using the name `m.dehydrated_device`.

Member Author

If I'm reading the iOS implementation correctly, the key is encoded with unpadded Base64 (as is done with the other keys in secret storage).

Comment on lines +173 to +174
For the last batch of messages, the server will still send a
`next_batch` token, and return an empty `events` array when called with that

Member

This is not what https://spec.matrix.org/v1.9/appendices/#pagination says should happen. I'd like to see this changed before this stabilises.

Member Author

The original reason why this endpoint used an empty events array to signal the end was that the client using the final next_batch token would signal to the server that it could delete the to-device messages. Since we aren't doing that any more, we can make it work like the appendix says it should.

Comment on lines +336 to +340
To reduce the chances of one-time key exhaustion, if the user has an active
client, it can periodically replace the dehydrated device with a new dehydrated
device with new one-time keys. If a client does this, then it runs the risk of
losing any megolm keys that were sent to the dehydrated device, but the client
would likely have received those megolm keys itself.

Member

Are we doing this [replacing the dehydrated device periodically] or not?

It seems like both have serious downsides. If we do replace it, we have a very racy operation that is certain to cause UTDs in practice. If we don't replace it, then we'll end up with no remaining OTKs at all, and an incredibly long list of to-device messages all of which have to be downloaded and decrypted by any new clients.

Comment on lines +271 to +296
```math
\begin{aligned}
DEVICE\_KEY
&= \text{HKDF} \left(\text{``Device ID``}, RANDOM\_KEY, \text{``dehydrated-device-pickle-key"}, 32\right)
\end{aligned}
```

The `device_key` is then further expanded into an AES-256 key, an HMAC key and an
initialization vector.


```math
\begin{aligned}
AES\_KEY \parallel HMAC\_KEY \parallel AES\_IV
&= \text{HKDF}\left(0,DEVICE\_KEY,\text{``Pickle"},80\right)
\end{aligned}
```

The plain-text is encrypted with [AES-256] in [CBC] mode with [PKCS#7] padding,
using the key $`AES\_KEY`$ and the IV $`AES\_IV`$ to give the cipher-text.

The cipher-text is then passed through [HMAC-SHA-256]. The first 8 bytes of the
MAC are appended to the cipher-text.

The cipher-text, including the appended MAC tag, is encoded using unpadded
Base64 to give the device pickle.
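
A sketch of that scheme using the Python `cryptography` package, for concreteness. It assumes HKDF-SHA-256 with Olm-style (salt, key material, info, length) ordering, takes the first HKDF argument to be the device ID bytes, and reads the `0` salt as 32 zero bytes; those readings are assumptions, and the ordering/constant questions are raised in the comments below.

```python
import base64
from cryptography.hazmat.primitives import hashes, hmac, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def pickle_device(random_key: bytes, device_id: bytes, plaintext: bytes) -> str:
    # DEVICE_KEY = HKDF(salt="Device ID", IKM=RANDOM_KEY, info="dehydrated-device-pickle-key", L=32)
    device_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=device_id,
                      info=b"dehydrated-device-pickle-key").derive(random_key)

    # AES_KEY || HMAC_KEY || AES_IV = HKDF(salt=0, IKM=DEVICE_KEY, info="Pickle", L=80)
    expanded = HKDF(algorithm=hashes.SHA256(), length=80, salt=b"\x00" * 32,
                    info=b"Pickle").derive(device_key)
    aes_key, hmac_key, aes_iv = expanded[:32], expanded[32:64], expanded[64:80]

    # AES-256 in CBC mode with PKCS#7 padding
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(aes_key), modes.CBC(aes_iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()

    # HMAC-SHA-256 over the cipher-text, first 8 bytes of the MAC appended
    mac = hmac.HMAC(hmac_key, hashes.SHA256())
    mac.update(ciphertext)
    tagged = ciphertext + mac.finalize()[:8]

    # unpadded Base64
    return base64.b64encode(tagged).rstrip(b"=").decode()
```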


Why is this key derivation so different from the existing symmetric scheme for encrypting secrets in secret storage? I'm looking at https://spec.matrix.org/v1.10/client-server-api/#msecret_storagev1aes-hmac-sha2-1

If the existing m.secret_storage.v1.aes-hmac-sha2 is not good enough for this key, then why are we still using it for the other secrets?

If it is good enough, then why do something randomly different here?

IMO if there's going to be a new scheme for encrypting secrets in secret storage, then that should be a new MSC all on its own. Otherwise it feels like meaningless duplication of effort, and more chance for implementations to make a stupid mistake.

Contributor

As you have noticed, this is a scheme already used in the stack, it's used by Olm/Megolm. The libolm pickle format is using it as well.

Not that I'm disagreeing with you on your other points, but the scheme isn't randomly different.

Comment on lines +284 to +285
AES\_KEY \parallel HMAC\_KEY \parallel AES\_IV
&= \text{HKDF}\left(0,DEVICE\_KEY,\text{``Pickle"},80\right)


Why is the IV not random here?

Is there a reason why this encryption needs to be deterministic? I don't think there is. But even if so, why not use a well known deterministic encryption mode like SIV?

Otherwise this looks pretty sketchy. In practice it's probably not going to lead to an exploit, but it should probably get flagged in a security audit.


OK, I think I see what's going on. Based on this comment https://github.com/matrix-org/matrix-rust-sdk/blob/794b11a0cecd2fab8e43931639eba5212fb89921/crates/matrix-sdk-crypto/src/dehydrated_devices.rs#L347 you're using the libolm pickle encryption scheme?

But given the warning in that Rust comment, it sounds like the potential for IV reuse here is a real problem. If an application fails to generate a new random key when it creates a new dehydrated device, then you're going to reuse the IV.

So I guess this just brings me back to my question below: Why not just use the existing m.secret_storage.v1.aes-hmac-sha2 construction here?

```math
\begin{aligned}
DEVICE\_KEY
&= \text{HKDF} \left(\text{``Device ID``}, RANDOM\_KEY, \text{``dehydrated-device-pickle-key"}, 32\right)
\end{aligned}
```

Member

RANDOM_KEY is mentioned in the example but not defined in the text. Should have a sentence to define what we mean by it unambiguously. I left a suggestion to resolve the concern above.

Member

Also the quotes around "Device ID" make me think this is a constant. Will leave a suggestion for that as well.

```math
\begin{aligned}
AES\_KEY \parallel HMAC\_KEY \parallel AES\_IV
&= \text{HKDF}\left(0,DEVICE\_KEY,\text{``Pickle"},80\right)
\end{aligned}
```

Member

Incidentally, I don't think we define the HKDF parameter order anywhere. Not in this MSC and not in the spec. The only place that defines it is the Olm specification.

We should define it in the MSC and then reflect in the spec as well.

MAC are appended to the cipher-text.

The cipher-text, including the appended MAC tag, are encoded using unpadded
Base64 to give the device pickle.

Member

Can we name the RFC we are basing our Base64 encoding on?

Comment on lines +268 to +269
The randomly generated encryption key *must* be expanded using the HMAC-based
Key Derivation function defined in [RFC5869].

Member

Suggested change
The randomly generated encryption key *must* be expanded using the HMAC-based
Key Derivation function defined in [RFC5869].
The randomly generated encryption key (`RANDOM_KEY` in the example below)
*must* be expanded using the HMAC-based Key Derivation function defined in
[RFC5869].

Comment on lines +271 to +276
```math
\begin{aligned}
DEVICE\_KEY
&= \text{HKDF} \left(\text{``Device ID``}, RANDOM\_KEY, \text{``dehydrated-device-pickle-key"}, 32\right)
\end{aligned}
```

Member

Suggested change
```math
\begin{aligned}
DEVICE\_KEY
&= \text{HKDF} \left(\text{``Device ID``}, RANDOM\_KEY, \text{``dehydrated-device-pickle-key"}, 32\right)
\end{aligned}
```
```math
\begin{aligned}
\text{DEVICE\_KEY}
&= \text{HKDF} \left(\text{Device ID}, \text{RANDOM\_KEY}, \text{``dehydrated-device-pickle-key"}, 32\right)
\end{aligned}
```

Comment on lines +283 to +286
\begin{aligned}
AES\_KEY \parallel HMAC\_KEY \parallel AES\_IV
&= \text{HKDF}\left(0,DEVICE\_KEY,\text{``Pickle"},80\right)
\end{aligned}

Member

Suggested change
\begin{aligned}
AES\_KEY \parallel HMAC\_KEY \parallel AES\_IV
&= \text{HKDF}\left(0,DEVICE\_KEY,\text{``Pickle"},80\right)
\end{aligned}
\begin{aligned}
\text{AES\_KEY} \parallel \text{HMAC\_KEY} \parallel \text{AES\_IV}
&= \text{HKDF}\left(0,\text{DEVICE\_KEY},\text{``Pickle"},80\right)
\end{aligned}


The plain-text is encrypted with [AES-256] in [CBC] mode with [PKCS#7] padding,
using the key $`AES\_KEY`$ and the IV $`AES\_IV`$ to give the cipher-text.

Member

Suggested change
using the key $`AES\_KEY`$ and the IV $`AES\_IV`$ to give the cipher-text.
using the key $`\text{AES\_KEY}`$ and the IV $`\text{AES\_IV}`$ to give the cipher-text.

The alternatives discussed in MSC2697 are also alternatives here.


## Security considerations

Member

This section should contain a discussion of malicious dehydrated devices injected by the server.

The MSC in its current form is almost purely focused on the mechanics of creation and rehydration of dehydrated devices. There is very little discussion of how these devices should be treated by message senders.

And yet, these are not ordinary devices, as evidenced by the fact that we are proposing special UI/UX for them in the section concerning the dehydrated flag. The obvious concern is that dehydrated devices could end up being hidden or obscured from the user's view—even more so than ordinary devices—and therefore bear an even greater risk of malicious injection.

The long-term plan is to move to a model where a user's devices must be signed by the user's cryptographic identity in order to be considered valid (see MSC4153). Given that context, and the fact that dehydrated devices are a completely new feature, I strongly recommend that this MSC should require that a dehydrated device MUST be signed by a pinned (TOFU-trusted) user identity in order to be considered valid. If the dehydrated device is not signed, or is signed by a user identity which is not the one that is currently pinned by the client, the dehydrated device MUST be ignored by senders as if it does not exist. That is, clients MUST NOT send any to-device messages to such a device nor accept any to-device messages from it.
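
As a rough sketch of that sender-side rule (the `verify_signature` callback is hypothetical and stands in for whatever the client's crypto layer provides):

```python
def dehydrated_device_is_acceptable(device_keys: dict, pinned_self_signing_key: str,
                                    verify_signature) -> bool:
    """Treat a dehydrated device as non-existent unless it is signed by the
    user identity (self-signing key) the client has already pinned (TOFU)."""
    user_id = device_keys["user_id"]
    signatures = device_keys.get("signatures", {}).get(user_id, {})
    sig = signatures.get(f"ed25519:{pinned_self_signing_key}")
    if sig is None:
        return False  # unsigned, or signed by an identity other than the pinned one
    # verify_signature is a hypothetical callback: (json_object, public_key, signature) -> bool
    return verify_signature(device_keys, pinned_self_signing_key, sig)
```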
