Improved multisig handling #5661
Comments
Neat! Is the e2e encryption just an add-on bonus? We don't technically need the transfer to be encrypted, but it's nice. Also, step (6), who does the actual broadcasting? It needs to be a single entity. Is that agreed upon during step (1)? |
Being that this logic (command) is actually in the SDK, I'll move the issue there. |
Hello, we have been looking into this issue before and we considered the same technology @zmanian proposed. After some thinking we came up with a solution.

Proposal

The proposal is to use a modified magic-wormhole implementation as an input to the SPAKE2 protocol and as a network transport layer, providing a more convenient way to exchange unsigned transactions and their signatures.

Execution

The multisigning session initiator shares a weak session key with all signing peers. The weak session key looks like this:

A Cosmos address and a generated channel id will be transparently appended to this session key before deriving the stronger session key used for communication with the signing peer via SPAKE2. This allows the multisigning session initiator to keep track of only one weak session key, while still having a different weak session key per signing user. It also helps distinguish between multisigning sessions in the unlikely event that two signing parties initiate multisigning sessions independently.

Interaction

Given the accounts C1^k^ … CM^k^,
the command will create 3 channels (as many channels as public keys are required for the multisig), and will print the session ID for this interaction. The actual communication string will be composed of:
where
The signers (CX) will, on the other side, have to run the command line client for signing a transaction, providing the
At this point the initiator of the multisig process will receive confirmations from the signers and will be able to verify the signatures:
Required infrastructure

The participants in the multisig need to connect to a lighthouse/relay to communicate with each other. The lighthouse has to be hosted as a centralized service and provides websocket connectivity to the parties (Open Issue #2).

OPEN ISSUES
Resources
what do you think about it? |
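The per-peer key derivation described in the proposal above could be sketched roughly as follows. This is only an illustration: the separator and the use of SHA-256 are assumptions, and a real implementation would feed the derived value into SPAKE2 as the password input rather than use a plain hash as a key.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// derivePeerKey combines the initiator's single weak session key with a
// signer's Cosmos address and a generated channel id, so each signing
// peer ends up with a distinct derived value even though the initiator
// tracks only one weak key. In the real protocol this value would be
// the low-entropy input to SPAKE2, which upgrades it to a strong
// shared session key.
func derivePeerKey(weakKey, cosmosAddr, channelID string) [32]byte {
	// The "|" separator is a hypothetical choice for illustration only.
	return sha256.Sum256([]byte(weakKey + "|" + cosmosAddr + "|" + channelID))
}

func main() {
	k1 := derivePeerKey("7-guitarist-revenge", "cosmos1signer1", "chan-01")
	k2 := derivePeerKey("7-guitarist-revenge", "cosmos1signer2", "chan-02")
	// Same weak key, but each signing peer gets a different derived key.
	fmt.Println(k1 != k2)
}
```

This keeps the initiator's bookkeeping to a single weak key while still separating sessions per signer, which is the property the proposal relies on.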
@zmanian do you have thoughts on the rough protocol stated above? I'm also a bit uneasy about introducing such complexities directly into the SDK. What are your thoughts on wallet providers enabling such a protocol instead? |
Maybe it doesn't belong directly in the SDK but in a stand alone binary that uses the SDK as dependency. I think a more usable multisig is an imperative. |
I understand that including a somewhat complex component in the SDK is always a delicate choice, and delegating the functionality seems a safer approach. On the other hand, in my experience there is a trade-off to consider when delegating certain functionality to the "clients": they might, for example, choose different approaches that create confusion and incompatibilities. We can start with a standalone implementation, also to evaluate whether the proposed solution works well and improves the multisig experience significantly. But if it does, it would eventually have to be included in the SDK, since adding another tool goes against the very goal of making things easier (from my point of view). Could this be a safer way to approach the issue @alexanderbez @zmanian ? |
@noandrea I would say a standalone implementation would be great. The relayer has some of this functionality and you should see some code examples in there. |
@zmanian dixit:
I fully agree. A better multisig UX is needed, whilst additional complexity should not be put on the SDK. A standalone tool seems the most sensible solution. |
Hi, I'm working together with @noandrea on this. I couldn't find multisig functionality in the relayer. Did you mean the general structure of an off-chain CLI program that connects with a node? |
I also hesitate a bit on the complexity of this solution as being maybe unnecessary. Having e2e encryption is nice to have, but these messages will all be published publicly as soon as the interaction is complete, so I'm not sure it's actually needed.

The problem this solution addresses most is the actual transport of messages, rather than the composition and coordination of multiple signatures within a single multisig transaction. That seems to be the part that is currently the most difficult: manually opening a JSON file and adding a signature that was manually created previously is the painful part, isn't it? Just a command which can properly assess a JSON object that may already contain signatures, and properly add new ones (or check that a signature would not be redundant), would be a clear improvement that doesn't even require networking work.

Getting all signers online at the same time is also a potential problem I see. I would imagine asynchronous emails with JSON attachments are actually preferable to getting everyone online at the same time. It also avoids the problem of running a server to be used as the lighthouse in this scenario. However, if there were a way to use something like a dedicated signing server for asynchronous signature sharing, I could see the benefit. Maybe if the CLI were able to reference the JSON object from a remote URL as well as locally, it would already create a better user flow than email. Maybe the ability to post it to IPFS? Ensure it stays pinned?

I imagine an ideal user flow could be something like:

```shell
gaiacli tx send $(gaiacli keys show me -a) $(gaiacli keys show you -a) 1000uatoms --generate-only > multi.json
# multi.json generated with no signatures

gaiacli tx add-sig multi.json --from me
# multi.json modified to include signature from me

gaiacli tx add-sig multi.json --from me
# Error: Signature from <cosmosaddress> already included

# At this point the multi.json file can be sent via email, published via IPFS or on a publicly available URL

gaiacli tx add-sig https://pastebin.com/raw/abcdefg123 --from him > multi.sig
# after confirming this is the correct transaction, the signed remote JSON object could be saved locally

gaiacli tx broadcast multi.json
```

I don't do multisig transactions terribly often though. @zmanian @ebuchman what do you all think? |
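The duplicate-signature check that a hypothetical `add-sig` command would need could look something like the sketch below. The `Tx`/`Signature` shapes and field names here are simplified assumptions for illustration, not the actual SDK transaction format.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Signature is a simplified stand-in for an SDK signature entry.
type Signature struct {
	PubKey    string `json:"pub_key"`
	Signature string `json:"signature"`
}

// Tx is a simplified stand-in for an unsigned or partially signed tx.
type Tx struct {
	Msgs       []string    `json:"msgs"`
	Signatures []Signature `json:"signatures"`
}

// AddSignature appends sig to tx unless a signature from the same
// public key is already present, in which case it returns an error.
func AddSignature(tx *Tx, sig Signature) error {
	for _, s := range tx.Signatures {
		if s.PubKey == sig.PubKey {
			return fmt.Errorf("signature from %s already included", sig.PubKey)
		}
	}
	tx.Signatures = append(tx.Signatures, sig)
	return nil
}

func main() {
	raw := []byte(`{"msgs":["send"],"signatures":[]}`)
	var tx Tx
	if err := json.Unmarshal(raw, &tx); err != nil {
		panic(err)
	}
	me := Signature{PubKey: "cosmospub1aaa", Signature: "sig-me"}
	if err := AddSignature(&tx, me); err != nil {
		panic(err)
	}
	// Adding the same signer again should fail, as in the flow above.
	err := AddSignature(&tx, me)
	fmt.Println(len(tx.Signatures), err != nil) // prints: 1 true
}
```

Note this part needs no networking at all: it is pure JSON manipulation, which is why it could land in the CLI independently of any transport decision.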
I like this idea @okwme, I'll prepare a POC ASAP |
I think e2e encryption is actually pretty important here. Concealing the identity of the participants in the multisig from a server which is easily compromised is pretty key to making this a safe solution. You can't remove e2e from the requirements. The current best practice is to use Signal attachments for assembling signatures. |
@zmanian dixit:
It surely is! It'd be out of scope of |
I agree with @zmanian that securing the communication channel is very important, especially while transmitting transactions that will eventually be signed. |
Yea it seems like if there are extensive new dependencies a standalone solution would make more sense. I wonder if all of the more minimal implementations have been fully explored though? For example using just SSL to use a common AWS bucket to post / share message & signatures? Maybe even just SFTP credentials to a common endpoint? @ebuchman do you have opinions about any of this? |
So we use multisig all the time and find it not overtly terrible, but it could definitely be improved if sharing the unsigned tx and sigs were native to the tool. I would opt for a simple way to do this with minimal dependencies, e.g. using an AWS S3 bucket where only the signers can read and write. What comes to mind is something like the following:
This is of course just a high level sketch. Actual implementation would be a bit more involved:
As an iterative way to approach this, we could build a simple standalone tool that just pushes and pulls unsigned txs and signatures to an S3 bucket, and otherwise leaves gaiacli exactly as is. I don't know much about what kind of setup wormhole requires, but I would suspect S3 buckets would adequately serve the need here and might be simpler to configure. Not sure if there's something extra wormhole otherwise provides. |
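A simple standalone tool like the one sketched above could keep the transport pluggable behind a small interface, so an S3 backend (or wormhole, or plain HTTPS) is just one implementation. The interface and names below are assumptions for illustration; an in-memory store stands in for what `GetObject`/`PutObject` calls against a bucket would do.

```go
package main

import (
	"fmt"
	"sync"
)

// TxStore is the minimal contract the standalone tool needs: push and
// pull unsigned txs and signatures by object name. An S3-backed
// implementation would satisfy this with bucket reads/writes.
type TxStore interface {
	Push(key string, data []byte) error
	Pull(key string) ([]byte, error)
}

// memStore is an in-memory stand-in used here so the sketch is runnable.
type memStore struct {
	mu   sync.Mutex
	data map[string][]byte
}

func newMemStore() *memStore {
	return &memStore{data: make(map[string][]byte)}
}

func (m *memStore) Push(key string, data []byte) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.data[key] = append([]byte(nil), data...)
	return nil
}

func (m *memStore) Pull(key string) ([]byte, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	d, ok := m.data[key]
	if !ok {
		return nil, fmt.Errorf("no object %q", key)
	}
	return d, nil
}

func main() {
	var store TxStore = newMemStore()
	// Initiator pushes the unsigned tx; each signer pushes a signature.
	store.Push("session-42/unsigned.json", []byte(`{"msgs":["send"]}`))
	store.Push("session-42/sig-alice.json", []byte(`"sig-a"`))
	tx, err := store.Pull("session-42/unsigned.json")
	fmt.Println(string(tx), err == nil)
}
```

The key/prefix layout (`session-42/…`) is hypothetical; the point is only that gaiacli itself stays unchanged while the tool does push/pull against whatever backend the signer group agrees on.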
@ebuchman dixit:
A
Alternatively, just make the command able to automatically identify which protocol needs to be used to retrieve the file (I'd start supporting local file and https). Doable even w/o extra flags I think - will certainly give it a stab.
Yes, that was my design decision inspired by how UNIX programs usually work - if something is required, it should be passed as positional argument (flags are optional by design). Happy to revisit that though.
YES! Like that. We should expect a specific interface contract to be met, and throw away anything that does not comply. Doable.
IMHO this config should be app-specific, e.g. for what concerns |
@ebuchman @alessio I have a doubt about S3: if anybody can store their tx and signatures in a bucket, does it mean that everyone on the internet has read and write permission on that bucket? The advantage of the wormhole protocol is that the lighthouse service is a very lightweight service whose only function is to connect two peers. |
Don't think so, I thought you could have private read and write.
Might be worth looking into more. We'd certainly want a solution that doesn't just depend on AWS, it's just one option that seems straight forward/simple. Wormhole seems cool too, I just don't know enough about it. In any case, it should be easy for folks to experiment with tools that can be used in conjunction with the gaiacli signing tool to try out different options here. |
Unsigned Txs are just JSON files, and I tend to think that reading an unsigned Tx JSON from a remote endpoint should be as easy as possible. {Down,up}load of signatures to/from S3 private buckets could be done by piping outputs to the |
My question is: if the S3 bucket used to exchange the tx/sig is provided by Cosmos, then it has to allow public read/write, doesn't it? An alternative approach could be to use Mozilla Send as a hosted service configured to accept only small files. The upside is that the exchanges could likely be highly automated with no cognitive overload for the user. The downside is that it is still a fairly easy target for abuse. From my point of view, negotiating the transport layer on behalf of the user (on top of combining signatures and transactions) delivers a significantly better user experience, but maybe I misunderstood the scope of the issue. |
Why are we assuming this? This is not |
I think we assume that signer groups would BYO S3 bucket. |
Hello, looks like this one got stuck :) Summing up, there are 2 approaches proposed for this issue:
Approach 1 assumes no prior setup or knowledge on the user's side (the tool takes care of every aspect of transport), while approach 2 assumes that users have enough know-how to set up, secure, and give access to the service that will be used to transfer the tx files. From a purely user perspective I like the first approach, since personally I like when a tool lets me achieve my goal without obstacles in between (in this case, e.g. for AWS: setting up the bucket, sending or receiving the access keys, testing that it works, etc.). But I also agree that solution 2 is easier to implement (@alessio already has a PR for retrieving files from an HTTP URL), and everybody who has used a computer is a bit used to suffering to make it do what you want (lol). Since @alexanderbez asked for this issue to be resolved before going forward with coding, let's vote with
|
I agree, and have voted 2. Plus, wrt 1., I'd add to the cons that if we were to go for a specific protocol, we'd restrict users' freedom. Conversely, if we got 2. well implemented and simple, it would not necessarily be error-prone or require any special or particularly complicated setup from the users. |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Reopening this to record some thoughts around what is really a multisig coordinator service. Prior art includes: This seems like it could possibly be a hackathon-scale project:
IPFS is not a requirement it just means that no one is responsible for keeping a server online. Similarly pastebin could actually be used. What would actually be useful is if the app interfaced with keplr or lunie so that the transactions could be signed without having to rely on the command line. These capabilities should be available soon... |
So I built a prototype tool for this here: https://github.com/informalsystems/multisig. It uses an S3 bucket and supports multiple binaries and keys. Of course this is going to be much less relevant when authz starts going live everywhere (e.g. starting next week even), but I'm sure it will still come in handy. |
Just checked how the process of building a multisig tx looks, and to be honest, if we set aside the fact of having to share the unsigned tx with the rest of the signers, the process doesn't feel that painful. |
This will be worked on as part of the accounts module; it should be out in the next couple of weeks. |
We will be incorporating an on-chain multisig with the new accounts module. The current multisig will still exist, but ideally people move on chain. Closing this issue, as we are tracking this in the accounts project board. An issue will be opened when the work commences. |
At the moment, multisig is a bit of a usability nightmare.
The user flow is
Transport for the unsigned tx and individual sigs is external and requires users to bring their own transport.
Here is a proposed alternative.
Incorporate golang magic wormhole into gaia.