Possible problem for data redundancy application #8

Open
fedsten opened this issue Aug 21, 2019 · 4 comments

Comments

@fedsten

fedsten commented Aug 21, 2019

The Storm protocol is useful for two main use cases:

  1. I want to store some data but I do not have enough space on my devices, so I rent someone else's disk;
  2. I have very important data I am afraid to lose, so I back it up on someone else's disk for redundancy.

The second use case is probably the most interesting one, as it can be a solution for channel-state backups in the Lightning Network and other L2 protocols. In such a scenario, Alice loses her data due to some device or software failure and needs to recover it from the remote storage provider. To do that she needs two things: the key to decrypt the data, and the Merkle root to verify that Bob still stores the data. While HD structures make it relatively easy to store keys offline and redundantly, the Merkle root changes every time the files are updated (which happens often in LN use cases) and can probably be lost as easily as the data it is meant to protect. On top of that, some other metadata is probably needed as well, such as the txid of the funding transaction and the commitment transactions.
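
A minimal sketch of that asymmetry (assuming a generic SHA-256 binary Merkle tree; the chunking and tree layout are illustrative, not the Storm format): a static key can always be re-derived, but the root has to be stored, and it moves with every update.

```python
# Illustrative only: a Merkle root over (encrypted) file chunks.
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    """Compute a binary Merkle root over the hashes of the given chunks."""
    level = [sha256(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

state_v1 = [b"channel-state-1", b"channel-state-2"]
state_v2 = [b"channel-state-1", b"channel-state-2-updated"]

# The root changes with every update, so unlike an HD key it cannot be
# re-derived from a static seed: it must be stored, and can be lost,
# just like the data itself.
assert merkle_root(state_v1) != merkle_root(state_v2)
```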

In general it feels like this L2 protocol suffers from the same problem it tries to solve for other L2 protocols, making it impractical for such applications and probably usable only for use case 1 in the list above.

A very ugly solution/mitigation could be to attach the necessary metadata to the funding transaction itself in a dedicated output (80 bytes should be more than enough), but in principle I don't like it, as the blockchain is not meant to be a place to store your personal data.
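
For what it's worth, a rough sketch of what such a dedicated output could look like, assuming a standard OP_RETURN output and a purely hypothetical field layout (Merkle root, funding txid, misc metadata); this only illustrates that 80 bytes fit under the usual relay policy, it is not a proposed format.

```python
# Rough sketch, not a proposed format: recovery metadata packed into a single
# OP_RETURN output on the funding transaction.
OP_RETURN = 0x6a
OP_PUSHDATA1 = 0x4c

def op_return_script(payload: bytes) -> bytes:
    """Build an OP_RETURN scriptPubKey; relay policy usually caps the data at 80 bytes."""
    assert len(payload) <= 80, "exceeds the usual standardness limit"
    if len(payload) <= 75:
        push = bytes([len(payload)])               # direct push opcode
    else:
        push = bytes([OP_PUSHDATA1, len(payload)])
    return bytes([OP_RETURN]) + push + payload

# Hypothetical layout: 32-byte Merkle root + 32-byte funding txid + 16 bytes of
# other metadata = 80 bytes of payload.
payload = bytes(32) + bytes(32) + bytes(16)
script_pubkey = op_return_script(payload)
assert len(script_pubkey) == 83
```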

@ZmnSCPxj

> A very ugly solution/mitigation could be to attach the necessary metadata to the funding transaction itself in a dedicated output (80 bytes should be more than enough), but in principle I don't like it, as the blockchain is not meant to be a place to store your personal data.

But if you are updating often, this means you also have to do a lot of on-chain activity, which loses the advantage of working off-chain.

@dr-orlovsky
Member

@fedsten it might be that we need a different channel construction for these two cases... I will think on that. Also, I assume that an L2 without client-side stored data is impossible, so any L2 storage design will suffer from this problem, unless we simply decide to keep some data on-chain, which is a terrible solution. So basically all that Storm (or any other L2 storage) can do about this problem is reduce the amount of data you need to keep locally: the rest of the L2 client data (across multiple protocols) becomes accessible just by keeping the data for a single L2 storage protocol, like deriving many keys from a single seed phrase.
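
To illustrate the "many keys from a single seed phrase" analogy, a toy derivation sketch (HMAC-SHA512 over an index as a stand-in for real BIP32 derivation): everything static is re-derivable from one offline seed, while a root over changing data is not.

```python
# Toy derivation sketch, not real BIP32: static secrets never need remote backup
# because they can always be regenerated from the seed.
import hashlib
import hmac

def derive_secret(seed: bytes, index: int) -> bytes:
    """Deterministically derive a 32-byte secret for channel/protocol `index`."""
    return hmac.new(seed, index.to_bytes(4, "big"), hashlib.sha512).digest()[:32]

seed = bytes.fromhex("00" * 32)                    # the only thing Alice keeps offline
channel_keys = [derive_secret(seed, i) for i in range(3)]

# A Merkle root over *changing* data cannot be derived this way, which is
# exactly the remaining piece of state discussed in this issue.
```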

@fedsten
Author

fedsten commented Aug 22, 2019

@ZmnSCPxj you are right, that wouldn't work

@dr-orlovsky maybe a solution could be to require Bob to provide a Merkle root previously signed by Alice, which Alice can then use to verify that Bob still stores the data before cooperatively closing the channel. The incentives for Bob to provide the signed Merkle root would be: 1) getting the reward without waiting for the timelock to expire, and 2) avoiding the risk of losing the stake in case Alice recovers the commitment transactions from some other storage provider and unilaterally closes the channel. Alice must obviously use multiple storage providers, and store with each of them all the metadata regarding the channels with the other ones, so that if at least one cooperates, she can claim the stakes from all the ones that do not cooperate.
If we manage to make all of this work, Alice should be able to recover all the data just by taking care of her HD seed.
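
A rough sketch of that flow, with HMAC standing in for a real ECDSA/Schnorr signature and all key/function names purely illustrative, not the spec:

```python
# Sketch of the proposed cooperative-close check.
import hashlib
import hmac

def sign_root(alice_key: bytes, merkle_root: bytes) -> bytes:
    """Alice signs every new Merkle root and hands the signature to Bob."""
    return hmac.new(alice_key, merkle_root, hashlib.sha256).digest()

def verify_root(alice_key: bytes, merkle_root: bytes, signature: bytes) -> bool:
    """After losing her state, Alice re-derives the key from her HD seed and checks
    the (root, signature) pair Bob presents at cooperative close."""
    return hmac.compare_digest(sign_root(alice_key, merkle_root), signature)

alice_key = hashlib.sha256(b"derived from Alice's HD seed").digest()

# On each update: Alice signs the new root and sends it to Bob along with the data.
root = hashlib.sha256(b"latest storage state").digest()
sig = sign_root(alice_key, root)

# On recovery: if Bob presents a pair that verifies, Alice knows he still holds her
# signed state and can cooperatively close; otherwise she falls back to another
# provider and claims his stake via the timelocked path.
assert verify_root(alice_key, root, sig)
```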

@dr-orlovsky
Member

I think this is a great solution, will add it to the spec. Thank you!
