
Update bootstrappers #89

Merged · 4 commits into master · Jan 10, 2020
Conversation

@dirkmc (Contributor) commented Dec 17, 2019

As part of Fixing Bootstrapping we need to add a repo migration that

  • removes bootstrap nodes with small keys
  • adds new nodes with appropriately large keys

TODO:

  • write sharness tests
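The two migration steps above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: the peer addresses and all identifiers here are invented placeholders.

```go
package main

import "fmt"

// Hypothetical old default bootstrapper whose peer ID is backed by a
// too-small key (placeholder address, not a real default).
var oldDefaults = []string{
	"/ip4/104.236.176.52/tcp/4001/p2p/QmOldSmallKeyPeer",
}

// Hypothetical replacement with an appropriately large key.
var newDefaults = []string{
	"/dnsaddr/bootstrap.libp2p.io/p2p/QmNewLargeKeyPeer",
}

// migrateBootstrap drops the small-key defaults from the user's bootstrap
// list and appends the new defaults, preserving any custom entries.
func migrateBootstrap(bootstrap []string) []string {
	remove := make(map[string]bool, len(oldDefaults))
	for _, a := range oldDefaults {
		remove[a] = true
	}
	out := []string{}
	for _, a := range bootstrap {
		if !remove[a] {
			out = append(out, a)
		}
	}
	return append(out, newDefaults...)
}

func main() {
	user := append([]string{"/ip4/1.2.3.4/tcp/4001/p2p/QmCustomPeer"}, oldDefaults...)
	fmt.Println(migrateBootstrap(user))
}
```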

@ribasushi left a comment

See one comment, not sure if actionable

"/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
"/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",


@dirkmc is it intentional that you are looking only for the .82 host here, but above (line 96) there are 2 "old style" hosts?

Approving, as the rest looks good and the above is likely intentional

@dirkmc (Contributor, Author) replied:

Good question - yes, in the forward migration we move from oldConfig to expectedMigrateForward. In the backward migration we move from newConfig to expectedMigrateBackward; for that case I wanted a test that includes one peer that is "known" (in the list of peers with acceptably large keys) and one that is not in that list, to make sure both get through.

@dirkmc (Contributor, Author) replied:

I updated the var names to make it a little clearer what's going on, and added some more explicit tests.

@aschmahmann (Contributor) commented Dec 18, 2019

@dirkmc overall this seems reasonable; a couple of comments/thoughts:

  1. Repo migrations look really painful; it also looks like there's a bunch of boilerplate required to make a really simple change.
  2. Doing a repo migration to update bootstrappers seems a little wacky. I understand that if we need this now then we've just got to do it, but is there a better way to deal with this? If we have to do a migration anyway, maybe we should add a UseDefaults boolean and remove all of our bootstrappers from the config; this would let us push updates in code instead of forcing repo migrations.
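The UseDefaults idea in point 2 could look roughly like this. This is a hypothetical sketch: the field and function names are invented for illustration and are not go-ipfs's actual config schema.

```go
package main

import "fmt"

// BootstrapConfig sketches a config that stores only the user's own peers
// plus a flag, letting the daemon merge in its compiled-in defaults at
// startup instead of baking the full list into the config file.
type BootstrapConfig struct {
	UseDefaults bool     // merge the release's built-in bootstrappers at runtime
	Peers       []string // user-added bootstrap multiaddrs
}

// effectiveBootstrap computes the list the daemon would actually dial.
func effectiveBootstrap(c BootstrapConfig, compiledDefaults []string) []string {
	out := append([]string{}, c.Peers...)
	if c.UseDefaults {
		out = append(out, compiledDefaults...)
	}
	return out
}

func main() {
	cfg := BootstrapConfig{
		UseDefaults: true,
		Peers:       []string{"/ip4/10.0.0.1/tcp/4001/p2p/QmMyPeer"},
	}
	fmt.Println(effectiveBootstrap(cfg, []string{"/dnsaddr/bootstrap.libp2p.io/p2p/QmExamplePeer"}))
}
```

Under a scheme like this, updating the default bootstrappers would only require shipping a new binary, not a repo migration.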

@Stebalien (Member) left a comment

General comment: we should try to respect user choices as much as possible. When migrating forward, we should add our new bootstrappers only if the user hasn't explicitly removed our old bootstrappers; when migrating backward, we should re-add the old bootstrappers only if the user hasn't explicitly removed our new ones. Otherwise, the user is on their own.

Otherwise, we could cause problems for users on private/separate networks.
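That policy could be sketched as follows. This is an illustration of the rule described above, not the PR's actual implementation; the function names and placeholder addresses are invented.

```go
package main

import "fmt"

// containsAny reports whether list contains at least one of candidates.
func containsAny(list, candidates []string) bool {
	set := make(map[string]bool, len(list))
	for _, a := range list {
		set[a] = true
	}
	for _, c := range candidates {
		if set[c] {
			return true
		}
	}
	return false
}

// migrateForward swaps oldDefaults for newDefaults, but only if the user
// kept at least one old default. A list containing none of our defaults
// means the user opted out (e.g. a private network), so it is left alone.
// The backward migration is symmetric, with oldDefaults and newDefaults
// exchanging roles.
func migrateForward(bootstrap, oldDefaults, newDefaults []string) []string {
	if !containsAny(bootstrap, oldDefaults) {
		return bootstrap // user explicitly removed our defaults; respect that
	}
	remove := make(map[string]bool, len(oldDefaults))
	for _, a := range oldDefaults {
		remove[a] = true
	}
	out := []string{}
	for _, a := range bootstrap {
		if !remove[a] {
			out = append(out, a)
		}
	}
	return append(out, newDefaults...)
}

func main() {
	oldD := []string{"/p2p/QmOldDefault"}
	newD := []string{"/p2p/QmNewDefault"}
	fmt.Println(migrateForward([]string{"/p2p/QmPrivatePeer"}, oldD, newD)) // untouched
	fmt.Println(migrateForward([]string{"/p2p/QmOldDefault"}, oldD, newD)) // swapped
}
```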

(Six review threads on ipfs-7-to-8/migration/config_conv.go, all resolved.)
@dirkmc (Contributor, Author) commented Dec 18, 2019

@aschmahmann you're right that there's a lot of boilerplate. Much of this code is copied from https://github.com/ipfs/fs-repo-migrations/tree/master/ipfs-5-to-6 (and much of that is copied from earlier version changes).
Because the code is intended to be used at one point in time and remain static thereafter, I'm not sure this is a big issue.
With respect to doing a repo migration to update bootstrappers, I agree it's a little strange. It may be worth talking through a better way to apply config-only changes if this is something that happens often.

@aschmahmann (Contributor) replied:

@dirkmc as I mentioned in the issue I linked, we really need to up our config-file game so that we can deal with default values. Currently we just give people a config file on init, and the only way to upgrade it is a repo migration or having users manually change the files.

We have some experimental features that can be enabled optionally via config (e.g. pubsub). When we want to enable one of them by default (as we already do), we don't have any choice other than modifying the user's config file (e.g. via a repo migration).

@dirkmc (Contributor, Author) commented Dec 20, 2019

@Stebalien I think I've addressed all your suggestions; let me know if the logic looks right to you now.

@Stebalien marked this pull request as ready for review on December 20, 2019.
@Stebalien (Member) left a comment

Almost there. I have one somewhat paranoid nit (my network actually has an ipfs.local node, so I wouldn't be surprised if people use /dns4/ipfs.my-domain.com as a bootstrapper).

(One review thread on ipfs-7-to-8/migration/config_conv.go, resolved.)
@Stebalien (Member) commented:

@dirkmc how's your internet? To test this, I think we need to build ipfs/kubo#6814 with https://github.com/ipfs/distributions:

  1. ipfs get -o releases /ipns/dist.ipfs.io. This will download 10GiB of crap and add it to your local IPFS repo, FYI. However, you'll only have to download it once.
  2. ./dist.sh add-version go-ipfs $HASH_OF_THAT_PR
  3. make publish (let's call the final hash MY_DIST_PATH)

(Check out the README in github.com/ipfs/distributions; it should walk you through this process.)

This will publish a local release of IPFS with $HASH_OF_THAT_PR as the version. Note: given that we normally use, you know, sane version numbers, I'm not sure this is actually going to work. We may need a step 1.5 that moves the built release to some real version (v0.5.0-beta?). If so, you'll probably need to rename a bunch of files and edit the dist.json file.

Once you have everything built, you can write the sharness test. Model it after t0110-migration-6-to-7.sh:

  1. Set IPFS_DIST_PATH to MY_DIST_PATH
  2. Use $HASH_OF_THAT_PR as the new version and v0.4.22 as the old version.

If everything works, pin MY_DIST_PATH using pinbot. We don't need to publish this release, we just need it to stick around.

@Stebalien merged commit baf3e76 into master on Jan 10, 2020

@Stebalien (Member) commented:
Merging as there's no reason to leave this sitting in a PR. It won't be used until we bump the repo version and cut a release.

@dirkmc (Contributor, Author) commented Jan 10, 2020

Thanks 👍

I messed around with trying to get sharness tests working with a temp release, but was having trouble with the old, vendored dependencies (e.g. ipfs-update).

It may make sense to wait until we have an RC, so as to avoid having to cut a temp release and the versioning issues that raises when trying to write sharness tests.

@Stebalien (Member) commented Jan 10, 2020 via email

Labels: none · Projects: none · 4 participants