Add cvmfs plugin #27

Closed
wants to merge 8 commits into from

Conversation

@siscia commented Jan 13, 2020

Hi there :)

Sorry I was quite slow, but other stuff came up!

Anyhow, I rebased PR #4.

Cheers,

@siscia (Author) commented Jan 13, 2020

Good! It is not even building!

I am on it!

@siscia (Author) commented Jan 13, 2020

Ah, I see we changed the mount interface! Many, many thanks!

If we support multiple filesystem implementations, wouldn't it be wise to move the definition of the labels from https://github.com/ktock/remote-snapshotter/blob/master/filesystems/stargz/handler/handler.go to https://github.com/ktock/remote-snapshotter/blob/master/filesystems/plugin.go?

Or even directly into containerd.

Am I missing something?
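As an illustration of the suggestion above, here is a minimal Go sketch of what "shared label definitions" could look like. All key names and the helper function are hypothetical, not the actual constants in the repo: the idea is simply that keys declared once in a common package can be consumed by every filesystem implementation (stargz, cvmfs, ...) without duplication.

```go
package main

import "fmt"

// Hypothetical shared label keys. In the suggestion above, these would
// live in a common location (e.g. filesystems/plugin.go) rather than
// inside filesystems/stargz/handler/handler.go.
const (
	// TargetRefLabel would carry the image reference for the remote layer.
	TargetRefLabel = "containerd.io/snapshot/remote/target.reference"
	// TargetDigestLabel would carry the layer digest.
	TargetDigestLabel = "containerd.io/snapshot/remote/target.digest"
)

// prepareLabels shows how any filesystem plugin could build its label
// map from the shared keys instead of redefining them.
func prepareLabels(ref, digest string) map[string]string {
	return map[string]string{
		TargetRefLabel:    ref,
		TargetDigestLabel: digest,
	}
}

func main() {
	labels := prepareLabels("docker.io/library/ubuntu:latest", "sha256:abc123")
	fmt.Println(labels[TargetRefLabel]) // prints the image reference
}
```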

@ktock (Member) commented Jan 14, 2020

Thanks for rebasing!
Each filesystem may require a different set of labels, so I think we should separate the handlers (+labels) per filesystem rather than having one global handler. I'll open another PR to make that easier.
And of course, I think we should upstream these handlers to the containerd clients (ctr, the CRI plugin, etc.).

@siscia (Author) commented Jan 14, 2020

Ah, in my understanding we (the filesystem implementors) don't really control the labels that are passed down to us from containerd, am I right?

The labels used for stargz can be used for cvmfs and any third filesystem as well.

Do you think it is better to replicate them?
(It is definitely a non-issue, but it might hide a misunderstanding on my side :) )

@ktock (Member) commented Jan 14, 2020

This topic (the remote snapshotter architecture and handlers) is under discussion with the containerd community, so I thought it safer to keep everything filesystem-specific separated.
But yes, it's undefined, so let's have a shared handler for now if it meets your needs! 👍

@siscia (Author) commented Jan 14, 2020

Let me know your preference.
I can either revert the changes to be filesystem-specific and we can consider merging now,
or we can wait until things stabilize and then I'll rework the code.

@ktock (Member) commented Jan 16, 2020

Let's wait for now until the architecture is agreed on by the containerd community 🙏. Sorry for the unstable situation.

@ktock (Member) commented Jan 22, 2020

@siscia @AkihiroSuda

Hi, sorry for the long wait.
Through discussion with the containerd community (on GitHub and Slack), we received the following feedback and are planning to fix this repo accordingly:

  • Make the remote snapshotter configurable (not pluggable), and
  • tighten the scope of this remote snapshotter project repo to one filesystem (i.e. stargz/CRFS).

The current remote snapshotter implementation has a pluggable architecture for filesystems. We will restructure it to be configurable by making the general parts (snapshotter, cache, ...) importable, which makes it easy for filesystem developers to create their own remote snapshotters. This means that rather than having one big pluggable snapshotter, this repo will provide a remote snapshotter for one specific filesystem (i.e. stargz/CRFS), but you can still build a CernVM-FS snapshotter using our generic parts.
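To make the "configurable, not pluggable" idea concrete, here is a minimal Go sketch of the shape being described: the generic machinery is an importable library parameterized over a filesystem interface, and each project (stargz, CernVM-FS, ...) compiles its own binary around one implementation. All type and function names here are illustrative assumptions, not the repo's actual API.

```go
package main

import (
	"errors"
	"fmt"
)

// FileSystem is a stand-in for the narrow contract each remote
// filesystem implementation would satisfy.
type FileSystem interface {
	Mount(target string, labels map[string]string) error
}

// Snapshotter stands in for the reusable, importable generic part
// (snapshotter logic, cache, ...).
type Snapshotter struct {
	fs FileSystem
}

// NewSnapshotter wires exactly one concrete filesystem into the
// generic part at build time — configuration, not runtime plugging.
func NewSnapshotter(fs FileSystem) (*Snapshotter, error) {
	if fs == nil {
		return nil, errors.New("a filesystem implementation is required")
	}
	return &Snapshotter{fs: fs}, nil
}

// Prepare delegates the filesystem-specific work to the configured
// implementation.
func (s *Snapshotter) Prepare(target string, labels map[string]string) error {
	return s.fs.Mount(target, labels)
}

// cvmfsFS is a placeholder for a CernVM-FS implementation that a
// separate repo could build on top of the generic parts.
type cvmfsFS struct{}

func (cvmfsFS) Mount(target string, labels map[string]string) error {
	fmt.Printf("cvmfs: mounted at %s\n", target)
	return nil
}

func main() {
	sn, err := NewSnapshotter(cvmfsFS{})
	if err != nil {
		panic(err)
	}
	_ = sn.Prepare("/var/lib/snapshots/1/fs", nil)
}
```

Under this shape, a CernVM-FS snapshotter is its own small program that imports the generic parts, rather than a plugin loaded into one big snapshotter.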

Thanks for your collaboration!

@AkihiroSuda (Member) commented:
Seems to have landed as https://github.com/cvmfs/containerd-remote-snapshotter 🎉

@siscia (Author) commented Jul 16, 2020

Definitely! Sorry for not closing it myself!
