
how-to request: syncoid + ssh-key command lockdown possible? #82

Closed

spikedrba opened this issue May 3, 2017 · 5 comments


@spikedrba

Dear Jim, all,

@jimsalterjrs thanks a lot for this wonderful tool, it's a great contribution to the ZFS ecosystem. I'm trying to lock down syncoid the way I'm used to with other ssh-based backup tools, but I'm not having much luck. Do you happen to have something working, or any suggestions? Basically I'm trying to make sure that the backup user on the remote host (using passwordless keys) can't get a shell and can't mess with the filesystem. Ideally it'd be an append-only kind of thing, though I noticed there are some calls to zfs rollback too.

Do you think syncoid can work in a setup like that, and do you have any suggestions on how to make that happen? I can do it with a plain send/receive by putting command="sudo zfs receive pool" in authorized_keys and doing a zfs send -R pool, but that's missing all the goodies you added to syncoid.
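For reference, that forced-command setup looks roughly like this (a sketch; the key type, paths, and dataset names below are illustrative, not a vetted config):

# on the backup host, in the backup user's ~/.ssh/authorized_keys:
# pin the key to a single command and disable everything else
command="sudo zfs receive -F backuppool/prod",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAA... prod-push

# on the production host; the forced command runs no matter what
# the client asks for, so the key can only ever feed the receive
zfs send -R pool@snap | ssh backup@backuphost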

thanks,

Spike

@jimsalterjrs
Owner

I don't think it's going to be possible the way you're looking to do it. What you CAN do is use syncoid with non-root users and sudo. The problem is that syncoid actually needs to run quite a few different commands on both ends to do its job.

It would be possible - and I have considered doing this - to use syncoid itself to run the commands on the remote server, as a wrapper, which would then allow you to lock the user down the way I think you're suggesting. That's a hell of a lot of work, though, and it also means you'd need syncoid installed on both ends of the connection, not just the local side.
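To give a sense of what "quite a few different commands" means in practice, the remote end of a pull ends up running things along these lines (an illustrative list based on the sudoers rules discussed later in this thread, not an exhaustive one; snapshot names are placeholders):

# read dataset and snapshot properties
sudo zfs get -H all pool/data
# create a sync snapshot
sudo zfs snapshot pool/data@syncoid_...
# stream an incremental send
sudo zfs send -i pool/data@old pool/data@new
# clean up old sync snapshots
sudo zfs destroy pool/data@syncoid_...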

@spikedrba
Author

got it, thank you for the quick reply. My main concern is the backup ending up tampered with, which would be quite unfortunate, and I don't see how to protect against that once the user has any rights beyond append. I'm not quite sure how, but what about allow/unallow? Do you see a way where I could force the command to run as a specific user (even though I'd still need sudo to run zfs on Linux) and then limit what that user is allowed to do?
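For the record, ZFS delegation along those lines would look something like this (a sketch; as spikedrba notes, sudo may still be required on Linux regardless, so this is not a complete lockdown on its own):

# delegate just the send-side permissions to an unprivileged backup user
zfs allow backup send,snapshot,hold pool/data
# review what's currently delegated
zfs allow pool/data
# revoke again later
zfs unallow backup send,snapshot,hold pool/data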

The other alternative I thought of was to send to a file on the remote host and then have something there receive and delete the file. That way the remote user never really touches the backup itself.
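That staging approach might be sketched like this (illustrative paths; the cleanup job is hypothetical):

# pushed from production: the remote user can only write a file
zfs send -R pool@snap | ssh backup@backuphost 'cat > /staging/pool.zfs'

# run later from cron on the backup host, as root, out of the
# pushing user's reach:
zfs receive -F backuppool/prod < /staging/pool.zfs && rm /staging/pool.zfs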

thanks,

@jimsalterjrs
Owner

From a security standpoint, the best way to handle this is to do pull backups, not push. Your backup server pulls backups from production on its own schedule and rotates them on its own schedule, which leaves production unable to screw with the backup. Production allows ssh in from backup; backup does NOT allow ssh in from production.

This is a better approach for a number of reasons, not the least of which is that it's easier to keep your backup server secure than your production server: fewer services need to be running or exposed on it, and fewer people need to interact with it.
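In practice that pull schedule is just a cron job on the backup server, something like the following (a sketch; paths, hosts, and dataset names are made up):

# root crontab on the backup server: pull from production nightly
0 2 * * * /usr/sbin/syncoid --sshkey=/root/.ssh/pull_key backup@prod.example.com:pool/data backuppool/data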

@STRML

STRML commented Jul 9, 2017

I've had luck with the following in sudoers for a backup user on the remote (pull setup):

# Syncoid commands
backup ALL=NOPASSWD: /sbin/zfs get *
backup ALL=NOPASSWD: /sbin/zfs snapshot *
backup ALL=NOPASSWD: /sbin/zfs send *
# Only allow destroying syncoid's own snapshots (the @ pattern
# restricts the match to snapshot names, not whole datasets)
backup ALL=NOPASSWD: /sbin/zfs destroy *@syncoid_backup*
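A quick way to sanity-check rules like these on the remote host (standard sudo options, offered as a suggestion rather than a vetted test):

# list everything the backup user may run without a password
sudo -l -U backup
# check whether a specific command line would match a rule
sudo -l -U backup /sbin/zfs get all pool/data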

@jimsalterjrs
Owner

Yep, that's what I was really getting at with allowing syncoid to run as non-root and use sudo in the first place. The only real issue remaining is that a remote (pull backup) server with access to production as defined in @STRML's sudoers file can still destroy snapshots, which is obviously quite destructive. The good news is that, with the rule defined as narrowly as @STRML did, you at least can't use zfs destroy to destroy entire datasets (assuming there are no bugs in that sudoers setup; please do not take this as me having personally vetted it).

If you want to get even more locked down, use --no-sync-snap from your remote pull backup server - then you don't need either the zfs snapshot * line or the zfs destroy *@syncoid_backup* line, since the pull backup server neither needs to create nor destroy snapshots.
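With --no-sync-snap the remote rules shrink accordingly (a sketch building on @STRML's file above; hosts and datasets are illustrative):

# Syncoid commands, pull with --no-sync-snap: read and send only
backup ALL=NOPASSWD: /sbin/zfs get *
backup ALL=NOPASSWD: /sbin/zfs send *

# and on the backup server:
syncoid --no-sync-snap backup@prod.example.com:pool/data backuppool/data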

Note that it should also be possible to limit which datasets the remote syncoid has access to by drilling down your sudoers configs even more carefully than @STRML has done here - if, for example, you wanted the remote backup server to have access to pool/backmeup but not to pool/staythehelloutomgsuperprivate.
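Scoping the rules to one dataset subtree might look like this (an untested sketch; note that a sudoers '*' can also match whitespace, so patterns like these deserve careful auditing before you trust them):

backup ALL=NOPASSWD: /sbin/zfs get * pool/backmeup*
backup ALL=NOPASSWD: /sbin/zfs snapshot pool/backmeup*
# incremental sends pass flags and an origin snapshot first, so this
# line may need additional patterns to cover them
backup ALL=NOPASSWD: /sbin/zfs send pool/backmeup*
backup ALL=NOPASSWD: /sbin/zfs destroy pool/backmeup*@syncoid_backup*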

If anybody wants easier / more comprehensive access limits, that would require a syncoid wrapper with its own config files for allowing / disallowing access to certain datasets on a per-user or per-credentials basis; that's possible but would very firmly fall into the "somebody's going to have to pay me to develop that" category. =)
