
Cleanup OCI runtime before storage #6229

Merged: 2 commits, May 14, 2020

Conversation

mheon (Member) commented May 14, 2020

Some runtimes (e.g. Kata containers) seem to object to having us unmount storage before the container is removed from the runtime. This is an easy fix (change the order of operations in cleanup) and seems to make more sense than the way we were doing things.

rhatdan (Member) commented May 14, 2020

LGTM

snir911 commented May 14, 2020

doesn't fix the issue :/

mheon (Member, Author) commented May 14, 2020

Still worth merging, IMO; this makes more sense than the way we did it before.

mheon (Member, Author) commented May 14, 2020

This does sound like a Kata bug at this point, from what I'm hearing. Podman, on stopping a container, calls the OCI runtime's kill (which should guarantee the container and any associated VM stop), then deletes the container from the runtime, and then unmounts storage. This sounds like a correct workflow.

Cleaning up the OCI runtime is not allowed in the Removing state.
To ensure it actually happens, when calling cleanup() as part of
removing a container, do so before we set the Removing state, so
the removal can succeed.

Signed-off-by: Matthew Heon <[email protected]>
openshift-ci-robot (Collaborator)

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: mheon


openshift-ci-robot added the approved label May 14, 2020
TomSweeneyRedHat (Member)

LGTM
Bummer it doesn't fix the bug, but it's a good change regardless.

mheon (Member, Author) commented May 14, 2020

I pushed another commit that might actually resolve it, but I haven't checked.

rhatdan (Member) commented May 14, 2020

/lgtm

openshift-ci-robot added the lgtm label May 14, 2020
openshift-merge-robot merged commit 0d96251 into containers:master May 14, 2020
snir911 commented May 14, 2020

I can confirm this is working. I had a similar version, but I wasn't sure whether doing the cleanup before setting the Removing state would make the Removing state redundant (or even regress the bug #3906 that was fixed with 25cc43c).
