podman-remote-pull - service pulls even when client is killed #7558

Closed
vrothberg opened this issue Sep 8, 2020 · 24 comments · Fixed by #10599
Assignees: Luap99
Labels: kind/feature, locked - please file new issue/PR

Comments

@vrothberg
Member

/kind bug

Description

When killing a podman-remote pull $IMAGE, the service continues pulling.

Steps to reproduce the issue:

  1. podman system service -t0

  2. podman-remote pull alpine + CTRL+C

Describe the results you received:

The service continues pulling the image even after the client is killed.

Describe the results you expected:

The service should cancel the pull/copy operation.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Podman v2.0.x and later.

openshift-ci-robot added the kind/bug label Sep 8, 2020
@mheon
Member

mheon commented Sep 8, 2020

For reference, I think Docker works like this as well, so it's not the worst thing if we behave the same way...

@mheon
Member

mheon commented Sep 8, 2020

Still, we need a big rework of the Pull endpoint anyway (it needs to stream progress back to the client). We can probably accommodate this as part of that.
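
For illustration, a rough Go sketch of what a streaming pull endpoint could look like (the route, handler shape, and message fields are assumptions for this sketch, not Podman's actual compat API): the handler flushes JSON progress messages as they are produced instead of answering once at the end.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// pullProgress is a made-up message shape for this sketch.
type pullProgress struct {
	Status string `json:"status"`
	Detail string `json:"detail,omitempty"`
}

func pullHandler(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	enc := json.NewEncoder(w)

	// progressCh would be fed by the goroutine doing the real pull;
	// here it is stubbed with a couple of messages.
	progressCh := make(chan pullProgress, 2)
	progressCh <- pullProgress{Status: "pulling", Detail: "layer 1 of 2"}
	progressCh <- pullProgress{Status: "pulling", Detail: "layer 2 of 2"}
	close(progressCh)

	for msg := range progressCh {
		if err := enc.Encode(msg); err != nil {
			return // client went away, stop streaming
		}
		flusher.Flush() // push each progress line to the client immediately
	}
}

func main() {
	http.HandleFunc("/images/pull", pullHandler)
	_ = http.ListenAndServe(":8080", nil)
}
```

A write error on the encoder also doubles as a disconnect signal, which ties back into the cancellation question this issue is about.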

@rhatdan
Member

rhatdan commented Sep 8, 2020

Is this really a bug? Or is this expected behaviour? If I pulled a huge image and the network went down, I would be happy if the pull still completed.

@vrothberg
Member Author

Docker does not continue pulling.

> If I pulled a huge image and the network went down, I would be happy if the pull still completed.

The speed of our link at home is quite poor (a couple hundred meters from fibre, unfortunately). If I accidentally pull a huge image and CTRL+C the client, I want it to stop pulling. If it continued, I would have to kill the service.

vrothberg added the kind/feature label and removed the kind/bug label Sep 15, 2020
@vrothberg
Member Author

Labelling it as a feature for now. Remote pull is working and other items have a higher priority.

To make it work, we first have to allow copy.Image(...) in c/image to be cancellable via the provided context.Context.
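
As a minimal Go sketch of that contract (copyImage below is a stand-in, not c/image's real copy.Image signature): the caller hands in a cancellable context, and the copy loop has to check ctx.Done() between units of work for a client-side CTRL+C to have any effect on the service.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// copyImage stands in for c/image's copy.Image; the real function takes more
// arguments, but the cancellation contract is the same.
func copyImage(ctx context.Context) error {
	for i := 0; i < 10; i++ { // pretend to copy ten layers
		select {
		case <-ctx.Done():
			return ctx.Err() // stop as soon as the caller cancels
		case <-time.After(200 * time.Millisecond):
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(500 * time.Millisecond) // simulate the client hitting CTRL+C
		cancel()
	}()
	if err := copyImage(ctx); errors.Is(err, context.Canceled) {
		fmt.Println("pull cancelled")
	}
}
```

If the inner loop never checks the context, cancelling it on the caller's side changes nothing, which is exactly the behaviour reported here.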

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Oct 21, 2020

@mtrmac PTAL

@mtrmac
Collaborator

mtrmac commented Oct 22, 2020

> To make it work, we first have to allow copy.Image(...) in c/image to be cancellable via the provided context.Context.

Isn’t it cancellable already? At the very least

  • the Docker transport’s makeRequestToResolvedURL is cancellable
  • copySemaphore.Acquire is cancellable

and that should propagate up.

That might well not be good enough (maybe if we manage to kick off copyLayerHelper for all layers, I don’t immediately know how much work each of them does before noticing a pending cancellation, and there are quite a few “// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().” comments left over from when contexts were added), but I did think we got the basics done at least.
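
A small self-contained example of the semaphore point, using the real golang.org/x/sync/semaphore API (the layer-copy body is a stand-in): once the context is cancelled or times out, Acquire fails immediately, so layer copies that have not started yet never start.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/semaphore"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 300*time.Millisecond)
	defer cancel()

	sem := semaphore.NewWeighted(2) // like copySemaphore limiting parallel layer copies
	for i := 0; i < 5; i++ {
		// Acquire returns ctx.Err() once the context is cancelled, so copies
		// that have not started yet never start.
		if err := sem.Acquire(ctx, 1); err != nil {
			fmt.Println("layer", i, "not started:", err)
			continue
		}
		go func(i int) {
			defer sem.Release(1)
			// An in-flight HTTP request made with http.NewRequestWithContext
			// is likewise aborted when ctx is cancelled.
			time.Sleep(200 * time.Millisecond)
			fmt.Println("layer", i, "copied")
		}(i)
	}
	time.Sleep(time.Second)
}
```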

@mtrmac
Collaborator

mtrmac commented Oct 27, 2020

OTOH https://github.com/containers/image/blob/master/internal/uploadreader/upload_reader.go — that was ugly. It’s quite possible that the HTTP stack is similarly continuing to receive data (for a limited time, I hope) regardless of cancellation. (I haven’t checked in detail at this point.)
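
As a toy illustration of the uploadreader idea (simplified, not the actual implementation linked above): wrap the body in a reader that refuses to hand out more data once the context is cancelled, so the HTTP stack cannot keep pumping the stream after the operation was aborted.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"strings"
)

// cancellableReader is a simplified, hypothetical wrapper in the spirit of
// c/image's internal/uploadreader.
type cancellableReader struct {
	ctx context.Context
	r   io.Reader
}

func (c *cancellableReader) Read(p []byte) (int, error) {
	select {
	case <-c.ctx.Done():
		return 0, c.ctx.Err() // refuse further data after cancellation
	default:
		return c.r.Read(p)
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	r := &cancellableReader{ctx: ctx, r: strings.NewReader("blob data")}

	buf := make([]byte, 4)
	n, _ := r.Read(buf)
	fmt.Printf("read %q\n", buf[:n])

	cancel() // the copy was aborted
	if _, err := r.Read(buf); err != nil {
		fmt.Println("read refused:", err)
	}
}
```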

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Nov 27, 2020

@vrothberg I take it that this has not been fixed and is still an issue?

@vrothberg
Member Author

Yes. I don't think anybody found time to tackle it.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@github-actions

github-actions bot commented Mar 1, 2021

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Mar 1, 2021

@vrothberg Should we be raising the priority on this one now?

@vrothberg
Member Author

> @vrothberg Should we be raising the priority on this one now?

It depends on free cycles. My queue is long, but I can certainly give guidance on fixing it.

@github-actions

github-actions bot commented Apr 2, 2021

A friendly reminder that this issue had no activity for 30 days.

@github-actions

github-actions bot commented May 6, 2021

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented May 6, 2021

Sadly another month has passed.

@github-actions

github-actions bot commented Jun 8, 2021

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Jun 8, 2021

@vrothberg Any chance you can look at this?
@Luap99 @cdoern Interested?

@Luap99
Member

Luap99 commented Jun 8, 2021

I'll take a look.

@Luap99 Luap99 self-assigned this Jun 8, 2021
@vrothberg
Member Author

Thanks, @Luap99. I haven't tested it in a long while. It may already have been fixed in the meantime.

Luap99 added a commit to Luap99/libpod that referenced this issue Jun 8, 2021
If a client closes the http connection during image pull, the
service should cancel the pull operation.

[NO TESTS NEEDED] I have no idea how we could test this reliably.

Fixes: containers#7558

Signed-off-by: Paul Holzinger <[email protected]>
mheon pushed a commit to mheon/libpod that referenced this issue Jun 11, 2021 (same commit message as above)
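
The gist of the fix, as a minimal sketch (handler and function names here are made up, not the code from the referenced commits): the API handler drives the pull with r.Context(), which Go's net/http cancels when the client closes the connection, so killing podman-remote aborts the server-side copy.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// pullImage stands in for the libpod image pull; it must honour ctx.
func pullImage(ctx context.Context, name string) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(5 * time.Second): // pretend the pull takes a while
		return nil
	}
}

func handlePull(w http.ResponseWriter, r *http.Request) {
	// r.Context() is cancelled when the client disconnects (CTRL+C on
	// podman-remote), which propagates into pullImage and stops the copy.
	if err := pullImage(r.Context(), r.URL.Query().Get("reference")); err != nil {
		fmt.Println("pull aborted:", err)
		return
	}
	fmt.Fprintln(w, "pulled")
}

func main() {
	http.HandleFunc("/images/pull", handlePull)
	_ = http.ListenAndServe(":8080", nil)
}
```

With -t0 the service itself keeps running, but the individual pull stops as soon as the request context is cancelled.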
github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023