Do not fetch recursive pins from pinner unnecessarily #7883

Merged · 2 commits · Mar 29, 2021
core/coreapi/pin.go — 32 changes: 16 additions & 16 deletions
@@ -219,8 +219,11 @@ func (p *pinInfo) Err() error {
 }
 
 // pinLsAll is an internal function for returning a list of pins
+//
+// The caller must keep reading results until the channel is closed to prevent
+// leaking the goroutine that is fetching pins.
 func (api *PinAPI) pinLsAll(ctx context.Context, typeStr string) <-chan coreiface.Pin {
-	out := make(chan coreiface.Pin)
+	out := make(chan coreiface.Pin, 1)
Member:

why do we need this?

Member:

> Additionally, the output channel is now buffered. This allows the goroutine to exit in the case the pinner returns an error and there is no reader for the output channel. This might be possible if a canceled context causes the caller to abandon waiting to read the output of Ls().

Ah, I see. Unfortunately, that doesn't really fix the issue, as we're not guaranteed to see that the context has been canceled immediately.

See ipfs/interface-go-ipfs-core#62
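For readers following along: the usual way to guarantee that a producing goroutine can always exit, whether or not the consumer is still reading, is to select on the context while sending. The sketch below only illustrates that pattern with made-up names (`result`, `produce`); it is not necessarily the exact change discussed in ipfs/interface-go-ipfs-core#62.

```go
package example

import "context"

// result stands in for coreiface.Pin in this sketch.
type result struct{ err error }

// produce emits results but also watches ctx, so it never blocks forever on a
// send: if the caller stops reading and cancels the context, the goroutine
// observes ctx.Done() and returns instead of leaking.
func produce(ctx context.Context, items []result) <-chan result {
	out := make(chan result)
	go func() {
		defer close(out)
		for _, r := range items {
			select {
			case out <- r:
			case <-ctx.Done():
				return // caller gave up; exit rather than block on the send
			}
		}
	}()
	return out
}
```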

Contributor Author (@gammazero, Mar 22, 2021):

Yes, you are correct that if the caller abandons waiting, we may still try to deliver other results before delivering the error... in which case the goroutine is back to blocking on writing to the channel. The idea is that this may help if the caller gives up (the context times out or is canceled) because results are taking too long.

Since this does not actually prevent leaking the goroutine, I can remove the buffering and instead add a comment stating that the caller must keep reading results until the channel is closed to prevent leaking the goroutine. Or, provide an implementation similar to ipfs/interface-go-ipfs-core#62.
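To make the trade-off being discussed concrete, here is a minimal, self-contained sketch (the function name `lsWithError` is invented for illustration): with a buffer of one, a single pending send such as the final error completes even when nobody is reading, so the goroutine can return; any additional blocked send would still leak it.

```go
package example

import "errors"

// lsWithError mimics the error path in pinLsAll: the pinner fails immediately,
// the goroutine sends one error value, and exits. Because the channel has a
// buffer of 1, that send succeeds even if the caller never reads from out.
func lsWithError() <-chan error {
	out := make(chan error, 1)
	go func() {
		defer close(out)
		out <- errors.New("pinner failed") // parked in the buffer; no reader needed
	}()
	return out
}
```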

Member:

I'm fine keeping the buffering just to be nice. I'd also add a comment.

Let's punt on the interface changes for now.

Contributor Author:

Done


 	keys := cid.NewSet()

@@ -249,37 +252,34 @@ func (api *PinAPI) pinLsAll(ctx context.Context, typeStr string) <-chan coreiface.Pin {
 	go func() {
 		defer close(out)
 
+		var dkeys, rkeys []cid.Cid
+		var err error
 		if typeStr == "recursive" || typeStr == "all" {
-			rkeys, err := api.pinning.RecursiveKeys(ctx)
+			rkeys, err = api.pinning.RecursiveKeys(ctx)
 			if err != nil {
 				out <- &pinInfo{err: err}
 				return
 			}
-			if err := AddToResultKeys(rkeys, "recursive"); err != nil {
+			if err = AddToResultKeys(rkeys, "recursive"); err != nil {
 				out <- &pinInfo{err: err}
 				return
 			}
 		}
 		if typeStr == "direct" || typeStr == "all" {
-			dkeys, err := api.pinning.DirectKeys(ctx)
+			dkeys, err = api.pinning.DirectKeys(ctx)
 			if err != nil {
 				out <- &pinInfo{err: err}
 				return
 			}
-			if err := AddToResultKeys(dkeys, "direct"); err != nil {
+			if err = AddToResultKeys(dkeys, "direct"); err != nil {
 				out <- &pinInfo{err: err}
 				return
 			}
 		}
 		if typeStr == "all" {
 			set := cid.NewSet()
-			rkeys, err := api.pinning.RecursiveKeys(ctx)
-			if err != nil {
-				out <- &pinInfo{err: err}
-				return
-			}
 			for _, k := range rkeys {
-				err := merkledag.Walk(
+				err = merkledag.Walk(
 					ctx, merkledag.GetLinksWithDAG(api.dag), k,
 					set.Visit,
 					merkledag.SkipRoot(), merkledag.Concurrent(),
@@ -289,7 +289,7 @@ func (api *PinAPI) pinLsAll(ctx context.Context, typeStr string) <-chan coreiface.Pin {
 					return
 				}
 			}
-			if err := AddToResultKeys(set.Keys(), "indirect"); err != nil {
+			if err = AddToResultKeys(set.Keys(), "indirect"); err != nil {
 				out <- &pinInfo{err: err}
 				return
 			}
@@ -298,14 +298,14 @@ func (api *PinAPI) pinLsAll(ctx context.Context, typeStr string) <-chan coreiface.Pin {
 			// We need to first visit the direct pins that have priority
 			// without emitting them
 
-			dkeys, err := api.pinning.DirectKeys(ctx)
+			dkeys, err = api.pinning.DirectKeys(ctx)
 			if err != nil {
 				out <- &pinInfo{err: err}
 				return
 			}
 			VisitKeys(dkeys)
 
-			rkeys, err := api.pinning.RecursiveKeys(ctx)
+			rkeys, err = api.pinning.RecursiveKeys(ctx)
 			if err != nil {
 				out <- &pinInfo{err: err}
 				return
@@ -314,7 +314,7 @@ func (api *PinAPI) pinLsAll(ctx context.Context, typeStr string) <-chan coreiface.Pin {
 
 			set := cid.NewSet()
 			for _, k := range rkeys {
-				err := merkledag.Walk(
+				err = merkledag.Walk(
 					ctx, merkledag.GetLinksWithDAG(api.dag), k,
 					set.Visit,
 					merkledag.SkipRoot(), merkledag.Concurrent(),
@@ -324,7 +324,7 @@ func (api *PinAPI) pinLsAll(ctx context.Context, typeStr string) <-chan coreiface.Pin {
 					return
 				}
 			}
-			if err := AddToResultKeys(set.Keys(), "indirect"); err != nil {
+			if err = AddToResultKeys(set.Keys(), "indirect"); err != nil {
 				out <- &pinInfo{err: err}
 				return
 			}
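Given the new doc comment on pinLsAll, callers are expected to keep receiving until the channel is closed. A rough usage sketch of that contract is below; `listRecursive` is a hypothetical helper, not part of this PR, and it assumes it would live in the coreapi package alongside pinLsAll, with errors surfaced through the Pin's Err() method as pinInfo does.

```go
// listRecursive drains everything pinLsAll produces. Returning early on a
// non-error result can leave the producing goroutine blocked on its next send;
// returning on an error is fine, because the producer exits right after
// sending it.
func listRecursive(ctx context.Context, api *PinAPI) ([]coreiface.Pin, error) {
	var pins []coreiface.Pin
	for p := range api.pinLsAll(ctx, "recursive") {
		if err := p.Err(); err != nil {
			return nil, err
		}
		pins = append(pins, p)
	}
	return pins, nil
}
```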