
Restore widgets in batches to not exceed the zmq high water mark message limit in the kernel #2765

Draft · wants to merge 10 commits into main

Conversation

@jasongrout (Member) commented Jan 30, 2020

Current ZMQ by default limits the kernel’s iopub message send queue to at most 1000 messages (and the real limit can be much lower, see ZMQ_SNDHWM at http://api.zeromq.org/4-3:zmq-setsockopt). We now request comm state in batches to avoid this limit.

See voila-dashboards/voila#534 for more details, including where this was causing real problems.

CC @maartenbreddels

  • Do the same batching for the classic notebook manager

@jasongrout (Member, Author)

I put the utility mapBatch function in the base manager package so it could easily be used by lots of managers.
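The utility's TypeScript source is not quoted in this conversation. As a language-neutral illustration of the same idea, here is a minimal Python asyncio sketch of a chunked map with a concurrency cap; the names `chunk_map`, `chunk_size`, and `concurrency` are assumptions for this sketch, not the PR's actual API:

```python
import asyncio

async def chunk_map(items, fn, chunk_size=10, concurrency=10):
    """Apply async fn to every item, processing chunk_size items per
    chunk and running at most `concurrency` chunks at any one time."""
    results = [None] * len(items)
    index_chunks = [range(i, min(i + chunk_size, len(items)))
                    for i in range(0, len(items), chunk_size)]
    sem = asyncio.Semaphore(concurrency)

    async def run_chunk(indices):
        # When a chunk finishes it releases the semaphore, letting the
        # next waiting chunk start, the same effect the PR describes.
        async with sem:
            for i in indices:
                results[i] = await fn(items[i])

    await asyncio.gather(*(run_chunk(c) for c in index_chunks))
    return results
```

A semaphore is used here instead of the PR's "each chunk starts the next" chaining, but the observable behavior (at most `concurrency` chunks in flight) is the same.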

@jasongrout jasongrout added the Work in Progress PRs that are not ready to be reviewed/merged. label Jan 30, 2020
@maartenbreddels (Member)

I have an alternative implementation in mind, I'll open a PR to voila so we can compare/discuss.

@maartenbreddels (Member)

I can confirm this is working (classic notebook). I did notice that it is much faster via the notebook than via voila (even using your code).

@maartenbreddels (Member) left a comment


Wow, really nice work. The only thing I worry about is that we are going to hit the io rate limit of 1000 msgs/second. I added some suggestions for how to handle that (not tested), let me know what you think.


// Start the first concurrent chunks. Each chunk will automatically start
// the next available chunk when it finishes.
await Promise.all(chunks.splice(0, concurrency).map(f => f()));

Very interesting pattern!
This is the place where we can protect against the io rate limit, I think (not tested).

Suggested change
await Promise.all(chunks.splice(0, concurrency).map(f => f()));
const rate = chunkOptions.rate ?? Infinity;
const delay = (sec) => new Promise((resolve) => setTimeout(resolve, sec * 1000));
const rateLimit = (f) => Promise.all([f(), delay((concurrency * chunkSize) / rate)]).then(v => v[0]);
await Promise.all(chunks.splice(0, concurrency).map(f => rateLimit(f)));
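The intent of the suggestion (pad each chunk with a minimum delay so the aggregate call rate cannot exceed `rate`) can be sketched in a self-contained, runnable form. This is an illustration under assumed names, not the suggested TypeScript itself:

```python
import asyncio

async def rate_limited(coro_fn, min_duration):
    """Run coro_fn(), but make the call take at least min_duration
    seconds, so a stream of such calls cannot exceed a target rate."""
    # Racing the work against a sleep and keeping the first result
    # mirrors the Promise.all([f(), delay(...)]) trick above.
    result, _ = await asyncio.gather(coro_fn(), asyncio.sleep(min_duration))
    return result
```

With `chunkSize` items per chunk and `concurrency` chunks in flight, `min_duration` would be `(concurrency * chunkSize) / rate`, as in the suggestion.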

* The maximum number of chunks to evaluate simultaneously. Defaults to 1.
*/
concurrency?: number;
}

Suggested change
}
/**
* The maximum rate (calls/second) at which we may call fn. Defaults to Infinity.
*/
rate?: number;
}

// more details.
const widgets_info = await chunkMap(
Object.keys(comm_ids),
{ chunkSize: 10, concurrency: 10 },

Suggested change
{ chunkSize: 10, concurrency: 10 },
{ chunkSize: 10, concurrency: 10, rate: 500 },

// https://github.com/voila-dashboards/voila/issues/534 for more details.
return baseManager.chunkMap(
comms,
{ chunkSize: 10, concurrency: 10 },

Suggested change
{ chunkSize: 10, concurrency: 10 },
{ chunkSize: 10, concurrency: 10, rate: 500 },

@jasongrout (Member, Author)

@maartenbreddels - testing this out a bit, I tried to see how it affected the data rates. I added this line: self.log.warning("msg rate: {}".format(msg_rate))
here: https://github.com/jupyter/notebook/blob/43df5af2b614088b4b297fae90a70b6505b9bf84/notebook/services/kernels/handlers.py#L393
which should print out the message data rate. I opened JupyterLab and ran

import ipywidgets as widgets
widgets.Widget.close_all()
def recursive(n):
    if n == 0:
        return widgets.Button(description='Hi')
    else:
        cls = (widgets.VBox if n % 2 else widgets.HBox)
        return cls([recursive(n-1), recursive(n-1)])
display(recursive(9))
display(len(widgets.Widget.widgets))

For the initial display, I was seeing rates around 600-700 from the comm open messages from the kernel to the frontend. However, when I refreshed, which triggered the download of all comm info for the notebook with 2k widgets, I was only seeing message rates of 0.3, even when I switched the jlab manager back to using straight Promise.all(list.map(...)), i.e., no chunked mapping or throttling. In other words, I wasn't seeing anything near even 10, much less 100 or 1000.
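For illustration, a message rate like the one logged above is typically estimated over a sliding time window. A hypothetical sketch of such an estimate (not the notebook handler's actual code):

```python
import time
from collections import deque

class RateMeter:
    """Estimate messages per second over a sliding window."""
    def __init__(self, window=2.0):
        self.window = window
        self.times = deque()

    def tick(self, now=None):
        """Record one message and return the current msgs/sec estimate."""
        now = time.monotonic() if now is None else now
        self.times.append(now)
        # Drop timestamps that have fallen out of the window.
        cutoff = now - self.window
        while self.times and self.times[0] < cutoff:
            self.times.popleft()
        return len(self.times) / self.window
```

Calling `tick()` on every iopub message would yield numbers comparable to the `msg_rate` printed by the logging line above.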

What kind of rates do you see?

@jtpio (Member) commented Mar 26, 2021

FYI, just rebased this PR and pushed to a branch on my fork: https://github.com/jtpio/ipywidgets/tree/limitrequests

Let me know if you want to quickly check if the diff looks good and I can force push to this PR:

master...jtpio:limitrequests

The code snippet posted above changes slightly to take into account the change in #3122:

import ipywidgets as widgets
widgets.Widget.close_all()
def recursive(n):
    if n == 0:
        return widgets.Button(description='Hi')
    else:
        cls = (widgets.VBox if n % 2 else widgets.HBox)
        return cls([recursive(n-1), recursive(n-1)])
display(recursive(9))
display(len(widgets.Widget._active_widgets))

Labels
Work in Progress PRs that are not ready to be reviewed/merged.