Potentially add the ability to pause an incoming request completely? #661
Yes, you have to either pause or resume both the write and the read side. The Node.js APIs that allow you to pause only the read side are broken by design: you can actually DoS a Node.js server that is in a paused state, because Node implements pausing by simply buffering up whichever side of the stream happens to have something to write. So if you spam a bunch of renegotiations you will fill up that buffer, or just deadlock the writing; I don't remember the details, but this is something I asked about on the OpenSSL mailing list, since the Node.js approach made no sense when I looked into it. The problem is that adding something like us_socket_pause / us_socket_resume is low level, requires lots of testing, is error prone, and may break other parts. So it is low priority; it already has an issue tracking it, but no customer has really brought it up, so it remains a todo.
I see, that makes sense. In that case, I'll just follow the issue you linked for the future. Thanks!
I thought about it a bit and I can probably add experimental pause/resume pretty easily. Let's see...
I have implemented this, very sloppily, here: uNetworking/uWebSockets@6dc4a11. No time to test it or wrap it for JS right now.
Sounds good, I'll be sure to follow this and test on the JS side once it has been wrapped in uWebSockets.js.
Can you try the latest binaries (in the binaries branch)? res.pause, res.resume. There is no hard guarantee that you won't get any more chunks immediately after calling res.pause (they might already be downloaded and about to be emitted), so you need to handle that. Please write back about how it runs.
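For illustration, a minimal sketch of one way a consumer might tolerate chunks that are still emitted after pause() is called. The pending queue, process_chunk, and the fixed 50ms resume delay are all hypothetical userland choices, not uWS APIs:

```js
const uWS = require('uWebSockets.js');

const process_chunk = (chunk) => { /* hypothetical consumer of body data */ };

uWS.App().post('/upload', (res, req) => {
    const pending = []; // chunks that slip through after pause()
    let paused = false;

    res.onAborted(() => { pending.length = 0; });

    res.onData((array_buffer, is_last) => {
        // Copy immediately: the ArrayBuffer is only valid inside this callback
        const chunk = Buffer.concat([Buffer.from(array_buffer)]);

        if (paused) {
            pending.push(chunk); // pause() is best-effort; late chunks still arrive
        } else {
            process_chunk(chunk);
        }

        if (is_last) {
            pending.splice(0).forEach(process_chunk); // drain anything still queued
            return res.end('ok');
        }

        if (!paused) {
            paused = true;
            res.pause();
            setTimeout(() => {
                pending.splice(0).forEach(process_chunk); // drain late arrivals first
                paused = false;
                res.resume();
            }, 50);
        }
    });
}).listen(3000, () => {});
```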
Just tested the latest binaries by running tests with modifications to this function https://github.com/kartikk221/hyper-express/blob/cb82084ec59e7dd46bc3c9df9b3388416fb45ab1/src/components/http/Request.js#L111 in HyperExpress, which handles downloading the incoming request body into memory.

```js
// See link above for code before this snippet of _download_buffer()

// Store body into a singular Buffer for most memory efficiency
let body_buffer;
let body_cursor = 0;
let use_fast_buffers = reference.#master_context.options.fast_buffers;
let paused = false;

// Store incoming buffer chunks as they arrive
reference.#raw_response.onData((array_buffer, is_last) => {
    // Do not process chunks if request has been aborted
    if (reference.#raw_response.aborted) return;

    // TESTING: Pause after each chunk to simulate throttling
    console.log(array_buffer);
    if (!is_last && !paused) {
        paused = true;
        reference.#raw_response.pause();
        console.log('PAUSED');
        setTimeout(() => {
            console.log('RESUMING...');
            paused = false;
            reference.#raw_response.resume();
        }, 1000);
    }

// See link above for code after this snippet of _download_buffer()
```

I received the console output in output.txt, which indicates that everything runs well in the tests with a large body. Unless there is potential for bugs under heavy load with many concurrently paused and resumed requests, the pause()/resume() methods look to be fine.
Nice, but I'm pretty sure that if you pause for more than 10s in one go, the download will be aborted by the timeout. So pause needs to stop the timeouts and resume needs to restart them.
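Conceptually the fix is just timer bookkeeping; a rough userland illustration of the invariant (not the actual uWS internals, which manage this natively once fixed):

```js
// Illustration only: an idle timeout that would abort a slow download,
// suspended while the request is paused.
class IdleTimeout {
    constructor(on_timeout, ms = 10000) {
        this.on_timeout = on_timeout;
        this.ms = ms;
        this.handle = null;
    }
    arm() {
        this.disarm();
        this.handle = setTimeout(this.on_timeout, this.ms);
    }
    disarm() {
        if (this.handle !== null) {
            clearTimeout(this.handle);
            this.handle = null;
        }
    }
}

// pause() must disarm the timer and resume() must re-arm it;
// otherwise any pause longer than `ms` aborts the request from timeout.
```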
Yeah, you're most likely right. I didn't test any scenario with a pause longer than 10 seconds, but if there is an internal timeout for requests in uWS, then the above would need to be addressed. Let me know once the new binaries are out and I can do some testing with long pauses.
New binaries, please test everything.
Just tested again with the new binaries. I modified the function from my last testing post above, this time with a gradually increasing delay. Code:

```js
// See link above for code before this snippet of _download_buffer()

// Store body into a singular Buffer for most memory efficiency
let body_buffer;
let body_cursor = 0;
let use_fast_buffers = reference.#master_context.options.fast_buffers;
let delay = 15;

// Store incoming buffer chunks as they arrive
reference.#raw_response.onData((array_buffer, is_last) => {
    // Do not process chunks if request has been aborted
    if (reference.#raw_response.aborted) return;

    if (!is_last) {
        delay *= 2;
        console.log(`PAUSING FOR ${delay}ms`);
        reference.#raw_response.pause();
        setTimeout(() => {
            reference.#raw_response.resume();
            console.log('RESUMING');
        }, delay);
        console.log(array_buffer);
    }

// See link above for code after this snippet of _download_buffer()
```

I received the output in output.txt after running the hyper-express tests with the doubling delays, including one longer than 10 seconds. Your implementation looks stable, at least across all of the HyperExpress tests.
Alright, this is released now. Thanks for testing.
Hello,

I am the maintainer of hyper-express and was recently implementing file uploads in the web server package.

While file uploading works great, I noticed that the onData() callback constantly reads and pipes chunks without any way to pause this behavior. This makes throttling, or any kind of stream implementation over the request body, impossible, since there is no way to pause the incoming chunks.

Looking through previous issues, I saw in #648 that you stated throttling only the receiving of data with SSL was not possible, as it could create "deadlocks" due to the two-way nature of SSL.

With all of the above in mind, would it be possible to implement some method or mechanism to pause an incoming request completely, meaning no data can be written or received until the request is resumed?
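For context, a sketch of the kind of backpressure handling such a method would enable, bridging onData() into a Node.js Writable and assuming the requested pause()/resume() pair exists:

```js
const fs = require('fs');

// Sketch: stream an incoming body to disk with backpressure,
// assuming the pause()/resume() methods requested in this issue.
function pipe_body_to_file(res, path) {
    const file = fs.createWriteStream(path);
    res.onAborted(() => file.destroy());
    res.onData((array_buffer, is_last) => {
        // Copy: the ArrayBuffer is only valid during this callback
        const chunk = Buffer.concat([Buffer.from(array_buffer)]);
        const writable = file.write(chunk);
        if (is_last) {
            file.end();
        } else if (!writable) {
            res.pause();                            // stop reading from the socket...
            file.once('drain', () => res.resume()); // ...until the disk catches up
        }
    });
}
```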