
Segmentation fault in http2 module when cancelling download in Chrome #18973

Closed
iczero opened this issue Feb 24, 2018 · 39 comments
Labels
confirmed-bug (Issues with confirmed bugs), http2 (Issues or PRs related to the http2 subsystem)

Comments

@iczero

iczero commented Feb 24, 2018

  • Version: v9.5.0, v9.6.1, and v10.0.0-pre commit 743f890
  • Platform: linux 64-bit (kernel 4.4.0-116-generic from Ubuntu)
  • Subsystem: http2

Steps to reproduce:

  1. Serve a file from Node.js to Chrome using the http2 module
  2. Cancel the download from Chrome

I have not been able to reproduce this problem locally, however.

This is the stack trace generated by node-segfault-handler (Node.js v9.5.0):

PID 1371 received SIGSEGV for address: 0x0
/home/iczero/prj/buildserver/node_modules/segfault-handler/build/Release/segfault-handler.node(+0x1acd)[0x7fccc33beacd]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fccc59a0390]
node(_ZN4node5http212Http2Session22OnStreamAfterWriteImplEPNS_9WriteWrapEiPv+0x55)[0x90f4b5]
node(_ZN4node10StreamBase10AfterWriteEPNS_9WriteWrapEi+0xa0)[0x94c330]
node(_ZN4node7TLSWrap6EncOutEv+0xc5)[0x999075]
node(_ZN4node10StreamBase10AfterWriteEPNS_9WriteWrapEi+0xa0)[0x94c330]
node(_ZN4node15LibuvStreamWrap10AfterWriteEPNS_9WriteWrapEi+0x1b)[0x9502cb]
node(_ZN4node15LibuvStreamWrap12AfterUvWriteEP10uv_write_si+0x54)[0x950f14]
node[0x14240b9]
node[0x1424d74]
node[0x142ac58]
node(uv_run+0x156)[0x1419676]
node(_ZN4node5StartEP9uv_loop_siPKPKciS5_+0x4bd)[0x8d49ed]
node(_ZN4node5StartEiPPc+0x153)[0x8d12d3]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fccc55e5830]
node[0x89d7e1]

Relevant symbols demangled:

node::http2::Http2Session::OnStreamAfterWriteImpl(node::WriteWrap*, int, void*)
node::StreamBase::AfterWrite(node::WriteWrap*, int)
node::TLSWrap::EncOut()
node::StreamBase::AfterWrite(node::WriteWrap*, int)
node::LibuvStreamWrap::AfterWrite(node::WriteWrap*, int)
node::LibuvStreamWrap::AfterUvWrite(uv_write_s*, int)

A small script that causes the segfault:

const http2 = require('http2');
const fs = require('fs');

const FILE = '/usr/bin/node';

// generate a self-signed cert first:
// $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out cert.pem

let server = http2.createSecureServer({
  key: fs.readFileSync('./key.pem'),
  cert: fs.readFileSync('./cert.pem')
}, async (request, response) => {
  console.log('a request!');
  fs.createReadStream(FILE).pipe(response);
});
server.listen(9998);
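
For reference, the cancellation Chrome performs can probably be approximated from a plain Node http2 client by tearing the session down mid-download; a minimal sketch (it assumes the server above is listening on 9998 with the self-signed cert, and whether it actually hits the crash without Chrome will likely depend on timing):

const http2 = require('http2');

const client = http2.connect('https://localhost:9998', {
  // accept the self-signed certificate generated above
  rejectUnauthorized: false
});
client.on('error', (err) => console.error('session error', err));

const req = client.request({ ':path': '/' });
req.on('data', () => {
  // cancel as soon as the first chunk arrives, like an aborted download
  req.close(http2.constants.NGHTTP2_CANCEL);
  client.destroy();
});
req.on('error', () => { /* expected once the stream is cancelled */ });
req.end();
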
targos added the http2 label Feb 24, 2018
@michael42

It seems I'm hitting the same issue when attempting an HTTP/2 push after Chrome has already closed the connection. gdb prints a similar stack trace:

#0  0x0000000000911185 in node::http2::Http2Session::OnStreamAfterWriteImpl(node::WriteWrap*, int, void*) ()
#1  0x000000000094e240 in node::StreamBase::AfterWrite(node::WriteWrap*, int) ()
#2  0x0000000000998cc4 in ?? ()
#3  0x000000000099b1cf in node::TLSWrap::EncOut() ()
#4  0x000000000094e240 in node::StreamBase::AfterWrite(node::WriteWrap*, int) ()
#5  0x0000000000952bc4 in node::LibuvStreamWrap::AfterUvWrite(uv_write_s*, int) ()
#6  0x0000000001427019 in ?? ()
#7  0x0000000001427cd4 in ?? ()
#8  0x000000000141c57b in uv_run ()
#9  0x00000000008d74bd in node::Start(uv_loop_s*, int, char const* const*, int, char const* const*) ()

I managed to reduce the code to the following example:

const fs = require("fs");
const http2 = require("http2");

const options = {
    key: fs.readFileSync(`${__dirname}/localhost.key`),
    cert: fs.readFileSync(`${__dirname}/localhost.crt`)
};
const server = http2.createSecureServer(options, async (req, res) => {
    if (req.url !== "/") {
        res.writeHead(404);
        res.end();
        return;
    }

    // no crash when pushing a single response
    push(req, res, "/dummy1");
    push(req, res, "/dummy2");

    res.writeHead(200);
    res.write("test");
    res.end();
});

server.listen(8443);
console.log("Listening");

function push(req, res, path) {
    res.createPushResponse({":path": path}, (err, pushResponse) => {
        if (err)
            return console.error({err}, "createPushResponse failed");

        pushResponse.stream.on("error", (err) => {
            console.warn("Writing push failed", err);
            pushResponse.stream.end();
        });

        // no crash when immediately sending a response and closing the stream
        const readStream = fs.createReadStream(__filename);
        readStream.on("end", () => pushResponse.stream.end());
        pushResponse.writeHead(200, {});
        readStream.pipe(pushResponse.stream);
    });
}

Steps:

@addaleax
Member

I couldn’t reproduce the segfault with either example yet. :/ If it is at all possible, can you try to reproduce this with Node-only code? I.e. a standalone script that crashes and doesn’t require a browser or other external HTTP/2 clients.

It would be appreciated if you could provide a core dump, or try to dig into this with a debugger. Building a debug build of Node takes a while, but it should be worth it.

If your local reproduction is reliable enough, you could probably also figure out something meaningful by attaching valgrind to the Node process.

What I can tell from the offsets in the stack traces is that most likely one of the stream_ fields of a WriteWrap created for an Http2Stream is invalid, but that doesn’t say much about the cause of the issue.

v10.0.0-pre commit 743f890

@iczero Does that mean that that commit fixes the issue for you, or that you didn’t try anything that includes that patch? It would be weird if that commit fixed things, since the original PR that introduced that bug didn’t make it into Node 9.

@iczero
Author

iczero commented Feb 25, 2018

@addaleax I meant that the problem still existed with that commit. Sorry for the confusion.

@addaleax
Member

@iczero @michael42 Could you check whether the patch from #18987 fixes this issue for you? That would be a huge help.

addaleax added the confirmed-bug label Feb 25, 2018
@michael42

I think I can check it tomorrow. For now, I can't reproduce it without Chrome, but here's the output from chrome://net-internals:

t= 2 [st= 0]  HTTP2_SESSION_SEND_HEADERS
              --> exclusive = true
              --> fin = true
              --> has_priority = true
              --> :method: GET
                  :authority: localhost:8443
                  :scheme: https
                  :path: /
                  cache-control: max-age=0
                  upgrade-insecure-requests: 1
                  user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36
                  accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
                  dnt: 1
                  accept-encoding: gzip, deflate, br
                  accept-language: de,en;q=0.9,en-US;q=0.8
              --> parent_stream_id = 0
              --> source_dependency = 11872 (HTTP_STREAM_JOB)
              --> stream_id = 3
              --> weight = 256
t= 5 [st= 3]  HTTP2_SESSION_RECV_PUSH_PROMISE
              --> :scheme: https
                  :authority: localhost:8443
                  :method: GET
                  :path: /dummy1
              --> id = 3
              --> promised_stream_id = 6
t= 5 [st= 3]  HTTP2_SESSION_SEND_RST_STREAM
              --> description = "Duplicate pushed stream with url: https://localhost:8443/dummy1"
              --> error_code = "7 (REFUSED_STREAM)"
              --> stream_id = 6
t= 5 [st= 3]  HTTP2_SESSION_RECV_PUSH_PROMISE
              --> :scheme: https
                  :authority: localhost:8443
                  :method: GET
                  :path: /dummy2
              --> id = 3
              --> promised_stream_id = 8
t= 5 [st= 3]  HTTP2_SESSION_SEND_RST_STREAM
              --> description = "Duplicate pushed stream with url: https://localhost:8443/dummy2"
              --> error_code = "7 (REFUSED_STREAM)"
              --> stream_id = 8
t= 5 [st= 3]  HTTP2_SESSION_RECV_HEADERS
              --> fin = false
              --> :status: 200
                  date: Sun, 25 Feb 2018 21:13:13 GMT
              --> stream_id = 3
t= 5 [st= 3]  HTTP2_SESSION_RECV_HEADERS
              --> fin = false
              --> :status: 200
                  date: Sun, 25 Feb 2018 21:13:13 GMT
              --> stream_id = 6
t= 5 [st= 3]  HTTP2_SESSION_RECV_HEADERS
              --> fin = false
              --> :status: 200
                  date: Sun, 25 Feb 2018 21:13:13 GMT
              --> stream_id = 8
t= 5 [st= 3]  HTTP2_SESSION_RECV_DATA
              --> fin = false
              --> size = 4
              --> stream_id = 3
t= 5 [st= 3]  HTTP2_SESSION_UPDATE_RECV_WINDOW
              --> delta = -4
              --> window_size = 15726176
t= 5 [st= 3]  HTTP2_SESSION_RECV_DATA
              --> fin = true
              --> size = 0
              --> stream_id = 3
t= 6 [st= 4]  HTTP2_SESSION_UPDATE_RECV_WINDOW
              --> delta = 4
              --> window_size = 15726180
t=11 [st= 9]  HTTP2_SESSION_RECV_DATA
              --> fin = false
              --> size = 1230
              --> stream_id = 8
t=11 [st= 9]  HTTP2_SESSION_UPDATE_RECV_WINDOW
              --> delta = -1230
              --> window_size = 15724950
t=11 [st= 9]  HTTP2_SESSION_UPDATE_RECV_WINDOW
              --> delta = 1230
              --> window_size = 15726180
t=14 [st=12]  HTTP2_SESSION_CLOSE
              --> description = "Connection closed"
              --> net_error = -100 (ERR_CONNECTION_CLOSED)
t=14 [st=12]  HTTP2_STREAM_ERROR
              --> description = "Abandoned."
              --> net_error = "ERR_CONNECTION_CLOSED"
              --> stream_id = 2
t=14 [st=12]  HTTP2_SESSION_UPDATE_RECV_WINDOW
              --> delta = 1230
              --> window_size = 15727410
t=14 [st=12]  HTTP2_STREAM_ERROR
              --> description = "Abandoned."
              --> net_error = "ERR_CONNECTION_CLOSED"
              --> stream_id = 4
t=14 [st=12]  HTTP2_SESSION_UPDATE_RECV_WINDOW
              --> delta = 1230
              --> window_size = 15728640
t=14 [st=12]  HTTP2_SESSION_POOL_REMOVE_SESSION
t=14 [st=12] -HTTP2_SESSION

I also ran it under valgrind --leak-check=full (is that correct? I've never used it before), but it doesn't crash in that case. It does report some possibly lost bytes (3,004 bytes in 15 blocks), though.

@iczero
Author

iczero commented Feb 25, 2018

@addaleax It doesn't seem to fix the problem. :(

Thread 1 "node" received signal SIGSEGV, Segmentation fault.
0x000000000089ccba in node::http2::Http2Session::ClearOutgoing(int) ()

I'm going to try running node in gdb to see if I can isolate the problem.

@addaleax
Member

@michael42 Thanks for reporting back! Yes, it’s expected that there are some lost bytes since Node doesn’t fully clean up before process exit. It would be more interesting to know about any kind of uninitialized data usage.

@iczero Thanks for the info. :/ Do you think you could try a debug build of Node to possibly provide more context?

@iczero
Author

iczero commented Feb 25, 2018

A better backtrace, from the debug build:

#0  0x00000000014a613f in node::StreamResource::EmitAfterWrite (this=0x3e6d450, w=0x3e76040, status=0) at ../src/stream_base-inl.h:121
#1  0x00000000014a61dc in node::StreamBase::AfterWrite(node::WriteWrap*, int)::{lambda()#1}::operator()() const (__closure=0x7fffffff9b20) at ../src/stream_base-inl.h:138
#2  0x00000000014a6f0c in node::StreamBase::AfterRequest<node::WriteWrap, node::StreamBase::AfterWrite(node::WriteWrap*, int)::{lambda()#1}>(node::WriteWrap*, node::StreamBase::AfterWrite(node::WriteWrap*, int)::{lambda()#1}) (this=0x3e6d450, req_wrap=0x3e76040, emit=...) at ../src/stream_base-inl.h:155
#3  0x00000000014a622b in node::StreamBase::AfterWrite (this=0x3e6d450, req_wrap=0x3e76040, status=0) at ../src/stream_base-inl.h:137
#4  0x00000000014a6322 in node::WriteWrap::OnDone (this=0x3e76040, status=0) at ../src/stream_base-inl.h:413
#5  0x00000000014b43e4 in node::StreamReq::Done (this=0x3e76040, status=0, error_str=0x0) at ../src/stream_base-inl.h:424
#6  0x0000000001523680 in node::http2::Http2Session::ClearOutgoing (this=0x3e6cfb0, status=0) at ../src/node_http2.cc:1454
#0  0x00000000014a613f in node::StreamResource::EmitAfterWrite (this=0x3e6d450, w=0x3e76040, status=0) at ../src/stream_base-inl.h:121
121       listener_->OnStreamAfterWrite(w, status);
#1  0x00000000014a61dc in node::StreamBase::AfterWrite(node::WriteWrap*, int)::{lambda()#1}::operator()() const (__closure=0x7fffffff9b20) at ../src/stream_base-inl.h:138
138         EmitAfterWrite(req_wrap, status);
#2  0x00000000014a6f0c in node::StreamBase::AfterRequest<node::WriteWrap, node::StreamBase::AfterWrite(node::WriteWrap*, int)::{lambda()#1}>(node::WriteWrap*, node::StreamBase::AfterWrite(node::WriteWrap*, int)::{lambda()#1}) (this=0x3e6d450, req_wrap=0x3e76040, emit=...) at ../src/stream_base-inl.h:155
155       emit();
#3  0x00000000014a622b in node::StreamBase::AfterWrite (this=0x3e6d450, req_wrap=0x3e76040, status=0) at ../src/stream_base-inl.h:137
137       AfterRequest(req_wrap, [&]() {
#4  0x00000000014a6322 in node::WriteWrap::OnDone (this=0x3e76040, status=0) at ../src/stream_base-inl.h:413
413       stream()->AfterWrite(this, status);
#5  0x00000000014b43e4 in node::StreamReq::Done (this=0x3e76040, status=0, error_str=0x0) at ../src/stream_base-inl.h:424
424       OnDone(status);
#6  0x0000000001523680 in node::http2::Http2Session::ClearOutgoing (this=0x3e6cfb0, status=0) at ../src/node_http2.cc:1454
1454          wrap->Done(status);
#7  0x0000000001523389 in node::http2::Http2Session::OnStreamAfterWrite (this=0x3e6cfb0, w=0x3e77810, status=0) at ../src/node_http2.cc:1410
1410      ClearOutgoing(status);

@addaleax
Member

@iczero Can you figure out why that is crashing? Is listener_ null?

@iczero
Author

iczero commented Feb 25, 2018

Seems like it:

(gdb) p listener_
$1 = (node::StreamListener *) 0x0

@addaleax
Member

@iczero Can you build with ./configure --debug-http2 and share the output for a crashing run, and ideally add logging like this:

--- a/src/node_http2.cc
+++ b/src/node_http2.cc
@@ -1755,2 +1755,3 @@ Http2Stream::Http2Stream(
 Http2Stream::~Http2Stream() {
+  DEBUG_HTTP2STREAM2(this, "tearing down stream");
   if (session_ != nullptr) {

listener_ == nullptr should basically never happen at that point; it means that all listeners have been removed, which in turn means that the stream destructor has been called. So it would be nice to figure out how that can happen…

@iczero
Author

iczero commented Feb 25, 2018

Compilation failed with that patch:

In file included from ../src/node_http2.cc:4:0:
../src/node_http2.cc: In destructor ‘virtual node::http2::Http2Stream::~Http2Stream()’:
../src/node_http2.h:39:52: error: expected primary-expression before ‘)’ token
 #define DEBUG_HTTP2(...) debug_vfprintf(__VA_ARGS__);
                                                    ^
../src/node_http2.h:66:5: note: in expansion of macro ‘DEBUG_HTTP2’
     DEBUG_HTTP2("Http2Stream %d (%.0lf) [Http2Session %s (%.0lf)] " message   \
     ^~~~~~~~~~~
../src/node_http2.cc:1756:3: note: in expansion of macro ‘DEBUG_HTTP2STREAM2’
   DEBUG_HTTP2STREAM2(this, "tearing down stream");
   ^~~~~~~~~~~~~~~~~~
In file included from ../src/node_http2.cc:4:0:
../src/node_http2.cc: In destructor ‘virtual node::http2::Http2Stream::~Http2Stream()’:
../src/node_http2.h:39:52: error: expected primary-expression before ‘)’ token
 #define DEBUG_HTTP2(...) debug_vfprintf(__VA_ARGS__);
                                                    ^
../src/node_http2.h:66:5: note: in expansion of macro ‘DEBUG_HTTP2’
     DEBUG_HTTP2("Http2Stream %d (%.0lf) [Http2Session %s (%.0lf)] " message   \
     ^~~~~~~~~~~
../src/node_http2.cc:1756:3: note: in expansion of macro ‘DEBUG_HTTP2STREAM2’
   DEBUG_HTTP2STREAM2(this, "tearing down stream");

@addaleax
Member

Weird, my compiler didn’t complain. :o

Anyway, using DEBUG_HTTP2STREAM instead of DEBUG_HTTP2STREAM2 should fix that (my bad).

@iczero
Author

iczero commented Feb 25, 2018

The output now:

Http2Session server (28) reading outbound data for stream 1
Http2Session server (28) stream 1 has pending outbound data
Http2Session server (28) sending 16384 bytes for data frame on stream 1
Http2Session server (28) nghttp2 has 16384 bytes to send directly
Http2Session server (28) reading outbound data for stream 1
Http2Session server (28) deferring stream 1
Http2Session server (28) wants data in return? 1
Http2Session server (28) receiving 13 bytes
Http2Session server (28) receiving 13 bytes [wants data? 1]
Http2Session server (28) complete frame received: type: 3
Http2Session server (28) stream 1 closed with code: 8
Http2Stream 1 (34) [Http2Session server (28)] closed with code 8
Http2Stream 1 (34) [Http2Session server (28)] destroying stream
Http2Stream 1 (34) [Http2Session server (28)] destroying stream
Http2Session server (28) sending pending data
Http2Session server (28) processed 13 bytes. wants more? 1
Http2Stream 1 (34) [Http2Session server (28)] tearing down stream
Http2Session server (28) write finished with status 0

Thread 1 "node_g" received signal SIGSEGV, Segmentation fault.
0x00000000014a613f in node::StreamResource::EmitAfterWrite (this=0x3e6c2c0, w=0x3e6d650, status=0) at ../src/stream_base-inl.h:121
121       listener_->OnStreamAfterWrite(w, status);

Also, I am on gcc 7.2.0-1ubuntu1~16.04.

@addaleax
Member

addaleax commented Feb 25, 2018

Thanks, that seems helpful.

One thing that sticks out as weird is that "destroying stream" pops up twice in the output, because there is an explicit check that should prevent that. Nvm, they are from different call sites.

The hypothesis I’d get from the above is that the delete stream; line in Http2Stream::Destroy is run before the write (that was already posted to the socket) finishes its callback, so accessing the stream object fails at that point.

Can you provide a backtrace for the "tearing down stream" line? Just to make sure that that’s what’s happening…

You could also try removing the delete stream; line altogether for now. It’s not a good fix, but it should work and not create a memory leak; it’s just there to avoid having to wait for garbage collection. (If that doesn’t work, I’m probably wrong anyway.)

I’m going to sleep now, but thanks a lot for sticking with me in debugging this. 💙 If somebody else wants to think about the ideal patch here, they’re welcome to do that, because it’s not quite obvious to me at this point what the best approach would be.

@iczero
Author

iczero commented Feb 25, 2018

Thanks for your time. Good night!

@iczero
Author

iczero commented Feb 26, 2018

Backtrace to Http2Stream destructor:

#0  node::http2::Http2Stream::~Http2Stream (this=0x3e6c720, __in_chrg=<optimized out>) at ../src/node_http2.cc:1756
1756      DEBUG_HTTP2STREAM(this, "tearing down stream");
#1  0x0000000001524178 in node::http2::Http2Stream::~Http2Stream (this=0x3e6c720, __in_chrg=<optimized out>) at ../src/node_http2.cc:1764
1764    }
#2  0x000000000153f278 in node::http2::Http2Stream::Destroy()::{lambda(node::Environment*, void*)#1}::operator()(node::Environment*, void*) const (__closure=0x0, env=0x7fffffffd520, data=0x3e6c720)
    at ../src/node_http2.cc:1849
1849        delete stream;
#3  0x000000000153f2a0 in node::http2::Http2Stream::Destroy()::{lambda(node::Environment*, void*)#1}::_FUN(node::Environment*, void*) () at ../src/node_http2.cc:1850
1850      }, this, this->object());
#4  0x00000000014abf68 in node::Environment::<lambda()>::operator()(void) const (__closure=0x7fffffffd310) at ../src/env.cc:316
316             it->cb_(this, it->data_);

addaleax added a commit to addaleax/node that referenced this issue Feb 26, 2018
This fixes a crash that occurred when a `Http2Stream` write
is completed after it is already destroyed.

Instead, don’t destroy the stream in that case and wait for
GC to take over.

Fixes: nodejs#18973
@addaleax
Member

Okay, I think here’s a real fix: #19002

It would be great if you could check whether that works, possibly combined with #18987.

@iczero
Author

iczero commented Feb 26, 2018

Yep it works. Thanks!

iczero closed this as completed Feb 26, 2018
@TimothyGu
Member

Let’s keep this open until the PR gets merged.

TimothyGu reopened this Feb 26, 2018
@iczero
Author

iczero commented Feb 26, 2018

Oh, sorry about that.

MylesBorins pushed a commit that referenced this issue Mar 7, 2018
This fixes a crash that occurred when a `Http2Stream` write
is completed after it is already destroyed.

Instead, don’t destroy the stream in that case and wait for
GC to take over.

Backport-PR-URL: #19185
PR-URL: #19002
Fixes: #18973
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Anatoli Papirovski <[email protected]>
kjin pushed a commit to kjin/node that referenced this issue May 1, 2018
This fixes a crash that occurred when a `Http2Stream` write
is completed after it is already destroyed.

Instead, don’t destroy the stream in that case and wait for
GC to take over.

Backport-PR-URL: nodejs#19185
PR-URL: nodejs#19002
Fixes: nodejs#18973
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Anatoli Papirovski <[email protected]>
MylesBorins pushed a commit that referenced this issue May 2, 2018
This fixes a crash that occurred when a `Http2Stream` write
is completed after it is already destroyed.

Instead, don’t destroy the stream in that case and wait for
GC to take over.

Backport-PR-URL: #19185
Backport-PR-URL: #20456
PR-URL: #19002
Fixes: #18973
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Anatoli Papirovski <[email protected]>
MayaLekova pushed a commit to MayaLekova/node that referenced this issue May 8, 2018
This fixes a crash that occurred when a `Http2Stream` write
is completed after it is already destroyed.

Instead, don’t destroy the stream in that case and wait for
GC to take over.

PR-URL: nodejs#19002
Fixes: nodejs#18973
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Anatoli Papirovski <[email protected]>
MayaLekova pushed a commit to MayaLekova/node that referenced this issue May 8, 2018
This might otherwise result in a hard crash when trying
to write to a socket after a sudden disconnect.

Note that the test here uses an aborted `h2load` run to create
the failing requests; That’s far from ideal, but it provides
a reasonably reliably reproduction at this point.

PR-URL: nodejs#18987
Fixes: nodejs#18973
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Matteo Collina <[email protected]>
MylesBorins pushed a commit that referenced this issue May 15, 2018
This fixes a crash that occurred when a `Http2Stream` write
is completed after it is already destroyed.

Instead, don’t destroy the stream in that case and wait for
GC to take over.

Backport-PR-URL: #19185
Backport-PR-URL: #20456
PR-URL: #19002
Fixes: #18973
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Anatoli Papirovski <[email protected]>
MylesBorins pushed a commit that referenced this issue May 15, 2018
This fixes a crash that occurred when a `Http2Stream` write
is completed after it is already destroyed.

Instead, don’t destroy the stream in that case and wait for
GC to take over.

Backport-PR-URL: #19185
Backport-PR-URL: #20456
PR-URL: #19002
Fixes: #18973
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Anatoli Papirovski <[email protected]>
addaleax added a commit to addaleax/node that referenced this issue Sep 18, 2018
This might otherwise result in a hard crash when trying
to write to a socket after a sudden disconnect.

Note that the test here uses an aborted `h2load` run to create
the failing requests; That’s far from ideal, but it provides
a reasonably reliably reproduction at this point.

PR-URL: nodejs#18987
Fixes: nodejs#18973
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Matteo Collina <[email protected]>
MylesBorins pushed a commit that referenced this issue Sep 25, 2018
This might otherwise result in a hard crash when trying
to write to a socket after a sudden disconnect.

Note that the test here uses an aborted `h2load` run to create
the failing requests; That’s far from ideal, but it provides
a reasonably reliably reproduction at this point.

Backport-PR-URL: #22924
PR-URL: #18987
Fixes: #18973
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Matteo Collina <[email protected]>
@sam-github
Contributor

Did a CVE get issued for this? @jasnell @mhdawson

@addaleax
Member

addaleax commented Nov 2, 2018

@sam-github Not according to our CVE management repository. This was also during the time when HTTP/2 support was classified as experimental; I don’t know whether we assign CVEs in those cases.

@sam-github
Contributor

I don't know either! :-)

On the one hand... "experimental": use at your own risk, and if it breaks, you get to keep the parts.

On the other hand, we shipped it, people are using it, and their servers can be crashed (and maybe still can be, if #24037 is accurate). A hard crash might deserve a CVE.

@nodejs/security @nodejs/security-wg Anyone have thoughts on what our policy should be here? Should the TSC decide that experimental features accept bug reports but not vulnerability reports? Or decide that if people are using a feature, they deserve to know about its security problems?

@mcollina
Member

mcollina commented Nov 3, 2018

I think that experimental features are “use at your own risk”.

@hermanbanken

hermanbanken commented Nov 5, 2018

I agree; this should not need to be fixed in 8.12. We explicitly invoke node with --expose-http2.

Unfortunately, there seems to be a problem with Node 10, https.createServer, our hardware client devices, and OpenSSL/TLS, so we cannot update. From our debugging client, we see

Error: 140736036709248:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:../deps/openssl/openssl/ssl/s3_pkt.c:365:

when connecting (from a Node.js 8.12 WebSocket client) to the updated Node 10 WebSocket server. The server is created using https.createServer({ secureProtocol: "TLSv1_2_method" }), and I can find no changelog or breaking changes around that option except that Node 10 now supports OpenSSL 1.1.0. The error about SSL3 is weird, considering we supply secureProtocol. Might that option be ignored, and thus a bug in Node 10?
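
One way to check whether the secureProtocol option is actually being honoured (rather than silently ignored) would be to log the protocol each connection negotiates; a rough sketch, with placeholder cert paths and port:

const https = require('https');
const fs = require('fs');

const server = https.createServer({
  key: fs.readFileSync('./key.pem'),   // placeholder key/cert paths
  cert: fs.readFileSync('./cert.pem'),
  secureProtocol: 'TLSv1_2_method'
}, (req, res) => res.end('ok'));

// the HTTPS server emits 'secureConnection' for every completed TLS handshake
server.on('secureConnection', (tlsSocket) => {
  console.log('negotiated protocol:', tlsSocket.getProtocol()); // e.g. 'TLSv1.2'
});

server.listen(44330);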

@addaleax
Member

addaleax commented Nov 5, 2018

/cc @nodejs/crypto for the comment above?

@sam-github
Contributor

If you provide standalone scripts that demonstrate this 8.x-to-10.x interop problem, I will try to debug it.

@hermanbanken

hermanbanken commented Nov 5, 2018

@sam-github, I've created a minimal sample repo (docker-compose up) at https://gist.github.com/hermanbanken/c1059caabff0d2dac90a773a903fbc07, which produces this on the Node 8.12 client:

Error: 140242062162824:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:../deps/openssl/openssl/ssl/s3_pkt.c:365

Try running the latest commit (Node 8 <-> Node 10) and the one before it (Node 8 <-> Node 8).

@hermanbanken

[screenshot: 2018-11-05 23:25:04]

@sam-github
Contributor

I think our definitions of “standalone” differ! I'll want to wireshark the communication, and I don't want to play with Docker. I'll see if I can run the client.js/server.js from the gist directly on my Linux laptop.

@hermanbanken

hermanbanken commented Nov 5, 2018

Got it. This was a ‘simple’ way for me to avoid depending on which OpenSSL is installed and to recreate our exact case (our Node server runs in Docker too).

Hopefully the scripts inside produce the same result on your system.

@hermanbanken

hermanbanken commented Nov 6, 2018

I can reproduce it locally using the Wireshark filter tcp.dstport == 44330 or tcp.srcport == 44330 and this bash:

sh make-certs.sh
(nvm use 10.13 && PORT=44330 node server.js) &
(nvm use 8.12 && PORT=44330 node client.js)

@hermanbanken

hermanbanken commented Nov 6, 2018

I have no experience with Wireshark, but I definitely see something different going on in the Server Hello message. The Node 8 server sends a Server Hello of 83 bytes, while the Node 10 server sends one of 61 bytes. The difference is the next_protocol_negotiation (len=18) field, which is only present with Node 8 and contains 'next protocol http/1.1, next protocol http/1.0'.

After that Hello, the Node 8 and Node 10 servers seem to proceed similarly for a few packets, except that Node 10 terminates early. With my limited Wireshark knowledge I don't know if the following is interesting: in the Node 10 capture, Wireshark detects an HTTP "switching protocols" upgrade to WebSockets, while in the Node 8 capture it just marks the same traffic as "Application Data".

@sam-github
Contributor

What's happening is that uws, on Node 10, is dumping the ws/HTTP "switching protocols" message directly onto the TCP connection, unencrypted! The TLS client then attempts to decode it as a TLS application data record, but since it's just unencrypted HTTP, the decode fails and the client errors out.

Now, why uws does the wrong thing here, I'm not sure. That will take more digging. I'll take a bit more time to look at this and see if I can tell why it's doing that; it could be that something changed in Node that should not have.

Since uws is dead upstream, doesn't support Node 11, and has been spectacularly abandoned by its maintainer, I'm not sure how much time I'll spend on this.

@sam-github
Contributor

Sorry, no clue. This disappears into uws's C++ internals; I'm honestly not sure why it ever works with TLS, since uws seems to replace all of the Node TLS code with its own. It's possible it just doesn't support OpenSSL 1.1.0 (8.x used 1.0.2). Btw, this doesn't depend on the client Node version; it's just the server.

Also, I don't think this issue has anything to do with the one whose thread we are using; I think we hijacked it.

@hermanbanken

True, this has nothing to do with this issue.

For your interest: I found that UWS relies on net.Socket._handle.ssl._external. This seems to be gone from Node 10. I'm still looking into whether that _external property is somehow exposed elsewhere (if you know how, please tell!). UWS has an internal HTTP & SSL implementation, which really speeds up our server.

@addaleax
Member

addaleax commented Nov 7, 2018

@hermanbanken FYI, there’s some context in #21711. It’s temporarily added back, but please take this up with the maintainers of UWS, because this is not a permanent solution (if it is unmaintained, you might have to look into finding new people to maintain it :/).

@hermanbanken

hermanbanken commented Nov 7, 2018

Eureka, I also just found #21711.

We're aware of the (unfortunate) lack of maintenance. We do not have a good replacement at the moment. We're looking into the @discord/uws fork, and also hoping for a patch for the (unrelated) Node 8 HTTP2 bug so we do not need to upgrade right now.
