When the dns module resolves MX, core dumped and the process aborted #25839

Closed
catroll opened this issue Jan 31, 2019 · 9 comments
Labels: confirmed-bug (Issues with confirmed bugs.)


catroll commented Jan 31, 2019

  • Version: 10.14.2
  • Platform: Linux 10-9-70-48 2.6.32-279.19.27.el6.ucloud.x86_64 #1 SMP Fri Aug 14 16:10:19 CST 2015 x86_64 x86_64 x86_64 GNU/Linux
  • Subsystem: dns
# uname -a
Linux 10-9-70-48 2.6.32-279.19.27.el6.ucloud.x86_64 #1 SMP Fri Aug 14 16:10:19 CST 2015 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.3 (Santiago)

# nvm --version
0.33.2

# nvm list
->     v10.14.2
default -> lts/* (-> v10.14.2)
node -> stable (-> v10.14.2) (default)
stable -> 10.14 (-> v10.14.2) (default)
iojs -> N/A (default)
lts/* -> lts/dubnium (-> v10.14.2)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.15.1 (-> N/A)
lts/carbon -> v8.14.0 (-> N/A)
lts/dubnium -> v10.14.2

# node -v
v10.14.2

# npm -v
6.4.1

# cat /tmp/test-dns.js 
const dns = require('dns');
const mx = (domain) => {
    dns.resolveMx(domain, function (err, addresses) {
        console.log(err);
        console.log(addresses);
    })
}
mx('torbox3uiot6wchz.onion')

# node /tmp/test-dns.js 
Segmentation fault (core dumped)
(gdb) bt
#0  0x0000003ebd27b95c in __libc_free (mem=0x1528816) at malloc.c:3731
#1  0x00000000016a3816 in ares_query (channel=0x2571a10, name=Unhandled dwarf expression opcode 0xf3
) at ../deps/cares/src/ares_query.c:124
#2  0x00000000008af32a in node::cares_wrap::(anonymous namespace)::QueryWrap::AresQuery ()
#3  0x00000000008b3193 in void node::cares_wrap::(anonymous namespace)::Query<node::cares_wrap::(anonymous namespace)::QueryMxWrap>(v8::FunctionCallbackInfo<v8::Value> const&) ()
#4  0x0000000000b5eb3f in v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) ()
#5  0x0000000000b5f6a9 in v8::internal::Builtin_HandleApiCall(int, v8::internal::Object**, v8::internal::Isolate*) ()
#6  0x00003c779595be1d in ?? ()
#7  0x0000000000000006 in ?? ()
#8  0x00003c779595bd81 in ?? ()
#9  0x00007fffffffce40 in ?? ()
#10 0x0000000000000006 in ?? ()
#11 0x00007fffffffcf08 in ?? ()
#12 0x00003c77959118d5 in ?? ()
#13 0x000012c7cdc026f1 in ?? ()
#14 0x000039d35b65aa31 in ?? ()
#15 0x0000000700000000 in ?? ()
#16 0x000012c7cdc02801 in ?? ()
#17 0x000039d35b654ba1 in ?? ()
#18 0x000006ba26dcbd49 in ?? ()
#19 0x000006ba26dc7701 in ?? ()
#20 0x000012c7cdc026f1 in ?? ()
#21 0x000012c7cdc06bd1 in ?? ()
#22 0x000006ba26dc7701 in ?? ()
#23 0x000039d35b65aa31 in ?? ()
#24 0x000006ba26dcbce9 in ?? ()
#25 0x000012c7cdc026f1 in ?? ()
#26 0x000006ba26dcbd49 in ?? ()
#27 0x000012c7cdc026f1 in ?? ()
#28 0x000000c000000000 in ?? ()
#29 0x000039d35b65dc21 in ?? ()
#30 0x000006ba26dc81a1 in ?? ()
#31 0x000006ba26dc8169 in ?? ()
#32 0x00007fffffffcf70 in ?? ()
#33 0x00003c77959118d5 in ?? ()
#34 0x000006ba26dcbca9 in ?? ()
#35 0x000039d35b654ba1 in ?? ()
#36 0x000006ba26dc7681 in ?? ()
#37 0x000006ba26dcbca9 in ?? ()
#38 0x000012c7cdc026f1 in ?? ()
#39 0x000006ba26dc9019 in ?? ()
#40 0x000006ba26dcad91 in ?? ()
#41 0x0000004c00000000 in ?? ()
#42 0x000039d35b65d9d1 in ?? ()
#43 0x000006ba26dcbc71 in ?? ()
#44 0x000006ba26dc0f79 in ?? ()
#45 0x00007fffffffcfd0 in ?? ()
#46 0x00003c77959118d5 in ?? ()
#47 0x000039d35b654ba1 in ?? ()
#48 0x00002a5d1441ad11 in ?? ()
#49 0x000039d35b654ba1 in ?? ()
#50 0x000012c7cdc026f1 in ?? ()
#51 0x0000088f16603dc1 in ?? ()
#52 0x000006ba26dcbc71 in ?? ()
#53 0x0000005600000000 in ?? ()
#54 0x000039d35b654f09 in ?? ()
#55 0x000006ba26dc0b89 in ?? ()
#56 0x000006ba26dc0f79 in ?? ()
#57 0x00007fffffffd0a8 in ?? ()
#58 0x00003c77959118d5 in ?? ()
#59 0x000006ba26dc0bc9 in ?? ()
#60 0x000006ba26dbf4a9 in ?? ()
#61 0x000006ba26dbf871 in ?? ()
#62 0x000006ba26dc0c29 in ?? ()
#63 0x000006ba26dbf929 in ?? ()
#64 0x000006ba26dbf929 in ?? ()
#65 0x000006ba26dc0bc9 in ?? ()
#66 0x000006ba26dbf4a9 in ?? ()
#67 0x000006ba26dbf871 in ?? ()
#68 0x000006ba26dc0c29 in ?? ()
#69 0x000006ba26dbf929 in ?? ()
#70 0x000006ba26dbf929 in ?? ()
#71 0x000006ba26dc0b89 in ?? ()
#72 0x0000088f16604ad1 in ?? ()
#73 0x000012c7cdc026f1 in ?? ()
#74 0x0000000000000000 in ?? ()
(gdb) info threads
  7 Thread 0x7ffff7ffc700 (LWP 8405)  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:183
  6 Thread 0x7ffff57df700 (LWP 8404)  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:183
  5 Thread 0x7ffff61e0700 (LWP 8403)  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:183
  4 Thread 0x7ffff6be1700 (LWP 8402)  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:183
  3 Thread 0x7ffff75e2700 (LWP 8401)  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:183
  2 Thread 0x7ffff7fe3700 (LWP 8400)  0x0000003ebd2e8e8c in epoll_pwait (epfd=<value optimized out>, events=<value optimized out>, maxevents=<value optimized out>, timeout=<value optimized out>, set=<value optimized out>) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:50
* 1 Thread 0x7ffff7fe5720 (LWP 8397)  0x0000003ebd27b95c in __libc_free (mem=0x1528816) at malloc.c:3731
(gdb) info frame
Stack level 0, frame at 0x7fffffffc750:
 rip = 0x3ebd27b95c in __libc_free (malloc.c:3731); saved rip 0x16a3816
 called by frame at 0x7fffffffc790
 source language c.
 Arglist at 0x7fffffffc740, args: mem=0x1528816
 Locals at 0x7fffffffc740, Previous frame's sp is 0x7fffffffc750
 Saved registers:
  rip at 0x7fffffffc748

catroll commented Jan 31, 2019

# node -p "require('dns').resolveMx('wtf.wtf', (err, records) => { console.log(records) } )"
QueryReqWrap {
  bindingName: 'queryMx',
  callback: [Function],
  hostname: 'wtf.wtf',
  oncomplete: [Function: onresolve],
  ttl: false,
  channel: ChannelWrap {} }
[ { exchange: 'mx20.mailspamprotection.com', priority: 20 },
  { exchange: 'mx10.mailspamprotection.com', priority: 10 },
  { exchange: 'mx30.mailspamprotection.com', priority: 30 } ]
# node -p "require('dns').resolveMx('wtf.onion', (err, records) => { console.log(records) } )"
Segmentation fault (core dumped)

Is the .onion domain poisoned?
I know what this is, but why did my program just die?


catroll commented Jan 31, 2019

These environments are OK (no crash):

  • Ubuntu 18.10 + node 8.11.4
  • docker node:lts-alpine

XadillaX (Contributor) commented:

I'll try to fix it.


XadillaX commented Jan 31, 2019

I think it's a bug in c-ares, and I'm preparing a PR for it now.


catroll commented Jan 31, 2019

Hey buddy~
Can you suggest a way to work around this problem? @XadillaX
If my program keeps dying during the Spring Festival holiday, my colleagues on the operations team will not be happy.


XadillaX commented Jan 31, 2019

> Hey buddy~
> Can you suggest a way to work around this problem? @XadillaX
> If my program keeps dying during the Spring Festival holiday, my colleagues on the operations team will not be happy.

Only .onion domains have this problem. As a temporary workaround, you can add an if check to see whether the domain is an .onion domain; if it is, fail fast in your application logic instead of calling the resolver.

e.g.

if (domain.endsWith('.onion') || domain.endsWith('.onion.')) {
  // Report an error to the caller instead of calling dns.resolveMx().
  return callback(new Error('MX lookup refused for .onion domain: ' + domain));
}
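
For reference, here is a minimal, self-contained sketch of that guard wrapped around dns.resolveMx; the helper name resolveMxSafe and the error message are made up for illustration:

const dns = require('dns');

// Hypothetical wrapper: refuse .onion lookups up front so the crashing
// c-ares code path is never reached; otherwise delegate to dns.resolveMx().
function resolveMxSafe(domain, callback) {
  if (domain.endsWith('.onion') || domain.endsWith('.onion.')) {
    process.nextTick(callback, new Error('MX lookup refused for .onion domain: ' + domain));
    return;
  }
  dns.resolveMx(domain, callback);
}

resolveMxSafe('torbox3uiot6wchz.onion', function (err, addresses) {
  console.log(err);       // the synthetic error for the .onion domain
  console.log(addresses); // undefined in the error case
});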


catroll commented Jan 31, 2019

Thanks!
That is the only thing I can do at the moment, and I've already done it.

XadillaX (Contributor) commented:

> Thanks!
> That is the only thing I can do at the moment, and I've already done it.

After #25840 is merged and a new version of Node.js is published, you can remove that workaround.
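
For what it's worth, once the fix lands the same lookup should report an error through the callback instead of crashing the process. A rough sketch of the expected behavior, assuming the rejection surfaces as an ENOTFOUND-style error (the exact error code is an assumption here):

const dns = require('dns');

dns.resolveMx('torbox3uiot6wchz.onion', function (err, addresses) {
  // With the fix, this callback fires with a lookup error (likely
  // err.code === 'ENOTFOUND') instead of the process segfaulting.
  console.log(err && err.code);
  console.log(addresses); // undefined when the lookup fails
});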


catroll commented Jan 31, 2019

Awesome!

@XadillaX XadillaX self-assigned this Jan 31, 2019
@XadillaX XadillaX added the confirmed-bug Issues with confirmed bugs. label Jan 31, 2019
@danbev danbev closed this as completed in 4cc9b5f Feb 6, 2019
addaleax pushed a commit that referenced this issue Feb 6, 2019
c-ares rejects *.onion MX queries but forgets to set `*bufp` to NULL, which
causes a segmentation fault when `*bufp` is later freed.

This is a quick fix in Node's bundled copy; a PR will also be opened against upstream c-ares.

PR-URL: #25840
Fixes: #25839
Refs: https://github.com/c-ares/c-ares/blob/955df98/ares_create_query.c#L97-L103
Refs: https://github.com/c-ares/c-ares/blob/955df98/ares_query.c#L124
Reviewed-By: Ben Noordhuis <[email protected]>
Reviewed-By: Colin Ihrig <[email protected]>
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Richard Lau <[email protected]>
Reviewed-By: James M Snell <[email protected]>
MylesBorins pushed a commit to bnoordhuis/io.js that referenced this issue May 16, 2019
MylesBorins pushed a commit that referenced this issue May 16, 2019
Backport-PR-URL: #27542
PR-URL: #25840
abhishekumar-tyagi pushed a commit to abhishekumar-tyagi/node that referenced this issue May 5, 2024