Investigate flaky test-fs-readfile-tostring-fail #16601

Closed
Trott opened this issue Oct 30, 2017 · 66 comments · Fixed by #27053
Labels
flaky-test (Issues and PRs related to the tests with unstable failures on the CI.) · fs (Issues and PRs related to the fs subsystem / file system.) · libuv (Issues and PRs related to the libuv dependency or the uv binding.) · test (Issues and PRs related to the tests.)

Comments

@Trott
Member

Trott commented Oct 30, 2017

  • Version: v9.0.0-pre on CI
  • Platform: osx 1010
  • Subsystem: test

https://ci.nodejs.org/job/node-test-commit-osx/13607/nodes=osx1010/console

not ok 1990 sequential/test-fs-readfile-tostring-fail
  ---
  duration_ms: 0.506
  severity: fail
  stack: |-
    /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:60
      throw err;
      ^
    
    AssertionError [ERR_ASSERTION]: false == true
        at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:34:12
        at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/common/index.js:533:15
        at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:528:3)
@Trott added the flaky-test label on Oct 30, 2017
@mscdex added the fs, macos, and test labels on Oct 30, 2017
@Leko
Contributor

Leko commented Jan 20, 2018

assert.ok(err instanceof Error);

I think assert.equal(err.constructor, Error) would be better because it displays more information.
If err is not an instance of Error (e.g. a Number), the failure message will include its constructor name,
which is nicer than assert(err instanceof Error).

> const err = new Error()
undefined

> assert(err instanceof Number)
AssertionError [ERR_ASSERTION]: false == true

> assert.equal(err.constructor, Number)
AssertionError [ERR_ASSERTION]: { [Function: Error] stackTraceLimit: 10, prepareStackTrace: undefined } == [Function: Number]

This suggestion does not solve the issue itself, but I think it would provide useful information the next time the same problem occurs. What do you think?
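
In the test itself, the suggestion would amount to something like this (a sketch only; it uses assert.strictEqual, while the snippet above used assert.equal):

fs.readFile(file, 'utf8', common.mustCall(function(err, buf) {
  // Hypothetical variant of the assertion: on failure the message shows what
  // err actually is (for example null) instead of just "false == true".
  assert.strictEqual(err && err.constructor, Error);
}));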

@joyeecheung
Member

@Leko If the error is not an Error I think in this case it's basically a null. The question is why the read/toString() succeeded here.

@Leko
Contributor

Leko commented Jan 20, 2018

it's basically a null

@joyeecheung Ah, I see. It’s just nothing.

@apapirovski
Member

This seems to be failing reasonably often again. Anyone have any ideas?

https://ci.nodejs.org/job/node-test-commit-osx/16147/nodes=osx1010/tapResults/

@MylesBorins
Contributor

@BridgeAR
Member

It seems this does not fail only on macOS:

https://ci.nodejs.org/job/node-test-commit-linux/16441/nodes=ubuntu1404-64/console

@BridgeAR changed the title from "Investigate flaky test-fs-readfile-tostring-fail on macOS" to "Investigate flaky test-fs-readfile-tostring-fail" on Feb 16, 2018
@gireeshpunathil
Member

Easily reproduced by adjusting ulimits (assuming the conventional 512-byte blocks, the first limit below allows about 5 GB, while the second allows only about 512 MB, less than the ~1 GB the test writes):

#ulimit -f 10000000
#./node test/sequential/test-fs-readfile-tostring-fail.js
#ulimit -f 1000000
#./node test/sequential/test-fs-readfile-tostring-fail.js

/home/gireesh/node/test/sequential/test-fs-readfile-tostring-fail.js:67
  throw err;
  ^

AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:

  assert.ok(err instanceof Error)

    at /home/gireesh/node/test/sequential/test-fs-readfile-tostring-fail.js:34:12
    at /home/gireesh/node/test/common/index.js:474:15
    at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:424:3)

With the patch below it shows the error was null: evidently the write was cut short and the file ended up smaller than kStringMaxLength, so the read succeeded.

--- a/test/sequential/test-fs-readfile-tostring-fail.js
+++ b/test/sequential/test-fs-readfile-tostring-fail.js
@@ -31,6 +31,7 @@ for (let i = 0; i < 201; i++) {
 stream.end();
 stream.on('finish', common.mustCall(function() {
   fs.readFile(file, 'utf8', common.mustCall(function(err, buf) {
+    console.log(err)
     assert.ok(err instanceof Error);

#./node test/sequential/test-fs-readfile-tostring-fail.js

null
...
#l /home/gireesh/node/test/.tmp/toobig.txt
-rw-r--r--  1 gireeshpunathil  staff  1024000000 May 16 22:06 /home/gireesh/node/test/.tmp/toobig.txt

I am not claiming that the CI machines had ulimit -f set to a low value, but under differing file system conditions an equivalent circumstance could come into effect.

I guess the test should validate that kStringMaxLength bytes of data were indeed written before making such an assertion.
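
A minimal sketch of such a check, reusing the names from the test (stream, file, common, assert) and using buffer.constants.MAX_STRING_LENGTH as a stand-in for kStringMaxLength; this is illustrative, not the fix that eventually landed:

const fs = require('fs');
const { MAX_STRING_LENGTH } = require('buffer').constants;

stream.on('finish', common.mustCall(function() {
  // Confirm the file really is larger than the maximum string length before
  // asserting that readFile + toString() must fail.
  const { size } = fs.statSync(file);
  if (size <= MAX_STRING_LENGTH) {
    // The write was truncated (ulimit -f, low disk space, ...), so the
    // expected "cannot create a string longer than ..." error cannot occur.
    common.skip('file is smaller than the maximum string length');
  }
  fs.readFile(file, 'utf8', common.mustCall(function(err, buf) {
    assert.ok(err instanceof Error);
  }));
}));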

@BridgeAR
Member

@gireeshpunathil
Member

Inviting interested parties to come up with a PR - I know the issue and can provide pointers.

@gireeshpunathil added the good first issue and mentor-available labels on May 19, 2018
@Trott
Member Author

Trott commented May 21, 2018

Since the most recent reported failure here is February, I'll mention that it happened again today:

https://ci.nodejs.org/job/node-test-commit-osx/18712/nodes=osx1010/console

not ok 2198 sequential/test-fs-readfile-tostring-fail
  ---
  duration_ms: 0.642
  severity: fail
  exitcode: 7
  stack: |-
    /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:67
      throw err;
      ^
    
    AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
    
      assert.ok(err instanceof Error)
    
        at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:34:12
        at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/common/index.js:443:15
        at FSReqWrap.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:53:3)
  ...

@Trott
Member Author

Trott commented May 21, 2018

Mostly guessing, but maybe common.isAIX in this line needs to be changed to ! common.isWindows?

if (common.isAIX && (Number(cp.execSync('ulimit -f')) * 512) < kStringMaxLength)
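
For reference, the suggested tweak would look roughly like this (a sketch; the skip message is illustrative, not the test's actual wording):

// Hypothetical guard: skip on any non-Windows platform whose file-size
// ulimit is too small for the ~1 GB fixture, rather than only on AIX.
if (!common.isWindows &&
    (Number(cp.execSync('ulimit -f')) * 512) < kStringMaxLength) {
  common.skip('file size ulimit is too small for this test');
}

If ulimit -f reports unlimited, Number() yields NaN and the comparison is false, so such hosts would not be skipped.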

@Trott
Member Author

Trott commented May 22, 2018

Failed on test-requireio-osx1010-x64-1:

https://ci.nodejs.org/job/node-test-commit-osx/18735/nodes=osx1010/console

not ok 2199 sequential/test-fs-readfile-tostring-fail
  ---
  duration_ms: 0.264
  severity: fail
  exitcode: 7
  stack: |-
    /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:67
      throw err;
      ^
    
    AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
    
      assert.ok(err instanceof Error)
    
        at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:34:12
        at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/common/index.js:443:15
        at FSReqWrap.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:53:3)
  ...

@Trott
Member Author

Trott commented May 22, 2018

Mostly guessing, but maybe common.isAIX in this line needs to be changed to ! common.isWindows?

if (common.isAIX && (Number(cp.execSync('ulimit -f')) * 512) < kStringMaxLength)

Alas, that won't work. We just saw it fail on test-requireio-osx1010-x64-1 in CI and ulimit -f reports unlimited on that machine.

@Trott
Member Author

Trott commented May 22, 2018

Stress test: https://ci.nodejs.org/job/node-stress-single-test/1855/nodes=osx1010/

Edit: Stress test was running on macstadium and seemed to be doing fine after 573-ish runs. Going to try again and hope I get a requireio machine this time for comparison.

@richardlau
Member

Mostly guessing, but maybe common.isAIX in this line needs to be changed to ! common.isWindows?

if (common.isAIX && (Number(cp.execSync('ulimit -f')) * 512) < kStringMaxLength)

Alas, that won't work. We just saw it fail on test-requireio-osx1010-x64-1 in CI and ulimit -f reports unlimited on that machine.

Even if not unlimited, I'm not sure ulimit -f reports in 512 byte blocks everywhere.

@gireeshpunathil
Member

I was investigating this. A few points:

  1. ulimit -f does not seem to be a factor here: if a CI host had an insufficient file size limit, this test would fail there consistently, unless someone were altering the value between runs (which I don't think is the case).

  2. However, ulimit -f <a low value> can be used to mimic the condition (it simulates running low on disk). With that in place, I installed a stream.on('error') handler expecting to catch the error, but it was never invoked.

A system trace showed that writev performed only a partial write, but this was never detected, retried, or propagated upwards; it was silently ignored:

25144/0x1b7f10: writev(0xA, 0x10305AC00, 0xC8) = 45831292 0 // it was supposed to write 1GB.

A live debugger session showed the same, and we seem to close the file as if we had written enough:

Process 25275 resuming
Process 25275 stopped
* thread #6, stop reason = breakpoint 2.1
    frame #0: 0x000000010094d1bd node`uv__fs_write(req=0x000000010250b148) at fs.c:727 [opt]
   724 	
   725 	  if (req->off < 0) {
   726 	    if (req->nbufs == 1)
-> 727 	      r = write(req->file, req->bufs[0].base, req->bufs[0].len);
   728 	    else
   729 	      r = writev(req->file, (struct iovec*) req->bufs, req->nbufs);
   730 	  } else {
Target 0: (node) stopped.
(lldb) n
(lldb) p r
(ssize_t) $10 = 5368708
(lldb) c
Process 25275 resuming
Process 25275 stopped
* thread #9, stop reason = breakpoint 3.1
    frame #0: 0x000000010094d243 node`uv__fs_write(req=0x0000000103023478) at fs.c:729 [opt]
   726 	    if (req->nbufs == 1)
   727 	      r = write(req->file, req->bufs[0].base, req->bufs[0].len);
   728 	    else
-> 729 	      r = writev(req->file, (struct iovec*) req->bufs, req->nbufs);
   730 	  } else {
   731 	    if (req->nbufs == 1) {
   732 	      r = pwrite(req->file, req->bufs[0].base, req->bufs[0].len, req->off);
Target 0: (node) stopped.
(lldb) p r
(ssize_t) $11 = 45831292
(lldb) p req->file
(uv_file) $14 = 13
(lldb) c
Process 25302 resuming
Process 25302 stopped
* thread #10, stop reason = breakpoint 6.28 7.28
    frame #0: 0x00007fff7b0ec4f8 libsystem_kernel.dylib`close
libsystem_kernel.dylib`close:
->  0x7fff7b0ec4f8 <+0>:  movl   $0x2000006, %eax          ; imm = 0x2000006 
    0x7fff7b0ec4fd <+5>:  movq   %rcx, %r10
    0x7fff7b0ec500 <+8>:  syscall 
    0x7fff7b0ec502 <+10>: jae    0x7fff7b0ec50c            ; <+20>
Target 0: (node) stopped.
(lldb) f 1
frame #1: 0x000000010094b310 node`uv__fs_work(w=<unavailable>) at fs.c:1113 [opt]
   1110	    X(ACCESS, access(req->path, req->flags));
   1111	    X(CHMOD, chmod(req->path, req->mode));
   1112	    X(CHOWN, chown(req->path, req->uid, req->gid));
-> 1113	    X(CLOSE, close(req->file));
   1114	    X(COPYFILE, uv__fs_copyfile(req));
   1115	    X(FCHMOD, fchmod(req->file, req->mode));
   1116	    X(FCHOWN, fchown(req->file, req->uid, req->gid));
(lldb) p req->file
error: Couldn't materialize: couldn't get the value of variable req: no location, value may have been optimized out
error: errored out in DoExecute, couldn't PrepareToExecuteJITExpression
(lldb) reg read rdi
     rdi = 0x000000000000000d

(lldb) c
Process 25275 resuming
/Users/gireeshpunathil/Desktop/collab/node/test/sequential/test-fs-readfile-tostring-fail.js:67
  throw err;

So this would mean we should:

  • identify and fix why we silently come out of partial writes (libuv)
  • identify a way for the test to know whether the file has enough content (test)
  • or both

/cc @nodejs/libuv

@santigimeno
Member

@gireeshpunathil If it's an issue with partial writes, can you check if libuv/libuv#1742 fixes the issue?

@gireeshpunathil
Member

o!

26084/0x1be13b:  writev(0xA, 0x103048800, 0xC8)          = 45831292 0
26084/0x1be124:  kevent(0x3, 0x7FFEEFBF70B0, 0x0)                = -1 Err#4
26084/0x1be13b:  writev(0xA, 0x103048880, 0xC0)          = -1 Err#27

With libuv/libuv#1742 applied, the error is propagated, the write is re-attempted, and finally it is surfaced properly too:
Filesize limit exceeded: 25

In the disk-near-full case the error may be different, but we would no longer reach the scenario we are currently hitting.

thanks @santigimeno !

So I guess we just have to mark this as flaky, wait for libuv#1742 to land, and for Node to consume it!

@santigimeno
Member

Nice. Let's see if we can finally move forward with the review of the PR.

@Trott added the libuv label and removed the good first issue label on May 23, 2018
@Trott
Member Author

Trott commented Mar 4, 2019

Who knows the background of the test and can state whether the write/read inter-relation is essential to the test's validity, or whether the two parts can be split?

The test was introduced in b620790 by @evanlucas. The PR was #3485 and it fixed a bug reported in #2767.

@gireeshpunathil
Member

OK, so it looks like fs.readFile is the key API being tested here, so we cannot avoid reading large content. The only question is: can we avoid writing large content and instead leverage existing large content, say process.execPath or something similar? The amount in question is 1 GB, and the node executable is much smaller than that, so it would have to be read repeatedly.

@Trott
Member Author

Trott commented Apr 1, 2019

https://ci.nodejs.org/job/node-test-commit-linux/26604/nodes=ubuntu1804-64/console

test-joyent-ubuntu1804-x64-1

00:26:47 not ok 2460 sequential/test-fs-readfile-tostring-fail
00:26:47   ---
00:26:47   duration_ms: 26.401
00:26:47   severity: fail
00:26:47   exitcode: 7
00:26:47   stack: |-
00:26:47     /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:67
00:26:47       throw err;
00:26:47       ^
00:26:47     
00:26:47     AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
00:26:47     
00:26:47       assert.ok(err instanceof Error)
00:26:47     
00:26:47         at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:34:12
00:26:47         at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/common/index.js:369:15
00:26:47         at FSReqCallback.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:54:3)
00:26:47   ...

@Trott
Member Author

Trott commented Apr 1, 2019

It may be a slight cheat to get the issue resolved, but given that the test deals with a 1 GB file, I wonder if it should be moved to pummel, where it will still be tested in CI but only once a day and on one platform.

@Trott
Member Author

Trott commented Apr 2, 2019

https://ci.nodejs.org/job/node-test-commit-linux/26616/nodes=ubuntu1804-64/console

test-joyent-ubuntu1804-x64-1

17:26:21 not ok 2460 sequential/test-fs-readfile-tostring-fail
17:26:21   ---
17:26:21   duration_ms: 27.288
17:26:21   severity: fail
17:26:21   exitcode: 7
17:26:21   stack: |-
17:26:21     /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:67
17:26:21       throw err;
17:26:21       ^
17:26:21     
17:26:21     AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
17:26:21     
17:26:21       assert.ok(err instanceof Error)
17:26:21     
17:26:21         at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:34:12
17:26:21         at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/common/index.js:369:15
17:26:21         at FSReqCallback.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:54:3)
17:26:21   ...

@gireeshpunathil
Member

but only once a day and on one platform.

@Trott - I don't know if this is feasible, but how about:

  • read a file such as the node binary in a loop, (1 GB / file size) times
  • populate the same buffer

and assert that toString() fails at the edge? (A rough sketch follows below.)

The key here, I believe, is being able to reuse the same buffer across the iterative file reads.
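
Something along these lines, perhaps (a rough standalone sketch of the idea, not a drop-in replacement for the test; it uses buffer.constants.MAX_STRING_LENGTH as a stand-in for kStringMaxLength and fills successive slices of one big buffer by re-reading the node binary):

'use strict';
const fs = require('fs');
const { MAX_STRING_LENGTH } = require('buffer').constants;

const target = MAX_STRING_LENGTH + 1;   // one byte past the toString() limit
const big = Buffer.allocUnsafe(target);
const fd = fs.openSync(process.execPath, 'r');
const chunk = 1 << 20;                  // read 1 MiB at a time

let offset = 0;
while (offset < target) {
  // Always read from position 0 of the binary, into the next slice of `big`.
  offset += fs.readSync(fd, big, offset, Math.min(chunk, target - offset), 0);
}
fs.closeSync(fd);

let threw = false;
try {
  big.toString('utf8');                 // should fail: buffer is too large
} catch (err) {
  threw = true;                         // on recent Node this should be ERR_STRING_TOO_LONG
}
console.log('toString threw as expected:', threw);

Note that this sidesteps fs.readFile, which is the API the existing test specifically exercises, so it would cover the buffer-to-string limit rather than the readFile path; that trade-off is part of the open question here.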

@Trott
Member Author

Trott commented Apr 2, 2019

test-digitalocean-ubuntu1804-x64-1 is passing consistently, but test-joyent-ubuntu1804-x64-1 is failing consistently.

@Trott
Member Author

Trott commented Apr 2, 2019

The test is failing on that host because it has less than 1Gb of free disk space, so the file gets truncated and the error does not occur when the file is read. I think moving to pummel is the right answer after all.

@Trott
Member Author

Trott commented Apr 2, 2019

I'm removing workspaces and will put it back online, then open a pull request to move this test to pummel.

@richardlau
Member

The test is failing on that host because it has less than 1Gb of free disk space, so the file gets truncated and the error does not occur when the file is read. I think moving to pummel is the right answer after all.

I'd expect the test to detect that -- Is it ignoring errors when writing the file out?

@Trott
Member Author

Trott commented Apr 2, 2019

I'd expect the test to detect that -- Is it ignoring errors when writing the file out?

const stream = fs.createWriteStream(file, {
  flags: 'a'
});

const size = kStringMaxLength / 200;
const a = Buffer.alloc(size, 'a');

for (let i = 0; i < 201; i++) {
  stream.write(a);
}

stream.end();

@Trott
Member Author

Trott commented Apr 2, 2019

I'd expect that to throw if there's a problem, and it does indeed when I mess with file permissions to cause one.

There's also this, but that seems like it shouldn't get in the way either:

function destroy() {
  try {
    fs.unlinkSync(file);
  } catch {
    // it may not exist
  }
}

...

process.on('uncaughtException', function(err) {
  destroy();
  throw err;
});

@Trott
Member Author

Trott commented Apr 2, 2019

It could be an OS-specific, file-system-specific, and/or configuration-specific thing, so if it remains a mystery someone may need to log in again to figure out why it's not throwing an error.

@richardlau
Member

If it's a stream should it be listening for the error event?

https://nodejs.org/api/stream.html#stream_writable_write_chunk_encoding_callback

The writable.write() method writes some data to the stream, and calls the
supplied callback once the data has been fully handled. If an error
occurs, the callback may or may not be called with the error as its
first argument. To reliably detect write errors, add a listener for the
'error' event.
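
Applied to the snippet quoted a few comments up, that would mean something like the following (illustrative only; file is the test's temporary file path):

const stream = fs.createWriteStream(file, {
  flags: 'a'
});

// Without this listener a failed or truncated write can go unnoticed, and the
// test later reads back a smaller file than it believes it wrote.
stream.on('error', (err) => {
  throw err;
});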

@joyeecheung
Member

Interesting observation: the recent 19 failures all happened on test-joyent-ubuntu1804-x64-1

Reason: sequential/test-fs-readfile-tostring-fail
Type: JS_TEST_FAILURE
Failed PR: 19 (#24997, #26973, #26928, #26997, #26963, #27027, #27022, #27026, #27031, #27033, #27032, #26874, #26989, #27039, #27011, #27020, #26966, #26951, #26871)
Appeared: test-joyent-ubuntu1804-x64-1
First CI: https://ci.nodejs.org/job/node-test-pull-request/22051/
Last CI: https://ci.nodejs.org/job/node-test-pull-request/22113/
Example:
not ok 2470 sequential/test-fs-readfile-tostring-fail
  ---
  duration_ms: 23.935
  severity: fail
  exitcode: 7
  stack: |-
    /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:67
      throw err;
      ^
    
    AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
    
      assert.ok(err instanceof Error)
    
        at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:34:12
        at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/common/index.js:369:15
        at FSReqCallback.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:54:3)
  ...

@targos
Member

targos commented Apr 2, 2019

It would be interesting to know what kind of value err is.
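
A temporary diagnostic along these lines could answer that (a sketch against the test's existing callback, in the same spirit as the earlier console.log(err) patch):

fs.readFile(file, 'utf8', common.mustCall(function(err, buf) {
  // Log what actually came back before the assertion throws.
  console.error('err:', err, '| typeof err:', typeof err,
                '| buf length:', buf === undefined ? 'undefined' : buf.length);
  assert.ok(err instanceof Error);
}));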

refack pushed a commit to Trott/io.js that referenced this issue Apr 2, 2019
Fixes: nodejs#16601

PR-URL: nodejs#27053
Reviewed-By: Michaël Zasso <[email protected]>
Reviewed-By: Yongsheng Zhang <[email protected]>
Reviewed-By: Refael Ackermann <[email protected]>
@refack
Contributor

refack commented Apr 2, 2019

worker config is not too shabby (maybe a bit of a small disk)

ubuntu@test-joyent-ubuntu1804-x64-1:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           3.6G        281M        2.2G        388K        1.2G        3.1G
Swap:          1.9G         12M        1.9G
ubuntu@test-joyent-ubuntu1804-x64-1:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.8G     0  1.8G   0% /dev
tmpfs           370M  672K  369M   1% /run
/dev/vda1       7.3G  6.2G  1.1G  85% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda15      105M  3.4M  102M   4% /boot/efi
/dev/vdb         98G   61M   93G   1% /mnt
tmpfs           370M     0  370M   0% /run/user/1000

Should we upgrade the host, or keep it as a canary?

@richardlau
Member

worker config is not too shabby (maybe a bit of a small disk)

ubuntu@test-joyent-ubuntu1804-x64-1:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           3.6G        281M        2.2G        388K        1.2G        3.1G
Swap:          1.9G         12M        1.9G
ubuntu@test-joyent-ubuntu1804-x64-1:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.8G     0  1.8G   0% /dev
tmpfs           370M  672K  369M   1% /run
/dev/vda1       7.3G  6.2G  1.1G  85% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda15      105M  3.4M  102M   4% /boot/efi
/dev/vdb         98G   61M   93G   1% /mnt
tmpfs           370M     0  370M   0% /run/user/1000

Should we upgrade the host, or keep it as a canary?

Maybe use it to see if #27058 gives better diagnostics when it fails?

@refack mentioned this issue Apr 3, 2019
BethGriggs pushed a commit that referenced this issue Apr 5, 2019
Fixes: #16601

PR-URL: #27053
Reviewed-By: Michaël Zasso <[email protected]>
Reviewed-By: Yongsheng Zhang <[email protected]>
Reviewed-By: Refael Ackermann <[email protected]>
BethGriggs pushed a commit that referenced this issue Apr 9, 2019
Fixes: #16601

PR-URL: #27053
Reviewed-By: Michaël Zasso <[email protected]>
Reviewed-By: Yongsheng Zhang <[email protected]>
Reviewed-By: Refael Ackermann <[email protected]>
Signed-off-by: Beth Griggs <[email protected]>
BethGriggs pushed a commit that referenced this issue Apr 9, 2019
Fixes: #16601

PR-URL: #27053
Reviewed-By: Michaël Zasso <[email protected]>
Reviewed-By: Yongsheng Zhang <[email protected]>
Reviewed-By: Refael Ackermann <[email protected]>
Signed-off-by: Beth Griggs <[email protected]>
BethGriggs pushed a commit that referenced this issue Apr 10, 2019
Fixes: #16601

PR-URL: #27053
Reviewed-By: Michaël Zasso <[email protected]>
Reviewed-By: Yongsheng Zhang <[email protected]>
Reviewed-By: Refael Ackermann <[email protected]>
Signed-off-by: Beth Griggs <[email protected]>