listxattr EFAULT #503

Closed
chrisrd opened this issue Dec 19, 2011 · 19 comments

@chrisrd
Contributor

chrisrd commented Dec 19, 2011

With spl e05bec8, zfs 30a9524, "set xattr=sa"...

I was getting an EFAULT when trying to read the xattrs on a specific file, which first showed up as an error when using "rsync -X" to copy into the ZFS fs. I went to have a closer look at the file:

brick# cd /pool/xxxx/xxxx
brick# getfattr -d xxxx
getfattr: xxxx: Bad address

...and tried setting an xattr, which it happily did without any apparent error, but reading it back again failed:

brick# setfattr -n user.foo -v foo xxxx
brick# getfattr -d xxxx
getfattr: xxxx: Bad address

Strace showed:

lstat("xxxx", {st_mode=S_IFREG|0650, st_size=932, ...}) = 0
listxattr("xxxx", (nil), 0) = -1 EFAULT (Bad address)

Changing the fs to "set xattr=dir" didn't help (it was then changed back to 'sa'). I was able to successfully set xattrs on other files in the fs. E.g.:

brick# mv xxxx xxxx-
brick# cp -a xxxx- xxxx
brick# setfattr -n user.foo -v foo xxxx
brick# getfattr -d xxxx
# file: xxxx
user.foo="foo"

Also:

brick# mv xxxx- /pool/tmp/xxxx
mv: listing attributes of `xxxx-': Bad address
brick# cd /pool/tmp
brick# setfattr -n user.foo -v 'foo' 'xxxx'
brick# getfattr -d 'xxxx'
# file: xxxx
user.foo="foo"

Augh. So I've now lost the ability to reproduce the problem on that specific file. When it happens again, is there anything in particular I should be doing to help track down the problem?

@chrisrd
Contributor Author

chrisrd commented Dec 19, 2011

OK, I've got another one...

brick# getfattr -d aaaa
getfattr: aaaa: Bad address

But interestingly:

brick# getfattr -n user.rsync.%stat aaaa
# file: aaaa
user.rsync.%stat="100250 0,0 123:123"

@behlendorf
Contributor

It's not immediately clear how this can happen; if you can come up with a consistent reproducer that would help a lot. Alternatively, if you install systemtap on your system you can get a detailed kernel stack the next time it occurs by running the following. That will show exactly how the EFAULT is occurring.

# stap /usr/share/doc/systemtap-1.4/examples/general/para-callgraph.stp \
   'module("zfs").function("*@module/zfs/*.c")' \
   'module("zfs").function("zpl_xattr_list")' -c "getfattr -d aaaa"

@chrisrd
Contributor Author

chrisrd commented Dec 20, 2011

Of course, in the grand tradition of heisenbugs, once I booted the box into a systemtap-enabled kernel, I'm no longer seeing the problem. I imagine that's due to the reboot rather than deliberate mischievousness from the FSM. I'll let you know if it pops up again.

(I hadn't looked at systemtap previously - WOW!)

@chrisrd
Contributor Author

chrisrd commented Dec 22, 2011

OK, I have another xattr "Bad address" case. Interestingly, this one survived a reboot. Stap output:

# stap para-callgraph.stp 'module("zfs").function("*@module/zfs/*.c")' 'module("zfs").function("zpl_xattr_list")' -c "getfattr -d aaaa" > /tmp/stap.fail
getfattr: aaaa: Bad address
Warning: child process exited with status 1
Error removing module 'stap_72ec80d66c7c760846c26c364b9fcf5f_11_5712': Device or resource busy.
Warning: /usr/bin/staprun exited with status: 1
Pass 5: run failed.  Try again with another '--vp 00001' option.

# cat /tmp/stap.fail
   0 getfattr(5713):->zfs_sa_get_xattr zp=0xffff8801ad208c88
   9 getfattr(5713): ->sa_size hdl=0xffff8801c2018dd8 attr=0x14 size=0xffff8801ad0d5e24
  14 getfattr(5713):  ->sa_attr_op hdl=0xffff8801c2018dd8 bulk=0xffff8801ad0d5da8 count=0x1 data_op=0x0 tx=0x0
  18 getfattr(5713):  <-sa_attr_op return=0x0
  21 getfattr(5713): <-sa_size return=0x0
  26 getfattr(5713): ->sa_spill_alloc flags=0xd0
1137 getfattr(5713): <-sa_spill_alloc return=0xffffc900186f6158
1144 getfattr(5713): ->sa_lookup hdl=0xffff8801c2018dd8 attr=0x14 buf=0xffffc900186f6158 buflen=0x18
1150 getfattr(5713):  ->sa_lookup_impl hdl=0xffff8801c2018dd8 bulk=0xffff8801ad0d5db8 count=0x1
1155 getfattr(5713):   ->sa_attr_op hdl=0xffff8801c2018dd8 bulk=0xffff8801ad0d5db8 count=0x1 data_op=0x0 tx=0x0
1159 getfattr(5713):   <-sa_attr_op return=0x0
1161 getfattr(5713):  <-sa_lookup_impl return=0x0
1164 getfattr(5713): <-sa_lookup return=0x0
1172 getfattr(5713): ->sa_spill_free obj=0xffffc900186f6158
1176 getfattr(5713): <-sa_spill_free 
1179 getfattr(5713):<-zfs_sa_get_xattr return=0xe

In contrast with one that works:

# stap para-callgraph.stp 'module("zfs").function("*@module/zfs/*.c")' 'module("zfs").function("zpl_xattr_list")' -c "getfattr -d foo"
# file: foo
user.rsync.%stat="100700 0,0 123:123"

    0 getfattr(8282):->zpl_xattr_filldir arg=0xffff8800ad4efe78 name=0xffff88016b3c63e0 name_len=0x10 offset=0x0 objnum=0x0 d_type=0x0
    7 getfattr(8282):<-zpl_xattr_filldir return=0x0
    0 getfattr(8282):->zfs_lookup dip=0xffff8801534878a0 nm=0x0 ipp=0xffff8800ad4efea0 flags=0x2 cr=0xffff8800b1716540 direntflags=0x0 realpnp=0x0
    5 getfattr(8282): ->rrw_enter rrl=0xffff8801b2b762f8 rw=0x2 tag=0xffffffffa042af88
    8 getfattr(8282): <-rrw_enter 
    15 getfattr(8282): ->zfs_get_xattrdir zp=0xffff8801534876f8 xipp=0xffff8800ad4efea0 cr=0xffff8800b1716540 flags=0x2
    22 getfattr(8282):  ->zfs_dirent_lock dlpp=0xffff8800ad4efd98 dzp=0xffff8801534876f8 name=0xffffffffa043a357 zpp=0xffff8800ad4efda0 flag=0x8 direntflags=0x0 realpnp=0x0
    31 getfattr(8282):   ->sa_lookup hdl=0xffff8801535d4040 attr=0x9 buf=0xffff8800ad4efc88 buflen=0x8
    37 getfattr(8282):    ->sa_lookup_impl hdl=0xffff8801535d4040 bulk=0xffff8800ad4efbd8 count=0x1
    44 getfattr(8282):     ->sa_attr_op hdl=0xffff8801535d4040 bulk=0xffff8800ad4efbd8 count=0x1 data_op=0x0 tx=0x0
    50 getfattr(8282):      ->sa_get_spill hdl=0xffff8801535d4040
    56 getfattr(8282):       ->dmu_spill_hold_existing bonus=0xffff88015f927bd0 tag=0x0 dbp=0xffff8801535d4068
    60 getfattr(8282):        ->zrl_add zrl=0xffff8801535fe248
    63 getfattr(8282):        <-zrl_add 
    68 getfattr(8282):        ->spa_version spa=0xffff8801c06f7000
    72 getfattr(8282):        <-spa_version return=0x1c
    76 getfattr(8282):        ->zrl_remove zrl=0xffff8801535fe248
    79 getfattr(8282):        <-zrl_remove 
    82 getfattr(8282):       <-dmu_spill_hold_existing return=0x2
    85 getfattr(8282):      <-sa_get_spill return=0x2
    88 getfattr(8282):     <-sa_attr_op return=0x2
    91 getfattr(8282):    <-sa_lookup_impl return=0x2
    94 getfattr(8282):   <-sa_lookup return=0x2
    96 getfattr(8282):  <-zfs_dirent_lock return=0x0
   101 getfattr(8282):  ->zfs_dirent_unlock dl=0xffff88018c889900
   105 getfattr(8282):  <-zfs_dirent_unlock 
   108 getfattr(8282): <-zfs_get_xattrdir return=0x2
   112 getfattr(8282): ->rrw_exit rrl=0xffff8801b2b762f8 tag=0xffffffffa042af88
   115 getfattr(8282): <-rrw_exit 
   118 getfattr(8282):<-zfs_lookup return=0x2
     0 getfattr(8282):->zpl_xattr_filldir arg=0xffff8800ad4efe78 name=0xffff88016b3c63e0 name_len=0x10 offset=0x0 objnum=0x0 d_type=0x0
     4 getfattr(8282):<-zpl_xattr_filldir return=0x0
     0 getfattr(8282):->zfs_lookup dip=0xffff8801534878a0 nm=0x0 ipp=0xffff8800ad4efea0 flags=0x2 cr=0xffff8800b1716540 direntflags=0x0 realpnp=0x0
     5 getfattr(8282): ->rrw_enter rrl=0xffff8801b2b762f8 rw=0x2 tag=0xffffffffa042af88
     8 getfattr(8282): <-rrw_enter 
    13 getfattr(8282): ->zfs_get_xattrdir zp=0xffff8801534876f8 xipp=0xffff8800ad4efea0 cr=0xffff8800b1716540 flags=0x2
    19 getfattr(8282):  ->zfs_dirent_lock dlpp=0xffff8800ad4efd98 dzp=0xffff8801534876f8 name=0xffffffffa043a357 zpp=0xffff8800ad4efda0 flag=0x8 direntflags=0x0 realpnp=0x0
    24 getfattr(8282):   ->sa_lookup hdl=0xffff8801535d4040 attr=0x9 buf=0xffff8800ad4efc88 buflen=0x8
    29 getfattr(8282):    ->sa_lookup_impl hdl=0xffff8801535d4040 bulk=0xffff8800ad4efbd8 count=0x1
    34 getfattr(8282):     ->sa_attr_op hdl=0xffff8801535d4040 bulk=0xffff8800ad4efbd8 count=0x1 data_op=0x0 tx=0x0
    38 getfattr(8282):      ->sa_get_spill hdl=0xffff8801535d4040
    43 getfattr(8282):       ->dmu_spill_hold_existing bonus=0xffff88015f927bd0 tag=0x0 dbp=0xffff8801535d4068
    47 getfattr(8282):        ->zrl_add zrl=0xffff8801535fe248
    49 getfattr(8282):        <-zrl_add 
    53 getfattr(8282):        ->spa_version spa=0xffff8801c06f7000
    56 getfattr(8282):        <-spa_version return=0x1c
    60 getfattr(8282):        ->zrl_remove zrl=0xffff8801535fe248
    63 getfattr(8282):        <-zrl_remove 
    65 getfattr(8282):       <-dmu_spill_hold_existing return=0x2
    68 getfattr(8282):      <-sa_get_spill return=0x2
    71 getfattr(8282):     <-sa_attr_op return=0x2
    73 getfattr(8282):    <-sa_lookup_impl return=0x2
    76 getfattr(8282):   <-sa_lookup return=0x2
    79 getfattr(8282):  <-zfs_dirent_lock return=0x0
    82 getfattr(8282):  ->zfs_dirent_unlock dl=0xffff88018c889900
    86 getfattr(8282):  <-zfs_dirent_unlock 
    88 getfattr(8282): <-zfs_get_xattrdir return=0x2
    92 getfattr(8282): ->rrw_exit rrl=0xffff8801b2b762f8 tag=0xffffffffa042af88
    95 getfattr(8282): <-rrw_exit 
    98 getfattr(8282):<-zfs_lookup return=0x2
Error removing module 'stap_72ec80d66c7c760846c26c364b9fcf5f_11_8281': Device or resource busy.
Warning: /usr/bin/staprun exited with status: 1
Pass 5: run failed.  Try again with another '--vp 00001' option.

Hmmm, am I reading that right? The one that works is a directory-based xattr, whereas the one that doesn't work is an sa-based xattr?

Interestingly, 'getfattr -Rd .' tells me that of the 1107 files in that directory, which would all have been created in the same rsync run, 4 come back with the 'Bad address' problem.

If I create a new file and set the xattr:

# touch foo
# setfattr -n 'user.rsync.%stat' -v '100700 0,0 123:123' foo
# getfattr -d foo
# file: foo
user.rsync.%stat="100700 0,0 123:123"

Stap on that file shows a similar output to the older file that worked, i.e. starting with zpl_xattr_filldir rather than zfs_sa_get_xattr.

The file system was created with, and is now definitely set for, xattr=sa:

# zfs get all /tank/foo | grep xattr
tank/foo  xattr                 sa                     local

...so, aside from the "bad address" issue with the sa xattrs, why is it using directory xattrs (if that is indeed what it's doing)?

@chrisrd
Contributor Author

chrisrd commented Dec 22, 2011

Looking at one of the files with the xattr problem, and with "getfattr -d" vs "getfattr -n":

# getfattr -d bar
getfattr: bar: Bad address
# getfattr -n user.rsync.%stat bar
bar: user.rsync.%stat: No such attribute

I'm confident the 'user.rsync.%stat' xattr would have been set on the file as it's a part of the rsync transfer. Then, explicitly setting that xattr:

# setfattr -n user.rsync.%stat -v '100700 0,0 264210240:264210240' bar
# getfattr -n user.rsync.%stat bar
# file: bar
user.rsync.%stat="100700 0,0 264210240:264210240"
# getfattr -d bar
getfattr: bar: Bad address

The stap output for the '-d' case:

# stap ~chris/para-callgraph.stp 'module("zfs").function("*@module/zfs/*.c")' 'module("zfs").function("zpl_xattr_list")' -c "getfattr -d bar"
getfattr: bar: Bad address
Warning: child process exited with status 1
     0 getfattr(12278):->zfs_sa_get_xattr zp=0xffff88019ce514b8
    11 getfattr(12278): ->sa_size hdl=0xffff88019ce18ce8 attr=0x14 size=0xffff88005fc09e24
    20 getfattr(12278):  ->sa_attr_op hdl=0xffff88019ce18ce8 bulk=0xffff88005fc09da8 count=0x1 data_op=0x0 tx=0x0
    25 getfattr(12278):  <-sa_attr_op return=0x0
    28 getfattr(12278): <-sa_size return=0x0
    35 getfattr(12278): ->sa_spill_alloc flags=0xd0
  1510 getfattr(12278): <-sa_spill_alloc return=0xffffc90071fe7158
  1561 getfattr(12278): ->sa_lookup hdl=0xffff88019ce18ce8 attr=0x14 buf=0xffffc90071fe7158 buflen=0x18
  1569 getfattr(12278):  ->sa_lookup_impl hdl=0xffff88019ce18ce8 bulk=0xffff88005fc09db8 count=0x1
  1575 getfattr(12278):   ->sa_attr_op hdl=0xffff88019ce18ce8 bulk=0xffff88005fc09db8 count=0x1 data_op=0x0 tx=0x0
  1579 getfattr(12278):   <-sa_attr_op return=0x0
  1582 getfattr(12278):  <-sa_lookup_impl return=0x0
  1585 getfattr(12278): <-sa_lookup return=0x0
  1596 getfattr(12278): ->sa_spill_free obj=0xffffc90071fe7158
  1601 getfattr(12278): <-sa_spill_free 
  1604 getfattr(12278):<-zfs_sa_get_xattr return=0xe
Error removing module 'stap_72ec80d66c7c760846c26c364b9fcf5f_1_12277': Device or resource busy.
Warning: /usr/bin/staprun exited with status: 1
Pass 5: run failed.  Try again with another '--vp 00001' option.

The stap output for the '-n' case:

# stap ~chris/para-callgraph.stp 'module("zfs").function("*@module/zfs/*.c")' 'module("zfs").function("zpl_xattr_list")' -c "getfattr -n user.rsync.%stat bar"
# file: bar
user.rsync.%stat="100700 0,0 264210240:264210240"

Error removing module 'stap_72ec80d66c7c760846c26c364b9fcf5f_1_14775': Device or resource busy.
Warning: /usr/bin/staprun exited with status: 1
Pass 5: run failed.  Try again with another '--vp 00001' option.

Huh? No zfs usage at all? ...or, more likely, stap simply isn't being told to trace the relevant path. At this point my knowledge of what the relevant path might be, how to point stap towards it, and whether it's even relevant, all fail me!

@behlendorf
Contributor

The stap output is very helpful. The problem appears to be that nvlist_unpack() in zfs_sa_get_xattr() is returning EFAULT. My understanding is that this indicates an encoding/decoding error, although why that's the case isn't clear. If you add module("znvpair").function("*@module/nvpair/*.c") to your stap line we can probably learn some more.

Whether sa or dir based xattrs are used depends on the exact situation. Even when you have xattr=sa set, individual xattrs which exceed 32k will be created as dir xattrs instead. Additionally, if the xattrs on a single file total more than 64k, new xattrs will be created as dir xattrs. Both cases should be very unlikely on a Linux box since, for example, ext4 has a hard xattr limit of about 4k. This spill functionality is just provided for additional compatibility.

Also, because directory xattrs never get automatically migrated to sa xattrs once created, we always check both the dir and sa xattr namespaces when listing them.
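
As an illustration of the spill behaviour described above (these commands and paths are hypothetical, not from this report), setting one xattr under the limit and one over 32k on an xattr=sa filesystem should leave the first as an sa xattr and push the second out to a dir xattr:

# small value, stays in the SA
touch /tank/foo/small && setfattr -n user.small -v 'hello' /tank/foo/small
# ~40k value, exceeds the 32k per-xattr limit so it spills to a dir xattr
touch /tank/foo/big && setfattr -n user.big -v "$(head -c 40000 /dev/zero | tr '\0' 'x')" /tank/foo/big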

@chrisrd
Contributor Author

chrisrd commented Dec 23, 2011

Below is the stap incorporating module("znvpair").function("*@module/nvpair/*.c") for a failing listxattr.

I'm confident the filesystem in question was originally created with xattr=sa, and the only xattr set on a file is like user.rsync.%stat="100700 0,0 123:123", i.e. significantly smaller than 32k. So I'm surprised that the few files I've looked at where listxattr works all appear to have dir xattrs. Is there any way of telling, other than the stap fingerprint, whether a file has sa or dir xattrs?
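
A general way to check (not something suggested in this thread, and assuming a zdb built from the same tree can read the pool) is to dump the file's dnode with zdb; on ZFS the object number is the file's inode number:

# ls -i /tank/foo/bar
# zdb -dddd tank/foo <object-number>

An sa xattr should show up packed inside the dnode's system attributes, while a dir xattr shows up as a reference to a separate hidden xattr directory object. Whether the zdb of this era prints it that way is an assumption.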

As an aside, are there any suggestions as to what to do to enable the stap module to unload? I'm using systemtap 1.6 and per previous dumps, every stap run ends like:

Error removing module 'stap_72ec80d66c7c760846c26c364b9fcf5f_1_14775': Device or resource busy.

...leaving the module installed. Doing an "rmmod -f" on that module causes a kernel NULL pointer dereference. Google is surprisingly unforthcoming (or perhaps it's my google-fu). As I'm unsure of the consequences of having multiple stap modules loaded, I'm forced to reboot between every stap run - a tad annoying!

The stap output:

  0 getfattr(6437):->zfs_sa_get_xattr zp=0xffff8805d03d90a0
  7 getfattr(6437): ->sa_size hdl=0xffff8805d3c4d130 attr=0x14 size=0xffff8805e2a75e24
 11 getfattr(6437):  ->sa_attr_op hdl=0xffff8805d3c4d130 bulk=0xffff8805e2a75da8 count=0x1 data_op=0x0 tx=0x0
 15 getfattr(6437):  <-sa_attr_op return=0x0
 18 getfattr(6437): <-sa_size return=0x0
 22 getfattr(6437): ->sa_spill_alloc flags=0xd0
278 getfattr(6437): <-sa_spill_alloc return=0xffffc900178b4158
283 getfattr(6437): ->sa_lookup hdl=0xffff8805d3c4d130 attr=0x14 buf=0xffffc900178b4158 buflen=0x18
287 getfattr(6437):  ->sa_lookup_impl hdl=0xffff8805d3c4d130 bulk=0xffff8805e2a75db8 count=0x1
291 getfattr(6437):   ->sa_attr_op hdl=0xffff8805d3c4d130 bulk=0xffff8805e2a75db8 count=0x1 data_op=0x0 tx=0x0
294 getfattr(6437):   <-sa_attr_op return=0x0
297 getfattr(6437):  <-sa_lookup_impl return=0x0
299 getfattr(6437): <-sa_lookup return=0x0
304 getfattr(6437): ->nvlist_unpack buf=0xffffc900178b4158 buflen=0x18 nvlp=0xffff8805d03d9218 kmflag=0xd0
309 getfattr(6437):  ->nvlist_xunpack buf=0xffffc900178b4158 buflen=0x18 nvlp=0xffff8805d03d9218 nva=0xffffffffa03712d0
313 getfattr(6437):   ->nvlist_xalloc nvlp=0xffff8805e2a75dd0 nvflag=0x0 nva=0xffffffffa03712d0
317 getfattr(6437):    ->nv_alloc_sleep_spl nva=0xffffffffa03712d0 size=0x28
320 getfattr(6437):    <-nv_alloc_sleep_spl return=0xffff88060ffa0a00
324 getfattr(6437):    ->nv_mem_zalloc nvp=0xffff88060ffa0a00 size=0x18
328 getfattr(6437):     ->nv_alloc_sleep_spl nva=0xffffffffa03712d0 size=0x18
331 getfattr(6437):     <-nv_alloc_sleep_spl return=0xffff8805e34e3c00
334 getfattr(6437):    <-nv_mem_zalloc return=0xffff8805e34e3c00
337 getfattr(6437):   <-nvlist_xalloc return=0x0
341 getfattr(6437):   ->nvlist_common nvl=0xffff8805e34e3c00 buf=0xffffc900178b4158 buflen=0xffff8805e2a75dc0 encoding=0x0 nvs_op=0x1
346 getfattr(6437):    ->nvs_operation nvs=0xffff8805e2a75d68 nvl=0xffff8805e34e3c00 buflen=0xffff8805e2a75dc0
350 getfattr(6437):     ->nvs_xdr_nvlist nvs=0xffff8805e2a75d68 nvl=0xffff8805e34e3c00 size=0xffff8805e2a75dc0
354 getfattr(6437):     <-nvs_xdr_nvlist return=0x0
358 getfattr(6437):     ->nvs_xdr_nvpair nvs=0xffff8805e2a75d68 nvp=0x0 size=0xffff8805e2a75d00
361 getfattr(6437):     <-nvs_xdr_nvpair return=0xe
363 getfattr(6437):    <-nvs_operation return=0xe
366 getfattr(6437):   <-nvlist_common return=0xe
369 getfattr(6437):   ->nvlist_free nvl=0xffff8805e34e3c00
373 getfattr(6437):    ->nv_free_spl nva=0xffffffffa03712d0 buf=0xffff8805e34e3c00 size=0x18
376 getfattr(6437):    <-nv_free_spl
380 getfattr(6437):    ->nv_free_spl nva=0xffffffffa03712d0 buf=0xffff88060ffa0a00 size=0x28
383 getfattr(6437):    <-nv_free_spl
385 getfattr(6437):   <-nvlist_free
387 getfattr(6437):  <-nvlist_xunpack return=0xe
390 getfattr(6437): <-nvlist_unpack return=0xe
394 getfattr(6437): ->sa_spill_free obj=0xffffc900178b4158
398 getfattr(6437): <-sa_spill_free
400 getfattr(6437):<-zfs_sa_get_xattr return=0xe

@fche

fche commented Dec 23, 2011

Make sure your stap kernel has CONFIG_MODULE_UNLOAD=y. This script might help identify the peculiarity of your setup: http://sourceware.org/git/?p=systemtap.git;a=blob_plain;f=stap-report
By the way, the module("zfs").function("*@module/zfs/*.c") syntax does not require the @filename part.
Have you tried module("zfs").function("*") alone?
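
A related sanity check, not mentioned in the thread but standard systemtap usage: stap -l lists the probe points a wildcard actually matches without running anything, and stap -L additionally shows the context variables available at each one:

# stap -l 'module("zfs").function("*")' | head
# stap -L 'module("zfs").function("zfs_sa_get_xattr")'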

@chrisrd
Contributor Author

chrisrd commented Dec 28, 2011

Yes, I definitely have CONFIG_MODULE_UNLOAD=y. Thanks for the info on removing the @filename part. I haven't tried that yet because the problem seems to be fundamental in systemtap: even a simple script like:

stap -e 'probe begin {printf("foo\n"); exit()}'

...fails the same way (can't unload the module) and the people over at the systemtap mailing list are helping me with this issue: http://thread.gmane.org/gmane.linux.systemtap/18785

@chrisrd
Contributor Author

chrisrd commented Jan 24, 2012

Note: my stap problem has been resolved (was due to incompatible gcc versions for kernel compile vs stap module compile) so it's now easier for me to stap as much as might be useful.

@chrisrd
Contributor Author

chrisrd commented Jan 25, 2012

From the stap output above, the first sign of failure comes from nvs_xdr_nvpair:

361 getfattr(6437):     <-nvs_xdr_nvpair return=0xe

...so I put some printk statements in there to see which part was returning EFAULT:

$ git diff module/nvpair/nvpair.c
diff --git a/module/nvpair/nvpair.c b/module/nvpair/nvpair.c
index 5c68984..dd6cccb 100644
--- a/module/nvpair/nvpair.c
+++ b/module/nvpair/nvpair.c
@@ -3247,7 +3247,10 @@ nvs_xdr_nvpair(nvstream_t *nvs, nvpair_t *nvp, size_t *size)

                /* get the encode and decode size */
                if (!xdr_int(xdr, &encode_len) || !xdr_int(xdr, &decode_len))
+{
+printk("<4>otndbg 1\n");
                        return (EFAULT);
+}
                *size = decode_len;

                /* are we at the end of the stream? */
@@ -3256,10 +3259,16 @@ nvs_xdr_nvpair(nvstream_t *nvs, nvpair_t *nvp, size_t *size)

                /* sanity check the size parameter */
                if (!xdr_control(xdr, XDR_GET_BYTES_AVAIL, &bytesrec))
+{
+printk("<4>otndbg 2\n");
                        return (EFAULT);
+}

                if (*size > NVS_XDR_MAX_LEN(bytesrec.xc_num_avail))
+{
+printk("<4>otndbg 3: size=%ld > NVS_XDR_MAX_LEN=%ld\n", *size, NVS_XDR_MAX_LEN(bytesrec.xc_num_avail));
                        return (EFAULT);
+}
                break;
        }

Then I used stap to trace all spl/zfs calls (script below) whilst running the failing getfattr. The kernel spat out:

otndbg 3: size=72 > NVS_XDR_MAX_LEN=24

The stap output was:

  0 getfattr(12635):->zpl_getattr mnt=0xffff88061ec42f00 dentry=0xffff8805ca25e0c0 stat=0xffff88059b831ef8
 12 getfattr(12635): ->zfs_getattr_fast ip=0xffff8805b0adb1e8 sp=0xffff88059b831ef8
 22 getfattr(12635):  ->rrw_enter rrl=0xffff88060dfd8348 rw=0x2 tag=0xffffffffa03cc370
 28 getfattr(12635):  <-rrw_enter 
 38 getfattr(12635):  ->sa_object_size hdl=0xffff8805879d8c70 blksize=0xffff88059b831f50 nblocks=0xffff88059b831f58
 45 getfattr(12635):   ->dmu_object_size_from_db db_fake=0xffff880c0452ba80 blksize=0xffff88059b831f50 nblk512=0xffff88059b831f58
 50 getfattr(12635):    ->zrl_add zrl=0xffff88061ec63058
 53 getfattr(12635):    <-zrl_add 
 57 getfattr(12635):    ->zrl_remove zrl=0xffff88061ec63058
 60 getfattr(12635):    <-zrl_remove 
 62 getfattr(12635):   <-dmu_object_size_from_db 
 64 getfattr(12635):  <-sa_object_size 
 68 getfattr(12635):  ->rrw_exit rrl=0xffff88060dfd8348 tag=0xffffffffa03cc370
 71 getfattr(12635):  <-rrw_exit 
 75 getfattr(12635): <-zfs_getattr_fast return=0x0
 77 getfattr(12635):<-zpl_getattr return=0x0
  0 getfattr(12635):->zpl_xattr_list dentry=0xffff8805ca25e0c0 buffer=0x0 buffer_size=0x0
  5 getfattr(12635): ->zfs_sa_get_xattr zp=0xffff8805b0adb040
 11 getfattr(12635):  ->sa_size hdl=0xffff8805879d8c70 attr=0x14 size=0xffff88059b831e24
 18 getfattr(12635):   ->sa_attr_op hdl=0xffff8805879d8c70 bulk=0xffff88059b831da8 count=0x1 data_op=0x0 tx=0x0
 22 getfattr(12635):   <-sa_attr_op return=0x0
 24 getfattr(12635):  <-sa_size return=0x0
 28 getfattr(12635):  ->sa_spill_alloc flags=0xd0
 36 getfattr(12635):  <-sa_spill_alloc return=0xffffc9004b7f80b8
 41 getfattr(12635):  ->sa_lookup hdl=0xffff8805879d8c70 attr=0x14 buf=0xffffc9004b7f80b8 buflen=0x18
 46 getfattr(12635):   ->sa_lookup_impl hdl=0xffff8805879d8c70 bulk=0xffff88059b831db8 count=0x1
 51 getfattr(12635):    ->sa_attr_op hdl=0xffff8805879d8c70 bulk=0xffff88059b831db8 count=0x1 data_op=0x0 tx=0x0
 54 getfattr(12635):    <-sa_attr_op return=0x0
 56 getfattr(12635):   <-sa_lookup_impl return=0x0
 59 getfattr(12635):  <-sa_lookup return=0x0
 63 getfattr(12635):  ->nvlist_unpack buf=0xffffc9004b7f80b8 buflen=0x18 nvlp=0xffff8805b0adb1b8 kmflag=0xd0
 68 getfattr(12635):   ->nvlist_xunpack buf=0xffffc9004b7f80b8 buflen=0x18 nvlp=0xffff8805b0adb1b8 nva=0xffffffffa031d360
 73 getfattr(12635):    ->nvlist_xalloc nvlp=0xffff88059b831dd0 nvflag=0x0 nva=0xffffffffa031d360
 78 getfattr(12635):     ->nv_alloc_sleep_spl nva=0xffffffffa031d360 size=0x28
 82 getfattr(12635):     <-nv_alloc_sleep_spl return=0xffff8805f0547000
 86 getfattr(12635):     ->nv_mem_zalloc nvp=0xffff8805f0547000 size=0x18
 89 getfattr(12635):      ->nv_alloc_sleep_spl nva=0xffffffffa031d360 size=0x18
 92 getfattr(12635):      <-nv_alloc_sleep_spl return=0xffff8805879dec60
 95 getfattr(12635):     <-nv_mem_zalloc return=0xffff8805879dec60
 97 getfattr(12635):    <-nvlist_xalloc return=0x0
102 getfattr(12635):    ->nvlist_common nvl=0xffff8805879dec60 buf=0xffffc9004b7f80b8 buflen=0xffff88059b831dc0 encoding=0x0 nvs_op=0x1
107 getfattr(12635):     ->nvs_operation nvs=0xffff88059b831d68 nvl=0xffff8805879dec60 buflen=0xffff88059b831dc0
112 getfattr(12635):      ->nvs_xdr_nvlist nvs=0xffff88059b831d68 nvl=0xffff8805879dec60 size=0xffff88059b831dc0
116 getfattr(12635):      <-nvs_xdr_nvlist return=0x0
120 getfattr(12635):      ->nvs_xdr_nvpair nvs=0xffff88059b831d68 nvp=0x0 size=0xffff88059b831d00
152 getfattr(12635):      <-nvs_xdr_nvpair return=0xe
155 getfattr(12635):     <-nvs_operation return=0xe
157 getfattr(12635):    <-nvlist_common return=0xe
160 getfattr(12635):    ->nvlist_free nvl=0xffff8805879dec60
165 getfattr(12635):     ->nv_free_spl nva=0xffffffffa031d360 buf=0xffff8805879dec60 size=0x18
168 getfattr(12635):     <-nv_free_spl 
171 getfattr(12635):     ->nv_free_spl nva=0xffffffffa031d360 buf=0xffff8805f0547000 size=0x28
174 getfattr(12635):     <-nv_free_spl 
176 getfattr(12635):    <-nvlist_free 
178 getfattr(12635):   <-nvlist_xunpack return=0xe
180 getfattr(12635):  <-nvlist_unpack return=0xe
184 getfattr(12635):  ->sa_spill_free obj=0xffffc9004b7f80b8
188 getfattr(12635):  <-sa_spill_free 
190 getfattr(12635): <-zfs_sa_get_xattr return=0xe
193 getfattr(12635):<-zpl_xattr_list return=0xfffffffffffffff2

Stap script:

#! /usr/bin/env stap

function trace(entry_p, extra) {
    printf("%s%s%s %s\n",
        thread_indent (entry_p),
        (entry_p>0?"->":"<-"),
        probefunc (),
        extra)
}

probe module("spl").function("*").call        { trace(1, $$parms) }
probe module("spl").function("*").return      { trace(-1, $$return) }

probe module("zavl").function("*").call       { trace(1, $$parms) }
probe module("zavl").function("*").return     { trace(-1, $$return) }

probe module("zcommon").function("*").call    { trace(1, $$parms) }
probe module("zcommon").function("*").return  { trace(-1, $$return) }

probe module("zfs").function("*").call        { trace(1, $$parms) }
probe module("zfs").function("*").return      { trace(-1, $$return) }

probe module("znvpair").function("*").call    { trace(1, $$parms) }
probe module("znvpair").function("*").return  { trace(-1, $$return) }

probe module("zunicode").function("*").call   { trace(1, $$parms) }
probe module("zunicode").function("*").return { trace(-1, $$return) }

@fche

fche commented Jan 25, 2012

chrisrd, for what it's worth, the equivalent of inserting printk's and recompiling could be using statement probes:

probe module("zwhatever").statement("*@dir/file.c:2322") { println(pp()," ",$$vars) }

@behlendorf
Contributor

@chrisrd Thanks for doing the legwork on this. Your debugging suggests there may be some flaw in the xdr decoding implementation. This error case in particular indicates that we ran out of bytes in the stream before decoding the expected length. Could you try commenting out this specific error case and running the test again to see if you're able to access the full xattr?
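
Concretely, that appears to mean neutralizing the third check instrumented in the earlier debugging patch, along these lines (a sketch against the same region of module/nvpair/nvpair.c quoted above, not a tested patch):

                 /* sanity check the size parameter */
                 if (!xdr_control(xdr, XDR_GET_BYTES_AVAIL, &bytesrec))
                         return (EFAULT);
 
-                if (*size > NVS_XDR_MAX_LEN(bytesrec.xc_num_avail))
-                        return (EFAULT);
+                /* XXX issue #503 debugging: size sanity check temporarily disabled */
+                /*
+                 * if (*size > NVS_XDR_MAX_LEN(bytesrec.xc_num_avail))
+                 *         return (EFAULT);
+                 */
                 break;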

@chrisrd
Contributor Author

chrisrd commented Jan 26, 2012

On Wed, Jan 25, 2012 at 07:11:21AM -0800, Frank Ch. Eigler wrote:

chrisrd, for what it's worth, the equivalent of inserting printk's and recompiling could be using statement probes:

probe module("zwhatever").statement("*@dir/file.c:2322") { println(pp()," ",$$vars) }

Gob. Smack.

Just amazing!

...now to start reading the systemtap manual to see how to dig into the XDR structure.

@chrisrd
Contributor Author

chrisrd commented Jan 26, 2012

No, with the "*size > NVS_XDR_MAX_LEN(bytesrec.xc_num_avail)" error return patched out it's getting a little further but still failing:

     0 getfattr(8778):->zpl_getattr mnt=0xffff8806208aa900 dentry=0xffff880c23f0bd40 stat=0xffff88060fea7ef8
    12 getfattr(8778): ->zfs_getattr_fast ip=0xffff880614772de8 sp=0xffff88060fea7ef8
    21 getfattr(8778):  ->rrw_enter rrl=0xffff88062109f348 rw=0x2 tag=0xffffffffa0414370
    27 getfattr(8778):  <-rrw_enter 
    35 getfattr(8778):  ->sa_object_size hdl=0xffff880611ac3d60 blksize=0xffff88060fea7f50 nblocks=0xffff88060fea7f58
    44 getfattr(8778):   ->dmu_object_size_from_db db_fake=0xffff880608d2a970 blksize=0xffff88060fea7f50 nblk512=0xffff88060fea7f58
    51 getfattr(8778):    ->zrl_add zrl=0xffff8806098fb058
    56 getfattr(8778):    <-zrl_add 
    60 getfattr(8778):    ->zrl_remove zrl=0xffff8806098fb058
    65 getfattr(8778):    <-zrl_remove 
    68 getfattr(8778):   <-dmu_object_size_from_db 
    71 getfattr(8778):  <-sa_object_size 
    77 getfattr(8778):  ->rrw_exit rrl=0xffff88062109f348 tag=0xffffffffa0414370
    82 getfattr(8778):  <-rrw_exit 
    86 getfattr(8778): <-zfs_getattr_fast return=0x0
    89 getfattr(8778):<-zpl_getattr return=0x0
     0 getfattr(8778):->zpl_xattr_list dentry=0xffff880c23f0bd40 buffer=0x0 buffer_size=0x0
     6 getfattr(8778): ->zfs_sa_get_xattr zp=0xffff880614772c40
    11 getfattr(8778):  ->sa_size hdl=0xffff880611ac3d60 attr=0x14 size=0xffff88060fea7e24
    18 getfattr(8778):   ->sa_attr_op hdl=0xffff880611ac3d60 bulk=0xffff88060fea7da8 count=0x1 data_op=0x0 tx=0x0
    22 getfattr(8778):   <-sa_attr_op return=0x0
    24 getfattr(8778):  <-sa_size return=0x0
    28 getfattr(8778):  ->sa_spill_alloc flags=0xd0
    35 getfattr(8778):  <-sa_spill_alloc return=0xffffc9001888b298
    40 getfattr(8778):  ->sa_lookup hdl=0xffff880611ac3d60 attr=0x14 buf=0xffffc9001888b298 buflen=0x18
    46 getfattr(8778):   ->sa_lookup_impl hdl=0xffff880611ac3d60 bulk=0xffff88060fea7db8 count=0x1
    50 getfattr(8778):    ->sa_attr_op hdl=0xffff880611ac3d60 bulk=0xffff88060fea7db8 count=0x1 data_op=0x0 tx=0x0
    53 getfattr(8778):    <-sa_attr_op return=0x0
    56 getfattr(8778):   <-sa_lookup_impl return=0x0
    58 getfattr(8778):  <-sa_lookup return=0x0
    62 getfattr(8778):  ->nvlist_unpack buf=0xffffc9001888b298 buflen=0x18 nvlp=0xffff880614772db8 kmflag=0xd0
    67 getfattr(8778):   ->nvlist_xunpack buf=0xffffc9001888b298 buflen=0x18 nvlp=0xffff880614772db8 nva=0xffffffffa0365360
    72 getfattr(8778):    ->nvlist_xalloc nvlp=0xffff88060fea7dd0 nvflag=0x0 nva=0xffffffffa0365360
    77 getfattr(8778):     ->nv_alloc_sleep_spl nva=0xffffffffa0365360 size=0x28
    81 getfattr(8778):     <-nv_alloc_sleep_spl return=0xffff88060a1ab4c0
    84 getfattr(8778):     ->nv_mem_zalloc nvp=0xffff88060a1ab4c0 size=0x18
    88 getfattr(8778):      ->nv_alloc_sleep_spl nva=0xffffffffa0365360 size=0x18
    91 getfattr(8778):      <-nv_alloc_sleep_spl return=0xffff880624ac5f00
    94 getfattr(8778):     <-nv_mem_zalloc return=0xffff880624ac5f00
    96 getfattr(8778):    <-nvlist_xalloc return=0x0
   100 getfattr(8778):    ->nvlist_common nvl=0xffff880624ac5f00 buf=0xffffc9001888b298 buflen=0xffff88060fea7dc0 encoding=0x0 nvs_op=0x1
   105 getfattr(8778):     ->nvs_operation nvs=0xffff88060fea7d68 nvl=0xffff880624ac5f00 buflen=0xffff88060fea7dc0
   110 getfattr(8778):      ->nvs_xdr_nvlist nvs=0xffff88060fea7d68 nvl=0xffff880624ac5f00 size=0xffff88060fea7dc0
   114 getfattr(8778):      <-nvs_xdr_nvlist return=0x0
   118 getfattr(8778):      ->nvs_xdr_nvpair nvs=0xffff88060fea7d68 nvp=0x0 size=0xffff88060fea7d00
module("znvpair").statement("nvs_xdr_nvpair@/home/chris/git/zfsonlinux/zfs/module/nvpair/../../module/nvpair/nvpair.c:3269")
  bytesrec={...} nvs=0xffff88060fea7d68 nvp=0x0 size=0xffff88060fea7d00 xdr=0xffff88060fea7d48 encode_len=? decode_len=0x48
  *size=72
  bytesrec={.xc_is_last_record=1, .xc_num_avail=4}
  xdr={.x_ops=0xffffffffa02ed8d0, .x_addr="", .x_addr_end="", .x_op=1}
   160 getfattr(8778):      <-nvs_xdr_nvpair return=0x0
   163 getfattr(8778):      ->nv_mem_zalloc nvp=0xffff88060a1ab4c0 size=0x58
   166 getfattr(8778):       ->nv_alloc_sleep_spl nva=0xffffffffa0365360 size=0x58
   169 getfattr(8778):       <-nv_alloc_sleep_spl return=0xffff8806240dcb40
   172 getfattr(8778):      <-nv_mem_zalloc return=0xffff8806240dcb40
   176 getfattr(8778):      ->nvs_xdr_nvp_op nvs=0xffff88060fea7d68 nvp=0xffff8806240dcb50
   179 getfattr(8778):      <-nvs_xdr_nvp_op return=0xe
   183 getfattr(8778):      ->nvp_buf_free nvl=0xffff880624ac5f00 nvp=0xffff8806240dcb50
   187 getfattr(8778):       ->nv_free_spl nva=0xffffffffa0365360 buf=0xffff8806240dcb40 size=0x58
   191 getfattr(8778):       <-nv_free_spl 
   193 getfattr(8778):      <-nvp_buf_free 
   195 getfattr(8778):     <-nvs_operation return=0xe
   197 getfattr(8778):    <-nvlist_common return=0xe
   200 getfattr(8778):    ->nvlist_free nvl=0xffff880624ac5f00
   204 getfattr(8778):     ->nv_free_spl nva=0xffffffffa0365360 buf=0xffff880624ac5f00 size=0x18
   207 getfattr(8778):     <-nv_free_spl 
   210 getfattr(8778):     ->nv_free_spl nva=0xffffffffa0365360 buf=0xffff88060a1ab4c0 size=0x28
   212 getfattr(8778):     <-nv_free_spl 
   214 getfattr(8778):    <-nvlist_free 
   216 getfattr(8778):   <-nvlist_xunpack return=0xe
   219 getfattr(8778):  <-nvlist_unpack return=0xe
   223 getfattr(8778):  ->sa_spill_free obj=0xffffc9001888b298
   226 getfattr(8778):  <-sa_spill_free 
   228 getfattr(8778): <-zfs_sa_get_xattr return=0xe
   231 getfattr(8778):<-zpl_xattr_list return=0xfffffffffffffff2

Note 1: added to previous stap script:

probe module("znvpair").statement("*@nvpair/nvpair.c:3269") {
  println(pp());
  println("  ",$$vars);
  println("  *size=",kernel_long($size));
  println("  bytesrec=", $bytesrec$$);
  println("  xdr=", $xdr$$);
}  

Note 2: the above stap output was for the second run. The first run produced a 53k line output (with all spl/zfs functions monitored) - I imagine the difference between accessing uncached vs cached. If interested the output for the uncached case is available here: http://www.onthe.net.au/private/stap.out.bz2

behlendorf added a commit to behlendorf/zfs that referenced this issue Mar 2, 2012
A private SA handle must be used to ensure we can drop the dbuf
hold on the spill block prior to calling dmu_tx_commit().  If we
call dmu_tx_commit() before sa_handle_destroy(), then our hold
will trigger a copy of the dbuf to be made.  This is done to
prevent data from leaking in to the syncing txg.  As a result
the original dirty spill block will remain cached.

Additionally, relying on the shared zp->z_sa_hdl is unsafe in
the xattr context because the znode may be asynchronously dropped
from the cache.  It's far safer and simpler just to use a private
handle for xattrs.  Plus any additional overhead is offset by
the avoidance of the previously mentioned memory copy.

These forever dirty buffers can be noticed in the arcstats under
the anon_size.  On a quiescent system the value should be zero.
Without this fix and a SA xattr write workload you will see
anon_size increase.  Eventually, if enough dirty data builds up
your system it will appear to hang.  This occurs because the dmu
won't allow new txs to be assigned until that dirty data is
flushed, and it won't be because it's not part of an assigned tx.

As an aside, I typically see anon_size lurk around 16k so I think
there is another place in the code which needs a similar fix.
However, this value doesn't grow over time so it isn't critical.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#503
Issue openzfs#513
@behlendorf
Contributor

@chrisrd Please try the above patch (which has been merged into master); it should resolve both this issue and issue #513. If it doesn't please let me know!

@chrisrd
Contributor Author

chrisrd commented Mar 15, 2012

Apologies for the delay in responding, I've just had a chance to look at this again...

Is it expected that the xattrs added to files whilst running with xattr=sa prior to @ec2626a would be ok on-disk and it's just a problem reading them out again, or is it expected that the on-disk xattrs are corrupted and this causes the listxattr problem?

I've just tried openzfs/spl@a3a69b7 and openzfs/spl@0ece356 on linux-3.1.10

I tried to read the xattrs on a "known problematic" file from previous issues, using
getfattr -dh /pool/foo (the file is actually a symlink, I want the xattrs on the symlink itself).

The getfattr just hangs and is unkillable with ^C. strace on the process also hangs and is likewise unkillable.

Interestingly, ps shows the TIME for the getfattr is climbing steadily, and uptime shows a constant load of around 1 on an otherwise idle box.

zpool iostat 10 also hangs, and the uptime load is now steady around 2.

Doing this a few times several minutes apart:

find /proc/spl /proc/sys/kernel/spl -type f | while read f; do echo "# cat $f"; cat $f; done > /tmp/zfs-stat.1

...shows a modest amount of activity within zfs:

# diff -u /tmp/zfs-stat.[23] | egrep '^( #|\+|\-)'
--- /tmp/zfs-stat.2 2012-03-15 18:56:01.209799976 +1100
+++ /tmp/zfs-stat.3 2012-03-15 19:14:43.878796710 +1100
-Thu Mar 15 18:56:01 EST 2012
+Thu Mar 15 19:14:43 EST 2012
 # cat /proc/spl/kstat/zfs/vdev_cache_stats
-5 1 0x01 3 144 1617146552828 3318518735060
+5 1 0x01 3 144 1617146552828 4441210430243
 # cat /proc/spl/kstat/zfs/arcstats
-4 1 0x01 77 3696 1617146487293 3318520593689
+4 1 0x01 77 3696 1617146487293 4441212209307
 # cat /proc/spl/kstat/zfs/dmu_tx
-3 1 0x01 12 576 1617143416957 3318522255189
+3 1 0x01 12 576 1617143416957 4441214156147
 # cat /proc/spl/kstat/zfs/zfetchstats
-2 1 0x01 11 528 1617143410979 3318523959161
+2 1 0x01 11 528 1617143410979 4441216010689
 # cat /proc/spl/kstat/zfs/xuio_stats
-1 1 0x01 6 288 1617095164234 3318525559204
+1 1 0x01 6 288 1617095164234 4441217888828
 # cat /proc/spl/kstat/zfs/fm
-0 1 0x01 4 192 1617094122657 3318527138729
+0 1 0x01 4 192 1617094122657 4441219839912
 # cat /proc/sys/kernel/spl/vm/swapfs_reserve
 # cat /proc/sys/kernel/spl/vm/availrmem
-24555810
+24546575
 # cat /proc/sys/kernel/spl/vm/freemem
-24531623
+24522305
 # cat /proc/sys/kernel/spl/vm/physmem
 # cat /proc/sys/kernel/spl/kmem/kmem_used

Actually, these are the symptoms of #539, which I thought might have been due to #513. However, this is on a freshly booted system without any NULL pointer dereferences, so it looks like #539 wasn't down to #513 after all. More's the pity!

Oh, I've just seen that my kern.log now has:

[ 2262.257726] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=15002 jiffies)
[ 2442.114612] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=60034 jiffies)
[ 2548.921638] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=15002 jiffies)
[ 2621.971559] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=105066 jiffies)
[ 2728.778604] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=60034 jiffies)
[ 2801.828512] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=150098 jiffies)
[ 2908.635567] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=105066 jiffies)
[ 2981.685490] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=195130 jiffies)
[ 3088.492542] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=150098 jiffies)
[ 3161.542470] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=240162 jiffies)
[ 3268.349538] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=195130 jiffies)
[ 3341.399457] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=285194 jiffies)
[ 3448.206517] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=240162 jiffies)
[ 3521.256443] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=330226 jiffies)
[ 3628.063504] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=285194 jiffies)
[ 3701.113432] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=375258 jiffies)
[ 3807.920494] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=330226 jiffies)
[ 3880.970422] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=420290 jiffies)
[ 3987.777480] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=375258 jiffies)
[ 4060.827412] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=465322 jiffies)
[ 4167.634482] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=420290 jiffies)
[ 4240.684399] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=510354 jiffies)
[ 4347.491462] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=465322 jiffies)
[ 4420.541386] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=555386 jiffies)
[ 4527.348449] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=510354 jiffies)
[ 4600.398388] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=600418 jiffies)
[ 4707.205441] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=555386 jiffies)
[ 4780.255369] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=645450 jiffies)
[ 4887.062432] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=600418 jiffies)
[ 4960.112356] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=690482 jiffies)
[ 5066.919417] INFO: rcu_bh_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=645450 jiffies)
[ 5139.969348] INFO: rcu_sched_state detected stalls on CPUs/tasks: { 8} (detected by 2, t=735514 jiffies)

I.e. we're now looking like #557. With reference to the suggested work-around in #557 of increasing min_free_kbytes, I'm running with:

# cat /proc/sys/vm/min_free_kbytes 
524288

Cheers,

Chris.

@behlendorf
Contributor

Chris, I believe it would have been possible (but rare) that some of the xattrs would have been only half written. That could make the subsequent getattr fail as you described, and it would be persistent since it's now saved on disk. With the patch that will no longer be possible, since the handle is destroyed before the tx is committed.

behlendorf pushed a commit to behlendorf/zfs that referenced this issue Aug 25, 2012
When calling sa_update() and friends it is possible that a spill
buffer will be needed to accommodate the update.  When this happens
a hold is taken on the new dbuf and that hold must be released
before calling dmu_tx_commit().  Failing to release the hold will
cause a copy of the dbuf to be made in dbuf_sync_leaf().  This is
done to ensure further updates to the dbuf never sneak in to the
syncing txg.

This could be left to the sa_update() caller.  But then the caller
would need to be aware of this internal SA implementation detail.
It is therefore preferable to handle this all internally in the
SA implementation.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#503
Issue openzfs#513
behlendorf pushed a commit that referenced this issue Aug 25, 2012
When calling sa_update() and friends it is possible that a spill
buffer will be needed to accommodate the update.  When this happens
a hold is taken on the new dbuf and that hold must be released
before calling dmu_tx_commit().  Failing to release the hold will
cause a copy of the dbuf to be made in dbuf_sync_leaf().  This is
done to ensure further updates to the dbuf never sneak in to the
syncing txg.

This could be left to the sa_update() caller.  But then the caller
would need to be aware of this internal SA implementation detail.
It is therefore preferable to handle this all internally in the
SA implementation.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #503
Closes #513
@behlendorf
Contributor

This was fixed properly under #1978; the fix was merged as:

5d862cb Properly handle updates of variably-sized SA entries.
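
To check whether a given checkout already contains that fix (a hypothetical helper, assuming the abbreviated hash resolves unambiguously in your clone):

# git merge-base --is-ancestor 5d862cb HEAD && echo "fix present"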

ryao added a commit to ryao/zfs that referenced this issue Oct 9, 2014
The below excerpt of a backtrace is from a ztest failure when running
ZoL's ztest.

#453 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
#454 0x00007f03c806106e in vdev_queue_io (zio=zio@entry=0x7f0350003de0) at ../../module/zfs/vdev_queue.c:747
#455 0x00007f03c80818c1 in zio_vdev_io_start (zio=0x7f0350003de0) at ../../module/zfs/zio.c:2659
#456 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003de0) at ../../module/zfs/zio.c:1399
#457 zio_nowait (zio=0x7f0350003de0) at ../../module/zfs/zio.c:1456
#458 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350003a10) at ../../module/zfs/vdev_mirror.c:374
#459 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003a10) at ../../module/zfs/zio.c:1399
#460 zio_nowait (zio=0x7f0350003a10) at ../../module/zfs/zio.c:1456
#461 0x00007f03c806464c in vdev_raidz_io_start (zio=0x7f0350003380) at ../../module/zfs/vdev_raidz.c:1607
#462 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003380) at ../../module/zfs/zio.c:1399
#463 zio_nowait (zio=0x7f0350003380) at ../../module/zfs/zio.c:1456
#464 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350002fb0) at ../../module/zfs/vdev_mirror.c:374
#465 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002fb0) at ../../module/zfs/zio.c:1399
#466 zio_nowait (zio=0x7f0350002fb0) at ../../module/zfs/zio.c:1456
#467 0x00007f03c805ed43 in vdev_mirror_io_done (zio=0x7f033957ebf0) at ../../module/zfs/vdev_mirror.c:499
#468 0x00007f03c807a0c0 in zio_vdev_io_done (zio=0x7f033957ebf0) at ../../module/zfs/zio.c:2707
#469 0x00007f03c808285b in __zio_execute (zio=0x7f033957ebf0) at ../../module/zfs/zio.c:1399
#470 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f0390001330, pio=0x7f033957ebf0) at ../../module/zfs/zio.c:547
#471 zio_done (zio=0x7f0390001330) at ../../module/zfs/zio.c:3278
#472 0x00007f03c808285b in __zio_execute (zio=0x7f0390001330) at ../../module/zfs/zio.c:1399
#473 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4013a00, pio=0x7f0390001330) at ../../module/zfs/zio.c:547
#474 zio_done (zio=0x7f03b4013a00) at ../../module/zfs/zio.c:3278
#475 0x00007f03c808285b in __zio_execute (zio=0x7f03b4013a00) at ../../module/zfs/zio.c:1399
#476 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4014210, pio=0x7f03b4013a00) at ../../module/zfs/zio.c:547
#477 zio_done (zio=0x7f03b4014210) at ../../module/zfs/zio.c:3278
#478 0x00007f03c808285b in __zio_execute (zio=0x7f03b4014210) at ../../module/zfs/zio.c:1399
#479 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4014620, pio=0x7f03b4014210) at ../../module/zfs/zio.c:547
#480 zio_done (zio=0x7f03b4014620) at ../../module/zfs/zio.c:3278
#481 0x00007f03c807a6d3 in __zio_execute (zio=0x7f03b4014620) at ../../module/zfs/zio.c:1399
#482 zio_execute (zio=zio@entry=0x7f03b4014620) at ../../module/zfs/zio.c:1337
#483 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
#484 0x00007f03c806106e in vdev_queue_io (zio=zio@entry=0x7f0350002be0) at ../../module/zfs/vdev_queue.c:747
#485 0x00007f03c80818c1 in zio_vdev_io_start (zio=0x7f0350002be0) at ../../module/zfs/zio.c:2659
#486 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002be0) at ../../module/zfs/zio.c:1399
#487 zio_nowait (zio=0x7f0350002be0) at ../../module/zfs/zio.c:1456
#488 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350002810) at ../../module/zfs/vdev_mirror.c:374
#489 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002810) at ../../module/zfs/zio.c:1399
#490 zio_nowait (zio=0x7f0350002810) at ../../module/zfs/zio.c:1456
#491 0x00007f03c8064593 in vdev_raidz_io_start (zio=0x7f0350001270) at ../../module/zfs/vdev_raidz.c:1591
#492 0x00007f03c807f243 in __zio_execute (zio=0x7f0350001270) at ../../module/zfs/zio.c:1399
#493 zio_nowait (zio=0x7f0350001270) at ../../module/zfs/zio.c:1456
#494 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350001e60) at ../../module/zfs/vdev_mirror.c:374
#495 0x00007f03c807f243 in __zio_execute (zio=0x7f0350001e60) at ../../module/zfs/zio.c:1399
#496 zio_nowait (zio=0x7f0350001e60) at ../../module/zfs/zio.c:1456
#497 0x00007f03c805ed43 in vdev_mirror_io_done (zio=0x7f033a0c39c0) at ../../module/zfs/vdev_mirror.c:499
#498 0x00007f03c807a0c0 in zio_vdev_io_done (zio=0x7f033a0c39c0) at ../../module/zfs/zio.c:2707
#499 0x00007f03c808285b in __zio_execute (zio=0x7f033a0c39c0) at ../../module/zfs/zio.c:1399
#500 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03a8003c00, pio=0x7f033a0c39c0) at ../../module/zfs/zio.c:547
#501 zio_done (zio=0x7f03a8003c00) at ../../module/zfs/zio.c:3278
#502 0x00007f03c808285b in __zio_execute (zio=0x7f03a8003c00) at ../../module/zfs/zio.c:1399
#503 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800c400, pio=0x7f03a8003c00) at ../../module/zfs/zio.c:547
#504 zio_done (zio=0x7f038800c400) at ../../module/zfs/zio.c:3278
#505 0x00007f03c808285b in __zio_execute (zio=0x7f038800c400) at ../../module/zfs/zio.c:1399
#506 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800da00, pio=0x7f038800c400) at ../../module/zfs/zio.c:547
#507 zio_done (zio=0x7f038800da00) at ../../module/zfs/zio.c:3278
#508 0x00007f03c808285b in __zio_execute (zio=0x7f038800da00) at ../../module/zfs/zio.c:1399
#509 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800fd80, pio=0x7f038800da00) at ../../module/zfs/zio.c:547
#510 zio_done (zio=0x7f038800fd80) at ../../module/zfs/zio.c:3278
#511 0x00007f03c807a6d3 in __zio_execute (zio=0x7f038800fd80) at ../../module/zfs/zio.c:1399
#512 zio_execute (zio=zio@entry=0x7f038800fd80) at ../../module/zfs/zio.c:1337
#513 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
#514 0x00007f03c806119d in vdev_queue_io_done (zio=zio@entry=0x7f03a0010950) at ../../module/zfs/vdev_queue.c:775
#515 0x00007f03c807a0e8 in zio_vdev_io_done (zio=0x7f03a0010950) at ../../module/zfs/zio.c:2686
#516 0x00007f03c807a6d3 in __zio_execute (zio=0x7f03a0010950) at ../../module/zfs/zio.c:1399
#517 zio_execute (zio=0x7f03a0010950) at ../../module/zfs/zio.c:1337
#518 0x00007f03c7fcd0c4 in taskq_thread (arg=0x966d50) at ../../lib/libzpool/taskq.c:215
#519 0x00007f03c7fc7937 in zk_thread_helper (arg=0x967e90) at ../../lib/libzpool/kernel.c:135
#520 0x00007f03c78890a3 in start_thread (arg=0x7f03c2703700) at pthread_create.c:309
#521 0x00007f03c75c50fd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

The backtrace was an infinite loop of `vdev_queue_io_to_issue()` invoking
`zio_execute()` until it overran the stack. vdev_queue_io_to_issue() will only
invoke `zio_execute()` on raidz vdevs when aggregation I/Os are generated to
improve aggregation continuity. These I/Os do not trigger any writes. However,
it appears that they can be generated in such a way that they recurse
infinitely upon return to `vdev_queue_io_to_issue()`. As a consequence, we see
the number of parents increase by 1 each time the recursion returns to
`vdev_raidz_io_start()`.

Signed-off-by: Richard Yao <[email protected]>
ryao added a commit to ryao/zfs that referenced this issue Oct 9, 2014
The below excerpt of a backtrace is from a ztest failure when running
ZoL's ztest.

/openzfs#453 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
/openzfs#454 0x00007f03c806106e in vdev_queue_io (zio=zio@entry=0x7f0350003de0) at ../../module/zfs/vdev_queue.c:747
/openzfs#455 0x00007f03c80818c1 in zio_vdev_io_start (zio=0x7f0350003de0) at ../../module/zfs/zio.c:2659
/openzfs#456 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003de0) at ../../module/zfs/zio.c:1399
/openzfs#457 zio_nowait (zio=0x7f0350003de0) at ../../module/zfs/zio.c:1456
/openzfs#458 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350003a10) at ../../module/zfs/vdev_mirror.c:374
/openzfs#459 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003a10) at ../../module/zfs/zio.c:1399
/openzfs#460 zio_nowait (zio=0x7f0350003a10) at ../../module/zfs/zio.c:1456
/openzfs#461 0x00007f03c806464c in vdev_raidz_io_start (zio=0x7f0350003380) at ../../module/zfs/vdev_raidz.c:1607
/openzfs#462 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003380) at ../../module/zfs/zio.c:1399
/openzfs#463 zio_nowait (zio=0x7f0350003380) at ../../module/zfs/zio.c:1456
/openzfs#464 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350002fb0) at ../../module/zfs/vdev_mirror.c:374
/openzfs#465 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002fb0) at ../../module/zfs/zio.c:1399
/openzfs#466 zio_nowait (zio=0x7f0350002fb0) at ../../module/zfs/zio.c:1456
/openzfs#467 0x00007f03c805ed43 in vdev_mirror_io_done (zio=0x7f033957ebf0) at ../../module/zfs/vdev_mirror.c:499
/openzfs#468 0x00007f03c807a0c0 in zio_vdev_io_done (zio=0x7f033957ebf0) at ../../module/zfs/zio.c:2707
/openzfs#469 0x00007f03c808285b in __zio_execute (zio=0x7f033957ebf0) at ../../module/zfs/zio.c:1399
/openzfs#470 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f0390001330, pio=0x7f033957ebf0) at ../../module/zfs/zio.c:547
/openzfs#471 zio_done (zio=0x7f0390001330) at ../../module/zfs/zio.c:3278
/openzfs#472 0x00007f03c808285b in __zio_execute (zio=0x7f0390001330) at ../../module/zfs/zio.c:1399
/openzfs#473 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4013a00, pio=0x7f0390001330) at ../../module/zfs/zio.c:547
/openzfs#474 zio_done (zio=0x7f03b4013a00) at ../../module/zfs/zio.c:3278
/openzfs#475 0x00007f03c808285b in __zio_execute (zio=0x7f03b4013a00) at ../../module/zfs/zio.c:1399
/openzfs#476 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4014210, pio=0x7f03b4013a00) at ../../module/zfs/zio.c:547
/openzfs#477 zio_done (zio=0x7f03b4014210) at ../../module/zfs/zio.c:3278
/openzfs#478 0x00007f03c808285b in __zio_execute (zio=0x7f03b4014210) at ../../module/zfs/zio.c:1399
/openzfs#479 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4014620, pio=0x7f03b4014210) at ../../module/zfs/zio.c:547
/openzfs#480 zio_done (zio=0x7f03b4014620) at ../../module/zfs/zio.c:3278
/openzfs#481 0x00007f03c807a6d3 in __zio_execute (zio=0x7f03b4014620) at ../../module/zfs/zio.c:1399
/openzfs#482 zio_execute (zio=zio@entry=0x7f03b4014620) at ../../module/zfs/zio.c:1337
/openzfs#483 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
/openzfs#484 0x00007f03c806106e in vdev_queue_io (zio=zio@entry=0x7f0350002be0) at ../../module/zfs/vdev_queue.c:747
/openzfs#485 0x00007f03c80818c1 in zio_vdev_io_start (zio=0x7f0350002be0) at ../../module/zfs/zio.c:2659
/openzfs#486 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002be0) at ../../module/zfs/zio.c:1399
/openzfs#487 zio_nowait (zio=0x7f0350002be0) at ../../module/zfs/zio.c:1456
/openzfs#488 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350002810) at ../../module/zfs/vdev_mirror.c:374
/openzfs#489 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002810) at ../../module/zfs/zio.c:1399
/openzfs#490 zio_nowait (zio=0x7f0350002810) at ../../module/zfs/zio.c:1456
/openzfs#491 0x00007f03c8064593 in vdev_raidz_io_start (zio=0x7f0350001270) at ../../module/zfs/vdev_raidz.c:1591
/openzfs#492 0x00007f03c807f243 in __zio_execute (zio=0x7f0350001270) at ../../module/zfs/zio.c:1399
/openzfs#493 zio_nowait (zio=0x7f0350001270) at ../../module/zfs/zio.c:1456
/openzfs#494 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350001e60) at ../../module/zfs/vdev_mirror.c:374
/openzfs#495 0x00007f03c807f243 in __zio_execute (zio=0x7f0350001e60) at ../../module/zfs/zio.c:1399
/openzfs#496 zio_nowait (zio=0x7f0350001e60) at ../../module/zfs/zio.c:1456
/openzfs#497 0x00007f03c805ed43 in vdev_mirror_io_done (zio=0x7f033a0c39c0) at ../../module/zfs/vdev_mirror.c:499
/openzfs#498 0x00007f03c807a0c0 in zio_vdev_io_done (zio=0x7f033a0c39c0) at ../../module/zfs/zio.c:2707
/openzfs#499 0x00007f03c808285b in __zio_execute (zio=0x7f033a0c39c0) at ../../module/zfs/zio.c:1399
/openzfs#500 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03a8003c00, pio=0x7f033a0c39c0) at ../../module/zfs/zio.c:547
/openzfs#501 zio_done (zio=0x7f03a8003c00) at ../../module/zfs/zio.c:3278
/openzfs#502 0x00007f03c808285b in __zio_execute (zio=0x7f03a8003c00) at ../../module/zfs/zio.c:1399
/openzfs#503 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800c400, pio=0x7f03a8003c00) at ../../module/zfs/zio.c:547
/openzfs#504 zio_done (zio=0x7f038800c400) at ../../module/zfs/zio.c:3278
/openzfs#505 0x00007f03c808285b in __zio_execute (zio=0x7f038800c400) at ../../module/zfs/zio.c:1399
/openzfs#506 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800da00, pio=0x7f038800c400) at ../../module/zfs/zio.c:547
/openzfs#507 zio_done (zio=0x7f038800da00) at ../../module/zfs/zio.c:3278
/openzfs#508 0x00007f03c808285b in __zio_execute (zio=0x7f038800da00) at ../../module/zfs/zio.c:1399
/openzfs#509 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800fd80, pio=0x7f038800da00) at ../../module/zfs/zio.c:547
/openzfs#510 zio_done (zio=0x7f038800fd80) at ../../module/zfs/zio.c:3278
/openzfs#511 0x00007f03c807a6d3 in __zio_execute (zio=0x7f038800fd80) at ../../module/zfs/zio.c:1399
/openzfs#512 zio_execute (zio=zio@entry=0x7f038800fd80) at ../../module/zfs/zio.c:1337
/openzfs#513 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
/openzfs#514 0x00007f03c806119d in vdev_queue_io_done (zio=zio@entry=0x7f03a0010950) at ../../module/zfs/vdev_queue.c:775
/openzfs#515 0x00007f03c807a0e8 in zio_vdev_io_done (zio=0x7f03a0010950) at ../../module/zfs/zio.c:2686
/openzfs#516 0x00007f03c807a6d3 in __zio_execute (zio=0x7f03a0010950) at ../../module/zfs/zio.c:1399
/openzfs#517 zio_execute (zio=0x7f03a0010950) at ../../module/zfs/zio.c:1337
/openzfs#518 0x00007f03c7fcd0c4 in taskq_thread (arg=0x966d50) at ../../lib/libzpool/taskq.c:215
/openzfs#519 0x00007f03c7fc7937 in zk_thread_helper (arg=0x967e90) at ../../lib/libzpool/kernel.c:135
/openzfs#520 0x00007f03c78890a3 in start_thread (arg=0x7f03c2703700) at pthread_create.c:309
/openzfs#521 0x00007f03c75c50fd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

The backtrace was an infinite loop of `vdev_queue_io_to_issue()` invoking
`zio_execute()` until it overran the stack. `vdev_queue_io_to_issue()` will only
invoke `zio_execute()` on raidz vdevs when aggregation I/Os are generated to
improve aggregation continuity. These I/Os do not trigger any writes. However,
it appears that they can be generated in such a way that they recurse
infinitely upon return to `vdev_queue_io_to_issue()`. As a consequence, the
number of parents grows by 1 each time the recursion returns to
`vdev_raidz_io_start()`.

Signed-off-by: Richard Yao <[email protected]>
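
For illustration only, here is a minimal, self-contained sketch in plain C (not ZFS code; the function names merely mirror the roles of vdev_queue_io_to_issue() and zio_execute()) of how an issue path that synchronously executes completions, whose completions in turn re-enter the issue path, produces a call depth that grows with the number of queued entries:

/*
 * Simplified model, plain C, not ZFS code: the function names only mirror
 * the roles of vdev_queue_io_to_issue() and zio_execute().  Issuing an I/O
 * synchronously executes it, and completing it re-enters the issue path,
 * so the call depth grows with the number of queued entries instead of
 * staying bounded.
 */
#include <stdio.h>

#define QUEUED_IOS 10000

static int pending   = QUEUED_IOS;
static int max_depth = 0;

static void queue_issue(int depth);          /* role of vdev_queue_io_to_issue() */

static void io_execute(int depth)            /* role of zio_execute()/zio_done() */
{
    if (depth > max_depth)
        max_depth = depth;
    /* Completing this I/O immediately tries to issue the next one... */
    queue_issue(depth + 1);
}

static void queue_issue(int depth)
{
    if (pending <= 0)
        return;
    pending--;
    /* ...which executes synchronously, so the two functions recurse. */
    io_execute(depth);
}

int main(void)
{
    queue_issue(0);
    printf("reached call depth %d for %d queued I/Os\n", max_depth, QUEUED_IOS);
    return 0;
}

In userspace this just reports a large depth; on a kernel stack the same pattern overruns the stack, as seen in the backtrace above.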
ryao added a commit to ryao/zfs that referenced this issue Oct 9, 2014
ryao added a commit to ryao/zfs that referenced this issue Oct 10, 2014
The below excerpt of a backtrace is from a ztest failure when running ZoL's
ztest:

/openzfs#453 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
/openzfs#454 0x00007f03c806106e in vdev_queue_io (zio=zio@entry=0x7f0350003de0) at ../../module/zfs/vdev_queue.c:747
/openzfs#455 0x00007f03c80818c1 in zio_vdev_io_start (zio=0x7f0350003de0) at ../../module/zfs/zio.c:2659
/openzfs#456 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003de0) at ../../module/zfs/zio.c:1399
/openzfs#457 zio_nowait (zio=0x7f0350003de0) at ../../module/zfs/zio.c:1456
/openzfs#458 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350003a10) at ../../module/zfs/vdev_mirror.c:374
/openzfs#459 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003a10) at ../../module/zfs/zio.c:1399
/openzfs#460 zio_nowait (zio=0x7f0350003a10) at ../../module/zfs/zio.c:1456
/openzfs#461 0x00007f03c806464c in vdev_raidz_io_start (zio=0x7f0350003380) at ../../module/zfs/vdev_raidz.c:1607
/openzfs#462 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003380) at ../../module/zfs/zio.c:1399
/openzfs#463 zio_nowait (zio=0x7f0350003380) at ../../module/zfs/zio.c:1456
/openzfs#464 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350002fb0) at ../../module/zfs/vdev_mirror.c:374
/openzfs#465 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002fb0) at ../../module/zfs/zio.c:1399
/openzfs#466 zio_nowait (zio=0x7f0350002fb0) at ../../module/zfs/zio.c:1456
/openzfs#467 0x00007f03c805ed43 in vdev_mirror_io_done (zio=0x7f033957ebf0) at ../../module/zfs/vdev_mirror.c:499
/openzfs#468 0x00007f03c807a0c0 in zio_vdev_io_done (zio=0x7f033957ebf0) at ../../module/zfs/zio.c:2707
/openzfs#469 0x00007f03c808285b in __zio_execute (zio=0x7f033957ebf0) at ../../module/zfs/zio.c:1399
/openzfs#470 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f0390001330, pio=0x7f033957ebf0) at ../../module/zfs/zio.c:547
/openzfs#471 zio_done (zio=0x7f0390001330) at ../../module/zfs/zio.c:3278
/openzfs#472 0x00007f03c808285b in __zio_execute (zio=0x7f0390001330) at ../../module/zfs/zio.c:1399
/openzfs#473 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4013a00, pio=0x7f0390001330) at ../../module/zfs/zio.c:547
/openzfs#474 zio_done (zio=0x7f03b4013a00) at ../../module/zfs/zio.c:3278
/openzfs#475 0x00007f03c808285b in __zio_execute (zio=0x7f03b4013a00) at ../../module/zfs/zio.c:1399
/openzfs#476 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4014210, pio=0x7f03b4013a00) at ../../module/zfs/zio.c:547
/openzfs#477 zio_done (zio=0x7f03b4014210) at ../../module/zfs/zio.c:3278
/openzfs#478 0x00007f03c808285b in __zio_execute (zio=0x7f03b4014210) at ../../module/zfs/zio.c:1399
/openzfs#479 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4014620, pio=0x7f03b4014210) at ../../module/zfs/zio.c:547
/openzfs#480 zio_done (zio=0x7f03b4014620) at ../../module/zfs/zio.c:3278
/openzfs#481 0x00007f03c807a6d3 in __zio_execute (zio=0x7f03b4014620) at ../../module/zfs/zio.c:1399
/openzfs#482 zio_execute (zio=zio@entry=0x7f03b4014620) at ../../module/zfs/zio.c:1337
/openzfs#483 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
/openzfs#484 0x00007f03c806106e in vdev_queue_io (zio=zio@entry=0x7f0350002be0) at ../../module/zfs/vdev_queue.c:747
/openzfs#485 0x00007f03c80818c1 in zio_vdev_io_start (zio=0x7f0350002be0) at ../../module/zfs/zio.c:2659
/openzfs#486 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002be0) at ../../module/zfs/zio.c:1399
/openzfs#487 zio_nowait (zio=0x7f0350002be0) at ../../module/zfs/zio.c:1456
/openzfs#488 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350002810) at ../../module/zfs/vdev_mirror.c:374
/openzfs#489 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002810) at ../../module/zfs/zio.c:1399
/openzfs#490 zio_nowait (zio=0x7f0350002810) at ../../module/zfs/zio.c:1456
/openzfs#491 0x00007f03c8064593 in vdev_raidz_io_start (zio=0x7f0350001270) at ../../module/zfs/vdev_raidz.c:1591
/openzfs#492 0x00007f03c807f243 in __zio_execute (zio=0x7f0350001270) at ../../module/zfs/zio.c:1399
/openzfs#493 zio_nowait (zio=0x7f0350001270) at ../../module/zfs/zio.c:1456
/openzfs#494 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350001e60) at ../../module/zfs/vdev_mirror.c:374
/openzfs#495 0x00007f03c807f243 in __zio_execute (zio=0x7f0350001e60) at ../../module/zfs/zio.c:1399
/openzfs#496 zio_nowait (zio=0x7f0350001e60) at ../../module/zfs/zio.c:1456
/openzfs#497 0x00007f03c805ed43 in vdev_mirror_io_done (zio=0x7f033a0c39c0) at ../../module/zfs/vdev_mirror.c:499
/openzfs#498 0x00007f03c807a0c0 in zio_vdev_io_done (zio=0x7f033a0c39c0) at ../../module/zfs/zio.c:2707
/openzfs#499 0x00007f03c808285b in __zio_execute (zio=0x7f033a0c39c0) at ../../module/zfs/zio.c:1399
/openzfs#500 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03a8003c00, pio=0x7f033a0c39c0) at ../../module/zfs/zio.c:547
/openzfs#501 zio_done (zio=0x7f03a8003c00) at ../../module/zfs/zio.c:3278
/openzfs#502 0x00007f03c808285b in __zio_execute (zio=0x7f03a8003c00) at ../../module/zfs/zio.c:1399
/openzfs#503 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800c400, pio=0x7f03a8003c00) at ../../module/zfs/zio.c:547
/openzfs#504 zio_done (zio=0x7f038800c400) at ../../module/zfs/zio.c:3278
/openzfs#505 0x00007f03c808285b in __zio_execute (zio=0x7f038800c400) at ../../module/zfs/zio.c:1399
/openzfs#506 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800da00, pio=0x7f038800c400) at ../../module/zfs/zio.c:547
/openzfs#507 zio_done (zio=0x7f038800da00) at ../../module/zfs/zio.c:3278
/openzfs#508 0x00007f03c808285b in __zio_execute (zio=0x7f038800da00) at ../../module/zfs/zio.c:1399
/openzfs#509 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800fd80, pio=0x7f038800da00) at ../../module/zfs/zio.c:547
/openzfs#510 zio_done (zio=0x7f038800fd80) at ../../module/zfs/zio.c:3278
/openzfs#511 0x00007f03c807a6d3 in __zio_execute (zio=0x7f038800fd80) at ../../module/zfs/zio.c:1399
/openzfs#512 zio_execute (zio=zio@entry=0x7f038800fd80) at ../../module/zfs/zio.c:1337
/openzfs#513 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
/openzfs#514 0x00007f03c806119d in vdev_queue_io_done (zio=zio@entry=0x7f03a0010950) at ../../module/zfs/vdev_queue.c:775
/openzfs#515 0x00007f03c807a0e8 in zio_vdev_io_done (zio=0x7f03a0010950) at ../../module/zfs/zio.c:2686
/openzfs#516 0x00007f03c807a6d3 in __zio_execute (zio=0x7f03a0010950) at ../../module/zfs/zio.c:1399
/openzfs#517 zio_execute (zio=0x7f03a0010950) at ../../module/zfs/zio.c:1337
/openzfs#518 0x00007f03c7fcd0c4 in taskq_thread (arg=0x966d50) at ../../lib/libzpool/taskq.c:215
/openzfs#519 0x00007f03c7fc7937 in zk_thread_helper (arg=0x967e90) at ../../lib/libzpool/kernel.c:135
/openzfs#520 0x00007f03c78890a3 in start_thread (arg=0x7f03c2703700) at pthread_create.c:309
/openzfs#521 0x00007f03c75c50fd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

This occurred when ztest was simulating a scrub under heavy I/O load. Under
those circumstances, it was possible for a mix of noop I/Os for aggregation
continuity and the I/O elevator to generate arbitrarily deep recursion.

This patch modifies ZFS to propagate a recursion counter inside the zio_t
objects so that I/Os are redispatched upon reaching a given recursion depth.
This lets us detect long call chains and dispatch the zio to another ZIO
taskq. We cut in line when we do this to minimize the potential for taskq
exhaustion that can prevent a zio from notifying its parent.

Signed-off-by: Richard Yao <[email protected]>
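
As a rough, hedged sketch of the mitigation that commit message describes (reusing the toy model from the earlier sketch; RECURSE_MAX and the names are made up and this is not the actual patch), a per-execution depth counter lets the call chain be cut and the remaining work picked back up from a worker loop standing in for a ZIO taskq:

/*
 * Hedged sketch of the mitigation described above, reusing the toy model
 * from the earlier sketch; RECURSE_MAX and the names are illustrative and
 * this is not the actual patch.  Each execution carries a recursion depth,
 * and once the threshold is hit the call chain is cut; the outer loop in
 * main() stands in for a taskq worker picking the remaining work back up.
 */
#include <stdio.h>

#define QUEUED_IOS  10000
#define RECURSE_MAX 16               /* illustrative threshold, not a real tunable */

static int pending = QUEUED_IOS;

static void queue_issue(int depth);

static void io_execute(int depth)
{
    if (depth >= RECURSE_MAX) {
        /*
         * Break the call chain here; in the proposed change this is where
         * the zio would be re-dispatched to a ZIO taskq instead of
         * continuing on the current stack.
         */
        return;
    }
    /* Completion re-enters the issue path, one level deeper. */
    queue_issue(depth + 1);
}

static void queue_issue(int depth)
{
    if (pending <= 0)
        return;
    pending--;
    io_execute(depth);
}

int main(void)
{
    /* Stand-in for taskq workers: each pass starts a fresh, bounded chain. */
    while (pending > 0)
        queue_issue(0);
    printf("drained %d I/Os with call depth capped at %d\n",
        QUEUED_IOS, RECURSE_MAX);
    return 0;
}

The real change would carry the counter in the zio_t and re-dispatch to a taskq, cutting in line so a parent is not starved behind its own children.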
ryao added a commit to ryao/zfs that referenced this issue Oct 11, 2014
The below excerpt of a backtrace is from a ztest failure when running ZoL's
ztest:

#453 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
#454 0x00007f03c806106e in vdev_queue_io (zio=zio@entry=0x7f0350003de0) at ../../module/zfs/vdev_queue.c:747
#455 0x00007f03c80818c1 in zio_vdev_io_start (zio=0x7f0350003de0) at ../../module/zfs/zio.c:2659
#456 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003de0) at ../../module/zfs/zio.c:1399
#457 zio_nowait (zio=0x7f0350003de0) at ../../module/zfs/zio.c:1456
#458 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350003a10) at ../../module/zfs/vdev_mirror.c:374
#459 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003a10) at ../../module/zfs/zio.c:1399
#460 zio_nowait (zio=0x7f0350003a10) at ../../module/zfs/zio.c:1456
#461 0x00007f03c806464c in vdev_raidz_io_start (zio=0x7f0350003380) at ../../module/zfs/vdev_raidz.c:1607
#462 0x00007f03c807f243 in __zio_execute (zio=0x7f0350003380) at ../../module/zfs/zio.c:1399
#463 zio_nowait (zio=0x7f0350003380) at ../../module/zfs/zio.c:1456
#464 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350002fb0) at ../../module/zfs/vdev_mirror.c:374
#465 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002fb0) at ../../module/zfs/zio.c:1399
#466 zio_nowait (zio=0x7f0350002fb0) at ../../module/zfs/zio.c:1456
#467 0x00007f03c805ed43 in vdev_mirror_io_done (zio=0x7f033957ebf0) at ../../module/zfs/vdev_mirror.c:499
#468 0x00007f03c807a0c0 in zio_vdev_io_done (zio=0x7f033957ebf0) at ../../module/zfs/zio.c:2707
#469 0x00007f03c808285b in __zio_execute (zio=0x7f033957ebf0) at ../../module/zfs/zio.c:1399
#470 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f0390001330, pio=0x7f033957ebf0) at ../../module/zfs/zio.c:547
#471 zio_done (zio=0x7f0390001330) at ../../module/zfs/zio.c:3278
#472 0x00007f03c808285b in __zio_execute (zio=0x7f0390001330) at ../../module/zfs/zio.c:1399
#473 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4013a00, pio=0x7f0390001330) at ../../module/zfs/zio.c:547
#474 zio_done (zio=0x7f03b4013a00) at ../../module/zfs/zio.c:3278
#475 0x00007f03c808285b in __zio_execute (zio=0x7f03b4013a00) at ../../module/zfs/zio.c:1399
#476 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4014210, pio=0x7f03b4013a00) at ../../module/zfs/zio.c:547
#477 zio_done (zio=0x7f03b4014210) at ../../module/zfs/zio.c:3278
#478 0x00007f03c808285b in __zio_execute (zio=0x7f03b4014210) at ../../module/zfs/zio.c:1399
#479 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03b4014620, pio=0x7f03b4014210) at ../../module/zfs/zio.c:547
#480 zio_done (zio=0x7f03b4014620) at ../../module/zfs/zio.c:3278
#481 0x00007f03c807a6d3 in __zio_execute (zio=0x7f03b4014620) at ../../module/zfs/zio.c:1399
#482 zio_execute (zio=zio@entry=0x7f03b4014620) at ../../module/zfs/zio.c:1337
#483 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
#484 0x00007f03c806106e in vdev_queue_io (zio=zio@entry=0x7f0350002be0) at ../../module/zfs/vdev_queue.c:747
#485 0x00007f03c80818c1 in zio_vdev_io_start (zio=0x7f0350002be0) at ../../module/zfs/zio.c:2659
#486 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002be0) at ../../module/zfs/zio.c:1399
#487 zio_nowait (zio=0x7f0350002be0) at ../../module/zfs/zio.c:1456
#488 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350002810) at ../../module/zfs/vdev_mirror.c:374
#489 0x00007f03c807f243 in __zio_execute (zio=0x7f0350002810) at ../../module/zfs/zio.c:1399
#490 zio_nowait (zio=0x7f0350002810) at ../../module/zfs/zio.c:1456
#491 0x00007f03c8064593 in vdev_raidz_io_start (zio=0x7f0350001270) at ../../module/zfs/vdev_raidz.c:1591
#492 0x00007f03c807f243 in __zio_execute (zio=0x7f0350001270) at ../../module/zfs/zio.c:1399
#493 zio_nowait (zio=0x7f0350001270) at ../../module/zfs/zio.c:1456
#494 0x00007f03c805f71b in vdev_mirror_io_start (zio=0x7f0350001e60) at ../../module/zfs/vdev_mirror.c:374
#495 0x00007f03c807f243 in __zio_execute (zio=0x7f0350001e60) at ../../module/zfs/zio.c:1399
#496 zio_nowait (zio=0x7f0350001e60) at ../../module/zfs/zio.c:1456
#497 0x00007f03c805ed43 in vdev_mirror_io_done (zio=0x7f033a0c39c0) at ../../module/zfs/vdev_mirror.c:499
#498 0x00007f03c807a0c0 in zio_vdev_io_done (zio=0x7f033a0c39c0) at ../../module/zfs/zio.c:2707
#499 0x00007f03c808285b in __zio_execute (zio=0x7f033a0c39c0) at ../../module/zfs/zio.c:1399
#500 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f03a8003c00, pio=0x7f033a0c39c0) at ../../module/zfs/zio.c:547
#501 zio_done (zio=0x7f03a8003c00) at ../../module/zfs/zio.c:3278
#502 0x00007f03c808285b in __zio_execute (zio=0x7f03a8003c00) at ../../module/zfs/zio.c:1399
#503 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800c400, pio=0x7f03a8003c00) at ../../module/zfs/zio.c:547
#504 zio_done (zio=0x7f038800c400) at ../../module/zfs/zio.c:3278
#505 0x00007f03c808285b in __zio_execute (zio=0x7f038800c400) at ../../module/zfs/zio.c:1399
#506 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800da00, pio=0x7f038800c400) at ../../module/zfs/zio.c:547
#507 zio_done (zio=0x7f038800da00) at ../../module/zfs/zio.c:3278
#508 0x00007f03c808285b in __zio_execute (zio=0x7f038800da00) at ../../module/zfs/zio.c:1399
#509 zio_notify_parent (wait=ZIO_WAIT_DONE, zio=0x7f038800fd80, pio=0x7f038800da00) at ../../module/zfs/zio.c:547
#510 zio_done (zio=0x7f038800fd80) at ../../module/zfs/zio.c:3278
#511 0x00007f03c807a6d3 in __zio_execute (zio=0x7f038800fd80) at ../../module/zfs/zio.c:1399
#512 zio_execute (zio=zio@entry=0x7f038800fd80) at ../../module/zfs/zio.c:1337
#513 0x00007f03c8060b35 in vdev_queue_io_to_issue (vq=vq@entry=0x99f8a8) at ../../module/zfs/vdev_queue.c:706
#514 0x00007f03c806119d in vdev_queue_io_done (zio=zio@entry=0x7f03a0010950) at ../../module/zfs/vdev_queue.c:775
#515 0x00007f03c807a0e8 in zio_vdev_io_done (zio=0x7f03a0010950) at ../../module/zfs/zio.c:2686
#516 0x00007f03c807a6d3 in __zio_execute (zio=0x7f03a0010950) at ../../module/zfs/zio.c:1399
#517 zio_execute (zio=0x7f03a0010950) at ../../module/zfs/zio.c:1337
#518 0x00007f03c7fcd0c4 in taskq_thread (arg=0x966d50) at ../../lib/libzpool/taskq.c:215
#519 0x00007f03c7fc7937 in zk_thread_helper (arg=0x967e90) at ../../lib/libzpool/kernel.c:135
#520 0x00007f03c78890a3 in start_thread (arg=0x7f03c2703700) at pthread_create.c:309
#521 0x00007f03c75c50fd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

This occurred when ztest was simulating a scrub under heavy I/O load. Under
those circumstances, it was possible for a mix of noop I/Os for aggregation
continuity and the I/O elevator to generate arbitrarily deep recursion.

This patch modifies ZFS to propagate a recursion counter inside the zio_t
objects such that I/Os will be redispatched upon reaching a given recursion
depth. We can detect long call chains and dispatch to another ZIO taskq,
cutting in line when we do so to minimize the potential for taskq exhaustion
that can prevent a zio from notifying its parent.

Signed-off-by: Richard Yao <[email protected]>
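
For readers skimming this thread, here is a minimal, self-contained sketch in plain C of the mechanism the commit message above describes. It is not the actual patch, and none of the names are real ZFS symbols; fake_zio_t, MAX_INLINE_DEPTH, requeue_front and the 40-deep I/O chain are all made up for illustration. The idea shown is only this: each child inherits its parent's recursion counter, and once a chain of inline executions gets too deep the work is pushed back onto a queue to be picked up later instead of recursing further.

/*
 * Illustrative sketch only: a zio-like object carries a recursion depth,
 * and deep inline chains are redispatched to a queue ("taskq") instead of
 * growing the stack without bound.
 */
#include <stdio.h>
#include <stdlib.h>

#define	MAX_INLINE_DEPTH	16	/* hypothetical inline recursion limit */

typedef struct fake_zio {
	int			id;	/* position in the simulated I/O chain */
	int			depth;	/* recursion counter propagated to children */
	struct fake_zio		*next;
} fake_zio_t;

static fake_zio_t *queue_head;		/* stand-in for a ZIO taskq */

/* "Cut in line": redispatched zios go to the front so they drain first. */
static void
requeue_front(fake_zio_t *zio)
{
	zio->next = queue_head;
	queue_head = zio;
}

static void
execute(fake_zio_t *zio)
{
	printf("executing zio %d at inline depth %d\n", zio->id, zio->depth);

	if (zio->id >= 40)		/* leaf of the simulated I/O chain */
		return;

	/* The child inherits and increments the parent's recursion counter. */
	fake_zio_t *child = calloc(1, sizeof (*child));
	child->id = zio->id + 1;
	child->depth = zio->depth + 1;

	if (child->depth >= MAX_INLINE_DEPTH) {
		/* Too deep: hand off to the queue instead of recursing. */
		child->depth = 0;
		requeue_front(child);
	} else {
		/* Shallow enough: keep executing inline. */
		execute(child);
		free(child);
	}
}

int
main(void)
{
	fake_zio_t *root = calloc(1, sizeof (*root));
	requeue_front(root);

	/* Drain the queue; each redispatch restarts with a fresh stack. */
	while (queue_head != NULL) {
		fake_zio_t *zio = queue_head;
		queue_head = zio->next;
		execute(zio);
		free(zio);
	}
	return (0);
}

Pushing to the front of the queue is the "cut in line" part: a redispatched zio drains before newly queued work, so deferred parents are not left waiting behind an ever-growing backlog of children.
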
pcd1193182 pushed a commit to pcd1193182/zfs that referenced this issue Sep 26, 2023

Merge remote-tracking branch '6.0/stage' into 'master'