
Added output to zpool online and offline #16244

Merged
1 commit merged on Nov 1, 2024

Conversation

rincebrain
Contributor

Motivation and Context

zpool online and zpool offline don't actually print any error information; they just set an error code and die.

That's not very helpful when you're trying to notice interactively that your command failed.
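In shell terms, the failure mode described above looks like this (a stand-in function is used, since reproducing it needs a real pool and device; the point is that the exit status is the only signal the user gets):

```shell
# Stand-in for the pre-patch `zpool online`: sets an exit code, prints nothing.
silent_online() { return 1; }

silent_online
status=$?
# The caller learns only that something failed, not why.
echo "exit status: ${status}"
```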

Description

^

How Has This Been Tested?

Isn't testing what CI is for?

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

@stuartthebruce

Will this also fix the lack of stderr output for a failed zpool export? For example:

[root@node2259 ~]# cat /sys/module/zfs/version 
2.2.4-1

[root@node2259 ~]# zpool export test

[root@node2259 ~]# echo $?
1
[root@node2259 ~]# zpool status test
  pool: test
 state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
config:

	NAME        STATE     READ WRITE CKSUM
	test        ONLINE       0     0     0
	  sdo       ONLINE       0     0     0
	  sdp       ONLINE       0     0     0
	  sdq       ONLINE       3   464     0
	  sdr       ONLINE       0     0     0
	  sds       ONLINE       0     0     0

errors: 217 data errors, use '-v' for a list
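Until export itself reports a reason, a script can at least detect this silent-failure pattern by checking both the exit status and whether anything landed on stderr (a generic sketch; `false` stands in for the `zpool export test` call, which needs a real pool):

```shell
# Run a command with stderr captured; report when it fails with no explanation.
errfile="$(mktemp)"
false 2>"${errfile}"      # stand-in for `zpool export test`
status=$?
if [ "${status}" -ne 0 ] && [ ! -s "${errfile}" ]; then
    echo "command failed (status ${status}) with no error output"
fi
rm -f "${errfile}"
```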

Is there a hack to destroy this pool without rebooting? Note that the current error message for that is also misleading:

[root@node2259 ~]# zpool destroy -f test
could not destroy 'test': could not unmount datasets

[root@node2259 ~]# zfs list
no datasets available

@rincebrain
Contributor Author

rincebrain commented Jun 12, 2024

No, but I could go do that.

As far as fixing the inability to do anything, go look at the forced export PR.

@behlendorf added the "Status: Accepted (Ready to integrate: reviewed, tested)" label on Jun 14, 2024
@behlendorf
Contributor

This just needs to be rebased on master. It looks like it's based on master from Jan '24, which is why the Fedora builders failed.

@rincebrain
Contributor Author

Sure. I'm going to go see about convincing export to print errors for whatever case they hit, and then rebase.

I was surprised to discover today that `zpool online` and
`zpool offline` don't print any information about why they failed in
many cases, they just return 1 with no information about why.

Let's improve that where we can without changing the library function.

Signed-off-by: Rich Ercolani <[email protected]>
@amotin merged commit acb6e71 into openzfs:master on Nov 1, 2024
19 of 20 checks passed
behlendorf pushed a commit to behlendorf/zfs that referenced this pull request Nov 1, 2024
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Reviewed-by: Allan Jude <[email protected]>
Signed-off-by: Rich Ercolani <[email protected]>
Closes openzfs#16244
ixhamza pushed a commit to truenas/zfs that referenced this pull request Nov 11, 2024