
Feature/restore qgroups size #94

Merged
merged 1 commit into linuxmint:master from feature/restore-qgroups-size on Nov 21, 2022

Conversation

@tpraxl (Contributor) commented Nov 11, 2022

Refs: #47

This PR mostly restores old behavior:

  • feat: Enable display of shared and exclusive sizes
  • fix: Destroy qgroups upon snapshot removal (on a system with qgroups enabled, I encountered a lot of 0.00B qgroups in the output of btrfs qgroup show ., supposedly caused by timeshift)

In contrast to the former behavior, timeshift no longer enables qgroups by itself.

It simply executes btrfs qgroup show … and evaluates the exit code.

On success, btrfs qgroups are considered enabled (App.btrfs_qgroups_enabled = true); on failure, they are considered disabled. In the latter case, timeshift neither shows the size columns nor destroys qgroups.
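
A minimal Vala sketch of that probe, with a hypothetical function name (illustrative only, not the actual timeshift code):

```vala
// Hypothetical probe: run `btrfs qgroup show` on the given path and
// treat a zero exit status as "qgroups enabled".
bool probe_qgroups_enabled (string mount_path) {
    string std_out, std_err;
    int status;
    try {
        Process.spawn_command_line_sync (
            "btrfs qgroup show '%s'".printf (mount_path),
            out std_out, out std_err, out status);
    } catch (SpawnError e) {
        return false; // btrfs binary missing or spawn failed
    }
    // status is the raw wait status; 0 means the command succeeded
    return status == 0;
}
```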

This restores a very important feature for users who do use qgroups.
Other users (btrfs with qgroups disabled, and rsync users) are practically unaffected.

Screenshot: [image]

This PR is a first proposal. If the maintainers generally agree with the approach, I plan to clean up the affected code a bit.

@mtwebster (Member) commented

Hi, this mostly works fine.

The only thing that somewhat bothers me is the log spam for users running btrfs without qgroups enabled. You could make btrfs_qgroups_enabled have three states (-1, 0, 1) instead of a boolean - an 'uninitialized' (-1) value to check before calling query_subvolume_quotas(), and if that returns 0 we never call it again.
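
A rough Vala sketch of that tri-state idea (hypothetical names, reusing the probe sketch from the description above):

```vala
// Hypothetical tri-state flag: -1 = not yet checked,
// 0 = qgroups disabled, 1 = qgroups enabled.
int btrfs_qgroups_enabled = -1;

bool qgroups_available () {
    if (btrfs_qgroups_enabled == -1) {
        // Probe only on the first call; the result is cached so a
        // failing `btrfs qgroup show` is not re-run (and re-logged)
        // for the rest of the session.
        btrfs_qgroups_enabled = probe_qgroups_enabled ("/") ? 1 : 0;
    }
    return btrfs_qgroups_enabled == 1;
}
```

As sketched here the flag lives in memory only, so it would be re-evaluated on the next launch.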

Are you ready for this to be merged? You mentioned wanting to clean up the code a bit more.

Thanks

@tpraxl (Contributor, Author) commented Nov 21, 2022

Thanks for looking into the PR. I very much appreciate your feedback.

> The only thing that somewhat bothers me is the log spam for users running btrfs without qgroups enabled. You could make btrfs_qgroups_enabled have three states (-1, 0, 1) instead of a boolean - an 'uninitialized' (-1) value to check before calling query_subvolume_quotas(), and if that returns 0 we never call it again.

Can you be more specific about the log spam? I would be glad to look into it. Maybe we can mute the logging in that case.

I would like to find an alternative to the suggested solution (disabling further checks), because I anticipate the following use case:

  1. btrfs user with disabled qgroups starts timeshift
  2. timeshift persists disabled qgroups
  3. btrfs user enables qgroups
  4. timeshift will never check again, so it will not display the columns even though qgroups are now enabled

> Are you ready for this to be merged? You mentioned wanting to clean up the code a bit more.

I would offer the following "roadmap":

  1. I would like to implement the solution to the log-spam issue, once we have clarified the direction to take
  2. Afterwards, I would like to clean up the code a bit more

These steps can be taken as part of this PR.
However, if you prefer, you are welcome to merge this PR now. I would love to see the columns back in the distributed version :)
I would then take the steps above after the merge.

@mtwebster (Member) commented

> 3. btrfs user enables qgroups
> 4. timeshift will never check again, so it will not display the columns even though qgroups are now enabled

From my testing, qgroups are picked up and the columns displayed once they're enabled, though for whatever reason it doesn't happen the first time timeshift is relaunched, only on subsequent launches (unless I'm misunderstanding what you're saying here).

This is printed any time query_subvolume_quotas() is called:

```
E: ERROR: can't list qgroups: quotas not enabled
E: btrfs returned an error: 256
Failed to query subvolume quota.
```

Most people wouldn't see it ordinarily, but it can be a distraction when troubleshooting.
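
One hypothetical way to quiet this during the probe (a sketch, not the fix that actually landed): capture btrfs's stderr via the spawn call and demote the failure to a debug-level message.

```vala
// Hypothetical quiet probe: stderr is captured instead of reaching the
// console, and a failure is logged at debug level only.
string std_out, std_err;
int status;
try {
    Process.spawn_command_line_sync ("btrfs qgroup show /",
                                     out std_out, out std_err, out status);
    if (status != 0) {
        debug ("qgroup probe failed (quotas likely disabled): %s",
               std_err.strip ());
    }
} catch (SpawnError e) {
    debug ("could not run btrfs: %s", e.message);
}
```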

> However, if you prefer, you are welcome to merge this PR now. I would love to see the columns back in the distributed version :)
> I would then take the steps above after the merge.

We definitely want to tag a new release as soon as possible, so I'll merge it and we can improve things later. I've got a fix for the log spam (at least a temporary one; I wanted to see how rusty my Vala was).

Thanks

@mtwebster (Member) commented

Can you squash this? Or I can do it when I merge.

Restored size calculation and display, which were deleted in commit 8d77b18.

However, the behavior has been changed slightly:

* We try to determine the sizes using `btrfs qgroup show`
* This is only successful on systems with enabled qgroups
* If so, we enable the columns
* Otherwise, we hide them

I did not restore the methods for enabling quota.
But I restored destroying qgroups on snapshot removal.

This also fixes a bug I encountered on a system with qgroups enabled:

Since the removal of the now-restored behavior, lots of 0.00B qgroups had accumulated,
because they were still being created all along but no longer removed:

```
$ sudo btrfs qgroup show .
qgroupid         rfer         excl
--------         ----         ----
0/5          92.00KiB     92.00KiB
0/262         4.00GiB      4.00GiB
0/798       135.20GiB    334.31MiB
0/799        14.07GiB    254.51MiB
0/1657          0.00B        0.00B
0/1658          0.00B        0.00B
0/1665          0.00B        0.00B
0/1666          0.00B        0.00B
0/1691          0.00B        0.00B
0/1692          0.00B        0.00B
0/1721          0.00B        0.00B
0/1722          0.00B        0.00B
0/1723          0.00B        0.00B
0/1724          0.00B        0.00B
0/1727          0.00B        0.00B
…
```
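
For illustration, a hedged Vala sketch of the restored cleanup (a hypothetical helper, not the actual code path; the level-0 qgroup for a subvolume with id N is 0/N):

```vala
// Hypothetical helper: after deleting a snapshot subvolume, destroy its
// level-0 qgroup so empty 0.00B entries don't accumulate.
void destroy_qgroup (string subvol_id, string mount_path) {
    string std_out, std_err;
    int status;
    try {
        Process.spawn_command_line_sync (
            "btrfs qgroup destroy 0/%s '%s'".printf (subvol_id, mount_path),
            out std_out, out std_err, out status);
    } catch (SpawnError e) {
        warning ("failed to run btrfs qgroup destroy: %s", e.message);
    }
}
```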
@tpraxl force-pushed the feature/restore-qgroups-size branch from 70e41da to 7af60d8 on November 21, 2022 at 20:14
@tpraxl (Contributor, Author) commented Nov 21, 2022

> Can you squash this? Or I can do it when I merge.

I squashed it, adapted the commit message, and force pushed.

> We definitely want to tag a new release as soon as possible, so I'll merge it and we can improve things later. I've got a fix for the log spam (at least a temporary one; I wanted to see how rusty my Vala was).

Looking forward to seeing your solution.

Thanks for releasing it so soon.

@mtwebster mtwebster merged commit 54cb3a3 into linuxmint:master Nov 21, 2022