<!DOCTYPE html>
<html lang="en">
<head>
<link rel="icon" type="image/png" href="https://stratis-storage.github.io/stratis-favicon.png">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<!-- Enable responsiveness on mobile devices-->
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1">
<title>Stratis Storage</title>
<!-- CSS -->
<link rel="stylesheet" href="https://stratis-storage.github.io/print.css" media="print">
<link rel="stylesheet" href="https://stratis-storage.github.io/poole.css">
<link rel="stylesheet" href="https://stratis-storage.github.io/hyde.css">
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=PT+Sans:400,400italic,700|Abril+Fatface">
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.5.0/css/all.css" integrity="sha384-B4dIYHKNBt8Bc12p+WXckhzcICo0wtJAoU8YZTY5qE0Id1GSseTk6S+L3BlXeVIU" crossorigin="anonymous">
</head>
<body class="theme-base-10 ">
<div class="sidebar">
<div class="container ">
<div>
<a href="https://stratis-storage.github.io">
<img src="https://stratis-storage.github.io/imgs/stratis_sidebar.png" alt="Stratis" />
</a>
</div>
<div class="sidebar-about">
<p class="about lead">Easy to use local storage management for Linux.</p>
</div>
User Links:<br>
<ul class="sidebar-nav">
<li class="sidebar-nav-item"><a href="https://stratis-storage.github.io/howto">How-To</a></li>
<li class="sidebar-nav-item"><a href="https://stratis-storage.github.io/users">Clients and Operating Systems</a></li>
</ul>
Developer Links:<br>
<ul class="sidebar-nav">
<li class="sidebar-nav-item"><a href="https://stratis-storage.github.io/StratisSoftwareDesign.pdf">Software Design</a></li>
<li class="sidebar-nav-item"><a href="https://stratis-storage.github.io/DBusAPIReference.pdf">D-Bus API Reference</a></li>
<li class="sidebar-nav-item"><a href="https://stratis-storage.github.io/StratisStyleGuidelines.pdf">Programming Style Guidelines</a></li>
</ul>
D-Bus Introspection Files:<br>
<ul class="sidebar-nav">
<li class="sidebar-nav-item"><a href="https://stratis-storage.github.io/manager.xml">Manager object</a></li>
<li class="sidebar-nav-item"><a href="https://stratis-storage.github.io/pool.xml">Pool object</a></li>
<li class="sidebar-nav-item"><a href="https://stratis-storage.github.io/filesystem.xml">Filesystem object</a></li>
<li class="sidebar-nav-item"><a href="https://stratis-storage.github.io/blockdev.xml">Blockdev object</a></li>
</ul>
Contact Links:<br>
<ul class="sidebar-nav">
<li class="sidebar-nav-item" style="display: inline-block">
<a href="https://github.com/stratis-storage">
<i class="fab fa-github"></i>
</a>
</li>
<li class="sidebar-nav-item" style="display: inline-block">
<a href="https://twitter.com/stratisstorage">
<i class="fab fa-twitter"></i>
</a>
</li>
</ul>
</div>
</div>
<div class="content container">
<div class="posts">
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-intro/">
Stratis Description
</a>
</h1>
<div class="post__summary">
<p><em>Dennis Keefe, Stratis Team</em></p>
<h3 id="stratis-description">Stratis Description</h3>
<p>Stratis is a tool to easily configure pools and filesystems with enhanced
storage functionality that works within the existing Linux storage
management stack. To achieve this, Stratis prioritizes a straightforward
command-line experience, a rich API, and a fully automated approach to storage
management. It builds upon elements of the existing storage stack as much as
possible. Specifically, Stratis uses device-mapper, LUKS, XFS, and Clevis.
Stratis may also incorporate additional technologies in the future.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratis-intro/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-7-2/">
Stratis 3.7.2 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>Stratis 3.7.2, which consists of <code>stratisd 3.7.2</code> and <code>stratis-cli 3.7.0</code>,
includes one significant enhancement, several minor enhancements, and a
number of small improvements.</p>
<p>Most significantly, Stratis 3.7.2 extends its functionality to allow a user
to revert a snapshot, i.e., to overwrite a Stratis filesystem with a
previously taken snapshot of that filesystem. Reverting requires two steps:
first, the snapshot must be scheduled for revert; second, the revert itself
takes place only when its pool is started. While <code>stratisd</code> is
running, this can be triggered by stopping and then restarting the pool. A
revert may also be occasioned by a reboot of the system on which
<code>stratisd</code> is running; likewise, restarting <code>stratisd</code>
will cause a scheduled revert to occur, so long as the pool containing the
filesystem to be reverted had already been stopped. To support this
functionality, <code>stratis-cli</code> includes two new filesystem
subcommands, <code>schedule-revert</code> and <code>cancel-revert</code>.</p>
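<p>The revert workflow described above can be sketched as a short shell
session. This is illustrative only: the subcommand names come from the
release notes, but the exact argument forms and the
<code>pool1</code>/<code>fs1</code> names are assumptions; consult
<code>stratis filesystem --help</code> for the actual syntax.</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code># Take a snapshot, then schedule it to overwrite its origin filesystem.
stratis filesystem snapshot pool1 fs1 fs1-snap
stratis filesystem schedule-revert pool1 fs1-snap

# The revert itself happens the next time the pool is started.
stratis pool stop --name pool1
stratis pool start --name pool1

# A scheduled revert can be cancelled before the pool is restarted:
#   stratis filesystem cancel-revert pool1 fs1-snap
</code></pre>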
<p>Additional functionality has been added in support of reverts. First, a
filesystem's origin field is now included among its D-Bus properties and
updated as appropriate; <code>stratis-cli</code> displays the origin value
in its newly introduced filesystem detail view.
<code>stratisd</code> also supports a new filesystem D-Bus method which returns the
filesystem metadata. The filesystem debug commands in <code>stratis-cli</code> now
include a <code>get-metadata</code> option which will display the filesystem metadata
for a given pool or filesystem. Equivalent functionality has been
introduced for the pool metadata as well.</p>
<p><code>stratisd</code> also includes a considerable number of dependency version bumps,
minor fixes and additional testing, while <code>stratis-cli</code> includes
improvements to its command-line parsing implementation.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-7-2/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-6-7/">
stratisd 3.6.7 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p><code>stratisd</code> 3.6.7 contains two bug fixes. The first fixes a bug,
introduced in <code>stratisd</code> 3.6.6, in which a file descriptor was
closed too soon after being opened, preventing the user from specifying a
passphrase via the <code>--capture-key</code> option of the
<code>stratis-min pool start</code> command. The second corrects an error in
the <code>stratis-fstab-setup</code> script, where the pool UUID was not
properly supplied to the <code>stratis-min pool is-encrypted</code> command.</p>
<p><code>stratisd</code> 3.6.6 includes a number of changes. It now defines two
workspaces, one for itself and one for <code>stratisd_proc_macros</code>, mostly to
simplify downstream packaging. It increases the lower bounds of many of its
dependencies; for example, the <code>bindgen</code> lower bound is now 0.69.0.
It restricts the size of any <code>String</code> value in the Stratis
pool-level metadata, and ensures that the <code>UserInfo</code> values on
devices conform to the same restrictions as filesystem names and pool names.
It also fixes a bug in lock file handling that could leave stray extra digits
at the end of the running <code>stratisd</code> process's id in the lock
file.</p>
<p>Both releases contain many minor fixes and improvements.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-6-7/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-6-5/">
stratisd 3.6.5 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p><code>stratisd</code> 3.6.5 includes a modification to its internal locking mechanism
which allows a lock which does not conflict with a currently held lock to
precede a lock that does. This change relaxes a fairness restriction that
gave precedence to locks based solely on the order in which they had been
placed on a wait queue. This release also includes a number of housekeeping
commits and minor improvements.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-6-5/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-6-4/">
stratisd 3.6.4 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>This post includes release notes for the prior patch releases in this
minor release.</p>
<p><code>stratisd</code> 3.6.4 includes a fix for <code>stratisd-min</code> handling of the start
command sent by <code>stratis-min</code> to unencrypted pools. It also captures and logs
error messages emitted by the <code>thin_check</code> or <code>mkfs.xfs</code> executables.</p>
<p><code>stratisd</code> 3.6.3 explicitly sets the <code>nrext64</code> option to 0 when invoking
<code>mkfs.xfs</code>. A recent version of XFS changed the default for <code>nrext64</code> to 1.
Explicitly setting the value to 0 prevents <code>stratisd</code> from creating XFS
filesystems that are unmountable on earlier kernels.</p>
<p><code>stratisd</code> 3.6.2 includes a fix in the way thin devices are allocated in order
to avoid misalignment of distinct sections of the thin data device. Such
misalignments may result in a performance degradation.</p>
<p><code>stratisd</code> 3.6.1 includes a fix to correct a problem where <code>stratisd</code> would fail
to unlock a pool if the pool was encrypted using both Clevis and the kernel
keyring methods but the key in the kernel keyring was unavailable.</p>
<p>All releases include a number of housekeeping and maintenance updates.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-6-4/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-6-0/">
Stratis 3.6.0 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>Stratis 3.6.0 includes one significant enhancement as well as several smaller
improvements.</p>
<p>Most significantly, Stratis 3.6.0 extends its functionality to allow a user
to set a limit on the size of a filesystem. The limit can be set when the
filesystem is created, or at a later time.</p>
<p>In addition, Stratis 3.6.0 allows the user to specify the pool to stop either
by UUID or by name, and allows better management of partially constructed
pools.</p>
<p>A new <code>--only</code> option was added to <code>stratis-dumpmetadata</code>, to allow it to print
only the pool-level metadata.</p>
<p><code>stratis-min</code>, the minimal CLI for Stratis, was extended with <code>bind</code>, <code>unbind</code>,
and <code>rebind</code> commands.</p>
<p>The <code>devicemapper</code> dependency lower bound is increased to 0.34.0 which
includes an enhancement to check for the presence of the udev daemon.
<code>stratisd</code> and <code>stratisd-min</code> now exit on startup if the udev daemon is not
present.</p>
<p>The <code>libcryptsetup-rs</code> dependency lower bound is increased to 0.9.1 and a
direct dependency is introduced on <code>libcryptsetup-rs-sys</code> 0.3.0 to allow
registering callbacks with libcryptsetup.</p>
<p>The <code>nix</code> dependency lower bound is increased to 0.26.3, to avoid compilation
errors induced by a fix to a lifetime bug in a function in <code>nix</code>'s public API.</p>
<p>The <code>serde_derive</code> dependency lower bound is increased to 1.0.185 to avoid
vendoring the <code>serde_derive</code> executable included in some prior versions of the
package.</p>
<p><code>stratisd</code> also contains sundry internal improvements, error message
enhancements, and so forth.</p>
<p>The <code>stratis-cli</code> command-line interface has been extended with an additional
option to set the filesystem size limit on creation and two new filesystem
commands, <code>set-size-limit</code> and <code>unset-size-limit</code>, to set or unset the
filesystem size limit after a filesystem has been created.</p>
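<p>As a sketch of the new commands (the subcommand names
<code>set-size-limit</code> and <code>unset-size-limit</code> come from this
release; the <code>--size-limit</code> option name, the argument order, and
the <code>pool1</code>/<code>fs1</code> names are assumptions):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code># Set a size limit when creating the filesystem ...
stratis filesystem create --size-limit 2TiB pool1 fs1

# ... or set, change, or remove it after creation.
stratis filesystem set-size-limit pool1 fs1 4TiB
stratis filesystem unset-size-limit pool1 fs1
</code></pre>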
<p><code>stratis-cli</code> now incorporates password verification when it is used to
set a key in the kernel keyring via manual entry.</p>
<p><code>stratis-cli</code> now allows specifying a pool by name or by UUID when stopping
a pool.</p>
<p><code>stratis-cli</code> also contains sundry internal improvements, and enforces
a python requirement of at least 3.9 in its package configuration.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-6-0/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-5-8/">
stratisd 3.5.8 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p><code>stratisd</code> 3.5.8 principally contains changes to make handling of partially
set up or torn down pools more robust. It also fixes a few errors and omissions
in the management of <code>stratisd</code>'s D-Bus layer, including supplying some
previously missing D-Bus property change signals and removing D-Bus object
paths to partially torn down pools which had in some cases persisted past the
point when the pool should be considered stopped. In addition, it removes
the <code>dracut</code> subpackage's dependency on plymouth.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-5-8/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratify/">
Stratis root filesystem installation with stratify.py
</a>
</h1>
<div class="post__summary">
<p><em>Bryn Reeves, Stratis team</em></p>
<p>Support for using Stratis as the root filesystem was added in version 2.4.0 but
without support in distribution installers it can be tricky for users to build
systems for testing.</p>
<p>This blog post will look at a quick method for installing systems with Stratis
as the root filesystem using the Fedora Live ISO, kickstart, and a Python script to
simplify and automate the process.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratify/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-rootfs-fedora/">
stratisd filesystem as root filesystem on Fedora
</a>
</h1>
<div class="post__summary">
<p><em>John Baublitz, Stratis Team</em></p>
<p>Based on recent questions, we wanted to develop a specific guide for additional steps that need to be taken
on Fedora to enable Stratis as the root filesystem for a Fedora install.</p>
<p>If you have not already looked at <a href="https://stratis-storage.github.io/stratis-rootfs/">the guide</a> for root filesystem work, please read that first. It is a
prerequisite.</p>
<p>For a little bit of background, stratisd provides an additional subpackage for our dracut modules that we
use to set up the root filesystem during early boot. This package installs the necessary modules for dracut
to automate the setup. However, a few additional steps, which may not be obvious
to users, are needed to get everything working. We'll cover them below.</p>
<p>Steps:</p>
<ol>
<li>Install the <code>stratisd-dracut</code> package. This is the subpackage mentioned above.</li>
<li><em>Optional:</em> If using Clevis for unlocking encrypted pools, add the following configuration
under /etc/dracut.conf.d/99-stratisd.conf:</li>
</ol>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>add_dracutmodules+=" stratis-clevis "
</span></code></pre>
<ol start="3">
<li>Test your configuration or ensure you have a rescue kernel and initramfs in case the update of the
initramfs renders your install unbootable.</li>
<li>Once you've verified that everything works as expected, run <code>dracut --force --kver=[KERNEL_VERSION]</code></li>
</ol>
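<p>The steps above, consolidated into a single shell session (run as root).
This is a sketch, not part of the original guide: the <code>dnf</code>
install command and the use of <code>uname -r</code> as the kernel version
are illustrative assumptions.</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code># 1. Install the dracut subpackage.
dnf install stratisd-dracut

# 2. (Optional) Enable the Clevis dracut module for encrypted pools.
echo 'add_dracutmodules+=" stratis-clevis "' > /etc/dracut.conf.d/99-stratisd.conf

# 3./4. After confirming that a rescue kernel and initramfs exist,
# regenerate the initramfs for the target kernel.
dracut --force --kver="$(uname -r)"
</code></pre>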
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratis-rootfs-fedora/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-5-2/">
stratisd 3.5.2 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p><code>stratisd</code> 3.5.2 includes three significant enhancements as well as a bug
fix.</p>
<p>The enhancements are:</p>
<ul>
<li><code>stratisd</code> 3.5.2 is the first <code>stratisd</code> release to include a subpackage,
<code>stratisd-tools</code>, which incorporates <code>stratis-dumpmetadata</code>, an application
which may be used for troubleshooting.</li>
<li><code>stratisd</code> 3.5.2 now depends on <code>devicemapper-rs</code> 0.33.1, which includes
support for synchronization between udev and devicemapper. See
the <a href="https://github.com/stratis-storage/devicemapper-rs/blob/master/CHANGES.txt">devicemapper-rs</a> changelog and <a href="https://github.com/stratis-storage/stratisd/pull/3069">stratisd pr 3069</a> for additional details.</li>
<li><code>stratisd</code> 3.5.2 modifies the way takeover by <code>stratisd</code> from <code>stratisd-min</code>
is managed during early boot. See <a href="https://github.com/stratis-storage/stratisd/pull/3269">stratisd pr 3269</a> for further details.</li>
</ul>
<p><code>stratisd</code> 3.5.2 also fixes a bug in a script used by the stratisd-dracut
subpackage. This fix was included in the <code>stratisd</code> 3.5.1 release. See
<a href="https://github.com/stratis-storage/stratisd/pull/3256">stratisd pr 3256</a> for further details.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-5-2/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-5-0/">
Stratis 3.5.0 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>Stratis 3.5.0 includes one significant enhancement as well as several smaller
improvements.</p>
<p>Most significantly, Stratis 3.5.0 extends its functionality to allow a user
to add a cache to an encrypted pool. The cache devices are each encrypted with
the same mechanism as the data devices; consequently the cache itself is
encrypted.</p>
<p>Stratis 3.5.0 also fixes a few bugs:</p>
<ul>
<li>It extends the thin metadata device more eagerly, and responds to
thin metadata low water mark devicemapper events. This fix was included in
the <code>stratisd</code> 3.4.2 release.</li>
<li>It makes the pool name field in the Stratis LUKS2 metadata optional; this
prevents a failure to start an encrypted pool when upgrading from a previous
<code>stratisd</code> version to <code>stratisd</code> 3.4.0. This fix was included in the
<code>stratisd</code> 3.4.3 release.</li>
<li>It requires a new version of the Stratis devicemapper-rs library, which
contains a fix which eliminates undefined behavior in the management of ioctls
with large result values. This fix was included in the <code>stratisd</code> 3.4.4 release.</li>
<li>It requires a new version of the Stratis libblkid-rs library, which fixes a
memory leak in the <code>get_tag_value</code> method used by <code>stratisd</code>. This fix is not
included in any previous release.</li>
</ul>
<p>This release also reduces the problem of repetitive log messages and modifies
the D-Bus API to eliminate the <code>redundancy</code> parameter previously required by
the <code>CreatePool</code> D-Bus method.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-5-0/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-4-0/">
Stratis 3.4.0 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>Stratis 3.4.0 includes one significant enhancement as well as several smaller
improvements.</p>
<p>Most significantly, Stratis 3.4.0 extends its functionality to allow users to
specify a pool by its name when starting a stopped pool. Previously it was
only possible to identify a stopped pool by its UUID.</p>
<p>In addition, <code>stratisd</code> enforces some checks on the compatibility of the block
devices which make up a pool. It now takes into account the logical and
physical sector sizes of the individual block devices when creating a pool,
adding a cache, or extending the data or cache tier with additional devices.</p>
<p>The <code>stratis pool start</code> command has been modified to accept either a UUID
or a name option, while the <code>stratis pool list --stopped</code> command now displays
the pool name if it is available.</p>
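<p>For example (a sketch; <code>pool1</code> and the UUID are placeholders,
and option spellings beyond those named in the release notes are
assumptions):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code># List stopped pools; names are now shown when available.
stratis pool list --stopped

# Start a stopped pool by UUID (previously the only option) ...
stratis pool start --uuid &lt;POOL_UUID&gt;

# ... or, new in 3.4.0, by name.
stratis pool start --name pool1
</code></pre>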
<p>This release also includes improvements to <code>stratisd</code>'s internal locking
mechanism.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-4-0/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-3-0/">
Stratis 3.3.0 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>Stratis 3.3.0 includes one significant enhancement and several smaller
enhancements, as well as a number of stability and efficiency improvements.</p>
<p>Most significantly, Stratis 3.3.0 extends its functionality to allow users to
instruct <code>stratisd</code> to include additional space that may have become available
on a component data device in the space that is available to the device's pool.
The most typical use case for this is when a RAID device which presents as a
single device to <code>stratisd</code> is expanded.</p>
<p><code>stratis</code> supports these changes with a new command <code>stratis pool extend-data</code>
that allows the user to specify that the pool should make use of
additional space on its devices. The <code>stratis pool list</code> command has been
extended to show an alert if a pool's device has changed in size. The
<code>stratis blockdev list</code> command will display two device sizes if the size
that stratisd has on record differs from a device's detected size.</p>
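<p>A sketch of the resulting workflow after growing an underlying RAID device
(<code>pool1</code> is a placeholder, and any flags accepted by
<code>extend-data</code> beyond the pool name are assumptions):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code># The pool list shows an alert, and the blockdev list shows both the
# recorded and the detected size of the grown device.
stratis pool list
stratis blockdev list pool1

# Instruct the pool to make use of the newly available space.
stratis pool extend-data pool1
</code></pre>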
<p>A less user-visible change is an improvement to the way that <code>stratisd</code>
allocates space for its thin pool metadata and data devices from the backing
store. The new approach is less precise but always more conservative when
allocating space for the thin pool metadata device and will consistently reduce
possible fragmentation of the thin pool metadata device over the backing
store.</p>
<p>Checks for Clevis executables occur whenever a Clevis executable that is
depended on by <code>stratisd</code> needs to be invoked to complete a user's command.
Previously, the check occurred only once, when <code>stratisd</code> was started. We
believe that this change will be more convenient for users who may install
needed Clevis executables after <code>stratisd</code> has already been started.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-3-0/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-2-0/">
Stratis 3.2.0 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>Stratis 3.2.0 includes one significant enhancement, one bug fix, and a number
of more minor improvements.</p>
<p>Most significantly, Stratis 3.2.0 extends its functionality to allow users to
stop and start a pool.</p>
<p>Stopping a pool tears down its storage stack in an orderly way without
destroying the pool metadata; it is a <code>pool destroy</code> operation
minus the final step of wiping the Stratis metadata. Starting a pool sets it
up again according to the information stored in the pool-level metadata of
the devices associated with the pool. Whether a pool is stopped or started is
itself recorded in the pool-level metadata, so users can control whether a
pool is started automatically when <code>stratisd</code> starts up, or
whether its startup is deferred until explicitly requested.</p>
<p><code>stratis</code> supports these changes with new commands to start and to stop a
pool. It includes an additional <code>debug refresh</code> command which allows a user to
request that the state of all pools be refreshed. The <code>pool list</code> command has
been extended to allow a detailed view of individual pools and to allow the
user to examine stopped pools. The <code>pool unlock</code> command has been removed
in favor of the <code>pool start</code> command.</p>
<p>Other changes include a fix to the algorithm for determining the size of data
and metadata devices that make up a thinpool device, the elimination of all
uses of <code>udevadm settle</code> in the <code>stratisd</code> engine, and general improvements to
the RPC layers used by <code>stratis-min</code> and <code>stratisd-min</code>.</p>
<p>In addition, the <code>stratisd-min</code> service now requires the <code>systemd-udevd</code>
service to ensure that Stratis filesystem symlinks are created when
<code>stratisd-min</code> sets up a Stratis filesystem.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-2-0/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-1-0/">
Stratis 3.1.0 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>Stratis 3.1.0 includes significant improvements to the management of the
thin-provisioning layers, as well as a number of other user-visible
enhancements and bug fixes.</p>
<p>Please see <a href="https://stratis-storage.github.io/thin-provisioning-redesign/">this post</a> for a detailed discussion of the thin-provisioning
changes. To support these changes the Stratis CLI has been enhanced to:</p>
<ul>
<li>allow specifying whether or not a pool may be overprovisioned on creation</li>
<li>allow changing whether or not a pool may be overprovisioned while it is
running</li>
<li>allow increasing the filesystem limit for a given pool</li>
<li>display whether or not a pool is overprovisioned in the pool list view</li>
</ul>
<p>Users of the Stratis CLI may also observe the following changes:</p>
<ul>
<li>A <code>debug</code> subcommand has been added to the <code>pool</code>, <code>filesystem</code>, and
<code>blockdev</code> subcommands. Debug commands are not fully supported and may change
or be removed at any time.</li>
<li>The <code>--redundancy</code> option is no longer available when creating a pool. This
option had only one permitted value so specifying it never had any effect.</li>
</ul>
<p>stratisd 3.1.0 includes one additional user-visible change:</p>
<ul>
<li>The minimum size of a Stratis filesystem is increased to 512 MiB.</li>
</ul>
<p>stratisd 3.1.0 also includes a number of internal improvements:</p>
<ul>
<li>The size of any newly created MDV is increased to 512 MiB.</li>
<li>A pool's MDV is mounted in a private mount namespace and remains mounted
while the pool is in operation.</li>
<li>Improved handling of udev events on device removal.</li>
<li>The usual and customary improvements to log messages.</li>
</ul>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-1-0/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/thin-provisioning-redesign/">
Thin provisioning redesign
</a>
</h1>
<div class="post__summary">
<p><em>John Baublitz, Stratis Team</em></p>
<h1 id="overview">Overview</h1>
<p>For a while, we've bumped into a number of problems with the reliability and safety of
our thin provisioning implementation. After gathering a lot of feedback on our thin
provisioning layer, we put together <a href="https://github.com/stratis-storage/stratisd/issues/2814">a proposal</a> for improvements to how we currently handle
allocations.</p>
<p>The changes can largely be divided up into three areas of improvement:</p>
<ul>
<li>Predictability</li>
<li>Safety</li>
<li>Reliability</li>
</ul>
<h1 id="predictability">Predictability</h1>
<p>We made two notable changes to make behavior in the thin provisioning layer well-defined and
predictable for users. Both relate to an existing thin provisioning tool,
<code>thin_metadata_size</code>. This tool allows users to calculate the amount of metadata needed for
a thin pool with a given size and number of thin devices (filesystems and snapshots in the
case of stratisd). We have started taking advantage of <code>thin_metadata_size</code> to make our
metadata space reservation more precise. Instead of our previous approach of allocating
a fixed fraction of the available space, we now calculate the exact amount of space required for
a given pool size and number of filesystems and snapshots. The second change is a switch
to lazy allocation. Previously, we allocated greedily, which meant that every time a device
was added, we would allocate a certain amount of space for data and metadata regardless of
the individual user's requirements. We now delay allocation and allocate block device storage
on an as-needed basis, allowing users to adjust as their requirements change.
For example, a user may realize that they need more filesystems than they originally planned for.
With lazy allocation, assuming there is unallocated space on the pool, the user can now redirect
that unallocated space from data to metadata space so there is enough room for a greater
number of filesystems than was originally anticipated.</p>
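<p>As a rough sketch of what the more precise reservation buys us: the kernel's
thin-provisioning documentation suggests about 48 bytes of metadata per mapped data block,
with a 2 MiB floor. The function below is illustrative only; stratisd actually relies on the
<code>thin_metadata_size</code> tool, which also accounts for the maximum number of thin
devices:</p>

```rust
// Illustrative estimate only: stratisd itself uses the `thin_metadata_size`
// tool rather than this guideline formula from the kernel docs.
fn approx_thin_metadata_bytes(data_dev_bytes: u64, data_block_bytes: u64) -> u64 {
    const FLOOR: u64 = 2 * 1024 * 1024; // kernel docs suggest a 2 MiB minimum
    // roughly 48 bytes of btree metadata per mapped data block
    let estimate = 48 * (data_dev_bytes / data_block_bytes);
    estimate.max(FLOOR)
}

fn main() {
    // A 1 TiB data device with 64 KiB blocks needs roughly 768 MiB of metadata.
    let md = approx_thin_metadata_bytes(1 << 40, 64 * 1024);
    println!("{} MiB", md / (1024 * 1024));
}
```

Compare this with the previous fixed-fraction approach: reserving a flat percentage of a
large pool can set aside far more metadata space than a pool with few filesystems will ever
need.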
<p>These changes resulted in two API modifications. One is filesystem limits: to
ensure that we never exceed the allocated metadata limit, we set a filesystem limit per
pool. This limit can be increased through the API, triggering a new allocation for
metadata space. The other API change is related to the switch to lazy allocation. There
is now information available that reports the amount of space that has been allocated.
Previously we only concerned ourselves with used and total space, but with lazy allocation,
it is now also important to report space that has been allocated but may not be in use yet.</p>
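<p>A minimal sketch of the distinction between the three space figures, using hypothetical
type and field names rather than stratisd's actual D-Bus property names:</p>

```rust
// Hypothetical types illustrating the three space figures; the real
// property names and units exposed by stratisd differ.
struct PoolSpace {
    total: u64,     // physical capacity of the pool
    allocated: u64, // space set aside for the thin data and metadata devices
    used: u64,      // space actually occupied by filesystem contents
}

impl PoolSpace {
    // Space still available for lazy allocation to data or metadata.
    fn unallocated(&self) -> u64 {
        self.total - self.allocated
    }

    // Allocated but not yet consumed: the figure that lazy allocation
    // makes worth reporting separately from `used` and `total`.
    fn allocated_free(&self) -> u64 {
        self.allocated - self.used
    }
}
```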
<h1 id="safety">Safety</h1>
<p>A key drawback of thin provisioning is its failure cases. When a storage stack is
overprovisioned, it can get into a bad state once the pool becomes full, because the
filesystems are collectively far larger than the pool backing them. We have added two
safety features to help users cope with this.</p>
<p>One measure is the addition of a mode to disable overprovisioning. This ensures that the size
of all filesystems on the pool does not exceed the available physical storage provided by the
pool. This feature is not necessarily useful for all users, particularly with heavy snapshot
usage because even if storage is shared between a snapshot and a filesystem, this mode will
treat them as entirely independent entities in terms of storage cost. This ensures that
copy-on-write operations will not accidentally fill the pool if the shared storage diverges between
the two, but puts a rather strict limit on snapshot capacity. For users that use Stratis for
critical applications or the root filesystem, this mode prevents certain failure cases that
can be challenging to recover from.</p>
<p>When overprovisioning is enabled, we have also introduced a new API signal to notify the user
when physical storage has been fully allocated. This does not necessarily mean that the pool
has run out of space but serves as a warning to the user that once the remaining free space
fills up, Stratis has no space left to extend to. This gives users time to provide more storage
from which to allocate space before reaching a failure case.</p>
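<p>The invariant that the no-overprovisioning mode maintains can be sketched as follows.
This is a simplification with made-up names: it ignores metadata overhead, and note how a
snapshot contributes its full logical size regardless of shared extents:</p>

```rust
// Sketch of the no-overprovisioning invariant: with the mode enabled, the
// sum of all filesystem (and snapshot) logical sizes may not exceed the
// pool's usable physical size.
fn create_allowed(
    existing_fs_sizes: &[u64], // logical sizes of current filesystems/snapshots
    new_fs_size: u64,          // logical size of the filesystem being created
    pool_physical: u64,        // usable physical capacity of the pool
    overprovisioning: bool,    // whether overprovisioning is permitted
) -> bool {
    if overprovisioning {
        return true; // no size check up front; the pool may fill later
    }
    let committed: u64 = existing_fs_sizes.iter().sum();
    committed + new_fs_size <= pool_physical
}
```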
<h1 id="reliability">Reliability</h1>
<p>For a while, we've gotten bug reports about the reliability of filesystem extension. In certain
cases, Stratis was not able to handle filesystem extension smoothly or at all. Between the
<a href="https://stratis-storage.github.io/per-pool-locking/">per-pool locking</a> and the thin provisioning redesign, we have now resolved some of the
previous issues with filesystem extension. The approach we've taken attacks the problem from
a few different angles.</p>
<h2 id="earlier-filesystem-extension">Earlier filesystem extension</h2>
<p>Stratis used to wait until only several gigabytes of free space remained before extending the filesystem. If Stratis didn't
resize the filesystem quickly enough, the filesystem would run out of space before the extension
could complete. While this would eventually resolve itself once the filesystem was extended,
it would cause some unnecessary IO errors. We now extend the filesystem at 50% usage
to ensure that users always have a large buffer of free space available for even very IO-heavy
usage patterns.</p>
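<p>The new trigger condition is simple to state. The sketch below pairs it with a doubling
growth policy, which is our assumption for illustration rather than stratisd's documented
behavior:</p>

```rust
// Extend once usage reaches 50% of the current filesystem size.
fn should_extend(used_bytes: u64, size_bytes: u64) -> bool {
    used_bytes * 2 >= size_bytes
}

// Hypothetical growth policy (doubling); the increment stratisd
// actually chooses may differ.
fn extended_size(size_bytes: u64) -> u64 {
    size_bytes * 2
}
```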
<h2 id="parallelized-filesystem-extension-operations">Parallelized filesystem extension operations</h2>
<p>Stratis could previously only iterate through pools sequentially. Now stratisd can handle filesystem
extension on two separate pools in parallel, reducing the latency between detecting high
usage and performing the extension.</p>
<h2 id="periodic-checks-for-filesystem-usage">Periodic checks for filesystem usage</h2>
<p>Checking filesystem usage used to be a devicemapper event-dependent operation. This led to
some problems around filesystem extension. A devicemapper event would be generated periodically
as the filesystem filled up, but if the filesystem failed to extend a few times,
devicemapper events would no longer be generated once the pool filled up and users would be
left with a filesystem that couldn't be extended. We've removed our dependency on devicemapper
events for filesystem monitoring and now use devicemapper events exclusively for pool handling.
Instead, we run periodic checks in the background on filesystems to ensure that even if
filesystem extension fails multiple times, once the filesystem is ready to be extended,
stratisd can perform the operation in the background, so that we don't leave users in a state
where their filesystem can't be extended.</p>
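<p>One pass of such a periodic check might look like the sketch below, written with plain
synchronous Rust and invented types; in stratisd the check runs on a timer inside the tokio
runtime, so a failed extension is simply retried on a later tick:</p>

```rust
// Invented type standing in for stratisd's filesystem bookkeeping.
struct Fs {
    used: u64,
    size: u64,
}

// One pass of the periodic background check: extend every filesystem that
// has crossed the usage threshold. Returns how many were extended. A failed
// extension is not fatal; the next pass will try again.
fn check_pass(filesystems: &mut [Fs]) -> usize {
    let mut extended = 0;
    for fs in filesystems.iter_mut() {
        if fs.used * 2 >= fs.size {
            fs.size *= 2; // placeholder for the real extension operation
            extended += 1;
        }
    }
    extended
}
```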
<h1 id="migration-and-backwards-compatibility">Migration and backwards compatibility</h1>
<p>There are two types of changes that require migrations from older versions of stratisd:
metadata changes and allocation scheme changes.</p>
<h2 id="metadata-changes">Metadata changes</h2>
<p>The changes we made required some schema changes in our MDA, the metadata region outside of
the superblock that records longer-form JSON describing the specifics of the pool topology. The
migration should be invisible to the user and will be performed the first time the new
version of stratisd detects legacy pools. The migration adds some additional devicemapper
information, information about filesystem limits on a pool, and other bookkeeping information.</p>
<h2 id="allocation-scheme-changes">Allocation scheme changes</h2>
<p>As mentioned above, the previous metadata allocation scheme was less precise and allocated
a larger segment for metadata space than was necessary for the amount of data space present.
Migration for old pools will cause stratisd to detect that the metadata device is already larger
than it needs to be and no additional metadata device growth will occur until the data device
size becomes large enough to require additional metadata space.</p>
<h1 id="future-work">Future work</h1>
<p>We hope to eventually provide some smarter allocation strategies for our data and metadata
allocations to maximize contiguous allocation extents.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/thin-provisioning-redesign/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-0-4/">
stratisd 3.0.4 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>stratisd 3.0.4 contains two fixes to bugs in its D-Bus API.</p>
<p>The D-Bus property changed signal sent on a change to the LockedPools
property of the "org.storage.stratis3.Manager.r0" interface misidentified the
interface as the "org.storage.stratis3.pool.r0" interface; the interface
being sent with the signal is now correct.</p>
<p>The introspection data obtained via the "org.freedesktop.DBus.Introspectable"
interface's "Introspect" method was not correct for the "GetManagedObjects"
method of the "org.freedesktop.DBus.ObjectManager" D-Bus interface; it did
not include the specification of the out argument. This has been corrected.</p>
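<p>For reference, the introspection entry for <code>GetManagedObjects</code> should declare
its single out argument, whose signature is defined by the D-Bus specification as
<code>a{oa{sa{sv}}}</code> (object paths mapping to interfaces mapping to property
dictionaries). The corrected entry has roughly this shape:</p>

```xml
<method name="GetManagedObjects">
  <arg name="objpath_interfaces_and_properties"
       type="a{oa{sa{sv}}}" direction="out"/>
</method>
```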
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-0-4/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-0-3/">
stratisd 3.0.3 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>stratisd 3.0.3 contains internal improvements and several bug fixes.</p>
<p>Most significantly, it includes an enhancement to stratisd's original
multi-threading model to allow <a href="https://stratis-storage.github.io/per-pool-locking/">locking individual pools</a>.</p>
<p>A change was made to the conditions under which the stratis dracut module is
included in the initramfs.</p>
<p>Under some conditions, a change in pool size did not result in a corresponding
property changed signal for the relevant D-Bus property change; this has been
fixed.</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/stratisd-release-notes-3-0-3/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/per-pool-locking/">
Addition of per-pool locking
</a>
</h1>
<div class="post__summary">
<p><em>John Baublitz, Stratis Team</em></p>
<h1 id="overview">Overview</h1>
<p>Recently, we've merged a PR that completes our work on improved concurrency in
stratisd. Previously, we had made some changes to the IPC layer to provide the
ability for stratisd to handle incoming requests in parallel which you can read
about <a href="https://stratis-storage.github.io/multi-threading/">here</a>. This work allowed IPC requests to each be handled
in a separate <a href="https://tokio.rs/tokio/tutorial/spawning">tokio task</a>, but the Stratis engine, the part of our code that
handles all of the storage stack operations, could still only be accessed
sequentially.</p>
<h1 id="motivation">Motivation</h1>
<p>After having conversations with the LVM team, it seemed that strictly sequential access
to storage operations was not entirely necessary. While modifying multiple layers
of the pool stack at once can cause problems, modifying independent pools in
parallel is safe, and we wanted to take advantage of the potential for increased
concurrency. A large part of this is due to how we handle D-Bus properties. Our
D-Bus properties expose aspects of the storage stack that sometimes require
querying the device-mapper stack for information. With sequential accesses,
this would mean that even two list operations on any two pools could not run in
parallel, a restriction that causes a bad user experience and is not technically
necessary.</p>
<h1 id="requirements">Requirements</h1>
<p>Despite the motivation being clear, the solution turned out to be more complicated.
One of the major problems that we bumped into when trying to achieve more granular
concurrency was the interaction between standard Rust synchronization structures
and the API for listing D-Bus objects.</p>
<p>Our initial idea was to wrap the data structure containing the record of all of
the pools in a read-write lock. This had a few notable drawbacks. For one, you
could not acquire mutable access to two independent pools at a time even though
this is a completely safe operation.</p>
<p>This led us to the idea of wrapping each pool in a read-write lock. Unfortunately,
this also had some major drawbacks. One notable example of this was the behavior of
our list operation with this solution. A list operation would require a read lock
on every single pool and this means that the time that it would take to list all of
the pools or filesystems would increase proportionally with the number of pools
on the system. Because locking is relatively expensive, we noticed a significant
slowdown when listing larger numbers of pools and filesystems.</p>
<p>Our ideal scenario was to have the benefits of a read-write lock so that list
operations could run in parallel but to provide an ability to either lock single
pools or all pools in one operation so that locking all pools would take the
same amount of time no matter how many were present on the system.</p>
<h1 id="design">Design</h1>
<p>After determining that no locking data structure like this appeared to exist
in tokio, we took some time to look into how tokio implements its locking data
structures. The API for most of the locking data structures appeared to
be a lock acquisition method that returned a future. This future would poll the state
of the lock and either update the internal data structures to indicate that
the lock had been acquired or put itself to sleep until it was ready to poll
again. The <code>drop</code> method on the data structure returned by the future would
trigger waking up a task to poll again. This seemed perfectly workable with
a more granular read-write lock. The only difference would be that we would
need to keep track of locks on individual pools as well as locks on the entire
collection. The proper locking conflict rules would need to be checked:</p>
<ul>
<li><code>WriteAll</code> conflicts with all other operations.</li>
<li><code>ReadAll</code> conflicts with <code>WriteAll</code> and <code>Write</code> on any pool.</li>
<li><code>Write</code> conflicts with <code>WriteAll</code> and <code>Read</code> or <code>Write</code> on the same pool.</li>
<li><code>Read</code> conflicts with <code>WriteAll</code> and <code>Write</code> on the same pool.</li>
</ul>
<p>Any attempt to acquire two conflicting locks would queue one of the tasks to
be woken up once the conflicting lock was dropped.</p>
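<p>The conflict table above translates almost directly into code. A sketch, with type names
that are ours rather than stratisd's:</p>

```rust
// Lock requests: whole-collection reads/writes, or reads/writes of the
// single pool identified by index. Names are illustrative.
#[derive(Clone, Copy)]
enum LockReq {
    ReadAll,
    WriteAll,
    Read(usize),
    Write(usize),
}

// Symmetric conflict test implementing the four rules above.
fn conflicts(a: LockReq, b: LockReq) -> bool {
    use LockReq::*;
    match (a, b) {
        // WriteAll conflicts with everything.
        (WriteAll, _) | (_, WriteAll) => true,
        // ReadAll conflicts with a Write on any pool.
        (ReadAll, Write(_)) | (Write(_), ReadAll) => true,
        // Write conflicts with Read or Write on the same pool.
        (Write(x), Write(y)) | (Write(x), Read(y)) | (Read(x), Write(y)) => x == y,
        // ReadAll/ReadAll, ReadAll/Read, and Read/Read never conflict.
        _ => false,
    }
}
```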
<h1 id="notable-design-choices">Notable design choices</h1>
<p>We chose to implement our lock as a starvation-free lock. Implementing a lock
that allows <code>ReadAll</code> to bypass <code>Write*</code> requests that are queued when another
<code>ReadAll</code> request has already acquired the lock leads to behavior where <code>Write*</code>
requests could block indefinitely. This behavior could cause list operations to
block filesystem extension handling indefinitely, potentially leading to IO errors
and a full filesystem. A starvation-free locking approach puts a task in a FIFO
queue if any tasks are already queued in front of it. The notable downside is
slightly more latency when handling lock requests, but the benefits seemed to
outweigh the cost.</p>
<p>Because tokio can cause spurious wake ups for tasks, we assign a unique integer
ID to each future responsible for polling the lock for readiness. In the case
where there is both a legitimate and spurious wake up at the same time, this
allows our lock to differentiate between the two woken tasks to determine which
one should be given priority and which should be put to sleep. This prevents
spurious wake ups from acquiring the lock before they are scheduled to.</p>
<p>Because tokio does not currently allow lifetimes shorter than <code>'static</code> when
passing a reference across thread boundaries, our locking data structure
heavily uses automatic reference counting (<code>Arc</code>). This enables shared access
between multiple threads and the ability to pass an acquired lock handle to
a separate thread after acquisition. Without the use of <code>Arc</code>, the pool would
have to be operated on in the same task as the lock acquisition which would
prevent passing lock handles to separate tasks to process them in parallel.</p>
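<p>A minimal illustration of that pattern, using plain std types in place of their tokio
counterparts: the pool is shared via reference counting so that a handle can be moved into
another thread rather than borrowed across the boundary.</p>

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for stratisd's pool state.
struct Pool {
    name: String,
}

// Move a reference-counted handle into another thread and operate on the
// pool there; a plain borrow could not cross the thread boundary because
// the spawned closure must be 'static.
fn read_pool_name_elsewhere(pool: Arc<Mutex<Pool>>) -> String {
    let handle = Arc::clone(&pool);
    thread::spawn(move || handle.lock().unwrap().name.clone())
        .join()
        .unwrap()
}
```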
<h1 id="optimizations">Optimizations</h1>
<p>After our initial implementation of the write-all lock, we bumped into an issue
where we could not pass all pool lock handles into separate threads to handle
them all in parallel. This was particularly problematic for our implementation
of background devicemapper event handling. Our solution for this was to allow
acquiring all locks at once to avoid the penalty of locking each pool individually
and then converting that lock handle to a set of individual locks that can all
be released when they are no longer needed. This nicely addressed both parallelization
and constant-time locking across all pools.</p>
<p>Originally we also woke only one queued task at a time when a lock was released.
This proved suboptimal: if two <code>ReadAll</code> tasks were queued, both could have been
woken in parallel and acquired the lock with no conflict. The solution was to factor out
the conflict test, then traverse the queue and wake tasks until a conflicting task is found.
This allowed waking up a batch of queued tasks that could all operate in parallel without
also waking a conflicting task that would immediately be put back to sleep.</p>
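<p>The batched wake-up can be sketched as a walk down the FIFO queue that stops at the first
conflicting task. For brevity this simplification keeps only the two whole-collection request
kinds; the per-pool variants follow the same logic:</p>

```rust
// Simplified request kinds; the real lock also has per-pool Read/Write.
#[derive(Clone, Copy)]
enum Req {
    ReadAll,
    WriteAll,
}

// Two whole-collection requests conflict iff either is a write.
fn conflicts(a: Req, b: Req) -> bool {
    matches!((a, b), (Req::WriteAll, _) | (_, Req::WriteAll))
}

// Walk the FIFO queue from the front, waking every task that does not
// conflict with an already-woken task, and stop at the first conflict.
// Returns how many tasks at the front of the queue can be woken together.
fn wake_batch(queue: &[Req]) -> usize {
    let mut woken: Vec<Req> = Vec::new();
    for &req in queue {
        if woken.iter().any(|&w| conflicts(w, req)) {
            break; // conflicting task stays asleep; FIFO order is preserved
        }
        woken.push(req);
    }
    woken.len()
}
```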
<h1 id="future-work">Future work</h1>
<p>Recently, we discovered that we should be able to provide even more
parallelization for filesystem background operations. While we cannot perform
multiple pool mutation operations in parallel, the filesystems on top of the
pool can be modified independently in parallel. We expect to change the way
background checks on filesystem usage are handled by spawning each filesystem
extension in its own tokio task so that, for pools with many filesystems, the
filesystem extension will be more responsive. Rather than iterating through
hundreds of filesystems, stratisd will be able to handle multiple filesystem
extensions in parallel, speeding up the checking process if there is more than
one filesystem that needs to be extended at once. This will benefit IO performance
by ensuring that the filesystems are extended in a timely manner to avoid cases
where the filesystem is filled before it can be extended.</p>
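<p>The planned structure can be sketched as one task per filesystem, shown here with OS
threads instead of tokio tasks and with hypothetical helper names:</p>

```rust
use std::thread;

// Placeholder for the real extension work on one filesystem.
fn extend_filesystem(fs_id: usize) -> usize {
    fs_id
}

// Spawn one task per filesystem so a slow extension on one filesystem does
// not delay checks on the others; stratisd would spawn tokio tasks here.
fn extend_all(fs_ids: &[usize]) -> Vec<usize> {
    let handles: Vec<_> = fs_ids
        .iter()
        .map(|&id| thread::spawn(move || extend_filesystem(id)))
        .collect();
    // Joining in spawn order keeps the results in a predictable order.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```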
<h1 id="final-notes">Final notes</h1>
<p>We've added extensive debugging for the locking data structure in case users run
into issues. To enable these logs and see the state of the per-pool locking data
structure over time, simply enable trace logs in stratisd!</p>
</div>
<div class="read-more">
<a href="https://stratis-storage.github.io/per-pool-locking/">Read more...</a>
</div>
</article>
</div>
<div class="post">
<article class="post">
<h1 class="post-title">
<a href="https://stratis-storage.github.io/stratis-release-notes-3-0-0/">
Stratis 3.0.0 Release Notes
</a>
</h1>
<div class="post__summary">
<p><em>mulhern, Stratis Team</em></p>
<p>Stratis 3.0.0 includes many internal improvements, bug fixes, and
user-visible changes.</p>
<p>Users of the Stratis CLI may observe the following changes:</p>
<ul>
<li>It is now possible to set the filesystem logical size when creating a
filesystem.</li>
<li>It is possible to rebind a pool using a Clevis tang server or with a key
in the kernel keyring.</li>
<li>Filesystem and pool list output have been extended and improved. The pool
listing includes an <code>Alerts</code> column. Currently this column is used to indicate
whether the pool is in a restricted operation mode. A new subcommand,
<code>stratis pool explain</code>, which provides a fuller explanation of the codes
displayed in the <code>Alerts</code> column has been added. The filesystem listing
now displays a filesystem's logical size.</li>
<li>With encrypted pools it was previously possible for the display of block
device paths to change format if <code>stratisd</code> was restarted after an encrypted
pool had been created. Now the display of the block device paths is consistent
across <code>stratisd</code> restarts.</li>
</ul>
<p>In stratisd 3.0.0 the D-Bus API has undergone a revision and the prior
interfaces are all removed. The <code>FetchProperties</code> interfaces that
were supported by all objects have been removed. The values that were
previously obtainable via the <code>FetchProperties</code> methods
are now conventional D-Bus properties. The possible values of error codes
returned by the D-Bus methods have been reduced to 0 and 1, with the usual
interpretation.</p>
<p>stratisd 3.0.0 includes a number of significant internal improvements and a few
bug fixes.</p>
<p><code>stratisd</code> bug fixes:</p>