Releases: sigsep/open-unmix-pytorch
Open-Unmix 1.3.0
- fixes broken Zenodo URLs
- supports `wiener_win_len` in hub models (see #106)
- update to Python 3.9 and test the most recent torch version
- speed up unit tests
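The `wiener_win_len` option bounds memory by applying the post-filter over limited temporal windows instead of the whole spectrogram at once. A minimal NumPy sketch of that chunking idea, using a simple soft mask in place of the library's actual multichannel Wiener filter (the function name and shapes here are illustrative, not the project's API):

```python
import numpy as np

def chunked_softmask(mag_estimates, mix_stft, wiener_win_len=300):
    """Apply a soft mask chunk-by-chunk along the time axis.

    mag_estimates: (frames, bins, sources) non-negative magnitude estimates
    mix_stft:      (frames, bins) complex mixture STFT
    """
    out = np.zeros(mag_estimates.shape, dtype=complex)
    for start in range(0, mix_stft.shape[0], wiener_win_len):
        sl = slice(start, start + wiener_win_len)
        power = mag_estimates[sl] ** 2
        # the mask sums to 1 over sources, so the separated STFTs sum to the mix
        mask = power / np.maximum(power.sum(axis=-1, keepdims=True), 1e-10)
        out[sl] = mask * mix_stft[sl][..., None]
    return out
```

Only one chunk of frames is processed at a time, so peak memory scales with `wiener_win_len` rather than with track length.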
What's Changed
- Fix_stft_istft_window_parameters by @faroit in #107
- remove 3.6 by @faroit in #117
- Fixed issue #115 with broken --help option by @TobiasB22 in #116
- Bump numpy from 1.21.2 to 1.22.0 in /scripts by @dependabot in #120
- Bump joblib from 1.0.1 to 1.2.0 in /scripts by @dependabot in #126
- The default option for --is-wav is False by @satvik-venkatesh in #130
- Set unidirectional flag by @satvik-venkatesh in #131
- Bump gitpython from 3.1.18 to 3.1.34 in /scripts by @dependabot in #136
- Bump scipy from 1.7.1 to 1.10.0 in /scripts by @dependabot in #134
- Bump future from 0.18.2 to 0.18.3 in /scripts by @dependabot in #145
- Bump gitpython from 3.1.34 to 3.1.41 in /scripts by @dependabot in #146
- openunmix v1.3.0: update zendo links by @faroit in #144
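The window-parameter fix in #107 is about keeping the STFT analysis window and the iSTFT synthesis window consistent so the transform round-trips. A self-contained NumPy sketch of the principle (not the project's actual implementation): use the same window in both directions and normalize the overlap-add by the summed squared window.

```python
import numpy as np

def stft(x, n_fft=512, hop=256):
    win = np.hanning(n_fft)  # analysis window
    starts = range(0, len(x) - n_fft + 1, hop)
    return np.array([np.fft.rfft(win * x[s:s + n_fft]) for s in starts])

def istft(frames, n_fft=512, hop=256):
    win = np.hanning(n_fft)  # synthesis window: must match the analysis window
    out = np.zeros(hop * (len(frames) - 1) + n_fft)
    norm = np.zeros_like(out)
    for i, f in enumerate(frames):
        out[i * hop:i * hop + n_fft] += win * np.fft.irfft(f, n_fft)
        norm[i * hop:i * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-10)  # overlap-add normalization
```

With matched windows, reconstruction is exact (up to floating point) away from the signal edges; a mismatched pair breaks this invariant.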
New Contributors
- @TobiasB22 made their first contribution in #116
- @dependabot made their first contribution in #120
- @satvik-venkatesh made their first contribution in #130
Full Changelog: v1.2.1...v1.3.0
Open-Unmix 1.2.1
New features
- the training script now allows selecting a pre-trained model to be fine-tuned using the `--model` parameter #103
- a command-line progress bar has been added to show progress when separating multiple tracks #102
Breaking changes
- `umxl` is now the default model for inference via the command line and the Python API #97
- the training argument `--model` has been renamed to `--checkpoint`
Bug fixes
- addresses compatibility issues with the `sox_io` backend on Windows #93
Open-Unmix 1.2.0: Release of UMX-L
We are excited to announce a new pre-trained open-unmix model trained on hundreds of hours of extra training data which significantly boosts performance and generalization. Note that this model can only be used for non-commercial applications.
Open-Unmix 1.1.2
Bugfix release
- fixed a missing dependency (`tqdm`) for the CLI-only package #81
Open-Unmix 1.1.1
New Features
- added support for PyTorch 1.8.0 and torchaudio 0.8.0; unfortunately, this means support for torchaudio <0.7.0 has been dropped #79
Open-Unmix 1.1
New Features
- added an implementation of differentiable Wiener filters, removing the redundancy with `norbert`
- added more flexible encoder/decoder transform architecture
- updated the codebase to support PyTorch 1.4.0, 1.5.0, 1.5.1, 1.6.0 and 1.7.0
- added a `force_stereo` augmentation to address heterogeneous datasets with mixed channel counts #41
- added type hint annotations
- restructured code as a python package #46 via #60
- update to musdb 0.4.0 and museval 0.4.0
- added test for onnx export
- added a `stempeg`-based audio inference backend to improve reading and writing of compressed audio
- switched from Travis to GitHub Actions (thanks to @mpariente)
- update docs
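The `force_stereo` augmentation normalizes channel counts so that batches drawn from datasets mixing mono and multichannel files are uniform. A hypothetical NumPy sketch of the idea (the actual implementation operates on torch tensors inside the training pipeline):

```python
import numpy as np

def force_stereo(audio):
    """Coerce a (channels, samples) array to exactly two channels.

    Mono input is duplicated; channels beyond the first two are dropped.
    """
    if audio.shape[0] == 1:
        return np.repeat(audio, 2, axis=0)  # mono -> duplicated stereo
    return audio[:2]  # keep the first two channels
```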
Bug Fixes
- address issue with dataset statistic calculation #33
- fixes `glob(*)` usage, whose file ordering is not deterministic and can cause non-reproducible behavior
- uses centered audio segments for the `sourcefolder` dataset
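The glob fix matters because `glob` returns paths in filesystem order, which differs across platforms and even across runs; sorting the result restores determinism. A minimal sketch (function and pattern names are illustrative):

```python
import glob
import os

def list_tracks(folder, pattern="*.wav"):
    # sort so the file order is deterministic across platforms and runs
    return sorted(glob.glob(os.path.join(folder, pattern)))
```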
Thanks
Thanks to @aliutkus, @Baldwin-disso, @keunwoochoi for the contributions.
Special thanks to @mpariente for the awesome asteroid code base.
Initial release of Open-Unmix
This release matches the pre-trained models hosted on Zenodo:
- `umxhq` (default) is trained on MUSDB18-HQ, which comprises the same tracks as MUSDB18 but uncompressed, yielding a full bandwidth of 22050 Hz.
- `umx` is trained on the regular MUSDB18, which is bandwidth-limited to 16 kHz due to AAC compression. This model should be used for comparison with other (older) methods evaluated in SiSEC18.