This modification is compatible with MacOS M chip and runs in CPU mode. #365
base: main
Conversation
Hello, I modified the code in model.py and llm.py based on your commit, but it still raises an error: Exception in thread Thread-2:
Traceback (most recent call last):
File "/opt/anaconda3/envs/cosyvoice/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/opt/anaconda3/envs/cosyvoice/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/cli/model.py", line 83, in llm_job
for i in self.llm.inference(text=text.to(self.device),
File "/opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
response = gen.send(None)
File "/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/llm/llm.py", line 173, in inference
text, text_len = self.encode(text, text_len)
File "/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/llm/llm.py", line 76, in encode
encoder_out, encoder_mask = self.text_encoder(text, text_lengths, decoding_chunk_size=1, num_decoding_left_chunks=-1)
File "/opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/cosyvoice/transformer/encoder/___torch_mangle_5.py", line 22, in forward
masks = torch.bitwise_not(torch.unsqueeze(mask, 1))
embed = self.embed
_0 = torch.add(torch.matmul(xs, CONSTANTS.c0), CONSTANTS.c1)
~~~~~~~~~~~~ <--- HERE
input = torch.layer_norm(_0, [1024], CONSTANTS.c2, CONSTANTS.c3)
pos_enc = embed.pos_enc
Traceback of TorchScript, original code (most recent call last):
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "test.py", line 9, in <module>
for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
File "/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/cli/cosyvoice.py", line 61, in inference_sft
for model_output in self.model.inference(**model_input, stream=stream):
File "/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/cli/model.py", line 168, in inference
this_tts_speech_token = torch.concat(self.tts_speech_token_dict[this_uuid], dim=1)
RuntimeError: torch.cat(): expected a non-empty list of Tensors
My desktop: M2 Pro
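The root cause is the first error, `"addmm_impl_cpu_" not implemented for 'Half'`: the LLM weights are cast to fp16, but PyTorch's CPU backend lacks fp16 matmul kernels, so the inference thread dies before producing any speech tokens, and the later `torch.cat(): expected a non-empty list of Tensors` is just the downstream symptom. A minimal sketch of the kind of device-dependent dtype guard such a patch needs (the `prepare_for_device` helper is illustrative, not the actual CosyVoice code):

```python
import torch

def prepare_for_device(model: torch.nn.Module, device: torch.device) -> torch.nn.Module:
    """Use fp16 only on CUDA; fall back to fp32 on CPU, where some
    kernels (e.g. addmm for 'Half') are not implemented."""
    if device.type == "cuda":
        return model.half().to(device)
    return model.float().to(device)

# Simulate a checkpoint loaded in fp16, then prepare it for CPU inference.
layer = torch.nn.Linear(4, 4).half()
layer = prepare_for_device(layer, torch.device("cpu"))
out = layer(torch.randn(1, 4))   # runs in fp32 without the 'Half' error
```

The same idea applies to any TorchScript modules the pipeline loads: they must be exported or loaded in fp32 when the target device is the CPU.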
I upgraded torch, torchaudio, and torchvision in my virtual environment; you can try upgrading them too. I also commented out onnxruntime-gpu in requirements.txt.
@alkaidzone I just tried it again, and it is indeed related to the versions of the torch-related libraries. I used the following command to install and upgrade them, and it worked fine.
Thanks for the reply! I modified requirements.txt and ran the command, but running python3 test.py gives:
2024-09-08 17:58:58,754 - modelscope - INFO - PyTorch version 2.4.1 Found.
2024-09-08 17:58:58,754 - modelscope - INFO - Loading ast index from /Users/jiangwenjing/.cache/modelscope/ast_indexer
2024-09-08 17:58:58,784 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 65db473890940e3550f281ba3a7b9944 and a total number of 980 components indexed
failed to import ttsfrd, use WeTextProcessing instead
/opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/diffusers/models/lora.py:393: FutureWarning: `LoRACompatibleLinear` is deprecated and will be removed in version 1.0.0. Use of `LoRACompatibleLinear` is deprecated. Please switch to PEFT backend by installing PEFT: `pip install peft`.
deprecate("LoRACompatibleLinear", "1.0.0", deprecation_message)
2024-09-08 17:59:03,744 INFO input frame rate=50
/opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/nn/utils/weight_norm.py:134: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
WeightNorm.apply(module, name, dim)
/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/dataset/processor.py:24: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.
torchaudio.set_audio_backend('soundfile')
/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/cli/frontend.py:57: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
self.spk2info = torch.load(spk2info, map_location=self.device)
2024-09-08 17:59:05,203 WETEXT INFO found existing fst: /opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/zh_tn_tagger.fst
2024-09-08 17:59:05,203 INFO found existing fst: /opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/zh_tn_tagger.fst
2024-09-08 17:59:05,203 WETEXT INFO /opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/zh_tn_verbalizer.fst
2024-09-08 17:59:05,203 INFO /opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/zh_tn_verbalizer.fst
2024-09-08 17:59:05,203 WETEXT INFO skip building fst for zh_normalizer ...
2024-09-08 17:59:05,203 INFO skip building fst for zh_normalizer ...
2024-09-08 17:59:05,427 WETEXT INFO found existing fst: /opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/en_tn_tagger.fst
2024-09-08 17:59:05,427 INFO found existing fst: /opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/en_tn_tagger.fst
2024-09-08 17:59:05,427 WETEXT INFO /opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/en_tn_verbalizer.fst
2024-09-08 17:59:05,427 INFO /opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/en_tn_verbalizer.fst
2024-09-08 17:59:05,427 WETEXT INFO skip building fst for en_normalizer ...
2024-09-08 17:59:05,427 INFO skip building fst for en_normalizer ...
cpu
/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/cli/model.py:55: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
self.llm.load_state_dict(torch.load(llm_model, map_location=self.device))
/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/cli/model.py:58: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
self.flow.load_state_dict(torch.load(flow_model, map_location=self.device))
/Users/jiangwenjing/cs-records/NUS/dl/cosyvoice/CosyVoice/cosyvoice/cli/model.py:60: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
self.hift.load_state_dict(torch.load(hift_model, map_location=self.device))
['中文女', '中文男', '日语男', '粤语女', '英文女', '英文男', '韩语女']
0%| | 0/1 [00:00<?, ?it/s]2024-09-08 17:59:07,206 INFO synthesis text 你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?
2024-09-08 17:59:20,786 INFO yield speech len 4.655600907029479, rtf 2.9171274100520885
0%| | 0/1 [00:13<?, ?it/s]
Traceback (most recent call last):
File "test.py", line 10, in <module>
torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], 22050)
File "/opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torchaudio/_backend/utils.py", line 312, in save
backend = dispatcher(uri, format, backend)
File "/opt/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torchaudio/_backend/utils.py", line 222, in dispatcher
raise RuntimeError(f"Couldn't find appropriate backend to handle uri {uri} and format {format}.")
RuntimeError: Couldn't find appropriate backend to handle uri sft_0.wav and format None.
My conda list is below:
# packages in environment at /opt/anaconda3/envs/cosyvoice:
#
# Name Version Build Channel
absl-py 2.1.0 pypi_0 pypi
accelerate 0.34.2 pypi_0 pypi
addict 2.4.0 pypi_0 pypi
aiofiles 23.2.1 pypi_0 pypi
aiohappyeyeballs 2.4.0 pypi_0 pypi
aiohttp 3.10.5 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
aliyun-python-sdk-core 2.15.2 pypi_0 pypi
aliyun-python-sdk-kms 2.16.5 pypi_0 pypi
altair 5.4.1 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
antlr4-python3-runtime 4.9.3 pypi_0 pypi
anyio 4.4.0 pypi_0 pypi
async-timeout 4.0.3 pypi_0 pypi
atk-1.0 2.36.0 heb41896_4 conda-forge
attrs 24.2.0 pypi_0 pypi
audioread 3.0.1 pypi_0 pypi
beautifulsoup4 4.12.3 pypi_0 pypi
ca-certificates 2024.8.30 hf0a4a13_0 conda-forge
cachetools 5.5.0 pypi_0 pypi
cairo 1.16.0 he69dfd1_1008 conda-forge
certifi 2024.8.30 pypi_0 pypi
cffi 1.17.1 pypi_0 pypi
charset-normalizer 3.3.2 pypi_0 pypi
click 8.1.7 pypi_0 pypi
coloredlogs 15.0.1 pypi_0 pypi
conformer 0.3.2 pypi_0 pypi
contourpy 1.1.1 pypi_0 pypi
crcmod 1.7 pypi_0 pypi
cryptography 43.0.1 pypi_0 pypi
cycler 0.12.1 pypi_0 pypi
cython 3.0.11 pypi_0 pypi
datasets 2.18.0 pypi_0 pypi
decorator 5.1.1 pypi_0 pypi
diffusers 0.27.2 pypi_0 pypi
dill 0.3.8 pypi_0 pypi
dnspython 2.6.1 pypi_0 pypi
einops 0.8.0 pypi_0 pypi
email-validator 2.2.0 pypi_0 pypi
exceptiongroup 1.2.2 pypi_0 pypi
expat 2.6.3 hf9b8971_0 conda-forge
fastapi 0.111.0 pypi_0 pypi
fastapi-cli 0.0.4 pypi_0 pypi
ffmpeg 1.4 pypi_0 pypi
ffmpeg-python 0.2.0 pypi_0 pypi
ffmpy 0.4.0 pypi_0 pypi
filelock 3.15.4 pypi_0 pypi
flatbuffers 24.3.25 pypi_0 pypi
font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge
font-ttf-inconsolata 3.000 h77eed37_0 conda-forge
font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge
font-ttf-ubuntu 0.83 h77eed37_2 conda-forge
fontconfig 2.13.94 heb65262_0 conda-forge
fonts-conda-ecosystem 1 0 conda-forge
fonts-conda-forge 1 0 conda-forge
fonttools 4.53.1 pypi_0 pypi
freetype 2.10.4 h17b34a0_1 conda-forge
fribidi 1.0.10 h27ca646_0 conda-forge
frozenlist 1.4.1 pypi_0 pypi
fsspec 2024.2.0 pypi_0 pypi
future 1.0.0 pypi_0 pypi
gast 0.6.0 pypi_0 pypi
gdk-pixbuf 2.42.6 hff60771_0 conda-forge
gdown 5.1.0 pypi_0 pypi
gettext 0.21.1 h0186832_0 conda-forge
giflib 5.2.2 h93a5062_0 conda-forge
google-auth 2.34.0 pypi_0 pypi
google-auth-oauthlib 1.0.0 pypi_0 pypi
gradio 4.32.2 pypi_0 pypi
gradio-client 0.17.0 pypi_0 pypi
graphite2 1.3.13 h9f76cd9_1001 conda-forge
graphviz 2.49.0 h2097151_0 conda-forge
grpcio 1.57.0 pypi_0 pypi
grpcio-tools 1.57.0 pypi_0 pypi
gtk2 2.24.33 hedeb848_1 conda-forge
gts 0.7.6 h4b6d4d6_2 conda-forge
h11 0.14.0 pypi_0 pypi
harfbuzz 3.0.0 h13b3495_1 conda-forge
httpcore 1.0.5 pypi_0 pypi
httptools 0.6.1 pypi_0 pypi
httpx 0.27.2 pypi_0 pypi
huggingface-hub 0.24.6 pypi_0 pypi
humanfriendly 10.0 pypi_0 pypi
hydra-core 1.3.2 pypi_0 pypi
hyperpyyaml 1.2.2 pypi_0 pypi
icu 68.2 hbdafb3b_0 conda-forge
idna 3.8 pypi_0 pypi
importlib-metadata 8.4.0 pypi_0 pypi
importlib-resources 6.4.4 pypi_0 pypi
inflect 7.3.1 pypi_0 pypi
jieba 0.42.1 pypi_0 pypi
jinja2 3.1.4 pypi_0 pypi
jmespath 0.10.0 pypi_0 pypi
joblib 1.4.2 pypi_0 pypi
jpeg 9e h1a8c8d9_3 conda-forge
jsonschema 4.23.0 pypi_0 pypi
jsonschema-specifications 2023.12.1 pypi_0 pypi
kiwisolver 1.4.7 pypi_0 pypi
lazy-loader 0.4 pypi_0 pypi
libcxx 14.0.6 h848a8c0_0
libexpat 2.6.3 hf9b8971_0 conda-forge
libffi 3.4.4 hca03da5_1
libgd 2.3.2 hdf9055c_0 conda-forge
libglib 2.68.4 h67e64d8_1 conda-forge
libiconv 1.17 h0d3ecfb_2 conda-forge
libpng 1.6.37 hf7e6567_2 conda-forge
librosa 0.10.2 pypi_0 pypi
librsvg 2.52.2 h957afdb_0 conda-forge
libtiff 4.2.0 hc6122e1_3 conda-forge
libtool 2.4.7 h00cdb27_1 conda-forge
libwebp 1.2.0 h45627a8_0 conda-forge
libwebp-base 1.2.0 h27ca646_2 conda-forge
libxml2 2.9.12 h538f51a_0 conda-forge
lightning 2.2.4 pypi_0 pypi
lightning-utilities 0.11.7 pypi_0 pypi
llvmlite 0.41.1 pypi_0 pypi
lz4-c 1.9.3 hbdafb3b_1 conda-forge
markdown 3.7 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 2.1.5 pypi_0 pypi
matplotlib 3.7.5 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
modelscope 1.15.0 pypi_0 pypi
more-itertools 10.5.0 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
msgpack 1.0.8 pypi_0 pypi
multidict 6.0.5 pypi_0 pypi
multiprocess 0.70.16 pypi_0 pypi
narwhals 1.6.2 pypi_0 pypi
ncurses 6.4 h313beb8_0
networkx 3.1 pypi_0 pypi
numba 0.58.1 pypi_0 pypi
numpy 1.24.4 pypi_0 pypi
oauthlib 3.2.2 pypi_0 pypi
omegaconf 2.3.0 pypi_0 pypi
onnx 1.16.0 pypi_0 pypi
onnxruntime 1.16.0 pypi_0 pypi
openai-whisper 20231117 pypi_0 pypi
openfst 1.8.2 hdb0ca01_2 conda-forge
openssl 3.3.2 h8359307_0 conda-forge
orjson 3.10.7 pypi_0 pypi
oss2 2.19.0 pypi_0 pypi
packaging 24.1 pypi_0 pypi
pandas 2.0.3 pypi_0 pypi
pango 1.48.10 h26a1e14_2 conda-forge
pcre 8.45 hbdafb3b_0 conda-forge
peft 0.12.0 pypi_0 pypi
pillow 10.4.0 pypi_0 pypi
pip 24.2 py38hca03da5_0
pixman 0.40.0 h27ca646_0 conda-forge
pkgutil-resolve-name 1.3.10 pypi_0 pypi
platformdirs 4.2.2 pypi_0 pypi
pooch 1.8.2 pypi_0 pypi
protobuf 4.25.0 pypi_0 pypi
psutil 6.0.0 pypi_0 pypi
pyarrow 17.0.0 pypi_0 pypi
pyarrow-hotfix 0.6 pypi_0 pypi
pyasn1 0.6.0 pypi_0 pypi
pyasn1-modules 0.4.0 pypi_0 pypi
pycparser 2.22 pypi_0 pypi
pycryptodome 3.20.0 pypi_0 pypi
pydantic 2.7.0 pypi_0 pypi
pydantic-core 2.18.1 pypi_0 pypi
pydub 0.25.1 pypi_0 pypi
pygments 2.18.0 pypi_0 pypi
pynini 2.1.5 py38h9dc3d6a_5 conda-forge
pyparsing 3.1.4 pypi_0 pypi
pypinyin 0.52.0 pypi_0 pypi
pysocks 1.7.1 pypi_0 pypi
pysoundfile 0.9.0.post1 pypi_0 pypi
python 3.8.12 hd949e87_1_cpython conda-forge
python-dateutil 2.9.0.post0 pypi_0 pypi
python-dotenv 1.0.1 pypi_0 pypi
python-multipart 0.0.9 pypi_0 pypi
python_abi 3.8 5_cp38 conda-forge
pytorch-lightning 2.3.3 pypi_0 pypi
pytz 2024.1 pypi_0 pypi
pyyaml 6.0.2 pypi_0 pypi
readline 8.2 h1a28f6b_0
referencing 0.35.1 pypi_0 pypi
regex 2024.7.24 pypi_0 pypi
requests 2.32.3 pypi_0 pypi
requests-oauthlib 2.0.0 pypi_0 pypi
rich 13.7.1 pypi_0 pypi
rpds-py 0.20.0 pypi_0 pypi
rsa 4.9 pypi_0 pypi
ruamel-yaml 0.18.6 pypi_0 pypi
ruamel-yaml-clib 0.2.8 pypi_0 pypi
ruff 0.6.4 pypi_0 pypi
safetensors 0.4.5 pypi_0 pypi
scikit-learn 1.3.2 pypi_0 pypi
scipy 1.10.1 pypi_0 pypi
semantic-version 2.10.0 pypi_0 pypi
setuptools 72.1.0 py38hca03da5_0
shellingham 1.5.4 pypi_0 pypi
simplejson 3.19.3 pypi_0 pypi
six 1.16.0 pypi_0 pypi
sniffio 1.3.1 pypi_0 pypi
sortedcontainers 2.4.0 pypi_0 pypi
soundfile 0.12.1 pypi_0 pypi
soupsieve 2.6 pypi_0 pypi
sox 1.5.0 pypi_0 pypi
soxr 0.3.7 pypi_0 pypi
sqlite 3.45.3 h80987f9_0
starlette 0.37.2 pypi_0 pypi
sympy 1.13.2 pypi_0 pypi
tensorboard 2.14.0 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
threadpoolctl 3.5.0 pypi_0 pypi
tiktoken 0.7.0 pypi_0 pypi
tk 8.6.14 h6ba3021_0
tokenizers 0.19.1 pypi_0 pypi
tomli 2.0.1 pypi_0 pypi
tomlkit 0.12.0 pypi_0 pypi
torch 2.4.1 pypi_0 pypi
torchaudio 2.4.1 pypi_0 pypi
torchmetrics 1.4.1 pypi_0 pypi
torchvision 0.19.1 pypi_0 pypi
tqdm 4.66.5 pypi_0 pypi
transformers 4.44.2 pypi_0 pypi
typeguard 4.3.0 pypi_0 pypi
typer 0.12.5 pypi_0 pypi
typing-extensions 4.12.2 pypi_0 pypi
tzdata 2024.1 pypi_0 pypi
ujson 5.10.0 pypi_0 pypi
urllib3 2.2.2 pypi_0 pypi
uvicorn 0.30.0 pypi_0 pypi
uvloop 0.20.0 pypi_0 pypi
watchfiles 0.24.0 pypi_0 pypi
websockets 11.0.3 pypi_0 pypi
werkzeug 3.0.4 pypi_0 pypi
wetextprocessing 1.0.3 pypi_0 pypi
wget 3.2 pypi_0 pypi
wheel 0.43.0 py38hca03da5_0
xxhash 3.5.0 pypi_0 pypi
xz 5.4.6 h80987f9_1
yapf 0.40.2 pypi_0 pypi
yarl 1.9.11 pypi_0 pypi
zipp 3.20.1 pypi_0 pypi
zlib 1.2.13 h18a0788_1
zstd 1.5.0 h861e0a7_0 conda-forge
I also tried installing torchaudio==2.1.2, but it is not compatible with torch==2.4.1, so it raised errors. If both torchaudio and torch are version 2.1.2, it raises the same error as before.
@alkaidzone The versions of torchaudio and torch should be the same; I use version 2.4.1. For all the dependencies, you can refer to the requirements.txt file in my forked repository. From your error log, it seems there is a problem with an audio-related library. Please check and install the following libraries.
Thanks for the reply. I checked and found that both soundfile==0.12.1 and librosa==0.10.2 (per requirements.txt) are installed. So I use
Both torch and torchaudio are 2.4.1 |
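The `Couldn't find appropriate backend` error means torchaudio's dispatcher found no usable audio backend (on macOS that usually means libsndfile/soundfile isn't importable in the active environment, even if the package appears in pip's list). Installing a working soundfile is the proper fix; as a workaround, the tensor can be written without torchaudio at all. A hedged sketch using only the standard-library wave module (the `save_wav` helper is hypothetical, not part of CosyVoice):

```python
import wave
import numpy as np
import torch

def save_wav(path: str, speech: torch.Tensor, sample_rate: int) -> None:
    """Write a mono float tensor in [-1, 1] as 16-bit PCM WAV via the
    stdlib wave module, bypassing torchaudio's backend dispatcher."""
    pcm = (speech.squeeze(0).clamp(-1.0, 1.0).numpy() * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())

# Stand-in tensor; in the real test script this would be j['tts_speech'].
save_wav("sft_0.wav", torch.zeros(1, 22050), 22050)
```

This only sidesteps the save step; verifying that `import soundfile` succeeds in the same interpreter is still worth doing, since torchaudio 2.x picks its backend at call time.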