Replies: 2 comments 1 reply
-
@DaveBaker, hello. It's better to use the latest version of ctranslate2 (4.x). Have you tried upgrading CUDA from 11 to 12 (refer to this link)? And could you share your code logic? You can refer to this example when transcribing with CUDA.
-
OK, I seem to have got it to work, piecing the steps together from many different pages and manuals. It does provide faster responses and appears to be using the GPUs rather than the CPUs, but it hasn't been easy to deploy. Many thanks for taking the time to reply. Dave B
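For anyone landing here later, a CUDA-enabled source build on a Jetson usually follows the CTranslate2 installation docs roughly as below. This is a sketch, not a verified recipe for every JetPack release; the `-j4` job count and the OpenBLAS flag are assumptions for an aarch64 board without MKL.

```shell
# Hedged sketch of building CTranslate2 with CUDA on aarch64.
# Adjust flags and versions for your JetPack release.
git clone --recursive https://github.com/OpenNMT/CTranslate2.git
cd CTranslate2 && mkdir build && cd build
cmake .. -DWITH_CUDA=ON -DWITH_CUDNN=ON \
         -DWITH_MKL=OFF -DWITH_OPENBLAS=ON \
         -DOPENMP_RUNTIME=COMP
make -j4
sudo make install
sudo ldconfig

# Build and install the Python wheel so it replaces pip's CPU-only wheel.
cd ../python
pip install -r install_requirements.txt
python setup.py bdist_wheel
pip install --force-reinstall dist/*.whl
```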
-
Please excuse my request, but I would really like some assistance getting faster-whisper working with my Jetson Orin Nano.
I have been working on a voice-activated robot for some time, and recently came across faster-whisper. Unfortunately, if I install faster-whisper via pip, I seem to be restricted to CPU only: it tells me the package is not compiled for use with CUDA, so a default (CPU-only) build is installed.
I have completed the installation of CTranslate2 3.24.0 according to the installation instructions, but I still can't seem to use either faster-whisper or the compiled CTranslate2 programs.
As a last resort, I hoped that someone might be able to guide me through the process as best as possible. Sorry for the request; I am a noob at installing these libraries, but I have been searching endlessly across GitHub for instructions to help.
In case it helps, here is the system I am using, so you can confirm I have the appropriate components:
Platform: NVIDIA Jetson Orin Nano (aarch64), Linux, Ubuntu 20.04 (focal)
Python: 3.8.10
CUDA: 11.4.315
cuDNN: 8.6.0.166
TensorRT: 8.5.2.2
Many thanks for any assistance anyone can give me.
Dave B