Replies: 4 comments
-
Sure it's possible, but someone needs to do it :)
-
In response to a feature request, I added network transcription support (using the whisper.cpp server implementation) to Blurt (a GNOME extension) and the BlahST speech-to-text input tools (based on whisper.cpp). I am blown away by the larger-than-expected speedup of transcription when going through the server! Before, I was getting ~30x-faster-than-realtime transcription with a local whisper.cpp instance that loaded the model file on each call. Now the request itself (timed to stderr with curl) is almost 90x faster than real time: ~140 ms for a 12.5 s speech clip. Loading the model takes about 110 ms for the "main" executable. It seems like there is an extra advantage to running a local server with the model preloaded, beyond just skipping the model load: at ~30x, the 12.5 s clip used to take roughly 420 ms, and subtracting the ~110 ms model load still leaves about 300 ms, more than double the server's 140 ms. Any thoughts?
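For anyone who wants to reproduce the timing, here is a minimal sketch in Python rather than curl. It assumes a whisper.cpp server started with its defaults (listening on 127.0.0.1:8080 and exposing the /inference endpoint, which accepts a multipart file upload); the sample.wav path is a placeholder for any 16 kHz mono WAV clip.

```python
# Minimal timing sketch: POST a WAV file to a local whisper.cpp server.
# Assumes the server was started with defaults, e.g.:
#   ./server -m models/ggml-base.en.bin
# so the model stays loaded in memory between requests.
import time

import requests  # pip install requests

URL = "http://127.0.0.1:8080/inference"  # default host/port (assumption)

with open("sample.wav", "rb") as f:  # placeholder: any 16 kHz mono WAV
    start = time.perf_counter()
    resp = requests.post(
        URL,
        files={"file": ("sample.wav", f, "audio/wav")},
        data={"response_format": "json"},
    )
    elapsed = time.perf_counter() - start

resp.raise_for_status()
print(f"request took {elapsed * 1000:.0f} ms")
print(resp.json().get("text", "").strip())
```

Timing only the request like this excludes the model load entirely, which is exactly why a persistent server beats spawning the "main" executable per utterance.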
-
So I found someone to write it: that someone was myself!
-
There is also this: https://github.com/rhasspy/wyoming-whisper-cpp
-
Hi all,
I am using rhasspy/wyoming-whisper (docker compose on a MacBook Pro M3) with Home Assistant. It works fine, but it is slow. I was wondering if it is possible to create a whisper.cpp server supporting the Wyoming protocol. This is the python handler.
Alternatively, would it be possible to create a Python binding/package for whisper.cpp, so the above handler could be updated to interact with whisper.cpp (instead of this)?
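Not an authoritative answer, but one way to put whisper.cpp behind the Wyoming protocol without writing bindings is to have the handler buffer incoming audio and forward it over HTTP to a running whisper.cpp server (which keeps the model preloaded, as discussed above). A rough sketch, assuming the wyoming Python package's AsyncEventHandler/AudioChunk/Transcript API (the same pattern wyoming-faster-whisper's handler uses) and a whisper.cpp server on 127.0.0.1:8080; WHISPER_URL is a placeholder:

```python
# Rough sketch: a Wyoming ASR handler that forwards buffered audio
# to a whisper.cpp server instead of running faster-whisper locally.
import io
import wave

import requests  # pip install requests
from wyoming.asr import Transcript
from wyoming.audio import AudioChunk, AudioStop
from wyoming.event import Event
from wyoming.server import AsyncEventHandler

WHISPER_URL = "http://127.0.0.1:8080/inference"  # placeholder


class WhisperCppEventHandler(AsyncEventHandler):
    """Buffers audio chunks, then transcribes them on AudioStop."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._audio = bytearray()
        self._rate = 16000
        self._width = 2
        self._channels = 1

    async def handle_event(self, event: Event) -> bool:
        if AudioChunk.is_type(event.type):
            # Accumulate raw PCM and remember its format.
            chunk = AudioChunk.from_event(event)
            self._rate = chunk.rate
            self._width = chunk.width
            self._channels = chunk.channels
            self._audio.extend(chunk.audio)
            return True

        if AudioStop.is_type(event.type):
            # Wrap the raw PCM in a WAV container for the HTTP upload.
            buf = io.BytesIO()
            with wave.open(buf, "wb") as wav:
                wav.setnchannels(self._channels)
                wav.setsampwidth(self._width)
                wav.setframerate(self._rate)
                wav.writeframes(bytes(self._audio))
            buf.seek(0)

            resp = requests.post(
                WHISPER_URL,
                files={"file": ("audio.wav", buf, "audio/wav")},
                data={"response_format": "json"},
            )
            resp.raise_for_status()
            text = resp.json().get("text", "").strip()

            await self.write_event(Transcript(text=text).event())
            self._audio.clear()
            return False  # done with this transcription request

        return True
```

A real handler would also answer Describe events with an Info response and use an async HTTP client instead of the blocking requests call; this only shows the transcription path.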