Remove torch dependency, Faster numpy Feature extraction #1106
Conversation
Mentioning community reviewers, feel free to ignore this.
Hi,

I have a

Another main bottleneck here is the padding with 30 seconds. But essentially, the effect of padding the input is padding the output with some constant vector, so one could mitigate this doubling by padding the STFT/mel outputs cleverly. We lose almost the same amount of time as STFT to
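To illustrate the pad-the-output idea, a minimal NumPy sketch (not code from this PR; the function name and the precomputed `silence_column` are assumptions for the example):

```python
import numpy as np

def pad_mel_with_silence(log_mel, silence_column, target_frames=3000):
    """Pad the *output* instead of the input: append the constant column that
    zero-padded audio would produce until the spectrogram spans `target_frames`
    (3000 frames = 30 s at a 10 ms hop).

    log_mel: (n_mels, n_frames) log-mel of the real audio only.
    silence_column: (n_mels,) log-mel column computed once from pure zeros.
    """
    n_mels, n_frames = log_mel.shape
    if n_frames >= target_frames:
        return log_mel[:, :target_frames]
    pad = np.tile(silence_column.reshape(-1, 1), (1, target_frames - n_frames))
    return np.concatenate([log_mel, pad], axis=1).astype(log_mel.dtype)
```

The silence column would be computed once by running the same mel pipeline over a block of zeros, so the redundant STFT work over 30 s of padding disappears.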
I think doing this on GPU with torch is a bit overkill, as feature extraction is seldom the bottleneck in the Whisper pipeline. Simplicity and dependency management are more important here. And when things move towards NumPy 2, it won't be that bad.
This looks interesting as well, should be quite fast.
I updated the figures along with the benchmarking script and limited it to 4 cores for reproducibility.
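The benchmarking script itself isn't included in this excerpt; a rough sketch of that style of measurement (thread pools pinned to 4 cores, a toy framed-FFT stand-in for the real feature extractor) could look like this:

```python
import os
# Pin BLAS/FFT thread pools before importing numpy so runs are comparable.
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["MKL_NUM_THREADS"] = "4"

import time
import numpy as np

def toy_features(audio, n_fft=400, hop=160):
    """Toy stand-in for the feature extractor: windowed framed FFT magnitudes."""
    window = np.hanning(n_fft).astype(np.float32)
    frames = np.lib.stride_tricks.sliding_window_view(audio, n_fft)[::hop]
    return np.abs(np.fft.rfft(frames * window, axis=-1))

audio = np.random.randn(16000 * 30).astype(np.float32)  # a 30 s segment at 16 kHz

start = time.perf_counter()
for _ in range(100):
    toy_features(audio)
print(f"{(time.perf_counter() - start) / 100 * 1000:.2f} ms per extraction")
```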
We're removing torch, that's for sure; the choice here is between keeping both CuPy and NumPy or just one of them.
This implementation is ported from the C++ PyTorch implementation. It's very similar to the librosa implementation but has some extra functionality that I'm not sure we need here. The reason I went with it is that the librosa implementation needs an extra dependency, and copying the function over isn't simple because it uses many librosa internal functions, which would be a massive increase in the amount of code needed.
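For a sense of how small a standalone filterbank can be, here is an illustrative HTK-style triangular mel filterbank in plain NumPy. This is only a sketch, not the ported implementation discussed above (Whisper's reference filters use a Slaney-style mel scale and normalization):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=16000, n_fft=400, n_mels=80):
    """Triangular mel filters on the HTK scale, shape (n_mels, n_fft // 2 + 1)."""
    fft_freqs = np.linspace(0.0, sr / 2, n_fft // 2 + 1)
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    hz_points = mel_to_hz(mel_points)
    lower = hz_points[:-2, None]     # left edge of each triangle
    center = hz_points[1:-1, None]   # peak of each triangle
    upper = hz_points[2:, None]      # right edge of each triangle
    rising = (fft_freqs - lower) / (center - lower)
    falling = (upper - fft_freqs) / (upper - center)
    return np.maximum(0.0, np.minimum(rising, falling))

filters = mel_filterbank()  # (80, 201): multiply with an STFT power spectrum to get mel bands
```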
I'm working on removing the padding completely while still generating identical results, but I'm leaving that for another PR. Check here: it does indeed cut the feature extraction time in half.
I have no idea how to calculate
Thanks for your work @MahmoudAshraf97, it's really impressive. My thinking is in line with @ozancaglayan: "Simplicity and dependency management are more important here. And when things move towards NumPy 2, it won't be that bad." All implementations are good; it depends on what you optimize for. I will optimize for simplicity and fewer dependencies.
Interesting. I'm always confused about one thing. Given that Whisper is trained with 30-second segments, the above PR and your thinking are not about running Whisper freely on arbitrarily sized audio files but rather about removing the

I thought that to do proper seeking and timestamping, Whisper requires those +30 secs of zeros all the time, but isn't that the case then? i.e., would the output of Whisper be exactly the same if we just pad the inputs to 30 secs at most?
We don't even need to pad it to 30s; adding

Edit: it produces exactly the same encoder outputs for NumPy, while CuPy needs at least 17s of zero padding to achieve the same numerical accuracy for audios shorter than 30s. That closes the speed gap between the two even more.
Changed the title: Remove torch dependency, use CuPy for feature extraction → Remove torch dependency, Faster numpy Feature extraction
I implemented the reduced padding in bc86503. Everyone feel free to test it and report any issues.
Thanks! I probably won't have time next week as I'll be on annual leave. One thing though: does this mean that you'll no longer be padding, let's say, an audio file of 10 seconds to 30 seconds? People were saying that Whisper has a severe hallucination problem if you let it transcribe signals shorter than 30 seconds because it's not trained with an attention mask. There are even papers that fine-tune it to mitigate this. But I'm not 100% sure whether it's the same thing or not; need to test it properly.
In the original Whisper implementation, the audio is padded with 30s of zeros and converted to a log-mel spectrogram; the frames that correspond to the padding are then removed, and the features are padded again with zeros until they are equivalent to 30s. What I did here is that I found that padding with 10ms instead of 30s is mathematically equivalent and produces the same encoder input and output within the numerical tolerance of the data type used. Why does this work? Because in the STFT calculation, each window is independent and has an overlap of
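A hedged NumPy sketch of that argument, using assumed Whisper-like parameters (n_fft=400, hop=160): a frame whose window never reaches past the appended zeros is identical whether 30 s of zeros or a single hop's worth are appended, so all frames kept for the real audio match exactly:

```python
import numpy as np

N_FFT, HOP, SR = 400, 160, 16000   # assumed Whisper-like STFT parameters

def stft_frames(signal):
    """Frame the signal the way a centered STFT with reflect padding sees it."""
    padded = np.pad(signal, N_FFT // 2, mode="reflect")
    return np.lib.stride_tricks.sliding_window_view(padded, N_FFT)[::HOP]

audio = np.random.randn(SR * 10).astype(np.float32)  # a 10 s clip
full = stft_frames(np.pad(audio, (0, SR * 30)))       # original scheme: 30 s of zeros appended
short = stft_frames(np.pad(audio, (0, HOP)))          # reduced scheme: one hop (10 ms) of zeros

n_kept = len(audio) // HOP                             # frames that correspond to the real audio
print(np.array_equal(full[:n_kept], short[:n_kept]))   # True: those frames are bit-identical
```

Since the window and FFT are applied per frame, identical frames give identical STFT columns, and therefore identical mel features for the part of the spectrogram that is kept.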
I want to merge this before Friday so we can release a new version on the 15th of November, so if anyone has any comments or reviews, please let me know.
Hi all,
This PR aims to remove the torch dependency, as it's only used for feature extraction. There are 2 options:
These are performance figures on a 30s segment using this script
Edit: decided to remove CuPy, as the speed difference is not worth the extra code.