Added medium and medium.en models for TensorRT-LLM backend #31
base: main
Conversation
Hi @colinator can you run some WER checks on the medium and medium.en models for the TensorRT-LLM backend? According to the TensorRT-LLM repo, they only support the large model. You can use these to run the tests:
Ahoy there. I couldn't find where in tensorrt-llm it was only compatible with large-v2. Maybe in the 'builder'? But that's just the builder. Your code seemed to work. The benchmark doesn't calculate WER, right? The transcriptions seem plausible, though:
You want me to attach the transcription CSV files?
Do you have a script that performs WER calculation from the CSV outputs? I see your WER function, but I'm not totally clear on any pre-processing (lowercasing, etc.) you do when you calculate it...
Well, here are the outputs. I'm on an RTX 3080, so slower than your results, and batch size is 16 because of memory limits.
Hey, yes, I normalize the text and then perform lowercasing as well. Here: https://github.com/shashikg/WhisperS2T/blob/main/tools/text_normalizer.py#L75 Then I run this evaluate function on the normalized texts: https://github.com/shashikg/WhisperS2T/blob/main/tools/metrics.py#L68 BTW, I quickly checked the output txt files; the output looks good to me.
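In other words, the check boils down to something like the sketch below. This is an illustrative approximation only: the real tools/text_normalizer.py does more (number and abbreviation handling, etc.), and the helper names here are made up.

```python
import re

def normalize(text: str) -> str:
    """Rough stand-in for tools/text_normalizer.py: lowercase and
    strip punctuation so WER isn't inflated by formatting differences."""
    text = text.lower()
    text = re.sub(r"[^\w\s']", " ", text)      # drop punctuation
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via edit distance over whitespace tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / max(len(ref), 1)

print(wer(normalize("Hello, World!"), normalize("hello word")))  # 0.5
```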
Hi @colinator any update?
I got this for medium and medium.en. Card is an RTX 3080, if that matters... Why is medium.en so much worse?
Oh, this is the script that prints it out - might be useful for some bigger pipeline. I'll just paste it here - not sure if I should add it to this PR yet...
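For illustration, a script along those lines might look like the following, reusing the `normalize`/`wer` helpers sketched above. The CSV paths and column names (`file`, `text`) are assumptions for the example, not the actual output format.

```python
import csv

# Hypothetical layout: one CSV per model with columns "file" and "text",
# plus a reference CSV with ground-truth transcripts in the same format.
MODELS = {
    "medium":    "outputs/medium.csv",
    "medium.en": "outputs/medium_en.csv",
}
REFERENCES = "data/references.csv"

def load(path):
    with open(path, newline="") as f:
        return {row["file"]: row["text"] for row in csv.DictReader(f)}

refs = load(REFERENCES)
for model, path in MODELS.items():
    hyps = load(path)
    scores = [wer(normalize(refs[k]), normalize(v))
              for k, v in hyps.items() if k in refs]
    if scores:
        print(f"{model}: WER = {sum(scores) / len(scores):.4f} "
              f"over {len(scores)} files")
```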
@shashikg ^^^
Seems to work for the "medium" and "medium.en" models now with the tensorrt-llm backend.
Fixes #30
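For anyone trying this out, loading the smaller models should look like the usual README-style usage, just with a different `model_identifier`. A minimal sketch, assuming the `load_model`/`transcribe_with_vad` API shown in the project README; the audio path is illustrative:

```python
import whisper_s2t

# "medium" / "medium.en" instead of the default "large-v2";
# the TensorRT engine is built on first load.
model = whisper_s2t.load_model(model_identifier="medium.en",
                               backend="TensorRT-LLM")

files = ["audio/sample.wav"]  # illustrative path
out = model.transcribe_with_vad(files,
                                lang_codes=["en"],
                                tasks=["transcribe"],
                                initial_prompts=[None],
                                batch_size=16)  # 16 fit on the RTX 3080 in this thread
print(out[0])
```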