diff --git a/egs/librispeech/ASR/RESULTS.md b/egs/librispeech/ASR/RESULTS.md
index ed456a6171..28361afdd8 100644
--- a/egs/librispeech/ASR/RESULTS.md
+++ b/egs/librispeech/ASR/RESULTS.md
@@ -244,6 +244,87 @@ for m in greedy_search modified_beam_search fast_beam_search; do
 done
 ```
 
+### pruned_transducer_stateless7 (Fine-tune with mux)
+
+See for more details.
+
+[pruned_transducer_stateless7](./pruned_transducer_stateless7)
+
+The tensorboard log can be found at
+
+You can find the pretrained model and bpe model needed for fine-tuning at:
+
+You can find a fine-tuned model, fine-tuning logs, decoding logs, and decoding
+results at:
+
+You can use to deploy it.
+
+Number of model parameters: 70369391, i.e., 70.37 M
+
+| decoding method      | dev   | test  | test-clean | test-other | comment            |
+|----------------------|-------|-------|------------|------------|--------------------|
+| greedy_search        | 14.27 | 14.22 | 2.08       | 4.79       | --epoch 20 --avg 5 |
+| modified_beam_search | 14.22 | 14.08 | 2.06       | 4.72       | --epoch 20 --avg 5 |
+| fast_beam_search     | 14.23 | 14.17 | 2.08       | 4.09       | --epoch 20 --avg 5 |
+
+The training commands are:
+```bash
+export CUDA_VISIBLE_DEVICES="0,1"
+
+./pruned_transducer_stateless7/finetune.py \
+  --world-size 2 \
+  --num-epochs 20 \
+  --start-epoch 1 \
+  --exp-dir pruned_transducer_stateless7/exp_giga_finetune \
+  --subset S \
+  --use-fp16 1 \
+  --base-lr 0.005 \
+  --lr-epochs 100 \
+  --lr-batches 100000 \
+  --bpe-model icefall-asr-librispeech-pruned-transducer-stateless7-2022-11-11/data/lang_bpe_500/bpe.model \
+  --do-finetune True \
+  --use-mux True \
+  --finetune-ckpt icefall-asr-librispeech-pruned-transducer-stateless7-2022-11-11/exp/pretrain.pt \
+  --max-duration 500
+```
+
+The decoding commands are:
+```bash
+# greedy_search
+./pruned_transducer_stateless7/decode.py \
+  --epoch 20 \
+  --avg 5 \
+  --use-averaged-model 1 \
+  --exp-dir ./pruned_transducer_stateless7/exp_giga_finetune \
+  --max-duration 600 \
+  --decoding-method greedy_search
+
+# modified_beam_search
+./pruned_transducer_stateless7/decode.py \
+  --epoch 20 \
+  --avg 5 \
+  --use-averaged-model 1 \
+  --exp-dir ./pruned_transducer_stateless7/exp_giga_finetune \
+  --max-duration 600 \
+  --decoding-method modified_beam_search \
+  --beam-size 4
+
+# fast_beam_search
+./pruned_transducer_stateless7/decode.py \
+  --epoch 20 \
+  --avg 5 \
+  --use-averaged-model 1 \
+  --exp-dir ./pruned_transducer_stateless7/exp_giga_finetune \
+  --max-duration 600 \
+  --decoding-method fast_beam_search \
+  --beam 20.0 \
+  --max-contexts 8 \
+  --max-states 64
+```
+
 ### pruned_transducer_stateless7 (zipformer + multidataset(LibriSpeech + GigaSpeech + CommonVoice 13.0))
 
 See for more details.
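A note on `--use-mux True`: when enabled, the fine-tuning data is interleaved with the original training data during fine-tuning, which helps the model avoid forgetting what it learned in pretraining. In lhotse terms this is weighted multiplexing of cut streams (as in `CutSet.mux`). Below is a minimal pure-Python sketch of the idea; the `mux` helper and the `weights` values are illustrative, not the lhotse API:

```python
import random

def mux(*streams, weights=None, seed=0):
    """Interleave items from several iterators, picking the next source
    at random in proportion to `weights` (uniform by default).
    An exhausted stream is dropped; iteration ends when all are empty."""
    rng = random.Random(seed)
    iters = [iter(s) for s in streams]
    weights = list(weights) if weights else [1.0] * len(iters)
    while iters:
        i = rng.choices(range(len(iters)), weights=weights)[0]
        try:
            yield next(iters[i])
        except StopIteration:
            # Drop the exhausted stream and its weight, keeping both aligned.
            del iters[i], weights[i]

# Toy stand-ins for the fine-tuning and pretraining cut streams.
giga = (f"giga-{n}" for n in range(3))
libri = (f"libri-{n}" for n in range(3))
mixed = list(mux(giga, libri, weights=[0.5, 0.5]))
```

Every item from both streams appears exactly once in `mixed`, in a randomly interleaved order; skewing `weights` toward the fine-tuning stream makes its cuts dominate each epoch while still replaying some pretraining data.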