Megatron hidden transformations (NVIDIA#6332)
* [TTS] bugfix for missing configs. (#4725)

Signed-off-by: Xuesong Yang <[email protected]>

* docs typo fix

Signed-off-by: Oleksii Kuchaiev <[email protected]>

* Fix pynini install in TTS tutorials (#4729)

Signed-off-by: Jocelyn Huang <[email protected]>

Signed-off-by: Jocelyn Huang <[email protected]>

* Fix ASR notebooks (#4738)

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>

* Multilingual VAD model (#4734)

* add ngc link

Signed-off-by: fayejf <[email protected]>

* add tuned VAD config on ASR data

Signed-off-by: fayejf <[email protected]>

* yaml note

Signed-off-by: fayejf <[email protected]>

* update vad asr notebook with mVAD

Signed-off-by: fayejf <[email protected]>

* update vad infer config comment

Signed-off-by: fayejf <[email protected]>

* fix

Signed-off-by: fayejf <[email protected]>

* mvad sd config for ch109

Signed-off-by: fayejf <[email protected]>

* update sd readme

Signed-off-by: fayejf <[email protected]>

* add new mVAD model to doc

Signed-off-by: fayejf <[email protected]>

* style fix

Signed-off-by: fayejf <[email protected]>

* update sd tutorial with mVAD

Signed-off-by: fayejf <[email protected]>

* typo fix

Signed-off-by: fayejf <[email protected]>

Signed-off-by: fayejf <[email protected]>

* publish pretrained itn t5 model for English (#4748)

Signed-off-by: Alexandra Antonova <[email protected]>

Signed-off-by: Alexandra Antonova <[email protected]>
Co-authored-by: Alexandra Antonova <[email protected]>

* Updated docs and doc paths (#4754)

* Updated docs and doc paths

Signed-off-by: Virginia Adams <[email protected]>

* Update Multitask_Prompt_and_PTuning.ipynb

* Update README.rst

* Changed branch name to use single quotes

Signed-off-by: Virginia Adams <[email protected]>

Signed-off-by: Virginia Adams <[email protected]>

* fix bug relating to ddp strategy in joint intent slot classification tutorial (#4762)

* [TTS] updated config with a German IPA phoneme tokenizer (#4756)

* [TTS] added a German IPA phoneme tokenizer
* [TTS][ASR] enabled customized arguments for trimming the leading and trailing silence.
* [TTS] disabled spline interpolation for beta-binomial distribution. Let it generate the align prior and save it to disk. Use a new phoneme tokenizer.
* [TTS] use consistent spline interpolation with fastpitch checkpoint when generating mel-spectrograms for hifigan finetune.

Signed-off-by: Xuesong Yang <[email protected]>

* Update r1.11 to new heteronyms list (#4745)

* Update configs to new heteronyms list
* Remove old heteronyms list, add alt 'merchandise' pron to CMUdict
* Update remaining references to old heteronyms list

Signed-off-by: Jocelyn Huang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>

* [TTS] Add multi-speaker German FastPitch and HiFiGAN NGC checkpoints (#4763)

Signed-off-by: Xuesong Yang <[email protected]>

Signed-off-by: Xuesong Yang <[email protected]>

* [TTS] Add single male speaker German FastPitch and HiFiGAN NGC checkpoints (#4770)

Signed-off-by: Xuesong Yang <[email protected]>

* Update CMUdict with more recent 0.7b entries (#4768)

Signed-off-by: Jocelyn Huang <[email protected]>

Signed-off-by: Jocelyn Huang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>

* Install pynini in docker container (#4733)

Signed-off-by: Vladimir Bataev <[email protected]>

Signed-off-by: Vladimir Bataev <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Co-authored-by: Eric Harper <[email protected]>

* Fix tutorial formatting (#4778)

Signed-off-by: Jocelyn Huang <[email protected]>

* [TTS] deprecated old scripts for ljspeech. (#4780)

* deprecated old scripts for ljspeech.
* removed relevant function calls in TTS docs.

Signed-off-by: Xuesong Yang <[email protected]>

* update branch and typos (#4788)

Signed-off-by: ericharper <[email protected]>

Signed-off-by: ericharper <[email protected]>

* Adding support for models trained with full context for cache-aware streaming. (#4687)

* added support for models trained with full context.

Signed-off-by: Vahid <[email protected]>

* fixed style.

Signed-off-by: Vahid <[email protected]>

* dropped seq_range

Signed-off-by: Vahid <[email protected]>

* fixed indexing in caching methods.

Signed-off-by: Vahid <[email protected]>

* fixed code style.

Signed-off-by: Vahid <[email protected]>

* fixed code style.

Signed-off-by: Vahid <[email protected]>

* updated docs.

Signed-off-by: Vahid <[email protected]>

* addressed comments.

Signed-off-by: Vahid <[email protected]>

* fixed code style.

Signed-off-by: Vahid <[email protected]>

* fixed code style.

Signed-off-by: Vahid <[email protected]>

* fixed code style.

Signed-off-by: Vahid <[email protected]>

* change frame-wise to cache-aware.

Signed-off-by: Vahid <[email protected]>

* change frame-wise to cache-aware.

Signed-off-by: Vahid <[email protected]>

* change frame-wise to cache-aware.

Signed-off-by: Vahid <[email protected]>

* fixed code style.

Signed-off-by: Vahid <[email protected]>

Signed-off-by: Vahid <[email protected]>

* Update megatron encoder decoder model to support py37 for colab (#4791)

* [ASR] Add pretrained ASR models for Croatian (#4682)

* [ASR] Add pretrained ASR models for Croatian

Signed-off-by: Ante Jukić <[email protected]>

* Fix style for import

Signed-off-by: Ante Jukić <[email protected]>

Signed-off-by: Ante Jukić <[email protected]>
Co-authored-by: Ante Jukić <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>

* added/fixed export for Megatron models (#4712)

* added/fixed export for Megatron models

Signed-off-by: David Mosallanezhad <[email protected]>

* fixed style

Signed-off-by: David Mosallanezhad <[email protected]>

* fixed FusedScaleMaskSoftmax in BioMegatron

Signed-off-by: David Mosallanezhad <[email protected]>

* included comments

Signed-off-by: David Mosallanezhad <[email protected]>

Signed-off-by: David Mosallanezhad <[email protected]>
Co-authored-by: David Mosallanezhad <[email protected]>
Co-authored-by: Eric Harper <[email protected]>

* update branch for qa notebook

Signed-off-by: ericharper <[email protected]>

* Fix initializing weights from ptl ckpt with exclude (#4807)

Signed-off-by: sam1373 <[email protected]>

Signed-off-by: sam1373 <[email protected]>

* Fix index error from addition of voiced_mask and p_voiced (#4811)

Signed-off-by: Jocelyn Huang <[email protected]>

Signed-off-by: Jocelyn Huang <[email protected]>

* T5 prompt learning fixes (#4771)

* RPE, hidden size and config fixes

Signed-off-by: MaximumEntropy <[email protected]>

* Update to reflect new config names

Signed-off-by: MaximumEntropy <[email protected]>

* Sentencepiece fixes

Signed-off-by: MaximumEntropy <[email protected]>

* Style

Signed-off-by: MaximumEntropy <[email protected]>

* Fix finetuning

Signed-off-by: MaximumEntropy <[email protected]>

* Add encoder seq len to gpt

Signed-off-by: MaximumEntropy <[email protected]>

* Style

Signed-off-by: MaximumEntropy <[email protected]>

* Add finetune eval script

Signed-off-by: MaximumEntropy <[email protected]>

* Fix name

Signed-off-by: MaximumEntropy <[email protected]>

* Update Jenkinsfile

Signed-off-by: MaximumEntropy <[email protected]>

* Update config

Signed-off-by: MaximumEntropy <[email protected]>

* Fix CI test

Signed-off-by: MaximumEntropy <[email protected]>

* Update check

Signed-off-by: MaximumEntropy <[email protected]>

* Style

Signed-off-by: MaximumEntropy <[email protected]>

* Backward compat

Signed-off-by: MaximumEntropy <[email protected]>

* Update CI test

Signed-off-by: MaximumEntropy <[email protected]>

* Split rank for Enc-Dec models

Signed-off-by: MaximumEntropy <[email protected]>

* Address comments

Signed-off-by: MaximumEntropy <[email protected]>

* Style

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>
Co-authored-by: Virginia Adams <[email protected]>

* G2P docs (#4841)

* g2p docs added

Signed-off-by: ekmb <[email protected]>

* fix references

Signed-off-by: ekmb <[email protected]>

* address review feedback

Signed-off-by: ekmb <[email protected]>

Signed-off-by: ekmb <[email protected]>

* Fix providing glue in seq2seq eval (#4843)

* Fix providing glue in seq2seq eval

Signed-off-by: MaximumEntropy <[email protected]>

* Fix

Signed-off-by: MaximumEntropy <[email protected]>

* Style

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>

* Updated inference code and squad scripts (#4835)

* Updated inference code and squad scripts

Signed-off-by: Virginia Adams <[email protected]>

* Reverted GPT & T5 inference files back to use NLPDDPlugin

Signed-off-by: Virginia Adams <[email protected]>

* Overwrite frozen LM to use fused adam

Signed-off-by: Virginia Adams <[email protected]>

* Added padded vocab size

Signed-off-by: Virginia Adams <[email protected]>

* Fixed val check interval value

Signed-off-by: Virginia Adams <[email protected]>

* Python format fix

Signed-off-by: Virginia Adams <[email protected]>

* Make t5 prompt learning preds write to file

Signed-off-by: Virginia Adams <[email protected]>

* Added back dp=1 check

Signed-off-by: Virginia Adams <[email protected]>

Signed-off-by: Virginia Adams <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>

* Update README.rst

* Fix uppercasing mismatch for IPA heteronyms (#4860)

Signed-off-by: Jocelyn Huang <[email protected]>

Signed-off-by: Jocelyn Huang <[email protected]>

* Set the number of workers to 0 for validation and test sets in all enc-dec models (#4790)

* Set workers to 0 for validation and test

Signed-off-by: MaximumEntropy <[email protected]>

* Revert pin memory

Signed-off-by: MaximumEntropy <[email protected]>

* Style

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>
Co-authored-by: Sean Naren <[email protected]>

* Fix mha (#4866)

* fix bug in mha forward function related to cache update return type

Signed-off-by: Yang Zhang <[email protected]>

* fix lgtm

Signed-off-by: Yang Zhang <[email protected]>

Signed-off-by: Yang Zhang <[email protected]>
Co-authored-by: Sean Naren <[email protected]>

* ipa bug fix (#4871)

Signed-off-by: ekmb <[email protected]>

Signed-off-by: ekmb <[email protected]>

* Fix Megatron NMT consumed samples and ckpt_to_nemo split rank (#4884)

* Fix nmt and ckpt_to_nemo

Signed-off-by: MaximumEntropy <[email protected]>

* Style

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>

* added utf8 encoding (#4892)

Signed-off-by: Virginia Adams <[email protected]>

Signed-off-by: Virginia Adams <[email protected]>

* 1. Applying the same patch to r1.11.0 (#4894)

Signed-off-by: Micha Livne <[email protected]>

Signed-off-by: Micha Livne <[email protected]>

* Update tutorials.rst (#4897)

* update readme with apex commit

Signed-off-by: ericharper <[email protected]>

* Add support for Apex distributed Adam optimizer with GPT-3 (#4487)

* Add support for Apex distributed Adam optimizer with GPT-3

Signed-off-by: Tim Moon <[email protected]>

* Fix bug in grad clipping with dist Adam

Grad norm was computed over all params, not respecting model parallelism.

Signed-off-by: Tim Moon <[email protected]>
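For context, a minimal sketch of model-parallel-aware clipping (illustrative names, not NeMo's or Apex's actual implementation): the squared gradient norm has to be summed across the model-parallel group so each parameter shard is counted exactly once, rather than computed over all params as if they lived on one rank.

```python
import torch
import torch.distributed as dist

def clip_grad_norm_model_parallel(parameters, max_norm, mp_group):
    # Assumes each parameter shard lives on exactly one rank of `mp_group`,
    # so summing squared norms over the group counts every element once.
    grads = [p.grad for p in parameters if p.grad is not None]
    sq_norm = torch.zeros(1, device=grads[0].device)
    for g in grads:
        sq_norm += g.detach().float().pow(2).sum()
    # Reduce across the model-parallel group (not over all ranks).
    dist.all_reduce(sq_norm, op=dist.ReduceOp.SUM, group=mp_group)
    total_norm = sq_norm.sqrt().item()
    clip_coef = max_norm / (total_norm + 1.0e-6)
    if clip_coef < 1.0:
        for g in grads:
            g.detach().mul_(clip_coef)
    return total_norm
```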

* Fix bug with DDP initialization

Signed-off-by: Tim Moon <[email protected]>

* Make distopt dependent on megatron_amp_o2

Signed-off-by: Tim Moon <[email protected]>

* Fix code formatting

Signed-off-by: Tim Moon <[email protected]>

* Handle dist Adam in optimizer unit tests

Signed-off-by: Tim Moon <[email protected]>

Signed-off-by: Tim Moon <[email protected]>
Co-authored-by: Eric Harper <[email protected]>

* update readme

Signed-off-by: ericharper <[email protected]>

* update readme

Signed-off-by: ericharper <[email protected]>

* latent model support

* 1. Debugging.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging.

* update branch

Signed-off-by: ericharper <[email protected]>

* fix replace_bos_with_pad not found (#6443)

Signed-off-by: Abhinav Khattar <[email protected]>

* Support Swiglu in TP PP Conversion (#6437)

* Support Swiglu in TP PP Conversion

Signed-off-by: smajumdar <[email protected]>

* Guard activation

Signed-off-by: smajumdar <[email protected]>

* Guard activation

Signed-off-by: smajumdar <[email protected]>

---------

Signed-off-by: smajumdar <[email protected]>

* BERT pre-training mp fork to spawn (#6442)

* change bert fork to spawn

Signed-off-by: Abhinav Khattar <[email protected]>

* num_workers=0 fix

Signed-off-by: Abhinav Khattar <[email protected]>

---------

Signed-off-by: Abhinav Khattar <[email protected]>

* Megatron encoder decoder fix for empty validation outputs (#6459)

* 1. Megatron encoder decoder fix for empty validation outputs.

Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging.

---------

Signed-off-by: Micha Livne <[email protected]>
Co-authored-by: Micha Livne <[email protected]>

* Added/updated new Conformer configs (#6426)

* updated conf files.

Signed-off-by: Vahid <[email protected]>

* added confs.

Signed-off-by: Vahid <[email protected]>

* moved longconformer confs.

Signed-off-by: Vahid <[email protected]>

* updated readme.

Signed-off-by: Vahid <[email protected]>

* updated readme.

Signed-off-by: Vahid <[email protected]>

* updated batch sizes and added fastconformer ctc streaming configs.

Signed-off-by: Vahid <[email protected]>

* updated batch sizes.

Signed-off-by: Vahid <[email protected]>

* added hybrid support.

Signed-off-by: Vahid <[email protected]>

* added hybrid support.

Signed-off-by: Vahid <[email protected]>

---------

Signed-off-by: Vahid <[email protected]>

* reduce workers on NMT CI (#6472)

Signed-off-by: Abhinav Khattar <[email protected]>

* move to nvidia megatron repo (#6465)

Signed-off-by: Abhinav Khattar <[email protected]>

* Megatron KERPLE positional embeddings (#6478)

* [TTS] FastPitch adapter fine-tune and conditional layer normalization (#6416)

[TTS] FastPitch adapter fine-tune and conditional layer normalization (#6416)

---------

Signed-off-by: hsiehjackson <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* [TTS] whitelist broken path fix. (#6412)

* [TTS] whitelist broken path fix.

Signed-off-by: Xuesong Yang <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* [TTS] FastPitch speaker encoder (#6417)

* Add initial codes

Signed-off-by: hsiehjackson <[email protected]>

* Remove wemb

Signed-off-by: hsiehjackson <[email protected]>

* Fix import

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Restore aligner loss

Signed-off-by: hsiehjackson <[email protected]>

* Add ConditionalInput

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix error and support pre-trained config

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Follow comments

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Rename config

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Change copyright and random weight test

Signed-off-by: hsiehjackson <[email protected]>

* Add initial codes

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: hsiehjackson <[email protected]>

* Fix import error

Signed-off-by: hsiehjackson <[email protected]>

* Add initial codes

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: hsiehjackson <[email protected]>

* Fix dataset error

Signed-off-by: hsiehjackson <[email protected]>

* Remove reference speaker embedding

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: hsiehjackson <[email protected]>

* Remove SV encoder

Signed-off-by: hsiehjackson <[email protected]>

* Follow comments

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: hsiehjackson <[email protected]>

* Fix length type

Signed-off-by: hsiehjackson <[email protected]>

* Fix append

Signed-off-by: hsiehjackson <[email protected]>

* Move error msg

Signed-off-by: hsiehjackson <[email protected]>

* Add look-up into speaker encoder

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: hsiehjackson <[email protected]>

* Add valueerror msg

Signed-off-by: hsiehjackson <[email protected]>

* Move lookup

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: hsiehjackson <[email protected]>

* Remove unused

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: hsiehjackson <[email protected]>

* Fix error

Signed-off-by: hsiehjackson <[email protected]>

* Rebase and Fix error

Signed-off-by: hsiehjackson <[email protected]>

* Fix spk encoder

Signed-off-by: hsiehjackson <[email protected]>

* Rename n_speakers

Signed-off-by: hsiehjackson <[email protected]>

* Follow comments

Signed-off-by: hsiehjackson <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix n_speakers None error

Signed-off-by: hsiehjackson <[email protected]>

---------

Signed-off-by: hsiehjackson <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Sharded manifests for tarred datasets (#6395)

* testing sharded manifests

Signed-off-by: Dima Rekesh <[email protected]>

* compatibility

Signed-off-by: Dima Rekesh <[email protected]>

* proper fixes

Signed-off-by: Dima Rekesh <[email protected]>

* adding flag to convert_to_tarred_audio_dataset

Signed-off-by: Dima Rekesh <[email protected]>

* shard_manifests conf param

Signed-off-by: Dima Rekesh <[email protected]>

* propagating the shard_manifests param

Signed-off-by: Dima Rekesh <[email protected]>

* propagating the shard_manifests param

Signed-off-by: Dima Rekesh <[email protected]>

* distributed checks

Signed-off-by: Dima Rekesh <[email protected]>

* typo

Signed-off-by: Dima Rekesh <[email protected]>

* typo

Signed-off-by: Dima Rekesh <[email protected]>

* fixes

Signed-off-by: Dima Rekesh <[email protected]>

* fixes

Signed-off-by: Dima Rekesh <[email protected]>

* fixes

Signed-off-by: Dima Rekesh <[email protected]>

* fixes

Signed-off-by: Dima Rekesh <[email protected]>

* fixes

Signed-off-by: Dima Rekesh <[email protected]>

* fixes

Signed-off-by: Dima Rekesh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixes based on PR comments and tests

Signed-off-by: Dima Rekesh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixes to convert_to_tarred_audio_dataset.py

Signed-off-by: Dima Rekesh <[email protected]>

* reversing manifest shards flag

Signed-off-by: Dima Rekesh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* tests

Signed-off-by: Dima Rekesh <[email protected]>

* excluding manifests from webdataset url expansion

Signed-off-by: Dima Rekesh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* expand manifest paths before attempting to cache from datastore

Signed-off-by: Dima Rekesh <[email protected]>

* explicit use of UTF-8 for manifest i/o

Signed-off-by: Dima Rekesh <[email protected]>

---------

Signed-off-by: Dima Rekesh <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Update wfst_text_normalization.rst (#6374)

Add Hungarian (incoming in NeMo-text-processing)

Signed-off-by: Jim O’Regan <[email protected]>

* Support Swiglu in TP PP Conversion (#6437) (#6451)

* Support Swiglu in TP PP Conversion



* Guard activation



* Guard activation



---------

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>

* Update NeMo_TTS_Primer.ipynb (#6436)

* Update NeMo_TTS_Primer.ipynb

Fixed a mistake in line 782: instead of frequency band (i.e., pitch) we should write frequency bin. Note that frequency bins in FFT are not related to pitch.

Signed-off-by: Mostafa Ghorbandoost <[email protected]>
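For reference, the center frequency of the k-th FFT bin is fixed by the sampling rate and FFT size and is unrelated to the pitch of the signal:

```latex
f_k = \frac{k \, f_s}{N_{\text{FFT}}}, \qquad k = 0, 1, \dots, \left\lfloor N_{\text{FFT}} / 2 \right\rfloor
```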

* Update NeMo_TTS_Primer.ipynb

Corrected the description of the spectrogram and mel spectrogram calculations in lines 782 & 783, added a fourth point to the description, and added a reference for more mathematical details at the end of that point.

Signed-off-by: Mostafa Ghorbandoost <[email protected]>

---------

Signed-off-by: Mostafa Ghorbandoost <[email protected]>

* add rampup batch size support for Megatron GPT (#6424)

* added rampup batch size support

Signed-off-by: Dmytro Pykhtar <[email protected]>
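As a rough illustration of the feature (a linear schedule is assumed here; the exact NeMo logic may differ), batch-size rampup grows the global batch from a small starting value to the target as training consumes samples:

```python
def rampup_global_batch_size(consumed_samples, start_size, target_size,
                             rampup_samples, increment):
    # Hypothetical linear schedule: grow from start_size to target_size in
    # fixed increments over the first `rampup_samples` training samples.
    if consumed_samples >= rampup_samples or target_size <= start_size:
        return target_size
    num_increments = (target_size - start_size) // increment
    samples_per_increment = rampup_samples / num_increments
    increments_done = int(consumed_samples / samples_per_increment)
    return min(start_size + increments_done * increment, target_size)
```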

* added tests for rampup batch size

Signed-off-by: Dmytro Pykhtar <[email protected]>

* fixed the typos

Signed-off-by: Dmytro Pykhtar <[email protected]>

* added assertions

Signed-off-by: Dmytro Pykhtar <[email protected]>

* changed assertion rules

Signed-off-by: Dmytro Pykhtar <[email protected]>

* deleted unused imports

Signed-off-by: Dmytro Pykhtar <[email protected]>

* changed tests for rampup batch size

Signed-off-by: Dmytro Pykhtar <[email protected]>

* updated rampup batch size tests

Signed-off-by: Dmytro Pykhtar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixed styling

Signed-off-by: Dmytro Pykhtar <[email protected]>

* rampup batch size tests changes

Signed-off-by: Dmytro Pykhtar <[email protected]>

---------

Signed-off-by: Dmytro Pykhtar <[email protected]>
Signed-off-by: Dmytro Pykhtar <[email protected]>
Co-authored-by: Dmytro Pykhtar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <[email protected]>

* Megatron encoder decoder fix for empty validation outputs (#6459) (#6461)

* 1. Megatron encoder decoder fix for empty validation outputs.



* 1. Debugging.

---------

Signed-off-by: Micha Livne <[email protected]>
Co-authored-by: Micha Livne <[email protected]>
Co-authored-by: Micha Livne <[email protected]>

* Code-Switching dataset creation - upgrading to aggregate tokenizer manifest format (#6448)

* added functionality to create an aggregate-tokenizer-compatible manifest for code-switching, plus a flag to use this mode by default

Signed-off-by: Kunal Dhawan <[email protected]>

* updated README with the new agg_tokenizer_manifest flag

Signed-off-by: Kunal Dhawan <[email protected]>

* fixed typo in scripts/speech_recognition/code_switching/README.md

Signed-off-by: Kunal Dhawan <[email protected]>

* changed agg_tokenizer_manifest to is_lid_manifest

Signed-off-by: Kunal Dhawan <[email protected]>

---------

Signed-off-by: Kunal Dhawan <[email protected]>
Co-authored-by: Dima Rekesh <[email protected]>

* Added/updated new Conformer configs (#6426) (#6467)

* Update script for ngram rnnt and hat beam search decoding (#6370)

* add rnnt ngram beamsearch script

Signed-off-by: andrusenkoau <[email protected]>

* add return encoding embedding option

Signed-off-by: andrusenkoau <[email protected]>

* update script

Signed-off-by: andrusenkoau <[email protected]>

* add rnnt and hat ngram decoding script

Signed-off-by: andrusenkoau <[email protected]>

* add some parameters

Signed-off-by: andrusenkoau <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add return_encoder_embeddings parameter to RNNTDecodingConfig

Signed-off-by: andrusenkoau <[email protected]>

* replace return_encoder_embeddings parameter

Signed-off-by: andrusenkoau <[email protected]>

* generalization of script behavior

Signed-off-by: andrusenkoau <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* remove return_encoder_embeddings parameter

Signed-off-by: andrusenkoau <[email protected]>

* remove return_encoder_embeddings parameter

Signed-off-by: andrusenkoau <[email protected]>

* add manual encoder_embeddings calculation

Signed-off-by: andrusenkoau <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix beam_width value to 8

Signed-off-by: Andrei Andrusenko <[email protected]>

* fix rescoring description

Signed-off-by: Andrei Andrusenko <[email protected]>

---------

Signed-off-by: andrusenkoau <[email protected]>
Signed-off-by: Andrei Andrusenko <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <[email protected]>

* BERT pre-training mp fork to spawn (#6442) (#6454)

* change bert fork to spawn



* num_workers=0 fix



---------

Signed-off-by: Abhinav Khattar <[email protected]>
Co-authored-by: Abhinav Khattar <[email protected]>

* fix replace_bos_with_pad not found (#6443) (#6450)

Signed-off-by: Abhinav Khattar <[email protected]>
Co-authored-by: Abhinav Khattar <[email protected]>

* reduce workers on NMT CI (#6472) (#6474)

Signed-off-by: Abhinav Khattar <[email protected]>
Co-authored-by: Abhinav Khattar <[email protected]>

* 1. Added KERPLE positional embeddings to encoder-decoder.

Signed-off-by: Micha Livne <[email protected]>
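For reference, the commonly cited logarithmic variant of KERPLE (Chi et al., 2022) biases the attention logits with a learned kernel over relative distance; the exact NeMo formulation may differ:

```latex
\mathrm{bias}_{ij} = -\,r_1 \log\!\left(1 + r_2\,\lvert i - j \rvert\right),
\qquad r_1, r_2 > 0 \ \text{(learned per attention head)}
```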

* 1. Added a missing file.

Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Fixing commits.

Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging.

* 1. Debugging.

* 1. Debugging.

* 1. Debugging.

---------

Signed-off-by: hsiehjackson <[email protected]>
Signed-off-by: Xuesong Yang <[email protected]>
Signed-off-by: Dima Rekesh <[email protected]>
Signed-off-by: Jim O’Regan <[email protected]>
Signed-off-by: smajumdar <[email protected]>
Signed-off-by: Mostafa Ghorbandoost <[email protected]>
Signed-off-by: Dmytro Pykhtar <[email protected]>
Signed-off-by: Dmytro Pykhtar <[email protected]>
Signed-off-by: Micha Livne <[email protected]>
Signed-off-by: Kunal Dhawan <[email protected]>
Signed-off-by: andrusenkoau <[email protected]>
Signed-off-by: Andrei Andrusenko <[email protected]>
Signed-off-by: Abhinav Khattar <[email protected]>
Co-authored-by: Cheng-Ping Hsieh <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Xuesong Yang <[email protected]>
Co-authored-by: Dima Rekesh <[email protected]>
Co-authored-by: Jim O’Regan <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <[email protected]>
Co-authored-by: Mostafa Ghorbandoost <[email protected]>
Co-authored-by: Dmytro Pykhtar <[email protected]>
Co-authored-by: Dmytro Pykhtar <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Co-authored-by: Micha Livne <[email protected]>
Co-authored-by: Kunal Dhawan <[email protected]>
Co-authored-by: Andrei Andrusenko <[email protected]>
Co-authored-by: Abhinav Khattar <[email protected]>

* 1. Added external index sample. (#6462)

Signed-off-by: Micha Livne <[email protected]>

* Fix cache aware hybrid bugs (#6466)

* Update README to add core installation (#6488)

* update README for megatron-core

Signed-off-by: Abhinav Khattar <[email protected]>

* fix

Signed-off-by: Abhinav Khattar <[email protected]>

---------

Signed-off-by: Abhinav Khattar <[email protected]>

* Fix typos (#6494)

Signed-off-by: smajumdar <[email protected]>

* fix broken links r1.18.0 (#6501)

* fix broken links

Signed-off-by: Evelina <[email protected]>

* fix broken links

Signed-off-by: Evelina <[email protected]>

---------

Signed-off-by: Evelina <[email protected]>

* 1. Fixed gaussian hidden transform.

Signed-off-by: Micha Livne <[email protected]>

* 1. Finished updating hidden loss for MIM.

Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix custom forward_torch_softmax (#6512)

Signed-off-by: Abhinav Khattar <[email protected]>

* [BugFix] Force _get_batch_preds() to keep logits in decoder timestamp… (#6500)

* [BugFix] Force _get_batch_preds() to keep logits in decoder timestamps generator r1.18.0

Signed-off-by: Taejin Park <[email protected]>

* ignore keep_logits in FrameBatchASRLogits

Signed-off-by: Taejin Park <[email protected]>

---------

Signed-off-by: Taejin Park <[email protected]>

* [TTS] fixed broken path. (#6514)

Signed-off-by: Xuesong Yang <[email protected]>

* 1. Added a hiddens module.

Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix typos (#6523) (#6539)

* Fix typos

Signed-off-by: smajumdar <[email protected]>

* Fix typos

Signed-off-by: smajumdar <[email protected]>

---------

Signed-off-by: smajumdar <[email protected]>
(cherry picked from commit 5468077f5127be1a4c88065de2544f4268b9a6e4)

* added back the fast emit section to the configs. (#6540)

* added back the fast emit section to the configs.

Signed-off-by: Vahid <[email protected]>

* added back the fast emit section to the configs.

Signed-off-by: Vahid <[email protected]>

---------

Signed-off-by: Vahid <[email protected]>

* Fix fp16 (#6543)

Signed-off-by: MaximumEntropy <[email protected]>

* fix (#6529)

Signed-off-by: Abhinav Khattar <[email protected]>

* pass .scale instead of scaler object to core (#6545)

Signed-off-by: Abhinav Khattar <[email protected]>
Co-authored-by: Eric Harper <[email protected]>

* Change Megatron Enc Dec model to use persistent_workers (#6548)

* persistent workers

Signed-off-by: Abhinav Khattar <[email protected]>

* fix

Signed-off-by: Abhinav Khattar <[email protected]>

---------

Signed-off-by: Abhinav Khattar <[email protected]>
Co-authored-by: Eric Harper <[email protected]>

* Add FastConformer Hybrid ASR models for EN, ES, IT, DE, PL, HR, UA, BY (#6549)

* Added fastconformer hybrid asr models for en, es, it, de, pl, hr, ua, by

Signed-off-by: KunalDhawan <[email protected]>

* updated ASR docs with the fastconformer hybrid checkpoints

Signed-off-by: KunalDhawan <[email protected]>

* added the fastconformer RNNT and CTC models

Signed-off-by: KunalDhawan <[email protected]>

---------

Signed-off-by: KunalDhawan <[email protected]>

* Add scores for FastConformer models (#6557)

Signed-off-by: smajumdar <[email protected]>

* Patch transcribe and support offline transcribe for hybrid model (#6550)

Signed-off-by: fayejf <[email protected]>

* Not doing CastToFloat by default (#6524)

* Not doing CastToFloat by default

Signed-off-by: Boris Fomitchev <[email protected]>

* Added docstring

Signed-off-by: Boris Fomitchev <[email protected]>

* Dummy commit

Signed-off-by: Boris Fomitchev <[email protected]>

---------

Signed-off-by: Boris Fomitchev <[email protected]>

* temp rtd fix (#6568)

Signed-off-by: Abhinav Khattar <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update manifest.py for speedup (#6565)

* Update manifest.py

Re-order the checks for faster processing of audio filepaths that are already absolute paths

Signed-off-by: He Huang (Steve) <[email protected]>
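A minimal sketch of the reordering described above (hypothetical helper, not the actual manifest.py code): test the cheap already-absolute case before resolving paths relative to the manifest.

```python
import os

def resolve_audio_filepath(audio_file: str, manifest_dir: str) -> str:
    # Fast path first: an absolute path that exists needs no further work.
    if os.path.isabs(audio_file) and os.path.isfile(audio_file):
        return audio_file
    # Slow path: interpret the path relative to the manifest's directory.
    candidate = os.path.join(manifest_dir, audio_file)
    if os.path.isfile(candidate):
        return candidate
    return audio_file  # leave unresolved; the caller decides how to handle it
```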

* Update manifest.py

Signed-off-by: He Huang (Steve) <[email protected]>

---------

Signed-off-by: He Huang (Steve) <[email protected]>
Co-authored-by: Vahid Noroozi <[email protected]>

* Turn autocast off when precision is fp32 (#6554)

* Turn autocast off when precision is fp32

Signed-off-by: Abhinav Khattar <[email protected]>

* address review

Signed-off-by: Abhinav Khattar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixes

Signed-off-by: Abhinav Khattar <[email protected]>

* merge

Signed-off-by: Abhinav Khattar <[email protected]>

---------

Signed-off-by: Abhinav Khattar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <[email protected]>

* More streaming conformer export fixes (#6567)

Signed-off-by: Greg Clark <[email protected]>
Co-authored-by: Vahid Noroozi <[email protected]>

* Fix batch size reconf for T5 FT for multi-validation (#6582)

Signed-off-by: Abhinav Khattar <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Updated Megatron LM encoder/decoder to use cfg for hiddens.

Signed-off-by: Micha Livne <[email protected]>

* 1. Added support to register external hidden loss / transforms.

Signed-off-by: Micha Livne <[email protected]>
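A minimal sketch of what such a registration hook could look like (illustrative names, not NeMo's actual API): external code registers a class under a name, and the config's `cls_name` entries resolve against the registry.

```python
_HIDDEN_TRANSFORMS = {}

def register_hidden_transform(cls_name, cls):
    # External packages call this to make a transform available by name.
    if cls_name in _HIDDEN_TRANSFORMS:
        raise ValueError(f"hidden transform '{cls_name}' is already registered")
    _HIDDEN_TRANSFORMS[cls_name] = cls

def build_hidden_transform(cls_name, **kwargs):
    # Resolve a config `cls_name` to a registered class and instantiate it.
    try:
        return _HIDDEN_TRANSFORMS[cls_name](**kwargs)
    except KeyError:
        raise ValueError(
            f"unknown hidden transform '{cls_name}'; "
            f"registered: {sorted(_HIDDEN_TRANSFORMS)}"
        )
```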

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Make tensor split contiguous (#6580)

Signed-off-by: Abhinav Khattar <[email protected]>

* Patches from main to r1.18.0 for Virtual Parallel (#6592)

* Add interleaved pp support (#6498)

* Add support for Virtual Pipeline Parallel conversion

Signed-off-by: smajumdar <[email protected]>

* Add support for Virtual Pipeline Parallel conversion

Signed-off-by: smajumdar <[email protected]>

* Switch to megatron core

Signed-off-by: smajumdar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 892987169ef277f328e15b71a5a0c9bd961c8ee7)

* Add patches for Virtual Parallel conversion (#6589)

* Add patches for Virtual Parallel conversion

Signed-off-by: smajumdar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 1d813a372ab51688e3af6395d905a4c0366ffd23)

* Documentation for ASR-TTS models (#6594)

* Add docs about hybrid ASR-TTS models

Signed-off-by: Vladimir Bataev <[email protected]>

* Add docs about text-only datasets

Signed-off-by: Vladimir Bataev <[email protected]>

* Add docs about ASR-TTS checkpoints

Signed-off-by: Vladimir Bataev <[email protected]>

* Add docs about ASR-TTS configs and training

Signed-off-by: Vladimir Bataev <[email protected]>

* Clean up

Signed-off-by: Vladimir Bataev <[email protected]>

* ASR-TTS docs: add to api, fix imports

Signed-off-by: Vladimir Bataev <[email protected]>

* Clean up

Signed-off-by: Vladimir Bataev <[email protected]>

* Wrap optional import

Signed-off-by: Vladimir Bataev <[email protected]>

* Revert general ASR import

Signed-off-by: Vladimir Bataev <[email protected]>

---------

Signed-off-by: Vladimir Bataev <[email protected]>

* Update SDP docs (#6485)

* add info about SDP e.g. processor classes in docs

Signed-off-by: Elena Rastorgueva <[email protected]>

* add link to SDP docs in README

Signed-off-by: Elena Rastorgueva <[email protected]>

* address code review comments and add SDP overview diagram

Signed-off-by: Elena Rastorgueva <[email protected]>

* Fix spelling typo

Signed-off-by: Elena Rastorgueva <[email protected]>

---------

Signed-off-by: Elena Rastorgueva <[email protected]>

* Create dummy iters to satisfy len checks (#6600)

Signed-off-by: Abhinav Khattar <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* Restore GPT support for interleaved pipeline parallelism (#6528)

* Restore logic for data-parallel communication with pipeline parallelism in GPT

Signed-off-by: Tim Moon <[email protected]>

* Support dynamic attention masks in GPT

Signed-off-by: Tim Moon <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Debug typos

Signed-off-by: Tim Moon <[email protected]>

* Debug data iterator caching with interleaved pipeline parallelism

Each model chunk accesses the data iterator multiple times, so we need to cache multiple samples.

Signed-off-by: Tim Moon <[email protected]>
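A minimal sketch of such caching (hypothetical; the real scheduler hands each model chunk its own view of the iterator): fetch each microbatch once and replay it `n_chunks` times.

```python
class CachingIterator:
    """Replay every sample of `iterator` to each of `n_chunks` consumers."""

    def __init__(self, iterator, n_chunks):
        self.iterator = iterator
        self.n_chunks = n_chunks
        self.cache = None
        self.reads_left = 0  # replays remaining for the cached sample

    def __iter__(self):
        return self

    def __next__(self):
        if self.reads_left == 0:
            self.cache = next(self.iterator)  # fetch once per microbatch
            self.reads_left = self.n_chunks
        self.reads_left -= 1
        return self.cache
```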

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update Megatron-LM commit

Signed-off-by: Tim Moon <[email protected]>

* Distinguish between list of data iterators and data iterator that is a list

Signed-off-by: Tim Moon <[email protected]>

* Create dummy iters to satisfy len checks

Signed-off-by: Abhinav Khattar <[email protected]>

* Kludge while waiting for Megatron-LM update

Signed-off-by: Tim Moon <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* set transformers offline to avoid rate limiting

Signed-off-by: ericharper <[email protected]>

---------

Signed-off-by: Tim Moon <[email protected]>
Signed-off-by: Eric Harper <[email protected]>
Signed-off-by: Abhinav Khattar <[email protected]>
Signed-off-by: ericharper <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <[email protected]>
Co-authored-by: Abhinav Khattar <[email protected]>

* Patch transcribe_util for streaming mode and add wer calculation back to inference scripts (#6601)

* fix write

Signed-off-by: fayejf <[email protected]>

* decoding ctc

Signed-off-by: fayejf <[email protected]>

* temp set rnnt decoding return_best_hypothesis to true

Signed-off-by: fayejf <[email protected]>

* add wer cal back to transcribe_speech as requested

Signed-off-by: fayejf <[email protected]>

* add wer cal back to speech_to_text_buffered_infer_rnnt  as requested

Signed-off-by: fayejf <[email protected]>

* add wer cal back to speech_to_text_buffered_infer_ctc as requested

Signed-off-by: fayejf <[email protected]>

* style fix

Signed-off-by: fayejf <[email protected]>

* reflect change in asr_evaluator

Signed-off-by: fayejf <[email protected]>

* reflect Som and Vahid's comments

Signed-off-by: fayejf <[email protected]>

* remove return_best_hy=true in transcribe_speech

Signed-off-by: fayejf <[email protected]>

* no text skip

Signed-off-by: fayejf <[email protected]>

---------

Signed-off-by: fayejf <[email protected]>

* 1. Added example conf YAML.

Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Added support in tensor_parallel.

Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add hat image to docs (#6619)

Signed-off-by: andrusenkoau <[email protected]>

* update core commit hash in readme (#6622)

Signed-off-by: Abhinav Khattar <[email protected]>

* Patch decoding for PC models (#6630)

* Patch decoding logic for PC models

Signed-off-by: smajumdar <[email protected]>

* Patch decoding logic for PC models

Signed-off-by: smajumdar <[email protected]>

---------

Signed-off-by: smajumdar <[email protected]>

* Fix wer.py where 'errors' variable was not set (#6633)

Fix wer.py where 'errors' variable was not set when both reference and hypothesis are empty strings

Signed-off-by: He Huang (Steve) <[email protected]>
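A hedged sketch of the guard (illustrative, not the actual wer.py code): the degenerate both-empty case is an exact match and must set the error count explicitly.

```python
def word_errors(reference: str, hypothesis: str) -> int:
    ref, hyp = reference.split(), hypothesis.split()
    if not ref and not hyp:
        return 0  # both empty: exact match; previously `errors` was left unset
    if not ref or not hyp:
        return max(len(ref), len(hyp))  # pure insertions or pure deletions
    # Standard Levenshtein distance over words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (r != h))
        prev = cur
    return prev[-1]
```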

* fix att_context_size bug for older models. (#6635)

Signed-off-by: Vahid <[email protected]>

* Add megatron_core to requirements (#6639)

* add megatron_core to requirements

Signed-off-by: ericharper <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: ericharper <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Remove from jenkins (#6641)

* add megatron_core to requirements

Signed-off-by: ericharper <[email protected]>

* remove from jenkins

Signed-off-by: ericharper <[email protected]>

---------

Signed-off-by: ericharper <[email protected]>

* remove dup (#6643)

Signed-off-by: ericharper <[email protected]>

* 1. Fixed config to use names, and added better error messages.

Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Added support to pass extra data to hiddens for loss computation.

Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Working on passing extra data to hiddens.

Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Fixed support for loading .nemo without the hiddens module.

Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Improved and fixed logging of validation and testing.

Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Fixed training logging.

Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Fixed logging of hidden loss.

Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Fixed logging names.
2. Added logging to hiddens and tokens loss.

Signed-off-by: Micha Livne <[email protected]>

* 1. Fixed conflicts.

Signed-off-by: Micha Livne <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

* 1. Debugging. Signed-off-by: Micha Livne <[email protected]>

---------

Signed-off-by: Xuesong Yang <[email protected]>
Signed-off-by: Oleksii Kuchaiev <[email protected]>
Signed-off-by: Jocelyn Huang <[email protected]>
Signed-off-by: smajumdar <[email protected]>
Signed-off-by: fayejf <[email protected]>
Signed-off-by: Alexandra Antonova <[email protected]>
Signed-off-by: Virginia Adams <[email protected]>
Signed-off-by: Vladimir Bataev <[email protected]>
Signed-off-by: ericharper <[email protected]>
Signed-off-by: Vahid <[email protected]>
Signed-off-by: Ante Jukić <[email protected]>
Signed-off-by: David Mosallanezhad <[email protected]>
Signed-off-by: sam1373 <[email protected]>
Signed-off-by: MaximumEntropy <[email protected]>
Signed-off-by: ekmb <[email protected]>
Signed-off-by: Yang Zhang <[email protected]>
Signed-off-by: Micha Livne <[email protected]>
Signed-off-by: Tim Moon <[email protected]>
Signed-off-by: Abhinav Khattar <[email protected]>
Signed-off-by: smajumdar <[email protected]>
Signed-off-by: Micha Livne <[email protected]>
Signed-off-by: hsiehjackson <[email protected]>
Signed-off-by: Dima Rekesh <[email protected]>
Signed-off-by: Jim O’Regan <[email protected]>
Signed-off-by: Mostafa Ghorbandoost <[email protected]>
Signed-off-by: Dmytro Pykhtar <[email protected]>
Signed-off-by: Dmytro Pykhtar <[email protected]>
Signed-off-by: Kunal Dhawan <[email protected]>
Signed-off-by: andrusenkoau <[email protected]>
Signed-off-by: Andrei Andrusenko <[email protected]>
Signed-off-by: Evelina <[email protected]>
Signed-off-by: Taejin Park <[email protected]>
Signed-off-by: KunalDhawan <[email protected]>
Signed-off-by: Boris Fomitchev <[email protected]>
Signed-off-by: He Huang (Steve) <[email protected]>
Signed-off-by: Greg Clark <[email protected]>
Signed-off-by: Elena Rastorgueva <[email protected]>
Signed-off-by: Eric Harper <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Co-authored-by: Oleksii Kuchaiev <[email protected]>
Co-authored-by: Jocelyn <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Co-authored-by: fayejf <[email protected]>
Co-authored-by: bene-ges <[email protected]>
Co-authored-by: Alexandra Antonova <[email protected]>
Co-authored-by: Virginia Adams <[email protected]>
Co-authored-by: Zhilin Wang <[email protected]>
Co-authored-by: Vladimir Bataev <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Co-authored-by: Vahid Noroozi <[email protected]>
Co-authored-by: anteju <[email protected]>
Co-authored-by: Ante Jukić <[email protected]>
Co-authored-by: David <[email protected]>
Co-authored-by: David Mosallanezhad <[email protected]>
Co-authored-by: Samuel Kriman <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>
Co-authored-by: Evelina <[email protected]>
Co-authored-by: Sean Naren <[email protected]>
Co-authored-by: Yang Zhang <[email protected]>
Co-authored-by: Sean Naren <[email protected]>
Co-authored-by: Tim Moon <[email protected]>
Co-authored-by: Neha Tadimeti <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abhinav Khattar <[email protected]>
Co-authored-by: Cheng-Ping Hsieh <[email protected]>
Co-authored-by: Dima Rekesh <[email protected]>
Co-authored-by: Jim O’Regan <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Mostafa Ghorbandoost <[email protected]>
Co-authored-by: Dmytro Pykhtar <[email protected]>
Co-authored-by: Dmytro Pykhtar <[email protected]>
Co-authored-by: Kunal Dhawan <[email protected]>
Co-authored-by: Andrei Andrusenko <[email protected]>
Co-authored-by: Taejin Park <[email protected]>
Co-authored-by: Boris Fomitchev <[email protected]>
Co-authored-by: He Huang (Steve) <[email protected]>
Co-authored-by: Greg Clark <[email protected]>
Co-authored-by: Elena Rastorgueva <[email protected]>
1 parent 8a35454 commit eca9dc3
Showing 13 changed files with 993 additions and 146 deletions.
@@ -0,0 +1,43 @@
# this file main purpose is documentation, and it should not be used directly
enc_output_name: z # name of key in hidden transforms output to pass to decoder (e.g., z for VAE/MIM)
tokens_loss_weight: 1.0 # weight of tokens loss (if not specified defaults to 1.0)
# the lists below are useful for adding multiple transforms and losses according to order
# if order is not important, you can use a single dictionary in the list with multiple keys
transform: # a list of dictionaries of transforms (or a joint dictionary) to apply to hiddens (list enforces order)
# - <transform_name>: # name of transform
# cls_name: <transform_class_path_name> # class path name
# <transform_param>: <transform_value> # transform parameters
# ...
- q_z_given_x: # Gaussian posterior with reparameterization
cls_name: cond_gaussian # class path name
hidden_size: 512 # hidden size of the encoder
min_logvar: -6.0 # minimum log variance
- logP_cls:
cls_name: guided_cls
input_name: hiddens
attr_name: logP
- QED_cls:
cls_name: guided_cls
input_name: hiddens
attr_name: QED
loss: # a list of dictionaries of loss terms (or a joint dictionary) to add to reconstruction loss (list enforces order)
# - <loss_name>: # name of loss
# cls_name: <loss_class_path_name> # class path name
# <loss_param>: <loss_value> # loss parameters
# ...
# below is example where order of losses does not matter so a single item in list is enough
- mim: # A-MIM example
cls_name: a_mim
loss_weight: 1.0 # weight of the MIM latent loss
vae: # VAE example
cls_name: vae
min_kl_value: null # minimum KL value if a float is provided
loss_weight: 1e-2 # weight of KL term in loss
logP_cls:
cls_name: guided_cls_loss
input_name: logP
loss_weight: 1.0
QED_cls:
cls_name: guided_cls_loss
input_name: logP
loss_weight: 1.0
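As an aside, a minimal sketch of how an ordered `transform` list like the one above could be walked (OmegaConf is assumed since NeMo uses it for configs; the filename is hypothetical):

```python
from omegaconf import OmegaConf

cfg = OmegaConf.load("megatron_hiddens_example.yaml")  # hypothetical filename
for item in cfg.get("transform", []):
    # each list entry may hold one or more named transforms
    for name, params in item.items():
        cls_name = params["cls_name"]
        kwargs = {k: v for k, v in params.items() if k != "cls_name"}
        print(f"build transform {name!r} via class {cls_name!r} with {kwargs}")
```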
@@ -249,7 +249,7 @@ def loss_func(output_tensor):
             lm_loss = loss_dict['lm loss']
             loss = lm_loss
             reduced_loss = average_losses_across_data_parallel_group([loss, lm_loss])
-            return loss, {'avg': reduced_loss}
+            return loss, {'loss': reduced_loss}
 
         return output_tensor, loss_func

@@ -334,7 +334,7 @@ def training_step(self, dataloader_iter, batch_idx):
         )
 
         if losses_reduced_per_micro_batch:
-            loss_tensors_list = [loss_reduced['avg'] for loss_reduced in losses_reduced_per_micro_batch]
+            loss_tensors_list = [loss_reduced['loss'] for loss_reduced in losses_reduced_per_micro_batch]
             loss_tensor = torch.vstack(loss_tensors_list)
             loss_mean = loss_tensor.mean(axis=0)
         else:
@@ -447,7 +447,7 @@ def validation_step(self, dataloader_iter, batch_idx):
         )
 
         if losses_reduced_per_micro_batch:
-            loss_tensors_list = [loss_reduced['avg'] for loss_reduced in losses_reduced_per_micro_batch]
+            loss_tensors_list = [loss_reduced['loss'] for loss_reduced in losses_reduced_per_micro_batch]
             loss_tensor = torch.vstack(loss_tensors_list)
             loss_mean = loss_tensor.mean(axis=0)
         else:
@@ -276,47 +276,27 @@ def _reconfigure_and_process_inference_batch(self, batch, ds_config):
     def fwd_bwd_step(self, dataloader_iter, batch_idx, forward_only):
         """
         Dataloader produces a global batch which is turned into a list of microbatches.
-        The list of microbatches is then piped through the pipeline using megatron-core fwd/bwd functions.
+        The list of microbatches is then piped through the pipeline using Apex fwd/bwd functions.
         """
-        # Get seq length of batch
         batch = next(dataloader_iter)
+        if isinstance(batch, dict):
+            # convert to list if not already converted.
+            batch = self._process_batch(batch)
 
-        _, seq_length = batch[0].shape
-        _, dec_seq_length = batch[1].shape
-        tensor_shape = [seq_length, get_micro_batch_size(), self.cfg.encoder.hidden_size]
-        data_iter = get_iterator_k_split(batch, get_num_microbatches())
+        # Get seq length of batch
+        encoder_seq_length = batch[0].size(1)
+        decoder_seq_length = batch[1].size(1)
 
-        fwd_bwd_function = get_forward_backward_func()
+        tensor_shape = [encoder_seq_length, get_micro_batch_size(), self.cfg.encoder.hidden_size]
+        data_iter = get_iterator_k_split(batch, get_num_microbatches())
 
-        losses_reduced_per_micro_batch = fwd_bwd_function(
-            forward_step_func=self.get_forward_output_and_loss_func(),
+        return self._execute_fwd_bwd_function(
             data_iterator=data_iter,
-            model=[self.enc_dec_model],
-            num_microbatches=get_num_microbatches(),
             forward_only=forward_only,
             tensor_shape=tensor_shape,
-            decoder_seq_length=dec_seq_length,
-            dtype=self.autocast_dtype,
-            grad_scaler=self.trainer.precision_plugin.scaler.scale if self.cfg.precision == 16 else None,
-            sequence_parallel=self.cfg.get('sequence_parallel', False),
-            enable_autocast=self.enable_autocast,
+            decoder_seq_length=decoder_seq_length,
        )
 
-        # only the last stages of the pipeline return losses
-        if losses_reduced_per_micro_batch:
-            # average loss across micro batches
-            loss_tensors_list = [loss_reduced['avg'] for loss_reduced in losses_reduced_per_micro_batch]
-            loss_tensor = torch.concat(loss_tensors_list)
-            loss_mean = loss_tensor.mean()
-        else:
-            # we're not on the last pipeline stage so no losses
-            loss_mean = torch.tensor(0.0).cuda()
-
-        return loss_mean
-
     def inference_step(self, dataloader_iter, batch_idx: int, mode: str, dataloader_idx=0):
         # Add try except since dataloader_iter in PTL 2.0 doesnt catch the end of the iterator
         try:
@@ -366,12 +346,16 @@ def inference_step(self, dataloader_iter, batch_idx: int, mode: str, dataloader_
             _ = metric(pred, label)
 
         outputs = {
-            'loss': loss,
             'preds': preds_text,
             'labels': labels_text,
             'categories': categories,
             'inputs': input_text,
         }
 
+        if isinstance(loss, dict):
+            outputs.update(loss)
+        else:
+            outputs['loss'] = loss
         if mode == 'validation':
             if type(self.trainer.val_dataloaders) == list and len(self.trainer.val_dataloaders) > 1:
                 self.validation_step_outputs[dataloader_idx].append(outputs)