
merge main #23866

Merged

merged 74 commits into mishig25-patch-4 on May 30, 2023
Conversation

mishig25 (Contributor)

No description provided.

Tylersuard and others added 30 commits May 22, 2023 10:53
* Debug example code for MegaForCausalLM

set ignore_mismatched_sizes=True in model loading code

* Fix up
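
A minimal sketch of the loading pattern the commit above describes (the checkpoint name is an illustrative assumption, not taken from this PR):

```python
from transformers import MegaForCausalLM

# ignore_mismatched_sizes=True skips any checkpoint weights whose shapes do
# not match the current config instead of raising, which the debugged example
# relies on when the head dimensions differ.
model = MegaForCausalLM.from_pretrained(
    "mnaylor/mega-base-wikitext",  # assumed checkpoint, for illustration only
    ignore_mismatched_sizes=True,
)
```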
* Fix tensor device when attention_mask is not None

* Fix tensor device when attention_mask is not None
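
The device pattern behind this fix, as a small self-contained sketch (the tensor shapes are placeholders):

```python
import torch

hidden_states = torch.randn(1, 4, 8)  # stand-in activations
attention_mask = torch.ones(1, 4)     # stand-in mask, possibly on another device

# When a mask is supplied, move it to the device of the tensor it will be
# combined with before doing any arithmetic on the two.
if attention_mask is not None:
    attention_mask = attention_mask.to(hidden_states.device)
```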
* fix logger bug

* Update tests/mixed_int8/test_mixed_int8.py

Co-authored-by: Zachary Mueller <[email protected]>

* import `PartialState`

---------

Co-authored-by: Zachary Mueller <[email protected]>
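
A hedged sketch of the `PartialState` usage these commits point at, assuming accelerate's documented API (the log message itself is illustrative):

```python
import logging

from accelerate import PartialState

logger = logging.getLogger(__name__)
state = PartialState()

# Emit the log line once on the main process instead of once per process.
if state.is_main_process:
    logger.info("running on %d process(es)", state.num_processes)
```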
* Fix deepspeed recursion

* Better fix
… lots of memory (#23535)

* Fixed bug where LLaMA layer norm would change input type.

* make fix-copies

---------

Co-authored-by: younesbelkada <[email protected]>
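
The dtype-preserving pattern the fix describes, as a simplified sketch (not the exact layer from the PR):

```python
import torch
from torch import nn

class RMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, hidden_states):
        input_dtype = hidden_states.dtype
        # Compute the statistics in float32 for numerical stability...
        hidden_states = hidden_states.to(torch.float32)
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.eps)
        # ...then cast back so the norm does not silently change the input type.
        return (self.weight * hidden_states).to(input_dtype)
```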
* Fix wav2vec2 is_batched check to include 2-D numpy arrays

* address comment

* Add tests

* oops

* oops

* Switch to np array

Co-authored-by: Sanchit Gandhi <[email protected]>

* Switch to np array

* condition merge

* Specify mono channel only in comment

* oops, add other comment too

* make style

* Switch list check from falsiness to empty

---------

Co-authored-by: Sanchit Gandhi <[email protected]>
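
A sketch of the extended batching check (simplified relative to the real feature extractor):

```python
import numpy as np

def is_batched(raw_speech):
    # Lists/tuples of sequences are batched; an explicit length check avoids
    # the falsiness pitfall the later commit mentions.
    if isinstance(raw_speech, (list, tuple)):
        return len(raw_speech) > 0 and isinstance(raw_speech[0], (np.ndarray, list, tuple))
    # A 2-D numpy array is treated as a batch of mono-channel waveforms.
    return isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1
```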
* Fix SAM tests and use smaller checkpoints

* Override test_model_from_pretrained to use sam-vit-base as well

* make fixup
* fix

* fix

---------

Co-authored-by: ydshieh <[email protected]>
* First draft

* Remove print statements

* Add conditional generation

* Add more tests

* Remove scripts

* Remove BLIP-specific lines

* Add support for pix2struct

* Add fast test

* Address comment

* Fix style
…cision_transformer (#23673)

Bump requests in /examples/research_projects/decision_transformer

Bumps [requests](https://github.com/psf/requests) from 2.27.1 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](psf/requests@v2.27.1...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…sual_bert (#23670)

Bump requests in /examples/research_projects/visual_bert

Bumps [requests](https://github.com/psf/requests) from 2.22.0 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](psf/requests@v2.22.0...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…mert (#23668)

Bump requests in /examples/research_projects/lxmert

Bumps [requests](https://github.com/psf/requests) from 2.22.0 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](psf/requests@v2.22.0...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Add PerSAM args

* Make attn_sim optional

* Rename to attention_similarity

* Add docstrings

* Improve docstrings
* Update modeling_open_llama.py

Fix typo in `use_memorry_efficient_attention` parameter name

* Update configuration_open_llama.py

Fix typo in `use_memorry_efficient_attention` parameter name

* Update configuration_open_llama.py

Take care of backwards compatibility ensuring that the previous parameter name is taken into account if used

* Update configuration_open_llama.py

format to adjust the line length

* Update configuration_open_llama.py

proper code formatting using `make fixup`

* Update configuration_open_llama.py

pop the argument not to let it be set later down the line
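
Putting the backwards-compatibility commits together, a trimmed sketch of the config pattern (not the full `OpenLlamaConfig`):

```python
from transformers import PretrainedConfig

class OpenLlamaConfig(PretrainedConfig):
    def __init__(self, use_memory_efficient_attention=True, **kwargs):
        # Honor the old misspelled kwarg if a caller still passes it, and pop
        # it so the argument is not set again later down the line.
        if "use_memorry_efficient_attention" in kwargs:
            use_memory_efficient_attention = kwargs.pop("use_memorry_efficient_attention")
        self.use_memory_efficient_attention = use_memory_efficient_attention
        super().__init__(**kwargs)
```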
* Making `safetensors` a core dependency.

To be merged later, I'm creating the PR so we can try it out.

* Update setup.py

* Remove duplicates.

* Even more redundant.
…an (#23621)

docs: ko: `tasks/monocular_depth_estimation`

Co-authored-by: Hyeonseo Yun <[email protected]>
Co-authored-by: Sohyun Sim <[email protected]>
Co-authored-by: Gabriel Yang <[email protected]>
Co-authored-by: Wonhyeong Seo <[email protected]>
Co-authored-by: Jungnerd <[email protected]>
fix

Co-authored-by: ydshieh <[email protected]>
* add a dummy pipeline test

* change test name
* New TF version compatibility fixes

* Remove dummy print statement, move expand_1d

* Make a proper framework inference function

* Make a proper framework inference function

* ValueError -> TypeError
* Fix is_batched code to allow 2-D numpy arrays for audio

* Tests

* Fix typo

* Incorporate comments from PR #23223
Ref: huggingface/peft#394
Loading a quantized checkpoint into a non-quantized Linear8bitLt is not supported;
call module.cuda() before module.load_state_dict().
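
The ordering constraint, as a sketch (requires CUDA and bitsandbytes; the checkpoint path is hypothetical):

```python
import torch
import bitsandbytes as bnb

module = bnb.nn.Linear8bitLt(1024, 1024, has_fp16_weights=False)
state_dict = torch.load("int8_checkpoint.pt")  # hypothetical file

# .cuda() triggers the int8 quantization of the weights; loading a quantized
# state dict into a module still sitting on CPU is not supported.
module.cuda()
module.load_state_dict(state_dict)
```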
* Fix some docs on what layerdrop does

* Update src/transformers/models/data2vec/configuration_data2vec_audio.py

Co-authored-by: Sylvain Gugger <[email protected]>

* Fix more docs

---------

Co-authored-by: Sylvain Gugger <[email protected]>
tloen and others added 28 commits May 25, 2023 07:48
* Revamp test selection for the example tests

* Rename old XLA test and fake modif in run_glue

* Fixes

* Fake Trainer modif

* Remove fake modifs
* remove unused parameters

* remove unused parameters in config
* Fix is_ninja_available()

Search for ninja using subprocess instead of importlib.

* Fix style

* Fix doc

* Fix style
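
A sketch of the subprocess-based probe described above (a simplified stand-in for the real helper):

```python
import subprocess

def is_ninja_available():
    # Probe the binary itself: importlib only detects the Python package,
    # not a ninja executable on PATH.
    try:
        subprocess.check_output(["ninja", "--version"])
    except Exception:
        return False
    return True
```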
#23766)

Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.0.4 to 6.3.2.
- [Changelog](https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst)
- [Commits](tornadoweb/tornado@v6.0.4...v6.3.2)

---
updated-dependencies:
- dependency-name: tornado
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…l_bert (#23767)

Bump tornado in /examples/research_projects/visual_bert

Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.0.4 to 6.3.2.
- [Changelog](https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst)
- [Commits](tornadoweb/tornado@v6.0.4...v6.3.2)

---
updated-dependencies:
- dependency-name: tornado
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
class_weights tensor should follow model's device
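
The device-following pattern this commit describes, in a small runnable sketch (shapes and weights are placeholders):

```python
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
logits = torch.randn(8, 3, device=device)          # stand-in model output
labels = torch.randint(0, 3, (8,), device=device)
class_weights = torch.tensor([1.0, 2.0, 0.5])      # created on CPU

# Move the weights to the logits' device instead of assuming CPU.
loss_fct = nn.CrossEntropyLoss(weight=class_weights.to(logits.device))
loss = loss_fct(logits, labels)
```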
* Fix no such file or directory error

* Address comment

* Fix formatting issue
… log the adjusted value separately. (#23800)

* Log right bs

* Log

* Diff message
* Enable code-specific revision for code on the Hub

* invalidate old revision
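
A hedged sketch of the new argument in use (the repo id is hypothetical):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "some-org/custom-model",  # hypothetical repo shipping its own modeling code
    trust_remote_code=True,
    code_revision="main",     # pin the revision of the *code*, separately from the weights
)
```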
* ran `transformers-cli add-new-model-like`

* added `AutoformerLayernorm` and `AutoformerSeriesDecomposition`

* added `decomposition_layer` in `init` and `moving_avg` to config

* added `AutoformerAutoCorrelation` to encoder & decoder

* removed canonical self-attention `AutoformerAttention`

* added arguments in config and model tester. Init works! 😁

* WIP autoformer attention with autocorrelation

* fixed `attn_weights` size

* wip time_delay_agg_training

* fixing sizes and debug time_delay_agg_training

* aggregation in training works! 😁

* `top_k_delays` -> `top_k_delays_index` and added `contiguous()`

* wip time_delay_agg_inference

* finish time_delay_agg_inference 😎

* added resize to autocorrelation

* bug fix: added the length of the output signal to `irfft`

* `attention_mask = None` in the decoder

* fixed test: changed attention expected size, `test_attention_outputs` works!

* removed unnecessary code

* apply AutoformerLayernorm in final norm in enc & dec

* added series decomposition to the encoder

* added series decomp to decoder, with inputs

* added trend todos

* added autoformer to README

* added to index

* added autoformer.mdx

* remove scaling and init attention_mask in the decoder

* make style

* fix copies

* make fix-copies

* initial fix-copies

* fix from #22076

* make style

* fix class names

* added trend

* added d_model and projection layers

* added `trend_projection` source, and decomp layer init

* added trend & seasonal init for decoder input

* AutoformerModel cannot be copied as it has the decomp layer too

* encoder can be copied from time series transformer

* fixed generation and made distribution output more robust

* use context window to calculate decomposition

* use the context_window for decomposition

* use output_params helper

* clean up AutoformerAttention

* subsequences_length off by 1

* make fix copies

* fix test

* added init for nn.Conv1d

* fix IGNORE_NON_TESTED

* added model_doc

* fix ruff

* ignore tests

* remove dup

* fix SPECIAL_CASES_TO_ALLOW

* do not copy due to conv1d weight init

* remove unused imports

* added short summary

* added label_length and made the model non-autoregressive

* added params docs

* better doc for `factor`

* fix tests

* renamed `moving_avg` to `moving_average`

* renamed `factor` to `autocorrelation_factor`

* make style

* Update src/transformers/models/autoformer/configuration_autoformer.py

Co-authored-by: NielsRogge <[email protected]>

* Update src/transformers/models/autoformer/configuration_autoformer.py

Co-authored-by: NielsRogge <[email protected]>

* fix configurations

* fix integration tests

* Update src/transformers/models/autoformer/configuration_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* fixing `lags_sequence` doc

* Revert "fixing `lags_sequence` doc"

This reverts commit 21e3491.

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Apply suggestions from code review

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/configuration_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* model layers now take the config

* added `layer_norm_eps` to the config

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* added `config.layer_norm_eps` to AutoformerLayernorm

* added `config.layer_norm_eps` to all layernorm layers

* Update src/transformers/models/autoformer/configuration_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/configuration_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/configuration_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/configuration_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* fix variable names

* added initial pretrained model

* added use_cache docstring

* doc strings for trend and use_cache

* fix order of args

* imports on one line

* fixed get_lagged_subsequences docs

* add docstring for create_network_inputs

* get rid of layer_norm_eps config

* add back layernorm

* update fixture location

* fix signature

* use AutoformerModelOutput dataclass

* fix pretrain config

* no need as default exists

* subclass ModelOutput

* remove layer_norm_eps config

* fix test_model_outputs_equivalence test

* test hidden_states_output

* make fix-copies

* Update src/transformers/models/autoformer/configuration_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* removed unused attr

* Update tests/models/autoformer/test_modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/autoformer/modeling_autoformer.py

Co-authored-by: amyeroberts <[email protected]>

* use AutoFormerDecoderOutput

* fix formatting

* fix formatting

---------

Co-authored-by: Kashif Rasul <[email protected]>
Co-authored-by: NielsRogge <[email protected]>
Co-authored-by: amyeroberts <[email protected]>
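
A minimal sketch of instantiating the new model; the values are illustrative, only the parameter names come from the commit log above:

```python
from transformers import AutoformerConfig, AutoformerForPrediction

config = AutoformerConfig(
    prediction_length=24,
    context_length=48,
    moving_average=25,         # kernel of the series-decomposition moving average
    autocorrelation_factor=3,  # top-k factor for the auto-correlation attention
)
model = AutoformerForPrediction(config)
```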
* add type hint in pipeline model argument

* add PreTrainedModel and TFPreTrainedModel type hints

* make type hints strings
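
The string-annotation trick these commits describe, sketched: quoted names act as forward references, so neither torch nor TensorFlow has to be importable just to annotate the signature.

```python
from typing import Optional, Union

def pipeline(
    task: str,
    # Quoted hints are not evaluated at definition time, so the heavy
    # frameworks behind these classes stay optional imports.
    model: Optional[Union[str, "PreTrainedModel", "TFPreTrainedModel"]] = None,
):
    ...
```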
SAM shape flexibility fixes for compilation
* move input features to GPU

* skip these tests because of undefined behavior

* unskip tests
* docs: ko: fast_tokenizer.mdx

content - translated

Co-Authored-By: Gabriel Yang <[email protected]>
Co-Authored-By: Nayeon Han <[email protected]>
Co-Authored-By: Hyeonseo Yun <[email protected]>
Co-Authored-By: Sohyun Sim <[email protected]>
Co-Authored-By: Jungnerd <[email protected]>
Co-Authored-By: Wonhyeong Seo <[email protected]>

* Update docs/source/ko/fast_tokenizers.mdx

Co-authored-by: Sohyun Sim <[email protected]>

* Update docs/source/ko/fast_tokenizers.mdx

Co-authored-by: Sohyun Sim <[email protected]>

* Update docs/source/ko/fast_tokenizers.mdx

Co-authored-by: Sohyun Sim <[email protected]>

* Update docs/source/ko/fast_tokenizers.mdx

Co-authored-by: Sohyun Sim <[email protected]>

* Update docs/source/ko/fast_tokenizers.mdx

Co-authored-by: Sohyun Sim <[email protected]>

* Update docs/source/ko/fast_tokenizers.mdx

Co-authored-by: Sohyun Sim <[email protected]>

* Update docs/source/ko/fast_tokenizers.mdx

Co-authored-by: Hyeonseo Yun <[email protected]>

* Update fast_tokenizers.mdx

* Update fast_tokenizers.mdx

* Update fast_tokenizers.mdx

* Update fast_tokenizers.mdx

* Update _toctree.yml

---------

Co-authored-by: Gabriel Yang <[email protected]>
Co-authored-by: Nayeon Han <[email protected]>
Co-authored-by: Hyeonseo Yun <[email protected]>
Co-authored-by: Sohyun Sim <[email protected]>
Co-authored-by: Jungnerd <[email protected]>
Co-authored-by: Wonhyeong Seo <[email protected]>
Co-authored-by: Hyeonseo Yun <[email protected]>
* task/video_classification translated

Co-Authored-By: Hyeonseo Yun <[email protected]>
Co-Authored-By: Gabriel Yang <[email protected]>
Co-Authored-By: Sohyun Sim <[email protected]>
Co-Authored-By: Nayeon Han <[email protected]>
Co-Authored-By: Wonhyeong Seo <[email protected]>
Co-Authored-By: Jungnerd <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Jungnerd <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Jungnerd <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Jungnerd <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Jungnerd <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Jungnerd <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Jungnerd <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Jungnerd <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Jungnerd <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Sohyun Sim <[email protected]>

* Update docs/source/ko/tasks/video_classification.mdx

Co-authored-by: Sohyun Sim <[email protected]>

* Apply suggestions from code review

Co-authored-by: Sohyun Sim <[email protected]>
Co-authored-by: Hyeonseo Yun <[email protected]>
Co-authored-by: Jungnerd <[email protected]>
Co-authored-by: Gabriel Yang <[email protected]>

* Update video_classification.mdx

* Update _toctree.yml

* Update _toctree.yml

* Update _toctree.yml

* Update _toctree.yml

---------

Co-authored-by: Hyeonseo Yun <[email protected]>
Co-authored-by: Gabriel Yang <[email protected]>
Co-authored-by: Sohyun Sim <[email protected]>
Co-authored-by: Nayeon Han <[email protected]>
Co-authored-by: Wonhyeong Seo <[email protected]>
Co-authored-by: Jungnerd <[email protected]>
Co-authored-by: Hyeonseo Yun <[email protected]>
* docs: ko: troubleshooting.mdx

* revised: fix _toctree.yml #23112

* feat: nmt draft `troubleshooting.mdx`

* fix: manual edits `troubleshooting.mdx`

* revised: resolve suggestions troubleshooting.mdx

Co-authored-by: Sohyun Sim <[email protected]>

---------

Co-authored-by: Sohyun Sim <[email protected]>
* initial flyte callback

* lint

* logs should still be saved to Flyte even if pandas isn't installed (unlikely)

* cr - flyte team

* add docs for Flytecallback

* fix doc string - cr sgugger

* Apply suggestions from code review

cr - sgugger fix doc strings

Co-authored-by: Sylvain Gugger <[email protected]>

---------

Co-authored-by: Sylvain Gugger <[email protected]>
* Update the processor when changing add_eos and add_bos

* fixup

* update

* add a test

* fix failing tests

* fixup
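
What the fix should look like from the user side, as a hedged sketch (the checkpoint is an assumption, and the setter behavior is the one the commits describe):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # assumed checkpoint

# After the fix, flipping the flag also rebuilds the underlying post-processor,
# so the very next encode really appends the EOS token.
tok.add_eos_token = True
assert tok("hello").input_ids[-1] == tok.eos_token_id
```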
…is not defined (#23861)

* Better warning

* Update src/transformers/modeling_utils.py

Co-authored-by: Sylvain Gugger <[email protected]>

* format line

---------

Co-authored-by: Sylvain Gugger <[email protected]>
mishig25 merged commit 2d0e384 into mishig25-patch-4 on May 30, 2023
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
