
Add bark #24086

Merged · 182 commits · Jul 17, 2023
Conversation

@ylacombe (Contributor) commented Jun 7, 2023

This PR aims at integrating Bark, a TTS model, into transformers.
Bark was designed and trained by the Suno-AI team and is made of 4 main components:

  • A semantic model (also named text model), i.e. a causal autoregressive transformer (GPT-2-like) that takes tokenized text as input.
  • A coarse acoustics model (also named coarse model), also a causal autoregressive transformer, that takes the output of the semantic model as input. It regresses the first two audio codebooks required by EnCodec.
  • A fine acoustics model (fine model), this time a non-causal autoencoder transformer, which iteratively predicts the last 6 codebooks based on the sum of the previous codebook embeddings.
  • Having predicted all 8 EnCodec codebook channels, Bark uses EnCodec to generate the output audio array.

Note that each of the first 3 modules can take optional conditional speaker embeddings that condition the output audio on specific preset voices.
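
For context, a minimal usage sketch of the API this PR is building towards (a hedged sketch: the suno/bark-small checkpoint is mentioned later in this thread, and the voice preset name is illustrative):

# minimal end-to-end sketch; checkpoint and preset names are illustrative
from transformers import AutoProcessor, BarkModel

processor = AutoProcessor.from_pretrained("suno/bark-small")
model = BarkModel.from_pretrained("suno/bark-small")

# voice_preset selects one of the optional conditional speaker embeddings
inputs = processor("Hello, my dog is cute", voice_preset="v2/en_speaker_6")

# runs the semantic, coarse and fine sub-models, then decodes with EnCodec
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()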

@HuggingFaceDocBuilderDev commented Jun 7, 2023

The documentation is not available anymore as the PR was closed or merged.

@amyeroberts (Collaborator):

cc @sanchit-gandhi

@sanchit-gandhi (Contributor):

PR supersedes #23375

@sanchit-gandhi (Contributor) left a comment:

Great start @ylacombe - left a lot of comments about design which you'll pick up through experience. My main request is changing the design so that we define a base model, and then three models on top of that with LM heads. Each of these models should have a generate method tied to it that does that section of generation. Then the composite model can call each of these sub-methods individually

Didn't look too deep into the last generation loop since we're still figuring out the design there, can take a look once we're a bit more certain!
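
To illustrate the requested structure, a hypothetical sketch (class and attribute names follow what the PR eventually converges on, not the diff at this stage):

# One base transformer, three LM-head models each owning its generation
# stage, and a composite model chaining the sub-generate methods.
class BarkSemanticModel:   # causal LM over semantic (text) tokens
    def generate(self, input_ids, **kwargs): ...

class BarkCoarseModel:     # causal LM regressing the first two codebooks
    def generate(self, semantic_tokens, **kwargs): ...

class BarkFineModel:       # non-causal model predicting the last six codebooks
    def generate(self, coarse_tokens, **kwargs): ...

class BarkModel:
    def generate(self, input_ids, **kwargs):
        semantic = self.semantic.generate(input_ids, **kwargs)
        coarse = self.coarse_acoustics.generate(semantic, **kwargs)
        fine = self.fine_acoustics.generate(coarse, **kwargs)
        return self.codec_model.decode(fine)  # EnCodec turns codebooks into audio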

@ylacombe marked this pull request as ready for review June 21, 2023
@ylacombe changed the title from [WIP] Add bark to Add bark Jun 21, 2023
logger = logging.get_logger(__name__)


_CHECKPOINT_FOR_DOC = "suno/bark"
Contributor:

(this will also need to be suno/bark-small for the final PR)


# forward the GPT model itself
input_embeds = [
    wte(input_ids[:, :, i]).unsqueeze(-1) for i, wte in enumerate(self.transformer.wtes)
]
@sanchit-gandhi (Contributor) commented Jun 25, 2023:

Possibly, but since we don't return input_embeds to the user it's okay if we change how this is computed in the future (there won't be a breaking change for the user as the shape of input_embeds is the same in both cases, it'll just be marginally more expensive to compute it over all the codebooks)

I would advocate for doing it the cheaper way for now since we can always update it later to sum over all codebooks then just slice as many as we need

The cheaper way being:

# sum the embeddings of codebook channels 0..pred_idx only (cheaper than embedding all codebooks)
input_embeds = sum([self.transformer.wtes[i](input_ids[:, :, i, None]) for i in range(pred_idx + 1)])

@sanchit-gandhi (Contributor) left a comment:

Reviewing in two parts - GH won't let me continue my current review

@sanchit-gandhi (Contributor) left a comment:

Looking good! Modelling code is pretty much there. Some small changes needed for generation, but largely good too. Let's discuss the processor design offline on Monday and find a nice design for loading the voice presets - the actual call of the processor is clean, I just wonder if there's a better loading mechanism we can employ when we initialise or call from_pretrained.

Tests by and large look good too! I think we can spend a bit of time on the docs to really showcase how to use the model effectively - all the tips can be accompanied by code snippets, and we should have a few more examples for generation either in the docstrings or the docs themselves.


        return x_semantic_history, x_coarse_history

    def generate_text_semantic(
Contributor:

I think all of this still works if this method belongs to the text model rather than the composite model. We'll do all of the same pre-/post-processing, just under the text model rather than the composite one

  • The tokenization happens with the tokenizer, not the model -> nothing changes here
  • The text_encoding_offset parameter can go in the config no problem, we'll do all of the same token id pre-processing so this will work in the text model as well
  • Same as above
  • We'll do all of the post-processing as we are currently doing, so this will work in the text model as well

@sanchit-gandhi (Contributor) left a comment:

Nice - mainly just nits for the processor. Let me know when you'd like a re-review on the generation loop!

@sanchit-gandhi (Contributor) left a comment:

Could we clean up the TODOs before getting a final review please? Otherwise just styling comments

@sanchit-gandhi (Contributor) left a comment:

Lots of TODOs and clean-up still required - could you address these before we get a final review? Could you also make sure that any docstrings / comments remain within the 119-character line-width boundary? Some of the comments currently run far off the side of the page.


class SemanticLogitsProcessor(LogitsProcessor):
    r"""
    [`LogitsProcessor`] enforcing that logits from the semantic model observe Bark original logic.
Contributor:

This is not a great docstring - we should describe what the logits processor does, rather than say it does 'Bark behaviour'. Also now that we're in the logits processor file, these logits processors should be general across models (as much as possible!)
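
(For reference, this PR eventually drops SemanticLogitsProcessor in favour of the generic SuppressTokensLogitsProcessor; see the commit list at the end of this thread. Its logic is roughly the following simplified sketch, not the exact transformers source:)

import torch
from transformers import LogitsProcessor

class SuppressTokensLogitsProcessor(LogitsProcessor):
    r"""Sets the logits of all `suppress_tokens` to `-inf` so they are never sampled."""

    def __init__(self, suppress_tokens):
        self.suppress_tokens = list(suppress_tokens)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # works for any model: the suppressed ids simply become impossible to sample
        scores[:, self.suppress_tokens] = -float("inf")
        return scores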

@ylacombe (Contributor, Author):
Hi @sanchit-gandhi, I think it's finally time for the final review! You might want to check the refactoring of generate_text_semantic, generate_coarse and generate_fine, but otherwise it sounds good!

@sanchit-gandhi (Contributor) left a comment:

LGTM! Thanks @ylacombe for iterating here and cleaning up the code. There are a few merge conflicts that need to be resolved (see the panel where the CI is displayed), but otherwise good to go here

@ylacombe (Contributor, Author):
Hi @amyeroberts, the PR is ready for review! I'd be delighted to get your feedback on this when you have a chance. Let me know if I can help with anything!

Comment on lines +240 to +269
def _attn(self, query, key, value, attention_mask=None, head_mask=None):
    # unlike GPTNeo's SelfAttention, divide by the square root of the dimension of the query and the key
    attn_weights = torch.matmul(query, key.transpose(-1, -2)) * (1.0 / math.sqrt(self.head_dim))

    if self.is_causal:
        query_length, key_length = query.size(-2), key.size(-2)

        # mask future positions (where the causal bias is 0) with the smallest representable value
        attn_weights = attn_weights.masked_fill(
            self.bias[:, :, key_length - query_length : key_length, :key_length] == 0,
            torch.finfo(attn_weights.dtype).min,
        )

    if attention_mask is not None:
        # Apply the attention mask
        attn_weights = attn_weights + attention_mask

    attn_weights = nn.functional.softmax(attn_weights, dim=-1)
    attn_weights = attn_weights.to(value.dtype)
    attn_weights = self.attn_dropout(attn_weights)

    # Mask heads if we want to
    if head_mask is not None:
        attn_weights = attn_weights * head_mask

    # (batch, num_heads, seq_len, seq_len) x (batch, num_heads, seq_len, attn_head_size)
    # -> (batch, num_heads, seq_len, attn_head_size)
    attn_output = torch.matmul(attn_weights, value)

    return attn_output, attn_weights
@ylacombe (Contributor, Author):

The original implementation of Bark uses Flash Attention. Should we include it side-by-side?

Collaborator:

Not at the moment. We use BetterTransformer as a wrapper to add optimizations like flash attention to our models, however this is only for decoder models atm: https://pytorch.org/blog/out-of-the-box-acceleration/

@ylacombe (Contributor, Author):

Strictly speaking, two of Bark's sub-models are decoder models, i.e. BarkSemanticModel and BarkCoarseModel. Do you think I could/should adapt the code so that it works with BetterTransformer?

@sanchit-gandhi (Contributor) commented Jul 11, 2023:

For reference, we're working on more seamless BetterTransformer support with @fxmarty in optimum: https://huggingface.co/docs/optimum/bettertransformer/overview

Since we're not supporting SDPA natively in transformers, we can open a PR to optimum/bettertransformer to add the Bark attention in optimum. We might also need to add some loading / exporting logic to make the three sub-models work, including the non-autoregressive fine model.

Looking at the source code for BetterTransformer in a bit more detail, we see that there is an implementation for GPT-Neo scaled dot-product attention (SDPA) that is quite close to what we have in Bark: https://github.com/huggingface/optimum/blob/2678e74df3b9ff020031831f93e7e343a2405a09/optimum/bettertransformer/models/attention.py#L93

We can adapt this to add the forward signature for the Bark attention and any other necessary changes to export it to BetterTransformer.

A user will then simply have to do:

from transformers import BarkModel
from optimum.bettertransformer import BetterTransformer

model = BarkModel.from_pretrained(...)
model = BetterTransformer.transform(model)

to export the model to BetterTransformer to use SDPA.
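
As a rough illustration of what that adaptation could look like with torch>=2.0's scaled_dot_product_attention (an assumption-laden sketch, not the actual optimum implementation):

import torch
import torch.nn.functional as F

def bark_sdpa_attention(query, key, value, attention_mask=None, dropout_p=0.0, is_causal=False):
    # Fuses the matmul -> mask -> softmax -> matmul chain and dispatches
    # to flash attention kernels when the inputs allow it.
    return F.scaled_dot_product_attention(
        query,
        key,
        value,
        attn_mask=attention_mask,
        dropout_p=dropout_p,
        # SDPA forbids combining an explicit mask with is_causal=True
        is_causal=is_causal and attention_mask is None,
    )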

Comment on lines 106 to 107
speaker_embeddings = np.load(speaker_embeddings, allow_pickle=True)

@ylacombe (Contributor, Author) commented Jun 30, 2023:

I had to use pickle here, but I'm not sure about the safety of the method. I don't see another way to save/load a nested dictionary of np.ndarray tbh.

WDYT?

@amyeroberts (Collaborator):

First thought when looking at this is that all of the speaker embeddings have to be downloaded at once. Is it likely that a user will use all of these presets in a single inference / training session?

I would instead structure it like so:

In the model's repo, we have a folder speaker_embeddings, similar to the one you currently have. Within this folder, we could do 1 of 2 things:

  1. Save each of the arrays individually with a suffix, e.g. de_speaker_0_semantic_prompt.npy
  2. Have a nested folder structure, i.e. MODEL_REPO_NAME/bark-xxx/speaker_embeddings/de_speaker_0/semantic_prompt.npy

Personally, I'd go for 1.

Then at the top level of the model repo, we have a file speaker_embedding_paths.json, which is a nested dict, listing where all the files are:

{
    "de_speaker_0": {
        "semantic_prompt": "MODEL_REPO/bark-xxx/speaker_embeddings/de_speaker_0_semantic_prompt.npy",
        "coarse_prompt": "MODEL_REPO/bark-xxx/speaker_embeddings/de_speaker_0_coarse_prompt.npy",
        "fine_prompt": "MODEL_REPO/bark-xxx/speaker_embeddings/de_speaker_0_fine_prompt.npy"
    }
}

or possibly shorter paths with just the filename. I'd go for the full repo path, as that means others can point to and use the embeddings from other repos without having to copy them.

When someone requests a speaker embedding, we can first check whether it's already downloaded: if it is, load it from there; otherwise, download it from the path specified in the speaker_embeddings dict.
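
A hypothetical sketch of that lookup (the manifest name follows the proposal above; the helper and its behaviour are illustrative, not from the PR):

import json
import numpy as np
from huggingface_hub import hf_hub_download

def load_voice_preset(repo_id: str, preset_name: str) -> dict:
    # the manifest maps preset names to per-prompt .npy paths (option 1 above)
    manifest_path = hf_hub_download(repo_id, "speaker_embedding_paths.json")
    with open(manifest_path) as f:
        manifest = json.load(f)

    preset = {}
    for prompt_type, file_path in manifest[preset_name].items():
        # hf_hub_download caches locally, so each array is fetched from the
        # Hub only the first time it is requested
        local_path = hf_hub_download(repo_id, file_path)
        preset[prompt_type] = np.load(local_path)  # plain arrays, no pickle needed
    return preset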

@ylacombe (Contributor, Author) commented Jul 3, 2023:

Hi @amyeroberts,
I really like option 1; I think it might be the best way to be both safe and consistent with the original Bark speaker_embeddings nested dictionary. I will do it in the next few hours. Thanks for your insights!

@amyeroberts (Collaborator):

@ylacombe Great! Could you resolve the conflicts? Once that's done I'll review 👍

@ylacombe (Contributor, Author):

Hi @amyeroberts and @sgugger!

Many thanks for the additional review (and thanks @sanchit-gandhi for your insights)!
I've addressed most of your comments, especially those requiring more consistency with transformers regarding the naming of the generate_xxx methods. I still have a few comments to resolve; I'll wait for your feedback on those!

@ylacombe (Contributor, Author):

Hi @amyeroberts,
there was a timeout when executing the Python snippet in the generate docstrings.
I took advantage of this to add the ability to specify sub-model-specific parameters in BarkModel.generate.

To give a concrete example, you can now specify how many max_new_tokens you want for the semantic part of the model:
audio_array = model.generate(**inputs, semantic_max_new_tokens=100)
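
The same mechanism extends to the other sub-models; an illustrative combination (the coarse_temperature form is added later in this PR, and the exact values are arbitrary):

audio_array = model.generate(
    **inputs,
    semantic_max_new_tokens=100,  # caps only the semantic generation stage
    coarse_temperature=0.7,       # sampling temperature for the coarse model alone
)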

Now that this is done, there are still a few comments to resolve, so I look forward to hearing from you!

@ylacombe (Contributor, Author):

Hey @amyeroberts,
I've addressed your last remarks! Does that work for you?
Many thanks!

@amyeroberts (Collaborator):

@ylacombe LGTM! I think we're good to merge 👍

@amyeroberts merged commit f42a35e into huggingface:main Jul 17, 2023
@ylacombe deleted the add_bark branch Jul 19, 2023
@ylacombe mentioned this pull request Jul 20, 2023
@sanchit-gandhi mentioned this pull request Jul 24, 2023
@ylacombe mentioned this pull request Jul 24, 2023
blbadger pushed a commit to blbadger/transformers that referenced this pull request Nov 8, 2023
* first raw version of the bark integration

* working code on small models with single run

* add converting script from suno weights 2 hf

* many changes

* correct past_kv output

* working implementation for inference

* update the converting script according to the architecture changes

* add a working end-to-end inference code

* remove some comments and make small changes

* remove unnecessary comment

* add docstrings and ensure no unnecessary intermediary output during audio generation

* remove done TODOs

* make style + add config docstrings

* modification for batch inference support on the whole model

* add details to .generation_audio method

* add copyright

* convert EncodecModel from original library to transformers implementation

* add two class in order to facilitate model and sub-models loading from the hub

* add support of loading the whole model

* add BarkProcessor

* correct modeling according to processor output

* Add proper __init__ and auto support

* Add up-to-date copyright/license message

* add relative import instead of absolute

* cleaner head_dim computation

* small comment removal or changes

* more verbose LayerNorm init method

* specify eps for clearer comprehension

* more verbose variable naming in the MLP module

* remove unnecessary BarkBlock parameter

* clearer code in the forward pass of the BarkBlock

* remove _initialize_modules method for cleaner code

* Remove unnecessary methods from sub-models

* move code to remove unnecessary function

* rename a variable for clarity and change an assert

* move code and change variable name for clarity

* remove unnecessary asserts

* correct small bug

* correct a comment

* change variable names for clarity

* remove asserts

* change import from absolute to relative

* correct small error due to comma missing + correct import

* Add attribute Bark config

* add first version of tests

* update attention_map

* add tie_weights and resize_token_embeddings for fineModel

* correct getting attention_mask in generate_text_semantic

* remove Bark inference trick

* leave more choices in barkProcessor

* remove _no_split_modules

* fix error in forward of block and introduce clearer notations

* correct converting script with last changes

* make style + add draft bark.mdx

* correct BarkModelTest::test_generate_text_semantic

* add Bark in main README

* add dummy_pt_objects for Bark

* add missing models in the main init

* correct test_decoder_model_past_with_large_inputs

* disable torchscript test

* change docstring of BarkProcessor

* Add test_processor_bark

* make style

* correct copyrights

* add bark.mdx + make style, quality and consistency

* Apply suggestions from code review

Co-authored-by: Sanchit Gandhi <[email protected]>

* Remove unnecessary test method

* simply logic of a test

* Only check first ids for slow audio generation

* split full end-to-end generation tests

* remove unnecessary comment

* change submodel names for clearer naming

* remove ModuleDict from modeling_bark

* combine two if statements

* ensure that an edge-case misuse won't happen

* modify variable name

* move code snippet to the right place (coarse instead of semantic)

* change BarkSemanticModule -> BarkSemanticModel

* align BarkProcessor with transformers paradigm

* correct BarkProcessor tests with last commit changes

* change _validate_voice_preset to an instance method instead of a class method

* tie_weights already called with post_init

* add codec_model config to configuration

* update bark modeling tests with recent BarkProcessor changes

* remove SubModelPretrainedModel + change speakers embeddings prompt type in BarkModel

* change absolute imports to relative

* remove TODO

* change docstrings

* add examples to docs and docstrings

* make style

* use BatchFeature in BarkProcessor instead of dict

* continue improving docstrings and docs + make style

* correct docstrings examples

* more comprehensible speaker_embeddings load/Save

* rename speaker_embeddings_dict -> speaker_embeddings

* correct bark.mdx + add bark to documentation_tests

* correct docstrings configuration_bark

* integrate last nit suggestions

* integrate BarkGeneration configs

* make style

* remove bark tests from documentation_tests.txt because timeout - tested manually

* add proper generation config initialization

* small bark.mdx documentation changes

* rename bark.mdx -> bark.md

* add torch.no_grad behind BarkModel.generate_audio()

* replace assert by ValueError in convert_suno_to_hf.py

* integrate a series of short comments from reviewer

* move SemanticLogitsProcessors and remove .detach() from Bark docs and docstrings

* actually remove SemanticLogitsProcessor from modeling_bark.py

* BarkProcessor returns a single output instead of tuple + correct docstrings

* make style + correct bug

* add initializer_range to BarkConfig + correct slow modeling tests

* add .clone() to history_prompt.coarse_prompt to avoid modifying input array

* Making sure no extra "`" are present

* remove extra characters in modeling_bark.py

* Correct output if history_prompt is None

* remove TODOs

* remove ravel comment

* completing generation_configuration_bark.py docstrings

* change docstrings - number of audio codebooks instead of Encodec codebooks

* change 'bias' docstrings in configuration_bark.py

* format code

* rename BarkModel.generate_audio -> BarkModel.generate_speech

* modify AutoConfig instead of EncodecConfig in BarkConfig

* correct AutoConfig wrong init

* refactor BarkModel and sub-models generate_coarse, generate_fine, generate_text_semantic

* remove SemanticLogitsProcessor and replace it with SuppressTokensLogitsProcessor

* move nb_codebook related config arguments to BarkFineConfig

* rename bark.mdx -> bark.md

* correcting BarkModelConfig from_pretrained + remove keys_to_ignore

* correct bark.md with correct hub path

* correct code bug in bark.md

* correct list tokens_to_suppress

* modify Processor to load nested speaker embeddings in a safer way

* correct batch sampling in BarkFineModel.generate_fine

* Apply suggestions from code review

Small docstrings correction and code improvements

Co-authored-by: amyeroberts <[email protected]>

* give more details about num_layers in docstrings

* correct indentation mistake

* correct submodelconfig order of docstring variables

* put audio models in alphabetical order in utils/check_repo.py

* remove useless line from test_modeling_bark.py

* makes BarkCoarseModelTest inherits from (ModelTesterMixin, GenerationTesterMixin, unittest.TestCase) instead of BarkSemanticModelTest

* make a Tester class for each sub-model instead of inheriting

* add test_resize_embeddings=True for Bark sub-models

* add Copied from transformers.models.gpt_neo.modeling_gpt_neo.GPTNeoSelfAttention._split_heads

* remove 'Copied fom Bark' comment

* remove unnecessary comment

* change np.min -> min in modeling_bark.py

* refactored all custom layers to have Bark prefix

* add attention_mask as an argument of generate_text_semantic

* refactor sub-models start docstrings to have more precise config class definition

* move _tied_weights_keys overriding

* add docstrings to generate_xxx in modeling_bark.py

* add loading whole BarkModel to convert_suno_to_hf

* refactor attribute and variable names

* make style convert_suno

* update bark checkpoints

* remove never entered if statement

* move bark_modeling docstrings after BarkPretrainedModel class definition

* refactor modeling_bark.py: kv -> key_values

* small nits - code refactoring and removing unnecessary lines from _init_weights

* nits - replace inplace method by variable assigning

* remove *optional* when necessary

* remove some lines in generate_speech

* add default value for optional parameter

* Refactor preprocess_histories_before_coarse -> preprocess_histories

Co-authored-by: Sylvain Gugger <[email protected]>

* correct usage after refactoring

* refactor Bark's generate_xxx -> generate and modify docstrings and tests accordingly

* update docstrings python in configuration_bark.py

* add bark files in utils/documentation_test.txt

* correct docstrings python snippet

* add the ability to use parameters in the form of e.g. coarse_temperature

* add semantic_max_new_tokens in python snippet in docstrings for quicker generation

* Reformat sub-model kwargs in BarkModel.generate

Co-authored-by: amyeroberts <[email protected]>

* correct kwargs in BarkModel.generate

* correct attention_mask kwarg in BarkModel.generate

* add tests for sub-models args in BarkModel.generate and correct BarkFineModel.test_generate_fp16

* enrich BarkModel.generate docstrings with a description of how to use the kwargs

---------

Co-authored-by: Sanchit Gandhi <[email protected]>
Co-authored-by: amyeroberts <[email protected]>
Co-authored-by: Sylvain Gugger <[email protected]>