Add attn bias arg to HF wrapper (#458)
Muennighoff authored Feb 22, 2024
1 parent 7f7abbb commit cc36709
Showing 1 changed file with 2 additions and 0 deletions.
hf_olmo/modeling_olmo.py: 2 additions, 0 deletions
@@ -50,6 +50,7 @@ def forward(
         input_ids: torch.LongTensor = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         attention_mask: Optional[torch.Tensor] = None,
+        attention_bias: Optional[torch.Tensor] = None,
         past_key_values: Optional[List[torch.FloatTensor]] = None,
         labels: Optional[torch.LongTensor] = None,
         use_cache: Optional[bool] = None,
@@ -70,6 +71,7 @@
             input_ids=input_ids,
             input_embeddings=inputs_embeds,
             attention_mask=attention_mask,
+            attention_bias=attention_bias,
             past_key_values=past_key_values,
             use_cache=use_cache,
             output_hidden_states=output_hidden_states,
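
With this change, callers of the HF wrapper's forward can pass an additive attention bias straight through to the underlying OLMo model, in addition to the usual attention_mask. Below is a minimal usage sketch, not part of the commit: the checkpoint name and the hand-built causal bias are illustrative assumptions, and the bias is treated here as an additive float tensor broadcastable to (batch, n_heads, seq_len, seq_len).

# Minimal usage sketch (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "allenai/OLMo-7B"  # assumed hf_olmo checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

inputs = tokenizer("OLMo is an open language model.", return_tensors="pt")
seq_len = inputs["input_ids"].shape[1]

# Hand-built causal bias: 0 on and below the diagonal, -inf above it.
bias = torch.zeros(1, 1, seq_len, seq_len)
upper = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
bias = bias.masked_fill(upper, float("-inf"))

# The keyword argument added by this commit is forwarded to the core model.
outputs = model(input_ids=inputs["input_ids"], attention_bias=bias)
print(outputs.logits.shape)  # (1, seq_len, vocab_size)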
