[LLM] fix bug when loss is None in llama modeling.py (#8459)
cqulilujia authored May 22, 2024
1 parent 87e4c4f commit 70bffa8
Showing 1 changed file with 5 additions and 2 deletions.
paddlenlp/transformers/llama/modeling.py (7 changes: 5 additions & 2 deletions)
@@ -1654,8 +1654,11 @@ def forward(self, prediction_scores, masked_lm_labels):
             binary_sequence = paddle.where(
                 masked_lm_loss > 0, paddle.ones_like(masked_lm_loss), paddle.zeros_like(masked_lm_loss)
             )
-            sum_ = paddle.sum(binary_sequence)
-            loss = 0 if sum_ == 0 else paddle.sum(masked_lm_loss * binary_sequence) / sum_
+            count = paddle.sum(binary_sequence)
+            if count == 0:
+                loss = paddle.sum(masked_lm_loss * binary_sequence)
+            else:
+                loss = paddle.sum(masked_lm_loss * binary_sequence) / count

         return loss
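
For context, a minimal standalone sketch of what the hunk changes (it assumes only a working PaddlePaddle install; the tensor values are made up, and old_loss/new_loss are illustrative names, not identifiers from the file): when no token contributes to the loss, the old ternary leaves loss as the Python int 0 rather than a paddle.Tensor, while the fixed branches produce a tensor in both cases.

    import paddle

    # Hypothetical per-token loss in which every position was ignored, so no
    # token contributes to the average (the sum of the mask is zero).
    masked_lm_loss = paddle.to_tensor([0.0, 0.0, 0.0])

    binary_sequence = paddle.where(
        masked_lm_loss > 0, paddle.ones_like(masked_lm_loss), paddle.zeros_like(masked_lm_loss)
    )

    # Old logic: the zero-count branch assigns the Python int 0, not a tensor.
    sum_ = paddle.sum(binary_sequence)
    old_loss = 0 if sum_ == 0 else paddle.sum(masked_lm_loss * binary_sequence) / sum_
    print(type(old_loss))  # <class 'int'>

    # Fixed logic: both branches assign a paddle.Tensor; the zero-count branch
    # yields a zero tensor (sum of an all-zero product), the other the masked mean.
    count = paddle.sum(binary_sequence)
    if count == 0:
        new_loss = paddle.sum(masked_lm_loss * binary_sequence)
    else:
        new_loss = paddle.sum(masked_lm_loss * binary_sequence) / count
    print(type(new_loss))  # paddle.Tensor

The zero-count branch still calls paddle.sum on the masked product, so the fallback loss is a zero tensor with the same dtype as masked_lm_loss, and callers can treat the returned loss uniformly as a tensor.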
