While training, I noticed that although the training loss keeps decreasing, the validation loss barely decreases at all.
Strangely, though, models trained for more steps do perform better than earlier ones.
Has anyone encountered this same odd phenomenon? How can it be explained?