Hi,
Are there any additional resources on calculating the normalizing constants for the blending weights from a practical implementation point of view, apart from the paper ("What Makes Training Multi-modal Classification Networks Hard?")?
The weights for the datasets used are given here: https://github.com/facebookresearch/VMZ/blob/master/c2/tutorials/gradient_blending.md
But I couldn't find any code where their calculation is implemented.
More importantly, how does one find them for a new dataset? The proof of the proposition for gradient blending is not helpful in this regard. Also, going through the repo, I see they are passed as an argument:
VMZ/c2/tools/train_net.py, line 255 (commit 4cc542d)
in the "add_weighted_loss()" method and I couldn't find any helper functions which might have been used to calculate them.
Thanks in advance for your response.