-
I am trying to add an L1 loss to a single bottleneck layer of an autoencoder. There seem to be two ways one can encode the train and evaluation loop:
I am using DJL. I looked for, but did not find, ready-to-use L1 or L2 weight penalties. Looking at the loss functions, I see they do not include a weight decay penalty (regularization), nor do they seem to be parametrized for it. Am I correct in this, or did I miss something? Assuming DJL does not provide a pre-baked weight penalty (regularization) on a single layer or on all layers (I may have missed this), would I be correct in saying that I should use the
or is there some other way one should go about it? TIA
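The snippet the question refers to is not included in the thread, but the idea being asked about can be sketched in plain Java (standing in for DJL's NDArray math). The method names `l1Penalty` and `mseLoss`, and the `lambda` coefficient, are illustrative, not DJL API: the total training loss is the reconstruction error plus `lambda` times the sum of absolute values of the bottleneck activations.

```java
// Hypothetical sketch: combining a reconstruction loss with an L1 penalty
// on the bottleneck activations. Plain Java stands in for NDArray math.
public class L1PenaltyDemo {

    // L1 penalty: lambda * sum(|a_i|) over the bottleneck activations
    static double l1Penalty(double[] activations, double lambda) {
        double sum = 0.0;
        for (double a : activations) {
            sum += Math.abs(a);
        }
        return lambda * sum;
    }

    // Mean squared reconstruction error between input and reconstruction
    static double mseLoss(double[] input, double[] reconstruction) {
        double sum = 0.0;
        for (int i = 0; i < input.length; i++) {
            double d = input[i] - reconstruction[i];
            sum += d * d;
        }
        return sum / input.length;
    }

    public static void main(String[] args) {
        double[] input = {1.0, 0.0, -1.0};
        double[] recon = {0.5, 0.0, -0.5};
        double[] bottleneck = {0.2, -0.4, 0.0, 0.1};

        // Total loss = reconstruction error + L1 activation penalty
        double total = mseLoss(input, recon) + l1Penalty(bottleneck, 0.01);
        System.out.println(total);
    }
}
```

In DJL itself this would live in a custom `Loss` subclass, with the penalty term computed via NDArray operations on the bottleneck output rather than raw arrays.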
-
That's exactly how I would go about it. I might even have the `l1Penalty` take an NDList so it can sum the penalties across multiple weights. This seems like a really good improvement. Would you be interested in adding a PR to DJL with your weight penalty loss?
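The suggestion above is to have the penalty accept a list of arrays (an NDList of NDArrays in DJL terms) and sum the penalty across all of them. A rough plain-Java sketch of that shape, with `List<double[]>` standing in for the NDList and an illustrative `lambda`:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the suggestion: sum the L1 penalty over a list of
// weight arrays, analogous to iterating an NDList of NDArrays in DJL.
public class L1PenaltyList {

    // lambda * sum over all arrays of sum(|w|)
    static double l1Penalty(List<double[]> weights, double lambda) {
        double sum = 0.0;
        for (double[] w : weights) {
            for (double v : w) {
                sum += Math.abs(v);
            }
        }
        return lambda * sum;
    }

    public static void main(String[] args) {
        List<double[]> weights = Arrays.asList(
                new double[]{1.0, -2.0},
                new double[]{0.5, -0.5, 3.0});
        // Sum of absolute values is 7.0, scaled by lambda = 0.01
        System.out.println(l1Penalty(weights, 0.01));
    }
}
```

This keeps a single penalty term regardless of how many parameter blocks are regularized, which is convenient when applying the penalty to all layers rather than just the bottleneck.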
-
PR at: #788