In the 60 minute blitz tutorial, we use a sequence of stacked Dense layers, each with no activation function. This doesn't make much sense: a composition of linear operators can always be collapsed into a single linear operator.

While machine precision and rounding mean the two forms are not bit-for-bit identical, you get no material benefit from the stacked Dense layers, and you pay a performance penalty from shuttling the same values in and out of CPU cache. It would be better either to add nonlinearities between the Dense layers to increase model flexibility, or to replace them with a single Dense layer that maps directly from the 200 input units down to 10 outputs.
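For concreteness, here is a minimal NumPy sketch of the collapse. Only the 200-in / 10-out shape is taken from the issue; the hidden width of 50 and the batch size of 32 are made-up placeholders.

```python
import numpy as np

# Sketch (not from the repo): two stacked linear ("Dense") layers with no
# activation fold into a single linear layer.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 200))

W1, b1 = rng.standard_normal((200, 50)), rng.standard_normal(50)
W2, b2 = rng.standard_normal((50, 10)), rng.standard_normal(10)

# Two Dense layers applied back to back, no nonlinearity in between.
stacked = (x @ W1 + b1) @ W2 + b2

# The same map expressed as a single Dense layer.
W = W1 @ W2              # (200, 10)
b = b1 @ W2 + b2         # (10,)
single = x @ W + b

# Identical up to floating-point rounding, as noted above.
print(np.allclose(stacked, single))   # True
```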
Good point. The PyTorch version uses relu, and I can only assume it was missed when porting because the activation functions aren't included in the constructor.
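As a reference point, here is a rough sketch of the PyTorch pattern being described; the layer widths are placeholders rather than the tutorial's actual sizes. The relu calls live in forward() rather than in the constructor, which is presumably how they got dropped in the port.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch only: Linear ("Dense") layers are declared in __init__, but the
# nonlinearities appear only in forward(), so they are easy to overlook.
class Head(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(200, 120)   # placeholder widths
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.relu(self.fc1(x))   # nonlinearity between Dense layers
        x = F.relu(self.fc2(x))
        return self.fc3(x)        # final layer left linear for the logits

print(Head()(torch.randn(4, 200)).shape)  # torch.Size([4, 10])
```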