Add EfficientNet benchmark #394
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Thanks for the PR!
@@ -61,8 +61,22 @@ def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
     )


def _get_concrete_noise_shape(inputs, noise_shape):
Why was this necessary?
This is very odd: for some reason tf.stateless_drop does not support None in the batch dim, so we need to give it a concrete value. Without this change, the TF backend throws an error at the dropout layer.
Force-pushed from f00e967 to 05e476b.
LGTM, thanks! What results do you see on GPU across backends?
@fchollet I will share a spreadsheet with the benchmark results, similar to the layer benchmark. But briefly: JAX is faster than the TF backend.
This is a real model benchmark on a real dataset, so it could give us a good overview of Keras Core performance across backends.
I am creating a new Compute Engine instance for TF testing, because the old one hits a CUDA version incompatibility (as always!). I will share the metrics later, but the code is ready for review.