Reusing of softmax mkldnn primitives #10576
Conversation
Force-pushed from b0a5d41 to 4b3b20a
@jczaja please merge the latest code to pass the TeamCity. Thanks very much!
Force-pushed from 27167c7 to 50e559b
- Added hash function inside of MKLDNN softmax op to be used as a handle for storing primitives in a context
- Style fixes to softmax mkldnn op
- Fixes after review
- Coding style
- Fix to style
- Style fixes
- Style fix
- Style fixes
- Fix to code style check
- Rephrasing a comment
Force-pushed from 50e559b to 7bf00c3
@luotao1 Thanks very much for the suggestion. It helped.
LGTM
if (softmax_p == nullptr) {
  // Currently only NC data format is supported
  auto softmax_md =
      MKLDNNMemDesc({softmax_tz}, memory::f32, memory::format::nc);
f32 should depend on T, right? Maybe this should be enhanced, or at least enforced as float.
It should be enhanced in the next PR, since I find:
BDSHYF000120887:operators luotao02$ grep "memory::f32" *.cc
activation_mkldnn_op.cc: ? platform::MKLDNNMemDesc(src_tz, mkldnn::memory::f32,
activation_mkldnn_op.cc: : platform::MKLDNNMemDesc(src_tz, mkldnn::memory::f32,
activation_mkldnn_op.cc: ? platform::MKLDNNMemDesc(src_tz, mkldnn::memory::f32,
activation_mkldnn_op.cc: : platform::MKLDNNMemDesc(src_tz, mkldnn::memory::f32,
pool_mkldnn_op.cc: auto src_md = platform::MKLDNNMemDesc(src_tz, mkldnn::memory::f32,
pool_mkldnn_op.cc: auto dst_md = platform::MKLDNNMemDesc(dst_tz, mkldnn::memory::f32,
pool_mkldnn_op.cc: {{}, mkldnn::memory::f32, mkldnn::memory::format::nchw},
pool_mkldnn_op.cc: auto diff_src_md = platform::MKLDNNMemDesc(diff_src_tz, mkldnn::memory::f32,
pool_mkldnn_op.cc: auto diff_dst_md = platform::MKLDNNMemDesc(diff_dst_tz, mkldnn::memory::f32,
softmax_mkldnn_op.cc: MKLDNNMemDesc({softmax_tz}, memory::f32, memory::format::nc);
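To illustrate the point discussed above, here is a minimal sketch (not code from this PR; the trait name is hypothetical) of how the memory descriptor's data type could follow the kernel's template parameter T instead of being hard-coded to f32:

#include <cstdint>
#include "mkldnn.hpp"

// Hypothetical trait mapping the kernel's element type T to an MKLDNN data type.
template <typename T>
struct MKLDNNTypeTrait;  // left undefined so unsupported types fail to compile

template <>
struct MKLDNNTypeTrait<float> {
  static constexpr mkldnn::memory::data_type type = mkldnn::memory::f32;
};

template <>
struct MKLDNNTypeTrait<int8_t> {
  static constexpr mkldnn::memory::data_type type = mkldnn::memory::s8;
};

// Usage inside the kernel, replacing the hard-coded f32 shown in the diff above:
// auto softmax_md = MKLDNNMemDesc(
//     {softmax_tz}, MKLDNNTypeTrait<T>::type, memory::format::nc);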
The changes presented here introduce the concept of reusing once-created MKLDNN primitives for softmax in a specific scenario: when the softmax MKLDNN op is called a second time with the same input/output dims, we can reuse the previously created objects rather than recreate them. The intention of this change is to speed up the Softmax MKLDNN operator (more MKLDNN ops will follow).
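As a rough illustration of that mechanism (the names below are stand-ins, not the PR's actual classes), the pattern is: build a key from the input dims, look the primitive up in a per-device cache, create and store it only on a miss, and reuse it on later calls with the same dims:

#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

// Stand-in for the real MKLDNN softmax primitive created by the op.
struct SoftmaxPrimitive {
  std::vector<int> dims;
};

// Stand-in for the blob storage held by the MKLDNN device context.
class PrimitiveCache {
 public:
  std::shared_ptr<void> GetBlob(const std::string& key) const {
    auto it = blobs_.find(key);
    return it == blobs_.end() ? nullptr : it->second;
  }
  void SetBlob(const std::string& key, std::shared_ptr<void> blob) {
    blobs_[key] = std::move(blob);
  }

 private:
  std::unordered_map<std::string, std::shared_ptr<void>> blobs_;
};

// Key built from the input/output dims plus a suffix, in the spirit of the
// hash function mentioned in the commit message.
std::string GetKey(const std::vector<int>& dims, const std::string& suffix) {
  std::string key;
  for (int d : dims) key += std::to_string(d) + "-";
  return key + suffix;
}

std::shared_ptr<SoftmaxPrimitive> GetOrCreateSoftmax(
    PrimitiveCache* cache, const std::vector<int>& softmax_tz) {
  const std::string key = GetKey(softmax_tz, "softmax");
  auto softmax_p =
      std::static_pointer_cast<SoftmaxPrimitive>(cache->GetBlob(key));
  if (softmax_p == nullptr) {
    // First call with these dims: create the primitive and cache it.
    softmax_p = std::make_shared<SoftmaxPrimitive>();
    softmax_p->dims = softmax_tz;
    cache->SetBlob(key, softmax_p);
  }
  // A second call with the same dims reuses the cached object.
  return softmax_p;
}

In the PR itself the cached objects are the MKLDNN memories and the softmax primitive, stored in a context under the hash key described in the commit message.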