[ccapi] add return last output option in incremental inference @open sesame 10/02 11:44 #2740
base: main
Conversation
lhs8928
commented
Sep 26, 2024
- Added a return-last-output option to incremental inference. To keep backward compatibility, the default value is set to false.

Signed-off-by: hyeonseok <[email protected]>
📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2740. Please follow the 1 commit/1 PR (one commit per PR) policy to get comments quickly from reviewers. Your PR must pass all verification processes of cibot before reviewers start a review. If you are a new member joining this project, please read the manuals in the documentation folder and the wiki page. To monitor the progress of your PR in more detail, visit http://ci.nnstreamer.ai/.
LGTM! Please check the comments.
@@ -849,27 +849,33 @@ std::vector<float *> NeuralNetwork::incremental_inference(
   unsigned int step = from ? 0 : to - 1;

   for (auto &out : output_tensors) {
-    const auto &out_t = *out.get();
+    auto out_t = *out.get();
     float *last_out_buf_data = new float[batch_size * out_t.width()];
Is there any reason why you changed it to `auto`?
If `out_t` is not modified, `const auto &` would be a better option to avoid the Coverity issue.
@lhs8928, 💯 All CI checkers are successfully verified. Thanks.
@@ -816,7 +816,7 @@ sharedConstTensors NeuralNetwork::incremental_inference(
 std::vector<float *> NeuralNetwork::incremental_inference(
   unsigned int batch_size, const std::vector<float *> &input,
   const std::vector<float *> &label, unsigned int init_seq_len,
-  unsigned int from, unsigned int to) {
+  unsigned int from, unsigned int to, bool return_last_output_only) {
How about naming it "full_output" or "need_only_last" instead of "return_last_output_only"?
(I think the word 'return' is unnecessary and could be replaced with a better name.)
It seems like an internal flag that determines the return value.
I think the current parameter name is also fine.
However, if renaming is necessary, I would recommend 'output_hidden_states' for user convenience, because this term is used in HuggingFace for a similar purpose. (If "output_hidden_states == false", only the last output values are returned.)
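For illustration, here is a minimal sketch of the selection such a flag could perform: either return the full sequence output, or copy out only the last step for each batch entry. The helper name, the flat `[batch, seq_len, width]` buffer layout, and the signature are hypothetical, not the actual implementation in this PR:

```cpp
#include <vector>

// Hypothetical helper: 'out' holds a flattened [batch, seq_len, width] buffer.
// When the flag is set, copy only the last sequence step per batch entry.
std::vector<float> collect_output(const std::vector<float> &out,
                                  unsigned int batch, unsigned int seq_len,
                                  unsigned int width,
                                  bool return_last_output_only) {
  if (!return_last_output_only)
    return out;  // full output: previous behavior
  std::vector<float> last(batch * width);
  for (unsigned int b = 0; b < batch; ++b)
    for (unsigned int w = 0; w < width; ++w)
      // index of element w at the final step (seq_len - 1) of batch b
      last[b * width + w] = out[(b * seq_len + (seq_len - 1)) * width + w];
  return last;
}
```

For example, with `batch = 1`, `seq_len = 2`, `width = 2` and buffer `{1, 2, 3, 4}`, the last-only result is `{3, 4}`. This matches the semantics described above: flag off returns everything, flag on returns only the final step.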
If there is a similar case, it would be good to use the same name.
It seems that developers will easily understand this.
@lhs8928, 💯 All CI checkers are successfully verified. Thanks.
Please rebase onto upstream/main; #2741 resolves the cacheLoader_test_case issue.
It seems there is a problem with CI.