[ccapi] add return last output option in incremental inference @open sesame 10/02 11:44 #2740
Open: lhs8928 wants to merge 1 commit into nnstreamer:main from lhs8928:add_return_last_output_option_on_incremental_inference
Changes:

@@ -816,7 +816,7 @@ sharedConstTensors NeuralNetwork::incremental_inference(
 std::vector<float *> NeuralNetwork::incremental_inference(
   unsigned int batch_size, const std::vector<float *> &input,
   const std::vector<float *> &label, unsigned int init_seq_len,
-  unsigned int from, unsigned int to) {
+  unsigned int from, unsigned int to, bool return_last_output_only) {
   sharedConstTensors input_tensors, output_tensors;
   auto in_dim = getInputDimension();

@@ -849,27 +849,33 @@ std::vector<float *> NeuralNetwork::incremental_inference(
   unsigned int step = from ? 0 : to - 1;

   for (auto &out : output_tensors) {
-    const auto &out_t = *out.get();
-    float *last_out_buf_data = new float[batch_size * out_t.width()];
+    auto out_t = *out.get();
+
+    float *last_out_buf_data;

-    for (unsigned int batch = 0; batch < batch_size; ++batch) {
-      if (out->getDataType() == ml::train::TensorDim::DataType::FP16) {
+    if (return_last_output_only) {
+      last_out_buf_data = out_t.getData();
+    } else {
+      last_out_buf_data = new float[batch_size * out_t.width()];
+
+      for (unsigned int batch = 0; batch < batch_size; ++batch) {
+        if (out->getDataType() == ml::train::TensorDim::DataType::FP16) {
 #ifdef ENABLE_FP16
-        const _FP16 *out_t_batch_ptr = out_t.getData<_FP16>() +
-                                       batch * out_t.getDim().getFeatureLen() +
-                                       step * out_t.getDim().width();
-        scopy(out_t.getDim().width(), out_t_batch_ptr, 1,
-              last_out_buf_data + batch * out_t.width(), 1);
+          const _FP16 *out_t_batch_ptr = out_t.getData<_FP16>() +
+                                         batch * out_t.getDim().getFeatureLen() +
+                                         step * out_t.getDim().width();
+          scopy(out_t.getDim().width(), out_t_batch_ptr, 1,
+                last_out_buf_data + batch * out_t.width(), 1);
 #else
-        throw std::invalid_argument("Error: enable-fp16 is not set");
+          throw std::invalid_argument("Error: enable-fp16 is not set");
 #endif
-      } else if (out->getDataType() == ml::train::TensorDim::DataType::FP32) {
-        const float *out_t_batch_ptr = out_t.getData() +
-                                       batch * out_t.getDim().getFeatureLen() +
-                                       step * out_t.getDim().width();
-        scopy(out_t.getDim().width(), out_t_batch_ptr, 1,
-              last_out_buf_data + batch * out_t.width(), 1);
+        } else if (out->getDataType() == ml::train::TensorDim::DataType::FP32) {
+          const float *out_t_batch_ptr = out_t.getData() +
+                                         batch * out_t.getDim().getFeatureLen() +
+                                         step * out_t.getDim().width();
+          scopy(out_t.getDim().width(), out_t_batch_ptr, 1,
+                last_out_buf_data + batch * out_t.width(), 1);
+        }
       }
     }

Inline review comment (on the "auto out_t = *out.get();" line): is there any reason why you changed it to "auto out_t" instead of keeping "const auto &out_t"?
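For context, here is a minimal caller-side sketch of how the new flag might be used in an autoregressive decoding loop. This is illustrative only and not part of the PR: the model object, input layout, and sequence bounds are hypothetical, and only the incremental_inference() signature is taken from the diff above.

    // Illustrative sketch only -- assumes an already-loaded nntrainer
    // NeuralNetwork `model` exposing the signature added in this PR; the
    // input layout and sequence bounds are hypothetical.
    unsigned int init_seq_len = 1;
    unsigned int max_seq_len = 32;
    std::vector<float> input_buf(max_seq_len, 0.0f); // placeholder input
    std::vector<float *> input = {input_buf.data()};
    std::vector<float *> label; // unused for plain inference

    for (unsigned int i = init_seq_len; i < max_seq_len; ++i) {
      // With return_last_output_only == true, the per-batch new[]/scopy
      // copy seen in the diff is skipped and the returned pointer aliases
      // the output tensor's own buffer.
      std::vector<float *> out = model.incremental_inference(
        /*batch_size=*/1, input, label, init_seq_len,
        /*from=*/i, /*to=*/i + 1, /*return_last_output_only=*/true);

      float *logits = out[0];
      // ... select the next token from `logits` before the next call,
      // since the underlying buffer may be overwritten by the next
      // inference ...
    }

One behavioral consequence visible in the diff: with the flag set, last_out_buf_data points into the tensor via getData() instead of a fresh new float[...] allocation, so the caller must not delete[] it and should consume it before the next inference call; with the flag unset, the caller owns the copied buffer as before. This also appears to be the motivation for the change questioned in the inline comment above: the non-const getData() call in the new branch would not compile through a "const auto &" reference, though the plain "auto" now copies the tensor object by value.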
Review comment: How about "full_output" or "need_only_last" instead of "return_last_output_only"? I think the word 'return' is unnecessary and could be replaced with a better name; it seems like an internal flag that determines the return value.
Reply: I think the current parameter name is also fine. However, if renaming is necessary, I would recommend considering 'output_hidden_states' for user convenience, since HuggingFace uses this term for a similar purpose (if output_hidden_states == false, only the last output values are returned).
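For illustration, that suggestion would invert the flag's polarity. A hypothetical signature, not part of this PR, with the name taken from the comment above:

    // Hypothetical alternative following the HuggingFace-style name
    // suggested above; polarity is inverted relative to
    // return_last_output_only. Not part of this PR.
    std::vector<float *> incremental_inference(
      unsigned int batch_size, const std::vector<float *> &input,
      const std::vector<float *> &label, unsigned int init_seq_len,
      unsigned int from, unsigned int to,
      bool output_hidden_states = true);
    // output_hidden_states == false => return only the last output values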
Reply: If there is an established precedent like that, it would be good to use the same name; developers will understand it more easily.