Is your feature request related to a problem? Please describe.
Output buffer optimization in runtime module
Describe the solution you'd like
Assuming the input shape does not change frequently, the output buffer for the next call can be created during the previous forward(). This hides allocation latency: while the CUDA kernels for the current call execute on the GPU, the CPU prepares the output buffer for the next call, so GPU compute and CPU-side allocation can overlap.
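A minimal sketch of the idea, in plain Python with no CUDA (the class name, the `_allocate` helper standing in for something like `torch.empty`, and the use of a thread to model CPU/GPU overlap are all illustrative assumptions, not the runtime module's actual API):

```python
import threading


class DoubleBufferedRuntime:
    """Illustrative sketch: while the current forward() produces its
    result, the output buffer for the *next* call is allocated in the
    background, hiding allocation latency behind "GPU" work."""

    def __init__(self, output_shape):
        self.output_shape = output_shape
        # Buffer for the first call is prepared ahead of time.
        self.next_buffer = self._allocate(output_shape)

    def _allocate(self, shape):
        # Stand-in for a real device allocation, e.g. torch.empty(shape, device="cuda").
        n = 1
        for d in shape:
            n *= d
        return [0.0] * n

    def forward(self, shape):
        if shape != self.output_shape:
            # Shape changed: fall back to a synchronous allocation.
            self.output_shape = shape
            self.next_buffer = self._allocate(shape)
        # Use the buffer prepared during the previous call.
        out = self.next_buffer

        # Start preparing the buffer for the next call; in a real runtime
        # this CPU work would overlap with enqueued CUDA kernels.
        def prealloc():
            self.next_buffer = self._allocate(self.output_shape)

        prefetch = threading.Thread(target=prealloc)
        prefetch.start()
        # ... enqueue device kernels writing into `out` here ...
        prefetch.join()  # next buffer is ready before we return
        return out
```

Because each call hands out the buffer prepared by the previous call and immediately allocates a fresh one, consecutive invocations return distinct buffers, which also avoids the aliasing problem described under the alternatives below.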
Describe alternatives you've considered
If the runtime module maintains persistent output buffers across multiple inference runs, it can reuse previously allocated memory for output tensors, potentially improving performance by reducing allocation overhead. However, this cannot handle tensors from a previous invocation that are still live: a second invocation of the model overwrites the output buffer returned by the previous run.
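The aliasing hazard with a single persistent buffer can be shown with a tiny plain-Python sketch (the class name and list-as-tensor stand-in are hypothetical, used only to illustrate the overwrite):

```python
class PersistentBufferRuntime:
    """Illustrative sketch of the rejected alternative: one output
    buffer reused across runs, so every call returns the same storage."""

    def __init__(self, size):
        self.out = [0.0] * size  # allocated once, reused forever

    def forward(self, value):
        # Writes results in place into the persistent buffer.
        for i in range(len(self.out)):
            self.out[i] = value
        return self.out


rt = PersistentBufferRuntime(4)
first = rt.forward(1.0)
second = rt.forward(2.0)
# `first` and `second` alias the same storage: the results of the
# first run were silently overwritten by the second.
```

This is why buffer reuse alone is unsafe when callers may still hold tensors from an earlier invocation, and why the proposal above allocates a fresh buffer per call (merely overlapping the allocation with GPU work) instead of reusing one.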
Additional context