Problem Related to the Feature
As defined in et_def.proto, the attribute that stores duration (duration_micros) uses microsecond precision.
There are cases where we encounter many COMP_NODE nodes with sub-microsecond runtimes, which cannot be aggregated into a larger compute COMP_NODE. These times can add up to a significant amount of time that is lost at microsecond precision.
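As an illustration of the precision loss (a minimal sketch; the field name duration_micros comes from et_def.proto, but the sample durations are hypothetical):

```python
# Hypothetical runtimes of several short COMP_NODE kernels, in microseconds.
durations_us = [0.3, 0.4, 0.2, 0.6, 0.5]

# True total time across these nodes (about 2.0 microseconds).
true_total_us = sum(durations_us)

# With integer microsecond precision, each sub-microsecond duration
# truncates to 0, so the recorded total is 0 -- all of it is lost.
recorded_total_us = sum(int(d) for d in durations_us)

print(f"true: ~{true_total_us:.1f} us, recorded: {recorded_total_us} us")
```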
Proposed Solution
I think it makes sense to support nanosecond precision. A double type would probably be the way to go; the simulator can then round to whatever precision best suits its use cases.
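A minimal sketch of what the schema change could look like (the message layout and field number here are illustrative assumptions, not the actual contents of et_def.proto; only the duration_micros name comes from the issue):

```proto
message Node {
  // Before (illustrative): integer microseconds, which truncates
  // sub-microsecond durations to zero.
  //   uint64 duration_micros = 5;

  // Proposed: double-precision microseconds, so sub-microsecond
  // durations survive; consumers round to whatever precision they need.
  double duration_micros = 5;
}
```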
The above seems to be the more straightforward solution, but alternatively, a per-node or per-trace "timescale" field could also do the trick.
Thanks for reporting this issue. We will consider this in the next revision of schema.
One issue with adopting nanoseconds would be converting the default units of every op we collect, including from other tools like Kineto. But double seems like a viable solution.
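One reason a double is viable here (a quick check, not from the issue itself): existing integer microsecond values, e.g. those collected from Kineto, convert to double exactly, since IEEE 754 doubles represent every integer up to 2**53 without loss:

```python
# Integer microsecond durations are exactly representable as doubles
# up to 2**53 microseconds -- roughly 285 years, far beyond any trace.
max_exact = 2**53

samples = [0, 1, 123_456_789, max_exact]
# Round-tripping through float loses nothing for these values.
assert all(int(float(v)) == v for v in samples)
print("int -> double conversion is lossless up to", max_exact)
```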