
[Runtime] Parameterize constants in MISRA-C runtime #5062

Closed
liangfu wants to merge 1 commit.

Conversation

liangfu (Member) commented Mar 13, 2020

As part of #5060, this PR parameterizes the constants in the MISRA-C runtime proposed in PR #3934.

Here are brief statistics from running the test module:

|                  | Code size | Peak memory footprint |
| ---------------- | --------- | --------------------- |
| Default Runtime  | 433 KB    | 94.64 KiB             |
| Proposed Runtime | 45 KB     | 86.01 KiB             |

Here are brief statistics from running the demo module:

|                  | Code size | Peak memory footprint |
| ---------------- | --------- | --------------------- |
| Default Runtime  | 631 KB    | 6.686 MB              |
| Proposed Runtime | 243 KB    | 8.863 MB              |

The memory footprint is larger for the demo module because the MISRA-C runtime reserves the maximum size for all layers.

@ajtulloch @tmoreau89 Please review.

Change-Id: I5a0a6180d608ec9aa4606d46bd289641aa251874
tqchen (Member) commented Mar 13, 2020

Thanks @liangfu. I think perhaps we could have an even better solution: ask for a single stack from which to allocate these data structures, which would account for the memory of the containers as well as the tensors. That way we won't need special magic constants in the runtime.

@tqchen added the `status: need test case` and `status: need update` labels and removed the `status: need test case` label on Mar 14, 2020
liangfu (Member, Author) commented Mar 16, 2020

Closing this, since we are going to implement a memory container that returns addresses from a single stack instead.
