
Medusa 0.1.5 performance issues vs 0.1.3 #414

Closed
GalloDaSballo opened this issue Jul 25, 2024 · 7 comments · Fixed by #427

@GalloDaSballo

All benchmarks done via Docker, with base image: ubuntu:20.04

It seems like the latest changes to Medusa make it slower at raising coverage.

This is less evident for smaller codebases, but more apparent when dealing with more complex ones.

Here's a direct comparison done on eBTC:
Medusa 0.1.3 gets to 100 pretty fast: https://staging.getrecon.xyz/shares/c437d326-d059-48af-9de3-890c01f7bddc
Medusa 0.1.5 remains stuck at 50/60 for a while: https://staging.getrecon.xyz/shares/653134fc-ae02-48bd-80bc-ca61628f8c8a

Another comparison:
Medusa 0.1.5 stuck at 45: https://staging.getrecon.xyz/shares/7c08cfcc-9e63-4d69-b26d-1cdfcab9e742
Medusa 0.1.3 gets to 100 very quickly: https://getrecon.xyz/dashboard/jobs/47ba31d6-8eed-4c6b-9792-b9ed39c371af

Links to the branches are in the shares above (scroll to "Stats for Nerds").

@0xalpharush
Contributor

0xalpharush commented Jul 25, 2024

We are investigating this. Would you mind running with {"testViewMethods": false} (see https://github.com/euler-xyz/euler-vault-kit/blob/20973e1dd2037d26e8dea2f4ab2849e53a77855e/medusa.json#L36) and seeing how that affects perf?
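For anyone else reproducing this, a minimal medusa.json fragment with the flag disabled might look like the sketch below. The exact nesting is an assumption here; the linked euler-vault-kit config (line 36) is the authoritative reference, and in medusa configs of this era the flag typically sits under the assertion-testing settings.

```jsonc
// Hypothetical placement of the flag — confirm the real nesting against the
// linked euler-vault-kit medusa.json (line 36).
{
  "fuzzing": {
    "testing": {
      "assertionTesting": {
        "testViewMethods": false
      }
    }
  }
}
```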

@GalloDaSballo
Author

GalloDaSballo commented Jul 25, 2024

> We are investigating this. Would you mind running with {"testViewMethods": false} (see https://github.com/euler-xyz/euler-vault-kit/blob/20973e1dd2037d26e8dea2f4ab2849e53a77855e/medusa.json#L36) and seeing how that affects perf?

I've queued a job here; it updates every minute. This run is set to go for an hour but should be comparable after 15 minutes:
https://staging.getrecon.xyz/shares/ff11b245-d03f-49c9-adbb-af60c6f1d326

It doesn't seem to offer much improvement. I believe the coverage difference is due to the untested view functions, but I may be wrong.

@0xalpharush
Contributor

0xalpharush commented Jul 25, 2024

At a glance, it does seem like calls/s are higher in 0.1.5, or am I mistaken? Will you try #415 as well? Not including calls to property tests in the call sequences may reduce the number of entries in the corpus (as does not calling view functions, as you pointed out). So comparing the two coverage counts between versions may not be useful (as opposed to the % covered of the actual code being tested), but this is just a hunch.

@GalloDaSballo
Author

> At a glance, it does seem like calls/s are higher in 0.1.5, or am I mistaken? Will you try #415 as well? Not including calls to property tests in the call sequences may reduce the number of entries in the corpus (as does not calling view functions, as you pointed out). So comparing the two coverage counts between versions may not be useful (as opposed to the % covered of the actual code being tested), but this is just a hunch.

I think the 39 we're seeing is equivalent to the 45 we were getting before, due to the reduced number of tests.

I've stopped the job so you can grab the coverage report (you can also download the /medusa folder by clicking "download corpus").

Running the repo on 0.1.3 should net massively higher coverage (85+) in a matter of seconds

@GalloDaSballo
Author

20+ minutes (link shared above)
[screenshot: Screenshot 2024-07-25 at 15 11 01]

1 minute (0.1.3 locally)
[screenshot: Screenshot 2024-07-25 at 15 11 26]

@GalloDaSballo
Author

> At a glance, it does seem like calls/s are higher in 0.1.5, or am I mistaken? Will you try #415 as well? Not including calls to property tests in the call sequences may reduce the number of entries in the corpus (as does not calling view functions, as you pointed out). So comparing the two coverage counts between versions may not be useful (as opposed to the % covered of the actual code being tested), but this is just a hunch.

We can try running from #415, but we'll need a while to provision new Docker images.

@GalloDaSballo
Author

GalloDaSballo commented Jul 25, 2024

Maybe a bit early to tell, but the numbers do look better after building from #415:
https://staging.getrecon.xyz/shares/4c26ba47-284a-43d1-9704-aceae4ee4384

From running a private repo of mine, performance also seems to have improved quite a lot.

0xalpharush mentioned this issue Aug 2, 2024