Add benchmarks #388
Stuff like this is not helping perf:

```csharp
public static void NotNull<T>(Expression<Func<T>> reference, T value)
{
    if (value == null)
    {
        throw new ArgumentNullException(GetParameterName(reference));
    }
}
```

Using C# 7 it can be replaced by:

```csharp
this.underlyingCreateMocks = underlyingCreateMocks ?? throw new ArgumentNullException(nameof(underlyingCreateMocks));
```
(Yep. Now that we have […])
The idea of setting up some kind of strategy for measuring the effects of optimizations is a good one, btw! Speaking of priorities, I think right now it's most important to first deal with extant bug reports and long-standing feature requests. Next, there are some convoluted bits of code that need to be rewritten to make their intent clearer. Finally, optimizations. ("Make it work, make it right, make it fast.")
Tried updating from Moq 4.2.1402 to Moq 4.99 hoping for a performance improvement, but our unit tests now take 31 minutes, up from 17 minutes.
@jatouma: Yes, unfortunately performance has suffered in more recent versions. This could be due to a variety of reasons, such as the introduction of […].
@JohanLarsson - sorry for not getting back to you sooner regarding benchmarks. I've been spending some time using PerfView lately. I essentially profiled several runs of Moq's own unit test suite, then checked where execution time is spent. It seems that >98% of total execution time is spent on (a) proxy type & object creation via Castle DynamicProxy (which in turn relies on […]).

So for me, the shocking conclusion is that we can try optimizing Moq's codebase all we want, but since most time isn't actually spent inside Moq, we're not going to see any significant speed gains. I'm guessing the best we could do would be a <1% speed increase. Anything better than that will require some major rethinking and re-architecting of how Moq currently works. Probably the three most important things to change would be (1) how Moq deals with multi-dot setup expressions, (2) what […].

Until that re-architecting has happened, I fear that using benchmarks as a guide to better performance will be a futile exercise in micro-optimization. Things like avoiding unnecessary […].

I'd be happy to be proven wrong on this. If you have any experience with profiling, perhaps you could run your own analysis and either refute or confirm the above? I'd also value your opinion on whether it would still make sense to benchmark if my above assessment turns out to be correct, and what purpose the benchmarks would then serve.
I think benchmarks are a good idea for most libraries. It does not need to be anything huge; just a couple of checks for common use cases is nice for tracking performance regressions. Also, there is not a huge downside to benchmarks — I don't think they will add much maintenance pain.

Expressions *can* be cached, but it requires really nasty comparers. I have not used Castle for anything, but maybe there are pooling opportunities? From my experience there are usually a relatively small number of types that get mocked.
Thanks for the reply Johan. Don't get me wrong, I like the idea of having benchmarks. I'm not even worried about their maintenance cost. I just want to make sure that we know what we can expect from them. I'll get back to them shortly. To answer a few of your other points first:

Caching is something worth thinking about. I probably lack the experience in coming up with a cache that will hold on to the right amount and right selection of expressions. Given that Moq is perhaps most frequently used in unit tests, I have no idea what cache hit / miss rates could be expected in real-world use cases (and whether caching would make a major difference)... but perhaps someone else has experience with that. It's also worth noting that Moq already has a (basic) class for comparing expression trees; it might be possible to build on top of that.

Castle, AFAIK, already performs caching of the types it creates. Not sure though whether it also caches information about the types those refer to (base classes and interfaces). I'll try to find out more about this.

Getting back to the benchmarks, though: if you're not discouraged by my profiling analysis, and if you'd like to set up a few simple benchmarks for some common use cases, using BenchmarkDotNet, I'd happily review a PR against […].
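To make the suggestion concrete, a couple of simple benchmarks for common use cases could look like the sketch below. This is only an illustration of the kind of PR being discussed, not an agreed-upon design; it assumes the `Moq` and `BenchmarkDotNet` NuGet packages, and `IFoo` is a hypothetical interface invented for the example.

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Moq;

// Hypothetical interface to mock; any small interface would do.
public interface IFoo
{
    int GetValue(int input);
}

public class MockBenchmarks
{
    // Measures the cost of creating a mock and materializing its proxy,
    // which the profiling discussion above suggests dominates runtime.
    [Benchmark]
    public object CreateMock() => new Mock<IFoo>().Object;

    // Measures a typical setup-then-invoke cycle.
    [Benchmark]
    public int SetupAndInvoke()
    {
        var mock = new Mock<IFoo>();
        mock.Setup(f => f.GetValue(It.IsAny<int>())).Returns(42);
        return mock.Object.GetValue(1);
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<MockBenchmarks>();
}
```

Run in Release configuration (`dotnet run -c Release`), BenchmarkDotNet reports mean times and allocations per operation, which would make regressions between Moq versions visible over time.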
I'll try to find time to create a small PR tomorrow. |
Looking forward to it. Thank you! |
#504 |
@jakubozga: If you're interested in #504, why not post there? What about it should be fixed? |
Sorry about not getting a PR done, been crazy busy. |
I added a lambda expression compile cache and it helps performance. |
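For readers curious what a lambda expression compile cache might look like, here is a minimal sketch. It is not Moq's actual implementation; it uses the expression's `ToString()` form as a crude cache key, whereas a production cache would need a structural expression comparer (the "really nasty comparers" mentioned earlier in this thread, or Moq's existing expression-tree comparison class). All names here are invented for illustration.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq.Expressions;

// Hypothetical cache that avoids repeatedly paying Expression.Compile()
// costs for structurally identical lambdas.
public static class CompiledDelegateCache
{
    private static readonly ConcurrentDictionary<string, Delegate> Cache =
        new ConcurrentDictionary<string, Delegate>();

    public static Func<T, TResult> GetOrCompile<T, TResult>(
        Expression<Func<T, TResult>> expression)
    {
        // Caveat: ToString() is a weak key. Distinct expressions can render
        // to the same string, and captured variables (closures) make the
        // key unreliable. A real implementation would use a structural
        // IEqualityComparer<Expression> instead.
        var key = typeof(T).FullName + "|" + expression.ToString();
        return (Func<T, TResult>)Cache.GetOrAdd(key, _ => expression.Compile());
    }
}
```

Usage: `var len = CompiledDelegateCache.GetOrCompile((string s) => s.Length);` — the second request for the same expression shape returns the already-compiled delegate instead of compiling again.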
@JohanLarsson - This issue has been dormant for a while. At this point, I suggest we close it. While Moq 5 (https://github.com/moq/moq) is not yet published, its initial release is definitely getting closer, and I have the feeling that it'll take over the reins from Moq 4 pretty soon. I don't want to prematurely kill off Moq 4 (in fact, I'd like to maintain it a while longer for those folks who can't migrate to Moq 5 right away), but bigger investments, such as setting up proper benchmarking, should perhaps better be made against Moq 5 once it's released. Please let me know if that's OK with you.
Yes, it is ok.
As it is now, it is not uncommon for Moq to show up in the profiler. It will probably never be a fast library, with all the use of expressions, but it can be good to track performance as it is pretty relevant when used in tests.
NodaTime has a nice setup, I think. Infrastructure for running the benchmarks is perhaps tricky; not sure running them on CI is worth much.