Discussion: benchmark package from NPM #3
I'm not the most knowledgeable person when it comes to benchmarks, so I would trust a well-used package much more than the one-off code I would come up with. Your concerns (especially the forceful de-optimization) do sound valid to me, although they don't seem to be total blockers. I think I would opt for either your first or your last option. As I won't be able to do it, this probably depends on how much time you would like to spend working on the benchmark package. If there is a fork, using it here instead of the original version should be pretty simple. I couldn't find any other alternative package when setting this up either. Having testers do their own statistics seems complicated and error-prone to me.
I decided to look for alternative benchmark libraries first. Searching for "benchmark" on NPM turned up too many results, most being simple one-offs that haven't seen much use, wrappers of benchmark.js (which would suffer from the same problems), or both. However, I was able to identify a few promising options:
None of these options run a Student's t-test, but to be honest, I'd rather have no test than an incorrect one. I checked the source code of these packages for silly things like unwarranted de-optimizations and found nothing of the kind. @m90 how would you feel about converting the benchmark to one of these?

Update: uubench, though minimal, might also be an option. Source code here.
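For context on what a significance check would involve: a minimal sketch of Welch's t statistic over two sets of per-iteration timings. This is illustrative only; the function names are hypothetical and none of the packages mentioned above ship this code, and a complete test would additionally need the Welch–Satterthwaite degrees of freedom and a t distribution lookup.

```javascript
// Illustrative sketch: Welch's t statistic for comparing two benchmark
// runs, given arrays of per-iteration timings. Hypothetical names; not
// taken from any of the packages discussed in this thread.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function variance(xs) {
  // Sample variance (Bessel's correction, n - 1 in the denominator).
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1);
}

function welchT(a, b) {
  // Standard error of the difference of means for unequal variances.
  const se = Math.sqrt(variance(a) / a.length + variance(b) / b.length);
  return (mean(a) - mean(b)) / se;
}
```

A large absolute t value (relative to the critical value for the computed degrees of freedom) would indicate the two runs genuinely differ rather than just being noise.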
I just had a cursory glance at the docs, and it occurred to me that this might not support async benchmark functions (at least it is never mentioned anywhere), which the benchmark this repo is running would require. One could remove that requirement from the original code, but then it deviates from its real-world use. This might be a blocker, but maybe I am missing something. About the other options, I don't know; browser-only seems like a blocker to me.

As we talked about before, I don't have too strong an opinion on this topic, so I would leave choosing and implementing a different benchmark framework / approach up to you. This seems more like a question of personal preference (which is perfectly fine) than one that really affects the result of the benchmark, and I simply don't have a preference, as I don't know enough about the topic. You are the one who wants others to run this benchmark, so you need to be the one who is content with what is being used.
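To make the async requirement concrete: a minimal sketch of a timing loop that awaits the benchmarked function, so benchmarks returning promises are measured end to end rather than only until the promise is created. The function name and shape are hypothetical, not part of any library discussed here.

```javascript
// Hypothetical sketch: an async-aware timing loop. Awaiting fn() means
// both synchronous functions and promise-returning ones are timed to
// completion, which is the support being asked about above.
async function timeAsync(fn, iterations = 100) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    await fn();
  }
  const elapsedNs = process.hrtime.bigint() - start;
  return Number(elapsedNs) / iterations; // mean nanoseconds per iteration
}
```

A harness without the `await` would report near-zero times for async benchmarks, because only the synchronous setup of each promise would be measured.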
Oops, totally forgot to check about async. I agree that's necessary. I decided I don't want to invest any time in fixing benchmark.js (at least not at this time), so I guess the option that remains is accepting the quirks. I'll announce the benchmark in the pull request. Again, thanks for your awesome work!!
The benchmark package does (at least) two questionable things:
I can think of a couple of options (from least to most effort):