I have a jsperf test case, and the results are pretty confusing. I have three "snippets":
and most of the time, they all come out about the same speed... even the control! I guessed that the JS JIT compiler was removing my "unnecessary" instructions since they didn't seem to have any effect, so I started accumulating the results and logging them to the console when the test loop is done, e.g.
var result = 0; // a, b, and nNumbers come from the test setup
for (var i = 0; i < nNumbers; i++) {
    result += a[i] / b[i];
}
console.log(result);
But then I got wildly different results depending on whether the console was open. The slowdown from the console logging seemed to overwhelm any other performance differences.
So I tried cranking up the number of iterations within each "snippet" to minimize the amount of logging relative to the operations I'm trying to test. But I still get no significant speed difference between the three snippets. Really, division and multiplication are both about the same speed as evaluating a constant?? I must be doing something wrong, or jsperf is broken.
There are related questions that have already been answered, but none I've found that are specific to JavaScript benchmarking.
Don't put console.log calls in your timed sections. Logging is horribly slow compared to the operations you actually want to measure, so it skews your results. Also, as you noticed, its timing varies depending on whether the console is open.
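If you want to sanity-check the result, log it once after the measurement ends. Here is a minimal sketch with a hand-rolled timer (a stand-in for the jsperf harness, not its API, reusing the a, b, and nNumbers from your setup):

var result = 0;
var t0 = performance.now();
for (var i = 0; i < nNumbers; i++) {
    result += a[i] / b[i]; // the work being measured
}
var t1 = performance.now();
// the log happens outside the timed region, so it cannot skew it
console.log(result, (t1 - t0) + ' ms');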
You can keep the optimiser from eliminating your "unnecessary" instructions by putting your results in a global array. The optimiser can only remove code that does not affect the observable outcome, which is impossible if the code manipulates global state.
Of course, this still does not necessarily prevent loop-invariant code motion, so you also need to make sure that your timed code always operates on different data.
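A minimal sketch combining both ideas, again assuming the a, b, and nNumbers from your setup (the sink and run names are just illustrative):

// Global sink: pushes are observable side effects, so the optimiser
// cannot prove the arithmetic feeding them is dead code.
var sink = [];
var run = 0;

// the timed snippet:
run = (run + 1) | 0; // different input on every run
var result = 0;
for (var i = 0; i < nNumbers; i++) {
    // mixing `run` in keeps the sum from being a loop-invariant
    // value the compiler could compute once and reuse
    result += (a[i] + run) / b[i];
}
sink.push(result);

Since sink grows on every run, clearing it now and then in setup code keeps memory use bounded without giving the optimiser a chance to remove the work.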