What I've seen: First, I read these two other SO posts:
Why is Insertion sort better than Quick sort for small list of elements?
Is there ever a good reason to use Insertion Sort?
But the answers there are not specific enough for me.
The answers to those two posts mainly point out that Merge Sort and Quick Sort can be slow on small inputs because of the extra overhead from the recursive function calls. But I am wondering how the specific threshold of 7 got set.
My Question:
I want to know why the cutoff is around 7 elements, the point below which a quadratic sorting algorithm like Insertion Sort is faster than an O(nlogn) sorting algorithm like Quick Sort or Merge Sort.
- Use insertion sort on small subarrays. Mergesort has too much overhead for tiny subarrays.
- Cutoff to insertion sort for ~ 7 elements.
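The advice on the slide can be sketched as a mergesort that hands small subarrays to insertion sort. This is a minimal Python sketch, not the slide's actual code: the `CUTOFF` of 7 is the slide's rule of thumb, and the function names are my own.

```python
CUTOFF = 7  # rule-of-thumb threshold from the slide; the best value is machine-dependent

def insertion_sort(a, lo, hi):
    # Sort a[lo..hi] in place; cheap for tiny ranges because there is
    # no recursion and no auxiliary array.
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def merge_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    # Small subarray: switch to insertion sort instead of recursing further.
    if hi - lo + 1 <= CUTOFF:
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    merge_sort(a, lo, mid)
    merge_sort(a, mid + 1, hi)
    # Merge the two sorted halves back into a[lo..hi].
    merged = []
    i, j = lo, mid + 1
    while i <= mid and j <= hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:mid + 1])
    merged.extend(a[j:hi + 1])
    a[lo:hi + 1] = merged
```

The point of the cutoff is that every leaf of the recursion tree would otherwise pay the fixed cost of two more recursive calls and a merge; below the threshold, insertion sort's simple inner loop wins.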
I got this from a Princeton lecture slide, which I think is a reputable enough source; see the 11th slide, under the "Mergesort: Practical Improvements" section.
I would really appreciate it if your answer includes examples or a mathematical proof.
Big-O only notes the factor that dominates as n gets large. It ignores constant factors and lesser terms, which pretty much always exist and are more significant when n is small. As a consequence, Big-O is near useless for comparing algorithms that will only ever need to work on tiny inputs.
For example, you could have an O(n log n) function whose time graph is t = 5n log n + 2n + 3, and an O(n^2) function whose time graph is t = 0.5n^2 + n + 2. Compare those two graphs, and you'll find that, in spite of Big-O, the O(n^2) function is slightly faster until n reaches about 13.
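You can check that crossover numerically by plugging values into the two example cost models. This is a small Python sketch; I'm assuming the logarithm in the example is base 10, which is the base that reproduces the "about 13" figure (with base 2 or the natural log the crossover lands at a larger n):

```python
import math

def t_nlogn(n):
    # Example O(n log n) cost model from the text: t = 5n log n + 2n + 3
    # (base-10 logarithm assumed).
    return 5 * n * math.log10(n) + 2 * n + 3

def t_quadratic(n):
    # Example O(n^2) cost model from the text: t = 0.5n^2 + n + 2
    return 0.5 * n ** 2 + n + 2

# First n at which the quadratic model becomes the slower of the two;
# the quadratic model is faster through n = 13.
crossover = next(n for n in range(1, 1000) if t_quadratic(n) > t_nlogn(n))
print(crossover)
```

This is exactly the kind of calculation behind a cutoff like 7: the constant factors of the recursive sort dominate for small n, and the break-even point is found empirically for the machine and implementation at hand.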