I find strand sort very appealing for sorting singly linked lists in constant extra space, because it is much faster than, for example, insertion sort. I see why it is O(n) in the best case (the list is already sorted) and O(n^2) in the worst case (the list is sorted in reverse). But why O(n sqrt n) in the average case? If an algorithm is not based on bisection and has polynomial best-case and worst-case running times, is the average case simply O(n^m), where m is the arithmetic mean of the best-case and worst-case exponents (m = (1 + 2) / 2 = 3/2, so O(n sqrt n) = O(n^(3/2)))?
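For reference, here is a minimal sketch of strand sort in Python. It uses Python lists rather than a linked list for brevity (the algorithm's appeal on linked lists is that the pops and merges become O(1)-space pointer splices); the helper name `merge` is my own:

```python
def merge(a, b):
    # Standard merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def strand_sort(seq):
    seq, result = list(seq), []
    while seq:
        # Greedily pull an ascending "strand" out of the remaining input.
        strand = [seq.pop(0)]
        i = 0
        while i < len(seq):
            if seq[i] >= strand[-1]:
                strand.append(seq.pop(i))
            else:
                i += 1
        # Merge the strand into the sorted result so far.
        result = merge(result, strand)
    return result
```

The best/worst cases fall out directly: a sorted input yields one strand (one pass, O(n)), while a reversed input yields n strands of length 1, each requiring a merge into a growing result.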
If the inversion count is O(n), then the running time of insertion sort is O(n). In the worst case there can be n(n-1)/2 inversions, which occurs when the array is sorted in reverse order, so the worst-case time complexity of insertion sort is O(n^2). The average case is also O(n^2): the time taken to sort a list is proportional to the square of the number of elements. The best case, an already-sorted input, takes O(n) time.
In short, insertion sort is an easy-to-implement, stable sorting algorithm: O(n^2) in the average and worst case, O(n) in the best case.
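A sketch that makes the inversion-count relationship concrete: each element shift in insertion sort removes exactly one inversion, so the total number of shifts equals the number of inversions in the input (names `insertion_sort` and `inversions` are my own):

```python
def insertion_sort(a):
    a = list(a)
    shifts = 0  # each shift removes exactly one inversion
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift one element right
            j -= 1
            shifts += 1
        a[j + 1] = key
    return a, shifts

def inversions(a):
    # Brute-force O(n^2) inversion count, for checking.
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n) if a[i] > a[j])
```

On a reversed array of length n the shift count is n(n-1)/2, matching the worst case above.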
The original reference for strand sort is http://groups.google.com/group/fido7.ru.algorithms/msg/26084cdb04008ab3 ... according to that post, it is O(n^2). Strand sort was presented there as a component of J sort, which the post claims is O(n lg n). An average-case complexity of O(n^2) makes sense: on random data roughly half the strands have length 1, and O((n/2)^2) = O(n^2).
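The strand-length argument can be checked empirically. The sketch below (helper name `strand_lengths` is my own) extracts strands the same greedy way strand sort does and records their lengths; the two deterministic extremes bracket the behavior, and shuffled input lands in between:

```python
import random

def strand_lengths(seq):
    # Record the length of each ascending strand that strand sort
    # would greedily extract from the input, in extraction order.
    seq, lengths = list(seq), []
    while seq:
        strand = [seq.pop(0)]
        i = 0
        while i < len(seq):
            if seq[i] >= strand[-1]:
                strand.append(seq.pop(i))
            else:
                i += 1
        lengths.append(len(strand))
    return lengths

# Sorted input: a single strand of length n -> the O(n) best case.
# Reversed input: n strands of length 1 -> the O(n^2) worst case.
# Random input: many short strands, which drives the average case
# toward O(n^2) per the argument above.
```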