Jarvis: This algorithm requires O(nh) time in the worst case for n input points with h extreme points.
Graham: O(n log n) in the worst case.
Source: the CGAL reference manual, from which I use both algorithms.
That means Jarvis can be faster on a dataset (say, in 2 dimensions) when h is less than log n. I would like to see this in action, but I have failed to find a dataset for this purpose. Does anybody know of one?
Googling yields this link, which actually supports the claim above.
I did similar measurements a while ago, so I'm posting an answer even though there is already an accepted one, just for the numbers...
Using CGAL's implementations with 10^6 points and 3 points on the hull, Graham takes ~150 ms and Jarvis ~87 ms; see the setup below (the blue square is all the other points):

3 points on the hull:

| points | Jarvis  | Graham  |
|--------|---------|---------|
| 10^7   | 850 ms  | 1820 ms |
| 10^6   | 87 ms   | 150 ms  |
| 10^5   | 10 ms   | 15 ms   |

5 points on the hull:

| points | Jarvis  | Graham  |
|--------|---------|---------|
| 10^7   | 1500 ms | 1820 ms |
| 10^6   | 139 ms  | 150 ms  |

6 points on the hull:

| points | Jarvis  | Graham  |
|--------|---------|---------|
| 10^7   | 2560 ms | 1820 ms |
| 10^6   | 170 ms  | 150 ms  |
Apart from these few special cases, though, Graham is much faster than Jarvis.
Let's assume that we have a big triangle and a lot of points inside it. The number of points on the hull (that is, h) is 3. If the number of interior points is really big, then h = 3 is less than log n, and Jarvis would be much faster here.