Basically you first have to do:
SurfFeatureDetector surf(400);
surf.detect(image1, keypoints1);
and then:
surfDesc.compute(image1, keypoints1, descriptors1);
Why are detect and compute two different operations?
Doesn't running compute after detect introduce redundant loops over the image?
I found that .compute is the most expensive step in my application: .detect finishes in ~0.2 s, while .compute takes ~1 s. Is there any way to speed up .compute?
The detection of keypoints is simply the process of selecting points in the image that are considered "good features".

The extraction of descriptors for those keypoints is a completely different process: it encodes properties of each feature, such as its contrast with its neighbours, so that it can be compared with keypoints from other images taken at a different scale and orientation.

The way you describe a keypoint can be crucial for successful matching, and it is really the key factor. It also determines matching speed: for example, a descriptor can be stored as a vector of floats or as a binary sequence.
There is a difference between detecting keypoints in an image and computing the descriptors for those keypoints. For example, you can extract SURF keypoints but compute SIFT descriptors for them. Note that in the DescriptorExtractor::compute method, filters are applied to the keypoints first:

KeyPointsFilter::runByImageBorder()
KeyPointsFilter::runByKeypointSize()