I understand how FAST, SIFT and SURF work, but I can't seem to figure out which of the above are only detectors and which are extractors.
Basically, from that list of feature detectors/extractors (link to articles: FAST, GFTT, SIFT, SURF, MSER, STAR, ORB, BRISK, FREAK, BRIEF), some are only feature detectors (FAST, GFTT), others are both feature detectors and descriptor extractors (SIFT, SURF, ORB, BRISK), and some, like FREAK, are only descriptor extractors.
If I remember correctly, BRIEF is only a descriptor extractor, so it needs features detected by some other algorithm like FAST or ORB.
To be sure which is which, you either have to browse the article describing the algorithm or browse the OpenCV documentation to see which algorithms are implemented for the FeatureDetector class and which for the DescriptorExtractor class.
Q1: classify the types of detectors, extractors and matchers based on
float and uchar, as mentioned, or some other type of classification?
Q2: explain the difference between the float and uchar classification
or whichever classification is being used?
Regarding questions 1 and 2, the link you already posted is the best reference I know for classifying them as float or uchar; maybe someone will be able to complete it. In short, float descriptors (SIFT, SURF) are real-valued vectors compared with Euclidean distance, while uchar descriptors (ORB, BRISK, BRIEF, FREAK) are binary strings packed into bytes and compared with Hamming distance.
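To illustrate the float/uchar distinction, here is a minimal pure-Python sketch (not OpenCV code; the tiny descriptors below are made up, real ones have 64-512 dimensions):

```python
# Illustration of the two descriptor families and their distance metrics.
# Float descriptors (SIFT, SURF style): real-valued vectors, L2 distance.
# uchar descriptors (ORB, BRIEF, BRISK, FREAK style): packed bit strings,
# Hamming distance (number of differing bits).
import math

def l2_distance(a, b):
    """Euclidean distance between two float descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hamming_distance(a, b):
    """Hamming distance between two uchar (byte string) descriptors."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

float_desc1 = [0.1, 0.5, 0.9]                   # toy SIFT-like vector
float_desc2 = [0.1, 0.4, 0.8]
uchar_desc1 = bytes([0b10110010, 0b00001111])   # toy ORB-like bit string
uchar_desc2 = bytes([0b10110000, 0b00001101])

print(l2_distance(float_desc1, float_desc2))        # ~0.141 (similar)
print(hamming_distance(uchar_desc1, uchar_desc2))   # 2 differing bits
```

This is also why the matcher must fit the descriptor type: a Hamming-based matcher makes no sense on float vectors, and vice versa.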
Q3: mention how to initialize (code) various types of detectors,
extractors and matchers?
Answering question 3, OpenCV made the code to use the various types quite uniform - mainly, you have to choose one feature detector. Most of the difference lies in choosing the type of matcher, and you already mentioned the three that OpenCV has. Your best bet here is to read the documentation, code samples, and related Stack Overflow questions. Some blog posts are also an excellent source of information, like this series of feature detector benchmarks by Ievgen Khvedchenia (the blog is no longer available, so I had to create a raw text copy from its Google cache).
Matchers are used to find whether a descriptor is similar to another descriptor from a list. You can either compare your query descriptor against every descriptor in the list (BruteForce) or use a faster approximate search (FlannBased). The problem is that the approximate heuristics do not work for all descriptor types: for example, the FlannBased implementation used to work only with float descriptors, not with uchar ones (but since 2.4.0, FlannBased with an LSH index can be applied to uchar descriptors).
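As a sketch of what the BruteForce strategy does for uchar descriptors (pure Python, illustrative only - OpenCV's implementation is optimized, and FlannBased replaces this linear scan with an approximate index):

```python
# Sketch of BruteForce matching over uchar descriptors with Hamming
# distance: for each query descriptor, scan the whole training list
# and keep the closest entry.
def hamming(a, b):
    """Number of differing bits between two byte-string descriptors."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def brute_force_match(query, train):
    """Return (best_index, best_distance) of the closest train descriptor."""
    return min(((i, hamming(query, d)) for i, d in enumerate(train)),
               key=lambda t: t[1])

train = [bytes([0b11110000]), bytes([0b00001111]), bytes([0b10101010])]
query = bytes([0b11110001])

idx, dist = brute_force_match(query, train)
print(idx, dist)  # (0, 1): train[0] differs from the query by one bit
```

The linear scan is exact but O(n) per query, which is why approximate indexes (kd-trees for float, LSH for uchar) are attractive for large descriptor sets.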
Quoting this App-Solut blog post about the DescriptorMatcher types:
The DescriptorMatcher comes in the varieties “FlannBased”, “BruteForceMatcher”, “BruteForce-L1” and “BruteForce-HammingLUT”. The “FlannBased” matcher uses the flann (fast library for approximate nearest neighbors) library under the hood to perform faster but approximate matching. The “BruteForce-*” versions exhaustively search the dictionary to find the closest match for an image feature to a word in the dictionary.
Some of the more popular combinations are:
Feature Detectors / Descriptor Extractors / Matcher types
(FAST, SURF) / SURF / FlannBased
(FAST, SIFT) / SIFT / FlannBased
(FAST, ORB) / ORB / Bruteforce
(FAST, ORB) / BRIEF / Bruteforce
(FAST, SURF) / FREAK / Bruteforce
You might also have noticed there are a few adapters (Dynamic, Pyramid, Grid) to the feature detectors. The App-Solut blog post summarizes their use really nicely:
(...) and there are also a couple of adapters one can use to change the behavior of the key point detectors. For example the Dynamic adapter which adjusts a detector type specific detection threshold until enough key-points are found in an image or the Pyramid adapter which constructs a Gaussian pyramid to detect points on multiple scales. The Pyramid adapter is useful for feature descriptors which are not scale invariant.
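The idea behind the Pyramid adapter can be sketched in plain Python (illustrative only, not OpenCV code - the 2x2 averaging below is a crude stand-in for Gaussian blur plus subsampling):

```python
# Sketch of the Pyramid adapter's idea: build an image pyramid by
# repeated downsampling, then run the (scale-dependent) detector on
# every level so features are found at multiple scales.
def downsample(img):
    """Halve each dimension by averaging 2x2 blocks (nested-list image)."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

def build_pyramid(img, levels):
    """Return [full-size, half-size, quarter-size, ...] with `levels` entries."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = downsample(img)
        pyramid.append(img)
    return pyramid

base = [[float((x + y) % 2) for x in range(8)] for y in range(8)]
pyramid = build_pyramid(base, 3)
print([len(level) for level in pyramid])  # [8, 4, 2]
```

A detector with a fixed-size neighborhood run on every level effectively sees the image at several scales, which is exactly the compensation a non-scale-invariant descriptor needs.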
Further reading: