I know from other posts that I can extract dense features from an image with the following code (Python):
import cv2

sift = cv2.SIFT()  # OpenCV 2.4.x
dense = cv2.FeatureDetector_create("Dense")
kp = dense.detect(imgGray)
kp, des = sift.compute(imgGray, kp)
Say I'd like to set the block size of the SIFT descriptor to 20x20 (instead of the default 16x16), keeping the default 4x4 bins, so that each bin covers 5x5 pixels when computing the gradient statistics. Is there a way to do so (Python or C++)?
Update:
As suggested by Rick M., I read the documentation, but I still cannot figure out how the dense detector is constructed, especially the role of 'scale'. The documentation reads:
class DenseFeatureDetector : public FeatureDetector
{
public:
    DenseFeatureDetector( float initFeatureScale=1.f, int featureScaleLevels=1,
                          float featureScaleMul=0.1f,
                          int initXyStep=6, int initImgBound=0,
                          bool varyXyStepWithScale=true,
                          bool varyImgBoundWithScale=false );
protected:
    ...
};
with the following explanation:
The detector generates several levels (in the amount of
featureScaleLevels) of features. Features of each level are located in the
nodes of a regular grid over the image (excluding the image boundary of
given size). The level parameters (a feature scale, a node size, a size of
boundary) are multiplied by featureScaleMul with level index growing
depending on input flags, viz.:
- Feature scale is multiplied always.
- The grid node size is multiplied if varyXyStepWithScale is true.
- Size of image boundary is multiplied if varyImgBoundWithScale is true.
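Here is my reading of these rules as a small sketch (assumed semantics, not verified against the OpenCV source), using the defaults from the constructor above except featureScaleLevels=3:

# Hypothetical illustration of how the per-level parameters would evolve:
init_feature_scale = 1.0     # initFeatureScale
feature_scale_levels = 3     # featureScaleLevels
feature_scale_mul = 0.1      # featureScaleMul
init_xy_step = 6             # initXyStep
vary_xy_step_with_scale = True

scale, step = init_feature_scale, float(init_xy_step)
for level in range(feature_scale_levels):
    print("level %d: feature scale %.3f, grid step %.3f" % (level, scale, step))
    scale *= feature_scale_mul       # "feature scale is multiplied always"
    if vary_xy_step_with_scale:
        step *= feature_scale_mul    # node size multiplied only if the flag is set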
I suppose what I want to do is to set the grid node size to 20, so I think I should set featureScaleMul = 20.0f/16. Is that right?
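If so, I assume the parameters can be set from Python through the Algorithm get/set interface (OpenCV 2.4.x), using the names taken from the constructor signature quoted above, something like:

import cv2

dense = cv2.FeatureDetector_create("Dense")
# Assumption: the constructor parameters are exposed as Algorithm
# parameters in the Python bindings.
dense.setDouble("featureScaleMul", 20.0 / 16)
print(dense.getDouble("featureScaleMul"))  # verify the value was accepted
kp = dense.detect(imgGray)                 # imgGray as in the first snippet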
For now, my current approach is to use the default dense detector and then set the size attribute of each returned keypoint to 20, one by one, but I'm not sure that's what I want (see the sketch below).
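Concretely, that workaround looks like this (dense and sift as in the first snippet):

kp = dense.detect(imgGray)
for p in kp:
    p.size = 20  # force a 20x20 support region for each keypoint
kp, des = sift.compute(imgGray, kp)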