Monday 22 July 2013

How to create an OpenCV project in Visual Studio 2010

We need to take the following steps to do this:-

1) Create a Win32 console application project.
2) Add the following dependencies under Project Properties -> Configuration Properties -> Linker -> Input -> Additional Dependencies:

opencv_core220d.lib;opencv_highgui220d.lib;opencv_imgproc220d.lib;opencv_legacy220d.lib;opencv_ml220d.lib;opencv_video220d.lib;




3) Add the following directories under Project Properties -> Configuration Properties -> C/C++ -> General -> Additional Include Directories:

C:\OpenCV2.2\include;C:\OpenCV2.2\include\opencv;





4) Add the "C:\OpenCV2.2\lib" directory under Project Properties -> Configuration Properties -> Linker -> General -> Additional Library Directories:

C:\OpenCV2.2\lib
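
Once these settings are in place, a minimal program like the sketch below can be used to check that the headers are found and the libraries link correctly; the image path here is only a placeholder.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

int main()
{
    // Load and display an image to verify the OpenCV 2.2 setup.
    cv::Mat img = cv::imread("C:\\test.jpg");   // placeholder path
    if (img.empty())
    {
        printf("Could not load the image.\n");
        return -1;
    }
    cv::namedWindow("Test");
    cv::imshow("Test", img);
    cv::waitKey(0);
    return 0;
}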

Wednesday 10 July 2013

Face detection algorithms

Algorithms for face detection:

PCA:
·      Principal Component Analysis is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.

Eigenface:
·         Eigenfaces are a set of eigenvectors derived from a database of face images.
·         The grayscale images in the database are averaged pixel by pixel to obtain a mean face.
·         The mean face is subtracted from each image in the database.
·         The mean-subtracted images are reshaped into column vectors and gathered into one matrix; the eigenvectors of the resulting covariance matrix are the eigenfaces (see the sketch below).
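
As a rough illustration, this decomposition can be sketched with OpenCV's cv::PCA, assuming the training faces are already loaded as equally sized grayscale cv::Mat images (the function name here is only for illustration):

#include <opencv2/core/core.hpp>
#include <vector>

// Build an eigenface basis from equally sized grayscale face images.
// Each image is flattened into one row of a data matrix, then cv::PCA
// computes the mean face and the leading eigenvectors (the eigenfaces).
cv::Mat computeEigenfaces(const std::vector<cv::Mat>& faces, int numComponents)
{
    // One flattened face per row of the data matrix.
    cv::Mat data((int)faces.size(), faces[0].rows * faces[0].cols, CV_32F);
    for (size_t i = 0; i < faces.size(); i++)
    {
        cv::Mat row_i = data.row((int)i);                   // header into 'data'
        faces[i].clone().reshape(1, 1).convertTo(row_i, CV_32F);
    }
    // Flag 0 = each row of 'data' is one sample (CV_PCA_DATA_AS_ROW in the C API).
    cv::PCA pca(data, cv::Mat(), 0, numComponents);
    return pca.eigenvectors;                                // one eigenface per row
}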

EP (Evolutionary Pursuit):
·         Eigenspace-based adaptive approach that searches for the best set of projection axes in order to maximize a fitness function, measuring at the same time the classification accuracy and generalization ability of the system.

AdaBoost + Haar cascade (Viola-Jones):
·         A Haar cascade is a series of Haar-like features (Haar refers to a square-wave function in mathematics).
·         f(i) = sum(Ri, white) - sum(Rj, black)
Ri (the white part of the Haar rectangle) and Rj (the black part) are the selected regions of image pixels.
The weak classifier outputs 1 if f(i) > threshold, and -1 if f(i) < threshold.
·         AdaBoost combines all weak classifiers into a strong classifier for matching the features.
·         Rectangle sums are computed quickly from the integral image: sum over a rectangle = s(A) + s(D) - s(B) - s(C)
Here A, B, C, D are the four corner points of the rectangle and s(.) is the integral image value at a corner. A sketch of running OpenCV's Haar cascade face detector is given below.
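
As a rough sketch, Viola-Jones detection is exposed in OpenCV through cv::CascadeClassifier; the image and cascade paths below are only placeholders, and opencv_objdetect must be linked in addition to the core/imgproc/highgui libraries:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <cstdio>
#include <vector>

int main()
{
    // Placeholder paths: point these at a real image and a Haar cascade file.
    cv::Mat img = cv::imread("face.jpg");
    cv::CascadeClassifier cascade;
    if (img.empty() || !cascade.load("haarcascade_frontalface_alt.xml"))
    {
        printf("Could not load the image or the cascade file.\n");
        return -1;
    }

    cv::Mat gray;
    cv::cvtColor(img, gray, CV_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    // Run the AdaBoost-trained cascade of Haar-like features at multiple scales.
    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(gray, faces, 1.1, 3);

    // Draw a rectangle around each detected face.
    for (size_t i = 0; i < faces.size(); i++)
        cv::rectangle(img, faces[i], cv::Scalar(0, 255, 0), 2);

    cv::imshow("faces", img);
    cv::waitKey(0);
    return 0;
}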

Gabor jets (EBGM):
·         Faces are represented as graphs, with nodes positioned at fiducial points (eyes, nose, corners of the mouth) and edges labeled with 2-D distance vectors.
·         Each node contains a set of 40 complex Gabor wavelet coefficients at different scales and orientations (phase and amplitude), called a "jet".
·         Recognition is based on matching these graphs: nodes are labeled with jets and edges are labeled with distances.

Kernel SVM:
·         Eigenface and Fisherface methods find projection directions using second-order statistics, whereas kernel methods capture higher-order correlations.

LDA:
·         Linear Discriminant Analysis finds the vectors in the underlying space that best discriminate among classes.

Trace Transform:
·         A generalization of the Radon transform.
·         A tool for image processing that can be used to recognize objects under transformations such as rotation, translation and scaling.

Fisher faces:
·         This method for facial recognition is less sensitive to variation in lighting and pose of the face than the method using eigenfaces.

Active appearance model:

·         It decouples the face's shape from its texture: it does an eigenface decomposition of the face after warping it to mean shape. This allows it to perform better on different projections of the face, and when the face is tilted.

Tuesday 18 June 2013

Row smearing and column smearing in image

Row smearing:- Row smearing converts an input binary image into an image that is smeared row wise: runs of white pixels shorter than a horizontal threshold h are filled with black. Here are the code, input image and output image for row smearing:-

// data1: input image pixel data, data2: output image pixel data,
// step: row stride in bytes, h: horizontal smearing threshold.
// The input is assumed to be binary, with pixel values 0 (black) and 255 (white).
for (i = 0; i < height; i++)
{
    for (j = 0; j < width; j++)
    {
        if (data1[i * step + j] == 0)
        {
            data2[i * step + j] = 0;        // black pixel: copy as-is
        }
        else
        {
            // Measure the run of white (255) pixels starting at column j.
            for (k = 0; (j + k) < width && data1[i * step + (j + k)] == 255; k++)
                ;
            // Fill short white runs with black (the smearing step).
            if (k < h)
            {
                for (l = 0; l < k; l++)
                    data2[i * step + (j + l)] = 0;
            }
            if (k > 0)
                j = j + (k - 1);            // skip past the run just processed
        }
    }
}
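
The loop above assumes the surrounding setup is already in place. A minimal sketch of that setup (the file names and the threshold value h are placeholders) might look like this:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Placeholder input: a binarized (0/255) grayscale image.
    cv::Mat in = cv::imread("binary.png", 0);    // 0 = load as grayscale
    if (in.empty()) return -1;
    cv::Mat out = in.clone();                    // start from a copy of the input

    int height = in.rows, width = in.cols, step = (int)in.step;
    unsigned char* data1 = in.data;              // input pixels
    unsigned char* data2 = out.data;             // output pixels
    int h = 15;                                  // horizontal smearing threshold (placeholder)
    int i, j, k, l;

    /* ... the row smearing loop shown above goes here ... */

    cv::imwrite("row_smeared.png", out);
    return 0;
}

The column smearing loop below follows the same pattern, with a vertical threshold v and its own output buffer data3.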


Input Image

Output Image



Column smearing:- Column smearing converts an input binary image into an image that is smeared column wise: vertical runs of white pixels shorter than a vertical threshold v are filled with black. Here are the code, input image and output image for column smearing:-


// data1: input image pixel data, data3: output image pixel data,
// step: row stride in bytes, v: vertical smearing threshold.
// The input is assumed to be binary, with pixel values 0 (black) and 255 (white).
for (j = 0; j < width; j++)
{
    for (i = 0; i < height; i++)
    {
        if (data1[i * step + j] == 0)
        {
            data3[i * step + j] = 0;            // black pixel: copy as-is
        }
        else if (data1[i * step + j] == 255)
        {
            // Measure the run of white pixels going down from row i.
            k = 0;
            while ((i + k) < height && data1[(i + k) * step + j] == 255)
                k++;
            // Fill short white runs with black (the smearing step).
            if (k < v)
            {
                for (l = 0; l < k; l++)
                    data3[(i + l) * step + j] = 0;
            }
            i = i + (k - 1);                    // skip past the run just processed
        }
        else
        {
            // Not a binary value: report the unexpected pixel.
            printf("%d", data1[i * step + j]);
        }
    }
}

                                                                   
Input Image

Output Image





Monday 10 June 2013

SURF, FREAK, BRISK and ORB classes in OpenCV

SURF:- The SURF class is used for extracting Speeded Up Robust Features from an image. It implements the Speeded Up Robust Features descriptor [Bay06]. There is a fast multi-scale Hessian keypoint detector that can be used to find the keypoints (which is the default option), but the descriptors can also be computed for user-specified keypoints. The class can be used for object tracking and localization, image stitching, etc.

FREAK:- Class implementing the FREAK (Fast Retina Keypoint) keypoint descriptor, described in [AOV12]. The algorithm proposes a novel keypoint descriptor inspired by the human visual system, and more precisely the retina, coined Fast Retina Keypoint (FREAK). A cascade of binary strings is computed by efficiently comparing image intensities over a retinal sampling pattern. FREAKs are in general faster to compute with a lower memory load and also more robust than SIFT, SURF or BRISK. They are competitive alternatives to existing keypoints, in particular for embedded applications.

BRISK:-Class implementing the BRISK keypoint detector and descriptor extractor, described in [LCS11].

ORB :-Class implementing the ORB (oriented BRIEF) keypoint detector and descriptor extractor, described in [RRKB11]. The algorithm uses FAST in pyramids to detect stable keypoints, selects the strongest features using FAST or Harris response, finds their orientation using first-order moments and computes the descriptors using BRIEF (where the coordinates of random point pairs (or k-tuples) are rotated according to the measured orientation).
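
All four classes follow the same detect-and-describe pattern. As a rough illustration, here is a minimal sketch using ORB with the OpenCV 2.4-era features2d API (the image path is a placeholder; SURF additionally requires the nonfree module in 2.4.x):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("scene.jpg", 0);    // placeholder path, loaded as grayscale
    if (img.empty()) return -1;

    // ORB: FAST keypoints detected in an image pyramid + rotated BRIEF descriptors.
    cv::ORB orb(500);                            // keep up to 500 keypoints
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    orb(img, cv::Mat(), keypoints, descriptors); // detect and describe in one call

    // Visualize the detected keypoints.
    cv::Mat output;
    cv::drawKeypoints(img, keypoints, output);
    cv::imshow("ORB keypoints", output);
    cv::waitKey(0);
    return 0;
}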

Saturday 6 April 2013

Average low pass filter for 24-bit color Image

The average low-pass filter is an algorithm where, for each pixel, we take the average of its adjacent pixels. We start from pixel [1][1] and go up to pixel [h-2][w-2]; the boundary pixels are copied unchanged from the input image. So we get a filter that takes each pixel and replaces it with the average of its 8 neighbours. If a pixel has a much higher value than its neighbours, the averaging pulls it down towards them, so sharp variations are suppressed and only slowly varying (low-frequency) content passes through; that is why it is called a low-pass filter. A rough sketch of the filter is given below, followed by the input and output images used for the program; the difference between them is slight and hard to see.
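
A rough sketch of this filter for a 24-bit (BGR) image using cv::Mat, following the description above (averaging the 8 neighbours of each interior pixel and copying the boundary pixels unchanged); the file names are placeholders:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat in = cv::imread("input.bmp");        // placeholder path, 24-bit colour image
    if (in.empty()) return -1;
    cv::Mat out = in.clone();                    // boundary pixels stay as in the input

    for (int i = 1; i < in.rows - 1; i++)
    {
        for (int j = 1; j < in.cols - 1; j++)
        {
            int sum[3] = { 0, 0, 0 };
            // Accumulate the 8 neighbours of pixel (i, j) for each colour channel.
            for (int di = -1; di <= 1; di++)
                for (int dj = -1; dj <= 1; dj++)
                {
                    if (di == 0 && dj == 0) continue;   // skip the centre pixel
                    cv::Vec3b p = in.at<cv::Vec3b>(i + di, j + dj);
                    for (int c = 0; c < 3; c++) sum[c] += p[c];
                }
            cv::Vec3b& q = out.at<cv::Vec3b>(i, j);
            for (int c = 0; c < 3; c++) q[c] = (unsigned char)(sum[c] / 8);
        }
    }

    cv::imwrite("output.bmp", out);
    return 0;
}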

Input Image

Output Image

Friday 5 April 2013

Horizontal Flip, Vertical Flip and Reflection about Origin in a 24-bit color Image

Horizontal Flip:- A horizontal flip is an exchange of pixels in an image that moves the pixels on the left side of the image to the right side and the pixels on the right side to the left. We traverse the whole image column wise and swap the leftmost pixels with the rightmost ones; after traversing every row we get a horizontally flipped image. Here are the input and output images for the horizontal flip program:-



Input Image
Output Image
Vertical Flip:- A vertical flip moves the topmost pixels to the bottom, so the rows at the top of the input image end up at the bottom of the output image. In this we move row wise and traverse the whole image. Here are the input and output images for this part:-


Input Image
Output Image
Reflection about origin:- In this algorithm we move pixels diagonally, so the pixels located at the top left of the input image move to the bottom right of the output image; the result is a reflection about the origin. We can say that this is the combination of a horizontal flip and a vertical flip. A short sketch of all three operations using OpenCV is given after the images. Here are the input and output images for the program:-

Input Image

Output Image
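
As a rough illustration, all three operations can also be done with OpenCV's cv::flip; the sketch below (file names are placeholders) applies the three flip codes to a 24-bit color image:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat in = cv::imread("input.bmp");   // placeholder path, 24-bit color image
    if (in.empty()) return -1;

    cv::Mat horizontal, vertical, origin;
    cv::flip(in, horizontal, 1);            // around the vertical axis   -> horizontal flip
    cv::flip(in, vertical, 0);              // around the horizontal axis -> vertical flip
    cv::flip(in, origin, -1);               // around both axes           -> reflection about origin

    cv::imwrite("horizontal_flip.bmp", horizontal);
    cv::imwrite("vertical_flip.bmp", vertical);
    cv::imwrite("reflection_origin.bmp", origin);
    return 0;
}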

Connected component labeling for 24-bit bitmap Image

Connected component labeling is an application where we search for pixels with the same pixel value and, when found, label them with the same label. The whole image is processed in this way, and pixels belonging to the same connected region end up with the same label. This is also known as blob extraction or region labeling. Connected component labeling is used in computer vision to detect connected regions in a binary or color image. Two kinds of connectivity are commonly used: 4-way and 8-way. In 4-way connectivity only the top, bottom, left and right neighbours are checked, while in 8-way connectivity all 8 neighbours are checked, and the lowest label among matching neighbours is assigned. Once the whole image has been traversed we get a labelled image; a small sketch of 4-way labeling is given below, followed by the labeled output.
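
A minimal sketch of 4-way connected component labeling on a binary image, using a simple flood fill per region rather than the classical two-pass method (the file name is a placeholder):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>
#include <queue>

int main()
{
    // Placeholder input: a binary image whose foreground pixels are 255.
    cv::Mat img = cv::imread("binary.png", 0);
    if (img.empty()) return -1;

    cv::Mat labels = cv::Mat::zeros(img.size(), CV_32S);   // 0 = unlabeled / background
    int nextLabel = 0;
    const int dr[4] = { -1, 1, 0, 0 };                      // 4-way neighbour offsets
    const int dc[4] = { 0, 0, -1, 1 };

    for (int r = 0; r < img.rows; r++)
    {
        for (int c = 0; c < img.cols; c++)
        {
            if (img.at<unsigned char>(r, c) != 255 || labels.at<int>(r, c) != 0)
                continue;

            // New foreground region: flood fill it with a fresh label.
            nextLabel++;
            std::queue<cv::Point> q;
            q.push(cv::Point(c, r));
            labels.at<int>(r, c) = nextLabel;
            while (!q.empty())
            {
                cv::Point p = q.front(); q.pop();
                for (int k = 0; k < 4; k++)
                {
                    int nr = p.y + dr[k], nc = p.x + dc[k];
                    if (nr < 0 || nr >= img.rows || nc < 0 || nc >= img.cols)
                        continue;
                    if (img.at<unsigned char>(nr, nc) == 255 && labels.at<int>(nr, nc) == 0)
                    {
                        labels.at<int>(nr, nc) = nextLabel;
                        q.push(cv::Point(nc, nr));
                    }
                }
            }
        }
    }

    printf("Found %d connected components.\n", nextLabel);
    return 0;
}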
                                          


Here is the output of the labeled pixels of an image. We can see here that the pixels with the same color have the same label.