Wednesday, January 21, 2015

Face Recognition

Chapter 1: Introduction

1. Preliminary

In this project, we implemented a face recognition system using principal component analysis (PCA). PCA provides a mathematical way to reduce the dimensionality of a problem.

Since most elements of a facial image are highly correlated, it is better to extract a set of interesting and discriminative features from the image. Mathematically speaking, we transform correlated data into uncorrelated data. To implement this transform, we employ linear algebra methods such as the SVD (Chapter 3). The main idea is to obtain eigenfaces such that every face can be regarded as a linear combination of them (Chapter 4). The face recognition problem then becomes a mathematical one: what is the linear combination of eigenfaces for a given face? In other words, it reduces the problem from 2D to 1D.


Chapter 2: Normalization

1. How to normalize images

Normalization is essentially a projection process. Because the detected face images come from different angles, we want to use the same coordinate system for all of them. This coordinate system is a 64 x 64 window in which we predetermined five positions as the facial feature locations. These five locations form \(\bar{F}\). Once we have \(\bar{F}\), we can obtain the affine transformation coefficients for each image, as shown in the following figure.

2. Transformation

For each feature, the affine transformation is computed using the following formula:

$$f_{i}^{P} = Af_{i} + b$$

where \(f_{i}\) is the feature location in the original image and \(f_{i}^{P}\) is its position in the normalized image. Since we already have \(f_{i}\), and \(f_{i}^{P}\) is predetermined, let \(f = (x, y)^{T}\); then the transformation can be written as:

$$x_{i}^{P} = a_{11}x_{i} + a_{12}y_{i} + b_{1}$$

$$y_{i}^{P} = a_{21}x_{i} + a_{22}y_{i} + b_{2}$$

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \quad
b = \begin{bmatrix} b_{1} \\ b_{2} \end{bmatrix}$$

Applying this to all five features, the transformation is given by:

$$
\begin{bmatrix}
x_{1} & y_{1} & 1 \\
x_{2} & y_{2} & 1 \\
x_{3} & y_{3} & 1 \\
x_{4} & y_{4} & 1 \\
x_{5} & y_{5} & 1 \end{bmatrix}
\begin{bmatrix}
a_{11} & a_{21} \\
a_{12} & a_{22} \\
b_{1} & b_{2} \end{bmatrix} =
\begin{bmatrix}
x_{1}^{P} & y_{1}^{P} \\
x_{2}^{P} & y_{2}^{P} \\
x_{3}^{P} & y_{3}^{P} \\
x_{4}^{P} & y_{4}^{P} \\
x_{5}^{P} & y_{5}^{P} \end{bmatrix}
$$

Normalization Process

At this point, the problem reduces to solving Ax = b in linear algebra. Since we have 10 equations for 6 unknowns, we solve it in the least-squares sense via the SVD (the pinv() function in MATLAB).
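As a minimal MATLAB sketch (the variable names F and Fp are ours, for illustration only), the solve looks like this:

% F  : 5x2 matrix of feature locations (x, y) in the original image
% Fp : 5x2 matrix of the predefined positions in the 64 x 64 window
M = [F, ones(5, 1)];     % 5x3 matrix with rows [x_i  y_i  1]
P = pinv(M) * Fp;        % 3x2 least-squares solution [a11 a21; a12 a22; b1 b2]
A = P(1:2, :)';          % 2x2 affine matrix [a11 a12; a21 a22]
b = P(3, :)';            % 2x1 translation vector [b1; b2]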

3. How to find \(\bar{F}\)

It is very important to obtain an appropriate \(\bar{F}\). We calculate \(\bar{F}\) by the following steps:

• Step 1: We initialize \(\bar{F}\) with the first image's feature locations as a starting estimate of the average over all images.

• Step 2: We find the transformation that projects \(\bar{F}\) onto the predefined positions in the 64x64 window. This affine transformation consists of the matrices A and b, with 6 parameters in total, and we apply it to \(\bar{F}\).


Normalized Picture

• Step 3: For each image's feature set \(F_{i}\), we compute the A and b matrices of the affine transformation that projects \(F_{i}\) onto \(\bar{F}\), and apply it to \(F_{i}\).

• Step 4: We update \(\bar{F}\) to the average of the projected feature locations computed in the previous step.

• Step 5: If the error \(\left\| \bar{F}_{t} - \bar{F}_{t-1} \right\|\) is greater than our threshold, we go back to Step 2; otherwise the estimate has converged. A rough sketch of this iteration is given below.
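The following MATLAB sketch of the iteration assumes the feature locations of the N training images are stored in a 5x2xN array feats, the predefined window positions in target, and that maxIter and tol are parameters we choose; it aligns each \(F_{i}\) to the current mean before averaging, which is our reading of Steps 3 and 4.

% feats  : 5x2xN array of feature locations for the N training images
% target : 5x2 matrix of the predefined positions in the 64 x 64 window
Fbar = feats(:, :, 1);                       % Step 1: start from the first image
for t = 1:maxIter
    % Step 2: project Fbar onto the predefined window positions
    P = pinv([Fbar, ones(5, 1)]) * target;
    Fbar = [Fbar, ones(5, 1)] * P;
    % Steps 3-4: align each image's features to Fbar, then average
    acc = zeros(5, 2);
    for i = 1:size(feats, 3)
        Fi = feats(:, :, i);
        Pi = pinv([Fi, ones(5, 1)]) * Fbar;  % affine map taking F_i to Fbar
        acc = acc + [Fi, ones(5, 1)] * Pi;   % projected feature locations
    end
    Fnew = acc / size(feats, 3);
    % Step 5: stop once the change in Fbar falls below the threshold
    if norm(Fnew - Fbar, 'fro') < tol
        Fbar = Fnew;
        break;
    end
    Fbar = Fnew;
end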

Chapter 3: Principal Component Analysis

We used PCA to transform our correlated data set into a smaller, uncorrelated one. PCA thus helped us reduce the large dimensionality of the data set. After finding \(\bar{F}\), we followed these steps:

We converted each 64 by 64 normalized image \(X_{i}\) into a 1x4096 row vector and constructed the matrix \(D_{p \times d}\) (p = number of images, d = 64x64 = 4096), in which each row corresponds to one \(X_{i}\). Then we chose k, between 50 and 100, as the number of largest eigenvalues to keep.

Before computing the PCA, we computed the covariance matrix of D (with the mean image subtracted from each row) as follows:

$$\Sigma = \frac{1}{p - 1}D^{T}D$$

Then we used MATLAB's svd() function to get the eigenvectors of the \(\Sigma\) matrix. This is the most time-consuming step of our training. We put the k eigenvectors corresponding to the k largest eigenvalues into \(\Phi\).

We calculated the PCA space by:

$$\phi_{i} = X_{i}\Phi$$

The PCA space represents each training image as a linear combination of eigenfaces; for any training image, we only need to know its combination coefficients. We can now store all our training images in the PCA space and search it efficiently for the closest matches.
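A condensed MATLAB sketch of this training stage follows (D, k, Phi, and W are illustrative names; we subtract the mean image before forming the covariance, as is usual for PCA):

% D : p x 4096 matrix, one vectorized normalized image per row
p  = size(D, 1);
mu = mean(D, 1);                   % mean image, 1 x 4096
Dc = D - repmat(mu, p, 1);         % mean-centered data
Sigma = (Dc' * Dc) / (p - 1);      % 4096 x 4096 covariance matrix
[U, ~, ~] = svd(Sigma);            % eigenvectors, sorted by eigenvalue
Phi = U(:, 1:k);                   % keep the k largest (k between 50 and 100)
W = Dc * Phi;                      % p x k matrix of feature vectors phi_i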

Chapter 4: Recognition

In the face recognition part, we created a function called "recognize_face()". It takes a test image name, e.g. 'Taner_1', and returns the test image itself, the 3 related images we found in order of accuracy, and an error flag (0 or 1) indicating whether the first image is correct.

In this function, we first use \(\bar{F}\) to normalize the image. Then we use \(\Phi\) to project it into our PCA space and obtain its feature vector \(\phi_{j}\). We then calculate the Euclidean distance between \(\phi_{j}\) and every training feature vector \(\phi_{i}\) and sort the results in ascending order of distance. We pick the top 3 rows and show them as the result of our face recognition algorithm. In essence, we use the eigenface idea, a set of eigenvectors, to recognize the face.
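A sketch of the distance computation inside recognize_face() might look like this in MATLAB, reusing the assumed names mu, Phi, and W from the training sketch above:

% x : 1 x 4096 row vector of the normalized test image
phi_j = (x - mu) * Phi;                       % feature vector in PCA space
diffs = W - repmat(phi_j, size(W, 1), 1);     % compare against all phi_i
d = sqrt(sum(diffs .^ 2, 2));                 % Euclidean distances
[~, idx] = sort(d, 'ascend');                 % closest matches first
top3 = idx(1:3);                              % indices of the 3 best matches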






Chapter 5: Experimental Result and Analysis

We use two images of each person as test images and the rest as training images. For each test image, we calculate the distance between its feature vector and those of the training images. Then we sort the training images by distance in ascending order. Since we have three training images for each person, we show the top three.

We were unable to get an ideal result in which all three images match; however, the result is acceptable if the first image matches. We compute the accuracy as the ratio of results in which the first image matches. The accuracies we obtained are shown in the following figures.


Accuracy of the Photographs

When you press the 'Start' button, we obtain 3 accuracies. The first shows how often the selected image is matched by related image 1 alone. The second shows how often it is matched by related image 1 or 2. The third shows how often it is matched by related image 1, 2, or 3. We got 50% accuracy. This accuracy might be improved with different parameters, especially more accurate photographs. We also tried this program with last year's photographs and got about 70% accuracy.
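For illustration, the three accuracies can be computed as follows in MATLAB (top3Labels and trueLabel are hypothetical names, not from our actual code):

% top3Labels : m x 3 labels of the best three matches for m test images
% trueLabel  : m x 1 ground-truth labels
hit1 = top3Labels(:, 1) == trueLabel;
hit2 = hit1 | (top3Labels(:, 2) == trueLabel);
hit3 = hit2 | (top3Labels(:, 3) == trueLabel);
acc1 = mean(hit1);    % first match correct
acc2 = mean(hit2);    % correct within the top 2
acc3 = mean(hit3);    % correct within the top 3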


Accuracy of the Photographs (This Year)

Accuracy of the Photographs (Last Year)

Chapter 6: Summary

This project helped us understand a great idea that PCA provides: transforming data from correlated to uncorrelated. From an image processing point of view, PCA is similar to the Fourier transform, which represents an image as a combination of principal spectral components; face recognition based on PCA likewise represents a face as a combination of eigenfaces. It is a powerful method for reducing the dimensionality of a problem.

We also gained a deeper understanding of linear algebra tools such as eigenvalues, eigenvectors, and the SVD. In particular, we came to understand why and how the SVD is used.

PROJECT LINK



