Efficient 3D Face Recognition System Based on PCA Using MATLAB
Prof. Yagnesh J. Parmar, Dr. Kalpesh H. Wandra
Abstract — This paper describes a face recognition system that overcomes the problem of changes in gesture and facial expression in three-dimensional (3D) range images. We propose a local-variation detection and restoration method based on two-dimensional principal component analysis (2DPCA). The depth map of a 3D facial image is first smoothed with a median filter to minimize local variations. The detected face region is cropped and normalized to a standard image size of 101x101 pixels, with the forefront nose point chosen as the image center. Facial depth values are scaled between 0 and 255 for translation- and scaling-invariant identification. 2DPCA is then applied to the resultant range data, and the corresponding principal (or eigen-) images are used as the characteristic feature vectors of the subject to find his/her identity in a database of pre-recorded faces.
The system's performance is evaluated on the GavabDB facial database. Experimental results show that the proposed method can identify subjects with different gestures and facial expressions in the presence of noise in their 3D facial images.
Keywords — 3D PCA, VRML, Eigenfaces, 3D Face
I. Introduction
A face detection and recognition system has to attribute a unique identity to each face by matching it against a large database of persons, even in the presence of image-acquisition problems such as camera distortion, noise, and low image resolution. Despite these rigid design specifications, the system must remain usable on contemporary computational devices; in other words, the processing involved should be efficient with respect to run-time and storage space. Although significant research effort has been devoted to human face recognition in the last decade, it remains a very challenging task. The major difficulties are due either to external changes (e.g., varying lighting conditions, different head poses, and occlusion) or to internal deformations (e.g., various emotional expressions, facial hair, and aging). While most research efforts have concentrated on recognizing a human face from two-dimensional (2D) images, three-dimensional (3D) approaches have recently been receiving more attention. Even though the latest 2D face recognition systems achieve good performance in constrained environments, they are still unable to deal with problems such as changes in head pose and illumination conditions. Since the human face is a 3D object whose 2D image projection is sensitive to these changes, utilizing 3D face information can improve recognition performance considerably. Further, 3D measurements help solve the scale, illumination, and rotation problems encountered in 2D analysis. Despite these advantages, the internal deformation problem still exists in 3D images. Thus, there is a need for a 3D model that deals with the aforementioned non-rigid variations.
Our objective in this research is to develop a face recognition system that overcomes the problem of changes in facial expression and gesture in 3D range images. Such changes arise from differences in facial expression from one image to another and remain the main source of errors in most existing 3D systems. The recognition error can be reduced by smoothing the images to compensate for these changes. To achieve this, each preprocessed range image is smoothed before feature extraction and matching.
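As a rough illustration of the preprocessing pipeline described above (median smoothing of the depth map, cropping a 101x101 window centered on the forefront nose point, and scaling depth values to 0-255), the following Python/NumPy sketch shows one possible implementation. The function name, the assumption that the nose-tip location is already known, and the edge-padding strategy are illustrative assumptions, not the authors' MATLAB code.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_depth_map(depth, nose_rc, size=101, window=9):
    """Smooth, crop, and normalize a raw depth map.

    depth   : 2D array of raw depth values
    nose_rc : (row, col) of the forefront nose point (assumed already detected)
    size    : side length of the output image (101 in the paper)
    window  : median-filter window (the paper reports window size 9)
    """
    # 1. Median filtering to suppress local variations and noise.
    smoothed = median_filter(depth.astype(float), size=window)

    # 2. Crop a size x size region centered on the nose tip.
    #    Edge padding keeps the crop inside bounds near image borders
    #    (an assumption; the paper does not specify border handling).
    half = size // 2
    r, c = nose_rc
    padded = np.pad(smoothed, half, mode="edge")
    crop = padded[r:r + size, c:c + size]

    # 3. Scale depth values to 0..255 for translation- and
    #    scaling-invariant identification.
    lo, hi = crop.min(), crop.max()
    return (crop - lo) / max(hi - lo, 1e-12) * 255.0
```

The result is a fixed-size, depth-normalized range image that can be fed directly into the 2DPCA feature-extraction stage.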
II. Background and Review of Past Work
Although the 2D intensity image has been the most popular, most widely used, and easiest modality for face recognition, it has the intrinsic problem that it cannot handle changes in illumination, facial expression, and pose. Recently, with new technology and the availability of 3D scanners, 3D face recognition has attracted a great deal of research, since it is more reliable and better able to cope with facial expression and illumination problems. A review of 3D face recognition research follows. Most 3D approaches to face recognition rely on both the depth and color information extracted from the face. M. Turk and A. Pentland used eigenfaces for recognition. Gordon extracted facial curvature features in two stages: high-level features and
low-level features. This kind of method usually depends on high-quality 3D data that can characterize delicate features. The research work in  presented a new approach to face recognition based on 3D point clouds by constructing 3D eigenfaces. The work in  presented a survey of 2D and 3D face recognition. In [6, 17, 19, 20], size-invariant PCA-based approaches are presented. In , a detailed comparison between existing approaches to 3D face recognition is given. Chang et al.  report a 92.8% recognition rate by performing PCA on the range images of 277 people; however, they perform manual normalization, which is not desirable in a real system. Lee and Milios  create an extended Gaussian image for each convex region in the image. Some existing 3D face recognition systems can handle deformations only when they arise from some form of known gesture. The presented work deals mainly with changes in facial expression and does not handle changes in target pose. Each individual is trained using 2DPCA, and minimum Euclidean distance is then used for matching; individual images are smoothed to overcome internal deformation. Moreno and Sanchez  report a 78% rank-one recognition rate on the GavabDB data set. The presented method achieves an 80.3% recognition rate on the same data set with eight principal component vectors and a median filter with window size 9.
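The 2DPCA training and minimum-Euclidean-distance matching mentioned above can be sketched as follows. This is a generic implementation of the standard 2DPCA formulation (eigenvectors of the image covariance matrix, with features Y = A X), not the authors' MATLAB code; the function names and array shapes are assumptions.

```python
import numpy as np

def train_2dpca(images, num_components=8):
    """Compute the 2DPCA projection matrix from training range images.

    images         : array of shape (M, h, w) of preprocessed depth images
    num_components : number of projection vectors (the paper uses 8)
    """
    centered = images - images.mean(axis=0)
    # Image covariance matrix G = (1/M) * sum_i (A_i - mean)^T (A_i - mean)
    G = np.einsum("mij,mik->jk", centered, centered) / len(images)
    # The eigenvectors of G with the largest eigenvalues form the
    # projection matrix X (one column per principal component vector).
    eigvals, eigvecs = np.linalg.eigh(G)
    return eigvecs[:, np.argsort(eigvals)[::-1][:num_components]]

def features(image, X):
    """Project an image onto the 2DPCA axes: Y = A X."""
    return image @ X

def identify(probe, gallery_feats, X):
    """Nearest-neighbor matching by minimum Euclidean (Frobenius) distance."""
    y = features(probe, X)
    dists = [np.linalg.norm(y - g) for g in gallery_feats]
    return int(np.argmin(dists))
```

In use, each enrolled subject's feature matrix is precomputed with `features`, and a probe image is assigned the identity of the gallery entry with the smallest distance.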