Research and Implementation of Face Recognition Methods (Translation)

Foreign literature collected by: Jin Ran, Electronics Class 11, Applied Technology College, Soochow University (student ID 1116405021)

International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 2, No. 3, July 2011. DOI: 10.5121/ijaia.2011.2305

Real Time Face Recognition Using AdaBoost Improved Fast PCA Algorithm

ABSTRACT
This paper presents an automated system for human face recognition against a real-time background, using a large homemade dataset of face images. The task is difficult because real-time background subtraction in an image is still a challenge. In addition, human face images vary widely in size, pose and expression, and the proposed system collapses most of this variance. AdaBoost with a Haar cascade is used to detect faces in real time, and a simple fast PCA together with LDA is used to recognize the detected faces. In our case, the matched face is then used to mark attendance in the laboratory. This biometric system is a real-time attendance system based on human face recognition, built on simple and fast algorithms and achieving a high accuracy rate.

KEYWORDS
Face recognition, Eigenface, AdaBoost, Haar Cascade Classifier, Principal Component Analysis (PCA), Fast PCA, Linear Discriminant Analysis (LDA).

1. INTRODUCTION
Over the last ten years or so, face recognition has become a popular area of research in computer vision, and it is one of the most successful applications of image analysis and understanding. Because of the nature of the problem, not only computer science researchers but also neuroscientists and psychologists are interested in it. The general opinion is that advances in computer vision research will give neuroscientists and psychologists useful insights into how the human brain works, and vice versa. Real-time face recognition for video and complex real-world environments has attracted tremendous attention, both for online attendance systems that record students attending class daily and for security systems based on face recognition. Automated face recognition is a challenging problem that has gained much attention over the last few decades. There are many approaches in this field, and many algorithms have been proposed to identify and recognize human faces from a given dataset. Recent developments have provided fast processing and high accuracy, and efforts are also being made to include learning techniques in this complex computer vision technology.

Many systems exist to identify and recognize faces, but they are not efficient enough to provide fully automated face detection, identification and recognition. A great deal of research is being carried out to increase the visual power of computers, so there is much scope for the development of vision systems. There are, however, difficulties along the way, such as developing efficient visual feature extraction algorithms and obtaining the high processing power needed for retrieval from a huge image database. An image is a complex high-dimensional matrix, and matrix operations on it are neither fast nor perfect. This directs us to handle huge image databases and to focus on new algorithms that are more real-time and more efficient while keeping accuracy as high as possible. Efficient and effective recognition of human faces from image databases is now a requirement. Face recognition is a biometric method for identifying individuals by the features of their face.
Applications of face recognition are spreading widely into areas such as criminal identification, security systems, and image and film processing. From the sequence of images captured by the capturing device, in our case a camera, the goal is to find the best match in the database. Using a pre-stored database we can identify or verify one or more identities in the scene. The general block diagram of a face recognition system has three main blocks: the first is face detection, the second is face extraction, and the third is face recognition. The basic overall face recognition model is shown in figure 1. Approaches to face recognition from still images can be categorized into three main groups: the holistic approach, the feature-based approach, and the hybrid approach [1], [2].

1.1 Holistic Approach: In the holistic approach, the whole face region is taken as the input to the face detection system in order to perform face recognition.

1.2 Feature-based Approach: In the feature-based approach, local features of the face such as the nose and eyes are segmented and then given to the face detection system, which simplifies the task of face recognition.

1.3 Hybrid Approach: In the hybrid approach, both the local features and the whole face are used as input to the face detection system. This is closer to the way human beings recognize faces.

This paper is divided into seven sections. The first section is the introduction; the second section is the problem statement; the third section reviews face recognition techniques in the literature; the fourth section presents the proposed method for feature extraction from a face image dataset; the fifth section covers the implementation; the second-to-last section shows the results; and the last section is the conclusion.

2. PROBLEM STATEMENT
The difficulties in face recognition are very real and arise naturally. A face image can suffer from head pose and illumination problems, and facial expression can also be a big problem. Hair style and aging can likewise reduce the accuracy of the system, and many other problems, such as occlusion (e.g., glasses or a scarf), can decrease performance. Mathematically, an image is a multi-dimensional matrix of values. An image can also be treated as a vector, having both magnitude and direction; it is then known as a vector image or image vector. If xi represents a p x q image reshaped into a vector and x is the matrix of image vectors, the image matrix can be represented as x = [x1^t, x2^t, ..., xm^t]^t, where t denotes the transpose and m is the number of images.
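As an illustration of this vector-image representation, the following minimal NumPy sketch (with an assumed image size and random stand-in images) flattens a set of p x q faces into the image matrix described above:

import numpy as np

p, q = 112, 92                                    # assumed image size; any p x q works
faces = [np.random.rand(p, q) for _ in range(5)]  # stand-ins for real grayscale face images

# Flatten each p x q face into a row vector of length p*q and stack the m images
# into the m x (p*q) image matrix x = [x1^t, x2^t, ..., xm^t]^t described above.
x = np.stack([face.reshape(-1) for face in faces])
print(x.shape)                                    # (5, 10304) for the sizes assumed here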
Identifying an occlusion such as glasses within such an image matrix is very difficult and requires new approaches that can overcome these limitations. The algorithm proposed in this paper successfully overcomes them. But before that, let us review the techniques that have been used in the field of face identification and face recognition.

3. FACE RECOGNITION TECHNIQUES
3.1. Face detection
Face detection is a technology for determining the locations and sizes of human faces in a digital image. It detects only the faces; everything else in the image is treated as background and is subtracted from the image. It is a special case of object-class detection or, in the more general case, a face localizer. Face-detection algorithms have focused on the detection of frontal human faces, and also address the multi-view face detection problem. The various techniques used to detect the face in an image are given below; a minimal usage sketch with a pretrained cascade follows the list.

3.1.1. Face detection as a pattern-classification task: Here face detection is treated as a binary pattern-classification task. That is, the content of a given part of the image is transformed into features, after which a classifier trained on example faces decides whether that particular region of the image is a face or not [3].

3.1.2. Controlled background: In this technique the background is still or fixed. Removing the background leaves only the face, assuming the image contains only a frontal face [3].

3.1.3. By color: This technique is fragile. Skin color is used to segment the color image and find the face. A drawback is that still background regions of the same color will also be segmented.

3.1.4. By motion: The face in the image is usually in motion, so calculating the moving area yields the face segment [3]. This too has disadvantages, as parts of the background may also be in motion.

3.1.5. Model-based: A face model can capture the appearance, shape, and motion of faces [3]. This technique uses the face model to find the face in the image; the model can be, for example, rectangular, round, square, heart-shaped, or triangular. It gives a high level of accuracy when combined with other techniques.
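As a concrete illustration of cascade-based detection (the approach adopted later in this paper), the following sketch uses OpenCV's pretrained frontal-face Haar cascade. The cascade file name, the image paths and the parameter values are illustrative assumptions, not part of the original system:

import cv2

# Load OpenCV's pretrained frontal-face Haar cascade; cv2.data.haarcascades points at the
# XML files bundled with the opencv-python package (availability depends on the install).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("lab_snapshot.jpg")            # hypothetical input frame from the camera
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # the cascade operates on a grayscale image

# detectMultiScale slides the cascade over the image at several scales and returns
# one (x, y, w, h) bounding box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("lab_snapshot_faces.jpg", img)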
3.2. Face Recognition
Face recognition is a technique for identifying a person's face in a still image or moving pictures, given a database of face images. The face is biometric information about a person. However, the face is subject to many changes and is more sensitive to environmental variation, so its recognition rate is lower than that of other biometric traits such as the fingerprint, voice, iris, ear, palm geometry and retina. There are many methods for face recognition and for increasing the recognition rate. Some of the basic, commonly used face recognition techniques are described below.

3.2.1. Neural Networks
The neural network learning algorithm called backpropagation is among the most effective approaches to machine learning when the data includes complex sensory input such as images, in our case face images. A neural network is a nonlinear network that adds features to the learning system, so its feature extraction step may be more efficient than the linear Karhunen-Loeve methods, which choose a dimensionality-reducing linear projection that maximizes the scatter of all projected samples [3]. Classification takes less than 0.5 seconds, but training takes an hour or more. Moreover, as the number of persons increases the computing expense becomes more demanding [5]. In general, neural network approaches encounter problems when the number of classes, i.e., individuals, increases.

3.2.2. Geometrical Feature Matching
This technique is based on a set of geometrical features extracted from the image of a face. The overall configuration can be described by a vector representing the position and size of the main facial features, such as the eyes, eyebrows, nose and mouth, and the shape of the face outline [5]. One of the pioneering works on automated face recognition using geometrical features was done by T. Kanade [5]. His system achieved a peak performance of 75% recognition rate on a database of 20 people using two images per person, one as the model and the other as the test image [4]. I. J. Cox et al. [6] introduced a mixture-distance technique that achieved a 95% recognition rate on a query database of 685 individuals, in which each face was represented by 30 manually extracted distances. First, the matching process used the information in a topological graph representation of the feature points; then, after compensating for the different centre locations, two cost values, the topological cost and the similarity cost, were evaluated. In short, geometrical feature matching based on precisely measured distances between features may be most useful for finding possible matches in a large database [4].

3.2.3. Graph Matching
Graph matching is another method used to recognize faces. M. Lades et al. [7] presented a dynamic link structure for distortion-invariant object recognition, which employs elastic graph matching to find the closest stored graph. The dynamic link architecture is an extension of neural networks. Faces are represented as graphs, with nodes positioned at fiducial points (e.g., the eyes and nose) and edges labeled with two-dimensional (2-D) distance vectors. Each node contains a set of 40 complex Gabor wavelet coefficients at different scales and orientations (phase and amplitude), called a jet. Recognition is based on labeled graphs [8]. A jet describes a small patch of grey values in an image I(x) around a given pixel x = (x, y); each node is labeled with a jet and each edge is labeled with a distance. Graph matching, that is, the dynamic link approach, is superior to the other recognition techniques in terms of rotation invariance, but the matching process is complex and computationally expensive.

3.2.4. Eigenfaces
The eigenface approach is one of the most thoroughly investigated approaches to face recognition [4]. It is also known as the Karhunen-Loeve expansion, eigenpicture, eigenvector or principal component approach. L. Sirovich and M. Kirby [9], [10] used principal component analysis to efficiently represent pictures of faces. Any face image can be approximately reconstructed from a small collection of weights for that face and a standard face picture, the eigenpicture; the weights are obtained by projecting the face image onto the eigenpictures. Mathematically, eigenfaces are the set of eigenvectors used in the computer vision problem of human face recognition: the principal components of the distribution of faces, i.e., the eigenvectors of the covariance matrix of the set of face images. Each face can be represented exactly by a linear combination of the eigenfaces [4]. The best M eigenfaces span an M-dimensional (M-D) space called the "face space", which is the same as the image space discussed earlier. Illumination normalization [10] is usually necessary for the eigenfaces approach. L. Zhao and Y. H. Yang [12] proposed a new method of computing the covariance matrix using three images, each taken under different lighting conditions, to account for arbitrary illumination effects when the object is Lambertian. A. Pentland and B. Moghaddam [13] extended the early work on eigenfaces to eigenfeatures corresponding to face components such as the eyes, nose and mouth. Eigenfeatures combine facial metrics (measuring distances between facial features) with the eigenface approach [11]. This method of face recognition is not much affected by lighting and gives similar results under different lighting conditions.
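The following NumPy sketch illustrates the basic eigenface computation described above, using the well-known small-matrix (snapshot) trick; it is a generic illustration under assumed variable names, not the specific fast PCA variant proposed in this paper:

import numpy as np

def compute_eigenfaces(X, k):
    """Return the mean face and the top-k eigenfaces of an (m, n) image matrix X.

    When the number of images m is much smaller than the number of pixels n, the
    eigenvectors of the small m x m matrix A A^t are computed and then mapped back
    into pixel space instead of diagonalizing the full n x n covariance matrix.
    """
    mean = X.mean(axis=0)
    A = X - mean                            # mean-centred images, one per row
    L = A @ A.T                             # small m x m surrogate of the covariance matrix
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]        # indices of the k largest eigenvalues
    U = A.T @ vecs[:, top]                  # columns of U are the eigenfaces, shape (n, k)
    U /= np.linalg.norm(U, axis=0)          # normalise each eigenface to unit length
    return mean, U

# A face x is then represented by its weight vector in "face space": w = U.T @ (x - mean).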
3.2.5. Fisherface
Belhumeur et al. [14] proposed the fisherfaces method, which uses PCA and Fisher's linear discriminant analysis to produce a subspace projection matrix very similar to that of the eigenspace method. It is one of the most successful and widely used face recognition methods. The fisherfaces approach takes advantage of within-class information: by minimizing variation within each class yet maximizing class separation, the problem of variation among images of the same face, such as different lighting conditions, can be overcome. However, fisherface requires several training images for each face, so it cannot be applied to face recognition applications where only one example image per person is available for training.

3.3. Feature Extraction Techniques
Facial feature extraction is necessary for a computer to identify an individual face. As facial features, the shapes of the facial parts are automatically extracted from a frontal face image. Three methods for facial feature extraction are given below.

3.3.1. Geometry-based
This technique was proposed by Kanade [15]: the eyes, the mouth and the base of the nose are localized using the vertical edge map. These techniques require thresholds which, given the prevailing sensitivity, may adversely affect the achieved performance.

3.3.2. Template-based
This technique matches the facial components to previously designed templates using an appropriate energy functional. Genetic algorithms have been proposed to achieve more efficient search times in template matching.

3.3.3. Color segmentation techniques
This technique makes use of skin color to separate the facial and non-facial parts of the image. Any non-skin-color region within the face is viewed as a candidate for the eyes and/or mouth.

Research and experiments on face recognition have continued for many decades, but there is still no single algorithm that performs perfect real-time face recognition under all the limitations discussed in the second section. In this paper, a new approach is proposed that overcomes these limitations to some extent, with very low complexity.

4. FACIAL FEATURE EXTRACTION
In many problem domains, combining one technique with other technique(s) often improves performance. Boosting is one such technique used to increase performance. Facial features are very important in face recognition and can be of different types: region [16], [17], key point (landmark) [18], [19], and contour [20], [21]. In this paper, the AdaBoost boosting algorithm with a Haar cascade classifier is used for face detection, and fast PCA and PCA with LDA are used for face recognition. These algorithms are explained one by one below.

4.1. Face Detection
4.1.1. AdaBoost: The Boosting Algorithm
AdaBoost is short for Adaptive Boosting, a widely used machine learning algorithm formulated by Yoav Freund and Robert Schapire. It is a meta-algorithm, an algorithm of algorithms, and is used in conjunction with other learning algorithms to improve their performance [24]. In our case, AdaBoost is combined with Haar features to improve the detection rate. AdaBoost is adaptive in the sense that each subsequent classifier is tweaked in favor of the instances misclassified by the previous classifiers, but it is very sensitive to noisy data and outliers. AdaBoost takes as input a training set S = {(x1, y1), ..., (xm, ym)}, where each instance xi belongs to a domain or instance space X, and each label yi belongs to a finite label space Y. In this paper we focus only on the binary case, where Y = {-1, +1}. The basic idea of boosting is to use a weak learner over the computed features to form a highly accurate prediction rule, by calling the weak learner repeatedly on different distributions over the training examples.
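The following self-contained sketch illustrates this boosting idea with simple threshold stumps as weak learners. It is a generic binary AdaBoost illustration under assumed names, not the Haar-feature cascade training used by the paper:

import numpy as np

def train_adaboost(X, y, rounds=20):
    """Minimal AdaBoost with axis-aligned threshold stumps; labels y must be in {-1, +1}."""
    m, n = X.shape
    w = np.full(m, 1.0 / m)              # start from a uniform distribution over the examples
    ensemble = []                        # (feature index, threshold, polarity, alpha) per round
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error under the current weights.
        for j in range(n):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] < t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = float(np.clip(err, 1e-10, 1 - 1e-10))
        alpha = 0.5 * np.log((1 - err) / err)          # weight given to this weak learner
        pred = s * np.where(X[:, j] < t, 1, -1)
        w *= np.exp(-alpha * y * pred)                 # boost the weights of misclassified examples
        w /= w.sum()                                   # renormalise to a distribution
        ensemble.append((j, t, s, alpha))
    return ensemble

def adaboost_predict(ensemble, X):
    """Sign of the weighted vote of all weak learners (+1 = face, -1 = non-face)."""
    score = sum(a * s * np.where(X[:, j] < t, 1, -1) for j, t, s, a in ensemble)
    return np.where(score >= 0, 1, -1)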
4.1.2. Haar Cascade Classifier
A Haar classifier is also a machine learning approach to visual object detection, originally given by Viola and Jones [23]. The technique was originally intended for face detection but can be used for any other object. The most important property of the Haar classifier is that it quickly rejects regions that are highly unlikely to contain the object. The core basis of Haar cascade object detection is the Haar-like features. Rather than using the intensity values of individual pixels, these features use the change in contrast values between adjacent rectangular groups of pixels [25]. The contrast variances between the pixel groups are used to determine relative light and dark areas. The various Haar-like features are shown in figure 2.a; the set of basic Haar-like features, from which the other features can be generated by rotation, is shown in figure 2.b. The value of a Haar-like feature is the difference between the sums of the pixel gray-level values within the black and the white rectangular regions, i.e.,

f(x) = Sum_black rectangle(pixel gray level) - Sum_white rectangle(pixel gray level)

Compared with raw pixel values, Haar-like features reduce the in-class variability and increase the out-of-class variability, making classification much easier. The rectangular Haar-like features can be computed rapidly using the "integral image". The integral image at location (x, y) contains the sum of the pixel values above and to the left of (x, y), inclusive:

ii(x, y) = sum of i(x', y') over all x' <= x and y' <= y

The sum of the pixel values within a rectangular region "D" can then be obtained from the integral image evaluated at the region's four corner points:

Sum(D) = ii(4) + ii(1) - ii(2) - ii(3)

where points 1, 2, 3 and 4 are the top-left, top-right, bottom-left and bottom-right corners of D, respectively. Using these Haar-like features, the face detection cascade can be designed as in figure 4 below. In this Haar cascade classifier an image region is classified as a human face only if it passes all the conditions f1, f2, ..., fn; if any condition fails at any stage, the region does not contain a human face.

Figure 4. The cascade classifier separating face and non-face.

4.2. Face Recognition
4.2.1. PCA and Fast PCA (Principal Component Analysis)
Face recognition is one of the nonintrusive biometric techniques commonly used for verification and authentication. Both local and global feature based extraction techniques [26] are available for face recognition.
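As a rough illustration of the recognition stage introduced here, the sketch below projects a probe image onto previously computed eigenfaces and matches it to the nearest enrolled face. All names are assumptions, and the LDA step that the paper combines with PCA before matching is omitted:

import numpy as np

def recognize(probe, mean, U, gallery_weights, labels):
    """Match a probe face against an enrolled gallery in the PCA subspace.

    `mean` and `U` are the mean face and eigenfaces from the sketch in section 3.2.4;
    `gallery_weights` holds the projection of every enrolled face (one row per image)
    and `labels` gives the identity of each row. Plain nearest-neighbour matching is
    used here, whereas the paper further combines the PCA weights with LDA first.
    """
    w = U.T @ (probe.reshape(-1) - mean)             # project the probe into face space
    d = np.linalg.norm(gallery_weights - w, axis=1)  # distance to every enrolled face
    best = int(np.argmin(d))
    return labels[best], d[best]                     # predicted identity and its match distance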