Human Face Classification using Neural Networks

Vidit Jain

  1. Motivation
  2. Related Work
  3. Methodology
  4. Results
  5. Conclusion
  6. Bibliography
  7. Documentation


1. Motivation

"Yaar, the old days were better, when the instructor used to call out the names and take attendance. With the advent of this new automated system, proxy attendance has become a dream."

Well, the day is not far when students will be talking like this as they come out of the lecture halls. Here, we present one attempt towards making this dream come true.

Positive identification of individuals is a basic societal requirement. In small tribes and villages, everyone knew and recognized everyone else; a stranger or a potential breach of security was easy to detect. In today's larger, more complex society, it is not that simple. In fact, as more interactions take place electronically, electronic verification of a person's identity becomes ever more important. Until recently, electronic verification took one of two forms: it was based on something the person had in their possession, like a magnetic swipe card, or on something they knew, like a password. The problem is that these forms of identification are not very secure: they can be given away, taken away, or lost, and motivated people have found ways to forge or circumvent such credentials. The ultimate form of electronic verification of a person's identity is biometrics: using a physical attribute of the person to make a positive identification.

There are already robust biometric techniques, such as fingerprinting, that can be used for human authentication. Why, then, face recognition?

In applications such as the surveillance and monitoring of, say, a public place, traditional biometric techniques fail: for obvious reasons, we cannot ask everyone to come and place a thumb on a scanner or do anything similar. We therefore need a system that, in some sense, works like the human eye to identify a person. Face recognition emerged as a field to cater to this need, drawing on observations from human psychophysics.

Several groups worldwide have tried different approaches to this problem, and many commercial products using one technique or another have found their way into the market. So far, however, no system or technique has shown satisfactory results in all circumstances, so a comparison of these techniques is needed. In this project, we carry out a comparative study of the performance of three face recognition methods: one based on artificial neural networks, one on eigenfaces, and one on the Active Appearance Model.

The problem of face recognition is often confused with the problem of face detection. The two are related but definitely not the same: detection is often done as a preprocessing step to obtain the position, within the image, of the face to be recognized. The difference is demonstrated in Fig. 1.


Figure 1.   (a) Face Detection        (b) Face Recognition

OUTPUT:  (a) There are nine faces detected in the image.
         (b) The image contains the face of Dr. Amitabh Mukherjee.

As of now, no database of Indian faces exists that can be used by researchers. As part of the project, we will therefore also develop a database of faces of subjects of Indian origin, and we will compare the performance of existing face recognition techniques on this Indian face database.


2. Related Work

Face recognition techniques can be broadly classified into four categories: eigenface-based, feature-based, Hidden Markov Model based, and neural network based algorithms. Implementations of these algorithms have been evaluated on different face databases; some well-known ones are the CMU, Olivetti Research Laboratory (ORL) and MIT face databases.

Eigenfaces for face representation were first used by Sirovich and Kirby, and the idea was later developed by Turk and Pentland for face recognition. Several techniques based on neural networks have been developed; the implementation by Lawrence, Giles, Tsoi and Back showed good results. Edwards, Taylor and Cootes used the Active Appearance Model to design a system for face identification. Some comparisons have also been made between eigenfaces and Fisherfaces, between eigenfaces and feature-based techniques, and so on.


3. Methodology

We have built a face database of people of Indian origin, trying to include variation across different age and regional groups. This database is freely available to researchers worldwide.

We have implemented the artificial neural network approach on top of a feature-based representation. We also compare the results with those obtained with eigenfaces (source code available) on the Indian face database. Studies have identified 22 critical features in a human face that together constitute a feature set usable for identification. However, not all 22 features can be obtained reliably once slight changes in orientation and expression are allowed, so in our implementation we use a reduced set of 8 of these features.

Some of these features are shown in the following figure.

The program first extracts the features from the input image and feeds them into a neural network, which outputs a value between 0 and 1. The network is a three-layer feedforward network (input, hidden and output layers) trained with back-propagation; we use an 8x8x1 architecture, i.e. 8 input units (one per feature), 8 hidden units and 1 output unit. During training, we construct one such network for every class. When an image is given for classification, its feature vector is fed into each of the networks, and the class whose network gives the maximum output is returned as the match.
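To make the scheme concrete, here is a minimal NumPy sketch of one 8-8-1 back-propagation network and the one-network-per-class voting rule. This is an illustrative reconstruction, not the project's actual source (which is linked below); the weight initialisation, learning rate and class/function names are our own assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FaceNet:
    """One 8-8-1 feedforward network: 8 input features, 8 hidden units, 1 output."""
    def __init__(self, rng):
        self.W1 = rng.normal(0.0, 0.5, (8, 8))  # input -> hidden weights
        self.b1 = np.zeros(8)
        self.W2 = rng.normal(0.0, 0.5, 8)       # hidden -> output weights
        self.b2 = 0.0

    def forward(self, features):
        h = sigmoid(features @ self.W1 + self.b1)
        return sigmoid(h @ self.W2 + self.b2)   # scalar in (0, 1)

    def train_step(self, features, target, lr=0.5):
        # One back-propagation step on squared error; target is 1 for the
        # network's own class, 0 for a negative example.
        h = sigmoid(features @ self.W1 + self.b1)
        y = sigmoid(h @ self.W2 + self.b2)
        dy = (y - target) * y * (1 - y)         # output-layer delta
        dh = dy * self.W2 * h * (1 - h)         # hidden-layer deltas
        self.W2 -= lr * dy * h
        self.b2 -= lr * dy
        self.W1 -= lr * np.outer(features, dh)
        self.b1 -= lr * dh
        return y

def classify(feature_vec, nets):
    """Feed the feature vector to every per-class network; the class whose
    network fires strongest is returned as the match."""
    scores = [net.forward(feature_vec) for net in nets]
    return int(np.argmax(scores))
```

A network trained with target 1 on its own class and 0 on negatives will dominate the argmax for inputs resembling its class.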

The source code for our system can be obtained here.


4. Results

As one of the outputs of this project, we have developed a database of faces of people of Indian origin; this is the first attempt of its kind. We have included 22 female and 33 male subjects, with 11 photographs of each individual in different orientations and with different expressions. The following image shows all the images merged into one.

 The following image sequence shows how the features are extracted.

Original image

After histogram equalization

Blurred image

Initial segmentation

After erosion

After dilation

Masked onto the original image

Final segmentation

Best-fit ellipse

Extracted features
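The preprocessing steps in the sequence above can be sketched with plain NumPy. This is an illustrative reconstruction, not the project's code: the 3x3 structuring element, the threshold value and the function names are our assumptions.

```python
import numpy as np

def hist_equalize(img):
    """Spread the grey-level histogram of an 8-bit image over the full range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalise to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

def _neighborhood(binary):
    """Stack the nine 3x3-shifted copies of a zero-padded binary image."""
    p = np.pad(binary, 1)
    h, w = binary.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def erode(binary):
    # A pixel survives only if its whole 3x3 neighbourhood is foreground.
    return _neighborhood(binary).min(axis=0)

def dilate(binary):
    # A pixel turns on if any pixel in its 3x3 neighbourhood is foreground.
    return _neighborhood(binary).max(axis=0)

def segment(img, threshold=128):
    """Crude initial segmentation followed by an erosion + dilation pass
    to remove speckle noise and restore the main blob."""
    binary = (img >= threshold).astype(np.uint8)
    return dilate(erode(binary))
```

Erosion followed by dilation (a morphological opening) removes isolated foreground pixels while largely preserving the face blob, which the ellipse fit then approximates.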

The performance, however, was not very good: we observed an accuracy of only around 65%, which we attribute to our incomplete feature set. Below, we compare our implementation with the eigenface implementation of Turk et al.

We tested with 10 classes (5 male, 5 female). Each network was trained with a subset of these images, including negative cases for which the network should output 0. Working on Pentium-II 500 MHz machines, training all 14 networks took around 50 hours (although some of them were trained within 2 hours). We tested with sets of 12 images and obtained the following results.

            Neural Network            Eigenface
        best match   2nd match   best match   2nd match
Set A       5            3           8            3
Set B       7            2           9            2
Set C       7            3           8            4

Similar results were obtained for a larger number of classes (14).

In the above graph, the x-axis shows the number of face images per person used for training, and the y-axis the number of successful matches. The graph shows that the recognition rate increases with the size of the training set, with the slope of the curve decreasing for larger values.

From these results, we can see that eigenfaces perform better than the feature-based approach, an observation also made in several other implementations. In fact, the eigenface approach is used in commercially available systems for automated face classification.
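For reference, the eigenface method being compared against can be sketched in a few lines of NumPy. This is a PCA-plus-nearest-neighbour sketch in the spirit of Turk and Pentland, not their code; the function names and the SVD route to the principal components are our choices.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) matrix of flattened training faces.
    Returns the mean face and the top-k eigenfaces (principal components)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # The right singular vectors of the centered data are the eigenvectors
    # of its covariance matrix, i.e. the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, eigenfaces):
    """Express a face as its k weights in eigenface space."""
    return eigenfaces @ (face - mean)

def recognize(face, mean, eigenfaces, gallery_weights, labels):
    """Nearest neighbour in eigenface space."""
    w = project(face, mean, eigenfaces)
    dists = np.linalg.norm(gallery_weights - w, axis=1)
    return labels[int(np.argmin(dists))]
```

Because recognition reduces to a distance computation in a low-dimensional space, eigenfaces also avoid the long per-class network training times reported above.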


5. Conclusion

We have developed a database that is free to use for any academic purpose. The implemented face recognition system can be used for authentication and classification applications. However, it cannot be used for surveillance applications, since it works only on frontal face images.

Limitations: The major limitation of feature-based approaches arises with images in which one or more features are not distinguishable. The approach also does not work for large changes in orientation or expression, and the initial training takes a long time for the networks to become stable.

Extensions: As mentioned earlier, the feature set we use is incomplete, so one possible extension would be to include more features. A separate system could be developed to locate the face regions in the input; in combination with the current implementation, this could serve as a content-based search engine for a directory of photographs of different people. Another useful extension would be an implementation on parallel machines, which would significantly reduce the time required to train the neural networks.


6. Bibliography

@misc{ zhao-face,
author = "W. Zhao and R. Chellappa and A. Rosenfeld and P. J. Phillips",
title = "Face Recognition: A Literature Survey",
annote = {This paper provides a critical survey of still- and video-based face recognition research. It also reviews the issues that are relevant from a psychological point of view. Some technical details of data collection and of the performance evaluation of face recognition algorithms are also discussed. Finally, suggestions are given on possible ways to overcome the limitations of the various approaches. This is a good introductory paper for beginners.}
}

@article{ pentland,
author = "M. Turk and A. Pentland",
title = "Eigenfaces for recognition",
journal = "Journal of Cognitive Neuroscience",
volume = "3",
number = "1",
pages = "71--86",
year = "1991",
annote = {This is one of the most cited papers in computer vision. In it, the authors describe the use of eigenfaces for representing faces, a representation that can then be used for recognition. The underlying mathematics is principal component analysis.}
}

@inproceedings{ brunelli92face, 
author = "Roberto Brunelli and Tomaso Poggio", 
title = "Face Recognition through Geometrical Features", 
booktitle = "European Conference on Computer Vision", 
pages = "792--800",
year = "1992", 
annote = {This paper describes how the different geometrical features of a human face can be used for identification.}
}

@misc{ edwards98interpreting,
author = "G. Edwards and C. Taylor and T. Cootes",
title = "Interpreting Face Images using Active Appearance Models",
text = "G.J. Edwards, C.J. Taylor and T. Cootes. Interpreting Face Images using Active Appearance Models. Proc. of the 3rd Int. Conf. on Automatic Face and Gesture Recognition, Nara, Japan, pp 300-305, 1998.",
year = "1998",
annote = {This paper demonstrates a statistical model of shape and grey-scale appearance for representing human faces. This model, known as the Active Appearance Model, is then used for interpreting and classifying faces.}
}

@article{ lawrence97face,
author = "Steve Lawrence and C. Lee Giles and A. C. Tsoi and A. D. Back",
title = "Face Recognition: {A} Convolutional Neural Network Approach",
journal = "IEEE Transactions on Neural Networks",
volume = "8",
number = "1",
pages = "98--113",
year = "1997",
annote = {This paper describes the implementation of a face recognition system using artificial neural networks. The system is a hybrid of a self-organizing map and a convolutional neural network.}
}

More information about face recognition can be obtained from the Face Recognition Homepage.


7. Documentation

Documentation (in the form of a README) can be obtained from here.