ABSTRACT: Face recognition technology automatically identifies a person from a digital image. It is used mainly in security systems. A face recognition system directly captures information about the shape of the face: its main advantage is that it identifies the distinguishing features of a face's surface, such as the contours of the eye sockets, nose, and chin. The technology can also be used in very dark conditions, and it can identify a face viewed from different angles. Compared with other biometric systems that use fingerprints or the iris, face recognition has the distinct advantage of being contactless: through face images we can identify a person from a distance, without touching or interacting with them. This report develops a socio-political analysis that bridges the technical and social-scientific literatures on FRT and addresses the unique challenges and concerns that attend its development, evaluation, and specific operational uses, contexts, and goals. It highlights the potential and limitations of the technology, noting the tasks for which it seems ready for deployment, the areas where performance obstacles may be overcome by future technological developments or sound operating procedures, and the issues that appear intractable. Its concern with efficacy extends to ethical considerations.
Image acquisition is the process of retrieving an image from a hardware-based source such as a high-resolution digital camera. The pre-processing step comprises a series of operations performed on the input image; it enhances the image, making it suitable for the next phase of operation. The operations performed during pre-processing include binarization, noise removal, etc. Binarization converts a grayscale or colour image into a binary image using a global (Otsu's method) or local thresholding technique. Image enhancement is achieved through the noise-removal operation. Feature extraction is a dimensionality-reduction technique that extracts the relevant information, called feature vectors, from the input data. Classification is the most important phase of any recognition system.
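As a minimal illustration of the global thresholding step mentioned above, the following sketch implements Otsu's method by exhaustively searching for the threshold that maximizes between-class variance. The tiny synthetic image is an assumption for demonstration, not data from the source.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0         # mean below t
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1    # mean above t
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Binarize a tiny synthetic image: dark background with a bright square
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200
t = otsu_threshold(img)
binary = (img >= t).astype(np.uint8)
```

For a clean two-mode histogram like this one, any threshold between the two intensity clusters yields the same (maximal) between-class variance, so the bright square is separated exactly.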
[Abin M Sabu, et al., 2018] surveys the optical character recognition techniques used for various character-recognition tasks. Optical character recognition is a technique in which scanned images or handwritten notes are converted into a digital format. It consists of several stages, including pre-processing, classification, post-acquisition, pre-level processing, segmented processing, post-level processing, and feature extraction. The paper gives an insight into the details of the various optical character recognition techniques. From the study of these techniques, it is clear that by varying the techniques used one can approach maximally accurate results. The paper is arranged in such a way that it provides the details of each technique as well as the results obtained by it. Various modern techniques have been introduced to remove noise and to recognize characters. Each step in optical character recognition is important, since the steps are interrelated; to obtain an accurate result, the output at each stage should be checked.
LPR systems normally consist of the following units:
Camera: takes an image of the vehicle from either the front or rear end.
Illumination: a controlled light that illuminates the plate and allows day and night operation. In most cases the illumination is infrared (IR), which is invisible to the driver.
Frame grabber: an interface board between the camera and the PC that allows the software to read the image information.
Computer: normally a PC running Windows or Linux. It runs the LPR application that controls the system, reads the images, analyzes and identifies the plate, and interfaces with other applications and systems.
Software: the application and the recognition package.
Hardware: various input/output boards used to interface with the external world (such as control boards and networking boards).
Database: the events are recorded in a local database or transmitted over the network. The data includes the recognition results and (optionally) the vehicle or driver face-image file.
Face detection is a computer technology, used in a variety of applications, that identifies human faces in digital images. It has been studied extensively for improving human-computer interaction (HCI). Face detection is the first step in all facial-analysis algorithms, such as face alignment, face modelling, face relighting, face recognition, and face verification and authentication. However, not all face detectors perform well in unconstrained scenes, owing to variations in image appearance such as pose, illumination, expression, and occlusion. Given the increasing importance of face detection applications such as video surveillance and personal authentication, detectors should locate faces in digital images and videos with high accuracy despite these challenges. In this paper, a detailed study is performed on the various face detection algorithms, analysing their advantages and disadvantages. Finally, a fusion algorithm is presented that aims to reduce the disadvantages of the well-known algorithms.
A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame. One way to do this is by comparing selected facial features from the image against a facial database. Security and authentication of a person is a crucial part of any industry, and face recognition is one of the many techniques used for it. Face recognition is an effective means of authenticating a person; an advantage of this approach is that it enables us to detect changes in the face pattern of an individual to an appreciable extent, and the recognition system can tolerate local variations in an individual's facial expressions. Hence facial recognition can be used as a key factor in crime identification and detection, mainly to identify criminals. There are several approaches to facial recognition, of which image processing, principal component analysis (PCA), and neural networks (NN) have been incorporated in our project; face recognition has many areas of application. Moreover, it can be categorized into face recognition, face classification, or sex determination. The system consists of a database of a set of facial patterns for each individual. Characteristic features called 'eigenfaces' are extracted from the stored images, using which the system is trained for subsequent recognition of new images.
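The eigenface idea above can be sketched with plain linear algebra: centre the training faces, take the leading principal components as "eigenfaces", and match a probe by nearest neighbour in the projected subspace. The random 8x8 "faces" and the number of components are illustrative assumptions, not the project's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 64))             # 10 synthetic training faces, 8x8 flattened

mean_face = X.mean(axis=0)
A = X - mean_face                    # centre the data

# Eigenfaces = principal components of the centred training set (via SVD)
U, S, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:5]                  # keep the top 5 components

# Project a probe face into the eigenface subspace, match by nearest neighbour
train_w = A @ eigenfaces.T           # training weights, one row per face
probe = X[3] + 0.01 * rng.random(64) # slightly perturbed copy of face 3
probe_w = (probe - mean_face) @ eigenfaces.T
match = int(np.argmin(np.linalg.norm(train_w - probe_w, axis=1)))
```

A lightly perturbed copy of a gallery face projects close to that face's weight vector, so the nearest-neighbour search recovers the correct identity.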
Over the past few decades, face recognition (FR) has been one of the most active research topics in computer vision and pattern recognition. Under controlled conditions, automatic FR systems have made significant progress, but they still give unsatisfactory results under unconstrained conditions such as varying pose, large illumination variations, and expression changes. In real-world scenarios, faces are easily occluded by the subject, actively or passively, for example by facial accessories such as masks, sunglasses, hats, and scarves worn for personal or traditional reasons. Sometimes objects in front of the face, such as a mobile phone, a hand, another person's face, food, or a pet, also occlude the face. Conditions such as self-occlusion, extreme illumination, and poor image quality (e.g., blurring and shadow) are further sources of occlusion in a face image. For criminal or security reasons, subjects may deliberately occlude their faces to hide their identity, which increases the difficulty of recognition when the subject's cooperation cannot be assumed. First, occlusions distort the discriminative facial features, so the distance in feature space between two face images of the same subject increases; recognition performance decreases because occlusion makes inter-class variations smaller than intra-class variations. Second, the recognition rate degrades when registration errors occur because the facial landmarks themselves are occluded.
U vs. O: For each subject, we select K = 1, 2, 3 and 4 unoccluded images respectively to form the gallery sets and use the other four images, with synthetic occlusions, as the probe set. Fig. 7 shows the recognition results for different values of K. The correct identification rates of all methods increase when more gallery images are available (i.e., a greater value of K). When there are multiple gallery images per class and no occlusion (level = 0%), HMM performs better than the supervised method SVM and the local-matching-based NBNN, but its performance is significantly affected by increasing occlusion. In addition, when K = 1, HMM performs worst among all methods, since there are not enough gallery images to train an HMM for each class. For NBNN and DICW, using the difference patch achieves better results than using the original patch (i.e., OP-NBNN and OP-Warp). In particular, by comparing DICW with OP-Warp, and NBNN with OP-NBNN, it can be seen that difference patches improve the results of DICW more significantly than those of NBNN. As introduced in Section II-A, the difference patches are generated from spatially continuous patches, so they enhance the order information within a patch sequence, which is compatible with DICW. With the optimal location weights, NBNN-ub and OP-NBNN-ub perform better than SVM. When K = 1, 2, 3 and 4, the average rates of DICW over the six occlusion levels are 2.3%, 4.3%, 5.5% and 4.4% better than those of NBNN-ub, respectively. When the occlusion level is 0%, SRC performs better than DICW; however, its performance drops sharply as the degree of occlusion increases. When K = 1, the Image-to-Class distance degenerates to the Image-to-Image distance. DICW, which allows time warping during matching, still achieves better results as the level of occlusion increases.
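The Image-to-Class distance mentioned above (as used by NBNN) can be sketched as follows: pool the patch descriptors of all gallery images of a class, and sum, over the probe's patches, the distance to the nearest pooled patch. The descriptors, patch counts, and class names below are synthetic, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each class pools the patch descriptors of all its gallery images
gallery = {"class_a": rng.random((20, 8)), "class_b": rng.random((20, 8))}

def image_to_class(probe_patches, class_patches):
    """Sum, over probe patches, of the distance to the nearest class patch."""
    d = np.linalg.norm(probe_patches[:, None, :] - class_patches[None, :, :], axis=2)
    return d.min(axis=1).sum()

# A probe built from class_a patches (plus small noise) should score lowest for class_a
probe = gallery["class_a"][:5] + 0.01 * rng.random((5, 8))
scores = {c: image_to_class(probe, p) for c, p in gallery.items()}
predicted = min(scores, key=scores.get)
```

With K = 1 the class pool shrinks to a single image's patches, which is why the text notes the Image-to-Class distance then degenerates to an Image-to-Image distance.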
2.1 Emotion recognition by facial expressions
Facial expressions give important clues about emotions; therefore, several approaches have been proposed to classify human affective states. The features used are typically based on the local spatial position or displacement of specific points and regions of the face, unlike audio-based approaches, which use global statistics of the acoustic features. Mase proposed an emotion recognition system that uses the major directions of specific facial muscles. With 11 windows manually located on the face, the muscle movements were extracted using optical flow. For classification, the K-nearest-neighbour rule was used, with an accuracy of 80% on four emotions: happiness, anger, disgust and surprise. Yacoob et al. proposed a similar method: instead of using facial muscle actions, they built a dictionary that converts motions associated with the edges of the mouth, eyes and eyebrows into a linguistic, per-frame, mid-level representation. Black et al. used parametric models to extract the shape and movements of the mouth, eyes and eyebrows. They also built mid- and high-level representations of facial actions using a similar approach, with 89% accuracy. Tian et al. attempted to recognize Action Units (AUs), developed by Ekman and Friesen in 1978, using permanent and transient facial features such as the lips, nasolabial furrows and wrinkles. Geometrical models were used to locate the shapes and appearances of these features; they achieved 96% accuracy. Essa et al. developed a system that quantified facial movements based on parametric models of independent facial muscle groups. They modelled the face using an optical-flow method coupled with geometric, physical and motion-based dynamic models, and generated spatio-temporal templates that were used for emotion recognition. Excluding sadness, which was not included in their work, a recognition accuracy of 98% was achieved.
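The K-nearest-neighbour rule used in Mase's system can be sketched with synthetic stand-ins for the optical-flow muscle features: 11-dimensional vectors clustered per emotion, classified by majority vote among the k closest training vectors. Everything below except the four emotion labels and the feature dimensionality is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
emotions = ["happiness", "anger", "disgust", "surprise"]

# 20 labelled training vectors per emotion, clustered around a per-class mean
means = {e: rng.random(11) * 10 for e in emotions}
train_x = np.vstack([means[e] + rng.normal(0, 0.3, (20, 11)) for e in emotions])
train_y = np.repeat(emotions, 20)

def knn_predict(x, k=5):
    """Majority vote among the k nearest training vectors (Euclidean distance)."""
    idx = np.argsort(np.linalg.norm(train_x - x, axis=1))[:k]
    labels, counts = np.unique(train_y[idx], return_counts=True)
    return labels[np.argmax(counts)]

pred = knn_predict(means["surprise"] + rng.normal(0, 0.3, 11))
```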
As an extension to the existing system, a method is proposed to perform gender recognition after face recognition. Recognizing human gender is important since people respond differently based on the gender of the person. Gender recognition is performed on the recognized face, as shown in Fig. 5. For that, features need to be extracted from the face image, so HOG features are extracted from it. The histogram of oriented gradients descriptor captures local object appearance and shape within an image, which can be described by the distribution of intensity gradients or edge directions. First, HOG features are obtained from the facial images of males and females. Thereafter an SVM classifier is trained with the male features and then with the female features, which concludes the training part. To test a facial image, the HOG feature is first computed for the image and then given to the SVM classifier, which predicts the gender of the person based on the HOG features.
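The core of the HOG descriptor above is a histogram of gradient orientations weighted by gradient magnitude. The sketch below is a deliberately simplified, single-histogram version (real HOG computes per-cell histograms with block normalization); the image size and bin count are illustrative assumptions.

```python
import numpy as np

def gradient_histogram(img, bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations."""
    gy, gx = np.gradient(img.astype(float))        # vertical, horizontal gradients
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180     # unsigned orientation in [0, 180)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)    # L2-normalize

# A vertical edge produces purely horizontal gradients (orientation near 0 degrees)
img = np.zeros((16, 16))
img[:, 8:] = 1.0
h = gradient_histogram(img)
```

In the full pipeline such histograms, concatenated over cells, form the feature vector handed to the SVM classifier.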
Abstract— The performance of a face verification system depends on many conditions, one of the most problematic being varying illumination. Making recognition more reliable under uncontrolled lighting is one of the most important challenges for practical face recognition systems. Our paper presents a simple and efficient preprocessing method that eliminates most of the effects of changing illumination and shadows while still preserving the essential appearance details needed for recognition. This preprocessing method runs before feature extraction and incorporates a series of stages designed to counter the effects of illumination variations, local shadowing, and highlights while preserving the essential elements of visual appearance. This paper proposes a robust face recognition system for uncontrolled illumination variation. The system consists of three phases: illumination-insensitive preprocessing, feature extraction, and score fusion. In the preprocessing stage, the illumination-sensitive image is transformed into an illumination-insensitive image; the system then combines multiple classifiers with complementary features instead of improving the accuracy of a single classifier. Score fusion computes a weighted sum of scores, where each weight is a measure of the discriminating power of the component classifier. The system demonstrated good accuracy in face recognition under different illumination conditions, and the method performs well on three sets widely used for testing under difficult lighting conditions: Extended Yale-B, the Face Recognition Grand Challenge Version 2 experiment (FRGC-204), and the FERET dataset. The experimental results showed that the illumination preprocessing methods significantly improve the recognition rate, and that preprocessing is a very important step in a face verification system.
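The score-fusion phase described above is a weighted sum, which can be sketched in a few lines. The classifier names, scores and weights below are made up for illustration; in practice each weight would reflect the component classifier's discriminating power (e.g. its validation accuracy).

```python
import numpy as np

def fuse_scores(score_lists, weights):
    """Weighted sum of per-classifier similarity scores for each gallery identity."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalize weights to sum to 1
    return (np.asarray(score_lists) * w[:, None]).sum(axis=0)

# Two hypothetical classifiers scoring three gallery identities for one probe
scores_clf1 = [0.2, 0.9, 0.4]                       # e.g. classifier on feature set A
scores_clf2 = [0.3, 0.7, 0.5]                       # e.g. classifier on feature set B
fused = fuse_scores([scores_clf1, scores_clf2], weights=[0.6, 0.4])
best_match = int(np.argmax(fused))
```

Combining complementary classifiers this way can outperform any single classifier, which is the design choice the paper argues for.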
Image enhancement is one of the simplest and most important areas of digital image processing. The core idea behind image enhancement is to bring out information that is obscured, or to highlight specific features according to the requirements of an application, such as changing brightness and contrast. Basically, it involves manipulating an image to obtain a result more suitable than the original for a specific application. Many algorithms have been designed for image enhancement, to change an image's contrast, brightness, and other such properties. Image enhancement aims to change the human perception of images. Image enhancement techniques are of two types: spatial domain and frequency domain. They have been widely used in many image processing applications where the subjective quality of images is important for human interpretation. Contrast is an important factor in any subjective evaluation of image quality. Contrast is created by the difference in luminance reflected from two adjacent surfaces; in other words, it is the difference in visual properties that makes an object distinguishable from other objects and the background. In visual perception, contrast is determined by the difference in colour and brightness between an object and other objects. Our visual system is more sensitive to contrast than to absolute luminance; therefore, we perceive the world similarly despite considerable changes in illumination conditions. Many algorithms for contrast enhancement have been developed and applied to image processing problems. If the contrast of an image is highly concentrated in a specific range, e.g. the image is very dark, information may be lost in areas that are excessively and uniformly concentrated. The problem is to optimize the contrast of an image so as to represent all the information in the input image.
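A standard answer to the concentrated-contrast problem above is histogram equalization, which spreads the occupied intensity range across the full scale via the cumulative distribution. This is a minimal global version; the very dark synthetic test image is an assumption for demonstration.

```python
import numpy as np

def equalize(gray):
    """Spread concentrated intensities across 0-255 using the cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]
    # Map each grey level through the normalized cumulative distribution
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# A very dark image: all intensities squeezed into [10, 40]
dark = np.uint8(np.linspace(10, 40, 256)).reshape(16, 16)
enhanced = equalize(dark)
```

After equalization the intensities span the full 0-255 range, making detail in the formerly dark region visible.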
There are two categories of spatial-domain processing: 1) Intensity transformations: operate on single pixels, e.g. contrast manipulation and image thresholding. 2) Spatial filtering: operates on a neighbourhood of every pixel in an image, e.g. image smoothing and image sharpening.
Bundle adjustment is based on the collinearity condition, which states that the perspective centre of the camera, a point on the photograph, and the corresponding point in object space lie on the same ray of the bundle. The rotation elements and the position of the camera's perspective centre at the moment of exposure (X0, Y0, Z0) are obtained by the least-squares method. In this paper, the exterior orientation elements of digital images acquired by the CCD camera are determined.
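For reference, the collinearity condition can be written in its standard photogrammetric form (the symbol convention below is the usual one and is an assumption, since the source does not give the equations: $f$ is the focal length, $(x, y)$ the image coordinates, $(X, Y, Z)$ the object-space point, $(X_0, Y_0, Z_0)$ the perspective centre, and $m_{ij}$ the elements of the rotation matrix built from the rotation angles):

```latex
\begin{aligned}
x &= -f\,\frac{m_{11}(X - X_0) + m_{12}(Y - Y_0) + m_{13}(Z - Z_0)}
              {m_{31}(X - X_0) + m_{32}(Y - Y_0) + m_{33}(Z - Z_0)},\\[4pt]
y &= -f\,\frac{m_{21}(X - X_0) + m_{22}(Y - Y_0) + m_{23}(Z - Z_0)}
              {m_{31}(X - X_0) + m_{32}(Y - Y_0) + m_{33}(Z - Z_0)}.
\end{aligned}
```

Linearizing these equations about approximate values of the six exterior orientation elements and iterating the least-squares solution is what the bundle adjustment does.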
In today's scenario, image processing is one of the fastest-growing fields. It is a method commonly used to improve raw images received from various sources, and it is a kind of signal processing. This paper provides an overview of image processing methods; its main concern is to describe the various techniques used in the different phases of image processing.
Administrators of educational institutions in our country and around the world are concerned about the regularity of student attendance, since it affects students' overall academic performance. The conventional method of taking attendance, by calling names or signing on paper, is very time-consuming and hence inefficient. This problem gave birth to research on radio-frequency identification (RFID) authentication combined with face processing and recognition, though in this paper we mainly highlight the face processing and recognition. The system is made up of a camera, which takes photos of individuals, and a computer unit, which performs face detection (locating faces in the image and removing the background information) and face recognition (identifying the persons). First, face images are acquired using a webcam to create the database. The face recognition system detects the location of the face in the image and extracts features from the detected faces. As a result of the feature extraction process, templates, or eigenfaces, are generated: a reduced set of data that represents the unique features of an enrolled user's face. These templates are stored in the database after the eigenface calculation. The basis of the eigenface calculation in this work is Principal Component Analysis (PCA), a method of projection to a subspace that is widely used in pattern recognition. The objectives of PCA are to replace correlated vectors of large dimension with uncorrelated vectors of smaller dimension and to calculate a basis for the data set. C# was used for serial communication, for image training, detection and recognition, for the application interfaces, and for connecting the other physical components.
At the end of this research work, we were able to achieve a classroom attendance system that uses the students' images for authentication while maintaining a high level of security and privacy, because one student can never take attendance for another.
Face recognition is a non-intrusive form of identifying a given face image and matching it against a set of faces in a database in order to validate a person. Today a number of face recognition algorithms are available, which can be attributed to the rigorous research done in this regard. Even so, face recognition remains an active research domain in computer science, owing to the inability, or limited functionality, of these systems in real-time applications. The major reason is the variation in faces due to changes in pose, age, goggles and other accessories, facial hair, illumination, etc.
The face is an important physical characteristic of the human body, and is widely used in many crucial applications, such as video surveillance, criminal investigation, and security access systems. Driven by realistic demands, such as usable face images in dark environments and criminal profiling, different modalities of face images have appeared, e.g. three-dimensional (3D), near-infrared (NIR), and thermal-infrared (TIR) face images. Thus, research involving various face image modalities has become a hot area. Most of it assumes the modality of the face images is known in advance, which imposes some limitations. In this thesis, we present approaches to face image modality recognition, to extend the possibilities of cross-modality research as well as to handle new modality-mixed face images. Furthermore, a large facial image database is assembled with five commonly used modalities: 3D, NIR, TIR, sketch, and visible-light spectrum (VIS). Based on the analysis of the results, a feature descriptor based on a convolutional neural network with a linear-kernel SVM achieved the best performance.
Abstract: This paper presents an overview of feature extraction methods for off-line recognition of segmented (isolated) digit/chratchter. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance in character recognition systems. Different feature extraction methods are designed for different representations of the digit/characters, such as solid binary characters, skeletons (thinned digit /characters), or gray level sub images of each individual character. Latest research in this area has been able to grown some new methodologies to overcome the complexity of Guajarati digit writing style. The recognition of handwritten digits which are written in proper way to easily readable. The problem is human can write digit in different styles so it is not identified by the computer but the some feature extraction methodologies like end point, junction point; straight lines etc. For features identification and character classification studied different algorithm and technique.
ABSTRACT: In the early stages of engineering design, pen-and-paper sketches are often used to quickly convey concepts and ideas. Free-form drawing is often preferable to using computer interfaces due to its ease of use, fluidity and lack of constraints. The objective of this project is to create a trainable recognizer for sketched Simulink components that classifies the individual components in an input block diagram. The recognized components are placed in a new Simulink model window, after which operations can be performed on them. Noise in the input image is removed by a median filter, segmentation is done by the K-means clustering algorithm, and recognition of the individual Simulink components in the input block diagram is done by Euclidean distance. The project aims to devise an efficient way to segment a control-system block diagram into individual components for recognition.
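Two steps of the pipeline above can be sketched compactly: median filtering, which removes impulse noise that a mean filter would only blur, and Euclidean nearest-template matching for recognition. The tiny 3x3-neighbourhood filter, the template shapes, and the symbol names ("gain", "sum") are made-up illustrations, not the project's actual components.

```python
import numpy as np

def median_filter(img):
    """Replace each pixel with the median of its 3x3 neighbourhood (edge-padded)."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    stack = [padded[1 + dy : 1 + dy + img.shape[0], 1 + dx : 1 + dx + img.shape[1]]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(stack), axis=0)

img = np.zeros((5, 5))
img[2, 2] = 255                      # isolated salt noise
clean = median_filter(img)           # the impulse is removed entirely

# Recognition: nearest template by Euclidean distance on flattened pixels
templates = {"gain": np.eye(4).ravel(), "sum": np.ones(16)}
segment = np.eye(4).ravel()          # a segmented component to classify
label = min(templates, key=lambda t: np.linalg.norm(segment - templates[t]))
```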