frontal face

Parallel Algorithm for Image Recognition and Retrieval from Human Frontal Face Image Database

The Image Search Interface provides the user with two ways of searching images: Searching Images by Image and Searching Images by Name. In Searching Images by Image, the user submits an input image containing at least one human frontal face and sets the accuracy percentage for the search. After this information is submitted, the proposed system detects and crops the human frontal face in the input image and calculates the histogram of the cropped face image. It then compares this histogram with the histogram of every cropped grayscale human frontal face image stored in the Human Frontal Face Image Database by applying the Parallel Algorithm. For detecting the human frontal face in the input image and for histogram calculation and comparison, the proposed system uses the OpenCV library. To prevent unwanted results, the system retrieves and shows only those database images whose histogram comparison result meets the user's accuracy percentage. In Searching Images by Name, the user searches for images of a particular person by submitting his or her first name, middle name, or last name; any one of these, or the full name, can be given as input. The proposed system then searches the Human Frontal Face Image Database for the given name by applying the Parallel Algorithm and retrieves and displays the images whose names match the input name.
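A minimal sketch of the search-by-image path described above, using OpenCV's bundled frontal face Haar cascade and histogram correlation. The cascade file, the correlation metric, and the mapping from accuracy percentage to a threshold are assumptions for illustration, not the authors' code.

```python
# Sketch: search-by-image with OpenCV face detection and histogram comparison.
# Assumption: the database is a list of (name, cropped grayscale face) pairs and the
# accuracy percentage maps directly to a correlation threshold.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_histogram(gray_face):
    """256-bin grayscale histogram, normalized so histograms are comparable."""
    hist = cv2.calcHist([gray_face], [0], None, [256], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def search_by_image(query_bgr, database, accuracy_percent):
    """Return database entries whose histogram correlation meets the threshold."""
    gray = cv2.cvtColor(query_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return []
    x, y, w, h = faces[0]                       # use the first detected frontal face
    query_hist = face_histogram(gray[y:y + h, x:x + w])
    threshold = accuracy_percent / 100.0
    matches = []
    for name, db_face in database:              # db_face: cropped grayscale face image
        score = cv2.compareHist(query_hist, face_histogram(db_face),
                                cv2.HISTCMP_CORREL)
        if score >= threshold:
            matches.append((name, score))
    return sorted(matches, key=lambda m: m[1], reverse=True)
```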

Eye Detection in Frontal Face Images

The intent of this project was to detect eyes in a given frontal face image. This is achieved using the circular Hough transform. Four different tests were proposed to detect the eyes, and the proposed algorithm is very efficient at detecting them. The approach has been tested on various images from a database. Experimental results show that this method works well for faces without spectacles. The proposed method can also be used to check whether the eyes are closed in a given frontal image.
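A minimal sketch of eye localization with the circular Hough transform via OpenCV's cv2.HoughCircles. The parameter values are illustrative assumptions, not those of the paper, and would need tuning to the face image resolution.

```python
# Sketch: circular Hough transform for iris/eye candidates in a frontal face image.
import cv2

def detect_eye_circles(gray_face):
    blurred = cv2.medianBlur(gray_face, 5)            # suppress noise before voting
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray_face.shape[1] // 8,
                               param1=100, param2=20,  # assumed edge/accumulator thresholds
                               minRadius=5, maxRadius=30)
    return [] if circles is None else circles[0]      # each entry: (x, y, radius)
```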

Detection of Eyes through Spectacle from Human Frontal Face Image using Open CV

ABSTRACT: Eye detection is an important aspect of many applications such as human eye recognition, pupil detection, eye tracking, and iris detection. Detecting eyes without spectacles is less complex than detecting eyes behind spectacles. The proposed system detects eyes that are covered by spectacles in a human frontal face image, using the predefined classes in Open Source Computer Vision and the pre-trained files in the EmguCV library. Open Source Computer Vision is a machine learning software library, originally developed by Intel's research center, which provides Haar-like features, cascade classifiers, etc. for image processing. The library is cross-platform and specially designed for real-time applications.
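A minimal sketch of the same idea in Python OpenCV, using the bundled eye-with-eyeglasses cascade; the paper itself works through EmguCV (the C# wrapper), so this is an illustration of the pre-trained classifier approach rather than the authors' implementation.

```python
# Sketch: detect eyes behind spectacles with OpenCV's pre-trained cascade.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye_tree_eyeglasses.xml")

def detect_eyes_with_glasses(gray_face):
    """Return bounding boxes (x, y, w, h) of eye regions, including eyes behind glasses."""
    return eye_cascade.detectMultiScale(gray_face, scaleFactor=1.1, minNeighbors=5)
```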

Co-Occurrence of Local Binary Patterns Features for Frontal Face Detection in Surveillance Applications

In this work, a type of features called Co-occurrence of Local Binary Patterns (CoLBP) features is proposed. The CoLBP features are used to implement a frontal face detector capable of achieving a high performance rate. This face detector is used for surveillance purposes; it is applied to low-resolution 2D information from a static camera mounted in a position where mostly frontal faces are captured. The proposed CoLBP features are based on the rotational LBP features [20]. This paper uses the rotational LBP at all possible resolutions in the examined scanning window to capture the maximum possible structure of the window that can be obtained with the rotational LBP operator. Unlike most of the known LBP feature extensions in [11, 21, 22], where the pixels of the examined scanning window are transformed to LBP values and the features are the histogram bins of those values, in this work the features are the LBP values of the pixels themselves, as explained in Figure 2. Hence, extracting one of the proposed CoLBP features requires computing a single pixel's rotational LBP value, whereas the histogram-based LBP features in [11, 21, 22] require computing all of the examined

Effect of Pre Presentation of a Frontal Face on the Shift of Visual Attention Induced by Averted Gaze

These results in sum suggested that the addition of the first-frame stimulus could capture the attention for visual information. Based on this effect (similar to a priming effect), RTs were shortened. If a frontal face was used as the first-frame stimulus, the processing of the face and attentional concentration may start simultaneously. Seemingly, the attention captured by the first-frame stimulus was kept on purpose even when the first-frame presentation duration was longer, because of expectation that the visual stimulus would change. However, when there is a change of head orientation between the first and second stimuli, re-processing is required for determination of the gaze direction. For this reason, the facilitation effect of RT is reset when the second-frame averted face is presented at a certain timing, possibly at the timing of the release of attention after the completion of the first-frame stimulus processing. When presenting the first-frame frontal face longer, we speculate that attention is re-focused on the stimulus via the top-down pathway, based on the participants' awareness that the second stimulus would come.

Extracting a Good Quality Frontal Face Images from Low Resolution Video Sequences

Face detection [1] and localization is usually the first step in many biometric applications. Face detection is defined as isolating human faces from their background and exactly locating their position in an image. Automating this process on a computer requires the use of various image processing techniques. Because faces appear at different scales and orientations and with different head poses, face detection is a challenging task. In our system, multiple frames are generated from the given video, and one or more faces are detected in those frames. For this purpose we use template matching and skin color information as the method of detecting faces.
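A minimal sketch of the skin-color cue mentioned above, implemented as a simple YCrCb threshold; the threshold values are a common rule of thumb, not the paper's, and the connected skin regions would still need to be verified (for example by the template matching step).

```python
# Sketch: skin-colour segmentation in YCrCb as a face-candidate cue.
import cv2
import numpy as np

def skin_mask(frame_bgr):
    """Binary mask of likely skin pixels in a video frame."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # assumed Cr/Cb skin range
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # clean up small speckles before extracting face candidates
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```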

A multi-cue spatio-temporal framework for automatic frontal face clustering in video sequences

To overcome hard situations due to long periods of undetected faces, we first propose to extend detections by using short-term backward and forward tracking. This algorithm provides additional space–time information to help in situations involving two detections distant in time. The tracking system is based on the estimation of the optimal position and size of the reference face (which is the patch of the detection in the present frame) at a further frame. The optimization procedure uses a cost function based on the appearance dissimilarity with the reference patch.
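A minimal sketch of this short-term tracking step, posed as a local search for the window in the next frame that minimizes an appearance-dissimilarity cost with the reference patch. The exhaustive position-only search and per-pixel squared-difference cost are simplifying assumptions; the paper optimizes position and size jointly.

```python
# Sketch: track a reference face patch by minimising appearance dissimilarity.
import numpy as np

def track_patch(next_frame, ref_patch, ref_xy, search_radius=16):
    """Return the (x, y) in next_frame whose window best matches ref_patch."""
    h, w = ref_patch.shape
    x0, y0 = ref_xy
    best_cost, best_xy = np.inf, ref_xy
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            x, y = x0 + dx, y0 + dy
            if x < 0 or y < 0 or y + h > next_frame.shape[0] or x + w > next_frame.shape[1]:
                continue
            candidate = next_frame[y:y + h, x:x + w].astype(np.float32)
            cost = np.mean((candidate - ref_patch.astype(np.float32)) ** 2)
            if cost < best_cost:
                best_cost, best_xy = cost, (x, y)
    return best_xy, best_cost
```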

Frontal Face Clustering In Video Surveillance

3. Face Detection: Multiple faces present in the video sequence are detected by means of the Viola-Jones algorithm. 3.1 Viola-Jones Detector: This framework, designed for rapid object detection, is based on the idea of a boosted cascade of weak classifiers [1] but extends the original feature set and provides different boosting variants for

Pose Regularization Based Automatic Multi-View Face Recognition Method

Unlike previous pose normalization approaches, where non-frontal face images were transformed into frontal images, the proposed 3D model based pose regularization method generates synthetic target images to resemble the pose variations in query images. We should point out that generating non-frontal views from frontal face images is much easier and more accurate than recovering frontal views from non-frontal face images. This is because it is difficult to automatically detect accurate landmarks under large pose variations, which are required to build a 3D face model. Additionally, since many areas of a face are significantly occluded under large pose variations, it is problematic to recover the frontal view for the occluded facial regions. The proposed pose regularization approach is similar to the novel view rendering based on 3D GEM, but the proposed method uses a simplified 3D Morphable Model [6]. Additionally, instead of aligning the synthetic target images and testing face images based on eye positions, we perform face alignment using Procrustes analysis under large pose variations. Moreover, our face matching method with blocked MLBP features provides better robustness against face illumination and expression variations. Finally, we show the expansibility of the proposed approach by replacing our MLBP based face matcher with two state-of-the-art face matching systems.
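A minimal sketch of the ordinary Procrustes alignment step mentioned above: given corresponding landmark sets, it solves for the similarity transform (scale, rotation, translation) that maps one onto the other. This is the textbook closed-form solution, not the authors' implementation.

```python
# Sketch: ordinary Procrustes analysis between two landmark sets.
import numpy as np

def procrustes_align(src, dst):
    """Find scale s, rotation R, translation t so that s * R @ src_i + t ~= dst_i."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # optimal rotation from the SVD of the cross-covariance matrix
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))                 # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()  # optimal isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```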

Graphical Authentication Mechanisms: A Survey

Eigenfaces is a well-known method for face recognition. Sirovich and Kirby [1] efficiently represented human faces using principal component analysis. M. A. Turk and Alex P. Pentland [2] developed a near real-time face recognition system using eigenfaces and the Euclidean distance. This research is focused on developing a computational model of emotion recognition that is fast, simple, and accurate in different environments. Therefore, in this paper we develop a technique to extract features from an intensity image of a human frontal face, represent those features using eigenfaces, and then demonstrate that the feature vectors obtained from the eigenfaces can easily be used with an SVM for emotion recognition.
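A minimal sketch of the eigenfaces-plus-SVM pipeline described above, using scikit-learn; the number of components and the SVM kernel are illustrative assumptions, not values from the paper.

```python
# Sketch: eigenface projection (PCA) followed by an SVM classifier.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_eigenface_classifier(face_vectors, labels, n_components=50):
    """face_vectors: (n_samples, n_pixels) flattened grayscale frontal faces."""
    model = make_pipeline(
        PCA(n_components=n_components, whiten=True),  # project onto eigenfaces
        SVC(kernel="rbf", C=10.0, gamma="scale"),     # classify emotion from the projection
    )
    model.fit(face_vectors, labels)
    return model
```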

An Advanced Method for Person Identification from Image Using HMM

As mentioned before, HMMs generally work on sequences of symbols called observation vectors, while an image is usually represented by a simple 2D matrix. When a one-dimensional HMM is used for face recognition, the recognition process is based on a frontal face view, in which the facial regions such as hair, forehead, eyes, nose, and mouth appear in a natural order from top to bottom. In this paper we divided the face images into seven regions, each of which is assigned to a state in a left-to-right one-dimensional HMM. Figure 1 shows these seven face regions.
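A minimal sketch of how such a top-to-bottom observation sequence can be formed: the face image is sliced into seven horizontal strips, one per HMM state. Reducing each strip to a flattened, normalized vector is an assumption for illustration; the paper's actual per-region feature extraction is not reproduced here.

```python
# Sketch: seven top-to-bottom observation vectors for a left-to-right 1D HMM.
import numpy as np

def face_observation_sequence(gray_face, n_regions=7):
    """Return a list of n_regions observation vectors, ordered top to bottom."""
    strips = np.array_split(gray_face, n_regions, axis=0)
    return [s.astype(np.float32).ravel() / 255.0 for s in strips]
```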

I. INTRODUCTION: The concept of the modular robotic system has attracted

In Fig. 9, we show examples of the motion of round or oval objects by octahedral module ABCDEF without (a) and with (b) a guide system. In this mode, octahedral module 1 (Fig. 2) moves within a closed surface and fixes the frontal face (ΔDEF, say). The end of the extended object (such as pipe, rod, or cable) is placed in the rear face ΔABC (Fig. 9, initial position), and linear drives 2 are turned on in reverse. The length of the rods in the rear face is reduced until the object is captured by force specified by the readings of the force sensors 4 at the radial limiters at the midpoints of the rods in rear face ΔABC. Then, at a command from control system 11, linear drives 2 are switched off, and the coordinates of points 7 are determined, in the basic coordinate system. After the object is fixed (Fig. 9, cycle 1), coordinated decrease in length of the rods in the lateral faces (ΔABD, ΔBDE, ΔBCE, ΔCEF, ΔACF, ΔADF) shifts the rear face ΔABC together with the object into the closed surface by some fixed distance, which is recorded with respect to the basic coordinate system (Fig. 9, cycle 2). Then the length of the rods in the rear face is increased until the radial limiters at the midpoints, together with the object, are completely released (Fig. 9, cycle 3) and the object is unable to move in the opposite direction (Fig. 9a, cycle 3). Then the length of the rods in the lateral faces is increased to its initial value (Fig. 9, initial position). Thereafter, the motion of the object is repeated as many times as is necessary, and the total length traversed at the end of the process is determined. Note that the motion of the object may be conducted without (Fig. 9a) or with (Fig. 9b) a guide system. Without a guide system, additional effort is required to prevent inverse motion of the object in cycle 3. With a guide system (Fig. 9b), that is unnecessary. If required, motion in a combination of modes 1 and 4 is possible. In that case, the object will move with simultaneous independent motion of octahedral module 1 within the closed system. In contrast to mode 4 (Fig. 9b), the length of the rods in the rear face ΔABC increases not until the object is released but until fixing of points 7 of the rear face ΔABC

Face recognition with variation in pose angle using face graphs

The previous chapter detailed some of the preliminary steps that were used for improving face recognition results across pose. Although the Feature Weighting scheme provided some improvement in recognition rates, it was insufficient. The Face Derotation scheme, introduced in the previous chapter, proved to be much less useful in its current state. This chapter introduces the implementation of a newly proposed mapping scheme focused on improving recognition of faces across poses by altering the feature descriptors, rather than altering the features themselves as was done in the Face Derotation scheme. Although the concept implemented here can work for comparison between faces for any known pose variation, the focus in this work is placed on trying to improve recognition rates of non-frontal faces against their frontal face matches. The following sections explain and evaluate this recognition scheme.

Perfect selfie that can be used in face recognition with a passport photo

To test the stability of our frontal face pose estimation, we used the OUR database [7] to compute the pose estimation errors. The OUR database contains 6660 images of 90 subjects. Each subject has 74 images, of which 37 were taken every 5 degrees from the right profile (defined as +90°) to the left profile (defined as -90°) in the pan rotation. The remaining 37 images were generated (synthesized) from the existing 37 images by flipping them horizontally with commercial image processing software. For two fast tests, only the images between -35° and 35° of twenty subjects were used, to stay well within the range in which face detection works. An important property of the frontal face pose detection is its margin of error for non-frontal face images. The first thing we are interested in is how many and what kind of errors our frontal face pose detection produces for purely 0° rotated frontal face images. The second thing we need to test is the relationship between the horizontal rotation of the face and the produced errors, which indicate whether the face is rotated or in a perfect frontal face pose.

Efficient Modified Normalized Pixel Difference Face Detection Algorithm

In this paper, a comparison has been made between the Viola-Jones, NPD, NPD2, and VNPD based face detection algorithms for detecting faces against a controlled background. Results show that the Viola-Jones algorithm is more efficient than the NPD, NPD2, and VNPD based face detection algorithms when frontal face images are used as input. For tilted and curved face images, the NPD based face detection and VNPD algorithms show almost the same good performance. However, none of these four algorithms gives very good results under different environmental conditions. The results can change depending on the dataset being analyzed, but in all cases it is possible to identify the presence of faces in the image. Based on the experimental results, it can be concluded that each of the four algorithms has its own advantages and disadvantages in different environments.
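For reference, a minimal sketch of the basic normalized pixel difference (NPD) feature on which the NPD-family detectors compared above are built: f(x, y) = (x - y) / (x + y) for a pair of pixel intensities, with f(0, 0) defined as 0. The NPD2 and VNPD variants build on this feature in ways not reproduced here.

```python
# Sketch: the normalized pixel difference feature for two pixel intensities.
def npd_feature(x, y):
    """Normalized pixel difference, bounded in [-1, 1]; scale-invariant to illumination."""
    if x == 0 and y == 0:
        return 0.0
    return (x - y) / (x + y)
```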

IoT Based Classroom Environment Monitoring System

The face recognition module of this classroom environment monitoring system is implemented with the local binary pattern (LBP) algorithm. Histogram intensity values are calculated with the local binary operator. The frontal face of each student is captured and used to train the system for authorization. The frontal face is captured at NxM width and height, then cropped to MxM dimensions to focus on the face image alone. The MxM image is converted from its original color image to a grayscale image, and noise is then removed from it; the noise present in the image may be blurring, Rayleigh, Gaussian, or salt-and-pepper noise, and an adaptive mean filter is applied for de-noising. The local binary operator is applied to the grayscale image in every region: the center pixel is considered and the remaining neighbor pixels are compared with it. If the center pixel is less than the neighbor pixel, the neighbor pixel value is replaced by '1', otherwise it is replaced by '0'. Intensity values are then calculated by translating the binary values to their equivalent decimal values in the range 0 to 255. From the intensity values, a probability density function (PDF) is calculated for duplication identification. An LBP operator example is shown in Fig. 2.
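A minimal sketch of the LBP step just described: each pixel's eight neighbors are compared with the center, the resulting binary pattern is read as a value in 0..255, and the normalized histogram of those values serves as the PDF. A plain Python loop is used for clarity rather than speed.

```python
# Sketch: per-pixel LBP codes and their probability density function.
import numpy as np

def lbp_image(gray):
    """LBP codes for the interior pixels of a grayscale image (2D uint8 array)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            center, code = gray[r, c], 0
            for bit, (dr, dc) in enumerate(offsets):
                if gray[r + dr, c + dc] >= center:
                    code |= 1 << bit
            codes[r - 1, c - 1] = code
    return codes

def lbp_pdf(gray):
    """Probability density function over the 256 possible LBP values."""
    hist = np.bincount(lbp_image(gray).ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```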

AVOID Accident Prevention of Vehicles by Observing Instantaneous Feeds of Driver Drowsiness

Driver fatigue is one the leading causes of car accidents in the world. The system aims to reduce the number of road accidents by detecting drowsiness and alerting the driver. The purpose of this paper is to develop a driver fatigue detection system. This system uses eye blinking frequency and yawning frequency of the driver to analyze drowsy driver and based on the level of drowsiness system will alert the driver. Driver’s facial features are captured by using a camera then this video input is used by system to monitor the driver's eyes to detect early stages of sleep as well as short periods of sleep; for video capturing system uses mobile phone camera making system portable and cost effective. Working of proposed system is based on the driver drowsiness detection method using Haar Cascade frontal face Classifier to detect the frontal facial structure followed by using dlib and Convolutional Neural Networks (CNNs) to extract information from sequence of images (video frames) for predicting driver fatigue.
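A minimal sketch of the eye-blink cue in such a pipeline: detect the frontal face, locate the 68 facial landmarks with dlib, and compute the eye aspect ratio (EAR), which drops toward zero when the eye closes. Using dlib's built-in frontal face detector (instead of the Haar cascade named above), the landmark model file name, and the EAR threshold are common defaults assumed here, not taken from the paper.

```python
# Sketch: eye aspect ratio from dlib 68-point landmarks for blink/drowsiness detection.
import dlib
from scipy.spatial import distance as dist

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def eye_aspect_ratio(eye):
    """eye: six (x, y) points; EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

def is_eye_closed(gray_frame, ear_threshold=0.2):
    for face in detector(gray_frame):
        shape = predictor(gray_frame, face)
        left = [(shape.part(i).x, shape.part(i).y) for i in range(42, 48)]
        right = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        return ear < ear_threshold
    return False
```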

Automated face detection for occurrence and occupancy estimation in chimpanzees

The face detection software detected videos containing chimpanzee frontal face views with an acceptably low rate of false positives.

Female and male orbit asymmetry: Digital analysis

In the examined group, handedness was determined by Oldfield's questionnaire, which is used to assess laterality. Based on the results, men and women were classified into groups of dextral and sinistral. Perceptive-motoric integration was assessed using the PMT test (Peg Moving Task). The width of the skull was measured on cephalometric radiographs performed using the X-ray technique in the norma frontalis position, by measuring the distance between the zygion points in the orbitomeatal line. The width of the face on both the left and the right side, and the differences between the right and the left (R-L) sides of the skull, were measured in relation to the median frontal line. Multidimensional analysis demonstrated that gender and handedness are important factors influencing the widths of both right and left sides of the face. Gender-handedness interaction was statistically irrelevant.

SIFT Based Face Recognition for Frontal and Profile Faces

Abstract: The process addresses frontal and profile faces, which is one of the challenging problems in face recognition. We know that the face is a natural authentication system which plays an important role in identifying a person. The system can be divided into three main components: preprocessing, feature extraction, and classification. In preprocessing, we mainly consider a specific region of interest of the face, depending on the landmarks considered. These features are given as input to the classifiers that recognize the face accordingly. The performance of the feature descriptors is compared using a variety of classification techniques to analyze the best database and the best.
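A minimal sketch of the feature-extraction component of such a pipeline, using OpenCV's SIFT implementation on a cropped face region of interest. The landmark-based cropping and classifier choice are not shown; only descriptor extraction is illustrated.

```python
# Sketch: SIFT keypoints and descriptors for a face region of interest.
import cv2

sift = cv2.SIFT_create()

def face_sift_descriptors(gray_face_roi):
    """Return SIFT keypoints and 128-dimensional descriptors for a face region."""
    keypoints, descriptors = sift.detectAndCompute(gray_face_roi, None)
    return keypoints, descriptors
```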