
Procedia Computer Science 45 ( 2015 ) 290 – 299

1877-0509 © 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Peer-review under responsibility of scientific committee of International Conference on Advanced Computing Technologies and Applications (ICACTA-2015).

doi: 10.1016/j.procs.2015.03.143


* Corresponding author. Tel.: +91 9449787043; E-mail: kmanikantan2009@gmail.com

International Conference on Advanced Computing Technologies and Applications

(ICACTA-2015)

Discrete Orthonormal Stockwell Transform based Feature

Extraction for Pose Invariant Face Recognition

Nithya B a, Y Bhavani Sankari a, Manikantan K a,*, S Ramachandran b

aDepartment of Electronics and Communication Engineering, M S Ramaiah Institute of Technology, Bangalore-560054, India

bDepartment of Electronics and Communication Engineering, S J B Institute of Technology, Bangalore-560060, India

Abstract

The statistical description of the face varies drastically with changes in pose. These variations make Face Recognition (FR) even more challenging. In this paper, four novel techniques are proposed, viz., Discrete Orthonormal Stockwell Transform (DOST), Entropy based image resizing, Denoising, and an Enhanced Training and Testing methodology, to improve the performance of an FR system. DOST is used for efficient Feature Extraction, and Entropy based image resizing calculates the Optimum Resizing Factor (ORF) to be used. Denoising is done through a combination of Deghosting, Intensity Mapped Unsharp Masking and Anisotropic Diffusion. Pose neutralization is achieved through the Enhanced Training and Testing methodology. Thus, a complete FR system for enhanced recognition performance is presented. Experimental results on four benchmark face databases, namely, CAS-PEAL R1, Color FERET, FEI and HP, illustrate the promising performance of the proposed techniques for face recognition.


Keywords: Face Recognition; Feature Extraction; Image Pre-processing; Stockwell Transform.

1. Introduction

Face Recognition (FR) is a field of study that has long caught the interest of researchers. While many techniques have been developed for this purpose [1], optimization and generalization still pose major problems. Any FR system includes pre-processing, feature extraction, feature selection and testing.



Pre-processing techniques can be either generic or specific. Fundamental pre-processing techniques are described in [2]. A feature extractor selects unique features of the face that are independent of pose, illumination, scale and other such factors. The features selected by the classifier during testing determine the efficiency of the FR system. Feature selection involves selecting an optimal subset of features from the given feature set [3, 4, 5]. Population-based optimization algorithms such as the Genetic Algorithm and Ant Colony Optimization provide better solutions [6, 7, 8].

Particle Swarm Optimization (PSO) based feature selection [9] is used to select the optimal feature subset; this technique gave excellent results on the ORL database. This paper proposes the use of the Discrete Orthonormal Stockwell Transform (DOST) as a feature extractor. The applications of the efficient Stockwell Transform to image processing have been explored in [10]. DOST also has good image compression ability and potentially gives a better approximation of the image than the Discrete Wavelet Transform (DWT) and the Fourier Transform (FFT) [10], with higher PSNR over baseline compression. DOST is also used in image texture characterization [11].

2. Problem Definition and Contributions

Lack of illumination, varied poses and expressions may lead to the failure of an FR system. Also, selection of a suitable feature extractor is difficult because a feature extractor must extract unique features of the face image independent of pose, illumination, expression and other factors.

i. DOST as feature extractor: Extrapolating the image compression results verified in [10], DOST is used as a feature extractor. It proves to be better than the Discrete Cosine Transform (DCT), FFT and DWT. DOST is also found to be independent of factors like varying pose, lighting, facial details and expressions.

ii. Entropy based image resizing: The Optimum Resizing Factor (ORF) is found using DOST with entropy as the metric. The anomalous behavior of DOST is exploited here to find the ORF, and the method remains the same for all databases. The resizing factor with the highest entropy is the ORF.

iii. Denoising: A technique called 'deghosting' is used for the CAS-PEAL R1 database to remove ghost pixel values. Other pre-processing steps, Intensity Mapped Unsharp Masking [12] and Anisotropic Diffusion, are used in a unique combination. These techniques remove noise and improve the recognition rate (RR).

iv. Enhanced Training and Testing methodology: This scheme exhaustively explores the various test possibilities to enhance the pose invariance of the FR system against the various galleries formed in the training stage.

3. Prior Art

A general FR system with PSO-based feature selection is described in [9].

3.1. Feature extraction

Feature extractors can be broadly classified into two categories: geometrical and statistical feature extractors [13, 14, 15, 16, 17]. The goal of feature extraction is to obtain a representative feature matrix or a computational model using a linear or a non-linear transform [9].

The performance of DCT, DWT [9] and PCA using eigenfaces [14, 31] as feature extractors has been explored by many researchers. The DCT uses inter-pixel redundancies to render decorrelation for images, while wavelet-based methods prune away noise and focus on the frequency components that contain relevant information [9].

3.2. BPSO-based feature selection

Particle Swarm Optimization (PSO) is a computational algorithm inspired by the social behavior of bird flocking or fish schooling [18]. The binary version of PSO (BPSO) searches for the optimal feature subset among the extracted features. Each particle is itself a possible solution. Evolution is guided by a fitness function defined in terms of a scatter index.

Let X_i^t be the present position of the i-th particle, with i varying from 1 to N (the number of particles), ω the inertia weight, c_1 and c_2 the acceleration constants, and rand_1 and rand_2 uniform random numbers in the range 0-1. The velocity and position are updated as shown by Eq. 1 and Eq. 2 respectively.

V_i^{t+1} = \omega V_i^t + c_1\,\mathrm{rand}_1\,(pbest_i - X_i^t) + c_2\,\mathrm{rand}_2\,(gbest - X_i^t)    (1)

X_i^{t+1} = X_i^t + V_i^{t+1}    (2)

where pbest_i is the particle best and gbest is the global best.

BPSO uses the probability function given in Eq. 3 for the position update:

\mathrm{rand}_3 < 1 / (1 + e^{-V_i^{t+1}})    (3)

The updated position, X_i^{t+1}, is set to 1 if the condition holds and 0 otherwise.
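As a sketch of the update rules above, the following Python fragment applies Eqs. 1-3 to a small swarm; the inertia, acceleration constants and velocity clamp are illustrative values, not ones reported in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bpso_step(X, V, pbest, gbest, w=0.9, c1=2.0, c2=2.0, vmax=4.0):
    """One BPSO iteration: Eq. 1 velocity update, Eq. 3 sigmoid position update.

    X: (N, D) binary positions, V: (N, D) velocities,
    pbest: (N, D) per-particle best positions, gbest: (D,) global best.
    """
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. 1
    V = np.clip(V, -vmax, vmax)           # common velocity clamp (assumption)
    prob = 1.0 / (1.0 + np.exp(-V))       # sigmoid of Eq. 3
    X = (rng.random(X.shape) < prob).astype(int)  # bit set with probability sigmoid(V)
    return X, V

X = rng.integers(0, 2, size=(5, 8))       # 5 particles, 8 candidate features
V = np.zeros((5, 8))
pbest = X.copy()
gbest = X[0].copy()
X1, V1 = bpso_step(X, V, pbest, gbest)
print(X1.shape, set(np.unique(X1)) <= {0, 1})
```

Each bit of a particle marks whether the corresponding extracted feature is kept in the subset.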

Fitness Function: The class scatter fitness function, F, is computed using Eq. 4:

F = \sqrt{\sum_{i=1}^{L} (M_i - M_o)^t (M_i - M_o)}    (4)

where M_i is the class mean, M_o is the grand mean and L is the number of classes.
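The class-scatter fitness of Eq. 4 can be computed directly from labelled feature vectors; the sketch below assumes the selected features are arranged as rows of a matrix:

```python
import numpy as np

def scatter_fitness(features, labels):
    """Class-scatter fitness of Eq. 4: spread of class means about the grand mean.

    features: (n_samples, n_dims) selected feature vectors; labels: class ids.
    """
    Mo = features.mean(axis=0)                   # grand mean M_o
    total = 0.0
    for c in np.unique(labels):
        Mi = features[labels == c].mean(axis=0)  # class mean M_i
        d = Mi - Mo
        total += d @ d                           # (M_i - M_o)^t (M_i - M_o)
    return np.sqrt(total)

# Two well-separated toy classes along the first dimension.
X = np.array([[0., 0.], [0., 2.], [4., 0.], [4., 2.]])
y = np.array([0, 0, 1, 1])
print(scatter_fitness(X, y))  # sqrt(4 + 4) ≈ 2.828
```

A larger F means the feature subset pushes the class means further apart, which is why BPSO maximizes it.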

3.3. Classifier

Mean Square Error is used for similarity measurement in some recognition systems [19, 20]. Here, the Euclidean distance [21] between the reference feature subsets and the test feature subsets is used for similarity measurement. For an N-dimensional space, the Euclidean distance D between two points p and q is given by Eq. 5:

D = \sqrt{\sum_{i=1}^{N} (p_i - q_i)^2}    (5)
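A minimal Euclidean nearest-neighbour classifier built on Eq. 5 might look as follows; the gallery, labels and test vector here are toy data, not drawn from any of the face databases:

```python
import numpy as np

def classify(test_vec, gallery, labels):
    """Return the label of the gallery entry nearest to test_vec (Eq. 5)."""
    d = np.sqrt(((gallery - test_vec) ** 2).sum(axis=1))  # Eq. 5 per gallery row
    i = int(np.argmin(d))
    return labels[i], d[i]

gallery = np.array([[1., 0., 0.],
                    [0., 1., 0.],
                    [0., 0., 1.]])
labels = np.array(["alice", "bob", "carol"])
subject, dist = classify(np.array([0.9, 0.1, 0.0]), gallery, labels)
print(subject, round(float(dist), 3))  # alice 0.141
```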

3.4. Stockwell Transform and DOST

The Stockwell Transform gives a full spatial-frequency decomposition of a signal [10]. Consider a one-dimensional signal h(t). The Stockwell transform of h(t) is defined as the Fourier transform of the product of h(t) and a Gaussian window function. Using the frequency-domain definition of the Stockwell transform, the Discrete Stockwell Transform can be written as in Eq. 6:

S[jT, n/(NT)] = \sum_{m=0}^{N-1} H[(m+n)/(NT)]\, e^{-2\pi^2 m^2 / n^2}\, e^{i 2\pi m j / N}    (6)

where H is the discrete Fourier transform of h.

For a signal of length N, there can be N² Stockwell coefficients, each taking O(N) time to compute; therefore, all N² coefficients take O(N³) time. This necessitates a pared-down version of the Stockwell transform, which is the DOST. The DOST basis vectors can be summed analytically as in Eq. 7:

S(kT)_{[\nu,\beta,\tau]} = \frac{i e^{-i\pi\tau}\left(e^{-i2\alpha(\nu-\beta/2-1/2)} - e^{-i2\alpha(\nu+\beta/2-1/2)}\right)}{2\sqrt{\beta}\,\sin\alpha}    (7)

where α = π(k/N − τ/β), τ indicates the center of the temporal window, and ν indicates the center of each frequency band (voice) of bandwidth β.

To make these basis vectors orthonormal, the parameters β, ν and τ must be selected appropriately. Mathematically, these basis vectors can be proved to be orthonormal.
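The dyadic band structure behind the DOST can be sketched with FFTs. The fragment below keeps the dyadic band partition (bandwidth β doubling with each voice) but deliberately omits the exact phase and centering factors of Eq. 7, so it is an energy-preserving illustration of the idea rather than the exact transform:

```python
import numpy as np

def dost_1d(x):
    """Simplified 1-D DOST sketch: dyadic frequency bands, local inverse FFT
    per band. With orthonormal FFTs the decomposition conserves energy, which
    mirrors the orthonormality of the true DOST basis; the analytic phase
    factors of Eq. 7 are omitted here.
    """
    N = len(x)                        # N must be a power of two
    X = np.fft.fft(x, norm="ortho")
    coeffs = [X[0:1]]                 # DC band, beta = 1
    f = 1
    while f < N:
        beta = f                      # band [f, 2f) has beta frequencies
        band = X[f:f + beta]
        coeffs.append(np.fft.ifft(band, norm="ortho"))
        f += beta
    return np.concatenate(coeffs)     # exactly N coefficients for length N

x = np.random.default_rng(1).standard_normal(64)
c = dost_1d(x)
print(len(c), np.allclose(np.sum(np.abs(c) ** 2), np.sum(x ** 2)))  # 64 True
```

Note how a signal of length N yields exactly N coefficients, which is the property exploited in the next section; computing them costs only a sequence of small FFTs rather than the O(N³) of the full Stockwell transform.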

4. Proposed Techniques

4.1. DOST as Feature Extractor

DOST can represent a signal of length N with N coefficients. The coefficients decay very quickly, making DOST a powerful tool for image compression. The DOST of an image is shown in Fig. 1. DOST extracts local orientation information and can be considered a windowed Fourier transform. The features extracted by DOST are shown in Fig. 3.

Any feature extractor must give a representative model of the image by using either a linear or a non-linear transform [9]. This can be verified by the PSNR obtained after extraction or compression: the feature extractor with the higher PSNR represents the image better. Experiments were conducted to compare the PSNR of DOST and DCT, as shown in Fig. 2. A CAS-PEAL R1 image was compressed using 90%, 75%, 50% and 25% of the coefficients column-wise, and the PSNR was calculated after reconstruction by DOST and DCT respectively.

As seen from Fig. 2, DOST exhibits anomalous behavior: the PSNR after compression using 75% of the coefficients is the highest, and the PSNR falls on either side of the graph. The PSNR of the DCT-compressed image is higher than that of the DOST-compressed image when 90% of the coefficients are used, but the latter remains higher for the other compression ratios. Therefore, DOST represents the image better than DCT at higher compression ratios, using fewer coefficients.

Another advantage of DOST over DCT is that it helps find the ORF, which is explained in detail in later sections. The anomalous behavior of DOST that leads to this result is that the entropy of the transform coefficients increases up to a certain resizing factor and then starts decreasing. The impact of resizing on coefficient entropy can be seen in Fig. 4.

Fig. 3 Figure depicting the steps involved in feature extraction by DOST, (a) CAS-PEAL R1 image, (b) coefficients extracted row wise, (c) coefficients extracted column wise, (d) DOST coefficients.

Fig. 1 Figure depicting DOST coefficients of an image, (a) CAS-PEAL R1 image, (b) DOST coefficients of the image.

Fig. 2 Figure comparing the PSNR of images obtained after compression by DOST and DCT.


Table 1 Comparison of DOST with other feature extractors based on RR.

A few databases with constraints like varying pose, lighting and expressions were considered and tested against the feature extractors FFT, DCT, DWT and DOST. The experiments were conducted on a basic FR system with 40% of the total images per subject as training images. The comparison of RR is shown in Table 1: DOST gives the best results and is therefore more efficient than the other feature extractors.

From the above discussion, the proposed feature extractor, DOST, is an efficient feature extractor.

4.2. Enhanced Training and Testing methodology

4.2.1. Training Scheme

Among the features extracted by DOST, BPSO selects the best feature subset of each training image and stores it in a gallery referred to as Face Gallery-1. The proposed training scheme involves the formation of two other face galleries from the randomly generated training images.

i. Gallery of flipped images: Flipping is performed on the non-frontal images. Each image is flipped along its vertical median, giving the opposite horizontal profile. The features of these images are extracted, selected by BPSO and stored in Face Gallery-2. Fig. 5 shows the comparison between the approximated and actual images.

ii. Gallery of approximated frontal images: RR is low when the testing images are mostly frontal while the training images are right- or left-profile images. Therefore, an approximation of the frontal image is obtained by flipping the right- or left-profile image and fusing it with the original image. The features of these approximated images are extracted, selected by BPSO and stored in Face Gallery-3. Fig. 6 shows an approximated frontal image from the CAS-PEAL R1 database.
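The flipping and fusing operations behind Galleries 2 and 3 reduce to simple array manipulations. Plain averaging is assumed as the fusion rule below, since the paper does not specify one:

```python
import numpy as np

def flipped(img):
    """Mirror an image about its vertical median (basis of Face Gallery-2)."""
    return img[:, ::-1]

def approx_frontal(img):
    """Approximate a frontal view by fusing a profile image with its mirror
    (basis of Face Gallery-3). Averaging is an assumed fusion rule."""
    return 0.5 * (img.astype(float) + flipped(img).astype(float))

img = np.array([[10, 20, 30],
                [40, 50, 60]], dtype=np.uint8)
print(flipped(img)[0].tolist())         # [30, 20, 10]
print(approx_frontal(img)[0].tolist())  # [20.0, 20.0, 20.0]
```

The averaged result is left-right symmetric by construction, which is what makes it a plausible stand-in for a frontal view.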

4.2.2. Testing Scheme

Testing gauges the efficiency of the trained system. The features, after extraction and selection, are compared with those contained in Face Gallery-1 using a Euclidean classifier. If the features match, the image is identified and recognition succeeds. If recognition fails, the test image can still be tested under a few further conditions [22].

The proposed testing scheme tests the image against the three face galleries in turn. The image is first tested against Face Gallery-1. If the image is not identified, it is tested for the second case: the image is flipped, its features are extracted and selected, and these are compared with the feature subsets in Face Gallery-2. If the image remains unidentified, an approximate frontal image is obtained as shown in Fig. 6; its features are extracted, selected and compared with those in Face Gallery-3. If the image still remains unidentified, testing ends. This scheme helps improve the RR for

Database       FFT      DCT      DWT      DOST
CAS-PEAL R1    43       42.5     43.1     43.75
Color FERET    75.47    70.23    66.47    80.47
HP             76.04    85.18    86.91    89.88
FEI            92.22    92.22    91.1     94.44

Fig. 4 Figure depicting the impact of resizing on the entropy of transform coefficients.


databases which contain varying pose images. Thus, the proposed Training and Testing methodology can be summarized as in Fig. 7.
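The three-stage cascade described above can be sketched as follows; `extract` stands in for the DOST + BPSO feature pipeline, and the distance threshold is a hypothetical parameter not taken from the paper:

```python
import numpy as np

def match(feat, gallery, labels, threshold):
    """Return the nearest gallery label, or None when no entry is close enough."""
    d = np.sqrt(((gallery - feat) ** 2).sum(axis=1))
    i = int(np.argmin(d))
    return labels[i] if d[i] <= threshold else None

def cascade_test(test_img, extract, galleries, labels, threshold=0.5):
    """Three-stage test: original vs Gallery-1, flipped vs Gallery-2,
    approximated frontal vs Gallery-3; stop at the first identification."""
    variants = [
        test_img,                                              # stage 1
        test_img[:, ::-1],                                     # stage 2: flipped
        0.5 * (test_img + test_img[:, ::-1].astype(float)),    # stage 3: approx frontal
    ]
    for variant, gallery in zip(variants, galleries):
        result = match(extract(variant), gallery, labels, threshold)
        if result is not None:
            return result
    return None  # recognition fails

# Toy pipeline: features are just row means.
extract = lambda im: im.mean(axis=1)
img = np.array([[1., 2.], [3., 4.]])
g1 = extract(img)[None, :]  # gallery holding this one subject
print(cascade_test(img, extract, [g1, g1, g1], np.array(["s1"])))  # s1
```

Each stage only runs when the previous one fails, so frontal test images usually never reach the flipped or fused variants.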

4.3. Pre-processing Techniques

i. Entropy based image resizing: The images are successively resized to half their size and the DOST of each resized image is taken. The entropies of these coefficient sets are calculated, and the resizing factor that gives the highest entropy is taken as the Optimum Resizing Factor (ORF). This process follows the flow of the FR system and is therefore an efficient method to calculate the ORF. Fig. 9 shows the steps involved in finding the ORF. The ORFs found for the different databases are listed in Table 2.
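The ORF search can be sketched as repeated halving plus an entropy measurement. In this sketch a 2-D FFT stands in for the DOST (which has no standard library implementation) and block averaging stands in for image resizing, so the selected factor is illustrative only:

```python
import numpy as np

def shannon_entropy(coeffs):
    """Shannon entropy of the normalised transform-coefficient magnitudes."""
    p = np.abs(coeffs).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def halve(img):
    """Resize by 0.5 using 2x2 block averaging (stand-in for proper resizing)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def find_orf(img, steps=4):
    """Successively halve, transform, and keep the factor whose coefficients
    have the highest entropy (the ORF). FFT stands in for DOST here."""
    best_entropy, orf, factor = -1.0, 1.0, 1.0
    cur = np.asarray(img, dtype=float)
    for _ in range(steps):
        cur = halve(cur)
        factor /= 2.0
        e = shannon_entropy(np.fft.fft2(cur))
        if e > best_entropy:
            best_entropy, orf = e, factor
    return orf

img = np.random.default_rng(2).random((64, 64))
print(find_orf(img) in (0.5, 0.25, 0.125, 0.0625))  # True
```

Because the search reuses the same transform as the FR pipeline, no extra machinery is needed to pick the working image size.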

Table 2 Optimum Resizing Factor of images found using DOST and entropy.

Database       Image size   ORF      Optimum image size
CAS-PEAL R1    360×480      0.125    45×60
Color FERET    256×384      0.125    32×48
HP             384×288      0.125    48×36
FEI            640×480      0.0625   40×30

Fig. 5 Comparison between the flipped and actual images, (a) Right profile, (b) Approximated left profile, (c) Left profile.

Fig. 6 Approximated frontal image obtained from non-frontal image, (a) Original image, (b) Approximated frontal image.

Fig. 8 Figure depicting highest RR for CAS-PEAL R1 image resized using ORF.

Fig. 7 Flowchart that summarizes the proposed training and testing methodology, PP-Pre-processing, FE-Feature Extraction.


The CAS-PEAL R1 database was tested against the FR system described in [9], with DOST as the feature extractor, for various resizing factors. The results are shown in Fig. 8: the FR system gives the highest RR for 45×60, which is in agreement with the ORF found.

ii. Denoising: Denoising is done through a combination of Deghosting, Intensity Mapped Unsharp Masking and Anisotropic Diffusion. Deghosting, used only for the images of the CAS-PEAL R1 database, removes ghost pixel values and is mainly applied to images with low contrast, variable background intensities and noise.

Noise is removed by applying a mean filter to the original image. The window size of the mean filter influences the RR, as shown in Fig. 10; the window size which gives the best RR is selected. The average gradient magnitude of the image is obtained using Sobel's edge operator. A threshold value is chosen and compared with the gradient of the edge pixels; edge pixels whose gradient is less than the threshold are discarded.
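The mean-filter and Sobel-threshold steps can be sketched in plain NumPy. Using the average gradient as the threshold is an assumption, since the paper leaves the exact threshold choice open:

```python
import numpy as np

def mean_filter(img, window=3):
    """Mean (average) filter via an edge-padded sliding-window sum."""
    pad = window // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(window):
        for dx in range(window):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (window * window)

def sobel_magnitude(img):
    """Gradient magnitude using the 3x3 Sobel operators."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    p = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            gx += kx[dy, dx] * win        # horizontal gradient
            gy += kx.T[dy, dx] * win      # vertical gradient (transposed kernel)
    return np.hypot(gx, gy)

img = np.random.default_rng(3).random((32, 32))
smooth = mean_filter(img, window=3)       # window size tuned by RR (cf. Fig. 10)
grad = sobel_magnitude(smooth)
mask = grad >= grad.mean()                # keep edges above the average gradient
print(smooth.shape, mask.dtype)           # (32, 32) bool
```

In practice the window size would be swept and the value giving the best RR kept, as the text describes.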

A combination of the pre-processing techniques is chosen based on RR. Fig. 11 shows the RR for various pre-processing techniques on the CAS-PEAL R1 database, resized by its ORF and with 40% of the total images used for training. Intensity Mapped Unsharp Masking (IMUM) [12] was used to enhance the outline of the face and reduce the background intensities. Anisotropic Diffusion (AD) reduces image noise without altering the edges or other subtle details of the image.

5. Customization of Databases

The following databases were customized and experiments were conducted on the customized databases. A few images from these databases are shown in Fig. 14.

i. CAS-PEAL R1 Database: This database [23], a subset of the CAS-PEAL database, contains 30,900 images of 1,040 subjects. Each subject has 21 images with different poses. Out of these, 20 subjects were selected

Fig. 9 Figure depicting the steps involved in finding the ORF.

Fig. 10 Figure depicting the variation of RR with window size of average filter.

Fig. 11 Figure depicting RR for various pre-processing techniques, PP-pre-processing, AD- Anisotropic Diffusion, IMUM- Intensity Mapped Unsharp Masking.


and each is customized to contain the first 20 pose variations. The images are of size 360×480 and are in the ‘.tif’ format. ORF is 0.125 as shown in Fig. 12. The pre-processing techniques used were Entropy based image resizing and Denoising.

ii. Color FERET Database: This database [24] contains 1,564 sets of images, a total of 14,126 images covering 1,199 subjects. The customized database contains 35 random subjects, each with 20 images: 2 each from fa, fb, hl, hr, pl, pr, ql and qr, and any 4 of ra, rb, rc, rd, re ('fa' and 'fb' are frontal images; 'hl' and 'hr' half-left and half-right profiles; 'pl' and 'pr' profile left and profile right; 'ql' and 'qr' quarter-left and quarter-right profiles; 'ra'–'re' random orientations). The images are of size 256×384 and are in the '.ppm' format. The ORF for the Color FERET database is found to be 0.125. The pre-processing techniques used were Entropy based image resizing, Intensity Mapped Unsharp Masking and Anisotropic Diffusion.

iii. HP (Head Pose) Database: The database [25] consists of 15 subjects, a total of 2,790 images, with variations in tilt and pan angles from −90° to +90°. The database is customized to contain 15 subjects with 30 images per subject. The images vary in pose from −90° to +90° (pan), from 5° (tilt), −90° (pan) to 5° (tilt), +90° (pan), and from 0° (tilt), −90° (pan) to 0° (tilt), −45° (pan). The images are of size 384×288 and are in the '.jpg' format. The ORF for the HP database is found to be 0.125. The pre-processing techniques used are Entropy based image resizing, Intensity Mapped Unsharp Masking and Anisotropic Diffusion.

iv. FEI Database: This database [26] consists of 200 subjects with 14 images each, a total of 2,800 images. All images are upright and frontal, with profile rotation of up to 180°. The database is customized by taking random original-sized images of 20 subjects, with 10 pose-variant images per subject. The images are of size 640×480 and are in the '.jpg' format. The ORF for the FEI database is found to be 0.0625, as shown in Fig. 13. The pre-processing techniques used are Entropy based image resizing, Intensity Mapped Unsharp Masking and Anisotropic Diffusion.

6. Experimental Results

The FR system in [9] was put to the test with various feature extractors and the proposed feature extractor, DOST. DOST gives the best RR for all databases, as shown in Fig. 15.

The proposed techniques were tested on the customized CAS-PEAL R1, Color FERET, HP and FEI databases, taking 10%, 40%, 60% and 90% of the total images per subject as training images. Table 4 gives the RR for the various training-to-testing image ratios.

The proposed FR system was trained with less than 50% of the images, i.e., 40% of the total images, as shown in Table 3. A comparison of the proposed technique with other FR systems is shown in Table 5. The performance was similar to that of PCA [31] on the ORL database, giving ~100% RR.


Table 3 Results for FR system trained with 40% of total images per subject.

Table 4 RR for various training to testing image ratios.

Table 5 Comparison with other FR systems.

Database       Tr:Test   Technique             Recognition Rate (%)
CAS-PEAL R1    8:12      DCT+PSO FS            42.5
                         DWT+PSO FS            43.7
                         FFT+PSO FS            42.92
                         IMUM+DWT [12]         58.16
                         Proposed Technique    62.92
Color FERET    8:12      DDFFE+ThBPSO [27]     86.81
                         K-means [28]          86.14
                         SIET [29]             85.41
                         UMC [30]              87.35
                         Proposed Technique    90.38
FEI            4:6       DCT+PSO FS            92.22
                         DWT+PSO FS            91.11
                         FFT+PSO FS            92.22
                         IMUM+DWT              81.5
                         Proposed Technique    97.75
HP             3:27      DCT+PSO FS            85.18
                         DWT+PSO FS            86.91
                         FFT+PSO FS            76.04
                         IMUM+DWT              84.59
                         Proposed Technique    96.07

7. Conclusions

Novel techniques for an improved FR system are proposed using a unique combination of the Discrete Orthonormal Stockwell Transform (DOST), Entropy based image resizing, Denoising and an Enhanced Training and Testing methodology. These play a key role in efficient feature extraction and, together with binary particle swarm optimization based feature selection, result in high recognition rates. The techniques were tested on four face databases, namely CAS-PEAL R1, Color FERET, FEI and HP. The experimental results indicate that the proposed method successfully handles pose variations, with an average recognition rate of 97.75% for the FEI database at a training-to-testing ratio of 4:6.

Reducing the number of features used for analysis also decreases training and testing times, thereby improving the speed of the FR system. On a PC with an Intel Core i5 1.5 GHz CPU and 3 GB RAM, the proposed technique yields an average testing time of 6.64 ms for the Color FERET database with a training-to-testing ratio of 8:12, using MATLAB. The experimentation is not without constraints: the testing images were selected from the same database as the training set. Apart from the deghosting method used in this paper, applying other image pre-processing techniques, such as background removal and geometric normalization, is expected to improve the results further.

Database       Tr images (%)   Tr:Test   Avg RR (%)
CAS-PEAL R1    10              2:18      39.16
               40              8:12      62.04
               60              12:8      65.38
               90              18:2      76.25
Color FERET    10              2:18      39.4
               40              8:12      90.38
               60              12:8      91.58
               90              18:2      92.43
HP             10              3:27      96.1
               40              12:18     100
               60              18:12     100
               90              27:3      100
FEI            10              1:9       85
               40              4:6       97.75
               60              6:4       98.14
               90              9:1       100

Fig. 14 A few sample images of different databases, (a) CAS-PEAL R1, (b) Color FERET, (c) FEI, (d) HP.


References

1. W. Zhao, R. Chellapa, P.J. Phillips, A. Rosenfeld, “Face Recognition: A Literature Survey”, ACM Computing Surveys (CSUR), 35(4):399-458, 2003.

2. Rafael C. Gonzalez, Richard E. Woods, "Digital Image Processing", Addison Wesley Publishing Company, 3rd Edition, 2008.

3. Chung-Jui Tu, Li-Yeh Chuang, Jun-Yang Chang, Cheng-Hong Yang, "Feature Selection using PSO-SVM", International Journal of Computer Science (IAENG), 33(1):111-116, 2007.

4. E. Kokiopoulou, P. Frossard, “Classification-Specific Feature Sampling for Face Recognition”, In: Proceedings of IEEE 8th Workshop on Multimedia Signal Processing, p. 20-23, 2006.

5. A. Y. Yang, J. Wright, Y. Ma, S. S. Sastry, “Feature Selection in Face recognition: A Sparse Representation Perspective”, UC Berkeley Technical Report, 2007.

6. D. S. Kim, I. J. Jeon, S. Y. Lee, P. K. Rhee, D. J. Chung, “Embedded Face Recognition based on Fast Genetic Algorithm for Intelligent Digital Photography”, Consumer Electronics, IEEE Transactions , 52(3):726-734, 2006.

7. M. L. Raymer, W. F. Punch, E. D. Goodman, L. A. Kuhn, A. K. Jain, “Dimensionality Reduction using Genetic Algorithms”, Evolutionary Computation, IEEE Transactions, 4(2):164-171, 2000.

8. H. R. Kanan, K. Faez, M. Hosseinzadeh, "Face Recognition System using Ant Colony Optimization-based selected Features", In: Proceedings of Computational Intelligence in Security and Defence Applications (CISDA 2007), IEEE Symposium, p. 57-62, 2007.

9. Rabab M. Ramadan, Rahab F. Abdel-Kader, "Face Recognition Using Particle Swarm Optimization-Based Selected Features", International Journal of Signal Processing, Image Processing and Pattern Recognition, 2(2):51-66, 2009.

10. Yanwei Wang, Je Orchard, “On the Use of the Stockwell Transform for Image Compression”, In: IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics, 2009.

11. Sylvia Drabycz, Robert G. Stockwell, J. Ross Mitchell, “Image Texture Characterization using the Discrete Orthonormal S-Transform”, Journal of Digital Imaging, 22(6):696-708, 2009.

12. Gautham Sitaram Yaji, Sankhadeep Sarkar, K. Manikantan, S. Ramachandran, “DWT Feature Extraction based Face Recognition using Intensity Mapped Unsharp Masking and Laplacian of Gaussian Filtering with Scalar Multiplier”, In: 2nd International Conference on Communication, Computing & Security (ICCCS-2012), p.475-484, 2012.

13. C. Liu, H. Wechsler, “Evolutionary Pursuit and its Application to Face Recognition”, Pattern Analysis and Machine Intelligence, IEEE Transactions, 22(6):570-582, 2000.

14. M. A. Turk, A. P. Pentland, “Face Recognition using Eigen faces”, In: Proceedings of Computer Vision and Pattern Recognition CVPR’91, p. 586-591, 1991.

15. L. Du, Z. Jia, L. Xue, “Human Face Recognition based on Principal Component Analysis and Particle Swarm Optimization-BP Neural Network”, In: Proceedings of 3rd Conference on Natural Computation (ICNC 2007), p.287-291, 2007.

16. X. Yi-qiong, L. Bi-cheng, W. Bo, “Face Recognition by Fast Independent Component Analysis and Genetic Algorithm”, In: Proceedings of 4th International Conference on Computer and Information Technology (CIT’04), p. 194-198, 2004.

17. X. Fan, B. Verma, “Face Recognition: a new Feature Selection and Classification Technique”, In: Proceedings of 7th Asia-Pacific Conference on Complex Systems, 2004.

18. R. Eberhart and J. Kennedy, “A New Optimizer using Particle Swarm Theory”, In: Proceedings of 6th International Symposium on Micro Machine and Human Science, p. 39-43, 1995.

19. H. B. Kekre, Sudeep Thepade, Karan Dhamejani, Sanchit Khandelwal, Adnan Azmi, "Performance Comparison of Assorted Color Spaces for Multilevel Block Truncation Coding based Face Recognition", International Journal of Computer Science and Information Security (IJCSIS), 10(3):58-63, 2012.

20. H. B. Kekre, Sudeep Thepade, Sanchit Khandelwal, Karan Dhamejani, Adnan Azmi, "Improved Face Recognition with Multilevel BTC using Kekre's LUV Color Space", International Journal of Advanced Computer Science and Applications (IJACSA), 3(1):156-160, 2012.

21. H. B. Kekre, Sudeep D. Thepade, Akshay Maloo, "Face Recognition using Texture Features Extracted from Haarlet Pyramid", International Journal of Computer Applications (IJCA), 12(5):41-45, 2010.

22. Santosh Shetty, Paritosh Kelkar, K. Manikantan, S. Ramachandran, “Shift Invariance based Feature Extraction and Weighted BPSO based Feature Selection for Enhanced Face Recognition”, In: Proceedings of International Conference on Computational Intelligence: Modelling, Techniques and Applications (CIMTA 2013), p.822-830, 2013.

23. CAS-PEAL R1 Database, http://www.jdl.ac.cn/peal/download.htm

24. Color FERET Database, http://www.nist.gov/itl/iad/ig/colorFERET.cfm

25. HP Database, http://www-prima.inrialpes.fr/perso/Gourier/Faces/HPDatabase.html

26. FEI Database, http://fei.edu.br/~cet/facedatabase.html

27. N. L. Ajit Krisshna, V. Kadetotad Deepak, K. Manikantan, S. Ramachandran, “Face recognition using transform domain feature extraction and PSO-based feature selection”, Applied Soft Computing, 22:141-161, 2014.

28. Surabhi A. R., Shwetha T. Parekh, K. Manikantan, S. Ramachandran, “Background Removal using K-means clustering as a Pre-Processing technique for DWT based Face Recognition”, In: Proceedings of International Conference on Communication, Information and Computing Technology (ICCICT 2012), p. 1-6, 2012.

29. V. Vidya, Nazia Farheen, K. Manikantan, S. Ramachandran, “Face Recognition using Threshold based DWT Feature Extraction and Selective Illumination Enhancement Technique”, In: Procedia Technology 6, p. 334-343, 2012.

30. Roohi S. T., Sachin D. S., K. Manikantan, S. Ramachandran, “Feature Accentuation using Uniform Morphological Correction as Pre Processing Technique for DWT based Face Recognition”, In: Proceedings of International Conference on Computational Intelligence: Modelling, Techniques and Applications (CIMTA 2013), p.690-698, 2013.

31. H. B. Kekre, Sudeep D. Thepade, Akshay Maloo, "Eigenvectors of Covariance Matrix using Row Mean and Column Mean Sequences for Face Recognition", CSC International Journal of Biometric and Bioinformatics (IJBB), 4(2):42-50.
