Iris Recognition with Parallel Implementation
Sonu Verma, Dr. Renu Dhir


IJCSIT, Vol. 1, Issue 6 (December, 2014). e-ISSN: 1694-2329 | p-ISSN: 1694-2345.

IRIS RECOGNITION WITH PARALLEL IMPLEMENTATION

Sonu Verma (1), Dr. Renu Dhir (2)
Department of Computer Science and Engineering, National Institute of Technology, Jalandhar, Punjab, India
[email protected], [email protected]

Abstract - A biometric system provides automatic identification of an individual based on unique features or characteristics possessed by that person. Iris recognition is regarded as the most reliable and accurate biometric identification system available. Here we present a more direct, parallel processing alternative using the GPU (graphics processing unit), which offers an opportunity to increase speed and potentially enhance overall system performance. In this system the most time-consuming processes are parallelized directly; in particular, segmentation and template matching are parallelized on a GPU-based system. We implemented an iris recognition system based on Daugman's method for training and classification in Java. Our implementation could process 1000 test patterns simultaneously in about 7.4 ms.

Index Terms - High Performance Computing, Graphics Processing Unit, Iris Recognition, Daugman's algorithm.

I. INTRODUCTION

Iris recognition is one of the most accurate biometric methods in use today, and the overall performance of these systems, in terms of accuracy, speed, size and shape, has become of great interest. Iris recognition algorithms are currently deployed globally in a variety of systems, ranging from personal computers to portable iris scanners. These systems use the central processing unit (CPU), for which the algorithms were originally designed. CPU-based systems are general purpose machines, designed for all types of applications, and are essentially sequential processing devices; most of today's CPUs have only four, or at most six, cores. This style of processing is inefficient for the recognition algorithm, since large parts of iris recognition can be parallelized. The goal of this research is to identify and exploit the potential parallelism within the stages of an iris recognition system. The GPU can be used to process data in computationally intensive applications. The speed-up factor attained relative to the CPU depends on the application, as the GPU architecture gives its best performance on algorithms that exhibit high data parallelism and high arithmetic intensity. The focus of this research is on parallelizing portions of the iris recognition system using GPUs. This work makes the following contributions:

1) Introduction and measurement of the performance of CPU-based machines executing key components of an iris recognition algorithm.
2) Measurement of the CPU performance of the components of an iris recognition system programmed in Java.
3) Deconstruction and parallelization of a key iris recognition stage, template matching via Hamming distance, together with an attempt to parallelize segmentation and feature extraction.
4) Determination of the benefits of parallelization in terms of performance and size.

The paper is organised as follows: Section 2 gives an overview of graphics processing units (GPUs). Section 3 describes the iris recognition system. Section 4 shows how we implemented the fractional Hamming distance computation in parallelized form on the GPU. Section 5 describes the performance evaluation. Section 6 concludes the paper and outlines future work.
II. GRAPHICS PROCESSING UNITS (GPUs)

The question most frequently asked by anyone interested in the higher computational power of the GPU (graphics processing unit) compared to the CPU (central processing unit) is: "what makes the GPU faster?". The answer lies in the different architectures of the two processing units. The CPU architecture addresses the needs of a general purpose processor dealing with arbitrary operations such as data transfer, scheduling and computation. The GPU, in contrast, was conceived for performance in a single, narrowly defined task: carrying out the vast number of operations needed for three dimensional (3D) modelling and rendering. 3D graphics computations are performed by a series of data operations called the graphics pipeline, which has two characteristic features:

1) High data parallelism: the same series of operations runs on different data points, which do not have to interact with one another.
2) High arithmetic intensity: large numbers of instructions are applied sequentially to the same data point.

Hence GPU hardware is optimized for data processing rather than for other operations, such as accessing data from memory. A small illustration of the first property is given below.
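As our own illustration (not code from the system described in this paper), consider per-pixel thresholding of an image: each output value depends only on the corresponding input value, so the work splits freely across processing elements. The following minimal Java sketch expresses such a data-parallel operation with a parallel stream; on a GPU, each index would simply map to a hardware thread.

import java.util.Arrays;
import java.util.stream.IntStream;

public class DataParallelDemo {
    // Per-pixel thresholding: a data-parallel operation in which no data
    // point interacts with any other, the access pattern GPUs execute best.
    static int[] threshold(int[] pixels, int t) {
        int[] out = new int[pixels.length];
        IntStream.range(0, pixels.length)
                 .parallel()                               // each index is independent
                 .forEach(i -> out[i] = pixels[i] > t ? 255 : 0);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(threshold(new int[]{12, 200, 90}, 128)));
    }
}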

III. IRIS RECOGNITION SYSTEM

A. SEGMENTATION

The first stage of iris recognition is to isolate the actual iris region in a digital eye image. The iris region can be approximated by two circles: an exterior circle for the iris/sclera boundary and an interior circle for the iris/pupil boundary. The upper and lower parts of the iris region are occluded by eyelids and eyelashes, and specular reflections can occur within the iris region, corrupting the iris pattern. We use methods as in [3][4][5].

a) Eyelash and Noise Detection

Kong and Zhang [3] present a method to detect eyelashes of two types: separable eyelashes, which are isolated in the image, and multiple eyelashes, which overlap and are bunched together in the eye image. Separable eyelashes are detected using 1D Gabor filters, because the convolution of a separable eyelash with the Gaussian smoothing function results in a low output value; if a point's filtered value is smaller than a threshold, the point is taken to belong to an eyelash. Multiple eyelashes are detected using the variance of intensity: if the variance of the intensity values in a small window is lower than a threshold, the centre of the window is taken as a point in an eyelash. The Kong and Zhang model also makes use of a connectivity criterion, so that every point in an eyelash must connect to another point in an eyelash or to an eyelid. A sketch of the variance test follows.
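The following Java sketch is our illustration of the variance-of-intensity test described above, assuming a greyscale image stored as a 2D int array; the window half-size and threshold are hypothetical placeholders, not the values used by Kong and Zhang.

// Variance-of-intensity test for multiple (bunched) eyelashes: if the
// intensity variance inside a small window centred on (cx, cy) is below a
// threshold, the centre is marked as a candidate eyelash point.
static boolean isEyelashPoint(int[][] img, int cx, int cy, int half, double varThreshold) {
    double sum = 0, sumSq = 0;
    int n = 0;
    for (int y = cy - half; y <= cy + half; y++)
        for (int x = cx - half; x <= cx + half; x++) {
            if (y < 0 || y >= img.length || x < 0 || x >= img[0].length) continue;
            sum += img[y][x];
            sumSq += (double) img[y][x] * img[y][x];
            n++;
        }
    double mean = sum / n;
    return (sumSq / n - mean * mean) < varThreshold;   // low variance => eyelash
}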
The Hough transform is a standard computer vision algorithm that can determine the parameters of simple geometric objects, such as lines and circles, present in an image. The circular Hough transform can be employed to deduce the radius and centre coordinates of the pupil and iris regions (a minimal voting sketch is given at the end of this subsection). Automatic segmentation algorithms based on the circular Hough transform are used by Wildes et al., Kong and Zhang, Tisse et al., and Ma et al. First, an edge map is generated by calculating the first derivatives of the intensity values in the eye image and then thresholding the result. Based on the edge map, votes are cast in Hough space for the parameters of circles passing through each edge point. These parameters are the centre coordinates x_c and y_c and the radius r, which are able to define any circle according to the equation

(x − x_c)² + (y − y_c)² − r² = 0

A maximum point in the Hough space corresponds to the radius and centre coordinates of the circle best defined by the edge points. Kong and Zhang and Wildes et al. also make use of the parabolic Hough transform to detect the eyelids, approximating the upper and lower eyelids with parabolic arcs given as

(−(x − h_j)·sin θ_j + (y − k_j)·cos θ_j)² = a_j·((x − h_j)·cos θ_j + (y − k_j)·sin θ_j) ..........(1)

where a_j controls the curvature, (h_j, k_j) is the peak of the parabola and θ_j is the angle of rotation relative to the x-axis.

For edge detection, Wildes et al. use derivatives in the horizontal direction to detect the eyelids and derivatives in the vertical direction to detect the outer circular boundary of the iris, as illustrated in Figure 2. The motivation is that the eyelids are usually aligned horizontally, so if all gradient data were used the eyelid edge map would corrupt the circular iris boundary edge map. Taking only the vertical gradients when locating the iris boundary reduces the influence of the eyelids in the circular Hough transform, and not all of the edge pixels defining the circle are required for successful localization. This not only makes circle localization more accurate, but also more efficient, since there are fewer edge points to cast votes in the Hough space. Image as in [12].

Fig. 2 a) Eye image b) edge map of eye c) edge map with only horizontal gradients d) edge map with only vertical gradients.

The range of radius values to search for was set manually, depending on the database used. For the database used here, the iris radius ranges from 90 to 150 pixels, while the pupil radius ranges from 28 to 75 pixels. The Hough transform for the iris/sclera boundary is performed first, to make circle detection more efficient and accurate, and the Hough transform for the iris/pupil boundary is then performed within the iris region. After this process is complete, six parameters are stored: the radius and the x and y centre coordinates of both circles.

Eyelids are isolated by first fitting a line to the upper and lower eyelid using the linear Hough transform. A second, horizontal line is then drawn that intersects the first line at the iris edge closest to the pupil; this second line allows maximum isolation of the eyelid regions. The process is implemented for both the top and bottom eyelids. To create the edge map, Canny edge detection is used with only horizontal gradient information. If the maximum in Hough space is lower than a set threshold, no line is fitted. The lines are also restricted to lie exterior to the pupil region and interior to the iris region, as in [5]. The linear Hough transform has the advantage over its parabolic version that it has fewer parameters to deduce, and therefore requires fewer computations.

Fig. 3 Stages of segmentation of an eye image.

Some eye images also contain specular reflections. These are eliminated by thresholding, since reflection areas are characterized by high pixel values close to 255.
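As a concrete illustration of the voting step described above, the following Java sketch accumulates circle votes over a binary edge map and returns the best centre and radius. It is a minimal sketch under simplifying assumptions: the edge map is assumed to have been produced already (by the Canny detector with the gradient filtering described above), and the coarse angular step and exhaustive accumulator search are simplifications rather than the system's actual implementation.

// Circular Hough transform voting: every edge pixel (x, y) votes for all
// centres (xc, yc) that would place it on a circle of radius r, i.e. points
// satisfying (x - xc)^2 + (y - yc)^2 - r^2 = 0. The accumulator maximum
// gives the best-fitting circle; for the pupil, rMin/rMax would be 28/75.
static int[] houghCircle(boolean[][] edges, int rMin, int rMax) {
    int h = edges.length, w = edges[0].length;
    int[][][] acc = new int[rMax - rMin + 1][h][w];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            if (!edges[y][x]) continue;
            for (int r = rMin; r <= rMax; r++)
                for (int t = 0; t < 360; t += 2) {       // coarse angular step
                    int xc = x - (int) Math.round(r * Math.cos(Math.toRadians(t)));
                    int yc = y - (int) Math.round(r * Math.sin(Math.toRadians(t)));
                    if (xc >= 0 && xc < w && yc >= 0 && yc < h)
                        acc[r - rMin][yc][xc]++;
                }
        }
    int best = -1, bx = 0, by = 0, br = rMin;            // accumulator maximum
    for (int r = rMin; r <= rMax; r++)
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (acc[r - rMin][y][x] > best) {
                    best = acc[r - rMin][y][x]; bx = x; by = y; br = r;
                }
    return new int[]{bx, by, br};                        // centre and radius
}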

b) Normalization

Once the iris region has been successfully segmented from the eye image, the next stage is to transform it so that it has fixed dimensions, allowing comparison. Dimensional inconsistencies between eye images arise mainly from stretching of the iris caused by pupil dilation under different levels of lighting. Other sources of inconsistency include varying imaging distance, head tilt, camera rotation, and rotation of the eye within the eye socket. The normalization process produces iris regions with the same constant dimensions, so that two images of the same iris taken under different conditions have their characteristic features at the same spatial locations.

1. Daugman's Rubber Sheet Model

Daugman's homogeneous rubber sheet model remaps each point within the iris region to a pair of polar coordinates (r, θ), where r lies on the interval [0, 1] and θ is an angle in [0, 2π].

Fig. 4 Daugman's Rubber Sheet Model.

The remapping of the iris region from (x, y) Cartesian coordinates to the normalized, non-concentric polar representation is modelled as

I(x(r, θ), y(r, θ)) → I(r, θ) ..........(2)

with

x(r, θ) = (1 − r)·x_p(θ) + r·x_i(θ)
y(r, θ) = (1 − r)·y_p(θ) + r·y_i(θ)

where I(x, y) is the iris region image, (x, y) are the original Cartesian coordinates, (r, θ) are the corresponding normalized polar coordinates, and (x_p(θ), y_p(θ)) and (x_i(θ), y_i(θ)) are the coordinates of the pupil and iris boundaries along the direction θ. The rubber sheet model takes pupil dilation and size inconsistencies into account in order to produce a normalized representation with constant dimensions; in this way the iris region is modelled as a flexible rubber sheet anchored at the iris boundary, with the pupil centre as the reference point. Although the homogeneous rubber sheet model accounts for pupil dilation, imaging distance and non-concentric pupil displacement, it does not compensate for rotational inconsistencies. In Daugman's system, rotation is accounted for during matching by shifting the iris template in the θ direction until the two iris templates are aligned.

The normalization process proved successful, and some results are shown in Figure 5. However, it was not able to perfectly reconstruct the same pattern from images with varying amounts of pupil dilation, since deformation of the iris results in small changes to its surface patterns. Image as in [11].

Fig. 5 An illustration of the normalization process for two images of the same iris.

In this example, the rectangular representation is constructed from 10,000 data points in each iris region. Note that rotational inconsistencies have not been accounted for by the normalization process, so the two normalized patterns are slightly misaligned in the horizontal (angular) direction; these inconsistencies are accounted for at the matching stage.
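The remapping of equation (2) translates directly into code. The following Java sketch is our illustration, assuming the pupil and iris boundaries have already been located and approximating both boundaries as circles; the radial and angular resolutions are free parameters of the representation, not values prescribed by the paper.

// Daugman rubber sheet remapping (equation 2): sample the iris region along
// radial lines between the pupil boundary and the iris boundary, writing the
// samples into a fixed-size rectangular (r, theta) grid.
static int[][] normalize(int[][] img,
                         int px, int py, int pr,     // pupil centre and radius
                         int ix, int iy, int ir,     // iris centre and radius
                         int radialRes, int angularRes) {
    int[][] polar = new int[radialRes][angularRes];
    for (int j = 0; j < angularRes; j++) {
        double theta = 2 * Math.PI * j / angularRes;
        // boundary points along this direction (circular approximation)
        double xp = px + pr * Math.cos(theta), yp = py + pr * Math.sin(theta);
        double xi = ix + ir * Math.cos(theta), yi = iy + ir * Math.sin(theta);
        for (int k = 0; k < radialRes; k++) {
            double r = (double) k / (radialRes - 1);          // r on [0, 1]
            int x = (int) Math.round((1 - r) * xp + r * xi);  // x(r, theta)
            int y = (int) Math.round((1 - r) * yp + r * yi);  // y(r, theta)
            polar[k][j] = img[Math.max(0, Math.min(img.length - 1, y))]
                             [Math.max(0, Math.min(img[0].length - 1, x))];
        }
    }
    return polar;
}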
c) Feature Encoding

Feature encoding is implemented by convolving the normalized iris pattern with 1D Log-Gabor wavelets. The 2D normalized pattern is broken up into a number of 1D signals, and these 1D signals are then convolved with the 1D wavelets. The rows of the 2D normalized pattern are taken as the 1D signals; each row corresponds to a circular ring on the iris region. The intensity values at known noise areas in the normalized pattern are set to the average intensity of the surrounding pixels, to prevent the noise from influencing the output of the filtering. The output of filtering is then phase quantized to four levels using the Daugman method, with each filter producing two bits of data for each phasor. The output of phase quantization is chosen to be a grey code, so that when going from one quadrant to another only one bit changes. The feature encoding process is illustrated in Figure 6.

Fig. 6 An illustration of the feature encoding process.

1. Log-Gabor Filters

A disadvantage of the Gabor filter is that the even symmetric filter has a DC component whenever the bandwidth is larger than one octave. However, a zero DC component can be obtained for any bandwidth by using a Gabor filter that is Gaussian on a logarithmic scale; this is known as the Log-Gabor filter. The frequency response of a Log-Gabor filter is given as

G(f) = exp( −(log(f / f₀))² / (2·(log(σ / f₀))²) )

where f₀ represents the centre frequency and σ gives the bandwidth of the filter. Details of the Log-Gabor filter are examined by Field [10].
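A minimal Java sketch of the two encoding ingredients discussed above follows: construction of the Log-Gabor frequency response G(f), and the two-bit grey-code phase quantization. This is our illustration under stated assumptions: the row convolution itself would be carried out in the frequency domain via an FFT, which is omitted here, and the parameter values are illustrative rather than the system's.

// Frequency response of the 1D Log-Gabor filter,
// G(f) = exp(-(log(f/f0))^2 / (2 (log(sigma/f0))^2)), sampled at n
// frequencies. Each row of the normalized pattern would be multiplied by
// this response in the frequency domain (FFT omitted in this sketch).
static double[] logGabor(int n, double f0, double sigmaOverF0) {
    double[] g = new double[n];
    double denom = 2 * Math.pow(Math.log(sigmaOverF0), 2);
    for (int i = 1; i < n; i++) {                  // g[0] = 0: zero DC component
        double f = (double) i / n;
        g[i] = Math.exp(-Math.pow(Math.log(f / f0), 2) / denom);
    }
    return g;
}

// Daugman-style phase quantization: the complex filter output at each point
// maps to two bits via the signs of its real and imaginary parts, a grey
// code in which adjacent quadrants differ in exactly one bit.
static boolean[] quantize(double re, double im) {
    return new boolean[]{ re >= 0, im >= 0 };
}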

d) Pattern Matching

For matching, we use the Hamming distance as the recognition metric. The Hamming distance measures the number of disagreeing bits between two iris codes. For two patterns A and B, the fractional Hamming distance is the number of disagreeing bits, masked by both noise masks, divided by the number of bits valid in both masks:

HD = ‖(codeA ⊕ codeB) ∩ maskA ∩ maskB‖ / ‖maskA ∩ maskB‖

where codeA and codeB are the pattern bits and maskA and maskB are the noise masks of the two iris templates. If the two patterns come from different irises, the Hamming distance will be close to 0.5; if they come from the same iris, it will be close to 0.

IV. PARALLELIZATION

The aim of this section is to parallelize the segmentation and pattern matching processes using the GPU; these two processes are the most time consuming in the whole iris recognition pipeline. A sketch of the parallel matching step follows.

Fig. 7 Multithreaded framework.
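Template matching decomposes naturally over the database: each stored template is scored against the probe independently of every other, which is exactly the data parallelism the GPU exploits. The following Java sketch is our illustration of the multithreaded CPU variant, assuming a hypothetical layout of iris codes and masks as long arrays; the GPU version follows the same decomposition with one thread per template. The template shifting used to handle rotation is omitted.

import java.util.stream.IntStream;

public class ParallelMatcher {
    // Masked fractional Hamming distance between two templates:
    // HD = ||(codeA xor codeB) and maskA and maskB|| / ||maskA and maskB||.
    static double hd(long[] a, long[] maskA, long[] b, long[] maskB) {
        long disagree = 0, valid = 0;
        for (int i = 0; i < a.length; i++) {
            long m = maskA[i] & maskB[i];              // bits valid in both codes
            disagree += Long.bitCount((a[i] ^ b[i]) & m);
            valid += Long.bitCount(m);
        }
        return valid == 0 ? 1.0 : (double) disagree / valid;
    }

    // Data-parallel matching: every database entry is scored independently,
    // so the loop over the database parallelizes directly across CPU cores;
    // on a GPU, one thread per template plays the same role.
    static double[] matchAll(long[] probe, long[] probeMask,
                             long[][] db, long[][] dbMasks) {
        double[] scores = new double[db.length];
        IntStream.range(0, db.length).parallel()
                 .forEach(i -> scores[i] = hd(probe, probeMask, db[i], dbMasks[i]));
        return scores;
    }
}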

V. RESULTS

In this section we present the results obtained with the proposed system. Three different databases were used to test it:

1) The MMU database, with 1455 images.
2) The UPOL database, with 386 images.
3) The CASIA database, with 108 images.

For each of these standard databases (CASIA as in [13], MMU as in [14], and UPOL) we measured the FAR, FRR, MER and accuracy obtained by training on 30% of the images, storing them in the database, and testing with the remaining images. The results are given in Table 1.

TABLE 1 FAR, FRR, MER and Accuracy for various standard databases.

Database | FAR   | FRR   | MER    | Accuracy (%)
MMU      | 6     | 1     | 3.5    | 93
UPOL     | 1.665 | 1.655 | 1.664  | 96.77
CASIA    | 1.66  | 3.24  | 2.4975 | 95.16

The running times (in milliseconds) of the serial, parallel (multithreaded) and GPU implementations for increasing database sizes are given below and plotted in Fig. 8:

DB size | Serial | Parallel | GPU
250     | 101    | 142      | 302
500     | 190    | 209      | 328
1000    | 329    | 445      | 374
1500    | 449    | 593      | 439
1750    | 559    | 647      | 463
2000    | 648    | 725      | 492

Fig. 8 Runtime of the serial, parallel and GPU implementations, with database size along the x-axis and time in milliseconds along the y-axis.

For small databases the serial implementation is fastest, but the GPU implementation scales far better: it overtakes the serial implementation at around 1500 templates, and the gap widens as the database grows.

VI. CONCLUSION

For the various databases the accuracy is found to be very good, and the speed-up factor improves as the size of the database increases. Segmentation is computationally complex, which is why threading was used to increase its execution speed; pattern matching for thousands of templates also took excessive time, so it was parallelised, and the results improved. Our future work is to finish parallelizing the remaining time-consuming stages, such as feature extraction, so that the gains will be greater. Segmentation also needs higher accuracy, which would in turn increase the overall accuracy of the proposed system.

REFERENCES

[1]. Broussard, R. P., Rakvic, R. N., & Ives, R. W. (2008, September). Accelerating iris template matching using commodity video graphics adapters. In Biometrics: Theory, Applications and Systems, 2008. BTAS 2008. 2nd IEEE International Conference on (pp. 1-6). IEEE.
[2]. Woo, M., Neider, J., Davis, T., & Shreiner, D. (1999). OpenGL Architecture Review Board. OpenGL programming guide: the official guide to learning OpenGL, version 1.
[3]. Kong, W. K., & Zhang, D. (2001). Accurate iris segmentation based on novel reflection and eyelash detection model. In Intelligent Multimedia, Video and Speech Processing, 2001. Proceedings of 2001 International Symposium on (pp. 263-266). IEEE.
[4]. Wildes, R. P., Asmuth, J. C., Green, G. L., Hsu, S. C., Kolczynski, R. J., Matey, J. R., & McBride, S. E. (1994, December). A system for automated iris recognition. In Applications of Computer Vision, 1994. Proceedings of the Second IEEE Workshop on (pp. 121-128). IEEE.
[5]. Kong, W. K., & Zhang, D. (2001). Accurate iris segmentation based on novel reflection and eyelash detection model. In Intelligent Multimedia, Video and Speech Processing, 2001. Proceedings of 2001 International Symposium on (pp. 263-266). IEEE.
[6]. Tisse, C. L., Martin, L., Torres, L., & Robert, M. (2002, May). Person identification technique using human iris recognition. In Proc. Vision Interface (pp. 294-299).
[7]. Ma, L., Wang, Y., & Tan, T. (2002). Iris recognition using circular symmetric filters. In Pattern Recognition, 2002. Proceedings. 16th International Conference on (Vol. 2, pp. 414-417). IEEE.
[8]. Daugman, J. (2004). How iris recognition works. Circuits and Systems for Video Technology, IEEE Transactions on, 14(1), 21-30.
[9]. Daugman, J. G. (1994). U.S. Patent No. 5,291,560. Washington, DC: U.S. Patent and Trademark Office.
[10]. Field, D. J. (1987). Relations between the statistics of natural images and the response properties of cortical cells. JOSA A, 4(12), 2379-2394.
[11]. Sakr, F. Z., Taher, M., & Wahba, A. (2011, November). High performance iris recognition system on GPU. In Computer Engineering & Systems (ICCES), 2011 International Conference on (pp. 237-242). IEEE.
[12]. Padma Polash, P., & Maruf Monwar, M. (2007, December). Human iris recognition for biometric identification. In Computer and Information Technology, 2007. ICCIT 2007. 10th International Conference on (pp. 1-5). IEEE.
[13]. Chinese Academy of Sciences, Institute of Automation. Database of greyscale eye images. http://www.sinobiometrics.com
[14]. MMU Iris Database. http://www.inf.upol.cz/iris/

AUTHORS

First Author - Sonu Verma: received his bachelor's degree in Information Technology from B.R.C.M. College of Engineering and Technology, Haryana, India. He is currently pursuing his master's degree in Computer Science and Engineering at Dr. B.R. Ambedkar National Institute of Technology, Punjab, India. His areas of interest are Image Processing, Operating Systems, Databases and Computer Networks.

Second Author - Dr. Renu Dhir: Associate Professor in the Department of Computer Science and Engineering at Dr. B.R. Ambedkar National Institute of Technology, Punjab, India. Her areas of interest are Machine Learning, Pattern Recognition, Natural Language Processing and Image Processing.

