H.M.C.R Fingerprint Matching

ABSTRACT

This paper explores an algorithm for matching fingerprints. Given the ever-increasing rate and complexity of crime, the importance of technology and scientific knowledge for preventing crimes and finding offenders is obvious. The fingerprint is one of the more secure means of identifying individuals and is used in crime detection, access control systems, national border control, and similar fields.

The main reason for choosing this method of identifying people is the uniqueness of each person's fingerprint; moreover, some of its properties do not change over a lifetime. These features are used in fingerprint matching. There are standard methods for manual fingerprint matching, but doing it manually is difficult, time-consuming, and not very efficient; indeed, since databases hold millions of fingerprint templates, manual matching is practically impossible. Making the matching process automatic requires a method for imaging or coding the fingerprint. This representation should satisfy conditions such as the ability to differentiate any fingerprints at different levels of screen resolution, suitability for automatic matching algorithms, simple calculations, and so on.

In this paper we try to provide an algorithm that satisfies the above conditions, or is even more efficient.

Keywords

Fingerprint Matching, Harris, RANSAC, Fingerprint Feature Extraction

1. INTRODUCTION

The word biometric is derived from the Greek words "Bios", meaning life, and "Metrikos", meaning measurement.

People have always used unique features to identify each other, features which differ from one person to another, such as voice, style of walking, face, etc. [1] Nowadays, in many fields, a person's identity is recognized by devices and applications according to bodily features. This field is widening more and more, and the population of enthusiasts grows daily.

Furthermore, many cards today rely on parameters like an ID and password to limit access by others, but these parameters are not very secure and can be hacked easily. Biometrics cannot be rented, hired, sold, or bought, and are practically impossible to fake. A biometric system is basically a pattern recognition system that identifies a person according to a property vector of specific physiological or behavioral features. After derivation, the property vector is stored in a database. Based on physiological features, such a system usually has a high trust rate and can be configured in two modes: verification and recognition. Recognition compares the obtained data, in a specific format, with all users in the database, while verification compares it with only the one record that is claimed, so it is necessary to treat these two issues separately. [2]

A simple biometric system contains 4 basic parts: [2]

1- Sensor block: receives the biometric information

2- Feature extraction block: creates a property vector from the data obtained in part 1

3- Comparison block: compares the property vector created in part 2 with the templates

4- Decision block: identification; the person is either approved or rejected
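As a sketch of how these four blocks fit together, the following Python fragment wires a feature extractor, a comparison score, and a threshold decision. The interfaces and the cosine-similarity stand-in score are hypothetical illustrations, not the system described later in this paper:

```python
from typing import Callable, List

def similarity(a: List[float], b: List[float]) -> float:
    """Comparison block: normalized dot product as a stand-in matching score."""
    num = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return num / (na * nb) if na and nb else 0.0

def verify(sample: bytes,
           extract: Callable[[bytes], List[float]],
           template: List[float],
           threshold: float = 0.8) -> bool:
    """Decision block: accept iff the property vector of the live sample
    is similar enough to the stored template."""
    vector = extract(sample)              # feature extraction block
    score = similarity(vector, template)  # comparison block
    return score >= threshold             # decision block
```

In verification mode this function is called once against the claimed template; recognition mode would loop it over every template in the database.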

Each human trait can be used as a biometric feature if it has the following conditions: [2]

1- Generality: every person has this trait

2- Distinctiveness: the trait differs from person to person (no duplicates exist)

3- Permanence: the trait remains constant over a period of time

4- Collectability: the trait can be measured and obtained

In everyday use three other factors must also be observed: efficiency (precision, speed), accessibility (safe for users), and high security.

In this paper fingerprint biometric factor and a method for matching different templates will be introduced.

1-1 Fingerprint-based identity recognition

This is the oldest of the tested identification methods. Though until a few years ago the fingerprint was discussed only in the field of crime, much research across a group of countries has brought the level of access that allows this method to be used in general settings. Systems can capture and store fingerprint details, such as furrows and ridges, or the full image. [3]

Reference patterns used to store these details are about 100 bytes, which is really small compared with a full fingerprint image of about 500 to 1500 bytes. Fingerprints are made of flow-like furrows and ridges which create different features according to their position. Eighteen different fingerprint features have been detected so far; ridge endings and ridge bifurcations are the two important features named "minutiae" [4]. These two features are shown in Figure 1.

Minutiae details are stored as x and y coordinates together with the ridge angle. The minutiae topology of a fingerprint is unique, always stable, and never changes; therefore fingerprint matching can be based on matching minutiae topology. There are about 70 to 80 minutiae in a fingerprint image of fair quality; this number drops to 20 to 30 in images of lower quality or smaller size, but even these few are enough for fingerprint matching.

Mohammad Amiri, M.Eng in Computer Architecture; Seyed Iman Meshkat, B.Eng in Information Technology Engineering; Hossein Javidnia

The majority of fingerprint verification systems have a minutiae-based structure. There are three important stages of verification in these systems: [5]

I. Preprocessing

II. Minutiae extraction

III. Minutiae matching
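For illustration, a minutia record holding the x, y and ridge-angle parameters described above might look like the following Python sketch; the field names and sample values are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum

class MinutiaType(Enum):
    RIDGE_ENDING = 1        # a ridge terminates
    RIDGE_BIFURCATION = 2   # a ridge splits in two

@dataclass
class Minutia:
    """One minutia record: position plus local ridge direction."""
    x: int
    y: int
    angle: float            # ridge direction in radians
    kind: MinutiaType

# A toy template of two minutiae (hypothetical values).
template = [Minutia(120, 87, 0.61, MinutiaType.RIDGE_ENDING),
            Minutia(64, 140, 2.30, MinutiaType.RIDGE_BIFURCATION)]
```

A full template of 70–80 such records, at a few bytes per field, is consistent with the roughly 100-byte reference patterns mentioned above.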

The first stage increases image quality, the second extracts the fingerprint features, and the third performs the matching.

In a recent study in 2011, an online verification system was developed in which the furrows and ridges were the tool for verification. What distinguishes this study from others is its wide online pattern database, which makes the approach efficient. [6]

In another case, in 2009, an algorithm named Fingerprint Recognition using Minutia Score Matching (FRMSM) was developed. In the first stage it modeled the image and reduced its size; in the second stage it scanned the edges and ridges to extract minutiae while preserving image quality; in the final stage the pattern extracted from the main image was matched against the template to obtain the matching result, either matched or not. [7]

In another piece of research, in 2005, a method named "Fingerprint Identification Using Minutiae Constellation Matching" was introduced. As in the other work, minutiae were extracted from the main and template images; then a group of minutiae in the main image was selected and compared with a similar group in the template image, and the result was the matching status. [8]

2. Harris Corners Detector [15],[16]

"One of the first operators for interest point detection was developed by Hans P. Moravec in 1977 for his research involving the automatic navigation of a robot through a cluttered environment. It was also Moravec who defined the concept of "points of interest" in an image and concluded these interest points could be used to find matching regions in different images. The Moravec operator is considered to be a corner detector because it defines interest points as points where there are large intensity variations in all directions [16]." "This often is the case at corners. It is interesting to note, however, that Moravec was not specifically interested in finding corners, just distinct regions in an image that could be used to register consecutive image frames [9]. Harris and Stephens [10] improved upon Moravec's corner detector by considering the differential of the corner score with respect to direction directly, instead of using shifted patches. (This corner score is often referred to as autocorrelation, since the term is used in the paper in which this detector is described. However, the mathematics in the paper clearly indicates that the sum of squared differences is used.) Without loss of generality, we will assume a grayscale 2-dimensional image is used. Let this image be given by $I$. Consider taking an image patch over the area $(u, v)$ and shifting it by $(x, y)$."

"The weighted sum of squared differences (SSD) between these two patches, denoted $S(x,y)$, is given by: [15]"

(Relation 1)
$$S(x,y) = \sum_u \sum_v w(u,v)\,\big(I(u+x,\,v+y) - I(u,v)\big)^2$$

"$I(u+x,\,v+y)$ can be approximated by a Taylor expansion. Let $I_x$ and $I_y$ be the partial derivatives of $I$, such that"

(Relation 2)
$$I(u+x,\,v+y) \approx I(u,v) + I_x(u,v)\,x + I_y(u,v)\,y$$

"This produces the approximation"

(Relation 3)
$$S(x,y) \approx \sum_u \sum_v w(u,v)\,\big(I_x(u,v)\,x + I_y(u,v)\,y\big)^2$$

"which can be written in matrix form:"

(Relation 4)
$$S(x,y) \approx \begin{pmatrix} x & y \end{pmatrix} A \begin{pmatrix} x \\ y \end{pmatrix}$$

"where A is the structure tensor,"

(Relation 5)
$$A = \sum_u \sum_v w(u,v) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} = \begin{bmatrix} \langle I_x^2 \rangle & \langle I_x I_y \rangle \\ \langle I_x I_y \rangle & \langle I_y^2 \rangle \end{bmatrix}$$

"This matrix is the Harris matrix, and angle brackets denote averaging (i.e. summation over $(u,v)$). If a circular window (or circularly weighted window, such as a Gaussian) is used, then the response will be isotropic. A corner (or in general an interest point) is characterized by a large variation of $S$ in all directions of the vector $(x, y)$. By analyzing the eigenvalues of $A$, this characterization can be expressed in the following way: $A$ should have two "large" eigenvalues for an interest point. Based on the magnitudes of the eigenvalues $\lambda_1$ and $\lambda_2$, the following inferences can be made based on this argument: [15]"

1. If $\lambda_1 \approx 0$ and $\lambda_2 \approx 0$, then this pixel has no features of interest.
2. If $\lambda_1 \approx 0$ and $\lambda_2$ has some large positive value, then an edge is found.
3. If $\lambda_1$ and $\lambda_2$ both have large positive values, then a corner is found.

"Harris and Stephens note that exact computation of the eigenvalues is computationally expensive, since it requires the computation of a square root, and instead suggest the following function $M_c$, where $\kappa$ is a tunable sensitivity parameter: [15]"

(Relation 6)
$$M_c = \lambda_1 \lambda_2 - \kappa\,(\lambda_1 + \lambda_2)^2 = \det(A) - \kappa\,\operatorname{trace}^2(A)$$

"The value of $\kappa$ has to be determined empirically; values in the range 0.04–0.15 have been reported as feasible. One can avoid setting the parameter $\kappa$ by using Noble's corner measure, which amounts to the harmonic mean of the eigenvalues, $\epsilon$ being a small positive constant: [15]"

(Relation 7)
$$M_c' = \frac{2\,\det(A)}{\operatorname{trace}(A) + \epsilon}$$
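The response of Relations 1-6 can be sketched directly in code. The following Python/NumPy fragment is an illustrative re-implementation, not the paper's C# program; the box averaging window (in place of a Gaussian) and the relative threshold rule are simplifying assumptions:

```python
import numpy as np

def harris_response(image, kappa=0.04, radius=2):
    """Harris corner measure M_c = det(A) - kappa * trace(A)^2 per pixel,
    where A is the structure tensor of Relation 5."""
    # Image gradients I_y (rows) and I_x (columns) via finite differences.
    Iy, Ix = np.gradient(image.astype(float))
    # Products of derivatives entering the structure tensor.
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def window_avg(a, r=radius):
        # Box average over a (2r+1)x(2r+1) window (stand-in for w(u, v)).
        k = 2 * r + 1
        pad = np.pad(a, r, mode="edge")
        out = np.zeros_like(a)
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (k * k)

    Sxx, Syy, Sxy = window_avg(Ixx), window_avg(Iyy), window_avg(Ixy)
    det = Sxx * Syy - Sxy * Sxy        # det(A)
    trace = Sxx + Syy                  # trace(A)
    return det - kappa * trace * trace # Relation 6

def harris_corners(image, threshold=0.01):
    """(row, col) coordinates whose response exceeds threshold * max."""
    r = harris_response(image)
    return np.argwhere(r > threshold * r.max())
```

On a synthetic image containing a bright square, the response is near zero in flat regions, negative along the edges, and positive at the four corners, matching the three eigenvalue cases above.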

3. RANSAC [12]

"RANSAC is an abbreviation for "RANdom SAmple Consensus". It is an iterative method to estimate parameters of a mathematical model from a set of observed data which contains outliers. It is a non-deterministic algorithm in the sense that it produces a reasonable result only with a certain probability, with this probability increasing as more iterations are allowed. The algorithm was first published by Fischler and Bolles in 1981.[12]"

"A basic assumption is that the data consists of "inliers", i.e., data whose distribution can be explained by some set of model parameters, and "outliers" which are data that do not fit the model. In addition to this, the data can be subject to noise. The outliers can come, e.g., from extreme values of the noise or from erroneous measurements or incorrect hypotheses about the interpretation of data. RANSAC also assumes that, given a (usually small) set of inliers, there exists a procedure which can estimate the parameters of a model that optimally explains or fits this data.[12]"

Example [12]

"A simple example is fitting of a line in two dimensions to a set of observations. Assuming that this set contains both inliers, i.e., points which approximately can be fitted to a line, and outliers, points which cannot be fitted to this line, a simple least squares method for line fitting will in general produce a line with a bad fit to the inliers. The reason is that it is optimally fitted to all points, including the outliers. RANSAC, on the other hand, can produce a model which is only computed from the inliers, provided that the probability of choosing only inliers in the selection of data is sufficiently high. There is no guarantee for this situation, however, and there are a number of algorithm parameters which must be carefully chosen to keep the level of probability reasonably high. [12]"

Overview [15]

"The input to the RANSAC algorithm is a set of observed data values, a parameterized model which can explain or be fitted to the observations, and some confidence parameters.

RANSAC achieves its goal by iteratively selecting a random subset of the original data. These data are hypothetical inliers and this hypothesis is then tested as follows: [15]

1. A model is fitted to the hypothetical inliers, i.e. all free parameters of the model are reconstructed from the inliers.

2. All other data are then tested against the fitted model and, if a point fits the estimated model well, it is also considered a hypothetical inlier.

3. The estimated model is reasonably good if sufficiently many points have been classified as hypothetical inliers.

4. The model is re-estimated from all hypothetical inliers, because it has only been estimated from the initial set of hypothetical inliers.

5. Finally, the model is evaluated by estimating the error of the inliers relative to the model.

This procedure is repeated a fixed number of times, each time producing either a model which is rejected because too few points are classified as inliers or a refined model together with a corresponding error measure. In the latter case, we keep the refined model if its error is lower than the last saved model.[15]"
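The five steps above can be sketched for the line-fitting example from the previous subsection. This Python fragment is an illustrative implementation; the parameter names n, k, t and d follow the parameter discussion of the cited source, and the least-squares refit is a standard choice, not a detail given in this paper:

```python
import random

def ransac_line(points, n=2, k=100, t=0.5, d=10, seed=0):
    """Fit y = a*x + b to (x, y) points by RANSAC.
    n: minimal sample size, k: iterations, t: inlier distance
    threshold, d: minimum inliers required to accept a model."""
    rng = random.Random(seed)
    best_model, best_error, best_inliers = None, float("inf"), []
    for _ in range(k):
        # 1. Fit a model to a random minimal sample (hypothetical inliers).
        (x1, y1), (x2, y2) = rng.sample(points, n)
        if x1 == x2:
            continue  # vertical sample, cannot fit y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. Test all points; those within t of the line join the inliers.
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < t]
        # 3. Accept only if sufficiently many points agree.
        if len(inliers) < d:
            continue
        # 4. Re-estimate the model from all inliers (least squares).
        m = len(inliers)
        sx = sum(x for x, _ in inliers); sy = sum(y for _, y in inliers)
        sxx = sum(x * x for x, _ in inliers)
        sxy = sum(x * y for x, y in inliers)
        denom = m * sxx - sx * sx
        if denom == 0:
            continue
        a = (m * sxy - sx * sy) / denom
        b = (sy - a * sx) / m
        # 5. Evaluate the refined model by its error on the inliers.
        error = sum((y - (a * x + b)) ** 2 for x, y in inliers) / m
        if error < best_error:
            best_model, best_error, best_inliers = (a, b), error, inliers
    return best_model, best_inliers
```

With 20 collinear points and a few gross outliers, the returned line recovers the true slope and intercept while the outliers are discarded, exactly the behavior described above.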

The parameters [15]

"The values of the parameters t and d have to be determined from specific requirements related to the application and the data set, possibly based on experimental evaluation. The parameter k (the number of iterations), however, can be determined from a theoretical result. Let p be the probability that the RANSAC algorithm in some iteration selects only inliers from the input data set when it chooses the n points from which the model parameters are estimated. When this happens, the resulting model is likely to be useful, so p gives the probability that the algorithm produces a useful result. Let w be the probability of choosing an inlier each time a single point is selected, that is,

w = (number of inliers in data) / (number of points in data).

A common case is that $w$ is not well known beforehand, but some rough value can be given. Assuming that the n points needed for estimating a model are selected independently, $w^n$ is the probability that all n points are inliers and $1 - w^n$ is the probability that at least one of the n points is an outlier, a case which implies that a bad model will be estimated from this point set. That probability to the power of k is the probability that the algorithm never selects a set of n points which all are inliers, and this must be the same as $1 - p$. Consequently, [15]"

(Relation 8)
$$1 - p = (1 - w^n)^k$$

"which, after taking the logarithm of both sides, leads to:

(Relation 9)
$$k = \frac{\log(1 - p)}{\log(1 - w^n)}$$

This result assumes that the n data points are selected independently, that is, a point which has been selected once is replaced and can be selected again in the same iteration. This is often not a reasonable approach and the derived value for k should be taken as an upper limit in the case that the points are selected without replacement. [15]

To gain additional confidence, the standard deviation or multiples thereof can be added to $k$. The standard deviation of $k$ is defined as: [15]"

(Relation 10)
$$\mathrm{SD}(k) = \frac{\sqrt{1 - w^n}}{w^n}$$
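Relations 9 and 10 translate directly into a small helper. This Python sketch is illustrative; rounding k up to the next integer is an assumption about how the bound would be used in practice:

```python
import math

def ransac_iterations(p, w, n):
    """Number of iterations k such that, with probability p, at least one
    sample of n points is all inliers (Relation 9), together with the
    standard deviation of k (Relation 10). w is the inlier fraction."""
    wn = w ** n                                   # P(all n points are inliers)
    k = math.log(1 - p) / math.log(1 - wn)        # Relation 9
    sd = math.sqrt(1 - wn) / wn                   # Relation 10
    return math.ceil(k), sd
```

For example, with a desired confidence p = 0.99, an inlier fraction w = 0.5, and minimal samples of n = 2 points, about 17 iterations suffice.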

4. PROPOSED METHOD

In this paper the matching process is done without considering minutiae status. A program was also designed in the Visual C# 2008 environment that functions as follows:


As is visible in Figure 5, the white points are detected by the Harris Corners Detector. After detecting candidate points, we need to correlate them somehow. For this part, the Maximum Correlation Rule is used to determine matches between the two images. The cross-correlation works by analyzing a window of pixels around every point in the main image and correlating it with a window of pixels around every point in the template image. Points which have maximum bidirectional correlation are taken as corresponding pairs. As is visible in Figure 6, the white lines represent the symmetric relation between the points with maximum bidirectional correlation, but many points have been wrongly correlated. This is the case for the diagonal lines that do not follow the same direction as the majority of the other lines.

Now there are two sets of correlated points, and it is required to define a model which can translate points from one set to the other. The method that solves this problem is a kind of image transformation which can be used to project one of the two images on top of the other. To match most of the correlated feature points, a homography matrix relating the two images is needed. A homography [13] is a projective transformation, a kind of transformation used in projective geometry. It describes what happens to the perceived positions of observed objects when the point of view of the observer changes. In more formal terms, a homography is an invertible transformation from the real projective plane to the projective plane that maps straight lines to straight lines. Using homogeneous coordinates, a homography can be represented as a 3x3 matrix with 8 degrees of freedom.

In the System.Drawing namespace there is a Matrix class which encapsulates a 3-by-3 affine matrix representing a geometric transform. Despite being a 3x3 matrix, the geometric transform matrix of System.Drawing has only 6 degrees of freedom, whereas the projective transform of Accord.NET [14] has 8 degrees of freedom, so the Accord.NET framework is used for this kind of matrix. In the latter, the last value can be interpreted as a scale parameter and can be fixed at 1, as shown in Figure 7. Using homogeneous coordinates, instead of representing the position of every pixel in the image as a pair <x,y>, it is represented as a tuple <x,y,w>, where w is also a scale parameter. For simplification, let w be fixed at 1.
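The maximum bidirectional correlation rule described above can be sketched as follows. This Python/NumPy fragment is an illustrative stand-in for the program's C# implementation; the window radius and the normalized cross-correlation score are assumptions, and points are assumed far enough from the image border for a full window:

```python
import numpy as np

def correlate_points(img1, pts1, img2, pts2, r=5):
    """Match feature points by the maximum-correlation rule: correlate a
    (2r+1)x(2r+1) window around each point in one image with the windows
    around every point in the other, and keep only pairs that are each
    other's best match (bidirectional maximum)."""
    def patch(img, p):
        y, x = p
        w = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
        w = w - w.mean()                 # zero-mean for normalized score
        n = np.linalg.norm(w)
        return w / n if n > 0 else w
    P1 = [patch(img1, p) for p in pts1]
    P2 = [patch(img2, p) for p in pts2]
    # Normalized cross-correlation score for every pair of points.
    score = np.array([[float((a * b).sum()) for b in P2] for a in P1])
    best12 = score.argmax(axis=1)  # best match in img2 for each pt of img1
    best21 = score.argmax(axis=0)  # best match in img1 for each pt of img2
    # Keep only symmetric (bidirectional) maxima as corresponding pairs.
    return [(i, j) for i, j in enumerate(best12) if best21[j] == i]
```

The symmetry test is what the white lines in Figure 6 visualize: a pair survives only if each point is the other's maximum-correlation match.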

(Relation 11)

Homogeneous coordinates are very useful because they will allow us to perform an image projective transformation by using only standard matrix multiplication, as shown by the equation and schematic diagrams below.

(Relation 12)

Once all the projected points have been computed, we can recover our original coordinate system by dividing each point by its homogeneous scale parameter and then dropping the scale factor, which after division will be set at 1.

(Relation 13)
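The projection just described amounts to one matrix product followed by a division by the homogeneous scale parameter. This minimal Python/NumPy sketch illustrates the idea; it is not the Accord.NET code the program uses:

```python
import numpy as np

def apply_homography(H, points):
    """Project 2-D points with a 3x3 homography: lift each <x, y> to
    <x, y, 1>, multiply by H, then divide by the resulting homogeneous
    scale w to recover the original coordinate system."""
    pts = np.asarray(points, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homogeneous = np.hstack([pts, ones])         # <x, y, 1> per point
    projected = homogeneous @ H.T                # standard matrix multiply
    return projected[:, :2] / projected[:, 2:3]  # divide by w, drop it
```

For an affine H the last row is (0, 0, 1) and the division is a no-op; a genuinely projective H makes use of all 8 degrees of freedom.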

By estimating the correct values for the homography matrix, a transformation like Figure 8 is obtained, which could possibly result in a final projection like Figure 9.

Having defined the homography and what it is useful for, it is important to create this homography matrix from the set of correlated points. To estimate a robust model from the data, the method known as RANSAC is used. After the execution of RANSAC, only the correct matches are left in the images. This happens because RANSAC finds a homography matrix relating most of the points and discards the incorrect matches as outliers. (Figure 10)

After the homography matrix has been computed, all that is left is to blend the two images together. To do so, linear gradient alpha blending is used from the center of one image to the other. The gradient blending works by simulating a gradual change in one image's alpha channel over the line which connects the centers of the two images. The light gray is the main image and the dark one is the template. (Figure 11) The color difference visible in Figure 11 is a result of the Alpha value and the other parameters in the filters table, Figure 12.

The Alpha default value is 0.3.

To clarify the matched image, a timer is used in the program. The first click on the Blend button starts the timer; while the timer runs, blending executes in parallel. (Figure 13) The matching status and the difference between the two images can easily be obtained with this algorithm.
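The gradient alpha blending described above can be sketched as follows. This Python/NumPy fragment is not the program's C# implementation; it simplifies the center-to-center line to a horizontal ramp and assumes equally sized grayscale images:

```python
import numpy as np

def gradient_blend(img1, img2, alpha=0.3):
    """Linear gradient alpha blend of two equally sized grayscale images:
    the weight of img2 ramps linearly from alpha at the left edge to 1.0
    at the right edge, simulating a gradual change in alpha along the
    axis connecting the two images."""
    h, w = img1.shape
    ramp = np.linspace(alpha, 1.0, w)           # per-column alpha weight
    return img1 * (1.0 - ramp) + img2 * ramp    # broadcast over rows
```

Repeated blending, as the program's timer performs on each tick, simply applies this mixing again, which is what makes the matched regions progressively clearer in Figures 13 and 14.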

5. CONCLUSION


Figure 1: Fingerprint Features

Figure 2: A data set with many outliers for which a line has to be fitted.


Figure 4: Program Environment


Figure 5: Harris Corner Detector applied to proposed Images

Figure 6: Maximum Correlation rule applied to proposed images



Figure 8: First level of transformation


Figure 9: Second level of transformation

Figure 10: RANSAC algorithm applied to proposed images


Figure 12: Filters table

Figure 13: Matching status after 3 times Blending

Figure 14: status after 5 times Blending

6. REFERENCES

[1] "Cyber Meltdown: Bible Prophecy and the Imminent Threat of Cyber terrorism", Ron Rhodes, 2011, ISBN 978-0-7369-4417-5 (pbk), ISBN 978-0-7369-4423-6 (eBook), Published by Harvest House Publishers, Part 2: Who are you? Where are you? Page 119.

[2] "Handbook of Fingerprint Recognition", Second Edition, Davide Maltoni, Dario Maio, Anil K.Jain, Salil Prabhakar, 2009, ISBN 978-1-84882-253-5.

[3] "Overview of fingerprint verification technologies", Elsevier Information Security Technical Report, Vol. 3, No. 1, 1998.

[4] "FINGERPRINT MATCHING", Anil K. Jain, Jianjiang Feng, Karthik Nandakumar, Published by the IEEE Computer Society, 0018-9162/10/$26.00 © 2010 IEEE.

[5] "Fingerprint Identification and Verification System using Minutiae Matching", F.A. Afsar, M. Arif and M. Hussain, National Conference on Emerging Technologies 2004.

[6] "Online Fingerprint Verification Algorithm and Distributed System", Ping Zhang, Xi Guo, Jyotirmay Gadedadikar, Journal of Signal and Information Processing, 2011, 2, 79-87.


[8] "FINGERPRINT IDENTIFICATION USING MINUTIAE CONSTELLATION MATCHING", Nizar Rokbani , Adel Alimi, IADIS Virtual Multi Conference on Computer Science and Information Systems 2005.

[9] H. Moravec (1980). "Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover" (http://www.ri.cmu.edu/pubs/pub_22.html). Tech Report CMU-RI-TR-3, Carnegie-Mellon University, Robotics Institute.

[10] C. Harris and M. Stephens (1988). "A combined corner and edge detector" (http://www.bmva.org/bmvc/1988/avc-88-023.pdf). Proceedings of the 4th Alvey Vision Conference. pp. 147–151.

[11] L. Bretzner and T. Lindeberg (1998). "Feature tracking with automatic selection of spatial scales" (http://www.csc.kth.se/cvap/abstracts/cvap201.html). Computer Vision and Image Understanding 71: pp. 385–392.

[12]http://en.wikipedia.org/w/index.php?oldid=483632469

[13]"Homography Estimation", Elan Dubrofsky, THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver), March 2009.

[14]C. R. Souza, "The Accord.NET Framework," Apr 2012; http://accord.googlecode.com

[15]http://en.wikipedia.org/wiki/Corner_detection

