1. Field of the Invention
The present invention is directed to the field of rolled fingerprint capture, and more specifically, to capturing and combining multiple fingerprint images to generate a composite rolled fingerprint image.
2. Related Art
A rolled fingerprint scanner is a device used to capture rolled fingerprint images. The scanner captures the image of a user's fingerprint as the user rolls a finger across an image capturing surface. Multiple fingerprint images may be captured by the scanner as the finger is rolled. These images are then combined using a computer to form a composite rolled fingerprint image. Fingerprint images captured by a digital camera are generally composed of pixels. Combining the pixels of multiple fingerprint images into a composite fingerprint image is commonly referred to as pixel “knitting.”
The captured composite rolled fingerprint image constitutes biometric data for the user. A biometric is a unique, measurable characteristic or trait of a human being for automatically recognizing or verifying identity. Fingerprint biometrics are well-established as an accurate method of identification and verification.
Capturing rolled fingerprints using a fingerprint scanner coupled to a computer may be accomplished in a number of ways. Many current technologies implement a guide to assist the user. These guides primarily come in two varieties. The first type includes a guide located on the fingerprint scanner itself. This type may include guides such as light emitting diodes (LEDs) that move across the top and/or bottom of the scanner. The user is instructed to roll the finger at the same speed as the LEDs moving across the scanner. In attempting to match this speed, the user inevitably rolls too fast or too slow, resulting in poor-quality images. The second type includes a guide located on a computer screen. Again, the user must match the speed of the guide, with the accompanying disadvantages.
Various devices have been developed for collecting rolled fingerprint images. For instance, U.S. Pat. No. 4,933,976 describes using the statistical variance between successive fingerprint image “slices” to knit together a composite fingerprint image. This patent also describes techniques for averaging successive slices into the composite image. These techniques have the disadvantage of producing less than desirable image contrast.
U.S. Pat. No. 6,483,932, assigned to Cross Match Technologies, Inc., discloses a useful method for capturing rolled fingerprint images. The method detects the start of the “roll” and captures a plurality of image frames until the roll is completed. Pixels of each frame are then knitted into a composite fingerprint image.
Conventional efforts to knit image portions into composite fingerprint images typically result in image discontinuities, particularly where image portions overlap and provide different pixel values for overlapping areas. Discontinuities appear particularly at points where ridge features meet in adjacent image portions. There is a need for an improved method of establishing pixel values at the boundaries of these image portions as they are knitted into a composite print image.
The invention provides an improved system and method for creating a composite image of a moving object by stitching together image data from a plurality of image frames containing image data for the object. In an embodiment, a plurality of fingerprint images are captured as a finger is rolled relative to an imaging device. The areas of each image that contain useful fingerprint information are identified and the speed of movement of the finger relative to the imaging device is determined. The images are stitched together in sequence and data for pixels near the boundary between adjacent images is blended, so that values for those pixels are determined based on redundant data from both adjacent images. The extent of blending is determined based at least in part on the speed of movement of the finger, so that as speed increases blending is applied to an increasing number of the pixels in the boundary area. Blending in this embodiment occurs based on a weighting function where data from a primary frame is given primary weight while data from a secondary or redundant frame is given relatively less weight. The weight given to pixels from the secondary frame declines as distance increases between those pixels and the boundary between frames.
Further embodiments, features, and advantages of the present invention, as well as the structure and operation of the various embodiments of the present invention, are described in detail below with reference to the accompanying drawings.
FIG. 4a is a diagram showing an imaging surface used in an embodiment of the present invention.
FIG. 4b is a series of images of the imaging surface of FIG. 4a.
FIG. 4c shows a region of the imaging surface over which the fingertip is rolled during roll print capture.
FIG. 4d is an illustration showing the spatial relationship on the imaging surface of two sequential fingertip images taken as the finger rolls across the surface, and the location of a blending region.
FIG. 4e is a magnified view of the blending region shown in FIG. 4d.
The present invention will now be described with reference to the accompanying drawings. In the drawings, like reference numbers can indicate identical or functionally similar elements. Additionally, the left-most digit(s) of a reference number may identify the drawing in which the reference number first appears.
The present invention provides an improved apparatus and method for combining image data to form a composite image. The invention will be described by way of example in terms of a system and method for creating composite fingerprint images.
Fingerprint scanner 102 captures a user's fingerprint. Fingerprint scanner 102 may be any suitable type of fingerprint scanner, known to persons skilled in the relevant art(s). For example, fingerprint scanner 102 may be a Cross Match Technologies Verifier™ 290 fingerprint capture device. Fingerprint scanner 102 includes a fingerprint image capturing area or surface, where a user may apply a finger, and roll the applied finger across the fingerprint capturing area or surface. Fingerprint scanner 102 periodically samples the fingerprint image capturing area, and outputs captured image data from the fingerprint image capturing area. Fingerprint scanner 102 is coupled to computer system 104.
Fingerprint scanner 102 may be coupled to computer system 104 in any number of ways. Some of the more common methods include coupling by a frame grabber, a Universal Serial Bus port, and a parallel port. Other methods of coupling fingerprint scanner 102 to computer system 104 are known to persons skilled in the relevant art(s), and are within the scope of the present invention.
Computer system 104 receives captured fingerprint image data from fingerprint scanner 102. Computer system 104 may provide a sampling signal to fingerprint scanner 102 that causes fingerprint scanner 102 to capture fingerprint image frames. Computer system 104 combines data from a plurality of captured fingerprint image frames to generate a data set representing a composite fingerprint image. Further details of combining captured fingerprint image frames into composite or overall fingerprint images are provided below.
Computer system 104 may comprise a personal computer, a mainframe computer, one or more processors, specialized hardware, software, firmware, or any combination thereof, and/or any other device capable of processing the captured fingerprint image data as described herein. Computer system 104 may comprise a hard drive, a floppy drive, memory, a keyboard, a computer mouse, and any additional peripherals known to person(s) skilled in the relevant art(s), as necessary. Computer system 104 allows a user to initiate and terminate a rolled fingerprint capture session and to modify rolled fingerprint capture session options and parameters.
Computer system 104 may be optionally coupled to a communications interface 110. If equipped with communications interface 110, computer system 104 may use communications interface 110 to transmit fingerprint image data, or any other related data, and to receive data needed for operations. Communications interface 110 may provide an interface to a network, the Internet, or any other data communication medium known to persons skilled in the relevant art(s). Through this communication medium, the data may be routed to any fingerprint image data receiving entity of interest, as would be known to persons skilled in the relevant art(s). For example, such entities may include the police and other law enforcement agencies. The hardware used to implement communications interface 110 depends on the type of interface desired. Communications interface 110 may, for example, comprise a modem, network card, or other network interface hardware or software appropriate to the selected data communication medium.
Display 106 is coupled to computer system 104. Computer system 104 outputs fingerprint image data, including individual frames and composite rolled fingerprint images, to display 106. Any related rolled fingerprint capture session options, parameters, or outputs of interest, may be output to display 106. Display 106 displays the received fingerprint image data and related rolled fingerprint capture session options, parameters, and outputs. Display 106 may include a computer monitor, or any other applicable display known to persons skilled in the relevant art(s) from the teachings herein.
As shown in FIG. 1, computer system 104 includes a rolled fingerprint capture module 108.
The present invention is described in terms of the exemplary environment shown in FIG. 1.
Description in these terms is provided for convenience only. It is not intended that the invention be limited to application in this example environment. In fact, after reading the following description, it will become apparent to a person skilled in the relevant art how to implement the invention in alternative environments known now or developed in the future.
Implementations for a rolled fingerprint capture module 108 are described at a high-level and at a more detailed level. These structural implementations are described herein for illustrative purposes, and are not limiting. In particular, the rolled fingerprint capture module 108 as described in this section can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof. The details of such structural implementations will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein.
FIG. 4a shows prism 210 with platen surface 220, against which a fingertip may be rolled laterally in the direction of the arrow.
The image data collected may be in any data format and resolution and may use any current or future imaging technology. The example described herein uses 8-bit gray scale image data, but the image data may have a higher or lower resolution, may be a color image, or may relate to an image other than a visible wavelength image, for example an infrared image.
As part of the process of collecting the series of fingertip images and stitching them together to form a composite roll print image, imaging artifacts are discarded; that is, only the data from the fingertip region of interest (referred to as the centroid window) is used for further processing and any image information in other areas is ignored (step 320). Isolation of the centroid window data may be achieved by filtering the image, binarizing the data to obtain a black and white image, localizing the resulting dark areas, ordering the dark areas by size, and selecting the largest dark area as the centroid window area. Then, the original gray scale data for that area is used as an input to the composite roll print image and image data from that frame for other areas of the platen is discarded.
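By way of illustration only, the centroid window isolation steps described above might be sketched in Python as follows. The function and parameter names are hypothetical, and the sketch assumes 8-bit gray scale frames held as NumPy arrays, with SciPy used for the filtering and connected-component steps.

import numpy as np
from scipy import ndimage

def isolate_centroid_window(frame, threshold=128):
    """Locate the largest dark region (the centroid window) in an
    8-bit gray scale frame: filter, binarize, localize dark areas,
    order them by size, and select the largest."""
    # Filter: light smoothing to suppress sensor noise.
    smoothed = ndimage.uniform_filter(frame.astype(np.float32), size=5)
    # Binarize: dark pixels (fingerprint ridges) become True.
    dark = smoothed < threshold
    # Localize the dark areas as connected components.
    labels, count = ndimage.label(dark)
    if count == 0:
        return None  # no fingerprint data in this frame
    # Order the dark areas by size and select the largest one.
    sizes = ndimage.sum(dark, labels, index=range(1, count + 1))
    largest = int(np.argmax(sizes)) + 1
    ys, xs = np.nonzero(labels == largest)
    # Return the bounding box; the caller keeps the original gray
    # scale data inside it and discards data outside it.
    return xs.min(), xs.max(), ys.min(), ys.max()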
Various methods useful in processing the series of images as the finger is rolled, including further details of methods for identifying and selecting the centroid window for each image, are disclosed in U.S. Pat. No. 6,483,932 to Martinez et al., assigned to the assignee of this application, the entire disclosure of which is incorporated herein by reference.
FIG. 4d shows, in combination, two sequentially captured fingertip image centroid windows, arbitrarily labeled 10 and 11, and illustrates a preferred manner of combining sequentially captured fingertip images to form a composite roll print image. The composite image formation process will be described in terms of combining data from the two centroid windows 10 and 11, but it should be understood that a much larger number of sequentially captured images are typically combined in the manner shown to obtain a complete roll print image. As shown in FIG. 4d, centroid windows 10 and 11 have respective vertical centerlines C10 and C11.
The location of centerline C10 may be calculated by identifying the lines defining the left and right sides of centroid window 10, determining the X coordinates of those lines, adding these two X coordinates, and dividing the result by two. In this manner, the X coordinate of the approximate center of centroid window 10 can be established. Similarly, the location of centerline C11 may be calculated by adding together the X coordinates of the lines defining the left and right sides of centroid window 11 and dividing the result by two.
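As a minimal worked example (with hypothetical names), the centerline arithmetic reduces to:

def centerline_x(left_x, right_x):
    """Approximate center of a centroid window: the mean of the X
    coordinates of its left and right bounding lines."""
    return (left_x + right_x) // 2

# A window whose sides lie at X = 140 and X = 220 has its
# centerline at X = 180.
assert centerline_x(140, 220) == 180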
A blending area 450 is located in a region of overlap between centroid windows 10 and 11. In one embodiment, as illustrated in FIG. 4d, blending area 450 is centered on centerline C11, which serves as the stitch line between the two centroid windows.
FIG. 4e illustrates blending of data at the stitch line C11 located at X=Xi. The image data to the left of stitch line C11 (having coordinates X=Xi−n to Xi−1, where n is the distance to the next adjacent stitch line) is taken from centroid window 10, while image data to the right of stitch line C11 (coordinates X=Xi to Xi+n) is taken from the image of centroid window 11. If centroid window 10 is the first image to be assembled into the composite image, data from the entire centroid window 10 is used in the composite image. If there are further images to the left of centroid window 10 that are to be assembled into the composite image, the image data from centroid window 10 is used only between stitch lines C10 and C11, and image data from centroid window 10 is blended with image data from a centroid window 9 to the left (not shown) around stitch line C10 between centroid windows 9 and 10.
This process is illustrated by a single stitching and blending operation between centroid windows 10 and 11 in FIGS. 4d and 4e.
To increase accuracy and quality of the resulting composite image, the gray scale values of pixels near the stitch lines are determined based on a blended weighting of the pixel values from two overlapping images providing data for that pixel location. A preferred embodiment of this blending process will now be described in detail. For simplicity, the operation of the inventive blending process (including steps 330 through 350 as illustrated in FIG. 3) will be described with reference to a single stitch line between two adjacent centroid windows.
First, a desired blending depth “Blend_Depth” is calculated. The desired blending depth is preferably calculated according to a predetermined algorithm based at least in part on roll speed of the finger. In a preferred embodiment blending depth is determined by the following equation:
Blend_Depth = Roll_Speed * 0.7    (Equation 1)
The resulting value for Blend_Depth in this embodiment is in units of pixels and defines the number of pixels to the right and to the left of the stitching line that the blending process will modify. Roll_Speed can be defined to be any value varying with the roll speed of the finger, and is defined in a preferred embodiment as the number of pixels a rolling finger moves between two subsequent video frames. The preferred definition produces a value for Roll_Speed in units of “pixels per frame interval.” Roll_Speed can be calculated and represented using a variety of other units representing a value that varies with movement speed of the finger, as desired. For example, Roll_Speed can be calculated in terms of inches per second. However, if Roll_Speed is calculated and represented in another unit format, Equation 1 must be modified to include a unit conversion factor so that the resulting Blend_Depth value is calculated in units of “pixels.”
To ensure that the blending depth is less than or equal to the number of available overlapping pixels, a multiplier is provided in Equation 1, in this example 0.7. The value of the multiplier must be less than 1.0 and can be varied based on experimental results or preference. As can be seen, in this embodiment the blending depth (and the number of pixels blended on each side of the stitch line) increases as roll speed increases. Fewer pixels on each side of the stitch line are blended as the roll speed is reduced. In an embodiment using Equation 1 to determine blending depth, in effect, a specified percentage of the image area added to the composite image with each frame is blended. In one exemplary embodiment, each time a frame is stitched to the composite image, 70% of the newly added image area is blended with data from the preceding image. As a further measure to avoid excessive blending depth, blending does not occur for pixels that are not within an overlap area. In other words, if there is no data available for a particular pixel from the adjacent image, that pixel will not be blended even though it may fall within the calculated blending region.
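A minimal sketch of the blend depth calculation of Equation 1 follows; the function name is hypothetical, while the 0.7 multiplier and the pixels-per-frame-interval units are taken from the text.

def blend_depth(roll_speed, multiplier=0.7):
    """Number of pixels to blend on each side of the stitch line.

    roll_speed: pixels the finger moved between two video frames.
    multiplier: must be less than 1.0 so that the blending depth
    stays within the available overlap.
    """
    return int(roll_speed * multiplier)

# A roll speed of 40 pixels per frame interval gives a blending
# depth of 28 pixels, so a 56-pixel-wide region centered on the
# stitch line is blended.
assert blend_depth(40) == 28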
The algorithm for determining blend depth may also take into account factors in addition to roll speed, as desired, and such modifications are within the scope of the invention.
The composite image formation process requires stitching together data from a series of consecutive video frames of the finger roll area. Each stitching operation involves identifying a relevant image portion of a frame, adding that data to an image construct at a stitch line (e.g. C11) defining a border with image data from a previous frame, and performing a data blending process to adjust pixel values near the stitch line.
Pixels from the current video frame are copied to the right of the stitch line (that is, in the direction of the finger roll). Pixels from the previous video frame are located to the left of the stitch line (in the direction opposite the finger roll). The blending process modifies the values of pixels located near the stitch line within a blending region 450, the width of which is two times Blend_Depth as calculated above. Values for blended pixels are calculated based on both (1) the pixel's value in the main video frame covering that location, and (2) the pixel's value from an adjacent video frame that overlaps the main frame and therefore also contains data for that location. In the example herein the pixel values are eight-bit gray scale values, although the invention encompasses a variety of other imaging techniques. The pixel value from the main video frame covering the pixel area is given more weight than data from the “overlap” frame.
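For illustration, a single stitching operation might be sketched as follows (hypothetical names; the composite image and each frame are assumed to be NumPy arrays sharing the platen's pixel coordinate system):

import numpy as np

def stitch_frame(composite, frame, stitch_x):
    """Copy the current frame's pixels into the composite image to
    the right of the stitch line; pixels to the left remain from the
    previous frame. Values near the stitch line are then adjusted by
    the blending step sketched below."""
    composite[:, stitch_x:] = frame[:, stitch_x:]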
Referring again to FIG. 4e, the blending process on the left of the stitch line (X<Xi) weights data from centroid window 10 as primary and redundant data from centroid window 11 as secondary, with the relative weights set by a blend number scaled between zero and 255 as described below. The blending process on the right of the stitch line (X≧Xi) is performed in the same manner, with the roles of the two centroid windows reversed.
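The following sketch shows one linear weighting consistent with this description; the names are hypothetical, and the particular cross-fade shown is an illustrative choice rather than the exact formula of any given embodiment. It uses the 255-based blend number discussed below.

import numpy as np

def blend_region(left_img, right_img, stitch_x, depth):
    """Blend 8-bit gray scale data from two overlapping centroid
    windows across a region of width 2 * depth centered on the
    stitch line at X = stitch_x."""
    width = left_img.shape[1]
    # Start from the unblended composite: the left window's data to
    # the left of the stitch line, the right window's to the right.
    out = np.where(np.arange(width) < stitch_x, left_img, right_img)
    out = out.astype(np.uint8)
    for x in range(stitch_x - depth, stitch_x + depth):
        # The blend number runs from 0 to 255 across the region, so
        # the secondary frame's weight falls off with distance from
        # the stitch line.
        blend = 255 * (x - (stitch_x - depth)) // (2 * depth)
        out[:, x] = (right_img[:, x].astype(int) * blend
                     + left_img[:, x].astype(int) * (255 - blend)) // 255
    return out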
As can be seen, for each pixel, data from one of the frames is given primary weight and redundant data from the adjacent frame is given secondary weight. In the embodiment disclosed, the stitch line is the boundary for this weighting determination; to the left of stitch line C11, data from centroid window 10 has primary weight, and to the right, data from centroid window 11 has primary weight. In addition, the weight given to the secondary data from the adjacent centroid window diminishes as distance from the stitch line increases.
As noted above, calculations in this example are based on eight-bit gray scale image data. The multiplying value 255 used to calculate the blend number is selected because 255 is the value of pure white and zero is the value of pure black in the gray scale. If a different scale or data resolution is used, or if non-gray-scale data is used, the formulae are adjusted accordingly.
It is useful to establish a maximum allowable value for Roll_Speed, since rolling the finger too quickly may produce diminished print quality and may not produce an appropriate level of overlap of adjacent centroid windows to permit high quality image blending. In addition to the possibility of exceeding the frame rate capability of the imaging hardware, the compression of the finger and its position relative to the platen often vary significantly at high roll speeds, resulting in less useful and reproducible image data. The maximum allowable roll speed can be determined experimentally based on experience with the hardware in use. In a preferred embodiment, the maximum allowable roll speed is 90 pixels per video frame. Roll speeds greater than the predetermined value will trigger an indication that the user must re-roll the finger.
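A roll speed check of this kind might be sketched as follows (hypothetical names; roll speed is measured, per the definition above, as the number of pixels the finger moves between two subsequent video frames, here taken as the displacement of the centroid window centerline):

MAX_ROLL_SPEED = 90  # pixels per frame interval (preferred embodiment)

def roll_speed_ok(prev_centerline_x, curr_centerline_x):
    """Return False if the finger moved farther between frames than
    the maximum allowable roll speed, indicating a re-roll is needed."""
    return abs(curr_centerline_x - prev_centerline_x) <= MAX_ROLL_SPEED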
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.