Fingerprints are widely accepted as unique identifiers for individuals. Fingerprinting can be used as a biometric to verify identities in order to control attendance or access, e.g., to restricted areas, electronic devices, etc. Conventional fingerprint detectors typically require a user to place a finger or hand on the detector. The fingerprint is detected by the detector and compared to a catalogued fingerprint for the user.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments. In the drawings, like numerals describe substantially similar components throughout the several views. Other embodiments may be utilized and process, structural, logical, and electrical changes may be made without departing from the scope of the present disclosure. The following detailed description is, therefore, not to be taken in a limiting sense.
Fingerprinting system 100 may include a receiver 110 configured to receive a finger and an image-capturing device 120 optically coupled to receiver 110. Fingerprinting system 100 may be configured so that image-capturing device 120 captures a fingerprint from a target region 122 of the finger without target region 122 being in direct physical contact with a solid surface. For example, receiver 110, and thus a finger received therein, may be separated from image-capturing device 120 by a gap 124, e.g., of air. For some embodiments, a fingerprint may be captured from target region 122 while the finger is in mid-air.
Target region 122 may include the fingerprint, e.g., friction ridges or epidermal ridges. Target region 122 may include other features (e.g., micro-features) in addition to the fingerprint, such as transient defects, e.g., cuts, inflammation, swollen pores, or other injuries, that may be tracked. For example, changes in the micro-features may be tracked for individual users; such tracking may be referred to as temporal identity mapping. Keeping track of changes in the micro-features in addition to the fingerprint may create a hard-to-copy biometric that can increase the statistical robustness of a fingerprinting process.
Requiring a finger to contact a solid surface during fingerprinting, as is common in conventional fingerprint detectors, can result in security, health, and equipment risks. An advantage of not having target region 122 touch a solid surface may be higher security, since no fingerprint “residue” is left behind in an optical path from image-capturing device 120 to target region 122. For example, a portion of a previous user's fingerprint (e.g., known as fingerprint “residue”) may be left on the solid surface in the optical path between the finger and the fingerprint sensor in a conventional fingerprint detector.
Touching such a solid surface can also leave pathogens behind that can be transmitted to a finger of a subsequent user, presenting a health risk. An advantage of not having target region 122 touch such a solid surface is that the risk of transmitting pathogens is reduced.
For some embodiments, image-capturing device 120 may include an optical system (e.g., one or more lenses and, for some embodiments, one or more mirrors), such as an afocal optical system 126 (e.g., that may be referred to as an afocal lens system or an afocal lens). Afocal optical system 126 may be optically coupled to a sensor 127. Afocal optical system 126 may receive an image of a fingerprint, in the form of electromagnetic radiation reflected from target region 122, and may transmit the image to sensor 127.
Afocal optical system 126 facilitates capturing a fingerprint from target region 122 when target region 122 is at a distance from afocal optical system 126, thus allowing the fingerprint to be captured without target region 122 contacting a solid surface, such as of afocal optical system 126. An example of afocal optical system 126 is discussed below in conjunction with
In general, afocal optical systems may be effectively focused at infinity (e.g., may have an effectively infinite focal length), may have substantially no net convergence or divergence (e.g., may have no net convergence or divergence for some embodiments) in their light paths, and can operate at non-contact object distances. Some afocal optical systems may produce collimated electromagnetic radiation, such as light, at substantially unity magnification. The advantage of afocality is that a collimated, defined field of view can be maintained at a relatively large distance, facilitating the non-contact between target region 122 and a solid surface.
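As an illustrative aside (textbook relations for a generic two-element afocal system, not the specific lens-and-mirror arrangement described below), two focusing elements form an afocal system when separated by the sum of their focal lengths, and substantially unity magnification corresponds to roughly equal focal lengths:

```latex
% Generic two-element afocal system (illustrative assumption, not the
% catadioptric design of afocal optical system 126):
\begin{aligned}
d_{\text{separation}} &= f_1 + f_2
  && \text{(collimated input emerges collimated)} \\
m_{\text{transverse}} &= -\frac{f_2}{f_1},
  && \lvert m_{\text{transverse}} \rvert \approx 1 \ \text{when}\ f_1 \approx f_2
\end{aligned}
```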
For some embodiments, fingerprinting system 100 may include another image-capturing device, such as a camera 129, e.g., a video camera, that is directed at receiver 110 and thus a finger received in receiver 110. Camera 129 may be used for capturing (e.g., recording) various gestures of a user's finger(s) as the user's finger(s) are being received in receiver 110. Camera 129 enables gesture recognition that provides an additional level of security to fingerprinting system 100.
For some embodiments, fingerprinting system 100 may include one or more electromagnetic radiation (e.g., light) sources 130 that are configured to illuminate receiver 110, and thus a finger received in receiver 110, with beams 135 of electromagnetic radiation, such as infrared radiation, visible light, or ultraviolet radiation. As such, image-capturing device 120 may be configured to detect infrared radiation, visible light, and/or ultraviolet radiation. Hereinafter, the term light will be used to cover all types of electromagnetic radiation, including infrared, visible, and ultraviolet radiation.
For some embodiments, light sources 130 may be configured to emit alignment beams 140 of visible light independently of beams 135. For example, alignment beams 140, and thus the sources thereof, may form at least a portion of an alignment system of receiver 110 and thus fingerprinting system 100. Alternatively, beams 135 and beams 140 may be emitted from separate light sources. Beams 140 may be colored red for some embodiments.
Beams 140 may cross each other at a crossing point 142 that is aligned with afocal optical system 126 in image-capturing device 120. For example, positioning a finger so that crossing point 142 lands on a predetermined location of target region 122, e.g., the center of target region 122, may properly align target region 122 with afocal optical system 126. During operation, target region 122 reflects the light from beams 135 to afocal optical system 126.
Controller 150 may be configured to cause fingerprinting system 100 to perform the methods disclosed herein. For example, controller 150 may be configured to receive captured image data, e.g., a bitmap, representing a captured fingerprint from image-capturing device 120 and to compare the captured image data to stored image data, representing a stored fingerprint, stored in a database (e.g., a fingerprint database) within controller 150 or externally to controller 150, such as on a network server 156, e.g., in a local area network (LAN), wide area network (WAN), the Internet, etc. The captured image data representing a captured fingerprint may be referred to as captured fingerprint data (e.g., a captured fingerprint), and the stored image data representing a stored fingerprint may be referred to as stored fingerprint data (e.g., a stored fingerprint).
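As a minimal sketch of this comparison flow (not the actual matching algorithm, which the description leaves open), the controller logic might look like the following, where the similarity measure, threshold, and database layout are assumptions made only for illustration:

```python
import numpy as np

MATCH_THRESHOLD = 0.95  # hypothetical similarity threshold

def similarity(captured: np.ndarray, stored: np.ndarray) -> float:
    """Placeholder similarity score: normalized correlation of two equally
    sized fingerprint bitmaps (a real matcher would compare minutiae or
    ridge features instead)."""
    a = (captured - captured.mean()) / (captured.std() + 1e-9)
    b = (stored - stored.mean()) / (stored.std() + 1e-9)
    return float(np.mean(a * b))

def find_matching_user(captured: np.ndarray, database: dict) -> str | None:
    """Compare captured image data to stored fingerprint data, which may be
    held within controller 150 or fetched from network server 156 (modeled
    here as a plain dict of user id -> stored bitmap)."""
    for user_id, stored in database.items():
        if similarity(captured, stored) >= MATCH_THRESHOLD:
            return user_id      # captured fingerprint matches this stored fingerprint
    return None                 # no match found
```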
Controller 150 may be configured to authenticate a user (e.g., by verifying an identity of a user) in response to the user's captured fingerprint matching a stored fingerprint for that user, that is, in response to the captured image data representing the user's captured fingerprint matching the stored image data representing a stored fingerprint.
Controller 150 may be configured to verify a user's identity in response to the fingerprints captured from a plurality of the user's fingers matching a plurality of stored fingerprints. For some embodiments, controller 150 may be configured to require that the user present different fingers in a certain order in order to verify the user's identity. In other words, controller 150 may be configured to verify a user's identity in response to different fingerprints of the user presented in a certain order matching stored fingerprints in a certain order.
Requiring matches of different fingerprints in a certain order can increase overall security and can reduce the chance for a false positive. As such, fingerprinting system 100 may be configured to authenticate (e.g., verify) a user based on fingerprints captured from target regions 122 of different fingers presented in a certain order.
For example, if the false positive rate is found to be an error probability of 2×10⁻⁴ for one finger, then two different fingers provide an error probability of 4×10⁻⁸. Requiring that the two different fingers be presented in a certain order reduces the probability further, in that there are 56 ordered combinations of choosing a first one of the 8 non-thumb fingers followed by a different one of them. This reduces the overall probability of a false positive to (40/56)×10⁻⁹, which is less than the 1 chance in a billion required for forensic identification. As such, fingerprinting system 100 may be configured to provide forensic-level security.
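Written out, the arithmetic from the preceding paragraph is:

```latex
\begin{aligned}
p_{1} &= 2\times10^{-4} && \text{(false positive, one finger)} \\
p_{2} &= p_{1}^{2} = 4\times10^{-8} && \text{(two independent fingers)} \\
N &= 8 \times 7 = 56 && \text{(ordered pairs of non-thumb fingers)} \\
p_{\text{overall}} &= \frac{p_{2}}{N} = \frac{4\times10^{-8}}{56}
  = \frac{40}{56}\times10^{-9} \approx 7.1\times10^{-10} < 10^{-9}
\end{aligned}
```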
For some embodiments, controller 150 may be configured to stop the process of capturing fingerprints from target regions of different fingers presented in a certain order and to authenticate a user in response to the overall probability of a false positive reaching a certain level. For example, controller 150 may stop the process and authenticate a user in response to the fingerprints captured from the target regions of a certain number of fingers presented in the certain order matching (e.g., two different fingers presented in the certain order matching), e.g., when the overall probability of a false positive is less than 1 chance in a billion.
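A minimal sketch of such a stop-when-confident loop, assuming the example figures above; the capture and matching callbacks are hypothetical stand-ins for image-capturing device 120 and the database comparison:

```python
import math

PER_FINGER_FP = 2e-4       # example per-finger false-positive rate from the text
TARGET_FP = 1e-9           # "1 chance in a billion"
NON_THUMB_FINGERS = 8

def authenticate(capture_next_finger, matches_next_stored_finger) -> bool:
    """Capture fingers in the required order and stop as soon as the
    estimated overall false-positive probability falls below TARGET_FP."""
    for k in range(NON_THUMB_FINGERS):
        fingerprint = capture_next_finger(k)
        if not matches_next_stored_finger(k, fingerprint):
            return False                                  # wrong finger or wrong order
        # k + 1 fingers have matched in one specific required order:
        orderings = math.perm(NON_THUMB_FINGERS, k + 1)   # 8, 56, 336, ...
        overall_fp = (PER_FINGER_FP ** (k + 1)) / orderings
        if overall_fp < TARGET_FP:
            return True                                   # confident enough: stop early
    return False                                          # not reached with these figures
```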
Controller 150 may inform the user of the verified identity via a display 155 coupled thereto in response to controller 150 verifying the user's identity. Controller 150 may be configured to transmit a signal 157 in response to verifying the user's identity. For example, signal 157 may be transmitted to an electronic device that grants the user access to the electronic device in response to receiving signal 157. Signal 157 may cause a solenoid to unlock a door, etc. For some embodiments, signal 157 may be sent to security personnel, e.g., over a network to a computer, to inform the security personnel that the user's identity is verified.
For other embodiments, signal 157 may be set to a first logic level (e.g., logic high) in response to controller 150 verifying the user's identity, where the first logic level causes the electronic device to grant the user access thereto, causes the door to unlock, informs security personnel that the user's identity is confirmed, etc.
If a user's identity is not verified, e.g., the user's fingerprint(s) does not match any fingerprints in the fingerprint database and/or the user's fingers are presented in the wrong order, controller 150 may inform the user as such via display 155. The controller 150 may be configured not to transmit signal 157 in response to the user's identity not being verified. For other embodiments, signal 157 may be set to a second logic level (e.g., logic low) in response to controller 150 not being able to verify the user's identity, where the second logic level prevents the electronic device from granting the user access thereto, prevents the door from unlocking, informs security personnel that the user's identity is not confirmed, etc. As such, signal 157 may be indicative of the user's identity, e.g., indicative of whether the user's identity is verified.
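As a small illustrative sketch (the output-pin abstraction below is hypothetical, not an interface defined by the description), signal 157 can be modeled as a single logic-level output driven by the verification result:

```python
LOGIC_HIGH, LOGIC_LOW = 1, 0

def drive_signal_157(identity_verified: bool, write_output_pin) -> int:
    """Drive signal 157 high when the user's identity is verified (granting
    access, unlocking the door, notifying security personnel) and low when it
    is not; `write_output_pin` is a hypothetical hardware-abstraction callback."""
    level = LOGIC_HIGH if identity_verified else LOGIC_LOW
    write_output_pin(level)
    return level
```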
In addition to receiving fingerprint data from image-capturing device 120, controller 150 may be configured to receive video data from camera 129 that represents the movement of the user's finger(s) as the user's finger(s) are received in receiver 110. Controller 150 may be configured to compare video data from camera 129 to stored pre-recorded video data that may be stored in a database (e.g., a video database) within controller 150 or externally to controller 150, such as on network server 156.
For example, controller 150 may be configured to compare gestures of a finger captured by camera 129 to gestures of fingers stored in the database. If the gestures captured by camera 129 match gestures stored in the database, the user's identity is further verified when the user's identity is verified through fingerprinting. Controller 150 may cause display 155 to display an error message that requires the user to reenter its fingerprint(s) and/or may send a message to security personnel, indicating a potential security alert, in response to gestures of a finger captured by camera 129 mismatching gestures of fingers stored in the database. For some embodiments, controller 150 may be configured to stop the process of capturing and comparing gestures and to indicate a gesture match in response to the overall probability of a false positive reaching a certain level, e.g., when the overall probability of a false positive is less than 1 chance in a billion. For example, controller 150 may be configured to indicate a gesture match in response to a certain number of gestures in a certain order matching.
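A hedged sketch of this gesture comparison, assuming the recognizer running on camera 129 video reduces each gesture to a label and that a per-gesture false-positive figure is available; both assumptions are illustrative only:

```python
def gestures_match(captured_gestures: list[str], stored_gestures: list[str],
                   per_gesture_fp: float = 1e-2, target_fp: float = 1e-9) -> bool:
    """Compare a captured gesture sequence to a stored sequence, stopping as
    soon as enough gestures have matched in order for the estimated false-
    positive probability to fall below target_fp.  The label representation
    and probability figures are assumptions, not taken from the description."""
    overall_fp = 1.0
    for captured, stored in zip(captured_gestures, stored_gestures):
        if captured != stored:
            return False                 # mismatch: potential security alert
        overall_fp *= per_gesture_fp     # one more gesture matched in order
        if overall_fp < target_fp:
            return True                  # confident enough: stop comparing
    return len(captured_gestures) >= len(stored_gestures) and overall_fp < target_fp
```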
Controller 150 may be configured to receive an indication from receiver 110, indicating whether a finger has been received by receiver 110. In response to receiving an indication that a finger has been received by receiver 110, controller 150 may cause image-capturing device 120 to capture an image of a fingerprint from target region 122 of the finger.
Controller 150 may be configured to determine whether target region 122 is in focus and/or whether target region 122 is properly aligned with afocal optical system 126 before causing image-capturing device 120 to capture the fingerprint. Controller 150 may be configured to determine whether target region 122 is in focus and/or whether target region 122 is properly aligned with afocal optical system 126 in response to receiving an indication that a finger has been received by receiver 110. For example, controller 150 may receive a signal having a first logic level (e.g., logic high) from receiver 110 in response to a finger being received by receiver 110. When no finger is in receiver 110, controller 150 may receive a signal having a second logic level (e.g., logic low) from receiver 110. Note that when one or more operations are performed in response to an event, such as receiving a signal, without user intervention, the one or more operations may be taken as being performed automatically for some embodiments.
One of beams 135 may be received by a sensor 160, coupled to controller 150, when no finger is in receiver 110, as indicated by a dashed line in
Alternatively, one of beams 140 may be received by a sensor 162, coupled to controller 150, when no finger is in receiver 110, as indicated by a dashed line in
For some embodiments, controller 150 may be configured to perform a feedback alignment method, e.g., in response to determining that target region 122 is not properly aligned with afocal optical system 126, that properly aligns target region 122 with afocal optical system 126 (
For example, the predetermined portions might facilitate a comparison with like portions of a stored fingerprint, thereby allowing controller 150 to determine whether a user's fingerprint matches a fingerprint in the fingerprint database, thus allowing controller 150 to verify the user's identity. Therefore, the controller 150 might determine that a target region 122 is not properly aligned in response to determining that a captured image of target region 122 does not include the predetermined portions.
If controller 150 determines that target region 122 is not properly aligned, controller 150 may inform the user, e.g., via display 155, that its finger is not properly aligned and may instruct the user to reposition its finger. Controller 150 may then cause image-capturing device 120 to capture another image of target region 122 in response to the user repositioning its finger, and controller 150 may determine whether the target region 122 is now properly aligned. If the target region 122 is properly aligned, controller 150 will cause display 155 to inform the user as such. If controller 150 determines that target region 122 is still not properly aligned, controller 150 may inform the user that its finger is not properly aligned and may instruct the user to reposition its finger again. The feedback alignment method may be repeated until controller 150 determines that target region 122 is properly aligned with afocal optical system 126. For example, the feedback alignment method may be an iterative process for some embodiments.
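A minimal sketch of this feedback alignment loop, assuming hypothetical callbacks for image capture, the alignment test, and user prompts via display 155 (the attempt cap is also an assumption; the description simply repeats until alignment succeeds):

```python
MAX_ATTEMPTS = 5   # hypothetical cap; the description repeats until aligned

def feedback_align(capture_image, contains_required_portions, prompt_user) -> bool:
    """Iteratively prompt the user to reposition the finger until a captured
    image of target region 122 contains the predetermined portions needed for
    comparison with a stored fingerprint."""
    for _ in range(MAX_ATTEMPTS):
        image = capture_image()
        if contains_required_portions(image):
            prompt_user("Finger aligned.")
            return True
        prompt_user("Finger not properly aligned; please reposition your finger.")
    return False
```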
For some embodiments, the feedback alignment method may be used in conjunction with positioning the finger so that crossing point 142 lands on a predetermined point of target region 122. For other embodiments, the feedback alignment method may be used in conjunction with a frame (e.g., discussed below in conjunction with
Note that positioning a finger so that crossing point 142 lands on a predetermined location of target region 122, as discussed above in conjunction with
For some embodiments, controller 150 may be configured to perform a focusing method, e.g., in response to determining that target region 122 is not in focus, to bring target region 122 into focus. Adjusting a distance d (
For example, controller 150 may move afocal optical system 126 until it determines that target region 122 is in focus. Alternatively, controller 150 may instruct a user, e.g., via display 155, to move its finger closer to or further away from afocal optical system 126 until it determines that target region 122 is in focus. For example, controller 150 may cause image-capturing device 120 to capture an image of at least a portion of target region 122 and determine whether the at least a portion of target region 122 is in focus at each position of afocal optical system 126 and/or the user's finger.
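A hedged sketch of one way such a focusing method could be implemented; the description does not say how focus is judged, so the Laplacian-variance sharpness measure, threshold, and step size below are illustrative assumptions:

```python
import numpy as np

FOCUS_THRESHOLD = 150.0   # hypothetical sharpness threshold

def sharpness(image: np.ndarray) -> float:
    """Crude focus measure: variance of a discrete Laplacian of the image.
    The description does not specify a focus metric; this is one common choice."""
    lap = (-4.0 * image[1:-1, 1:-1]
           + image[:-2, 1:-1] + image[2:, 1:-1]
           + image[1:-1, :-2] + image[1:-1, 2:])
    return float(lap.var())

def bring_into_focus(capture_image, move_lens, step=0.25, max_steps=8) -> bool:
    """Nudge afocal optical system 126 along the optical axis (hypothetical
    step size and units) until the captured image stops getting sharper, then
    step back; an embodiment could instead prompt the user via display 155 to
    move the finger."""
    best = sharpness(capture_image())
    for _ in range(max_steps):
        move_lens(step)
        score = sharpness(capture_image())
        if score <= best:
            move_lens(-step)          # went past the sharpest point: back up
            break
        best = score
    return best >= FOCUS_THRESHOLD
```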
Controller 150 may include a processor 165 for processing machine-readable instructions, such as processor-readable (e.g., computer-readable) instructions. These machine-readable instructions may be stored in a memory 167, such as a non-transitory computer-usable medium, and may be in the form of software, firmware, hardware, or a combination thereof. The machine-readable instructions may configure processor 165 to allow controller 150 to cause fingerprinting system 100 to perform the methods and functions disclosed herein. In other words, the machine-readable instructions configure controller 150 to cause fingerprinting system 100 to perform the methods and functions disclosed herein.
In a hardware solution, the machine-readable instructions may be hard coded as part of processor 165, e.g., an application-specific integrated circuit (ASIC) chip. In a software or firmware solution, the instructions may be stored for retrieval by the processor 165. Some additional examples of non-transitory computer-usable media may include static or dynamic random access memory (SRAM or DRAM), read-only memory (ROM), electrically erasable programmable ROM (EEPROM or flash memory), magnetic media and optical media, whether permanent or removable. Some consumer-oriented computer applications are software solutions provided to the user in the form of downloads, e.g., from the Internet, or removable computer-usable non-transitory media, such as a compact disc read-only memory (CD-ROM) or digital video disc (DVD).
Controller 150 may include storage device 169, such as a hard drive, removable flash memory, etc. Storage device 169 may be configured to store the fingerprint database that contains the fingerprints that are compared to the captured fingerprints. Storage device 169 may be further configured to store the video database that contains the video data that are compared to the video data captured by camera 129. Processor 165 may be coupled to memory 167 and storage 169 over a bus 170.
A human-machine interface 175 may be coupled to controller 150. Interface 175 may be configured to interface with a number of input devices, such as a keyboard and/or pointing device, including, for example, a mouse. Interface 175 may be configured to interface with display 155 that may include a touchscreen that may function as an input device.
For some embodiments, a user may initiate the operation of fingerprinting system 100 via interface 175. That is, fingerprinting system 100 may perform at least some of the methods and functions, such as capturing fingerprints, disclosed herein in response to user inputs to interface 175.
Fingerprinting system 100 may instruct the user, via display 155, to position a finger in receiver 110, may capture a fingerprint from the finger, and may compare the fingerprint to a fingerprint in the fingerprint database. Fingerprinting system 100 may also capture the user's gestures using camera 129 and compare them to pre-recorded gestures in the video database.
Fingerprinting system 100 may also instruct the user to insert different fingers into receiver 110 in a certain order, for embodiments where fingerprinting system 100 is configured to detect fingerprints from different fingers in a certain order, may capture fingerprints from those fingers, and may compare those fingerprints to fingerprints in the fingerprint database. For example, the fingerprint database might store different fingerprints in a certain order for each of a plurality of persons.
Controller 150 may compare a first captured fingerprint captured from a first finger of a user to the first stored fingerprint for each person in the database. Then, in response to a match of the first fingerprints, controller 150 might instruct the user to insert a second finger different than the first into receiver 110 and cause image-capturing device 120 to capture a second fingerprint from the second finger. Controller 150 may then compare the second captured fingerprint of the user to the second stored fingerprint of the person in the database whose first fingerprint matched the first captured fingerprint of the user. Controller 150 may then verify the user's identity to be the person in the database whose first and second fingerprints respectively match the first and second captured fingerprints of the user. This may be repeated for any number of different fingers, e.g., up to eight for some embodiments or up to ten, including thumbs, for other embodiments.
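A minimal sketch of this ordered, multi-finger lookup, where `capture_finger`, `matches`, and the database layout (person mapped to an ordered list of stored fingerprints) are hypothetical stand-ins:

```python
def verify_ordered_fingers(capture_finger, database: dict, matches) -> str | None:
    """Use the first captured fingerprint to select a candidate person, then
    require each further finger, presented in order, to match that person's
    stored fingerprint at the same position in the stored order."""
    first = capture_finger(0)
    person = next((p for p, prints in database.items() if matches(first, prints[0])), None)
    if person is None:
        return None                          # first finger matched nobody
    for index in range(1, len(database[person])):
        captured = capture_finger(index)
        if not matches(captured, database[person][index]):
            return None                      # wrong finger or wrong order
    return person                            # identity verified
```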
For some embodiments, the afocal system 126 (
Controller 150 may detect the micro-features in target region 122 in addition to the fingerprint from captured images of target region 122 and may store these captured images of target region 122, e.g., in storage device 169 or on network server 156. Controller 150 may be configured to compare the micro-features detected from subsequent images to the micro-features in the stored images.
For some embodiments, controller 150 may be configured to obtain a baseline image of target region 122, e.g., including a fingerprint and any micro-features. Controller 150 might then keep a rolling log, e.g., in storage device 169, of changes to the baseline image, such as changes in the micro-features in the baseline image. For example, controller 150 might update stored image data of target region 122 each time an image is captured of target region 122.
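A hedged sketch of such temporal identity mapping, using a mean-absolute-difference change score as a stand-in for whatever micro-feature comparison an actual embodiment would use; the class name and log length are assumptions:

```python
from collections import deque
import numpy as np

LOG_LENGTH = 32   # hypothetical number of change records to retain

class TemporalIdentityMap:
    """Keep a baseline image of target region 122 plus a rolling log of how
    much each new capture differs from it, loosely modeling the tracking of
    micro-features such as cuts or swollen pores."""
    def __init__(self, baseline: np.ndarray):
        self.baseline = baseline.astype(float)
        self.changes = deque(maxlen=LOG_LENGTH)

    def record(self, image: np.ndarray) -> float:
        """Log the mean absolute difference from the baseline, then update the
        stored image data so the log follows gradual changes over time."""
        change = float(np.mean(np.abs(image.astype(float) - self.baseline)))
        self.changes.append(change)
        self.baseline = image.astype(float)
        return change
```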
Afocal optical system 126 may include a lens 310 (e.g., a refractive lens) optically coupled to a mirror 320 (e.g., a concave mirror). A turning mirror 325 may be on an opposite side of lens 310 from mirror 320. Lens 310 may be symmetrical about a symmetry axis 327 that passes through a center of lens 310 so that portions 335 and 337 on opposite sides of symmetry axis 327 in the cross-section of lens 310 shown in
For some embodiments, portion 335 of lens 310 may receive light 330 that is reflected from target region 122 of a finger. Light 330 may be refracted as it passes through a curved surface of portion 335 while exiting portion 335. The refracted light 330 is subsequently received at mirror 320. Mirror 320 may reflect light 330 onto a curved surface of portion 337 of lens 310.
Light 330 may be refracted as it passes through the curved surface of portion 337 so that the light passing through portion 337 is symmetrical with the light 330 passing in the opposite direction through portion 335. Passing light through portion 335 of lens 310 and back through portion 337 of lens 310 can result in substantially no net magnification (e.g., no net magnification for some embodiments) of target region 122, e.g., a property of some afocal systems. Note that the curved surfaces of portions 335 and 337 may be contiguous, thus forming a continuous curved surface of lens 310 for some embodiments.
An extension 338 of lens 310 may be aligned with target region 122. For example, extension 338 may be aligned with target region 122 as discussed above in conjunction with
After exiting portion 337 of lens 310, and thus afocal system 126, light 330 may be received at turning mirror 325 that may be separate from or integral with (as shown in
For example, sensor 127 may be smaller than the image of target region 122, and lens 366 may be configured to reduce the size of the image of target region 122 to the size of sensor 127. Alternatively, sensor 127 may be larger than the image of target region 122, and lens 365 may be configured to increase the size of the image of target region 122 to the size of sensor 127.
Sensor 127 may include a two-dimensional array of sensing elements, such as charge-coupled device (CCD) sensing elements or CMOS sensing elements, configured to sense light. For example, each sensing element may correspond to a pixel of the captured image of a target region 122. For some embodiments, sensor 127 may include up to or more than 8000 sensing elements per centimeter in each of the two dimensions, providing a resolution of up to or more than 8000 pixels/cm (e.g., up to or more than 8000 lines of resolution).
For some embodiments, controller 150 may be configured to cause image-capturing device 120 to capture images at a plurality of resolutions, e.g., different resolutions. For example, a high resolution, such as 8000 lines, may be captured as well as lower resolutions, such as 4000 lines, 2000 lines, etc.
The lower resolutions may be obtained through pixel binning on the sensor, or by down-sampling or resampling to intentionally lower resolutions. For example, a higher-resolution image may be obtained, and lower resolutions may be obtained therefrom by averaging over groups of pixels of the higher-resolution image. For some embodiments, higher resolutions enable the capture of the micro-features in target region 122. The higher resolutions may also provide higher ridge definition.
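As a short sketch of the averaging approach just described (the function name and trimming behavior are illustrative assumptions), a lower-resolution image can be produced from a higher-resolution capture by averaging non-overlapping blocks of pixels:

```python
import numpy as np

def bin_pixels(image: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor blocks of a higher-resolution
    capture; e.g., factor=2 turns an 8000-line image into a 4000-line one.
    Rows/columns that do not fill a complete block are trimmed."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor
    blocks = image[:h, :w].astype(float).reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```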
For other embodiments, image-capturing device 120 may include an afocal system similar to those used in afocal photography. For example, image-capturing device 120 may include an afocal system (e.g., a telescope/finderscope) optically coupled to (e.g., positioned in front of) a camera, such as a digital camera, and may be directed at target region 122. In such embodiments, the power/magnification of the telescope/finderscope is used to increase the operating/object distance.
A finger is received against frame 400 such that target region 122 is aligned with an opening 410 in frame 400. Opening 410 may be pre-aligned with afocal optical system 126 of image-capturing device 120, e.g., with extension 338. Note that when a finger is placed against frame 400, target region 122 is exposed by opening 410 and is not in direct physical contact with any solid surface. Although frame 400 is shown to have a circular shape, frame 400 may have a square or rectangular shape or any other polygonal shape.
For some embodiments, a sign may be placed on fingerprinting system 100 to indicate how a finger is to be placed against frame 400 so that target region 122 is exposed and is properly aligned with afocal optical system 126. Alternatively, controller 150 may cause display 155 to indicate how a finger is to be placed against frame 400 so that target region 122 is exposed and is properly aligned with afocal optical system 126.
During operation, light beams 135 pass through opening 410 and illuminate target region 122. Target region 122 may then reflect the light from beams 135 through opening 410 and into image-capturing device 120 through afocal optical system 126.
For some embodiments, frame 400 may be configured to move to bring target region 122 into focus. For example, controller 150 may determine whether target region 122 is in focus, as discussed above in conjunction with
Although specific embodiments have been illustrated and described herein, it is manifestly intended that the scope of the claimed subject matter be limited only by the following claims and equivalents thereof.
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/US2012/033174 | 4/12/2012 | WO | 00 | 6/12/2014