This disclosure relates generally to systems in which imagery is acquired primarily to determine or verify the identity of a person using a biometric recognition system, and more specifically to systems in which there is a need to detect the presence of a live, human eye in the imagery. The biometric used for recognition may be the iris, for example.
Like a fingerprint, an iris can be used to uniquely identify a person. A number of systems have been implemented for this purpose. For one example, U.S. Pat. No. 4,641,349, titled “Iris Recognition System,” issued to Flom et al. on Feb. 3, 1987, and U.S. Pat. No. 5,291,560, titled “Biometric Personal Identification Based on Iris Analysis,” issued to Daugman on Mar. 1, 1994, disclose systems for identifying a person based upon unique characteristics of the iris. A camera captures an image of the eye, the iris is segmented, and the iris portion is then normalized to compensate for pupil dilation. The normalized iris features are then compared with previously stored image information to determine whether the iris matches.
For another example, U.S. Pat. No. 5,572,596, titled “Automated Non-Invasive Iris Recognition System and Method,” issued to Wildes et al. on Nov. 5, 1996, discloses an alternate method of performing iris recognition using normalized correlation as a match measure. Further advantages and methods are set forth in detail in this patent.
For another example, U.S. Pat. No. 6,247,813, titled “Iris Identification System and Method of Identifying a Person through Iris Recognition,” issued to Kim et al. on Jun. 19, 2001, discloses another system used for iris recognition, which implements a unique identification method. The system divides a captured image of an iris into segments and applies a frequency transformation. Further details of this method are set forth in the patent.
For yet another example, U.S. Pat. No. 6,714,665, titled “Fully Automated Iris Recognition Systems Utilizing Wide and Narrow Fields of View,” issued to Hanna et al. on Mar. 30, 2004, discloses a system designed to automatically capture and identify a person's iris. This system uses a camera with a wide field of view to identify a person and a candidate iris. Once identified, a second camera with a narrow field of view is focused on the iris and an image captured for iris recognition. Further details of this method are set forth in the patent.
One problem faced by iris recognition systems involves the possibility of spoofing. Specifically, a life-sized, high-resolution photograph of a person may be presented to an iris recognition system. The iris recognition system may capture an image of this photograph and generate a positive identification. This type of spoofing presents an obvious security concern for the implementation of an iris recognition system. One method of addressing this problem has been to shine a light onto the eye and then increase or decrease the intensity of the light. A live, human eye will respond by dilating the pupil. This dilation is used to determine whether the iris presented for recognition is a live, human eye or merely a photograph, since the size of a pupil in a photograph obviously will not change in response to changes in the intensity of light. One disadvantage of this type of system involves the time required to obtain and process data as well as the irritation a person may feel in response to having a light of varying intensity shone into their eye.
U.S. Pat. No. 6,760,467, titled “Falsification Discrimination Method for Iris Recognition System,” issued to Min et al. on Jul. 6, 2004, attempts to address this problem. This system positions a pair of LEDs on opposite sides of a camera. These LEDs are individually lit and images are captured through a camera. These images are analyzed to determine whether light from the LEDs was reflected back in a manner consistent with a human eye. Because a flat photograph will not reflect light back in the same manner, this system aims to deter this type of spoofing. One disadvantage of this system involves the simplicity of the approach and the placement of the LEDs. With two LEDs positioned at a fixed, known location, the method can be defeated by appropriate placement of two small illuminators in an iris image. Also, while this system may operate more quickly than systems that dilate a pupil, it still requires time to capture at least two separate images: one with each of the LEDs individually lit. Further, a third image needs to be captured if the system requires both LEDs to be illuminated to capture imagery that is sufficiently illuminated for recognition.
The above-identified patents, as well as each of the patents and publications identified below, are each incorporated herein by reference in their entirety.
As mentioned above, it is well known that imagery of the iris can be reliably matched to previously recorded iris imagery in order to perform reliable verification or recognition. For example, see Daugman J (2003) “The importance of being random: Statistical principles of iris recognition.” Pattern Recognition, vol. 36, no. 2, pp 279-291. However, since the iris patterns are not easily recognizable to a human, it is impossible to demonstrate to a user who has been rejected from any iris recognition system the reason for the rejection. On the other hand, if a face image of the person whose iris has been used for recognition is acquired, it is easy to demonstrate the reason for rejection since face imagery can be easily interpreted by humans. Therefore, especially in unattended systems, there is a need for a highly secure method of associating an acquired face image to an acquired iris image, preferably (although not necessarily) with just one sensor in order to reduce cost and size of the solution.
This summary is provided solely to introduce a more detailed description of the invention as shown in the drawings and explained below.
Apparatus and methods for detecting a human iris use a computer screen on which an image is presented. The image is reflected off of a person's eye. The reflection is analyzed to determine whether changes to the reflected image are consistent with a human eye.
According to one aspect of the invention, a human eye is detected by presenting a first image on a computer screen that is oriented to face a user. At least one camera (and in some preferred embodiments at least two cameras) is positioned near the computer screen and oriented to face the user so that light emitted by the computer screen as the first image is reflected by the user and captured by the camera as a second image. The camera may be attached as part of the computer screen or separately mounted. A computer is operably coupled with the computer screen and the camera and the computer detects a human eye when at least a portion of the second image includes a representation of the first image on the computer screen reflected by a curved surface consistent with a human eye. The computer may be operated locally or operated remotely and connected through a network.
According to further aspects of the invention, the human eye is detected when the size of the representation of the first image included in the second image, relative to the size of the first image, is approximately equal to a human-eye magnification level, which is determined by dividing 3 to 6 millimeters by the distance from the computer screen to the user. For an implementation where the user is at least 100 millimeters from the computer screen, the representation of the first image is at least ten times smaller than the first image. For an implementation where the user is approximately 75 to 500 millimeters from the computer screen and the camera, the representation of the first image is approximately 12.5 to 166.7 times smaller than the first image. The determination can further require that the magnification at the center of the representation is smaller than the magnification in areas surrounding the center of the representation. Likewise, the determination can detect a human eye when the second image includes the representation of the first image on the computer screen reflected by an ellipsoidal surface with an eccentricity of approximately 0.5 and a radius of curvature at the apex of the surface of approximately 7.8 millimeters.
According to further aspects of the invention, the portion of the second image containing the representation of the first image is isolated. The comparison is made between the first image and the portion of the second image containing the human iris. In addition or alternatively, the determination can be made by searching the second image for a warped version of the first image. For example, a checkered pattern may be presented on the computer screen. The second image is then searched for a warped version of the checkered pattern.
According to further aspects of the invention, a third image is presented on the computer screen that is different than the first image. For example, the first image may be a checkered pattern and the third image may also be a checkered pattern but with a different arrangement of checkered squares. A fourth image is captured through the camera(s). The computer then aligns the second and fourth images. The computer then determines a difference image representing the difference between the second image and the fourth image. The portion of the difference image containing an eye, and thus containing a reflection of the first and the third images, is isolated. This portion may be found by identifying the portion of the difference image containing the greatest difference between the second and fourth images. A human eye is detected when the portion of the difference image is consistent with a reflection formed by a curved surface. For example, this can be detected by determining the size of the portion containing a reflection of the first and third images; when the ratio between the image size and the reflection size is greater than 10 to 1, a human eye is detected. This ratio can be calculated for a particular application by dividing the distance between the user and the computer screen by approximately 3 to 6 millimeters, where the camera is at or near the computer screen.
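For illustration only, the following minimal Python sketch (using numpy) shows one way such a difference-image and size-ratio test could be organized. The function name, the calibration inputs, and the 0.5 threshold on the difference image are assumptions made for this example rather than features of the described method.

```python
import numpy as np

def eye_reflection_ratio_check(img_a, img_b, screen_height_mm,
                               pixels_per_mm_at_eye, distance_mm):
    """Locate the region that changes most between two frames captured while
    different screen patterns were displayed, then test whether the size of
    that reflected pattern is consistent with a corneal reflection."""
    diff = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    mask = diff > 0.5 * diff.max()              # keep only the strongest changes
    ys, xs = np.nonzero(mask)
    reflection_mm = (ys.max() - ys.min() + 1) / pixels_per_mm_at_eye
    ratio = screen_height_mm / reflection_mm    # how much smaller the reflection is

    # Expected reduction for a cornea: distance / (3..6 mm); a flat photograph
    # reflects the pattern at roughly its original scale (ratio near 1).
    low, high = distance_mm / 6.0, distance_mm / 3.0
    return low <= ratio <= high
```

As a worked example under these assumptions, a 5 mm tall reflection of a 200 mm tall screen pattern viewed from 150 mm gives a ratio of 40, which falls within the expected range of 25 to 50.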
According to still further aspects of the invention, a skin area is found in the second image and a determination is made as to whether the reflection of light from the skin area is consistent with human skin.
According to another aspect of the invention, a human eye is detected by presenting a first image on a computer screen positioned in front of a user. A first reflection of the first image off of the user is captured through a camera. A second image is then presented on the computer screen positioned in front of the user. The camera captures a second reflection of the second image off of the user. The first and second images can be, for example, a checkered pattern of colors where the second image has a different or inverted arrangement. A computer compares the first reflection of the first image with the second reflection of the second image to determine whether the first reflection and the second reflection were formed by a curved surface consistent with a human eye. This comparison can be made, for example, by aligning the first reflection and the second reflection and then calculating a difference between them to provide a difference image. The portion of the difference image containing a difference between a reflection of the first image and a reflection of the second image is identified. The size of this portion is determined. A human eye is detected when the ratio of the size of this portion to the size of the first and second images is approximately equal to a human-eye magnification level. When the camera is located at or near the computer screen, the human-eye magnification level is determined by dividing approximately 3 to 6 millimeters by the distance from the computer screen to the user.
According to another aspect of the invention, a human eye is detected by obtaining a first image of a user positioned in front of a computer screen from a first perspective and obtaining a second image of the user positioned in front of the computer screen from a second perspective. A computer identifies a first portion of the first image and a second portion of the second image containing a representation of a human eye. The computer detects a human eye when the first portion of the first image differs from the second portion of the second image. For example, the computer may detect changes in specularity consistent with a human eye. For another example, the computer may align the first image with the second image and detect an area of residual misalignment. In this case, a human eye is detected if this area of residual misalignment exceeds a predetermined threshold.
According to further aspects of the invention, the first perspective is obtained by presenting a first graphic on the computer screen at a first location and instructing the user to view the first graphic. The second perspective is obtained by presenting a second graphic on the computer screen at a second location, different than the first, and instructing the user to view the second graphic.
According to another aspect of the invention, a human eye is detected by presenting one or more illuminators oriented to face a user. At least one camera is positioned proximate the illuminators. The camera(s) is oriented to face the user so that light emitted by the illuminators is reflected by the user and captured by the camera(s) as a first image. The camera(s) also obtain a second image at a different time than the first image. A computer detects a first position of a reflection in the first image and a second position of a reflection in the second image. The computer normalizes any positional change of the user in the first image and the second image based upon the first position and the second position. This normalizing includes compensating for motion during the time between the first image and the second image by using at least a translation motion model to detect residual motion of the position of the reflection. A human eye is detected when a change between the first image and the second image is consistent with reflection by a curved surface consistent with that of a human eye.
In another aspect of the invention, the invention includes a method of biometric recognition that associates face and iris imagery so that it is known that the face and iris images are derived from the same person. The methodology allows face acquisition (or recognition) and iris recognition to be associated together with high confidence using only consumer-level image acquisition devices.
In general, the inventive method of biometric recognition that associates face and iris imagery proceeds as follows. Multiple images of the face and iris of an individual are acquired, and it is determined whether the multiple images form an expected sequence of images. If the multiple images are determined to form an expected sequence, the face and iris images are associated together. If the face and iris images are associated together, at least one of the iris images is compared to a stored iris image in a database. Preferably, the iris image comparison is performed automatically by a computer. Additionally or in the alternative, if the face and iris images are associated together, at least one of the face images is compared to a stored face image in a database. Preferably, the face image comparison is performed manually by a human.
Preferably, the acquiring of both face and iris images is performed by a single sensing device. That single sensing device is preferably a camera that takes multiple images of a person's face. Optionally, a midpoint of the camera's dynamic range is changed while taking the multiple images of the person's face. In addition or in the alternative, the position of the user relative to the camera is changed while taking the multiple images of the person's face. In addition or in the alternative, the zoom of the camera is changed while taking the multiple images of the person's face. Preferably, the acquiring of images occurs at a frame rate of at least 0.5 Hz.
To prevent fraudulent usage of the system (e.g., a person inserting a photo of someone else's iris into the field of view), at least one imaging parameter is determined from the multiple images acquired, and the at least one imaging parameter determined from the multiple images is compared to at least one predetermined expected imaging parameter. If the at least one imaging parameter determined from the multiple images is significantly different from the at least one predetermined expected imaging parameter, then it is determined that the multiple images do not form an expected sequence. Determining the at least one imaging parameter may include at least one of determining whether the accumulated motion vectors of the multiple images are consistent with an expected set of motion vectors, or ensuring that the iris remains in the field of view of all of the multiple images. This determination preferably takes place at substantially the same time as the acquiring step. If inconsistent accumulated motion vectors are detected, or if the iris is not in the field of view of all of the multiple images, then an error message is generated and the acquisition of images ceases and is optionally reset.
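As an illustration only, a minimal Python sketch of such an expected-sequence test is shown below; the threshold value and the input conventions (per-frame motion vectors and per-frame iris bounding boxes) are assumptions for the example, not requirements of the method.

```python
import numpy as np

def sequence_is_expected(frame_motions, iris_boxes, frame_shape,
                         max_total_shift_px=200.0):
    """Check i) that accumulated motion stays within an expected sweep and
    ii) that the iris remains inside the field of view in every frame.

    frame_motions  list of per-frame (dx, dy) motion vectors
    iris_boxes     list of (x0, y0, x1, y1) iris detections, or None if lost
    frame_shape    (height, width) of the acquired frames
    """
    accumulated = np.cumsum(np.asarray(frame_motions, dtype=float), axis=0)
    if np.any(np.linalg.norm(accumulated, axis=1) > max_total_shift_px):
        return False            # motion inconsistent with the expected sequence

    h, w = frame_shape
    for box in iris_boxes:
        if box is None:
            return False        # iris left the field of view
        x0, y0, x1, y1 = box
        if x0 < 0 or y0 < 0 or x1 > w or y1 > h:
            return False
    return True
```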
Because the face and the iris have very different reflectivity properties, the imaging device that captures both face and iris images must be adjusted accordingly. As such, preferably, the sensitivity of the camera is altered between a first more sensitive setting for acquiring iris images and a second less sensitive setting for acquiring face images. For example, the altering of the sensitivity may include alternating back and forth between the first and second settings during the acquiring step. This alternating step may be performed for every image so that every other image is acquired under substantially the same first or second setting. Whatever the timing of the altering of the sensitivity of the camera may be, the altering may be accomplished by at least one of the following: adjusting the gain settings of the camera; adjusting the exposure time; or adjusting the illuminator brightness. Preferably, the first more sensitive setting is substantially in a range of 1 to 8 times more sensitive than the second less sensitive setting.
Preferably, the acquiring step of the inventive method is performed until at least one face image suitable for human recognition is acquired and at least one iris image suitable for computer recognition is acquired. The acquisition of the at least one suitable face image is preferably required to occur within a predetermined amount of time of the acquisition of the at least one suitable iris image, either before or afterwards.
More generally, the inventive method of biometric recognition includes the steps of acquiring at least one non-iris image suitable for human recognition, and acquiring at least one iris image suitable for computer recognition within a predetermined period of time from the non-iris image acquiring step to ensure that both suitable images are from the same person. The non-iris image includes at least one of a body image, a face image, an identification code image, or a location image.
Embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described herein. It will be apparent, however, that embodiments of the invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the description of embodiments of the invention. It should also be noted that these drawings are merely exemplary in nature and in no way serve to limit the scope of the invention, which is defined by the claims appearing hereinbelow.
In iris recognition applications, the iris is imaged behind the transparent corneal surface, which has a particular convex shape. Light is also typically reflected off the cornea itself and back into the imager. In addition, the retinal surface is imaged through the pupil, although it typically appears dark due to the relatively small amount of light that is returned to the imager from it. In order to determine whether a detected iris is live, parameters of the geometrical and/or photometric relationships and/or properties of the iris, retina and cornea are determined. The reflective properties of a human eye are detailed at length in “The World in an Eye,” by Ko Nishino and Shree K. Nayar, in IEEE Conference on Pattern Recognition, Vol. 1, pp. 444-451, June 2004, which is incorporated by reference in its entirety. In the present invention, using these reflective parameters, a determination is made as to whether an eye is live or not. Following the methods disclosed below, various components and their configuration in an iris recognition system can be varied in order to optimize performance for specific applications where it is easier to modify some configuration parameters compared to others.
More specifically, preferred techniques are discussed for determining whether an image detected through a camera is a live, human eye or a false representation such as a photograph. One or more images are presented on a computer screen positioned in front of a user. To deter attempts at spoofing, the image used for this determination may vary. Images presented on the computer screen may include a solid color, a regular or warped checkered pattern, random noise, etc. In addition, a number of different images may be presented in quick succession so that a person is unable to tell which image is being used for this determination, and is unable to predict which image will be displayed at which time. One or more cameras are positioned near the computer. The cameras are positioned to face a person in front of the computer screen. Due to the relatively sharp curvature of the human eye and particularly the cornea, the image projected on the computer screen will be reflected back and captured by the cameras.
The captured image may then be analyzed to determine whether it is consistent with an image reflected by a human eye. A number of methods may be used for this purpose. For example, the image reflected by the cornea and captured by the camera will appear substantially smaller than the image presented on the computer screen. A threshold level of magnification is set based upon the distance to the person's eye and the average radius of curvature for a human eye. If the captured image contains a reflection of the image presented on the computer screen and it is consistent in size with the expected size of this reflection, a human eye will be detected. This deters spoofing with a photograph because the flat surface of a photograph will not provide the substantial reduction in size caused by a reflection off the surface of the cornea. A number of other methods and variations can be used to make this same determination. These are explained in further detail below.
A human eye is identified by capturing image data using a particular configuration of a set of components such as those shown in
In the first method, imagery is captured using at least two different geometrical or photometric configurations of components. The captured image data (or features derived from the data) acquired using one configuration is then compared to the captured image data or derived features acquired using the second or further configurations. The computer calculates a set of change parameters that characterize the difference between the captured image data. The set of change parameters is compared with those change parameters that are predicted using knowledge of the expected change in geometrical or photometric configuration of the components. If the measured change parameters are different from the expected change parameters, then the geometric or photometric configuration of the corneal and iris or retinal surfaces are not as expected, for example the iris and cornea may appear to lie on the same surface. In this case it can be inferred that the iris is not live. Similarly, if the corneal surface is not consistent with a partially spherical surface, then again it is known that an iris is not live.
In another preferred method, imagery is captured using one geometric or photometric configuration of components. The captured image data (or features derived from the data) is compared with data that is predicted using absolute knowledge of the expected geometrical or photometric configuration of the components. For example, for a given image projected on the screen of the computer, a particular illumination pattern would be expected to appear on the surface of the cornea. While these two methods are described separately, they can be combined.
Introduction:
With reference to
The process is further shown in
In another method of measuring 3D structure, a full model of the scene is recovered without a 3D planar assumption. This type of method is disclosed in U.S. Pat. No. 5,259,040, titled “Method for determining sensor motion and scene structure and image processing system thereof,” which is incorporated herein by reference in its entirety. Notwithstanding that a specularity is not a real structure but an image artifact, its position in the image changes with viewpoint and therefore is detected by measuring the residual misalignment. If there is significant misalignment, as measured by thresholding the residual misalignment, then there is an indication that a 3D structure is present. Methods for thresholding residual misalignments are well known, and an example is given in “Recovering Motion Fields: An Evaluation of Eight Optical Flow Algorithms,” by B. Galvin, B. McCane, K. Novins, D. Mason, and S. Mills, Proceedings of the British Machine Vision Conference (BMVC), 1998, which is incorporated herein by reference in its entirety. Given the corneal curvature, there is not only a residual misalignment, but the magnitude and distribution of the residual misalignment across the image are consistent with the 3D structure of the cornea. An example of modeling the reflected image off a curved surface is given in “Omnidirectional Vision,” by Shree Nayar, British Machine Vision Conference, 1998, which is incorporated herein by reference in its entirety. Another example of modeling the reflected image off the cornea is “The World in an Eye,” by Ko Nishino and Shree K. Nayar, in IEEE Conference on Pattern Recognition, Vol. 1, pp. 444-451, June 2004, which is incorporated herein by reference in its entirety. In this latter case a camera observes imagery reflected off the cornea that is modeled as an ellipsoid. It is shown how the deformation introduced by the ellipsoid can be removed in order to provide a standardized perspective image. This standardized perspective image can then be processed using standard 3D structure recovery algorithms, as described earlier in this specification. Parameters for the shape of the cornea are well known. For example, the Gullstrand-LeGrand eye model notes that the radius of the cornea is approximately 6.5 mm-7.8 mm. In another example, in “Adler's Physiology of the Eye: Clinical Application,” Kaufman and Alm editors, published by Mosby, 2003, the radius of curvature at the apex of the cornea is noted to be approximately 7.8 mm and the eccentricity of the ellipsoid is approximately 0.5. The same model that removes the deformation introduced by the corneal surface can be used in reverse in order to introduce the expected deformation into a standard geometrical pattern (such as a checkerboard) that can be presented on the screen. When this deformed image is reflected off the cornea, it is substantially non-deformed so that the image acquired by the camera is simply the standard geometrical pattern. This simplifies the image processing methods that are required for detecting the patterns in the acquired imagery.
In another example implementation, the illumination screen or device can be located close to one of the cameras. The reflection off the retinal surface appears brighter in the camera located closer to the illumination due to the semi-mirrored surface of the retina, and this also indicates whether an eye has the appropriate geometric and photometric properties. This approach takes advantage of the “red-eye effect,” whereby a light source is reflected off the retina and directly into the camera lens. If a second light source is placed at a more obtuse angle to the eye and camera, then less light will be reflected off the retina, although a similar quantity of light will be reflected off the face and other surfaces of the scene that scatter light in all directions (such a surface is Lambertian). Lambertian reflectance is described in Horn, “Robot Vision,” MIT Press, pp. 214-315, which is incorporated herein by reference in its entirety.
Further methods that exploit configuration changes are described below. The methods are separated into two steps: (1) illumination control and image acquisition; and (2) measuring deformation or change in characteristics. Further examples of these two steps are now described.
Illumination Control and Image Acquisition:
In steps P and Q in
Another example of the sequential method is to create a projected image that varies over time, such as a video sequence. The video sequence may comprise a checkerboard pattern. For example, a projection may show a 4×4 arrangement of black or white squares on the screen in a random binary arrangement. The squares may be pre-deformed as described above so that the reflected image off the cornea is close to a perfect checkerboard pattern.
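For illustration only, the following minimal Python sketch (using numpy) generates one such random binary checkerboard frame; the function name, the square size in pixels, and the omission of the pre-deformation step are assumptions made for this example.

```python
import numpy as np

def random_checkerboard(squares=4, square_px=120, seed=None):
    """Build a random 4x4 black/white pattern of the kind described above;
    pre-deformation with a corneal model would be applied afterwards and is
    not shown here."""
    rng = np.random.default_rng(seed)
    cells = rng.integers(0, 2, size=(squares, squares), dtype=np.uint8) * 255
    # Expand each cell to a square_px x square_px block of pixels.
    return np.kron(cells, np.ones((square_px, square_px), dtype=np.uint8))

# frame = random_checkerboard(seed=1)  # e.g. generate a new frame per video tick
```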
Another example of a sequential method takes advantage of any combined motion of the cornea A and iris or retinal surfaces B. As the candidate cornea and iris or retina move through 3D space over time, different images are acquired at different time periods and due to the self-motion of the surfaces, the geometry between the said components changes. An example of a simultaneous method is to keep the image or light source fixed, but to have two cameras that acquire images from slightly different locations.
Measuring Deformation or Change in Characteristics:
The images captured from steps Q and S in
The size of the pattern can also be predicted from the expected geometrical shape of the cornea. The detected size of the reflection can then be measured and used to determine whether the size is consistent with that of a human cornea. For example, it is known that the focal length of a convex mirror reflector is half the radius of curvature. Using the standard lens equation, 1/f = 1/d0 + 1/d1, where f is the focal length, d0 is the distance of the screen from the cornea and d1 is the distance of the reflected virtual image from the cornea. It is known from, for example, “Adler's Physiology of the Eye: Clinical Application,” Kaufman and Alm editors, published by Mosby, 2003, that the radius of curvature at the apex of the cornea is approximately 7.8 mm and the eccentricity of the ellipsoid shape of the cornea is approximately 0.5. The focal length of the corneal reflective surface at the apex is therefore half this radius of curvature: approximately 3.9 mm. Using the ellipsoidal model, the radius of curvature of the cornea at a radial distance of 6 mm from the apex of the cornea can be computed to be approximately 9.6 mm. The focal length of the corneal reflective surface in this region is therefore approximately 4.8 mm. If the cornea is situated approximately 150 mm from the computer screen, then from the standard lens equation above, d1 can be computed to be 4.0 mm at the apex, and 4.96 mm at a radial distance of 6 mm from the apex of the cornea. The magnification is computed to be d1/d0 = 4.0/150 = 1/37.46 at the apex of the cornea, and 4.96/150 = 1/30.25 at a radial distance of 6 mm from the apex of the cornea. This means that the cornea has the effect of reducing the size of the graphic on the computer screen by a factor of 37.46 to 30.25 in this case, over different regions of the cornea, whereas the magnification expected if the reflective surface is flat is 1. If the measured reduction factor of the detected graphic is significantly larger or smaller than the expected factors of 37.46 to 30.25, then the curvature of the cornea is inconsistent with that of a live person.
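For illustration only, the worked example above can be reproduced with the short Python sketch below; the function name is arbitrary and the radii and distance are the example values from the text.

```python
def corneal_magnification(radius_mm, screen_distance_mm):
    """Magnification of a convex corneal mirror: f = R/2 and the mirror
    equation 1/f = 1/d0 + 1/d1 give the virtual-image distance d1, so the
    magnification is d1/d0."""
    f = radius_mm / 2.0
    d0 = screen_distance_mm
    d1 = 1.0 / (1.0 / f - 1.0 / d0)   # rearranged from 1/f = 1/d0 + 1/d1
    return d1 / d0

m_apex = corneal_magnification(7.8, 150.0)   # apex of the cornea
m_edge = corneal_magnification(9.6, 150.0)   # 6 mm radially from the apex
print(1.0 / m_apex, 1.0 / m_edge)            # reduction factors of about 37.46 and 30.25
print(m_apex / m_edge)                       # about 0.81, independent of the distance
```

The final printed ratio corresponds to the distance-independent magnification ratio discussed below.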
If the local radius of curvature of the cornea is substantially less than the distance of the cornea to the computer screen, then the magnification can be simplified to be R/(2×d0), where d0 is the distance from the cornea to the computer screen and R is the local radius of curvature of the cornea. Due to human variation, the radius of curvature of local regions of a cornea may lie within the bounds of 6 to 12 mm. The magnification therefore may lie in the range of 3/d0 to 6/d0.
In another example, if d0 lies within the range of 75 to 500 mm, then using the parameters and formula above, it is expected that the magnification is 1/12.5 to 1/166.7.
The distance d0 may be unknown; however, the ratio of the magnification at the apex of the cornea to the magnification elsewhere in the cornea is independent of the distance d0. For example, using the parameters above, the magnification ratio between the apex and a point 6 mm radially from the apex is (1/37.46)/(1/30.25) = 0.807. At a distance of 4 mm radially from the apex, the expected magnification ratio is computed to be 0.909. The iris is approximately 11 mm in diameter, and therefore localization of the iris/sclera boundary can be used to identify the approximate location of any radial position of the cornea with respect to the apex.
In another example, consider the change in configuration caused by the movement of a person with respect to a camera and one or more illuminators or computer screens. The position of the reflection of the illuminators, or the detected shape and magnification of the computer screen, will change as the person moves. In one preferred implementation to detect this change, a sequence of images is acquired and image alignment is performed using a hierarchical, iterative method such as described by Bergen et al., “Hierarchical Model-Based Motion Estimation,” European Conference on Computer Vision, 1993, which is incorporated herein by reference in its entirety. For example, a translation and zoom model can be applied between the warped images W(Q) and the original images O(R). In this case the motion of the user will be stabilized, and, for example, the image of the iris may be aligned throughout the sequence. Any residual motion is an indication of a change in the position of the reflection of the illuminators, or of a change in the shape and magnification of the computer screen, due to an eye consistent with that of a live person. For example, one preferred method of detecting the residual motion or change is shown in R. Kumar, P. Anandan, and K. J. Hanna, “Direct Recovery of Shape from Multiple Views: a Parallax Based Approach,” Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 1, pp. 685-688, 1994, which is incorporated by reference in its entirety. In an alternate method, a nonparametric flow model as described in Bergen et al., “Hierarchical Model-Based Motion Estimation,” European Conference on Computer Vision, 1993, can be applied to detect residual motion.
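For illustration only, the following minimal Python sketch (using numpy) stands in for the alignment-and-residual step: it estimates a pure translation between two grayscale frames by phase correlation, a simpler substitute for the hierarchical method cited above, and then measures what remains after alignment. The function names and the translation-only motion model are assumptions made for this example.

```python
import numpy as np

def estimate_translation(ref, img):
    """Estimate an integer-pixel (dy, dx) translation by phase correlation."""
    spectrum = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(spectrum / (np.abs(spectrum) + 1e-9))
    peak = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape), dtype=int)
    shape = np.array(corr.shape)
    peak[peak > shape // 2] -= shape[peak > shape // 2]   # wrap to signed shifts
    return peak

def residual_motion_energy(ref, img):
    """Align img to ref with the estimated translation, then measure what is
    left over; reflections off a curved cornea leave residual motion."""
    dy, dx = estimate_translation(ref, img)
    aligned = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return np.abs(ref.astype(float) - aligned.astype(float)).mean()
```

Under these assumptions, a residual energy above a chosen threshold indicates motion of the specular reflection relative to the stabilized imagery.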
In another example, consider the presentation of illumination of a particular wavelength, the recording of an image, and then the presentation of illumination with a different wavelength and the recording of one or more additional images. Depending on the photometric properties of the material, the ratio of the digitized intensities between the images can be computed and compared to an expected ratio that has been previously documented for that material. The response of iris tissue has a unique photometric signature which can indicate whether the iris is live or not. Equally, the response of skin tissue has a unique photometric signature which can indicate whether the skin is live or not. This method can be implemented by acquiring two or more images with the computer screen projecting different wavelengths of light, such as red, green, and blue. These colors can be projected in a checkerboard or other pattern. For example, a first image may contain a red checkerboard pattern, and a second image may contain a blue checkerboard pattern. The methods described above can then be used to align the images together, and to detect the location of the eye or eyes by detecting the patterns reflected off the cornea. The iris and the sclera (the white area around the iris) are then detected. Many methods are known for detecting the iris and the sclera. For example, a Hough transform can be used to detect the circular contours of the pupil/iris and iris/sclera boundaries as explained by R. Wildes, “Iris Recognition: An Emerging Biometric Technology,” Proc. IEEE, 85(9): 1348-1363, September 1997, which is incorporated herein by reference in its entirety. Intensities of the iris and sclera can then be sampled and used to measure the liveness of the eye. These ratios can be computed in several ways. In one preferred method, the ratio of the iris reflectance and the scleral reflectance is computed. This ratio is substantially independent of the brightness of the original illumination. The iris/scleral ratio is then computed on the other aligned images. This process can be repeated by measuring the scleral/skin ratio. The skin region can be detected by measuring intensities directly under the detected eye position, for example. Ratios can also be computed directly between corresponding aligned image regions captured under different illumination wavelengths. These ratios are then compared to pre-stored ratios that have been measured on a range of individuals. One method of comparison is to normalize the set of ratios such that the sum of the magnitudes of the ratios is unity. The difference between each normalized ratio and the pre-stored value is then computed. If one or more of the normalized ratios is different from the pre-stored ratio by more than a pre-defined threshold ratio, then the measured intensity values are inconsistent with those of a real eye.
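For illustration only, the normalization-and-comparison step at the end of this example could be organized as in the Python sketch below; the region names, reference values, and tolerance are placeholders standing in for pre-stored measurements and are not values given in the text.

```python
import numpy as np

def liveness_from_ratios(measured_ratios, reference_ratios, tolerance=0.15):
    """Normalize measured and pre-stored ratio sets so their magnitudes sum to
    one, then flag the sample as inconsistent if any normalized ratio differs
    from its pre-stored counterpart by more than the tolerance."""
    keys = sorted(reference_ratios)
    measured = np.array([measured_ratios[k] for k in keys], dtype=float)
    expected = np.array([reference_ratios[k] for k in keys], dtype=float)
    measured /= np.abs(measured).sum()
    expected /= np.abs(expected).sum()
    return bool(np.all(np.abs(measured - expected) <= tolerance))

# Example with placeholder numbers:
# liveness_from_ratios({"iris/sclera": 0.40, "sclera/skin": 1.25},
#                      {"iris/sclera": 0.42, "sclera/skin": 1.30})
```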
In yet another example, the user may be asked to fixate on two or more different locations on the computer screen while a single camera records two or more images. The specular reflection off the cornea will remain substantially in the same place since the cornea is substantially circular, but the iris will appear to move from side to side in the imagery. In order to detect this phenomenon, the alignment methods described above can be used to align the images acquired when the user is looking in the first and second directions. The high-frequency filtering methods and the image differencing method described above can then be used to identify the eye regions. The alignment process can be repeated solely in the eye regions in order to align the iris imagery. The residual misalignment of the specular image can then be detected using the methods described earlier.
Introduction:
In the previous section, images were captured using at least two different geometrical or photometric configurations of components. The captured image data (or features derived from the data) acquired using each configuration were compared to each other and a set of change parameters between the captured image data were computed. The set of change parameters were then compared with those change parameters that were predicted using knowledge of the expected change in geometrical or photometric configuration of the components. In a second method, imagery is captured using one geometric or photometric configuration of components. The captured image data (or features derived from the data) is compared with data that is predicted using absolute knowledge of the expected geometrical or photometric configuration of the components. Both the first and second methods can optionally be combined.
To illustrate an example of the second method, consider that the shape of the cornea results in a particular reflection onto the camera. For example, an image projected on the screen of computer L may be rectangular, but if the candidate corneal surface is convex then the image captured by imager C comprises a particular non-rectangular shape that can be predicted from, for example, the ellipsoidal model described earlier in this specification. This particular reflected shape can be measured using methods described below, and can be used to determine whether the cornea has a particular shape or not.
To illustrate the combination of the first and second methods, consider the previous example but also consider that the projected image L changes over time. Both the absolute comparison of the reflected image with the expected absolute reflection as well as the change over time in the reflected image compared to the expected change over time can be performed to validate the geometrical relationship and/or photometric relationship between or within the corneal and iris or retinal surfaces.
Optimizing Performance:
As set forth above, the number and configuration of the various system components that include (I, L, C, A, B, (X, Y, T)) can vary widely, and the methods are still capable of determining the parameters of the geometrical and/or photometric relationship between or within either surface A, B, which are the corneal and iris or retinal surfaces. In order to optimize the particular configuration of the various system components, many factors need to be included in the optimization, for example: cost, size, and acquisition time. Depending on these various factors, an optimal solution can be determined. For example, consider an application where only one camera C can be used, the candidate corneal surface A and iris or retinal surface B are fixed, and only a projected light source from a computer L can be used. Using the first method, variation in the configuration may be derived from the remaining configuration parameters (X, Y, T) and L. For example, the surfaces may move in space and time, and imagery captured. In another example, where the orientation and position of the eye (X, Y) are fixed, the projected image L can be varied, and imagery acquired by camera C. Note that variation in all parameters can be performed simultaneously and not just independently. All variations provide supporting evidence about the parametric relationship between or within the corneal and iris/retinal surfaces that is used to determine whether an iris is live or not. If any one of the measured variations does not equal the expected variation, then the iris is not live.
As mentioned above, it is well known that imagery of the iris can be reliably matched to previously recorded iris imagery in order to perform reliable verification or recognition. However since the iris patterns are not easily recognizable to a human, it is impossible to demonstrate to a user who has been rejected from any iris recognition system the reason for the rejection. On the other hand, if a face recognition system is used instead of an iris recognition system, it is easy to demonstrate the reason for rejection since face imagery can be easily interpreted by humans. However, automated face recognition systems are widely known to be much less reliable than iris recognition systems.
We propose a method whereby iris imagery is acquired and used for automatic iris matching, face imagery is acquired at least for the purposes of human inspection generally in the case of rejection, and where the face and iris imagery is acquired and processed such that it is known that the face and iris imagery were derived from the same person. We present a design methodology and identify particular system configurations, including a low-resolution single camera configuration capable of acquiring and processing both face and iris imagery so that one can confirm and corroborate the other, as well as give assurances to the user who cannot properly interpret an iris image. We first present a single-sensor approach.
Single-Sensor Approach:
Most methods for acquiring images of the face or iris use camera imagers. The simplest method for acquiring imagery of the face and iris with some evidence that the imagery is derived from the same person is to capture a single image of the face and iris from a single imager. However, in order to capture the face in the field of view, the number of pixels devoted to the iris will be quite small, and not typically sufficient for iris recognition. High resolution imagers can be used, but they are expensive and not as widely available as low-cost consumer cameras. Also, the albedo (reflectance) of the iris is typically very low compared to the face, and this means that the contrast of the iris is typically very small in cases where the full face is properly imaged within the dynamic range of the camera. It is generally much more difficult to perform reliable iris recognition using a low-contrast image of the iris. It is also more difficult to implement reliable anti-spoofing measures for iris recognition when the acquired data is low resolution or low contrast.
We propose a method whereby multiple images of the face and iris are collected, the images are processed, and a determination is made as to whether the face and iris images are part of an expected sequence of images. If a determination can be made that the images were collected as part of an expected sequence of images, then a determination can be made that the face and iris imagery are of the same person. The multiple images may be collected under different imaging conditions, for example: change in the midpoint of the camera's dynamic range; change in position of the user; and/or change in zoom of the camera.
For example,
Associating Face and Iris Imagery:
We now describe a method for associating face and iris imagery in a sequence.
It is important that the method track and perform alignment from the image at or close to the iris image used for biometric recognition to another image taken later or earlier in the sequence of images. Ideally, but not necessarily, image tracking would be from the actual image used for iris matching. However, using the very same image used for iris matching is not required. The key constraint is that the iris image at or near the matched iris image has to be close enough in time to prevent a user from suddenly switching the camera from one person to the next, or from inserting a picture of someone else's iris or face, without detection. If the frame rate were as low as 0.5 Hz, it is plausible that such a substitution could happen. A preferred time interval between frames is thus 2 seconds or less.
The image acquiring process of the method must include acquiring an iris image suitable for biometric recognition. The determination of what constitutes a suitable image can be made automatically by known methods. Once that suitable iris image has been determined to have been acquired, as mentioned above, tracking and alignment must be performed between at least that iris image (or a nearby image in the sequence) and another image, e.g., the image at the other end of the sequence where the iris image is at one end. The other image is described as being preferably an image of the user's face; however, it need not be so limited. The other image used for tracking and alignment could be an image of the whole body of a person, a place, a face, an ID number on a wall, or virtually anything that allows for confirmation of the user's iris image in a manner that is perceptible to the human eye. The selection of that other image can be accomplished in one or more of several ways. For example, it could be selected manually (e.g., by a button press while holding the device far away or at a target), and then the end of the sequence (where the iris imagery is acquired) is detected automatically. As another example, it could also be selected via an automatic face finding algorithm, or an automatic symbol detection algorithm. The selection can also be made using the zoom difference between a face image and an iris image, since if an iris image is selected, then taking an image at least 10 times zoomed out and in the same location will result in the face.
Regarding this last method, if an iris image is selected, one can be sure there is a face in the image without doing processor-intensive face finding if the zoom and position parameters of images in the sequence are examined. If the position hasn't moved by more than the field of view of the camera, and the zoom is a certain amount, then the face is surely in the field of view. Put another way, if there are N pixels across the iris when matched (the ISO standard for N is in the range of 100 to 200 pixels), and P pixels between the eyes in the face image are desired (also 100-200 pixels, per ISO standards), then we wait until the measured zoom difference is approximately 10, since the ratio of the typical eye separation to the typical iris diameter is about 10.
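For illustration only, the Python sketch below expresses this zoom-and-position check; the parameter names, the tolerance, and the exact bookkeeping are assumptions made for the example.

```python
def face_probably_in_view(iris_px, eye_separation_px_target,
                          measured_zoom_out, translation_px,
                          field_of_view_px, tolerance=0.25):
    """Given N pixels across the matched iris (iris_px) and a desired P pixels
    between the eyes (eye_separation_px_target), the required zoom-out is
    roughly 10 * N / P; if that zoom has occurred and the camera has not panned
    by more than its field of view, a face-scale image should be in view."""
    required_zoom = 10.0 * iris_px / eye_separation_px_target
    zoom_ok = abs(measured_zoom_out - required_zoom) <= tolerance * required_zoom
    position_ok = all(abs(t) <= f for t, f in zip(translation_px, field_of_view_px))
    return zoom_ok and position_ok

# e.g. face_probably_in_view(150, 150, 10.2, (40, 25), (640, 480)) -> True
```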
Example Implementation Methods:
There are many methods for performing steps M, E, C, D in
In addition, we can use the recovered model parameters to ensure that a face image has actually been acquired. For example, an iris finder algorithm may have located the iris precisely. The motion parameters from the model-fitting process defined above can be cascaded in order to predict whether a full face image is in fact visible in any part of the sequence by predicting the coverage of the camera on the person's face for every image with respect to the location of the iris. For example, we may measure a translation T and a zoom Z between an image containing the iris at location L and a second image.
We can then predict the face coverage on the second image using the parameters T, Z and L and the typical size of a person's head compared to their iris. For example, an iris is typically 1 cm in diameter and a person's head is typically 10 cm in diameter. A zoom factor of approximately 10 between the iris image and the second image will indicate that the second image is at least at the correct scale to capture an image of the face. The translation parameters can be inspected similarly. This inspection method can also be used to stop the acquisition process given an initial detection of the iris.
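For illustration only, the following Python sketch shows one way the cascaded translation T, zoom Z, and iris location L could be used to test whether a head-sized region fits in the second frame; the coordinate convention (zoom about the image origin) and the helper name are assumptions made for the example.

```python
def predict_face_coverage(iris_center_px, iris_diameter_px, zoom_out,
                          translation_px, frame_size_px, head_to_iris_ratio=10.0):
    """Map the iris location into the zoomed-out frame and check that a region
    of roughly ten iris diameters (the ~1 cm iris vs ~10 cm head example above)
    fits inside that frame."""
    cx = iris_center_px[0] / zoom_out + translation_px[0]
    cy = iris_center_px[1] / zoom_out + translation_px[1]
    head_px = head_to_iris_ratio * iris_diameter_px / zoom_out
    w, h = frame_size_px
    return (head_px / 2 <= cx <= w - head_px / 2 and
            head_px / 2 <= cy <= h - head_px / 2)
```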
Multi-Sensor Approach:
The single-sensor approach above can be extended to the use of multiple sensors. For example,
One imager may have higher resolution than the other. We now can perform image processing both within a single sequence and also between the two sequences. For example, if one imager is low resolution and the second imager is high resolution, then the parameters of motion recovered from each image sequence using the methods described above will be directly related; for example, if the imagery moves to the right in the low-resolution imager, then the imagery will move to the right at a faster speed in the second imager. If this does not occur, then the imagery being sent from each imaging device is likely not derived from the same person. This is important in non-supervised scenarios where video connections to the two sensors may be tampered with. In addition to comparison of motion parameters between sequences, images themselves can be compared between sequences. For example, if the approximate zoom factor between the high and low resolution cameras is known, then the high resolution image can be warped to the resolution of the low resolution image, and an image correlation can be performed to verify that the imagery from the two or more sensors is in fact derived from the same scene.
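For illustration only, the cross-sensor check could be sketched in Python as below; the block-average downsampling, the assumption that the high-resolution frame covers the zoomed low-resolution field of view with an integer zoom factor, and the 0.7 correlation threshold are simplifications made for the example rather than details given in the text.

```python
import numpy as np

def sequences_consistent(high_res, low_res, zoom_factor, threshold=0.7):
    """Downscale the high-resolution frame by the known (integer) zoom factor
    and compute a normalized correlation with the low-resolution frame; a high
    score indicates that both sensors are observing the same scene."""
    z = int(zoom_factor)
    h, w = low_res.shape[0] * z, low_res.shape[1] * z
    crop = high_res[:h, :w].astype(float)
    # Block-average downsampling stands in for a proper warp.
    small = crop.reshape(low_res.shape[0], z, low_res.shape[1], z).mean(axis=(1, 3))
    a = small - small.mean()
    b = low_res.astype(float) - low_res.astype(float).mean()
    corr = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum() + 1e-9)
    return corr > threshold
```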
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The summary, specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
This application is a continuation of, and claims priority to U.S. application Ser. No. 15/477,633 filed Apr. 3, 2017, entitled “Methods for Performing Biometric Recognition of a Human Eye and Corroboration of the Same”, which is a continuation of, and claims priority to U.S. application Ser. No. 14/712,460 filed May 14, 2015, entitled “Methods for Performing Biometric Recognition of a Human Eye and Corroboration of the Same”, which is a continuation of, and claims priority to: U.S. application Ser. No. 14/336,724 filed Jul. 21, 2014, entitled “Methods for Performing Biometric Recognition of a Human Eye and Corroboration of the Same”, which is a continuation of, and claims priority to: U.S. application Ser. No. 13/800,496, filed Mar. 13, 2013, entitled “Methods for Performing Biometric Recognition of a Human Eye and Corroboration of the Same”, issued as U.S. Pat. No. 8,818,053 on Aug. 26, 2014, which is a continuation of, and claims priority to: U.S. application Ser. No. 13/567,901, filed Aug. 6, 2012, entitled “Methods for Performing Biometric Recognition of a Human Eye and Corroboration of the Same”, issued as U.S. Pat. No. 8,798,330 on Aug. 5, 2014, which is a continuation of, and claims the benefits of and priority to: U.S. application Ser. No. 12/887,106 filed Sep. 21, 2010, entitled “Methods for Performing Biometric Recognition of a Human Eye and Corroboration of the Same”, issued as U.S. Pat. No. 8,260,008 on Sep. 4, 2012, which is a continuation-in-part of, and claims priority to: U.S. application Ser. No. 11/559,381 filed Nov. 13, 2006, entitled “Apparatus and Methods for Detecting the Presence of a Human Eye”, issued as U.S. Pat. No. 7,801,335 on Sep. 21, 2010, which claims priority to: U.S. Provisional Application No. 60/597,130 filed on Nov. 11, 2005, entitled “Measuring the Geometrical Relationship between Particular Surfaces of Objects”; U.S. Provisional Application No. 60/597,152 filed on Nov. 14, 2005, entitled “Measuring the Geometrical and Photometric Relationships between or within Particular Surfaces of Objects”; U.S. Application No. 60/597,231, filed on Nov. 17, 2005, entitled “Method for associating Face and Iris Imagery”; U.S. Provisional Application No. 60/597,289, filed on Nov. 21, 2005, entitled “Method for Reliable Iris Matching Using a Personal Computer and Web-Camera”; and U.S. Provisional Application No. 60/597,336 filed on Nov. 25, 2005, entitled “Methodology for Detecting Non-Live Irises in an Iris Recognition System”, each of which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
--- | --- | --- | --- |
4641349 | Flom et al. | Feb 1987 | A |
5259040 | Hanna | Nov 1993 | A |
5291560 | Daugman | Mar 1994 | A |
5474081 | Livingstone et al. | Dec 1995 | A |
5488675 | Hanna | Jan 1996 | A |
5572596 | Wildes et al. | Nov 1996 | A |
5581629 | Hanna et al. | Dec 1996 | A |
5613012 | Hoffman et al. | Mar 1997 | A |
5615277 | Hoffman | Mar 1997 | A |
5737439 | Lapsley et al. | Apr 1998 | A |
5748238 | Wakabayashi et al. | May 1998 | A |
5751836 | Wildes et al. | May 1998 | A |
5764789 | Pare et al. | Jun 1998 | A |
5802199 | Pare et al. | Sep 1998 | A |
5805719 | Pare et al. | Sep 1998 | A |
5838812 | Pare et al. | Nov 1998 | A |
5901238 | Matsushita | May 1999 | A |
5953440 | Zhang et al. | Sep 1999 | A |
5978494 | Zhang | Nov 1999 | A |
6021210 | Camus et al. | Feb 2000 | A |
6028949 | McKendall | Feb 2000 | A |
6055322 | Salganicoff et al. | Apr 2000 | A |
6064752 | Rozmus et al. | May 2000 | A |
6069967 | Rozmus et al. | May 2000 | A |
6070159 | Wilson et al. | May 2000 | A |
6088470 | Camus et al. | Jul 2000 | A |
6144754 | Okano et al. | Nov 2000 | A |
6192142 | Pare et al. | Feb 2001 | B1 |
6246751 | Bergl et al. | Jun 2001 | B1 |
6247813 | Kim et al. | Jun 2001 | B1 |
6252977 | Salganicoff et al. | Jun 2001 | B1 |
6289113 | McHugh et al. | Sep 2001 | B1 |
6366682 | Hoffman et al. | Apr 2002 | B1 |
6373968 | Okano et al. | Apr 2002 | B2 |
6377699 | Musgrave et al. | Apr 2002 | B1 |
6424727 | Musgrave et al. | Jul 2002 | B1 |
6447119 | Stewart et al. | Sep 2002 | B1 |
6483930 | Musgrave et al. | Nov 2002 | B1 |
6532298 | Cambier et al. | Mar 2003 | B1 |
6542624 | Oda | Apr 2003 | B1 |
6546121 | Oda | Apr 2003 | B1 |
6554705 | Cumbers | Apr 2003 | B1 |
6594376 | Hoffman et al. | Jul 2003 | B2 |
6594377 | Kim et al. | Jul 2003 | B1 |
6652099 | Chae et al. | Nov 2003 | B2 |
6700998 | Murata | Mar 2004 | B1 |
6714665 | Hanna | Mar 2004 | B1 |
6760467 | Min et al. | Jul 2004 | B1 |
6819219 | Bolle et al. | Nov 2004 | B1 |
6850631 | Oda et al. | Feb 2005 | B1 |
6917695 | Teng et al. | Jul 2005 | B2 |
6944318 | Takata et al. | Sep 2005 | B1 |
6950536 | Houvener | Sep 2005 | B2 |
6957770 | Robinson | Oct 2005 | B1 |
6980670 | Hoffman et al. | Dec 2005 | B1 |
6985608 | Hoffman et al. | Jan 2006 | B2 |
7007298 | Shinzaki et al. | Feb 2006 | B1 |
7020351 | Kumar et al. | Mar 2006 | B1 |
7047418 | Ferren et al. | May 2006 | B1 |
7095901 | Lee et al. | Aug 2006 | B2 |
7146027 | Kim et al. | Dec 2006 | B2 |
7152782 | Shenker et al. | Dec 2006 | B2 |
7248719 | Hoffman et al. | Jul 2007 | B2 |
7253739 | Hammoud et al. | Aug 2007 | B2 |
7271939 | Kono | Sep 2007 | B2 |
7318050 | Musgrave | Jan 2008 | B1 |
7346472 | Moskowitz et al. | Mar 2008 | B1 |
7369759 | Kusakari et al. | May 2008 | B2 |
7385626 | Aggarwal et al. | Jun 2008 | B2 |
7398925 | Tidwell et al. | Jul 2008 | B2 |
7414737 | Cottard et al. | Aug 2008 | B2 |
7418115 | Northcott et al. | Aug 2008 | B2 |
7428320 | Northcott et al. | Sep 2008 | B2 |
7542590 | Robinson et al. | Jun 2009 | B1 |
7545962 | Peirce et al. | Jun 2009 | B2 |
7558406 | Robinson et al. | Jul 2009 | B1 |
7558407 | Hoffman et al. | Jul 2009 | B2 |
7574021 | Matey | Aug 2009 | B2 |
7583822 | Guillemot et al. | Sep 2009 | B2 |
7606401 | Hoffman et al. | Oct 2009 | B2 |
7616788 | Hsieh et al. | Nov 2009 | B2 |
7627147 | Loiacono et al. | Dec 2009 | B2 |
7639840 | Hanna et al. | Dec 2009 | B2 |
7660700 | Moskowitz et al. | Feb 2010 | B2 |
7693307 | Rieul et al. | Apr 2010 | B2 |
7697786 | Camus et al. | Apr 2010 | B2 |
7715595 | Kim et al. | May 2010 | B2 |
7719566 | Guichard | May 2010 | B2 |
7770019 | Ferren et al. | Aug 2010 | B2 |
7797606 | Chabanne | Sep 2010 | B2 |
7801335 | Hanna et al. | Sep 2010 | B2 |
7847688 | Bernard et al. | Dec 2010 | B2 |
7869627 | Northcott et al. | Jan 2011 | B2 |
7925059 | Hoyos et al. | Apr 2011 | B2 |
7929017 | Aggarwal et al. | Apr 2011 | B2 |
7929732 | Bringer et al. | Apr 2011 | B2 |
7949295 | Kumar et al. | May 2011 | B2 |
7949494 | Moskowitz et al. | May 2011 | B2 |
7978883 | Rouh et al. | Jul 2011 | B2 |
8009876 | Kim et al. | Aug 2011 | B2 |
8025399 | Northcott et al. | Sep 2011 | B2 |
8028896 | Carter et al. | Oct 2011 | B2 |
8090246 | Jelinek | Jan 2012 | B2 |
8092021 | Northcott et al. | Jan 2012 | B1 |
8132912 | Northcott et al. | Mar 2012 | B1 |
8159328 | Luckhardt | Apr 2012 | B2 |
8170295 | Fujii et al. | May 2012 | B2 |
8181858 | Carter et al. | May 2012 | B2 |
8195044 | Hanna et al. | Jun 2012 | B2 |
8212870 | Hanna et al. | Jul 2012 | B2 |
8214175 | Moskowitz et al. | Jul 2012 | B2 |
8233680 | Bringer et al. | Jul 2012 | B2 |
8243133 | Northcott et al. | Aug 2012 | B1 |
8260008 | Hanna et al. | Sep 2012 | B2 |
8279042 | Beenau et al. | Oct 2012 | B2 |
8280120 | Hoyos et al. | Oct 2012 | B2 |
8289390 | Aggarwal et al. | Oct 2012 | B2 |
8306279 | Hanna | Nov 2012 | B2 |
8317325 | Raguin et al. | Nov 2012 | B2 |
8364646 | Hanna et al. | Jan 2013 | B2 |
8411909 | Zhao et al. | Apr 2013 | B1 |
8442339 | Martin et al. | May 2013 | B2 |
8443202 | White et al. | May 2013 | B2 |
8553948 | Hanna | Oct 2013 | B2 |
8604901 | Hoyos et al. | Dec 2013 | B2 |
8606097 | Hanna et al. | Dec 2013 | B2 |
8719584 | Mullin | May 2014 | B2 |
8798330 | Hanna et al. | Aug 2014 | B2 |
8818053 | Hanna | Aug 2014 | B2 |
20010014124 | Nishikawa | Aug 2001 | A1 |
20010017584 | Shinzaki | Aug 2001 | A1 |
20010022815 | Agarwal | Sep 2001 | A1 |
20010036297 | Ikegami et al. | Nov 2001 | A1 |
20020051116 | Van Saarloos et al. | May 2002 | A1 |
20020080256 | Bates et al. | Jun 2002 | A1 |
20020112177 | Voltmer et al. | Aug 2002 | A1 |
20020131622 | Lee et al. | Sep 2002 | A1 |
20020136435 | Prokoski | Sep 2002 | A1 |
20020158750 | Almalik | Oct 2002 | A1 |
20020176623 | Steinberg | Nov 2002 | A1 |
20030046553 | Angelo | Mar 2003 | A1 |
20030098914 | Easwar | May 2003 | A1 |
20030163710 | Ortiz et al. | Aug 2003 | A1 |
20040005083 | Fujimura et al. | Jan 2004 | A1 |
20040114782 | Cho | Jun 2004 | A1 |
20040136592 | Chen | Jul 2004 | A1 |
20040165147 | Della Vecchia et al. | Aug 2004 | A1 |
20040170303 | Cannon | Sep 2004 | A1 |
20040233037 | Beenau et al. | Nov 2004 | A1 |
20050002572 | Saptharishi et al. | Jan 2005 | A1 |
20050041876 | Karthik | Feb 2005 | A1 |
20050058324 | Karthik | Mar 2005 | A1 |
20050084137 | Kim et al. | Apr 2005 | A1 |
20050084179 | Hanna et al. | Apr 2005 | A1 |
20050094895 | Baron | May 2005 | A1 |
20050151620 | Neumann | Jul 2005 | A1 |
20050175225 | Shinzaki | Aug 2005 | A1 |
20050207614 | Schonberg et al. | Sep 2005 | A1 |
20050219360 | Cusack et al. | Oct 2005 | A1 |
20050223236 | Yamada et al. | Oct 2005 | A1 |
20050229007 | Bolle et al. | Oct 2005 | A1 |
20050238214 | Matsuda et al. | Oct 2005 | A1 |
20050289582 | Tavares et al. | Dec 2005 | A1 |
20060016872 | Bonalle et al. | Jan 2006 | A1 |
20060028552 | Aggarwal et al. | Feb 2006 | A1 |
20060050933 | Adam et al. | Mar 2006 | A1 |
20060056664 | Iwasaki | Mar 2006 | A1 |
20060073449 | Kumar et al. | Apr 2006 | A1 |
20060074986 | Mallalieu et al. | Apr 2006 | A1 |
20060088193 | Muller et al. | Apr 2006 | A1 |
20060115130 | Kozlay | Jun 2006 | A1 |
20060120577 | Shinzaki et al. | Jun 2006 | A1 |
20060123239 | Martinian et al. | Jun 2006 | A1 |
20060123242 | Merrem | Jun 2006 | A1 |
20060192868 | Wakamori | Aug 2006 | A1 |
20060210122 | Cleveland | Sep 2006 | A1 |
20060250218 | Kondo et al. | Nov 2006 | A1 |
20060274918 | Amantea et al. | Dec 2006 | A1 |
20060274919 | Loiacono et al. | Dec 2006 | A1 |
20060279630 | Aggarwal et al. | Dec 2006 | A1 |
20070017136 | Mosher et al. | Jan 2007 | A1 |
20070035623 | Garoutte et al. | Feb 2007 | A1 |
20070047772 | Matey et al. | Mar 2007 | A1 |
20070110285 | Hanna et al. | May 2007 | A1 |
20070174633 | Draper et al. | Jul 2007 | A1 |
20070206839 | Hanna et al. | Sep 2007 | A1 |
20070206840 | Jacobson | Sep 2007 | A1 |
20070211922 | Crowley et al. | Sep 2007 | A1 |
20080025575 | Schonberg et al. | Jan 2008 | A1 |
20080033301 | Dellavecchia et al. | Feb 2008 | A1 |
20080044063 | Friedman et al. | Feb 2008 | A1 |
20080092341 | Ahmadshahi | Apr 2008 | A1 |
20080114697 | Black et al. | May 2008 | A1 |
20080122578 | Hoyos et al. | May 2008 | A1 |
20080170758 | Johnson et al. | Jul 2008 | A1 |
20080181467 | Zappia | Jul 2008 | A1 |
20080235515 | Yedidia et al. | Sep 2008 | A1 |
20080253622 | Tosa et al. | Oct 2008 | A1 |
20080267456 | Anderson | Oct 2008 | A1 |
20080291279 | Samarasekera et al. | Nov 2008 | A1 |
20090016574 | Tsukahara | Jan 2009 | A1 |
20090049534 | Chung | Feb 2009 | A1 |
20090074256 | Haddad | Mar 2009 | A1 |
20090097715 | Cottard et al. | Apr 2009 | A1 |
20090161925 | Cottard et al. | Jun 2009 | A1 |
20090226053 | Matsuda et al. | Sep 2009 | A1 |
20090231096 | Bringer et al. | Sep 2009 | A1 |
20090232367 | Shinzaki | Sep 2009 | A1 |
20090274345 | Hanna et al. | Nov 2009 | A1 |
20100014720 | Hoyos et al. | Jan 2010 | A1 |
20100021016 | Cottard et al. | Jan 2010 | A1 |
20100021017 | Bell et al. | Jan 2010 | A1 |
20100046805 | Connell et al. | Feb 2010 | A1 |
20100046808 | Connell et al. | Feb 2010 | A1 |
20100074477 | Fujii et al. | Mar 2010 | A1 |
20100074478 | Hoyos et al. | Mar 2010 | A1 |
20100127826 | Saliba et al. | May 2010 | A1 |
20100141380 | Pishva | Jun 2010 | A1 |
20100183199 | Smith et al. | Jul 2010 | A1 |
20100204571 | Dellavecchia et al. | Aug 2010 | A1 |
20100232655 | Hanna | Sep 2010 | A1 |
20100246903 | Cottard | Sep 2010 | A1 |
20100253816 | Hanna | Oct 2010 | A1 |
20100266165 | Matey et al. | Oct 2010 | A1 |
20100278394 | Raguin et al. | Nov 2010 | A1 |
20100310070 | Bringer et al. | Dec 2010 | A1 |
20110002510 | Hanna | Jan 2011 | A1 |
20110007949 | Hanna et al. | Jan 2011 | A1 |
20110015945 | Addy | Jan 2011 | A1 |
20110087611 | Chetal | Apr 2011 | A1 |
20110106700 | Deguchi et al. | May 2011 | A1 |
20110119111 | Hanna | May 2011 | A1 |
20110119141 | Hoyos et al. | May 2011 | A1 |
20110138187 | Kaga et al. | Jun 2011 | A1 |
20110157347 | Kalocsai | Jun 2011 | A1 |
20110158486 | Bringer et al. | Jun 2011 | A1 |
20110194738 | Choi et al. | Aug 2011 | A1 |
20110206243 | Vlcan | Aug 2011 | A1 |
20110211054 | Hanna et al. | Sep 2011 | A1 |
20110213709 | Newman et al. | Sep 2011 | A1 |
20110263972 | Dellavecchia et al. | Oct 2011 | A1 |
20110277518 | Lais et al. | Nov 2011 | A1 |
20120127295 | Hanna et al. | May 2012 | A9 |
20120187838 | Hanna | Jul 2012 | A1 |
20120212597 | Hanna | Aug 2012 | A1 |
20120219279 | Hanna et al. | Aug 2012 | A1 |
20120239458 | Hanna | Sep 2012 | A9 |
20120240223 | Tu | Sep 2012 | A1 |
20120242820 | Hanna et al. | Sep 2012 | A1 |
20120242821 | Hanna et al. | Sep 2012 | A1 |
20120243749 | Hanna et al. | Sep 2012 | A1 |
20120257797 | Leyvand et al. | Oct 2012 | A1 |
20120268241 | Hanna et al. | Oct 2012 | A1 |
20120293643 | Hanna | Nov 2012 | A1 |
20120300990 | Hanna et al. | Nov 2012 | A1 |
20120321141 | Hoyos et al. | Dec 2012 | A1 |
20120328164 | Hoyos et al. | Dec 2012 | A1 |
20130051631 | Hanna | Feb 2013 | A1 |
20130110859 | Hanna et al. | May 2013 | A1 |
20130162798 | Hanna et al. | Jun 2013 | A1 |
20130162799 | Hanna et al. | Jun 2013 | A1 |
20130182093 | Hanna | Jul 2013 | A1 |
20130182094 | Hanna | Jul 2013 | A1 |
20130182095 | Hanna | Jul 2013 | A1 |
20130182913 | Hoyos et al. | Jul 2013 | A1 |
20130182915 | Hanna | Jul 2013 | A1 |
20130194408 | Hanna et al. | Aug 2013 | A1 |
20130212655 | Hoyos et al. | Aug 2013 | A1 |
20130294659 | Hanna et al. | Nov 2013 | A1 |
20140064574 | Hanna et al. | Mar 2014 | A1 |
20140072183 | Hanna et al. | Mar 2014 | A1 |
Other References |
---|
Notice of Allowance on U.S. Appl. No. 12/887,106 dated May 8, 2012. |
Notice of Allowance on U.S. Appl. No. 13/567,905 dated Apr. 25, 2014. |
Notice of Allowance on U.S. Appl. No. 13/567,905 dated May 28, 2014. |
Notice of Allowance on U.S. Appl. No. 13/800,496 dated Jun. 20, 2014. |
Notice of Allowance on U.S. Appl. No. 13/800,525 dated Mar. 24, 2014. |
Notice of Allowance on U.S. Appl. No. 14/712,460 dated Nov. 23, 2016. |
Notice of Allowance on U.S. Appl. No. 15/477,633 dated Aug. 23, 2017. |
Office Action on U.S. Appl. No. 13/567,901 dated Jul. 26, 2013. |
Office Action on U.S. Appl. No. 13/567,905 dated Apr. 2, 2013. |
Office Action on U.S. Appl. No. 13/567,905 dated Jan. 15, 2014. |
Office Action on U.S. Appl. No. 13/800,525 dated Dec. 17, 2013. |
Office Action on U.S. Appl. No. 14/336,724 dated Nov. 17, 2014. |
Office Action on U.S. Appl. No. 14/712,460 dated Jul. 7, 2016. |
Office Action on U.S. Appl. No. 15/477,633 dated May 10, 2017. |
Prior Publication Data
Number | Date | Country | |
---|---|---|---|
20180053052 A1 | Feb 2018 | US |
Provisional Applications
Number | Date | Country | |
---|---|---|---|
60597130 | Nov 2005 | US | |
60597152 | Nov 2005 | US | |
60597231 | Nov 2005 | US | |
60597289 | Nov 2005 | US | |
60597336 | Nov 2005 | US |
Continuations
Number | Date | Country | |
---|---|---|---|
Parent | 15477633 | Apr 2017 | US |
Child | 15785062 | US | |
Parent | 14712460 | May 2015 | US |
Child | 15477633 | US | |
Parent | 14336724 | Jul 2014 | US |
Child | 14712460 | US | |
Parent | 13800496 | Mar 2013 | US |
Child | 14336724 | US | |
Parent | 13567901 | Aug 2012 | US |
Child | 13800496 | US | |
Parent | 12887106 | Sep 2010 | US |
Child | 13567901 | US |
Continuation in Part
Number | Date | Country | |
---|---|---|---|
Parent | 11559381 | Nov 2006 | US |
Child | 12887106 | US |