Object imaging is useful in a variety of applications. By way of example, biometric recognition systems image biometric objects for authenticating and/or verifying users of devices incorporating the recognition systems. Biometric imaging provides a reliable, non-intrusive way to verify individual identity for recognition purposes. Various types of sensors may be used for biometric imaging including optical sensors.
The present disclosure generally provides optical sensing systems and methods for imaging objects. Various embodiments include one or more in-display aperture regions and one or more under-display light source elements with one or multiple discrete light detector elements positioned on, in or under the display.
According to an embodiment, an optical sensing system is provided that includes a display substrate, a plurality of display elements, e.g., for displaying visible images, a sensor light source for illuminating a sensing region, wherein the sensor light source is separate from the plurality of display elements, a detector for detecting light from the sensing region, and one or more aperture regions defined in the display between the display elements to facilitate and/or enhance illumination of the sensing region by the sensor light source.
According to an embodiment, an optical sensing system is provided that includes a display substrate, a plurality of display elements (e.g., pixel elements) including at least one aperture region in the plurality of display elements, a sensor light source for illuminating a sensing region, wherein the sensor light source is separate from the plurality of display elements, and wherein the sensor light source is disposed under the display substrate and under the plurality of display elements and is located proximal to the at least one aperture region, and a detector for detecting light from the sensing region, e.g., illumination light reflected by an object proximal to the sensing region.
According to another embodiment, an optical display device is provided that includes a display substrate, a plurality of display elements (e.g., pixel elements) including a plurality of aperture regions disposed in the plurality of display elements, a sensor light source including a plurality of light emitting elements for illuminating a sensing region, wherein the sensor light source is separate from the plurality of display elements, and wherein the sensor light source is disposed under the display substrate and under the plurality of display elements and wherein the plurality of light emitting elements are located proximal to corresponding aperture regions, and a detector for detecting light from the sensing region, e.g., illumination light reflected by an object proximal to the sensing region.
Other features and advantages of the present invention will be realized by reference to the remaining portions of the specification, including the drawings and claims. Further features and advantages of the present invention, as well as the structure and operation of various embodiments of the present invention, are described in detail below with respect to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements.
The detailed description is described with reference to the accompanying figures. The use of the same reference numbers in different instances in the description and the figures may indicate similar or identical items.
The following detailed description is exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the following detailed description or the appended drawings.
Turning to the drawings, and as described in detail herein, embodiments of the disclosure provide methods, devices and systems useful to image, e.g., optically image, an input object such as a fingerprint.
By way of example, basic functional components of the electronic device 100 utilized during capturing, storing, and validating a biometric match attempt are illustrated. The processing system 104 may include one or more processors 106, memory 108, template storage 110, operating system (OS) 112 and power source(s) 114. The one or more processors 106, memory 108, template storage 110, and operating system 112 may be connected physically, communicatively, and/or operatively to each other directly or indirectly. The power source(s) 114 may be connected to the various components in processing system 104 to provide electrical power as necessary.
As illustrated, the processing system 104 may include processing circuitry including one or more processors 106 configured to implement functionality and/or process instructions for execution within electronic device 100. For example, one or more processors 106 may execute instructions stored in memory 108 or instructions stored on template storage 110 to normalize an image, reconstruct a composite image, identify, verify, or otherwise match a biometric object, or determine whether a biometric authentication attempt is successful. Memory 108, which may be a non-transitory, computer-readable storage medium, may be configured to store information within electronic device 100 during operation. In some embodiments, memory 108 includes a temporary memory, i.e., an area in which information is not maintained when the electronic device 100 is turned off. Examples of such temporary memory include volatile memories such as random access memories (RAM), dynamic random access memories (DRAM), and static random access memories (SRAM). Memory 108 may also maintain program instructions for execution by the processor 106.
Template storage 110 may comprise one or more non-transitory computer-readable storage media. In the context of a fingerprint sensor device or system, the template storage 110 may be configured to store enrollment views or image data for fingerprint images associated with a user's fingerprint, or other enrollment information, such as template identifiers, enrollment graphs containing transformation information between different images or views, etc. More generally, the template storage 110 may store information about an input object. The template storage 110 may further be configured for long-term storage of information. In some examples, the template storage 110 includes non-volatile storage elements. Non-limiting examples of non-volatile storage elements include magnetic hard discs, solid-state drives (SSD), optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories, among others.
The processing system 104 may also host an operating system (OS) 112. The operating system 112 may control operations of the components of the processing system 104. For example, the operating system 112 facilitates the interaction of the processor(s) 106, memory 108, and template storage 110.
According to some embodiments, the one or more processors 106 may implement hardware and/or software to obtain data describing an image of an input object. In some implementations, the one or more processors 106 may also determine whether there is a match between two images, e.g., by aligning two images and comparing the aligned images to one another. The one or more processors 106 may also operate to reconstruct a larger image from a series of smaller partial images or sub-images, such as fingerprint images when multiple partial fingerprint images are collected during a biometric process, such as an enrollment or matching process for verification or identification.
The processing system 104 may include one or more power sources 114 to provide power to components of the electronic device 100. Non-limiting examples of power sources 114 include single-use power sources, rechargeable power sources, and/or power sources developed from nickel-cadmium, lithium-ion, or other suitable material, as well as power cords and/or adapters, which are in turn connected to electrical power. A power source 114 may be external to the processing system 104 and/or electronic device 100.
Display 102 can be implemented as a physical part of the electronic system 100 or can be physically separate from the electronic system 100. As appropriate, display 102 may communicate with parts of the electronic system 100 using various wired and/or wireless interconnection and communication technologies, such as buses and networks. Example technologies may include Inter-Integrated Circuit (I2C), Serial Peripheral Interface (SPI), PS/2, Universal Serial Bus (USB), Bluetooth®, Infrared Data Association (IrDA), and various radio frequency (RF) communication protocols defined by the IEEE 802.11 standard. In some embodiments, display 102 is implemented as an image sensor, e.g., a fingerprint sensor to capture a fingerprint of a user. More generally, the components of display 102, or components integrated in or with the display (e.g., one or more light sources, detectors, etc.) may be implemented to image an object. In accordance with some embodiments, display 102 may use optical sensing for object imaging including imaging biometrics such as fingerprints.
Some non-limiting examples of electronic systems 100 include personal computing devices (e.g., desktop computers, laptop computers, netbook computers, tablets, web browsers, e-book readers, and personal digital assistants (PDAs)), composite input devices (e.g., physical keyboards, joysticks, and key switches), data input devices (e.g., remote controls and mice), data output devices (e.g., display screens and printers), remote terminals, kiosks, video game machines (e.g., video game consoles, portable gaming devices, and the like), communication devices (e.g., cellular phones, such as smart phones), and media devices (e.g., recorders, editors, and players such as televisions, set-top boxes, music players, digital photo frames, and digital cameras).
In some embodiments, the processing system 104 includes display driver circuitry, LED driver circuitry, receiver circuitry or readout circuitry for operating or activating light sources, or for receiving data from or reading out detectors in accordance with some embodiments described elsewhere in this document. For example, the processing system 104 may include one or more display driver integrated circuits (ICs), LED driver ICs, OLED driver ICs, readout ICs, etc.
The light sources 202 and 203 are of a suitable type described below (e.g., OLEDs, micro-LEDs, etc.). In some embodiments, the light sources 202 and 203 may include native display elements (e.g., one or more native OLED pixels/emitters), or dedicated emitters integrated in or with the display (e.g., micro-LEDs integrated in or with an OLED or LCD display). Although only two light sources 202, 203 are shown in
The photosensors or detector pixels 204 and 205 may detect light transmitted from light sources 202, 203. Examples of types of photosensors are CMOS sensors, phototransistors and photodiodes. Thin film transistor-based sensors may also be used in accordance with some embodiments.
Although the light sources 202, 203 and photosensors 204, 205 are depicted as distinct elements, in some embodiments the same type of element may be used to both transmit light and detect transmitted light. For example, the light sources 202, 203 themselves may be reverse-biased to function as detector pixels, using LED, OLED, or another suitable display driver technology. The light sources 202, 203 can be individually reverse biased to function as detector pixels, or may be collectively reverse-biased, e.g., to function as rows or columns of detector pixels. Further, all of the light sources 202, 203 may be addressable in a reverse biased state, or a smaller subset may be addressable in a reverse bias state to minimize the amount of additional routing circuitry that is included, in which case the display 200 may include a special area of fingerprint sensing corresponding to those light sources 202, 203 that can be set to a reverse biased detector state. In addition, although the detector pixels 204, 205 are shown on the same substrate 206 as the light sources 202, 203, the detector pixels 204, 205 can be otherwise arranged within the device, for example, on a different plane from the light sources 202, 203.
The cover layer 208 may include a cover lens, cover glass, or cover sheet, which protects the inner components of the display 200, such as the light sources 202, 203 and the detector pixels 204, 205. The cover layer 208 may be made of any suitable material such as chemically strengthened glass, crystalline materials (e.g., synthetic sapphire), transparent polymeric materials, and the like. The cover layer 208 may also include one or more additional layers associated with display and/or touch screen functionality, such as capacitive touch screen functionality. The cover layer 208 may be transparent thereby allowing light from light sources 202, 203 and the native display elements (e.g., native OLED emitters) to be transmitted and observed outside of the display 200. A top surface of the cover layer 208 forms a sensing surface or input surface 212, which provides a contact area for the input object 210.
The input object 210 is an object to be imaged and may include a biometric object such as a fingerprint. The input object 210 may have various characteristics, for example, ridges 214 and valleys 216. Due to their protruding nature, the ridges 214 contact the sensing surface 212 of the cover layer 208. In contrast, the valleys 216 generally do not contact the sensing surface 212 and instead form a gap 218 between the input object 210 and the sensing surface 212. The input object 210 may have other characteristics 221, such as moisture, stain, or ink, that do not create significant structural differences in portions of the input object 210, but which may affect its optical properties.
The light sources 202, 203 transmit beams of light within the cover layer 208 and the transmitted light becomes incident on the sensing surface 212 of the cover layer 208 at various angles. Depending on the angles, some of the transmitted light is reflected and some of the transmitted light is refracted. However, for cases where no fingerprint ridge is present on the sensing surface 212, light beams which arrive at the sensing surface 212 at an angle exceeding a critical angle θc undergo total internal reflection, i.e., all light from the transmitted beam exceeding the critical angle is reflected at the sensing surface 212.
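To make the critical-angle behavior concrete, the following is a minimal sketch of the underlying Snell's-law arithmetic. The refractive indices are illustrative assumptions only; actual values depend on the cover material and on whatever medium contacts the sensing surface.

```python
import math

def critical_angle(n_cover: float, n_medium: float) -> float:
    """Return the critical angle (radians) at the cover/medium boundary.

    Total internal reflection occurs for incidence angles exceeding
    theta_c = arcsin(n_medium / n_cover), valid when n_medium < n_cover.
    """
    return math.asin(n_medium / n_cover)

# Illustrative values only: glass cover against air (valley) vs. skin (ridge).
theta_cv = critical_angle(1.5, 1.0)   # ~41.8 degrees (air in valley)
theta_cr = critical_angle(1.5, 1.4)   # ~69.0 degrees (assumed skin index)
print(math.degrees(theta_cv), math.degrees(theta_cr))
```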
As will be appreciated, since the medium above the sensing surface 212 may vary, the critical angle at various points along the sensing surface 212 may likewise vary. For example, the ridges 214 of the input object 210 and gaps 218 formed within the valleys 216 of the input object 210 may have different indices of refraction. As a result, different critical angles may exist at the boundaries between the sensing surface 212 and ridges 214 as compared to the boundaries formed by the gaps 218 and the sensing surface 212. These differences are illustratively shown in
In accordance with some embodiments, detector pixels 204 falling within region 228 are used to detect reflected light to image part of input object 210 when light source 202 is illuminated. With respect to the detection of ridges and valleys, region 228 is an area of relatively high contrast. The relatively high contrast occurs because light reflected from the sensing surface 212 in contact with valleys 216 (e.g., air) undergoes total internal reflection whereas light reflected from the sensing surface 212 in contact with the input object 210 (e.g., skin) does not. Thus, light beams transmitted from light source 202 which have an angle of incidence at the sensing surface falling between θcv and θcr are reflected and reach detector pixels 204 falling within region 228.
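The radial extent of such a high contrast region on the detector plane can be sketched from the single-bounce geometry, assuming the light source and detectors are coplanar a cover-thickness below the sensing surface; the thickness value below is illustrative.

```python
import math

def contrast_annulus(t_cover: float, theta_cv: float, theta_cr: float):
    """Radial extent on the detector plane of the high-contrast region.

    Assumes the source and detectors sit a distance t_cover below the
    sensing surface, so a ray reflected once at incidence angle theta
    lands at radius r = 2 * t_cover * tan(theta) from the source.
    """
    r_inner = 2.0 * t_cover * math.tan(theta_cv)  # valley critical angle
    r_outer = 2.0 * t_cover * math.tan(theta_cr)  # ridge critical angle
    return r_inner, r_outer

# Illustrative: 0.5 mm cover with the critical angles sketched earlier.
print(contrast_annulus(0.5, math.radians(41.8), math.radians(69.0)))
```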
In accordance with another aspect of the disclosure, detector pixels 205 falling within region 230 (relative to light source 202) may also be used to image the input object 210. In particular, transmitted beams from light source 202, which become incident on the sensing surface 212 with angles smaller than both critical angle of ridge (θcr) and critical angle of valley (θcv) result in reflected beams falling within region 230. Due to scattering, the contrast of reflected beams falling within region 230 from ridges 214 and valleys 216 may be less than the contrast of reflected beams falling within high contrast region 228. However, depending on factors such as the sensitivity of the detector pixels 204, 205 and resolution requirements, region 230 may still be suitable for sensing ridges 214 and valleys 216 on the input object 210. Moreover, region 230 may be suitable for detecting non-structural optical variations in the input object 210 such as moisture or stains or ink 221.
It will be appreciated that the reflected light beams detected in region 228 may provide a magnified view of a partial image of the input object 210 due to the angles of reflection. The amount of magnification depends at least in part upon the distance between the light source 202 and the sensing surface 212 as well as the distance between the detectors 204 and the sensing surface 212. In some implementations, these distances may be defined relative to the normal of these surfaces or planes (e.g., relative to a normal of the sensing surface or relative to a plane containing the light source or detectors). For example, if the light source 202 and the detector pixels 204 are coplanar, then the distance between the light source 202 and the sensing surface 212 may be equivalent to the distance between the detectors 204 and the sensing surface 212. In such a case, an image or partial image of the input object 210 may undergo a two-times magnification (2×) based on a single internal reflection from the sensing surface 212 reaching the detector pixels 204 in region 228.
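The similar-triangles relationship behind this magnification can be sketched as follows, with hypothetical distances:

```python
def magnification(d_source: float, d_detector: float) -> float:
    """Magnification of the mirrored partial image for single-bounce rays.

    For a point source a distance d_source below the sensing surface and
    detectors a distance d_detector below it, similar triangles give
    M = (d_source + d_detector) / d_source; equal (coplanar) distances
    yield the two-times magnification noted above.
    """
    return (d_source + d_detector) / d_source

print(magnification(0.5, 0.5))  # 2.0 for coplanar source and detectors
```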
The critical angles θcr and θcv resulting from ridges 214 and gaps 218 at the sensing surface 212 are dependent at least in part on the properties of the medium in contact with the boundary formed at the sensing surface 212, which may be affected by a condition of the input object 210. For example, a dry finger in contact with the sensing surface 212 may result in a skin to air variation across the sensing surface 212 corresponding to fingerprint ridges and valleys, respectively. However, a wet finger in contact with the sensing surface 212 may result in a skin to water or other liquid variation across the sensing surface 212. Thus, the critical angles of a wet finger may be different from the critical angles formed by the same finger in a dry condition. Thus, in accordance with the disclosure, the intensity of light received at the detector pixels 204, 205 can be used to determine the relative critical angles and/or whether the object is wet or dry, and to perform a mitigating action such as processing the image differently, providing feedback to a user, and/or adjusting the detector pixels or sensor operation used for capturing the image of the input object. A notification may be generated to prompt correction of an undesirable input object condition. For example, if a wet finger is detected, a message may be displayed or an indicator light may be lit to prompt the user to dry the finger before imaging.
In certain embodiments, when the light source corresponding to display pixel 302 is illuminated, detector pixels falling within the high contrast region 308, such as detector pixels 310 and 312 may be used to detect reflected light from the display pixel 302 to image a portion of the input object. In other embodiments, or in combination with the collection of data from region 308, detector pixels, such as detector pixels 314 falling within region 318 may be used.
Also shown in
In the example of
It will be understood that
In the example of
In some applications, image data from various partial images obtained during patterned illumination (e.g., sequential or simultaneous illumination of display pixels) of the individual display pixels is combined into composite image data of the input object. The partial image data may be aligned based on known spatial relationships between the illumination sources in the pattern. By way of example, the partial image data may be combined by stitching together the partial images into a larger image, or by generating a map that relates the image data from the various partial images according to their relative alignments. Demagnification of the images may be useful prior to such piecing together or mapping. In addition, it may be useful to apply a weighting function to the image data to account for the different intensities of light received at detector pixels having different distances from the display pixels. In some applications, if pixels inside of region 508 are used, the resulting data from the various partial images may be deconvolved to reconstruct the larger image. Alternatively, the data inside of this region may convey sufficient information for some applications, so that no deconvolution is used. U.S. patent application Ser. No. 16/006,639, filed Jun. 12, 2018, and titled “Systems And Methods For Optical Sensing Using Point-Based Illumination,” which is hereby incorporated by reference, discusses image stitching and construction of images, for example at
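As one illustration of this combining step, the following sketch stitches demagnified partial images by weighted averaging. The patch offsets and the distance-based falloff weighting are assumptions for illustration, not the specific stitching method of the incorporated application.

```python
import numpy as np

def combine_partials(partials, offsets, shape):
    """Stitch demagnified partial images into one composite.

    `partials` are 2-D arrays already demagnified to object scale;
    `offsets` are the known (row, col) placements of each partial,
    derived from the spatial relationships of the illumination sources.
    Overlapping pixels are blended with a weight that de-emphasizes
    pixels far from the patch center (an assumed falloff model, standing
    in for the intensity-based weighting function described above).
    """
    acc = np.zeros(shape)
    wsum = np.zeros(shape)
    for img, (r0, c0) in zip(partials, offsets):
        h, w = img.shape
        rr, cc = np.mgrid[0:h, 0:w]
        dist = np.hypot(rr - h / 2, cc - w / 2)
        weight = 1.0 / (1.0 + dist)
        acc[r0:r0 + h, c0:c0 + w] += img * weight
        wsum[r0:r0 + h, c0:c0 + w] += weight
    return acc / np.maximum(wsum, 1e-9)
```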
As shown, the device 600 includes an active display area 604. The active display area 604 may encompass a portion of a surface of the device 600 as shown, or it may encompass the entire device surface or multiple portions of the device surface. Also, the sensing surface or input surface may encompass a portion of the active display area 604, or the sensing surface may encompass the entire active display area 604 or multiple portions of the active display area 604. An object 606, such as a finger, is placed over (e.g., proximal to or in contact with) the active display area 604. One or more light sources (not shown) underneath the object 606 are illuminated according to a pattern to image part or all of the object 606 in accordance with the description herein. During or after imaging of the object 606, display pixels or other light sources at or about the perimeter of the object 606 may be illuminated to provide a visually perceptible border 608. The displayed border 608 may change in appearance to signify status. For example, while the object 606 is being imaged and/or during an authentication period, the border could be a first color (e.g., yellow). Once the imaging and authentication is completed, the color could change to a second color (e.g., green) if the authentication is successful or a third color (e.g., red) if the authentication is unsuccessful. It will be appreciated that changes in color provide one example of how the border 608 may be altered to signal status to the user. Other changes in the appearance of the border, such as a change from dashed line to a solid line, or an overall change in the shape of the border could be employed as well.
In step 702, the presence of an input object proximal to or in contact with the sensing surface of the display is detected. Such detection may occur, for example, as the result of detection of changes of intensity in light received at detector pixels in the display. Alternatively, presence of the input object may be detected via capacitive sensing or other conventional techniques using a touch screen for example.
In step 704, moisture content of the input object to be imaged is determined. The moisture content can be determined, for example, by illuminating display pixels to determine the inner boundary of the high contrast area. By comparing the determined inner boundary of the high contrast area to an expected boundary for a dry object, the relative moisture content can be estimated. The moisture content can be used for various purposes. For example, the detected moisture content can be used as a metric of expected image quality. The detected moisture content may also be used to establish the boundaries of high contrast and, therefore, used to establish which detector pixels will be used to collect data when a given light source is illuminated as part of the imaging process. The detected moisture content may also be used to notify the user that a suitable image cannot be obtained. The user may then be instructed to dry the object (e.g., finger) and initiate another imaging attempt.
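Because a wet finger replaces the air in the valleys with water, raising the valley critical angle and pushing the inner boundary of the high contrast region outward, a crude relative-moisture check can be sketched as below; the metric and the threshold are assumptions for illustration.

```python
def estimate_moisture(r_inner_measured: float, r_inner_dry: float) -> float:
    """Hypothetical relative-moisture metric from the inner boundary.

    Water in the valleys raises the valley critical angle, which moves
    the inner boundary of the high contrast region outward. This ratio
    is ~0 for a dry finger and grows as the boundary moves outward.
    """
    return max(0.0, (r_inner_measured - r_inner_dry) / r_inner_dry)

if estimate_moisture(1.3, 1.0) > 0.2:   # threshold is an assumption
    print("Input object appears wet; prompt the user to dry the finger.")
```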
In step 706, one or more light sources (e.g., display pixels, separate LEDs, etc.) are illuminated to image the input object. The light sources to be illuminated and sequence of illumination depend on the illumination pattern used. If a spatial pattern is used, multiple spatially separated light sources are simultaneously illuminated. If a temporal pattern is used, different light sources, or different clusters of light sources that are collectively operated as a point source, are illuminated at different times. As previously described, the pattern used for imaging may include a combination of temporal and spatial patterns. For example, a first set of display pixels may be illuminated first where the corresponding high contrast areas are non-overlapping. This may then be followed by a second set of distinct display pixels being illuminated, which likewise provide non-intersecting high contrast regions and so on. The display pixels illuminated and sequence of illumination may be guided by a touch position detected by capacitive sensor or touch screen, for example.
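The illumination-and-capture flow of steps 706 through 710 might be organized as in the following sketch, where `illuminate`, `read_detectors`, and `region_for` are hypothetical hardware hooks rather than an actual driver API:

```python
def capture_with_pattern(pattern, illuminate, read_detectors, region_for):
    """Drive an illumination pattern and collect partial-image data.

    `pattern` is a list of steps; each step is a set of spatially
    separated sources lit simultaneously, chosen so that their high
    contrast regions do not overlap. Steps are executed in sequence
    (temporal pattern), and one detector region is read out per source.
    """
    partials = []
    for sources in pattern:              # temporal sequence (step 706)
        illuminate(sources)
        for src in sources:              # readout per lit source (step 708)
            partials.append(read_detectors(region_for(src)))
    return partials                      # pattern complete (steps 710/712)
```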
It is further contemplated that multiple display pixels may be illuminated even though they provide overlapping high contrast areas. In such an arrangement, the display pixels transmit light of different wavelengths (e.g., colors), which can be separately detected to resolve different partial images of the object. Alternatively, techniques such as code division multiplexing (CDM) may be used to transmit the light. In such an arrangement, the collected data may be deconvolved to resolve the different subparts of the fingerprint. Other methods to distinguish between light transmitted from different display pixels may be used provided that light transmitted from different display pixels can be detected and distinguished.
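As a sketch of the code division multiplexing alternative, the following drives the sources with Hadamard codes over several frames and decodes by orthogonality. `measure_frame` is a hypothetical capture hook, and the on/off coding is an idealization of how the emitters would actually be modulated.

```python
import numpy as np
from scipy.linalg import hadamard

def cdm_capture_and_decode(measure_frame, n_sources=4):
    """Code-division multiplexed illumination with Hadamard codes.

    Each frame lights the subset of sources whose code entry is +1.
    Because frames ~ ((H + 1) / 2) @ partials, applying H recovers each
    source's partial image up to scale; the all-ones first code row also
    carries a DC term that would be subtracted in practice.
    """
    H = hadamard(n_sources)                        # orthogonal +/-1 codes
    frames = np.stack([measure_frame(H[k] > 0) for k in range(n_sources)])
    decoded = np.tensordot(H, frames, axes=1) * (2.0 / n_sources)
    return decoded
```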
In step 708, image data is obtained from appropriate detector pixels. The appropriate detector pixels will, for example, be the detector pixels in the corresponding high contrast region(s) for the display pixel(s) illuminated. However, as previously described, a region inside of the high contrast region may be used. Further, in some implementations, the entire detector array is read out or scanned and then the undesired pixel region can be filtered out with image processing.
In step 710, a determination is made as to whether the illumination pattern is complete. The pattern is complete when data for all of the partial images that will make up the entirety of a desired image of the object is collected. If the pattern is not complete, the process returns to step 706. In step 706, the next light source or set of light sources is illuminated.
In step 712, the collected data for the various partial images undergo processing. By way of example, the processing may include demagnification of the image data and/or normalization or the application of weighting factors to the image data to account for the different intensities of light detected at detector pixels further away from the light sources. The processing may further include combining the data for the various partial images into a complete image or creating a template that relates the partial images to one another even though they are kept separate. The image data from the various partial images may be combined according to the known geometric relationships between the pixels in the pattern. The image data may also be combined based on other parameters, such as the thickness of the cover layer, which provides additional information about the light beam paths from the illumination and detector pixels to the sensing surface to resolve physical transformations between the partial images. The thickness of the cover layer may be pre-defined or may be computed at image capture time based on the location of the inner boundary of the high contrast region. For example, the location of the inner boundary may be closer or further away from the illuminated display pixel for thinner or thicker cover layers, respectively.
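The cover-thickness computation mentioned above follows directly from the single-bounce geometry described earlier; a sketch with illustrative numbers:

```python
import math

def cover_thickness(r_inner: float, theta_cv: float) -> float:
    """Estimate cover-layer thickness from the high-contrast inner boundary.

    With coplanar source and detectors, the inner boundary sits at
    r_inner = 2 * t * tan(theta_cv), so t = r_inner / (2 * tan(theta_cv)),
    where theta_cv is the valley (air) critical angle of the cover.
    """
    return r_inner / (2.0 * math.tan(theta_cv))

# Illustrative: an inner-boundary radius of 0.85 mm with a glass/air
# critical angle of ~41.8 degrees implies a cover layer of ~0.48 mm.
print(cover_thickness(0.85, math.radians(41.8)))
```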
In step 714, the image data may be compared to previously stored images of the object. For example, an image of a fingerprint taken during an authentication attempt may be compared to previously stored enrollment views of the fingerprint. If a match is detected, the user is authenticated. If a match is not detected, authentication may be denied. As another example, an image of a fingerprint taken during a control input may be compared to previously stored enrollment views of the fingerprint to identify which finger provided the input. If a match is detected to a specific finger, a finger specific display response or other device operation may then be initiated based on the identified finger.
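A toy stand-in for this comparison step is sketched below using normalized cross-correlation. A production fingerprint matcher would typically align images and compare ridge or minutiae features rather than raw pixels, and the threshold here is an assumption.

```python
import numpy as np

def match_score(probe: np.ndarray, enrolled: np.ndarray) -> float:
    """Normalized cross-correlation between a probe and an enrolled view."""
    p = (probe - probe.mean()) / (probe.std() + 1e-9)
    e = (enrolled - enrolled.mean()) / (enrolled.std() + 1e-9)
    return float((p * e).mean())

MATCH_THRESHOLD = 0.8   # assumed value; tuned per system in practice

def authenticate(probe, enrolled_views) -> bool:
    """Accept if the probe matches any previously stored enrollment view."""
    return any(match_score(probe, v) > MATCH_THRESHOLD for v in enrolled_views)
```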
As described in connection with
After image processing, the collected data for the object may be stored for later use, e.g., in memory 108 or template storage 110.
The sensing region(s) 812 encompasses one or more spaces or areas in which the optical system 800 is capable of detecting the object(s) 810 and capturing sufficient information associated with the object(s) 810 that is of interest to the optical system 800. The sensing region(s) 812 is optically coupled to both the light source(s) 802 and the light detector(s) 805, thereby providing one or more illumination optical paths for the emitted light 820 to reach the sensing region(s) 812 from the light source(s) 802 and one or more return optical path(s) for the returned light 822 to reach the light detector(s) 805 from the sensing region(s) 812. The illumination optical path(s) and the detection optical path(s) may be physically separate or may overlap, in whole or in part. In some implementations of the optical system 800, the sensing region(s) 812 includes a three-dimensional space within a suitable depth or range of the light source(s) 802 and the optical detector(s) 805 for depth imaging or proximity sensing. In some implementations, the sensing region(s) 812 includes a sensing surface (e.g., a sensor platen) having a two dimensional area for receiving contact of the object(s) 810 for contact imaging or touch sensing. In some implementations, the sensing region(s) 812 may encompass a space or area that extends in one or more directions until a signal to noise ratio (SNR) or a physical constraint of the optical system 800 prevents sufficiently accurate detection of the object(s) 810.
The light source(s) 802 includes one or more light emitters (e.g., one or more light emitting devices or materials) configured to illuminate the sensing region(s) 812 for object detection. In some implementations of the optical system 800, the light source(s) 802 includes one or more light emitting diodes (LEDs), lasers, or other electroluminescent devices, which may include organic or inorganic material and which may be electronically controlled or operated. In some implementations, the light source(s) 802 includes a plurality of light sources, which may be arranged in a regular array or irregular pattern and which may be physically located together or spatially segregated in two or more separate locations. The light source(s) 802 may emit light in a narrow band, a broad band, or multiple different bands, which may have one or more wavelengths in the visible or invisible spectrum, and the light source(s) 802 may emit polarized or unpolarized light. In some implementations, the light source(s) 802 includes one or more dedicated light emitters, which are used only for illuminating the sensing region(s) 812 for object detection. In some implementations, the light source(s) 802 includes one or more light emitters associated with one or more other functions of an electronic system, such as emitters or display elements used for displaying visual information or images to a user.
The light detector(s) 805 includes one or more light sensitive devices or materials configured to detect light from the sensing region(s) 812 for object detection. In some implementations of the optical system 800, the light detector(s) 805 includes one or more photodiodes (PDs), charge coupled devices (CCDs), phototransistors, photoresistors, or other photosensors, which may include organic or inorganic material and which may be electronically measured or operated. In some implementations, the light detector(s) 805 includes a plurality of light sensitive components, which may be arranged in a regular array or irregular pattern and may be physically located together or spatially segregated in two or more separate locations. In some implementations, the light detector(s) 805 includes one or more image sensors, which may be formed using a complementary metal-oxide-semiconductor (CMOS), a thin film transistor (TFT), or charge-coupled device (CCD) process. The light detector(s) 805 may detect light in a narrow band, a broad band, or multiple different bands, which may have one or more wavelengths in the visible or invisible spectrum. The light detector(s) 805 may be sensitive to all or a portion of the band(s) of light emitted by the light source(s) 802.
The object(s) 810 includes one or more animate or inanimate objects that provide information that is of interest to the optical system 800. In some implementations of the optical system 800, the object(s) 810 includes one or more persons, fingers, eyes, faces, hands, or styluses. When the object(s) 810 is positioned in the sensing region(s) 812, all or a portion of the emitted light 820 interacts with the object(s) 810, and all or a portion of the emitted light 820 returns to the light detector(s) 805 as returned light 822. The returned light 822 contains effects corresponding to the interaction of the emitted light 820 with the object(s) 810. In some implementations of the optical system 800, when the emitted light 820 interacts with the object(s) 810 it is reflected, refracted, absorbed, or scattered by the object(s) 810. Further, in some implementations the light detector(s) 805 detects returned light 822 that contains light reflected, refracted, or scattered by the object(s) 810 or one or more surfaces of the sensing region(s) 812, and the returned light 822 is indicative of effects corresponding to the reflection, refraction, absorption, or scattering of the light by the object(s) 810. In some implementations, the light detector 805 also detects other light, such as ambient light, environmental light, or background noise.
The light detector(s) 805 converts all or a portion of the detected light into optical data 830 containing information regarding the object(s) 810, and corresponding to the effects of the interaction of the emitted light 820 with the object(s) 810. In some implementations, the optical data 830 includes one or more images, image data, spectral response data, biometric data, or positional data. The optical data 830 may be provided to one or more processing components for further downstream processing or storage.
Components of the optical system 800 may be contained in the same physical assembly or may be physically separate. For example, in some implementations of the optical system 800, the light source(s) 802 and the optical detector(s) 805, or subcomponents thereof, are contained in the same semiconductor package or same device housing. In some implementations, the light source(s) 802 and the light detector(s) 805, or subcomponents thereof, are contained in two or more separate packages or device housings. Some components of the optical system 800 may or may not be included as part of any physical or structural assembly of the optical system 800. For example, in some implementations, the sensing region(s) 812 includes a structural sensing surface included with a physical assembly of the optical system 800. In some implementations, the sensing region(s) 812 includes an environmental space associated with the optical system 800 during its operation, which may be determined by the design or configuration of the optical system 800 and may encompass different spaces over different instances of operation of the optical system 800. In some implementations, the object(s) 810 is provided by one or more users or environments during operation of the optical system 800, which may include different users or environments over different instances of operation of the optical system 800.
The optical system 800 may include one or more additional components not illustrated for simplicity. For example, in some implementations of the optical system 800, the optical system 800 includes one or more additional optics or optical components (not pictured) included to act on the light in the optical system 800. The optical system 800 may include one or more light guides, lenses, mirrors, refractive surfaces, diffractive elements, filters, polarizers, spectral filters, collimators, pinholes, or light absorbing layers, which may be included in the illumination optical path(s) or return optical path(s) and which may be used to modify or direct the light as appropriate for detection of the object(s) 810.
The display 900 is an electronic visual display device for presenting images, video, or text to one or more viewers or users. The display 900 includes display pixel circuitry 910 (e.g., one or more electrodes, conductive lines, transistors, or the like) disposed fully or partially over the display substrate 906 for operating one or more display elements or display pixels in the display 900. The display pixel circuitry 910 may be disposed over the display substrate 906, directly on a surface of the display substrate 906, or on one or more intervening layers that are disposed on the display substrate 906. The cover 908 includes one or more layers (e.g., one or more passivation layers, planarization layers, protective cover sheets, or the like) disposed over the display substrate 906 and disposed over the display pixel circuitry 910. In some embodiments of the display 900, the display 900 forms a flat, curved, transparent, semitransparent, or opaque display panel. In some embodiments, the display 900 includes a plurality of layers arranged in a display stack. The display stack may include all layers making up a display panel or any plural subset of stacked layers in a display panel.
The display 900 may utilize a suitable technology for displaying two or three-dimensional visual information, such as organic light emitting diode (OLED) technology, micro-LED technology, liquid crystal display (LCD) technology, plasma technology, electroluminescent display (ELD) technology, or the like. In some embodiments of the display 900, the display pixel circuitry 910 includes an active matrix or passive matrix backplane. In some embodiments, the display 900 is an emissive or non-emissive display. In some emissive embodiments of the display 900, the display pixel circuitry 910 controls or operates pixel values of a plurality of light emitting display pixels (e.g., subpixels R, G, B), and the display pixels are top emitting or bottom emitting. In some non-emissive embodiments of the display 900, the display pixel circuitry 910 controls or operates pixel values of a plurality of transmissive or reflective display pixels. In some embodiments, the display 900 presents or displays visible images that are viewable from one or more sides of the display, e.g., from above the cover side and/or from below the substrate side.
With reference to
In-display optical fingerprint sensor embodiments based on point source illumination (e.g., using light sources 902, 1002) provide a higher signal-to-noise ratio (SNR) as compared with collimator-based optical fingerprint sensors (FPSs) because a collimating filter (collimator) does not need to be used and bright auxiliary sources (e.g., light sources 902, 1002) with intensities considerably higher than the display pixels can be used to directly illuminate the finger (transmission through a display can be 5-10%, while a 1/10 aspect ratio collimator has a transmission of 0.5%, as an example). Moreover, collimator-based optical FPSs are difficult to implement in displays other than OLED displays, while the in-display optical FPS based on point source illumination can be implemented on other displays such as LCD displays.
In the embodiments shown and described with reference to
One or several LEDs 1002 can be bonded to the back of the display substrate 1006 as shown in
For an LED placed under a backplane, for example, the light that illuminates the sensing region (e.g., finger in sensing region) may be blocked by TFTs, metal lines, OLED pixel elements, a black mask (in case of LCD), etc. Therefore, for example, if a small LED is used to illuminate the finger, parts of the finger may not be illuminated, which may prevent capturing a useful image from the shadowed location. On the other hand, a larger LED may result in a blurring effect as the light arrives on the sensor from different angles. This has been schematically illustrated in
The distance between individual LEDs or each cluster of LEDs may depend on the sensitivity and dynamic range of photodetectors (e.g., photosensors such as photodiodes) making up the detector as well as the output power of the source and location of the display and the source with respect to the cover-layer interface. The useful area on the detector is usually determined by the intensity of the source and dynamic range and noise in the detector, because the intensity of the light arriving at the detector follows an inverse relationship with the square of the radius. For a fixed light intensity, the noise of the sensor may determine the maximum imaging radius. This may result in a useful imaging area on the finger that is given by the useful image area on the detector divided by the magnification factor. For a fixed radius of the useful image, if a continuous image of the finger is needed, the light sources could have close distances so the finger images taken using each source overlap or meet.
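The pitch constraint described here can be sketched as follows, assuming a fixed useful detector-plane radius and magnification; the numbers are illustrative.

```python
def max_led_pitch(r_useful_detector: float, magnification: float) -> float:
    """Largest LED spacing that still yields a continuous stitched image.

    The useful imaging radius on the finger is the useful detector-plane
    radius (set by source intensity, detector noise, and the inverse-
    square falloff) divided by the magnification. Adjacent sources'
    finger-side patches must overlap or meet, so pitch <= 2 * r_finger.
    """
    r_finger = r_useful_detector / magnification
    return 2.0 * r_finger

# Illustrative: a 4 mm useful detector radius at 2x magnification allows
# LEDs (or LED clusters) on up to a 4 mm pitch.
print(max_led_pitch(4.0, 2.0))
```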
In certain fingerprint sensing device embodiments, one or more separate LEDs may be used as the light source for illuminating the finger, and the detector (e.g., photodiodes in the display) may be used to measure the reflected light. Typically, photodiodes are placed within areas of the display that are free from circuitry and/or display elements. The reflected light may be measured and used to determine the fingerprint. However, as it is important to illuminate the finger with as much light as possible to provide a more accurate reading of the fingerprint, there is a need to allow more light to pass through the display structure (e.g., from below or within the display) and onto the finger to be reflected back to the detector element(s) on, in, or below the display.
In display-integrated fingerprint sensing devices, for example, photodiodes are typically uniformly placed within the display area. When the photodiodes are placed on a layer above the light source (e.g., LEDs), some of the photodiodes may block the light emitted by the light source, thereby reducing the amount of light that could illuminate the finger.
In certain embodiments, one or more of these photodiodes are replaced with an opening (or hole) in that space instead, so that the amount of light that is provided to the finger for fingerprint sensing may be increased. For example, the amount of light provided to a fingerprint sensing region may be doubled or further increased. Also, as a larger portion of the light is emitted through a small number of defined holes or aperture regions, a larger portion of the light may be provided to one or more well-defined locations. Additionally, in certain embodiments, wherein the photodiodes are disposed in a uniform array, the openings may also be disposed in a uniform manner, maintaining a uniform display for aesthetic purposes.
In certain embodiments, the detector includes one or more photodetectors (e.g., 1105, 1205) disposed within the plurality of pixel elements. In certain embodiments, the detector includes one or more photodetectors (e.g., 1105, 1205) disposed on, within, and/or under the same layer on which the sub-pixel elements are disposed.
In certain embodiments, where more than one aperture region may be incorporated in a display, the pitch of the aperture regions may be between about 50 μm and about 10 mm, depending on the display resolution and/or the desired application.
For device embodiments incorporating an array of multiple aperture regions and an underlying array of one or more light source elements, the entire display, or separate regions of the entire display, may be used to image objects proximal to the sensing surface of the display. For example, all light sources can be illuminated simultaneously, wherein the detector may detect an image of the entire illuminated area. Alternatively, separate light source elements may be activated in a sequence to capture various sub-images of the object(s) being imaged. U.S. patent application Ser. No. 16/006,639, filed Jun. 12, 2018, and titled “Systems And Methods For Optical Sensing Using Point-Based Illumination,” which is hereby incorporated by reference, discusses various useful features pertaining to the various embodiments herein, including for example combination (stitching together) of various images captured as light source elements are illuminated in a sequence to produce a larger image, as well as techniques for correcting or adjusting brightness of individual images of different portions of a finger.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein.
All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Certain embodiments of this invention are described herein. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the embodiments to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.