The present disclosure relates to cameras having an optical channel that includes spatially separated sensors for sensing different parts of the optical spectrum.
A recent development in camera and sensor technologies, including those for consumer-level photography, is the ability of sensors to record both IR and color (e.g., RGB) images. Various techniques can be used for joint IR and color imaging. One approach is to swap color filters on a camera that is sensitive to IR. Taking sequential images after swapping filters, however, can present challenges when imaging moving objects. Another approach is to use one camera dedicated to IR imaging and another camera for color imaging. Using two cameras, however, can result in higher costs, a larger overall footprint, and/or misalignment of the IR and color images.
The present disclosure describes cameras having an optical channel that includes spatially separated sensors for sensing different parts of the optical spectrum.
For example, in one aspect, an apparatus includes an image sensor module having an optical channel and including a multitude of spatially separated sensors to receive optical signals in the optical channel. The multitude of spatially separated sensors includes a first sensor operable to sense optical signals in a first spectral range, and a second sensor spatially separated from the first sensor and operable to sense optical signals in a second spectral range different from the first spectral range.
Some implementations include one or more of the following features. For example, in some cases, the first spectral range is in a part of the spectrum visible to humans, and the second spectral range is in an infra-red part of the spectrum. Thus, the first spectral range can be in an RGB part of the spectrum.
In some instances, an optical assembly is disposed over the spatially separated sensors, wherein the optical assembly has a circular cross-section in a plane parallel to an image plane of the image sensor module. Further, in some implementations, the first sensor is a rectangular array of pixels. The second sensor also can be a rectangular array of pixels. In some cases, a third sensor is spatially separated from the first and second sensors and is operable to sense optical signals in the second spectral range. The third sensor also can be a rectangular array of pixels. In some cases, the first sensor is larger than each of the second and third sensors (e.g., a pixel array that consumes more surface area). The second sensor can be located, for example, at one side of the first sensor, and the third sensor can be located at an opposite side of the first sensor.
In some implementations, a transparent cover is disposed between the optical assembly and the sensors, wherein the transparent cover has a first thickness directly over the first sensor and a second different thickness directly over the other sensor(s).
The image sensor module can be integrated, for example, into a host device that includes a display screen. The apparatus further can include a readout circuit, and one or more processors operable to generate an image for display on the display screen based on output signals from pixels in the first sensor when the host device is in a first orientation, and to perform iris recognition based on output signals from pixels in one of the other sensor(s) when the host device is in a second orientation.
Another aspect describes a method performed by an apparatus such as those mentioned above. The method includes receiving a user input indicative of a request to acquire image data using the image sensor module. In response to receiving the user input, an image is generated and displayed on a display screen based on output signals from pixels in the first sensor if the host device is in a first orientation. On the other hand, if the host device is in a second orientation, iris recognition of the user is performed based on output signals from pixels in the second sensor.
In some cases, the method further includes displaying, on the display screen, an image based on the output signals from the pixels in the second sensor if the host device is in the second orientation. In accordance with some implementations, in the first orientation, the apparatus is oriented in a portrait format, and in the second orientation, the apparatus is oriented in a landscape format. The first sensor can be used, for example, to sense radiation in a part of the spectrum visible to humans, and the second sensor can be used, for example, to sense radiation in the infra-red part of the spectrum.
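To make the orientation-dependent behavior described above concrete, the following minimal Python sketch shows one hypothetical way a host device might route a capture request to the appropriate sensor and processing path; the class and method names (e.g., read_pixels, show, match) are illustrative assumptions rather than any particular device API.

```python
from enum import Enum

class Orientation(Enum):
    PORTRAIT = "portrait"    # first orientation: color imaging
    LANDSCAPE = "landscape"  # second orientation: iris recognition

def handle_capture_request(orientation, rgb_sensor, ir_sensor, display, iris_matcher):
    """Route a capture request based on the host-device orientation.

    Hypothetical sketch: rgb_sensor / ir_sensor are assumed to expose a
    read_pixels() method, display a show() method, and iris_matcher a
    match() method; none of these reflect a real driver API.
    """
    if orientation is Orientation.PORTRAIT:
        # First orientation: generate a color image from the RGB sensor (103A)
        # and present it on the display screen.
        frame = rgb_sensor.read_pixels()
        display.show(frame)
        return {"mode": "color_preview", "frame": frame}
    # Second orientation: acquire an IR frame from an IR sensor (103B),
    # run iris recognition, and optionally mirror the IR image to the
    # display so the user can position the camera in front of the eye.
    ir_frame = ir_sensor.read_pixels()
    display.show(ir_frame)
    return {"mode": "iris_recognition", "match": iris_matcher.match(ir_frame)}
```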
In some implementations, the apparatus further includes an eye illumination source operable to illuminate a subject's eye with IR radiation. In some instances, the eye illumination source is operable to emit modulated IR radiation, for example, toward a subject's face. The apparatus can include a depth sensor (e.g., an optical time-of-flight sensor) operable to detect optical signals indicative of distance to the subject's eye and to demodulate the detected optical signals. The one or more processors can be configured to generate depth data based on signals from the depth sensor. In some cases, the one or more processors are configured to perform eye tracking based on the depth data.
Providing spatially separated sensors for sensing different parts of the optical spectrum (e.g., RGB and IR) in the same optical channel can be advantageous in some cases. For example, manufacturing costs can be reduced because the same optical assembly is used for signals in both parts of the spectrum. The arrangements described here also can allow areas of the image plane to be used more efficiently. In particular, areas of the image plane that otherwise would be unused can be used, e.g., for the IR sensors without increasing the overall footprint of the module. Some implementations can make it easier for a user to use a camera module in a host device for multiple applications, such as capturing and displaying a color image as well as performing iris recognition. In some cases, a host device into which the camera module is integrated is more aesthetically pleasing because fewer holes are needed in the exterior surface of the host device.
Other aspects, features and advantages will be readily apparent from the following detailed description, the accompanying drawings, and the claims.
As illustrated in the figures, a camera module 100 includes an image sensor 102 having multiple spatially separated sensors for sensing different parts of the optical spectrum within a single optical channel.
In the illustrated example, an optical assembly, including a stack 106 of one or more optical beam shaping elements such as lenses 108, is disposed over the image sensor 102. The lenses 108 can be disposed, for example, within a circular lens barrel 114 that is supported by a transparent cover 110 (e.g., a cover glass), which in turn is supported by one or more vertical spacers 112 separating the image sensor 102 from the transparent cover 110. The vertical spacers 112 can rest directly (i.e., without adhesive) on a non-active surface of the image sensor 102. The vertical spacers 112 can thus help establish a focal length for the optical assembly 106 and/or correct for tilt.
As illustrated in the example, the image sensor 102 includes spatially separated sensors that receive light through the same optical channel: an RGB sensor 103A and IR sensors 103B, all located within an image circle 105 defined by the optical assembly. Horizontal spacers 116 laterally surround the transparent cover 110, and outer walls 118 form the sides of the module.
In some cases, the cover 110 is composed of glass or another inorganic material, such as sapphire, that is transparent to wavelengths detectable by the image sensor 102. The vertical and horizontal spacers 112, 116 can be composed, for example, of a material that is substantially opaque to the wavelength(s) of light detectable by the image sensor 102. The spacers 112, 116 can be formed, for example, by a vacuum injection technique followed by curing. Embedding the side edges of the transparent cover 110 in the opaque material of the horizontal spacers 116 can be useful in preventing stray light from impinging on the image sensor 102. The outer walls 118 can be formed, for example, by a dam and fill process.
In the illustrated example, the RGB sensor 103A is a rectangular-shaped array of 2560×1920 pixels (i.e., about 5 Mpix) at or near the center of the image circle 105, whereas each IR sensor 103B is a rectangular-shaped array of 640×480 pixels located closer to the periphery of the image circle. In particular, each IR sensor 103B is located adjacent a longer edge of the RGB sensor 103A, with the longer edges of the IR sensors 103B parallel to the longer edges of the RGB sensor 103A. Such an arrangement can make use of space within the image circle 105 that would remain unused if only the rectangular-shaped RGB sensor 103A were included. In some implementations, color filters are disposed over the sensor 103A to selectively allow wavelengths in the visible part of the spectrum to pass, but to block or significantly attenuate IR radiation. On the other hand, IR pass filters can be provided over the other sensors 103B.
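As a rough check that such a layout indeed reuses otherwise wasted area of the image circle, the short calculation below assumes a hypothetical pixel pitch and inter-array gap (neither is specified in this disclosure) and compares the radius needed for the RGB array alone with the radius needed to reach the far corners of the IR arrays.

```python
import math

# Illustrative assumptions (not values from the disclosure):
PIXEL_PITCH_MM = 0.0014   # 1.4 um pixel pitch for all three arrays
GAP_MM = 0.10             # gap between the RGB array and each IR array

# RGB sensor 103A: 2560 x 1920 pixels, centered in the image circle 105.
rgb_w = 2560 * PIXEL_PITCH_MM
rgb_h = 1920 * PIXEL_PITCH_MM

# IR sensors 103B: 640 x 480 pixels, one adjacent each longer RGB edge, with
# their longer (640-pixel) edges parallel to the RGB sensor's longer edges.
ir_w = 640 * PIXEL_PITCH_MM
ir_h = 480 * PIXEL_PITCH_MM

# Radius the image circle needs to cover the RGB sensor alone (half-diagonal).
r_rgb = 0.5 * math.hypot(rgb_w, rgb_h)

# Radius needed to reach the far corner of an IR sensor placed just outside
# one of the longer edges of the RGB sensor.
r_ir = math.hypot(0.5 * ir_w, 0.5 * rgb_h + GAP_MM + ir_h)

print(f"half-diagonal of RGB array:   {r_rgb:.3f} mm")
print(f"far corner of each IR array:  {r_ir:.3f} mm")
print("IR arrays fit inside the circle already required by the RGB array:",
      r_ir <= r_rgb)
```

With these assumed numbers, the IR arrays occupy regions of the image circle that the rectangular RGB array leaves unused, consistent with the footprint argument above.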
In some implementations, the size, shape, or location of the sensors may differ from the foregoing example. Likewise, although the illustrated example is designed with RGB and IR sensors 103A, 103B, in other instances, the spatially separated sensors may be sensitive to other spectral ranges that differ from one another.
The sensors 103A, 103B can be implemented, for example, as CCDs or photodiodes. The RGB and IR sensors 103A, 103B can be implemented as devices formed in the same or different semiconductor or other materials. For example, in some instances, different semiconductor or other materials that maximize sensitivity to the respective wavelengths of interest can be used. Thus, a material that is particularly sensitive to radiation in the visible part of the spectrum can be used for the sensor 103A, and a different material that is particularly sensitive to IR radiation can be used for the sensors 103B. The spatially separated RGB and IR sensors 103A, 103B can be implemented, for example, in different integrated circuit chips from one another.
To provide for different focal lengths of the lenses 108 with respect to the different sensors 103A and 103B, the thickness of the transparent cover 110 can vary across its diameter. For example, in some instances, the region 110A of the transparent cover 110 directly over the RGB sensor 103A can be thicker than the regions 110B directly over the IR sensors 103B. More generally, the thickness of one part of the transparent cover 110 over an active area of the image sensor 102 may differ from its thickness over another active area of the image sensor, depending on the different spectral ranges the sensors are designed to detect.
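One way to see why a thickness difference in the cover can serve this purpose is the standard result that a plane-parallel plate of refractive index n and thickness t displaces the focal plane by approximately t(1 − 1/n). The sketch below works through hypothetical numbers; the index value, the amount of chromatic defocus, and the direction of the correction are assumptions for illustration only.

```python
def focus_shift_mm(thickness_mm, n):
    """Longitudinal displacement of the focal plane caused by inserting a
    plane-parallel transparent plate of the given thickness and index."""
    return thickness_mm * (1.0 - 1.0 / n)

# Illustrative assumptions (not values from the disclosure):
n_cover = 1.52            # refractive index of the transparent cover
chromatic_defocus = 0.02  # mm by which the uncorrected IR focus is assumed to
                          # lie behind the visible focus for the shared lenses

# Extra thickness needed over the RGB region (110A), relative to the IR
# regions (110B), so each sensor sits at its own best focus.
extra_t = chromatic_defocus / (1.0 - 1.0 / n_cover)
print(f"extra cover thickness over the RGB sensor: {extra_t:.3f} mm")
print(f"focus shift produced by that extra glass:  {focus_shift_mm(extra_t, n_cover):.3f} mm")
```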
Providing spatially separated sensors in the same optical channel, where the sensors are sensitive, respectively, to different spectral ranges, can be advantageous. First, using the same optical assembly for both the RGB and IR pixels can reduce the number of optical assemblies that otherwise would be needed. Further, the overall footprint of the module can be kept relatively small since separate channels are not needed for sensing the color and IR radiation. At the same time, an image circle of a given size can be used more efficiently by including multiple spatially separated sensors.
In some instances, the module 100 is operable for iris recognition or other biometric identification. Iris recognition is a process of recognizing a person by analyzing the random pattern of the iris. In such implementations, as illustrated, the camera module 100 can be integrated into a host device 200 (e.g., a smart phone) that includes a display screen 204.
As further illustrated, the host device 200 or the module 100 itself can include an eye illumination source 130 operable to illuminate a subject's eye with IR radiation for iris recognition.
The manner in which the image sensor 102 is used can depend on the orientation of the host device 200. In some instances, when the smart phone 200 is in a vertical orientation for portrait-format imaging, output signals from the pixels of the RGB sensor 103A are used to generate a color image for display on the display screen 204.
On the other hand, when the smart phone 200 is rotated to a horizontal orientation for landscape-format imaging, output signals from the pixels of an IR sensor 103B are used to acquire an IR image 202 of the user's eye, and iris recognition is performed based on that image.
In some applications, iris recognition can be performed as follows. Upon imaging an iris, a 2D Gabor wavelet filter maps segments of the iris into phasors (vectors). These phasors include information on the orientation, spatial frequency, and position of those segments. This information is used to form iris codes, which describe the iris pattern using the phase information collected in the phasors. The phase is not affected by contrast, camera gain, or illumination levels. The phase characteristic of an iris can be described, for example, using 256 bytes of data in a polar coordinate system. The description of the iris also can include control bytes that are used to exclude eyelashes, reflections, and other unwanted data. To perform the recognition, two codes are compared. The difference between the two codes (i.e., the Hamming distance) is used as a test of statistical independence between them. If the Hamming distance indicates that fewer than one-third of the bits in the codes differ, the codes fail the test of statistical independence, indicating that they are from the same iris. Different iris recognition algorithms can be used in other implementations.
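The comparison step described above can be sketched as follows, treating two iris codes as fixed-length bit arrays accompanied by mask bits that exclude eyelashes, reflections, and other unwanted data; the code length, mask handling, and one-third threshold follow the description above, while the function names, data layout, and toy data are assumptions.

```python
import numpy as np

CODE_BITS = 2048  # 256 bytes of phase data, as described above

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of usable bits on which two iris codes disagree.

    The codes and masks are boolean arrays of length CODE_BITS; mask bits
    flag data corrupted by eyelashes, reflections, or other unwanted data.
    """
    usable = mask_a & mask_b
    disagreements = (code_a ^ code_b) & usable
    return disagreements.sum() / max(int(usable.sum()), 1)

def same_iris(code_a, code_b, mask_a, mask_b, threshold=1 / 3):
    # A distance below the threshold fails the test of statistical
    # independence, i.e. the two codes are deemed to come from the same iris.
    return hamming_distance(code_a, code_b, mask_a, mask_b) < threshold

# Toy usage with synthetic codes (real codes come from Gabor phase quantization):
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, CODE_BITS).astype(bool)
probe = enrolled.copy()
probe[:200] ^= True                       # flip ~10% of bits to mimic noise
mask = np.ones(CODE_BITS, dtype=bool)
print(same_iris(enrolled, probe, mask, mask))  # True: distance ~0.1 < 1/3
```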
The IR image 202 captured by the IR sensor 103B of the image sensor 102 in the camera module 100 also can be displayed, for example, on the display screen 204 of the host device 200, which can help the user determine whether the camera module 100 is properly positioned in front of the user's face.
Although some implementations of the module 100 may include only a single IR sensor 103B, it can be advantageous in some cases to provide two IR sensors 103B located near the periphery of the image circle 105 on opposite sides of the RGB sensor 103A. With an IR sensor 103B on each side of the RGB sensor 103A, an IR image of the eye can be acquired regardless of the direction in which the host device 200 is rotated into the landscape orientation.
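Assuming the two IR sensors 103B are indeed used to cover both landscape rotations, the selection step might look like the following minimal sketch; the sensor names and the rotation convention are purely hypothetical.

```python
def select_ir_sensor(rotation_deg):
    """Choose which of the two IR sensors 103B to use for iris capture,
    based on which way the host device was rotated into landscape.

    Purely hypothetical convention: "ir_upper" / "ir_lower" name the arrays
    on opposite sides of the RGB sensor, and 90 / 270 degrees are the two
    landscape rotations reported by an assumed orientation service.
    """
    if rotation_deg == 90:
        return "ir_upper"   # this array now faces the top edge of the device
    if rotation_deg == 270:
        return "ir_lower"   # the opposite array now faces the top edge
    raise ValueError("device is not in a landscape orientation")
```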
As noted above, the host device 200 or the module 100 itself can include an IR eye illumination source 130. In some implementations, the eye illumination source 130 is operable to emit modulated IR radiation (e.g., for time-of-flight (TOF)-based configurations). In such implementations, an optical time-of-flight (TOF) sensor 132 can detect optical signals indicative of the distance to the subject's eye and demodulate the detected signals to generate depth data.
In some instances, iris recognition (based on signals from the IR sensor 103B) can be combined with other applications, such as eye tracking or gaze tracking. Eye tracking refers to the process of determining eye movement and/or gaze point and is widely used, for example, in psychology and neuroscience, medical diagnosis, marketing, product and/or user interface design, and human-computer interactions. In such implementations, the eye illumination source 130 is operable to emit homogeneous IR illumination toward a subject's face (including the subject's eye), and the illumination can be modulated, for example, at a relatively high frequency (e.g., 10-100 MHz). A depth sensor such as a time-of-flight (TOF) sensor 132 detects optical signals indicative of distance to the subject's eye, demodulates the acquired signals, and generates depth data. Thus, in such implementations, the TOF sensor 132 can provide depth sensing capability for eye tracking. In such implementations, operation of both the image sensor 102 and the TOF sensor 132 should be synchronized with the eye illumination source 130 such that their integration timings are correlated to the timing of the eye illumination source. Further, the optical axes of the eye illumination source 130 and the image sensor 102 (which includes the IR sensors 103B) should be positioned such that there is an angle of no less than about five degrees between them. Under such conditions, the pupil of the subject's eye appears as a black circle or ellipse in the image of the eye acquired by the IR sensor 103B. This arrangement also can help reduce the impact of specular reflections from spectacles or contact lenses worn by the subject.
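For illustration, the depth computation performed by such a TOF sensor can be sketched with a common four-sample (4-tap) continuous-wave demodulation scheme; the sampling convention and numeric values below are assumptions, and real TOF sensors typically implement these steps in dedicated hardware.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(a0, a1, a2, a3, f_mod_hz):
    """Distance from four correlation samples of a continuous-wave TOF pixel.

    a0..a3 are samples taken at 0, 90, 180 and 270 degrees of the modulation
    period (a common 4-tap scheme); f_mod_hz is the modulation frequency of
    the eye illumination source (e.g., somewhere in the 10-100 MHz range).
    """
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)  # echo phase delay
    return C * phase / (4 * math.pi * f_mod_hz)           # round trip -> one-way distance

# Toy example: synthesize the four samples for a 0.30 m eye-to-camera distance
# at 20 MHz modulation and recover the distance.
f_mod = 20e6
true_phase = 4 * math.pi * f_mod * 0.30 / C
samples = [math.cos(true_phase - k * math.pi / 2) for k in range(4)]
print(f"recovered depth: {tof_depth_m(*samples, f_mod):.3f} m")  # ~0.300
```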
The module 100, as well as the illumination source 130 and depth sensor 132, can be mounted, for example, on the same or different PCBs within a host device.
Various modifications can be made within the spirit of this disclosure. Accordingly, other implementations are within the scope of the claims.
The present application claims the benefit of U.S. Provisional Patent Application No. 62/143,325, filed on Apr. 6, 2015. The contents of the earlier application are incorporated herein by reference in their entirety.