The invention relates generally to image sensors with enhanced optical imaging quality, and in particular to imaging with light intensity adjustment through an optical filter.
An image sensor or imager is a device that detects and conveys information used to make an image. It does so by converting the variable attenuation of light waves (as they pass through or reflect off objects) into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, digital imaging tends to replace analog imaging. When objects reflect and/or transmit lights with very large variations, conventional image sensors often suffer technical difficulty in the dynamic range of their response to brightness. For instance, lights from “very bright objects” will saturate the sensor whereas lights from “very dark objects” will underexpose the sensor. In either case, this problem leads to the loss of information.
An image sensor having a brightness self-adjust capability (ISWBSA) is provided. Such an image sensor system can include a light adjustment layer made of adaptive optical materials and a set of pixels. The image sensor system can also include a control circuit configured to generate control signals to control adjustment of one or more light transparency levels at the set of pixels on the light adjustment layer based on the light intensity distribution. The control signals can include a first control signal to control adjustment of a first light transparency level at a first pixel of the light adjustment layer. The image sensor system can further include a sensor array configured to receive lights and facilitate output of image and/or video signals based on the received lights.
For facilitating light adjustment, the control circuit can be further configured to obtain light intensity distribution information regarding light intensity distribution in a field of view (FOV) of the image sensor system; and control the light adjustment layer to adjust, for the sensor array, the lights through the set of pixels according to the light intensity distribution information. In some embodiments, the control circuit can obtain the light intensity distribution from the sensor array. In some embodiments, the control circuit can obtain the light intensity distribution from a pre-sensor included in the image sensor system.
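By way of a hedged illustration only (the function and names below are hypothetical and not elements of the figures), the control flow of obtaining an intensity distribution and deriving per-pixel transparency levels might be sketched as:

```python
def update_transparency(intensity_map, high_thresh):
    """Map a per-pixel intensity distribution to per-pixel transparency levels.

    Pixels brighter than high_thresh are attenuated so their adjusted
    intensity lands near the top of the sensor's linear response range;
    all other pixels are left fully transparent. (Illustrative sketch only.)
    """
    transparency = {}
    for pixel, intensity in intensity_map.items():
        if intensity > high_thresh:
            # Attenuate proportionally: adjusted intensity == high_thresh.
            transparency[pixel] = high_thresh / intensity
        else:
            transparency[pixel] = 1.0  # fully transparent, no adjustment
    return transparency
```

In such a sketch, the intensity map could come either from the sensor array itself (first pass) or from a pre-sensor, consistent with the embodiments above.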
In some embodiments, the light adjustment layer can be arranged at one of two positions including a first position and a second position. In those embodiments, at the first position, the light adjustment layer may not be operable to adjust the lights, and at the second position, the light adjustment layer may be operable to adjust the lights.
In some embodiments, the image sensor system may further include a beam-splitter configured to split the lights into different components before the lights hit the light adjustment layer. In those embodiments, the light adjustment layer is configured to receive and adjust one of the different components split by the beam-splitter.
Human eyes are known to be able to adaptively adjust the “effective sensitivity” of the retina via visual neurons. Therefore, humans can perceive objects under various light conditions. However, current photo image sensors are much more limited than human eyes in the dynamic range of their response to incident light intensity levels. For example, it has been a challenge for optical image sensors to produce quality images when there are brightness variations in different zones in the FOV of an optical image sensor. This drawback can result in an image being saturated in bright zones or having no response at all in dark zones when there are such brightness variations in the FOV. Either would lead to information loss in the image.
One motivation behind the present disclosure is to facilitate adjusting variations in brightness in the FOV of an image sensor of an imaging device such that intensities of incident lights from the FOV are adjusted when the image sensor receives them. That is, if intensities of lights in brighter zones in the FOV can be toned down relative to the rest of the light intensities in the FOV such that the overall brightness variation in the entire FOV is contained inside the linear response range of the sensor, the aforementioned imaging problem can be addressed.
However, a challenge for achieving this is form factor and cost. Naturally, a good solution should be simple to use and viable even for the ever-increasing casual digital imaging market. Another challenge is that brightness variations in the FOV are typically random. That is, locations of brighter zones in the FOV are not known until the lights hit the image sensor. While there are some existing solutions employing filtering of lights before they hit the image sensor, these solutions presume certain patterns of lights and thus may only work in a limited number of situations where brightness variations in the FOV are consistent with those patterns.
In accordance with the disclosure, embodiments provide an ISWBSA capable of facilitating dynamic adjustment of brightness variations in the lights from an optical field before outputting signals for imaging. For achieving this, an ISWBSA in accordance with the disclosure can include a smart optical filter (SOF) which, in some embodiments, can be either constructed together with the sensor chip with closed-loop feedback control micro-electronics or simply placed in front of the imaging shutter with a Left-Drop structure for ON/OFF purposes.
It should be understood the example shown in
A key to such dynamic light transparency level adjustment lies in an adaptive optical material employed by the SOF 100. Several adaptive optical materials such as opto-electrical crystals, dynamic optical polymers, and liquid crystals have optical properties such as transmittance, polarization, and phase that can be employed to adjust the transmittance level of lights passing through these materials. For example, liquid crystal, which has advantages in manufacturability, low driving voltages, and cost effectiveness, has been widely applied in the real-time display industry.
As shown, liquid crystal molecules in layer 202a can be structured between two transparent electrodes 202d and 202e, and the two polarizers 202b and 202c can be arranged such that their polarization axes are perpendicular to each other. The initial orientation of the liquid-crystal molecules at the two sides can be manipulated by mechanically rubbing polymer-coated surfaces. In a twisted nematic (TN) device, the surface alignment directions at the two sides are arranged perpendicular to each other, so that the liquid crystal molecules arrange themselves in a helical structure, or twist. As explained above, the orientation of the liquid crystal can be used to induce a rotation of the polarization of the incident light; thus the layer 202a in conjunction with the polarizers acts as an adjustable light transparency filter. That is, when the applied voltage is large enough, the liquid crystal molecules in the center of the layer are almost completely untwisted and the polarization of the incident light is not rotated as it passes through the liquid crystal layer—see for example
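As a first-order illustration of this behavior (a hypothetical sketch that ignores wavelength dependence, cell gap, and the exact electro-optic response; it is not the device of the figures), the transmittance of a layer between crossed polarizers can be approximated with a Malus-law term on the polarization rotation imparted by the liquid crystal:

```python
import math

def tn_transmittance(rotation_deg):
    """First-order transmittance through crossed polarizers when the
    liquid-crystal layer rotates the polarization by rotation_deg.

    At 90 degrees (fully twisted helix, no applied voltage) the rotated
    light passes the analyzer; at 0 degrees (untwisted, high voltage)
    it is blocked. Illustrative approximation only.
    """
    return math.sin(math.radians(rotation_deg)) ** 2
```

Intermediate voltages yield intermediate rotations and hence intermediate transmittance levels, which is what makes the layer usable as a continuously adjustable transparency filter.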
With general principles of the SOF in accordance with the disclosure having been described and illustrated, attention is now directed to
The sensor array circuit 306 can include one or more sensor arrays for receiving lights from the FOV and for facilitating imaging with the received lights. As shown, with the ISWBSA 300, lights 302 can first be received by and pass through the SOF 304 (a first pass), and hit the sensor array(s) on sensor array circuit 306. As shown, the control circuit 308 can be configured to detect whether the lights hitting the sensor array(s) on sensor array circuit 306 have already been adjusted by SOF 304 (i.e., whether it is the first pass). In the first pass, pixels on the SOF 304 may be in a state as shown in
In this implementation, if the control circuit 308 determines the lights have not already been adjusted by SOF 304 (i.e., it is the first pass), control circuit 308 may be configured to control the SOF 304 to adjust the lights according to the general principles described and illustrated in
In implementations, after detecting it is the first pass, the control circuit 308 can be configured to control adjustment of the light transparency level at various pixels on the SOF 304 based on the light intensity distribution detected by sensor array circuit 306. By way of illustration, the control circuit 308 may be configured with various thresholds corresponding to different light transparency levels for such adjustment. For example, transparency level adjustment may be instigated through a pixel on the SOF 304 based on the intensity value of the light corresponding to the pixel. In one embodiment, it is contemplated that the detected light intensity value at a given spot in the FOV is compared to one or more thresholds to determine a difference value with respect to the one or more thresholds. In that embodiment, the control circuit 308 may be configured to generate a control signal to adjust (e.g., smooth) the difference value. The control signal may include information indicating a location of the pixel on the light adjustment layer of the SOF 304 and one or more instructions for adjusting the difference value.
After adjusting the SOF 304, the control circuit 308 can then change the value to 1. When the lights 302 hit the sensor array(s) on the sensor array circuit 306 again, the control circuit 308 can determine that the lights have already been adjusted by SOF 304 by reading the value 1. The control circuit 308 may then be configured to facilitate the sensor array circuit 306 to output the image and/or video signals from the sensor array circuit 306.
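The two-pass flow described above can be sketched as a simple state machine (a hypothetical illustration: the class and method names below are not the circuit of the figures, and the flag value mirrors the 0/1 value mentioned above):

```python
class PassController:
    """Minimal sketch of the two-pass control flow.

    A flag distinguishes the first pass (lights not yet adjusted by the
    filter) from the second pass (adjusted lights, ready for output).
    """

    def __init__(self):
        self.adjusted = 0  # 0 = first pass, 1 = lights already adjusted

    def on_lights_received(self, intensity_map, light_filter, sensor):
        if self.adjusted == 0:
            # First pass: drive the filter, then wait for the second pass.
            light_filter.adjust(intensity_map)
            self.adjusted = 1
            return None
        # Second pass: the adjusted lights can be output as an image.
        self.adjusted = 0  # reset for the next frame
        return sensor.read_out()
```

Here `light_filter` and `sensor` stand in for the smart optical filter and the sensor array circuit, respectively, under the stated assumptions.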
As shown in
As shown in
As shown, in this example, the brightness of both the p and s components is adjusted by the corresponding SOF and reflected back to the PBS with a 90-degree rotation relative to their input polarization status. The states of both the p and s components are thus reversed as they come back from their corresponding SOFs. The lights from the two different directions are re-combined but output toward the downward direction where the image sensor 606 is mounted. Please refer to FIGS. 3A and 3B for example implementations for achieving the light adjustments using the SOF 604 and SOF 606 to adjust the brightness of the lights in accordance with the disclosure.
In some embodiments, method 800 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 800 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 800.
At an operation 802, incident lights in an optical field may be received. In some embodiments, the operation 802 may involve splitting the lights into multiple components such as those shown in
At an operation 804, it can be determined that the lights are unadjusted when hitting the sensor array. In various implementations, operations performed at 804 can be implemented by a control circuit the same as or substantially similar to the control circuit 308 as described and illustrated herein.
At an operation 806, intensity distribution for the lights received at 802 may be obtained. This may involve obtaining intensity values for the lights received at 802. In various implementations, operations performed at 806 can be implemented by a control circuit using a pre-sensor and/or a sensor array circuit the same as or substantially similar to the control circuit 308, pre-sensor 302 and/or sensor array circuit 306 as described and illustrated herein.
At an operation 808, adjustment of light intensities in the optical field may be determined for one or more zones in the optical field. As described and illustrated herein, light intensities in the optical field may vary in certain situations, which can lead to information loss in a final image capturing one or more objects in the optical field. In some implementations, one or more thresholds for light intensities can be predetermined and stored. The light intensity values generated/detected at 806 in those implementations can be compared with the one or more thresholds to determine respective difference values. These difference values can then be processed to determine amounts of adjustment for “smoothing”/“neutralizing” the light intensity differences reflected by the difference values. In some implementations, operation 808 may be performed by a control circuit the same as or substantially similar to the control circuit 308 illustrated and described herein.
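The threshold comparison at operation 808 might be sketched as follows (a hedged illustration: the zone names, threshold list, and function below are hypothetical, and the choice of the highest exceeded threshold is one of several plausible policies, not one the disclosure mandates):

```python
def adjustment_amounts(intensity_values, thresholds):
    """Compare detected per-zone intensities against predetermined
    thresholds and return the per-zone excess to be smoothed out.

    For each zone, the difference with respect to the highest exceeded
    threshold is taken as the amount of attenuation to apply; zones
    within range need no adjustment. Illustrative sketch only.
    """
    amounts = {}
    for zone, value in intensity_values.items():
        exceeded = [t for t in thresholds if value > t]
        if exceeded:
            amounts[zone] = value - max(exceeded)  # excess brightness
        else:
            amounts[zone] = 0  # within linear response range
    return amounts
```

The resulting per-zone amounts would then drive the control signals generated at operation 810.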
At an operation 810, one or more control signals may be generated to adjust the light intensities from the optical field based on the adjustment determined at 808. As described and illustrated herein, a smart optical filter in accordance with the disclosure can be employed to achieve such adjustment. The smart optical filter can comprise a light adjustment layer of optical material such as liquid crystal molecules. The light adjustment layer may be divided into pixels corresponding to different zones in the optical field. An example of the light adjustment layer is provided in
At an operation 812, the adjustment of the light intensities in the optical field can be effectuated through the light adjustment layer in accordance with the one or more control signals generated at 810. In some implementations, operation 812 may be performed by a driving circuit the same as or substantially similar to the driving circuit 304 illustrated and described herein.
At an operation 814, it can be determined that lights are adjusted by the light adjustment layer at 812. In various implementations, operations performed at 814 can be implemented by a control circuit the same as or substantially similar to the control circuit 308 as described and illustrated herein.
At an operation 816, a control signal can be generated to facilitate image/video signal output. In various implementations, operations performed at 816 can be implemented by a control circuit the same as or substantially similar to the control circuit 308 as described and illustrated herein.
Specific details are given in the description to provide a thorough understanding of exemplary configurations including implementations. However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
Also, configurations may be described as a process which is depicted as a schematic flowchart or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer-readable medium such as a storage medium. Processors may perform the described tasks.
Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the technology. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not bind the scope of the claims.
As used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Thus, for example, reference to “a user” includes a plurality of such users, and reference to “the processor” includes reference to one or more processors and equivalents thereof known to those skilled in the art, and so forth.
Also, the words “comprise”, “comprising”, “contains”, “containing”, “include”, “including”, and “includes”, when used in this specification and in the following claims, are intended to specify the presence of stated features, integers, components, or steps, but they do not preclude the presence or addition of one or more other features, integers, components, steps, acts, or groups.