One disclosed aspect of the embodiments relates to an imaging apparatus and a control method thereof.
It is known that a dynamic range of an image sensor used for a digital camera is generally smaller than a dynamic range of the natural world. Therefore, conventionally, a method for expanding the dynamic range of the image sensor has been discussed.
Respective techniques discussed in the documents of conventional art have the following issues.
With a method discussed in Japanese Patent Application Laid-Open No. 2010-136205, it is not possible to obtain a dynamic range that exceeds the range of the exposure time (e.g., 1/30 sec. to 1/61440 sec.). Herein, the dynamic range obtained within a range of exposure time corresponds to the number of gradations given by the ratio between the maximum value and the minimum value of the exposure time.
With a method discussed in Japanese Patent Application Laid-Open No. 2009-303010, the image sensor needs to have a plurality of analog-to-digital (A/D) conversion units in parallel, or an A/D conversion unit needs to be driven at high speed through a time division method.
According to an aspect of the embodiments, an imaging apparatus includes an exposure time control unit, a gain control unit, and a synchronization control unit. The exposure time control unit is configured to control exposure time of each of divided areas of an imaging area of an image sensor that is configured to convert light into an electric charge and to store the electric charge. The gain control unit is configured to control an analog gain of an output of each of the areas of the image sensor when an analog-to-digital conversion is executed on the output of each of the areas of the image sensor. The synchronization control unit is configured to synchronize and control the exposure time control unit and the gain control unit.
Further features of the disclosure will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Thus, the disclosure is directed to a technique of expanding a dynamic range which uses only one A/D conversion unit and does not need to drive the A/D conversion unit at high speed. Hereinafter, exemplary embodiments of the disclosure will be described in detail with reference to the appended drawings. Configurations described in the following exemplary embodiment are merely examples, and the disclosure is not limited to the configurations illustrated in the appended drawings. The same reference numerals are applied to similar constituent elements or similar processing.
First, an outline of each constituent element of the imaging apparatus 100 in
In the present exemplary embodiment, an imaging area of the image sensor unit 103 is divided into a plurality of areas, and can be driven for each of the divided areas. Then, the image sensor circuit or unit 103 has a function of executing exposure operation (i.e., a function of storing electric charge) during the exposure time different for each of the areas. In the present exemplary embodiment, the exposure time is set for each area by an area-by-area exposure control signal 117 supplied from an exposure time control unit 109 described below, and the image sensor unit 103 executes exposure with the exposure time set for each of the areas. Then, the image sensor unit 103 reads the electric charges stored in the pixels during the exposure time set for each area by the area-by-area exposure control signal 117 as a pixel potential 118, and outputs the read pixel potential 118 to an analog-to-digital (A/D) conversion circuit or unit 104. Exposure of respective areas and reading of the pixel potential 118 in the image sensor unit 103 will be described below in detail.
The A/D conversion unit 104 converts the pixel potential 118 read from the image sensor unit 103 into a digital value through analog-to-digital conversion. While details will be described below, in the present exemplary embodiment, an analog gain corresponding to each of the areas (hereinafter, called "area-by-area analog gain 121") is set to the A/D conversion unit 104 by a gain control unit 110. After applying the area-by-area analog gain 121 to the pixel potential 118 received from the image sensor unit 103, the A/D conversion unit 104 converts the resultant pixel potential 118 into a digital value through analog-to-digital conversion. Hereinafter, an image composed of a digital signal obtained by subjecting the pixel potential 118 to the analog-to-digital conversion after applying the area-by-area analog gain 121 is called an area-by-area exposure image 122. The area-by-area exposure image 122 output from the A/D conversion unit 104 is transmitted to an exposure condition calculation unit 111 and an exposure correction unit 105.
In order to set an optimum imaging condition, the exposure condition calculation unit 111 calculates and updates an area-by-area exposure time 112 and an area-by-area analog gain value 113 based on the area-by-area exposure image 122. Then, a value of the area-by-area exposure time 112 is transmitted to the exposure time control unit 109, and the area-by-area analog gain value 113 is transmitted to the gain control unit 110. The calculation processing of the area-by-area exposure time 112 and the area-by-area analog gain value 113 executed by the exposure condition calculation unit 111 will be described below in detail.
A synchronization control circuit or unit 101 generates an exposure time output pulse 120 and a gain output pulse 114 synchronized with each other, outputs the exposure time output pulse 120 to the exposure time control unit 109, and outputs the gain output pulse 114 to the gain control unit 110. The synchronization control unit 101, the exposure time output pulse 120, and the gain output pulse 114 will be described below in detail.
Based on the values of the exposure time output pulse 120 and the area-by-area exposure time 112, the exposure time control unit 109 generates the area-by-area exposure control signal 117 for setting the exposure time for each of the areas of the image sensor unit 103, and outputs the area-by-area exposure control signal 117 to the image sensor unit 103. In this way, the exposure time of each area based on the area-by-area exposure time 112 is set to the image sensor unit 103.
Based on the gain output pulse 114 and the area-by-area analog gain value 113, the gain control unit 110 generates the area-by-area analog gain 121 for the pixel potential 118 of each of the areas of the image sensor unit 103, and outputs the area-by-area analog gain 121 to the A/D conversion unit 104. In this way, the A/D conversion unit 104 executes analog-to-digital conversion after applying the area-by-area analog gain 121 corresponding to the pixel potential 118 of each area. The analog gain generation processing executed by the gain control unit 110 will be described below in detail.
The exposure correction unit 105 generates a gradation expanded image 123 by executing gradation expansion processing on the area-by-area exposure image 122 transmitted from the A/D conversion unit 104 based on the area-by-area exposure time 112 and the area-by-area analog gain value 113. While details will be described below, the exposure correction unit 105 generates the gradation expanded image 123 expressed in 17 bits through the gradation expansion processing executed on the area-by-area exposure image 122 expressed in 10 bits. Then, the gradation expanded image 123 is transmitted to a gradation conversion unit 106.
The gradation conversion unit 106 executes gradation conversion on the gradation expanded image 123 and outputs a gradation converted image 124 to a gap correction unit 107. In the present exemplary embodiment, gradation conversion is processing of generating an 11-bit gradation converted image 124 through gamma conversion of the 17-bit gradation expanded image 123. In addition, the gradation conversion processing according to the present exemplary embodiment is executed in order to reduce a data rate in the latter part of the processing.
The gap correction unit 107 executes boundary gap correction processing on the gradation converted image 124 in order to reduce a gap (also called “boundary gap”) between the pixel values, which is likely to occur in a boundary between the areas because of the change of the exposure time and the analog gain of each area described above. In the present exemplary embodiment, the boundary gap correction processing is smoothing processing such as filter processing using a low-pass filter, which smooths a gap occurring in the boundary between the areas. A boundary gap corrected image 125 output from the gap correction unit 107 is transmitted to an image output unit 108.
The image output unit 108 outputs the boundary gap corrected image 125 to a latter-part constituent element of the imaging apparatus 100 or an external unit.
An imaging area of the image sensor unit 103 includes a plurality of pixel blocks 201, and each of the pixel blocks 201 includes a plurality of pixels 202. In the example according to the present exemplary embodiment, the number of pixels in a width 206 direction (horizontal line direction) of the imaging area of the image sensor unit 103 is 2000, and the number of pixels in a height 205 direction thereof is 1000 (i.e., the number of horizontal lines in a vertical direction is 1000). Further, the number of pixels in a width 204 direction (horizontal line direction) of the pixel block 201 is 100, and the number of pixels in a height 203 direction thereof is 100 (i.e., the number of horizontal lines in a vertical direction is 100). In this case, the number of pixel blocks 201 in the imaging area of the image sensor unit 103 is 20 in the horizontal direction and 10 in the vertical direction. Further, "pixel blocks [0,0] to [19,9]" described in the respective pixel blocks 201 in
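The block geometry above can be checked with a few lines of arithmetic; this is a minimal sketch using only the dimensions given in the present embodiment:

```python
# Imaging-area and pixel-block dimensions from the present embodiment.
sensor_width, sensor_height = 2000, 1000  # pixels
block_width, block_height = 100, 100      # pixels per pixel block 201

# Number of pixel blocks in each direction: 20 x 10,
# i.e., pixel blocks [0,0] to [19,9].
blocks_horizontal = sensor_width // block_width
blocks_vertical = sensor_height // block_height
```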
Then, in the present exemplary embodiment, each of the pixel blocks 201 is configured as a unit of which the exposure time and the analog gain can be controlled.
In this case, the exposure time corresponds to a time during which electric charge is stored in the pixel (light receiving element) of the image sensor unit 103 when imaging is executed. Thus, in a case where the amount of light incident on the image sensor unit 103 is the same and the pixel is not saturated, the pixel potential 118 becomes higher as the exposure time becomes longer, i.e., a brighter image can be captured. In other words, when images are captured under the same amount of incident light with different exposure times, 1/480 sec. and 1/30 sec., for example, an image captured at the exposure time of 1/30 sec. will be brighter if saturation of the pixel is not taken into consideration.
When imaging is executed, the A/D conversion unit 104 applies the analog gain to the pixel potential 118. Thus, a digital pixel value output from the A/D conversion unit 104 (i.e., the digital value obtained by analog-to-digital conversion executed after application of the gain) becomes greater as the analog gain value is greater.
A configuration and operation of the imaging apparatus 100 according to the present exemplary embodiment will be described in detail referring back to
The image sensor unit 103 executes imaging in a state where the exposure time is controlled in a unit of the pixel block 201 based on the area-by-area exposure control signal 117. Then, the image sensor unit 103 outputs the pixel potential 118 for each pixel depending on the electric charge stored therein.
The A/D conversion unit 104 applies the area-by-area analog gain 121 set for each of the pixel blocks 201 of the image sensor unit 103 to the pixel potential 118 output from the image sensor unit 103. Thereafter, the A/D conversion unit 104 executes digital conversion of the pixel potential 118 and outputs the area-by-area exposure image 122. In the present exemplary embodiment, the area-by-area exposure image 122 is a 10-bit digital value. Further, the area-by-area analog gain 121 can take four values, 1 time (×1), 2 times (×2), 4 times (×4), and 8 times (×8), as the gain values.
The exposure correction unit 105 executes gradation expansion processing on the area-by-area exposure image 122 received from the A/D conversion unit 104 based on the area-by-area exposure time 112 and the area-by-area analog gain value 113, and outputs the gradation expanded image 123. In the present exemplary embodiment, while a bit width of the area-by-area exposure image 122 is 10 bits, a bit width of the gradation expanded image 123 is 17 bits because increase in the dynamic range caused by the gradation expansion processing based on the area-by-area exposure time 112 and the area-by-area analog gain value 113 is taken into consideration.
In the present exemplary embodiment, the gradation expanded image 123 having the bit width of 17 bits is merely an example. Of the 17 bits, the increased number of bits (7 bits) from the bit width (10 bits) of the area-by-area exposure image 122, includes 4 bits corresponding to the area-by-area exposure time 112 ( 1/30 sec. to 1/480 sec.), and 3 bits corresponding to the area-by-area analog gain value 113 (1 time to 8 times). The number of increased bits necessary for expressing each of the exposure time and the analog gain value is a log base 2 of a ratio of the maximum value to the minimum value of each of the exposure time and the analog gain. More specifically, the number of bits necessary for expressing the exposure time can be acquired as 4 bits (=log2(( 1/30)/( 1/480))). For the sake of simplicity, in the present exemplary embodiment, a combination of the exposure time and the analog gain falling within comparatively small ranges is taken as an example. However, the combination is not limited thereto. For example, when the exposure time ranges from 1/30 sec. to 1/61440 sec., the increased bit width necessary for expressing the exposure time is 11 bits (=log2(( 1/30)/( 1/61440))). In addition, the correction processing executed by the exposure correction unit 105 will be described below in detail with reference to
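The log-base-2 rule described above can be reproduced directly; the following sketch uses the exposure-time and gain ranges stated in this embodiment:

```python
import math

def extra_bits(range_max, range_min):
    """Extra bits needed to cover a max/min ratio: log2(max / min)."""
    return round(math.log2(range_max / range_min))

base_bits = 10                               # width of the area-by-area exposure image
exposure_bits = extra_bits(1 / 30, 1 / 480)  # exposure time range -> 4 bits
gain_bits = extra_bits(8, 1)                 # analog gain range   -> 3 bits
expanded_bits = base_bits + exposure_bits + gain_bits  # -> 17 bits

# For the wider exposure range mentioned in the text (1/30 to 1/61440 sec.):
wide_exposure_bits = extra_bits(1 / 30, 1 / 61440)     # -> 11 bits
```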
Before details of the processing executed by the exposure correction unit 105 are described, the area-by-area exposure time 112 and the area-by-area analog gain value 113 will be described with reference to
First, the area-by-area exposure time 112 set for each of the pixel blocks 201 will be described with reference to
As illustrated in
Next, the area-by-area analog gain value 113 will be described with reference to
As illustrated in
As described above, the exposure time ID takes a value from 0 to 4 as an index value. An index value 0 of the exposure time ID corresponds to the exposure time 1/30 sec. Similarly, an index value 1 of the exposure time ID corresponds to the exposure time 1/60 sec., an index value 2 corresponds to the exposure time 1/120 sec., an index value 3 corresponds to the exposure time 1/240 sec., and an index value 4 corresponds to the exposure time 1/480 sec. The exposure time is one of the parameters relating to an imaging condition. In the present exemplary embodiment, 0 is specified as an index value of the exposure time ID corresponding to a condition for obtaining the brightest captured image. In a case where the brightness when imaging is executed at the exposure time for obtaining the brightest captured image (i.e., 1/30 sec.) is taken as a reference, the brightness when imaging is executed at the exposure time of 1/30 sec. to 1/480 sec. ranges from 1 to 1/16 of the reference brightness, if the same amount of light is incident on the image sensor unit 103 and no saturation occurs in the pixels. For example, when imaging is executed at the exposure time specified by the exposure time ID of the index value 4 (i.e., 1/480 sec.), the brightness is 1/16 (=(1/480 sec.)/(1/30 sec.)) of the brightness when imaging is executed at the exposure time specified by the exposure time ID of the index value 0 (i.e., 1/30 sec.).
Further, the exposure correction coefficient is a correction coefficient for adjusting a level of the pixel value when imaging is executed at the exposure times corresponding to the respective indexes of the exposure time IDs as described above. In the present exemplary embodiment, the exposure correction coefficient is specified so as to adjust a level of the pixel value at the exposure time (1/30 sec. to 1/480 sec.) by taking a level of a pixel value when imaging is executed at the exposure time for obtaining the brightest captured image (i.e., 1/30 sec.) as a reference. Thus, the reciprocal of the brightness ratio at the time of imaging is used as the exposure correction coefficient. As described above, when the brightness when imaging is executed at the exposure time for obtaining the brightest captured image (i.e., 1/30 sec.) is taken as a reference, the brightness when imaging is executed at the exposure time of 1/30 sec. to 1/480 sec. ranges from 1 to 1/16 of the reference brightness. Accordingly, as illustrated in
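The exposure correction coefficients described above follow mechanically from the reciprocal rule; this sketch derives them from the exposure times of the present embodiment:

```python
# Exposure time for each exposure time ID (index 0 to 4), from the embodiment.
EXPOSURE_TIMES = [1 / 30, 1 / 60, 1 / 120, 1 / 240, 1 / 480]
REFERENCE_EXPOSURE = EXPOSURE_TIMES[0]  # brightest condition (ID 0)

# Exposure correction coefficient: reciprocal of the brightness ratio
# relative to the reference exposure time, i.e., 1, 2, 4, 8, 16.
exposure_correction = [round(REFERENCE_EXPOSURE / t) for t in EXPOSURE_TIMES]
```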
Next, a gain ID and an analog gain and a gain correction coefficient corresponding to the gain ID will be described with reference to
As described above, the gain ID takes a value from 0 to 3 as an index value. An index value 0 of the gain ID corresponds to the analog gain equivalent to 8 times. Similarly, index values 1, 2, and 3 of the gain IDs correspond to the analog gains equivalent to 4 times, 2 times, and 1 time, respectively. Similar to the above-described exposure time, the analog gain is one of the parameters relating to the imaging condition. In the present exemplary embodiment, 0 is specified as an index value of the gain ID corresponding to a condition for obtaining the brightest captured image.
The gain correction coefficient is a correction coefficient for adjusting a level of a pixel value when the analog gain corresponding to each of the indexes of the gain IDs is applied to the pixel value. In the present exemplary embodiment, a level of a pixel value when the analog gain for obtaining the brightest captured image (i.e., 8 times) is applied is taken as a reference, and levels of pixel values when respective analog gains (8 times to 1 time) are applied thereto are adjusted by the gain correction coefficients. Thus, as illustrated in
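The gain correction coefficients follow the same reciprocal rule as the exposure correction coefficients; a minimal sketch using the gain values of the present embodiment:

```python
# Analog gain for each gain ID (index 0 to 3), from the embodiment.
ANALOG_GAINS = [8, 4, 2, 1]
REFERENCE_GAIN = ANALOG_GAINS[0]  # brightest condition (ID 0)

# Gain correction coefficient: reciprocal of the gain ratio to the
# reference analog gain, i.e., 1, 2, 4, 8 for gain IDs 0 to 3.
gain_correction = [REFERENCE_GAIN // g for g in ANALOG_GAINS]
```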
Next, a combination of the exposure time (sec.) and the exposure correction coefficient that correspond to the exposure time ID and an analog gain and a gain correction coefficient that correspond to the gain ID will be described.
As described above, the exposure time and the analog gain are parameters relating to the imaging condition. In the present exemplary embodiment, 0 is specified as the index values of the exposure time ID and the gain ID corresponding to a condition for obtaining the brightest captured image. Accordingly, for example, a combination of the exposure time ID (exposure time 1/30 sec.) and the gain ID (analog gain 8 times) of the index value 0, indicated by a symbol "A" in
On the other hand, a combination of the exposure time ID and the gain ID of the greatest index values, indicated by a symbol “C” in
Next, with reference to
In
Hereinafter, transitions of the respective values occurring in the course of the processing from capturing an object image to outputting the gradation expanded image will be described.
As described above, the imaging condition setting A in
With the above-described imaging condition setting A, the A/D conversion unit 104 executes A/D conversion using the analog gain of 8 times. Because the gain correction coefficient is 1 when the analog gain of 8 times is applied, the exposure correction unit 105 applies the gain correction coefficient (1 time) to the area-by-area exposure image to output the gain corrected image. Further, in the example in
The area-by-area exposure image is an image obtained through imaging executed for each area of the image sensor unit 103 with the above-described combination of various imaging conditions illustrated in
As illustrated in
Next, with reference to
Herein, the exposure time of 1/480 sec. is one-sixteenth of the exposure time of 1/30 sec. Therefore, if the brightness (illuminance) of the object captured at 1/480 sec. is the same as the brightness of the object captured at the reference exposure time (1/30 sec.), the pixel potential output when the object is captured at the exposure time of 1/480 sec. is one-sixteenth of the pixel potential output when the exposure time is the reference exposure time of 1/30 sec. Further, in the imaging condition setting B, the analog gain is set to be 2 times. This is one-fourth of the reference analog gain of 8 times. Accordingly, a level of the area-by-area exposure image acquired when the analog gain is 2 times is one-fourth of the level of the area-by-area exposure image acquired when the analog gain is the reference analog gain (8 times). As a result, when the processing is executed with the imaging condition setting B in
Next, the exposure correction unit 105 adjusts a value of the area-by-area exposure image to a level of the image captured at the reference imaging condition (i.e., the exposure time of 1/30 sec. and the analog gain of 8 times). When the imaging condition setting is the imaging condition setting B (the exposure time of 1/480 sec. and the analog gain of 2 times), as illustrated in
In other words, as illustrated in the example in
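The level adjustment performed by the exposure correction unit reduces to a single multiplication by the two correction coefficients; the pixel value below is hypothetical, but the coefficients are those of imaging condition B in the present embodiment:

```python
def gradation_expand(pixel_10bit, exposure_coeff, gain_coeff):
    """Bring an area's 10-bit pixel value up to the 17-bit reference level
    (exposure 1/30 sec., analog gain 8 times) by applying the exposure and
    gain correction coefficients."""
    return pixel_10bit * exposure_coeff * gain_coeff

# Imaging condition B: exposure 1/480 sec. (coefficient 16) and analog gain
# 2 times (coefficient 4); the overall correction is 16 * 4 = 64.
expanded = gradation_expand(512, 16, 4)  # hypothetical pixel value -> 32768
```

Note that the worst case fits in the 17-bit gradation expanded image: 1023 * 16 * 8 = 130944 < 2^17.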
Next, with reference to
As illustrated in
Further, in an example illustrated in
Thus, as illustrated in the example in
As described with reference to the
Next, gradation conversion processing executed by the gradation conversion unit 106 illustrated in
The gradation conversion unit 106 executes gradation conversion processing on the gradation expanded image 123 (17 bits) for each pixel to convert the gradation expanded image 123 to the gradation converted image 124 (11 bits, in the present exemplary embodiment). Through the gradation conversion processing, a bit length of the gradation expanded image 123 (17 bits) is reduced, so that an output data rate from the imaging apparatus 100 can be reduced. Specifically, as illustrated in
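The 17-bit to 11-bit gradation conversion can be sketched as a gamma curve. The embodiment only states that gamma conversion is used, so the exponent 2.2 below is an assumption for illustration:

```python
def gamma_convert(value_17bit, in_bits=17, out_bits=11, gamma=2.2):
    """Compress a linear 17-bit value to 11 bits with a gamma curve.
    The gamma value 2.2 is an illustrative assumption."""
    in_max = (1 << in_bits) - 1    # 131071
    out_max = (1 << out_bits) - 1  # 2047
    return round(out_max * (value_17bit / in_max) ** (1.0 / gamma))
```

A gamma exponent below 1 on the normalized value lifts the midtones, which is what lets the reduced 11-bit output preserve gradation in the darker part of the expanded range.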
Next, boundary gap correction processing executed by the gap correction unit 107 in
As illustrated in
For this reason, the gap correction unit 107 applies filter processing (smoothing processing) to a boundary portion between the pixel blocks 201 to reduce the gap between the pixel values of the boundary portion. The boundary portion is the portion at a boundary of adjacent pixel blocks 201. More specifically, as illustrated in
By executing the filter processing, the gap correction unit 107 can reduce the gap occurring in the boundary between the pixel blocks 201 of the gradation converted image 124, and output the boundary gap corrected image 125. In addition, sharpness of the image is lowered when the filter processing is executed using a low-pass filter. However, in the present exemplary embodiment, since the filter processing is executed only on the boundary portion of the pixel blocks, the lowering of sharpness can be limited to the boundary portion of the pixel blocks.
Further, in the present exemplary embodiment, although the filter processing (smoothing processing) using the low-pass filter has been described as an example, a type of the filter is not limited to the low-pass filter, and another known filter such as an epsilon filter can also be used.
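A minimal sketch of the boundary gap correction, assuming a simple [1, 2, 1]/4 low-pass kernel applied only to pixels near a block boundary (the kernel and window width are illustrative choices, not specified by the embodiment):

```python
def smooth_block_boundary(row, boundary_col, half_width=2):
    """Apply a [1, 2, 1]/4 low-pass filter only to the columns within
    half_width of a pixel-block boundary; other pixels are untouched."""
    out = list(row)
    lo = max(1, boundary_col - half_width)
    hi = min(len(row) - 1, boundary_col + half_width)
    for x in range(lo, hi):
        out[x] = (row[x - 1] + 2 * row[x] + row[x + 1]) // 4
    return out

# A step between two adjacent pixel blocks is softened at the boundary,
# while pixels away from the boundary keep their original values.
row = [100] * 6 + [140] * 6          # hypothetical gradation converted values
smoothed = smooth_block_boundary(row, boundary_col=6)
```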
In the present exemplary embodiment, respective pieces of processing have been executed by the exposure correction unit 105, the gradation conversion unit 106, and the gap correction unit 107 in
Respective constituent elements from the image sensor unit 103 to the image output unit 108 have been described above. Hereinafter, the synchronization control unit 101, the exposure time control unit 109, the exposure condition calculation unit 111, and the gain control unit 110 in
The synchronization control unit 101 generates an exposure time output pulse 120 and a gain output pulse 114, which are synchronized with each other, transmits the exposure time output pulse 120 to the exposure time control unit 109, and transmits the gain output pulse 114 to the gain control unit 110. With this processing, the synchronization control unit 101 synchronizes and controls the processing executed by the exposure time control unit 109 and the gain control unit 110. The exposure time output pulse 120 is a signal for controlling a timing for outputting the area-by-area exposure control signal 117 from the exposure time control unit 109 to the image sensor unit 103. The exposure time control unit 109 outputs the area-by-area exposure control signal 117 to the image sensor unit 103 based on the exposure time output pulse 120 to change the exposure time for each pixel block of the image sensor unit 103. The gain output pulse 114 is a signal for controlling a timing for outputting the area-by-area analog gain 121 from the gain control unit 110 to the A/D conversion unit 104. The gain control unit 110 outputs the area-by-area analog gain 121 to the A/D conversion unit 104 based on the gain output pulse 114 to change the gain applied to the pixel potential of an arbitrary pixel block. As described above, in the present exemplary embodiment, the synchronization control unit 101 synchronizes the exposure time control unit 109 and the gain control unit 110 to execute operation control, so that the area-by-area exposure image 122 can be acquired by appropriately changing the exposure time and the analog gain for each of the pixel blocks of the image sensor unit 103.
Hereinafter, with reference to
In
In
As described above, the pixel block [0,0] is the pixel block 201 corresponding to the 0th to 99th horizontal lines. Thus, when the exposure processing and the analog gain processing are executed on the pixel block [0,0], the synchronization control unit 101 generates and outputs the exposure time output pulse 120 and the gain output pulse 114 corresponding to the 0th to 99th lines of the image sensor unit 103.
Then, based on the exposure time output pulse 120 and the area-by-area exposure time 112, the exposure time control unit 109 generates the area-by-area exposure control signal 117 for driving the image sensor unit 103 so as to execute exposure on the pixel block 201 of the 0th to 99th lines at the exposure time of 1/30 sec. Further, based on the gain output pulse 114 and the area-by-area analog gain value 113, the gain control unit 110 generates the area-by-area analog gain 121 for driving and controlling the A/D conversion unit 104 to apply the analog gain of 8 times to the pixel potential of the pixel block 201 of the 0th to 99th lines. At this time, at the pixel block [0,0], the same exposure time ( 1/30 sec.) is uniformly applied to each of the horizontal lines while shifting a timing for starting exposure indicated by a solid line 1103 and a timing for ending exposure indicated by a solid line 1104 of each of the horizontal lines. Further, the synchronization control unit 101 adjusts a timing for starting exposure so that the exposure of each of the lines of the pixel block [0,0] is ended at the same timing as the driving timing of the analog gain.
Further, as described above, the pixel block [0,1] is the pixel block 201 corresponding to the 100th to 199th horizontal lines. Thus, when the exposure processing and the analog gain processing are executed on the pixel block [0,1], the synchronization control unit 101 generates and outputs the exposure time output pulse 120 and the gain output pulse 114 corresponding to the 100th to 199th lines of the image sensor unit 103.
Then, based on the exposure time output pulse 120 and the area-by-area exposure time 112, the exposure time control unit 109 generates the area-by-area exposure control signal 117 for driving the image sensor unit 103 so as to execute exposure on the pixel block 201 of the 100th to 199th lines at the exposure time of 1/60 sec. Further, based on the gain output pulse 114 and the area-by-area analog gain value 113, the gain control unit 110 generates the area-by-area analog gain 121 for driving and controlling the A/D conversion unit 104 to apply the analog gain of 4 times to the pixel potential of the pixel block 201 of the 100th to 199th lines. Similarly, at the pixel block [0,1], the same exposure time ( 1/60 sec.) is uniformly applied to each of the horizontal lines while shifting the timings for starting and ending exposure respectively indicated by solid lines 1105 and 1106 of each of the horizontal lines. Further, the synchronization control unit 101 adjusts a timing for starting exposure so that the exposure of each of the lines of the pixel block [0,1] is ended at the same timing as the driving timing of the analog gain.
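The timing adjustment described above, starting each line's exposure so that it ends exactly when the analog gain for its block is driven, can be sketched as a simple schedule. The line period below is a hypothetical value; the embodiment does not specify one:

```python
LINE_PERIOD = 1.0 / (30 * 1000)  # hypothetical interval between line readouts

def exposure_window(line_index, exposure_time, line_period=LINE_PERIOD):
    """Rolling-shutter style schedule: exposure of each horizontal line is
    timed so that it ends at that line's readout (gain-switch) instant,
    i.e., start = readout time - exposure time."""
    end = line_index * line_period
    start = end - exposure_time
    return start, end
```

Because every line in a block shares the same exposure time but a different readout instant, the start times shift line by line, which matches the slanted exposure-start and exposure-end lines described for the pixel blocks above.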
As illustrated in
Focusing on the pixel blocks [0,0] to [0,9] positioned at the left ends in the horizontal direction, the area-by-area exposure control signal 117 is divided into the pixel driving pulses sb0p0 to sb0p999. Then, for example, the pixel driving pulse sb0p0 is connected to the first horizontal line of the pixel block [0,0]. In a general image sensor, a pixel driving pulse is connected to each of the horizontal lines in the order of the vertical direction, and a uniform exposure time is applied to the entire portion of the image sensor unit 103 by employing a rolling shutter system. In contrast, in the present exemplary embodiment, the area-by-area exposure control signal 117 is connected to each of the pixel blocks in the horizontal direction, and a different exposure time is applied to each of the pixel blocks in the horizontal direction. Further, with respect to the pixel blocks in the vertical direction, the exposure time is changed at a boundary between the pixel blocks, so that a different exposure time can be applied to each of the pixel blocks.
The processing is described with reference to
The gain control unit 110 outputs the area-by-area analog gain 121 based on the gain output pulse 114 and the area-by-area analog gain value 113. The area-by-area analog gain 121 of a desired analog gain value is applied to each of the pixel blocks specified by the area-by-area analog gain value 113.
Similarly, in
The processing executed by the exposure condition calculation unit 111 will be described with reference to
In the diagram in
As described above, the imaging apparatus 100 according to the present exemplary embodiment synchronizes and controls the exposure time and the analog gain for each pixel block to execute imaging. By controlling the analog gain in addition to the exposure time for each area, the imaging apparatus 100 of the present exemplary embodiment can expand the dynamic range correspondingly. In digital gain processing, in which a digital value acquired through the A/D conversion is multiplied by a coefficient, gradation cannot be recovered once a value on the lower luminance side has become 0 (i.e., underexposure) or a value on the higher luminance side has become the maximum digital value (i.e., overexposure) after the A/D conversion, because only the already-quantized digital value is multiplied by the coefficient. In contrast, adjustment of the analog gain as described in the present exemplary embodiment is advantageous in that the problem of underexposure or overexposure can be avoided within the range of the analog gain. Further, according to the present exemplary embodiment, an image can be output after adjusting a level of the pixel value acquired by applying a different exposure time and a different analog gain for each pixel block, reducing a bit length through the gradation conversion processing, and reducing a gap occurring in the boundary between the pixel blocks.
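The advantage of analog gain over digital gain can be seen in a toy model of the A/D conversion; the pixel potentials and scale below are hypothetical:

```python
ADC_MAX = 1023  # maximum 10-bit A/D output

def adc(pixel_potential, analog_gain=1):
    """Toy model of the A/D conversion: the analog gain is applied to the
    pixel potential before quantization and clipping (potentials are in
    hypothetical units of one output LSB)."""
    return min(int(pixel_potential * analog_gain), ADC_MAX)

# A dark pixel whose potential is below 1 LSB quantizes to 0; multiplying
# the digital value afterwards (digital gain) cannot recover the gradation.
dark = adc(0.4)                        # -> 0
still_dark = dark * 8                  # digital gain of 8 times: still 0
with_analog = adc(0.4, analog_gain=8)  # analog gain of 8 times: -> 3
```

The symmetric case holds on the bright side: once the A/D output has clipped at the maximum digital value, no coefficient applied afterwards restores the lost gradation, whereas choosing a smaller analog gain before conversion avoids the clipping.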
For this reason, the imaging apparatus 1300 of the present exemplary embodiment outputs the 10-bit area-by-area exposure image 122 from the image output unit 108 as it is. The image output unit 108 also outputs the area-by-area exposure time 112 and the area-by-area analog gain value 113. In other words, according to the present exemplary embodiment, it is assumed that the processing subsequent to the processing executed by the exposure correction unit 105 in
As illustrated in
Herein, since a different exposure time is set for each area (i.e., for each pixel block) by the area-by-area exposure control signal 117, the number of wiring lines increases. As a result, if the wiring is implemented in a normal single-layer structure, the wiring for the area-by-area exposure control signal 117 intrudes on the image sensor unit 103, so that a sufficient pixel area cannot be secured. In contrast, the present exemplary embodiment is advantageous in that a sufficient pixel area can be secured because essentially only the image sensor unit 103 needs to be arranged on the sensor layer 1400. In addition, the number of layers is not limited to two, and the laminated structure may consist of three or more layers. Further, the configuration of the sensor layer 1400 and the circuit layer 1401 is merely an example, and constituent elements other than the image sensor unit 103 may also be arranged on the sensor layer 1400.
As described above, in comparison with the method discussed in Japanese Patent Application Laid-Open No. 2010-136205, which cannot acquire a dynamic range exceeding the range of the exposure time (e.g., 1/30 sec. to 1/61440 sec.), the first to the third exemplary embodiments can further expand the dynamic range by the range of the analog gain (e.g., 1 time to 8 times) in addition to the range of the exposure time. Further, compared with the method discussed in Japanese Patent Application Laid-Open No. 2009-303010, in which a plurality of A/D conversion units has to be arranged in parallel or an A/D conversion unit has to be driven at high speed through a time division method, the method according to the present exemplary embodiment uses only one A/D conversion unit, so that the circuit size can be reduced and the A/D conversion unit does not have to be driven at high speed.
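The size of the expansion can be made concrete from the example figures quoted above: the exposure-time range 1/30 sec. to 1/61440 sec. spans a ratio of 2048 (11 stops), and the 1x to 8x analog-gain range contributes a further factor of 8 (3 stops), for 14 stops combined. A quick check of this arithmetic:

```python
import math

# Example ranges quoted in the text.
t_long, t_short = 1 / 30, 1 / 61440  # exposure time range [s]
g_min, g_max = 1.0, 8.0              # analog gain range

exposure_ratio = t_long / t_short        # 61440 / 30 = 2048 = 2**11
gain_ratio = g_max / g_min               # 8 = 2**3

# One "stop" of dynamic range corresponds to a factor of 2.
stops_exposure = math.log2(exposure_ratio)            # 11 stops
stops_total = math.log2(exposure_ratio * gain_ratio)  # 14 stops
```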
Embodiment(s) of the disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions or units of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions or units of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM) device, a read only memory (ROM) device, a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2020-020791, filed Feb. 10, 2020, which is hereby incorporated by reference herein in its entirety.
Number | Date | Country | Kind
---|---|---|---
2020-020791 | Feb 2020 | JP | national