The present invention relates to an imaging device and an imaging system.
In a single-plate type imaging element, color filters (CF) that pass specific wavelength components, for example, light of the respective colors red (R), green (G), and blue (B), are arranged in a particular pattern on pixels in order to obtain a color image. As a CF pattern, the so-called Bayer arrangement is widely used. Further, in addition to RGB CFs, there is growing use of CFs of an RGBW arrangement, which include W pixels having a filter that transmits light over the entire wavelength range of visible light.
While an imaging element having a CF of the RGBW arrangement can improve sensitivity and acquire an image with a high S/N ratio by using the W pixels, the W pixels saturate more easily than the RGB pixels, which are color pixels, making capturing under a high-brightness environment difficult. That is, saturation is likely to occur even under a capturing condition with the same light amount, resulting in a narrow dynamic range. This is a common issue in achieving higher sensitivity by detecting a non-spectral signal or a wide-wavelength-range component signal.
Japanese Patent Application Laid-Open No. 2017-055330 discloses a method of performing multiple exposure operations and readout operations within one frame to reduce the occurrence of output saturation in W pixels in a solid-state imaging device having a CF of the RGBW arrangement.
However, there is a demand for further expanding the dynamic range while ensuring color reproducibility in order to obtain a higher quality image.
The present invention intends to provide an imaging device and an imaging system that can acquire a high-quality image with a wide dynamic range and high color reproducibility.
According to one aspect of the present invention, there is provided an imaging device including an imaging element including a plurality of pixels that includes a plurality of first pixels, each of which outputs a signal including color information on any of a plurality of colors, and a plurality of second pixels having a higher sensitivity than the first pixels, and a signal processing unit that processes signals output from the imaging element, wherein the signal processing unit includes a luminance value acquisition unit that acquires luminance values of the first pixels based on signals output from the second pixels, and a color acquisition unit that acquires color ratios of the plurality of colors in a predetermined unit region from color values and the luminance values of the first pixels and acquires, from the acquired color ratios, color components of each of the first pixels and each of the second pixels included in the unit region, and wherein the color acquisition unit acquires each of the color ratios by using color values in the first pixels acquired in a first capturing condition and luminance values in the first pixels based on signals of the second pixels acquired in a second capturing condition of a lower sensitivity than the first capturing condition.
According to another aspect of the present invention, there is provided a signal processing device that processes signals output from an imaging element including a plurality of pixels that include a plurality of first pixels, each of which outputs a signal including color information on any of a plurality of colors, and a plurality of second pixels having a higher sensitivity than the first pixels, the signal processing device including a luminance value acquisition unit that acquires luminance values of the first pixels based on signals output from the second pixels, and a color acquisition unit that acquires color ratios of the plurality of colors in a predetermined unit region from color values and the luminance values of the first pixels and acquires, from the acquired color ratios, color components of each of the first pixels and each of the second pixels included in the unit region, wherein the color acquisition unit acquires each of the color ratios by using color values in the first pixels acquired in a first capturing condition and luminance values in the first pixels based on signals of the second pixels acquired in a second capturing condition of a lower sensitivity than the first capturing condition.
According to yet another aspect of the present invention, there is provided an imaging system including an imaging device including an imaging element including a plurality of pixels that include a plurality of first pixels, each of which outputs a signal including color information on any of a plurality of colors, and a plurality of second pixels having a higher sensitivity than the first pixels, and a signal processing unit that processes signals output from the imaging element, wherein the signal processing unit includes a luminance value acquisition unit that acquires luminance values of the first pixels based on signals output from the second pixels, and a color acquisition unit that acquires color ratios of the plurality of colors in a predetermined unit region from color values and the luminance values of the first pixels and acquires, from the acquired color ratios, color components of each of the first pixels and each of the second pixels included in the unit region, and wherein the color acquisition unit acquires each of the color ratios by using color values in the first pixels acquired in a first capturing condition and luminance values in the first pixels based on signals of the second pixels acquired in a second capturing condition of a lower sensitivity than the first capturing condition.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Preferred embodiments of the present invention will now be described in detail in accordance with the accompanying drawings.
An imaging device and a method of driving the same according to a first embodiment of the present invention will be described with reference to the drawings.
First, a general configuration of the imaging device according to the present embodiment will be described by using the drawings.
As illustrated in the drawings, the imaging device according to the present embodiment includes an imaging element 100 and a signal processing unit 200. The imaging element 100 converts a light signal (object image) received through an optical system (not illustrated) into an electric signal and outputs the converted signal. The imaging element 100 is formed of a so-called single-plate-type color sensor in which color filters (hereinafter also referred to as "CF") are arranged on a CMOS image sensor or a CCD image sensor, for example. The designation "RGBW12 arrangement" represents the CF arrangement of the imaging element 100, which will be described in detail later.
The signal processing unit 200 performs signal processing described later on a signal output from the imaging element 100. The signal processing unit 200 includes an RGBW12 signal processing unit 210 and an image processing system unit 220. The RGBW12 signal processing unit 210 includes a pre-stage processing unit 212 and a high accuracy interpolation unit 214.
The RGBW12 signal processing unit 210 processes output signals from the imaging element 100, which has a color filter arrangement of the RGBW12 arrangement. The pre-stage processing unit 212 performs, as appropriate, pre-processing on the output signals from the imaging element 100, that is, correction processes such as offset correction and gain correction on each signal. The high accuracy interpolation unit 214 performs an accurate interpolation process on the output signals from the pre-stage processing unit 212. The high accuracy interpolation unit 214 has a function as a luminance value acquisition unit that acquires luminance values of the color pixels based on the signals output from the W pixels. Further, the high accuracy interpolation unit 214 has a function as a color acquisition unit that acquires color ratios from the color values of the color pixels and the luminance values of the W pixels and acquires a color component of each pixel from the acquired color ratios.
The image processing system unit 220 uses the output from the RGBW12 signal processing unit 210 to generate an output image. The image processing system unit 220 is a functional block that generates an RGB color image and thus can also be referred to as an RGB signal processing unit. To form a color image from the output of the imaging element 100, the image processing system unit 220 performs various processes where appropriate, such as a demosaic process, a color matrix operation, a white balance process, digital gain, a gamma process, and a noise reduction process. Among these, the demosaic process is particularly important for resolution information, and an advanced interpolation process is typically performed assuming a CF of the Bayer arrangement.
The imaging element 100 and the signal processing unit 200 may be provided on the same chip or may be provided on different chips or devices. When configured to be provided on a single chip, the imaging element 100 and the signal processing unit 200 may be both provided on a single semiconductor substrate or may be separately provided on different semiconductor substrates and then stacked. Further, the imaging element 100 and the signal processing unit 200 are not necessarily required to be configured as a single unit, but the signal processing unit 200 may be configured as a signal processing device or an image processing device that processes a signal output from the imaging element 100 or the imaging device.
The imaging element 100 includes an imaging region 10, a vertical scanning circuit 20, a column readout circuit 30, a horizontal scanning circuit 40, an output circuit 50, and a control circuit 60, as illustrated in the drawings.
In the imaging region 10, a plurality of pixels 12 are provided in a matrix over a plurality of rows and a plurality of columns. For example, a total of 2073600 pixels including 1920 pixels in the column direction and 1080 pixels in the row direction are arranged in the imaging region 10. The number of pixels arranged in the imaging region 10 is not particularly limited, and a larger number of pixels or a smaller number of pixels may be applicable.
On each row of the imaging region 10, a control line 14 is arranged extending in a first direction (the horizontal direction in the drawings). On each column of the imaging region 10, an output line 16 is arranged extending in a second direction (the vertical direction in the drawings) intersecting the first direction.
The control line 14 on each row is connected to the vertical scanning circuit 20. The vertical scanning circuit 20 supplies, via the control line 14, control signals used for controlling the transistors of the pixels 12 to be turned on (conductive state) or off (nonconductive state). The output line 16 on each column is connected to the column readout circuit 30. The column readout circuit 30 performs predetermined processes, such as an amplification process, on the pixel signals read out via the output lines 16 and holds the processed signals. The horizontal scanning circuit 40 supplies control signals used for controlling switches connected to the signal holding units of the respective columns of the column readout circuit 30. The output circuit 50 is formed of a buffer amplifier or a differential amplifier circuit and outputs pixel signals read out from the signal holding units of the column readout circuit 30 to the signal processing unit 200 in response to the control signals from the horizontal scanning circuit 40. The control circuit 60 is a circuit unit that supplies, to the vertical scanning circuit 20, the column readout circuit 30, and the horizontal scanning circuit 40, control signals for controlling their operations and timings. Some or all of the control signals to be supplied to the vertical scanning circuit 20, the column readout circuit 30, and the horizontal scanning circuit 40 may be supplied from the outside of the imaging element 100.
Each of the pixels 12 includes a photoelectric converter PD, a transfer transistor M1, a reset transistor M2, an amplifier transistor M3, and a select transistor M4, as illustrated in the drawings. In the pixel 12 of this circuit configuration, the control line 14 on each row includes signal lines that supply a control signal PTX to the transfer transistor M1, a control signal PRES to the reset transistor M2, and a control signal PSEL to the select transistor M4 of the pixels 12 belonging to that row.
As illustrated in the drawings, the column readout circuit 30 includes, on each column, a column amplifier 32, switches SW0 to SW7, and capacitors C0, C1a, C1b, CTN, and CTS.
The column amplifier 32 is formed of a differential amplifier circuit having an inverting input node, a non-inverting input node, and an output node. The inverting input node of the column amplifier 32 is connected to the output line 16 via the switch SW0, driven by a signal PL, and the capacitor C0. A voltage VREF is supplied to the non-inverting input node of the column amplifier 32. A first feedback path formed of the switch SW1, driven by a signal ϕC1a, and the capacitor C1a connected in series is provided between the inverting input node and the output node of the column amplifier 32. Further, a second feedback path formed of the switch SW2, driven by a signal ϕC1b, and the capacitor C1b connected in series is provided between the inverting input node and the output node. Furthermore, a third feedback path formed of the switch SW3, driven by a signal ϕC, is provided between the inverting input node and the output node.
To the output node of the column amplifier 32, the capacitor CTN and one primary node of the switch SW6 are connected via the switch SW4, and the capacitor CTS and one primary node of the switch SW7 are connected via the switch SW5. The switches SW4 and SW5 are driven by signals ϕCTN and ϕCTS, respectively.
The other primary node of the switch SW6 is connected to a horizontal output line 34. Further, the other primary node of the switch SW7 is connected to a horizontal output line 36. The horizontal scanning circuit 40 sequentially outputs signals ϕHn to the control nodes of the switches SW6 and SW7 of the column readout circuit 30 on each column. The output circuit 50 includes an output amplifier 52. The horizontal output lines 34 and 36 are connected to the output amplifier 52.
On each of the pixels 12 arranged in the imaging region 10, a color filter having predetermined spectral sensitivity characteristics is arranged in accordance with the color filter arrangement (hereinafter referred to as "CF arrangement") illustrated in the drawings.
The W pixel is a pixel that directly detects incident light without color separation. The W pixel is characterized by a wide transmission wavelength range and high sensitivity in its spectral sensitivity characteristics compared to the R pixel, the G pixel, and the B pixel, and has, for example, the widest full width at half maximum of the transmission wavelength range in the spectral sensitivity characteristics. Typically, the transmission wavelength range in the spectral sensitivity characteristics of the W pixel covers the transmission wavelength ranges in the spectral sensitivity characteristics of the R pixel, the G pixel, and the B pixel.
In the CF arrangement illustrated in the drawings (the RGBW12 arrangement), one R pixel, two G pixels, one B pixel, and twelve W pixels are arranged in each pixel block of four rows by four columns; that is, the ratio R:G:B:W is 1:2:1:12.
In other words, the RGBW12 arrangement includes color pixels as first pixels and white pixels as second pixels, and the total number of the second pixels is three times (twice or more) the total number of the first pixels. The first pixels include multiple types of pixels (the R pixel, the G pixel, and the B pixel), each of which outputs a signal including color information on any of a plurality of colors (R, G, B). The second pixels have a higher sensitivity than the first pixels. Note that the imaging element 100 may include not only effective pixels but also pixels that do not output signals used for forming an image, such as optical black pixels, dummy pixels, or null pixels; such pixels are not included in the first pixels and the second pixels described above.
When the RGBW12 arrangement is used, since the RGB pixels are surrounded only by W pixels, the accuracy of acquiring the W value (luminance value) at an RGB pixel position by interpolation is improved. Since the luminance values at the RGB pixel positions can be interpolated with high accuracy, an image with a high resolution can be obtained. Here, the RGB pixels being surrounded by the W pixels means that every pixel adjacent to each of the R pixel, the G pixel, and the B pixel in the vertical, horizontal, and diagonal directions in plan view is a W pixel.
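As a concrete illustration, one 4×4 layout consistent with this description can be written down as a mask (a sketch in Python); the exact placement of the R, Gr, Gb, and B pixels within the block is an assumption here, since the text above fixes only the 1:2:1:12 ratio and the surrounded-by-W property:

```python
import numpy as np

# Hypothetical RGBW12 block: 12 W pixels and 4 isolated color pixels.
# Tiling this block keeps every color pixel adjacent only to W pixels
# in the vertical, horizontal, and diagonal directions.
RGBW12_BLOCK = np.array([
    ['W', 'W',  'W', 'W'],
    ['W', 'R',  'W', 'Gr'],
    ['W', 'W',  'W', 'W'],
    ['W', 'Gb', 'W', 'B'],
])

is_color = RGBW12_BLOCK != 'W'            # color-pixel mask, reused below
assert (RGBW12_BLOCK == 'W').sum() == 12  # W : color = 12 : 4
```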
As a CF arrangement used in acquiring a color image, the so-called Bayer arrangement is known. In the Bayer arrangement, as illustrated in the drawings, the G pixels are arranged in a checkered pattern, and the R pixels and the B pixels are arranged at the remaining positions.
Since the proportion of the W pixels, which determine the resolution, is larger in the RGBW12 arrangement, it is possible to acquire an image with a higher resolution than with a CF arrangement in which the pixels that determine the resolution are arranged in a checkered pattern, as in the Bayer arrangement. That is, information with a higher spatial frequency (a finer pitch) can be acquired. Therefore, an image with a sufficiently high resolution can be obtained merely by calculating the values of the portions including no W pixel (that is, the portions of the color pixels) from the average of the eight nearby pixels. Further, interpolation can be performed by detecting the edge direction based on edge information or information on a periodic pattern or the like; in this case, a sharper image (that is, a higher resolution image) can be obtained than when using the average of the eight nearby pixels.
While various CF arrangements are possible, it is preferable to increase the number of pixels that mainly create the resolution (the G pixels in the Bayer arrangement) in order to acquire a higher resolution image with a single-plate image sensor. In the Bayer arrangement, the G pixels that create the resolution are arranged in a checkered pattern, and thus an interpolation error may occur. In this regard, since the RGBW12 arrangement includes more pixels that create the resolution (the W pixels), the interpolation error can be kept small.
Next, the operation of the imaging device according to the present embodiment will be described by using the drawings.
At time t0, a readout operation from the pixel 12 on the 0-th row is started. After the completion of the readout operation from the pixel 12 on the 0-th row, the process proceeds to a readout operation from the pixel 12 on the first row. Similarly, readout operations are sequentially performed from the second row to the N-th row, and a readout operation from the pixel 12 on the N-th row is started at time t40.
Further, at a predetermined time t20 after the completion of the readout operation from the pixel 12 on the 0-th row, a reset operation of the pixel 12 on the 0-th row ("reset 1" in the timing diagram) is performed. Reset operations are then performed sequentially on the first and subsequent rows in the same manner.
Further, at a predetermined time t50 after the completion of the readout operation from the pixel 12 on the N-th row, the process proceeds to the readout operation of the next frame and repeats the same operation as that from time t0. Note that the period from time t0 to time t50 is determined by the frame rate.
In the timing diagram described above, the period from the reset operation to the readout operation on each row is the accumulation time of signal charges in the photoelectric converter PD of the pixel 12 (hereinafter simply referred to as the "accumulation time"). When a reset scan ("reset 1" in the timing diagram) that performs the reset operation of the 0-th row at time t20 and the reset operation of the N-th row at time t60 is performed, the accumulation time of each row is the period from its reset operation to its readout operation in the next frame.
First, at time t0, the vertical scanning circuit 20 controls the control signal PSEL on a row to be read out to H-level to turn on the select transistor M4. Thereby, the row to be read out is selected, and the amplifier transistors M3 of the pixels 12 belonging to the row of interest (selected row) are connected to the output lines 16 via the select transistors M4.
Further, at the same time t0, the vertical scanning circuit 20 controls the control signal PRES on the row to be read out to H-level to turn on the reset transistor M2. Thereby, the floating diffusions FD of the pixels 12 belonging to the selected row are connected to the power supply node (voltage VDD) via the reset transistors M2, and the potentials of the floating diffusions FD of the pixels 12 are reset. The amplifier transistor M3 outputs a signal based on the reset potential of the floating diffusion FD (N-signal) to the output line 16 via the select transistor M4.
Further, at the same time t0, the control circuit 60 controls the signal PL to H-level to turn the switch SW0 into a conductive state. This causes the output line 16 to be connected to the capacitor C0.
Further, at the same time t0, the control circuit 60 controls the signals ϕC1a, ϕC1b, ϕC, ϕCTN, and ϕCTS to H-level to turn the switches SW1, SW2, SW3, SW4, and SW5, respectively, into a conductive state. This causes the capacitors C1a, C1b, CTN, and CTS to be in a reset state. The potentials of the capacitors CTN and CTS become the voltage VREF. Note that the signal ϕC1 in the timing diagram collectively denotes the signals ϕC1a and ϕC1b.
Next, at time t1, the control circuit 60 controls the signals ϕCTN and ϕCTS to L-level to turn the switches SW4 and SW5 into a nonconductive state. Thereby, the reset state of the capacitors CTN and CTS is released.
Next, at time t2, the vertical scanning circuit 20 controls the control signal PRES to L-level to turn off the reset transistor M2. Thereby, the reset of the potential of the floating diffusion FD is released. A potential in which reset noise (kTC noise) is mixed is held in the floating diffusion FD.
Next, at time t3, the control circuit 60 controls the signal ϕC to L-level to turn the switch SW3 into a nonconductive state. Thereby, the reset of the column amplifier 32 is released while the N-signal is being input, and the column amplifier 32 enters a state of amplifying the difference between a signal from the pixel 12 and the N-signal at a gain determined by the ratio of the input capacitance C0 to the feedback capacitance and outputting the amplified signal. Further, the potential corresponding to the N-signal is clamped in the capacitor C0 with the voltage VREF as a reference.
The gain of the column amplifier 32 can be set by appropriately controlling the signals ϕC1a and ϕC1b. That is, if only the signal ϕC1a of the signals ϕC1a and ϕC1b is at H-level, the gain of the column amplifier 32 is C0/C1a. Hereinafter, the gain at this time is referred to as a gain G1. If both of the signals ϕC1a and ϕC1b are at H-level, the gain of the column amplifier 32 is C0/(C1a+C1b). Hereinafter, the gain at this time is referred to as a gain G2. If only the signal ϕC1b of the signals ϕC1a and ϕC1b is at H-level, the gain of the column amplifier 32 is C0/C1b. Hereinafter, the gain at this time is referred to as a gain G3. As noted above, the signal ϕC1 in the timing diagram collectively denotes the signals ϕC1a and ϕC1b.
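As a numerical illustration of the three gain settings, the following sketch uses hypothetical capacitance values; the description does not specify C0, C1a, or C1b, so the numbers below are arbitrary:

```python
# Arbitrary capacitances (same units); only the ratios matter.
C0, C1a, C1b = 4.0, 2.0, 1.0

G1 = C0 / C1a          # only phi-C1a at H-level
G2 = C0 / (C1a + C1b)  # both phi-C1a and phi-C1b at H-level
G3 = C0 / C1b          # only phi-C1b at H-level

print(G1, G2, G3)      # 2.0 1.333... 4.0, so G3 > G1 > G2 here
```

With such values, enabling both feedback capacitors gives the lowest gain and enabling only the smaller one gives the highest, which is how a single column amplifier can provide multiple sensitivity settings.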
Next, at time t4, the control circuit 60 controls the signal ϕCTN to H-level to turn the switch SW4 into a conductive state. Thereby, the output terminal of the column amplifier 32 is connected to the capacitor CTN via the switch SW4.
Next, at time t5, the control circuit 60 controls the signal ϕCTN to L-level to turn the switch SW4 into a nonconductive state. Thereby, a signal obtained by amplifying the N-signal at a predetermined gain by the column amplifier 32 is sampled and held in the capacitor CTN. Note that the offset of the column amplifier 32 is held at the same time.
Next, in a period from time t6 to time t7, the vertical scanning circuit 20 controls the control signal PTX to H-level to turn on the transfer transistor M1. Thereby, charges accumulated in the photoelectric converter PD are transferred to the floating diffusion FD, and the amplifier transistor M3 outputs a signal based on the potential of the floating diffusion FD to the output line 16 via the select transistor M4.
The signal output by the amplifier transistor M3 at this time is based on the charges that have been accumulated in the photoelectric converter PD and is referred to as a light signal (which may be denoted as an S-signal).
Next, at time t8, the control circuit 60 controls the signal ϕCTS to H-level to turn the switch SW5 into a conductive state. Thereby, the output terminal of the column amplifier 32 is connected to the capacitor CTS via the switch SW5.
Next, at time t9, the control circuit 60 controls the signal ϕCTS to L-level to turn the switch SW5 into a nonconductive state. Thereby, a signal obtained by amplifying a light signal at a predetermined gain by the column amplifier 32 is sampled and held in the capacitor CTS.
Next, at time t10, the vertical scanning circuit 20 controls the control signal PSEL to L-level to turn off the select transistor M4, thereby disconnecting the pixel 12 from the output line 16. Further, the control circuit 60 controls the signal PL to L-level to turn the switch SW0 into a nonconductive state, thereby disconnecting the input of the column amplifier 32. Furthermore, the control circuit 60 controls the signal ϕC1 to L-level to turn the switches SW1 and SW2 into a nonconductive state, thereby stopping the amplification operation of the column amplifier 32.
Next, in a period from time t11 to time t12, the horizontal scanning circuit 40 performs an operation of sequentially outputting the signal ϕHn to each column of the column readout circuit 30, namely, a horizontal scan. Thereby, the output amplifier 52 sequentially outputs, to the outside, a signal based on the signals held in the capacitors CTN and CTS. Since the offset component of the column amplifier 32 is contained in both held signals, it is subtracted from the output signal of the output amplifier 52.
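The benefit of holding both the N-level and the light signal can be shown numerically. In this hypothetical sketch, the amplifier offset enters both samples and cancels in the difference, as stated above:

```python
# Arbitrary units; 'offset' models the column amplifier offset that is
# sampled into both CTN and CTS, and 'gain' the column amplifier gain.
offset, gain, light = 7.0, 2.0, 120.0

ctn = offset                 # N-sample: clamped N-level plus offset
cts = offset + gain * light  # S-sample: amplified light signal plus offset
print(cts - ctn)             # 240.0 = gain * light; the offset cancels
```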
First, at time t20, the vertical scanning circuit 20 controls the control signal PRES to H-level to turn on the reset transistor M2. Thereby, the floating diffusion FD is connected to a power supply node (voltage VDD) via the reset transistor M2, and the potential of the floating diffusion FD is reset.
Next, in a period from time t21 to time t22, the vertical scanning circuit 20 controls the control signal PTX to H-level to turn on the transfer transistor M1. Thereby, the cathode of the photoelectric converter PD is reset to the same potential as the floating diffusion FD, that is, the voltage VDD.
Next, at time t23, the vertical scanning circuit 20 controls the control signal PRES to L-level to turn off the reset transistor M2. Thereby, the reset state is released.
Other signals, that is, the control signal PSEL and the signals PL, ϕC, ϕC1a, ϕC1b, ϕCTN, ϕCTS, and ϕHn, are maintained at L-level during the reset period from time t20 to time t23.
A pixel signal output from the imaging element 100 by the readout operation described above is processed in the signal processing unit 200 in accordance with the flow described below.
The pixel signal input to the signal processing unit 200 is first input to the pre-stage processing unit 212 of the RGBW12 signal processing unit 210. The pre-stage processing unit 212 appropriately performs, on the pixel signal (input signal Din), correction processes (pre-stage processes) such as offset (OFFSET) correction and gain (GAIN) correction to create a corrected output signal (data Dout) (step S101). This process is typically expressed by the following Equation (1).
Dout=(Din−OFFSET)·GAIN (1)
This correction can be performed in various units. For example, correction may be performed on a pixel 12 basis, on a column amplifier 32 basis, on an analog-digital conversion (ADC) unit basis, on an output amplifier 52 basis, or the like. With the correction of the pixel signal, so-called fixed pattern noise can be reduced, and thereby a higher quality image can be obtained.
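As a sketch of how Equation (1) might be applied on a per-column basis, for example, consider the following fragment; the function name and the correction values are hypothetical:

```python
import numpy as np

def pre_stage_correction(d_in, offset, gain):
    """Apply Equation (1), Dout = (Din - OFFSET) * GAIN, elementwise.

    offset and gain may be scalars, per-column vectors (one value per
    column amplifier or ADC), or full per-pixel maps; NumPy
    broadcasting covers each of these correction units.
    """
    return (np.asarray(d_in, dtype=np.float64) - offset) * gain

# Hypothetical per-column correction of a small raw frame.
raw = np.array([[110, 121, 132, 143],
                [111, 122, 133, 144]], dtype=np.uint16)
col_offset = np.array([10.0, 12.0, 9.0, 11.0])
col_gain = np.array([1.00, 0.98, 1.02, 1.01])
dout = pre_stage_correction(raw, col_offset, col_gain)
```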
Next, the data Dout processed by the pre-stage processing unit 212 is input to the high accuracy interpolation unit 214. In the high accuracy interpolation unit 214, as illustrated in the drawings, a data separation process (step S102), an interpolation process (step S103), and a synthesis process (step S104) are performed on the data Dout.
The process in the high accuracy interpolation unit 214 will now be described more specifically by using the diagrams.
In step S102, the data Dout illustrated in Diagram (a) is separated into the data Dres for resolution, which consists of the values of the W pixels, and the data for color, which consists of the values of the RGB pixels (a data separation process). In the data Dres for resolution, the pixel values at the positions of the color pixels are initially unknown.
In step S103, an interpolation process is performed on the separated data Dres for resolution, and the pixel values of the four pixels whose values are unknown ("?" in the diagram) are filled in. The interpolation process of step S103 is performed in a luminance value acquisition unit (not illustrated) of the high accuracy interpolation unit 214. Various methods may be employed for interpolating the pixel values: for example, a method of acquiring the average of the surrounding eight pixels, a method of acquiring the average of the four pixels above, below, left, and right (a bilinear method), a method of detecting the edges of surrounding pixels and interpolating in the direction orthogonal to the edge direction, or a method of detecting a pattern such as a thin line and interpolating along its direction.
For purposes of illustrating the interpolation method, an X coordinate and a Y coordinate are added to the pixels in Diagram (c).
When a pixel value is interpolated by the average of the surrounding eight pixels, for example, the luminance interpolation value iWb(3, 3) at the pixel at the coordinates (3, 3) can be acquired from the following Equation (2).

iWb(3,3)={W(2,2)+W(3,2)+W(4,2)+W(2,3)+W(4,3)+W(2,4)+W(3,4)+W(4,4)}/8 (2)
While the average of the surrounding eight pixels is a simple interpolation method, the directivity of a luminance change may instead be detected from the pixel values of the surrounding pixels, and the interpolation of the pixel values in the data Dres for resolution may be performed based on the detected directivity. Performing an interpolation process based on the directivity of a luminance change enables more accurate interpolation of the pixel values.
In this scheme, four correlation values, namely, a correlation value (horizontal), a correlation value (vertical), a correlation value (left-diagonal), and a correlation value (right-diagonal), are derived from weighted differences between the W pixels surrounding the pixel of interest. Note that the sum of the coefficients of the respective difference terms is eight when deriving each of these four correlation values. This is intended to weight the calculation toward the places where the differences are obtained and to give the four correlation values the same total weighting. Further, the positions of the pixels where the differences are obtained (arrows in the diagram) are arranged line-symmetrically with respect to pixel B(3, 3). This enhances the symmetry when deriving the correlation and reduces errors in the correlation values.
The direction corresponding to the smallest of the four correlation values acquired in this way is the direction with a small gradient, that is, the direction with strong correlation. Accordingly, the data on the pixels aligned in the direction with strong correlation is used to acquire the interpolation value of a pixel. For example, when the correlation in the horizontal direction is strong (the correlation value (horizontal) is the smallest), the interpolation value of pixel B(3, 3) is the average of the data on pixel W(2, 3) and the data on pixel W(4, 3).
In this way, the direction with a small gradient is derived from the data on the W pixels near the pixel of interest (pixel B(3, 3) in this example), and interpolation is performed by estimating the W data of the pixel of interest from the W pixels aligned in the derived direction. This makes it possible to perform an interpolation process in accordance with gradient information on a per-pixel basis, which improves the resolution.
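The direction-dependent interpolation might be sketched as follows. This is a simplified illustration rather than the exact procedure: the correlation values described above use weighted difference terms whose coefficients sum to eight, whereas this sketch uses a single symmetric pixel pair per direction, and it assumes a layout, as in the RGBW12 arrangement, in which every color pixel is surrounded by W pixels:

```python
import numpy as np

def interpolate_w_directional(w_data, is_color):
    """Fill W values at the color-pixel sites of the resolution data.

    w_data:   2-D array of W values with placeholders at color sites
    is_color: boolean mask, True where a color pixel sits
    For each color pixel, a gradient is evaluated in four directions
    from the surrounding W pixels, and the pixel is interpolated along
    the direction of strongest correlation (smallest gradient).
    Border pixels are left untouched in this sketch.
    """
    out = w_data.astype(np.float64).copy()
    rows, cols = out.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            if not is_color[y, x]:
                continue
            pairs = [
                (out[y, x - 1], out[y, x + 1]),          # horizontal
                (out[y - 1, x], out[y + 1, x]),          # vertical
                (out[y - 1, x - 1], out[y + 1, x + 1]),  # diagonal
                (out[y - 1, x + 1], out[y + 1, x - 1]),  # other diagonal
            ]
            # Smallest absolute difference = strongest correlation.
            a, b = min(pairs, key=lambda p: abs(p[0] - p[1]))
            out[y, x] = (a + b) / 2.0
    return out
```

A fallback to the eight-pixel average of Equation (2) could be used where the four differences are nearly equal, at the cost of some sharpness.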
In step S104, the data Dint after interpolation illustrated in Diagram (c) and the data for color separated in step S102 are synthesized to generate the data Drgb (a synthesis process).
The synthesis of the data Drgb utilizes the fact that the local color ratio is strongly correlated with the luminance: a ratio of the color data representing a pixel block of four rows by four columns (a color ratio) is acquired and combined with the resolution data of the block. Various methods may be employed for acquiring the color ratio.
The first method is a method of normalizing the RGB data. This method is expressed by the following Equation (7), in which the value G is G=(Gr+Gb)/2.

RGB_ratio=[R/(R+G+B) G/(R+G+B) B/(R+G+B)] (7)
The second method is a method of obtaining the ratios of the RGB data to the luminance interpolation values iWr, iWg, and iWb. This method is expressed by the following Equation (8).

RGB_ratio=[R/iWr G/iWg B/iWb] (8)
The third method is a method of applying a normalizing process to Equation (8). This method is expressed by the following Equation (9), in which K=R/iWr+G/iWg+B/iWb.

RGB_ratio=[(R/iWr)/K (G/iWg)/K (B/iWb)/K] (9)

The third method has a greater effect of reducing color noise when separating a luminance value into the RGB color components than the second method. This will be described later.
In the present embodiment, the third method among the above methods is used.
With the use of the color ratio data RGB_ratio and the data on the W value or the interpolation values iWr, iWgr, iWgb, and iWb acquired in this way, the RGB values of each pixel can be acquired from the following Equation (10).
RGB=[R_ratio·W G_ratio·W B_ratio·W] (10)
In Equation (10), the values R_ratio, G_ratio, and B_ratio are the RGB components of the color ratio expressed by Equations (7) to (9) and are collectively denoted by the following Equation (11).
RGB_ratio=[R_ratio G_ratio B_ratio] (11)
Through the series of processes described above, the data on a pixel block of four rows by four columns is extended to the 4×4×3 data Drgb, which holds data on the three colors R, G, and B for each pixel, and is output.
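A compact sketch of this synthesis for one block of four rows by four columns is shown below; it follows the third method (Equation (9)) and Equation (10). The function interface and the averaging of iWgr and iWgb into a single G-site luminance are assumptions made for illustration:

```python
import numpy as np

def synthesize_block(colors, i_w, w_block):
    """Generate the 4x4x3 data Drgb for one pixel block.

    colors:  {'R', 'Gr', 'Gb', 'B'} color values read from the block
    i_w:     {'iWr', 'iWgr', 'iWgb', 'iWb'} interpolated luminance
             values at the corresponding color-pixel sites
    w_block: 4x4 array of W values (measured or interpolated)
    """
    g = (colors['Gr'] + colors['Gb']) / 2.0   # G = (Gr + Gb) / 2
    i_wg = (i_w['iWgr'] + i_w['iWgb']) / 2.0  # assumed analogously
    ratios = np.array([colors['R'] / i_w['iWr'],
                       g / i_wg,
                       colors['B'] / i_w['iWb']])
    ratios /= ratios.sum()          # normalization as in Equation (9)
    # Equation (10): each pixel's RGB = the color ratio times its W.
    return w_block[..., None] * ratios
```

The final line broadcasts the three ratio components over all 16 pixels of the block, so the whole block shares one color ratio while the per-pixel W values carry the resolution.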
Next, capturing and signal processing in a High Dynamic Range (HDR) mode in the imaging device according to the present embodiment will be described by using the drawings.
In capturing in the HDR mode, an image with a wide dynamic range is formed by acquiring a plurality of images captured in capturing conditions of different sensitivities and synthesizing the plurality of images. While there are various methods of changing the sensitivity at the time of capturing, a method of switching the gain and a method of switching the exposure time will be described here as typical examples.
An example of the method of switching the gain is switching the gain of the column amplifier 32 of the column readout circuit 30. As described above, the gain of the column amplifier 32 can be switched among the gains G1, G2, and G3 by controlling the signals ϕC1a and ϕC1b.
The timing of switching the gains G1, G2, and G3 is not particularly limited. For example, by switching the gain on a frame-by-frame basis, it is possible to output images captured in capturing conditions of different sensitivities on a frame-by-frame basis. Alternatively, multiple times of exposure may be performed within one frame to output a signal by changing the gain every time. Alternatively, a signal obtained by one time of exposure may be amplified for multiple times at different gains to output the amplified signal.
The exposure time can be switched by controlling the accumulation time of signal charges in the photoelectric converter PD. As described using the timing diagram, the accumulation time of each row is the period from the reset operation to the readout operation, and the exposure time can therefore be controlled by changing the timing of the reset scan relative to the readout scan.
The timing of switching the exposure time is not particularly limited. For example, images captured in capturing conditions of different sensitivities can be output on a frame-by-frame basis by switching the accumulation time on a frame-by-frame basis. Alternatively, multiple times of exposure may be performed within one frame to output a signal by changing the accumulation time every time.
Note that the scheme for switching the sensitivity is not limited to the above, and other schemes may be applied. For example, the gain of the readout circuit within a pixel can be changed by switching the capacitance of the floating diffusion. Further, the sensitivity may be switched by using any combination of the schemes described above.
Next, a process in the signal processing unit 200 of an image captured in the HDR mode in the imaging element 100 will be described.
When capturing is performed in the HDR mode, a plurality of images captured in capturing conditions of different sensitivities are output from the imaging element 100 to the signal processing unit 200. An interpolation process is performed on the plurality of images input to the signal processing unit 200 in the same manner as in steps S101 to S103 described above, and a luminance value (interpolation value) iW at each color pixel is acquired.
Consider the output values of the pixels as a function of the incident light amount in the two capturing conditions, where the color values ColH and ColL are the values of a color pixel acquired in the high sensitivity condition and the low sensitivity condition, respectively, and the luminance values iWH and iWL are the corresponding interpolation values. The value "saturation" on the vertical axis of the graph indicates a value corresponding to the saturation level of the pixel 12. As described above, since the W pixel has a higher sensitivity than the RGB pixels, the light amount at which the luminance value iW reaches the saturation level is less than the light amount at which the color value Col reaches the saturation level under the same capturing condition. For the purpose of illustration here, the range of light amounts up to a light amount Lx1 at which the luminance value iWH is saturated is defined as "range 1", the range from the light amount Lx1 to a light amount Lx2 at which the color value ColH is saturated is defined as "range 2", and the range from the light amount Lx2 to a light amount Lx3 at which the color value ColL is saturated is defined as "range 3".
When the incident light amount is in the range 1, acquisition of a color ratio in the color pixel is performed in accordance with the color value ColH in the color pixel of interest and the luminance value iWH that is an interpolation value acquired from the surrounding W pixels of the color pixel of interest. Further, when the incident light amount is in the range 3, acquisition of a color ratio in the color pixel is performed in accordance with the color value ColL in the color pixel of interest and the luminance value iWL that is an interpolation value acquired from the surrounding W pixels of the color pixel of interest.
On the other hand, when the incident light amount is in the range 2, the luminance value iWH is saturated, and thus no acquisition of a color ratio in the color pixel can be performed in accordance with the color value ColH and the luminance value iWH. Accordingly, when the incident light amount is in the range 2, acquisition of a color ratio in the color pixel is performed in accordance with the color value ColH in the color pixel of interest and the luminance value iWL that is an interpolation value acquired from the surrounding W pixels of the color pixel of interest.
That is, the color acquisition unit acquires a color ratio by using a color value of the first pixel acquired in the first capturing condition and a luminance value of the first pixel based on a signal of the second pixel acquired in the second capturing condition having a lower sensitivity than the first capturing condition. This acquisition is performed when the luminance value of the first pixel based on the signal of the second pixel acquired in the first capturing condition is greater than or equal to a level at which the second pixel is saturated. Further, the color acquisition unit acquires a color ratio by using the color value of the first pixel acquired in the first capturing condition and the luminance value of the first pixel based on the signal of the second pixel acquired in the first capturing condition. This acquisition is performed when the luminance value of the first pixel based on the signal of the second pixel acquired in the first capturing condition is less than a level at which the second pixel is saturated.
For example, in the range 2, let P2 be the color value ColH, let P3 be the luminance value iWL, and let N be the sensitivity ratio between the first capturing condition and the second capturing condition. The color ratio Col/iW is then expressed by the following Equation (12).
Col/iW=P2/(P3×N) (12)
Note that it is also possible to use the color value ColL and the luminance value iWL to acquire the color ratio when the incident light amount is in the range 2. However, acquiring the color ratio by using the color value ColH as described above allows an image with a better S/N ratio to be acquired.
In particular, in the synthesis process with the RGBW12 arrangement, the same color ratio is used for the 16 pixels of a pixel block, as illustrated in the diagrams, so any noise in the color ratio affects the entire block.
It is therefore preferable to use the value with a good S/N ratio acquired from Equation (12) in the synthesis process when the incident light amount is in the range 2.
The acquisition process of the color ratio in a color pixel described above can be implemented in accordance with the following flow.
First, in step S201, the data separation process of step S102 described above is performed on the data of the plurality of images captured in capturing conditions of different sensitivities, and the values of the color pixels (the color values ColH and ColL) are acquired.
Next, in step S202, the interpolation process of step S103 described above is performed on the data of the plurality of images captured in capturing conditions of different sensitivities, and the luminance values at the color pixels (the luminance values iWH and iWL) are acquired.
Next, in step S203, it is determined whether or not the luminance value iWH is saturated. If the luminance value iWH is less than or equal to the saturation level of the pixel (step S203, "No"), it is determined that the luminance value iWH indicates a correct value, and the color value ColH and the luminance value iWH are used for the acquisition of the color ratio (step S204). If the luminance value iWH is greater than the saturation level of the pixel (step S203, "Yes"), it is determined that the luminance value iWH is saturated, and the color value ColH and the luminance value iWL are used for the acquisition of the color ratio (step S205).
Next, in step S206, the synthesis process in step S104 described above is performed, and RGB data of each pixel is acquired by using data of the color ratio acquired in step S204 or step S205.
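The branch in steps S203 to S205 can be summarized as follows. This is a sketch under stated assumptions: the names are invented, the scaling by the sensitivity ratio N follows Equation (12), and the final fallback to ColL and iWL for the range 3 follows the earlier description of the ranges rather than the flowchart, which covers only the first two cases:

```python
def acquire_color_over_w(col_h, iw_h, col_l, iw_l, n_ratio, sat_level):
    """Return the color ratio Col/iW for one color pixel.

    col_h, iw_h: color value and interpolated luminance acquired in
                 the high sensitivity (first) capturing condition
    col_l, iw_l: the same pair for the low sensitivity (second) one
    n_ratio:     sensitivity ratio N between the two conditions
    sat_level:   saturation level of the pixel signal
    """
    if iw_h <= sat_level:
        # Range 1 (step S204): iWH is valid, so use ColH / iWH.
        return col_h / iw_h
    if col_h <= sat_level:
        # Range 2 (step S205): iWH saturated but ColH still valid.
        # Equation (12): Col/iW = P2 / (P3 * N), with P2 = ColH and
        # P3 = iWL scaled up to the first condition by N.
        return col_h / (iw_l * n_ratio)
    # Range 3: ColH is also saturated; use the low sensitivity pair.
    return col_l / iw_l
```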
By performing the process described above on images captured in the HDR mode, it is possible to acquire a high quality image having a wide dynamic range and high color reproducibility.
As described above, according to the present embodiment, an imaging device that can acquire a high quality image having a wide dynamic range and a high color reproducibility can be realized.
An imaging system according to a second embodiment of the present invention will be described with reference to the drawings.
An imaging system 300 of the present embodiment includes an imaging device to which the configuration described in the above first embodiment is applied. A specific example of the imaging system 300 may be a digital still camera, a digital camcorder, a surveillance camera, or the like.
The imaging system 300 illustrated as an example in the drawings includes an imaging device 301 to which the configuration of the imaging device described in the first embodiment is applied.
The imaging system 300 further includes a signal processing unit 308 that processes an output signal output from the imaging device 301. The signal processing unit 308 performs signal processing operations such as various corrections and compression on an input signal, if necessary, and outputs the resulting signal. For example, the signal processing unit 308 performs, on the input signal, predetermined image processing such as a conversion process from RGB pixel output signals to the Y, Cb, Cr color space, gamma correction, or the like. Further, the signal processing unit 308 may have some or all of the functions of the signal processing unit 200 of the imaging device described in the first embodiment.
The imaging system 300 further includes a memory unit 310 for temporarily storing image data therein and an external interface unit (external I/F unit) 312 for communicating with an external computer or the like. The imaging system 300 further includes a storage medium 314 such as a semiconductor memory used for performing storage or readout of imaging data and a storage medium control interface unit (storage medium control I/F unit) 316 used for performing storage or readout on the storage medium 314. Note that the storage medium 314 may be embedded in the imaging system 300 or may be removable.
The imaging system 300 further includes a general control/operation unit 318 that performs various computations and controls the entire digital still camera, and a timing generation unit 320 that outputs various timing signals to the imaging device 301 and the signal processing unit 308. Here, the timing signals and the like may be input from the outside, and the imaging system 300 may include at least the imaging device 301 and the signal processing unit 308 that processes the output signal output from the imaging device 301. The general control/operation unit 318 and the timing generation unit 320 may be configured to perform some or all of the control functions of the imaging device 301.
The imaging device 301 outputs an imaging signal to the signal processing unit 308. The signal processing unit 308 performs predetermined signal processing on the imaging signal output from the imaging device 301 and outputs image data. Further, the signal processing unit 308 generates an image by using the imaging signal. The image generated by the signal processing unit 308 is stored in the storage medium 314, for example. Further, the image generated by the signal processing unit 308 is displayed as a moving image or a still image on a monitor formed of a liquid crystal display or the like. An image stored in the storage medium 314 can be hard-copied by a printer or the like.
By forming an imaging system using the imaging device of the first embodiment, it is possible to realize an imaging system capable of acquiring a higher quality image.
An imaging system and a movable object according to a third embodiment of the present invention will be described by using the drawings. In the present embodiment, an imaging system 400 for an on-vehicle camera is described as an example; the imaging system 400 includes a collision determination unit 418 that determines the probability of a collision based on distance information on an object.
The imaging system 400 is connected to a vehicle information acquisition device 420 and can acquire vehicle information such as a vehicle speed, a yaw rate, and a steering angle. Further, the imaging system 400 is connected to a control ECU 430, which is a control device that outputs a control signal for causing the vehicle to generate a braking force based on the determination result of the collision determination unit 418. That is, the control ECU 430 is an example of a movable object control unit that controls a movable object based on the distance information. Further, the imaging system 400 is also connected to an alert device 440 that issues an alert to the driver based on the determination result of the collision determination unit 418. For example, when the collision probability is high as the determination result of the collision determination unit 418, the control ECU 430 performs vehicle control to avoid a collision or reduce damage by applying the brakes, releasing the accelerator, suppressing engine output, or the like. The alert device 440 alerts the user by sounding an alarm, displaying alert information on a display of a car navigation system or the like, vibrating a seat belt or a steering wheel, or the like.
In the present embodiment, an area around a vehicle, for example, a front area or a rear area is captured by using the imaging system 400.
Although an example of control for avoiding a collision with another vehicle has been described above, the present embodiment is also applicable to automatic driving control for following another vehicle, automatic driving control for staying within a traffic lane, and the like. Furthermore, the imaging system is not limited to vehicles such as the one described and can be applied to movable objects (moving apparatuses) such as ships, airplanes, or industrial robots, for example. In addition, the imaging system can be widely applied to devices that utilize object recognition, such as an intelligent transportation system (ITS), without being limited to movable objects.
The present invention is not limited to the embodiments described above, and various modifications are possible.
For example, an example in which a part of the configuration of any of the embodiments is added to another embodiment or an example in which a part of the configuration of any of the embodiments is replaced with a part of the configuration of another embodiment is one of the embodiments of the present invention.
Further, although the case of forming a high dynamic range image by synthesizing a plurality of images captured in capturing conditions of different sensitivities has been described as an example in the above embodiments, it is not necessarily required to form a high dynamic range image. For example, capturing in the low sensitivity mode may be used only for the acquisition of the luminance value iWL of the color pixels. This can improve the S/N ratio of an image captured in the high sensitivity mode.
Further, the circuit configurations of the pixel 12 and the column readout circuit 30 are not limited to those illustrated in the drawings and may be modified as appropriate.
Further, although the case where the RGBW12 arrangement is employed as the color filter arrangement has been described in the above embodiments, the color filter is not necessarily required to be of the RGBW12 arrangement. For example, a color filter of an RGBW arrangement having a different ratio of W pixels, such as an RGBW8 arrangement, may be employed. Alternatively, a color filter of a CMYW arrangement including a C pixel having a cyan CF, an M pixel having a magenta CF, a Y pixel having a yellow CF, and W pixels may be employed, for example.
Further, although an imaging element that performs so-called rolling shutter drive in which the accumulation time of pixels on each row is started sequentially on a row-by-row basis has been described as an example in the above embodiment, the present invention is not always limited to the imaging element that performs the rolling shutter drive. For example, the present invention can be applied to an imaging element that performs so-called global electronic shutter drive with the same accumulation time for pixels on respective rows.
Further, the imaging systems described in the second and third embodiments are examples of imaging systems to which the imaging device of the present invention may be applied, and the imaging system to which the imaging device of the present invention can be applied is not limited to the illustrated configurations.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2017-236393, filed Dec. 8, 2017, which is hereby incorporated by reference herein in its entirety.