This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2013-159842, filed on Jul. 31, 2013; the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to a solid-state imaging device.
In solid-state imaging devices, there is a demand for ultra-high-speed moving images, such as slow-motion moving images. There are also digital cameras that can pick up a moving image at a frame rate of 1000 fps.
In general, according to one embodiment, a solid-state imaging device includes a pixel array unit, a binning control unit, a frame-read control unit, and a reconfiguration processing unit. In the pixel array unit, pixels configured to accumulate photoelectrically-converted charges are arranged in a matrix shape. The binning control unit performs control to lump together several pixels among the pixels between different lines of the pixel array unit. The frame-read control unit thins out and reads the lines to vary thinning positions of the lines lumped together by the binning control unit among two or more frames. The reconfiguration processing unit combines the two or more frames, in which the thinning positions are different, to thereby configure one frame.
Exemplary embodiments of a solid-state imaging device will be explained below in detail with reference to the accompanying drawings. The present invention is not limited to the following embodiments.
In
In the solid-state imaging device, a vertical scanning circuit 2 configured to scan read target pixels PC in the vertical direction, a load circuit 3 configured to perform a source follower operation with the pixels PC to thereby read out signals from the pixels PC to the vertical signal lines Vlin column by column, a column ADC circuit 4 configured to detect signal components of the pixels PC column by column by CDS (correlated double sampling), a reference voltage generating circuit 6 configured to output a reference voltage VREF to the column ADC circuit 4, and a timing control circuit 7 configured to control timing for readout from and accumulation in the pixels PC are provided. As the reference voltage VREF, a ramp wave can be used.
In the pixel array unit 1, to colorize a picked-up image, a Bayer array HP including four pixels PC as one set can be formed. In the Bayer array HP, two pixels for green Gr and Gb are arranged in one diagonal direction and one pixel for red R and one pixel for blue B are arranged in the other diagonal direction.
In the timing control circuit 7, a binning control unit 7A and a frame-read control unit 7B are provided. The binning control unit 7A performs control to lump together several pixels PC among the pixels PC between different lines of the pixel array unit 1. The frame-read control unit 7B thins out and reads lines to vary thinning positions of the lines lumped together by the binning control unit 7A among two or more frames.
In the solid-state imaging device, a reconfiguration processing unit 8 configured to combine two or more frames, in which the thinning positions are different, to thereby configure one frame is provided. In the reconfiguration processing unit 8, a frame memory 8A configured to store an output signal S1 of the column ADC circuit 4 frame by frame is provided.
The vertical scanning circuit 2 scans the pixels PC in the vertical direction to thereby select the pixels PC in a row direction RD. The load circuit 3 performs a source follower operation with the pixels PC to thereby transmit signals read from the pixels PC to the column ADC circuit 4 via the vertical signal lines Vlin. The reference voltage generating circuit 6 sets a ramp wave as the reference voltage VREF and sends the ramp wave to the column ADC circuit 4. The column ADC circuit 4 performs a count operation for a clock until the signal level and the reset level read out from the pixel PC each coincide with the level of the ramp wave. The column ADC circuit 4 then calculates a difference between the signal level and the reset level to thereby detect the signal components of the pixels PC by CDS and outputs the signal components as the output signal S1.
The binning control unit 7A performs control to read out charges of the pixels PC between different lines of the pixel array unit 1 all together. That is, the binning control unit 7A can perform charge addition binning between the different lines of the pixel array unit 1. For example, when K (K is an integer equal to or larger than 2) lines are read all together, it is possible to multiply the sensitivity by K and multiply the angle of view by K.
The frame-read control unit 7B thins out and reads lines to vary thinning positions of the lines lumped together by the binning control unit 7A among two or more frames. For example, when the thinning positions are circulated between two frames A and B, odd-numbered lines after binning can be thinned out in the frame A and even-numbered lines after binning can be thinned out in the frame B. When the number of frames in which the thinning positions are circulated is represented as M (M is an integer equal to or larger than 2), it is possible to multiply the frame rate by M. When the exposure period of one frame is represented as EX, one frame can be read in a time of EX/M, so that its exposure period overlaps the exposure period of another frame by EX×(M−1)/M. Consequently, compared with the sensitivity obtained when exposure times do not overlap between frames, it is possible to multiply the sensitivity by M.
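The circulation of thinning positions can be sketched in a few lines of Python (a hypothetical illustration; the 0-based indexing and the helper name are assumptions, not part of the embodiment):

```python
# Hypothetical sketch: with M frames in the circulation, frame m reads every
# M-th binned line starting at offset m, so the M frames together cover all
# lines. 0-based indices are an illustrative assumption.
def lines_read_in_frame(m, num_binned_lines, M=2):
    """Return the binned-line indices read in frame m."""
    return [i for i in range(num_binned_lines) if i % M == m % M]

# With M = 2, frame A reads one parity and frame B the other, so combining
# the two frames restores every binned line.
frame_a = lines_read_in_frame(0, 8)  # [0, 2, 4, 6]
frame_b = lines_read_in_frame(1, 8)  # [1, 3, 5, 7]
assert sorted(frame_a + frame_b) == list(range(8))
```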
The reconfiguration processing unit 8 combines two or more frames, in which the thinning positions are different, to thereby configure one frame and outputs the frame as an output signal S2. For example, when it is assumed that the two frames A and B are combined to configure one frame, it is possible to maintain an angle of view without deteriorating a frame rate.
That is, the charge addition binning is performed on the K lines, the thinning positions are circulated among the M frames, the exposure periods are set to overlap among the frames, and the frames are reconfigured. Consequently, at the same frame rate, it is possible to multiply the sensitivity by K×M and multiply the angle of view by K. Alternatively, it is possible to multiply the sensitivity by K, multiply the angle of view by K, and multiply the frame rate by M.
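The trade-off can be checked with simple arithmetic (the values of K and M and the base figures below are assumed purely for illustration):

```python
# Assumed illustrative values, not from the embodiment.
K, M = 4, 2                 # lines binned together, frames in the circulation
base_sensitivity = 1.0
base_frame_rate = 60.0      # fps, assumed

# Option 1: keep the frame rate; binning (xK) and overlapped exposure (xM)
# together multiply the sensitivity by K*M.
sensitivity_same_rate = base_sensitivity * K * M

# Option 2: keep only the binning gain (xK) and spend the circulation on
# speed, multiplying the frame rate by M.
sensitivity_fast = base_sensitivity * K
frame_rate_fast = base_frame_rate * M

assert sensitivity_same_rate == 8.0
assert (sensitivity_fast, frame_rate_fast) == (4.0, 120.0)
```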
In
A source of the readout transistor Td is connected to the photodiode PD. A read signal READ is input to a gate of the readout transistor Td. A source of the reset transistor Tc is connected to a drain of the readout transistor Td. A reset signal RESET is input to a gate of the reset transistor Tc. A drain of the reset transistor Tc is connected to a power supply potential VDD. A row selection signal ADRES is input to a gate of the row selection transistor Ta. A drain of the row selection transistor Ta is connected to the power supply potential VDD. A source of the amplifier transistor Tb is connected to the vertical signal line Vlin. A gate of the amplifier transistor Tb is connected to the drain of the readout transistor Td. A drain of the amplifier transistor Tb is connected to a source of the row selection transistor Ta.
The horizontal control line Hlin shown in
In
After the charges accumulated in the photodiode PD in the non-exposure period NX are discharged to the power supply potential VDD, when the read signal READ changes to the low level, the photodiode PD starts accumulation of effective signal charges and shifts from the non-exposure period NX to an exposure period EX.
Subsequently, when the row selection signal ADRES changes to the high level (ta2), the row selection transistor Ta of the pixel PC is turned on. The power supply potential VDD is applied to the drain of the amplifier transistor Tb.
When the reset signal RESET changes to the high level in an ON state of the row selection transistor Ta (ta3), the reset transistor Tc is turned on. Excess charges generated by a leak current or the like in the floating diffusion FD are reset. A voltage corresponding to a reset level of the floating diffusion FD is applied to the gate of the amplifier transistor Tb. The voltage of the vertical signal line Vlin follows the voltage applied to the gate of the amplifier transistor Tb, whereby the pixel signal VSIG at the reset level is output to the vertical signal line Vlin.
The pixel signal VSIG at the reset level is input to the column ADC circuit 4 and compared with the reference voltage VREF. The pixel signal VSIG at the reset level is converted into a digital value based on a result of the comparison and retained.
Subsequently, when the read signal READ changes to the high level in the ON state of the row selection transistor Ta of the pixel PC (ta4), the readout transistor Td is turned on. Charges accumulated in the photodiode PD in the exposure period EX are transferred to the floating diffusion FD. A voltage corresponding to a read level of the floating diffusion FD is applied to the gate of the amplifier transistor Tb. The voltage of the vertical signal line Vlin follows the voltage applied to the gate of the amplifier transistor Tb, whereby the pixel signal VSIG at a signal read level is output to the vertical signal line Vlin.
The pixel signal VSIG at the signal read level is input to the column ADC circuit 4 and compared with the reference voltage VREF. A difference between the pixel signal VSIG at the reset level and the pixel signal VSIG at the signal read level is converted into a digital value based on a result of the comparison and output as the output signal S1 corresponding to the exposure period EX.
In
The line LA1 can be generated by charge addition binning for the lines L1, L3, L5, and L7. The line LA2 can be generated by charge addition binning for the lines L2, L4, L6, and L8. The line LA3 can be generated by charge addition binning for the lines L17, L19, L21, and L23. The line LA4 can be generated by charge addition binning for the lines L18, L20, L22, and L24. The line LB1 can be generated by charge addition binning for the lines L9, L11, L13, and L15. The line LB2 can be generated by charge addition binning for the lines L10, L12, L14, and L16. The line LB3 can be generated by charge addition binning for the lines L25, L27, L29, and L31. The line LB4 can be generated by charge addition binning for the lines L26, L28, L30, and L32.
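The grouping in this example follows a regular pattern that can be reproduced programmatically (a hypothetical reconstruction; the function and the 8-line block size are inferred from the listed lines, not stated in the text):

```python
def binned_groups(frame, num_lines=32, block=8):
    """Hypothetical reconstruction of the example's grouping: frame 0 (A)
    takes the even 8-line blocks and frame 1 (B) the odd blocks; within a
    block, lines of the same parity (the same Bayer row color) are
    charge-added into one binned line. Line numbers are 1-based, as in
    the text."""
    groups = []
    for b in range(frame, num_lines // block, 2):
        start = b * block + 1
        groups.append(list(range(start, start + block, 2)))      # odd lines
        groups.append(list(range(start + 1, start + block, 2)))  # even lines
    return groups

# Frame A reproduces LA1..LA4 exactly as listed in the text.
assert binned_groups(0) == [[1, 3, 5, 7], [2, 4, 6, 8],
                            [17, 19, 21, 23], [18, 20, 22, 24]]
```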
At this point, in the frame FA before the binning, time t1 to t3 can be set as an exposure period and time t2 to t3 can be set as a read period Tf. In the frame FB before the binning, time t2 to t4 can be set as an exposure period and time t3 to t4 can be set as the read period Tf. The time t2 can be set in the center between the time t1 and the time t3. The time t3 can be set in the center between the time t2 and the time t4.
When the frames FA and FB are read, as shown in
Consequently, in the examples shown in
A frame reconfiguration method is explained below. In the following explanation, as an example, K=1 and M=2.
In
A value of a thinned pixel of the present frame Fi is interpolated based on values of same-color pixels above and below it in the present frame Fi and a value of the pixel of the past frame Fi−1 corresponding to the position of the thinned pixel. For example, when a value of a pixel P4 of the present frame Fi is interpolated, a weighted average of the values of same-color pixels P2 and P3 above and below it in the present frame Fi and the value of a pixel P1 of the past frame Fi−1 can be calculated.
A value of an original pixel of the present frame Fi is converted based on the value of the original pixel of the present frame Fi and values of same-color pixels above and below the pixel of the past frame Fi−1 corresponding to the position of the original pixel. For example, when a value of a pixel P7 of the present frame Fi is converted, a weighted average of the value of the pixel P7 of the present frame Fi and the values of pixels P5 and P6 of the past frame Fi−1 can be calculated.
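The two steps above can be sketched as follows (the weighting coefficients are assumptions; the text only calls for a weighted average without giving values):

```python
# Assumed weights that sum to 1; the embodiment does not specify them.
def interpolate_thinned(p_above, p_below, p_past,
                        w_present=0.25, w_past=0.5):
    """Fill a thinned pixel of Fi from the same-color pixels above and
    below it in Fi and the co-located pixel of the past frame Fi-1."""
    return w_present * (p_above + p_below) + w_past * p_past

def convert_original(p_now, p_past_above, p_past_below,
                     w_now=0.5, w_past=0.25):
    """Re-weight an original pixel of Fi with the same-color pixels above
    and below its position in the past frame Fi-1."""
    return w_now * p_now + w_past * (p_past_above + p_past_below)

# A flat region is left unchanged by both operations.
assert interpolate_thinned(10, 10, 10) == 10.0
assert convert_original(10, 10, 10) == 10.0
```

Because each output mixes pixels of two frames, moving edges are averaged across frames, which produces the intentional blur described next.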
When the frame FS is configured, an image of the frame FS can be blurred by calculating an average of values of peripheral pixels between the frames Fi−1 and Fi. Therefore, it is possible to reduce artifacts such as jaggies and false colors in a high-speed moving image.
In
A value of a thinned pixel of the future frame Fi+1 is interpolated based on a value of a pixel of the present frame Fi corresponding to the position of the thinned pixel. For example, when a value of the pixel P2 of the future frame Fi+1 is interpolated, a value of the pixel P1 of the present frame Fi can be used. A value of an original pixel of the future frame Fi+1 is used as it is.
When the frame FS is configured, it is possible to prevent deterioration in resolution by using values of pixels of the frames Fi and Fi+1 as they are. It is possible to set the resolution high compared with the method shown in
In
A value of a thinned pixel of the present frame Fi is interpolated based on values of pixels of the past frame Fi−1 and the future frame Fi+1 corresponding to the position of the thinned pixel. For example, when a value of the pixel P3 of the present frame Fi is interpolated, an average of a value of the pixel P1 of the past frame Fi−1 and a value of the pixel P2 of the future frame Fi+1 can be calculated. A value of an original pixel of the present frame Fi is used as it is.
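A minimal sketch of this reconstruction, assuming per-line pixel lists and a boolean mask marking the thinned positions (both conventions are illustrative, not from the text):

```python
def reconstruct_line(past_line, future_line, present_line, thinned_mask):
    """Where Fi was thinned, use the mean of the co-located pixels of
    Fi-1 and Fi+1; elsewhere keep Fi's own value as it is."""
    return [(p + f) / 2 if thinned else now
            for p, f, now, thinned
            in zip(past_line, future_line, present_line, thinned_mask)]

# First position was thinned (mean of past=2 and future=4); second was
# an original pixel and is kept as-is.
assert reconstruct_line([2, 0], [4, 0], [0, 7], [True, False]) == [3.0, 7]
```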
When the frame FS is configured, it is possible to suppress deterioration in resolution by using values of pixels of the frames Fi−1, Fi, and Fi+1. It is possible to set the resolution high compared with the method shown in
In
When the sum of the difference absolute values exceeds a predetermined value, the frame reconfiguration method shown in
Consequently, in a part where the motion of the object is large, it is possible to reduce artifacts while compensating deterioration in resolution with a blur. In a part where the motion of the object is small, it is possible to increase resolution without causing artifacts.
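The motion-adaptive switch can be sketched as a per-position selection (the threshold value and the names are assumptions; the text only says "a predetermined value"):

```python
def select_reconstruction(sad, blurred_value, sharp_value, threshold=32):
    """Use the blur-based result where the motion measure (sum of
    difference absolute values) is large, and the resolution-preserving
    result where it is small. The threshold of 32 is an assumed
    placeholder."""
    return blurred_value if sad > threshold else sharp_value

assert select_reconstruction(100, "blur", "sharp") == "blur"
assert select_reconstruction(5, "blur", "sharp") == "sharp"
```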
In
In the position of a thinned pixel of the present frame Fi, an average of difference absolute values can be calculated between values of same-color pixels above and below the thinned position in the present frame Fi and values of the pixels of the past frame Fi−2 corresponding to the positions of those same-color pixels. For example, in the position of the pixel P3 of the present frame Fi, a difference absolute value between a value of the pixel P5 of the present frame Fi and a value of the pixel P4 of the past frame Fi−2 is calculated, a difference absolute value between a value of the pixel P7 of the present frame Fi and a value of the pixel P6 of the past frame Fi−2 is calculated, and the two difference absolute values are averaged.
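For the pixel P3 example above, the measure can be written directly (the function name and argument order are illustrative assumptions):

```python
def motion_measure(p_above_now, p_below_now, p_above_past, p_below_past):
    """Average of the absolute differences between the same-color pixels
    above/below the thinned position in Fi and the co-located pixels of
    the past frame Fi-2 (P5 vs P4 and P7 vs P6 in the text's example)."""
    return (abs(p_above_now - p_above_past)
            + abs(p_below_now - p_below_past)) / 2

# Static scene: the measure is zero, so the resolution-preserving method
# would be chosen.
assert motion_measure(7, 7, 7, 7) == 0.0
```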
When an inter-frame error used for frame reconfiguration is calculated, the method shown in
In
In
In the frame A, the lines L1 to L4 are read all together, whereby the line LA1 is read. The lines L13 to L16 are read all together, whereby the line LA2 is read. In the frame B, the lines L5 to L8 are read all together, whereby the line LB1 is read. The lines L17 to L20 are read all together, whereby the line LB2 is read. In the frame C, the lines L9 to L12 are read all together, whereby the line LC1 is read. The lines L21 to L24 are read all together, whereby the line LC2 is read.
The frames A, B, and C are configured by the pixels U1 to U4. That is, in the frame A, the pixel for green Gr and the pixel for red R are read in the line LA1 and the pixel for green Gb and the pixel for blue B are read in the line LA2. In the frame B, the pixel for green Gb and the pixel for blue B are read in the line LB1 and the pixel for green Gr and the pixel for red R are read in the line LB2. In the frame C, the pixel for green Gr and the pixel for red R are read in the line LC1 and the pixel for green Gb and the pixel for blue B are read in the line LC2. Therefore, in the frames A, B, and C, the pixel array is the Bayer array. It is possible to make it easy to reconfigure one frame from the frames A, B, and C.
In
In the frame A, the pixel for green Gb and the pixel for red R are read in the line LA1 and the pixel for green Gr and the pixel for blue B are read in the line LA2. In the frame B, the pixel for green Gr and the pixel for blue B are read in the line LB1 and the pixel for green Gb and the pixel for red R are read in the line LB2. In the frame C, the pixel for green Gb and the pixel for red R are read in the line LC1 and the pixel for green Gr and the pixel for blue B are read in the line LC2. Therefore, in the frames A, B, and C, it is possible to read signals from all columns while setting a pixel array as the Bayer array. It is possible to improve AD conversion speed compared with the method shown in
In
The frames A and B are configured by the pixels U1 to U4. That is, in the frame A, the pixel for green Gb and the pixel for red R are read in the lines LA1 and LA3 and the pixel for green Gr and the pixel for blue B are read in the lines LA2 and LA4. In the frame B, the pixel for green Gb and the pixel for red R are read in the lines LB1 and LB3 and the pixel for green Gr and the pixel for blue B are read in the lines LB2 and LB4. Therefore, in the frames A and B, the pixel array is the Bayer array and all phases are aligned. Therefore, it is possible to make it easy to reconfigure one frame from the frames A and B.
In
The frames A to D are configured by the pixels U1 to U4. Therefore, a pixel array is the Bayer array and all phases are aligned. Therefore, it is possible to make it easy to reconfigure one frame from the frames A to D.
In
The imaging optical system 14 captures light from an object and forms an object image. The solid-state imaging device 15 picks up the object image. The ISP 16 subjects an image signal obtained by the image pickup in the solid-state imaging device 15 to signal processing. The storing unit 17 stores an image subjected to the signal processing in the ISP 16. The storing unit 17 outputs the image signal to the display unit 18 according to, for example, operation by a user. The display unit 18 displays the image according to the image signal input from the ISP 16 or the storing unit 17. The display unit 18 is, for example, a liquid crystal display. The camera module 12 can be applied to an electronic apparatus such as a portable terminal with a camera besides the digital camera 11.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Number | Date | Country | Kind
---|---|---|---
2013-159842 | Jul 2013 | JP | national