The present disclosure relates to a configuration of an imaging device, and particularly relates to a pixel configuration in which a plurality of photoelectric conversion elements share a floating diffusion and pixels having different sensitivities are provided.
In recent years, imaging devices have been used as sensing means for implementing automated driving and driving assistance functions of automobiles. Automated driving requires detecting an object and measuring the distance to the object irrespective of ambient brightness or the like, and hence requires the ability to acquire an image with a high dynamic range. As methods of increasing the dynamic range, a method which synthesizes images having different exposure periods and a method which synthesizes a plurality of images based on charges acquired from pixels having different sensitivities are known. When HDR synthesis processing is performed with one imaging device, the former suffers from differing exposure timings, so an imaging device mounted on an automobile moving at high speed must compensate for differences in blur amount and in the position of a moving object. Accordingly, when HDR synthesis processing is performed with one imaging device, the latter is preferable in principle.
In addition, with regard to the distance measurement function, distance measurement is often performed by calculating parallax information from images acquired by two imaging devices spaced apart from each other in the horizontal direction; however, a plurality of devices are necessary, which raises cost. In contrast, a method in which a plurality of photoelectric conversion units are disposed under one microlens and distance measurement is performed on the basis of the resulting parallax (image shift) is known, for example as an autofocus technique of an imaging device. With this method, it is possible to reduce the number of imaging devices and suppress cost.
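For illustration only (this relation is not part of the disclosed configuration), the distance obtained from such parallax can be expressed under a simple pinhole-camera assumption as

$$ Z = \frac{f \cdot B}{d}, $$

where $Z$ is the distance to the object, $f$ the focal length, $B$ the baseline between the two viewpoints (or the effective pupil-division baseline), and $d$ the detected parallax; the symbols are introduced here only for this explanation.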
Enlarging the dynamic range requires increasing sensitivity in low light, and one approach is to increase the area of the photoelectric conversion unit. For example, there is known a method in which a plurality of photoelectric conversion units share one floating diffusion (FD), so that the area of each photoelectric conversion unit is increased while an increase in the number of pixel circuits is suppressed. For example, Japanese Patent Application Publication No. 2010-028423 discloses a structure in which four photoelectric conversion units share one FD and charges are read from that same FD, thereby increasing the area of the light receiving unit. In addition, Japanese Patent Application Publication No. 2014-33054 discloses a method in which the FD is moved away from the position immediately under the microlens so that the area of the photoelectric conversion unit is further increased. In this configuration, charges of the four photoelectric conversion units under the same microlens can be read from different FDs simultaneously.
In the case of Japanese Patent Application Publication No. 2010-028423, charges of the individual photoelectric conversion units are read from the same FD, and hence it is necessary to perform the reading a plurality of times. Accordingly, in order to determine parallax, it is necessary to accumulate a pixel signal based on the charge read earlier in a memory until a desired parallax image is obtained. In addition, Japanese Patent Application Publication No. 2010-028423 also discloses a configuration in which charges are read simultaneously from different FDs provided in the individual photoelectric conversion units. In this case, the accumulation of the pixel signal in the memory becomes unnecessary, but a photosensitive area is reduced as compared with the configuration in which the pixel signal is accumulated in the memory. On the other hand, in the case of Japanese Patent Application Publication No. 2014-33054, charges of the photoelectric conversion units under the same microlens are read from different FDs, and hence it becomes possible to read the parallax image simultaneously, and it is possible to suppress a memory amount for parallax acquisition as compared with the configuration of Japanese Patent Application Publication No. 2010-028423.
In addition, for the enlargement of the dynamic range, it is required that saturation does not occur even in high light. As a mechanism for providing both a high-sensitivity pixel and a low-sensitivity pixel, for example, Japanese Patent Application Publication No. 2010-028423 discloses a configuration in which, in the low-sensitivity pixel, the light entering the photoelectric conversion unit is reduced relative to the high-sensitivity pixel by partially blocking the incident light with a light-shielding unit.
However, Japanese Patent Application Publication No. 2010-028423 has the configuration in which the charges of one pixel are read from the common FD, and hence it is necessary to provide a memory for HDR synthesis processing and acquisition of parallax information, and a circuit scale is increased. In addition, in the case where reading from the individual FDs is performed, while it is possible to read charges for images having a plurality of sensitivities simultaneously, a non-photosensitive area is increased, and sensitivity is reduced.
In Japanese Patent Application Publication No. 2014-33054, no consideration is given to the placement of photoelectric conversion units having different sensitivities in the case where such pixels are provided for enlarging the dynamic range. Consequently, Japanese Patent Application Publication No. 2014-33054 does not address the task of acquiring a high dynamic range image and parallax information with a minimum memory configuration.
The present disclosure has been achieved in view of the foregoing, and an object thereof is to provide an imaging device capable of simultaneously acquiring charges for images having different sensitivities in a configuration in which a plurality of photoelectric conversion units are disposed under a microlens.
According to some embodiments, an imaging device includes a plurality of pixels each of which includes a microlens and a plurality of photoelectric conversion units, wherein the plurality of pixels include a first pixel and a second pixel adjacent to the first pixel, a first photoelectric conversion unit included in the first pixel and a second photoelectric conversion unit included in the second pixel are configured to share a first floating diffusion unit, sensitivity of the second photoelectric conversion unit is lower than sensitivity of the first photoelectric conversion unit in a wavelength range in which the first photoelectric conversion unit has a peak of the sensitivity, and a charge of a photoelectric conversion unit other than the first photoelectric conversion unit and the second photoelectric conversion unit is read from a floating diffusion unit different from the first floating diffusion unit in each of the first pixel and the second pixel.
According to some embodiments, an imaging device includes a plurality of pixels each of which includes a microlens and a plurality of photoelectric conversion units, wherein the plurality of pixels include a seventh pixel, an eighth pixel adjacent to the seventh pixel, a ninth pixel adjacent to the eighth pixel, and a tenth pixel adjacent to the ninth pixel, a seventh photoelectric conversion unit included in the seventh pixel, an eighth photoelectric conversion unit included in the eighth pixel, a ninth photoelectric conversion unit included in the ninth pixel, and a tenth photoelectric conversion unit included in the tenth pixel are configured to share a second floating diffusion unit, in a wavelength range in which one of at least two photoelectric conversion units out of the seventh to tenth photoelectric conversion units has a peak of sensitivity, another one of the at least two photoelectric conversion units has sensitivity different from the sensitivity of the one of the at least two photoelectric conversion units, and a charge of a photoelectric conversion unit other than the seventh to tenth photoelectric conversion units is read from a floating diffusion unit different from the second floating diffusion unit in each of the seventh to tenth pixels.
According to some embodiments, equipment includes the imaging device as described above, the equipment further including at least any of an optical device which is compliant with the imaging device, a control device which controls the imaging device, a processing device which processes a signal output from the imaging device, a display device which displays information obtained by the imaging device, a storage device which stores the information obtained by the imaging device, and a mechanical device which operates on the basis of the information obtained by the imaging device.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Hereinbelow, embodiments of the present disclosure will be described by using the drawings. Note that the present disclosure is not limited to the following embodiments, and can be changed appropriately without departing from the gist thereof. In addition, in the drawings described below, components having the same function are designated by the same reference numeral and the description thereof will be omitted or simplified in some cases.
Hereinbelow, a description will be given of a solid-state imaging device which is an example of an imaging device in a first embodiment.
A charge transfer switch 501 is driven by a transfer pulse signal ϕTX1, and transfers the charge accumulated by the photoelectric conversion unit 303d to the FD 304. A charge transfer switch 502 is driven by a transfer pulse signal ϕTX2, and transfers the charge accumulated by the photoelectric conversion unit 310b to the FD 304. A charge transfer switch 503 is driven by a transfer pulse signal ϕTX3, and transfers the charge accumulated by the photoelectric conversion unit 307c to the FD 304. A charge transfer switch 504 is driven by a transfer pulse signal ϕTX4, and transfers the charge accumulated by the photoelectric conversion unit 313a to the FD 304. The FD 304 is configured to be able to retain transferred charges.
The photoelectric conversion units in the unit pixels 302, 306, 309, and 312 other than those described above share FDs with adjacent pixels which are not shown. A reset switch 505 is driven by a reset pulse signal ϕRES, and supplies a reference potential VDD to the FD 304. The FD 304 retains charges which are transferred from the photoelectric conversion units 303d, 307c, 310b, and 313a via the charge transfer switches 501, 502, 503, and 504. In addition, the FD 304 functions as a charge-voltage conversion unit which converts the retained charge into a voltage signal.
An amplification unit 506 amplifies the voltage signal based on the charge retained in the FD 304, and outputs the voltage signal as a pixel signal.
A selection switch 507 is driven by a vertical selection pulse signal ϕSEL, and a signal amplified in the amplification unit 506 is output to a vertical signal line 508. The signal output to the vertical signal line 508 is read by the reading unit 203 provided for each column, and is then output to the AFE 102.
Herein, when attention is focused on the unit pixel 302, the photoelectric conversion units 303a to 303d are connected to different FDs. Accordingly, it is possible to read charges accumulated by the individual photoelectric conversion units constituting the unit pixel 302 from different vertical lines simultaneously. Specifically, charges for a high-sensitivity image having lateral parallax are read from the photoelectric conversion units 303a and 303b, and charges for a low-sensitivity image having lateral parallax are read from the photoelectric conversion units 303c and 303d. On the other hand, the unit pixels 306, 309, and 312 share the FD 304 with the unit pixel 302. Accordingly, in a period in which charges are read in the unit pixel 302, it is not possible to read charges in the unit pixels 306, 309, and 312. That is, in the four unit pixels 302, 306, 309, and 312 which share the FD 304, reading of charges is performed sequentially. Herein, as an example, the unit pixels 302, 306, 309, and 312 are seventh, eighth, ninth, and tenth pixels, and the photoelectric conversion units 303d, 307c, 310b, and 313a are seventh, eighth, ninth, and tenth photoelectric conversion units. In addition, the photoelectric conversion unit 303c is an example of an adjacent photoelectric conversion unit which is adjacent to the seventh photoelectric conversion unit in a horizontal direction. Further, the FD 304 is an example of a second floating diffusion unit.
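The readout constraint described above can be summarized with a brief sketch. The following Python fragment is illustrative only and is not part of the disclosed configuration; only FD 304 and the photoelectric conversion unit reference numerals come from the description above, while the remaining FD labels are hypothetical placeholders.

```python
# Minimal sketch of the readout constraint: within unit pixel 302 each
# photoelectric conversion unit is connected to a different floating diffusion,
# so all four charges can be read at the same time, while the four unit pixels
# that share FD 304 (302, 306, 309, 312) must be read sequentially.
PD_TO_FD = {
    "303a": "FD_w",    # high sensitivity, image A (hypothetical FD label)
    "303b": "FD_x",    # high sensitivity, image B (hypothetical FD label)
    "303c": "FD_y",    # low sensitivity,  image C (hypothetical FD label)
    "303d": "FD_304",  # low sensitivity,  image D (FD shared with 307c, 310b, 313a)
}

def simultaneously_readable(pds, pd_to_fd):
    """Charges can be read in one operation only if no FD is used twice."""
    fds = [pd_to_fd[pd] for pd in pds]
    return len(fds) == len(set(fds))

# All four photoelectric conversion units of unit pixel 302 use different FDs,
# so their charges can be read simultaneously.
print(simultaneously_readable(["303a", "303b", "303c", "303d"], PD_TO_FD))  # True
```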
The AFE 102 performs AD conversion which converts the analog signal input from the unit pixel into a digital signal. The digital signal output from the AFE 102 is input to the image processing unit 103. The image processing unit 103 performs high dynamic range image generation processing by using images having different sensitivities based on charges read from the solid-state imaging element.
The image processing unit 103 also serves as a distance measurement processing unit which acquires parallax images based on charges read from the floating diffusion units and performs distance measurement processing by using the digital signal input from the AFE 102. In the distance measurement processing, a high-sensitivity image A acquired from a photoelectric conversion unit group A (the photoelectric conversion units 303a, 307a, and the like) and a high-sensitivity image B acquired from a photoelectric conversion unit group B (the photoelectric conversion units 303b, 307b, and the like) are subjected to matching processing, and the distance to the object is calculated from the resulting parallax.
Similarly, distance measurement processing is performed by using a low-sensitivity image C acquired from a low-sensitivity photoelectric conversion unit group C (the photoelectric conversion units 303c, 307c, and the like) and a low-sensitivity image D acquired from a low-sensitivity photoelectric conversion unit group D (the photoelectric conversion units 303d, 307d, and the like).
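The matching processing itself is not specified above; as one hedged illustration, a generic one-dimensional block matching between two parallax images could look as follows. The function name, block size, and search range are assumptions introduced only for this sketch.

```python
import numpy as np

def horizontal_disparity(img_a, img_b, block=8, max_shift=16):
    """Generic 1-D block matching (sum of absolute differences) between two
    parallax images, e.g. high-sensitivity images A and B. Returns a coarse
    disparity value per block; the actual matching algorithm used in the
    imaging device is not specified in the text."""
    h, w = img_a.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = img_a[y:y + block, x:x + block].astype(np.float64)
            best, best_shift = np.inf, 0
            for s in range(-max_shift, max_shift + 1):
                x2 = x + s
                if x2 < 0 or x2 + block > w:
                    continue  # candidate block would fall outside the image
                cand = img_b[y:y + block, x2:x2 + block].astype(np.float64)
                sad = np.abs(ref - cand).sum()
                if sad < best:
                    best, best_shift = sad, s
            disp[by, bx] = best_shift
    return disp
```

The resulting image shift (disparity) can then be converted into a distance with a calibration relation of the optical system.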
In addition, the image processing unit 103 performs HDR synthesis processing by using the high-sensitivity images A and B and the low-sensitivity images C and D. Herein, a description will be given of an example of the HDR synthesis processing executed in the present embodiment. First, a high-sensitivity combined image with no parallax is produced by using the high-sensitivity image A and the high-sensitivity image B. Similarly, a low-sensitivity combined image with no parallax is produced by using the low-sensitivity image C and the low-sensitivity image D. Next, the low-sensitivity combined image is normalized to the output value it would have if it had the same sensitivity as the high-sensitivity combined image. Thereafter, the high-sensitivity combined image and the normalized low-sensitivity combined image are combined, whereby a high dynamic range image is generated.
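As a hedged illustration of the synthesis steps just described, the following sketch combines the parallax pair of each sensitivity, normalizes the low-sensitivity result, and switches to it where the high-sensitivity result saturates. The sensitivity ratio and saturation level are hypothetical parameters, and averaging of the parallax pair is only one possible way of producing the parallax-free images.

```python
import numpy as np

def hdr_synthesize(high_a, high_b, low_c, low_d,
                   sensitivity_ratio=8.0, sat_level=4095):
    """Illustrative two-level HDR synthesis. sensitivity_ratio (high/low) and
    sat_level are hypothetical values; the actual numbers depend on the pixel
    design and the ADC bit depth."""
    # 1. Produce parallax-free images from each left/right parallax pair.
    high = (high_a.astype(np.float64) + high_b) / 2.0
    low = (low_c.astype(np.float64) + low_d) / 2.0
    # 2. Normalize the low-sensitivity image to the output level it would have
    #    at the sensitivity of the high-sensitivity image.
    low_norm = low * sensitivity_ratio
    # 3. Use the high-sensitivity image where it is not saturated, and the
    #    normalized low-sensitivity image where it is.
    return np.where(high < sat_level, high, low_norm)
```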
Herein, a description will be given of distance measurement processing and HDR synthesis processing in the configuration of a conventional solid-state imaging device, and of the effects of the present embodiment. For example, in the configuration disclosed in Japanese Patent Application Publication No. 2010-028423, charges of the photoelectric conversion units sharing an FD are read sequentially from that FD. Accordingly, a digital signal based on the charge read earlier must be retained in a memory until the charges necessary for the distance measurement processing and the HDR synthesis processing have been obtained.
In addition, in the configuration disclosed in Japanese Patent Application Publication No. 2014-33054, charges of a plurality of photoelectric conversion units in a pixel under one microlens are read from different FDs, but no consideration is given to the placement of the photoelectric conversion units having different sensitivities. Accordingly, in Japanese Patent Application Publication No. 2014-33054, depending on a formation position of a light-shielding unit, it is not possible to read charges of the photoelectric conversion units having different sensitivities simultaneously, and a memory for performing the distance measurement processing and the HDR synthesis described above becomes necessary.
Consequently, with the configuration of the solid-state imaging device according to the present embodiment, the memory for retaining the digital signal based on the charge of the photoelectric conversion unit becomes unnecessary in the distance measurement processing and the HDR synthesis processing. In addition, there is also a configuration in which the image processing unit (distance measurement processing unit) includes a memory. In this case, it is possible to perform the distance measurement processing and the HDR synthesis processing without causing the memory to retain the digital signal. With this, it can be said that the solid-state imaging device according to the present embodiment is superior to the conventional art described above in terms of circuit scale and power consumption.
Next, a description will be given of a solid-state imaging device according to a second embodiment. In the present embodiment, in order to enlarge the dynamic range beyond that of the solid-state imaging element of the first embodiment, a unit pixel having three different sensitivities is provided. In a unit pixel 802, each of photoelectric conversion units 803a and 803b has high sensitivity, and the photoelectric conversion units 803a and 803b can acquire a high-sensitivity image A and a high-sensitivity image B having parallax. Further, a photoelectric conversion unit 803c has medium sensitivity, and a photoelectric conversion unit 803d has low sensitivity.
In addition, in a unit pixel 806, similarly to the photoelectric conversion unit 803c, each of photoelectric conversion units 807a and 807b has medium sensitivity, and the photoelectric conversion units 807a and 807b can acquire a medium-sensitivity image A and a medium-sensitivity image B having parallax. Further, similarly to the photoelectric conversion units 803a and 803b, a photoelectric conversion unit 807c has high sensitivity and, similarly to the photoelectric conversion unit 803d, a photoelectric conversion unit 807d has low sensitivity.
In addition, in a unit pixel 809, similarly to the photoelectric conversion unit 803d, each of photoelectric conversion units 810a and 810b has low sensitivity, and the photoelectric conversion units 810a and 810b can acquire a low-sensitivity image A and a low-sensitivity image B having parallax. Further, similarly to the photoelectric conversion unit 803c, a photoelectric conversion unit 810c has medium sensitivity and, similarly to the photoelectric conversion units 803a and 803b, a photoelectric conversion unit 810d has high sensitivity. Photoelectric conversion units of a unit pixel 812 are configured similarly to the photoelectric conversion units of the unit pixel 802.
According to the configuration described above, it is possible to perform the distance measurement processing by using images output from photoelectric conversion units having the same sensitivity. In addition, distance measurement processing using output images of photoelectric conversion units having different sensitivities is also possible, provided that the sensitivities of the photoelectric conversion units are normalized, the high-sensitivity side is not saturated, and the low-sensitivity side secures an adequate SN ratio.
In addition, in the HDR synthesis processing, the high-sensitivity, medium-sensitivity, and low-sensitivity images are normalized and combined, whereby a dynamic range wider than that of the first embodiment can be obtained.
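As a hedged illustration of extending the synthesis to three sensitivities, the following sketch normalizes the medium- and low-sensitivity images to the high-sensitivity level and falls back to them in order of saturation. The gains and the saturation level are hypothetical parameters.

```python
import numpy as np

def hdr_synthesize_3(high, mid, low, gain_mid=4.0, gain_low=16.0, sat_level=4095):
    """Illustrative three-level HDR synthesis. gain_mid and gain_low are
    hypothetical sensitivity ratios relative to the high-sensitivity image;
    sat_level is a hypothetical saturation threshold."""
    high = high.astype(np.float64)
    mid_n = mid.astype(np.float64) * gain_mid   # normalize to high sensitivity
    low_n = low.astype(np.float64) * gain_low
    out = np.where(high < sat_level, high, mid_n)  # fall back to medium sensitivity...
    out = np.where(mid < sat_level, out, low_n)    # ...and then to low sensitivity
    return out
```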
Consequently, according to the present embodiment, also in the configuration in which the unit pixel includes the photoelectric conversion units having three different sensitivities, it is possible to simultaneously read charges used to generate the image having lateral parallax and the images having different sensitivities, similarly to the first embodiment. With this, in the distance measurement processing and the HDR synthesis processing, a memory for retaining a digital signal based on charges read from the photoelectric conversion units in the same unit pixel becomes unnecessary. In addition, there is also a configuration in which the image processing unit (distance measurement processing unit) includes a memory. In this case, it is possible to perform the distance measurement processing and the HDR synthesis processing without causing the memory to retain the digital signal.
Next, a description will be given of a solid-state imaging device according to a third embodiment. In the present embodiment, a description will be given of a configuration of a solid-state imaging element in the case where distance measurement processing in the vertical direction is executed.
Next, a description will be given of a solid-state imaging device according to a fourth embodiment. In the present embodiment, a description will be given of a configuration of a solid-state imaging element in the case where distance measurement processing in both the vertical direction and the horizontal direction is executed.
Charges of the photoelectric conversion units in one unit pixel are simultaneously read from different FDs. While the pixels including the light-shielding units are arranged in the horizontal direction in the first embodiment, in the solid-state imaging element of the present embodiment, rows in which the photoelectric conversion units for the low-sensitivity image including the light-shielding units are arranged in the horizontal direction and rows in which those photoelectric conversion units are arranged in the vertical direction are present in a mixed manner. Consequently, in the distance measurement processing of the solid-state imaging device of the present embodiment, matching in the horizontal direction and matching in the vertical direction are performed by using information on the rows having the same direction of arrangement of the light-shielding units. In the distance measurement processing in the vertical direction, matching processing is performed while a digital signal based on charges obtained from a unit pixel in another row is retained in a memory. On the other hand, in the HDR synthesis processing, charges of the photoelectric conversion units in a unit pixel are read simultaneously, and hence it is possible to execute the HDR synthesis processing without retaining a digital signal based on the charges in the memory. In addition, there is also a configuration in which the image processing unit (distance measurement processing unit) includes a memory. In this case, it is possible to perform the HDR synthesis processing without causing the memory to retain the digital signal.
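As a hedged illustration of the row handling just described (not the disclosed processing itself), the following sketch performs horizontal matching row by row and buffers rows whose light-shielding units are arranged in the vertical direction so that matching across rows can be performed. The classification and matching functions are placeholders supplied by the caller.

```python
# Illustrative bookkeeping for the fourth embodiment: rows with horizontally
# arranged light-shielding units are matched immediately, while rows with
# vertically arranged light-shielding units are retained in a memory until a
# second such row is available for vertical matching.
def process_rows(rows, orientation_of, match_horizontal, match_vertical):
    """rows: iterable of (row_index, row_data); orientation_of(i) -> 'H' or 'V'."""
    vertical_buffer = []  # digital signals retained for vertical matching
    results = []
    for i, row in rows:
        if orientation_of(i) == "H":
            results.append(match_horizontal(row))  # no buffering needed
        else:
            vertical_buffer.append((i, row))       # retained in memory
            if len(vertical_buffer) >= 2:
                (_, prev), (_, curr) = vertical_buffer[-2:]
                results.append(match_vertical(prev, curr))
    return results
```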
Next, a description will be given of a solid-state imaging device according to a fifth embodiment.
In the present embodiment, the HDR synthesis processing is performed by using the charge for the high-sensitivity image and the charge for the low-sensitivity image which are read simultaneously. While the left and right parallax images are subjected to addition processing before the HDR synthesis processing in the first embodiment, in the present embodiment the addition processing of the parallax images can be omitted because an image is generated by using charges that have already been added in the FD.
In addition, in the distance measurement processing, a parallax image of the right side of the microlens (the side which overlaps the photoelectric conversion units 1203b, 1203d, 1207b, and 1207d in the drawing) is acquired on the basis of a difference between charges which are read subsequently and are subjected to addition in the FD and charges which are read first and are subjected to addition in the FD. Further, a digital signal based on the charges which are read first and are subjected to addition in the FD is retained in a memory. Then, the distance measurement processing is performed by using an image of the left side of the microlens (the side which overlaps the photoelectric conversion units 1203a, 1203c, 1207a, and 1207c in the drawing) acquired based on charges which are read first and the image of the right side of the microlens acquired by the above-described processing.
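As a hedged illustration of this two-step readout, the following sketch reconstructs the right-side parallax image as the difference between the second read (left and right charges added in the FD) and the first read (left-side charges added in the FD). Which charges are summed in each FD depends on the pixel layout and is not modeled here.

```python
import numpy as np

def split_parallax(first_read, second_read):
    """Illustrative reconstruction: the first read contains the FD-added
    charges of the left-side photoelectric conversion units, and the second
    read contains the FD-added charges of left + right. The right-side
    parallax image is their difference, while the second read itself
    (left + right) can feed the HDR synthesis directly."""
    left = first_read.astype(np.float64)
    left_plus_right = second_read.astype(np.float64)
    right = left_plus_right - left
    return left, right, left_plus_right
```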
Also in the present embodiment, in the HDR synthesis processing, the charge for the high-sensitivity image and the charge for the low-sensitivity image are simultaneously read, and hence it is possible to execute the HDR synthesis processing without retaining the digital signal based on the charges in the memory. In addition, in the distance measurement processing, as described above, the processing is performed while the digital signal based on the charge is retained in the memory.
Note that, while the photoelectric conversion units having a difference in sensitivity are disposed in the vertical direction in the present embodiment, the photoelectric conversion units having a difference in sensitivity may also be disposed in the horizontal direction.
It is possible to omit the addition processing of the parallax images and perform the HDR synthesis processing by generating an image by using charges subjected to addition in the FD. In addition, in the distance measurement processing, a parallax image of the upper side of the microlens (the side which overlaps the photoelectric conversion units 1303a, 1303b, 1307a, and 1307b in the drawing) is acquired on the basis of a difference between charges which are read subsequently and are subjected to addition in the FD and charges which are read first and are subjected to addition in the FD. In addition, a digital signal based on the charges which are read first and are subjected to addition in the FD is retained in a memory. Then, the distance measurement processing is performed by using an image of the lower side of the microlens (the side which overlaps the photoelectric conversion units 1303c, 1303d, 1307c, and 1307d in the drawing) acquired on the basis of charges which are read first, and the image of the upper side of the microlens acquired by the above-described processing.
Consequently, also with this configuration of the unit pixels and the photoelectric conversion units, the charge for the high-sensitivity image and the charge for the low-sensitivity image are read simultaneously, and hence it is possible to execute the HDR synthesis processing without retaining a digital signal based on the charges in the memory.
Any of the first to fifth embodiments described above can be applied to a sixth embodiment. In the sixth embodiment, a semiconductor apparatus 1430 including the imaging device according to any of the above embodiments is incorporated in equipment 1491.
The equipment 1491 can include at least any of an optical device 1440, a control device 1450, a processing device 1460, a display device 1470, a storage device 1480, and a mechanical device 1490. The optical device 1440 is compliant with the semiconductor apparatus 1430. The optical device 1440 is, e.g., a lens, a shutter, or a mirror. The control device 1450 controls the semiconductor apparatus 1430. The control device 1450 is a semiconductor apparatus such as, e.g., an ASIC.
The processing device 1460 processes a signal output from the semiconductor apparatus 1430. The processing device 1460 is a semiconductor apparatus such as a CPU or an ASIC for constituting an AFE (analog front end) or a DFE (digital front end). The display device 1470 is an EL display device or a liquid crystal display device which displays information (image) obtained by the semiconductor apparatus 1430. The storage device 1480 is a magnetic device or a semiconductor device which stores information (image) obtained by the semiconductor apparatus 1430. The storage device 1480 is a volatile memory such as an SRAM or a DRAM, or a non-volatile memory such as a flash memory or a hard disk drive.
The mechanical device 1490 has a moving unit or a propulsive unit such as a motor or an engine. In the equipment 1491, a signal output from the semiconductor apparatus 1430 is displayed on the display device 1470 and is transmitted to the outside by a communication device (not shown) provided in the equipment 1491. To this end, it is preferable that the equipment 1491 further includes the storage device 1480 and the processing device 1460 in addition to a storage circuit and an operation circuit of the semiconductor apparatus 1430. The mechanical device 1490 may also be controlled on the basis of a signal output from the semiconductor apparatus 1430.
In addition, the equipment 1491 is suitably used as electronic equipment such as an information terminal having a photographing function (e.g., a smartphone or a wearable terminal) or a camera (e.g., an interchangeable-lens camera, a compact camera, a video camera, or a surveillance camera). The mechanical device 1490 in the camera can drive components of the optical device 1440 for zooming, focusing, and shutter operation. Alternatively, the mechanical device 1490 in the camera can move the semiconductor apparatus 1430 for vibration isolation operation.
The equipment 1491 can be transport equipment such as a vehicle, a ship, or a flight vehicle. The mechanical device 1490 in the transport equipment can be used as a moving device. The equipment 1491 serving as the transport equipment is suitably used as equipment which transports the semiconductor apparatus 1430, or which performs assistance and/or automation of driving (manipulation) by using its photographing function. The processing device 1460 for assistance and/or automation of driving (manipulation) can perform processing for operating the mechanical device 1490 serving as the moving device on the basis of information obtained by the semiconductor apparatus 1430. Alternatively, the equipment 1491 may also be medical equipment such as an endoscope, measurement equipment such as a distance measurement sensor, analysis equipment such as an electron microscope, office equipment such as a copier, or industrial equipment such as a robot.
According to the sixth embodiment, it becomes possible to obtain excellent pixel characteristics. Consequently, it is possible to enhance the value of the semiconductor apparatus 1430. The enhancement of value mentioned herein corresponds to at least any of the addition of a function, an improvement in performance, an improvement in characteristics, an improvement in reliability, an improvement in product yield, a reduction in environmental load, a reduction in cost, a reduction in size, and a reduction in weight.
Consequently, if the semiconductor apparatus 1430 according to the sixth embodiment is used in the equipment 1491, the value of the equipment can be improved as well. For example, when the semiconductor apparatus 1430 is mounted on transport equipment and photographing of the outside of the transport equipment or measurement of the external environment is performed, excellent performance can be obtained. Therefore, when transport equipment is manufactured and sold, deciding to mount the semiconductor apparatus 1430 according to the sixth embodiment on the transport equipment is advantageous in terms of increasing the performance of the transport equipment itself. The semiconductor apparatus 1430 is particularly suitable for transport equipment which performs driving assistance and/or automated driving by using information obtained by the semiconductor apparatus 1430.
According to the present disclosure, it is possible to reduce manufacturing cost required for the configuration in which the high dynamic range image and the parallax image are acquired in the imaging device.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2022-089503, filed on Jun. 1, 2022, which is hereby incorporated by reference herein in its entirety.