This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2014-059054, filed on Mar. 20, 2014; the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to a filter-array-equipped microlens and a solid-state imaging device.
Regarding imaging technologies in which the distance in the depth direction is obtained as two-dimensional array information, various methods are being studied, such as methods that use a reference beam and methods that perform stereo distance measurement using a plurality of cameras. In recent years, there has been strong demand for relatively inexpensive distance measuring devices for civilian use.
Among such distance-measuring imaging technologies, triangulation using parallaxes is known as an imaging method that does not require a reference beam and thus helps hold down the system cost. Stereo cameras and multi-camera systems are known as types of camera capable of implementing the triangulation method. However, because a stereo camera or a multi-camera system uses a plurality of cameras, the system becomes larger and the number of components increases, which raises the risk of a higher failure rate.
Meanwhile, regarding an imaging optical system, a structure has been proposed in which a microlens array is disposed on the upper side of pixels, a plurality of pixels is arranged under each microlens, and the image from a main lens is re-formed on the pixels by the microlens array. With this structure, a group of images having parallaxes can be obtained in units of pixel blocks. The parallaxes enable distance estimation of a photographic subject and a refocusing operation based on the resulting distance information. An optical configuration in which the image from a main lens is re-formed using a microlens array is called a refocus optical system.
One of the factors leading to degradation in the image quality of images taken by an image sensor is a phenomenon called crosstalk, in which light falling on a pixel also enters the neighboring pixels. For example, when crosstalk occurs in the Bayer array implemented in a commonly-used image sensor, color mixing occurs and light of a different color component is mistakenly detected in each pixel. As a result, the color reproducibility of the captured image declines. Particularly, in an image sensor comprising infrared (IR) detection pixels for the purpose of infrared light detection, infrared light, which has a longer wavelength than visible light, is less readily attenuated inside a pixel, so crosstalk among neighboring pixels occurs easily.
In the refocus optical system mentioned above, the light coming from the main lens passes through each microlens and then falls on the light receiving surface of the image sensor at an angle of incidence dependent on the position of the concerned microlens. Thus, in a refocus optical system too, there is a risk of crosstalk occurring among the pixels.
It is an object of the invention to provide a filter-array-equipped microlens and a solid-state imaging device that prevent a decline in color reproducibility caused by inter-pixel crosstalk.
According to an embodiment, a filter-array-equipped microlens includes a filter array and a microlens array. The filter array includes a plurality of first optical filters for selectively transmitting light of an infrared region and a plurality of second optical filters for selectively transmitting light of a first visible wavelength region. The microlens array includes a plurality of microlenses each corresponding to any one of the first optical filters and the second optical filters.
Exemplary embodiments of a filter-array-equipped microlens and a solid-state imaging device are described below. In
The camera module 10 includes an imaging optical system having a main lens 11; a solid-state imaging device having a microlens array 12 and an image sensor 13; an imaging unit 14; and a signal processor 15. The imaging optical system includes one or more lenses, and guides the light coming from a photographic subject to the microlens array 12 and the image sensor 13. Of the lenses included in the imaging optical system, the main lens 11 is assumed to be the lens positioned closest to the image sensor 13.
As the image sensor 13, for example, a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) imager is used. The image sensor 13 includes a pixel array of a plurality of pixels, each of which converts the received light into an electrical signal by means of photoelectric conversion and outputs the electrical signal.
The microlens array 12 includes a plurality of microlenses 120 arranged according to predetermined rules. From the group of light beams with which the main lens 11 forms an image on its image forming surface, the microlens array 12 re-forms the image in a reduced manner on pixel blocks of the image sensor 13, each pixel block including a plurality of pixels and corresponding to one of the microlenses 120.
Meanwhile, although not illustrated in
As a result of using an infrared transmission filter, it becomes possible to deal with imaging in the dark such as imaging during nighttime or imaging inside a room.
Among the plurality of types of optical filters included in the filter array, the optical filters other than the infrared transmission filters can be, for example, a plurality of color filters that separate the three primary colors of red (R), green (G), and blue (B). However, that is not the only possible case. Alternatively, the optical filters other than the infrared transmission filters can be colorless filters (called white color filters) that transmit light of the visible region. Still alternatively, instead of using colorless filters, the corresponding portions can simply be left open so that light of the visible region is transmitted.
Meanwhile, the camera module 10 can be configured in such a way that, for example, the imaging optical system including the main lens 11 is separated from the other portion, thereby making it possible to replace the main lens 11. However, that is not the only possible case. Alternatively, the camera module 10 can be configured as a unit in which the imaging optical system, which includes the main lens 11, and the microlens array 12 are housed in a single housing. In that case, the entire unit including the imaging optical system and the microlens array 12 becomes replaceable.
The imaging unit 14 includes a driver circuit for driving each pixel of the image sensor 13. The driver circuit includes, for example, a vertical selection circuit for sequentially selecting the pixels to be driven in the vertical direction in units of horizontal lines (rows); a horizontal selection circuit for sequentially selecting the pixels to be driven in units of columns; and a timing generator that drives the vertical selection circuit and the horizontal selection circuit with various pulses. The imaging unit 14 reads, from the pixels selected by the vertical selection circuit and the horizontal selection circuit, the electrical charge obtained by means of photoelectric conversion of the received light; converts the electrical charge into electrical signals; and outputs the electrical signals.
The signal processor 15 performs gain adjustment, noise removal, and amplification on the analog electrical signals output from the imaging unit 14. Moreover, the signal processor 15 includes an A/D conversion circuit for converting the processed signals into digital signals and outputting them as image data of a RAW image.
The ISP 20 includes a camera module I/F 21, a memory 22, an image processor 23, and an output I/F 24. The camera module I/F 21 is a signal interface to the camera module 10. The image data of a RAW image output from the signal processor 15 of the camera module 10 is stored in, for example, the memory 22, which is a frame memory, via the camera module I/F 21.
Based on the image data stored in the memory 22, which is formed from the light having passed through the microlens array 12 and the color filter array, the image processor 23 performs a refocusing operation in which the image of the area corresponding to each microlens is enlarged and the enlarged images are superimposed while their positions are shifted, and thereby obtains a reconstructed, refocused image (described later). Then, the refocused image is output from the output I/F 24 and is either displayed on a display device (not illustrated) or stored in an external memory medium.
Meanwhile, instead of storing the image data in the memory 22, it can be stored in an external memory medium. In that case, the image data read from the external memory medium is stored in the memory 22 via, for example, the camera module I/F 21. Then, the image processor 23 performs the refocusing operation with respect to that image data. Thus, it becomes possible to obtain a refocused image at a desired timing.
Optical System Implementable in First Embodiment
Given below is the explanation of an optical system that can be implemented in the first embodiment. Herein, the optical system includes the main lens 11, the microlens array 12, and the image sensor 13. In
In the optical system, using the light beams coming from the main lens 11, the microlenses 120 disposed in the microlens array 12 form images of all viewpoints on the image sensor 13. Meanwhile, although not illustrated in
In the example illustrated in
In
Herein, it is desirable that the microlens images 30 formed on the image sensor 13 due to the microlenses 120 are formed without any mutual overlapping. Moreover, with reference to
Explained below with reference to
In the main lens 11, a relationship given below in Equation (1) according to the lens formula is established between the distance A to the photographic subject, the distance B at which an image is formed by the light coming from the photographic subject, and the focal distance f. In an identical manner, regarding the microlenses 120 of the microlens array 12 too, a relationship given below in Equation (2) according to the lens formula is established.
When there is a change in the distance A between the main lens 11 and the photographic subject, the value of the distance B in the lens formula given in Equation (1) also changes. Based on the positional relationship in the optical system, the sum of the distance B and the distance C is equal to the distance E as described above, and the distance E is fixed. Hence, along with the change in the distance B, the value of the distance C also changes. Regarding the microlenses 120, it follows from the lens formula given in Equation (2) that, along with the change in the distance C, the value of the distance D also changes.
Hence, the image formed by each microlens 120 is obtained by reducing the image on the image forming surface, which serves as a virtual image of the main lens 11, at a magnification N (where N = D/C). The magnification N can be expressed as Equation (3) given below.
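For reference only, a minimal sketch of these relations is given here under the assumption that they take the standard thin-lens form; the symbol g, used below for the focal distance of a microlens, is an assumption introduced for this sketch and is not part of the notation above.

```latex
\frac{1}{A} + \frac{1}{B} = \frac{1}{f}, \qquad
\frac{1}{C} + \frac{1}{D} = \frac{1}{g}, \qquad
B + C = E
\quad\Longrightarrow\quad
N = \frac{D}{C} = \frac{D}{E - B} = \frac{D}{E - \dfrac{Af}{A - f}}
```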
According to Equation (3), the reduction ratio of the images formed on the image sensor 13 by the microlenses 120 depends on the distance A from the main lens 11 to the photographic subject. Hence, in order to reconstruct the original two-dimensional image, for example, microlens images 301, 302, and 303 that are formed by the microlenses 120 and that have points 311, 312, and 313 as the respective central coordinates as illustrated in (a) in
During superimposition, for the portions of the subject at distances other than the distance A, the enlarged microlens images 301′, 302′, and 303′ are superimposed in a misaligned manner, which produces a blurring-like effect. Thus, the refocusing operation refers to an operation in which an arbitrary position is brought into focus from such microlens images.
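As an illustration only, a minimal sketch of this enlarge-and-superimpose reconstruction is given below; the array shapes, the grid of microlens image centers, and the function names are assumptions made for the example, and a real implementation would also account for the image inversion introduced by each microlens.

```python
import numpy as np
from scipy.ndimage import zoom  # one possible choice of resampling routine

def refocus(microlens_images, centers, magnification, out_shape):
    """Enlarge each microlens image and superimpose the results.

    microlens_images: list of small 2-D arrays, one per microlens
    centers: list of (row, col) center coordinates on the output canvas
    magnification: enlargement factor (roughly 1/N for the plane to bring into focus)
    out_shape: (rows, cols) of the reconstructed image
    """
    acc = np.zeros(out_shape, dtype=np.float64)     # accumulated intensity
    weight = np.zeros(out_shape, dtype=np.float64)  # per-pixel overlap count

    for img, (cy, cx) in zip(microlens_images, centers):
        enlarged = zoom(img, magnification, order=1)  # bilinear enlargement
        h, w = enlarged.shape
        top, left = int(round(cy - h / 2)), int(round(cx - w / 2))
        # Clip the enlarged image against the output canvas.
        r0, r1 = max(top, 0), min(top + h, out_shape[0])
        c0, c1 = max(left, 0), min(left + w, out_shape[1])
        if r0 >= r1 or c0 >= c1:
            continue
        acc[r0:r1, c0:c1] += enlarged[r0 - top:r1 - top, c0 - left:c1 - left]
        weight[r0:r1, c0:c1] += 1.0

    weight[weight == 0] = 1.0   # avoid division by zero where nothing overlaps
    return acc / weight         # average the overlapping contributions
```

Portions imaged at other distances end up misaligned between the superimposed contributions, which is what produces the blurring-like effect described above.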
Filter Array According to First Embodiment
Given below is the explanation of the filter array according to the first embodiment. Firstly, explained with reference to
In
In the case of using an optical system in which the light coming from the main lens 11 is made to pass through the microlens array 12 and then to fall on the image sensor 13 as illustrated in
In this case, not only does the light that has passed through the blue color filter 7002 and entered the pixel 1302 fall at an oblique angle on the pixel 1303, but the light that has passed through the green color filter 7003, which is disposed corresponding to the pixel 1303, also falls directly on the pixel 1303. As a result, inter-pixel crosstalk occurs in the pixel 1303, leading to a risk of a decline in the color reproducibility of the captured image.
Particularly, infrared light, which has a longer wavelength than visible light, travels a longer distance from the time it falls on a pixel until it is absorbed than visible light does. Hence, infrared light has a significant impact on the neighboring pixels. For example, consider a case in which the color filter 7002 illustrated in
In
In the first embodiment, with respect to the optical system, a filter array 40 is disposed that includes a plurality of types of optical filters 4001, 4002, disposed corresponding to the microlenses 1201, 1202, . . . , respectively. In the example illustrated in
In
In this case too, for example, the light that falls on the pixel 1302 at a predetermined oblique angle also falls on the neighboring pixel 1303, thereby resulting in inter-pixel crosstalk in the pixel 1303. However, in this case, the light that directly falls on the pixel 1303 has passed through the same optical filter 400 as the light falling obliquely from the neighboring pixel 1302. Hence, it becomes possible to prevent a decline in color reproducibility attributed to inter-pixel crosstalk.
Meanwhile, with reference to
Furthermore, in the example illustrated in
In
Consider the microlens images 30 in the case in which the filter array 40 includes infrared transmission filters and color filters of RGB colors. In the following explanation, the infrared light is written as Ir color light, and the infrared transmission filters are written as Ir filters. In this case, the microlens images 30 that are formed when the light which has passed through the filter array 40 and the microlens array 12 falls on the image sensor 13 include monochromatic microlens images 30R, 30G, 30B, and 30Ir of RGBIr colors as illustrated in (a) in
That is, as illustrated in (b) in
Arrangement in Filter Array
Given below is the explanation about the arrangement of various types of optical filters included in the filter array 40. Examples of the types of optical filters in the filter array 40 include Ir filters and white color filters. Alternatively, examples of the types of optical filters in the filter array 40 include Ir filters and RGB color filters.
Since there are several ways of arranging the various types of optical filters 400 included in the filter array 40, the arrangements are classified. In the following explanation, it is assumed that the arrangements of optical filters in the filter array 40 are expressed in smallest repeating units.
Explained below with reference to
In the arrangement illustrated in
In the arrangement illustrated in
In the arrangement of the various types of optical filters 400 in the hexagonally-arranged filter array 40, the minimum magnification for reconstruction and the distance accuracy are significant. As an example, consider a case in which the filter array 40 includes four types of optical filters, namely, color filters of the RGBIr colors. Consider a reconstructed image obtained by superimposing the enlarged microlens images 50R, 50G, 50B, and 50Ir, which are formed by enlarging the microlens images 30R, 30G, 30B, and 30Ir, respectively, at a particular magnification. The minimum magnification for reconstruction is the smallest magnification at which all color components constituting a color image (for example, the RGBIr color components) are present at every pixel included in the unit area that serves as the smallest repeating unit. That is, when the microlens images 30R, 30G, 30B, and 30Ir are enlarged at a magnification equal to or greater than the minimum magnification for reconstruction, an image that includes the RGBIr colors at every pixel in the unit area is obtained.
Given below with reference to
In the arrangement illustrated in
Herein, focusing on red color filters 400R1 to 400R4 at the four corners of a unit area 60 that serves as the smallest repeating unit illustrated in (a) in
In the example illustrated in (b) in
Given below is the explanation about the distance calculation method. As already described with reference to Equation (3), when there is a change in the value of the distance A illustrated in
If Equation (3) is rearranged for the distance A, then Equation (4) given below is obtained. The reduction ratio N of the images formed by the microlenses 120 is calculated by means of image matching. Then, if the distances D and E and the focal distance f are known, the value of the distance A can be calculated from Equation (4).
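Continuing the thin-lens sketch given earlier (an assumed form, shown for illustration only), rearranging the expression for N so as to isolate the distance A gives:

```latex
N = \frac{D}{E - \dfrac{Af}{A - f}}
\quad\Longrightarrow\quad
A = \frac{f\left(E - \dfrac{D}{N}\right)}{E - \dfrac{D}{N} - f}
```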
In the case of the optical system illustrated in
If Δ′ represents the amount of shift of the microlens images 30 between the microlenses 120 and a value L represents the center distance between the microlenses 120, then the reduction ratio N can be expressed using Equation (7) according to the geometric relationship of the light beams. Thus, in order to obtain the reduction ratio N, the image processor 23 can implement an evaluation function such as the sum of absolute differences (SAD) or the sum of squared differences (SSD), perform image matching with respect to each microlens image 30, and obtain the amount of shift Δ′ between the microlenses 120.
Meanwhile, according to Equation (7), the minimum magnification for reconstruction is expressed as 1/N.
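As an illustration of this matching step, a minimal sketch is given below; it estimates the shift Δ′ between two neighboring same-color microlens images with a SAD search along one direction, and the relation N = Δ′/L used at the end is an assumed form of the geometric relationship referred to as Equation (7).

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized patches."""
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).sum()

def estimate_shift(img_a, img_b, max_shift):
    """Estimate the 1-D shift (in pixels) that best aligns img_b to img_a."""
    best_shift, best_cost = 0, np.inf
    h, w = img_a.shape
    for s in range(-max_shift, max_shift + 1):
        # Compare only the columns that overlap for this candidate shift.
        a = img_a[:, max(0, s):w + min(0, s)]
        b = img_b[:, max(0, -s):w + min(0, -s)]
        cost = sad(a, b) / a.size   # normalize by the overlap area
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

# Usage sketch with two neighboring same-color microlens images mli_1 and mli_2
# and an (assumed) center distance L between them, expressed in pixels:
# delta = estimate_shift(mli_1, mli_2, max_shift=10)
# N = abs(delta) / L   # assumed form of the geometric relation between Δ′, L, and N
```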
In the case of using the filter array 40 according to the first embodiment, the image processor 23 performs image matching among the microlens images 30 formed through the optical filters of the same type (the same color). At that time, depending on the arrangement of the various types of optical filters in the filter array 40, a large error may occur in the amount of shift Δ′ obtained by image matching, and hence in the distance accuracy, due to the distance to the photographic subject or the edge direction of the images.
In order to prevent such an error in the distance accuracy, the arrangement of the various types of optical filters in the filter array 40 needs to satisfy a first condition and a second condition explained below.
The following explanation concerns the first condition. For example, consider a case in which a particular optical filter in the filter array 40 has no optical filters of the same type (the same color) in its vicinity. As described above, the amount of shift Δ′ between the microlens images 30 depends on the distance A to the photographic subject; therefore, if the image of the photographic subject is formed only on microlenses 120 that are disposed in the vicinity of each other, the distance cannot be measured. Thus, each optical filter needs to have optical filters of the same color in its vicinity. This condition is set as the first condition.
The following explanation is given for the second condition. Herein, the second condition is related to the directional dependency of the color filter arrangement as far as the distance accuracy is concerned. In the example illustrated in (a) in
In this arrangement, if the direction of change in luminance value at the edges of a photographic subject image is parallel to the axis direction in which the optical filters of the same color are arranged, the accuracy of image matching may decline. That is, the image processor 23 performs image matching using the microlens images 30, each of which is formed by light passing through optical filters of the same color. Hence, for example, if an edge of an image is parallel to the axis direction in which the optical filters of the same color are arranged, the microlens images 30 formed adjacent to each other in that axis direction are likely to be substantially the same images. In this case, it becomes difficult to perform distance measurement using image matching.
In this way, when the optical filters of the same color are lined up in a single axis direction, a directional dependency arises in which the distance accuracy depends on the edge direction of the photographic subject. Hence, in order to reduce the directional dependency of the distance accuracy with respect to the edge direction during image matching, the arrangement of the optical filters in the filter array 40 is desirably such that optical filters of the same color are present in a plurality of axis directions.
Herein, axes are determined by three optical filters of the same color. When three optical filters of the same color are aligned on a straight line, they lie in only a single axis direction; in this case, the three optical filters of the same color do not satisfy the second condition. In contrast, consider a first line that joins the centers of two of the three optical filters, and a second line that joins the centers of two optical filters at least one of which is not among those two. If the first line and the second line intersect, then the three optical filters of the same color are present in two axis directions. In that case, the three optical filters of the same color satisfy the second condition.
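As an illustration, with filter centers expressed in assumed grid coordinates, checking whether three same-color filters define more than one axis direction reduces to a collinearity test:

```python
def spans_two_axes(p0, p1, p2, tol=1e-9):
    """Return True if three same-color filter centers are NOT collinear,
    i.e., they define at least two distinct axis directions."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    # Twice the signed area of the triangle formed by the three centers;
    # a value of zero (within tolerance) means the centers lie on one line.
    cross = (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0)
    return abs(cross) > tol

# Usage sketch with assumed coordinates on a hexagonal grid:
# spans_two_axes((0, 0), (1, 0), (2, 0))       -> False (single axis direction)
# spans_two_axes((0, 0), (1, 0), (0.5, 0.87))  -> True  (two axis directions)
```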
Thus, in the arrangement illustrated in
On the other hand, in the arrangement illustrated in
Herein, consideration is given to the cyclic nature of the arrangement of the optical filters of each color in the hexagonally-arranged filter array 40. In that case, the second condition, that is, the condition that optical filters of the same color are present in different axis directions, can be restated as follows: for a particular optical filter, two optical filters that are present in its vicinity and that have the same color as the particular optical filter are not positioned point-symmetrically with respect to the particular optical filter.
That is, as a condition for a preferable optical filter arrangement in the hexagonally-arranged filter array 40, it can be required that, among the six optical filters neighboring the optical filter of interest, the optical filters of at least one color are disposed in a point-asymmetric manner. This condition is set as a third condition. Thus, the third condition has the same meaning as the second condition described above.
In the filter array 40A illustrated in
Explained below in concrete terms and with reference to
In
In contrast, the optical filters 40023 and 40024 are not positioned to be point symmetric with respect to the optical filter 40020. Hence, the third condition is satisfied. Therefore, the calculation for image matching can be performed in two axis directions, namely, the axis direction joining the optical filters 40020 and 40023 and the axis direction joining the optical filters 40020 and 40024. That enables a reduction in the directional dependency of the distance accuracy with respect to the edge direction of the photographic subject.
Specific Example of Color Filter Arrangement According to First Embodiment
Explained below with reference to
In
The arrangement illustrated in
In the arrangement illustrated in
In the filter array 40B illustrated in
In the filter array 40B, for example, by using the optical filters 400W1 and 400W2, which are positioned across one intervening optical filter from the optical filter 400W0, have the same color as the optical filter 400W0, and are positioned in a point-asymmetric manner with respect to the optical filter 400W0, it becomes possible to perform image matching while reducing the edge dependency.
Meanwhile, if the arrangement of the filter array 40B illustrated in
In
Furthermore, in the filter array 40C illustrated in
In the arrangement illustrated in
In
In the example illustrated in
In the arrangement illustrated in
In
In the arrangement illustrated in
In the arrangements illustrated in
Meanwhile, if the distance to the photographic subject is infinite, or is such a long distance that it can be treated as infinite, then the light coming from the main lens 11 and falling on the microlens array 12 becomes parallel light or light close to parallel light. At that time, the images formed by the microlenses 120 are all different images, which makes it difficult to perform image matching. Thus, the longer the distance to the photographic subject, the greater the difference between the images formed by the neighboring microlenses 120, and the more difficult image matching becomes.
In that regard, in a configuration in which the optical filters of same colors are closely placed to each other, such as in the filter array 40E illustrated in
Given below is the explanation of a second embodiment. The second embodiment describes a driving method and a signal processing method for an image sensor that are suitable for a filter-array-equipped microlens having a filter array that includes Ir filters.
As described above, in an image sensor in which the microlens array 12 is used, image matching can be performed among the microlens images so as to obtain the reduction ratio N of the microlens images, and the distance A to the photographic subject can be obtained from the reduction ratio N. During image matching, the greater the texture quantity of the photographic subject image, the greater the robustness against factors, such as noise, that cause false detection.
Thus, in a captured image, it is desirable to have a high image contrast, and the image must not be too dark. On the other hand, if the image is too bright, then a saturated area attributed to blown-out highlights is formed in the image, and image matching becomes difficult in that area. In this way, in order to perform image matching in an appropriate manner, the most suitable exposure time needs to be selected.
In the first embodiment, the filter array 40 includes Ir filters that transmit infrared light. If the same image sensor is used, the most suitable exposure time for visible light sometimes differs from that for infrared light. For example, even if the exposure time enables obtaining a high-contrast visible light image formed by capturing visible light, only a low-contrast infrared light image may be obtained by capturing infrared light.
In that regard, in the second embodiment, the exposure by the image sensor of the imaging device is carried out by dividing a single frame period into a first time period and a second time period that is longer than the first time period. In the second time period, the exposure is carried out for a longer time than in the first time period, so a larger amount of signal output from the image sensor can be secured. As a result, image processing can be performed in a suitable manner with respect to an infrared light image formed by capturing infrared light.
In
The imaging device 1′ includes the camera module 10 and an ISP 20′. In an identical manner to the first embodiment, the camera module 10 includes an imaging optical system including the main lens 11; a solid-state imaging device including the microlens array 12 and the image sensor 13; an imaging unit 14′; and a signal processor 15′. With respect to the microlens array 12, the filter array 40 including Ir filters is disposed on the side of the image sensor 13 or the side of the main lens 11. Herein, it is assumed that the filter array 40 includes Ir filters and optical filters of RGB colors.
The ISP 20′ includes the camera module I/F 21 and the output I/F 24; as well as includes a switch 220, frame memories 221A and 221B, an image processor 230, frame memories 250A and 250B, a calculator 251, and a controller 26.
The controller 26 generates timing signals for the purpose of setting, in a single frame period, a first time period tRGB and a second time period tIr that is longer than the first time period tRGB. For example, as illustrated in
In the example illustrated in
The imaging unit 14′ reads, in a single frame period and according to the provided timing signals, the electrical charge from the image sensor 13 during the first time period tRGB; converts the electrical charge into electrical signals; and outputs the electrical signals. With respect to the electrical signals during the first time period tRGB, the signal processor 15′ performs predetermined signal processing such as gain adjustment, noise removal, and amplification; performs A/D conversion with respect to the processed electrical signals; and outputs them as image data 500 of a RAW image. Then, the image data 500, which corresponds to the first time period tRGB and which is output by the signal processor 15′, is sent from the camera module 10 to the ISP 20′; and is input to the switch 220 via the camera module I/F 21.
In the switch 220, either a selection output terminal 220A or a selection output terminal 220B is selected depending on the timing signals provided from the controller 26. Herein, during the first time period tRGB, it is assumed that the selection output terminal 220A is selected. Accordingly, the image data 500, which corresponds to the first time period tRGB and which is input to the switch 220, is stored in the frame memory 221A.
During the second time period tIr too, identical operations are performed. That is, after performing reading from the image sensor 13 during the first time period tRGB, the imaging unit 14′ reads the electrical charge from the image sensor 13 during the second time period tIr according to the timing signals provided from the controller 26; converts the electrical charge into electrical signals; and outputs the electrical signals. With respect to the electrical signals during the second time period tIr, the signal processor 15′ performs predetermined signal processing mentioned above; performs A/D conversion with respect to the processed electrical signals; and outputs them as image data 501 of a RAW image. Then, the image data 501 is sent from the camera module 10 to the ISP 20′, and is input to the switch 220 via the camera module I/F 21. In the switch 220, depending on the timing signals provided from the controller 26, the selection output terminal 220B is selected during the second time period tIr. Thus, the image data 501, which is input to the switch 220, is stored in the frame memory 221B.
The image processor 230 performs image processing with respect to the image data 500 which is stored in the frame memory 221A, and the image data 501, which is stored in the frame memory 221B. As a result of performing the image processing, the image processor 230 can obtain four types of image data from the image data of pixels of RGBIr colors included in the image data 500 and the image data 501.
That is, as illustrated in
If the first time period tRGB is set to be appropriate for the exposure of RGB colors, then the low-luminance RGB image data 510 serves as RGB image data having an appropriate contrast. Moreover, the high-luminance infrared-light image data 513 is likely to be infrared-light image data having an appropriate contrast. Furthermore, if the low-luminance RGB image data 510 and the high-luminance RGB image data 512 are combined, then it is possible to obtain RGB image data having a wide dynamic range. In an identical manner, if the low-luminance infrared-light image data 511 and the high-luminance infrared-light image data 513 are combined, then it is possible to obtain infrared-light image data having a wide dynamic range.
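As an illustration of the combination mentioned above, a minimal exposure-fusion sketch is given below; the normalization of pixel values to [0, 1], the saturation threshold, and the function names are assumptions made for the example.

```python
import numpy as np

def fuse_exposures(short_img, long_img, t_short, t_long, sat_level=0.95):
    """Combine a short-exposure image and a long-exposure image of the same
    scene into one wide-dynamic-range image (pixel values assumed in [0, 1])."""
    # Scale the short exposure up to the radiometric level of the long exposure.
    gain = t_long / t_short
    scaled_short = short_img * gain
    # Use the long exposure where it is not saturated; otherwise fall back to
    # the (noisier but unsaturated) scaled short exposure.
    return np.where(long_img < sat_level, long_img, scaled_short)

# Usage sketch (variable names are assumptions): fusing the low-luminance RGB
# data captured during the first time period with the high-luminance RGB data
# captured during the second, longer time period.
# hdr_rgb = fuse_exposures(rgb_short, rgb_long, t_short=t_rgb, t_long=t_ir)
```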
The image processor 230 selects, for example, a single set of image data from among the sets of image data 510 to 513, and outputs the selected image data to the outside via the output I/F 24. Herein, the image processor 230 can select a set of image data in response to a user operation performed using an operating unit (not illustrated) or based on contrast information obtained by analyzing the sets of image data.
Moreover, the image processor 230 stores the image data 500 and the image data 501 in the frame memories 250A and 250B, respectively. The calculator 251 performs distance calculation based on the sets of image data 500 and 501 stored in the frame memories 250A and 250B, respectively; and creates a distance map by obtaining the distance value for each microlens image 30.
Explained below with reference to
Then, with respect to the sets of image data 500 and 501 stored in the frame memories 221A and 221B, respectively, the image processor 230 performs de-mosaic processing and obtains RGBIr pixel values for each pixel (Step S11). Subsequently, for each of the sets of image data 500 and 501, the image processor 230 converts each pixel value into a luminance value (Step S12). At that time, based on the sets of image data 500 and 501, the image processor 230 converts pixel values into luminance values for each of the sets of image data 510 to 513.
Then, with respect to each of the sets of image data 510 to 513 after conversion to luminance values, the image processor 230 performs shading (Step S13), and stores the post-shading sets of image data 510 to 513 in the frame memories 250A and 250B. That is, of the post-shading image data, the image processor 230 stores the sets of image data 510 and 511, which are based on the image data 500, in the frame memory 250A; and stores the sets of image data 512 and 513, which are based on the image data 501, in the frame memory 250B.
Based on the sets of image data 510 to 513 read from the frame memories 250A and 250B, the calculator 251 performs distance calculation for each microlens image 30. At that time, for each of the sets of image data 510 to 513, the calculator 251 obtains the texture quantity. Then, based on the obtained texture quantities, the calculator 251 performs image matching by selecting appropriate data from the sets of image data 510 to 513, and calculates the distances.
Explained below with reference to a flowchart illustrated in
Explained below with reference to
However, the texture quantity of the microlens image 30 is not limited to the dispersion σ0. Alternatively, for example, as the texture quantity of the microlens image 30, it is possible to use the value obtained by dividing the maximum value of the luminance values of the pixels 1300 to 130n, which are included in the microlens image 30, by the minimum value of those luminance values.
Regarding the other sets of image data 511 to 513 too, with the microlens images 30 corresponding to the microlens image 30 of interest also treated as the microlens images 30 of interest, the calculator 251 calculates dispersions σ1, σ2, and σ3, respectively, of the pixels 1300 to 130n included in the respective microlens images 30 of interest. Then, the calculator 251 compares the dispersions σ0 to σ3 obtained from the microlens images 30 of interest in the sets of image data 510 to 513, and obtains σ as the greatest dispersion.
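As an illustration of the texture-quantity computation just described and of the selection that follows, a minimal sketch is given below; it assumes that the luminance values of the pixels belonging to the microlens image of interest have been extracted from each data set, and the function names are illustrative.

```python
import numpy as np

def texture_quantity(luminances, use_ratio=False):
    """Texture quantity of one microlens image computed from the luminance
    values of its pixels: the dispersion (variance) by default, or the
    max/min luminance ratio mentioned above as an alternative."""
    lum = np.asarray(luminances, dtype=np.float64)
    if use_ratio:
        return lum.max() / max(lum.min(), 1e-12)  # guard against division by zero
    return lum.var()

def select_best_data_set(microlens_images_of_interest):
    """Given the microlens image of interest extracted from each data set
    (for example, the four sets 510 to 513), return the index of the set
    with the greatest dispersion, to be used for image matching."""
    dispersions = [texture_quantity(img) for img in microlens_images_of_interest]
    return int(np.argmax(dispersions))
```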
Then, from among the microlens images 30 of interest of the sets of image data 510 to 513, the calculator 251 selects the microlens image 30 of interest for which the greatest dispersion σ is obtained at Step S20 (Step S21).
Subsequently, from among the sets of image data 510 to 513 stored in the frame memories 250A and 250B, in the image data that includes the microlens image 30 of interest selected at Step S21, the calculator 251 performs image matching using the microlens images 30 positioned in the vicinity of the microlens image 30 of interest and having the same color as the microlens image 30 of interest (Step S22).
For example, at Step S21, of the dispersions σ0 to σ3 of the microlens images 30 of interest included in the sets of image data 510 to 513, the dispersion σ0 of the microlens image 30 of interest included in the image data 510 is assumed to have the greatest value, and this microlens image 30 of interest is assumed to correspond to the optical filter 400R of red color. In this case, in the image data 510, the calculator 251 performs image matching between the microlens images 30 that are positioned in the vicinity of the microlens image 30 of interest and that correspond to the optical filters 400R of red color.
As a result of performing image matching, the calculator 251 obtains the inter-microlens amount of shift Δ′, and calculates the reduction ratio N according to Equation (7) given earlier and using the known inter-microlens center distance L. Then, the calculator 251 applies the reduction ratio N to Equation (4) or Equation (6) given earlier and obtains the distance A to the photographic subject.
Subsequently, the calculator 251 determines whether or not the operations from Steps S20 to S22 are completed for all microlens images 30 (Step S23). If it is determined that the operations are yet to be performed for any microlens image 30 (No at Step S23), then the system control returns to Step S20, and the operations from Steps S20 to S22 are performed with the next microlens image 30 as the microlens image 30 of interest.
When it is determined that the operations are completed for all microlens images 30 (Yes at Step S23), the system control exits the flowchart illustrated in
Then, the calculator 251 creates a distance map according to the distance value calculated for each microlens image 30 at Step S14 (Step S15). As schematically illustrated in
As described above, in the imaging device 1′, a single frame period is divided into the first time period tRGB and the second time period tIr, and the respective sets of image data 500 and 501 are obtained. However, that is not the only possible case. Alternatively, in the imaging device 1′, exposure can be performed for a single time period without dividing the frame period, and image data can be obtained that contains microlens images formed by the RGB colors as well as microlens images formed by infrared light. In this case, for example, if the photographic subject is sufficiently bright, the calculator 251 can perform image matching using the microlens images formed by the RGB colors; if the photographic subject is dark, the calculator 251 can perform image matching using the microlens images formed by infrared light.
In the case of performing exposure for a single time period without dividing a single frame period, the imaging device 1′ need not include the controller 26 that generates the timing signals, the switch 220, one of the frame memories 221A and 221B, or one of the frame memories 250A and 250B.
According to the second embodiment, distance calculation is done not only using the microlens images formed by RGB colors but also using the microlens images formed by infrared light. Hence, distance calculation of a high degree of accuracy can be performed for various photographic subjects.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Number | Date | Country | Kind
2014-059054 | Mar. 2014 | JP | national