The present invention relates to a camera device, a compound-eye imaging device, and an image processing method.
In particular, the present invention relates to a camera device that forms images of the same field of view on multiple imaging regions and acquires, from the multiple imaging regions, multiple images representing different types of information. The present invention also relates to a compound-eye imaging device including a processor for enhancing the resolutions of the multiple images acquired by the above camera device. The present invention further relates to an image processing method implemented by the above compound-eye imaging device.
The present invention further relates to a program for causing a computer to execute processing of the above compound-eye imaging device or image processing method, and a recording medium storing the program.
In recent years, requirements for imaging devices have become increasingly diverse; for example, it has been desired to acquire not only an RGB visible image but also additional information. In particular, near-infrared light is suitable for monitoring, object recognition, and the like, and attracts attention in the fields of monitoring cameras, in-vehicle cameras, and the like, because it penetrates haze more readily than visible light and is invisible to the human eye. Also, images obtained only from light having a specific polarization direction are useful for removing light reflected from windows, road surfaces, and the like, and for recognizing hard-to-see objects such as black or transparent objects; such images attract attention in the fields of in-vehicle cameras and inspection cameras used in factory automation (FA).
Conventionally, such additional information has commonly been acquired in place of ordinary color images. The simplest means of simultaneously acquiring a color image and other information is a camera array in which multiple cameras are arranged; however, the cameras must be positioned accurately, the device becomes large, and the installation and maintenance costs increase.
Also, RGB-X sensors, i.e., sensors whose color filter arrays include filters transmitting only near-infrared light or other elements, and which can therefore acquire different information simultaneously, have recently appeared. However, such sensors require considerable cost and time to design and develop, and pose many manufacturing problems.
As a small device that solves these problems, there has been proposed a device that has an imaging element divided into multiple imaging regions and forms a respective image on each of the imaging regions (see Patent Literature 1). This device is provided with different optical filters for the respective imaging regions, and is thereby capable of acquiring different information simultaneously. The device of Patent Literature 1 has the advantage that it can be manufactured by placing a lens array on a single imaging element and is easy to downsize.
Patent Literature 1: Japanese Patent Application Publication No. 2001-61109 (paragraphs 0061 and 0064)
In the device of Patent Literature 1, while the division of the imaging element makes it possible to simultaneously acquire multiple images from the single imaging element, there is a problem in that increasing the number of imaging regions decreases the number of pixels per imaging region, i.e., the resolution.
The present invention has been made to solve the above problems, and is intended to provide a compound-eye camera capable of acquiring different types of information and acquiring high-priority information with high resolution.
A camera device of the present invention includes:
A compound-eye imaging device of the present invention includes:
With the camera device of the present invention, since the camera device includes the multiple imaging regions, it is possible to acquire images representing different types of information. Also, since the multiple imaging regions include the at least one imaging region of the first type and the multiple imaging regions of the second type having smaller areas and smaller numbers of pixels than the imaging region of the first type, by assigning a relatively large imaging region to high-priority information, i.e., an image that is more strongly desired to be acquired with high resolution, of the different types of information, it is possible to acquire such an image with high resolution.
With the compound-eye imaging device of the present invention, since the resolution enhancer enhances the resolution of the low-resolution image, even when the imaging region is relatively small, it is possible to obtain an image having high resolution.
The camera device 1 illustrated in
The imaging element 11 has a rectangular imaging surface 11a, and the imaging surface 11a is divided into multiple rectangular imaging regions 15a to 15f, for example, as illustrated in
The lens array 12 includes multiple lenses 12a, 12b, . . . provided to correspond to the respective imaging regions 15a, 15b, . . . The multiple lenses 12a, 12b, . . . constitute a lens group.
The lenses 12a, 12b, . . . are configured so that images of the same field of view are imaged onto the respective corresponding imaging regions 15a, 15b, . . .
To form images of the same field of view on imaging regions having different sizes, lenses corresponding to larger imaging regions have, for example, longer focal lengths.
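This follows from the standard pinhole imaging relation (not stated explicitly in the text): an imaging region of height h covering a vertical field of view θ requires a focal length

$$ f = \frac{h}{2\tan(\theta/2)}, $$

so, at a fixed field of view, the focal length scales linearly with the region dimension, and a region with twice the height needs a lens with twice the focal length.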
The filter array 13 includes optical filters 13a, 13b, . . . provided to one or more of the multiple imaging regions.
The partition 14 is provided between the imaging regions 15a, 15b, . . . and prevents each imaging region from receiving light from the lenses other than the corresponding lens.
The imaging element 11 is preferably a sensor having a CMOS or CCD structure that allows the obtained image signals to be read out pixel by pixel. Because the imaging surface is divided, an imaging element of the global shutter type (simultaneous exposure and simultaneous readout), which causes no image blur, is preferable.
Of the imaging regions, the largest imaging region(s), specifically the imaging region 15a having the largest number of pixels, will be referred to as high-resolution imaging region(s), and the other imaging regions 15b, 15c, . . . will be referred to as low-resolution imaging regions. In the example illustrated in
Since the numbers of pixels in the vertical and horizontal directions of the high-resolution imaging region 15a are twice those of each of the low-resolution imaging regions 15b, 15c, . . . , an image (high-resolution image) D0 acquired in the high-resolution imaging region is twice the size of an image (low-resolution image) D1 acquired in each low-resolution imaging region in each of the vertical and horizontal directions (four times in total number of pixels), as illustrated in
As described above, the high-resolution imaging region differs in resolution from the low-resolution imaging regions, and the former is higher in resolution. For distinction, the former may be referred to as an imaging region of the first type, and the latter may be referred to as imaging regions of the second type.
The optical filters 13a, 13b, . . . constituting the filter array 13 include optical filters having different optical properties, so that different types of information (images representing different types of information) are acquired from the respective imaging regions.
For example, the optical filters 13a, 13b, . . . having different optical properties are provided to one or more of the imaging regions 15a, 15b, . . . , so that images representing different types of information are acquired in the respective imaging regions.
As the optical filters having different optical properties, for example, at least one of a spectral filter, a polarization filter, and a neutral density (ND) filter is used, and by using these, images resulting from light in different wavelength ranges, images resulting from light having different polarization directions, or images resulting from imaging at different exposure amounts are acquired in the respective imaging regions.
These optical filters may be used alone or in combination.
The above-described “images resulting from light in different wavelength ranges” refer to images obtained by photoelectrically converting light in specific wavelength ranges, and the “images resulting from light having different polarization directions” refer to images obtained by photoelectrically converting light having specific polarization directions.
For example, it is possible to provide the high-resolution imaging region 15a with a G (green) transmission filter having high transmittance, an infrared cut filter, or an optical filter of a complementary color system, or it is possible to provide no optical filter to the high-resolution imaging region 15a (set the high-resolution imaging region 15a to be a monochrome region).
Optical filters of a complementary color system generally have high optical transmittance, and are preferable in this respect.
Here, providing no optical filter means providing no optical filter intended to acquire different types of images, and an optical filter for another purpose may be provided.
Also, depending on the lens design, the clear aperture of the lens provided for the high-resolution imaging region may be larger than those of the lenses for the low-resolution imaging regions, so that the exposure amount in the high-resolution imaging region is larger. Providing the high-resolution imaging region with an optical filter that decreases the amount of transmitted light, such as an ND filter, can keep the difference in exposure amount between the high-resolution imaging region and the low-resolution imaging regions small, preventing overexposure in the high-resolution imaging region and underexposure in the low-resolution imaging regions.
The method of dividing the imaging surface 11a into imaging regions is not limited to the example of
The larger an imaging region, the higher the resolution of the acquired image. Meanwhile, increasing the number of imaging regions makes it possible to increase the number of filters having different optical properties or the number of different combinations of filters having different optical properties, thereby increasing the number of types of information acquired by the imaging element 11.
In the example of
The center of the low-resolution imaging region 15c located at the middle in the vertical direction and the center of the high-resolution imaging region 15a are horizontally aligned. Such an arrangement makes it possible to obtain depth information by performing stereo matching using the displacement due to parallax between the image acquired in the low-resolution imaging region 15c and the image acquired in the high-resolution imaging region 15a.
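Under the standard pinhole stereo model (the text does not spell out the relation), the depth Z of an object follows from the disparity d found by stereo matching, the baseline B between the centers of the two lenses, and the focal length f:

$$ Z = \frac{f\,B}{d}. $$

Note that the image from the smaller region would first be scaled to a common resolution before the disparity is measured.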
In the examples of
In the example of
The imaging surface 11a of
In the example of
When the imaging surfaces 11a in the examples of
In the example of
With such division, the imaging surface 11a is divided into two in each of the vertical and horizontal directions, and the high-resolution imaging regions 15a to 15c are arranged in the left upper quarter, the right upper quarter, and the right lower quarter. Further, the left lower quarter is divided into two in each of the vertical and horizontal directions, and the respective divided regions form the four low-resolution imaging regions 15d to 15g.
The example of
In the example of
For distinction, the imaging regions 15b to 15i of the first group may be referred to as intermediate-resolution imaging regions, and the imaging regions 15j to 15y of the second group may be referred to as low-resolution imaging regions.
Also, since the imaging regions 15b to 15i of the first group and the imaging regions 15j to 15y of the second group are lower in resolution than the high-resolution imaging region 15a, they may be referred to collectively as low-resolution imaging regions.
The arrangement of the imaging regions in the configuration of
Providing many imaging regions in this manner makes it possible to acquire many different types of information. Also, since imaging regions having different sizes are provided as the imaging regions other than the high-resolution imaging region, it is possible to assign imaging regions having different sizes depending on the type of information.
For example, when it is intended to acquire a multispectral image consisting of many narrowband images, one or more of the relatively small imaging regions (low-resolution imaging regions) 15j to 15y can be assigned to the narrowband images and provided with narrowband bandpass filters. One or more of the relatively large imaging regions (intermediate-resolution imaging regions) 15b to 15e can then be assigned to information required to have a somewhat high resolution, such as primary RGB color information, near-infrared information, or polarization information, and provided with optical filters for acquiring the respective information.
As above, in the examples illustrated in
As exemplified in
Advantages obtained by the camera device 1 of the first embodiment will now be described.
Since in the camera device 1 of the first embodiment, the imaging surface of the imaging element of the compound-eye camera 10 forming the camera device is divided into multiple imaging regions, it is possible to acquire images representing different types of information.
Also, since the imaging regions include a relatively large imaging region and a relatively small imaging region, by assigning the relatively large imaging region to high-priority information, i.e., an image that is more strongly desired to be acquired with high resolution, of the different types of information, it is possible to acquire such an image with high resolution.
Thereby, for example, it is possible to acquire primary images, such as RGB visible light images, with high resolution, by assigning relatively large imaging regions to these images, and acquire many images having additional information, such as narrowband images, near-infrared images, or ultraviolet images for constituting a multispectral image, polarization images, or images at different exposure amounts, by assigning many relatively small imaging regions to these images.
Further, since the multiple imaging regions are formed in a single imaging element, it is possible to reduce the size of the camera device.
The imaging controller 17 controls imaging by the camera device 1. For example, it controls the imaging timing or exposure time.
An imaging signal output from the camera device 1 is, after being subjected to processing, such as amplification, in an analog processor (not illustrated), converted to a digital image signal by the A/D converter 18, and input to the processor 20.
The processor 20 includes an image memory 22 and at least one resolution enhancer 30.
The image memory 22 preferably includes multiple storage regions 22-1, 22-2, . . . corresponding to the respective imaging regions of the compound-eye camera 10 forming the camera device 1. In this case, a digital image signal stored in each storage region represents an image acquired in the corresponding imaging region.
Of images stored in the respective storage regions 22-1, 22-2, . . . of the image memory 22, the images acquired in two imaging regions having different resolutions are supplied to the resolution enhancer 30.
For example, a high-resolution image acquired in the imaging region 15a of
The resolution enhancer 30 enhances (or increases) the resolution of the low-resolution image D1 by using the high-resolution image D0 as a reference image to generate a high-resolution image D30.
On the assumption that the high-resolution image D30 to be generated and the reference image D0 are correlated with each other in image features (such as local gradients or patterns), the resolution enhancer 30 transfers (reflects) a high-resolution component of an object included in the reference image D0 to the low-resolution image D1, thereby generating the high-resolution image D30 including the high-resolution component of the object. Further, it is possible to compare the low-resolution image D1 and the reference image D0 and perform processing adapted to the position in the image, facilitating the transfer of the high-resolution component in image regions where the correlation appears high and suppressing it in image regions where the correlation appears low. This is because the imaging conditions (such as wavelength, polarization direction, or exposure amount) differ between the low-resolution image D1 and the reference image D0, so that, depending on the reflective properties of the object, the high-resolution components of the low-resolution image D1 and the reference image D0 are not always correlated.
The resolution enhancer 30a of
The resolution enhancer 30a illustrated in
The filter processor (first filter processor) 311 extracts a low-frequency component D1L and a high-frequency component D1H from the low-resolution image D1. For example, the filter processor 311 performs smoothing filter processing on the low-resolution image D1 to extract the low-frequency component D1L, and generates the high-frequency component D1H of the image by obtaining a difference between the extracted low-frequency component D1L and the original image D1.
The filter processor (second filter processor) 312 extracts a low-frequency component D0L and a high-frequency component D0H from the reference image D0. For example, the filter processor 312 performs smoothing filter processing on the reference image D0 to extract the low-frequency component D0L, and generates the high-frequency component D0H of the image by obtaining a difference between the extracted low-frequency component D0L and the original image D0.
In the smoothing filter processing in the filter processors 311 and 312, a Gaussian filter, a bilateral filter, or the like can be used.
The low-frequency component combiner 313 enlarges the low-frequency component D1L to the same resolution as the reference image D0, and combines the enlarged low-frequency component and the low-frequency component D0L by weighted addition to generate a combined low-frequency component D313.
The high-frequency component combiner 314 enlarges the high-frequency component D1H to the same resolution as the reference image D0, and combines the enlarged high-frequency component and the high-frequency component D0H by weighted addition to generate a combined high-frequency component D314.
The component combiner 315 combines the combined low-frequency component D313 and combined high-frequency component D314 to generate the high-resolution image D30.
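As a concrete illustration, the following is a minimal NumPy/SciPy sketch of the frequency-separation scheme of the resolution enhancer 30a, assuming grayscale images and an integer enlargement ratio; the Gaussian smoothing, bilinear enlargement, and the weights are illustrative choices, not values specified in the text.

```python
from scipy.ndimage import gaussian_filter, zoom

def enhance_frequency_combination(d1, d0, sigma=2.0, w_low=0.5, w_high=0.5):
    """Sketch of resolution enhancer 30a.
    d1: low-resolution image D1; d0: high-resolution reference image D0."""
    # Filter processors 311 and 312: smoothing gives the low-frequency
    # component; the difference from the original is the high-frequency one.
    d1_low = gaussian_filter(d1, sigma)       # D1L
    d1_high = d1 - d1_low                     # D1H
    d0_low = gaussian_filter(d0, sigma)       # D0L
    d0_high = d0 - d0_low                     # D0H
    # Enlarge the D1 components to the resolution of the reference image.
    scale = (d0.shape[0] / d1.shape[0], d0.shape[1] / d1.shape[1])
    d1_low_up = zoom(d1_low, scale, order=1)
    d1_high_up = zoom(d1_high, scale, order=1)
    # Combiners 313 and 314: weighted addition of corresponding components.
    d313 = w_low * d1_low_up + (1.0 - w_low) * d0_low       # combined low
    d314 = w_high * d1_high_up + (1.0 - w_high) * d0_high   # combined high
    # Component combiner 315: final high-resolution image D30.
    return d313 + d314
```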
In the configuration illustrated in
The resolution enhancer 30b illustrated in
The resolution enhancer 30b illustrated in
The reduction processor 321 reduces the reference image D0 to generate a reduced reference image D0b having the same resolution as the low-resolution image D1.
The coefficient calculator 322 calculates linear coefficients am and bm approximating a linear relationship between the reduced reference image D0b and the low-resolution image D1.
The coefficient calculator 322 first obtains a variance varI(x) of pixel values I(y) of the reduced reference image D0b in a local region Ω(x) centered on pixel position x, according to equation (1):
In equation (1), I(x) is a pixel value of a pixel at pixel position x of the reduced reference image D0b.
I(y) is a pixel value of a pixel at pixel position y of the reduced reference image D0b.
Here, pixel position y is a pixel position in the local region Ω(x) centered on pixel position x.
The coefficient calculator 322 also obtains a covariance covIp(x) of the pixel values I(y) of the reduced reference image D0b and pixel values p(y) of the input image D1 in the local region Ω(x) centered on pixel position x, according to equation (2):
In equation (2), I(x) and I(y) are as described for equation (1).
p(y) is a pixel value of a pixel at pixel position y of the input image D1.
The coefficient calculator 322 further calculates a coefficient a from the variance varI(x) obtained by equation (1) and the covariance covIp(x) obtained by equation (2), according to equation (3):
In equation (3), eps is a constant determining the degree of edge preservation, and is predetermined.
The coefficient calculator 322 further calculates a coefficient b(x) using the coefficient a(x) obtained by equation (3), according to equation (4):
The coefficients a(x) and b(x) are referred to as linear regression coefficients.
The coefficient calculator 322 further calculates linear coefficients am(x) and bm(x) by averaging the coefficients a(x) and b(x) obtained by equations (3) and (4) according to equation (5):
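Reconstructed from the definitions above and the standard guided-filter formulation, equations (1) to (5) read:

$$\mathrm{var}_I(x) = \frac{1}{|\Omega(x)|}\sum_{y \in \Omega(x)} \bigl(I(y) - \mu_I(x)\bigr)^2, \quad \mu_I(x) = \frac{1}{|\Omega(x)|}\sum_{y \in \Omega(x)} I(y) \tag{1}$$

$$\mathrm{cov}_{Ip}(x) = \frac{1}{|\Omega(x)|}\sum_{y \in \Omega(x)} \bigl(I(y) - \mu_I(x)\bigr)\bigl(p(y) - \mu_p(x)\bigr) \tag{2}$$

$$a(x) = \frac{\mathrm{cov}_{Ip}(x)}{\mathrm{var}_I(x) + eps} \tag{3}$$

$$b(x) = \mu_p(x) - a(x)\,\mu_I(x) \tag{4}$$

$$a_m(x) = \frac{1}{|\Omega(x)|}\sum_{y \in \Omega(x)} a(y), \quad b_m(x) = \frac{1}{|\Omega(x)|}\sum_{y \in \Omega(x)} b(y) \tag{5}$$

where $\mu_p(x)$ is the local mean of $p(y)$ over $\Omega(x)$.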
The coefficient map enlarger 323 enlarges a coefficient map consisting of the linear coefficients am(x) obtained by equation (5) in the coefficient calculator 322 and a coefficient map consisting of the linear coefficients bm(x) obtained by equation (5) in the coefficient calculator 322, to the same resolution as the reference image D0. The linear coefficients of the enlarged coefficient maps will be denoted by amb(x) and bmb(x). A coefficient map is a map in which coefficients corresponding to all the pixels constituting an image are arranged at the same positions as the corresponding pixels.
The linear converter 324 generates a high-resolution image D30 having information represented by the low-resolution image D1, on the basis of the linear coefficients amb and bmb of the enlarged coefficient maps and the reference image D0.
Specifically, the linear converter 324 uses the linear coefficients amb and bmb of the enlarged coefficient maps to derive guided filter output values q according to equation (6):
q(x)=amb(x)J(x)+bmb(x). (6)
Equation (6) indicates that the output of the guided filter (the pixel value of the image D30) q(x) and the pixel value J(x) of the reference image D0 have a linear relationship.
In the configuration illustrated in
By calculating the pixel values of the high-resolution image D30 as described above, it is possible to perform smoothing processing only on regions of the reduced reference image D0b in which the variance varI(x) is small, and preserve the texture of the other region.
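A compact sketch of this guided-filter processing, assuming the reconstruction of equations (1) to (5) given above, grayscale images, and reference dimensions that are integer multiples of the low-resolution dimensions; box-filter means stand in for the local-region averages.

```python
from scipy.ndimage import uniform_filter, zoom

def guided_upsample(d1, d0, radius=2, eps=1e-3):
    """Sketch of resolution enhancer 30b (equations (1)-(6)).
    d1: low-resolution image D1 (p); d0: high-resolution reference D0 (J)."""
    size = 2 * radius + 1  # side of the local region Omega(x)
    # Reduction processor 321: reduce D0 to the resolution of D1.
    down = (d1.shape[0] / d0.shape[0], d1.shape[1] / d0.shape[1])
    I = zoom(d0, down, order=1)               # reduced reference image D0b
    p = d1
    # Coefficient calculator 322: equations (1)-(5) via box-filter means.
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    var_I = uniform_filter(I * I, size) - mean_I ** 2       # equation (1)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p  # equation (2)
    a = cov_Ip / (var_I + eps)                              # equation (3)
    b = mean_p - a * mean_I                                 # equation (4)
    am = uniform_filter(a, size)                            # equation (5)
    bm = uniform_filter(b, size)
    # Coefficient map enlarger 323: enlarge the maps to the size of D0.
    up = (d0.shape[0] / I.shape[0], d0.shape[1] / I.shape[1])
    amb = zoom(am, up, order=1)
    bmb = zoom(bm, up, order=1)
    # Linear converter 324: equation (6), q(x) = amb(x) J(x) + bmb(x).
    return amb * d0 + bmb
```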
Although the above resolution enhancer 30b performs processing using a guided filter, the present invention is not limited to this. The resolution enhancer 30b may use other methods, such as a method of enhancing the resolutions of images having different types of information on the basis of edge or gradient information of a high-resolution image, such as a method using a joint bilateral filter.
When there is misalignment between the low-resolution image D1 and the high-resolution image D0 output from the camera device 1, the aligner 25 performs alignment processing before the resolution enhancement by the resolution enhancer 30. As the alignment processing, it is possible to perform a fixed-value alignment using initial misalignment (correction) information, a dynamic alignment including registration (image matching), or the like.
The resolution enhancer 30 performs the resolution enhancement with the images aligned by the aligner 25 as inputs.
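A minimal sketch of the two alignment modes mentioned above, assuming pure translation between grayscale images; the phase-correlation estimator is one common way to perform the registration, not necessarily the method used in the device.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def align_fixed(image, dy, dx):
    """Fixed-value alignment: undo a pre-calibrated misalignment (dy, dx)."""
    return nd_shift(image, (-dy, -dx), order=1, mode="nearest")

def estimate_shift(a, b):
    """Dynamic alignment: estimate the integer translation (dy, dx) such
    that b is approximately a shifted by (dy, dx), via phase correlation."""
    r = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    corr = np.fft.ifft2(r / (np.abs(r) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```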
It is also possible to provide multiple resolution enhancers that are the same as one of the resolution enhancers (30a and 30b) described in
The high-resolution images used as the reference images by the multiple resolution enhancers may be the same or different.
It is also possible to provide multiple combinations of the aligner 25 and resolution enhancer 30 described in
The processor 20c illustrated in
The example of
In this case, three low-resolution images D1-r, D1-g, and D1-b respectively represent the R, G, and B images having low resolution. Also, the G image acquired in the high-resolution imaging region 15a is denoted by reference character D0.
The image enlargers 31r and 31b enlarge the images D1-r and D1-b to the same resolution as the high-resolution image D0 to generate enlarged images D31-r and D31-b, respectively.
The resolution enhancer 30c replaces the image D1-g with the image D0 and outputs the image obtained by the replacement as a high-resolution image D30-g.
Such processing can greatly reduce the calculation amount or calculation time for the image processing and reduce the hardware cost of the processor.
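A sketch of this enlarge-and-replace processing, assuming the images are NumPy arrays and the enlargement ratio follows from the array shapes:

```python
from scipy.ndimage import zoom

def enlarge_and_replace(d1_r, d1_g, d1_b, d0):
    """Sketch of processor 20c: enlarge the low-resolution R and B images
    (image enlargers 31r, 31b) and replace the low-resolution G image
    with the high-resolution G image D0 (resolution enhancer 30c)."""
    scale = (d0.shape[0] / d1_r.shape[0], d0.shape[1] / d1_r.shape[1])
    d31_r = zoom(d1_r, scale, order=1)  # enlarged image D31-r
    d31_b = zoom(d1_b, scale, order=1)  # enlarged image D31-b
    d30_g = d0                          # D1-g is replaced by D0 (D30-g)
    return d31_r, d30_g, d31_b
```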
In the configuration of
Advantages obtained by the compound-eye imaging device 100 of the second embodiment will now be described.
In the compound-eye imaging device 100 of the second embodiment, multiple images having different types of information are acquired from the multiple imaging regions of the camera device, these images include an image having relatively low resolution and an image having relatively high resolution, and the resolution of the image having relatively low resolution is enhanced using a high-resolution component of the image having relatively high resolution, and thus it is possible to obtain multiple images having high resolution and having different types of information.
Thus, even when an imaging region for acquiring images having different types of information is small, an image having high resolution can be generated, and it is possible to obtain images having different types of information with high resolution while reducing the size of the camera device.
The first to Nth resolution enhancers 30-1 to 30-N are respectively provided to correspond to N low-resolution imaging regions (e.g., 15b, 15c, . . . in the example of
The resolution enhancer 30-n (n being one of the integers from 1 to N) enhances the resolution of the low-resolution image D1-n by using the high-resolution image D0 as a reference image and generates a high-resolution image D30-n. Such processing is performed by all the resolution enhancers 30-1 to 30-N, so that multiple high-resolution images D30-1 to D30-N having different types of information are generated.
Each of the resolution enhancers 30-1 to 30-N is configured as described in
The combiner 40 receives the multiple high-resolution images D30-1 to D30-N having the different types of information, and generates one or more combined high-resolution images D40-a, D40-b, . . .
Specifically, the combiner 40 combines the high-resolution images D30-1 to D30-N having the different types of information and generated by the resolution enhancers 30-1 to 30-N and generates the combined high-resolution images D40-a, D40-b, . . .
The combining processing in the combiner 40 can be performed by, for example, pan-sharpening processing, weighted addition of images, intensity combination, or region selection. The region selection may be performed on the basis of the visibility of an image estimated using a local variance as an index, for example.
Among the above, pan-sharpening techniques are used in satellite image processing (remote sensing) or the like, and pan-sharpening processing includes converting an RGB color image into an HSI (hue, saturation, intensity) image, replacing the I values of the HSI image resulting from the conversion with pixel values of a monochrome image that is a high-resolution image, and converting the HSI image with the replaced pixel values back to an RGB image.
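As an illustration, the sketch below uses a ratio-based intensity substitution (a Brovey-like simplification) instead of a full HSI round trip: scaling R, G, and B by the ratio of the high-resolution monochrome image to the current intensity replaces the intensity while preserving hue and saturation, which approximates the HSI replacement described above.

```python
import numpy as np

def pan_sharpen(rgb, pan, eps=1e-6):
    """Simplified pan-sharpening by intensity substitution.
    rgb: (H, W, 3) color image enlarged to the monochrome resolution;
    pan: (H, W) high-resolution monochrome image."""
    intensity = rgb.mean(axis=2)          # I of the HSI model
    gain = pan / (intensity + eps)        # factor that replaces I with pan
    return rgb * gain[..., np.newaxis]    # hue and saturation preserved
```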
The combiner 40b illustrated in
The combiner 40b receives an R image D30-r, a G image D30-g, a B image D30-b, a polarization image D30-p, and an NIR (near-infrared) image D30-i subjected to resolution enhancement by the multiple resolution enhancers 30-1, 30-2, . . . (each of which is the same as that described in
The images D31-r and D31-b enlarged by the image enlargers 31r and 31b illustrated in
The luminance/color separator 411 receives the R image D30-r, G image D30-g, and B image D30-b, and separates them into a luminance component D411-y and color components (components of R, G, and B colors) D411-r, D411-g, and D411-b.
The luminance separator 412 receives the polarization image D30-p and separates a luminance component D412.
The weighting adder 413 weights the luminance component D412 of the polarization image output from the luminance separator 412 and the NIR image D30-i input to the combiner 40b and adds them to the luminance component D411-y output from the luminance/color separator 411, thereby obtaining a combined luminance component D413.
The luminance/color combiner 414 combines the color components D411-r, D411-g, and D411-b output from the luminance/color separator 411 and the combined luminance component D413 obtained by the weighting adder 413, thereby generating an R image D40-r, a G image D40-g, and a B image D40-b.
The R image D40-r, G image D40-g, and B image D40-b output from the luminance/color combiner 414 have luminance information enhanced by the luminance component of the polarization image D30-p and the NIR image D30-i.
In the weighted addition by the weighting adder 413, it is possible to use a method of adding a pixel value multiplied by a gain depending on an image.
Instead, it is also possible to extract high-frequency components from the respective images (the luminance component D411-y output from the luminance/color separator 411, the luminance component D412 output from the luminance separator 412, and the NIR image D30-i input to the combiner 40b) by filter processing, weight and add them, and obtain the weighted average.
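A sketch of combiner 40b under illustrative assumptions: a BT.601 luminance stands in for the luminance/color separation, the polarization image is treated as a single luminance channel, and the weights w_p and w_i are placeholders, not values from the text.

```python
def combine_40b(d30_r, d30_g, d30_b, d30_p, d30_i, w_p=0.3, w_i=0.3):
    """Sketch of combiner 40b; inputs are same-sized NumPy arrays."""
    # Luminance/color separator 411.
    y = 0.299 * d30_r + 0.587 * d30_g + 0.114 * d30_b  # luminance D411-y
    cr = d30_r - y                                     # color component D411-r
    cg = d30_g - y                                     # color component D411-g
    cb = d30_b - y                                     # color component D411-b
    # Luminance separator 412: luminance D412 of the polarization image.
    y_p = d30_p
    # Weighting adder 413: combined luminance component D413.
    d413 = y + w_p * y_p + w_i * d30_i
    # Luminance/color combiner 414: output images D40-r, D40-g, D40-b.
    return cr + d413, cg + d413, cb + d413
```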
The processor 20e illustrated in
The compound-eye camera information Dinfo includes information indicating wavelengths acquired in the respective imaging regions, information indicating polarization directions, information indicating the positions (the positions in the imaging surfaces) of the respective imaging regions, and the like.
By inputting the compound-eye camera information Dinfo to the resolution enhancers 30-1 to 30-N and combiner 40, it is possible to improve the accuracy in the resolution enhancement, combination processing, or the like, or increase information obtained by these processes.
For example, when the compound-eye camera information Dinfo indicates the spectral properties of the optical filters provided to the respective imaging regions, it is possible to extract a near-infrared image from RGB images and a monochrome image in the combination processing.
As described above, with the third embodiment, by enhancing the resolutions of multiple images having different types of information acquired by the camera device and then combining them, it is possible to generate a more useful image according to the intended use.
The combiner 41 receives high-resolution images D30-1 to D30-N output from the resolution enhancers 30-1 to 30-N, and generates, from them, high-resolution images D41-a, D41-b, . . . representing information different in type from them by interpolation.
In this case, it is assumed that the images D1-1, D1-2, . . . acquired in the multiple low-resolution imaging regions (e.g., 15b, 15c, . . . in the example of
The combiner 41 generates (reconstructs), by interpolation from the multiple high-resolution images D30-1, D30-2, . . . , high-resolution images D41-a, D41-b, . . . having a type or value of the at least one parameter different from that of any of the multiple high-resolution images D30-1, D30-2, . . .
For example, a restoration technique used in compressed sensing can be applied to the interpolation.
An example of the generation of the images by interpolation will be described below with reference to
Suppose that images with the parameter combinations indicated by the circles in
The combiner 41 generates, from the high-resolution images D30-1 to D30-6, by interpolation, images D41-a to D41-f corresponding to the parameter combinations indicated by the triangles. For example, the image D41-a is an image presumed to be generated by enhancing the resolution of an image acquired by imaging at an exposure amount of 1/1000 in an imaging region provided with an optical filter that transmits light in a G wavelength range.
The combiner 41 outputs not only the generated images D41-a to D41-f but also the input images D30-1 to D30-6.
By performing such processing, a larger number of high-resolution images D30-1 to D30-6 and D41-a to D41-f having different types of information are obtained.
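The text leaves the restoration technique open; as a much simpler stand-in for compressed-sensing restoration, the sketch below estimates a missing parameter combination by linearly interpolating, in log exposure, between the two nearest acquired exposures of the same wavelength band. The dictionary keys and band names are hypothetical.

```python
import numpy as np

def interpolate_exposure(images, band, exposure):
    """Estimate the image of `band` at `exposure` from acquired images.
    images: dict mapping (band, exposure) to high-resolution arrays,
    e.g., images[("G", 1/500)] = D30_2 (hypothetical key layout)."""
    exposures = sorted(e for (b, e) in images if b == band)
    lo = max((e for e in exposures if e <= exposure), default=exposures[0])
    hi = min((e for e in exposures if e >= exposure), default=exposures[-1])
    if lo == hi:                       # exact match or outside the range
        return images[(band, lo)]
    t = (np.log(exposure) - np.log(lo)) / (np.log(hi) - np.log(lo))
    return (1.0 - t) * images[(band, lo)] + t * images[(band, hi)]
```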
Advantages obtained by the compound-eye imaging device of the fourth embodiment will now be described.
With the fourth embodiment, it is possible to generate, from a relatively small number of images acquired by imaging and having different types of information, a larger number of images having different types of information. Thus, even when the number of imaging regions is not large, many types of information can be obtained.
The combiner 42 performs combination processing on images D1-1 to D1-N acquired in the low-resolution imaging regions (e.g., 15b, 15c, . . . in the example of
The resolution enhancer 32 performs resolution enhancement on one or more combined images of the combined images D42-a, D42-b, . . . output from the combiner 42 by using the reference image D0, thereby generating high-resolution images (high-resolution combined images) D32-a, D32-b, . . .
When the number of combined images D42-a, D42-b, . . . generated by the combination by the combiner 42 is less than the number of input images D1-1 to D1-N, performing the resolution enhancement after the combination can reduce processing for resolution enhancement and reduce the calculation amount as a whole.
The compound-eye imaging device 102 according to the sixth embodiment includes a camera device 50, an imaging controller 17, A/D converters 18 and 19, and a processor 20h.
As described below in detail, as the compound-eye camera 60, one having an imaging surface divided into multiple imaging regions like the compound-eye camera 10 illustrated in
The compound-eye camera 60 includes an imaging element 61, a lens array 62, a filter array 63, and a partition 64.
The imaging element 61 has a rectangular imaging surface 61a, and the imaging surface 61a is divided into multiple, e.g., nine, imaging regions 65a to 65i, for example, as illustrated in
Although the imaging regions 65a, 65b, . . . of the compound-eye camera 60 can have the same size as in the examples of
Even when they have different sizes, they preferably have the same aspect ratio.
The lens array 62 includes lenses 62a, 62b, . . . that are provided to correspond to the respective imaging regions 65a, 65b, . . . and form images of the same field of view on the respective corresponding imaging regions.
The filter array 63 includes optical filters 63a, 63b, . . . provided to one or more imaging regions of the multiple imaging regions.
The partition 64 is provided between the imaging regions 65a, 65b, . . . and prevents each imaging region from receiving light from the lenses other than the corresponding lens.
The monocular camera 70 includes an imaging element 71, a lens 72, and an optical filter 73.
The imaging element 71 also has a rectangular imaging surface 71a. The entire imaging surface 71a forms a single imaging region 75.
The imaging region 75 formed by the entire imaging surface 71a of the imaging element 71 of the monocular camera 70 has more pixels than any of the imaging regions 65a, 65b, . . . of the compound-eye camera 60. Thus, the resolution of the imaging region 75 of the monocular camera 70 is higher than that of even the largest of the imaging regions 65a, 65b, . . . of the compound-eye camera 60.
The imaging region 75 has the same aspect ratio as the imaging regions 65a, 65b, . . .
The imaging region 75 of the monocular camera 70 is an imaging region differing in resolution from the imaging regions 65a, 65b, . . . of the compound-eye camera, and the former is higher in resolution than the latter. For distinction, the former may be referred to as an imaging region of the first type, and the latter may be referred to as imaging regions of the second type.
The lens 72 of the monocular camera 70 is provided so that an image of the same field of view as images imaged onto the respective imaging regions of the compound-eye camera 60 is imaged onto the imaging region 75.
The lenses 62a, 62b, . . . of the compound-eye camera 60 and the lens 72 of the monocular camera 70 constitute a lens group.
The optical filters 63a, 63b, . . . constituting the filter array 63 and the optical filter 73 include optical filters having different optical properties, so that different types of information (images representing different types of information) are acquired from the respective imaging regions.
For example, it is possible to select the optical filters for the respective imaging regions on the assumption that the high-resolution imaging region 15a of
It is possible to provide no optical filters to one or more imaging regions, e.g., the imaging region 75, of the imaging regions 75, 65a, 65b, . . . (set the one or more imaging regions to be monochrome regions).
The imaging controller 17 controls imaging by the compound-eye camera 60 and imaging by the monocular camera 70. For example, it controls the timings or exposure amounts of imaging by the two cameras. The imaging timing control is performed so that the two cameras substantially simultaneously perform imaging.
The processor 20h of the compound-eye imaging device 102 of this embodiment is supplied with images acquired in the multiple imaging regions of the compound-eye camera 60 and an image acquired by the monocular camera 70 through the A/D converters 18 and 19, respectively.
The processor 20h includes an image memory 22 and at least one resolution enhancer 30.
The image memory 22 preferably has multiple storage regions 22-1, 22-2, . . . corresponding to the multiple imaging regions of the compound-eye camera 60 and the imaging region of the monocular camera 70 on a one-to-one basis.
The resolution enhancer 30 receives, as a reference image, a high-resolution image D0 acquired by the monocular camera 70, receives, as a low-resolution image, an image D1 acquired in one of the imaging regions of the compound-eye camera 60, and enhances the resolution of the low-resolution image D1 by using a high-resolution component included in the reference image D0, thereby generating a high-resolution image D30.
As above, when the camera device 1 of the first embodiment is used, with an image acquired in a high-resolution imaging region (15a or the like) of the compound-eye camera 10 as the reference image, the resolution of an image acquired in a low-resolution imaging region of the same compound-eye camera 10 is enhanced; on the other hand, when the camera device 50 of the sixth embodiment is used, with an image acquired by the imaging element of the monocular camera 70 as the reference image, the resolution of a low-resolution image acquired by the compound-eye camera 60 is enhanced.
Otherwise, the sixth embodiment is the same as the second embodiment. For example, the processing by the resolution enhancer 30 can be performed in the same manner as that described in the second embodiment with reference to
Although it has been described that a processor that is the same as that described in the second embodiment is used as the processor 20h of the sixth embodiment, it is also possible to use a processor that is the same as that of the third, fourth, or fifth embodiment instead. In any case, the image acquired in the imaging region of the monocular camera 70 should be used as the reference image instead of the high-resolution image acquired in the high-resolution imaging region of the first embodiment.
In the example of
Advantages obtained by the compound-eye imaging device 102 of the sixth embodiment will now be described.
In the compound-eye imaging device 102 of the sixth embodiment, the monocular camera 70 is provided separately from the compound-eye camera 60, and an image having high resolution can be acquired by the monocular camera 70. Thus, it is possible to enhance the resolution of an image acquired in each imaging region of the compound-eye camera 60 to a higher resolution.
Also, it is possible to make the imaging regions 65a, 65b, . . . of the compound-eye camera 60 all have the same shape as exemplified in
Further, since the distance between the centers of the compound-eye camera 60 and the monocular camera 70 is relatively large, the displacement due to parallax between the image acquired by the monocular camera 70 and the images acquired by the compound-eye camera 60 is greater than the displacement due to parallax between the images acquired in the different imaging regions of the compound-eye camera 10 of the first embodiment, so it is also possible to obtain depth information more accurately by taking advantage of the parallax.
In the first embodiment, a camera device is formed only by a compound-eye camera (10), and in the sixth embodiment, a camera device is formed by a compound-eye camera (60) and a monocular camera (70); in short, it is sufficient that a camera device have multiple imaging regions (an imaging region of a first type and imaging regions of a second type) having different sizes, and that a group of filters be provided so that images representing different types of information are acquired in the multiple imaging regions. The different imaging regions may be formed in a single imaging element (11) as in the first embodiment, or may be formed in multiple imaging elements (61 and 71) as in the sixth embodiment. The multiple imaging regions having different sizes include at least one imaging region of a first type and multiple imaging regions of a second type having smaller areas and smaller numbers of pixels than the imaging region of the first type.
When the imaging region of the first type and the imaging regions of the second type are both formed in a single imaging element as in the first embodiment, the imaging region of the first type and the imaging regions of the second type are formed by dividing an imaging surface of the single imaging element, and a lens group includes a lens included in a lens array provided to the imaging surface.
In this case, a partition is provided to prevent each of the multiple imaging regions from receiving light from the lenses other than the corresponding lens.
When the imaging region of the first type and the imaging regions of the second type are formed in different imaging elements as in the sixth embodiment, the above-described “one or more imaging regions of the first type” are formed by a single imaging region formed by the entire imaging surface of a first imaging element (71), the multiple imaging regions of the second type are formed by dividing the imaging surface of a second imaging element (61), and a lens group includes a lens provided to the imaging surface of the first imaging element and a lens included in a lens array provided to the imaging surface of the second imaging element.
In this case, a partition is provided to prevent each of the multiple imaging regions of the second imaging element from receiving light from the lenses other than the corresponding lens.
The processors of the compound-eye imaging devices described in the second to sixth embodiments may be dedicated processors or may be computer's CPUs that execute programs stored in memories.
As an example, a processing procedure in the case of causing a computer to execute image processing implemented by a compound-eye imaging device having the processor of
First, in step ST1, imaging is performed by, for example, the camera device 1 illustrated in
Next, in step ST2, resolution enhancement processing is performed on each of the multiple low-resolution images D1-1, D1-2, . . . by using the high-resolution image D0 as a reference image, so that high-resolution images D30-1, D30-2, . . . are generated. The resolution enhancement processing is the same as that described for the resolution enhancer of
Next, in step ST3, the multiple high-resolution images D30-1, D30-2, . . . are combined to generate one or more combined high-resolution images D40-a, D40-b, . . . The combination processing is performed as described for the combiner 40 of
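Put together, the steps can be expressed as a short driver routine; `enhance` and `combine` stand for the resolution-enhancement and combination functions sketched in the earlier embodiments, and the argument structure is illustrative.

```python
def process(d0, d1_list, enhance, combine):
    """Steps ST2-ST3 for images already acquired in step ST1 (imaging).
    d0: high-resolution image; d1_list: low-resolution images D1-1, D1-2, ..."""
    # Step ST2: enhance each low-resolution image with D0 as the reference.
    d30_list = [enhance(d1, d0) for d1 in d1_list]
    # Step ST3: combine the enhanced images into one or more combined
    # high-resolution outputs D40-a, D40-b, ...
    return combine(d30_list)
```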
As described above, with the present invention, it is possible to acquire, in multiple imaging regions, images having different types of information and having different resolutions, and obtain a high-resolution image from relatively low-resolution images of the acquired images.
Although compound-eye imaging devices have been described above in the second to sixth embodiments, image processing methods implemented by the compound-eye imaging devices also form part of the present invention. Further, programs for causing a computer to execute processing of the above compound-eye imaging devices or image processing methods, and computer-readable recording media storing such programs also form part of the present invention.
1 camera device, 10 compound-eye camera, 11 imaging element, 11a imaging surface, 12 lens array, 12a, 12b, . . . lens, 13 filter array, 13a, 13b, . . . optical filter, 14 partition, 15a high-resolution imaging region, 15b, 15c, . . . low-resolution imaging region, 17 imaging controller, 18, 19 A/D converter, 20, 20a to 20h processor, 22 image memory, 25 aligner, 30, 30a to 30c, 30-1 to 30-N resolution enhancer, 31r, 31b image enlarger, 32 resolution enhancer, 40, 42 combiner, 50 camera device, 60 compound-eye camera, 61 imaging element, 61a imaging surface, 62 lens array, 62a, 62b, . . . lens, 63 filter array, 63a, 63b, . . . optical filter, 64 partition, 65a, 65b, . . . low-resolution imaging region, 70 monocular camera, 71 imaging element, 71a imaging surface, 72 lens, 73 optical filter, 75 high-resolution imaging region, 100, 102 compound-eye imaging device, 311, 312 filter processor, 313 low-frequency component combiner, 314 high-frequency component combiner, 315 component combiner, 321 reduction processor, 322 coefficient calculator, 323 coefficient map enlarger, 324 linear converter, 411 luminance/color separator, 412 luminance separator, 413 weighting adder, 414 luminance/color combiner.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2017/023348 | 6/26/2017 | WO | 00 |