The present invention relates to an imaging device capable of measuring the distance to a subject.
An imaging device that applies pattern light to a subject and measures the distance to the subject is known. With this type of imaging device, pattern light having a specific pattern (intensity distribution) is applied to a subject, and an image of the subject is captured by an imager. The imaging device stores, in advance, a standard image in which the above pattern is distributed. A pixel block having the highest correlation with a target pixel block on the captured image is searched for on the standard image. The search range is set using, as a standard position, the position on the standard image that is the same as that of the target pixel block. The pixel deviation amount of the pixel block extracted by the search, with respect to the standard position, is detected as a parallax. The distance to the subject is calculated from this parallax by a triangulation method.
With such an imaging device, not only the distance to the subject, but also an image (luminance image) of the subject can be further obtained in order to perform image recognition of the subject. In this case, when the pattern light is applied as described above, the pattern of the pattern light can be seen in the image of the subject. This causes problems such as a decrease in the accuracy of image recognition.
A configuration for solving such problems is described, for example, in Japanese Laid-Open Patent Publication No. 2001-153612. In this configuration, an imaging element for acquiring an RGB image of a subject, an imaging element for receiving pattern light, and a separation optical system for separating RGB light and pattern light reflected from the subject are used. The RGB light and the pattern light separated by the separation optical system are received by the imaging element for RGB light and the imaging element for pattern light, respectively.
In the above imaging device, the reflectance and the light absorption rate of the subject can vary widely. Therefore, it is preferable that the range of light intensity (dynamic range) over which image capture and distance calculation can be performed is as wide as possible. In addition, in the case where this imaging device is mounted on a robot or the like that operates continuously, it is preferable that the device is kept, as much as possible, from becoming unable to calculate the distance to a subject during such continuous operation.
An imaging device according to a main aspect of the present invention includes an imager and a projector configured to project pattern light in a first wavelength band to a field-of-view range of the imager. Here, the imager includes: a first imaging element; a second imaging element; an imaging lens; a light-splitting element configured to guide light, from a subject, condensed by the imaging lens, to the first imaging element and the second imaging element; a first filter formed on a light-receiving surface of the first imaging element and having a plurality of first regions configured to transmit light in the first wavelength band and a plurality of second regions configured not to transmit light in the first wavelength band and to transmit light in a second wavelength band different from the first wavelength band; and a second filter formed on a light-receiving surface of the second imaging element and having a plurality of the first regions and a plurality of the second regions.
In the imaging device according to this aspect, since the light from the subject is guided to the first imaging element and the second imaging element, an image of the subject can be captured and the distance to the subject can be calculated based on captured images from these imaging elements. At this time, for example, by making an exposure time for the first imaging element and an exposure time for the second imaging element different from each other, the image capture and distance calculation can be performed in the range of light intensity corresponding to each exposure time. Accordingly, the range (dynamic range) over which image capture and distance calculation can be performed can be expanded. In addition, even if a defect such as a failure occurs in any one of the imaging elements, the distance to the subject can be calculated using the captured image from the other imaging element. Therefore, the calculation of the distance to the subject can be prevented from becoming impossible.
The pattern light is removed by the second regions of the first filter and the second regions of the second filter, so that an image of the subject in which the pattern light is not seen can be generated, for example, using pixel signals from pixels in a range where the light passing through the second regions is received. Accordingly, the calculation of the distance to the subject and the generation of an image of the subject can be realized simultaneously.
The effects and the significance of the present invention will be further clarified by the description of the embodiment below. However, the embodiment below is merely an example for implementing the present invention. The present invention is not limited to the description of the embodiment below in any way.
It is noted that the drawings are solely for description and do not limit the scope of the present invention in any way.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings. For convenience, in each drawing, X, Y, and Z axes that are orthogonal to each other are additionally shown. The X-axis direction is an alignment direction of a projector and an imager, and the Z-axis positive direction is the imaging direction of the imager.
The imaging device 1 includes a projector 10, an imager 20, and an image processor 30.
The projector 10 projects pattern light distributed in a predetermined pattern, to a field-of-view range of the imager 20. The direction in which the pattern light is projected by the projector 10 is the Z-axis positive direction. The projector 10 includes a light source 11, a collimator lens 12, a pattern generator 13, and a projection lens 14.
The light source 11 emits laser light having a predetermined wavelength. The light source 11 is, for example, a semiconductor laser. The emission wavelength of the light source 11 is, for example, included in the infrared wavelength band. The light source 11 may be another type of light source such as an LED (Light Emitting Diode). The collimator lens 12 converts the light emitted from the light source 11, into substantially collimated light.
The pattern generator 13 generates pattern light from the light emitted from the light source 11. In the present embodiment, the pattern generator 13 is a transmission-type diffractive optical element (DOE). The diffractive optical element (DOE), for example, has a diffraction pattern with a predetermined number of steps on an incident surface thereof. Due to the diffraction action by the diffraction pattern, the laser light incident on the diffractive optical element (pattern generator 13) is divided into a plurality of lights to be converted into a predetermined pattern of light. The generated pattern is a pattern that can maintain uniqueness for each pixel block 102 described later.
In the present embodiment, the pattern generated by the diffractive optical element (DOE) is a pattern in which a plurality of dot regions (hereinafter referred to as “dots”), which are regions through which light passes, are randomly distributed. However, the pattern generated by the diffractive optical element (DOE) is not limited to a dot pattern, and may be another pattern. The pattern generator 13 may be a reflection-type diffractive optical element, or may be a photomask. Alternatively, the pattern generator 13 may be a device that generates a fixed pattern of pattern light in accordance with a control signal, such as a DMD (Digital Micromirror Device) or a liquid crystal display.
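The following is a rough conceptual sketch, in Python, of the kind of random dot pattern described above. The resolution and dot density are hypothetical values chosen only for illustration; the actual pattern is determined by the design of the diffractive optical element and is not computed at run time.

```python
import numpy as np

# Conceptual sketch only: a random dot mask of the kind the DOE projects.
# The real pattern is fixed by the diffractive optical element and is not
# computed at run time; resolution and density here are illustrative.
rng = np.random.default_rng(seed=0)
height, width = 480, 640        # hypothetical pattern resolution
dot_density = 0.1               # hypothetical fraction of lit dot regions
pattern = (rng.random((height, width)) < dot_density).astype(np.uint8)
```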
The projection lens 14 projects the pattern light generated by the pattern generator 13. The projection lens 14 need not be a single lens, and may be a combination of multiple lenses. A concave reflecting mirror may be used instead of the projection lens 14.
The imager 20 includes a first imaging element 21, a second imaging element 22, a light-splitting element 23, and an imaging lens 24.
The first imaging element 21 and the second imaging element 22 are CMOS image sensors. The first imaging element 21 and the second imaging element 22 may be CCDs. As described later, a first filter 41 and a second filter 42 are formed on the light-receiving surfaces of the first imaging element 21 and the second imaging element 22, respectively.
The light-splitting element 23 splits and guides light, from a subject, condensed by the imaging lens 24, to the first imaging element 21 and the second imaging element 22. The light-splitting element 23 is, for example, a half mirror. The light-splitting element 23 may be another optical element such as a diffractive optical element. The imaging lens 24 condenses light from the field-of-view range, onto the imaging surfaces of the first imaging element 21 and the second imaging element 22. The optical axis of the imaging lens 24 is parallel to the Z-axis.
The image processor 30 includes a signal processor 31, a light source driver 32, a first imaging processor 33, a second imaging processor 34, and a communication interface 35.
The signal processor 31 includes an arithmetic processing circuit such as a microcomputer, and a memory, and controls each part according to a predetermined program. In addition, the signal processor 31 processes pixel signals inputted from the first imaging processor 33 and the second imaging processor 34 to calculate the distance to the subject and generate an image of the subject.
The signal processor 31 stores therein a standard image in which dots are distributed in the same pattern as in the pattern light projected from the projector 10. The signal processor 31 performs a process of comparing this standard image with images captured by the first imaging processor 33 and the second imaging processor 34, respectively, performs a stereo correspondence point search, and calculates the distance to the subject projected on each pixel block on each captured image.
That is, the signal processor 31 sets a pixel block to be the target for distance acquisition (hereinafter referred to as “target pixel block”) on the captured image, and searches for a pixel block corresponding to the target pixel block, that is, a pixel block that best matches the target pixel block (hereinafter referred to as “matching pixel block”), in a search range defined on the standard image. Then, the signal processor 31 performs a process of acquiring a pixel deviation amount between a pixel block at the same position as the target pixel block on the standard image (hereinafter referred to as “standard pixel block”) and the matching pixel block extracted from the standard image by the above search, and calculating the distance to the subject at the position of the target pixel block from the acquired pixel deviation amount by a calculation based on a triangulation method.
The signal processor 31 generates images including no pattern-light dots, from the images captured via the first imaging processor 33 and the second imaging processor 34, respectively. A process for distance calculation and image generation for the subject will be described later with reference to
The signal processor 31 performs, in parallel, a first process of performing distance calculation and image generation on a captured image (pixel signal) from the first imaging element 21, and a second process of performing distance calculation and image generation on a captured image (pixel signal) from the second imaging element 22. The signal processor 31 transmits one of the distances and one of the images acquired by the first process and the second process, respectively, to an external device via the communication interface 35.
The signal processor 31 may execute the first process and the second process by a semiconductor integrated circuit composed of two FPGAs (Field Programmable Gate Arrays). Alternatively, the first process and the second process may be executed by another semiconductor integrated circuit such as a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), and an ASIC (Application Specific Integrated Circuit).
The light source driver 32 drives the light source 11 under control from the signal processor 31. The first imaging processor 33 controls the first imaging element 21 and performs processing such as luminance correction and camera calibration on the pixel signal of the captured image outputted from the first imaging element 21. The second imaging processor 34 controls the second imaging element 22 and performs processing such as luminance correction and camera calibration on the pixel signal of the captured image outputted from the second imaging element 22.
As shown in
As shown in
In
As shown in
As shown in
As shown in
In the present embodiment, the arrangement pattern of the first regions 411 and the second regions 412 in the first filter 41 and the arrangement pattern of the first regions 421 and the second regions 422 in the second filter 42 are the same as each other.
When the pattern light is projected from the projector 10, reflected light of the pattern light reflected from the subject is received by the imaging lens 24 and condensed onto the first imaging element 21 and the second imaging element 22. Accordingly, as shown in
As shown in
As shown in
First, as shown in
The signal processor 31 sets the start position ST1 as a search position. The signal processor 31 sets a reference pixel block RB1 having the same size as the target pixel block TB1, at this search position, and calculates a correlation value between the target pixel block TB1 and the reference pixel block RB1.
Here, the correlation value is acquired, for example, as a value (SAD) obtained by calculating the differences between the pixel values (luminance) for the mutually corresponding pixel regions in the target pixel block TB1 and the reference pixel block RB1 and adding up all the absolute values of the respective calculated differences. Alternatively, the correlation value may be acquired as a value (SSD) obtained by adding up all the squared values of the above differences. However, the calculation method for the correlation value is not limited to these methods, and other calculation methods may be used as long as a correlation value that serves as an index of the correlation between the target pixel block TB1 and the reference pixel block RB1 is acquired.
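The following is a minimal sketch, in Python, of these two correlation measures. The function names and the assumption that pixel blocks are handled as NumPy arrays are for illustration only and do not represent the actual implementation of the signal processor 31.

```python
import numpy as np

def sad(target_block: np.ndarray, reference_block: np.ndarray) -> float:
    """Sum of absolute differences between corresponding pixel values (luminance)."""
    diff = target_block.astype(np.int32) - reference_block.astype(np.int32)
    return float(np.abs(diff).sum())

def ssd(target_block: np.ndarray, reference_block: np.ndarray) -> float:
    """Sum of squared differences; penalizes large deviations more strongly."""
    diff = target_block.astype(np.int32) - reference_block.astype(np.int32)
    return float((diff * diff).sum())
```

With both measures, a smaller value indicates a higher correlation, so the search described below keeps the search position at which the value is minimum.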
Then, when the process for one reference pixel block RB1 is completed, the signal processor 31 sets the next search position as shown in
The signal processor 31 repeats the same process while shifting the search position by one pixel at a time toward the end of the search range.
Then, when the process for the final search position is completed, the signal processor 31 acquires the search position at which the correlation value is minimum, among the sequentially set search positions. Then, the signal processor 31 extracts the reference pixel block RB1 at the search position at which the minimum correlation value has been acquired, as the pixel block (matching pixel block) corresponding to the target pixel block TB1. Furthermore, the signal processor 31 acquires a pixel deviation amount of the matching pixel block with respect to the start position ST1. Then, the signal processor 31 calculates the distance to the subject by a triangulation method from the acquired pixel deviation amount and the separation distance between the projector 10 and the imager 20.
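As a minimal sketch of this search and of the subsequent distance calculation, under the assumptions that the search proceeds along one row of the standard image, that the focal length is expressed in pixels, and that the separation distance between the projector 10 and the imager 20 serves as the baseline; parameter names and the simplified bounds handling are illustrative only, and the actual calculation in the device may differ in its details.

```python
import numpy as np

def match_and_range(target_block, standard_image, row, start_col, search_width,
                    focal_length_px, baseline_m):
    """Slide a reference block along one row of the standard image, keep the
    position with the minimum SAD, and convert the resulting pixel deviation
    (disparity) to a distance by the standard triangulation relation.
    Parameter names and the row-wise search are illustrative assumptions."""
    h, w = target_block.shape
    best_cost, best_col = np.inf, start_col
    for col in range(start_col, start_col + search_width):
        if col + w > standard_image.shape[1]:
            break                                   # bounds handling kept minimal
        reference_block = standard_image[row:row + h, col:col + w]
        cost = np.abs(target_block.astype(np.int32)
                      - reference_block.astype(np.int32)).sum()
        if cost < best_cost:
            best_cost, best_col = cost, col
    disparity_px = best_col - start_col             # pixel deviation from the start position
    if disparity_px == 0:
        return best_cost, None                      # no deviation found; no distance
    distance_m = focal_length_px * baseline_m / disparity_px   # triangulation
    return best_cost, distance_m
```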
The signal processor 31 temporarily stores the calculated distance. The signal processor 31 repeats the same process for all target pixel blocks TB1 on the captured image. Thus, by the first process, the distance to the subject is temporarily stored for all the target pixel blocks TB1.
A search process of the second process for the captured image acquired by the second imaging element 22 is also the same as above. The signal processor 31 acquires the distance to the subject for all target pixel blocks TB1 by the second process, and temporarily stores the acquired distance.
In the present embodiment, as shown in
As shown in
On the other hand, as shown in
To solve such a problem, the signal processor 31 sets a predetermined threshold value Th0 for the correlation value, and, if the minimum correlation value acquired by the search process is not equal to or less than the threshold value Th0, the signal processor 31 determines that a distance cannot be calculated for this target pixel block TB1 and performs an interpolation process for this target pixel block TB1 instead. In this case, the signal processor 31 interpolates the distance for the target pixel block TB1 for which a distance could not be calculated, using the distances obtained for the pixel blocks surrounding the target pixel block TB1. As the interpolation method, a well-known conventional method, such as a method using the average of the distances for the surrounding pixel blocks, can be used.
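The following is a minimal sketch of this interpolation, assuming that blocks whose minimum correlation value exceeded Th0 are marked as NaN in a block-wise distance map; this representation, the 3x3 neighborhood, and the edge handling are assumptions for illustration.

```python
import numpy as np

def fill_missing_distances(distance_map: np.ndarray) -> np.ndarray:
    """Replace NaN entries (blocks whose minimum correlation value exceeded Th0)
    with the average of the valid distances of the surrounding blocks.
    Edge handling and the 3x3 neighborhood are simplifications."""
    filled = distance_map.copy()
    rows, cols = distance_map.shape
    for r in range(rows):
        for c in range(cols):
            if np.isnan(distance_map[r, c]):
                neighborhood = distance_map[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                valid = neighborhood[~np.isnan(neighborhood)]
                if valid.size > 0:
                    filled[r, c] = valid.mean()
    return filled
```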
In the flowchart in
In step S101, the signal processor 31 causes the first imaging element 21 to operate with a first exposure time to acquire a first captured image. In step S102, the signal processor 31 performs luminance correction based on the exposure time for each pixel signal of the first captured image. More specifically, the luminance correction is performed for each pixel signal of the first captured image such that the luminance level becomes a level achieved when a standard exposure time is set for the first imaging element 21.
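A minimal sketch of this exposure-based luminance correction is shown below; a linear sensor response and 8-bit pixel signals are assumed for illustration.

```python
import numpy as np

def normalize_luminance(image: np.ndarray, exposure_time_s: float,
                        standard_exposure_s: float) -> np.ndarray:
    """Scale each pixel signal to the level it would have at the standard
    exposure time (linear sensor response and 8-bit signals assumed)."""
    scaled = image.astype(np.float32) * (standard_exposure_s / exposure_time_s)
    return np.clip(scaled, 0, 255).astype(np.uint8)
```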
In step S103, the signal processor 31 extracts, from the first captured image after the luminance correction, regions on which natural light (visible light) is incident, that is, the regions corresponding to the second regions 412 of the first filter 41. In step S104, the signal processor 31 interpolates pixel signals for the regions between the extracted natural light regions with pixel signals for the regions surrounding these regions, and generates an image (luminance image) of the subject for one screen.
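As a minimal sketch of steps S103 and S104, it is assumed here that the positions of the second regions 412 are available as a boolean mask derived from the filter layout, and a simple nearest-valid-pixel fill along each row stands in for the interpolation actually performed in the embodiment.

```python
import numpy as np

def build_luminance_image(captured: np.ndarray, natural_light_mask: np.ndarray) -> np.ndarray:
    """Keep the pixels behind the second (visible-light) regions of the filter and
    fill every other pixel from the nearest natural-light pixel in the same row.
    The boolean mask is assumed to follow the filter layout; the real process may
    interpolate over a two-dimensional neighborhood instead."""
    out = captured.copy()
    rows, _ = captured.shape
    for r in range(rows):
        valid_cols = np.flatnonzero(natural_light_mask[r])
        if valid_cols.size == 0:
            continue                                 # no natural-light pixel in this row
        for c in np.flatnonzero(~natural_light_mask[r]):
            nearest = valid_cols[np.argmin(np.abs(valid_cols - c))]
            out[r, c] = captured[r, nearest]
    return out
```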
In step S105, the signal processor 31 calculates a distance for each pixel block 212 on the first captured image by the above-described search process from the first captured image. In step S106, the signal processor 31 interpolates distances for the pixel blocks 212 for which a distance could not be acquired by the search process, with the distances for the pixel blocks surrounding these pixel blocks 212, and acquires a distance image in which distances are associated with all pixel blocks.
In step S107, the signal processor 31 causes the second imaging element 22 to operate with a second exposure time different from the first exposure time, to acquire a second captured image. For example, the second exposure time is set shorter than the first exposure time. Steps S108 to S112 are the same as steps S102 to S106 in the content of the process itself, and differ only in that the process is performed on the second captured image instead of the first captured image. By steps S108 to S112, the signal processor 31 acquires, for the second captured image, an image (luminance image) of the subject and a distance image in which distances are associated with all pixel blocks.
In step S113, the signal processor 31 selects one of the images (luminance images) of the subject acquired for the first captured image and the second captured image, respectively. In step S114, the signal processor 31 selects one of the distance images acquired for the first captured image and the second captured image, respectively.
For example, when there is no defect such as a failure in both the first imaging element 21 and the second imaging element 22, the signal processor 31 selects the image (luminance image) of the subject and the distance image acquired for the first captured image (captured image for which the exposure time is longer) in steps S113 and S114. When a defect such as a failure is detected in one of the first imaging element 21 and the second imaging element 22, the signal processor 31 selects the image (luminance image) of the subject and the distance image acquired for the captured image from the imaging element in which there is no defect, in steps S113 and S114.
For example, during initial operation before executing the process in
For example, when the pixel signals of the pixels, of the first captured image and the second captured image, corresponding to each other are substantially the same for all pixels, it is determined that there is no defect such as a failure in the first imaging element 21 and the second imaging element 22. On the other hand, when one of the pixel signals of pixels corresponding to each other tends to be lower than the other, it is determined that there is a defect such as a failure in the imaging element from which the one of the pixel signals is acquired.
However, the method of determining a failure is not limited thereto. For example, the average value of the pixel signals may be acquired from each of the first captured image and the second captured image acquired during initial operation, and the acquired average values may be compared to determine the presence or absence of a failure in the first imaging element 21 and the second imaging element 22. In this case, when these average values are substantially the same, it is determined that there is no failure in the first imaging element 21 and the second imaging element 22, and when one of these average values is lower than the other, it is determined that there is a defect such as a failure in the imaging element from which the lower average value is acquired.
Alternatively, from the first captured image and the second captured image for several frames acquired during initial operation, the variation of the pixel values of the first captured image and the variation of the pixel values of the second captured image may be monitored, and when the variation of the pixel values of either of the first imaging element 21 and the second imaging element 22 exceeds a threshold range, it may be determined that there is a defect in that imaging element.
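The sketch below combines, for illustration, the average-level comparison and the variation check described above. The tolerance and variation limit are hypothetical values, and it is assumed that the frames being compared were acquired under equivalent exposure conditions (or after luminance correction).

```python
import numpy as np

def detect_defective_sensor(first_frames, second_frames,
                            level_tolerance=0.2, variation_limit=20.0):
    """Return 'first' or 'second' if that imaging element appears defective,
    otherwise None. Compares the mean signal levels of the two sensors and the
    frame-to-frame variation of each; both thresholds are hypothetical."""
    first = np.stack(first_frames).astype(np.float32)     # shape: (frames, H, W)
    second = np.stack(second_frames).astype(np.float32)

    mean_first, mean_second = first.mean(), second.mean()
    if mean_first < mean_second * (1.0 - level_tolerance):
        return "first"                                     # first element reads systematically low
    if mean_second < mean_first * (1.0 - level_tolerance):
        return "second"

    # Frame-to-frame variation of each pixel, compared against a threshold range.
    if first.std(axis=0).max() > variation_limit:
        return "first"
    if second.std(axis=0).max() > variation_limit:
        return "second"
    return None
```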
When there is no defect such as a failure in both the first imaging element 21 and the second imaging element 22, one of the luminance images and one of the distance images acquired for the first captured image and the second captured image, respectively, may be selected based on the reflectance or the light absorption rate of the subject. For example, when the reflectance for infrared light is high, the signal processor 31 may select the distance image acquired from the second captured image for which the exposure time is shorter, in step S114.
Alternatively, representative values (e.g., average values, median values, etc.) of pixel signals may be obtained from a predetermined region in the first captured image and a predetermined region in the second captured image, and the luminance image and the distance image acquired from the captured image whose representative value is closer to a preset value, may be selected in step S114.
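A minimal sketch of the selection in steps S113 and S114 under the criteria described above is shown below; the result objects, the flags, and the preset level are assumptions for illustration and do not represent the actual interface of the signal processor 31.

```python
import numpy as np

def select_result(first_result, second_result, defective=None, prefer_short_exposure=False):
    """Prefer the long-exposure (first) result unless that imaging element is
    defective or the subject's reflectance makes the short-exposure result
    preferable. Result objects and flags are assumptions for illustration."""
    if defective == "first":
        return second_result
    if defective == "second":
        return first_result
    return second_result if prefer_short_exposure else first_result

def closer_to_preset(first_image: np.ndarray, second_image: np.ndarray,
                     preset_level: float = 128.0) -> str:
    """Alternative criterion: pick the capture whose representative value (here
    the mean of the whole image, standing in for a predetermined region) is
    closer to a preset level."""
    d1 = abs(float(first_image.mean()) - preset_level)
    d2 = abs(float(second_image.mean()) - preset_level)
    return "first" if d1 <= d2 else "second"
```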
In step S115, the signal processor 31 transmits the image (luminance image) of the subject and the distance image selected in steps S113 and S114 to the external device via the communication interface 35. Then, the signal processor 31 ends the process in
According to the above embodiment, the following effects are achieved.
As shown in
As described with reference to
As shown in
As shown in
As shown in
As shown in
As described above, the wavelength band transmitted through the first regions 411 and 421 (first wavelength band) is a wavelength band included in the infrared band (non-visible band), and a second wavelength band transmitted through the second regions 412 and 422 is a wavelength band included in the visible band. Accordingly, a luminance image of the subject can be acquired by natural light (visible light) from the subject while the distance to the subject is acquired by applying infrared (non-visible) pattern light to the subject.
In steps S113 and S114 in
In the above embodiment, as shown in
As shown in
Even with the configuration in
In the configuration of Modification 1, as shown in
Similarly, for the distance image, a distance image for one frame can be generated by superimposing a distance image containing only the distances acquired for the pixel blocks corresponding to the first regions 411 of the first captured image and a distance image containing only the distances acquired for the pixel blocks corresponding to the first regions 421 of the second captured image. That is, in the configuration of Modification 1, a distance image for one frame can be acquired by superimposing such distance images, without performing an interpolation process.
In the flowchart in
In step S121, the signal processor 31 generates a luminance image for one frame by combining a luminance image including only the natural light regions extracted in step S103 and a luminance image including only the natural light regions extracted in step S109. More specifically, the signal processor 31 generates a luminance image for one frame by embedding the pixel signals (luminance) of the respective pixels in the regions extracted in step S109 into the regions not extracted in step S103.
In step S122, the signal processor 31 generates a distance image for one frame by combining a distance image including only distances corresponding to the pixel blocks for which a distance is calculated in step S105 and a distance image including only distances corresponding to the pixel blocks for which a distance is calculated in step S111. More specifically, the signal processor 31 generates a distance image for one frame by embedding the distances at the positions of the pixel blocks for which a distance is calculated in step S111, that is, the positions corresponding to the first regions 421 of the second filter 42, into the positions of the pixel blocks for which a distance is not calculated in step S105, that is, the positions corresponding to the second regions 412 of the first filter 41.
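Because the arrangement patterns of the two filters in Modification 1 are complementary, the combination in steps S121 and S122 can be sketched as a simple embedding. It is assumed here that the regions valid in one captured image are exactly the regions missing in the other, and that this layout is available as a boolean mask; both are assumptions for illustration.

```python
import numpy as np

def combine_complementary(first_image: np.ndarray, second_image: np.ndarray,
                          first_valid_mask: np.ndarray) -> np.ndarray:
    """Embed the pixels (or block-wise distances) that are valid only in the
    second image into the positions that are missing in the first image.
    Assumes the two filter layouts are exactly complementary and that the
    layout is available as a boolean mask."""
    return np.where(first_valid_mask, first_image, second_image)
```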
In step S115, the signal processor 31 transmits the luminance image generated in step S121 and the distance image generated in step S122 to the external device via the communication interface 35.
With the configuration of Modification 1, by the process in
If a distance could not be calculated for some pixel blocks in the distance image synthesized in step S122 in
In the above embodiment, the pattern light in one type of wavelength band is projected from the projector 10, but pattern lights in a plurality of types of wavelength bands may be projected from the projector 10 and applied to a subject.
The projector 10 includes a light source 15, a collimator lens 16, and a combining element 17 in addition to the configuration shown in
The light source 15 emits light in a wavelength band different from that of the light source 11. The light source 15 is, for example, a semiconductor laser. The light source 15 may be another type of light source such as an LED (Light Emitting Diode). The emission wavelength of the light source 15 is, for example, included in the infrared wavelength band. The emission wavelength of the light source 15 may be included in the visible wavelength band. The collimator lens 16 converts the light emitted from the light source 15, into substantially collimated light.
The combining element 17 aligns the optical axis of the light source 11 and the optical axis of the light source 15 with each other. The combining element 17 transmits the light from the light source 11 and reflects the light from the light source 15. The combining element 17 is, for example, a dichroic mirror. When the light sources 11 and 15 are semiconductor lasers, the combining element 17 may be a polarizing beam splitter. In this case, the light source 11 is placed such that a polarization direction is a P-polarization direction with respect to the combining element 17, and the light source 15 is placed such that a polarization direction is an S-polarization direction with respect to the combining element 17.
The pattern generator 13 generates pattern lights for both the light emitted from the light source 11 and the light emitted from the light source 15, respectively. As in the above embodiment, the pattern generator 13 can be composed of a transmission-type diffractive optical element (DOE). The pattern light in each wavelength band is projected onto a subject by the projection lens 14.
The imager 20 includes a third imaging element 25 and a light-splitting element 26 in addition to the configuration shown in
The first regions 431 of the third filter 43 transmit light in the emission wavelength band of the light source 15 and do not transmit light in other wavelength bands. The second regions 432 of the third filter 43 transmit light in the visible light wavelength band and do not transmit light in other wavelength bands. Therefore, the second regions 432 do not transmit light in the emission wavelength bands of the light sources 11 and 15.
As described above, the first regions 411 and 421 of the first filter 41 and the second filter 42 transmit light in the emission wavelength band of the light source 11 and do not transmit light in other wavelength bands; thus, they transmit neither light in the visible wavelength band nor light in the emission wavelength band of the light source 15. In addition, as described above, the second regions 412 and 422 of the first filter 41 and the second filter 42 transmit light in the visible wavelength band and do not transmit light in other wavelength bands; thus, they transmit neither light in the emission wavelength band of the light source 11 nor light in the emission wavelength band of the light source 15.
The light-splitting element 26 splits the light incident from the imaging lens 24 side and guides it to the third imaging element 25 and the light-splitting element 23. The light-splitting element 26 is, for example, a half mirror. The light-splitting element 26 may be a diffractive optical element.
In the configuration in
In the configuration in
That is, the signal processor 31 exposes the third imaging element 25 for a predetermined time to acquire the third captured image. The exposure time for the third imaging element 25 may be the same as either one of the first exposure time for the first imaging element 21 and the second exposure time for the second imaging element 22, or may be different from the first exposure time and the second exposure time.
The signal processor 31 executes the same process as in steps S102 to S106 in
With this configuration, in addition to the first captured image and the second captured image, a luminance image of the subject and a distance image can also be acquired from the third captured image. Therefore, even if a defect such as a failure occurs in both the first imaging element 21 and the second imaging element 22, the luminance image and the distance image acquired from the third captured image can be provided to an external device.
Since the wavelength band transmitted through the first regions 431 of the third filter 43 differs from the wavelength band transmitted through the first regions 411 and 421 of the first filter 41 and the second filter 42, when the reflectance of the subject differs between these wavelength bands, the intensity of the pattern light received by the first imaging element 21 and the second imaging element 22 differs from the intensity of the pattern light received by the third imaging element 25. In this case, the distance image acquired for the imaging element that receives the pattern light at the higher intensity may be selected and transmitted to the external device. Accordingly, a distance image of higher quality can be provided to the external device.
As shown in
In the configuration in
On the imaging surfaces of the first imaging element 21 and the second imaging element 22, a first filter 44 and a second filter 45 are formed instead of the first filter 41 and the second filter 42 described in the above embodiment. A plurality of first regions 441 and a plurality of second regions 442 are arranged in the first filter 44, and a plurality of first regions 451 and a plurality of second regions 452 are arranged in the second filter 45.
The first regions 441 and 451 transmit light in the emission wavelength band of the light source 11 and do not transmit light in other wavelength bands. The second regions 442 and 452 transmit light in the emission wavelength band of the light source 15 and do not transmit light in other wavelength bands. The arrangement pattern of the plurality of first regions 441 and the plurality of second regions 442 in the first filter 44 may be the same as in
On the imaging surface of the third imaging element 25, a third filter 46 is formed instead of the third filter 43 in
The pattern light generated from the light from the light source 11 is incident on the regions corresponding to the first regions 441 of the first imaging element 21, and the pattern light generated from the light from the light source 15 is incident on the regions corresponding to the second regions 442 of the first imaging element 21. Therefore, the signal processor 31 can generate a distance image based on the two types of lights from a first captured image of the first imaging element 21. Similarly, the signal processor 31 can generate a distance image based on the two types of lights from a second captured image of the second imaging element 22.
Steps S101, S102, S107, S108, and S115 in
The signal processor 31 calculates a distance by a stereo correspondence point search with respect to the regions corresponding to the first regions 441 for the first captured image acquired by the first imaging element 21 (S131). The signal processor 31 performs interpolation based on the distance calculated in step S131 for the regions corresponding to the second regions 442, and generates a distance image for one frame based on the light from the light source 11 (S132).
Furthermore, the signal processor 31 calculates a distance by a stereo correspondence point search with respect to the regions corresponding to the second regions 442 for the first captured image (S133). The signal processor 31 performs interpolation based on the distance calculated in step S133 for the regions corresponding to the first regions 441, and generates a distance image for one frame based on the light from the light source 15 (S134).
Moreover, the signal processor 31 calculates a distance by a stereo correspondence point search with respect to the regions corresponding to the first regions 451 for the second captured image acquired by the second imaging element 22 (S135). The signal processor 31 performs interpolation based on the distance calculated in step S135 for the regions corresponding to the second regions 452, and generates a distance image for one frame based on the light from the light source 11 (S136).
Furthermore, the signal processor 31 calculates a distance by a stereo correspondence point search with respect to the regions corresponding to the second regions 452 for the second captured image (S137). The signal processor 31 performs interpolation based on the distance calculated in step S137 for the regions corresponding to the first regions 451, and generates a distance image for one frame based on the light from the light source 15 (S138).
Due to the above process, the two types of distance images based on the two types of lights from the light sources 11 and 15 are acquired with the first exposure time, and the two types of distance images based on the two types of lights from the light sources 11 and 15 are acquired with the second exposure time. As in the above embodiment, the signal processor 31 selects one of these distance images based on the presence or absence of a failure or the like in the first imaging element 21 and the second imaging element 22 (S139). At this time, the signal processor 31 may further select the distance image acquired using light in a wavelength band for which the reflectance of a subject is higher.
The signal processor 31 transmits the distance image thus selected, together with the luminance image acquired from a third captured image of the third imaging element 25, to an external device (S115). Then, the signal processor 31 ends the process in
With this configuration, as in the above embodiment, since the exposure times for the first imaging element 21 and the second imaging element 22 are different from each other, the dynamic range of distance detection based on the light from the light source 11 can be expanded, and the dynamic range of distance detection based on the light from the light source 15 can be expanded.
Also in this configuration, even if a defect such as a failure occurs in either one of the first imaging element 21 and the second imaging element 22, a distance image can be acquired using the captured image from the other imaging element. Furthermore, since distance images can be acquired using two types of lights corresponding to the emission wavelength bands of the light sources 11 and 15, a highly accurate distance image can be provided to the external device by selecting the distance image acquired using the light in the wavelength band for which the reflectance of the subject is higher.
In the above embodiment, the sizes of the first regions 411 and the second regions 412 of the first filter 41 and the sizes of the first regions 421 and the second regions 422 of the second filter 42 are the same as the sizes of the pixel blocks 212 and 222, but the sizes of the first regions 411 and 421 and the second regions 412 and 422 do not necessarily have to be the same as those of the pixel blocks 212 and 222.
For example, as shown in
As shown in
However, when the sizes of the first regions 411 and 421 and the second regions 412 and 422 are integer multiples of the sizes of the pixel blocks 212 and 222, as shown in
Meanwhile, when the sizes of the first regions 411 and the second regions 412 are smaller than the size of the pixel block 212 as shown in
In the above embodiment, as shown in
When the arrangement patterns of the pixels of the first imaging element 21 and the second imaging element 22 are adjusted as described above, a luminance image having a resolution higher than that of the pixel regions 211 and 221 can be acquired as shown in
Similarly, a distance image having a resolution higher than that of the pixel regions 211 and 221 can be acquired by superimposing the distance image acquired by the first imaging element 21 and the distance image acquired by the second imaging element 22 and performing an interpolation process.
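A rough sketch of how the half-pixel shift can raise the sampling density is shown below. It assumes two captures of equal size in which the second is offset by half a pixel in both directions; the simple nearest-neighbor fill used here merely stands in for the interpolation process mentioned above.

```python
import numpy as np

def interleave_half_pixel(image_a: np.ndarray, image_b: np.ndarray) -> np.ndarray:
    """Place two equal-sized captures on a grid with twice the sampling density,
    treating image_b as shifted by half a pixel in both directions, and fill the
    remaining grid points by simple nearest-neighbor copies. A real implementation
    would use a proper interpolation process."""
    h, w = image_a.shape
    dense = np.zeros((2 * h, 2 * w), dtype=np.float32)
    dense[0::2, 0::2] = image_a        # original sampling positions
    dense[1::2, 1::2] = image_b        # positions offset by half a pixel
    dense[0::2, 1::2] = image_a        # nearest-neighbor fill
    dense[1::2, 0::2] = image_b
    return dense
```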
The shift of the center Co2 of the imaging surface of the second imaging element 22 with respect to the optical axis center Co0 of the imaging lens 24 is not limited to a shift by half a pixel in the right-left direction and the up-down direction. For example, a shift may occur only in the up-down direction, or a shift may occur only in the right-left direction.
The magnitude of the shift is not limited to half a pixel, and may be an integer number of pixels. For example, when filters 41 and 42 having the same configuration as shown in
Furthermore, the shift may be based on the difference between the center Co1 of the imaging surface of the first imaging element 21 and the center Co2 of the imaging surface of the second imaging element 22, instead of being based on the difference from the optical axis center Co0 of the imaging lens 24.
In the above embodiment, the light received by the imaging lens 24 is split by the light-splitting element 23, and the two split lights are guided to the first imaging element 21 and the second imaging element 22, respectively, but the number of lights into which the light received by the imaging lens 24 is split and the number of imaging elements that receive the split lights are not limited thereto.
For example, as shown in
With this configuration, by applying the same process as that for the first captured image and the second captured image captured by the first imaging element 21 and the second imaging element 22 to the third captured image captured by the third imaging element 25, a luminance image and a distance image based on the third captured image are generated. Therefore, even if a failure or the like occurs in both the first imaging element 21 and the second imaging element 22, the luminance image and the distance image based on the third captured image can be provided to the external device.
By making a third exposure time for the third imaging element 25 different from the first exposure time and the second exposure time for the first captured image and the second captured image, the range (dynamic range) where a luminance image and a distance image can be acquired can be further expanded.
In the process in
The emission wavelength of the light source 11 does not necessarily have to be in the infrared wavelength band, and may be, for example, in the visible wavelength band. In this case, the first regions 411 transmit light in the emission wavelength band of the light source 11 within the visible wavelength band and do not transmit light in other wavelength bands, and the second regions 412 do not transmit light in the emission wavelength band of the light source 11 within the visible wavelength band and transmit light in other wavelength bands. In addition, the wavelength band transmitted through the second regions 412 does not necessarily have to be in the visible wavelength band, and may be in the infrared wavelength band.
The arrangement pattern of the first regions 411 and the second regions 412 in the first filter 41 and the arrangement pattern of the first regions 421 and the second regions 422 in the second filter 42 are not limited to the arrangement patterns shown in the above embodiment and Modification 1. For example, the sizes of the first regions 411 and the second regions 412 may be different from each other, and the interval between adjacent first regions 411 and the interval between adjacent second regions 412 may be different from each other.
In Modifications 2 and 3 above, two types of pattern lights having different wavelength bands are projected from the one projector 10 to a subject, but two types of pattern lights having different wavelength bands may be projected from two projectors, respectively, to a subject.
In the above embodiment, the search range for the stereo correspondence point search is in the row direction, but the search range may be in the column direction, or the search range may be in a combination of the row and column directions.
The first imaging processor 33 and the second imaging processor 34 may correct image distortion caused by distortion of the imaging lens 24, etc., for the first captured image and the second captured image, or such distortion may be corrected for the standard image stored in advance in the signal processor 31.
In the above embodiment, the correlation value is obtained for each pixel block, but correlation values at positions between the pixel blocks may be further acquired by a process such as parabolic fitting, and the correspondence point search may be performed using these values.
The configuration of the imaging device 1 is not limited to the configuration shown in the above embodiment, and, for example, a photosensor array having a plurality of photosensors arranged in a matrix may be used as each of the first imaging element 21 and the second imaging element 22.
In addition to the above, various modifications can be made as appropriate to the embodiment of the present invention, without departing from the scope of the technological idea defined by the claims.
This application is a continuation of International Application No. PCT/JP2023/003977 filed on Feb. 7, 2023, entitled “IMAGING DEVICE”, which claims priority under 35 U.S.C. Section 119 of Japanese Patent Application No. 2022-027790 filed on Feb. 25, 2022, entitled “IMAGING DEVICE”. The disclosures of the above applications are incorporated herein by reference.
Related application data: Parent — International Application No. PCT/JP2023/003977, filed February 2023 (WO); Child — U.S. Application No. 18813179.