This patent application is based on and claims priority pursuant to 35 U.S.C. § 119(a) to Japanese Patent Application Nos. 2022-164992, filed on Oct. 13, 2022, and 2023-134288, filed on Aug. 21, 2023, in the Japan Patent Office, the entire disclosures of which are hereby incorporated by reference herein.
Embodiments of the present disclosure relate to an image-capturing system, an image-capturing device, and an image-capturing method.
Indirect time-of-flight (ToF) sensing is known as a technology for measuring the distance to an object. In indirect ToF sensing, the object is irradiated with reference light having a modulated intensity, and the light reflected from the object is received, so as to obtain four kinds of phase images that are shifted in phase from each other, for distance measuring. By converting the phase images, one distance image indicating the distance to the object is generated. For example, data of a construction site or an indoor space is acquired, and a point cloud can be reproduced as three-dimensional space restoration data by performing post-stage processing, which may be provided by a cloud service. If a tripod (or some other mount) is required for acquiring the data, the equipment becomes bulky and image capturing takes time. In a narrow space such as an attic, using a tripod is difficult. To avoid such situations, a hand-held device is used for image capturing.
According to an embodiment, an image-capturing system includes circuitry to receive multiple phase images for each of multiple phases. The multiple phase images include multiple first phase images captured under a first condition and one or more second phase images captured under a second condition different from the first condition, and the multiple first phase images are greater in number than the one or more second phase images. The circuitry calculates, for each of the multiple phases, a motion amount, at least, among the multiple first phase images of a same phase; performs correction on the multiple first phase images of the same phase based on the motion amount; and performs addition processing on the multiple first phase images of the same phase, so as to generate a summed phase image for each of the multiple phases.
According to another embodiment, an image-capturing device includes an image-capturing sensor to receive reflected light of irradiation light emitted from a light source and reflected by an object, to perform image capturing, and circuitry. The circuitry controls the image-capturing sensor to receive the reflected light so as to capture multiple phase images for each of multiple phases. The multiple phase images include multiple first phase images captured under a first condition and one or more second phase images captured under a second condition different from the first condition. The multiple first phase images are greater in number than the one or more second phase images. The circuitry calculates, for each of the multiple phases, a motion amount, at least, among the multiple first phase images of a same phase; performs correction on the multiple first phase images of the same phase based on the motion amount; and performs addition processing on the multiple first phase images of the same phase, so as to generate a summed phase image for each of the multiple phases.
According to another embodiment, an image-capturing method includes receiving, with an image-capturing sensor, reflected light of irradiation light emitted from a light source and reflected by an object, to perform image capturing; and controlling the image-capturing sensor to receive the reflected light so as to capture multiple phase images for each of multiple phases. The multiple phase images include multiple first phase images captured under a first condition and one or more second phase images captured under a second condition different from the first condition. The multiple first phase images are greater in number than the one or more second phase images. The method further includes calculating, for each of the multiple phases, a motion amount, at least, among the multiple first phase images of a same phase; performing correction on the multiple first phase images of the same phase based on the motion amount; and performing addition processing on the multiple first phase images of the same phase, so as to generate a summed phase image for each of the multiple phases.
A more complete appreciation of embodiments of the present disclosure and many of the attendant advantages and features thereof can be readily obtained and understood from the following detailed description with reference to the accompanying drawings, wherein:
The accompanying drawings are intended to depict embodiments of the present disclosure and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted. Also, identical or similar reference numerals designate identical or similar components throughout the several views.
In the related art, there is a distance measuring device that emits light from a light source, acquires a phase signal with an image-capturing device, repeatedly stores the acquired phase signal in a storage unit multiple times, and generates a distance image representing the distance to an object calculated from multiple phase signals.
When a hand holding the image-capturing device or the object shakes, a shift occurs in multiple phase images, which degrades the quality of the distance image generated based on the multiple phase images.
According to the embodiments described below, multiple phase images can be processed so as to reduce a decrease in distance calculation accuracy.
In describing embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that have a similar function, operate in a similar manner, and achieve a similar result.
Referring now to the drawings, an image-capturing device, an image-capturing system, and an image-capturing method according to embodiments of the present disclosure are described below. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The present invention, however, is not limited to the following embodiments, and constituent elements of the following embodiments include elements easily conceivable by those skilled in the art, substantially the same elements, and elements within so-called equivalent ranges. Furthermore, various omissions, substitutions, changes, and combinations of the constituent elements may be made without departing from the gist of the following embodiments.
Terms used in this disclosure are defined as described below. “Computer software,” which may be referred to simply as “software” in the following description, is defined as a program related to the operation of a computer or any information that is used in processing performed by a computer and equivalent to a program. “Application software,” which may be referred to simply as an “application,” is a generic name for any software used to perform certain processing. By contrast, an “operating system (OS)” is software for controlling a computer to allow, for example, application software to use computer resources. An “OS” controls basic operations of the computer, such as input and output of data, management of hardware resources such as a memory and a hard disk, and processes to be performed.
“Application software” operates by utilizing functions provided by an OS. A “program” is a set of instructions for causing a computer to perform processing to generate a certain result. Information that is not a direct command to a computer is not referred to as a program itself. However, information that defines processing performed by a program is similar in nature to a program and thus is interpreted as equivalent to a program. For example, a data structure, which is a logical structure of data represented by an interrelation between data elements, is interpreted as equivalent to a program.
An image-capturing system 1 illustrated in
The light projection device 10 emits pulsed light (irradiation light Le) toward the measured target 3.
The light projection device 10 includes a light source 21 and a drive circuit 22.
The light source 21 emits pulsed light toward the measured target 3.
The drive circuit 22 drives the light source 21. When receiving a modulation signal from the image-capturing control circuit 12, the drive circuit 22 causes a current corresponding to the modulation signal to flow to the light source 21. As a result, the light source 21 emits the pulsed light that is modulated, toward the measured target 3.
The light-receiving sensor 11 receives reflected light Lr that is light reflected from the measured target 3 being irradiated with the irradiation light Le emitted from the light projection device 10.
The light-receiving sensor 11 includes multiple pixels that receive the reflected light Lr. The pixels are arranged, for example, in a two-dimensional array as an area sensor. Each pixel includes, for example, a photodiode (PD) 103, modulation switches 104a and 104b, and capacitors 105a and 105b, as illustrated in
The PD 103 is a diode component that causes a current to flow in a certain direction when detecting (receiving) light.
The modulation switches 104a and 104b are switching elements that include, for example, a metal-oxide-semiconductor (MOS) transistor, a MOS device (e.g., a transfer gate), and a charge-coupled device (CCD). The modulation switches 104a and 104b perform an on/off operation according to a transfer signal from the image-capturing control circuit 12.
The capacitors 105a and 105b are power storage elements. Examples of the power storage elements include a MOS, a CCD, a metal insulator metal (MIM), wiring, and a parasitic capacitance of a p-n junction. The capacitors 105a and 105b accumulate electric charges (may be referred to simply as “charges”) independently of each other. Each of the capacitors 105a and 105b accumulates charges generated by photoelectric conversion according to the light received by the PD 103.
Each pixel of the light-receiving sensor 11 has a pixel structure to allocate the charges to two portions (the capacitors 105a and 105b). With this pixel structure, for example, a signal in one light receiving period can be allocated to a component of 0-degree phase and a component of 180-degree phase. In principle, it is also possible that each pixel has a pixel structure to allocate charges to three or more portions so that a signal in one light receiving period is allocated to three or more phase components. Each pixel receives the reflected light Lr reflected from the measured target 3. Each pixel accumulates charges according to a transfer signal (transfer signals TRA and TRB illustrated in
More specifically, in a period in which the transfer signal TRA illustrated in
When receiving an instruction signal for outputting received-light data from the image-capturing control circuit 12, the light-receiving sensor 11 performs analog-to-digital (A/D) conversion on signals that are voltages converted from charge amounts A and B (see
The image-capturing control circuit 12 repeatedly generates a pulse of the modulation signal and outputs the pulse to the light projection device 10, to control the irradiation operation of the irradiation light Le. In addition, the image-capturing control circuit 12 repeatedly generates a pulse of the transfer signal, to control the light receiving operation of the reflected light Lr performed by the light-receiving sensor 11. To be specific, the image-capturing control circuit 12 repeatedly generates a pulse of the transfer signal TRA for turning on and off the modulation switch 104a of the light-receiving sensor 11, to accumulate the charges in the capacitor 105a. Further, the image-capturing control circuit 12 repeatedly generates a pulse of the transfer signal TRB for turning on and off the modulation switch 104b of the light-receiving sensor 11, to accumulate the charges in the capacitor 105b. After the accumulation of charges in the capacitors 105a and 105b is repeated a predetermined number of times (i.e., a predetermined number of pulses are generated), the image-capturing control circuit 12 stops outputting the modulation signal, the transfer signal TRA, and the transfer signal TRB. Then, the image-capturing control circuit 12 outputs the instruction signal to the light-receiving sensor 11 so as to control the light-receiving sensor 11 to output the received-light data LRA and LRB to the addition and correction unit 13. Alternatively, the image-capturing control circuit 12 may store the received-light data LRA and LRB from the light-receiving sensor 11 in a memory such as a random access memory (RAM) of the phase-image capturing device 5 and output the received-light data LRA and LRB from the memory to the addition and correction unit 13. Yet alternatively, as illustrated in
The addition and correction unit 13 is a processing unit that receives, as phase signals, the received-light data LRA and LRB output from the phase-image capturing device 5 or the information processing device 7, and performs the correction for shaking (shaking of a user hand holding the capturing device or shaking of the object) on multiple phase images of two-dimensional phases corresponding to the phase signals for each pixel of the light-receiving sensor 11. The addition and correction unit 13 further performs addition processing on the multiple phase images of the same phase, to generate a summed phase image. The addition and correction unit 13 may be implemented by hardware such as an integrated circuit, or by a central processing unit (CPU) or an arithmetic and logic unit executing a program.
The distance calculation unit 14 is a processing unit that generates a distance image indicating the distance to the measured target 3 based on the summed phase image generated by the addition and correction unit 13. In other words, the distance calculation unit 14 calculates the distance to the measured target 3 by generating the distance image. The distance calculation unit 14 may be implemented by hardware such as an integrated circuit, or by a CPU or an arithmetic and logic unit executing a program. The distance calculation unit 14 may generate a point cloud of a three-dimensional space from the generated distance image. In addition to outputting the generated distance image to the display control unit 15, the distance calculation unit 14 may be configured to output the generated distance image outside the image-capturing system 1 via, for example, a network interface.
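For illustration only, the following is a minimal sketch of how a distance image can be back-projected into a point cloud, assuming a pinhole camera model; the function name and the intrinsic parameters fx, fy, cx, and cy are hypothetical and are not part of this disclosure.

```python
import numpy as np

def depth_to_point_cloud(distance_image, fx, fy, cx, cy):
    """Back-project a distance image into a 3-D point cloud.

    distance_image: (H, W) array of distances along the optical axis [m].
    fx, fy, cx, cy: assumed pinhole-camera intrinsics.
    Returns an (H*W, 3) array of XYZ points.
    """
    h, w = distance_image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * distance_image / fx
    y = (v - cy) * distance_image / fy
    return np.stack([x, y, distance_image], axis=-1).reshape(-1, 3)
```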
The display control unit 15 is, for example, a control circuit that controls a display 2 illustrated in
The display 2 displays, for example, the distance image generated by the distance calculation unit 14 under the control of the display control unit 15. The display 2 is, for example, a liquid crystal display (LCD) or an organic electro-luminescence (EL) display.
In the configuration illustrated in
An example of the operation of the image-capturing system 1 will be described below with reference to
When the irradiation light Le is reflected from the measured target 3, the reflected light Lr is received by the light-receiving sensor 11. The waveform of the reflected light Lr illustrated in
D = Td × c / 2 (Equation 1)
In a period in which the transfer signal TRA output from the image-capturing control circuit 12 is at the H level, the modulation switch 104a of each pixel of the light-receiving sensor 11 turns on (to be conductive). As a result, charges are accumulated in the capacitor 105a. Further, in a period in which the transfer signal TRB output from the image-capturing control circuit 12 is at the H level, the modulation switch 104b of each pixel of the light-receiving sensor 11 turns on (to be conductive). As a result, charges are accumulated in the capacitor 105b. The transfer signal TRA maintains the H level in a period substantially the same as the period of irradiation of the irradiation light Le, and transitions from the H level to the L level after the pulse duration Tw. The transfer signal TRB transitions from the L level to the H level almost at the same time as the transfer signal TRA transitions to the L level, and transitions from the H level to the L level after the elapse of the pulse duration Tw from the transition.
In the period during which the transfer signal TRA is at the H level, charges are accumulated in the capacitor 105a. Accordingly, an accumulated charge amount A, representing the amount of charges corresponding to the reflected light Lr, is accumulated in the capacitor 105a during the period indicated by hatching in
Td / Tw = B / (A + B) (Equation 2)
According to the above-described Equations 1 and 2, the distance D to the measured target 3 is obtained by the following Equation 3 from the accumulated charge amount A of the reflected light Lr in the capacitor 105a and the accumulated charge amount B of the reflected light Lr in the capacitor 105b.
D = B / (A + B) × Tw × c / 2 (Equation 3)
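For illustration only, a minimal numerical sketch of Equation 3 follows; it assumes the per-pixel charge amounts are available as arrays, and the function name is hypothetical.

```python
import numpy as np

C = 299_792_458.0  # speed of light c [m/s]

def pulse_tof_distance(a, b, tw):
    """Compute distance per Equation 3: D = B / (A + B) × Tw × c / 2.

    a, b: accumulated charge amounts A and B (per-pixel arrays).
    tw: pulse duration Tw of the irradiation light [s].
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    total = a + b
    # Guard against pixels that received no reflected light (A + B == 0).
    ratio = np.divide(b, total, out=np.zeros_like(total), where=total > 0)
    return ratio * tw * C / 2.0
```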
The sinusoidal modulation method is a method for acquiring the delay time Td of the reflected light by calculating a phase difference angle using signals detected by temporally dividing the received light (reflected light) into three or more signals. As an example, a four-phase sinusoidal modulation method will be described with reference to
As illustrated in
After each pixel is reset by a reset signal, the irradiation light is cyclically emitted toward the measured target, and the reflected light is cyclically returned. At this time, in the first subframe period, the modulation switches 104a and 104b of the pixels are alternately turned on a predetermined number of times by the transfer signal TRA (0-degree phase) and the transfer signal TRB (180-degree phase), respectively. The transfer signal TRB is shifted in phase from the transfer signal TRA by 180 degrees. Then, phase signals P0 and P180 corresponding to the amounts of charges accumulated temporally corresponding to the 0-degree phase and 180-degree phase are read out with a read signal.
Subsequently, in the second subframe period, the modulation switches 104a and 104b of the pixels are alternately turned on a predetermined number of times by the transfer signal TRA (90-degree phase) and the transfer signal TRB (270-degree phase), respectively. The transfer signal TRB is shifted in phase from the transfer signal TRA by 180 degrees. Then, phase signals P90 and P270 corresponding to the amounts of charges accumulated temporally corresponding to the 90-degree phase and 270-degree phase are read out with the read signal.
When the phase signals P0, P90, P180, and P270 that are temporally divided into four phases, i.e., 0-degree, 90-degree, 180-degree, and 270-degree phases, are acquired, a phase difference angle Φ is obtained using the following Equation 4.
Φ = Arctan{(P90 − P270) / (P0 − P180)} (Equation 4)
Using the phase difference angle Φ obtained by Equation 4, the delay time Td of the reflected light from the irradiation light is obtained from the following Equation 5.
Td = Φ / 2π × T (Equation 5)
In Equation 5, where Tw represents the pulse duration of the irradiation light and T represents the cycle, T = 2Tw.
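A minimal sketch of Equations 4 and 5 follows, for illustration only. Using np.arctan2 instead of a plain arctangent is an implementation choice, not mandated by the text; it resolves the phase difference angle over the full 0 to 2π range. The function name is hypothetical.

```python
import numpy as np

def sinusoidal_tof_delay(p0, p90, p180, p270, tw):
    """Compute the delay time Td per Equations 4 and 5.

    p0, p90, p180, p270: phase signals P0, P90, P180, P270 (arrays).
    tw: pulse duration Tw of the irradiation light [s]; cycle T = 2 × Tw.
    """
    phi = np.arctan2(p90 - p270, p0 - p180)  # Equation 4
    phi = np.mod(phi, 2.0 * np.pi)           # wrap into [0, 2π)
    t_cycle = 2.0 * tw                       # T = 2Tw
    return phi / (2.0 * np.pi) * t_cycle     # Equation 5: Td = Φ/2π × T
```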
From the calculation method of the phase difference angle Φ, an ideal waveform of the irradiation light for enhancing the distance measuring performance in the sinusoidal modulation method is a sine waveform. As in the pixel configuration of the light-receiving sensor 11 illustrated in
In the above-described example, each pixel has two charge allocation destinations. Alternatively, each pixel may have, for example, four charge allocation destinations, i.e., include four sets of a modulation switch and a capacitor. In this case, the four sets of the modulation switch and the capacitor correspond to the 0-degree, 90-degree, 180-degree, and 270-degree phases, respectively, so that the delay time Td and the distance D to the measured target can be calculated with one exposure. The phases to be read out as phase signals are not limited to the above-described four phases, and phase signals of a different number of phases may be read out.
The pulse modulation method is a method for acquiring the delay time Td of the reflected light by using the signals detected by temporally dividing the received light (reflected light) into two or more signals. As an example, a two-phase pulse modulation method will be described with reference to
As illustrated in
After each pixel is reset by a reset signal, the irradiation light is cyclically emitted toward the measured target, and the reflected light is cyclically returned. At this time, in one frame period, the modulation switches 104a and 104b of the pixels are alternately turned on a predetermined number of times by the transfer signal TRA (0-degree phase) and the transfer signal TRB (180-degree phase), respectively. The transfer signal TRB is shifted in phase from the transfer signal TRA by 180 degrees. Then, the phase signals P0 and P180 corresponding to the amounts of charges accumulated temporally corresponding to the 0-degree phase and 180-degree phase are read out with a read signal.
When the phase signals P0 and P180 temporally divided into two phases, i.e., 0-degree and 180-degree phases, are obtained, the delay time Td of the reflected light from the irradiation light can be obtained using the following Equation 6.
Td = {P180 / (P0 + P180)} × Tw (Equation 6)
From the calculation method of the delay time Td, an ideal waveform of the irradiation light for enhancing the distance-measuring performance in the pulse modulation method is a rectangular waveform.
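For illustration, the calculation of Equation 6 may be sketched as follows; the function name and the zero-signal guard are assumptions, not part of the disclosure.

```python
import numpy as np

def pulse_tof_delay(p0, p180, tw):
    """Compute the delay time per Equation 6: Td = {P180 / (P0 + P180)} × Tw."""
    p0 = np.asarray(p0, dtype=float)
    p180 = np.asarray(p180, dtype=float)
    total = p0 + p180
    # Guard against pixels with no signal (P0 + P180 == 0).
    ratio = np.divide(p180, total, out=np.zeros_like(total), where=total > 0)
    return ratio * tw
```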
The image-capturing control circuit 12 sequentially outputs the transfer signals TR in 0-degree, 90-degree, 180-degree, and 270-degree phases to the corresponding modulation switches with respect to the irradiation light emitted from the light projection device 10 (the modulation signal output to the light projection device 10), so as to turn on the modulation switches. With this operation, charges are accumulated in the corresponding capacitors. Then, the light-receiving sensor 11 can acquire the 0-degree, 90-degree, 180-degree, and 270-degree phase signals in one frame period. In this case, the amount of reflected light returning from a measured target at a long distance is smaller than that returning from a measured target at a near distance. Accordingly, as illustrated in
Subsequently, as illustrated in
In
Further, the multiple phase images acquired under the same condition, which are added together for each phase to generate the summed phase images, are not necessarily acquired in one period (exposure time) as illustrated in
Further, although the description is given above of capturing the phase images in the order of long distance, medium distance, and near distance with reference to
As described above, use of the summed phase image obtained by adding the multiple phase images of the same phase acquired under the same condition can increase the accuracy of the distance calculated as the distance image. Even so, there may be shifts among the multiple phase images due to a shake of a user holding the phase-image capturing device 5 (which may also be referred to as a “shake of the device”) or a shake of the object (measured target). When the addition processing is performed to generate the summed phase image from phase images having such a shift, the shift may result in a decrease in the accuracy of the distance measurement. Therefore, in the image-capturing system 1 according to the present embodiment, the addition and correction unit 13 performs shaking correction for correcting the shift among the multiple phase images due to a shake of the device or a shake of the object.
First, the addition and correction unit 13 focuses on the long-distance phase images, which are the largest in number per phase among the phase images obtained from the light-receiving sensor 11 in the exposure times for the different distances. In the example illustrated in
To be more specific, when selecting the reference phase image to be used for shaking correction from among the 0-degree phase images obtained in the long-distance exposure time, as illustrated in
As described above, a motion amount is calculated using the reference phase image, which is selected from among the phase images obtained in the long-distance exposure time as the one obtained at a time point closer to a different exposure time. Because the reference phase image is obtained at a time point close to the different exposure time (e.g., the medium-distance or near-distance exposure time), using this motion amount increases the accuracy of the shaking correction performed on the phase images obtained in that different exposure time.
A description is given of a method for the addition and correction unit 13 to calculate a motion amount (shift amount) among the multiple phase images caused by a shake of the device or the object, using the selected reference phase image. For example, edges of the measured target are extracted from the reference phase image as feature points of the measured target. Then, the edges extracted as the feature points are compared between the reference phase image and the other phase images, and the other phase images are aligned with the reference phase image, so as to calculate the motion amount.
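As one concrete way to realize such alignment (an assumption for illustration; the disclosure does not prescribe a specific algorithm), the global translation between two same-phase images can be estimated by phase correlation, as in the following sketch with a hypothetical function name:

```python
import numpy as np

def estimate_shift(reference, target):
    """Estimate the integer (dy, dx) translational shift between two
    same-phase images (2-D arrays) by phase correlation."""
    f_ref = np.fft.fft2(reference.astype(float))
    f_tgt = np.fft.fft2(target.astype(float))
    cross = f_ref * np.conj(f_tgt)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Peaks past the midpoint correspond to negative shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```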
Other methods are available for the addition and correction unit 13 to calculate the motion amount (shift amount) among the multiple phase images caused by a shake of the device or the object, using the selected reference phase image. For example, motion amounts ΔX and ΔY between a certain reference phase image and another phase image in a time Nt can be calculated by a typical process of obtaining an optical flow or by a machine learning method such as the one disclosed in the following reference article.
Alternatively, for example, a model of the shaking caused by a user's hand or information from an accelerometer may be introduced, and a motion may be predicted from such information together with the motion amount information of the multiple phase images obtained in a long-distance exposure time.
Further, in the case where the addition processing is performed on the multiple phase images acquired in multiple periods, which is described above with reference to
To be more specific, as illustrated in
For performing the addition processing on multiple phase images acquired in a long-distance exposure time in the case where phase images are obtained in the order of near-distance phase images, medium-distance phase images, and long-distance phase images as described above with reference to
Specifically, as illustrated in
As described above, also in the examples illustrated in
Further, as illustrated in
Although the reference phase images are selected from the multiple phase images in the periods of the same type of exposure (for example, the long-distance exposure time) in
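Combining the two steps described above, the following is a minimal sketch of the correction-then-addition processing, assuming integer shifts and the hypothetical estimate_shift helper from the earlier sketch:

```python
import numpy as np

def summed_phase_image(reference, others, estimate_shift):
    """Align each same-phase image to the reference and add them.

    reference: reference phase image (2-D array).
    others: iterable of the remaining phase images of the same phase.
    estimate_shift: callable returning an integer (dy, dx) motion amount.
    """
    summed = reference.astype(np.float64).copy()
    for image in others:
        dy, dx = estimate_shift(reference, image)
        # np.roll applies the integer shift; subpixel warping and border
        # handling are omitted in this sketch.
        summed += np.roll(image.astype(np.float64), (dy, dx), axis=(0, 1))
    return summed
```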
With reference to
The distance image DIG1 is generated by the distance calculation unit 14 based on the summed phase images without the shaking correction by the addition and correction unit 13. The distance image DIG1 serves as a first distance image.
The distance image DIG2 is generated by the distance calculation unit 14 based on the summed phase images on which the shaking correction has been performed by the addition and correction unit 13. The distance image DIG2 serves as a second distance image.
By displaying the distance image without the shaking correction and the distance image with the shaking correction side by side in this way, the effect of the shaking correction is presented.
There may be cases where the amount of shake of the device is too large to be corrected. Alternatively, a movement of the measured target may blur a region of the image or prevent a correct distance from being calculated in a region of the image. In such cases, the display control unit 15 may display, on the screen image 1000, the blurred region or, for example, a message prompting the user to regenerate the distance image. This allows the user to know that there is a shake of the device or the object in the captured phase images and to perform the processing again so as to eliminate the shake.
As described above, in the image-capturing system 1 according to the present embodiment, the light-receiving sensor 11 captures an image of the measured target 3 by receiving the light emitted from the light source 21 and reflected from the measured target 3. The image-capturing control circuit 12 controls the light-receiving sensor 11 to receive the reflected light and capture multiple phase images for each of multiple phases. The addition and correction unit 13 performs addition processing on the multiple phase images of the same phase, captured by the light-receiving sensor 11 under the same condition, to generate a summed phase image. The multiple phase images of the multiple phases include multiple first phase images (for example, long-distance phase images) captured under a first condition (for example, a long-distance exposure time) and one or more second phase images (for example, medium-distance phase images and near-distance phase images) captured under a second condition (for example, a medium-distance exposure time and a near-distance exposure time) different from the first condition. The first phase images are greater in number than the second phase image(s). The addition and correction unit 13 calculates a motion amount among the first phase images of the same phase, corrects the first phase images of the same phase based on the motion amount, and performs the addition processing to generate respective summed phase images of the multiple phases. By performing the correction (shaking correction) for a shake of the device or the measured target in this manner, the multiple phase images can be processed so as to reduce a decrease in distance calculation accuracy.
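As a hypothetical usage example tying the sketches above together (the variable names, the choice of the last-captured image as the reference, and the value of tw are all assumptions, and the helpers are the ones defined in the earlier sketches):

```python
# phase_images maps each phase (0, 90, 180, 270) to the list of
# same-phase long-distance phase images, in capture order.
summed = {}
for phase, images in phase_images.items():
    # Use the image captured closest to the other exposure periods as the
    # reference (here assumed to be the last one in capture order).
    reference, rest = images[-1], images[:-1]
    summed[phase] = summed_phase_image(reference, rest, estimate_shift)

# Distance from the summed 0- and 180-degree images (Equations 6 and 1):
td = pulse_tof_delay(summed[0], summed[180], tw=30e-9)  # tw is an assumed value
distance = td * 299_792_458.0 / 2.0                     # D = Td × c / 2
```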
In the image-capturing system 1 according to the present embodiment, the distance calculation unit 14 calculates the distance to the measured target 3 based on the summed phase images respectively acquired for the multiple phases. Further, the distance calculation unit 14 generates a distance image indicating the distance to the measured target 3 based on the summed phase images respectively acquired for the multiple phases, and the display control unit 15 causes the display 2 to display the distance image. This operation allows the user to visually confirm the distance to the measured target 3.
In the image-capturing system 1 according to the present embodiment, the addition and correction unit 13 further generates a summed phase image by the addition processing without the correction. The distance calculation unit 14 generates a first distance image (for example, the distance image DIG1) based on the summed phase image generated by the addition processing without the correction, and generates a second distance image (for example, the distance image DIG2) based on the summed phase image generated by the addition processing after the correction. The display control unit 15 displays, on the display 2, the first distance image and the second distance image. By displaying the distance image without the correction and the distance image with the correction side by side, the effect of the shaking correction is presented.
Descriptions are given below of an image-capturing system according to a modification, focusing on differences from the above-described image-capturing system 1. In the above-described embodiment, the addition and correction unit 13 automatically selects the reference phase image for calculating the motion amount from the phase images captured by the light-receiving sensor 11. By contrast, in the present modification described below, selection by a user of a reference phase image is received.
As illustrated in
The addition and correction unit 13 receives the received-light data LRA and LRB output from the light-receiving sensor 11 as phase signals, and performs the correction for shaking (shaking of the capturing device or the target) on multiple phase images of two-dimensional phases corresponding to the phase signals for each pixel of the light-receiving sensor 11. The addition and correction unit 13 further performs addition processing on the multiple phase images of the same phase, to generate a summed phase image. Specifically, using a phase image selected by a user operation input via the input device 16 as the reference phase image, the addition and correction unit 13 calculates a motion amount (shift amount) among the multiple phase images caused by a shake of the device or the object. Then, the addition and correction unit 13 performs correction (shaking correction) on the multiple long-distance phase images of the same phase using the calculated motion amount, performs the addition processing on the corrected long-distance phase images of the same phase, and generates a summed phase image for each phase. In addition, the addition and correction unit 13 performs the shaking correction on one or more medium-distance phase images and one or more near-distance phase images using the motion amount calculated from the reference phase image for the long distance.
The input device 16 is, for example, a touch panel or a control panel that is included in the image-capturing system 1a and receives a user operation. The input device 16 may be, for example, an input interface circuit that receives a user operation signal input to, for example, an external information processing device. In either case, the input device 16 allows the user to input an operation to the image processing apparatus 6, including the operation of selecting the reference phase image from among the respective phase images of the multiple phases as described above.
In order for the user to select the reference phase image via the input device 16, for example, the display control unit 15 may display, on the display 2, information indicating the phase images and the timing of image capture of the phase images as illustrated in
In addition to receiving the selection by the user of the reference phase image via the input device 16, the image-capturing system 1a may further receive, from the user, for example, the selection of a phase image that is not to be subjected to the addition processing by the addition and correction unit 13, or the selection of a phase image other than the reference phase image.
As described above, in the image-capturing system 1a according to the present modification, the input device 16 receives input from outside the image-capturing system 1a, and the addition and correction unit 13 calculates the motion amount using the reference phase image selected from the multiple first phase images based on the input via the input device 16. With this configuration, the correction can be performed using a reference phase image that reflects the user's intention.
At this time, the image-capturing control circuit 12 controls the light-receiving sensor 11 to receive the reflected light and capture multiple phase images for each of the multiple phases (step S2). The image-capturing control circuit 12 controls the light-receiving sensor 11 to obtain multiple first phase images under a first condition and one or more second phase images captured under a second condition different from the first condition such that the number of the first phase images is greater than the number of second phase images.
The addition and correction unit 13 then performs addition processing on the multiple phase images of the same phase, to generate a summed phase image (step S3).
The addition processing performed in step S3 includes the following steps as illustrated in
Note that, in a case where at least a portion of the functional units of the image-capturing system 1 according to the above-described embodiment and the image-capturing system 1a according to the modification is implemented by execution of a computer program, the program can be prestored in, for example, a read-only memory (ROM). Alternatively, the computer program executed by the image-capturing system 1 according to the above-described embodiment and the image-capturing system 1a according to the modification can be provided as a file in an installable or executable format, stored in a computer-readable recording medium, such as a compact disc read-only memory (CD-ROM), a flexible disk (FD), a compact disc recordable (CD-R), or a digital versatile disk (DVD). Further, the computer program executed by the image-capturing system 1 according to the above-described embodiment and the image-capturing system 1a according to the modification may be stored on a computer connected to a network such as the Internet, to be downloaded via the network, or may be provided or distributed via such a network. The computer program to be executed by the image-capturing system 1 according to the above-described embodiment and the image-capturing system 1a according to the modification has a module structure including at least one of the above-described functional units. As actual hardware, the CPU reads the computer program from the above-described memory, loads the computer program onto the main memory, and executes the computer program to implement the above-described functional units.
The present disclosure includes the following aspects.
In Aspect 1, an image-capturing system includes an addition unit configured to perform, for each of multiple phases, addition processing on multiple phase images of the same phase among multiple received phase images, to generate a summed phase image.
The multiple received phase images include multiple first phase images captured under a first condition and one or more second phase images captured under a second condition different from the first condition, and the first phase images are greater in number than the second phase image (or second phase images).
In the image-capturing system, the addition unit calculates a motion amount at least among the multiple first phase images of the same phase, corrects the multiple first phase images of the same phase based on the motion amount, and generates the summed phase image for each of the multiple phases by the addition processing.
According to Aspect 2, in the image-capturing system of Aspect 1, the multiple first phase images are captured in a first period, and the one or more second phase images are captured in a second period different from the first period.
The addition unit selects, from among the multiple first phase images captured in the first period, one first phase image captured at a time closer to the second period than a center of the first period, as a first reference phase image, and calculates the motion amount using the first reference phase image.
According to Aspect 3, in the image-capturing system of Aspect 2, the first period includes multiple temporally separated periods.
According to Aspect 4, in the image-capturing system of Aspect 2, the multiple received phase images further include multiple third phase images captured in a third period, different from the first period and the second period, under a third condition different from the first condition and the second condition.
The second period is between the first period and the third period, and the addition unit selects, from among the multiple third phase images captured in the third period, one third phase image captured at a time closer to the second period than a center of the third period, as a second reference phase image, and calculates the motion amount using the second reference phase image.
According to Aspect 5, the image-capturing system of any one of Aspects 1 to 4 further includes an image-capturing device that includes a light source to emit irradiation light, and a light-receiving sensor to receive reflected light of the irradiation light reflected by an object and output a light-receiving signal; and an image-capturing control unit configured to control the image-capturing device to capture the multiple phase images for each of the multiple phases.
According to Aspect 6, the image-capturing system of any one of Aspects 1 to 5 further includes an input unit to receive input from the outside. The input unit receives an input of selection of the reference phase image from among the multiple first phase images, and the addition unit calculates the motion amount using the reference phase image selected via the input unit.
According to Aspect 7, the image-capturing system of any one of Aspects 1 to 6 further includes a distance measurement unit configured to calculate a distance to an object based on the respective summed phase images of the multiple phases.
According to Aspect 8, in the image-capturing system of Aspect 7, the distance measurement unit generates a distance image indicating a distance to the object based on the respective summed phase images of the multiple phases, and the image-capturing system further includes a display control unit configured to control a display to display the distance image.
According to Aspect 9, in the image-capturing system of Aspect 8, the addition unit further performs, for each of the multiple phases, the addition processing on the multiple phase images without performing the correction, to generate another summed phase image.
The distance measurement unit generates a first distance image based on the summed phase image generated by the addition processing without the correction, and generates a second distance image based on the summed phase image generated by the addition processing with the correction.
The display control unit controls the display to display the first distance image and the second distance image.
In Aspect 10, an image-capturing device includes an image-capturing unit (image-capturing sensor) to receive reflected light of irradiation light emitted from a light source and reflected by an object, to perform image capturing, an image-capturing control unit configured to control the image-capturing unit to receive the reflected light so as to capture multiple phase images for each of multiple phases, and an addition unit configured to perform, for each of the multiple phases, addition processing on multiple phase images of a same phase, to generate a summed phase image.
The multiple phase images captured for each of the multiple phases include multiple first phase images captured under a first condition and one or more second phase images captured under a second condition different from the first condition, and the first phase images are greater in number than the second phase image (or second phase images).
The addition unit calculates a motion amount among the multiple phase images of the same phase regarding at least the multiple first phase images, corrects the multiple first phase images of the same phase based on the motion amount, and generates the summed phase image for each of the multiple phases by the addition processing.
In Aspect 11, an image-capturing method includes receiving, with an image-capturing unit, reflected light of irradiation light emitted from a light source and reflected by an object, to perform image capturing; controlling, with an image-capturing control unit, the image-capturing unit to receive the reflected light so as to capture multiple phase images for each of multiple phases; and performing, with an addition unit, addition processing on multiple phase images of a same phase for each of the multiple phases, to generate a summed phase image.
In the image-capturing method, the multiple phase images captured for each of the multiple phases include multiple first phase images captured under a first condition and one or more second phase images captured under a second condition different from the first condition, and the first phase images are greater in number than the second phase image (or second phase images).
The addition processing includes calculating a motion amount among the multiple phase images of the same phase regarding at least the multiple first phase images, correcting the multiple first phase images of the same phase based on the motion amount, and generating the summed phase image for each of the multiple phases.
The above-described embodiments are illustrative and do not limit the present invention. Thus, numerous additional modifications and variations are possible in light of the above teachings. For example, elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other within the scope of the present invention. Any one of the above-described operations may be performed in various other ways, for example, in an order different from the one described above.
The functionality of the elements disclosed herein may be implemented using circuitry or processing circuitry which includes general-purpose processors, special purpose processors, integrated circuits, application-specific integrated circuits (ASICs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), conventional circuitry and/or combinations thereof which are configured or programmed to perform the disclosed functionality. Processors are considered processing circuitry or circuitry as they include transistors and other circuitry therein. In the disclosure, the circuitry, units, or means are hardware that carry out or are programmed to perform the recited functionality. The hardware may be any hardware disclosed herein or otherwise known which is programmed or configured to carry out the recited functionality. When the hardware is a processor which may be considered a type of circuitry, the circuitry, means, or units are a combination of hardware and software, the software being used to configure the hardware and/or processor.