The present disclosure relates to a ToF (Time of Flight) camera.
In order to support autonomous driving or autonomous control of the light distribution of a headlamp, an object identification system is employed for sensing the position and the kind of an object that exists in the vicinity of a vehicle. The object identification system includes a sensor and a processing device configured to analyze the output of the sensor. As such a sensor, a desired one is selected from among a camera, LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), millimeter-wave radar, ultrasonic sonar, etc., giving consideration to the usage, required precision, and cost.
Typical monocular cameras are not capable of acquiring depth information. Accordingly, in a case in which there is overlap between multiple objects positioned at different distances, it is difficult to separate individual objects.
As a camera that is capable of acquiring depth information, ToF cameras are known. A ToF camera is configured to emit infrared light by means of a light-emitting device, to measure the time of flight (delay time) τ until the reflected light returns to the image sensor, and to convert the time of flight τ into distance information in the form of an image. The distance d to an object is represented by the following Expression (1).
d=cτ/2 (1).
The methods employed in ToF cameras are broadly classified into the direct and indirect methods. In the direct method, the delay time τ is directly measured. In this method, a high-speed Time To Digital Converter (TDC) is employed. In a ToF camera employing the direct method, in order to provide high resolution, a high-frequency device having a high-speed clock is required. Accordingly, it is difficult to employ such a method with a camera that generates an image including multiple pixels.
Accordingly, at present, ToF cameras employing the indirect method have become mainstream. As a method employed in such an indirect ToF camera, a square wave illumination method is known.
The ToF camera consecutively executes two exposures (image acquisitions) with an exposure time that is equal to the pulse width tw of the illumination light.
In the first exposure, the portion of the reflected light pulse (of width tw) that corresponds to the preceding period t1 is detected. Furthermore, in the second exposure, the portion that corresponds to the subsequent period t2 is detected. The amount of light (amount of charge Q1) detected in the first exposure and the amount of light (amount of charge Q2) detected in the second exposure are proportional to the periods t1 and t2, respectively. Accordingly, the following relations hold true.
t1=tw×Q1/(Q1+Q2)
t2=tw×Q2/(Q1+Q2)
In a case in which the first exposure is started with a delay of td from the start of the illumination, the delay time τ is represented by the following Expression (2).
τ=td+t2=td+tw×Q2/(Q1+Q2) (2)
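For reference, the computation according to Expressions (1) and (2) can be sketched as follows. This is a minimal illustration in Python; the function name and the numerical values are illustrative and do not appear in the embodiment.

```python
C = 299_792_458.0  # speed of light [m/s]

def square_wave_distance(q1, q2, tw, td):
    """Distance for ideal square-wave illumination (Expressions (1) and (2)).

    q1, q2: amounts of charge integrated in the first and second exposures
    tw:     pulse width of the illumination light [s]
    td:     delay of the first exposure from the illumination [s]
    """
    t2 = tw * q2 / (q1 + q2)   # rear-side period of the reflected pulse
    tau = td + t2              # Expression (2)
    return C * tau / 2.0       # Expression (1)

# Example: tw = 100 ns, td = 0, charge ratio Q1:Q2 = 3:1
print(square_wave_distance(q1=3.0, q2=1.0, tw=100e-9, td=0.0))  # ≈ 3.75 m
```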
As a result of investigating the square wave illumination ToF camera, the present inventor has come to recognize the following problem. The square wave illumination ToF camera requires the waveform of the illumination light to be a perfect square wave. In other words, the square wave illumination ToF camera operates on the assumption that the illumination light is emitted with a constant intensity over time during the pulse illumination period tw. However, it is difficult to generate such square wave illumination light with an intensity that is constant over time. Supporting such a requirement complicates the design of the light source, leading to increased cost.
The present disclosure is made in view of such a situation.
An embodiment of the present disclosure relates to a ToF camera. The ToF camera includes: a light source structured to illuminate the field of view (FOV) with pulsed illumination light having an intensity that changes with time; an image sensor structured to be exposed to reflected light from an object in the FOV in two consecutive exposures; and a calculation unit structured to generate a distance image giving consideration to the waveform of the illumination light based on the output of the image sensor acquired in the two exposures.
It is to be noted that any arbitrary combination or rearrangement of the above-described structural components and so forth is effective as and encompassed by the present embodiments. Moreover, this summary does not necessarily describe all necessary features so that the disclosure may also be a sub-combination of these described features.
Embodiments will now be described, by way of example only, with reference to the accompanying drawings which are meant to be exemplary, not limiting, and wherein like elements are numbered alike in several Figures.
A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “one embodiment” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.
One embodiment disclosed in the present specification relates to a ToF camera. The ToF camera includes: a light source structured to emit pulsed illumination light having an intensity that changes with time; an image sensor arranged to be exposed to reflected light from an object in two consecutive exposures; and a calculation unit structured to generate a distance image giving consideration to the waveform of the illumination light based on the output of the image sensor acquired in the two exposures.
This embodiment provides improved measurement precision even in a case in which the illumination light is not an ideal square wave. Accordingly, this allows the light source to be designed in a simple manner, thereby allowing the cost of the light source to be reduced.
In one embodiment, the calculation unit may generate the distance image using a calculation expression defined based on the waveform of the illumination light.
In one embodiment, the calculation unit may include: a distance calculation unit structured to calculate a distance assuming that the illumination light is emitted with a constant intensity over time; and a correction unit structured to correct the distance calculated by the distance calculation unit based on correction characteristics that correspond to the waveform of the illumination light.
In one embodiment, the correction characteristics may be acquired by calibration.
The preferred embodiments will now be described, which do not intend to limit the scope of the present invention but exemplify the invention. All of the features and the combinations thereof described in the embodiment are not necessarily essential to the invention.
The light source 22 emits pulsed illumination light L1 with an intensity that changes with time. The image sensor 24 is configured as a sensor suitable for a ToF camera. The image sensor 24 measures the reflected light L2 reflected from an object OBJ in two consecutive exposures. The image sensor 24 includes multiple light-receiving elements (which will also be referred to as “pixels” hereafter) in the form of an array. The image sensor 24 is configured to convert the light incident on the light-receiving elements (pixels) into an amount of charge or current in each of the two exposures, and to integrate the measurement values. The reflected light L2 is incident on each pixel of the image sensor 24 with a different timing (delay time τ). The image sensor 24 generates two sets of image data I1 and I2 that correspond to the two exposures. The pixel value of each pixel of the image data I1 acquired in the first exposure represents the integrated value of the reflected light L2 incident on that pixel in the first exposure (i.e., the integrated amount of charge Q1). Similarly, the pixel value of each pixel of the image data I2 acquired in the second exposure represents the integrated value of the reflected light L2 incident on that pixel in the second exposure (i.e., the integrated amount of charge Q2).
It should be noted that, in order to generate a single distance image I3, a set of illumination by the light source 22 and exposures by the image sensor 24 may be repeated multiple times.
The calculation unit 30 is configured to generate the distance image I3 giving consideration to the waveform of the illumination light L1 based on the outputs I1 and I2 of the image sensor 24 acquired in the two exposures.
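For reference, the per-pixel generation of the distance image can be sketched as follows. This minimal illustration applies the square-wave relation of Expression (2) for brevity; the waveform-aware calculation actually performed by the calculation unit 30 is described in the examples below. The function name and the array handling are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def distance_image(i1, i2, tw, td):
    """Generate a distance image I3 from the two exposure images I1 and I2.

    i1, i2: 2-D arrays whose pixel values are the integrated charges Q1, Q2
    Pixels with Q1 + Q2 == 0 are assigned t2 = 0 to avoid division by zero.
    """
    q_total = i1 + i2
    t2 = tw * np.divide(i2, q_total,
                        out=np.zeros_like(q_total, dtype=float),
                        where=q_total > 0)
    tau = td + t2
    return C * tau / 2.0  # Expression (1), applied per pixel
```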
The above is the configuration of the ToF camera 20. Next, description will be made regarding the operation thereof.
In the first exposure, the amount of received light of the front-side portion t1 of the reflected light L2 is detected, and the pixel value Q1 that represents the integrated value thereof is generated. The pixel value Q1 represents the left-side area S1 of the reflected light L2.
In the second exposure, the amount of received light of the rear-side portion t2 of the reflected light L2 is detected, and the pixel value Q2 that represents the integrated value thereof is generated. The pixel value Q2 represents the right-side area S2 of the reflected light L2.
In a case in which the illumination light L1 has a known waveform, i.e., in a case in which the reflected light L2 has a known waveform, and the areas S1 and S2 are known, the times t1 and t2 can be obtained. Subsequently, the distance d to the object that reflected the light incident on the pixel can be calculated based on Expressions (1) and (2).
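For reference, this inversion can be sketched numerically as follows. This is a minimal illustration; the function names are not part of the embodiment, and the waveform I(t) is assumed to be available as a known function. Since I(t) ≥ 0, the cumulative area S1(t1) increases monotonically in t1, so the root is unique and can be found by bisection.

```python
def solve_t1(i_of_t, q1, q2, tw, n=1000, iters=50):
    """Recover t1 from the measured charges Q1, Q2 for a known waveform I(t).

    Solves S1(t1) = Q1/(Q1+Q2) * S_total, which follows from the
    proportionality of the charges to the areas under the waveform.
    """
    def area(upper):
        # Trapezoidal integration of I(t) over [0, upper].
        if upper <= 0.0:
            return 0.0
        h = upper / n
        edges = 0.5 * (i_of_t(0.0) + i_of_t(upper))
        return h * (edges + sum(i_of_t(k * h) for k in range(1, n)))

    target = q1 / (q1 + q2) * area(tw)
    lo, hi = 0.0, tw
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if area(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example with a linearly decreasing waveform, tw = 100 ns:
tw = 100e-9
t1 = solve_t1(lambda t: 1.0 - (0.3 / tw) * t, q1=1.0, q2=1.0, tw=tw)
# t1 is slightly below tw/2 because the front of the pulse is brighter.
```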
The above is the operation of the ToF camera 20. The ToF camera 20 is capable of measuring the distance with high precision even if the intensity of the illumination light L1 changes during the illumination period tw. Furthermore, since this arrangement compensates for the change of the intensity of the illumination light L1, the cost of the light source 22 can be reduced.
Next, description will be made regarding the processing by the calculation unit 30 based on several examples.
In an example 1, the calculation unit 30 calculates the distance using a calculation expression defined based on the waveform of the illumination light L1.
The intensity waveform of the illumination light L1, i.e., the intensity waveform of the reflected light L2, is represented by I(t). It should be noted that description will be made with the pulse rising timing as t=0. In this case, the areas S1 and S2 are represented by the following Expressions (3) and (4).
S1 = ∫[0, t1] I(t)dt ∝ Q1    (3)
S2 = ∫[t1, tw] I(t)dt ∝ Q2    (4)
In a case in which I(t) is known and tw is a constant, the calculation unit 30 is capable of acquiring t1 based on the measurement results Q1 and Q2.
For example, let us consider a case in which the intensity of the illumination light L1 changes with a constant slope. In this case, the waveform I(t) is represented by Expression (5). It should be noted that k is a coefficient that represents the slope, and has a dimension that is the reciprocal of time.
I(t)=I0(1−kt) (5)
Expression (5) is substituted into Expressions (3) and (4), thereby obtaining Expressions (6) and (7).
Expression (8) can be obtained based on Expressions (6) and (7). Furthermore, I0 is eliminated, thereby obtaining Expression (9), which is a quadratic equation with respect to t1.
The quadratic equation is solved, thereby obtaining Expression (10).
The calculation unit 30 is capable of calculating the time t1 based on Expression (10). After t1 is obtained, the delay time τ can be calculated based on the following Expression (11).
τ=td+t2=td+tw−t1 (11)
Subsequently, the distance d can be calculated based on Expression (1).
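For reference, the calculation for the linear waveform can be sketched as follows. Since the algebraic forms of Expressions (6) to (10) are not reproduced above, the quadratic below is reconstructed directly from Expressions (3) to (5); the root is chosen such that t1 reduces to tw×Q1/(Q1+Q2) in the square-wave limit k → 0. The function names are illustrative.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def ramp_t1(q1, q2, tw, k):
    """t1 for the ramp waveform I(t) = I0*(1 - k*t) of Expression (5).

    Substituting (5) into (3) and (4) gives
      S1      = I0*(t1 - k*t1**2/2)
      S1 + S2 = I0*(tw - k*tw**2/2)
    and S1/(S1+S2) = Q1/(Q1+Q2) yields a quadratic in t1 (I0 cancels).
    """
    r = q1 / (q1 + q2)
    if k == 0.0:
        return tw * r  # square-wave case
    disc = 1.0 - 2.0 * k * r * (tw - k * tw**2 / 2.0)
    return (1.0 - math.sqrt(disc)) / k

def ramp_distance(q1, q2, tw, td, k):
    t1 = ramp_t1(q1, q2, tw, k)
    tau = td + tw - t1     # Expression (11)
    return C * tau / 2.0   # Expression (1)
```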
In Expression (10), the term “tw·Q1/(Q1+Q2)” is nothing but the time t1′ obtained assuming that the illumination light L1 is emitted as a square wave without any change in intensity. Accordingly, the time t1 obtained in a case in which the illumination light L1 is emitted with a changing intensity can be obtained by correcting the time t1′ calculated assuming that there is no change in intensity. In this case, the correction expression is represented by the following Expression (12). It should be noted that the correction Expression (12) is defined such that t1=0 when t1′=0.
In an example 2, the calculation unit 30 includes a distance calculation unit 32 structured to calculate the distance d assuming that the illumination light L1 is emitted with a constant intensity over time, and a correction unit 34. The correction unit 34 corrects the distance d calculated by the distance calculation unit 32 based on correction characteristics that correspond to the waveform of the illumination light L1, and outputs the corrected distance image I3 including the corrected distance dc.
For example, the correction characteristics to be used in the correction unit 34 may be acquired by calibration. With the relation between the true distance x and the distance d calculated by the distance calculation unit 32 denoted by d=f(x), the correction characteristics correspond to the inverse function.
x=f⁻¹(d)
The correction characteristics are converted into a polynomial approximation expression or a table. The polynomial approximation expression or table is stored in the correction unit 34. The correction unit 34 generates the corrected distance dc, which represents the true distance x, based on the correction characteristics.
dc=f⁻¹(d)
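For reference, the calibration-based correction can be sketched as follows. The calibration values and function names below are hypothetical and serve only to illustrate fitting a polynomial approximation of f⁻¹ and applying it in the correction unit 34.

```python
import numpy as np

def fit_inverse_characteristic(true_x, measured_d, deg=3):
    """Fit a polynomial approximation of x = f^-1(d) from calibration data.

    true_x:     known distances to a calibration target [m]
    measured_d: distances d reported by the distance calculation unit [m]
    """
    return np.polynomial.Polynomial.fit(measured_d, true_x, deg)

# Hypothetical calibration data (not taken from the embodiment):
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # true distances [m]
d = np.array([1.08, 2.12, 3.15, 4.17, 5.18])  # measured distances [m]
f_inv = fit_inverse_characteristic(x, d)
dc = f_inv(3.10)  # corrected distance dc for a measured d = 3.10 m
```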
The correction characteristics may be calculated by simulation instead of being obtained by calibration. Alternatively, in a case in which the waveform of the illumination light L1 is represented by a simple function I(t), the correction characteristics may be calculated based on the function I(t), and the correction characteristics thus obtained may be held by the correction unit 34.
Description will be made regarding the usage of the ToF camera 20.
The processing device 420 is configured to be capable of identifying the position and the kind (category, class) of an object based on the distance image I3. The processing device 420 may include a classifier 422. The processing device 420 may be configured as a combination of a processor (hardware component) such as a Central Processing Unit (CPU), Micro Processing Unit (MPU), microcontroller, or the like, and a software program to be executed by the processor (hardware component). Also, the processing device 420 may be configured as a combination of multiple processors. Alternatively, the processing device 420 may be configured as a hardware component alone.
The classifier 422 may be implemented based on a prediction model generated by machine learning. The classifier 422 judges the kind (category or class) of an object included in an input image. The algorithm employed by the classifier 422 is not restricted in particular. Examples of the algorithms that can be employed include You Only Look Once (YOLO), Single Shot MultiBox Detector (SSD), Region-based Convolutional Neural Network (R-CNN), Spatial Pyramid Pooling (SPPnet), Faster R-CNN, Deconvolution-SSD (DSSD), Mask R-CNN, etc. Also, other algorithms that will be developed in the future may be employed. The processing device 420 and the calculation unit 30 of the image capture apparatus 410 may be implemented on the same processor or the same FPGA.
Also, the output of the object identification system 400 may be used for the light distribution control of the automotive lamp. Also, the output of the object identification system 400 may be transmitted to the in-vehicle ECU so as to support autonomous driving control.
Also, the information with respect to the object OBJ detected by the processing device 40 may be used to support the light distribution control operation of the automotive lamp 200. Specifically, a lamp ECU 208 generates a suitable light distribution pattern based on the information with respect to the kind of the object OBJ and the position thereof generated by the processing device 40. The lighting circuit 204 and the optical system 206 operate so as to provide the light distribution pattern generated by the lamp ECU 208.
Also, the information with respect to the object OBJ detected by the processing device 40 may be transmitted to the in-vehicle ECU 304. The in-vehicle ECU may support autonomous driving based on the information thus transmitted. The function of the processing device 40 for detecting an object may be implemented in the in-vehicle ECU 304.
The light source 22 of the ToF camera 20 may be built into an automotive lamp.
While the preferred embodiments of the present disclosure have been described using specific terms, such description is for illustrative purposes only, and it is to be understood that changes and variations may be made without departing from the spirit or scope of the appended claims.