The present disclosure relates to a measurement device.
Patent Literature 1 discloses an indirect time-of-flight (ToF) measurement device that measures a distance to a target based on emission of laser light (pulsed light) and exposure to the reflected and returned light.
In the measurement device disclosed in Patent Literature 1, exposure is performed with the same width as a pulse width of emitted light. By setting both a light emission pulse width and an exposure width to be long, a depth of a measurement target region can be increased, and as a result, a frame rate (FPS) can be increased. However, due to restrictions of the light source device, the light emission pulse width is often limited to a certain range, and when the exposure period is extended beyond that range, the light emission pulse width < the exposure width. In this case, the depth of the measurement target region can be increased and the frame rate can be increased, but the exposure amount remains constant over a certain period (that is, a dead zone occurs), and a section arises in which a time or a distance cannot be measured by the indirect ToF. Therefore, it is difficult to increase the frame rate under the condition that the light emission pulse width < the exposure width.
An object of the present disclosure is to increase a frame rate even under a condition that a light emission pulse width < an exposure width.
A measurement device according to one aspect of the present disclosure for achieving the above-described object includes:
According to the present disclosure, a frame rate can be increased even under a condition that a light emission pulse width < an exposure width.
Hereinafter, an embodiment of the present disclosure will be described with reference to the drawings.
The measurement device 1 shown in
As shown in
The light emitting unit 10 emits (projects) light to a space to be imaged. The light emitting unit 10 emits light according to an instruction from the control unit 30. The light emitting unit 10 includes a light source 12 and a projecting optical system (not shown) that emits light emitted by the light source 12.
The light source 12 is a light source including a light emitting element. The light source 12 emits pulsed laser light under the control of the control unit 30. Hereinafter, this pulsed light is also referred to as a light emission pulse.
The imaging unit 20 (indirect ToF camera) performs imaging based on exposure to light reflected by a target of distance measurement. The imaging unit 20 includes an imaging sensor 22 and an exposure optical system (not shown) that guides incident (exposed) light to the imaging sensor 22.
The imaging sensor 22 captures an image of an object to be imaged according to an instruction from the control unit 30 and outputs image data obtained by capturing an image to an image acquisition unit 34 of the control unit 30. A value (pixel data) of each pixel constituting the image data indicates a signal value corresponding to an exposure amount. The imaging sensor 22 will be described later.
The control unit 30 controls the measurement device 1. The control unit 30 is implemented by a hardware configuration including elements such as a memory and a CPU, and circuits. The control unit 30 implements a predetermined function by the CPU executing a program stored in the memory. The control unit 30 is not limited to being implemented by executing software processing using the memory and the CPU. For example, the control unit 30 may be implemented by hardware such as an ASIC or an FPGA.
The timing control unit 32 controls a light emission timing of the light emitting unit 10 and an exposure timing of the imaging unit 20. The light emission timing and the exposure timing will be described later.
The image acquisition unit 34 acquires image data from the imaging sensor 22 of the imaging unit 20. The image acquisition unit 34 includes a memory (not shown) that stores the acquired image data. The image acquisition unit 34 corresponds to a “signal acquisition unit”.
The time calculation unit 36 calculates an arrival time (time of flight of light: ToF) from when the light emitting unit 10 emits light until reflected light reaches the imaging sensor 22. In the present embodiment, the time calculation unit 36 corresponds to a “calculation unit”.
The distance calculation unit 38 calculates a distance based on the arrival time of the light. As will be described later, the measurement device 1 can acquire a distance image by the distance calculation unit 38 calculating a distance for each pixel.
As shown in
The control unit 30 (timing control unit 32) causes the imaging sensor 22 of the imaging unit 20 to be exposed to the reflected light after a time Tdelay from emission of the light emission pulse. An exposure period is set based on the delay time Tdelay and an exposure width Gw.
The delay time Tdelay is a time (delay time) from the emission of the light emission pulse to a start of the exposure period. The delay time Tdelay is set according to a distance to a measurement target region. That is, by setting a short time from when the light emitting unit 10 emits the light emission pulse until the imaging sensor 22 is exposed to light, the measurement device 1 can acquire an image of a target (object that reflects light) in a short distance region. Conversely, by setting a long time from when the light emitting unit 10 emits the light emission pulse until the imaging sensor 22 is exposed to light, the measurement device 1 can acquire an image of the target in a long distance region.
The exposure width Gw is a width of the exposure period (that is, a period from a start of the exposure to an end of the exposure). The width of the exposure period defines a length of the measurement target region in a measurement direction. Accordingly, the smaller the exposure width Gw is, the higher a distance resolution becomes.
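As an illustrative numerical sketch (not part of the disclosure; the function name and values are chosen only for illustration, and the speed of light is approximated as 3×10^8 m/s), the relation between the delay time Tdelay, the exposure width Gw, and the measurement target region can be expressed as follows:

```python
# Illustrative sketch: mapping gate timing to a measurement region.
# Light travels to the target and back, so a time t corresponds to
# a one-way distance of c * t / 2.
C0 = 3.0e8  # approximate speed of light [m/s]

def region_from_gate(t_delay_s, gw_s, c=C0):
    """Return (near distance, depth) of the region observed by an
    exposure gate that opens t_delay_s after the light emission pulse
    and stays open for gw_s."""
    near = c * t_delay_s / 2.0   # closest target whose echo enters the gate
    depth = c * gw_s / 2.0       # length of the region in the measurement direction
    return near, depth

# A 20 ns delay and a 10 ns gate observe a region starting 3.0 m away
# with a depth of 1.5 m.
near, depth = region_from_gate(t_delay_s=20e-9, gw_s=10e-9)
```

A smaller Gw gives a shallower region per gate, which is the distance-resolution trade-off described above.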
In the present embodiment, as shown in
The light emission and the exposure are repeated a plurality of times at a cycle Tp shown in
In an image obtained for each region, an image of a target (object that reflects light) present in the region is captured. The image for each region may be referred to as a “range image”. A value (image data) of each pixel constituting the image indicates a signal value corresponding to an exposure amount. A set of (for example, four) range images obtained by one imaging may be referred to as a “subframe”. A plurality of regions (for example, four regions) measured by one imaging may be referred to as a “zone”.
As shown in
First, the control unit 30 (timing control unit 32) causes the light emitting unit 10 to emit light at the cycle Tp (see
First, the control unit 30 acquires an image of a region 1. At this time, the timing control unit 32 causes the imaging sensor 22 of the imaging unit 20 to be exposed to light for each pixel of an image in exposure periods delayed from the light emission timing.
The timing control unit 32 causes the imaging sensor 22 of the imaging unit 20 to be exposed to light repeatedly (n times) for each cycle Tp, and causes storage units CS (to be described later) of the imaging sensor 22 to store charges.
The image acquisition unit 34 acquires a signal value corresponding to the charges stored in the imaging sensor 22 (the storage units CS). The acquired image data on the region 1 is written into an image memory.
Next, in the same manner, the control unit 30 acquires an image of a region 2 adjacent (contiguous) to the region 1 in the measurement direction. Then, the control unit 30 writes image data on the region 2 to the image memory of the image acquisition unit 34. The delay time Tdelay from the light emission timing for the region 2 is set to be longer than that in the case of the region 1. As described above, the number of times of repetition (the number of times of charge storage) is set to increase as the measurement target region becomes farther away.
By performing the above operation up to a region N, an image up to the region N (entire-region image) is acquired.
The measurement device 1 according to the present embodiment can perform measurement in two modes: a normal mode and a high-speed mode.
The normal mode is a mode for performing normal measurement (general measurement by using the indirect ToF). In the normal mode, a pulse width Lw of a light emission pulse and a width Gw of an exposure period are set to be equal (Lw=Gw). For example, when the pulse width Lw of the light emission pulse is set to 10 nsec and the width Gw of the exposure period is set to 10 nsec, a depth (width in the measurement direction) of the measurement target region is set to 1.5 m.
The high-speed mode is a mode for performing measurement at a higher speed than the normal mode. In the high-speed mode, a width Gw of an exposure period is set to be larger than a pulse width Lw of a light emission pulse (Gw>Lw). For example, when the pulse width of the light emission pulse is set to 10 nsec and the width Gw of the exposure period is set to 100 nsec, the depth (width in the measurement direction) of the measurement target region is set to 15 m. In this case, a depth (width in the measurement direction) of a region in the high-speed mode is 10 times a depth (width in the measurement direction) of a region in the normal mode. As described above, in the high-speed mode, the depth of the measurement target region is larger than that in the normal mode, and a frame rate can be increased by reducing the number of regions (the number of images acquired when the distance image is created). In the following description, the width Gw of the exposure period is twice the pulse width Lw of the light emission pulse (Gw=2Lw).
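The numbers in the two mode examples above can be checked with a short sketch (an editorial illustration, not part of the disclosure; the total range of 150 m is a hypothetical value chosen only to compare the two modes, and c is approximated as 3×10^8 m/s):

```python
import math

# Why a wider gate raises the frame rate: fewer range images are needed
# to tile the same total measurement range.
C0 = 3.0e8  # approximate speed of light [m/s]

def region_depth(gw_s, c=C0):
    """Depth (width in the measurement direction) covered by one gate."""
    return c * gw_s / 2.0

def num_regions(total_range_m, gw_s):
    """Number of contiguous regions needed to cover total_range_m."""
    return math.ceil(total_range_m / region_depth(gw_s))

# Normal mode (10 ns gate): 1.5 m per region.
# High-speed mode (100 ns gate): 15 m per region, one tenth the images.
normal = num_regions(150.0, 10e-9)
high_speed = num_regions(150.0, 100e-9)
```

With the same hypothetical 150 m range, the high-speed mode needs a tenth as many range images, which is the frame-rate gain described in the text.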
In
In the normal mode, the pulse width Lw of the light emission pulse and the width Gw of the exposure period are set to be equal (Lw=Gw), and thus the exposure amount reaches a maximum (peak) at the distance L2 at which the exposure to all the reflected light of the light emission pulse is performed as shown in
In
In the exposure operation A, an exposure period (exposure period A) corresponding to a predetermined region (hereinafter, also referred to as one region) is set. A delay time of the exposure period A with respect to a start (time 0) of light emission of the light emission pulse is a delay time Ta (corresponding to Tdelay in
In the exposure operation B, an exposure period (exposure period B) corresponding to a region (hereinafter, also referred to as the other region) continuous to the predetermined region in a measurement direction is set. A delay time of the exposure operation B with respect to a start (time 0) of light emission of the light emission pulse is Tb (corresponding to Tdelay in
The timing control unit 32 sets such an exposure operation A and exposure operation B, and exposes each pixel of the imaging sensor 22 to the reflected light. The image acquisition unit 34 acquires signal values (here, signal values Sa and Sb shown in
In
The distance Lx to the target is calculated based on the arrival time Tx. That is, the light travels twice the distance L during the arrival time Tx, and thus when a speed of the light is Co, Lx=(Co×Tx)/2 . . . (2). The distance calculation unit 38 calculates the distance Lx for each pixel according to equation (2) using the arrival time Tx.
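Equation (2) can be illustrated with a short numerical sketch (the function name is an editorial choice, not from the disclosure; c is approximated as 3×10^8 m/s):

```python
C0 = 3.0e8  # approximate speed of light [m/s]

def distance_from_tof(tx_s, c=C0):
    """Equation (2): the light covers the round trip 2*Lx during Tx,
    so Lx = (c * Tx) / 2."""
    return c * tx_s / 2.0

# A 100 ns round-trip arrival time corresponds to a 15 m distance.
lx = distance_from_tof(100e-9)
```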
In
The distance La at which the exposure amount A reaches a peak corresponds to La=Co×Ta/2. The distance Lb at which the exposure amount B reaches a peak corresponds to Lb=Co×Tb/2.
In the high-speed mode, as described above, the exposure width Gw is set to be larger than the pulse width Lw of the light emission pulse (Gw>Lw: here, Gw=2Lw). In addition, the pulse width of the reflected light is also Lw (the same as that of the light emission pulse).
Since Gw is larger than Lw, both when a distance to the target is L2 (
As shown in
That is, only by the exposure operation shown in
Therefore, in the present embodiment, the following measurement is performed in the high-speed mode.
As shown in
Similarly, the timing control unit 32 sets two sub-exposure operations (sub-exposure operation B1 and sub-exposure operation B2) for the exposure operation B. Exposure widths of the sub-exposure operation B1 and the sub-exposure operation B2 are Gw (=2Lw). Start timings of the sub-exposure operation B1 and the sub-exposure operation B2 are different from each other by a time corresponding to the pulse width Lw. Accordingly, an overlap period corresponding to the pulse width Lw of the light emission pulse is provided in an exposure period (sub-exposure period B1) of the sub-exposure operation B1 and an exposure period (sub-exposure period B2) of the sub-exposure operation B2.
The timing control unit 32 causes each pixel of the imaging sensor 22 to perform two sub-exposure operations and exposes each pixel to reflected light. The two sub-exposure operations are repeatedly performed as described later. Each pixel of the imaging sensor 22 outputs a signal value corresponding to a total exposure amount in the exposure periods of the two sub-exposure operations. As shown in
A relation between a distance and the exposure amount A1 in the sub-exposure operation A1 is a trapezoidal shape as in
As described above, by providing two sub-exposure operations (sub-exposure operations A1 and A2 and sub-exposure operations B1 and B2) as the exposure operations A and B, respectively, a relation between the exposure amount A and the exposure amount B becomes the same relation as in
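The effect of summing the two sub-exposures can be verified numerically (an editorial sketch under the stated assumption Gw=2Lw, with all timings in units of Lw; the function names are illustrative and not part of the disclosure):

```python
# Each sub-exposure alone gives a trapezoidal exposure-amount curve
# (overlap of a pulse of width Lw with a gate of width Gw = 2*Lw), but
# the sum A1 + A2 of two sub-exposures whose start timings differ by Lw
# is triangular, with no flat dead zone. Timings are in units of Lw.
def overlap(a0, a1, b0, b1):
    """Length of the overlap of the intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def exposure(t_arrival, gate_start, lw=1.0, gw=2.0):
    """Exposure amount for a reflected pulse arriving at t_arrival and a
    gate opening at gate_start (proportional to pulse/gate overlap)."""
    return overlap(t_arrival, t_arrival + lw, gate_start, gate_start + gw)

def exposure_a(t_arrival, ta=0.0):
    """Summed exposure amount A = A1 + A2; sub-exposure A1 starts at ta
    and sub-exposure A2 starts Lw later (overlap period of width Lw)."""
    return exposure(t_arrival, ta) + exposure(t_arrival, ta + 1.0)
```

Sampling `exposure_a` over arrival times shows a curve that rises linearly to a single peak at the start of the overlap period and then falls linearly, which is the triangular relation the ratio-based calculation relies on.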
The image acquisition unit 34 acquires, for each pixel, a signal value corresponding to an amount of light to which each pixel is exposed in the exposure period, based on the output of the imaging sensor 22. Here, the image acquisition unit 34 acquires, for each pixel, the signal value Sa corresponding to the exposure amount A (=A1+A2) obtained by summing the exposure amount A1 in the sub-exposure period A1 and the exposure amount A2 in the sub-exposure period A2 as the signal value corresponding to the exposure amount in the exposure period A. In addition, the image acquisition unit 34 acquires, for each pixel, the signal value Sb corresponding to the exposure amount B (=B1+B2) obtained by summing the exposure amount B1 in the sub-exposure period B1 and the exposure amount B2 in the sub-exposure period B2 as the signal value corresponding to the exposure amount in the exposure period B.
The time calculation unit 36 calculates the arrival time Tx based on this relation according to the following equation (3): Tx=Tb−Gw×{Sa/(Sa+Sb)} . . . (3).
As shown in equation (3), the time calculation unit 36 calculates the arrival time Tx based on the ratio of the signal value Sa corresponding to one region to the total signal value (Sa+Sb) corresponding to the two regions and the start timing Tb of the overlap period of the two sub-exposure periods B1 and B2 corresponding to the other region. Accordingly, the arrival time Tx can also be calculated in the high-speed mode.
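The high-speed-mode calculation of equation (3), Tx=Tb−Gw×{Sa/(Sa+Sb)}, can be sketched as follows (an editorial illustration with hypothetical signal values; Tb is the start timing of the overlap period of the sub-exposure periods B1 and B2):

```python
def arrival_time(sa, sb, tb_s, gw_s):
    """Equation (3): arrival time from the signal values of two
    consecutive regions and the overlap-period start timing Tb."""
    return tb_s - gw_s * sa / (sa + sb)

# With Lw = 10 ns and Gw = 2*Lw = 20 ns: equal signal values place the
# target midway between the peaks of exposure amounts A and B.
tx_mid = arrival_time(sa=1.0, sb=1.0, tb_s=40e-9, gw_s=20e-9)
# Sa = 0 places the target at the peak of exposure amount B (Tx = Tb).
tx_peak_b = arrival_time(sa=0.0, sb=1.0, tb_s=40e-9, gw_s=20e-9)
```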
Similar to the normal mode, the distance calculation unit 38 calculates the distance Lx according to equation (2) using the arrival time Tx. The distance La at which the exposure amount A reaches a peak corresponds to La=Co×Ta/2. The distance Lb at which the exposure amount B reaches a peak corresponds to Lb=Co×Tb/2.
As described above, the control unit 30 (timing control unit 32) according to the present embodiment sets two sub-exposure operations (sub-exposure periods) having different start timings and having the same overlap period as the pulse width Lw in the high-speed mode. Accordingly, in the high-speed mode in which the exposure width Gw is larger than the pulse width Lw of the light emission pulse, a waveform of a triangular exposure amount can also be created, and the arrival time Tx or the distance Lx can also be calculated. Therefore, a frame rate can be increased.
When there is one measurable region per light emission, it takes time to acquire image data on a large number of regions, and thus a measurement time becomes long (it is difficult to increase the FPS). Therefore, a plurality of (here, four) exposure periods are set for one light emission, and a plurality of (here, four) regions are measured per light emission. Here, a multi-tap (four-tap) CMOS image sensor is used as the imaging sensor 22. However, the imaging sensor 22 is not limited to the multi-tap CMOS image sensor. The number of measurable regions per light emission may be one.
As shown in
The light receiving element PD is an element (for example, a photodiode) that generates charges corresponding to an exposure amount.
The signal reading unit RU1 includes a storage unit CS1, a transistor G1, a reset transistor RT1, a source follower transistor SF1, and a selection transistor SL1.
The storage unit CS1 is implemented by a storage capacitor C1 for storing charges generated in the light receiving element PD, and is generally called a floating diffusion (FD).
The transistor G1 is provided between the light receiving element PD and the storage unit CS1. The transistor G1 is turned on in a predetermined exposure period (for example, the exposure period A to be described later) and supplies the charges generated in the light receiving element PD to the storage unit CS1 based on an instruction from the timing control unit 32 of the control unit 30. Similarly, transistors G2 to G4 supply the charges generated in the light receiving element PD to storage units CS2 to CS4, respectively, based on instructions from the timing control unit 32. That is, the transistors G1 to G4 correspond to a “drive circuit” that distributes the charges generated in the light receiving element PD to the storage units CS1 to CS4 according to the exposure periods.
In this way, the imaging sensor 22 according to the present embodiment can divide and store the charges generated in the four exposure periods in the storage units (CS1 to CS4) corresponding to each exposure period.
In the normal mode, the charges are repeatedly stored in each storage unit in the corresponding exposure period. The charges stored in each storage unit correspond to an amount of light to which the light receiving element PD is exposed in each exposure period. A signal value is output based on the charges stored in each storage unit. The signal value based on the charges stored in the storage unit is a signal value corresponding to the exposure amount in each exposure period.
On the other hand, in the high-speed mode, the charges are repeatedly stored in each storage unit in the corresponding two sub-exposure periods. The charges stored in each storage unit correspond to an amount of light to which the light receiving element PD is exposed in the two sub-exposure periods. A signal value is output based on the charges stored in each storage unit. The signal value based on the charges stored in the storage unit is a signal value corresponding to a sum of the exposure amounts in the two sub-exposure periods. It is also possible to cause each of the storage units to perform sub exposure (storage of the charges), read out the signal values of the storage units, and then add the signal values.
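The accumulation behaviour of the storage units can be sketched as follows (a behavioural model only, not the disclosed circuit; the per-cycle charge values are hypothetical):

```python
# Minimal sketch of the multi-tap idea: charge generated by the light
# receiving element PD is routed to one of four storage units depending
# on which exposure period is active, and accumulates over n repeated
# light-emission/exposure cycles.
def accumulate(cycles, charge_per_cycle):
    """charge_per_cycle: dict mapping a storage unit (tap) to the charge
    it collects in one emission/exposure cycle."""
    storage = {"CS1": 0.0, "CS2": 0.0, "CS3": 0.0, "CS4": 0.0}
    for _ in range(cycles):
        for tap, q in charge_per_cycle.items():
            storage[tap] += q  # drive circuit routes the charge to its tap
    return storage

# Hypothetical case: reflected light falls only in exposure periods B
# and C, so only CS2 and CS3 accumulate charge over 100 cycles.
s = accumulate(100, {"CS1": 0.0, "CS2": 0.3, "CS3": 0.7, "CS4": 0.0})
```

The signal value read from each storage unit is then proportional to the accumulated charge, i.e. to the total exposure amount over all repetitions.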
By using such an imaging sensor 22, four regions can be measured by imaging once. That is, by using the imaging sensor 22, images of four regions (zones) can be captured by one imaging, and four range images (subframes) can be obtained.
In this example, exposure operations A to D (exposure periods A to D) are set. In addition, in the normal mode, as described above, the width Gw of each exposure period is equal to the pulse width Lw of the light emission pulse (Gw=Lw). The exposure periods are periods in which a level of the exposure (exposure operations) in
In the exposure operation A, a region (here, the region 1) defined by a delay time (Tdelay in
As shown in
In
The image acquisition unit 34 of the control unit 30 acquires the signal values Sa to Sd (signal values according to the charges stored in the storage units CS1 to CS4) of each pixel 221 from the imaging sensor 22. Accordingly, the image acquisition unit 34 acquires the image data on the regions 1 to 4. Similarly, the image acquisition unit 34 acquires the image up to the region N (entire-region image).
The time calculation unit 36 of the control unit 30 calculates the arrival time Tx of the reflected light. Specifically, first, the time calculation unit 36 specifies, from the signal values S up to the region N, the signal values resulting from exposure to the reflected light. For example, the time calculation unit 36 specifies the pair of signal values that corresponds to two consecutive exposure periods (in other words, two consecutive regions) and has the highest exposure amount. For example, when the signal value corresponding to an exposure period i in which exposure to the reflected light starts is Si, the two signal values Si and Si+1 are specified. For example, when the imaging sensor 22 is exposed to the reflected light in the exposure period B and the exposure period C, the signal values Sb and Sc correspond to the signal values Si and Si+1 obtained by exposing the imaging sensor 22 to the reflected light.
The time calculation unit 36 uses the signal values Si and Si+1 to calculate the flight time (hereinafter also referred to as an arrival time) Tx of the light according to the following equation (4).
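Equation (4) is not reproduced in this text. Assuming it takes the same ratio form as the high-speed-mode calculation of equation (3), with Gw=Lw in the normal mode and T(i+1) denoting the delay time of exposure period i+1, it can be sketched as follows (this form is an editorial inference from the surrounding description, not a verbatim reproduction of the patent's equation):

```python
# Hedged sketch of the assumed normal-mode form of equation (4):
#   Tx = T_{i+1} - Gw * Si / (Si + S_{i+1})
# where T_{i+1} is the delay time of exposure period i+1 and Gw = Lw.
def arrival_time_normal(si, si1, t_next_s, gw_s):
    return t_next_s - gw_s * si / (si + si1)

# Hypothetical values: Lw = Gw = 10 ns, exposure period i+1 delayed by
# 20 ns; equal signal values place the target between the two peaks.
tx = arrival_time_normal(si=2.0, si1=2.0, t_next_s=20e-9, gw_s=10e-9)
```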
The distance calculation unit 38 calculates the distance Lx to the target according to the above-mentioned equation (2) using the arrival time Tx.
In this example, the exposure operations A to D are provided, and two sub-exposure operations (sub-exposure periods) having different start timings are provided for each exposure operation in the same manner as in
The two sub-exposure periods are provided to have an overlap period corresponding to the pulse width Lw of the light emission pulse. In this case, the relation between a distance and an exposure amount in each exposure operation (two sub-exposure operations) is a triangular shape as shown in
In the examples shown in
As described above, when the four-tap imaging sensor 22 is used, four regions can be measured by one imaging, and the frame rate can be further increased.
In
As shown in
The image acquisition unit 34 acquires a signal value corresponding to a sum of the exposure amounts of the pixels 221 in the two sub-exposure periods (sum of the charges stored in the storage units CS1 to CS4).
Here, the two sub-exposure operations (sub-exposure periods) are alternately performed, but the present invention is not limited thereto. For example, one sub-exposure operation may be repeated a predetermined number of times, and then the other sub-exposure operation may be repeated the predetermined number of times. However, compared with a case in which one of the sub-exposure operations (sub-exposure periods) A1 to D1 is performed a plurality of times and then the corresponding one of the sub-exposure operations (sub-exposure periods) A2 to D2 is performed a plurality of times, alternately performing the two a plurality of times disperses the influence of variation in the distance to the target during the measurement, and thus the measurement accuracy can be improved.
In the present embodiment, two sub-exposure operations (sub-exposure periods) are provided for each exposure operation, and the number of sub-exposure operations (sub-exposure periods) is not limited to two as long as there are a plurality of sub-exposure operations. For example, three sub-exposure operations (sub-exposure periods) may be provided for each exposure operation.
In this example, the timing control unit 32 sets three sub-exposure operations (sub-exposure periods) A1 to A3 having different start timings as the exposure operation A. As shown in
Similarly, the timing control unit 32 sets three sub-exposure operations (sub-exposure periods) B1 to B3 for the exposure operation B (description will be omitted).
In this case, the relations between a distance and an exposure amount in the sub-exposure operations A1 to A3 are trapezoidal shapes whose timings are shifted from one another by Lw as shown in
The exposure amount A obtained by adding the exposure amounts A1 to A3 has a triangular shape as shown in
As described above, in the case in which there are three sub-exposure operations, when a time from the emission of the light emission pulse to a start timing of the overlap period of the exposure operation B is Tb, the arrival time Tx can be similarly calculated according to Tx=Tb−Gw×{Sa/(Sa+Sb)}.
The distance Lx can also be calculated according to the equation (2).
The measurement device 1 according to the present embodiment has been described above. The measurement device 1 includes the light emitting unit 10 that emits a light emission pulse (pulsed light), the imaging sensor 22, the timing control unit 32, and the image acquisition unit 34. The imaging sensor 22 outputs a signal value corresponding to the exposure amount for each pixel. The timing control unit 32 sets an exposure period corresponding to the measurement target region, and causes the pixels of the imaging sensor 22 to be exposed to the reflected light in the exposure period. The image acquisition unit 34 acquires a signal value corresponding to the exposure amount of the pixel in the exposure period based on the output of the imaging sensor 22. With such a configuration, the timing control unit 32 sets a plurality of (for example, two) sub-exposure periods longer than the pulse width Lw of the light emission pulse as the exposure period. Specifically, a plurality of sub-exposure periods having different start timings are set so as to have an overlap period corresponding to the pulse width Lw of the light emission pulse. The timing control unit 32 causes the pixels 221 of the imaging sensor 22 to be exposed to the reflected light in each sub-exposure period. The image acquisition unit 34 acquires a signal value corresponding to a sum of exposure amounts of the pixels 221 in the plurality of sub-exposure periods. Accordingly, in the high-speed mode in which the width Gw of the exposure period is larger than the pulse width Lw of the light emission pulse, a relation between a distance and an exposure amount can also be made triangular. Accordingly, the measurement device 1 can also measure the arrival time of the reflected light or the distance to the target in the high-speed mode, and the frame rate can be increased.
The measurement device 1 further includes the time calculation unit 36 that calculates the arrival time Tx of the reflected light based on the signal values corresponding to the two consecutive regions. Accordingly, the arrival time Tx can be obtained. The distance Lx to the target can also be obtained based on the arrival time Tx.
The time calculation unit 36 calculates the arrival time Tx based on the ratio of the signal value Sa corresponding to one region to the total signal value (Sa+Sb) corresponding to the two regions and the start timing Tb of the overlap period of the two sub-exposure periods corresponding to the other region. Accordingly, the arrival time Tx can also be obtained in the high-speed mode.
More specifically, when the width of the sub-exposure period is Gw, the signal value corresponding to one of the regions is Sa, the signal value corresponding to the other region is Sb, and the time from emission of the pulsed light to the start timing of the overlap period is Tb, the arrival time Tx is calculated according to Tx=Tb−Gw×{Sa/(Sa+Sb)}.
The width Gw of the sub-exposure period is an integral multiple of the pulse width Lw of the light emission pulse. Accordingly, by varying the start timing of the sub-exposure period by the pulse width Lw, a plurality of sub-exposure periods can be set so as to have an overlap period corresponding to the pulse width Lw (as a result, the graph showing the relation between the distance and the exposure amount is a triangular shape).
The imaging sensor 22 includes, for each pixel 221, the light receiving element PD that generates charges corresponding to the exposure amount and the storage units CS that store the generated charges. The imaging sensor 22 causes the storage unit CS (for example, the storage unit CS1) to store the charges generated in the light receiving element PD in a certain sub-exposure period (for example, the sub-exposure period A1) and the charges generated in the light receiving element PD in another sub-exposure period (for example, the sub-exposure period A2) having a different start timing, and outputs a signal value (for example, the signal value Sa) corresponding to the charges stored in the storage unit CS. Accordingly, the signal values corresponding to the exposure amounts in the two sub-exposure periods can be acquired.
The imaging sensor 22 alternately and repeatedly stores, in the storage unit CS, the charges generated in the light receiving element PD in the certain sub-exposure period and the charges generated in the light receiving element PD in the other sub-exposure period. The imaging sensor 22 outputs the signal value corresponding to the charges stored in the storage unit CS. Accordingly, the influence of the variation in the distance to the target during measurement can be dispersed, and the measurement accuracy can be improved.
The imaging sensor 22 includes the four storage units CS1 to CS4 and the transistors G1 to G4 that distribute and store the charges in the respective storage units CS1 to CS4 according to the exposure periods. The charges generated by one light emission pulse are distributed and stored in the respective storage units CS1 to CS4 according to the exposure periods. Accordingly, the frame rate can be further increased.
The embodiments described above are intended to facilitate understanding of the present disclosure, and are not to be construed as limiting the present disclosure. In addition, it is needless to say that the present disclosure can be changed or improved without departing from the gist thereof, and equivalents thereof are included in the present disclosure.
As described above, the following matters are disclosed in the present specification.
The present application is based on the Japanese patent application (JP2022-014089A) filed on Feb. 1, 2022, and the contents thereof are incorporated herein by reference.
Number | Date | Country | Kind |
---|---|---|---|
2022-014089 | Feb 2022 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2022/043371 | 11/24/2022 | WO |