The aspect of the embodiments relates to a light detection system.
Among light detection units using an avalanche diode that causes avalanche multiplication, there is known a light detection unit that digitally measures the number of photons entering the avalanche diode and outputs the measured value as a digital signal from a pixel. This technique is called “Single-Photon Avalanche Diode (SPAD)”.
Japanese Patent Application Laid-Open No. 2018-088488 discusses an apparatus in which light emitted to an object from a light source device and reflected on a surface of the object is received by a light detection unit having a plurality of SPAD pixels, so that a distance image corresponding to a distance to the object can be acquired.
In the apparatus discussed in Japanese Patent Application Laid-Open No. 2018-088488, the light detection unit has to receive light reflected from the object in order to acquire distance information. However, in some cases the light detection unit cannot receive the reflected light because of the shape of the object or the positional relationship between the object and the light detection unit. In such cases, information about the object cannot be acquired by the apparatus discussed in Japanese Patent Application Laid-Open No. 2018-088488.
According to an aspect of the embodiments, a light detection system includes a light detection unit including a plurality of photoelectric conversion portions arranged in a two-dimensional plane, and a calculation processing unit configured to execute calculation based on information acquired by the light detection unit, wherein the light detection unit acquires light amount distribution information of light based on an incident light beam incident on an object from a laser light source and light amount distribution information of light based on a reflected light beam reflected on the object in the two-dimensional plane, wherein the calculation processing unit calculates, from the light amount distribution information of light based on the incident light beam, the light amount distribution information of light based on the reflected light beam, and time information about time at which the light amount distribution information of light based on the incident light beam and the light amount distribution information of light based on the reflected light beam are acquired, information about a normal vector with respect to a reflection plane of the object on which the incident light beam is reflected, and wherein the normal vector is a vector in three dimensions including a direction orthogonal to the two-dimensional plane.
According to another aspect of the embodiments, a light detection system includes a light detection unit including a plurality of photoelectric conversion portions arranged in a two-dimensional plane, and a calculation processing unit configured to execute calculation based on information acquired by the light detection unit, wherein the light detection unit acquires light amount distribution information of light based on an incident light beam incident on an object from a laser light source and light amount distribution information of light based on a refracting light beam refracted by the object in the two-dimensional plane, wherein the calculation processing unit calculates, from the light amount distribution information of light based on the incident light beam, the light amount distribution information of light based on the refracting light beam, and time information about time at which the light amount distribution information of light based on the incident light beam and the light amount distribution information of light based on the refracting light beam are acquired, information about a normal vector with respect to a refracting plane of the object by which the incident light beam is refracted, and wherein the normal vector is a vector in three dimensions including a direction orthogonal to the two-dimensional plane.
According to yet another aspect of the embodiments, a light detection system includes a light detection unit including a plurality of photoelectric conversion portions arranged in a two-dimensional plane, and a calculation processing unit configured to execute calculation based on information acquired by the light detection unit, wherein the light detection unit acquires light amount distribution information of light based on a reflected light beam emitted from a laser light source and reflected on an object in the two-dimensional plane, wherein the calculation processing unit calculates, from direction information of laser light emitted to the object from the laser light source, the light amount distribution information of light based on the reflected light beam, and time information about time at which the light amount distribution information of light based on the reflected light beam is acquired, information about a normal vector with respect to a reflection plane of the object on which the laser light is reflected, and wherein the normal vector is a vector in three dimensions which includes a direction orthogonal to the two-dimensional plane.
Further features of the present disclosure will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Embodiments described below are merely examples embodying the technical idea of the present disclosure, and are not intended to limit the present disclosure. In order to provide a clear description, the sizes and positional relationships of members in the drawings may be exaggerated. In the below-described exemplary embodiments, the same reference numerals are applied to constituent elements similar to each other, and descriptions thereof will be omitted.
A light source 101 is a laser light source capable of emitting laser light. For example, a short-pulse laser source such as a picosecond laser can be used as the light source 101. A wavelength of the light source 101 is not limited to a specific wavelength, and a light source that emits infrared light can also be used. For example, a light source having a peak wavelength of 750 nm or more and 1500 nm or less can be used.
A light detection unit 100 detects diffusion light generated from pulsed laser light emitted from the light source 101. The light detection unit 100 may also detect the pulsed laser light itself emitted from the light source 101. The light detection unit 100 is a single-photon avalanche diode (SPAD) array configured of a plurality of SPAD pixels arranged in the X and Y directions. Hereinafter, the light detection unit 100 including a photoelectric conversion portion configured of avalanche diodes will be described as an example. However, the present exemplary embodiment is not limited thereto, and the photoelectric conversion portion may be configured of photodiodes that do not cause avalanche multiplication.
A timing control unit 110 controls a light emission timing of the light source 101 and a light detection timing of the light detection unit 100. More specifically, the timing control unit 110 controls the timings of starting and ending light emission executed by the light source 101 and the timings of starting and ending light detection executed by the light detection unit 100. In other words, the timing control unit 110 synchronizes the light emission timing of the light source 101 with a light detection starting timing of the light detection unit 100. Herein, “synchronization” includes not only a state where the light emission timing of the light source 101 and the light detection starting timing coincide with each other, but also a state where the light emission and the start of light detection are executed at different timings based on a control signal from the timing control unit 110. In other words, “synchronization” refers to a state where the timing at which light is emitted from the light source 101 and the timing at which the light detection unit 100 starts executing light detection are controlled based on a common control signal transmitted from the timing control unit 110.
A calculation processing unit 120 executes calculation processing based on the information detected by the light detection unit 100. The calculation processing unit 120 includes a traveling direction analysis unit 111, a space information extraction unit 112, and an image reconstitution unit 113.
The traveling direction analysis unit 111 calculates a traveling direction of laser light from pieces of light amount distribution information in the two-dimensional plane in a plurality of frames output from the light detection unit 100. In other words, the traveling direction analysis unit 111 acquires the traveling direction of light in an X-Y plane from pieces of information x (X direction information), y (Y direction information), and t (time information). The traveling direction analysis unit 111 may divide light tracks into a plurality of groups depending on each traveling direction of light.
The space information extraction unit 112 extracts space information of the Z direction component for each traveling direction of pulsed laser light calculated by the traveling direction analysis unit 111. In other words, the space information extraction unit 112 acquires the traveling direction of light as four-dimensional information of components x, y, z, and t from the three-dimensional information of components x, y, and t. By acquiring the traveling direction of light as the four-dimensional information of components x, y, z, and t, it is possible to acquire a normal vector of an object including the Z direction component. An acquisition method of the four-dimensional information will be described below in detail.
The image reconstitution unit 113 reconstitutes a shape of the object in an x-y-z three-dimensional space from information about the normal vector received from the space information extraction unit 112, and outputs the information to the display unit 114.
The display unit 114 displays an image based on a signal received from the image reconstitution unit 113. An image may be displayed on the display unit 114 by selecting a piece of information output from the image reconstitution unit 113 through a user operation.
A general idea of a measurement method using the light detection system 1000 according to the present exemplary embodiment will be described with reference to
As illustrated in
An effect of acquiring the incident light vector and the reflected light vector in the x-y-z three-dimensional space will be described with reference to
Examples of arrangement positions of the light detection unit 100, the light source 101, and the object 102 according to the present exemplary embodiment will be described with reference to
On the other hand,
As described above, by analyzing the speed in the traveling direction of light observed by the camera, a vector of the traveling direction of light in a new dimension (Z information) can be estimated. More specifically, information including information about the Z direction can be extracted from a data set including a plurality of pieces of light amount distribution information in the X-Y plane acquired by the light detection unit of the camera and a plurality of pieces of time information indicating time at which the pieces of light amount distribution information are acquired. Accordingly, four-dimensional spatiotemporal information can be acquired from three-dimensional spatiotemporal information. The light amount distribution information is distribution information of diffusion light.
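To make this concrete, the following sketch (illustrative Python, not part of the original disclosure; the pinhole camera model and all numbers are assumptions) projects two pulses with the same X direction speed but different Z components onto an imaging plane. The pulse with a Z component produces non-uniform steps in the apparent X position, which is the cue used to recover the Z information.

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def apparent_track(r0, n, t, z_p=1.0):
    """Project a pulse moving as r(t) = r0 + c*t*n onto an imaging plane
    at focal distance z_p (pinhole camera at the origin)."""
    r0, n = np.asarray(r0, float), np.asarray(n, float)
    n = n / np.linalg.norm(n)              # normalized propagation vector
    pos = r0 + C * t[:, None] * n          # pulse position in 3-D space
    return z_p * pos[:, 0] / pos[:, 2]     # apparent X on the sensor

t = np.linspace(0.0, 5.0e-9, 6)            # frame times [s]
x_flat = apparent_track([0, 0, 2.0], [1.0, 0.0, 0.0], t)   # no Z motion
x_tilt = apparent_track([0, 0, 2.0], [0.7, 0.0, 0.7], t)   # receding in Z
print(np.diff(x_flat))   # uniform steps: constant apparent speed
print(np.diff(x_tilt))   # shrinking steps: apparent speed reveals Z
```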
The SPAD pixel 103 includes a photoelectric conversion portion 201 (avalanche diode), a quench element 202, a control unit 210, a counter/memory 211, and a readout unit 212.
A potential based on a potential VH, which is higher than a potential VL supplied to the anode, is supplied to the cathode of the photoelectric conversion portion 201. Potentials are supplied to the anode and the cathode of the photoelectric conversion portion 201 so that a reverse bias strong enough to cause avalanche multiplication of the charges generated by a photon entering the photoelectric conversion portion 201 is applied. When photoelectric conversion is executed in a state where this reverse-bias potential is supplied, avalanche multiplication occurs in the electric charges generated by the incident light, and an avalanche current is generated.
In a case where the difference between the potentials of the anode and the cathode is greater than the breakdown voltage when the reverse-bias potential is supplied, the avalanche diode operates in Geiger mode. An avalanche diode that rapidly detects a faint signal at the single-photon level by using the Geiger mode operation is called a “Single Photon Avalanche Diode (SPAD)”.
The quench element 202 is connected between a power source supplying the high potential VH and the photoelectric conversion portion 201. The quench element 202 is configured of a P-type metal oxide semiconductor (MOS) transistor or a resistor element such as a polysilicon resistor. In addition, the quench element 202 may be configured of a plurality of MOS transistors connected in series. When the photoelectric current is multiplied by the avalanche multiplication occurring in the photoelectric conversion portion 201, the current acquired by the multiplied electric charges flows into the connection node of the photoelectric conversion portion 201 and the quench element 202. Because of the voltage drop caused by this current, the potential of the cathode of the photoelectric conversion portion 201 is lowered, so that the electron avalanche is no longer produced in the photoelectric conversion portion 201. As a result, the avalanche multiplication occurring in the photoelectric conversion portion 201 stops. Thereafter, the potential VH is supplied to the cathode of the photoelectric conversion portion 201 from the power source via the quench element 202, so that the potential supplied to the cathode returns to the potential VH. In this way, the operating region of the photoelectric conversion portion 201 is brought back to the Geiger mode. As described above, when the electric charge is multiplied by the avalanche multiplication, the quench element 202 functions as a load circuit (quench circuit) that suppresses the avalanche multiplication (i.e., a quench operation). Further, after the avalanche multiplication is suppressed, the quench element 202 functions to return the operating region of the avalanche diode to the Geiger mode.
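As a rough numerical illustration of the quench-and-recharge cycle described above (a toy model with hypothetical component values, not taken from the disclosure), the cathode potential can be modeled as recovering toward VH through the quench element with an RC time constant:

```python
import numpy as np

VH, VL = 3.3, -20.0                  # cathode supply / anode potential [V] (assumed)
V_BREAKDOWN = 22.0                   # breakdown voltage [V] (assumed)
R_QUENCH, C_DIODE = 200e3, 50e-15    # quench resistance [ohm], diode capacitance [F]

def cathode_potential(v_start, t):
    """Cathode recovering toward VH through the quench element (RC model)."""
    tau = R_QUENCH * C_DIODE
    return VH + (v_start - VH) * np.exp(-t / tau)

t = np.linspace(0.0, 100e-9, 5)          # time after an avalanche [s]
v = cathode_potential(VH - 2.0, t)       # avalanche dropped the cathode by ~2 V
geiger = (v - VL) > V_BREAKDOWN          # bias above breakdown -> Geiger mode again
for ti, vi, g in zip(t, v, geiger):
    print(f"t={ti*1e9:5.1f} ns  v={vi:5.2f} V  Geiger={g}")
```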
The control unit 210 determines whether to count signals output from each of the photoelectric conversion portions 201. For example, the control unit 210 is a switch (gate circuit) arranged at a position between the photoelectric conversion portion 201 and the counter/memory 211. A gate of the switch is connected to a pulse line 124, and the control unit 210 is switched on and off depending on the signal input to the pulse line 124. A signal based on the control signal transmitted from the timing control unit 110 is input to the pulse line 124.
Further, the control unit 210 may be configured of a logic circuit instead of a switch. For example, an AND circuit is arranged as the logic circuit. Then, an output from the photoelectric conversion portion 201 is used as a first input of the AND circuit, and a signal from the pulse line 124 is used as a second input. In this way, it is possible to switch whether to count the signals output from the photoelectric conversion portion 201.
Furthermore, the control unit 210 does not have to be arranged at a position between the photoelectric conversion portion 201 and the counter/memory 211, and the control unit 210 may be a circuit that inputs a signal for switching operation/non-operation of a counter of the counter/memory 211.
The counter/memory 211 counts the number of photons entering the photoelectric conversion portion 201 and saves the counted value as digital data. A reset line 213 is arranged for each row, so that the saved signal is reset when a control pulse is supplied to the reset line 213 from a vertical scanning circuit unit (not illustrated).
The readout unit 212 is connected to the counter/memory 211 and a readout signal line 123. A control pulse is supplied to the readout unit 212 from the vertical scanning circuit (not illustrated) via a control line, so that the readout unit 212 switches whether to output the count value of the counter/memory 211 to the readout signal line 123. For example, the readout unit 212 includes a buffer circuit for outputting signals.
The readout signal line 123 may be a signal line for outputting a signal to the calculation processing unit 120 from the light detection unit 100, or may be a signal line for outputting a signal to a signal processing unit arranged inside the light detection unit 100. Further, the horizontal scanning circuit unit (not illustrated) and the vertical scanning circuit unit (not illustrated) may be arranged on a substrate on which the SPAD array is arranged, or may be arranged on a substrate different from the substrate on which the SPAD array is arranged.
Further, while the counter is used in the above-described configuration, a time-to-digital converter (TDC) may be used instead of the counter, and information may be saved in a memory by acquiring a pulse detection timing.
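The difference between the two storage schemes can be sketched as follows (illustrative Python; the class names and the 50 ps TDC resolution are assumptions): a counter keeps only the number of detected photons, whereas a TDC keeps digitized arrival times.

```python
from dataclasses import dataclass, field

@dataclass
class CounterPixel:
    count: int = 0
    def on_photon(self, t_ns: float) -> None:
        self.count += 1                       # only the photon count is stored

@dataclass
class TdcPixel:
    lsb_ns: float = 0.05                      # assumed TDC resolution: 50 ps
    stamps: list = field(default_factory=list)
    def on_photon(self, t_ns: float) -> None:
        self.stamps.append(round(t_ns / self.lsb_ns))   # digitized arrival time

counter, tdc = CounterPixel(), TdcPixel()
for t in (3.10, 3.17, 8.02):                  # photon arrival times [ns]
    counter.on_photon(t)
    tdc.on_photon(t)
print(counter.count)                          # 3
print(tdc.stamps)                             # [62, 63, 160]
```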
In the first frame period, light emission and light detection are started at time t11 (t12), and the light detection is ended at time t13. In the first frame period illustrated in
As illustrated in
In the second frame, light emission is started at time t21, light detection is started at time t22, and light detection is ended at time t23. In comparison with the first frame, in the second frame, the interval between the start of light emission and the start of light detection is longer. In the second frame illustrated in
Thereafter, the respective frames are set so that the interval between the start of light emission and the start of light detection becomes gradually longer. More specifically, in the N-th frame, light emission is started at time tN1, light detection is started at time tN2, and light detection is ended at time tN3.
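The frame schedule can be summarized by a small generator (a sketch with hypothetical step sizes; the disclosure specifies only that the emission-to-detection interval grows from frame to frame). In the first frame the two start times coincide (t11 = t12), as in the timing chart above.

```python
def frame_schedule(n_frames, gate_width_ns, delay_step_ns, period_ns):
    """Yield (tK1, tK2, tK3): emission start, detection start, detection
    end for each frame K, with the emission-to-detection delay growing."""
    for k in range(n_frames):
        emit = k * period_ns                      # tK1: light emission starts
        start = emit + k * delay_step_ns          # tK2: light detection starts
        yield emit, start, start + gate_width_ns  # tK3: light detection ends

for t1, t2, t3 in frame_schedule(4, gate_width_ns=2.0,
                                 delay_step_ns=0.5, period_ns=100.0):
    print(f"emit={t1:6.1f} ns  gate on={t2:6.1f} ns  gate off={t3:6.1f} ns")
```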
The light detection unit 100 is a SPAD array configured of SPAD pixels arranged two-dimensionally. Accordingly, as illustrated in the above-described timing chart, a pair of data including light amount distribution information of the X-Y plane and time information indicating acquisition time of the light amount distribution information can be acquired for each of the frames. Therefore, it is possible to acquire the information relating to components x, y, and t.
Next, in step S420, the traveling direction analysis unit 111 acquires an intersection point of the incident light vector and the reflected light vector. In step S430, if the intersection point is imaged by the light detection unit 100 and the incident light vector and the reflected light vector intersect at one point (YES in step S430), the coordinates thereof are taken as the intersection point, and the processing proceeds to step S450. If the intersection point is not imaged by the light detection unit 100 and information about the intersection point cannot be acquired directly (NO in step S430), the processing proceeds to step S440. In step S440, the traveling direction analysis unit 111 determines the point where the two vectors are closest to each other to be the intersection point, and makes the two vectors intersect at that point by moving the two vectors without changing their inclinations in three dimensions.
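A minimal sketch of the closest-approach computation in step S440, assuming each light track has already been reduced to a three-dimensional line (a point and a direction); the midpoint of the shortest connecting segment serves as the pseudo intersection point. The function name and test values are illustrative.

```python
import numpy as np

def pseudo_intersection(p1, d1, p2, d2):
    """Closest-approach midpoint of the lines p1 + t*d1 and p2 + s*d2."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                  # approaches 0 for parallel lines
    t = (b * e - c * d) / denom            # parameter on line 1
    s = (a * e - b * d) / denom            # parameter on line 2
    return (p1 + t * d1 + p2 + s * d2) / 2.0

# Incident and reflected tracks that narrowly miss each other in 3-D:
print(pseudo_intersection([0, 0, 0], [1, 1, 0], [2, 0, 0.1], [-1, 1, 0]))
# -> [1.   1.   0.05]
```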
Next, in step S450, the space information extraction unit 112 searches for a fitting function and calculates a normal vector of the object surface at the intersection point. Positional information of the light track on the measured imaging surface (light amount distribution information in the X-Y plane) and the time information corresponding to that positional information are used for calculating the normal vector. A calculation model for searching for the function will be described below.
Next, in step S460, a three-dimensional shape of the object surface is acquired from the normal vector of the object surface.
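The geometry underlying steps S450 and S460 is the law of reflection: for unit propagation vectors, the reflected direction satisfies nr = ni − 2(ni·n)n, so the surface normal n is parallel to nr − ni. The following is a minimal sketch of that geometric step with hypothetical vectors (the disclosure obtains ni and nr from the fitted light tracks).

```python
import numpy as np

def surface_normal(ni, nr):
    """Unit normal of the reflection plane from unit incident and
    reflected propagation vectors (both pointing along the travel)."""
    ni = np.asarray(ni, float) / np.linalg.norm(ni)
    nr = np.asarray(nr, float) / np.linalg.norm(nr)
    n = nr - ni                            # bisector of -ni and nr
    return n / np.linalg.norm(n)

# A 45-degree mirror: light travelling along +x leaves along +z.
print(surface_normal([1, 0, 0], [0, 0, 1]))   # -> [-0.707  0.     0.707]
```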
By repeatedly executing the processing in steps S410 to S460 while changing the light emission area (light emission range) of light emitted from the light source, it is possible to measure a three-dimensional shape of the object illustrated in
An imaging surface 400 of the light detection unit 100 is illustrated in
As illustrated in
For example, when a linear approximation is used, the light track is described as “position in the X direction (objective variable) = a + b·time (explanatory variable)”, and the coefficients a and b take different values depending on the difference in the Z direction vector component.
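As a concrete illustration of this linear approximation (with synthetic numbers standing in for measured track data), the coefficients a and b can be estimated by an ordinary least-squares line fit:

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # frame times [ns]
x = np.array([0.1, 1.9, 4.1, 5.9, 8.1])   # track position on the sensor

b, a = np.polyfit(t, x, deg=1)            # slope b and intercept a
print(f"X = {a:.2f} + {b:.2f} * t")       # b is the apparent speed
```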
In order to describe the actual light track, it is necessary to consider the vector component in the Y direction, the distance between the position where diffusion light is generated and the imaging surface of the light detection unit, and the non-linear effect caused by the temporal change of the direction vector from the light detection unit to the pulsed light as the pulsed light progresses. Thus, the calculation model becomes more complex because the number of variables and coefficients increases. Even so, the positional information of light at the imaging surface and the time information are described by different functions depending on the vector component of light in the Z direction.
Similar to the graph in
In other words, based on the two-dimensional space information (vector information in the X-Z directions) and time information, a model for calculating one-dimensional positional information (vector information in the X direction) and time information on the light detection unit is created. Then, the vector component of the Z direction can be estimated if X-Z vector information and time information that sufficiently describe the actual measurement data of X direction information and time information can be found. It can also be said that an inverse problem is solved through the above-described calculation, because the movement of light as an observation target is estimated from the actual measurement data.
Further, by extending the dimension, a model for calculating two-dimensional space information (vector information in the X-Y directions) and time information on the light detection unit is created from three-dimensional space information (vector information in the X-Y-Z directions) and time information. The vector component of the Z direction can be estimated if three-dimensional space information (vector information in the X-Y-Z directions) and time information that sufficiently describe the X-Y direction information and time information of the actual measurement data can be found. More specifically, three-dimensional space information and time information are acquired by fitting the data set with the two-dimensional space information and time information on the light detection unit calculated by using the model. Then, the vector component of the Z direction is estimated from the acquired three-dimensional space information and time information.
Hereinafter, an example of the model used for the calculation will be described. A more complex model taking lens aberration and non-uniform sensor characteristics into consideration may be used instead of the below-described model. Further, parameter estimation using a neural network model may be executed instead of solving the least-squares problem.
Temporal change of a laser pulse position expressed by the expression 1 can be described as the expression 2.
Herein, the expression 3 is a constant vector not depending on time, “c” is a speed of light, and the expression 4 is a normalized vector that indicates a light propagating direction.
Herein, “t” is the time when a laser pulse has reached the position indicated by the expression 5, and the time t has an offset with respect to the observation time t′.
The time t′ is time when a laser pulse located at a position expressed by the expression 6 is detected by a camera.
A position of the laser pulse projected onto an imaging surface (focusing surface) of the light detection device is expressed by the expression 7.
Herein, “α(t)” is a coefficient depending on time, and “−zp” is a focal distance. If the focal distance zp does not depend on time, the coefficient α(t) can be described as α(t)=zp/(z0+ct·nz). The movement of a laser pulse geometrically projected on the imaging surface of the light detection device can be described as the following expression 8.
If time taken to propagate light from the laser pulse position to the light detection device is taken into consideration, observation time t′ can be described by the following expression 10.
When the above expression 10 is solved, the following expression 11 can be acquired.
The expression 11 is substituted into the expression 8, so that the position of the laser pulsed light projected on the imaging surface, as a function of the observation time t′, can be described by the following expression 12.
Through time-resolved measurement, N sets of data of a three-dimensional data point (Xpi, Ypi, Ti) can be acquired (i=1, 2, . . . , N). In order to recreate the four-dimensional light, six parameters, x0, y0, z0, nx, ny, and nz, are set, and an optimization problem expressed by the following expression 13 is solved.
Herein, N is the total number of measurement data points, (Xpi, Ypi) is the position of a pixel on the imaging surface with respect to the i-th data point, −zp is the focal distance, Ti is the observation time measured with respect to the i-th data point, and the expression 14 is the position of the laser light when t=0.
r⃗0 = (x0, y0, z0) <Expression 14>

In the expression 13, a normalized light propagation vector is expressed as

n⃗ = (sin θ cos ϕ, sin θ sin ϕ, cos θ) <Expression 15>
When the expression 13 is converted to polar coordinates, the following expressions 16 and 17 are acquired.
In the above expressions 16 and 17, the optimization problem is solved by setting five parameters, x0, y0, z0, θ, and ϕ.
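Putting the pieces together, the following sketch implements a least-squares fit in the spirit of expressions 13 to 17. It rests on explicit assumptions: a pinhole camera at the origin looking along +z with focal distance z_p, diffusion light travelling straight from the pulse to the camera, and synthetic data standing in for measured (Xpi, Ypi, Ti) points. The forward model is reconstructed from the description in the text, not copied from the original expressions.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8      # speed of light [m/s]
Z_P = 1.0      # focal distance z_p [m] (assumed)

def predict(params, t_obs):
    """Predicted sensor positions (Xp, Yp) at observation times t_obs."""
    x0, y0, z0, theta, phi = params
    r0 = np.array([x0, y0, z0])
    n = np.array([np.sin(theta) * np.cos(phi),      # expression 15
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    # Invert t' = t + |r0 + c*t*n| / c for the true pulse time t:
    t = (C**2 * t_obs**2 - r0 @ r0) / (2 * C * (C * t_obs + r0 @ n))
    pos = r0 + C * t[:, None] * n                   # pulse position r(t)
    alpha = Z_P / pos[:, 2]                         # alpha(t) = z_p / z(t)
    return alpha[:, None] * pos[:, :2]              # perspective projection

def residuals(params, t_obs, xy_meas):
    return (predict(params, t_obs) - xy_meas).ravel()

# Synthetic track: pulse starting at (0.2, 0.1, 2.0) m, 60 deg off +z.
truth = np.array([0.2, 0.1, 2.0, np.deg2rad(60), np.deg2rad(10)])
t_i = np.linspace(7.0e-9, 12.0e-9, 20)              # observation times T_i
xy_i = predict(truth, t_i)                          # stands in for SPAD data

fit = least_squares(residuals, x0=[0.0, 0.0, 1.5, 1.0, 0.0],
                    args=(t_i, xy_i))
print(np.round(fit.x, 3))                           # recovers the parameters
```

The polar parameterization keeps the propagation vector normalized by construction, which is exactly why the five-parameter form of the expressions 16 and 17 is convenient.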
The above-described model has the following three characteristics.
Firstly, the calculation is executed based on the suppositions of the “rectilinear propagation of light” and the “constancy of the speed of light”. Secondly, the two-dimensional coordinate (Xp, Yp) on the imaging surface is calculated by projecting the position (x, y, z) of the pulsed light onto the imaging surface. Thirdly, the detection time t′ is calculated in consideration of the time taken for the diffusion light to reach the camera from the position (x, y, z) that the pulsed light has reached at time t.
When the optimization problem is solved for a track with a small number of data points, the estimate may diverge or converge on an incorrect solution. This issue can be avoided if continuity of the light track is assumed. More specifically, when a plurality of light tracks exists, a limiting condition that the starting point of the second track is the ending point of the first track is added.
More specifically, with respect to the second track, a cost function (loss function), λ·((xc−x0−ctc·nx)² + (yc−y0−ctc·ny)² + (zc−z0−ctc·nz)²), may be added to the expression 13. Herein, (xc, yc, zc, tc) is the four-dimensional coordinate of the ending point of the first track. Alternatively, instead of adding the limiting condition to the least-squares formula, the ending point of the first track may be set as an initial condition for estimating the second track.
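Continuing the sketch above, the continuity condition can be appended to the residual vector; multiplying the gap by √λ makes the squared cost carry the weight λ (the value of λ and the function names are illustrative):

```python
import numpy as np

C = 3.0e8
LAM = 1.0e3                                # continuity weight lambda (tuned)

def residuals_with_continuity(params, t_obs, xy_meas, end_point):
    """Data residuals plus a term pulling the second track's start
    toward the first track's end point (xc, yc, zc, tc)."""
    x0, y0, z0, theta, phi = params
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    xc, yc, zc, tc = end_point             # 4-D end of the first track
    gap = np.array([x0, y0, z0]) + C * tc * n - np.array([xc, yc, zc])
    data = residuals(params, t_obs, xy_meas)     # from the previous sketch
    return np.concatenate([data, np.sqrt(LAM) * gap])
```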
A configuration of the light detection system according to a second exemplary embodiment will be described with reference to
According to the present exemplary embodiment, similar to the first exemplary embodiment, a three-dimensional shape of the object can be acquired even in a case where the light detection unit cannot receive light reflected from the object. Further, because the light source 101, which is a heat generating body, can be fixed to a heat dissipation member, heat dissipation performance can be improved in comparison with the case where the light source 101 is moved.
A configuration of a light detection system according to a third exemplary embodiment will be described with reference to
In the present exemplary embodiment, the reflection plane of the object 102 is located in a blind area between the boundaries L2. Accordingly, the light detection unit 100 cannot directly detect the intersection point of the incident light vector ni and the reflected light vector nr. However, that intersection point is estimated from the incident light vector ni and the reflected light vector nr. Then, the light detection unit 100 calculates a normal vector from the estimated intersection point to estimate a shape of the object 102.
According to the present exemplary embodiment, similar to the first exemplary embodiment, a three-dimensional shape of the object can be acquired even in a case where the light detection unit 100 cannot receive light reflected from the object 102. Further, a shape of the object 102 located in a blind area of the light detection unit 100 can also be calculated.
A configuration of a light detection system according to a fourth exemplary embodiment will be described with reference to
As illustrated in
According to the present exemplary embodiment, similar to the first exemplary embodiment, a shape of the object 102 can be acquired even in a case where it is not possible to receive light reflected from the object 102. Further, as long as the incident light vector ni and the reflected light vector nr are detectable, the object 102 does not have to be arranged at a position detectable by the light detection unit 100. Accordingly, the degree of freedom in the arrangement positions of the light detection unit 100 and the object 102 can be improved.
A configuration of a light detection system according to a fifth exemplary embodiment will be described with reference to
The present exemplary embodiment will be described based on the assumption that the light detection system has already acquired the information about the incident light vector ni from the light source 101 to the object 102. In this case, the reflected light vector nr is calculated from the reflected light beam detected by the light detection unit 100, and a normal vector is calculated by combining the information about the incident light vector ni and the information about the reflected light vector nr.
According to the present exemplary embodiment, similar to the fourth exemplary embodiment, a three-dimensional shape of the object 102 can be acquired even in a case where the reflected light is not received by the light detection unit 100. Further, a degree of freedom can be improved with respect to the arrangement positions of the light detection unit 100 and the object 102.
A configuration of a light detection system according to a sixth exemplary embodiment will be described with reference to
The present exemplary embodiment will be described based on the assumption that the light detection system has previously acquired the information about the incident light vector ni from the light source 101 to the object 102. For example, as illustrated in
According to the present exemplary embodiment, similar to the third exemplary embodiment, a three-dimensional shape of the object 102 can be acquired even if the reflected light is not received by the light detection unit 100. Further, a shape of the surface of the object 102 located in the blind area of the light detection unit 100 can also be calculated from the information about the incident light vector ni and the information about the reflected light vector nr acquired by the light detection unit 100.
A configuration of a light detection system according to a seventh exemplary embodiment will be described with reference to
In the present exemplary embodiment, a shape of the object 102 is measured by using light sources 101a and 101b. The light detection unit 100 estimates a shape of the object 102 by using an incident light vector nia from the light source 101a, a reflected light vector nra, and an incident light vector nib from the light source 101b.
According to the present exemplary embodiment, similar to the sixth exemplary embodiment, a three-dimensional shape of the object 102 can be acquired even if the reflected light is not received by the light detection unit 100. Further, a shape of the surface of the object 102 located in a blind area of the light detection unit 100 can also be calculated from the information about the incident light vector and the information about the reflected light vector acquired by the light detection unit 100. Further, even if the object 102 would be located in a blind area with a single light source, the probability of detecting light can be increased by using a plurality of light sources. Furthermore, because the shape of the object 102 is estimated by using the plurality of light sources 101a and 101b, the accuracy of the shape estimation can be improved.
A configuration of a light detection system according to an eighth exemplary embodiment will be described with reference to
In the present exemplary embodiment, a shape of the object 102 is measured by using the light sources 101a and 101b. In a state where the object 102 is divided into a plurality of portions, the light source 101a is used to detect one portion from among the plurality of divided portions of the object 102, whereas the light source 101b is used to detect another portion.
According to the present exemplary embodiment, similar to the other exemplary embodiments, a three-dimensional shape of the object 102 can be acquired even if the reflected light is not received by the light detection unit 100. Further, it is possible to reduce time taken to measure the shape of the object 102.
A configuration of a light detection system according to a ninth exemplary embodiment will be described with reference to
The light detection system according to the present exemplary embodiment acquires information about the back side of the object 102 (i.e., a blind area of the light detection unit 100) by causing light emitted from the light source 101 to be reflected a plurality of times between the mirrored-surface body 108 and the object 102.
In one embodiment, a reflection plane of the mirrored-surface body 108 is a concave surface. With this configuration, light is easily reflected toward the object 102.
According to the present exemplary embodiment, similar to the other exemplary embodiments, a three-dimensional shape of the object 102 can be acquired even if the reflected light is not received by the light detection unit 100. Further, information about the blind area of the light detection unit 100 can also be detected by the light detection unit 100.
A configuration of a light detection system according to a tenth exemplary embodiment will be described with reference to
In the present exemplary embodiment, a shape of the object 102 is measured by using the light detection units 100a and 100b. The object 102 is located in an area within boundaries L1a of the field of view of the light detection unit 100a and boundaries L1b of the field of view of the light detection unit 100b. The light detection units 100a and 100b are arranged at different positions. The light detection unit 100a detects an X-Y plane, whereas the light detection unit 100b detects a Y-Z plane. As described above, the light detection unit 100b is arranged to detect a two-dimensional plane different from a two-dimensional plane detected by the light detection unit 100a.
In the present exemplary embodiment, the X-Y plane is detected by the light detection unit 100a, and a plane including a component of the Z direction is detected by the light detection unit 100b. Accordingly, the incident light vector ni and the reflected light vector nr in the three-dimensional space can be acquired from the light detection units 100a and 100b without using the “apparent speed” described in the first to ninth exemplary embodiments. Therefore, the normal vector can be calculated, and the three-dimensional shape of the object can be measured, without using the apparent speed.
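A minimal sketch of the fusion step (the scale handling and the function name are assumptions of this illustration): unit 100a yields the in-plane direction (dx, dy) of a track and unit 100b, observing the Y-Z plane, yields (dy, dz); the shared Y component ties the two observations together into one three-dimensional vector.

```python
import numpy as np

def fuse_directions(dxy_a, dyz_b):
    """Merge (dx, dy) from unit 100a and (dy, dz) from unit 100b into a
    single unit vector, rescaling unit 100b via the shared dy."""
    (dx, dy_a), (dy_b, dz) = dxy_a, dyz_b
    scale = dy_a / dy_b if dy_b else 1.0   # align the common Y component
    v = np.array([dx, dy_a, dz * scale])
    return v / np.linalg.norm(v)

print(fuse_directions((1.0, 2.0), (4.0, 6.0)))   # -> (1, 2, 3) normalized
```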
In the exemplary embodiment using the apparent speed, the incident light vector and the reflected light vector are acquired by using diffusion light. In the present exemplary embodiment, light amount distribution information of laser light itself may be used.
According to the present exemplary embodiment, similar to the other exemplary embodiments, a three-dimensional shape of the object 102 can be acquired even if reflected light is not received by the light detection unit 100. Further, a load placed on the image processing can be reduced because the apparent speed does not have to be used. Furthermore, the reflected light can be detected by the other light detection unit 100b even if the reflected light is in the blind area of the light detection unit 100a. Therefore, it is possible to improve detection accuracy.
A configuration of a light detection system according to an eleventh exemplary embodiment will be described with reference to
As illustrated in
According to the present exemplary embodiment, similar to the other exemplary embodiments, a three-dimensional shape of the object 102 can be acquired even if reflected light is not received by the light detection unit 100. Further, a shape can be calculated without using light directly incident from the light source 101.
A configuration of a light detection system according to a twelfth exemplary embodiment will be described with reference to
As illustrated in
As illustrated in
A configuration of a light detection system according to a thirteenth exemplary embodiment will be described with reference to
The light detection unit 100 detects the incident light vector ni and the refracting light vector nf2 in the X-Y plane of the object 102. Similar to the first exemplary embodiment, the incident light vector ni in the four-dimensional x-y-z-t information is calculated from the incident light beam, and the refracting light vector nf2 in the four-dimensional x-y-z-t information is calculated from the refracting light beam. The refracting light vector nf1 traveling through the inside of the object 102 is estimated by connecting the point where the incident light beam enters the object 102 (a point where the vector changes) and the point where the light exits from the object 102.
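The normal of the refracting plane follows from the vector form of Snell's law, n1·sin θi = n2·sin θr: for unit propagation vectors din (incident) and dout (refracted), the normal is parallel to n1·din − n2·dout. Below is a sketch under the assumption that the refractive indices are known (n2 = 1.5 here is illustrative, not from the disclosure).

```python
import numpy as np

def refraction_normal(d_in, d_out, n1=1.0, n2=1.5):
    """Unit normal of the interface from unit incident and refracted
    propagation directions and the two refractive indices."""
    d_in = np.asarray(d_in, float) / np.linalg.norm(d_in)
    d_out = np.asarray(d_out, float) / np.linalg.norm(d_out)
    n = n1 * d_in - n2 * d_out             # parallel to the surface normal
    return n / np.linalg.norm(n)

# Air-to-glass refraction at 45 degrees incidence in the x-z plane:
d_in = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
sin_r = np.sin(np.deg2rad(45)) / 1.5       # Snell: sin(r) = sin(45 deg)/1.5
d_out = np.array([sin_r, 0.0, -np.cos(np.arcsin(sin_r))])
print(refraction_normal(d_in, d_out))      # -> [0. 0. 1.]: horizontal plane
```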
As illustrated in
According to the present exemplary embodiment, similar to the other exemplary embodiments, a three-dimensional shape of the object 102 can be acquired even in a case where light enters the object 102 and the reflected light is not received by the light detection unit 100.
The present disclosure is not limited to the above-described exemplary embodiments, and various changes and modifications are possible. For example, an exemplary embodiment in which a part of the configurations according to any one of the above-described exemplary embodiments is added to another exemplary embodiment or replaced with a part of the configurations according to another exemplary embodiment is also included in the exemplary embodiments of the present disclosure.
In addition, the above-described exemplary embodiments are merely examples embodying the present disclosure, and shall not be construed as limiting the technical range of the present disclosure. In other words, the present disclosure can be realized in various ways without departing from the technical spirit or the main features of the present disclosure.
According to the aspect of the present disclosure, it is possible to provide a light detection system capable of acquiring information about an object even in a case where the light detection unit cannot receive light reflected on the object.
While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2020-124549, filed Jul. 21, 2020, which is hereby incorporated by reference herein in its entirety.