This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2021-051645 filed on Mar. 25, 2021.
The present invention relates to a detection apparatus, a non-transitory computer readable medium storing a program causing a computer to execute a process for detecting an object, and an optical device.
Patent Literature 1 discloses a method for measuring a depth that is insensitive to corrupting light due to internal reflections, the method including: emitting light by a light source onto a scene; performing a corrupting light measurement by controlling a first electric charge accumulation unit of a pixel to collect electric charge based on light hitting the pixel during a first time period where the corrupting light hits the pixel but no return light from an object within a field of view of the pixel hits the pixel; removing a contribution from the corrupting light from one or more measurements influenced by the corrupting light based on the corrupting light measurement; and determining the depth based on the one or more measurements with the contribution from the corrupting light removed.
Patent Literature 2 discloses a distance measurement apparatus including: a light projection unit configured to project light onto a target object; a light-receiving unit configured to receive light reflected or scattered by the target object; a scanning unit configured to scan a scanning region with light projected from the light projection unit; and a distance measurement unit configured to measure a time from light projection by the light projection unit to light reception by the light-receiving unit, and measure a distance to the target object, in which when the scanning region is divided into plural divided regions, and a period from a start of scanning one divided region among all the divided regions to an end of scanning all the divided regions is defined as one scanning, it is determined whether a measurement value of a first divided region can be a measurement result of the first divided region based on the measurement value of the first divided region and a measurement value of a second divided region measured before the measurement value of the first divided region, measured by the distance measurement unit during the one scanning, and when it is determined that the measurement result of the first divided region can be obtained, the measurement value of the first divided region is output as a distance to the target object in the first divided region.
Patent Literature 3 discloses a time-of-flight distance measurement apparatus including: a first light source configured to emit first light to a first light-emitting space; a light-receiving unit including plural pixels and configured to receive light by the pixels; a distance image acquisition unit configured to acquire a distance image indicating a distance from the own apparatus to a target object for each pixel by receiving, by the light-receiving unit, light including first reflected light obtained by reflecting the first light on a front surface of the target object during a light-emitting period during which the first light is repeatedly emitted from the first light source; a luminance value image acquisition unit configured to acquire a luminance value image indicating a luminance value of each pixel by receiving, by the light-receiving unit, light including second reflected light obtained by reflecting, on a front surface of a target object, second light emitted from a second light source to a second light-emitting space including at least a part of the first light-emitting space such that an optical axis of the second light is different from an optical axis of the first light during a non-light-emitting period during which the first light is not repeatedly emitted from the first light source; and a multipath detection unit configured to detect a region where a multipath occurs, by using the distance image and the luminance value image.
Patent Literature 4 discloses a distance measurement apparatus including: a light-emitting unit configured to emit search light; and a light-receiving unit configured to receive reflected light of the search light, the apparatus measuring a distance to a target object based on the reflected light received by the light-receiving unit. A region centered on the light-emitting unit, in which an intensity of scattered light generated when the search light passes through or is reflected by a water droplet having a diameter larger than a wavelength of the search light exceeds a noise level of the light-receiving unit, is set as a strong scattering region, and the light-receiving unit is installed at a position deviated from the strong scattering region. A light-blocking unit is further provided to block, of the scattered light, convergent scattered light that converges in a specific direction and scattered light that would be incident on the light-receiving unit at an incident angle larger than that of the convergent scattered light.
[Patent Literature 1]: JP-A-2019-219400
[Patent Literature 2]: JP-A-2019-028039
[Patent Literature 3]: JP-A-2017-15448
[Patent Literature 4]: JP-A-2007-333592
Aspects of non-limiting embodiments of the present disclosure relate to a detection apparatus, a detection program, and an optical device that may prevent an influence of light other than direct light when detecting an object to be detected by detecting reflected light of light emitted to the object to be detected from a light-emitting element array including a plurality of light-emitting elements, as compared with a case where light other than the direct light that is directly incident on and reflected by the object to be detected is not considered.
Aspects of certain non-limiting embodiments of the present disclosure address the above advantages and/or other advantages not described above. However, aspects of the non-limiting embodiments are not required to address the advantages described above, and aspects of the non-limiting embodiments of the present disclosure may not address advantages described above.
According to an aspect of the present invention, there is provided a detection apparatus including: a light-emitting element array including plural light-emitting elements; a light-receiving element array including plural light-receiving elements configured to receive reflected light of light emitted from the light-emitting element array to an object to be detected; a drive unit configured to selectively drive the plural light-emitting elements; and a detection unit configured to cause the light-emitting elements to emit light, and to cause a first light-emitting element, corresponding to a first light-receiving element whose received light amount is less than a predetermined threshold among the received light amounts of all the light-receiving elements, to emit light to detect the object to be detected.
Exemplary embodiment(s) of the present invention will be described in detail based on the following figures, wherein:
Hereinafter, an example of an exemplary embodiment according to the disclosed technique will be described in detail with reference to the drawings.
As a measurement apparatus that measures a three-dimensional shape of an object to be measured, there is an apparatus that measures the three-dimensional shape by the so-called time-of-flight (ToF) method, which is based on the flight time of light. In the ToF method, the time from the timing at which light is emitted from a light source of the measurement apparatus to the timing at which that light is reflected by the object to be measured and received by a three-dimensional sensor (hereinafter referred to as a 3D sensor) of the measurement apparatus is measured, and from this time, the distance to the object to be measured is measured to identify the three-dimensional shape. An object whose three-dimensional shape is to be measured is referred to as the object to be measured. The object to be measured is an example of an object to be detected. Further, measurement of a three-dimensional shape may be referred to as three-dimensional measurement, 3D measurement, or 3D sensing.
The ToF method includes a direct method and a phase difference method (indirect method). The direct method irradiates the object to be measured with pulsed light emitted for a very short time and directly measures the time until the light returns. The phase difference method periodically blinks the pulsed light and detects, as a phase difference, the time delay accumulated while plural pulsed lights travel back and forth to the object to be measured. In the present exemplary embodiment, a case where the three-dimensional shape is measured by the phase difference method will be described.
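As an arithmetic illustration of the phase difference method, the measured phase lag of the received pulse train can be converted into a distance. The following sketch shows that conversion; the modulation frequency and phase value are illustrative assumptions, not values stated in this disclosure.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def distance_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Convert a measured phase delay of the received light pulse into a
    one-way distance: a lag of phase_rad corresponds to a time delay of
    phase_rad / (2*pi*f), and the light travels the path twice."""
    delay_s = phase_rad / (2.0 * math.pi * mod_freq_hz)
    return C * delay_s / 2.0

# Example: a quarter-cycle (pi/2) lag at 100 MHz modulation is about 0.37 m.
d = distance_from_phase(math.pi / 2.0, 100e6)
```

The division by two reflects the round trip: the pulse travels to the object to be measured and back before reaching the 3D sensor.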
Such a measurement apparatus is mounted on a portable information processing apparatus or the like and used for face authentication or the like of a user who intends to access the apparatus. In the related art, portable information processing apparatuses have authenticated users by a password, a fingerprint, an iris, or the like. In recent years, there has been a demand for authentication methods with higher security, and measurement apparatuses that measure a three-dimensional shape have therefore been mounted on portable information processing apparatuses. That is, a three-dimensional image of the face of an accessing user is acquired, whether access is permitted is identified, and only when the user is authenticated as permitted to access the apparatus is use of the own apparatus (the portable information processing apparatus) permitted.
Such a measurement apparatus is also applied to a case where the three-dimensional shape of the object to be measured is continuously measured, such as augmented reality (AR).
A configuration, a function, a method, and the like described in the present exemplary embodiment described below can be applied not only to the face authentication and the augmented reality but also to measurement of a three-dimensional shape of another object to be measured.
(Measurement Apparatus 1)
The measurement apparatus 1 includes an optical device 3 and a control unit 8. The control unit 8 controls the optical device 3. Then, the control unit 8 includes a three-dimensional shape identification unit 81 that identifies a three-dimensional shape of an object to be measured. The measurement apparatus 1 is an example of a detection apparatus. Further, the control unit 8 is an example of a detection unit.
A communication unit 14 and a storage unit 16 are connected to the I/O 12D.
The communication unit 14 is an interface for performing data communication with an external apparatus.
The storage unit 16 is configured with a non-volatile rewritable memory such as a flash ROM, and stores a measurement program 16A described later, a partition correspondence table 16B described later, and the like. The CPU 12A reads the measurement program 16A stored in the storage unit 16 into the RAM 12C and executes the measurement program 16A, so that the three-dimensional shape identification unit 81 is configured and the three-dimensional shape of the object to be measured is identified. The measurement program 16A is an example of a detection program.
The optical device 3 includes a light-emitting device 4 and a 3D sensor 5. The light-emitting device 4 includes a wiring substrate 10, a heat dissipation base material 100, a light source 20, a light diffusion member 30, a drive unit 50, a holding portion 60, and capacitors 70A and 70B. Further, the light-emitting device 4 may include passive elements such as a resistance element 6 and a capacitor 7 in order to operate the drive unit 50. Here, two resistance elements 6 and two capacitors 7 are provided. Further, although the two capacitors 70A and 70B are shown, one capacitor may be used. When the capacitors 70A and 70B are not distinguished from each other, the capacitors 70A and 70B are referred to as the capacitor 70. Further, the number of each of the resistance element 6 and the capacitor 7 may be one or more. Here, electric components such as the 3D sensor 5, the resistance element 6, and the capacitor 7 other than the light source 20, the drive unit 50, and the capacitor 70 may be referred to as circuit components without being distinguished from each other. The capacitor may be referred to as an electric condenser. The 3D sensor 5 is an example of a light-receiving element array.
The heat dissipation base material 100, the drive unit 50, the resistance element 6, and the capacitor 7 of the light-emitting device 4 are provided on a front surface of the wiring substrate 10. Although the 3D sensor 5 is not provided on the front surface of the wiring substrate 10 in
The light source 20, the capacitors 70A and 70B, and the holding portion 60 are provided on a front surface of the heat dissipation base material 100. Then, the light diffusion member 30 is provided on the holding portion 60. Here, an outer shape of the heat dissipation base material 100 and an outer shape of the light diffusion member 30 are the same. Here, the front surface refers to a front side of a paper surface of
The light source 20 is configured as a light-emitting element array in which plural light-emitting elements are two-dimensionally arranged (see
The light emitted by the light source 20 is incident on the light diffusion member 30. The light diffusion member 30 diffuses the incident light and emits the diffused light. The light diffusion member 30 is provided so as to cover the light source 20 and the capacitors 70A and 70B. That is, the light diffusion member 30 is held at a predetermined distance from the light source 20 and the capacitors 70A and 70B provided on the heat dissipation base material 100, by the holding portion 60 provided on the front surface of the heat dissipation base material 100. Therefore, the light emitted by the light source 20 is diffused by the light diffusion member 30 and radiated to the object to be measured, that is, to a wider range than in a case where the light diffusion member 30 is not provided.
When three-dimensional measurement is performed by the ToF method, the light source 20 is required to be driven by the drive unit 50 so as to emit, for example, pulsed light (hereinafter referred to as an emitted light pulse) having a frequency of 100 MHz or more and a rise time of 1 ns or less. When face authentication is taken as an example, the distance of light irradiation is about 10 cm to about 1 m, and the range of light irradiation is about 1 m square. The distance of light irradiation is referred to as a measurement distance, and the range of light irradiation is referred to as an irradiation range or a measurement range. Further, a surface virtually provided in the irradiation range or the measurement range is referred to as an irradiation surface. In cases other than face authentication, the measurement distance to the object to be measured and the irradiation range with respect to the object to be measured may be other than those described above.
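The 100 MHz modulation figure and the roughly 1 m measurement distance are mutually consistent: a phase difference measurement is unambiguous only up to half the modulation wavelength, because the phase wraps past one full period beyond that. The disclosure does not state this relation explicitly; the sketch below derives it from first principles.

```python
C = 299_792_458.0  # speed of light [m/s]

def unambiguous_range_m(mod_freq_hz: float) -> float:
    """Maximum distance a phase-difference ToF measurement can report
    before the phase wraps past one full modulation period."""
    return C / (2.0 * mod_freq_hz)

# At 100 MHz modulation, the phase wraps at about 1.5 m, which covers
# the ~10 cm to ~1 m face-authentication measurement distance.
r = unambiguous_range_m(100e6)
```

Raising the modulation frequency improves phase resolution but shrinks this unambiguous range, which is one reason the frequency is matched to the intended measurement distance.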
The 3D sensor 5 includes plural light-receiving elements, for example, 640×480 light-receiving elements, and outputs a signal corresponding to a time from a timing at which light is emitted from the light source 20 to a timing at which light is received by the 3D sensor 5.
For example, the light-receiving elements of the 3D sensor 5 receive pulsed reflected light (hereinafter referred to as a received light pulse) from the object to be measured with respect to the emitted light pulse from the light source 20, and accumulate an electric charge corresponding to the time until the light is received. The 3D sensor 5 is configured as a device having a CMOS structure in which each light-receiving element includes two gates and electric charge accumulation units corresponding to the gates. By alternately applying pulses to the two gates, generated photoelectrons are transferred at high speed to one of the two electric charge accumulation units, so that electric charges corresponding to the phase difference between the emitted light pulse and the received light pulse are accumulated in the two electric charge accumulation units. The 3D sensor 5 then outputs, as a signal, a digital value corresponding to the phase difference between the emitted light pulse and the received light pulse for each light-receiving element via an AD converter. That is, the 3D sensor 5 outputs a signal corresponding to the time from the timing at which the light is emitted from the light source 20 to the timing at which the light is received by the 3D sensor 5; in other words, a signal corresponding to the three-dimensional shape of the object to be measured is acquired from the 3D sensor 5. The AD converter may be provided in the 3D sensor 5 or outside the 3D sensor 5.
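The two-accumulation-unit arrangement can be pictured with a simple two-tap model. The sketch below is an illustrative assumption, not the actual arithmetic of the 3D sensor 5: it assumes the complementary gate pulses split the photoelectrons between the two accumulation units in proportion to the delay of the received pulse over one modulation period.

```python
import math

def phase_from_charges(q1: float, q2: float) -> float:
    """Recover the emitted/received pulse phase difference from the two
    electric charge accumulation units of one pixel, assuming the charge
    steered into the second unit grows linearly with the pulse delay
    over one modulation period (a simple two-tap model)."""
    total = q1 + q2
    if total <= 0.0:
        raise ValueError("no charge accumulated")
    return 2.0 * math.pi * (q2 / total)

# Equal charge in both accumulation units corresponds to a half-period
# delay, i.e. a phase difference of pi.
phi = phase_from_charges(100.0, 100.0)
```

Normalizing by the total charge makes the phase estimate insensitive to the absolute reflectivity of the object to be measured, since both accumulation units scale together with the received light amount.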
As described above, the measurement apparatus 1 diffuses the light emitted by the light source 20 to irradiate the object to be measured with the diffused light, and receives the reflected light from the object to be measured by the 3D sensor 5. In this way, the measurement apparatus 1 measures the three-dimensional shape of the object to be measured.
First, the light source 20, the light diffusion member 30, the drive unit 50, and the capacitors 70A and 70B that constitute the light-emitting device 4 will be described.
(Configuration of Light Source 20)
A direction orthogonal to the x direction and the y direction is defined as a z direction. A front surface of the light source 20 refers to a front side of the paper surface, that is, a surface on +z direction side, and a back surface of the light source 20 refers to a back side of the paper surface, that is, a surface on −z direction side. The plan view of the light source 20 is a view of the light source 20 when viewed from the front surface side.
More specifically, in the light source 20, a side on which an epitaxial layer that functions as a light-emitting layer (an active region 206 described later) is formed is referred to as a front surface, a front side, or a front surface side of the light source 20.
The VCSEL is a light-emitting element in which the active region serving as a light-emitting region is provided between a lower multilayer film reflector and an upper multilayer film reflector laminated on a semiconductor substrate 200, and laser light is emitted in a direction perpendicular to a front surface. For this reason, the VCSEL is easily formed into a two-dimensional array as compared with a case where an edge-emitting laser is used. The number of VCSELs provided in the light source 20 is, for example, 100 to 1000. The plural VCSELs are connected to each other in parallel and driven in parallel. The number of VCSELs described above is an example, and may be set in accordance with a measurement distance or an irradiation range.
As shown in
An anode electrode 218 (see
Here, the light source 20 has a rectangular shape when viewed from the front surface side (referred to as planar shape, the same applies hereinafter). Then, a side surface on a −y direction side is referred to as a side surface 21A, a side surface on a +y direction side is referred to as a side surface 21B, a side surface on a −x direction side is referred to as a side surface 22A, and a side surface on a +x direction side is referred to as a side surface 22B. The side surface 21A and the side surface 21B face each other. The side surface 22A and the side surface 22B connect the side surface 21A and the side surface 21B, and face each other.
Then, a center of the planar shape of the light source 20, that is, a center in the x direction and the y direction is defined as a center Ov.
(Drive Unit 50 and Capacitors 70A and 70B)
When it is desired to drive the light source 20 at a higher speed, the light source 20 is preferably driven on the low side. Low-side drive refers to a configuration in which a drive element such as a MOS transistor is positioned on the downstream side of the current path with respect to a drive target such as a VCSEL. Conversely, a configuration in which the drive element is positioned on the upstream side is referred to as high-side drive.
As described above, the light source 20 is configured by connecting the plural VCSELs in parallel. The anode electrode 218 of the VCSEL (see
As described above, the light source 20 is partitioned into the plural light-emitting partitions 24, and the control unit 8 drives the VCSELs for each light-emitting partition 24. In
As shown in
The drive unit 50 includes an n-channel MOS transistor 51 and a signal generation circuit 52 that turns on and off the MOS transistor 51. A drain (referred to as [D] in
One terminal of each of the capacitors 70A and 70B is connected to the power supply line 83, and the other terminal of each of the capacitors 70A and 70B is connected to the reference line 84. Here, when there are plural capacitors 70, the plural capacitors 70 are connected in parallel. That is, in
Next, a method for driving the light source 20 that is low-side driven will be described.
First, the control unit 8 turns on the switch elements SW of the light-emitting partition 24 in which the VCSELs are desired to emit light, and turns off the switch elements SW of the light-emitting partition 24 in which the VCSELs are not desired to emit light.
Hereinafter, driving the VCSELs provided in the light-emitting partition 24 in which the switch elements SW are turned on will be described.
First, a signal generated by the signal generation circuit 52 of the drive unit 50 is at the “L level”. In this case, the MOS transistor 51 is in an off state. That is, no current flows between the source ([S] in
The capacitors 70A and 70B are connected to the power supply 82, one terminal connected to the power supply line 83 of the capacitors 70A and 70B becomes the power supply potential, and the other terminal connected to the reference line 84 becomes the reference potential. Therefore, the capacitors 70A and 70B are charged by a current flowing from the power supply 82 (by being supplied with electric charges).
Next, when the signal generated by the signal generation circuit 52 of the drive unit 50 reaches the “H level”, the MOS transistor 51 shifts from the off state to an on state. Then, a closed loop is formed by the capacitors 70A and 70B and the MOS transistor 51 and the VCSELs connected in series, and electric charges accumulated in the capacitors 70A and 70B are supplied to the MOS transistor 51 and the VCSELs connected in series. That is, a drive current flows through the VCSELs, and the VCSELs emit light. The closed loop is a drive circuit that drives the light source 20.
Then, when the signal generated by the signal generation circuit 52 of the drive unit 50 reaches the “L level” again, the MOS transistor 51 shifts from the on state to the off state. Accordingly, the closed loop (the drive circuit) of the capacitors 70A and 70B and the MOS transistor 51 and the VCSELs connected in series becomes an open loop, and the drive current does not flow through the VCSELs. Accordingly, the VCSELs stop emitting light. The capacitors 70A and 70B are charged by being supplied with electric charges from the power supply 82.
As described above, each time the signal output by the signal generation circuit 52 is shifted between the “H level” and the “L level”, the MOS transistor 51 is repeatedly turned on and off, and the VCSELs repeat light emission and non-light emission. Repetition of turning on and off the MOS transistor 51 may be referred to as switching.
Incidentally, when the light emitted from the light source 20 is directly incident on the object to be measured and only the reflected light can be received by the 3D sensor 5, a distance to the object to be measured can be accurately measured.
However, in practice, the 3D sensor 5 includes a lens (not shown), and there is a problem of lens flare, in which a light-receiving element receives unnecessary light, multiply reflected by the lens, that it should not originally receive. Hereinafter, light that is directly incident on the object to be measured, reflected, and directly received by a light-receiving element is referred to as direct light, and unnecessary light other than the direct light is referred to as indirect light.
In a light-receiving element that receives not only the direct light but also the indirect light due to the lens flare, a received light amount may exceed an assumed amount and may be saturated. Further, even when an obstacle such as a finger of a user is present between the measurement apparatus 1 and the object to be measured, the received light amount may exceed an assumed amount due to unnecessary indirect light reflected by the obstacle.
Therefore, in the present exemplary embodiment, the distance to the object to be measured is measured by the phase difference method described above based on the received light amount of direct light, that is, light received by a light-receiving element PD that directly receives light directly incident on and reflected by the object to be measured, among the plural light-receiving elements PD provided in the 3D sensor 5 that receives the reflected light of the light emitted from the light source 20 to the object to be measured. Specifically, the distance to the object to be measured is measured by causing a VCSEL corresponding to a light-receiving element PD whose received light amount is less than a predetermined threshold to emit light. That is, a VCSEL corresponding to a light-receiving element PD whose received light amount is equal to or larger than the predetermined threshold does not emit light. The series of processes of measuring the distance to the object to be measured after the light source 20 is caused to emit light may be referred to as integration.
In the present exemplary embodiment, as shown in
In the present exemplary embodiment, a light-receiving partition 26 to which the light-receiving elements PD that receive the direct light belong is identified in advance for each light-emitting partition 24 when all VCSELs belonging to the light-emitting partition 24 emit light. A correspondence relationship between the light-emitting partition 24 and the light-receiving partition 26 is stored in advance in the storage unit 16 as the partition correspondence table 16B (see
The partition correspondence table 16B is obtained based on, for example, a received light amount of light received by each light-receiving partition 26 by individually causing each light-emitting partition 24 to emit light to a predetermined object to be measured in a state where an obstacle or the like is not present.
The light-emitting partitions 24 and the light-receiving partitions 26 may have any of a one-to-one, many-to-one, one-to-many, or many-to-many correspondence; in the present exemplary embodiment, for convenience of description, the light-emitting partitions 24 and the light-receiving partitions 26 have a one-to-one correspondence.
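The calibration described above can be sketched as follows. Here, `measure` is a hypothetical callback, not part of this disclosure, that fires one light-emitting partition alone against a known object to be measured and returns the received light amount for each light-receiving partition.

```python
def build_partition_table(num_emit_partitions, measure):
    """Build a partition correspondence table (cf. table 16B): fire each
    light-emitting partition alone and record the light-receiving
    partition that receives the largest light amount; with no obstacle
    present, that is the partition receiving the direct reflection."""
    table = {}
    for emit_id in range(num_emit_partitions):
        amounts = measure(emit_id)  # received amount per receiving partition
        table[emit_id] = max(range(len(amounts)), key=amounts.__getitem__)
    return table

# Example with a fake one-to-one measurement: emitting partition i
# produces the strongest response in receiving partition i.
fake = lambda emit_id: [900 if r == emit_id else 30 for r in range(4)]
table = build_partition_table(4, fake)
```

Because the correspondence is fixed by the optics, the table only needs to be built once and can then be stored, as the storage unit 16 does here.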
Next, operations of the measurement apparatus 1 according to the present exemplary embodiment will be described.
In step S100, the MOS transistor 51 of the drive unit 50 is turned on and all the switch elements SW are turned on so that the VCSELs of all the light-emitting partitions 24 of the light source 20 emit light. Accordingly, all the VCSELs emit light.
In step S102, a received light amount (an electric charge amount) of light received by the light-receiving elements of all the light-receiving partitions 26 is acquired from the 3D sensor 5.
In step S104, it is determined whether there is a light-receiving element whose received light amount is equal to or larger than a predetermined threshold. The threshold is set to a value at which it can be determined that indirect light other than the direct light is also received and the received light amount is saturated; for example, the threshold may be set to the maximum light amount that can be measured by the light-receiving element. When there is a light-receiving element whose received light amount is equal to or larger than the predetermined threshold, the processing shifts to step S106. On the other hand, when there is no such light-receiving element, the processing shifts to step S110.
In step S106, the light-emitting partition 24 corresponding to the light-receiving partition 26 to which the light-receiving element having the received light amount equal to or larger than the threshold belongs is identified with reference to the partition correspondence table 16B. The light-emitting partitions 24 other than the identified light-emitting partition 24 are set as first light-emitting partitions 24, and the VCSELs belonging to the first light-emitting partitions 24 are caused to emit light a predetermined number of times. The received light amounts of the light-receiving elements belonging to the light-receiving partitions 26 corresponding to the first light-emitting partitions 24 that emit light are acquired from the 3D sensor 5, and the distance to the object to be measured is measured by the phase difference method described above. That is, the VCSELs belonging to the first light-emitting partitions 24, which correspond to the light-receiving partitions 26 to which the light-receiving elements whose received light amounts are less than the threshold belong, are caused to emit light, and the distance to the object to be measured is measured. In this way, the VCSELs belonging to the first light-emitting partitions 24 corresponding to the light-receiving partitions 26 less influenced by the indirect light are caused to emit light to measure the distance to the object to be measured. The distance to the object to be measured may be measured by acquiring only the received light amounts of the light-receiving elements belonging to the light-receiving partitions 26 corresponding to the first light-emitting partitions 24.
In step S108, the VCSELs belonging to the second light-emitting partition 24, that is, the light-emitting partition 24 other than the first light-emitting partitions 24, are caused to emit light, the received light amounts of the light-receiving elements belonging to the light-receiving partition 26 corresponding to the second light-emitting partition 24 that emits light are acquired from the 3D sensor 5, and the distance to the object to be measured is measured. At this time, the VCSELs belonging to the second light-emitting partition 24 are caused to emit light N2 times, where N2 is smaller than the number of times N1 the VCSELs of the first light-emitting partitions 24 are caused to emit light in step S106. The number of times N2 is set so that the received light amount of light received by the light-receiving elements is less than the threshold. Accordingly, the received light amount of light received by the light-receiving elements is prevented from becoming equal to or larger than the threshold.
In step S110, since there is no light-receiving element whose received light amount is equal to or larger than the threshold, the distance to the object to be measured is measured based on the received light amounts of all the light-receiving elements acquired in step S102.
In this way, in the present exemplary embodiment, the light-emitting partitions corresponding to the light-receiving partitions 26 to which the light-receiving elements whose received light amounts are less than the predetermined threshold belong are set as the first light-emitting partitions 24, and the light-emitting partition corresponding to the light-receiving partition 26 to which the light-receiving elements whose received light amounts are equal to or larger than the predetermined threshold belong is set as the second light-emitting partition 24. Then, the second light-emitting partition 24 is caused to emit light with the number of times of light emission reduced.
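The selection performed in steps S100 to S110 can be summarized as the following sketch. The threshold value, the emission counts N1 and N2, and the contents of the one-to-one table are illustrative assumptions, not values from this disclosure.

```python
THRESHOLD = 1000  # received light amount regarded as saturated (assumed units)
N1 = 8            # emission count for the first light-emitting partitions
N2 = 2            # reduced emission count for the second partition (N2 < N1)
TABLE = {0: 0, 1: 1, 2: 2, 3: 3}  # light-emitting -> light-receiving partition

def plan_emissions(received_by_partition):
    """Classify each light-emitting partition by the received light amount
    of its corresponding light-receiving partition (step S104) and return
    the number of light emissions to use for it (steps S106 and S108)."""
    plan = {}
    for emit_id, recv_id in TABLE.items():
        saturated = received_by_partition[recv_id] >= THRESHOLD
        plan[emit_id] = N2 if saturated else N1
    return plan

# Receiving partition 1 saturates (e.g. due to lens flare), so its
# corresponding emitting partition is driven only N2 times.
plan = plan_emissions({0: 120, 1: 1500, 2: 90, 3: 80})
```

Reducing the emission count for the saturated partition, rather than disabling it outright, still yields a distance measurement there while keeping the accumulated charge below the threshold.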
For example, as shown in
In the processing of
Next, a second exemplary embodiment will be described. The same parts as those in the first exemplary embodiment are designated by the same reference numerals, and detailed description thereof will be omitted.
Since the configuration of the measurement apparatus 1 is the same as that of the first exemplary embodiment, description thereof will be omitted.
A problem generated when an object to be measured is irradiated with light from the light source 20 and a distance to the object to be measured is measured by receiving reflected light thereof is not only the lens flare described in the first exemplary embodiment. For example, as shown in
Due to the multipath, a light-receiving element receives not only direct light but also indirect light that should not be originally received, and thus accuracy of a measured distance may be influenced.
Therefore, in the present exemplary embodiment, a second light-emitting partition 24 corresponding to a light-receiving partition 26 to which the light-receiving element that receives the indirect light that should not be originally received belongs is not caused to emit light, and VCSELs belonging to first light-emitting partitions 24 other than the second light-emitting partition 24 are caused to emit light to measure the distance to the object to be measured. Accordingly, as shown in
Hereinafter, operations of the present exemplary embodiment will be described.
In step S200, one light-emitting partition 24 that does not emit light is caused to emit light. That is, the MOS transistor 51 of the drive unit 50 is turned on and switch elements SW of one light-emitting partition 24 that does not emit light are turned on so that VCSELs of one light-emitting partition 24 that does not emit light emit light. Accordingly, the VCSELs of one light-emitting partition 24 emit light, and VCSELs of other light-emitting partitions 24 do not emit light.
In step S202, received light amounts of light-receiving elements belonging to all the light-receiving partitions 26 are acquired from the 3D sensor 5.
In step S204, a first light-receiving partition 26 corresponding to the light-emitting partition 24 that emits light in step S200 is identified with reference to the partition correspondence table 16B. Then, based on the received light amounts of the light-receiving elements belonging to all the light-receiving partitions 26 acquired in step S202, it is determined whether light is received by second light-receiving partitions 26 other than the first light-receiving partition 26.
Then, when the light is received by the second light-receiving partitions 26, the processing shifts to step S206, and when the light is not received by the second light-receiving partitions 26, the processing shifts to step S208.
In step S206, the light-emitting partition 24 that emits light in step S200 is set as the second light-emitting partition 24.
On the other hand, in step S208, the light-emitting partition 24 that emits light in step S200 is set as the first light-emitting partition 24.
In step S210, it is determined whether light is emitted by all the light-emitting partitions 24, and when the light is emitted by all the light-emitting partitions 24, the processing shifts to step S212. On the other hand, when there is the light-emitting partition 24 that does not emit light, the processing shifts to step S200, the light-emitting partition 24 that does not emit light is caused to emit light, and the same processing as described above is performed.
In step S212, only the first light-emitting partition 24 set in step S208 is caused to emit light a predetermined number of times, received light amounts of light-receiving elements are acquired from the 3D sensor 5, and the distance to the object to be measured is measured by the phase difference method described above.
In this way, in the present exemplary embodiment, when the light is received by the light-receiving elements belonging to the second light-receiving partitions 26 other than the first light-receiving partition 26 corresponding to the light-emitting partition 24 that emits light, the first light-emitting partition 24 other than the second light-emitting partitions 24 corresponding to the second light-receiving partitions 26 is caused to emit light, and the distance to the object to be measured is measured.
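The per-partition scan of steps S200 through S210 can be sketched as follows, assuming a callable that reports which light-receiving partitions lit up when a single light-emitting partition was driven. The name `find_multipath_partitions` and the set-based crosstalk model are illustrative assumptions, not from the specification.

```python
# Hypothetical sketch of the multipath scan (steps S200-S210): each
# light-emitting partition is lit alone; if any receiving partition other
# than its own corresponding one receives light, the emitting partition is
# classified as 'second' and excluded from the measurement in step S212.

def find_multipath_partitions(crosstalk, partitions):
    """crosstalk(p) returns the set of light-receiving partition ids that
    received light while only emitting partition p was lit. A partition is
    'second' if that set contains anything besides p itself."""
    first, second = [], []
    for p in partitions:
        lit = crosstalk(p)
        if lit - {p}:            # light leaked into other receiving partitions
            second.append(p)
        else:
            first.append(p)
    return first, second

# Example: lighting partition 3 also illuminates receiving partition 4
# via a reflecting surface such as a wall (multipath).
responses = {1: {1}, 2: {2}, 3: {3, 4}, 4: {4}}
first, second = find_multipath_partitions(lambda p: responses[p], [1, 2, 3, 4])
print(first, second)   # [1, 2, 4] [3]
```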
Next, a third exemplary embodiment will be described. The same parts as those in the first exemplary embodiment are designated by the same reference numerals, and detailed description thereof will be omitted.
Since the configuration of the measurement apparatus 1 is the same as that of the first exemplary embodiment, description thereof will be omitted.
Hereinafter, operations of the present exemplary embodiment will be described.
In step S300, as in step S100 of
In step S302, as in step S102 of
In step S304, it is determined whether there is a light-receiving partition 26 where the distance measured in step S302 continuously changes. Then, when there is the light-receiving partition 26 where the distance continuously changes, the processing shifts to step S306, and when there is no light-receiving partition 26 where the distance continuously changes, the processing shifts to step S308.
In step S306, a light-emitting partition 24 corresponding to the light-receiving partition 26 where the distance continuously changes is set as the second light-emitting partition 24, and other light-emitting partitions are set as the first light-emitting partitions 24.
In step S308, all the light-emitting partitions 24 are set as the first light-emitting partitions 24.
In step S310, the first light-emitting partitions 24 are caused to emit light, received light amounts of the light-receiving elements of all the light-receiving partitions 26 are acquired from the 3D sensor 5, and the distance to the object to be measured is measured.
Accordingly, since the light-emitting partition 24 that irradiates a wall or the like with light, for which the measured distance continuously changes, is not caused to emit light, an influence of multipath is avoided.
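The test in step S304 for a partition "where the distance continuously changes" could be implemented in many ways; the sketch below is one crude, hypothetical criterion (gradual, monotonic per-element distances, as for a wall seen obliquely). The function name, the `max_step` parameter, and the monotonicity criterion are assumptions for illustration only.

```python
# Hypothetical sketch of the continuity test in step S304: a receiving
# partition whose per-element distances form a smooth monotonic ramp
# (e.g. an oblique wall) marks its emitting partition as 'second'.

def changes_continuously(distances, max_step):
    """Return True when every adjacent pair of distances differs, all
    differences share one sign, and no jump exceeds max_step - i.e. a
    smooth ramp rather than a flat surface or a discontinuity."""
    diffs = [b - a for a, b in zip(distances, distances[1:])]
    if not diffs or any(d == 0 for d in diffs):
        return False                      # flat surface, or too few samples
    same_sign = all(d > 0 for d in diffs) or all(d < 0 for d in diffs)
    return same_sign and all(abs(d) <= max_step for d in diffs)

print(changes_continuously([1.0, 1.2, 1.4, 1.6], max_step=0.5))  # True: oblique wall
print(changes_continuously([2.0, 2.0, 2.0, 2.0], max_step=0.5))  # False: flat object
```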
Next, a fourth exemplary embodiment will be described. The same parts as those in the first exemplary embodiment are designated by the same reference numerals, and detailed description thereof will be omitted.
Since the configuration of the measurement apparatus 1 is the same as that of the first exemplary embodiment, description thereof will be omitted.
Hereinafter, operations of the present exemplary embodiment will be described.
In step S400, as in step S200 of
In step S402, as in step S202 of
In step S404, it is determined whether light is emitted by all the light-emitting partitions 24, and when the light is emitted by all the light-emitting partitions 24, the processing shifts to step S406. On the other hand, when there is the light-emitting partition 24 that does not emit light, the processing shifts to step S400, the light-emitting partition 24 that does not emit light is caused to emit light, and the same processing as described above is performed. Accordingly, a received light amount map representing a correspondence relationship between the light-emitting partition 24 and received light amounts of the light-receiving partitions 26 when the light-emitting partition 24 is caused to emit light is obtained.
In step S406, a light emission order of the light-emitting partitions 24 is set based on the received light amount map. Specifically, the light emission order is set such that light is emitted by each light-emitting partition 24 where mutual interference of light does not occur. Here, the mutual interference of the light refers to a situation where, when plural light-emitting partitions 24 are caused to emit light simultaneously, the light is also received by the second light-receiving partitions 26 other than the corresponding first light-receiving partition 26, and accuracy of measurement is adversely influenced.
For example, as shown in
In this case, no mutual interference occurs even when the light-emitting partitions 2411, 2412, 2413, 2414, 2421, 2424, 2432, and 2433 corresponding to the light-receiving partitions 2611, 2612, 2613, 2614, 2621, 2624, 2632, and 2633 are caused to emit light simultaneously. Similarly, no mutual interference occurs even when the light-emitting partitions 2422 and 2423 corresponding to the light-receiving partitions 2622 and 2623 are caused to emit light simultaneously. Further, no mutual interference occurs even when the light-emitting partitions 2431 and 2434 corresponding to the light-receiving partitions 2631 and 2634 are caused to emit light simultaneously.
Therefore, in the example of
In step S408, the light-emitting partitions 24 emit light in the light emission order set in step S406, and the distance to the object to be measured is measured.
In this way, based on the received light amount map, each set of light-emitting partitions 24 in which the mutual interference of the light does not occur is caused to emit light simultaneously.
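One possible way to derive a light emission order from the received light amount map (step S406) is a greedy grouping, as sketched below. The function name `schedule_emissions`, the pairwise `interferes` predicate, and the example interference pairs are hypothetical; the specification does not prescribe a particular grouping algorithm.

```python
# Hypothetical sketch of step S406: greedily group light-emitting
# partitions so that no two members of a group interfere; each group can
# then be driven simultaneously, and the groups fire in sequence.

def schedule_emissions(interferes, partitions):
    """interferes(a, b) -> True when lighting partitions a and b together
    causes crosstalk between their receiving partitions (per the received
    light amount map). Returns a list of mutually non-interfering groups."""
    groups = []
    for p in partitions:
        for g in groups:
            # place p in the first group it does not interfere with
            if all(not interferes(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])  # p conflicts with every group: start a new one
    return groups

# Example with hypothetical interference pairs (5<->6 and 9<->12).
pairs = {(5, 6), (6, 5), (9, 12), (12, 9)}
groups = schedule_emissions(lambda a, b: (a, b) in pairs, list(range(1, 13)))
print(groups)   # [[1, 2, 3, 4, 5, 7, 8, 9, 10, 11], [6, 12]]
```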
Next, a fifth exemplary embodiment will be described. The same parts as those in the first exemplary embodiment are designated by the same reference numerals, and detailed description thereof will be omitted.
Since the configuration of the measurement apparatus 1 is the same as that of the first exemplary embodiment, description thereof will be omitted.
Hereinafter, operations of the present exemplary embodiment will be described.
In step S500, as in step S200 of
In step S502, a light-receiving partition 26 corresponding to the light-emitting partition 24 that emits light in step S500 is identified with reference to the partition correspondence table 16B, and received light amounts of light-receiving elements belonging to the identified light-receiving partition 26 are acquired from the 3D sensor 5.
In step S504, a distance to an object to be measured in the light-receiving partition 26 corresponding to the light-emitting partition 24 that emits light in step S500 is measured by the phase difference method described above, based on the received light amounts of the light-receiving elements acquired in step S502.
In step S506, it is determined whether light is emitted by all the light-emitting partitions 24, and when the light is emitted by all the light-emitting partitions 24, the present routine ends. On the other hand, when there is the light-emitting partition 24 that does not emit light, the processing shifts to step S500, the light-emitting partition 24 that does not emit light is caused to emit light, and the same processing as described above is performed.
In this way, in the present exemplary embodiment, the light-emitting partitions 24 are caused to emit light one by one, and the processing of measuring the distance to the object to be measured in the light-receiving partition 26 corresponding to the light-emitting partition 24 that emits light is individually performed.
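The one-by-one measurement loop of steps S500 through S506 can be sketched as follows, with the hardware interactions abstracted into callables. The function name `measure_one_by_one`, the callables `emit`, `read_amounts`, and `to_distance`, and the toy distance proxy are all assumptions for illustration.

```python
# Hypothetical sketch of steps S500-S506: light each emitting partition in
# turn and measure only in its corresponding receiving partition, so that
# other partitions cannot contribute indirect light during the measurement.

def measure_one_by_one(partitions, emit, read_amounts, to_distance):
    """emit(p) drives only partition p; read_amounts(p) returns the
    received light amounts of the corresponding receiving partition;
    to_distance converts those amounts into a distance (in the apparatus,
    by the phase difference method)."""
    results = {}
    for p in partitions:
        emit(p)                      # only partition p emits
        amounts = read_amounts(p)    # corresponding receiving partition only
        results[p] = to_distance(amounts)
    return results

# Toy example: the distance proxy is simply the mean received amount.
amounts_by_partition = {1: [10, 12], 2: [20, 22]}
dist = measure_one_by_one(
    [1, 2],
    emit=lambda p: None,
    read_amounts=lambda p: amounts_by_partition[p],
    to_distance=lambda a: sum(a) / len(a),
)
print(dist)   # {1: 11.0, 2: 21.0}
```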
Next, a sixth exemplary embodiment will be described. The same parts as those in the first exemplary embodiment are designated by the same reference numerals, and detailed description thereof will be omitted.
Since the configuration of the measurement apparatus 1 is the same as that of the first exemplary embodiment, description thereof will be omitted.
Hereinafter, operations of the present exemplary embodiment will be described.
In step S600, as in step S200 of
In step S602, as in step S202 of
In step S604, it is determined whether light is received by the second light-receiving partitions 26 other than the first light-receiving partition 26 corresponding to the first light-emitting partition 24 that emits light in step S600, based on the received light amounts of the light-receiving elements belonging to all the light-receiving partitions 26 acquired in step S602.
Then, when the light is received by the second light-receiving partitions 26, the processing shifts to step S606, and when the light is not received by the second light-receiving partitions 26, the processing shifts to step S608.
In step S606, the received light amount of the light received by light-receiving elements of the second light-receiving partitions 26 is stored in the storage unit 16 as a correction amount.
In step S608, it is determined whether all the light-emitting partitions 24 emit light, and when all the light-emitting partitions 24 emit light, the processing shifts to step S610. On the other hand, when there is the light-emitting partition 24 that does not emit light, the processing shifts to step S600, the light-emitting partition 24 that does not emit light is caused to emit light, and the same processing as described above is performed.
In step S610, VCSELs belonging to all the light-emitting partitions 24 are caused to emit light.
In step S612, received light amounts of light-receiving elements of all the light-receiving partitions 26 are acquired from the 3D sensor 5.
In step S614, the received light amounts are corrected by subtracting the correction amount from the received light amounts for the light-receiving elements for which the correction amount is stored in the storage unit 16 in step S606 among the light-receiving elements of all the light-receiving partitions 26. Then, a distance to an object to be measured is measured using the corrected received light amounts. Accordingly, an influence of indirect light is avoided.
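The correction of step S614 can be sketched as a per-element subtraction of the stored correction amounts, as below. The function name `correct_amounts` and the dictionary representation of elements and correction amounts are assumptions for the sketch.

```python
# Hypothetical sketch of step S614: subtract the per-element correction
# amount (the indirect-light contribution recorded in step S606) from the
# raw received light amounts before computing distances. Elements with no
# stored correction are left unchanged.

def correct_amounts(raw, corrections):
    """raw maps a light-receiving element id to its received light amount;
    corrections holds the indirect-light amount stored for affected
    elements. Returns the corrected amounts."""
    return {elem: amount - corrections.get(elem, 0)
            for elem, amount in raw.items()}

raw = {"e1": 300, "e2": 450, "e3": 120}
corrections = {"e2": 150}   # e2 received indirect light during the scan
print(correct_amounts(raw, corrections))   # {'e1': 300, 'e2': 300, 'e3': 120}
```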
Although the exemplary embodiments have been described above, a technical scope of the present invention is not limited to a scope described in the above exemplary embodiments. Various modifications or improvements can be made to the above-described exemplary embodiments without departing from a gist of the invention, and the modified or improved embodiments are also included in the technical scope of the present invention.
The above-described exemplary embodiments do not limit the invention according to the claims, and all combinations of features described in the exemplary embodiments are not necessarily essential to solutions of the invention. The exemplary embodiments described above include inventions of various stages, and various inventions are extracted by a combination of plural disclosed constituent elements. Even when some constituent elements are deleted from all the constituent elements shown in the exemplary embodiments, a configuration in which some constituent elements are deleted can be extracted as an invention as long as an effect is obtained.
For example, in the above-described exemplary embodiments, a case where the three-dimensional shape of the object to be measured is identified by measuring the distance to the object to be measured has been described, but for example, it may be sufficient to only detect whether the object to be measured exists within a predetermined distance.
The control unit 8 that executes the processings of
In the present exemplary embodiments, a configuration in which the measurement program 16A is installed in the storage unit 16 has been described, but the present invention is not limited thereto. The measurement program 16A according to the present exemplary embodiments may be provided in a form of being recorded in a computer-readable storage medium. For example, the measurement program 16A according to the present exemplary embodiments may be provided in a form in which the measurement program 16A is recorded on an optical disc such as a compact disc (CD)-ROM and a digital versatile disc (DVD)-ROM, or in a form in which the measurement program 16A is recorded on a semiconductor memory such as a universal serial bus (USB) memory and a memory card. Further, the measurement program 16A according to the present exemplary embodiments may be acquired from an external apparatus via a communication line connected to the communication unit 14.
In the above-described exemplary embodiments, the term “processor” refers to hardware in a broad sense. Examples of the processor include general processors (for example, CPU: central processing unit) and dedicated processors (for example, GPU: graphics processing unit, ASIC: application specific integrated circuit, FPGA: field programmable gate array, and programmable logic device).
In the above-described exemplary embodiments, the term “processor” is broad enough to encompass one processor or plural processors in collaboration which are located physically apart from each other but may work cooperatively. An order of operations of the processor is not limited to one described in the above-described exemplary embodiments, and may be changed.
The foregoing description of the exemplary embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
2021-051645 | Mar 2021 | JP | national