This application relates to the field of detection technologies, and in particular, to a detection apparatus and a terminal device.
With development of informatization, intelligent terminals are gradually entering people's daily life. Sensing systems are playing an increasingly important role in intelligent terminals. At present, sensing systems have been widely used in many fields such as industrial production, space exploration, ocean exploration, environmental protection, resource investigation, medical diagnosis, and bioengineering. A three-dimensional (three-dimensional, 3D) sensing system can obtain complete geometric information in a three-dimensional scene and accurately digitize the scene by using images with depth information, to implement functions such as high-precision recognition, positioning, reconstruction, and scene understanding, and is a hot research topic in the field of sensing systems.
Technologies applicable to 3D sensing systems mainly include stereo imaging, structured light, the time-of-flight (time-of-flight, TOF) technology, and the like. TOF is an important technology used in 3D sensing systems because it offers advantages such as a long detection distance and high resolution. The TOF technology is a depth measurement technology in which a round-trip time of an actively emitted light pulse among an emitting component, a target, and a receiving component is measured, and accurate distance information is obtained based on the round-trip time and the speed of light. The TOF technology is mainly divided into two categories. The first category is referred to as a direct time-of-flight (direct time-of-flight, d-TOF) technology, and the other category is referred to as an indirect time-of-flight (indirect time-of-flight, i-TOF) technology. Because depth measurement precision of the d-TOF technology is independent of the detection distance, the d-TOF technology is usually used for long-distance detection, but its ranging precision is relatively low. When the d-TOF technology is applied to a short-distance (for example, less than 0.5 m) detection scenario, the measurement precision cannot meet the requirement. Depth measurement precision of the i-TOF technology decreases as the detection distance increases, and the detection precision is therefore low when the i-TOF technology is applied to a long-distance detection scenario.
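As a rough numerical illustration of the two ranging principles (the modulation frequency and measured values below are made-up figures, not taken from this application), the following sketch converts a directly measured round-trip time into a distance for d-TOF, and a measured phase shift of a modulated wave into a distance for i-TOF.

```python
import math

# Illustrative sketch of the two TOF ranging principles (hypothetical numbers).
C = 299_792_458.0  # speed of light in m/s

# d-TOF: distance from a directly measured round-trip time.
round_trip_time_s = 33.4e-9            # assumed TDC reading, about 33.4 ns
d_dtof = C * round_trip_time_s / 2     # about 5.0 m

# i-TOF: distance from the phase shift of a modulated return signal.
f_mod = 100e6                          # assumed modulation frequency, 100 MHz
phase_shift_rad = math.pi / 2          # assumed measured phase difference
round_trip_time_itof_s = phase_shift_rad / (2 * math.pi * f_mod)
d_itof = C * round_trip_time_itof_s / 2  # about 0.37 m

print(d_dtof, d_itof)
```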
To sum up, how to implement high-precision measurement within a full detection distance range is a technical problem that needs to be resolved urgently at present.
This application provides a detection apparatus and a terminal device, to implement high-precision measurement within a full detection distance range.
According to a first aspect, this application provides a detection apparatus. The detection apparatus may include an emitting component and a receiving component. The receiving component includes a light splitting component, a first detection component, and a second detection component. When a detection distance is less than a preset value, detection precision of the first detection component is greater than detection precision of the second detection component. When the detection distance is not less than the preset value, the detection precision of the first detection component is not greater than the detection precision of the second detection component. The emitting component is configured to emit a first beam. The light splitting component is configured to change a propagation optical path of a return optical signal that is from a detection region and that corresponds to the first beam to obtain a first return optical signal and a second return optical signal, propagate the first return optical signal to the first detection component, and propagate the second return optical signal to the second detection component. The first detection component is configured to detect the received first return optical signal, to obtain a first electrical signal. The second detection component is configured to detect the received second return optical signal, to obtain a second electrical signal. The first electrical signal and the second electrical signal are for determining distance information of a target in the detection region.
Based on this solution, the light splitting component changes the propagation optical path of the return optical signal from the detection region to obtain a first return optical signal and a second return optical signal, to propagate the first return optical signal to the first detection component, and propagate the second return optical signal to the second detection component. In addition, when the detection distance is less than the preset value, the detection precision of the first detection component is greater than the detection precision of the second detection component. When the detection distance is not less than the preset value, the detection precision of the first detection component is not greater than the detection precision of the second detection component. Therefore, when the detection distance is less than the preset value, the target is detected based on the first detection component. When the detection distance is not less than the preset value, the target is detected based on the second detection component. In this way, high-precision detection may be implemented within the full detection distance range.
In a possible implementation, the first detection component includes an i-TOF image sensor, and the second detection component includes a d-TOF image sensor.
The i-TOF image sensor is suitable for short-distance detection, and has high detection precision during short-distance detection. The d-TOF image sensor is suitable for long-distance detection, and has higher detection precision than the i-TOF image sensor during long-distance detection. Through the cooperation of the i-TOF image sensor and the d-TOF image sensor, high-precision detection within the full detection distance range can be achieved.
In a possible implementation, the light splitting component is configured to propagate, based on a received second control signal, the first return optical signal to the first detection component in a second time sequence, or propagate the second return optical signal to the second detection component in a third time sequence.
The light splitting component propagates corresponding return optical signals respectively to different detection components in different time sequences, so that the first detection component works in the second time sequence, and the second detection component works in the third time sequence, to be specific, the two detection components work in different time sequences.
Further, optionally, the second time sequence and the third time sequence are alternately arranged. It may also be understood that the first detection component and the second detection component work alternately.
Because the first detection component and the second detection component work alternately, a dynamic detection capability of the detection apparatus is improved. Especially for a high-speed moving object, an imaging offset between the first detection component and the second detection component can be reduced, which helps avoid a problem that affects image quality, such as smearing, in a fused image.
A time division-based light splitting component is provided. For example, the light splitting component may be, for example, a liquid crystal on silicon (liquid crystal on silicon, LCOS), an optical switch, a fiber circulator, or a digital micromirror device (digital micromirror device, DMD).
In a possible implementation, the emitting component is configured to emit the first beam based on a received first control signal. The first control signal is for controlling a first time sequence in which the emitting component emits the first beam.
By controlling a time sequence in which the emitting component emits the first beam, the emitting component may be synchronized with the first detection component and the second detection component, to further improve detection precision of the detection apparatus.
Further, optionally, the emitting component is configured to emit the first beam based on a received first modulation signal. The first modulation signal may be a pulse wave or a continuous wave.
In a possible implementation, the light splitting component is configured to split the return optical signal to obtain the first return optical signal and the second return optical signal, propagate the first return optical signal to the first detection component, and propagate the second return optical signal to the second detection component.
The light splitting component performs space division on the received return optical signal, to enable the first detection component and the second detection component to work simultaneously, thereby helping improve an imaging speed of a depth map of the target.
A space division-based light splitting component is provided. For example, the light splitting component may be, for example, a beam splitter or a diffractive optical element.
In a possible implementation, the emitting component is configured to emit the first beam based on a received second modulation signal. The second modulation signal includes a pulse wave.
In this way, both the first return optical signal and the second return optical signal may be pulse waves, to enable the first detection component and the second detection component to simultaneously be in a working mode.
In a possible implementation, the detection apparatus further includes a processing control component, configured to receive the first electrical signal from the first detection component and the second electrical signal from the second detection component; and determine the distance information of the target based on the first electrical signal and the second electrical signal.
Further, optionally, the processing control component may be configured to determine, based on the first electrical signal, a phase difference between emission of the first beam and reception of the first return optical signal, determine, based on the phase difference, a first time difference between the emission of the first beam and the reception of the first return optical signal, and determine, based on the second electrical signal, a second time difference between the emission of the first beam and reception of the second return optical signal; and determine first distance information of the target based on the first time difference, and determine second distance information of the target based on the second time difference.
In a possible implementation, the processing control component is further configured to determine error distance information corresponding to a multipath interference region or a strength error region in the first distance information; determine target distance information that is in the second distance information and that corresponds to the error distance information; and replace the error distance information with the target distance information. It may also be understood that the error distance information corresponding to the multipath interference region or the strength error region is removed from the first distance information, and the target distance information that is in the second distance information and that corresponds to the error distance information is used for supplementation.
Because the i-TOF image sensor may generate a multipath interference region or a strength error region, the error distance information corresponding to the multipath interference region or the strength error region is removed from the first distance information corresponding to the i-TOF image sensor, and is replaced with the target distance information that is in the second distance information and that corresponds to the error distance information, to help improve accuracy of the distance information of the target detected by the detection apparatus, and further improve quality of forming a depth map of the target.
In a possible implementation, the processing control component is further configured to remove distance information of a distance greater than the preset value from the first distance information to obtain third distance information, and generate a first image based on the third distance information; remove distance information of a distance not greater than the preset value from the second distance information to obtain fourth distance information, and generate a second image based on the fourth distance information; and fuse the first image and the second image to obtain a depth map of the target.
The first distance information is obtained based on the first electrical signal of the first detection component, and the second distance information is obtained based on the second electrical signal of the second detection component. In addition, when the detection distance is less than the preset value, detection precision of the first detection component is greater than detection precision of the second detection component. Therefore, third distance information with higher precision can be obtained by removing the distance information of a distance greater than the preset value from the first distance information. When the detection distance is not less than the preset value, the detection precision of the first detection component is not greater than the detection precision of the second detection component. Therefore, high-precision fourth distance information is obtained by removing distance information of a distance not greater than the preset value from the second distance information. The depth map of the target obtained based on the third distance information and the fourth distance information has high detection precision within the full detection distance range.
In a possible implementation, the processing control component is further configured to multiply distance information of a distance greater than the preset value in the first distance information by a first confidence, and multiply distance information of a distance not greater than the preset value in the first distance information by a second confidence, to obtain fifth distance information, and generate a third image based on the fifth distance information, where the second confidence is greater than the first confidence; multiply distance information of a distance not greater than the preset value in the second distance information by a third confidence, and multiply distance information of a distance greater than the preset value in the second distance information by a fourth confidence, to obtain sixth distance information, and generate a fourth image based on the sixth distance information, where the fourth confidence is greater than the third confidence; and fuse the third image and the fourth image to obtain a depth map of the target.
The first distance information is obtained based on the first electrical signal of the first detection component, and the second distance information is obtained based on the second electrical signal of the second detection component. In addition, when the detection distance is less than the preset value, detection precision of the first detection component is greater than detection precision of the second detection component. When the detection distance is not less than the preset value, the detection precision of the first detection component is not greater than the detection precision of the second detection component. Therefore, the first distance information is divided into two parts, a part that is not greater than the preset value is multiplied by the second confidence that is larger, and a part that is greater than the preset value is multiplied by the first confidence that is smaller, to obtain the fifth distance information. The second distance information is also divided into two parts, a part that is not greater than the preset value is multiplied by the third confidence that is smaller, and a part that is greater than the preset value is multiplied by the fourth confidence that is larger, to obtain the sixth distance information. The depth map of the target obtained based on the fifth distance information and the sixth distance information has high detection precision within the full detection distance range.
In a possible implementation, the processing control component is further configured to multiply the first distance information by a fifth confidence to obtain seventh distance information, and generate a fifth image based on the seventh distance information, where the fifth confidence is negatively correlated with the first distance information; multiply the second distance information by a sixth confidence to obtain eighth distance information, and generate a sixth image based on the eighth distance information, where the sixth confidence is positively correlated with the second distance information; and fuse the fifth image and the sixth image to obtain a depth map of the target.
The first distance information is obtained based on the first electrical signal of the first detection component, and the second distance information is obtained based on the second electrical signal of the second detection component. In addition, when the detection distance is less than the preset value, detection precision of the first detection component is greater than detection precision of the second detection component. When the detection distance is not less than the preset value, the detection precision of the first detection component is not greater than the detection precision of the second detection component. Therefore, as the distance corresponding to the first distance information increases, the fifth confidence by which the first distance information is multiplied becomes increasingly small. Conversely, as the distance corresponding to the second distance information increases, the sixth confidence by which the second distance information is multiplied becomes increasingly large. In this way, the seventh distance information accounts for a larger proportion when the distance is short, and accounts for a smaller proportion when the distance is long. Conversely, the eighth distance information accounts for a smaller proportion when the distance is short, and accounts for a larger proportion when the distance is long. Therefore, the depth map of the target obtained based on the seventh distance information and the eighth distance information has high detection precision within the full detection distance range.
According to a second aspect, this application provides a terminal device, including the detection apparatus according to any one of the first aspect or the possible implementations of the first aspect, and a processor. The processor may be configured to control the detection apparatus to detect a detection region.
In a possible implementation, the terminal device may be, for example, a radar (such as a lidar), a smartphone, a vehicle, a smart home device, an intelligent manufacturing device, a robot, an uncrewed aerial vehicle, or an intelligent transportation device.
For technical effects that can be achieved in the second aspect, refer to descriptions of beneficial effects in the first aspect. Details are not described herein again.
The following describes in detail embodiments of this application with reference to the accompanying drawings.
The following describes some terms in this application. It should be noted that, these explanations are intended for ease of understanding by a person skilled in the art, and are not intended to limit the protection scope claimed in this application.
1. Detection Precision
Detection precision refers to a minimum distance at which two different targets can be distinguished.
2. Image Fusion (Image Fusion)
Image fusion is an image processing technology and refers to performing image processing, specific algorithm-based calculation, and the like on image data that is about a same target and that is collected from a plurality of source channels, to extract beneficial information in the respective channels to the greatest extent and finally synthesize a high-quality image (for example, in terms of brightness, definition, and color). The fused image has higher resolution than the original images.
The foregoing describes some terms involved in this application, and the following describes technical features involved in this application. It should be noted that, these explanations are intended for ease of understanding by a person skilled in the art, and are not intended to limit the protection scope claimed in this application.
It should be noted that, the signal light is usually pulsed laser light. Due to a limitation of laser safety and a limitation of power consumption of the detection apparatus, energy of the emitted signal light is limited, but the detection region needs to be covered completely. Therefore, when the return optical signal obtained through reflecting the signal light by the target returns to a receiver, energy loss is serious. In addition, ambient light, as noise, interferes with the detection and restoration performed by the receiver on the return optical signal. Therefore, the d-TOF technology requires a detector with high sensitivity to detect the return optical signal. For example, a single-photon avalanche diode (single-photon avalanche diode, SPAD) has sensitivity of detecting a single photon, and the SPAD is a diode biased with a high reverse voltage in its operating state. The reverse bias creates a strong electric field inside the device. When a photon is absorbed by the SPAD and converted into a free electron, the free electron is accelerated by the internal electric field and obtains enough energy to generate a new electron-hole pair when it collides with another atom. Moreover, a newly generated carrier continues to be accelerated by the electric field, to generate more carriers through collision. Such a geometrically amplified avalanche effect enables the SPAD to have an almost infinite gain, so that a large current pulse is output, to detect a single photon.
In an actual application scenario, there is complex diffuse reflection or even specular reflection. In principle, multi-path interference (multi-path interference, MPI) causes a measured value of the distance to increase, resulting in impact on an effect of three-dimensional reconstruction.
It should be noted that, a detected strength error may also affect the effect of three-dimensional reconstruction. For example, regions with different reflectivity on a same plane may present different distances. For example, when a detected target is a black-and-white chessboard, a detection result may show that the chessboard is uneven.
Based on the foregoing content, the following provides a possible application scenario of the detection apparatus in this application. For example, referring to
The detection apparatus may alternatively be a lidar. The lidar may alternatively be mounted on a vehicle as an in-vehicle lidar, refer to
Detection apparatuses have been widely used in fields such as unmanned driving, autonomous driving, assisted driving, intelligent driving, connected vehicles, security surveillance, and surveying and mapping. It should be noted that, the application scenarios shown above are merely examples. The detection apparatus provided in this application may be further used in a plurality of other scenarios, and is not limited to the scenarios in the foregoing examples. For example, the detection apparatus may be further used in a terminal device or a component disposed in the terminal device. The terminal device may be, for example, a smartphone, a smart home device, an intelligent manufacturing device, a robot, an uncrewed aerial vehicle, or an intelligent transportation device (for example, an automated guided vehicle (automated guided vehicle, AGV) or an unmanned transport vehicle). For another example, the detection apparatus may also be installed on an uncrewed aerial vehicle and serve as an airborne detection apparatus. For another example, the detection apparatus may also be installed on a roadside traffic device (for example, a road side unit (road side unit, RSU)) and serve as a roadside traffic detection apparatus, referring to
As described in the background, a detection apparatus in the conventional art cannot achieve high-precision measurement within a full detection distance range.
In view of this, this application provides a detection apparatus. The detection apparatus can implement high-precision measurement within a full detection distance range of detection.
The following describes in detail the detection apparatus provided in this application with reference to
Based on the foregoing content,
Based on the foregoing detection apparatus, the light splitting component changes the propagation optical path of the return optical signal from the detection region to obtain a first return optical signal and a second return optical signal, to propagate the first return optical signal to the first detection component, and propagate the second return optical signal to the second detection component. In addition, when the detection distance is less than the preset value, the detection precision of the first detection component is greater than the detection precision of the second detection component. When the detection distance is not less than the preset value, the detection precision of the first detection component is not greater than the detection precision of the second detection component. Therefore, when the detection distance is less than the preset value, the target is detected based on the first detection component. When the detection distance is not less than the preset value, the target is detected based on the second detection component. In this way, high-precision detection may be implemented within the full detection distance range. Further, when three-dimensional modeling is performed based on the obtained high-precision distance information, an accurate three-dimensional model can be obtained.
The first detection component is suitable for short-distance detection and the second detection component is suitable for long-distance detection. In a possible implementation, the first detection component includes an i-TOF image sensor, and the second detection component includes a d-TOF image sensor.
In a possible implementation, the preset value may be a distance at which detection precision of the first detection component is the same as the detection precision of the second detection component. For example, if the detection precision of the first detection component is 2% of the distance, and the detection precision of the second detection component is 2 nanoseconds (ns), that is, a constant distance error of C*t/2=0.3 meters (m), where C is the speed of light, the preset value may be 15 m. With reference to
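The crossover distance in this example can be checked numerically; the following sketch uses only the example figures above (2% relative error and a 2 ns timing error, which are illustrative values) to find the distance at which the two precisions coincide.

```python
# Verifying the example preset value: the distance at which the two precisions coincide.
C = 299_792_458.0            # speed of light in m/s
itof_relative_error = 0.02   # i-TOF precision: 2% of the distance (example value)
dtof_timing_error_s = 2e-9   # d-TOF precision: 2 ns timing error (example value)

dtof_distance_error = C * dtof_timing_error_s / 2          # about 0.3 m, independent of distance
preset_value = dtof_distance_error / itof_relative_error   # about 15 m
print(preset_value)  # i-TOF is more precise below this distance, d-TOF above it
```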
In a possible implementation, strengths of the first return optical signal and the second return optical signal may be the same, or may be different. This is not limited in this application.
It should be noted that, when the detection apparatus is a camera module, the detection region may also be understood as a field of view range of the camera module. A target object includes, but is not limited to, a single object. For example, when a person is photographed, the target object includes a person and an environment around the person, that is, the environment around the person is also a part of the target object.
The following respectively describes functional components shown in
1. Emitting Component
In a possible implementation, the emitting component may include a light source. It may also be understood that the detection apparatus may include a self-illuminating light source.
In a possible implementation, the light source may be a single light source, or may be a light source array including a plurality of light sources. For example, the light source may be a vertical cavity surface emitting laser (vertical cavity surface emitting laser, VCSEL), or may be an edge emitting laser (edge emitting laser, EEL). An EEL-based light source may implement independent addressing, and the so-called independent addressing means that any light source in a light source array may be independently strobed (or referred to as being lit, turned on, or powered on).
In a possible implementation, the first beam emitted by the light source may be visible light, infrared light, or ultraviolet light. For example, a wavelength of the first beam emitted by the light source may be 905 nanometers (nm). Further, optionally, the emitting component may further include a diffuser (or referred to as a beam expander or a light homogenizing element). The diffuser may be configured to expand the first beam emitted by the light source into a uniform first beam, and project the uniform first beam onto the detection region. The uniform first beam may also be referred to as a flood beam.
In a possible implementation, the light source may control, based on the received first control signal, a first time sequence for emitting the first beam. Further, optionally, the light source may further determine a waveform of the first beam based on a received first modulation signal. The first modulation signal may be a pulse wave, a continuous wave, or the like. The continuous wave may be, for example, a sine wave or a square wave.
With reference to
In another possible implementation, the light source may emit the first beam based on a received second modulation signal. The second modulation signal includes a pulse wave. With reference to
It should be noted that, a plurality of modulation signals may be included in each period, for example, a plurality of pulse waves may be included in the period T1 and a plurality of pulse waves may be included in the period T2.
2. Light Splitting Component
The following describes cases in which the light splitting component performs light splitting in space or light splitting in time.
Based on the case 1, the light splitting component propagates, based on a received second control signal, the first return optical signal to the first detection component in a second time sequence, or propagates the second return optical signal to the second detection component in a third time sequence. The second control signal may be transmitted by the processing control component to the light splitting component. For details, refer to the following descriptions of the processing control component. Details are not described herein again. Further, optionally, the second time sequence and the third time sequence are alternately arranged.
With reference to
It should be noted that, the time sequence of the light splitting component may be controlled based on an actual requirement, and
It should also be noted that, the period T1 is the second time sequence, and the period T2 is the third time sequence. The period T1 and the period T2 are alternately arranged, and so on. In addition, the period T1 and the period T2 together form one detection periodicity of the detection apparatus, and one depth map of the target may be fused based on distance information detected in the period T1 and the period T2.
The following shows, by way of example, four possible structures of the light splitting component that performs light splitting in time. It may also be understood that the four structures of the light splitting component shown below may control optical switching in a time sequence.
A structure 1 is a DMD.
For example, when the DMD is at an angle α, the DMD may propagate the first return optical signal to the first detection component. When the DMD is at an angle β, the DMD may propagate the second return optical signal to the second detection component.
In a possible implementation, the DMD may be controlled based on the received second control signal to be at the angle α in the second time sequence and be at the angle β in the third time sequence, to propagate the first return optical signal to the first detection component in the second time sequence and propagate the second return optical signal to the second detection component in the third time sequence. It may also be understood that light splitting may be implemented in time by controlling an angle of the DMD.
With reference to
A structure 2 is an LCOS.
In a possible implementation, a first phase map may be applied to the LCOS in the second time sequence based on the received second control signal. The first phase map may control a voltage of each pixel in the LCOS in the second time sequence, so that the LCOS propagates the first return optical signal to the first detection component in the second time sequence. A second phase map is applied to the LCOS in the third time sequence. The second phase map may control a voltage of each pixel in the LCOS in the third time sequence, so that the LCOS propagates the second return optical signal to the second detection component in the third time sequence.
With reference to
A structure 3 is an optical switch.
An optical switch is an optical path conversion device, and is an optical device having one or more optional transmission ports. A function of the optical switch is to perform physical switching or a logical operation on an optical signal in an optical transmission line or an integrated optical circuit. The optical switch may be a conventional mechanical optical switch, a micro-mechanical optical switch, a thermo-optical switch, a liquid crystal optical switch, an electro-optical switch, an acousto-optical switch, or the like.
In a possible implementation, based on the second control signal, the output end 1 may be controlled to connect to the input end in the second time sequence, and the output end 2 may be controlled to connect to the input end in the third time sequence.
With reference to
In a possible implementation, based on the second control signal, the output end 1 may be controlled to connect to the input end 1 in the second time sequence, and the output end 2 may be controlled to connect to the input end 2 in the third time sequence.
With reference to
A structure 4 is a fiber circulator.
A fiber circulator is a multi-port non-reciprocal optical device, where an optical signal can only be propagated in one direction.
In a possible implementation, based on the second control signal, the first return optical signal may be controlled to be input from the port 1 in the second time sequence, and the second return optical signal may be controlled to be input from the port 3 in the third time sequence.
It should be understood that all of the structure 1, the structure 2, the structure 3, and the structure 4 that are provided above are examples. This is not limited in this application.
Based on the case 2, the light splitting component may split a return optical signal from the detection region to obtain a first return optical signal and a second return optical signal, propagate the first return optical signal to a first detection component, and propagate the second return optical signal to a second detection component. For example, the light splitting component may divide a return optical signal from the detection region into a first return optical signal and a second return optical signal based on a strength ratio. The strength ratio may be 1:1, that is, strength of the first return optical signal is the same as strength of the second return optical signal. Alternatively, the strength ratio may be another possible ratio. For example, strength of the first return optical signal is greater than strength of the second return optical signal, or strength of the first return optical signal is less than strength of the second return optical signal.
The following shows, by way of example, two possible structures of the light splitting component that performs light splitting in space.
A structure A is a beam splitter.
In a possible implementation, the beam splitter may be, for example, a beam splitter (beam splitter, BS) prism or a beam splitter plate. A beam splitter prism is formed by coating one or more thin films (for example, beam splitter films) on a surface of a prism, and a beam splitter plate is formed by coating one or more thin films (for example, beam splitter films) on a surface of a glass plate. Both the beam splitter prism and the beam splitter plate use films that have different transmissivity and reflectivity to incident light, to split the return optical signal from the detection region into two return optical signals, to obtain the first return optical signal and the second return optical signal. For example, the beam splitter may be a polarizing beam splitter. The polarizing beam splitter may include two polarizing beam splitters (polarizing beam splitter, PBS), and inclined surfaces of the two PBSs are attached through an adhesive layer (referring to
For example, the polarizing beam splitter may split an incident return optical signal (P-polarized light and S-polarized light) from the detection region into horizontally polarized light (namely, S-polarized light) and vertically polarized light (namely, P-polarized light), that is, the first return optical signal and the second return optical signal. The P-polarized light is completely transmitted, the S-polarized light is reflected at an angle of 45 degrees, and an exit direction of the S-polarized light and an exit direction of the P-polarized light form an angle of 90 degrees.
It should be noted that,
A structure B is a diffractive optical element (diffractive optical element, DOE).
In a possible implementation, the diffractive optical element may split a return optical signal from the detection region to obtain a first return optical signal and a second return optical signal, propagate the first return optical signal to a first detection component, and propagate the second return optical signal to a second detection component.
It should be understood that both the structure A and the structure B provided above are examples. This is not limited in this application.
It should be noted that, if the light splitting component is based on a structure of the foregoing case 1, the waveform of the first beam emitted by the emitting component is related to whether the detection apparatus works in a d-TOF image sensor mode or an i-TOF image sensor mode. If the detection apparatus works in the i-TOF image sensor mode, the waveform of the first beam may be any waveform in a sine wave, a square wave, a pulse wave, or a continuous wave. If the detection apparatus works in the d-TOF image sensor mode, the waveform of the first beam may be a pulse wave. If the light splitting component is based on a structure of the foregoing case 2, regardless of whether the detection apparatus works in the d-TOF image sensor mode or the i-TOF image sensor mode, the waveform of the first beam emitted by the emitting component is always a pulse wave.
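This waveform constraint can be summarized as a small selection rule. The following is a minimal sketch of that rule only; the function name and string labels are hypothetical and not part of this application.

```python
# Hypothetical sketch of the waveform constraint described above.
def admissible_waveforms(splitting_case: str, sensor_mode: str) -> set:
    """splitting_case: "case 1" (time division) or "case 2" (space division).
    sensor_mode: "i-TOF" or "d-TOF" (only relevant for case 1)."""
    if splitting_case == "case 2":
        # Both detection components receive return light simultaneously,
        # so the first beam is always a pulse wave.
        return {"pulse wave"}
    if sensor_mode == "i-TOF":
        return {"sine wave", "square wave", "pulse wave", "continuous wave"}
    return {"pulse wave"}  # d-TOF image sensor mode requires a pulse wave

print(admissible_waveforms("case 1", "i-TOF"))
print(admissible_waveforms("case 2", "d-TOF"))
```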
3. First Detection Component
In a possible implementation, the first detection component may be an i-TOF image sensor. The i-TOF image sensor may be configured to perform optical-to-electrical conversion on the received first return optical signal, to obtain a first electrical signal.
Further, optionally, the i-TOF image sensor may further include a memory and a control circuit. The control circuit may store a time-of-flight detected by the CAPD into the memory.
It should be noted that, the i-TOF image sensor is suitable for short-distance detection, to be specific, detection precision of the i-TOF image sensor is higher than that of the d-TOF image sensor when the detection distance is less than the preset value.
4. Second Detection Component
In a possible implementation, the second detection component may be a d-TOF image sensor. The d-TOF image sensor may be configured to perform optical-to-electrical conversion on the received second return optical signal, to obtain a second electrical signal.
Further, optionally, the d-TOF image sensor may further include a memory and a control circuit. The control circuit may store a time-of-flight detected by the SPAD/TDC into the memory.
It should be noted that, depth measurement precision of the d-TOF technology is independent of a detection distance, and mainly depends on a time-to-digital converter (time-to-digital converter, TDC) in a d-TOF sensor module. The d-TOF is suitable for long-distance detection. When the detection distance is not less than the preset value, detection precision of the d-TOF is high.
In a possible implementation, a resolution range of the d-TOF image sensor may be [8 megapixels, 48 megapixels], and a resolution range of the i-TOF image sensor may also be [8 megapixels, 48 megapixels]. For example, the resolution of the d-TOF image sensor may be 8 megapixels, 12 megapixels, 20 megapixels, or 48 megapixels. The resolution of the i-TOF image sensor may be 8 megapixels, 12 megapixels, 20 megapixels, or 48 megapixels. It should be understood that the resolution of the d-TOF image sensor may alternatively be greater than 48 megapixels, for example, may alternatively be 52 megapixels, 60 megapixels, 72 megapixels, or the like. The resolution of the i-TOF image sensor may alternatively be greater than 48 megapixels, for example, may alternatively be 52 megapixels, 60 megapixels, 72 megapixels, or the like.
In a possible implementation, the detection apparatus may further include a processing control component. The processing control component is described in detail below.
5. Processing Control Component
In a possible implementation, the processing control component may be connected to the emitting component and the receiving component respectively, to control the emitting component and the receiving component. The details are described as follows:
In a possible implementation, the processing control component may generate a first control signal, and send the first control signal to the emitting component (for example, a light source in the emitting component), to control a first time sequence in which the light source emits the first beam. Further, optionally, the processing control component may generate a first modulation signal and send the first modulation signal to the light source to control a waveform of the first beam emitted by the light source. The waveform may be, for example, a pulse wave or a continuous wave.
In another possible implementation, with reference to the foregoing case 1 of the light splitting component, the processing control component may further generate a second control signal, and send the second control signal to the light splitting component, to enable the light splitting component to propagate the first return optical signal to the first detection component in the second time sequence, or propagate the second return optical signal to the second detection component in the third time sequence.
In a possible implementation, the second time sequence and the third time sequence are alternately arranged, that is, it indicates that the first detection component and the second detection component work alternately. With reference to
Further, optionally, the processing control component may receive a first electrical signal from the i-TOF image sensor, determine, based on the first electrical signal, a phase difference between emission of the first beam and reception of the first return optical signal by the i-TOF image sensor, calculate, based on the phase difference, a first time difference (namely, a time-of-flight of the first beam) between the emission of the first beam and the reception of the first return optical signal by the i-TOF image sensor, and determine the first distance information of the target based on the first time difference. For a specific process, refer to the descriptions of the principle in
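As a hedged sketch of this phase-based calculation (the four-tap demodulation scheme, the sample values, and the modulation frequency below are assumptions for illustration, not taken from this application), the conversion from phase difference to time difference to distance could look like the following.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def itof_distance(q0, q90, q180, q270, f_mod):
    """Estimate distance from four-tap i-TOF demodulation samples (assumed scheme).

    q0..q270 are correlation samples of the first electrical signal taken at
    0/90/180/270 degree demodulation offsets; f_mod is the modulation frequency.
    """
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)  # phase difference
    time_of_flight = phase / (2 * math.pi * f_mod)             # first time difference
    return C * time_of_flight / 2                              # first distance information

# Example with made-up sample values and an assumed 100 MHz modulation frequency.
print(itof_distance(q0=820, q90=1240, q180=460, q270=300, f_mod=100e6))
```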
Correspondingly, the processing control component may receive a second electrical signal from the d-TOF image sensor, determine, based on the second electrical signal, a second time difference between emission of the first beam and reception of the second return optical signal by the d-TOF image sensor, and determine the second distance information of the target based on the second time difference. For details, refer to the descriptions in
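A common way to obtain the second time difference, assumed here only for illustration, is to accumulate the TDC timestamps of many pulses into a histogram and take the most frequent bin as the round-trip time; the timestamps and bin width below are made-up values.

```python
from collections import Counter

C = 299_792_458.0  # speed of light in m/s

def dtof_distance(tdc_timestamps_s, bin_width_s=1e-9):
    """Estimate distance from repeated TDC measurements (an assumed d-TOF approach):
    build a histogram of timestamps and take the peak bin as the round-trip time."""
    bins = Counter(int(t / bin_width_s) for t in tdc_timestamps_s)
    peak_bin, _ = bins.most_common(1)[0]
    time_of_flight = (peak_bin + 0.5) * bin_width_s  # second time difference
    return C * time_of_flight / 2                    # second distance information

# Made-up timestamps: most returns cluster around ~66.7 ns (a target at ~10 m).
samples = [66.6e-9, 66.8e-9, 66.7e-9, 12.1e-9, 66.7e-9, 90.3e-9, 66.6e-9]
print(dtof_distance(samples))
```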
It should be noted that, the processing control component also needs to keep the first time sequence in which the light source emits the first beam synchronized with the TDC in the d-TOF image sensor. For example, the processing control component controls the light source to emit the first pulse light and simultaneously controls the TDC to start timing. After the SPAD receives the second return optical signal, the TDC stops the timing and waits. Then, the processing control component controls the light source to emit the second pulse light and simultaneously controls the TDC to start timing for the second time. Similarly, after the SPAD receives the second return optical signal, the TDC stops the second timing and waits, and so on. It should be understood that a measurement range of the TDC is less than a periodicity of the pulse light. In general, the TDC in the d-TOF image sensor is synchronized with the light source emitting the first beam at a periodicity of hundreds of nanoseconds (ns). It should be understood that the synchronization periodicity between the TDC and the light source emitting the first beam is related to the detection distance.
In a possible implementation, the processing control component may generate the distance information of the target based on the first distance information and the second distance information.
Because the i-TOF image sensor may generate a multipath interference region or a strength error region, distance information detected in some regions is incorrect, and quality of an image is seriously affected. An elliptical region within a range of Z0 to Z1 in
In a possible implementation, the processing control component may remove error distance information corresponding to the multipath interference region or the strength error region from the first distance information, obtain target distance information corresponding to the error distance information from the second distance information, and supplement the error distance information with the target distance information. That is, the processing control component may determine error distance information corresponding to the multipath interference region or the strength error region in the first distance information, determine target distance information corresponding to the error distance information in the second distance information, and replace the error distance information with the target distance information.
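The following is a minimal sketch of this replacement step, assuming the first distance information and the second distance information have already been registered onto the same pixel grid and that a boolean mask of the multipath-interference or strength-error region is available (how that mask is obtained is not specified here); the array values are made up.

```python
import numpy as np

def replace_error_regions(first_distance, second_distance, error_mask):
    """Replace error distance information in the i-TOF map with the corresponding
    d-TOF values. `error_mask` marks the multipath interference region or the
    strength error region (assumed to be given)."""
    fused = first_distance.copy()
    fused[error_mask] = second_distance[error_mask]  # target distance information
    return fused

# Tiny made-up example: a 2x3 depth map with one erroneous pixel.
first = np.array([[0.40, 0.42, 0.95], [0.41, 0.43, 0.44]])   # i-TOF distances (m)
second = np.array([[0.41, 0.42, 0.45], [0.41, 0.43, 0.44]])  # d-TOF distances (m)
mask = np.array([[False, False, True], [False, False, False]])
print(replace_error_regions(first, second, mask))
```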
Further, optionally, the processing control component may generate a first image based on the first distance information, generate a second image based on the second distance information, and fuse the first image and the second image to obtain a depth map of the target. Alternatively, the first distance information for generating the first image may be the first distance information obtained after the error distance information is replaced with the target distance information. In this way, precision of the distance information of the target detected by the detection apparatus is improved, and quality of the formed depth map of the target can be improved.
In a possible implementation, the processing control component may generate the depth map of the target based on the first electrical signal from the first detection component and the second electrical signal from the second detection component. For ease of description of the solution, in the following, a high-precision measurement distance range for which the i-TOF image sensor is suitable is Z0 to Z1, and a high-precision measurement distance range for which the d-TOF image sensor is suitable is Z1 to Z2, where Z0<Z1<Z2, referring to
The following shows two possible implementations of image fusion as examples, to implement high-precision detection within a full detection distance range.
An implementation 1 is removing inaccurate distance information.
In a possible implementation, the processing control component may remove distance information of a distance greater than the preset value from the first distance information to obtain third distance information, and generate a first image based on the third distance information; remove distance information of a distance not greater than the preset value from the second distance information to obtain fourth distance information, and generate a second image based on the fourth distance information; and fuse the first image and the second image to obtain a depth map of the target.
With reference to
The i-TOF image sensor has high precision at a short distance, but its precision decreases linearly as the distance increases. The d-TOF image sensor keeps its precision unchanged within the measurement range, so its precision remains high at a long distance, but its precision for a near target usually cannot meet the requirement. Therefore, the first image is generated by using the third distance information from the i-TOF image sensor at a short distance, the second image is generated by using the fourth distance information from the d-TOF image sensor at a long distance, and then the first image and the second image are fused to obtain a depth map of the target. In this way, high-precision imaging within the full detection distance range can be obtained.
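A minimal sketch of this implementation 1, assuming both distance maps are registered to the same pixel grid and using NaN to mark removed pixels (these are illustration choices, not requirements of this application), is as follows.

```python
import numpy as np

def fuse_by_threshold(first_distance, second_distance, preset_value):
    """Implementation 1 sketch: keep i-TOF values not greater than the preset value
    and d-TOF values greater than it, assuming one registered pixel grid."""
    third = np.where(first_distance <= preset_value, first_distance, np.nan)    # first image
    fourth = np.where(second_distance > preset_value, second_distance, np.nan)  # second image
    # Fuse: take whichever map has a valid value at each pixel.
    return np.where(np.isnan(third), fourth, third)

first = np.array([0.8, 3.2, 14.0, 22.0])    # i-TOF distances (m), made up
second = np.array([0.9, 3.5, 14.2, 21.5])   # d-TOF distances (m), made up
print(fuse_by_threshold(first, second, preset_value=15.0))
```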
An implementation 2 is confidence-based weighted averaging.
In a possible implementation, confidences may be designed based on relative detection precision of the i-TOF image sensor and the d-TOF image sensor. For the i-TOF image sensor, a lower confidence is set for a larger distance value. For the d-TOF image sensor, a higher confidence is set for a larger distance value. Alternatively, it may be understood that a relationship between the first distance information and a confidence may be set. For example, a larger distance corresponding to the first distance information indicates a lower confidence. A larger distance corresponding to the second distance information indicates a higher confidence.
In an example 1, distance information of a distance greater than the preset value in the first distance information is multiplied by a first confidence, and distance information of a distance not greater than the preset value in the first distance information is multiplied by a second confidence, to obtain fifth distance information, and generate a third image based on the fifth distance information. The second confidence is greater than the first confidence. Distance information of a distance not greater than the preset value in the second distance information is multiplied by a third confidence, and distance information of a distance greater than the preset value in the second distance information is multiplied by a fourth confidence, to obtain sixth distance information, and generate a fourth image based on the sixth distance information. The fourth confidence is greater than the third confidence. The third image and the fourth image are fused to obtain a depth map of the target.
With reference to
The first distance information is divided into two parts, a part that is not greater than the preset value is multiplied by the second confidence that is larger, and a part that is greater than the preset value is multiplied by the first confidence that is smaller. That is, in the first distance information, the part that is not greater than the preset value accounts for a larger proportion, and the part that is greater than the preset value accounts for a smaller proportion. Similarly, the second distance information is also divided into two parts, a part that is not greater than the preset value is multiplied by the third confidence that is smaller, and a part that is greater than the preset value is multiplied by the fourth confidence that is larger. The depth map of the target obtained based on the fifth distance information and the sixth distance information has high precision within the full detection distance range.
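A minimal sketch of this example 1 follows; the confidence values 0.1 and 0.9 and the weighted-average fusion step are assumptions for illustration, with the second confidence greater than the first and the fourth greater than the third, as described above.

```python
import numpy as np

def fuse_with_step_confidence(first, second, preset, c1=0.1, c2=0.9, c3=0.1, c4=0.9):
    """Example 1 sketch: weight each map with a high confidence in its accurate
    range and a low confidence elsewhere, then fuse by weighted averaging."""
    w_first = np.where(first > preset, c1, c2)     # weighting for the fifth distance info
    w_second = np.where(second <= preset, c3, c4)  # weighting for the sixth distance info
    fifth = first * w_first     # basis of the third image
    sixth = second * w_second   # basis of the fourth image
    # Fuse the third and fourth images by confidence-weighted averaging.
    return (fifth + sixth) / (w_first + w_second)

first = np.array([0.8, 14.0, 22.0])   # i-TOF distances (m), made up
second = np.array([0.9, 14.2, 21.5])  # d-TOF distances (m), made up
print(fuse_with_step_confidence(first, second, preset=15.0))
```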
In an example 2, the first distance information is multiplied by a fifth confidence to obtain seventh distance information, and a fifth image is generated based on the seventh distance information. The fifth confidence is negatively correlated with the first distance information. That is, the fifth confidence decreases as the first distance increases. The second distance information is multiplied by a sixth confidence to obtain eighth distance information, and a sixth image is generated based on the eighth distance information. The sixth confidence is positively correlated with the second distance information. That is, the sixth confidence increases as the second distance increases. The fifth image and the sixth image are fused to obtain a depth map of the target.
It should be noted that, a specific function relationship between the fifth confidence and the first distance may be linear or non-linear, and a specific function relationship between the sixth confidence and the second distance may be linear or non-linear. This is not limited in this application.
With reference to
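A minimal sketch of this example 2, assuming a linear confidence relationship (the application notes that the relationship may also be non-linear) and an assumed normalization distance, is as follows.

```python
import numpy as np

def fuse_with_distance_confidence(first, second, max_range=30.0):
    """Example 2 sketch: the i-TOF confidence falls with distance and the d-TOF
    confidence rises with distance; max_range is an assumed normalization value."""
    fifth_conf = np.clip(1.0 - first / max_range, 0.0, 1.0)   # negatively correlated
    sixth_conf = np.clip(second / max_range, 0.0, 1.0)        # positively correlated
    seventh = first * fifth_conf    # basis of the fifth image
    eighth = second * sixth_conf    # basis of the sixth image
    # Fuse the fifth and sixth images by confidence-weighted averaging.
    return (seventh + eighth) / (fifth_conf + sixth_conf)

first = np.array([0.8, 14.0, 22.0])   # i-TOF distances (m), made up
second = np.array([0.9, 14.2, 21.5])  # d-TOF distances (m), made up
print(fuse_with_distance_confidence(first, second))
```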
It should be noted that, in the foregoing two implementations of image fusion, error distance information corresponding to an elliptical region within the range of Z0 to Z1 in the first distance information may be replaced with target distance information corresponding to the error distance information in the second distance information.
In a possible implementation, the processing control component may be a processor, a microprocessor, or the like, for example, may be a general-purpose central processing unit (central processing unit, CPU), a general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
It should be noted that, if the detection apparatus does not include the foregoing processing control component, functions of the foregoing processing control component may be performed by a processor in a terminal device in which the detection apparatus is used. For example, when the detection apparatus is used in a vehicle, functions of the processing control component described above may be performed by a main processor in the vehicle. For another example, when the detection apparatus is used in a smartphone, functions of the foregoing processing control component may be performed by a CPU in the smartphone.
The receiving component in the detection apparatus in any one of the foregoing embodiments may further include a lens assembly. The lens assembly is configured to converge, as much as possible, received return optical signals from the detection region to the d-TOF image sensor and/or the i-TOF image sensor. Further, optionally, the receiving component may further include an infrared ray (infrared radiation, IR) filter. The IR filter may be located between the lens assembly and the light splitting component. The IR filter may be configured to block or absorb infrared rays to prevent damage to the d-TOF image sensor and i-TOF image sensor. For example, a material of the IR filter may be glass or glass-like resin, for example, blue glass (blue glass).
Based on the foregoing content, the following provides two specific examples of the foregoing detection apparatus with reference to a specific hardware structure, to help further understand the structure of the foregoing detection apparatus.
It should be noted that, an emitting module, a lens assembly, and the like of a detection apparatus in the conventional art may be reused as the emitting module, the lens assembly, and the like in the detection apparatus provided in this application.
Based on the foregoing described structure and function principles of the detection apparatus, this application may further provide a camera. The camera may include the detection apparatus in any one of the foregoing embodiments. It may also be understood that the detection apparatus in any one of the foregoing embodiments may be independently used as a camera. Further, optionally, the camera may generate a grayscale image, or may be an infrared camera for imaging.
Based on the foregoing described structure and function principles of the detection apparatus, this application may further provide a terminal device. The terminal device may include the detection apparatus in any one of the foregoing embodiments and a processor. The processor is configured to control the detection apparatus to detect the detection region. Further, optionally, the terminal device may further include a memory. The memory is configured to store a program or instructions. The processor is configured to invoke the program or the instructions to control the detection apparatus to detect the detection region. It may be understood that the terminal device may further include another component, for example, a wireless communication apparatus, a touchscreen, and a display.
The processor 1001 may include one or more processing units. For example, the processor 1001 may include an application processor (application processor, AP), a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a digital signal processor (digital signal processor, DSP), and the like. Different processing units may be independent components, or may be integrated into one or more processors.
The memory 1002 may be a random access memory (random access memory, RAM), a flash memory, a read-only memory (read-only memory, ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art. For example, the memory 1002 is coupled to the processor 1001, so that the processor 1001 can read information from the memory 1002 and write information to the memory 1002. Certainly, the memory 1002 may alternatively be a part of the processor 1001. Alternatively, the processor 1001 and the memory 1002 may exist in the terminal device as discrete components.
The camera module 1003 may be configured to capture a moving image, a static image, and the like. In some embodiments, the terminal device may include one or N camera modules 1003, where N is a positive integer greater than 1. For descriptions of the camera module 1003, refer to the descriptions in the foregoing embodiment. Details are not described herein again.
When the camera module 1003 is used as an in-vehicle camera module, based on functions of the in-vehicle camera module, the camera module 1003 may be classified into a driving assistance camera module, a parking assistance camera module, and an in-vehicle driver monitoring camera module. The driving assistance camera module is configured for driving recording, lane departure warning, door opening warning, blind area monitoring, traffic sign recognition, and the like. The driving assistance camera module includes: an intelligent front-view camera (for example, monocular, binocular, or trinocular), which can be configured for dynamic object detection (vehicles and pedestrians), static object detection (traffic lights, traffic signs, lane lines, and the like), free space division, and the like; a side-view assistance camera (for example, wide-angle), configured to monitor dynamic targets in a rear-view mirror blind spot during driving; and a night vision assistance camera (for example, a night vision camera), which can be configured to better detect target objects at night or in poor light conditions. The parking assistance camera module can be configured for a reversing image or a 360-degree surround view. The 360-degree surround view (for example, wide-angle/fisheye) is mainly for low-speed short-distance perception and can form a seamless 360-degree panoramic view around a vehicle. The in-vehicle driver monitoring camera module mainly provides one or more levels of warning for driver fatigue, distraction, and irregular driving behavior. Based on different installation positions of the in-vehicle camera module in the terminal device, the in-vehicle camera module may be further classified into a front-view camera module, a side-view camera module, a rear-view camera module, and a built-in camera module.
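Purely for illustration, the functional classification above can also be summarized as a simple data structure; the following sketch is hypothetical (the names are not defined by this application) and merely groups the in-vehicle camera module types by the functions just described.

```python
# Hypothetical summary of the functional classification described above.
from enum import Enum


class InVehicleCameraModule(Enum):
    DRIVING_ASSISTANCE = "driving assistance"
    PARKING_ASSISTANCE = "parking assistance"
    DRIVER_MONITORING = "in-vehicle driver monitoring"


# Example functions associated with each functional class (illustrative only).
MODULE_FUNCTIONS = {
    InVehicleCameraModule.DRIVING_ASSISTANCE: [
        "driving recording", "lane departure warning", "door opening warning",
        "blind area monitoring", "traffic sign recognition",
    ],
    InVehicleCameraModule.PARKING_ASSISTANCE: [
        "reversing image", "360-degree surround view",
    ],
    InVehicleCameraModule.DRIVER_MONITORING: [
        "fatigue warning", "distraction warning", "irregular driving warning",
    ],
}
```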
The display 1004 may be configured to display an image, a video, or the like. The display 1004 may include a display panel. The display panel may be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (quantum dot light-emitting diode, QLED), or the like. In some embodiments, the terminal device may include one or Q displays 1004, where Q is a positive integer greater than 1. For example, the terminal device may implement a display function via the GPU, the display 1004, the processor 1001, and the like.
For example, the terminal device may be a radar (such as a lidar), a vehicle, a smartphone, a smart home device, an intelligent manufacturing device, a robot, an uncrewed aerial vehicle, or an intelligent transportation device (such as an AGV or an unmanned transport vehicle).
In various embodiments of this application, unless otherwise stated or there is a logic conflict, terms and/or descriptions in different embodiments are consistent and may be mutually referenced, and technical features in different embodiments may be combined based on an internal logical relationship thereof, to form a new embodiment.
In this application, “uniformity” does not mean absolute uniformity, and a specific error can be allowed. “Vertical” does not mean absolute verticality, and a specific engineering error can be allowed. “At least one” refers to one or more, and “a plurality of” refers to two or more. The term “and/or” is an association relationship for describing associated objects and represents that three relationships may exist. For example, A and/or B may represent the following three cases: Only A exists, both A and B exist, and only B exists, where A and B may be singular or plural. In text descriptions of this application, the character “/” generally indicates an “or” relationship between associated objects. In a formula of this application, the character “/” indicates a “division” relationship between the associated objects. In this application, a symbol “[a, b]” represents a closed interval, and a range thereof is greater than or equal to a and less than or equal to b. In addition, in this application, the term “for example” indicates giving an example, an illustration, or a description. Any embodiment or design scheme described as an “example” in this application should not be explained as being more preferred or having more advantages than another embodiment or design scheme. Alternatively, it may be understood that use of the term “example” is intended to present a concept in a specific manner, and does not constitute a limitation on this application.
It may be understood that various numbers in embodiments of this application are merely used for differentiation for ease of description, and are not used to limit the scope of embodiments of this application. The sequence numbers of the foregoing processes do not mean execution sequences, and the execution sequences of the processes should be determined based on functions and internal logic of the processes. The terms “first”, “second”, and other similar expressions are intended to distinguish between similar objects but do not necessarily indicate a specific order or sequence. In addition, the terms “include”, “have”, and any variant thereof are intended to cover non-exclusive inclusion. For example, a method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units that are expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a method, system, product, or device.
Although this application is described with reference to specific features and embodiments thereof, it is clear that various modifications and combinations may be made to them without departing from the spirit and scope of this application. Correspondingly, the specification and the accompanying drawings are merely example descriptions of the solutions defined by the appended claims, and are considered as covering any of or all modifications, variations, combinations, or equivalents that fall within the scope of this application.
It is clear that a person skilled in the art can make various modifications and variations to this application without departing from the spirit and scope of this application. This application is intended to cover these modifications and variations of embodiments of this application provided that they fall within the scope of protection defined by the claims of this application and their equivalent technologies.
Foreign application priority data:

Number | Date | Country | Kind
---|---|---|---
202110292907.X | Mar 2021 | CN | national
This application is a continuation of International Application No. PCT/CN2022/076175, filed on Feb. 14, 2022, which claims priority to Chinese Patent Application No. 202110292907.X, filed on Mar. 18, 2021. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.
Related U.S. application data:

Relation | Number | Date | Country
---|---|---|---
Parent | PCT/CN2022/076175 | Feb 2022 | US
Child | 18467817 |  | US