This application claims priority to Chinese Patent Application Nos. 202211363022.5, 202211362329.3, 202211363010.2, and 202211363009.X, all filed on Nov. 2, 2022. The entire content of all of the above-referenced applications is incorporated herein by reference.
This application relates to the field of lidar technologies, and in particular, to a lidar light source and system.
A lidar is a radar system that detects feature quantities, such as a position and a speed of a target, with emitted light beams. A working principle of the lidar is as follows: A detection signal (a light beam) is emitted to a target, and a received signal (a target echo) reflected from the target is compared with the detection signal. After appropriate processing, relevant information of the target, such as a distance, an orientation, a height, a speed, a pose, and a shape, may be obtained. High-precision data obtained from a mobile platform through the lidar has been widely used in the field of autonomous driving.
Due to a limited power of a driving power supply, currently no feasible stationary lidar can realize full-coverage detection within a relatively long detection range. Currently feasible long-range detection radar solutions are mainly fixed partial-coverage vertical-cavity surface-emitting laser (VCSEL) radars, MEMS and prism-based edge-emitting laser (EEL) radars, and mechanically rotatable VCSEL radars and EEL radars. Due to a limited VCSEL laser power, a detection range of the fixed partial-coverage VCSEL radars is significantly limited. Due to a complex process, the MEMS and prism EEL radars have a high cost and a high requirement on a usage status. Due to poor reliability, the mechanically rotatable VCSEL radars and EEL radars have a short service life.
From another perspective, current lidar solutions include a mechanical lidar, a semi-solid-state lidar, and an all-solid-state lidar. This application relates to an all-solid-state flash lidar.
Because a long detection range requires a laser with high power, the current semi-solid-state lidar adopts an EEL light source solution. However, the following problems exist in the solution:
The above disclosed content of the background is merely used for understanding the inventive concept and the technical solution of this application, and does not necessarily belong to the existing technologies of this application. The background should not be used to evaluate the novelty and creativity of this application in absence of clear evidence indicating that the above content has been disclosed on the application date of this application.
In this application, an array of EELs is employed. Each of the EELs includes an active region disposed therein. The active regions may share a cathode or an anode, thereby significantly improving emission consistency of the lasers and reducing the power of a lidar light source. Light beams are collimated and diffused through a cylindrical lens and are fused into a uniform light beam on a target surface, so that the lidar light source has desirable consistency and reliability, has a low power, and is free of heat dissipation problems, thereby facilitating implementation of a high peak power and long-distance detection.
In a first aspect, this application provides a lidar light source, including an array of EELs and a cylindrical lens.
A plurality of active regions are arranged inside the array of EELs. Light emitting ports of the plurality of active regions are disposed adjacent to each other, and the plurality of active regions share a cathode and include separate anodes, or share an anode and include separate cathodes.
The cylindrical lens is configured to collimate light beams emitted from the array of EELs in a fast axis direction, and fuse the light beams emitted through the light emitting ports into a uniform light beam on a target surface.
In some embodiments, in the lidar light source, a light-emitting surface of each of the EELs is located on a focal point of the cylindrical lens.
In some embodiments, the lidar light source further includes a controller. The controller is configured to control the plurality of active regions to be in an on or off state through an addressable drive mechanism.
In some embodiments, in the lidar light source, the plurality of active regions are turned on successively, and only one of the active regions is turned on at a time.
In some embodiments, in the lidar light source, all of the active regions are turned on within an emission cycle.
In some embodiments, in the lidar light source, a distance between the array of EELs and the cylindrical lens is in a range of 0.1 mm to 1 mm.
In some embodiments, in the lidar light source, each of the light beams is rectangular.
In some embodiments, in the lidar light source, the cylindrical lens includes a reflecting surface for changing directions of the light beams.
In a second aspect, this application provides a lidar light source, including an array of EELs and a cylindrical lens.
A plurality of active regions are arranged inside the array of EELs. Light emitting ports of the plurality of active regions are disposed adjacent to each other, and the plurality of active regions share a cathode and include separate anodes, or share an anode and include separate cathodes.
The cylindrical lens is configured to collimate light beams emitted from the array of EELs in a fast axis direction.
Regions on the cylindrical lens projected by the light beams emitted from the array of EELs (from the plurality of active regions) do not overlap.
In some embodiments, in the lidar light source, a light-emitting surface of each of the EELs is located on a focal point of the cylindrical lens.
In some embodiments, in the lidar light source, an opaque pattern is arranged on each of the regions projected from the plurality of active regions on the cylindrical lens, and the plurality of opaque patterns are different from each other.
In some embodiments, in the lidar light source, the cylindrical lens includes a reflecting surface for changing directions of the light beams.
In some embodiments, in the lidar light source, the plurality of EELs are arranged in at least two rows.
In some embodiments, in the lidar light source, surfaces of the plurality of active regions are in an arc shape.
In a third aspect, this application provides a lidar light source, including an array of EELs, a cylindrical lens, and a display.
A plurality of active regions are arranged inside the array of EELs. Light emitting ports of the plurality of active regions are disposed adjacent to each other, and the plurality of active regions share a cathode and include separate anodes, or share an anode and include separate cathodes.
The cylindrical lens is configured to collimate light beams emitted from the array of EELs in a fast axis direction.
The light beams emitted from the active regions are modulated into specific shapes and emitted in a form of pulses.
The display is configured to display patterns to shape the emergent light beams. The patterns do not allow the light beams to pass through.
In some embodiments, in the lidar light source, the display includes a plurality of sub-screens, a number of the sub-screens is equal to a number of the active regions, and the light beam emitted from each of the active regions is emitted through a corresponding sub-screen.
In some embodiments, the lidar light source further includes a controller. The controller is configured to control the plurality of active regions to be in an on or off state through an addressable drive mechanism.
In some embodiments, in the lidar light source, the display is configured to dynamically adjust the displayed patterns.
In a fourth aspect, this application provides a lidar system, including a lidar light source and a receiving sensor.
The lidar light source is any lidar light source described above.
The receiving sensor is synchronized to the lidar light source to receive a reflected signal of a laser for three-dimensional reconstruction.
In some embodiments, in the lidar system, the receiving sensor is divided into n sub-regions, and n is a number of the active regions.
The sub-regions are in a one-to-one correspondence with the active regions.
A sub-region is controlled to operate when a corresponding active region operates.
Compared with the existing technologies, this application has the following beneficial effects.
In this application, the array of EELs is used, and the plurality of active regions are arranged, so that a power of a single laser may be reduced, thereby reducing an instantaneous power of the light source, and reducing a power requirement on a driving power supply. Therefore, a lidar light source driven by an existing driving power supply can realize detection of a longer range.
In this application, the plurality of active regions share the anode or the cathode, so that the plurality of active regions have desirable consistency, thereby avoiding problems such as a non-uniform light beam and unstable emission caused by inconsistent positions or angles of the plurality of lasers.
In this application, the cylindrical lens is used to collimate the light beams and fuse the light beams into a uniform light beam on the target surface, so that a number of optical elements can be considerably reduced, impact of process errors on the light beams can be alleviated, quality of the laser light beams can be improved, and a size of the lidar light source can be reduced.
In order to describe the technical solutions in embodiments of this application or in existing technologies more clearly, drawings required for describing the embodiments or existing technologies are briefly described below. Apparently, the drawings in the following description merely show embodiments of this application, and a person of ordinary skill in the art may still derive other drawings from these drawings without creative efforts. By reading detailed description of non-limiting embodiments with reference to the following drawings, other features, objectives, and advantages of this application become more apparent.
This application is further described below with reference to specific embodiments. The following embodiments help a person skilled in the art further understand this application, but do not limit this application in any form. It should be noted that a person of ordinary skill in the art may make a plurality of modifications and improvements without departing from the concept of this application, all of which fall within the protection scope of this application.
In the specification, claims, and drawings of this application, terms “first”, “second”, “third”, “fourth”, and the like (if any) are intended to distinguish between similar objects rather than describe a specific order or sequence. It should be understood that data used in this way may be transposed where appropriate, so that the embodiments of this application described herein may be, for example, implemented in an order different from those illustrated or described herein. Moreover, terms “include”, “have”, and any other variants are intended to cover non-exclusive inclusion. For example, a process, a method, a system, a product, or a device that includes a list of steps or units is not necessarily limited to those expressly listed steps or units, but may include other steps or units not expressly listed or inherent to such a process, method, system, product, or device.
A lidar light source and system provided in the embodiments of this application are intended to resolve problems in existing technologies.
The technical solutions of this application and how the technical solutions of this application resolve the above technical problems are described in detail below by using specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of this application are described below with reference to the drawings.
In the lidar light source provided in the embodiments of this application, the EELs share a cathode and include separate anodes, or share an anode and include separate cathodes. In this way, consistency and reliability of the lidar light source can be ensured, more effective control of emission can be realized, emission at different times can be realized, a power can be reduced, and a requirement on a driving power supply can be reduced. In addition, a cylindrical lens is used to collimate light beams and fuse the light beams into a uniform light beam on a target surface, which realizes desirable uniformity even at a long range.
Light emitting ports of the plurality of active regions are disposed adjacent to each other, and the plurality of active regions share a cathode and include separate anodes, or share an anode and include separate cathodes. If the plurality of active regions share an anode, the plurality of active regions include independent cathodes. If the plurality of active regions share a cathode, the plurality of active regions include independent anodes.
In some embodiments, the surface of the electrode plate 11 is on the same surface as the cathodes. The cathodes are snugly attached to the electrode plate 11, which can realize connection without the metal wires 12. Since the cathodes are snugly attached to the electrode plate 11, positions and directions of the lasers can be precisely controlled through a housing and a substrate. In order to cause the surface of the electrode plate 11 to be located on the same surface as the cathodes, a thickness of the substrate 13 may be increased, or a thickness of the electrode plate 11 may be increased. In case of increasing the thickness of the substrate 13, the substrate 13 below the electrode plate 11 may be made thicker, and the substrate 13 below the EELs 10 is made thinner. In case of increasing the thickness of the electrode plate 11, the substrate 13 is kept flat to facilitate production and processing of the substrate 13, and the electrode plate 11 can be located on the same surface as the cathode surfaces merely through the increased thickness. In industrial production, a solution may be selected based on comprehensive consideration of materials and processing difficulty of the electrode plate 11 and the substrate 13, to achieve optimal costs.
Compared to a VCSEL laser, the EELs have a larger far-field divergence angle, can emit a wider light beam, and have a high power density and a high pulse peak power. A VCSEL-based lidar system without a movable component may be formed by incorporating a 2D emitter array and a 2D SPAD detector array, while the technical solution of this application enables an EEL array with similar constituent detectors to be more efficient and stable in long-range detection as compared to a VCSEL laser array. The EELs used in this application may have various feasible wavelengths, such as 905 nm or 1550 nm. Based on a current industrial production capacity, 905 nm is a mainstream wavelength, since silicon absorbs photons at this wavelength. In addition, silicon-based photoelectric detectors are more mature than the indium gallium arsenide (InGaAs) near-infrared detectors required to detect light at 1550 nm, and are therefore chosen for a large number of applications in consideration of costs and overall maturity, with a higher cost efficiency. A laser at 1550 nm is far from the visible light spectrum of human eyes, and a laser at 1550 nm with the same power has a safety level 40 times higher than that of a laser at 905 nm. 1550 nm is a wavelength with enormous potential in the future.
A cylindrical lens 2 is configured to collimate light beams emitted from the array of EELs in a fast axis direction, and fuse the light beams emitted through the light emitting ports into a uniform light beam on a target surface.
The cylindrical lens 2 is a lens at a fixed position with respect to the EELs. In this embodiment, the light beams are collimated only in the fast axis direction, and do not need to be collimated in a slow axis or another direction. In this way, a requirement on the cylindrical lens can be reduced, which ensures a desirable projection effect while reducing costs. The cylindrical lens 2 collimates and shapes the laser light beams emitted from all of the EELs, so that the light beams are fused into a uniform light beam on the target surface after passing through the cylindrical lens 2. A light-emitting surface of each of the array of EELs 1 is located on a focal point of the cylindrical lens 2. The light-emitting surface is a surface composed of the plurality of light emitting ports. The cylindrical lens 2 has a specific degree of curvature for shaping the light beams. A distance between the cylindrical lens 2 and the array of EELs is in a range of 0.1 mm to 1 mm, to minimize a size of the lidar light source. The cylindrical lens 2 may be a common lens, a superlens, or the like. When the cylindrical lens is a superlens, the size of the lidar light source may be significantly reduced.
The cylindrical lens 2 is a lens at a fixed position relative to the array of EELs. Regions on the cylindrical lens projected by the light beams emitted from the array of EELs (from the plurality of active regions) do not overlap. The regions projected from the plurality of active regions on the cylindrical lens do not fully cover the cylindrical lens, that is, the cylindrical lens has regions not illuminated by the active regions, to ensure independent operability of the light beam of each active region. A light-emitting surface of each of the array of EELs 1 is located on a focal point of the cylindrical lens 2.
In this embodiment, the regions on the cylindrical lens projected by the light beams emitted from the array of EELs (from the plurality of active regions) do not overlap, so that the light beams of the plurality of array-type lasers are spaced apart from each other, thereby facilitating an optical operation on each light beam, and increasing an information density of the light beams projected from the different active regions. In this embodiment, separate illumination on different regions is achieved without a need to arrange a scanning system, which simplifies a system, and improves stability.
The patterns of the display do not allow the light beams to pass through, and regions without patterns allow the light beams to pass through. As shown in
In this embodiment, the display displays the patterns to shape the emergent light beams, and depth data may be calculated by using a deformation of the patterns. In this way, not only TOF depth data is obtained, but also structured light depth data may be obtained. In this embodiment, through display of the patterns on the display to process the emergent light beams, a fast changing characteristic of the display can be fully utilized, so that the emergent light beams can be quickly switched between different shapes.
In some embodiments, as shown in
In some embodiments, the lidar light source further includes a controller that may control the plurality of active regions to be in an on or off state through an addressable drive mechanism. The plurality of active regions are turned on successively, and only one of the active regions is turned on at a time. All of the active regions are turned on within an emission cycle. The expression “the active regions are turned on successively” means that the active regions are turned on one at a time at a very small time interval, and does not mean that spatially adjacent active regions are turned on one after another. In fact, a laser light beam has a non-negligible propagation time over a relatively long detection range. Therefore, if adjacent active regions project light beams within a relatively short time, the light beams may interfere with each other at a data receiving end due to factors such as light overflow. In this embodiment, the active regions may be turned on successively at an equal numbering interval of not less than 2. For example, the active regions are numbered as 1, 2, 3 . . . , 20 from left to right. If the interval is 2, the active regions are turned on in a sequence of 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20. In this embodiment, after one pass of turn-on at the equal interval is completed, active regions not yet turned on are further turned on at the equal interval until all of the active regions are turned on, to achieve an efficient and controllable turn-on process. When controlling the active regions to be turned on or off, the controller controls the patterns in the sub-screens on the display to be displayed or not, thereby achieving synchronous updating of the active regions and the display. In some embodiments, within an emission cycle, the controller updates the display only once and synchronously updates all of the sub-screens, so that all of the sub-screens adapt to the active regions within the emission cycle, thereby reducing a refresh frequency of the display and prolonging a service life of the display.
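The following is a minimal illustrative sketch, not part of the claimed implementation, of how such an interval-based turn-on order may be generated, assuming 20 active regions numbered 1 to 20 and a numbering interval of 2 as in the example above; the function name and parameters are hypothetical.

# Illustrative sketch only: generate a turn-on order in which spatially adjacent
# active regions are not fired back to back, per the interval-based scheme above.
def turn_on_order(num_regions=20, interval=2):
    regions = list(range(1, num_regions + 1))
    order = []
    for offset in range(interval):
        # One pass per offset: 1, 3, 5, ..., 19, then 2, 4, ..., 20 when the interval is 2.
        order.extend(regions[offset::interval])
    return order

print(turn_on_order())  # [1, 3, 5, ..., 19, 2, 4, ..., 20]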
In some embodiments, as shown in
In some embodiments, as shown in
In some embodiments, as shown in
In some embodiments, the lidar light source further includes a controller that may control the plurality of active regions to be in an on or off state through an addressable drive mechanism. The plurality of active regions are turned on successively, and only one of the active regions is turned on at a time. All of the active regions are turned on within an emission cycle. The expression “the active regions are turned on successively” means that the active regions are turned on one at a time at a very small time interval, and does not mean that spatially adjacent active regions are turned on one after another. A laser light beam has a non-negligible propagation time over a relatively long detection range. Therefore, if adjacent active regions project light beams within a relatively short time, the light beams may interfere with each other at a data receiving end due to factors such as light overflow. In this embodiment, the active regions are turned on successively at an equal numbering interval of not less than 2. For example, the active regions are numbered as 1, 2, 3 . . . , 20 from left to right. If the interval is 2, the active regions are turned on in a sequence of 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20. In this embodiment, after one pass of turn-on at the equal interval is completed, active regions not yet turned on are further turned on at the equal interval until all of the active regions are turned on, to achieve an efficient and controllable turn-on process.
The lidar light source may be the lidar light source in any of the foregoing embodiments.
The receiving sensor is synchronized to the lidar light source to receive a reflected signal of a laser for three-dimensional reconstruction.
The lidar light source turns on its active regions successively, to achieve a plurality of pulsed emissions within an emission cycle. The receiving sensor performs adjustment in synchronization with the lidar light source for receiving a reflected signal of a laser pulse. A depth value of a target region may be calculated based on a time difference between a moment at which the signal is received by the receiving sensor and a moment at which the signal is emitted by the lidar: d=(t1−t2)*c/2, where t1 is the moment at which the receiving sensor receives the signal, t2 is the moment at which the lidar emits the signal, c is the propagation speed of light, that is, 299,792,458 meters/second, and d is a distance (that is, the depth value) between the target object and the lidar system. The depth value of the target region may be calculated based on the formula. Then, through other position information, three-dimensional information of the target region may be restored, that is, the three-dimensional reconstruction may be performed.
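As an illustration of the formula above, the following sketch computes the depth value from the emission and reception moments; the numeric example is hypothetical.

# Sketch of the dTOF depth formula d = (t1 - t2) * c / 2 given above.
C = 299_792_458.0  # propagation speed of light, in meters per second

def depth_from_tof(t1_receive_s, t2_emit_s):
    # Round-trip time multiplied by the speed of light, halved for the one-way distance.
    return (t1_receive_s - t2_emit_s) * C / 2.0

# Hypothetical example: a round trip of about 66.7 ns corresponds to roughly 10 m.
print(depth_from_tof(66.7e-9, 0.0))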
Within one emission cycle, a complete target region 301 may be detected, but each pulse covers only a target region 302. A plurality of target regions 302 are pieced together to form the complete target region 301. d is the distance between the target region and a sensor in the receiving sensor, rather than a distance to a front-end optical element of the receiving sensor. A scanning sequence of the lidar light source may be random, that is, an operating sequence of the active regions may be random. The receiving sensor operates synchronously with the lidar light source. When the operating sequence of the active regions is random, switching efficiency of the active regions is high, and conversion can be quickly achieved, which improves efficiency and reduces a computational load of the lidar system.
Step S1: Controlling a laser light source to emit pulse lasers in a first form region by region, controlling a receiver to receive reflected signals region by region, and obtaining a first depth based on a time difference.
In this step, the laser light source has a plurality of regions, and the receiver also has a plurality of regions. The regions of the laser light source and the regions of the receiver are equal in number and are in a one-to-one correspondence. Each of the plurality of emitting regions corresponds to one laser emitter. The pulse lasers emitted in the first form have a minimum pulse duration. In this embodiment, emissions in the plurality of different regions are performed in the first form to obtain the first depth.
Step S2: Calculating a first depth obtained for each different region and performing determination. If the first depth is greater than a first threshold, a current state is not changed. If the first depth is less than the first threshold and greater than a second threshold, step S3 is performed. If the first depth is less than the second threshold, step S4 is performed.
In this step, emissions in all of the regions are performed in the first form in step S1, but the target object may not be within an effective measurement range of dTOF. Therefore, corresponding regions need to be corrected. The first depth of each region is calculated, determination is performed separately, and the subsequent step is determined accordingly. Since the depth of the target object in each different region is not a single value, and even differs significantly in some cases, in this embodiment, a minimum depth value of the region is used as a depth value of the region. This step processes the corresponding regions after an emission cycle is completed, that is, steps S3 and S4 are performed after a pulse string is completed.
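A minimal sketch of the per-region determination in this step is shown below; the threshold values, the use of the minimum depth as the region depth, and the returned labels are assumptions for illustration only, not a definitive implementation.

# Hypothetical sketch of the step S2 determination for one region.
def region_depth(depth_samples):
    # Per the text, the minimum depth value within the region is used as the region depth.
    return min(depth_samples)

def decide_next_step(first_depth, first_threshold, second_threshold):
    if first_depth > first_threshold:
        return "no change: keep dTOF (first form)"
    if first_depth > second_threshold:
        return "step S3: switch to iTOF (second form), move region to last place"
    return "step S4: switch to structured light (third form), move region to last place"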
Step S3: Adjusting an emission sequence of the region to a last place and modulating an emitted pulse laser into a second form to obtain a second depth based on a phase difference.
In this step, a region for which depth data is to be obtained by using an iTOF technology is configured to perform emission later. A plurality of regions may be adjusted during the sequence adjustment. During sequence adjustment, a current region is not transposed with a current last region, but is placed after the current last region. For example, a current region emission sequence is ABCDEFGHIJKLM, where C is a region that needs to be adjusted in this step, and a current last region is M. If C is placed after M, a new emission sequence ABDEFGHIJKLMC is obtained.
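The sequence adjustment can be sketched as follows, reproducing the ABCDEFGHIJKLM example above; the function name is illustrative.

# Sketch of the sequence adjustment: the adjusted region is appended after the
# current last region rather than being swapped with it.
def move_to_last(sequence, region):
    new_sequence = [r for r in sequence if r != region]
    new_sequence.append(region)
    return new_sequence

print("".join(move_to_last(list("ABCDEFGHIJKLM"), "C")))  # ABDEFGHIJKLMC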
In some embodiments, the second form is a pulse modulated laser. Compared to the first form, an emission time in a single region is prolonged for the second form, which facilitates conversion and implementation, and reduces impact of ambient light.
In some embodiments, the second form is a sine modulated laser. Compared to the first form, in addition to the prolonging of the emission time, the laser needs to be modulated into a sine wave for the second form, thereby allowing the use of a full set of products with a more mature technology.
Step S4: Adjusting the emission sequence of the region to the last place, modulating the emitted pulse laser into a third form, and displaying a preset pattern on a display corresponding to the region, to obtain a third depth based on a parallax.
In this step, a region for which depth data is to be obtained by using a structured light technology is configured to perform emission later. The sequence adjustment in this step is similar to that in step S3. The pulse laser emitted from the laser light source is emitted after passing through the display. The display is divided into a plurality of different regions that are in a one-to-one correspondence with the regions of the laser light source. A pulse laser emitted from a particular region of the laser light source penetrates a particular region of the display and is then emitted. When a particular pattern is displayed on the display, a region with the pattern cannot be penetrated by the pulse laser, so that the penetrated pulse laser shows a specific shape. That is to say, the third depth may be obtained based on a deformation of the pattern (a parallax principle).
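The text does not give the parallax computation; as an assumption, the following sketch uses the common triangulation relation depth = focal length × baseline / disparity to illustrate how a depth may be derived from the observed shift (deformation) of the projected pattern. All names and numbers are hypothetical.

# Assumed triangulation sketch (not stated in the text): depth from pattern disparity.
def depth_from_parallax(focal_length_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: focal length 1000 px, baseline 0.05 m, disparity 10 px -> 5 m.
print(depth_from_parallax(1000.0, 0.05, 10.0))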
In this embodiment, the laser light source emits pulse lasers region by region, which can reduce an instantaneous power of the laser light source and reduce a requirement on a driving power supply. In addition, under a limited power of the driving power supply, the laser light source in this embodiment may achieve a higher output power, thereby realizing detection in a longer range. In this embodiment, the laser light source can emit three different light beams within one emission cycle, so that an appropriate laser type may be selected based on a distance to a target object, thereby obtaining precise depth data through an optimal depth measurement technology. In this embodiment, the emission sequence is adjusted based on different light source types, to ensure that a pulse required for dTOF is emitted first and other pulses are emitted later, so that times between adjacent pulse lasers are fixed and have high consistency, and durations of the emission cycles may be different.
Step S5: Controlling a laser light source to emit pulse lasers in a second form region by region, controlling a receiver to receive reflected signals region by region, and obtaining a second depth based on a phase difference.
In this step, the regions of the laser light source and the regions of the receiver are in a one-to-one correspondence. Compared to the foregoing embodiments, in this embodiment, during initial emission, the laser light source emits the pulse lasers in the second form, that is, uses the iTOF technology. Since an effective range of the iTOF technology is usually between that of a dTOF technology and that of a structured light technology, a distance to a target object can be recognized more effectively.
Step S6: Calculating a second depth obtained for each different region and performing determination. If the second depth is greater than a first threshold, step S7 is performed. If the second depth is less than the first threshold and greater than a second threshold, a current state is not changed. If the second depth is less than the second threshold, step S8 is performed.
In this step, due to a wrap-around (winding) effect, a distance to a target region may be mistakenly determined by using the dTOF technology. For example, when an effective measurement distance of dTOF is in a range of 0.5 m to 5 m, a target object 6 m away may be recognized as being located 1 m away. In this case, the region may be scanned by using the structured light technology to resolve the problem. Therefore, precision of the measurement data can be effectively increased during execution of subsequent steps.
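The wrap-around effect described above can be illustrated with simple modular arithmetic, assuming the 5 m upper bound of the effective range acts as the unambiguous range; the numbers reproduce the 6 m to 1 m example from the text.

# Sketch of the wrap-around (winding) effect: a target beyond the unambiguous range
# aliases back into it.
def apparent_depth(true_depth_m, unambiguous_range_m=5.0):
    return true_depth_m % unambiguous_range_m

print(apparent_depth(6.0))  # 1.0, i.e., a target 6 m away appears to be 1 m away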
Step S7: Adjusting an emission sequence of the region to a first place and modulating an emitted pulse laser into a first form to obtain a first depth based on a time difference.
In this step, a relatively far region is modulated into the first form, and an emission sequence thereof is adjusted to the first place, so that a farthest target object can be recognized at a close time interval, thereby quickly obtaining depth data of the relatively far target object.
Step S8: Adjusting the emission sequence of the region to the last place, modulating the emitted pulse laser into a third form, and displaying a preset pattern on a display corresponding to the region, to obtain a third depth based on a parallax.
In this step, a region using the parallax principle is adjusted to the last place to ensure that the pulse laser in the first form is emitted first.
In this embodiment, the iTOF technology is set for an initial emission mode, so that data about various distances can be effectively classified, and an optimal detection method can be obtained for each different region, thereby increasing data acquisition accuracy and precision.
Step S9: Controlling a laser light source to emit pulse lasers in a third form region by region, controlling a receiver to receive reflected signals region by region, displaying a preset pattern on a display corresponding to the region, and obtaining a third depth based on a parallax.
In this step, an initial emission mode is set to a structured light mode. The regions of the laser light source and the regions of the receiver are in a one-to-one correspondence. Since data obtained in the structured light mode is more stable and reliable, target regions at various distances may be accurately classified. Compared with the foregoing embodiments, since the structured light technology requires a larger amount of calculation than the TOF technology, a higher requirement is imposed on a computing chip.
Step S10: Calculating a third depth obtained for each different region and performing determination. If the third depth is greater than a first threshold, step S11 is performed. If the third depth is less than the first threshold and greater than a second threshold, step S12 is performed. If the third depth is less than the second threshold, a current state is not changed.
In this step, different measurement modes are correspondingly used for targets at different distances. Due to the different effective measurement ranges of the different measurement technologies, different laser light sources, receivers, and displays may be designed in practical products based on different application scenarios, to achieve optimal cooperation of detection at different distances.
Step S11: Adjusting an emission sequence of the region to a first place and modulating an emitted pulse laser into a first form to obtain a first depth based on a time difference.
In this step, an emission sequence of a farthest region is adjusted to the first place. The measurement of the region is switched from the structured light technology to the dTOF technology, thereby reducing a calculation amount and improving a data acquisition speed while obtaining the depth data with optimal precision.
Step S12: Modulating an emitted pulse laser into a second form to obtain a second depth based on a phase difference.
In this step, the emission sequence is not adjusted. Even though the sequence is adjusted in step S11, first emission of the farthest region is still ensured. However, the iTOF technology and the structured light technology have no special requirement on the emission sequence of the corresponding region. A laser emission duration in the second form is less than a laser emission duration in the third form.
In this embodiment, the initial emission mode is set to the structured light mode, so that the regions can be defined very accurately, and all of the regions can achieve an optimal shape during second emission, thereby quickly achieving precise measurement of the target region and improving efficiency.
a first emitting module 410, configured to: control a laser light source to emit pulse lasers in a first form region by region, control a receiver to receive reflected signals region by region, and obtain a first depth based on a time difference, where the regions of the laser light source and the regions of the receiver are in a one-to-one correspondence;
a second emitting module 420, configured to: control the laser light source to emit pulse lasers in a second form region by region, control the receiver to receive reflected signals region by region, and obtain a second depth based on a phase difference;
a third emitting module 430, configured to: control the laser light source to emit pulse lasers in a third form region by region, control the receiver to receive reflected signals region by region, display a preset pattern on a display corresponding to the region, and obtain a third depth based on a parallax; and
a selection module 440, configured to perform determination based on the depth data obtained for each different region. If the depth data is greater than a first threshold, the first emitting module performs emission. If the depth data is less than the first threshold and greater than a second threshold, the second emitting module performs emission. If the depth data is less than the second threshold, the third emitting module performs emission.
During initial emission, any of the first emitting module 410, the second emitting module 420, or the third emitting module 430 is used to control all regions of the laser light source to obtain data. Then, the selection module 440 classifies the different regions based on data obtained for the first time. A farthest region is controlled by the first emitting module 410, a region at a moderate distance is controlled by the second emitting module 420, and a closest region is controlled by the third emitting module 430, thereby achieving detection with an optimal technology at various distances and obtaining optimal accuracy. During the emission, the selection module 440 dynamically performs determination on and controls each region to enable timely perception and obtaining of changes within a target region.
In this embodiment, the different modules are used to control the three modes separately, which achieves more consistent cooperation between the laser light source and the receiver and ensures data consistency. In addition, in this embodiment, the selection module is used to determine various depth data and monitor data in each region in real time, to adapt to changes in the target region better.
A person skilled in the art may understand that each aspect of this application may be implemented as a system, a method, or a program product. Therefore, each aspect of this application may be implemented in the following forms: a hardware only implementation, a software only implementation (including firmware, microcode, and the like), or an implementation of a combination of software and hardware, which may be collectively referred to as a “circuit”, a “module”, or a “platform” herein.
As shown in
The storage unit stores a program code, and the program code may be executed by the processing unit 610, so that the processing unit 610 performs the steps according to the various exemplary implementations of this application described in the foregoing part of the laser system switching method of the specification. For example, the processing unit 610 can perform steps shown in
The storage unit 620 may include a readable medium in a form of a volatile storage unit, such as a random access memory (RAM) unit 6201 and/or a cache unit 6202, and may further include a read-only memory (ROM) unit 6203.
The storage unit 620 may further include a program/utility tool 6204 having a set of (at least one) program modules 6205. The program modules 6205 include, but are not limited to: an operating system, one or more applications, other program modules, and program data. Each or a combination of these examples may include implementation of a network environment.
The bus 630 may be one or more of a plurality of types of bus structures, including a storage unit bus or a storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus that uses any of a variety of bus structures.
The electronic device 600 can further communicate with one or more external devices 700 (such as a keyboard, a pointing device, and a Bluetooth device), and can further communicate with one or more devices that enable a user to interact with the electronic device 600, and/or any device (such as a router or a modem) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may be performed by using an input/output (I/O) interface 650. In addition, the electronic device 600 can further communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network, for example, the Internet) by using a network adapter 660. The network adapter 660 can communicate with another module of the electronic device 600 by using the bus 630. It should be understood that, although not shown in
An embodiment of this application further provides a computer-readable storage medium configured to store a program. When the program is executed, the steps of the laser system switching method are implemented. In some possible embodiments, each aspect of this application may be further implemented in a form of a program product including a program code. When the program product is run on a terminal device, the program code is used to enable the terminal device to perform the steps according to the various exemplary implementations of this application described in the foregoing part of the laser system switching method of the specification.
The program product may be any combination of one or more computer readable mediums. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium may be, for example but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination thereof. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection by one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical memory device, a magnetic storage device, or any appropriate combination thereof.
The computer readable signal medium may be a data signal included in a baseband or transmitted as a part of a carrier, in which a readable program code is carried. The propagated data signal may have various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. The readable signal medium may alternatively be any readable medium other than a readable storage medium, and the readable medium may be used to send, propagate, or transmit a program used by or in combination with an instruction execution system, apparatus, or device. The program code included in the readable medium may be transmitted using any suitable medium, including but not limited to a wireless medium, a wired medium, an optical cable, RF, or any suitable combination thereof.
A program code for performing the operations of this application may be written by using any combination of one or more programming languages. The programming language includes an object-oriented programming language such as Java and C++, and further includes a conventional procedural programming language such as a “C” Language or a similar programming language. The program code may be fully executed on a computing device of a user or partially executed on a user equipment, or may be executed as an independent software package, or may be partially executed on a computing device of a user and partially executed on a remote computing device, or may be fully executed on a remote computing device or a server. In case of the remote computing device, the remote computing device may be connected to the computing device of a user by using any network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, connected to the external computing device through the Internet using an Internet service provider).
The embodiments in this specification are described in a progressive manner. Description of each of the embodiments focuses on differences from other embodiments, and reference may be made to each other for the same or similar parts of the embodiments. The foregoing descriptions of the disclosed embodiments enable a person skilled in the art to implement or use this application. Various changes to these embodiments are apparent to a person skilled in the art. The general concept defined herein may be implemented in other embodiments without departing from the spirit and scope of this application. Therefore, this application is not limited to the embodiments illustrated herein, but conforms to the broadest scope consistent with the principles and novel features disclosed in this application.
The specific embodiments of this application have been described above. It should be understood that this application is not limited to the specific foregoing embodiments. A person skilled in the art may make various variations or modifications within the scope of the claims, which does not affect the essential content of this application.
Number | Date | Country | Kind |
---|---|---|---|
202211362329.3 | Nov 2022 | CN | national |
202211363009.X | Nov 2022 | CN | national |
202211363010.2 | Nov 2022 | CN | national |
202211363022.5 | Nov 2022 | CN | national |