Various examples relate to light sources and, more specifically but not exclusively, to light sources for lidar transmitters.
This section introduces aspects that may help facilitate a better understanding of the disclosure. Accordingly, the statements of this section are to be read in this light and are not to be understood as admissions about what is in the prior art or what is not in the prior art.
Light detection and ranging, known as lidar, is a remote-sensing technique that can be used to measure a variety of parameters, such as distance, velocity, and vibration, and also for high-resolution imaging. Compared to radio-frequency (RF) remote sensing, lidar is capable of providing a finer range resolution and a higher spatial resolution due to the use of a higher carrier frequency and the ability to generate a smaller focal spot size. Lidar systems are used in urban planning, hydraulic and hydrologic modeling, geology, forestry, fisheries and wildlife management, three-dimensional (3D) imaging, engineering, coastal management, atmospheric science, meteorology, navigation, autonomous driving, robotic and drone operations, and other applications.
Disclosed herein are, among other things, various aspects, features, and embodiments of a light source capable of scanning the field of view (FOV) thereof without employing moving parts. In an example, the light source includes an addressable two-dimensional (2D) array of semiconductor lasers provided with a monolithic optical adapter in which optical fibers or waveguides are spatially arranged in a fanout configuration to provide suitable sampling spots within a relatively large FOV suitable for lidar-based or structured-light-based 3D mapping. For example, by selectively firing different individual lasers of the laser array at different times in a raster pattern, the light source can effectively optically scan the FOV in a manner suitable for 3D mapping. Different optical adapters can beneficially be used to enable substantially the same laser array to optimally sample differently shaped FOVs typically encountered in different specific applications. Due to the non-mechanical scanning structure thereof, the light source is well suited for mobile-platform applications, such as autonomous driving.
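Purely for illustration, and not by way of limitation, the following Python sketch models this moving-part-free scanning: each element of a hypothetical R-by-C laser array corresponds, via the optical adapter, to a fixed direction in the FOV, and firing the elements one at a time in raster order sweeps the FOV. The array dimensions, the fire_laser() hook, and the dwell time are illustrative assumptions, not features of any particular embodiment.

    import time

    ROWS, COLS = 16, 64   # hypothetical array dimensions
    DWELL_S = 1e-5        # hypothetical per-spot dwell time, in seconds

    def fire_laser(row, col):
        # Placeholder: in a real device, this would route drive current
        # to the addressed element of the 2D laser array.
        pass

    def raster_scan():
        # Each (row, col) element is mapped by the optical adapter to a
        # fixed direction in the FOV, so firing the elements one at a
        # time sweeps the FOV with no moving parts.
        for row in range(ROWS):
            for col in range(COLS):
                fire_laser(row, col)
                time.sleep(DWELL_S)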
One example provides an apparatus that includes an array of lasers disposed along a substantially planar substrate. The apparatus further includes an optical adapter having a first surface and an opposite second surface. The first surface is adjacent and along the array of lasers. The optical adapter includes a plurality of optical waveguides, each of the optical waveguides having a respective first end at the first surface and a respective second end at the second surface. The plurality of optical waveguides is optically end-connected to the array of lasers. An end section of a first optical waveguide of the plurality of optical waveguides is oriented at a first nonzero angle with respect to a surface normal of the second surface. The end section of the first optical waveguide is adjacent to the respective second end thereof. An end section of a second optical waveguide of the plurality of optical waveguides is oriented at a different second nonzero angle with respect to the surface normal. The end section of the second optical waveguide is adjacent to the respective second end thereof.
Another example provides an optical method that includes the steps of: determining, via an electronic controller, a next laser to emit light in an array of lasers; routing, via a driver circuit, one or more firing voltages to the next laser to cause the next laser to emit an optical pulse through a respective set of one or more optical waveguides of an optical adapter having a first surface and an opposite second surface, the first surface being adjacent and along the array of lasers; and repeating the steps of determining and routing to cause different ones of the lasers in the array of lasers to emit respective optical pulses at different respective times. The optical adapter includes a plurality of the optical waveguides and each of the optical waveguides has a respective first end at the first surface of the optical adapter and a respective second end at the second surface of the optical adapter. The plurality of optical waveguides is optically end-connected to the array of lasers. An end section of a first optical waveguide of the plurality of optical waveguides is oriented at a first nonzero angle with respect to a surface normal of the second surface, the end section of the first optical waveguide being adjacent to the respective second end thereof. An end section of a second optical waveguide of the plurality of optical waveguides is oriented at a different second nonzero angle with respect to the surface normal, and the end section of the second optical waveguide is adjacent to the respective second end thereof.
Other aspects, features, embodiments, and benefits will become more fully apparent, by way of example, from the following detailed description and the accompanying drawings.
In the following description, numerous details are set forth, such as optical device configurations, timings, operations, and the like, in order to provide an understanding of one or more aspects of the present disclosure. It will be readily apparent to a person of ordinary skill in the pertinent art that these specific details are merely examples and not intended to limit the scope of this application.
The lidar transceiver 160 comprises a scanning light (e.g., laser) source 166 and an optical receiver 168. The laser source 166 operates to generate an optical-probe beam 172 that is directed toward the scene 198. Depending on the intended application, the lidar system 100 may have one or more lenses (not explicitly shown in the figures).
In some examples, the laser source 166 and the optical receiver 168 are implemented in the form of two physically separate and distinct system elements. In some other examples, the laser source 166 and the optical receiver 168 are integrated into a single component or device, e.g., an opto-electronic chip or assembly. In such instances, the driver circuit 150 may be configured to provide appropriate electrical currents and/or voltages to pertinent integrated circuit components of both the laser source 166 and the optical receiver 168.
In one example, the optical monitor 140 includes a laser-beam monitor 142 configured to measure the intensity (optical power) and other pertinent characteristics of the optical-probe beam 172. The measurement/monitoring results generated by the optical monitor 140 are directed, via the bus 102, to the processor/controller 110 and are processed therein to monitor the optical performance of the lidar transceiver 160 and, if needed, to implement configuration changes directed at maintaining the intended level of performance for the lidar system 100.
In a typical conventional lidar system, the scanning light source includes an opto-mechanical scanner in which a mechanically movable mirror or other mechanically movable optical element is used to scan the optical-probe beam across the FOV. In mobile lidar applications in which the corresponding lidar system is mounted on a moving platform, such as a moving car, the scanning light source is typically subjected to relatively strong mechanical vibrations and/or shocks caused by the movement of the platform. Such mechanical vibrations and/or shocks may cause an opto-mechanical scanner to break down in a relatively short period of time, e.g., a time period that is shorter than an average lifespan of the corresponding vehicle, thereby disadvantageously requiring repair or replacement of the lidar system.
The above-indicated problems in the state of the art can beneficially be addressed using at least some instances disclosed herein. More specifically, in one example, the laser source 166 is capable of optically scanning the FOV of the lidar system 100 without moving parts by employing an addressable 2D laser array coupled to an optical FOV adapter in which optical fibers or waveguides are spatially arranged to provide suitable sampling points within the FOV. By selectively firing different lasers of the laser array, different points of the FOV can be sampled at different times, thereby providing the requisite FOV-scanning capability without employing moving parts. Due to the non-mechanical scanning structure thereof, the laser source 166 can be engineered to beneficially withstand mechanical vibrations/shocks for a longer period of time than a functionally comparable opto-mechanical scanner, without breaking down. In addition, at least some embodiments of the laser source 166 may be cheaper to manufacture than the functionally comparable opto-mechanical scanners.
In an example embodiment, the laser array 210 is a substantially planar semiconductor device whose main plane is oriented parallel to the YZ-coordinate plane. Disposed along said main plane is a plurality of semiconductor lasers (not explicitly shown in the figures).
Herein, a “main plane” of an object, such as a die, a substrate, an integrated circuit (IC), or a printed circuit board (PCB), is a plane parallel to a substantially planar surface thereof that has approximately the largest area among the exterior surfaces of the object. This substantially planar surface may be referred to as a main surface. The exterior surfaces of the object that have one relatively large dimension, e.g., length, but a much smaller area, e.g., less than one quarter of the main-surface area, are typically referred to as the edges of the object.
To cause an individual one of the semiconductor lasers of the laser array 210 to emit light, that individual semiconductor laser is appropriately electrically biased and injected with electrical current. One or more electrical signals 252 generated by the driver circuit 150 are appropriately routed to different semiconductor lasers of the laser array 210 for this purpose. Such routing can be implemented, e.g., using circuitry similar to that used in display devices or in solid state memories. The routing of the one or more electrical signals 252 is controlled using one or more address-control signals 254 generated by the driver circuit 150 in response to the corresponding control input received from the controller 110. For example, the one or more address-control signals 254 are used to control the states of switches via which individual ones of the semiconductor lasers of the laser array 210 are connected to or disconnected from the electrical lines supplying the one or more electrical signals 252.
In operation, at a given time, the one or more address-control signals 254 are used to select for emission: (i) a single one of the semiconductor lasers of the laser array 210; (ii) a subset of two or more semiconductor lasers of the laser array 210; or (iii) all of the semiconductor lasers of the laser array 210. Depending on the injection-current waveform, the light emitted by the selected semiconductor laser(s) of the laser array 210 is pulsed light or (quasi) CW light. The one or more address-control signals 254 are changed from time to time to change the selected subset of the semiconductor lasers in the laser array 210.
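For illustration only, the following Python sketch models the address-control logic described above, with a boolean selection matrix standing in for the switches that connect individual lasers to the drive-signal lines; the class and method names are hypothetical and do not correspond to any actual circuit implementation of the driver circuit 150.

    class LaserArrayDriver:
        # Hypothetical model of the address-control logic: a boolean
        # selection matrix stands in for the switches that connect
        # individual lasers to the drive-signal lines.

        def __init__(self, rows, cols):
            self.rows, self.cols = rows, cols
            self.clear()

        def clear(self):
            self.selected = [[False] * self.cols for _ in range(self.rows)]

        def select_single(self, row, col):
            self.clear()
            self.selected[row][col] = True

        def select_subset(self, addresses):
            self.clear()
            for row, col in addresses:
                self.selected[row][col] = True

        def select_all(self):
            self.selected = [[True] * self.cols for _ in range(self.rows)]

        def drive(self):
            # Return the addresses of the elements that would receive
            # the injection current; stands in for actual emission.
            return [(r, c) for r in range(self.rows)
                    for c in range(self.cols) if self.selected[r][c]]

    driver = LaserArrayDriver(rows=4, cols=8)
    driver.select_single(2, 5)
    print(driver.drive())   # [(2, 5)]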
The FOV adapter 220 comprises a plurality of optical waveguides (e.g., optical fibers) 224, each being end-connected between a first surface 222 of the FOV adapter and an opposite second surface 226 of the FOV adapter. In one instance, the first and second surfaces 222, 226 of the FOV adapter 220 are substantially planar surfaces. The first surface 222 may be parallel to the second surface 226. The surfaces 222 and 226 of the FOV adapter 220 may also be parallel to the main plane of the laser array 210, i.e., to the YZ-coordinate plane.
For clarity, only three of the optical waveguides 224 (labeled 224_1, 224_n, and 224_N, respectively) of the FOV adapter 220 are explicitly shown in the figure.
The end of an individual optical waveguide 224 located at the first surface 222 of the FOV adapter 220 is optically end-connected to receive light from a corresponding one of the semiconductor lasers of the laser array 210. In operation, the individual optical waveguide 224 guides the received light to the other end thereof located at the second surface 226 of the FOV adapter 220. The guided light then exits through that end of the optical waveguide 224, is collimated, and is directed towards the directionally corresponding portion of an FOV 298. Depending on the specific configuration, a single optical waveguide 224 or a bundle of optical waveguides 224 is optically end-connected to receive light from a respective one of the semiconductor lasers of the laser array 210. In various examples, the number of optical waveguides 224 in such a bundle can range, e.g., from 2 to 100, or be even more than 100. In some examples, some or all of the optical waveguides 224 may be tapered such that the transverse size (e.g., the diameter) of the optical waveguide 224 is larger at the second surface 226 than at the first surface 222 of the FOV adapter 220.
End sections of different optical waveguides 224 located at the second surface 226 of the FOV adapter 220 may have different angles with respect to that surface. As an example, the end section of the optical waveguide 224_1 may be oriented at a nonzero angle α_1 with respect to the surface normal of the second surface 226.
The light emitted from the end of the optical waveguide 224_1 is collimated to form an optical-probe beam 272_1 near a boundary of the FOV 298. The optical axis of the optical-probe beam 272_1 is oriented at approximately the same angle α_1 with respect to the surface normal of the second surface 226 as the end section of the optical waveguide 224_1. The optical-probe beam 272_1 propagates towards the corresponding scene 198 in the FOV 298, illuminates a spot on a surface of an object within the scene, and undergoes specular and/or diffuse reflection thereat to form a corresponding reflected optical beam 180.
In various alternatives, other angular arrangements of the optical waveguides 224 in the FOV adapter 220 are also possible. For example, in one alternative, the optical waveguides 224_1, 224_n, and 224_N may be oriented as follows: (i) the end section of the optical waveguide 224_1 is orthogonal to the second surface 226; (ii) the end section of the optical waveguide 224_n is at a nonzero angle α_n with respect to the surface normal of the second surface 226; and (iii) the end section of the optical waveguide 224_N is at a larger nonzero angle α_N with respect to the surface normal of the second surface 226, i.e., α_1 = 0 and α_N > α_n > 0. For the optical waveguides located between the optical waveguides 224_1 and 224_N, the tilt angle gradually increases, in equal or unequal increments, from the zero angle α_1 to progressively larger angles as the position of the optical waveguide 224 gets farther from the first edge and closer to the second edge of the FOV adapter 220.
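As a non-limiting numerical illustration of the gradually increasing tilt described above, the following Python sketch computes per-waveguide tilt angles under the simplifying assumption of equal increments from α_1 = 0 to a maximum angle; the function name and the 40-degree span in the example are hypothetical.

    def tilt_angles(num_guides, alpha_max_deg):
        # Tilt angle of each waveguide end section with respect to the
        # surface normal of the second surface, assuming equal
        # increments from 0 (first waveguide) to alpha_max_deg (last
        # waveguide). Unequal increments are equally possible.
        if num_guides == 1:
            return [0.0]
        step = alpha_max_deg / (num_guides - 1)
        return [n * step for n in range(num_guides)]

    print(tilt_angles(5, 40.0))   # [0.0, 10.0, 20.0, 30.0, 40.0]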
In some alternatives, two or more light sources 166 are mounted side by side to increase the effective angular span of the FOV 298. The increase can be implemented for the lateral angular span only, for the vertical angular span only, or for both the lateral and vertical angular spans. The relative orientation of the two or more light sources 166 can be such that the FOVs of the individual light sources 166 partially overlap or are adjacent to one another without spatially overlapping.
Each spot within the FOV 298 represents a respective one of the optical-probe beams 272.
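For illustration only, the following Python sketch extends the one-dimensional tilt example above to two dimensions, mapping each laser address (row, column) to the direction of the corresponding spot in the FOV 298 under the simplifying assumption of a uniform angular grid centered on the surface normal; the function name and the angular spans are hypothetical.

    def spot_directions(rows, cols, az_span_deg, el_span_deg):
        # Map each laser address (row, col) to the direction (azimuth,
        # elevation), in degrees, of the corresponding probe beam,
        # assuming a uniform angular grid centered on the surface
        # normal. Requires rows >= 2 and cols >= 2.
        directions = {}
        for r in range(rows):
            for c in range(cols):
                el = (r / (rows - 1) - 0.5) * el_span_deg
                az = (c / (cols - 1) - 0.5) * az_span_deg
                directions[(r, c)] = (az, el)
        return directions

    grid = spot_directions(rows=16, cols=64, az_span_deg=120.0, el_span_deg=30.0)
    print(grid[(0, 0)])   # (-60.0, -15.0): one corner of the FOV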
The method 900 includes the controller 110 retrieving from the memory 120 a firing or scan sequence (in block 902). When executed, a firing or scan sequence causes the light source 166 to emit a sequence of laser pulses to sequentially illuminate different portions of the FOV 298. The memory 120 may have stored therein a plurality of different firing or scan sequences. One of said different firing or scan sequences may be selected (in block 902) based on the intended application of the laser source 166 and/or specific instance thereof.
A firing or scan sequence may typically specify the order in which different semiconductor lasers 402 will emit pulses of light. For example, one possible firing or scan sequence may be in accordance with a raster pattern that effectively causes the optical-probe beam 172 to sweep across the FOV 298 horizontally and vertically at an approximately steady rate. With the laser array 210, the raster pattern can be implemented by (i) sequentially firing individual semiconductor lasers 402 of the first row, e.g., starting from the left end of the row and moving toward the opposite end of the row; (ii) sequentially firing individual semiconductor lasers 402 of the second row, e.g., starting from the right end of the row and moving toward the opposite end of the row, and so on until all individual semiconductor lasers 402 of the last row are sequenced through. Other suitable firing or scan sequences and patterns may also be used and controlled using the controller 110. In some such sequences, a first subset of two or more individual semiconductor lasers 402 may be fired at the (same) first time, and then a different second subset of two or more individual semiconductor lasers 402 may be fired at the (same) second time, and so on.
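For illustration only, the following Python sketch generates the boustrophedon firing order described above (the first row fired left to right, the second row right to left, and so on); the function name is hypothetical.

    def serpentine_order(rows, cols):
        # Firing order for the raster pattern described above: the
        # first row is fired left to right, the second row right to
        # left, and so on (a boustrophedon sweep).
        order = []
        for r in range(rows):
            cols_in_row = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
            order.extend((r, c) for c in cols_in_row)
        return order

    print(serpentine_order(2, 4))
    # [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (1, 2), (1, 1), (1, 0)]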
The method 900 includes the controller 110 selecting (in block 904) a next one of the semiconductor lasers 402 of the laser source 166 based on the firing or scan sequence retrieved in block 902. The address (e.g., the row number and the column number) of the selected semiconductor laser 402 is then used (in block 904) to generate an appropriate address-control signal 254. The address-control signal 254 so generated causes the one or more electrical signals 252 generated by the driver circuit 150 to be routed to the selected laser 402.
The method 900 includes the controller 110 and the driver circuit 150 causing the selected semiconductor laser 402 to emit an optical pulse, e.g., by causing an appropriate current to be injected into the selected semiconductor laser 402 (in block 906). Depending on the implementation, the emitted optical pulse will be in the form of one or more optical-probe beams 272 emitted from the corresponding one or more optical waveguides 224.
The method 900 includes the controller 110 determining (in decision block 908) whether or not the firing or scan sequence selected in block 902 is completed. When the controller 110 determines that the firing or scan sequence is not completed, the processing of the method 900 is looped back to block 904. Otherwise, the processing of the method 900 is terminated.
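For illustration only, the loop of blocks 902-908 may be sketched in Python as follows; the memory and driver objects and their methods are hypothetical placeholders for the memory 120, the controller 110, and the driver circuit 150, not actual interfaces of any embodiment.

    def run_scan(memory, driver, sequence_name):
        # Block 902: retrieve the selected firing/scan sequence, e.g.,
        # a list of (row, col) laser addresses, from memory.
        sequence = memory.load_sequence(sequence_name)   # hypothetical API
        for address in sequence:
            # Block 904: select the next laser and generate the
            # corresponding address-control signal.
            driver.set_address(address)                  # hypothetical API
            # Block 906: inject the drive current so that the selected
            # laser emits an optical pulse.
            driver.fire()                                # hypothetical API
        # Block 908: the loop exits once the sequence is completed.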
The optical pulse emitted in block 906 of the method 900 will propagate toward the corresponding object within the FOV 298, and the corresponding reflected optical pulse 180 will be received by the receiver 168. The time difference between the optical-pulse emission from the laser source 166 and the reflected-pulse arrival at the receiver 168, often referred to as the time of flight (TOF), can then be determined and converted into depth information for the corresponding portion of the FOV 298. Therefore, by executing in full the firing or scan sequence selected in block 902, a depth map of the FOV 298 is generated by the system 100.
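For concreteness, the TOF-to-range conversion is a single formula, d = c · TOF / 2, where c is the speed of light and the factor 1/2 accounts for the round trip; the Python sketch below applies it to an illustrative, made-up pulse delay.

    C_M_PER_S = 299_792_458.0   # speed of light in vacuum

    def tof_to_range_m(tof_s):
        # The pulse travels to the object and back, hence the factor 1/2.
        return 0.5 * C_M_PER_S * tof_s

    print(tof_to_range_m(667e-9))   # ~100 m for a 667 ns round trip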
It is to be understood that the above description is intended to be illustrative and not restrictive. Many implementations and applications other than the examples provided would be apparent upon reading the above description. The scope should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the technologies discussed herein, and that the disclosed systems and methods will be incorporated into such future examples. In sum, it should be understood that the application is capable of modification and variation.
All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the technologies described herein unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter incorporate more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in fewer than all features of a single disclosed example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
While this disclosure includes references to illustrative examples, this specification is not intended to be construed in a limiting sense. For example, although various instances of the laser source 166 are described above in reference to the lidar system 100, the invention(s) disclosed herein are not so limited. For instance, at least some instances of the laser source 166 can also be used for various “structured light” measurements. Various modifications of the described aspects, features, and examples are possible.
Herein, the term “structured light” refers to a 3D sensing technique in which the scene (e.g., 198) is illuminated with a specially designed 2D spatially varying intensity pattern, e.g., a pattern of spots generated by a suitable light source (e.g., 166). An imaging sensor (e.g., 168) is used to acquire a 2D image of the scene under the structured-light illumination. When the scene is a planar surface without any 3D surface variations, the pattern in the acquired image is undistorted and is similar to the projected structured-light pattern. However, when the surface in the scene is nonplanar, the geometric shape of the surface distorts the projected structured-light pattern for the imaging sensor. A suitable structured-light image-processing algorithm (e.g., run by 110, 120) can then be used to extract the 3D surface shape based on the detected distortion of the projected structured-light pattern.
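As a non-limiting illustration of the underlying geometry (and not of any particular structured-light algorithm used by an embodiment), the depth of a projected spot can be triangulated from its lateral shift in the acquired image, assuming a rectified projector-camera pair with focal length f (in pixels) and baseline b (in meters); all values in the Python sketch below are made up.

    def spot_depth_m(focal_px, baseline_m, disparity_px):
        # Classic triangulation: Z = f * b / d, where d is the lateral
        # shift (in pixels) of a projected spot relative to its position
        # on a reference plane. Assumes a rectified projector-camera pair.
        return focal_px * baseline_m / disparity_px

    print(spot_depth_m(800.0, 0.06, 12.0))   # 4.0 m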
Unless explicitly stated otherwise, each numerical value and range should be interpreted as being approximate as if the word “about” or “approximately” preceded the value or range.
The use of figure numbers and/or figure reference labels (if any) in the claims is intended to identify one or more possible embodiments of the claimed subject matter in order to facilitate the interpretation of the claims. Such use is not to be construed as necessarily limiting the scope of those claims to the embodiments shown in the corresponding figures.
Although the elements in the following method claims, if any, are recited in a particular sequence with corresponding labeling, unless the claim recitations otherwise imply a particular sequence for implementing some or all of those elements, those elements are not necessarily intended to be limited to being implemented in that particular sequence.
Unless otherwise specified herein, the use of the ordinal adjectives “first,” “second,” “third,” etc., to refer to an object of a plurality of like objects merely indicates that different instances of such like objects are being referred to, and is not intended to imply that the like objects so referred-to have to be in a corresponding order or sequence, either temporally, spatially, in ranking, or in any other manner.
Unless otherwise specified herein, in addition to its plain meaning, the conjunction “if” may also or alternatively be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” which construal may depend on the corresponding specific context. For example, the phrase “if it is determined” or “if [a stated condition] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event].”
Also for purposes of this description, the terms “couple,” “coupling,” “coupled,” “connect,” “connecting,” or “connected” refer to any manner known in the art or later developed in which energy is allowed to be transferred between two or more elements, and the interposition of one or more additional elements is contemplated, although not required. Conversely, the terms “directly coupled,” “directly connected,” etc., imply the absence of such additional elements.
The functions of the various elements shown in the figures, including any functional blocks labeled as “processors” and/or “controllers,” may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and nonvolatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
As used in this application, the terms “circuit” and “circuitry” may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry); (b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation. This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
It should be appreciated by those of ordinary skill in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.