An autonomous vehicle is a motorized vehicle that can operate without a human driver. An exemplary autonomous vehicle includes a plurality of sensor systems, such as, but not limited to, a lidar sensor system, a camera sensor system, and a radar sensor system, amongst others. The autonomous vehicle operates based upon sensor signals output by the sensor systems.
The differing types of sensor systems included in an autonomous vehicle can enhance overall performance of the autonomous vehicle. For instance, a first type of sensor system may be better suited than a second type of sensor system to detect a first type of object or event in an environment nearby the autonomous vehicle, whereas the second type of sensor system may be better suited than the first type of sensor system to detect a second type of object or event in the environment nearby the autonomous vehicle. Moreover, the differing types of sensor systems can provide redundancy for the autonomous vehicle.
While conventional autonomous vehicles oftentimes include different types of sensor systems, the different sensor systems are typically separate within the autonomous vehicle. The separate sensor systems can occupy a significant amount of space within the autonomous vehicle and add to the overall system cost.
The following is a brief summary of subject matter that is described in greater detail herein. This summary is not intended to be limiting as to the scope of the claims.
Described herein are various technologies that pertain to a fused camera and lidar system for an autonomous vehicle. The fused camera and lidar system can include a fused receiver. The fused receiver can include optics configured to receive a received electromagnetic signal from an environment nearby the fused camera and lidar system (e.g., nearby the autonomous vehicle that includes the fused camera and lidar system). The fused receiver can further include a beam splitter configured to split the received electromagnetic signal into a first split electromagnetic signal and a second split electromagnetic signal. The first split electromagnetic signal can include wavelengths in a visible spectrum and the second split electromagnetic signal can include wavelengths in an infrared spectrum. The beam splitter, for example, can include a dichroic coated mirror. Moreover, the fused receiver can include a camera pipeline and a lidar pipeline. The first split electromagnetic signal can be inputted to the camera pipeline. The camera pipeline can be configured to generate image data based on the first split electromagnetic signal. Further, the second split electromagnetic signal can be inputted to the lidar pipeline. The lidar pipeline can be configured to generate lidar data based on the second split electromagnetic signal. The fused camera and lidar system can further include a lidar transmitter configured to transmit a transmitted electromagnetic signal into the environment nearby the fused camera and lidar system. A portion of the received electromagnetic signal received by the optics of the fused receiver can correspond to the transmitted electromagnetic signal.
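By way of a non-limiting illustration, the following Python sketch models the receive path summarized above as a wavelength-based routing step. The names, the 700 nanometer cutoff, and the sample wavelengths are assumptions made for the example, not features required by the described system.

```python
# Illustrative sketch only: the receive path modeled as wavelength-based
# routing. All names and values here are hypothetical.
from dataclasses import dataclass


@dataclass
class Component:
    wavelength_nm: float  # wavelength of one spectral component
    intensity: float      # arbitrary units


def split_received_signal(components, cutoff_nm=700.0):
    """Model the beam splitter: visible components feed the camera
    pipeline, infrared components feed the lidar pipeline."""
    to_camera = [c for c in components if c.wavelength_nm < cutoff_nm]
    to_lidar = [c for c in components if c.wavelength_nm >= cutoff_nm]
    return to_camera, to_lidar


received = [Component(550.0, 1.0), Component(905.0, 0.8)]  # scene light + lidar return
to_camera, to_lidar = split_received_signal(received)
print(len(to_camera), len(to_lidar))  # -> 1 1
```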
According to various embodiments, the camera pipeline can include a camera detector and the lidar pipeline can include a lidar detector. The camera detector can include a complementary metal-oxide-semiconductor (CMOS) image sensor. Moreover, the lidar detector can include a single-photon avalanche diode (SPAD) array. The first split electromagnetic signal can be directed to illuminate the camera detector. Further, the second split electromagnetic signal can be directed to illuminate the lidar detector.
According to various embodiments, the fused camera and lidar system can include a shared processing system. The shared processing system can be configured to process both the image data and the lidar data. Moreover, according to other embodiments, it is contemplated that the fused camera and lidar system can include a first processing system configured to process the image data and a separate, second processing system configured to process the lidar data.
Pursuant to various embodiments, the fused receiver can include a frontend device. The frontend device can include the optics, the beam splitter, a waveguide, a first output port, and a second output port. The waveguide can be between the optics and the beam splitter. Moreover, the first output port can be configured to output the first split electromagnetic signal, and the second output port can be configured to output the second split electromagnetic signal. Thus, according to such embodiments, the optics, the beam splitter, the waveguide, the first output port, and the second output port can be integrated as part of a single device.
The above summary presents a simplified summary in order to provide a basic understanding of some aspects of the systems and/or methods discussed herein. This summary is not an extensive overview of the systems and/or methods discussed herein. It is not intended to identify key/critical elements or to delineate the scope of such systems and/or methods. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
Various technologies pertaining to a fused camera and lidar system for an autonomous vehicle are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that such aspect(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more aspects. Further, it is to be understood that functionality that is described as being carried out by certain system components may be performed by multiple components. Similarly, for instance, a single component may be configured to perform functionality that is described as being carried out by multiple components.
Moreover, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from the context, the phrase “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, the phrase “X employs A or B” is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form.
As used herein, the terms “component” and “system” are intended to encompass computer-readable data storage that is configured with computer-executable instructions that cause certain functionality to be performed when executed by a processor. The computer-executable instructions may include a routine, a function, or the like. It is also to be understood that a component or system may be localized on a single device or distributed across several devices. Further, as used herein, the term “exemplary” is intended to mean “serving as an illustration or example of something.”
As described herein, one aspect of the present technology is the gathering and use of data available from various sources to improve quality and experience. The present disclosure contemplates that in some instances, this gathered data may include personal information. The present disclosure contemplates that the entities involved with such personal information respect and value privacy policies and practices.
Referring now to the drawings,
The fused receiver 104 can include optics 110, a beam splitter 112, a camera pipeline 114, and a lidar pipeline 116. The optics 110 can be configured to receive the received electromagnetic signal 108 from the environment nearby the fused camera and lidar system 100. The optics 110 can include a single lens assembly that receives the received electromagnetic signal 108. Thus, the single lens assembly can be used to receive electromagnetic radiation having wavelengths in both the visible spectrum and the infrared spectrum.
Moreover, the beam splitter 112 can be configured to split the received electromagnetic signal 108 into a first split electromagnetic signal and a second split electromagnetic signal. The first split electromagnetic signal can include wavelengths in a visible spectrum (e.g., 400 to 700 nanometers). Further, the second split electromagnetic signal can include wavelengths in an infrared spectrum (e.g., 700 nanometers to 1 millimeter, 850 to 940 nanometers, etc.). The beam splitter 112, for example, can include a dichroic coated mirror. Following this example, the dichroic coated mirror can allow a portion of the received electromagnetic signal with wavelengths above a threshold wavelength to pass therethrough, while a differing portion of the received electromagnetic signal with wavelengths below the threshold wavelength can be reflected by the dichroic coated mirror. Thus, the dichroic coated mirror can split the received electromagnetic signal 108 based on the wavelength of the electromagnetic radiation with minimal transmission loss.
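As a rough numerical illustration of this longpass behavior (a toy model, not a characterization of any particular coating), the transmitted and reflected fractions can be approximated with a logistic transition around an assumed cutoff wavelength:

```python
# Toy model of a dichroic longpass mirror: wavelengths above the cutoff
# transmit, wavelengths below reflect. The cutoff and edge width are
# assumed example values, not measured coating parameters.
import math


def dichroic_longpass(wavelength_nm, cutoff_nm=700.0, edge_nm=10.0):
    """Return (transmitted_fraction, reflected_fraction) for one wavelength."""
    t = 1.0 / (1.0 + math.exp(-(wavelength_nm - cutoff_nm) / edge_nm))
    return t, 1.0 - t  # lossless idealization: the fractions sum to 1


for wl in (550.0, 700.0, 905.0):
    t, r = dichroic_longpass(wl)
    print(f"{wl:6.1f} nm -> transmit {t:.3f}, reflect {r:.3f}")
```

In this idealization, 550 nanometer light is almost entirely reflected toward the camera pipeline 114 and 905 nanometer light is almost entirely transmitted toward the lidar pipeline 116, consistent with the minimal-loss split described above.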
The first split electromagnetic signal can be inputted to the camera pipeline 114. The camera pipeline 114 can be configured to generate image data based on the first split electromagnetic signal (including the wavelengths in the visible spectrum). The camera pipeline 114, for instance, can include a camera detector. Accordingly, the first split electromagnetic signal can be directed to illuminate the camera detector. Moreover, the camera pipeline 114 can generate the image data based on the first split electromagnetic signal that illuminates the camera detector.
The second split electromagnetic signal can be inputted to the lidar pipeline 116. The lidar pipeline 116 can be configured to generate lidar data based on the second split electromagnetic signal (including the wavelengths in the infrared spectrum). For example, the lidar pipeline 116 can include a lidar detector. Accordingly, the second split electromagnetic signal can be directed to illuminate the lidar detector. The lidar pipeline 116 can generate the lidar data based on the second split electromagnetic signal that illuminates the lidar detector.
Although not depicted, it is contemplated that the lidar transmitter 102 can include various components. For instance, the lidar transmitter 102 can include a laser, a modulator, a resonator, frontend optics, and the like. The laser, for example, can be a semiconductor laser, a laser diode, or the like. Moreover, the lidar transmitter 102 and the lidar pipeline 116 (as well as the optics 110 and the beam splitter 112) can provide a time of flight (TOF) lidar system, a frequency modulated continuous wave (FMCW) lidar system, or the like as part of the fused camera and lidar system 100.
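For reference, a direct TOF system recovers range from the round-trip delay of the transmitted signal as range = c * t / 2, since the signal travels to the target and back. A minimal sketch, using an assumed example delay:

```python
# Direct time-of-flight range calculation: the transmitted pulse travels
# to the target and back, so the one-way range is half the round trip.
C_M_PER_S = 299_792_458  # speed of light in vacuum


def tof_range_m(round_trip_s: float) -> float:
    return C_M_PER_S * round_trip_s / 2.0


# An assumed 400 nanosecond round trip corresponds to a target ~60 m away.
print(tof_range_m(400e-9))  # -> 59.9584916
```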
The fused camera and lidar system 100 can integrate a camera sensor system and a lidar sensor system into a single unit. Accordingly, the fused camera and lidar system 100 can include fewer parts as compared to conventional approaches that include a separate camera sensor system and lidar sensor system, since common receive optics 110 can be used in the fused camera and lidar system 100. The beam splitter 112 can enable directing electromagnetic radiation having the appropriate wavelengths to the camera pipeline 114 and the lidar pipeline 116.
Now turning to
Again, the optics 110 are configured to receive the received electromagnetic signal from the environment nearby the fused receiver 104. Moreover, the beam splitter 112 can be configured to split the received electromagnetic signal into the first split electromagnetic signal and the second split electromagnetic signal. The first split electromagnetic signal, which comprises wavelengths in the visible spectrum, can be directed to illuminate the camera detector 200. Moreover, the second split electromagnetic signal, which includes wavelengths in the infrared spectrum, can be directed to illuminate the lidar detector 202. According to an example, the camera detector 200 can include a complementary metal-oxide-semiconductor (CMOS) image sensor. The lidar detector 202, for example, can include a single-photon avalanche diode (SPAD) array. However, the claimed subject matter is not limited to the foregoing examples.
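To give a sense (purely as a toy model with assumed parameter values) of how a SPAD pixel differs from a linear image-sensor pixel, the detections registered in an integration window can be modeled as Poisson-distributed counts of individual photons:

```python
# Toy model of a SPAD pixel: photon arrivals are discrete events, so counts
# in an integration window are Poisson-distributed. The photon detection
# probability and dark count rate are assumed examples; effects such as
# dead time are ignored.
import numpy as np

rng = np.random.default_rng(0)


def spad_counts(photon_rate_hz, window_s, pdp=0.2, dark_rate_hz=100.0):
    """Mean detections = (signal rate * PDP + dark rate) over the window."""
    mean = (photon_rate_hz * pdp + dark_rate_hz) * window_s
    return rng.poisson(mean)


print(spad_counts(photon_rate_hz=1e6, window_s=1e-3))  # ~200 counts on average
```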
In the example of
According to various examples, the shared processing system 204 can include an application-specific integrated circuit (ASIC) configured to process both raw data streams from the camera detector 200 and the lidar detector 202. In accordance with another example, the shared processing system 204 can include a field programmable gate array (FPGA) configured to process both raw data streams from the camera detector 200 and the lidar detector 202.
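A software analogy for this shared arrangement (the passage above describes an ASIC or FPGA; the queue names and the processing stand-in below are hypothetical) is a single consumer that services both raw streams:

```python
# Software analogy only: one consumer services both the camera stream and
# the lidar stream, tagging results by source. The real shared processing
# system described above is an ASIC or FPGA.
from queue import Empty, Queue

camera_raw: Queue = Queue()
lidar_raw: Queue = Queue()


def service_streams():
    """Drain one item (if present) from each raw stream."""
    results = []
    for source, stream in (("image", camera_raw), ("lidar", lidar_raw)):
        try:
            frame = stream.get_nowait()
        except Empty:
            continue
        results.append((source, frame))  # stand-in for real signal processing
    return results


camera_raw.put([0.1, 0.2])  # raw pixels from the camera detector 200
lidar_raw.put([12.5])       # raw returns from the lidar detector 202
print(service_streams())    # -> [('image', [0.1, 0.2]), ('lidar', [12.5])]
```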
With reference to
In the embodiment shown in
Turning to
As opposed to being discrete elements, the optics 110, the waveguide 402, the beam splitter 112, the first output port 404, and the second output port 406 can be integrated into a single device, namely, the frontend device 400, in the embodiment shown in
Reference is now generally made to
Turning to
The autonomous vehicle 500 further includes several mechanical systems that are used to effectuate appropriate motion of the autonomous vehicle 500. For instance, the mechanical systems can include, but are not limited to, a vehicle propulsion system 504, a braking system 506, and a steering system 508. The vehicle propulsion system 504 may include an electric motor or a combustion engine. The braking system 506 can include an engine brake, brake pads, actuators, and/or any other suitable componentry that is configured to assist in decelerating the autonomous vehicle 500. The steering system 508 includes suitable componentry that is configured to control the direction of movement of the autonomous vehicle 500.
The autonomous vehicle 500 additionally includes a computing system 510 that is in communication with the sensor systems 100 and 502, the vehicle propulsion system 504, the braking system 506, and the steering system 508. The computing system 510 includes a processor 512 and memory 514; the memory 514 includes computer-executable instructions that are executed by the processor 512. Pursuant to various examples, the processor 512 can be or include a graphics processing unit (GPU), a plurality of GPUs, a central processing unit (CPU), a plurality of CPUs, an application-specific integrated circuit (ASIC), a microcontroller, a programmable logic controller (PLC), a field programmable gate array (FPGA), or the like.
The memory 514 of the computing system 510 can include a localization system 516, a perception system 518, a planning system 520, and a control system 522. The localization system 516 can be configured to determine a local position of the autonomous vehicle 500. The perception system 518 can be configured to perceive objects nearby the autonomous vehicle 500 (e.g., based on outputs from the sensor systems 100 and 502). For instance, the perception system 518 can detect, classify, and predict behaviors of objects nearby the autonomous vehicle 500. The perception system 518 (and/or differing system(s) included in the memory 514) can track the objects nearby the autonomous vehicle 500 and/or make predictions with respect to the environment in which the autonomous vehicle 500 is operating (e.g., predict the behaviors of the objects nearby the autonomous vehicle 500). Further, the planning system 520 can plan motion of the autonomous vehicle 500. Moreover, the control system 522 can be configured to control at least one of the mechanical systems of the autonomous vehicle 500 (e.g., at least one of the vehicle propulsion system 504, the braking system 506, and/or the steering system 508).
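As a schematic of how these systems relate (illustrative stubs only, not the actual implementation of any of the systems named above), one pass through the stack might look like:

```python
# Illustrative stubs only: a toy pass through localization, perception,
# planning, and control. The real systems are far more involved.
def localize(sensor_outputs):
    return sensor_outputs.get("pose", (0.0, 0.0, 0.0))  # localization system 516


def perceive(sensor_outputs, pose):
    return sensor_outputs.get("detections", [])  # perception system 518


def plan(pose, objects):
    return "slow" if objects else "cruise"  # planning system 520 (toy decision)


def control(trajectory):
    # Control system 522: map the plan to propulsion/braking commands.
    return {"throttle": 0.4 if trajectory == "cruise" else 0.0,
            "brake": trajectory == "slow"}


def autonomy_step(sensor_outputs):
    pose = localize(sensor_outputs)
    objects = perceive(sensor_outputs, pose)
    return control(plan(pose, objects))


print(autonomy_step({"detections": ["pedestrian"]}))
# -> {'throttle': 0.0, 'brake': True}
```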
Referring now to
The computing device 600 additionally includes a data store 608 that is accessible by the processor 602 by way of the system bus 606. The data store 608 may include executable instructions, various data, etc. The computing device 600 also includes an input interface 610 that allows external devices to communicate with the computing device 600. For instance, the input interface 610 may be used to receive instructions from an external computer device, etc. The computing device 600 also includes an output interface 612 that interfaces the computing device 600 with one or more external devices. For example, the computing device 600 may transmit control signals to the vehicle propulsion system 504, the braking system 506, and/or the steering system 508 by way of the output interface 612.
Additionally, while illustrated as a single system, it is to be understood that the computing device 600 may be a distributed system. Thus, for instance, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device 600.
Various functions described herein can be implemented in hardware, software, or any combination thereof. If implemented in software, the functions can be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer-readable storage media. Computer-readable storage media can be any available storage media that can be accessed by a computer. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc (BD), where disks usually reproduce data magnetically and discs usually reproduce data optically with lasers. Further, a propagated signal is not included within the scope of computer-readable storage media. Computer-readable media also includes communication media including any medium that facilitates transfer of a computer program from one place to another. A connection, for instance, can be a communication medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of communication medium. Combinations of the above should also be included within the scope of computer-readable media.
Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable modification and alteration of the above devices or methodologies for purposes of describing the aforementioned aspects, but one of ordinary skill in the art can recognize that many further modifications and permutations of various aspects are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.