The subject disclosure relates to automotive sensor fusion.
A vehicle (e.g., automobile, truck, construction equipment, farm equipment, automated factory equipment) may include a number of sensors to provide information about the vehicle and the environment inside and outside the vehicle. For example, a radar system or lidar system may provide information about objects around the vehicle. As another example, a camera may be used to track a driver's eye movement to determine if drowsiness is a potential safety risk. Each sensor, individually, may be limited in providing a comprehensive assessment of the current safety risks. Accordingly, automotive sensor fusion may be desirable.
According to a first aspect, the invention provides a system to fuse sensor data in a vehicle, the system comprising: an image processor formed as a first system on a chip (SoC) and configured to process images obtained from outside the vehicle by a camera to classify and identify objects; a surround-view processor formed as a second SoC and configured to process close-in images obtained from outside the vehicle by a surround-view camera to classify and identify obstructions within a specified distance of the vehicle, wherein the close-in images are closer to the vehicle than the images obtained by the camera; an ultrasonic processor configured to obtain a distance to one or more of the obstructions; and a fusion processor formed as a microcontroller and configured to fuse information from the surround-view processor and the ultrasonic processor based on a speed of the vehicle being below a threshold value.
The surround-view processor also displays the obstructions that it identifies and classifies on a rear-view mirror of the vehicle.
A de-serializer provides the images obtained from outside the vehicle by the camera to the image processor and provides the close-in images obtained by the surround-view camera to the surround-view processor.
An interior camera obtains images of a driver of the vehicle, wherein the de-serializer provides the images of the driver to the image processor or to the surround-view processor for a determination of driver state, the driver state indicating fatigue, alertness, or distraction.
A communication port obtains data from additional sensors and provides the data from the additional sensors to the fusion processor. The additional sensors include a radar system or a lidar system, and the data from the additional sensors includes a range or angle to one or more of the objects.
The fusion processor fuses information from the image processor and the additional sensors based on the speed of the vehicle being above a second threshold value.
A power monitoring module supplies and monitors power to components of the system. The components include the image processor, the ultrasonic processor, and the fusion processor.
The fusion processor obtains map information and provides a result of the fusing, combined with the map information, to a display. The fusion processor also generates haptic outputs based on the result of the fusing.
The fusion processor provides information to an advanced driver assistance system.
The information from the fusion processor is used by the advanced driver assistance system to control operation of the vehicle.
According to a second aspect, the invention provides a method to fuse sensor data in a vehicle, the method comprising: obtaining images from outside the vehicle with a camera; processing the images from outside the vehicle using an image processor formed as a first system on a chip (SoC) to classify and identify objects; obtaining close-in images from outside the vehicle using a surround-view camera; processing the close-in images using a surround-view processor formed as a second SoC to identify and classify obstructions within a specified distance of the vehicle, the close-in images being closer to the vehicle than the images obtained by the camera; transmitting ultrasonic signals from ultrasonic sensors and receiving reflections; processing the reflections using an ultrasonic processor to obtain a distance to one or more of the obstructions; and fusing information from the surround-view processor and the ultrasonic processor using a fusion processor formed as a microcontroller based on a speed of the vehicle being below a threshold value.
The method may also include displaying the obstructions identified and classified by the surround-view processor on a rear-view mirror of the vehicle.
The method may also include providing the images obtained from outside the vehicle by the camera and the close-in images obtained by the surround-view camera to a de-serializer. The output of the de-serializer is provided to the image processor or to the surround-view processor.
The method also includes providing images of a driver of the vehicle, obtained from within the vehicle using an interior camera, to the de-serializer and providing the output of the de-serializer to the image processor or to the surround-view processor to determine driver state. The driver state indicates fatigue, alertness, or distraction.
The method also includes obtaining data from additional sensors using a communication port and providing the data from the additional sensors to the fusion processor. The additional sensors include a radar system or a lidar system, and the data from the additional sensors includes a range or angle to one or more of the objects.
The method also includes the fusion processor fusing information from the image processor and the additional sensors based on the speed of the vehicle being above a second threshold value.
The method also includes supplying and monitoring power to components of the system using a power monitoring module. The components include the image processor, the ultrasonic processor, and the fusion processor.
The method also includes the fusion processor obtaining map information and providing a result of the fusing combined with the map information to a display, and the fusion processor generating haptic outputs based on the result of the fusing.
The method also includes the fusion processor providing a result of the fusing to an advanced driver assistance system.
The method also includes the advanced driver assistance system using the result of the fusing from the fusion processor to control operation of the vehicle.
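Purely by way of illustration, the claimed flow may be sketched in a few lines of code; every name and value in the sketch below is a hypothetical placeholder of this illustration rather than claim language, and each function body is a trivial stand-in for the corresponding processor.

```python
# End-to-end sketch of the claimed fusion flow; all names and values are
# illustrative assumptions, not part of the disclosure.

def classify_objects(image):
    # Image processor (first SoC): classify and identify objects.
    return [{"label": "vehicle", "est_range_m": 40.0}]

def classify_obstructions(surround_images):
    # Surround-view processor (second SoC): close-in obstructions.
    return [{"label": "pillar", "est_range_m": 1.2}]

def ranges_from_echoes(round_trip_times_s):
    # Ultrasonic processor: time-of-flight to distance (~343 m/s in air).
    return [343.0 * t / 2.0 for t in round_trip_times_s]

def fuse_frame(image, surround_images, echoes, speed_kph, threshold_kph=10.0):
    # Fusion processor (microcontroller): below the speed threshold, fuse
    # the surround-view obstructions with the ultrasonic distances.
    if speed_kph < threshold_kph:
        return classify_obstructions(surround_images), ranges_from_echoes(echoes)
    return classify_objects(image), []
```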
Objects, advantages, and a fuller understanding of the invention may be had from the following detailed description and the accompanying drawings.
For a better understanding, reference may be made to the accompanying drawings. Components in the drawings are not necessarily to scale. Like reference numerals and other reference labels designate corresponding parts in the different views.
As previously noted, sensors may be used to provide information about a vehicle and the environment inside and outside the vehicle. Different types of sensors may be relied on to provide different types of information for use in autonomous or semi-autonomous vehicle operation. For example, radar or lidar systems may be used for object detection to identify, track, and avoid obstructions in the path of the vehicle. Cameras positioned to obtain images within the passenger cabin of the vehicle may be used to determine the number of occupants and driver behavior. Cameras positioned to obtain images outside the vehicle may be used to identify lane markings. The different types of information may be used to perform automated operations (e.g., collision avoidance, automated braking) or to provide driver alerts.
Embodiments of the inventive systems and methods detailed herein relate to automotive sensor fusion. Information from various sensors is processed and combined by a controller to obtain a comprehensive assessment of all conditions that may affect vehicle operation. That is, a situation that may not present a hazard by itself (e.g., the vehicle is close to a detected road-edge marking) may be deemed a hazard when coupled with other information (e.g., the driver is distracted). The action taken (e.g., driver alert, autonomous or semi-autonomous operation) is selected based on the comprehensive assessment.
The exemplary sensors shown for the vehicle 100 include cameras 120, surround-view cameras 130, an interior camera 140, ultrasonic sensors 150, a radar system 160, and a lidar system 170. The exemplary sensors and components shown are not intended to be limiting; other numbers and placements are possible according to alternate embodiments.
For example, there may be up to three cameras 120 and up to twelve ultrasonic sensors 150. The ultrasonic sensors 150 transmit ultrasonic signals outside the vehicle 100 and determine a distance to an object 101 based on the time-of-flight of the transmission and any reflection from the object 101. A comparison of the field of view FOV1 of the exemplary front-facing camera 120 with the field of view FOV2 of the exemplary surround-view camera 130 shown under the side-view mirror indicates that FOV2, associated with the surround-view camera 130, is closer to the vehicle 100 than FOV1.
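Purely by way of illustration, the time-of-flight computation may be expressed in a few lines; the function name and the speed-of-sound constant below are assumptions of this sketch, not part of the disclosure.

```python
# Time-of-flight distance sketch; the constant assumes sound in air at
# roughly 20 degrees Celsius. Names and values are illustrative only.
SPEED_OF_SOUND_M_PER_S = 343.0

def ultrasonic_distance_m(round_trip_time_s: float) -> float:
    """The pulse travels to the object and back, so the one-way
    distance is half the total path length."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_time_s / 2.0

# Example: a 10 ms round trip corresponds to about 1.7 m.
print(ultrasonic_distance_m(0.010))  # 1.715
```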
The image processor 210 and the surround-view processor 220 obtain de-serialized data from a de-serializer 250. The de-serialized data provided to the image processor 210 comes from the one or more cameras 120 and, optionally, one or more interior cameras 140. The image processor 210 may be implemented as a system on a chip (SoC) and may execute a machine learning algorithm to identify patterns in images from the one or more cameras 120 and, optionally, from the one or more interior cameras 140. The image processor 210 detects and identifies objects 101 in the vicinity of the vehicle 100 based on the de-serialized data from the one or more cameras 120. Exemplary objects 101 include lane markers, traffic signs, road markings, pedestrians, and other vehicles. Based on de-serialized data obtained from the one or more interior cameras 140, the image processor 210 may detect driver state. That is, the de-serialized data may be facial image data from the driver of the vehicle 100, and, based on this data, the image processor 210 may detect fatigue, drowsiness, or distraction. Information from the image processor 210 may be weighted more heavily by the fusion processor 200 (than information from other components) when the vehicle 100 is travelling at a speed exceeding a threshold (e.g., 30 kilometers per hour (kph)).
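As a minimal sketch of one way such a driver-state determination might be implemented, a common eye-aspect-ratio (EAR) heuristic is shown below; it assumes an upstream landmark detector already supplies one EAR value per frame, and the threshold and window length are illustrative assumptions rather than disclosed values.

```python
# Hypothetical drowsiness heuristic; threshold and window are assumptions.
from collections import deque

EAR_CLOSED_THRESHOLD = 0.21   # eyes treated as closed below this ratio
CLOSED_FRAME_WINDOW = 48      # about 1.6 s at 30 frames per second

class DrowsinessMonitor:
    def __init__(self) -> None:
        self._closed = deque(maxlen=CLOSED_FRAME_WINDOW)

    def update(self, eye_aspect_ratio: float) -> str:
        """Report 'drowsy' only when the eyes stay closed for the
        entire window; isolated blinks do not trigger it."""
        self._closed.append(eye_aspect_ratio < EAR_CLOSED_THRESHOLD)
        if len(self._closed) == self._closed.maxlen and all(self._closed):
            return "drowsy"
        return "alert"
```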
The de-serialized data provided to the surround-view processor 220 comes from the one or more surround-view cameras 130 and, optionally, one or more interior cameras 140. The surround-view processor 220, like the image processor 210, may be implemented as a SoC and may execute a machine learning algorithm to identify and report patterns. The surround-view processor 220 may stitch together the images from each of the surround-view cameras 130 to provide a surround-view (e.g., 360-degree) image. In addition to providing this image to the fusion processor 200, the surround-view processor 220 may also provide this image to a rear-view mirror display 260. As previously noted with reference to the image processor 210, when images from the one or more interior cameras 140 are provided to the surround-view processor 220, the surround-view processor 220 may detect driver state (e.g., fatigue, drowsiness, or distraction). Information from the surround-view processor 220 may be weighted more heavily by the fusion processor 200 (than information from other components) when the vehicle 100 is travelling at a speed below a threshold (e.g., 10 kph). The information from the surround-view processor 220 may be used during parking, for example.
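A minimal sketch of this speed-based weighting follows, using the exemplary 10 kph and 30 kph thresholds from the description; the weight values themselves are assumptions of the sketch.

```python
# Speed-gated source weighting; thresholds follow the exemplary values in
# the description, while the weights are illustrative assumptions.
def pipeline_weights(speed_kph: float) -> dict:
    if speed_kph < 10.0:
        # Low speed (e.g., parking): favor surround-view and ultrasonic.
        return {"image": 0.1, "surround_view": 0.5,
                "ultrasonic": 0.3, "radar_lidar": 0.1}
    if speed_kph > 30.0:
        # Higher speed: favor the forward cameras, radar, and lidar.
        return {"image": 0.4, "surround_view": 0.1,
                "ultrasonic": 0.1, "radar_lidar": 0.4}
    # Transition band: no source dominates.
    return {"image": 0.25, "surround_view": 0.25,
            "ultrasonic": 0.25, "radar_lidar": 0.25}
```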
The ultrasonic processor 230 obtains the distance to objects 101 in the vicinity of the vehicle 100 based on time-of-flight information obtained by ultrasonic sensors 150. The fusion processor 200 may correlate the objects 101 whose distance is obtained by the ultrasonic processor 230 with objects 101 identified by the surround-view processor 220 during low-speed scenarios such as parking, for example. Noise and other objects 101 that are not of interest may be filtered out based on the identification by the image processor 210 or surround-view processor 220. The communication port 240 obtains data from the radar system 160, lidar system 170, and any other sensors. Based on the information from the sensors, the communication port 240 may convey range, angle information, relative velocity, lidar images, and other information about objects 101 to the fusion processor 200.
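One illustrative way to correlate the ultrasonic ranges with the identified obstructions and discard noise is sketched below; the data layout and the 0.5 m association gate are assumptions of the sketch, not disclosed parameters.

```python
# Correlating ultrasonic ranges with obstructions identified by the
# surround-view pipeline; layout and gate are illustrative assumptions.
def correlate(ranges_m, obstructions, gate_m=0.5):
    """Attach the nearest ultrasonic range to each identified
    obstruction; ranges with no nearby identified obstruction are
    filtered out as noise or as objects not of interest."""
    fused = []
    for obs in obstructions:
        nearest = min(ranges_m,
                      key=lambda r: abs(r - obs["est_range_m"]),
                      default=None)
        if nearest is not None and abs(nearest - obs["est_range_m"]) <= gate_m:
            fused.append({**obs, "range_m": nearest})
    return fused
```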
The fusion processor 200 obtains map information 205 for the vehicle 100 in addition to the information from the processors of the controller 110. The fusion processor 200 may provide all the fused information (i.e., comprehensive information based on the fusion) to an advanced driver assistance system (ADAS) 275, according to an exemplary embodiment. This comprehensive information includes the objects 101 identified based on detections by the cameras 120 and surround-view cameras 130 as well as their distances based on the ultrasonic sensors 150, the driver state identified based on processing of images obtained by the interior camera 140, information from the other sensors (e.g., radar system 160, lidar system 170), and the map information 205. The information that is most relevant may depend on the speed of the vehicle 100, as previously noted. Generally, at higher speeds, information from the exterior cameras 120, radar system 160, and lidar system 170 may be most useful, while, at lower speeds, information from the surround-view cameras 130 and ultrasonic sensors 150 may be most useful. The interior cameras 140 and information about driver state may be relevant in any scenario, regardless of the speed of the vehicle 100.
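As one possible shape for the comprehensive fused output handed to the ADAS 275, a small data structure is sketched below; the field names are illustrative assumptions rather than a disclosed interface.

```python
# Illustrative container for the fused output; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class FusedFrame:
    speed_kph: float
    driver_state: str                                 # e.g., "alert", "drowsy", "distracted"
    objects: list = field(default_factory=list)       # classified objects with range/angle
    map_matches: list = field(default_factory=list)   # detected objects placed on the map
```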
Based on the comprehensive information, the ADAS 275 may provide an audio or visual output 270 (e.g., through the infotainment screen of the vehicle 100) of objects 101 indicated on the map. For example, the relative position of detected objects 101 to the vehicle 100 may be indicated on a map. The ADAS 275 may also provide haptic outputs 280. For example, based on the image processor 210 determining that images from one or more interior cameras 140 indicate driver inattention and also determining that images from one or more exterior cameras 120 indicate an upcoming hazard (e.g., object 101 in a path of the vehicle 100), the driver seat may be made to vibrate to alert the driver. The ADAS 275, which may be part of the controller 110, may additionally facilitate autonomous or semi-autonomous operation of the vehicle 100.
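The combined-condition alert described above can be sketched as a simple rule; the names below are placeholders standing in for the processor outputs, not disclosed interfaces.

```python
# Combined-condition haptic alert rule; argument names are assumptions.
def should_vibrate_seat(driver_state: str, hazard_ahead: bool) -> bool:
    """Neither inattention alone nor a hazard alone triggers the seat
    vibration; the combination of the two does."""
    return hazard_ahead and driver_state in ("drowsy", "fatigued", "distracted")
```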
According to alternate embodiments, the fusion processor 200 may perform the functionality discussed for the ADAS 275 itself. Thus, the fusion processor 200 may directly provide an audio or visual output 270 or control haptic outputs 280. The fusion processor 200 may implement machine learning to weight and fuse the information from the image processor 210, surround-view processor 220, ultrasonic processor 230, and communication port 240. The controller 110 also includes a power monitor 201. The power monitor 201 supplies power to the other components of the controller 110 and monitors that the correct power level is supplied to each component.
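A minimal sketch of weighting and fusing per-source hazard scores follows; in practice a learned model would supply the weights, so the interface and values here are assumptions of the sketch.

```python
# Weighted fusion of per-source hazard scores in [0, 1]; the fixed
# weights in the example are illustrative assumptions.
def fuse_scores(scores: dict, weights: dict) -> float:
    total = sum(weights.get(k, 0.0) for k in scores)
    if total == 0.0:
        return 0.0
    return sum(s * weights.get(k, 0.0) for k, s in scores.items()) / total

# Example: combine two pipeline scores with speed-gated weights.
print(fuse_scores({"image": 0.9, "surround_view": 0.2},
                  {"image": 0.4, "surround_view": 0.1}))  # 0.76
```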
At block 320, processing and fusing the data to obtain comprehensive information refers to using the various processors of the controller 110, as previously discussed.
What have been described above are examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.