This document relates to sensors for an autonomous vehicle, and specifically, the configuration, placement, and orientation of autonomous vehicle sensors.
Autonomous vehicle navigation is a technology that can control the autonomous vehicle to safely navigate towards a destination. A prerequisite for safe navigation and control of the autonomous vehicle is the ability to sense the position and movement of vehicles and other objects around the autonomous vehicle, such that the autonomous vehicle can be operated to avoid collisions with those vehicles or other objects. Thus, autonomous operation of a vehicle requires multiple sensors, located on the vehicle, that can be used for detecting objects external to the vehicle.
This patent document discloses example embodiments for providing full and redundant sensor coverage for an environment surrounding a vehicle. Example embodiments provide configurations of multiple sensors, including cameras, located on a vehicle for capturing a 360 degree environment of the vehicle, with certain sensors being redundant to others at least for improved object detection and tracking at high speeds. In some embodiments, sensor configurations capture the 360 degree environment surrounding the vehicle for up to 500 meters, 800 meters, 1000 meters, 1200 meters, or 1500 meters away from the vehicle. For example, various embodiments described herein may be used with an autonomous vehicle (e.g., for autonomous operation of a vehicle) to detect objects located outside of the autonomous vehicle, to track objects as the objects and/or the autonomous vehicle move relative to each other, to estimate distances between the autonomous vehicle and objects, and/or to provide continued operation in events of failure of individual sensors. Embodiments disclosed herein enable lane marking detection and traffic sign/light detection for autonomous operation of a vehicle.
In one exemplary aspect of the present disclosure, an autonomous vehicle is provided. The autonomous vehicle includes a plurality of first cameras associated with a first field-of-view (FOV) having a first horizontal aspect. The autonomous vehicle further includes a plurality of second cameras associated with a second FOV having a second horizontal aspect. The first cameras and the second cameras are located at different angular locations on the autonomous vehicle along a horizontal plane. The horizontal aspects of two fields-of-view of any two consecutive cameras located along the horizontal plane overlap in the horizontal plane by at least a predetermined number of degrees.
In another exemplary aspect, a sensor network for an autonomous vehicle is provided. The sensor network includes a plurality of first cameras associated with a first field-of-view (FOV) having a first horizontal aspect. The sensor network further includes a plurality of second cameras associated with a second FOV having a second horizontal aspect. The first cameras and the second cameras are located at different angular locations on the autonomous vehicle along a horizontal plane. The horizontal aspects of two fields-of-view of any two consecutive cameras located along the horizontal plane overlap in the horizontal plane by at least a predetermined number of degrees.
In yet another exemplary aspect, a system for operating an autonomous vehicle is provided. The system includes a processor communicatively coupled with and configured to receive image data from a plurality of first cameras and a plurality of second cameras. The first cameras are associated with a first FOV having a first horizontal aspect, and the second cameras are associated with a second FOV having a second horizontal aspect. The first cameras and the second cameras are located at different angular locations on the autonomous vehicle along a horizontal plane. The horizontal aspects of two fields-of-view of any two consecutive cameras located along the horizontal plane overlap in the horizontal plane by at least a predetermined number of degrees.
In yet another exemplary aspect, a method for operating an autonomous vehicle is provided. The method includes receiving image data from a sensor network. The sensor network includes a plurality of first cameras associated with a first field-of-view (FOV) having a first horizontal aspect and a plurality of second cameras associated with a second FOV having a second horizontal aspect. The first cameras and the second cameras are located at different angular locations on the autonomous vehicle along a horizontal plane. Horizontal aspects of two fields-of-view of any two consecutive cameras located along the horizontal plane overlap in the horizontal plane by at least a predetermined number of degrees. The method further includes detecting one or more objects located outside of the autonomous vehicle based on the image data. The method further includes determining a trajectory for the autonomous vehicle based on the detection of the one or more objects. The method further includes causing the autonomous vehicle to travel in accordance with the trajectory.
In yet another exemplary aspect, an autonomous truck is disclosed. The autonomous truck includes a controller configured to control autonomous driving operation of the truck. The autonomous truck includes a sensor network including at least six sensors disposed on an exterior of the truck. Each sensor is oriented to capture sensor data from a corresponding directional beam having a corresponding beam width and a corresponding beam depth such that beam widths of the at least six sensors cover a surrounding region of the truck that is relevant to safe autonomous driving of the truck.
The above and other aspects and their implementations are described in greater detail in the drawings, the descriptions, and the claims.
Development of autonomous driving technology hinges on the ability to detect and be aware of the surrounding environment of a vehicle. In a conventional vehicle without autonomous driving capabilities, a human operator or driver visually collects information about the surrounding environment and intuitively interprets the visually-collected information to operate the vehicle. In conventional vehicles relying upon human operation, human operators are limited to a single field-of-view and must physically move to visually observe wider orientations.
To observe and collect data on the surrounding environment, an autonomous vehicle may include multiple sensors, including cameras and light detection and ranging (LiDAR) sensors, located on the autonomous vehicle. Various technical challenges have stood in the way of autonomous systems reaching full environmental awareness or human-level awareness of the environment surrounding the autonomous vehicle. For example, blind spots or gaps in sensor coverage may exist in some existing approaches, and further, resource costs such as communication bandwidth and data storage may prevent an exceedingly large number of sensors from being implemented. Further, autonomous vehicles may operate in high-speed environments in which objects are in motion relative to an autonomous vehicle at a high speed, and such objects moving at high speeds may go undetected by inadequate existing approaches. Even further, some existing approaches are vulnerable to localized physical damage that can cause failure in a significant number of sensors located on a vehicle, and individual failures of sensors may result in significant portions of the environment going undetected.
Thus, to address at least the above-identified technical issues, this patent document describes sensor configurations and layouts that are optimized and configured to provide enhanced environmental awareness for an autonomous vehicle. In example embodiments, sensor configurations and layouts refer to configurations of position and orientation of cameras located along an exterior of the vehicle. The cameras include long range cameras, medium range cameras, short range cameras, wide-angle/fisheye cameras, and infrared cameras. As such, example configurations and layouts include a heterogeneous set of camera types.
In particular, various embodiments described herein are configured to provide vision in 360 degrees surrounding the autonomous vehicle. According to various embodiments described herein, sensors are configured (e.g., located and oriented) such that the fields-of-view (FOVs) of the sensors overlap with each other by at least a predetermined amount. For example, in some embodiments, the FOVs of the sensors overlap with each other by at least 15 degrees in a horizontal aspect. In some embodiments, FOVs of the sensors overlap horizontally by at least 10 degrees, at least 12 degrees, at least 15 degrees, at least 17 degrees, or at least 20 degrees. In some embodiments, the FOVs of the sensors may overlap by a predetermined amount defined as a percentage of area covered.
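For illustration only, and not as part of the disclosed embodiments, the following sketch shows one way such a minimum-horizontal-overlap requirement could be checked for a ring of cameras; the camera azimuths, FOV widths, and the 15-degree threshold are hypothetical values.

```python
# Illustrative sketch (hypothetical layout): verifying that every pair of
# consecutive cameras, each described by a mounting azimuth and a horizontal
# FOV width in degrees, overlaps by at least a minimum number of degrees.

def horizontal_overlap_deg(az_a, fov_a, az_b, fov_b):
    """Angular overlap (degrees) of two horizontal FOVs centered at az_a and az_b."""
    separation = abs((az_a - az_b + 180.0) % 360.0 - 180.0)  # smallest angle between camera axes
    return (fov_a + fov_b) / 2.0 - separation

# Hypothetical ring: (azimuth in degrees from forward, horizontal FOV in degrees)
cameras = [(0, 60), (45, 70), (90, 70), (135, 70), (180, 70), (-135, 70), (-90, 70), (-45, 70)]

MIN_OVERLAP_DEG = 15.0  # example threshold; other embodiments use 10, 12, 17, or 20 degrees
for a, b in zip(cameras, cameras[1:] + cameras[:1]):  # consecutive pairs, including the wrap-around
    overlap = horizontal_overlap_deg(a[0], a[1], b[0], b[1])
    assert overlap >= MIN_OVERLAP_DEG, f"coverage gap between cameras at {a[0]} and {b[0]} degrees"
```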
Example embodiments include sensors with different sensor ranges, such that environmental awareness at different distances from the autonomous vehicle is provided in addition to the enhanced environmental awareness at 360 degrees of orientations about the autonomous vehicle. With overlapped FOVs, the sensors can provide improved object tracking and improved redundancy in case of individual sensor failure. For example, the amount by which the sensor FOVs overlap may be configured such that high-speed objects can be reliably captured by at least two sensors.
In some embodiments, redundant sensors (e.g., sensors with overlapped FOVs) are configured to support each other in the event of failure or deficiency of an individual sensor. For example, embodiments disclosed herein address sensor failure conditions that include component failures, loss of connection (e.g., wired connections or wireless connections), local impacts and physical damage (e.g., an individual sensor being damaged due to debris colliding with the sensor), global environmental impacts (e.g., rain or dense fog affecting vision capabilities of an individual sensor or a homogeneous sensor configuration), and/or the like. In some embodiments, the sensors may be located along the autonomous vehicle to enable stereovision or binocular-based distance estimations of detected objects.
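As an illustration of the binocular principle involved (the standard rectified-stereo relationship, not a disclosed implementation), the sketch below estimates distance from the pixel disparity of an object seen by two forward-facing cameras; the focal length, baseline, and disparity values are hypothetical.

```python
# Illustrative sketch (standard pinhole stereo geometry): two cameras separated
# by a known baseline can estimate distance from the pixel disparity of the
# same object observed in both images.

def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Distance Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("object must appear shifted between the two images")
    return focal_px * baseline_m / disparity_px

# Hypothetical values: 2000-pixel focal length, 1.2 m baseline across the vehicle
# front, object observed 20 pixels apart between the left and right images.
print(stereo_depth_m(focal_px=2000.0, baseline_m=1.2, disparity_px=20.0))  # -> 120.0 m
```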
The autonomous vehicle 105 may include various vehicle subsystems that support the operation of the autonomous vehicle 105. The vehicle subsystems may include a vehicle drive subsystem 142, a vehicle sensor subsystem 144, a vehicle control subsystem 146, and/or a vehicle power subsystem 148. The vehicle drive subsystem 142 may include components operable to provide powered motion for the autonomous vehicle 105. In an example embodiment, the vehicle drive subsystem 142 may include an engine or motor, wheels/tires, a transmission, an electrical subsystem, and a power source (e.g., battery and/or alternator).
The vehicle sensor subsystem 144 may include a number of sensors configured to sense information about an environment or condition of the autonomous vehicle 105. For example, the vehicle sensor subsystem 144 may include an inertial measurement unit (IMU), a Global Positioning System (GPS) transceiver, a RADAR unit, a laser range finder or a light detection and ranging (LiDAR) unit, and/or one or more cameras or image capture devices. The vehicle sensor subsystem 144 may also include sensors configured to monitor internal systems of the autonomous vehicle 105 (e.g., an O2 monitor, a fuel gauge, an engine oil temperature).
The IMU may include any combination of sensors (e.g., accelerometers and gyroscopes) configured to sense position and orientation changes of the autonomous vehicle 105 based on inertial acceleration. The GPS transceiver may be any sensor configured to estimate a geographic location of the autonomous vehicle 105. For this purpose, the GPS transceiver may include a receiver/transmitter operable to provide information regarding the position of the autonomous vehicle 105 with respect to the Earth. The RADAR unit may represent a system that utilizes radio signals to sense objects within the local environment of the autonomous vehicle 105. In some embodiments, in addition to sensing the objects, the RADAR unit may additionally be configured to sense the speed and the heading of the objects proximate to the autonomous vehicle 105. The laser range finder or LiDAR unit may be any sensor configured to sense objects in the environment in which the autonomous vehicle 105 is located using lasers. The cameras may include one or more devices configured to capture a plurality of images of the environment of the autonomous vehicle 105. The cameras may be still image cameras or motion video cameras.
The cameras, the LiDAR units, or other external-facing visual-based sensors (e.g., sensors configured to image the external environment of the vehicle) of the vehicle sensor subsystem 144 may be located and oriented along the autonomous vehicle in accordance with various embodiments described herein, including those illustrated in
In some embodiments, the vehicle sensor subsystem 144 includes cameras that have different optical characteristics. For example, the vehicle sensor subsystem 144 includes one or more long-range cameras, one or more medium-range cameras, one or more short-range cameras, one or more wide-angle lens cameras, one or more infrared cameras, or the like. Different cameras having different ranges have different fields-of-view, and a range of a camera may be correlated with (e.g., inversely proportional to) the field-of-view of the camera. For example, a long-range camera may have a field-of-view with a relatively narrow horizontal aspect, while a short-range camera may have a field-of-view with a relatively wider horizontal aspect. In some embodiments, the vehicle sensor subsystem 144 includes cameras of different ranges on a plurality of faces or orientations on the autonomous vehicle to reduce blind spots.
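For illustration only, the standard pinhole relationship below shows why a longer-range (longer focal length) camera tends to have a narrower horizontal FOV; the sensor width and focal lengths are hypothetical examples, not values from this disclosure.

```python
# Illustrative sketch of the inverse relationship between camera range (longer
# focal length) and horizontal FOV, using the standard pinhole camera relation.
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal FOV = 2 * atan(sensor width / (2 * focal length))."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

SENSOR_WIDTH_MM = 7.4  # hypothetical sensor width
for label, f_mm in [("short-range", 4.0), ("medium-range", 12.0), ("long-range", 35.0)]:
    print(f"{label}: {horizontal_fov_deg(SENSOR_WIDTH_MM, f_mm):.1f} deg horizontal FOV")
```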
In some embodiments, the vehicle sensor subsystem 144 may be communicably coupled with the in-vehicle control computer 150 such that data collected by various sensors of the vehicle sensor subsystem 144 (e.g., cameras, LiDAR units) may be provided to the in-vehicle control computer 150. For example, the vehicle sensor subsystem 144 may include a central unit to which the sensors are coupled, and the central unit may be configured to communicate with the in-vehicle control computer 150 via wired or wireless communication. The central unit may include multiple ports and serializer/deserializer units to which multiple sensors may be connected. In some embodiments, to localize individual failure events, sensors configured to be redundant with each other (e.g., two cameras with overlapped FOVs) may be connected to the central unit and/or to the in-vehicle control computer via different ports or interfaces, for example.
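Purely as an illustration of this port-separation idea (the camera names and port identifiers below are hypothetical, not the disclosed wiring), a configuration check might verify that no redundant pair shares a serializer/deserializer port.

```python
# Illustrative sketch (assumed names): cameras that back each other up are
# assigned to different serializer/deserializer ports on the central unit, so a
# single failed port does not disable a redundant pair.

redundant_pairs = [("front_long_left", "front_long_right"),
                   ("left_medium", "left_wide"),
                   ("right_medium", "right_wide")]

port_of = {"front_long_left": "serdes_port_0", "front_long_right": "serdes_port_1",
           "left_medium": "serdes_port_2", "left_wide": "serdes_port_3",
           "right_medium": "serdes_port_2", "right_wide": "serdes_port_3"}

for cam_a, cam_b in redundant_pairs:
    assert port_of[cam_a] != port_of[cam_b], f"{cam_a} and {cam_b} share a port"
```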
The vehicle control system 146 may be configured to control operation of the autonomous vehicle 105 and its components. Accordingly, the vehicle control system 146 may include various elements such as a throttle, a brake unit, a navigation unit, and/or a steering system.
The throttle may be configured to control, for instance, the operating speed of the engine and, in turn, control the speed of the autonomous vehicle 105. The brake unit can include any combination of mechanisms configured to decelerate the autonomous vehicle 105. The brake unit can use friction to slow the wheels in a standard manner. The navigation unit may be any system configured to determine a driving path or route for the autonomous vehicle 105. The navigation unit may additionally be configured to update the driving path dynamically while the autonomous vehicle 105 is in operation. In some embodiments, the navigation unit may be configured to incorporate data from the GPS transceiver and one or more predetermined maps so as to determine the driving path for the autonomous vehicle 105.
The vehicle control system 146 may be configured to control operation of power distribution units located in the autonomous vehicle 105. The power distribution units have an input that is directly or indirectly electrically connected to the power source of the autonomous vehicle 105 (e.g., alternator). Each power distribution unit can have one or more electrical receptacles or one or more electrical connectors to provide power to one or more devices of the autonomous vehicle 105. For example, various sensors of the vehicle sensor subsystem 144 such as cameras and LiDAR units may receive power from one or more power distribution units. The vehicle control system 146 can also include power controller units, where each power controller unit can communicate with a power distribution unit and provide information about the power distribution unit to the in-vehicle control computer 150, for example.
Many or all of the functions of the autonomous vehicle 105 can be controlled by the in-vehicle control computer 150. The in-vehicle control computer 150 may include at least one data processor 170 (which can include at least one microprocessor) that executes processing instructions stored in a non-transitory computer readable medium, such as the data storage device 175 or memory. The in-vehicle control computer 150 may also represent a plurality of computing devices that may serve to control individual components or subsystems of the autonomous vehicle 105 in a distributed fashion. In some embodiments, the data storage device 175 may contain processing instructions (e.g., program logic) executable by the data processor 170 to perform various methods and/or functions of the autonomous vehicle 105, including those described in this patent document. For instance, the data processor 170 executes operations for processing image data collected by cameras (e.g., blur and/or distortion removal, image filtering, image correlation and alignment), detecting objects captured in image data collected by overlapped cameras (e.g., using computer vision and/or machine learning techniques), accessing camera metadata (e.g., optical characteristics of a camera), performing distance estimation for detected objects, or the like.
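As one non-limiting illustration of the listed image pre-processing (lens distortion removal), the following sketch uses OpenCV; the intrinsic matrix, distortion coefficients, and image path are placeholders that would in practice come from the camera metadata accessed by the data processor 170.

```python
# Illustrative sketch (placeholder values): removing lens distortion from a
# captured frame before downstream object detection.
import cv2
import numpy as np

# Hypothetical intrinsics for a 1920x1080 camera; real values would come from
# per-camera metadata (optical characteristics).
K = np.array([[2000.0, 0.0, 960.0],
              [0.0, 2000.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # hypothetical radial/tangential coefficients

frame = cv2.imread("front_long_range.png")  # placeholder path to a captured image
if frame is not None:
    undistorted = cv2.undistort(frame, K, dist)  # corrected frame for object detection
```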
The data storage device 175 may contain additional instructions as well, including instructions to transmit data to, receive data from, interact with, or control one or more of the vehicle drive subsystem 142, the vehicle sensor subsystem 144, the vehicle control subsystem 146, and the vehicle power subsystem 148. In some embodiments, additional components or devices can be added to the various subsystems or one or more components or devices (e.g., temperature sensor shown in
The in-vehicle control computer 150 may control the function of the autonomous vehicle 105 based on inputs received from various vehicle subsystems (e.g., the vehicle drive subsystem 142, the vehicle sensor subsystem 144, the vehicle control subsystem 146, and the vehicle power subsystem 148). For example, the in-vehicle control computer 150 may use input from the vehicle control system 146 in order to control the steering system to avoid a high-speed vehicle detected in image data collected by overlapped cameras of the vehicle sensor subsystem 144, move in a controlled manner, or follow a path or trajectory. In an example embodiment, the in-vehicle control computer 150 can be operable to provide control over many aspects of the autonomous vehicle 105 and its subsystems. For example, the in-vehicle control computer 150 may transmit instructions or commands to cameras of the vehicle sensor subsystem 144 to collect image data at a specified time, to synchronize image collection rate or frame rate with other cameras or sensors, or the like. Thus, the in-vehicle control computer 150 and other devices, including cameras and sensors, may operate at a universal frequency, in some embodiments.
The cameras may be located on an exterior surface of the autonomous vehicle or may be integrated into an exterior-facing portion of the autonomous vehicle such that the cameras are not significantly obstructed from collecting image data of the exterior environment. In some embodiments, the cameras may be located on or within one or more racks, structures, scaffolds, apparatuses, or the like located on the autonomous vehicle, and the racks, structures, scaffolds, apparatuses, or the like may be removably attached to the autonomous vehicle. For example, cameras may be removed from the autonomous vehicle to enable easier adjustment, configuration, and maintenance of the cameras while the autonomous vehicle is not operating (e.g., to restore a desired FOV overlap amount).
While the cameras indicated in
As shown in
To minimize or eliminate blind spots, the FOVs of the cameras are overlapped at least with respect to their horizontal aspects. For example, during operation of the autonomous vehicle, various vibrations experienced by the autonomous vehicle may result in slight orientation or position changes in the cameras, and to prevent the generation of a blind spot due to such slight changes, the FOVs of the cameras may overlap.
Furthermore, overlapped FOVs of the camera may enable improved object tracking at various orientations about the autonomous vehicle. For example, due to the horizontal overlap in camera FOVs, objects in motion (e.g., relative to the autonomous vehicle) can be detected by more than one camera at various points and can therefore be tracked in their motion with improved accuracy. In some embodiments, an expected relative speed of objects to be detected by the cameras may be used to determine an amount of horizontal overlap for the camera FOVs. For example, to detect objects moving at high relative speeds, the cameras may be configured with a larger horizontal overlap.
In some embodiments, the amount of horizontal overlap for camera FOVs is based on the speed of objects to be detected by the cameras and/or a frame rate of the cameras, or a frequency at which the cameras or sensors collect data. Thus, for example, given a slow frame rate, the amount of horizontal overlap for camera FOVs may be configured to be larger, as compared to an amount of horizontal overlap that may be implemented for cameras operated at higher frame rates. In some embodiments, the cameras may be operated at a frame rate that is synchronized with an overall system frequency, or a frequency at which multiple devices and sensors on the autonomous vehicle operate. For example, the cameras may be operated at a frame rate of 10 Hz to synchronize with LiDAR units on the autonomous vehicle that collect LiDAR data at 10 Hz. As such, the in-vehicle control computer 150 may receive data from multiple cameras, other sensors, and devices at a synchronized frequency and may process the data accordingly.
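For illustration only, the sketch below shows the kind of sizing calculation this relationship suggests: the angle an object sweeps between frames bounds how much adjacent FOVs should overlap so the object can be captured by both cameras. The relative speed, range, frame rate, and safety factor are hypothetical values, not disclosed parameters.

```python
# Illustrative sizing sketch (hypothetical values): the angular region swept per
# frame by a crossing object bounds how much two adjacent FOVs should overlap so
# the object is seen by both cameras in at least one frame.
import math

def overlap_needed_deg(rel_speed_mps, frame_rate_hz, range_m, safety_factor=2.0):
    swept_per_frame = math.degrees(math.atan((rel_speed_mps / frame_rate_hz) / range_m))
    return safety_factor * swept_per_frame

# Example: 45 m/s relative speed (opposing highway traffic), 10 Hz cameras
# synchronized with the LiDAR units, object tracked at 30 m lateral range.
print(f"{overlap_needed_deg(45.0, 10.0, 30.0):.1f} deg")  # ~17 deg with a 2x margin
```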
The overlapping of the camera FOVs as shown in
In some embodiments, while at least one camera may provide redundancy with a given camera, the at least one redundant camera may be located at a different location than the given camera. With there being an extent of separation between a given camera and its redundant cameras, a likelihood of localized physical damage that would render the given camera and its redundant cameras each inoperable or unreliable is lowered. By contrast, if a given camera and its redundant backups were co-located, located on or within the same structure (e.g., a roof rack, a protruding member), physically connected, or the like, a single debris impact could result in a significant loss in camera FOV coverage.
While example embodiments are described herein with respect to horizontal overlap of camera FOVs, the autonomous vehicle may include cameras whose FOVs vertically overlap, in some embodiments. In some embodiments, while a horizontal aspect of the fields-of-view of the cameras can be defined as an angular width (e.g., in degrees), a vertical aspect of a camera field-of-view may be defined with respect to a range of distances that are captured within the camera field-of-view.
Thus, to configure vertical overlap of two camera FOVs, a predetermined amount of distance, or an overlap range of locations, may be used. As shown in the illustrated example, an overlap range may be determined as a distance (D3-D2), and the first camera 202A and the second camera 202B may be configured to vertically overlap accordingly. As a result, for example, objects located between points D2 and D3 (and aligned within a horizontal aspect or angular/beam width of the first and second cameras) may be captured by both the first camera 202A and the second camera 202B, while objects located between points D1 and D2 may only be captured by the first camera 202A and objects located between points D3 and D4 may only be captured by the second camera 202B.
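For illustration only, the overlap-range arithmetic described above reduces to the following sketch; the distances D1 through D4 are hypothetical coverage values for the two cameras.

```python
# Illustrative sketch of the overlap-range arithmetic (hypothetical distances in meters).

def vertical_overlap_m(near_a, far_a, near_b, far_b):
    """Length of the distance band covered by both cameras (0 if there is none)."""
    return max(0.0, min(far_a, far_b) - max(near_a, near_b))

D1, D3 = 5.0, 120.0    # first camera 202A covers roughly D1..D3
D2, D4 = 80.0, 500.0   # second camera 202B covers roughly D2..D4
print(vertical_overlap_m(D1, D3, D2, D4))  # -> 40.0, i.e., D3 - D2
```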
In some embodiments, overlap of camera FOVs (e.g., with respect to a horizontal aspect, with respect to a vertical aspect) may be achieved based on obtaining calibration data from the cameras. During a calibration operation of the autonomous vehicle, for example, the in-vehicle control computer 150 may obtain image data from the plurality of cameras and perform operations associated with image processing and image stitching techniques to determine a first degree of overlap for each pair of cameras. For example, the in-vehicle control computer 150 may be configured to identify reference objects in image data collected by a pair of cameras and determine the first degree of overlap of the pair of cameras based on the reference objects. Then, the in-vehicle control computer 150 may indicate the first degree of overlap for each pair of cameras, for example, to a human operator who may modify the orientation of the pair of cameras to reach the desired degree of overlap (e.g., 10 degrees horizontally, 12 degrees horizontally, 15 degrees horizontally, 17 degrees horizontally, 20 degrees horizontally, 30 degrees horizontally).
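As a simplified, non-limiting illustration of how such a degree of overlap might be estimated from a shared reference object, the sketch below assumes a distant target and an idealized linear pixel-to-angle mapping; a full pipeline would instead use the cameras' intrinsics and image-stitching or feature-matching results. The pixel coordinates, image width, and FOV values are hypothetical.

```python
# Illustrative calibration sketch (simplifying assumptions: distant reference
# target, idealized linear pixel-to-angle mapping).

def bearing_deg(pixel_x, image_width, hfov_deg):
    """Approximate bearing of a pixel column relative to the camera axis."""
    return (pixel_x / image_width - 0.5) * hfov_deg

def estimated_overlap_deg(px_a, px_b, width, hfov_a, hfov_b):
    # For a far-away target, the difference in its bearing as seen by the two
    # cameras approximates the angle between the camera axes.
    axis_separation = abs(bearing_deg(px_a, width, hfov_a) - bearing_deg(px_b, width, hfov_b))
    return (hfov_a + hfov_b) / 2.0 - axis_separation

# Hypothetical measurement: the same road sign appears near the right edge of
# camera A and near the left edge of camera B (1920-pixel-wide images, 60 deg FOVs).
print(f"{estimated_overlap_deg(1800, 150, 1920, 60.0, 60.0):.1f} deg")  # ~8.4 deg; below a
# 15 deg target, which would prompt the operator to re-orient the pair of cameras.
```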
As shown in
As illustrated in
Further, as illustrated in
Turning to
While short-range (SR) cameras can span a wide range of orientations, the autonomous vehicle may further include medium-range (MR) cameras and long-range (LR) cameras for longer-range detection of objects. In particular, with high-speed operation of the autonomous vehicle, detection and tracking of objects at farther distances via MR cameras and LR cameras enables safe and compliant operation of the autonomous vehicle.
In some embodiments, each of the LR cameras, MR cameras, and the SR cameras may be associated with a corresponding overlap amount. For instance, the SR cameras may overlap with each other by a first overlap amount, the MR cameras may overlap with each other by a second overlap amount, and the LR cameras may overlap with each other by a third overlap amount. Further, in some embodiments, there may be an overlap amount configured for overlaps of different camera types. For example, SR cameras and MR cameras may overlap with each other by a fourth overlap amount, MR cameras and LR cameras may overlap with each other by a fifth overlap amount, and SR and LR cameras may overlap with each other by a sixth overlap amount, in some embodiments.
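Purely as an illustrative configuration (the specific degree values below are hypothetical, not disclosed values), such per-pair overlap amounts could be captured in a lookup keyed by the pair of camera types.

```python
# Illustrative configuration sketch: minimum horizontal overlap, in degrees, for
# each pair of camera types; hypothetical values.
MIN_OVERLAP_DEG = {
    frozenset({"SR"}): 20.0,        # first overlap amount: SR with SR
    frozenset({"MR"}): 15.0,        # second overlap amount: MR with MR
    frozenset({"LR"}): 10.0,        # third overlap amount: LR with LR
    frozenset({"SR", "MR"}): 17.0,  # fourth overlap amount
    frozenset({"MR", "LR"}): 12.0,  # fifth overlap amount
    frozenset({"SR", "LR"}): 15.0,  # sixth overlap amount
}

def required_overlap(type_a: str, type_b: str) -> float:
    return MIN_OVERLAP_DEG[frozenset({type_a, type_b})]

print(required_overlap("MR", "SR"))  # -> 17.0; the order of the pair does not matter
```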
Accordingly, in some embodiments, the autonomous vehicle includes cameras configured for different ranges and cameras configured for different spectrums. Cameras configured for the infrared spectrum can supplement occasional deficiencies of cameras configured for visible light, such as in environments with heavy rain, fog, or other conditions. Thus, with heterogeneous camera ranges and heterogeneous camera spectrums, an autonomous vehicle is equipped with awareness for multiple different scenarios.
As illustrated in
Camera configurations described herein in accordance with some example embodiments may be based on optimizations of different priorities and objectives. While one objective is to minimize the number of cameras necessary for full 360 degree coverage surrounding the autonomous vehicle, other objectives that relate to range, redundancy, FOV overlap, and stereovision can be considered as well. With example embodiments described herein, cameras may be configured to provide full environmental awareness for an autonomous vehicle, while also providing redundancy and enabling continued operation in the event of individual camera failure or deficiency, and also providing capabilities for object tracking and ranging at high speeds.
In an embodiment, an autonomous vehicle comprises a plurality of first cameras associated with a first FOV having a first horizontal aspect and a plurality of second cameras associated with a second FOV having a second horizontal aspect. The first cameras and the second cameras are located at different angular locations on the autonomous vehicle along a horizontal plane. Horizontal aspects of two fields-of-view of any two consecutive cameras located along the horizontal plane overlap in the horizontal plane by at least a predetermined number of degrees.
In an embodiment, the first cameras and the second cameras are operated at a corresponding frame rate, and wherein the predetermined number of degrees is based on (i) the any two consecutive cameras being two first cameras, two second cameras, or a first camera and a second camera, and (ii) the corresponding frame rate for the any two consecutive cameras.
In an embodiment, the corresponding frame rate for the first cameras and the second cameras is a universal frequency that is synchronized with a sensor frequency of at least one light detection and ranging sensor located on the autonomous vehicle.
In an embodiment, the predetermined number of degrees is further based on an expected speed at which objects located outside of the autonomous vehicle are in motion relative to the autonomous vehicle.
In an embodiment, respective fields-of-view of the plurality of first cameras and the plurality of second cameras together continuously span 360 degrees about the autonomous vehicle.
In an embodiment, the first cameras are associated with a first camera range. The second cameras are associated with a second camera range that is different from the first camera range. The angular locations on the autonomous vehicle at which the first cameras and the second cameras are located are based on the first camera range of the first cameras and the second camera range of the second cameras.
In an embodiment, the plurality of first cameras includes a pair of first cameras that are separated by a distance that is configured for stereovision-based detection of objects located within the first FOV of each of the pair of first cameras.
In an embodiment, the pair of first cameras are located at a front of the autonomous vehicle and oriented in a forward orientation, and wherein the distance by which the pair of first cameras is separated is perpendicular to a central axis along a length of the autonomous vehicle.
In an embodiment, the any two consecutive cameras are electronically coupled in parallel via separate interfaces to a computer located on the autonomous vehicle that is configured to operate the autonomous vehicle.
In an embodiment, the different angular locations on the autonomous vehicle at which the first cameras and the second cameras are located are symmetrical with respect to a central axis along a length of the autonomous vehicle.
In an embodiment, the first FOV has a first vertical aspect being defined by a range of distances from the autonomous vehicle. The autonomous vehicle further comprises a plurality of third cameras that are located on the autonomous vehicle and have a third FOV having a third vertical aspect. At least one third camera and at least one first camera are oriented such that respective vertical aspects of the respective FOVs of the at least one third camera and the at least one first camera overlap by a predetermined amount.
In an embodiment, the autonomous vehicle further comprises at least one wide-angle camera located at each lateral side of the autonomous vehicle.
In an embodiment, a sensor network for an autonomous vehicle comprises: a plurality of first cameras associated with a first FOV having a first horizontal aspect; and a plurality of second cameras associated with a second FOV having a second horizontal aspect. The first cameras and the second cameras are located at different angular locations on the autonomous vehicle along a horizontal plane. Horizontal aspects of two fields-of-view of any two consecutive cameras located along the horizontal plane of the autonomous vehicle overlap in the horizontal plane by at least a predetermined number of degrees.
In an embodiment, the first cameras and the second cameras are operated at a corresponding frame rate, and wherein the predetermined number of degrees is based on (i) the any two consecutive cameras being two first cameras, two second cameras, or a first camera and a second camera, and (ii) the corresponding frame rate for the any two consecutive cameras.
In an embodiment, the corresponding frame rate for the first cameras and the second cameras is a universal frequency that is synchronized with a sensor frequency of at least one light detection and ranging sensor located on the autonomous vehicle.
In an embodiment, the predetermined number of degrees is further based on an expected speed at which objects located outside of the autonomous vehicle are in motion relative to the autonomous vehicle.
In an embodiment, respective fields-of-view of the plurality of first cameras and the plurality of second cameras together continuously span 360 degrees about the autonomous vehicle.
In an embodiment, the first cameras are associated with a first camera range. The second cameras are associated with a second camera range that is different from the first camera range. The different angular locations on the autonomous vehicle at which the first cameras and the second cameras are located are based on the first camera range of the first cameras and the second camera range of the second cameras.
In an embodiment, the plurality of first cameras includes a pair of first cameras that are separated by a distance that is configured for stereovision-based detection of objects located within the first FOV of each of the pair of first cameras.
In an embodiment, the pair of first cameras are located at a front of the autonomous vehicle and oriented in a forward orientation, and wherein the distance by which the pair of first cameras is separated is perpendicular to a central axis along a length of the autonomous vehicle.
In an embodiment, the sensor network further comprises a computer configured to operate the autonomous vehicle, wherein the any two consecutive cameras are electronically coupled in parallel via separate interfaces to the computer.
In an embodiment, the different angular locations on the autonomous vehicle at which the first cameras and the second cameras are located are symmetrical with respect to a central axis along a length of the autonomous vehicle.
In an embodiment, the first FOV has a first vertical aspect being defined by a range of distances from the autonomous vehicle. The sensor network further comprises a plurality of third cameras that are located on the autonomous vehicle and have a third FOV having a third vertical aspect. At least one third camera and at least one first camera are oriented such that respective vertical aspects of the respective FOVs of the at least one third camera and the at least one first camera overlap by a predetermined amount.
In an embodiment, the sensor network further comprises at least one wide-angle camera located at each lateral side of the autonomous vehicle.
In an embodiment, a system for operating an autonomous vehicle comprises: a processor communicatively coupled with and configured to receive image data from a plurality of first cameras that are associated with a first FOV having a first horizontal aspect and a plurality of second cameras that are associated with a second FOV having a second horizontal aspect. The first cameras and the second cameras are located at different angular locations on the autonomous vehicle along a horizontal plane. Horizontal aspects of two fields-of-view of any two consecutive cameras located along the horizontal plane of the autonomous vehicle overlap in the horizontal plane by at least a predetermined number of degrees.
In an embodiment, the image data is received from the first cameras and the second cameras at a corresponding frame rate for the first cameras and the second cameras, and wherein the predetermined number of degrees is based on (i) the any two consecutive cameras being two first cameras, two second cameras, or a first camera and a second camera, and (ii) the corresponding frame rate for the any two consecutive cameras.
In an embodiment, the corresponding frame rate for the first cameras and the second cameras is a universal frequency that is synchronized with a sensor frequency of at least one light detection and ranging sensor located on the autonomous vehicle.
In an embodiment, the predetermined number of degrees is further based on an expected speed at which objects located outside of the autonomous vehicle are in motion relative to the autonomous vehicle.
In an embodiment, respective fields-of-view of the plurality of first cameras and the plurality of second cameras together continuously span 360 degrees about the autonomous vehicle.
In an embodiment, the first cameras are associated with a first camera range. The second cameras are associated with a second camera range that is different from the first camera range, and the different angular locations on the autonomous vehicle at which the first cameras and the second cameras are located are based on the first camera range of the first cameras and the second camera range of the second cameras.
In an embodiment, the processor is configured to execute operations for stereovision-based detection of objects from a pair of first cameras of the plurality of first cameras that are separated by a predetermined distance.
In an embodiment, the pair of first cameras are located at a front of the autonomous vehicle and oriented in a forward orientation, and wherein the distance by which the pair of first cameras is separated is perpendicular to a central axis along a length of the autonomous vehicle.
In an embodiment, the any two consecutive cameras are communicatively coupled in parallel via separate interfaces to the processor.
In an embodiment, the different angular locations on the autonomous vehicle at which the first cameras and the second cameras are located are symmetrical with respect to a central axis along a length of the autonomous vehicle.
In an embodiment, the first FOV has a first vertical aspect being defined by a range of distances from the autonomous vehicle. The processor is further communicatively coupled with a plurality of third cameras that are located on the autonomous vehicle and have a third FOV having a third vertical aspect. At least one third camera and at least one first camera are oriented such that respective vertical aspects of the respective FOVs of the at least one third camera and the at least one first camera overlap by a predetermined amount.
In an embodiment, the processor is further communicatively coupled with at least one wide-angle camera located at each lateral side of the autonomous vehicle.
In an embodiment, a method for operating an autonomous vehicle comprises receiving image data from a sensor network, the sensor network comprising: a plurality of first cameras associated with a first FOV having a first horizontal aspect and a plurality of second cameras associated with a second FOV having a second horizontal aspect. The first cameras and the second cameras are located at different angular locations on the autonomous vehicle along a horizontal plane, and horizontal aspects of two fields-of-view of any two consecutive cameras located along the horizontal plane overlap in the horizontal plane by at least a predetermined number of degrees. The method further comprises detecting one or more objects located outside of the autonomous vehicle based on the image data; determining a trajectory for the autonomous vehicle based on the detection of the one or more objects; and causing the autonomous vehicle to travel in accordance with the trajectory.
In an embodiment, detecting the one or more objects comprises estimating a distance between the autonomous vehicle and each of the one or more objects based on (i) each object being captured by each of a pair of first cameras of the plurality of first cameras, and (ii) a stereovision separation distance between the pair of first cameras.
In an embodiment, the one or more objects are in motion relative to the autonomous vehicle, and wherein detecting the one or more objects comprises tracking each object as the object moves from an FOV of a given camera of the plurality of first cameras or the plurality of second cameras to an FOV of another camera of the plurality of first cameras or the plurality of second cameras.
In an embodiment, an autonomous truck comprises a controller configured to control autonomous driving operation of the truck and a sensor network comprising at least six sensors disposed on an exterior of the truck, each sensor oriented to capture sensor data from a corresponding directional beam having a corresponding beam width and a corresponding beam depth such that beam widths of the at least six sensors cover a surrounding region of the truck that is relevant to safe autonomous driving of the truck.
In an embodiment, the corresponding directional beams of the at least six sensors overlap by at least a predetermined amount with respect to the corresponding beam widths.
In an embodiment, a first subset of the at least six sensors are configured to capture image data corresponding to a visible spectrum, and wherein a second subset of the at least six sensors are configured to capture image data corresponding to a long-wave infrared (LWIR) spectrum.
In an embodiment, the second subset of the at least six sensors is five LWIR cameras.
In this document the term “exemplary” is used to mean “an example of” and, unless otherwise stated, does not imply an ideal or a preferred embodiment. In this document, the term “microcontroller” can include a processor and its associated memory.
Some of the embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), etc. Therefore, the computer-readable media can include a non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer- or processor-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
Some of the disclosed embodiments can be implemented as devices or modules using hardware circuits, software, or combinations thereof. For example, a hardware circuit implementation can include discrete analog and/or digital components that are, for example, integrated as part of a printed circuit board. Alternatively, or additionally, the disclosed components or modules can be implemented as an Application Specific Integrated Circuit (ASIC) and/or as a Field Programmable Gate Array (FPGA) device. Some implementations may additionally or alternatively include a digital signal processor (DSP) that is a specialized microprocessor with an architecture optimized for the operational needs of digital signal processing associated with the disclosed functionalities of this application. Similarly, the various components or sub-components within each module may be implemented in software, hardware or firmware. The connectivity between the modules and/or components within the modules may be provided using any one of the connectivity methods and media that are known in the art, including, but not limited to, communications over the Internet, wired, or wireless networks using the appropriate protocols.
While this document contains many specifics, these should not be construed as limitations on the scope of an invention that is claimed or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this disclosure.
This document claims priority to and the benefit of U.S. Provisional Application No. 63/369,497, filed on Jul. 26, 2022, which is incorporated herein by reference in its entirety.