DETECTING ROAD AND WEATHER CONDITIONS FOR VEHICLE DRIVING

Information

  • Patent Application
  • Publication Number
    20250121815
  • Date Filed
    August 18, 2024
  • Date Published
    April 17, 2025
Abstract
This application is directed to monitoring environmental conditions (e.g., weather and road conditions) associated with a vehicle to facilitate autonomous vehicle control or planning. A computer system is associated with a fixed installation having a plurality of sensors. The computer system obtains, via the plurality of sensors, weather information and road surface information for a segment of a road. The computer system generates a road and weather condition estimation based on the weather information and the road surface information. The computer system transmits the road and weather condition estimation to one or more first vehicles in a vicinity of the fixed installation such that the road and weather condition estimation is configured to be used by the one or more first vehicles to at least partially autonomously drive in a first trajectory in accordance with the road and weather condition estimation.
Description
TECHNICAL FIELD

The present application generally relates to vehicle technology, and more particularly to, methods, systems, and non-transitory computer-readable storage medium for monitoring environmental conditions (e.g., weather and road conditions) associated with a vehicle to facilitate autonomous vehicle control and/or planning.


BACKGROUND

Vehicles are now capable of self-driving with different levels of autonomy. Each of these levels is characterized by the relative amount of human and autonomous control. For example, the Society of Automotive Engineers (SAE) defines 6 levels of driving automation, ranging from Level 0 (fully manual) to Level 5 (fully autonomous). These levels have been adopted by the U.S. Department of Transportation.


There are numerous advantages of autonomous vehicles, including: (1) lowering the number of vehicles on the roads (most privately owned vehicles are driven a small fraction of the time); (2) more predictable and safer driving behavior than human-driven vehicles; (3) fewer emissions if more vehicles are electrically powered; (4) improved fuel efficiency; (5) increased lane capacity; (6) shorter travel times; and (7) mobility for users who are incapable of driving.


Presently, autonomous vehicles are equipped with sensors that are used for object (e.g., obstacle) detection. Environmental conditions (e.g., road and weather conditions) may be estimated indirectly based on data about vehicle dynamics, such as vehicle velocities and accelerations. Estimation of the environmental conditions takes an extended duration of time and occurs when the vehicle is already experiencing those conditions. By the time a reasonable estimation is achieved, the environmental conditions may have changed, and it is no longer safe to rely on the estimation to control the vehicle.


SUMMARY

Some embodiments of the present disclosure are directed to methods, systems, and non-transitory computer readable storage media for monitoring environmental conditions (e.g., weather and road conditions) to facilitate autonomous vehicle driving. Some embodiments of this application arise from the realization that a challenge facing the autonomous vehicle industry is the complexity of detecting and gathering data on various traffic-related parameters on the road (e.g., parameters associated with road conditions and weather conditions). Road surface conditions such as potholes, cracks, oil spills, speed bumps, or ice on the road can influence the ability of an autonomous vehicle to operate safely. Adverse weather conditions, including rain, snow, ice, or fog, can also affect a vehicle's operations. If road and weather information is known and used, autonomous vehicles (especially heavy-duty trucks) can plan their driving routes, speed profiles, and actuation (e.g., engine, brake, or suspension) usage more effectively, leading to potentially large safety and economic benefits.


According to some aspects of the present disclosure, a fixed installation (e.g., infrastructure at a fixed location) along a road is equipped with sensors that directly collect information regarding road and weather conditions (e.g., within a sensing range of the sensors). When a vehicle approaches or reaches the fixed installation, the collected information is received by the vehicle and used to control the vehicle to drive at least partially autonomously. In some embodiments, the fixed installation is located at a freeway entrance or exit, a lane merge zone (e.g., on a section of a road where two or more lanes merge), a tunnel, a toll booth, a traffic light area, an on-ramp merge point, and/or a road intersection. The fixed installation may be several meters to several miles away from the vehicle. In some embodiments, the sensors include an imaging sensor (e.g., a camera) or a wind speed/direction sensor, and are configured to continuously monitor road surface or wind conditions at a given region of the road. In some embodiments, the installation is communicatively coupled to vehicles that are traveling along the road. The installation gathers information regarding road or weather conditions indirectly from dynamic information of these vehicles. The information is further processed on a computer device to generate an estimation of the road condition or the weather condition. The estimation includes, but is not limited to, a road friction coefficient, information about potholes or cracks on the road, a wind speed, and a wind direction. The estimation of the road condition or the weather condition is further communicated to vehicles that are traveling along the road and approaching the sensors, thereby enabling the vehicles to make safe and efficient motion planning decisions in advance.


Compared to existing approaches, in which road and weather conditions are estimated indirectly based on onboard vehicle sensors, the environmental conditions here are monitored continuously at a fixed location and from a fixed perspective. The fixed location may be 10 meters to 10 miles away from a vehicle. According to some embodiments of the present disclosure, the use of sensors positioned at a fixed installation to gather information about the environmental conditions advantageously improves the quality and accuracy of the data collected. The gathered information is provided to a vehicle before the vehicle arrives at the fixed location, leaving time for the vehicle to adapt its route based on the gathered information. This can, in turn, improve vehicle route planning, improve safety, and reduce operating costs.


In one aspect, a method for detecting conditions for vehicle driving is implemented at a computer system associated with a fixed installation that includes a plurality of sensors. The computer system has one or more processors and memory. The method includes obtaining, via the plurality of sensors, weather information and road surface information for a segment of a road. The method further includes generating a road and weather condition estimation based on the weather information and the road surface information. The method further includes transmitting the road and weather condition estimation to one or more first vehicles in a vicinity of the fixed installation, such that the road and weather condition estimation is configured to be used by the one or more first vehicles to at least partially autonomously drive in a first trajectory in accordance with the road and weather condition estimation.
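
For illustration, the three steps of this method (obtain, generate, transmit) can be sketched in Python as follows; all helper functions, field names, and values below are invented stand-ins rather than the disclosed implementation.

```python
# A self-contained toy sketch of the three claimed steps. Everything here is
# illustrative; a real installation would use actual sensor drivers and a
# wireless (e.g., V2I/5G) link.

def obtain_measurements():
    # Stand-in for reading the installation's sensors (camera, anemometer, ...).
    weather_info = {"air_temp_c": -2.0, "precipitation": "snow", "wind_mps": 6.5}
    road_surface_info = {"surface": "icy", "pothole_count": 1}
    return weather_info, road_surface_info

def generate_estimation(weather_info, road_surface_info):
    # Very rough heuristic: an icy surface implies a low friction coefficient.
    friction = 0.2 if road_surface_info["surface"] == "icy" else 0.8
    return {
        "road_friction_coefficient": friction,
        "pothole_count": road_surface_info["pothole_count"],
        "wind_speed_mps": weather_info["wind_mps"],
    }

def transmit(estimation, vehicle_ids):
    # Stand-in for the broadcast to first vehicles near the fixed installation.
    for vid in vehicle_ids:
        print(f"sending to vehicle {vid}: {estimation}")

weather, surface = obtain_measurements()
estimation = generate_estimation(weather, surface)
transmit(estimation, ["truck-01", "truck-02"])
```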


In some embodiments, the method further includes obtaining vehicle information from one or more second vehicles that are traveling on the segment of the road. Generating the road and weather condition estimation is further based on the obtained vehicle information. In some embodiments, the vehicle information includes activation of an electronic stability control (ESC) system of one of the one or more second vehicles. In some embodiments, the vehicle information includes activation of an anti-lock braking system (ABS) of one of the one or more second vehicles.
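
As a hedged illustration of how such second-vehicle reports might refine an estimation, the following sketch lowers a sensor-based friction estimate when nearby vehicles report ESC or ABS activations; the thresholds and weighting are invented for illustration only.

```python
def refine_friction_estimate(base_friction, vehicle_reports):
    """Lower a sensor-based friction estimate when vehicles traveling the same
    segment report ESC or ABS activations (illustrative heuristic only)."""
    if not vehicle_reports:
        return base_friction
    activations = sum(
        1 for r in vehicle_reports if r.get("esc_active") or r.get("abs_active")
    )
    activation_ratio = activations / len(vehicle_reports)
    # Assumed rule: frequent stability/braking interventions suggest a slippery surface.
    return max(0.1, base_friction - 0.4 * activation_ratio)

reports = [{"esc_active": True}, {"abs_active": False}, {"abs_active": True}]
print(refine_friction_estimate(0.7, reports))  # roughly 0.43
```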


In some embodiments, the road and weather condition estimation includes identification of one or more potholes on the segment of the road and respective locations of the one or more potholes. In some embodiments, the road and weather condition estimation includes an estimated wind speed and an estimated wind direction.


According to another aspect of the present application, a computer system is associated with a fixed installation that includes a plurality of sensors. The computer system includes one or more processors and memory storing a plurality of programs. The programs, when executed by the one or more processors, cause the computer system to perform any of the methods for detecting conditions for vehicle driving as disclosed herein.


According to another aspect of the present application, a non-transitory computer readable storage medium stores a plurality of programs configured for execution by a computer system associated with a fixed installation that includes a plurality of sensors, the computer system having one or more processors and memory. The programs, when executed by the one or more processors, cause the computer system to perform any of the methods for detecting conditions for vehicle driving as disclosed herein.


Note that the various embodiments described above can be combined with any other embodiments described herein. The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are included to provide a further understanding of the embodiments, are incorporated herein, constitute a part of the specification, illustrate the described embodiments, and, together with the description, serve to explain the underlying principles.



FIG. 1 is an example vehicle driving environment for a plurality of vehicles, in accordance with some embodiments.



FIG. 2 is a block diagram of an example vehicle configured to be driven with a certain level of autonomy, in accordance with some embodiments.



FIG. 3 is a block diagram of an example server for monitoring and managing vehicles in a vehicle driving environment, in accordance with some embodiments.



FIG. 4 is a block diagram of a machine learning system for training and applying vehicle data processing models for facilitating at least partial autonomous driving of a vehicle, in accordance with some embodiments.



FIG. 5A is a structural diagram of an example neural network applied to process vehicle data in a vehicle data processing model, in accordance with some embodiments, and FIG. 5B is an example node in the neural network, in accordance with some embodiments.



FIG. 6 is a block diagram of an example computer system associated with an installation, in accordance with some embodiments.



FIGS. 7A and 7B are exemplary scenes depicting vehicles traveling on roads that include installations, in accordance with some embodiments.



FIGS. 8A and 8B provide a flowchart of an example process for detecting conditions for vehicle driving, in accordance with some embodiments.





Like reference numerals refer to corresponding parts throughout the several views of the drawings.


DETAILED DESCRIPTION

Reference will now be made in detail to specific embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous non-limiting specific details are set forth in order to assist in understanding the subject matter presented herein. But it will be apparent to one of ordinary skill in the art that various alternatives may be used without departing from the scope of the claims and the subject matter may be practiced without these specific details. For example, it will be apparent to one of ordinary skill in the art that the subject matter presented herein can be implemented on many types of electronic devices with digital video capabilities.


Various embodiments of this application are directed to detecting conditions for vehicle driving. Exemplary conditions can include road conditions, weather conditions, or both. In some embodiments, the road conditions include information about static road conditions (e.g., road surface conditions). For example, the static road conditions include the existence of an oily patch on a road segment, icy conditions on the road, potholes and cracks on the road, the presence of speed bumps, or a combination thereof. In some embodiments, the road conditions include information about dynamic conditions, such as dropped objects on the road. In some embodiments, the weather conditions include rain, snow, ice, or fog. In some instances, the road conditions or weather conditions can affect safety and operation of a vehicle that is traveling or planning to travel on the road.


In some embodiments, a computer system is associated with a fixed installation (e.g., infrastructure) that includes a plurality of sensors. The computer system (e.g., a microcontroller unit) can be physically co-located at the installation and/or communicatively connected with the installation (e.g., the sensors). The plurality of sensors can include one or more imaging sensors and one or more anemometers. In some embodiments, the plurality of sensors includes one or more of: a global positioning system (GPS), a thermal sensor, a light detection and ranging (LiDAR) scanner, one or more cameras, a radio detection and ranging (RADAR) sensor, an infrared sensor, and one or more ultrasonic sensors. The computer system obtains, via the plurality of sensors, weather information and road surface information for a segment of a road. The computer system generates a road and weather condition estimation based on the weather information and the road surface information. In some embodiments, the road and weather condition estimation includes an estimated road friction coefficient for the segment of the road, a pavement roughness level, identification of one or more potholes on the segment of the road and their respective locations, an estimated wind speed, and/or an estimated wind direction. The computer system transmits the road and weather condition estimation to one or more first vehicles (e.g., via wireless communication, such as 5G communication) in a vicinity of the fixed installation such that the road and weather condition estimation is configured to be used by the one or more first vehicles to at least partially autonomously drive in a first trajectory in accordance with the road and weather condition estimation.
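
One possible (purely illustrative) way to organize the estimation fields named above is a simple data structure such as the following; the field names, types, and units are assumptions rather than a specified message format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RoadWeatherEstimation:
    """Illustrative container for the road and weather condition estimation."""
    segment_id: str
    road_friction_coefficient: float          # e.g., ~0.2 for ice, ~0.8 for dry asphalt
    pavement_roughness: float                 # unitless roughness level
    potholes: List[Tuple[float, float]] = field(default_factory=list)  # (lat, lon)
    wind_speed_mps: float = 0.0
    wind_direction_deg: float = 0.0           # 0 = north (assumed convention)

estimation = RoadWeatherEstimation(
    segment_id="I-10_mile_42",
    road_friction_coefficient=0.35,
    pavement_roughness=2.1,
    potholes=[(33.448, -112.074)],
    wind_speed_mps=9.0,
    wind_direction_deg=270.0,
)
```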


In some embodiments, the computer system obtains vehicle information (e.g., vehicle operational conditions) from one or more second vehicles that are traveling on the segment of the road and generates the road and weather condition estimation further based on the obtained vehicle information. Exemplary vehicle information can include activation of an electronic stability control (ESC) system of one of the one or more second vehicles or activation of an anti-lock braking system (ABS) of one of the one or more second vehicles.



FIG. 1 is an example vehicle driving environment 100 having a plurality of vehicles 102 (e.g., vehicles 102P, 102T, and 102V), in accordance with some embodiments. Each vehicle 102 has one or more processors, memory, a plurality of sensors, and a vehicle control system. The vehicle control system is configured to sense the vehicle driving environment 100 and drive on roads having different road conditions. The plurality of vehicles 102 may include passenger cars 102P (e.g., sport-utility vehicles and sedans), vans 102V, trucks 102T, and driver-less cars. Each vehicle 102 can collect sensor data and/or user inputs, execute user applications, present outputs on its user interface, and/or operate the vehicle control system to drive the vehicle 102. The collected data or user inputs can be processed locally (e.g., for training and/or for prediction) at the vehicle 102 and/or remotely by one or more servers 104. The one or more servers 104 provide system data (e.g., boot files, operating system images, and user applications) to the vehicle 102, and in some embodiments, process the data and user inputs received from the vehicle 102 when the user applications are executed on the vehicle 102. In some embodiments, the vehicle driving environment 100 further includes storage 106 for storing data related to the vehicles 102, servers 104, and applications executed on the vehicles 102.


For each vehicle 102, the plurality of sensors includes one or more of: (1) a global positioning system (GPS) sensor; (2) a light detection and ranging (LiDAR) scanner; (3) one or more cameras; (4) a radio detection and ranging (RADAR) sensor; (5) an infrared sensor; (6) one or more ultrasonic sensors; (7) a dedicated short-range communication (DSRC) module; (8) an inertial navigation system (INS) including accelerometers and gyroscopes; (9) an inertial measurement unit (IMU) for measuring and reporting acceleration, orientation, angular rates, and other gravitational forces; and/or (10) an odometry sensor. In some embodiments, a vehicle 102 includes a 5G communication module to facilitate vehicle communication jointly with or in place of the DSRC module. The cameras are configured to capture a plurality of images in the vehicle driving environment 100, and the plurality of images are applied to map the vehicle driving environment 100 to a 3D vehicle space and identify a location of the vehicle 102 within the environment 100. The cameras also operate with one or more other sensors (e.g., GPS, LiDAR, RADAR, and/or INS) to localize the vehicle 102 in the 3D vehicle space. For example, the GPS identifies a geographical position (geolocation) of the vehicle 102 on the Earth, and the INS measures relative vehicle speeds and accelerations between the vehicle 102 and adjacent vehicles 102. The LiDAR scanner measures the distance between the vehicle 102 and adjacent vehicles 102 and other objects. Data collected by these sensors is used to refine the vehicle locations determined from the plurality of images or to facilitate determining vehicle locations between two images.
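
As a rough, assumption-laden illustration of how INS measurements can bridge vehicle positions between two absolute fixes, the following sketch dead-reckons from the last known position using velocity and acceleration under a constant-acceleration model; it is not the localization method actually used.

```python
def dead_reckon(last_fix_xy, velocity_xy, accel_xy, dt):
    """Estimate position dt seconds after the last absolute fix using
    INS velocity and acceleration (constant-acceleration model)."""
    x, y = last_fix_xy
    vx, vy = velocity_xy
    ax, ay = accel_xy
    return (x + vx * dt + 0.5 * ax * dt * dt,
            y + vy * dt + 0.5 * ay * dt * dt)

# Example: 0.5 s after a fix at (0, 0), moving 20 m/s forward while braking at 1 m/s^2.
print(dead_reckon((0.0, 0.0), (20.0, 0.0), (-1.0, 0.0), 0.5))  # (9.875, 0.0)
```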


The vehicle control system includes a plurality of actuators for at least steering, braking, controlling the throttle (e.g., accelerating, maintaining a constant velocity, or decelerating), and transmission control. Depending on the level of automation, each of the plurality of actuators can be controlled manually by a driver of the vehicle (e.g., by turning the steering wheel), automatically by the one or more processors of the vehicle, or jointly by the driver and the processors. When the vehicle 102 controls the plurality of actuators independently or jointly with the driver, the vehicle 102 obtains the sensor data collected by the plurality of sensors, identifies adjacent road features in the vehicle driving environment 100, tracks the motion of the vehicle, tracks the relative distance between the vehicle and any surrounding vehicles or other objects, and generates vehicle control instructions to at least partially autonomously control driving of the vehicle 102. Conversely, in some embodiments, when the driver takes control of the vehicle, the driver manually provides vehicle control instructions via a steering wheel, a braking pedal, a throttle pedal, and/or a gear lever directly. In some embodiments, a vehicle user application is executed on the vehicle and configured to provide a user interface. The driver provides vehicle control instructions to control the plurality of actuators of the vehicle control system via the user interface of the vehicle user application. By these means, the vehicle 102 is configured to drive with its own vehicle control system and/or the driver of the vehicle 102 according to the level of autonomy.
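
For concreteness only, a vehicle control instruction of the kind described above might be represented as a small structure with one command per actuator, as in the sketch below; the field names, sign conventions, and limits are assumptions, not part of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class ControlInstruction:
    steering_angle_deg: float   # negative = left, positive = right (assumed convention)
    throttle: float             # 0.0 .. 1.0
    brake: float                # 0.0 .. 1.0
    gear: str                   # e.g., "D", "R", "N"

def clamp_instruction(cmd: ControlInstruction) -> ControlInstruction:
    """Keep actuator commands inside assumed physical limits before applying them."""
    cmd.steering_angle_deg = max(-35.0, min(35.0, cmd.steering_angle_deg))
    cmd.throttle = max(0.0, min(1.0, cmd.throttle))
    cmd.brake = max(0.0, min(1.0, cmd.brake))
    return cmd

print(clamp_instruction(ControlInstruction(steering_angle_deg=50.0, throttle=0.3, brake=0.0, gear="D")))
```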


In some embodiments, autonomous vehicles include, for example, a fully autonomous vehicle, a partially autonomous vehicle, a vehicle with driver assistance, or an autonomous capable vehicle. Capabilities of autonomous vehicles can be associated with a classification system, or taxonomy, having tiered levels of autonomy. A classification system can be specified, for example, by industry standards or governmental guidelines. For example, the levels of autonomy can be considered using a taxonomy such as level 0 (momentary driver assistance), level 1 (driver assistance), level 2 (additional assistance), level 3 (conditional assistance), level 4 (high automation), and level 5 (full automation without any driver intervention) as classified by SAE International (formerly the Society of Automotive Engineers). Following this example, an autonomous vehicle can be capable of operating, in some instances, in at least one of levels 0 through 5. According to various embodiments, an autonomous capable vehicle may refer to a vehicle that can be operated by a driver manually (that is, without the autonomous capability activated) while being capable of operating in at least one of levels 0 through 5 upon activation of an autonomous mode. As used herein, the term “driver” may refer to a local operator or a remote operator. The autonomous vehicle may operate solely at a given level (e.g., level 2 additional assistance or level 5 full automation) for at least a period of time or during the entire operating time of the autonomous vehicle. Other classification systems can provide other levels of autonomy characterized by different vehicle capabilities.


In some embodiments, the vehicle 102 drives in the vehicle driving environment 100 at level 5. The vehicle 102 collects sensor data from the plurality of sensors, processes the sensor data to generate vehicle control instructions, and controls the vehicle control system to drive the vehicle autonomously in response to the vehicle control instructions. Alternatively, in some situations, the vehicle 102 drives in the vehicle driving environment 100 at level 0. The vehicle 102 collects the sensor data and processes the sensor data to provide feedback (e.g., a warning or an alert) to a driver of the vehicle 102 to allow the driver to drive the vehicle 102 manually and based on the driver's own judgement. Alternatively, in some situations, the vehicle 102 drives in the vehicle driving environment 100 partially autonomously at one of levels 1-4. The vehicle 102 collects the sensor data and processes the sensor data to generate a vehicle control instruction for a portion of the vehicle control system and/or provide feedback to a driver of the vehicle 102. The vehicle 102 is driven jointly by the vehicle control system of the vehicle 102 and the driver of the vehicle 102. In some embodiments, the vehicle control system and driver of the vehicle 102 control different portions of the vehicle 102. In some embodiments, the vehicle 102 determines the vehicle status. Based on the vehicle status, a vehicle control instruction of one of the vehicle control system or driver of the vehicle 102 preempts or overrides another vehicle control instruction provided by the other one of the vehicle control system or driver of the vehicle 102.


For the vehicle 102, the sensor data collected by the plurality of sensors, the vehicle control instructions applied to the vehicle control system, and the user inputs received via the vehicle user application form a collection of vehicle data 112. In some embodiments, at least a subset of the vehicle data 112 from each vehicle 102 is provided to one or more servers 104. A server 104 provides a central vehicle platform for collecting and analyzing the vehicle data 112, monitoring vehicle operation, detecting faults, providing driving solutions, and updating additional vehicle information 114 to individual vehicles 102 or client devices 108. In some embodiments, the server 104 manages vehicle data 112 of each individual vehicle 102 separately. In some embodiments, the server 104 consolidates vehicle data 112 from multiple vehicles 102 and manages the consolidated vehicle data jointly (e.g., the server 104 statistically aggregates the data).


Additionally, in some embodiments, the vehicle driving environment 100 further includes one or more client devices 108, such as desktop computers, laptop computers, tablet computers, and mobile phones. Each client device 108 is configured to execute a client user application associated with the central vehicle platform provided by the server 104. The client device 108 is logged into a user account on the client user application, and the user account is associated with one or more vehicles 102. The server 104 provides the collected vehicle data 112 and additional vehicle information 114 (e.g., vehicle operation information, fault information, or driving solution information) for the one or more associated vehicles 102 to the client device 108 using the user account of the client user application. In some embodiments, the client device 108 is located in the one or more vehicles 102, while in other embodiments, the client device is at a location distinct from the one or more associated vehicles 102. As such, the server 104 can apply its computational capability to manage the vehicle data 112 and facilitate vehicle monitoring and control on different levels (e.g., for each individual vehicle, for a collection of vehicles, and/or for related client devices 108).


The plurality of vehicles 102, the one or more servers 104, and the one or more client devices 108 are communicatively coupled to each other via one or more communication networks 110, which are used to provide communications links between these vehicles and computers connected together within the vehicle driving environment 100. The one or more communication networks 110 may include connections, such as a wired network, wireless communication links, or fiber optic cables. Examples of the one or more communication networks 110 include local area networks (LAN), wide area networks (WAN) such as the Internet, or a combination thereof. The one or more communication networks 110 are, in some embodiments, implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Long Term Evolution (LTE), Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol. A connection to the one or more communication networks 110 may be established either directly (e.g., using 3G/4G/5G connectivity to a wireless carrier), or through a network interface (e.g., a router, a switch, a gateway, a hub, or an intelligent, dedicated whole-home control node), or through any combination thereof. In some embodiments, the one or more communication networks 110 allow for communication using any suitable protocols, like Transmission Control Protocol/Internet Protocol (TCP/IP). In some embodiments, each vehicle 102 is communicatively coupled to the servers 104 via a cellular communication network.


In some embodiments, deep learning techniques are applied by the vehicles 102, the servers 104, or both, to process the vehicle data 112. For example, in some embodiments, after image data is collected by the cameras of one of the vehicles 102, the image data is processed using an object detection model to identify objects (e.g., road features including, but not limited to, vehicles, lane lines, shoulder lines, road dividers, traffic lights, traffic signs, road signs, cones, pedestrians, bicycles, and drivers of the vehicles) in the vehicle driving environment 100. In some embodiments, additional sensor data is collected and processed by a vehicle control model to generate a vehicle control instruction for controlling the vehicle control system. In some embodiments, a vehicle planning model is applied to plan a driving control process based on the collected sensor data and the vehicle driving environment 100. The object detection model, vehicle control model, and vehicle planning model are collectively referred to herein as vehicle data processing models (i.e., machine learning models 250 in FIG. 2), each of which includes one or more neural networks. In some embodiments, such a vehicle data processing model is applied by the vehicles 102, the servers 104, or both, to process the vehicle data 112 to infer associated vehicle status and/or provide control signals. In some embodiments, a vehicle data processing model is trained by a server 104, and applied locally or provided to one or more vehicles 102 for inference of the associated vehicle status and/or to provide control signals. Alternatively, a vehicle data processing model is trained locally by a vehicle 102, and applied locally or shared with one or more other vehicles 102 (e.g., by way of the server 104). In some embodiments, a vehicle data processing model is trained in a supervised, semi-supervised, or unsupervised manner.


In some embodiments, the vehicle driving environment 100 further includes one or more installations 130 (e.g., an infrastructure) that are situated along a road. For example, in some embodiments, the installations 130 can be positioned at locations along a road where traffic may be prone to buildup, such as a freeway entrance or exit, a lane merge zone (e.g., on a section of a road where two or more lanes merge), a tunnel, a toll booth, a traffic light area, an on-ramp merge point, and/or a road intersection. In some embodiments, a segment of a road can have multiple installations 130 that are positioned at regular intervals (e.g., every kilometer, every mile, every 2 miles, etc.) along the road. In some embodiments, the installations 130 comprise fixed, immovable structures. In some embodiments, the installations 130 are positioned ahead of traffic of interest (e.g., the vehicles are driving in a direction toward the installations).


The one or more installations 130, the plurality of vehicles 102, the one or more servers 104, and the one or more client devices 108 are communicatively coupled to each other via the one or more communication networks 110. In some embodiments, a vehicle 102 can be equipped with a vehicle-to-infrastructure (V2I) communication system, in which the vehicle 102 and the one or more installations 130 are communicating nodes that provide each other with information such as traffic information, weather information, road condition information, and safety warnings. In some embodiments, a respective vehicle 102 can be equipped with a vehicle-to-everything (V2X) communication system, in which the respective vehicle 102 can exchange information with the one or more installations 130 as well as with other vehicles that may be driving along the same road (or route) as the respective vehicle 102. The V2I and/or V2X communication system can be powered using 3G/4G/5G connectivity to a wireless carrier, or through a network interface (e.g., a router, a switch, a gateway, a hub, or an intelligent, dedicated whole-home control node), or through any combination thereof. In some embodiments, the V2I or V2X communication is powered by 5G, which advantageously allows high-bandwidth, low-latency information sharing between the vehicles and the installations, providing new opportunities for road condition estimation and weather condition perception.
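
The following sketch illustrates, under stated assumptions, how an installation might broadcast an estimation to nearby vehicles; a plain UDP broadcast and an arbitrary port number stand in for the DSRC, C-V2X, or 5G link an actual deployment would use.

```python
import json
import socket

def broadcast_estimation(estimation: dict, port: int = 47000) -> None:
    """Serialize an estimation and broadcast it on the local network.
    A UDP broadcast is only a stand-in for a real V2I/V2X radio link."""
    payload = json.dumps(estimation).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", port))

broadcast_estimation({"segment_id": "I-10_mile_42", "road_friction_coefficient": 0.35})
```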


The installations 130 include one or more sensors 660 positioned at the installations 130. The sensors 660 are fixedly located on the installations 130 and are configured to detect, monitor, and gather data on various traffic-related parameters. In accordance with some embodiments of the present disclosure, the information collected by the sensors 660 is more detailed and instantaneous compared to information collected using a perception system on a single autonomous vehicle, because the sensors 660 have a fixed location, better detection coverage, and a defined field of view. In some embodiments, the one or more sensors include one or more of: an imaging sensor, a camera, an anemometer (e.g., a wind speed and direction sensor), a global positioning system (GPS), a thermal sensor (e.g., a temperature sensor), an acoustic sensor, a microphone, a light detection and ranging (LiDAR) scanner, a radio detection and ranging (RADAR) sensor, an infrared sensor, and an ultrasonic sensor. In some embodiments, the installations 130 include one or more inductive loop detectors for transmitting and receiving communication signals and/or detecting the presence of vehicles.


In some embodiments, a respective installation 130 includes a communication module for facilitating information sharing between the vehicles 102 and the installation 130. For example, in some embodiments, the installation 130 gathers, from the vehicles 102 via the communication module, vehicle information 134. The vehicle information 134 can include information about vehicle dynamics (e.g., vehicle velocities and accelerations), vehicle data 112, and/or the additional vehicle information 114. In some embodiments, the vehicle information 134 can also include traffic, road, and/or weather information that are communicated from the vehicles 102 to the installation 130.


In some embodiments, the installation 130 provides at least a subset of infrastructure information 132 to the vehicles 102 and/or the one or more servers 104. The infrastructure information 132 can include sensor data collected by the sensors 660 and/or data processed by a computing unit of the installation 130 based on the sensor data and the vehicle information 134.


It is noted that the installation 130 illustrated in FIG. 1 does not reflect an actual size of the installation 130. In some embodiments, the installation 130 corresponds to an existing structure (e.g., a light pole, a billboard) standing near or on the road. Alternatively, in some embodiments, the installation 130 is a dedicated structure built at a fixed location near or on the road for collecting information about local road or weather conditions. The installation 130 may not be visible or discernable to passing vehicles from its appearance.



FIG. 2 is a block diagram of an example vehicle 102 configured to be driven with a certain level of autonomy, in accordance with some embodiments. The vehicle 102 typically includes one or more processing units (CPUs) 202, one or more network interfaces 204, memory 206, and one or more communication buses 208 for interconnecting these components (sometimes called a chipset). The vehicle 102 includes one or more user interface devices. The user interface devices include one or more input devices 210, which facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls. Furthermore, in some embodiments, the vehicle 102 uses a microphone and voice recognition or a camera and gesture recognition to supplement or replace the keyboard. In some embodiments, the one or more input devices 210 include one or more cameras, scanners, or photo sensor units for capturing images, for example, of a driver and a passenger in the vehicle 102. The vehicle 102 also includes one or more output devices 212, which enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays (e.g., a display panel located near a driver's right hand in vehicles driven on the right-hand side of the road, as is typical in the U.S.).


The vehicle 102 includes a plurality of sensors 260 configured to collect sensor data in a vehicle driving environment 100. The plurality of sensors 260 include one or more of a GPS 262, a LiDAR scanner 264, one or more cameras 266, a RADAR sensor 268, an infrared sensor 270, one or more ultrasonic sensors 272, a DSRC module 274, an INS 276 including accelerometers and gyroscopes, and an odometry sensor 278. The GPS 262 localizes the vehicle 102 in Earth coordinates (e.g., using a latitude value and a longitude value) and can reach a first accuracy level of less than 1 meter (e.g., 30 cm). The LiDAR scanner 264 uses light beams to estimate relative distances between the scanner 264 and a target object (e.g., another vehicle 102), and can reach a second accuracy level better than the first accuracy level of the GPS 262. The cameras 266 are installed at different locations on the vehicle 102 to monitor surroundings of the camera 266 from different perspectives. In some situations, a camera 266 is installed facing the interior of the vehicle 102 and configured to monitor the state of the driver of the vehicle 102. The RADAR sensor 268 emits electromagnetic waves and collects reflected waves to determine the speed and a distance of an object over which the waves are reflected. The infrared sensor 270 identifies and tracks objects in an infrared domain when lighting conditions are poor. The one or more ultrasonic sensors 272 are used to detect objects at a short distance (e.g., to assist parking). The DSRC module 274 is used to exchange information with a road feature (e.g., a traffic light). The INS 276 uses the accelerometers and gyroscopes to measure the position, the orientation, and the speed of the vehicle. The odometry sensor 278 tracks the distance the vehicle 102 has travelled (e.g., based on a wheel speed). In some embodiments, based on the sensor data collected by the plurality of sensors 260, the one or more processors 202 of the vehicle monitor its own vehicle state 282, the driver or passenger state 284, states of adjacent vehicles 286, and road conditions 288 associated with a plurality of road features.


The vehicle 102 has a control system 290, including a steering control 292, a braking control 294, a throttle control 296, a transmission control 298, signaling and lighting controls, and other controls. In some embodiments, one or more actuators of the vehicle control system 290 are automatically controlled based on the sensor data collected by the plurality of sensors 260 (e.g., according to one or more of the vehicle state 282, the driver or passenger state 284, states of adjacent vehicles 286, and/or road conditions 288).


The memory 206 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices. In some embodiments, the memory includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. In some embodiments, the memory 206 includes one or more storage devices remotely located from one or more processing units 202. The memory 206, or alternatively the non-volatile memory within the memory 206, includes a non-transitory computer readable storage medium. In some embodiments, the memory 206, or the non-transitory computer readable storage medium of the memory 206, stores the following programs, modules, and data structures, or a subset or superset thereof:

    • an operating system 214, which includes procedures for handling various basic system services and for performing hardware dependent tasks;
    • a network communication module 216, which connects each vehicle 102 to other devices (e.g., another vehicle 102, a server 104, or a client device 108) via one or more network interfaces (wired or wireless) and one or more communication networks 110, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
    • a user interface module 218, which enables presentation of information (e.g., a graphical user interface for an application 224, widgets, websites and web pages thereof, audio content, and/or video content) at the vehicle 102 via one or more output devices 212 (e.g., displays or speakers);
    • an input processing module 220, which detects one or more user inputs or interactions from one of the one or more input devices 210 and interprets the detected input or interaction;
    • a web browser module 222, which navigates, requests (e.g., via HTTP), and displays websites and web pages thereof, including a web interface for logging into a user account of a user application 224 associated with the vehicle 102 or another vehicle;
    • one or more user applications 224, which are executed at the vehicle 102. The user applications 224 include a vehicle user application that controls the vehicle 102 and enables users to edit and review settings and data associated with the vehicle 102;
    • a model training module 226, which trains a machine learning model 250. The model 250 includes at least one neural network and is applied to process vehicle data (e.g., sensor data and vehicle control data) of the vehicle 102;
    • a data processing module 228, which performs a plurality of on-vehicle tasks, including, but not limited to, perception and object analysis 230, vehicle localization and environment mapping 232, vehicle drive control 234, vehicle drive planning 236, local operation monitoring 238, and vehicle action and behavior prediction 240;
    • a vehicle database 242, which stores vehicle data 112, including:
      • device settings 243, including common device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, and/or medical procedure settings) of the vehicle 102;
      • user account information 244 for the one or more user applications 224 (e.g., user names, security questions, account history data, user preferences, and predefined account settings);
      • network parameters 246 for the one or more communication networks 110, (e.g., IP address, subnet mask, default gateway, DNS server, and host name);
      • training data 248 for training the machine learning model 250;
      • machine learning models 250 for processing vehicle data 112, where in some embodiments, the machine learning model 250 is applied to process one or more images captured by a first vehicle 102A and predict a sequence of vehicle actions of a second vehicle through a hierarchy of interconnected vehicle actions;
      • sensor data 254 captured or measured by the plurality of sensors 260;
      • mapping and location data 256, which is determined from the sensor data 254 to map the vehicle driving environment 100 and locations of the vehicle 102 in the environment 100;
      • a hierarchy of interconnected vehicle actions 258 including a plurality of predefined vehicle actions that are organized to define a plurality of vehicle action sequences; and
      • vehicle control data 259, which is automatically generated by the vehicle 102 or manually input by the user via the vehicle control system 290 based on predicted vehicle actions to drive the vehicle 102.


Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, the memory 206 stores a subset of the modules and data structures identified above. In some embodiments, the memory 206 stores additional modules and data structures not described above.



FIG. 3 is a block diagram of a server 104 for monitoring and managing vehicles 102 in a vehicle driving environment (e.g., the environment 100 in FIG. 1), in accordance with some embodiments. Examples of the server 104 include, but are not limited to, a server computer, a desktop computer, a laptop computer, a tablet computer, or a mobile phone. The server 104 typically includes one or more processing units (CPUs) 302, one or more network interfaces 304, memory 306, and one or more communication buses 308 for interconnecting these components (sometimes called a chipset). The server 104 includes one or more user interface devices. The user interface devices include one or more input devices 310, which facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls. Furthermore, in some embodiments, the server 104 uses a microphone and voice recognition or a camera and gesture recognition to supplement or replace the keyboard. In some embodiments, the one or more input devices 310 include one or more cameras, scanners, or photo sensor units for capturing images, for example, of graphic serial codes printed on electronic devices. The server 104 also includes one or more output devices 312, which enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays.


The memory 306 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices. In some embodiments, the memory includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. In some embodiments, the memory 306 includes one or more storage devices remotely located from one or more processing units 302. The memory 306, or alternatively the non-volatile memory within memory 306, includes a non-transitory computer readable storage medium. In some embodiments, the memory 306, or the non-transitory computer readable storage medium of the memory 306, stores the following programs, modules, and data structures, or a subset or superset thereof:

    • an operating system 314, which includes procedures for handling various basic system services and for performing hardware dependent tasks;
    • a network communication module 316, which connects the server 104 to other devices (e.g., vehicles 102, another server 104, and/or client devices 108) via one or more network interfaces (wired or wireless) and one or more communication networks 110, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
    • a user interface module 318, which enables presentation of information (e.g., a graphical user interface for user application 324, widgets, websites and web pages thereof, audio content, and/or video content) at the server 104 via one or more output devices 312 (e.g., displays or speakers);
    • an input processing module 320, which detects one or more user inputs or interactions from one of the one or more input devices 310 and interprets the detected input or interaction;
    • a web browser module 322, which navigates, requests (e.g., via HTTP), and displays websites and web pages thereof, including a web interface for logging into a user account of a user application 324;
    • one or more user applications 324, which are executed at the server 104. The user applications 324 include a vehicle user application that associates vehicles 102 with user accounts and facilitates controlling the vehicles 102, and enables users to edit and review settings and data associated with the vehicles 102;
    • a model training module 226, which trains a machine learning model 250, where the model 250 includes at least one neural network and is applied to process vehicle data (e.g., sensor data and vehicle control data) of one or more vehicles 102;
    • a data processing module 228, which manages:
      • a multi-vehicle operation monitoring platform 332 configured to collect vehicle data 112 from a plurality of vehicles 102, monitor vehicle operation, detect faults, provide driving solutions, and update additional vehicle information 114 to individual vehicles 102 or client devices 108. The data processing module 228 manages vehicle data 112 for each individual vehicle 102 separately or processes vehicle data 112 of multiple vehicles 102 jointly (e.g., statistically, in the aggregate); and
      • a multi-installation operation monitoring platform 334 configured to collect infrastructure information 132 from a plurality of installations 130, monitor installation operation, and detect faults (e.g., faults of the sensors 660). In some embodiments, the data processing module 228 manages infrastructure information 132 for each individual installation 130 separately or processes infrastructure information 132 of multiple installations 130 jointly (e.g., statistically, in the aggregate);
    • one or more databases 340 for storing vehicle server data and infrastructure server data, including:
      • device settings 342, which include common device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, and/or medical procedure settings) of the server 104;
      • user account information 344 for the one or more user applications 324 (e.g., user names, security questions, account history data, user preferences, and predefined account settings);
      • network parameters 346 for the one or more communication networks 110, (e.g., IP address, subnet mask, default gateway, DNS server, and host name);
      • training data 248 for training the machine learning model 250;
      • machine learning models 250 for processing vehicle data;
      • vehicle data 112, which is collected from a plurality of vehicles 102 and includes sensor data 254, mapping and location data 256, and vehicle control data 259;
      • additional vehicle information 114, including vehicle operation information, fault information, and/or driving solution information, which are generated from the collected vehicle data 112; and
      • infrastructure information 132, including data collected by sensors 660 of the installations 130 and data processed by the installations 130 based on the data collected by the sensors 660 and the vehicle information 134;


Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, the memory 306 stores a subset of the modules and data structures identified above. In some embodiments, the memory 306 stores additional modules and data structures not described above.



FIGS. 4, 5A, and 5B provide background on the machine learning systems described herein, which are helpful in understanding the details of the embodiments described from FIG. 6 onward.



FIG. 4 is a block diagram of a machine learning system 400 for training and applying machine learning models 250 for facilitating driving of a vehicle, in accordance with some embodiments. The machine learning system 400 includes a model training module 226 establishing one or more machine learning models 250 and a data processing module 228 for processing vehicle data 112 using the machine learning model 250. In some embodiments, both the model training module 226 (e.g., the model training module 226 in FIG. 2) and the data processing module 228 are located within the vehicle 102, while a training data source 404 provides training data 248 to the vehicle 102. In some embodiments, the training data source 404 is the data obtained from the vehicle 102 itself, from a server 104, from storage 106, or from another vehicle or vehicles 102. Alternatively, in some embodiments, the model training module 226 (e.g., the model training module 226 in FIG. 3) is located at a server 104, and the data processing module 228 is located in a vehicle 102. The server 104 trains the data processing models 250 and provides the trained models 250 to the vehicle 102 to process real-time vehicle data 112 detected by the vehicle 102. In some embodiments, the training data 248 provided by the training data source 404 include a standard dataset (e.g., a set of road images) widely used by engineers in the autonomous vehicle industry to train machine learning models 250. In some embodiments, the training data 248 includes vehicle data 112 and/or additional vehicle information 114, which is collected from one or more vehicles 102 that will apply the machine learning models 250 or collected from distinct vehicles 102 that will not apply the machine learning models 250. The vehicle data 112 further includes one or more of sensor data 254, road mapping and location data 256, and control data 259. Further, in some embodiments, a subset of the training data 248 is modified to augment the training data 248. The subset of modified training data is used in place of or jointly with the subset of training data 248 to train the machine learning models 250.


In some embodiments, the model training module 226 includes a model training engine 410 and a loss control module 412. Each machine learning model 250 is trained by the model training engine 410 to process corresponding vehicle data 112 to implement a respective on-vehicle task. The on-vehicle tasks include, but are not limited to, perception and object analysis 230, vehicle localization and environment mapping 232, vehicle drive control 234, vehicle drive planning 236, local operation monitoring 238, and vehicle action and behavior prediction 240 (FIG. 2). Specifically, the model training engine 410 receives the training data 248 corresponding to a machine learning model 250 to be trained, and processes the training data to build the machine learning model 250. In some embodiments, during this process, the loss control module 412 monitors a loss function comparing the output associated with the respective training data item to a ground truth of the respective training data item. In these embodiments, the model training engine 410 modifies the machine learning models 250 to reduce the loss, until the loss function satisfies a loss criterion (e.g., a comparison result of the loss function is minimized or reduced below a loss threshold). The machine learning models 250 are thereby trained and provided to the data processing module 228 of a vehicle 102 to process real-time vehicle data 112 from the vehicle.
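
The loss-driven loop described above can be pictured with the following toy gradient-descent example, which fits a one-parameter model and stops once the loss falls below a threshold; it is a didactic stand-in rather than the training procedure used for the machine learning models 250.

```python
# Toy illustration of "modify the model to reduce the loss until the loss
# criterion is satisfied", using a one-parameter linear model and plain
# gradient descent on a mean-squared-error loss.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 2.1, 3.9, 6.2]           # toy "ground truth" labels, roughly y = 2x

w = 0.0                              # model parameter
learning_rate = 0.02
loss_threshold = 0.05

for step in range(10_000):
    predictions = [w * x for x in xs]
    loss = sum((p - y) ** 2 for p, y in zip(predictions, ys)) / len(xs)  # MSE
    if loss < loss_threshold:        # loss criterion satisfied
        break
    grad = sum(2 * (p - y) * x for p, y, x in zip(predictions, ys, xs)) / len(xs)
    w -= learning_rate * grad        # modify the model to reduce the loss

print(f"trained weight ~ {w:.3f}, final loss ~ {loss:.4f}")
```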


In some embodiments, the model training module 226 further includes a data pre-processing module 408 configured to pre-process the training data 248 before the training data 248 is used by the model training engine 410 to train a machine learning model 250. For example, an image pre-processing module 408 is configured to format road images in the training data 248 into a predefined image format. For example, the preprocessing module 408 may normalize the road images to a fixed size, resolution, or contrast level. In another example, an image pre-processing module 408 extracts a region of interest (ROI) corresponding to a drivable area in each road image or separates content of the drivable area into a distinct image.
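
A bare-bones version of such pre-processing, cropping a drivable-area region of interest and normalizing pixel intensities, might look like the following; the ROI coordinates and value range are invented, and a production pipeline would typically rely on an image library such as OpenCV.

```python
import numpy as np

def preprocess_road_image(image: np.ndarray, roi: tuple) -> np.ndarray:
    """Crop a region of interest given as (top, bottom, left, right) and scale
    pixel intensities to [0, 1]. Illustrative only; real resolution and
    contrast handling is more involved."""
    top, bottom, left, right = roi
    cropped = image[top:bottom, left:right].astype(np.float32)
    lo, hi = cropped.min(), cropped.max()
    return (cropped - lo) / (hi - lo) if hi > lo else np.zeros_like(cropped)

image = np.random.default_rng(0).integers(0, 256, size=(480, 640)).astype(np.uint8)
roi_image = preprocess_road_image(image, roi=(240, 480, 0, 640))  # lower half of the frame
print(roi_image.shape, roi_image.min(), roi_image.max())
```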


In some embodiments, the model training module 226 uses supervised learning in which the training data 248 is labelled and includes a desired output for each training data item (also called the ground truth in some situations). In some embodiments, the desirable output is labelled manually by people or labelled automatically by the model training model 226 before training. In some embodiments, the model training module 226 uses unsupervised learning in which the training data 248 is not labelled. The model training module 226 is configured to identify previously undetected patterns in the training data 248 without pre-existing labels and with little or no human supervision. Additionally, in some embodiments, the model training module 226 uses partially supervised learning in which the training data is partially labelled.


In some embodiments, the data processing module 228 includes a data pre-processing module 414, a model-based processing module 416, and a data post-processing module 418. The data pre-processing module 414 pre-processes vehicle data 112 based on the type of the vehicle data 112. In some embodiments, functions of the data pre-processing module 414 are consistent with those of the pre-processing module 408, and convert the vehicle data 112 into a predefined data format that is suitable as input to the model-based processing module 416. The model-based processing module 416 applies the trained machine learning model 250 provided by the model training module 226 to process the pre-processed vehicle data 112. In some embodiments, the model-based processing module 416 also monitors an error indicator to determine whether the vehicle data 112 has been properly processed in the machine learning model 250. In some embodiments, the processed vehicle data is further processed by the data post-processing module 418 to create a preferred format or to provide additional vehicle information 114 that can be derived from the processed vehicle data. The data processing module 228 uses the processed vehicle data to at least partially autonomously drive the vehicle 102. For example, the processed vehicle data includes vehicle control instructions that are used by the vehicle control system 290 to drive the vehicle 102.
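For clarity, the flow through the data processing module 228 can be sketched as a three-stage pipeline. The following is a minimal sketch only; the callables and their signatures are assumptions introduced here, not the disclosed interfaces of modules 414, 416, and 418.

```python
# Minimal sketch of the pre-process -> model -> post-process flow (illustration only).
def process_vehicle_data(raw_data, preprocess, model, postprocess):
    """Run one batch of vehicle data through a trained model and derive outputs."""
    model_input = preprocess(raw_data)     # convert data to the model's input format
    model_output = model(model_input)      # apply the trained machine learning model
    return postprocess(model_output)       # e.g., derive vehicle control instructions
```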


In some embodiments, the data processing module 228 of the vehicle 102 (e.g., a first vehicle) is applied to perform perception and object analysis 230 by obtaining a road image including a road surface along which the first vehicle is travelling, identifying one or more identifiable objects on the road surface in the road image, and detecting a plurality of objects on the road surface in the road image. The data processing module 228 eliminates the one or more identifiable objects from the plurality of objects in the road image to determine one or more unidentifiable objects on the road surface in the road image. The first vehicle is at least partially autonomously driven by treating the one or more unidentifiable objects differently from the one or more identifiable objects. Further, in some embodiments, the machine learning models 250 of the vehicle 102 include an object detection model 230A and a drivable area model 230B. The object detection model 230A is configured to identify the one or more identifiable objects in the road image and associate each identifiable object with a predefined object type or class. The drivable area model 230B is configured to determine a road surface in the road image. Additionally, in some embodiments, the machine learning models 250 include a generic obstacle detection model 230C configured to detect a plurality of objects on the road surface in the road image, e.g., with or without determining a predefined object type or class of each of the plurality of objects. The generic obstacle detection model 230C is optionally modified from the drivable area model 230B by way of retraining.
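The elimination of identifiable objects from the full set of detections can be understood as a set difference over detections, for example by suppressing any detection that overlaps an identified object. The following sketch, with an assumed (x1, y1, x2, y2) box format and an assumed overlap threshold, is for illustration only and is not the disclosed implementation.

```python
# Illustration only: remove detections that match identified objects, leaving
# "unidentifiable" objects. Box format and IoU threshold are assumptions.
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def unidentifiable_objects(all_detections, identified_detections, threshold=0.5):
    """Keep detections that do not overlap any identified object."""
    return [box for box in all_detections
            if all(iou(box, known) < threshold for known in identified_detections)]
```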



FIG. 5A is a structural diagram of an example neural network 500 applied to process vehicle data in a machine learning model 250, in accordance with some embodiments, and FIG. 5B is an example node 520 in the neural network 500, in accordance with some embodiments. It should be noted that this description is used as an example only, and other types or configurations may be used to implement the embodiments described herein. The machine learning model 250 is established based on the neural network 500. A corresponding model-based processing module 416 applies the machine learning model 250 including the neural network 500 to process vehicle data 112 that has been converted to a predefined data format. The neural network 500 includes a collection of nodes 520 that are connected by links 512. Each node 520 receives one or more node inputs 522 and applies a propagation function 530 to generate a node output 524 from the one or more node inputs. As the node output 524 is provided via one or more links 512 to one or more other nodes 520, a weight w associated with each link 512 is applied to the node output 524. Likewise, the one or more node inputs 522 are combined based on corresponding weights w1, w2, w3, and w4 according to the propagation function 530. In an example, the propagation function 530 is computed by applying a non-linear activation function 532 to a linear weighted combination 534 of the one or more node inputs 522.
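In other words, a single node computes an activation of a weighted sum of its inputs. The following minimal sketch, using an assumed rectified linear activation, illustrates the propagation function 530 and does not limit the activation functions discussed below.

```python
# Illustration only: one node's output as activation(w1*x1 + w2*x2 + ... + b).
import numpy as np

def node_output(inputs, weights, bias=0.0):
    weighted_sum = float(np.dot(weights, inputs)) + bias   # linear weighted combination 534
    return max(0.0, weighted_sum)                           # rectified linear activation 532
```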


The collection of nodes 520 is organized into layers in the neural network 500. In general, the layers include an input layer 502 for receiving inputs, an output layer 506 for providing outputs, and one or more hidden layers 504 (e.g., layers 504A and 504B) between the input layer 502 and the output layer 506. A deep neural network has more than one hidden layer 504 between the input layer 502 and the output layer 506. In the neural network 500, each layer is only connected with its immediately preceding and/or immediately following layer. In some embodiments, a layer is a “fully connected” layer because each node in the layer is connected to every node in its immediately following layer. In some embodiments, a hidden layer 504 includes two or more nodes that are connected to the same node in its immediately following layer for down sampling or pooling the two or more nodes. In particular, max pooling uses a maximum value of the two or more nodes in the layer for generating the node of the immediately following layer.
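As an illustration of the pooling operation, the following sketch reduces a feature map by taking the maximum over each 2×2 group of nodes; the window size is an assumption for illustration only.

```python
# Illustration only: 2x2 max pooling over a single-channel feature map.
import numpy as np

def max_pool_2x2(feature_map: np.ndarray) -> np.ndarray:
    h2, w2 = feature_map.shape[0] // 2, feature_map.shape[1] // 2
    trimmed = feature_map[:h2 * 2, :w2 * 2]
    return trimmed.reshape(h2, 2, w2, 2).max(axis=(1, 3))
```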


In some embodiments, a convolutional neural network (CNN) is applied in a machine learning model 250 to process vehicle data (e.g., video and image data captured by cameras 266 of a vehicle 102). The CNN employs convolution operations and belongs to a class of deep neural networks. The hidden layers 504 of the CNN include convolutional layers. Each node in a convolutional layer receives inputs from a receptive area associated with a previous layer (e.g., nine nodes). Each convolution layer uses a kernel to combine pixels in a respective area to generate outputs. For example, the kernel may be a 3×3 matrix including weights applied to combine the pixels in the respective area surrounding each pixel. Video or image data is pre-processed to a predefined video/image format corresponding to the inputs of the CNN. In some embodiments, the pre-processed video or image data is abstracted by the CNN layers to form a respective feature map. In this way, video and image data can be processed by the CNN for video and image recognition or object detection.
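The kernel operation can be illustrated with a minimal single-channel convolution in which each output pixel is the weighted sum of the nine pixels surrounding it. The zero padding at the image border is an assumption for illustration and is not required by the embodiments.

```python
# Illustration only: single-channel convolution with a 3x3 kernel of weights.
import numpy as np

def convolve_3x3(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    padded = np.pad(image, 1, mode="constant")
    out = np.zeros(image.shape, dtype=np.float32)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            receptive_area = padded[i:i + 3, j:j + 3]   # nine input pixels
            out[i, j] = float((receptive_area * kernel).sum())
    return out
```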


In some embodiments, a recurrent neural network (RNN) is applied in the machine learning model 250 to process vehicle data 112. Nodes in successive layers of the RNN follow a temporal sequence, such that the RNN exhibits a temporal dynamic behavior. In an example, each node 520 of the RNN has a time-varying real-valued activation. It is noted that in some embodiments, two or more types of vehicle data are processed by the data processing module 228, and two or more types of neural networks (e.g., both a CNN and an RNN) are applied in the same machine learning model 250 to process the vehicle data jointly.


The training process is a process for calibrating all of the weights wi for each layer of the neural network 500 using training data 248 that is provided in the input layer 502. The training process typically includes two steps, forward propagation and backward propagation, which are repeated multiple times until a predefined convergence condition is satisfied. In the forward propagation, the set of weights for different layers are applied to the input data and intermediate results from the previous layers. In the backward propagation, a margin of error of the output (e.g., a loss function) is measured (e.g., by a loss control module 412), and the weights are adjusted accordingly to decrease the error. The activation function 532 can be linear, rectified linear, sigmoidal, hyperbolic tangent, or other types. In some embodiments, a network bias term b is added to the weighted combination 534 from the previous layer before the activation function 532 is applied. The network bias b provides a perturbation that helps the neural network 500 avoid overfitting the training data. In some embodiments, the result of the training includes a network bias parameter b for each layer.
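The forward/backward loop can be illustrated with a deliberately simplified example in which a single linear node is trained with a squared-error loss until the loss drops below a threshold. The learning rate, loss threshold, and single-node structure are assumptions introduced purely for illustration; they do not describe the training of the machine learning models 250.

```python
# Illustration only: repeated forward/backward propagation on one linear node.
import numpy as np

def train(inputs, ground_truth, learning_rate=0.01, loss_threshold=1e-4, max_steps=10000):
    weights = np.zeros(inputs.shape[1])
    bias = 0.0
    for _ in range(max_steps):
        prediction = inputs @ weights + bias              # forward propagation
        error = prediction - ground_truth
        loss = float(np.mean(error ** 2))                 # margin of error (loss function)
        if loss < loss_threshold:                         # predefined convergence condition
            break
        grad_w = 2.0 * inputs.T @ error / len(error)      # backward propagation
        grad_b = 2.0 * float(np.mean(error))
        weights -= learning_rate * grad_w                 # adjust weights to decrease the error
        bias -= learning_rate * grad_b                    # adjust the network bias term b
    return weights, bias
```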



FIG. 6 is a block diagram of a computer system 600 associated with an installation 130 for detecting conditions for vehicle driving in a vehicle driving environment (e.g., the environment 100 in FIG. 1), in accordance with some embodiments. The installation 130 includes a plurality of sensors 660.


The plurality of sensors 660 includes one or more of a GPS 662, a LIDAR scanner 664, one or more cameras 666, a RADAR sensor 668, one or more infrared sensors 670, one or more ultrasonic sensors 672, one or more thermal sensors 674 (e.g., for measuring heat and/or temperature), and one or more anemometers 676 for measuring wind speed and wind direction.


In some embodiments, the computer system 600 is physically co-located at the installation 130. For example, the computer system 600 comprises a microcontroller chip that is located locally at the installation 130. In some embodiments, the computer system 600 comprises a cloud computer system. Examples of the computer system 600 include, but are not limited to, a server computer, a desktop computer, a laptop computer, a tablet computer, or a mobile phone. The computer system 600 typically includes one or more processing units (CPUs) 602, one or more network interfaces 604, memory 606, and one or more communication buses 608 for interconnecting these components (sometimes called a chipset). The computer system 600 includes one or more user interface devices. The user interface devices include one or more input devices 610, which facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls. Furthermore, in some embodiments, the computer system 600 uses a microphone and voice recognition or a camera and gesture recognition to supplement or replace the keyboard. In some embodiments, the one or more input devices 610 include one or more cameras, scanners, or photo sensor units for capturing images, for example, of graphic serial codes printed on electronic devices. The computer system 600 also includes one or more output devices 612, which enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays.


The memory 606 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices. In some embodiments, the memory includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. In some embodiments, the memory 606 includes one or more storage devices remotely located from the one or more processing units 602. The memory 606, or alternatively the non-volatile memory within memory 606, includes a non-transitory computer readable storage medium. In some embodiments, the memory 606, or the non-transitory computer readable storage medium of the memory 606, stores the following programs, modules, and data structures, or a subset or superset thereof:

    • an operating system 614, which includes procedures for handling various basic system services and for performing hardware dependent tasks;
    • a communication module 616, which connects the computer system to other devices (e.g., vehicles 102, server 104, installations 130, and/or client devices 108) via one or more network interfaces (wired or wireless) and one or more communication networks 110, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on. In some embodiments, the communication module 616 gathers information about road and weather conditions from vehicles 102 via a V2I or a V2X communication system that is installed on the vehicles 102. In some embodiments, the V2I or V2X communication system operates on a network that provides high speed, low latency communication;
    • a user interface module 618, which enables presentation of information (e.g., widgets, websites and web pages thereof, audio content, and/or video content) via one or more output devices 612 (e.g., displays or speakers);
    • an input processing module 620, which detects one or more user inputs or interactions from one of the one or more input devices 610 and interprets the detected input or interaction;
    • a web browser module 622, which navigates, requests (e.g., via HTTP), and displays websites and web pages thereof;
    • a data processing module 626, which manages a multi-installation operation monitoring platform 334 configured to collect infrastructure information 132 from a plurality of installations 130, monitor installation operation, and detect faults (e.g., faults from sensors 660). In some embodiments, the data processing module 626 manages infrastructure information 132 for each individual installation 130 separately or processes infrastructure information 132 of multiple installations 130 jointly (e.g., statistically, in the aggregate);
    • data 630 that is stored locally on the computer system 600 or on one or more databases, including:
      • infrastructure information 132. In some embodiments, infrastructure information 132 includes data collected by sensors 660 of installations 130. In some embodiments, infrastructure information 132 includes data processed by the installations 130 based on the data collected by the sensors 660 and the vehicle information 134; and
      • vehicle information 134. In some embodiments, vehicle information 134 includes information gathered by installations 130 from vehicles 102 via the communication module 616. In some embodiments, vehicle information 134 includes information about vehicle dynamics (e.g., vehicle velocities and accelerations), vehicle data 112, and/or the additional vehicle information 114. In some embodiments, the vehicle information 134 includes traffic, road, and/or weather information that is transmitted from the vehicles 102 to the installations 130.


Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, the memory 606 stores a subset of the modules and data structures identified above. In some embodiments, the memory 606 stores additional modules and data structures not described above. In some embodiments, a subset of the operations performed at the computer system 600 can also be performed at the server 104.



FIG. 7A illustrates an exemplary scene 700 depicting vehicles 102 (e.g., vehicle 102-A to vehicle 102-G) traveling along a road 702 that includes an installation 130, in accordance with some embodiments. The installation 130 comprises a structure that is immovably positioned along the road 702, and the vehicles 102-A to 102-G are traveling in a direction toward the installation 130. In some embodiments, the installation 130 is one of several installations that are positioned along the road 702, each of the installations separated by a respective predefined distance. The installation 130 includes sensors 660 that are fixedly positioned on the installation 130 and configured to monitor and gather data (e.g., traffic information). Because the sensors 660 focus on a fixed area of the road, they provide consistent detection coverage of that area. In some embodiments, the sensors 660 have a sensing range that is capable of detecting all the vehicles 102-A to 102-G on the road 702. In some embodiments, a distance between the vehicle 102-A and the vehicle 102-F can be one mile, two miles, three miles, or five miles.



FIG. 7B illustrates another scene 750 showing fixed installations 130 with sensors 660 positioned at a tollbooth 760. A fixed installation may be located several meters to several miles away from a respective vehicle 102 (e.g., vehicles 102P, 102T, and 102V) that is traveling on the road 762. In some embodiments, the installation 130 (e.g., the sensors 660) is installed at a streetlight 764.


In the example of FIGS. 7A and 7B, a respective vehicle 102 (e.g., vehicle 102-A) is equipped with a V2I communication system (e.g., communication module 616) that facilitates communication between the vehicle 102 and the installation 130 (e.g., via CPU(s) 602). In some embodiments, the CPU(s) 602 generates traffic information according to data collected by the sensors 660. Exemplary traffic information can include real-time information about traffic flow (e.g., an average speed of vehicles traveling on the road, or an average speed of vehicles traveling on a respective lane of the road), traffic signal timings (when the road includes traffic lights), presence of traffic incidents, and/or traffic buildup (e.g., due to bottlenecks at the toll booth 760).
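By way of illustration only, traffic-flow figures such as an average speed for the road and for each lane can be derived from per-vehicle detections made by the sensors 660. The detection record fields below are assumptions introduced for illustration, not the disclosed data format.

```python
# Illustration only: summarize traffic flow from assumed per-vehicle detection records.
from collections import defaultdict
from statistics import mean

def traffic_flow_summary(detections):
    """detections: iterable of dicts such as {"lane": 2, "speed_mps": 27.5}."""
    per_lane = defaultdict(list)
    for d in detections:
        per_lane[d["lane"]].append(d["speed_mps"])
    all_speeds = [s for speeds in per_lane.values() for s in speeds]
    return {
        "average_speed_mps": mean(all_speeds),
        "average_speed_per_lane_mps": {lane: mean(v) for lane, v in per_lane.items()},
    }
```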



FIGS. 8A and 8B provide a flowchart of an example process for detecting conditions for vehicle driving, in accordance with some embodiments. The method 800 is performed at a computer system (e.g., computer system 600) associated with a fixed installation (e.g., installation 130) that includes a plurality of sensors (e.g., sensors 660). In some embodiments, the computer system is physically co-located at the fixed installation and the processing is performed locally at the fixed installation. In some embodiments, the computer system is located remotely from the fixed installation. The computer system includes one or more processors (e.g., CPU(s) 602) and memory (e.g., memory 606). In some embodiments, the memory stores one or more programs or instructions configured for execution by the one or more processors. In some embodiments, the operations shown in FIGS. 1, 2, 3, 4, 5A, 5B, 6A, and 6B correspond to instructions stored in the memory or other non-transitory computer-readable storage medium. The computer-readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. In some embodiments, the instructions stored on the computer-readable storage medium include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted by one or more processors. Some operations in the method 800 may be combined and/or the order of some operations may be changed.


The computer system obtains (802), via a plurality of sensors (e.g., sensors 660) at a fixed installation (e.g., installation 130), weather information and road surface information for a segment of a road. In some embodiments, the road surface information includes static road conditions, such as the existence of an oily patch on the segment of the road, icy conditions on the segment of the road, or potholes, cracks, or speed bumps on the segment of the road. In some embodiments, the road surface information includes dynamic conditions such as dropped objects (e.g., obstacles) on the segment of the road.


In some embodiments, the fixed installation is (804) located at a section of the road that is in a vicinity of a tunnel or a merge zone.


In some embodiments, the plurality of sensors includes (806) one or more imaging sensors (e.g., cameras 666).


In some embodiments, the plurality of sensors includes (808) one or more anemometers (e.g., anemometer 676, or wind speed and direction sensor).


In some embodiments, the plurality of sensors includes (810) one or more of: a global positioning system (GPS) (e.g., GPS 662), a thermal sensor (e.g., thermal sensors 674), a light detection and ranging (LiDAR) scanner (e.g., LiDAR 664), one or more cameras (e.g., cameras 666), a radio detection and ranging (RADAR) sensor (e.g., Radar 668), an infrared sensor (e.g., infrared sensors 670), and one or more ultrasonic sensors (e.g., ultrasonic sensors 672).


In some embodiments, the computer system obtains (812) vehicle information (e.g., vehicle operational conditions) from one or more second vehicles that are traveling on the segment of the road.


In some embodiments, the vehicle information includes (814) activation of an electronic stability control (ESC) system of one of the one or more second vehicles. The ESC system monitors a vehicle's steering wheel input and ensures that the vehicle goes where the driver (or the autonomous driving system) directs it to go. In some instances, the ESC system activates upon detecting a probable loss of steering control. In some embodiments, activation of a vehicle's ESC system can indicate that the vehicle may be losing control due to road and weather conditions.


In some embodiments, the vehicle information includes (816) activation of an anti-lock braking system (ABS) of one of the one or more second vehicles. The ABS is a safety feature that helps a vehicle steer in emergencies by restoring traction to the tires and preventing the wheels from locking up while braking to avoid skidding. ABS technology automates the brake pumping process to enable a vehicle to steer to safety during an emergency situation. Because ABS tends to activate only in slippery conditions, or during emergency stops when a driver slams on the brakes and causes them to lock up, activation of a vehicle's ABS can indicate that the vehicle may be losing control due to road and weather conditions.
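One simple, non-limiting way to use such reports is sketched below: a road segment is flagged as likely slippery when the fraction of reporting second vehicles with an ESC or ABS activation exceeds a threshold. The report fields and the threshold are assumptions for illustration; they are not the claimed estimation method.

```python
# Illustration only: heuristic flag for a likely-slippery segment from ESC/ABS reports.
def segment_likely_slippery(reports, activation_rate_threshold=0.2):
    """reports: iterable of dicts such as {"esc_activated": True, "abs_activated": False}."""
    reports = list(reports)
    if not reports:
        return False
    activations = sum(1 for r in reports if r["esc_activated"] or r["abs_activated"])
    return activations / len(reports) >= activation_rate_threshold
```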


The computer system generates (818) a road and weather condition estimation based on the weather information and the road surface information.


In some embodiments, generating the road and weather condition estimation is (820) further based on the obtained vehicle information.


In some embodiments, the road and weather condition estimation includes (822) an estimated road friction coefficient for the segment of the road. For example, the road friction coefficient can affect a vehicle's longitudinal and lateral motions.
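As a worked illustration of that effect on longitudinal motion, the idealized minimum stopping distance is d = v² / (2·μ·g), so a lower estimated friction coefficient implies a substantially longer stopping distance at the same speed. The speeds and coefficients below are assumed example values only.

```python
# Illustration only: how the estimated friction coefficient changes stopping distance.
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_mps: float, friction_coefficient: float) -> float:
    return speed_mps ** 2 / (2.0 * friction_coefficient * G)

# At 25 m/s (about 90 km/h): roughly 32 m with mu ~ 1.0 (dry pavement)
# versus roughly 106 m with mu ~ 0.3 (icy or very slippery surface).
print(stopping_distance(25.0, 1.0), stopping_distance(25.0, 0.3))
```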


Referring to FIG. 7B, in some embodiments, the road and weather condition estimation includes (824) a pavement roughness level. For example, the pavement roughness level can affect motion of the vehicle in the vertical direction, and irregularities in the pavement surface can affect ride quality and cause vehicle delays.


In some embodiments, the computer system obtains (826) a stream of image data captured by a camera associated with the fixed installation. The computer system, based on the stream of image data, detects a vehicle vibration level (e.g., a vertical motion) of one or more second vehicles that are traveling on the segment of the road, and determines the pavement roughness level based on the vehicle vibration level. As noted above, the ESC system (also known as dynamic stability control (DSC) or electronic stability program (ESP)) is a computerized system that helps prevent a vehicle from skidding and the driver from losing control. The ESC system activates automatically when a car starts and works by monitoring how well the vehicle responds to steering input. If the car begins to veer off course, the ESC system can adjust the car speed, apply brakes selectively to one or more wheels, and modulate engine power. In some embodiments, excessive vibration can also be detected by frequency analysis of the vehicle's inertial measurement unit (IMU) sensor.
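The frequency analysis mentioned above can be sketched as computing the spectral content of vertical acceleration samples within an assumed road-roughness band. The sampling rate, frequency band, and use of a root-mean-square magnitude are assumptions for illustration only.

```python
# Illustration only: a vibration level from vertical IMU acceleration samples.
import numpy as np

def vibration_level(vertical_accel: np.ndarray, sample_rate_hz: float = 100.0) -> float:
    """RMS spectral magnitude in an assumed 5-30 Hz road-roughness band."""
    spectrum = np.abs(np.fft.rfft(vertical_accel - vertical_accel.mean()))
    freqs = np.fft.rfftfreq(len(vertical_accel), d=1.0 / sample_rate_hz)
    band = (freqs >= 5.0) & (freqs <= 30.0)
    return float(np.sqrt(np.mean(spectrum[band] ** 2)))
```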


In some embodiments, the road and weather condition estimation includes (828) identification of one or more potholes on the segment of the road and respective locations of the one or more potholes.


For example, in some embodiments, the computer system obtains (e.g., via the sensors 660) an image of the segment of the road captured by a camera associated with the fixed installation; and analyzes the image to detect the one or more potholes. In some embodiments, the computer system obtains motion information of one or more second vehicles that are traveling on the segment of the road and detects the one or more potholes based on the motion of the one or more second vehicles.


In some embodiments, the road and weather condition estimation includes (830) an estimated wind speed and an estimated wind direction. For example, high winds (e.g., at speeds above 25 mph or 30 mph) can cause a vehicle to lift slightly, which reduces the necessary friction between the tires and the road surface. In some instances, high wind speeds make it harder to steer and handle a vehicle.


The computer system transmits (832) the road and weather condition estimation to one or more first vehicles (e.g., via wireless communication, such as 5G communication) in a vicinity of the fixed installation such that the road and weather condition estimation is configured to be used by the one or more first vehicles to at least partially autonomously drive in a first trajectory in accordance with the road and weather condition estimation. In some embodiments, the computer system transmits the road and weather condition estimation to the one or more first vehicles via 5G communication, which enables large bandwidth, low latency information sharing between road vehicles and infrastructure.


In some embodiments, the road and weather condition estimation is transmitted to the vehicles in a lightweight format (e.g., text only, no images). For example, the wind speed can be transmitted as a numeric value, and the wind direction can be transmitted as one of eight compass directions, such as "N" for north, "W" for west, or "SE" for southeast. Sharing the road and weather condition estimation with vehicles in the vicinity of the fixed installation enables the vehicles to make safer and more efficient motion planning decisions.
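One possible lightweight encoding is sketched below as a short key-value text string; the field names and format are assumptions introduced for illustration, and any compact text format could serve the same purpose.

```python
# Illustration only: a compact, text-only message carrying the estimation.
def encode_estimation(friction: float, roughness: int, wind_speed_mps: float, wind_dir: str) -> str:
    return f"mu={friction:.2f};rough={roughness};wind={wind_speed_mps:.1f}{wind_dir}"

# Example output: "mu=0.30;rough=4;wind=12.5NW" -- a few dozen bytes, no images.
print(encode_estimation(0.30, 4, 12.5, "NW"))
```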


In some embodiments, the computer system transmits (834) the pavement roughness level to the one or more first vehicles. The one or more first vehicles are configured to receive the estimated pavement roughness level and operate an electronic stability control (ESC) system based at least partially on the pavement roughness level.


In some embodiments, the computer system causes (836) the one or more first vehicles to at least partially autonomously drive in the first trajectory by adjusting a respective steering model of the one or more first vehicles.


Turning to some example embodiments:

    • (A1) In accordance with some embodiments, a method for detecting conditions for vehicle driving is performed at a computer system associated with a fixed installation that includes a plurality of sensors. The computer system includes one or more processors and memory. The method includes (i) obtaining, via the plurality of sensors, weather information and road surface information for a segment of a road; (ii) generating a road and weather condition estimation based on the weather information and the road surface information; and (iii) transmitting the road and weather condition estimation to one or more first vehicles in a vicinity of the fixed installation such that the road and weather condition estimation is configured to be used by the one or more first vehicles to at least partially autonomously drive in a first trajectory in accordance with the road and weather condition estimation.
    • (A2) In some embodiments of A1, the method includes obtaining vehicle information from one or more second vehicles that are traveling on the segment of the road. The generating the road and weather condition estimation is further based on the obtained vehicle information.
    • (A3) In some embodiments of A2, the vehicle information includes activation of an electronic stability control (ESC) system of one of the one or more second vehicles.
    • (A4) In some embodiments of A2 or A3, the vehicle information includes activation of an anti-lock braking system (ABS) of one of the one or more second vehicles.
    • (A5) In some embodiments of any of A1-A4, the road and weather condition estimation includes an estimated road friction coefficient for the segment of the road.
    • (A6) In some embodiments of any of A1-A5, the road and weather condition estimation includes a pavement roughness level for the segment of the road.
    • (A7) In some embodiments of A6, the method further includes: (i) obtaining a stream of image data captured by a camera associated with the fixed installation; and (ii) based on the stream of image data: (a) detecting a vehicle vibration level of one or more second vehicles that are traveling on the segment of the road; and (b) determining the pavement roughness level based on the vehicle vibration level. The one or more first vehicles are configured to receive the pavement roughness level and operate an electronic stability control (ESC) system based at least partially on the pavement roughness level.
    • (A8) In some embodiments of any of A1-A7, the road and weather condition estimation includes identification of one or more potholes on the segment of the road and respective locations of the one or more potholes.
    • (A9) In some embodiments of any of A1-A8, the road and weather condition estimation includes an estimated wind speed and an estimated wind direction.
    • (A10) In some embodiments of any of A1-A9, the fixed installation is located at a section of the road that is in a vicinity of a tunnel or a merge zone.
    • (A11) In some embodiments of any of A1-A10, the plurality of sensors includes one or more imaging sensors.
    • (A12) In some embodiments of any of A1-A11, the plurality of sensors includes one or more anemometers.
    • (A13) In some embodiments of any of A1-A12, the plurality of sensors includes one or more of: a global positioning system (GPS), a thermal sensor, a light detection and ranging (LiDAR) scanner, one or more cameras, a radio detection and ranging (RADAR) sensor, an infrared sensor, and one or more ultrasonic sensors.
    • (A14) In some embodiments of any of A1-A13, the one or more first vehicles are configured to at least partially autonomously drive in the first trajectory by adjusting a respective steering model of the one or more first vehicles.
    • (B1) In accordance with some embodiments, a computer system is associated with a fixed installation having a plurality of sensors. The computer system comprises one or more processors and memory coupled to the one or more processors. The memory stores instructions that, when executed by the one or more processors, cause the computer system to perform the method of any of A1-A14.
    • (C1) In accordance with some embodiments, a non-transitory computer-readable storage medium stores instructions that, when executed by one or more processors of a computer system that is associated with a fixed installation having a plurality of sensors, cause the computer system to perform the method of any of A1-A14.


As used herein, the term “plurality” denotes two or more. For example, a plurality of components indicates two or more components. The term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.


As used herein, the phrase “based on” does not mean “based only on,” unless expressly specified otherwise. In other words, the phrase “based on” describes both “based only on” and “based at least on.”


As used herein, the term “exemplary” means “serving as an example, instance, or illustration,” and does not necessarily indicate any preference or superiority of the example over any other configurations or implementations.


As used herein, the term “and/or” encompasses any combination of listed elements. For example, “A, B, and/or C” includes the following sets of elements: A only, B only, C only, A and B without C, A and C without B, B and C without A, and a combination of all three elements, A, B, and C.


The terminology used in the description of the invention herein is for the purpose of describing particular implementations only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.


The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various implementations with various modifications as are suited to the particular use contemplated.

Claims
  • 1. A method for detecting conditions for vehicle driving, comprising:
    at a computer system associated with a fixed installation that includes a plurality of sensors elevated directly above a road surface of a road, the plurality of sensors including a camera and the computer system having one or more processors and memory:
      obtaining, via the plurality of sensors, weather information and road surface information for a segment of the road;
      generating a road and weather condition estimation based on the weather information and the road surface information; and
      transmitting the road and weather condition estimation to one or more first vehicles in a vicinity of the fixed installation, including causing the road and weather condition estimation to be used by the one or more first vehicles to at least partially autonomously drive along a first trajectory in accordance with the road and weather condition estimation by adjusting a respective steering model of the one or more first vehicles.
  • 2. The method of claim 1, further comprising:
    obtaining vehicle information from one or more second vehicles that are traveling on the segment of the road,
    wherein generating the road and weather condition estimation is further based on the obtained vehicle information.
  • 3. The method of claim 2, wherein the vehicle information includes activation of an electronic stability control (ESC) system of one of the one or more second vehicles.
  • 4. The method of claim 2, wherein the vehicle information includes activation of an anti-lock braking system (ABS) of one of the one or more second vehicles.
  • 5. The method of claim 1, wherein the road and weather condition estimation includes an estimated road friction coefficient for the segment of the road.
  • 6. The method of claim 1, wherein the road and weather condition estimation includes a pavement roughness level.
  • 7. The method of claim 6, further comprising:
    obtaining a stream of image data captured by the camera at the fixed installation; and
    based on the stream of image data:
      detecting a vehicle vibration level of one or more second vehicles that are traveling on the segment of the road; and
      determining the pavement roughness level based on the vehicle vibration level;
    wherein the one or more first vehicles are configured to receive the pavement roughness level and operate an electronic stability control (ESC) system based at least partially on the pavement roughness level.
  • 8. The method of claim 1, wherein the road and weather condition estimation includes identification of one or more potholes on the segment of the road and respective locations of the one or more potholes.
  • 9. The method of claim 1, wherein the road and weather condition estimation includes an estimated wind speed and an estimated wind direction.
  • 10. The method of claim 1, wherein the fixed installation is located at a section of the road that is in a vicinity of a tunnel or a merge zone.
  • 11. (canceled)
  • 12. The method of claim 1, wherein the plurality of sensors includes one or more anemometers.
  • 13. The method of claim 1, wherein the plurality of sensors includes one or more of: a global positioning system (GPS), a thermal sensor, a light detection and ranging (LiDAR) scanner, one or more cameras, a radio detection and ranging (RADAR) sensor, an infrared sensor, and one or more ultrasonic sensors.
  • 14. (canceled)
  • 15. A computer system associated with a fixed installation having a plurality of sensors elevated directly above a road surface of a road, the plurality of sensors including a camera, the computer system comprising:
    one or more processors; and
    memory coupled to the one or more processors, the memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
      obtaining, via the plurality of sensors, weather information and road surface information for a segment of the road;
      generating a road and weather condition estimation based on the weather information and the road surface information; and
      transmitting the road and weather condition estimation to one or more first vehicles in a vicinity of the fixed installation, including causing the road and weather condition estimation to be used by the one or more first vehicles to at least partially autonomously drive in a first trajectory in accordance with the road and weather condition estimation by adjusting a respective steering model of the one or more first vehicles.
  • 16. The computer system of claim 15, the one or more programs further including instructions for:
    obtaining vehicle information from one or more second vehicles that are traveling on the segment of the road,
    wherein generating the road and weather condition estimation is further based on the obtained vehicle information.
  • 17. The computer system of claim 15, wherein the road and weather condition estimation includes a pavement roughness level.
  • 18. The computer system of claim 17, the one or more programs further including instructions for:
    obtaining a stream of image data captured by the camera at the fixed installation; and
    based on the stream of image data:
      detecting a vehicle vibration level of one or more second vehicles that are traveling on the segment of the road; and
      determining the pavement roughness level based on the vehicle vibration level;
    wherein the one or more first vehicles are configured to receive the pavement roughness level and operate an electronic stability control (ESC) system based at least partially on the pavement roughness level.
  • 19. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of a computer system that is associated with a fixed installation having a plurality of sensors elevated directly above a road surface of a road, the plurality of sensors including a camera, the one or more programs comprising instructions for:
    obtaining, via the plurality of sensors, weather information and road surface information for a segment of the road;
    generating a road and weather condition estimation based on the weather information and the road surface information; and
    transmitting the road and weather condition estimation to one or more first vehicles in a vicinity of the fixed installation, wherein the road and weather condition estimation is caused to be used by the one or more first vehicles to at least partially autonomously drive in a first trajectory in accordance with the road and weather condition estimation by adjusting a respective steering model of the one or more first vehicles.
  • 20. (canceled)
RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application No. 63/544,425, filed Oct. 16, 2023, titled “Motion Controlling for Autonomous Vehicles” and U.S. Provisional Application No. 63/636,090, filed Apr. 18, 2024, titled “Centralized Prediction and Planning Using V2X for Lane Platooning and Intersection Vehicle Behavior Optimizations and Lane Change Decision-Making by Combining Infrastructure and Vehicle Intelligence,” each of which is hereby incorporated by reference herein in its entirety. This application is related to the following applications, all of which are incorporated by reference herein in their entireties: U.S. patent application Ser. No. ______ (Attorney Docket Number 132692-5031-US), filed ______, titled “Automatic Event Capturing for Autonomous Vehicle Driving”; and U.S. patent application Ser. No. ______ (Attorney Docket Number 132692-5032-US), filed ______, titled “Motion Planning for Autonomous Vehicle Driving Using Vehicle-to-Infrastructure Communication.”

Provisional Applications (2)
Number Date Country
63544425 Oct 2023 US
63636090 Apr 2024 US