The present disclosure relates generally to vehicle navigation systems with road mapping capabilities. More specifically, aspects of this disclosure relate to intelligent navigation systems and control logic for estimating roadway segment layout.
Current production motor vehicles, such as the modern-day automobile, are originally equipped with or retrofit to include a network of onboard electronic devices that provide automated driving capabilities that help to minimize driver effort. In automotive applications, for example, the most recognizable type of automated driving feature is the cruise control system, which allows a vehicle operator to set a particular vehicle speed and have the onboard vehicle computer system maintain that speed without the driver operating the accelerator or brake pedals. Next-generation Adaptive Cruise Control (ACC) is a computer-automated driving feature that regulates vehicle speed while concomitantly managing fore and aft spacing between the host vehicle and leading/trailing vehicles. Another type of automated driving feature is the Collision Avoidance System (CAS), which detects imminent collision conditions and provides a warning to the driver while also taking preventative action autonomously, e.g., by steering or braking without driver input. Intelligent Parking Assist Systems (IPAS), Lane Monitoring and Automated Steering (“Auto Steer”) Systems, and other Advanced Driver Assistance Systems (ADAS) and autonomous driving features are also available on many modern-day automobiles.
As vehicle processing, communication, and sensing capabilities continue to improve, manufacturers will persist in offering more system-automated driving capabilities with the aspiration of eventually offering fully autonomous vehicles competent to operate among heterogeneous vehicle types in both urban and rural scenarios. Original equipment manufacturers (OEM) are moving towards vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) “talking” cars with higher-level driving automation that employ autonomous systems to enable vehicle routing with steering, lane changing, scenario planning, etc. Automated route generation systems utilize vehicle state and dynamics sensors, map and road condition data, and path prediction algorithms to provide path generation with automated lane center and lane change forecasting. Computer-assisted rerouting techniques offer predicted alternative travel routes that may be updated, for example, based on real-time and virtual vehicle data.
Many automobiles are now equipped with an onboard vehicle navigation system that utilizes a global positioning system (GPS) transceiver in cooperation with navigation software and a mapping database to obtain roadway layout, traffic and speed limit information associated with the vehicle's current location. ADAS and autonomous driving systems are often able to adapt certain automated driving maneuvers based on roadway information obtained by the in-vehicle navigation system. Vehicular ad-hoc-network-based ADAS, for example, employ GPS and mapping data in conjunction with multi-hop geocast V2V and V2I data exchanges to facilitate automated vehicle maneuvering to and through roadway intersections. However, roadway information stored in the mapping database may become obsolete and may not provide comprehensive information beyond roadway name, alignment, traffic, and speed limit. Additionally, employing a fleet of survey vehicles to update database-stored roadway information is extremely costly and time consuming.
Disclosed herein are intelligent vehicle navigation systems and attendant control logic for extracting multi-lane separation and trajectory information of roadway segments, methods for manufacturing and methods for operating such systems, and motor vehicles equipped with intelligent navigation systems having lane-level data assessment capabilities. By way of example, there are presented novel systems and methods for provisioning multi-lane separation and virtual trajectory extraction at road intersections by mining GPS-generated vehicle dynamics data traces. A representative method uses road-level navigation maps, such as an OPENSTREETMAP® (OSM) database, as a baseline source to derive an initial estimate of lane-level intersection topology. The method may generate lane-specific turning window estimates based on aggregated yaw rate and heading data, and employ a clustering algorithm to combine start points, end points, and perpendicular distances to derive a number of drivable lanes for each segment of a particular intersection. Lane-specific turning rules may be extracted by examining a statistical distribution of lane-oriented turning behaviors of third-party vehicles traversing a given intersection.
Disclosed techniques help to derive lane-level details for roadway intersections and other road segments in navigation map databases, which improves the integrity and functionality of automated and assisted vehicle navigation. Road-level data is generally limited to a number of road segments meeting at a given intersection (3-way intersection, 4-way intersection, 5-way intersection, etc.), a name for each road segment, a speed limit for each road segment, and basic lane alignment information. Lane-level data for a given road segment, on the other hand, may include: a total number of lanes (single-lane road, two-lane, two-way road, two-way road with center turn lane, etc.); a number of left-hand turn lanes (if any), a number of right-hand turn lanes (if any), and a number of straightaway lanes (if any); turning rules for each lane (e.g., dedicated right-turn only, dedicated left-turn only, straight and left-turn lane, etc.); virtual trajectory data (e.g., making turn onto street with option of multiple lanes), etc. Disclosed statistical techniques enable accurate, real-time lane-level data retrieval using baseline map database information and contributing “crowd-sourced” vehicles as sensors. Additionally, aggregating data from a large, open group of participating vehicles helps to eliminate the need for dedicated survey vehicles and, thus, significantly reduces the time and costs associated with updating database-stored roadway information. In addition to provisioning less costly, more timely updates, disclosed systems and methods help to improve ADAS and autonomous driving functionality.
Aspects of this disclosure are directed to probabilistic modeling techniques and computer-executable algorithms for estimating one or more distinct parameters of an observed dynamic driving environment. For instance, methods are presented for modulating a controller-regulated operation of a motor vehicle based on road segment trajectory and lane separation data derived through statistical analysis of vehicle dynamics data traces. A representative method for controlling operation of a motor vehicle includes, in any order and in any combination with any of the above and below disclosed options and features: determining, e.g., via a resident vehicle controller of the motor vehicle through cooperative operation with a GPS transceiver, cellular data chip, etc., a location of the motor vehicle; conducting, e.g., via the resident vehicle controller through cooperative operation with a resident vehicle navigation system or a remote third-party navigation data provider, a geospatial query to identify a designated road segment corresponding to the vehicle's location; and, receiving, e.g., by the resident vehicle controller from a resident or remote memory-stored map database, road-level data associated with the designated road segment.
Continuing with the above example, the method may further comprise: determining, e.g., by a high-speed, server-class computer of a host cloud computing platform, a turning angle and centerline for the designated road segment based on the road-level data; receiving, e.g., via the remote server computer, connected vehicle data indicative of vehicle locations and dynamics for multiple motor vehicles travelling on the designated road segment; determining, from this connected vehicle data, trajectory data indicative of respective start points, end points, and centerline offset distances for the multiple motor vehicles; identifying, e.g., via the remote server computer given the turning angle and centerline, a number of driving lanes for the designated road segment by processing the trajectory data with a clustering algorithm; extracting a respective virtual trajectory for each driving lane; and, transmitting some or all the foregoing lane-level data to the motor vehicle. The resident vehicle controller responsively transmits one or more command signals to a resident vehicle subsystem to execute a control operation based on one or more of the extracted virtual trajectories for one or more of the driving lanes of the designated road segment.
Other aspects of the present disclosure are directed to intelligent vehicle navigation systems for deriving lane-level roadway information through statistical analysis of crowd-sensed data aggregated from multiple participatory motor vehicles. As used herein, the term “motor vehicle” may include any relevant vehicle platform, such as passenger vehicles (internal combustion engine, hybrid, full electric, fuel cell, etc.), commercial vehicles, industrial vehicles, tracked vehicles, off-road and all-terrain vehicles (ATV), motorcycles, etc. In addition, the terms “assisted” and “automated” and “autonomous” may be used with respect to any relevant vehicle platform that may be classified as a Society of Automotive Engineers (SAE) Level 2, 3, 4 or 5 vehicle. SAE Level 0, for example, is generally typified as “unassisted” driving that allows for vehicle-generated warnings with momentary intervention, but otherwise relies solely on human control. By comparison, SAE Level 3 allows for unassisted, partially assisted, and fully autonomous driving with sufficient vehicle automation for full vehicle control (steering, speed, acceleration/deceleration, etc.), while obliging driver intervention within a calibrated timeframe. At the upper end of the spectrum is Level 5 automation that altogether eliminates human intervention (e.g., no steering wheel, gas pedal, or shift knob).
In an example, an intelligent vehicle navigation system includes a remote system server computer operable to communicate with multiple motor vehicles. Each motor vehicle includes a vehicle body, a vehicle powertrain attached to and operable for propelling the vehicle body, and a vehicle navigation system with a location tracking device and a graphical human machine interface (HMI) attached to the vehicle body. A resident vehicle controller is attached to the vehicle body and operatively connected to the vehicle powertrain and navigation systems. The resident vehicle controller is programmed to execute memory-stored instructions to: determine, via the location tracking device of the vehicle navigation system, the motor vehicle's location; determine, via the vehicle navigation system, a designated road segment corresponding to the vehicle's location; and receive, via the vehicle navigation system from a memory-stored map database, road-level data associated with the designated road segment.
Continuing with the above example, the remote system server computer is programmed to execute memory-stored instructions to: determine, from the road-level data, a turning angle and centerline for the designated road segment; receive connected vehicle data indicative of vehicle locations and dynamics for multiple motor vehicles travelling on the designated road segment; determine, from the connected vehicle data, trajectory data indicative of respective start points, end points, and centerline offset distances for the multiple motor vehicles; identify a number of driving lanes for the designated road segment by processing the trajectory data with a clustering algorithm given the road segment's turning angle and centerline; and, extract a respective virtual trajectory for each driving lane. The resident vehicle controller is operable to receive one or more of the extracted virtual trajectories from the remote system server computer, and responsively transmit one or more command signals to a resident vehicle subsystem to execute a control operation based on one or more of the extracted virtual trajectories.
For any of the disclosed systems, methods, and vehicles, determining a turning angle for a designated road segment may include determining, from the corresponding road-level data, an entry (IN) angle δIN and an exit (OUT) angle δOUT, and calculating the turning angle as θT, where θT=δOUT−δIN. In this instance, determining a centerline for a given road segment may include determining intersection center point coordinates x0, y0 from the road-level data, and determining entry and exit angles δIN and δOUT, respectively, from the road-level data. An entry centerline LIN is calculated as (yIN−y0)=δIN(xIN−x0), whereas an exit centerline LOUT is calculated as (yOUT−y0)=δOUT(xOUT−x0).
For any of the disclosed systems, methods, and vehicles, extracting a respective virtual trajectory for a corresponding driving lane may include estimating a respective start heading H1, a respective end heading H2, and a respective turning angle θ1 for the corresponding driving lane. Estimating a respective turning angle may include estimating the turning angle as θ1, where θ1=∫Y dt, Y is the vehicle's yaw rate, and the integral is taken with respect to time. In this instance, each start point may be identified as a first location whereat the vehicle's yaw rate starts to increase/decrease away from a calibrated threshold estimate; each end point, on the other hand, may be identified as a second location whereat the vehicle's yaw rate increases/decreases back to this calibrated threshold estimate.
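The yaw-rate integration and threshold-based start/end point detection described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation: discrete yaw-rate samples at a fixed time step, trapezoidal integration, and an illustrative calibrated threshold.

```python
# Sketch only: estimate a turning angle theta_1 = integral of yaw rate over
# time, and detect start/end points where the yaw rate departs from and
# returns to a calibrated threshold. Sample data and threshold are illustrative.

def turning_angle(yaw_rates, dt):
    """Approximate theta_1 by trapezoidal integration of yaw-rate samples."""
    total = 0.0
    for a, b in zip(yaw_rates, yaw_rates[1:]):
        total += 0.5 * (a + b) * dt
    return total

def detect_start_end(yaw_rates, threshold):
    """Return (start_idx, end_idx): the first sample whose yaw-rate magnitude
    rises above the calibrated threshold, and the first later sample where it
    falls back to or below that threshold."""
    start = end = None
    for i, y in enumerate(yaw_rates):
        if start is None and abs(y) > threshold:
            start = i
        elif start is not None and abs(y) <= threshold:
            end = i
            break
    return start, end
```

For a synthetic left-turn trace sampled once per second, `turning_angle([0, 0, 10, 20, 20, 10, 0, 0], 1.0)` accumulates roughly the turn's total heading change, and `detect_start_end` brackets the maneuver at the threshold crossings.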
For any of the disclosed systems, methods, and vehicles, processing the trajectory data with the clustering algorithm includes identifying the number of driving lanes as an estimated number of clusters K, where K=Max(Diffpd)/Wlane, and where Diffpd is a difference among start points' centerline offset distances or end points' centerline offset distances, and Wlane is a standardized lane width corresponding to the designated road segment. As yet another option, determining a centerline offset distance for a participating vehicle may include calculating a respective perpendicular distance PD from the centerline as (|aX+bY+c|)/√(a²+b²), where aX+bY+c, which may be set equal to zero (0), is a linear equation representing the centerline, and X and Y are the Cartesian coordinates of a start point or an end point.
For any of the disclosed systems, methods, and vehicles, identifying the designated road segment corresponding to the vehicle's location may include determining a (square or rectangular) bounding box that delineates a set of geographical boundaries surrounding the vehicle's location. In some applications, the number of driving lanes for a designated road segment may include a single lane or multiple lanes (e.g., first, second, third, . . . N driving lanes); concomitantly, a single virtual trajectory or multiple virtual trajectories may be extracted for a single lane or a single virtual trajectory or multiple virtual trajectories may be extracted for each of multiple lanes. As an example, a first virtual trajectory may be extracted for a first driving lane and a second virtual trajectory, distinct from the first virtual trajectory, may be extracted for a second driving lane. Optionally, third and fourth virtual trajectories, distinct from the first and second virtual trajectories, may be extracted for a third driving lane.
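The bounding-box form of the geospatial query noted above can be sketched as follows. This is a hypothetical illustration: the rectangular box half-width in degrees is an assumption, and a production system would account for latitude-dependent longitude scaling and query an indexed map database.

```python
# Illustrative sketch: a rectangular bounding box around the vehicle's GPS
# fix, used to test whether mapped features fall within the geographic
# boundaries surrounding the vehicle's location. Half-width is an assumption.

def bounding_box(lat, lon, half_width_deg=0.001):
    """Return (min_lat, min_lon, max_lat, max_lon) delineating a set of
    geographical boundaries surrounding the vehicle's location."""
    return (lat - half_width_deg, lon - half_width_deg,
            lat + half_width_deg, lon + half_width_deg)

def in_box(point, box):
    """True if a (lat, lon) point lies inside the bounding box."""
    lat, lon = point
    min_lat, min_lon, max_lat, max_lon = box
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
```

A designated road segment would then be identified by testing its mapped node coordinates against the box.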
For any of the disclosed systems, methods, and vehicles, the resident vehicle subsystem may include an ADAS intersection assistance system or other suitable vehicle steering and braking control system. In this instance, the control operation may include executing an automated steering and/or braking maneuver adapted by the ADAS intersection assistance module based on at least one of the extracted virtual trajectories. In addition, or alternatively, the resident vehicle subsystem may include a vehicle navigation system with an electronic display device. In this example, the control operation may include saving lane-level data, including the number of driving lanes and the extracted virtual trajectories for the driving lanes, in a memory-stored map database. An indication of one or more of the extracted virtual trajectories may be displayed on the vehicle's electronic display device.
The above summary is not intended to represent every embodiment or every aspect of the present disclosure. Rather, the foregoing summary merely provides an exemplification of some of the novel concepts and features set forth herein. The above features and advantages, and other features and attendant advantages of this disclosure, will be readily apparent from the following detailed description of illustrated examples and representative modes for carrying out the present disclosure when taken in connection with the accompanying drawings and the appended claims. Moreover, this disclosure expressly includes any and all combinations and subcombinations of the elements and features presented above and below.
The present disclosure is amenable to various modifications and alternative forms, and some representative embodiments are shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that the novel aspects of this disclosure are not limited to the particular forms illustrated in the above-enumerated drawings. Rather, the disclosure is to cover all modifications, equivalents, combinations, subcombinations, permutations, groupings, and alternatives falling within the scope of this disclosure as encompassed by the appended claims.
This disclosure is susceptible of embodiment in many different forms. There are shown in the drawings and will herein be described in detail representative embodiments of the disclosure with the understanding that these representative examples are provided as an exemplification of the disclosed principles, not limitations of the broad aspects of the disclosure. To that extent, elements and limitations that are described, for example, in the Abstract, Introduction, Summary, and Detailed Description sections, but not explicitly set forth in the claims, should not be incorporated into the claims, singly or collectively, by implication, inference or otherwise.
For purposes of the present detailed description, unless specifically disclaimed: the singular includes the plural and vice versa; the words “and” and “or” shall be both conjunctive and disjunctive; the words “any” and “all” shall both mean “any and all”; and the words “including,” “containing,” “comprising,” “having,” and the like, shall each mean “including without limitation.” Moreover, words of approximation, such as “about,” “almost,” “substantially,” “approximately,” and the like, may be used herein in the sense of “at, near, or nearly at,” or “within 0-5% of,” or “within acceptable manufacturing tolerances,” or any logical combination thereof, for example. Lastly, directional adjectives and adverbs, such as fore, aft, inboard, outboard, starboard, port, vertical, horizontal, upward, downward, front, back, left, right, etc., may be with respect to a motor vehicle, such as a forward driving direction of a motor vehicle when the vehicle is operatively oriented on a normal driving surface.
Referring now to the drawings, wherein like reference numbers refer to like features throughout the several views, there is shown in
The representative vehicle 10 of
Communicatively coupled to the telematics unit 14 is a network connection interface 34, suitable examples of which include twisted pair/fiber optic Ethernet switch, internal/external parallel/serial communication bus, a local area network (LAN) interface, a controller area network (CAN), a media-oriented system transfer (MOST), a local interconnection network (LIN) interface, and the like. Other appropriate communication interfaces may include those that conform with ISO, SAE, and IEEE standards and specifications. The network connection interface 34 enables the vehicle hardware 16 to send and receive signals with each other and with various systems and subsystems both within or “resident” to the vehicle body 12 and outside or “remote” from the vehicle body 12. This allows the vehicle 10 to perform various vehicle functions, such as controlling vehicle steering, governing operation of the vehicle's transmission, controlling engine throttle, engaging/disengaging the brake system, and other automated driving functions. For instance, telematics unit 14 receives and/or transmits data to/from an ADAS electronic control unit (ECU) 52, an engine control module (ECM) 54, a powertrain control module (PCM) 56, sensor interface module(s) 58, a brake system control module (BSCM) 60, and assorted other vehicle ECUs, such as a transmission control module (TCM), a climate control module (CCM), etc.
With continuing reference to
CPU 36 receives sensor data from one or more sensing devices that use, for example, photo detection, radar, laser, ultrasonic, optical, infrared, or other suitable technology for executing an automated driving operation. In accord with the illustrated example, the automobile 10 may be equipped with one or more digital cameras 62, one or more range sensors 64, one or more vehicle speed sensors 66, one or more vehicle dynamics sensors 68, and any requisite filtering, classification, fusion and analysis hardware and software for processing raw sensor data. Digital camera 62 may use a charge coupled device (CCD) sensor or other suitable optical sensor to generate images indicating a field-of-view of the vehicle 10, and may be configured for continuous image generation, e.g., at least about 35 images generated per second. By way of comparison, range sensor 64 may emit and detect reflected radio, electromagnetic, or light-based waves (e.g., radar, EM inductive, Light Detection and Ranging (LIDAR), etc.) to detect, for example, presence, geometric dimensions, and/or proximity of an object. Vehicle speed sensor 66 may take on various forms, including wheel speed sensors that measure wheel speeds, which are then used to determine real-time vehicle speed. In addition, the vehicle dynamics sensor 68 may be in the nature of a single-axis or a triple-axis accelerometer, an angular rate sensor, an inclinometer, etc., for detecting longitudinal and lateral acceleration, yaw, roll, and/or pitch rates, or other dynamics related parameter. Using data from the sensing devices 62, 64, 66, 68, the CPU 36 identifies objects within a detectable range of the vehicle 10, and determines attributes of the target object, such as size, relative position, angle of approach, relative speed, etc.
In general, disclosed systems, methods and devices help to derive lane-level roadway information, such as lane separation and estimated trajectories, for specified road segments using baseline road-level map data and by mining vehicle dynamics data traces. Roadway navigation maps, such as those made available through the OPENSTREETMAP® collaborative project, are used as a baseline source to derive an initial estimate of lane-level intersection topology. This baseline “road-level” information may include a road segment count indicative of a number of road segments meeting at a given intersection, the name/names of the intersecting road segments, and basic lane alignment information (i.e., plan-view geometry). From this road-level data, the system is able to estimate a centerline and one or more turning angles (if any) for a road segment under analysis. Lane-specific turning window estimates are generated based on aggregated yaw rate and heading data of participatory vehicles. A lane-specific turning window estimate may generally comprise a distinct start heading, a distinct end heading, and a distinct turning angle for a given turn from a specific lane. Vehicle dynamics data used for turning window estimation may be generated, at least in part, by a large volume of vehicles participating as “crowd-sourced” sensors, e.g., using GPS information and vehicle Controller Area Network (CAN) bus data.
From the turning window estimate and corresponding map-matched vehicle dynamics data, sets of lane-specific start and end points are derived by juxtaposing changes in vehicle yaw rate against a calibrated threshold estimation. A clustering algorithm is then used to assign start points, end points, etc., to corresponding groups based on perpendicular distances to road center lines. By estimating a total number of cluster groups, the system is able to derive an estimated number of drivable lanes for each segment of a particular intersection. Each cluster is then analyzed to generate a virtual trajectory for the corresponding turn from a specific lane, including entry (IN) and exit (OUT) centerlines and an estimated path with a corresponding turning angle. The foregoing techniques help to derive lane-level details for roadway intersections and other road segments in navigation map databases, which in turn helps to improve the integrity and functionality of automated and assisted vehicle navigation systems. Disclosed statistical techniques also help to enable accurate, real-time lane-level data retrieval using baseline map database information and contributing “crowd-sourced” vehicles as sensors. This, in turn, helps to eliminate the need for dedicated survey vehicles and, thus, significantly reduces the time and costs associated with updating database-stored roadway information.
With reference now to the flow chart of
Method 100 begins at process block 101 with processor-executable instructions for a programmable controller or control module or similarly suitable processor or server computer to begin a Trip Processing protocol 102 that is designed to aggregate vehicle location and dynamics data from participating “crowd-sourced” vehicles. The Trip Processing protocol 102, as well as the Intersection Matching protocol 104, Lane Separation protocol 106 and Virtual Trajectories Extraction protocol 108 described below, may be executed in real-time, continuously, systematically, sporadically and/or at regular intervals, for example, each 100 milliseconds, etc., during ongoing system operation. As yet another option, protocols 102, 104, 106 and/or 108 may initialize responsive to a prompt signal received from a backend or middleware computing node tasked with collecting, analyzing, sorting, storing and distributing roadway information. Process block 101 may sample connected vehicle data composed of vehicle location information (e.g., GPS-generated latitude, longitude, and elevation geodetic datums), and vehicle dynamics information (e.g., speed, heading, acceleration (x, y, z components), yaw rate, etc.).
Once a sufficient amount of data is collected at process block 101, the method 100 continues to process block 103 to perform a data pre-processing routine. This pre-processing routine may be implemented, for example, to minimize or otherwise eliminate duplicative records, outliers, and data errors by using distance estimation, heading angle estimation, probabilistic data filtering, data smoothing protocols, or other applicable techniques. Aggregated and processed data is stored in a map database 105 through collaborative operation between a database-management system (DBMS) and a high-speed, server-class database computer. Map database 105 may take on many forms, including a special-purpose 3D mapping server database that supplies map-related data in association with global coordinates, frequently describing positions in terms of a latitudinal position, a longitudinal position, and a height or elevation. Operating a GPS device in collaboration with a 3D mapping server database makes it possible not only to position a vehicle with respect to a cataloged road geometry, but also to place the vehicle in the context of road details, such as road name, speed limit, surface type, incline, bank gradient, etc.
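The shape of the pre-processing routine of block 103 can be sketched as follows. This is an assumption-laden illustration only: the disclosed routine may use probabilistic filtering and heading-angle estimation, whereas this sketch shows just duplicate removal and a simple moving-average smoother.

```python
# Illustrative pre-processing sketch (not the disclosed routine): drop
# consecutive duplicate records and smooth a noisy numeric trace with a
# centered moving average.

def dedupe(records):
    """Remove consecutive duplicate (timestamp, lat, lon) records."""
    out = []
    for r in records:
        if not out or r != out[-1]:
            out.append(r)
    return out

def smooth(values, window=3):
    """Centered moving average; endpoints average the available neighbors."""
    half = window // 2
    result = []
    for i in range(len(values)):
        chunk = values[max(0, i - half):i + half + 1]
        result.append(sum(chunk) / len(chunk))
    return result
```

In practice the smoother would be applied per signal (speed, heading, yaw rate) before the trace is written to the map database.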
With continuing reference to
Method 100 continues to decision block 113 to determine if the processed and cleaned data can be matched to an existing data set. In accord with the disclosed implementation, trip data—a set of data points following the order of a specific vehicle trip, including predecessor and successor points for each trip data point—is associated with one or more related data sets in an existing map database system. By matching a given vehicle trip to a roadway navigation map system, the method 100 is able to extract data sets that contain road-level information for further processing. If it is decided that a set of processed and cleaned data cannot be matched to an existing data set (block 113=NO), the method proceeds to process block 115 and discards that particular data set. At this juncture, the method 100 may circle back to block 101 and run in a continuous loop or may temporarily terminate. Upon determining that the processed and cleaned data from process blocks 109 and 111 can be linked to an existing data set (block 113=YES), the method 100 proceeds to the Intersection Matching, Lane Separation and Virtual Trajectories Extraction protocols 104, 106, 108.
Execution of the Lane Separation protocol 106 may begin at process block 117 with memory-stored, processor-executable instructions to extract from the Trip Processing protocol 102 trip data and attendant trajectory information for a given roadway segment under investigation. Retrieved trip and trajectory data may include, for example, in-vehicle, sensor-generated vehicle dynamics data procured by third-party motor vehicles and mapped to a specific intersection. Start Point (SP) and End Point (EP) detection for a specific driving maneuver is carried out at predefined process block 119 utilizing information afforded by the Trip Processing protocol 102 and Intersection Matching protocol 104. For at least some of the disclosed implementations, the method 100 accesses an OSM database 121 to “lookup” road-level data associated with a given intersection; from this data, an intersection center point is identified and a bounding box of any desired shape is generated to demarcate geographical boundaries of the intersection. Contemporaneous with identifying road-level data, geometric topology information of the intersection is retrieved from a road map database 123.
Using the information from the OSM database 121 and the road map database 123, the method 100 conducts an IN-OUT turning angle estimation at process block 125. In general, process block 125 provides instructions for identifying a road segment entry (IN) angle δIN and a road segment exit (OUT) angle δOUT relative to an intersection center point, which may be designated with coordinates (x0, y0). Spatial relationships within a map-based representation of a given road segment may be specified relative to a two-dimensional (2D) Cartesian coordinate plane. If the origin of the Cartesian space is set to coincide with the intersection center point (x0, y0), road-level centerlines for the given intersection can be represented as a linear function: y=ax+b, where x, y are variables for two independent dimensions, and b=0. In this example, a turning angle θT (e.g., clockwise degrees, North=0 degrees, South=180 degrees), an entry centerline LIN, and an exit centerline LOUT for the intersection may be calculated as:
θT=δOUT−δIN
LIN: (yIN−y0)=δIN(xIN−x0)
LOUT: (yOUT−y0)=δOUT(xOUT−x0)
with entry point coordinates xIN, yIN, and exit point coordinates xOUT, yOUT. An estimated IN/OUT turning angle is identified at process block 127 and estimated IN and OUT centerlines are identified at process block 129. Each center line may be represented by a set of GPS datum points.
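The turning-angle and centerline estimation of process block 125 can be sketched directly from the formulas above. Following the document's own convention, the entry/exit angles are treated as line slopes through the intersection center point; the numeric inputs in the usage note are illustrative assumptions.

```python
# Sketch of the IN-OUT turning-angle and centerline estimation, transcribing
# the formulas theta_T = delta_OUT - delta_IN and
# (y - y0) = delta * (x - x0) for L_IN and L_OUT.

def turning_angle(delta_in, delta_out):
    """theta_T = delta_OUT - delta_IN (e.g., clockwise degrees, North = 0)."""
    return delta_out - delta_in

def centerline(delta, x0, y0):
    """Return a function y(x) for a centerline through the intersection
    center point (x0, y0), with delta treated as the line's slope:
    (y - y0) = delta * (x - x0)."""
    return lambda x: y0 + delta * (x - x0)
```

For example, with an intersection center at (1, 1) and a slope of 2, the entry centerline `centerline(2.0, 1.0, 1.0)` evaluates to 5.0 at x = 3.0.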
Now that the Intersection Matching protocol 104 is complete, the SP, EP detection in predefined process block 119 of Lane Separation protocol 106 may be conducted to derive a respective start point and a respective end point for a specific turning maneuver at the given intersection. SP, EP detection may start with estimating a respective start heading H1, a respective end heading H2, and a respective turning angle θ1 for each driving lane of a given intersection. The method 100 utilizes stable start/end headings extracted from vehicle location and dynamics data of multiple participatory vehicles to estimate the lane start and end headings. For at least some implementations, the start heading H1 may be equal to the entry (approaching) angle δIN and the end heading H2 may be equal to the exit (departing) angle δOUT. In this instance, the turning angle θ1 may be estimated as θ1=∫Y dt, where Y is a yaw rate that is integrated over time t.
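The yaw-rate integration θ1=∫Y dt can be approximated numerically from discrete sensor samples; a minimal sketch using the trapezoidal rule (the integration scheme is an assumption, and the function name is hypothetical):

```python
def turning_angle_from_yaw(yaw_rates_dps, timestamps_s):
    """Estimate theta_1 = integral of yaw rate Y over time, via the
    trapezoidal rule.

    yaw_rates_dps: yaw-rate samples in degrees per second.
    timestamps_s:  matching sample times in seconds.
    """
    theta = 0.0
    for i in range(1, len(yaw_rates_dps)):
        dt = timestamps_s[i] - timestamps_s[i - 1]
        theta += 0.5 * (yaw_rates_dps[i] + yaw_rates_dps[i - 1]) * dt
    return theta
```

For instance, a constant yaw rate of 30 degrees per second sustained for three seconds integrates to a 90-degree turn.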
Continuing with the above discussion of method 200 of
With reference again to the Lane Separation protocol 106 of
D=(aX+bY+c)/√(a²+b²)
where X and Y are Cartesian coordinates for a start point or end point, a is an entry angle (e.g., a=δIN), b is an integer (e.g., b=−1), c=y0−δINx0, with an entry angle δIN and intersection center point coordinates x0, y0, and aX+bY+c is a linear equation representing the centerline, which may be set equal to zero (0).
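With the coefficients defined as above (a=δIN treated as the centerline slope, b=−1, c=y0−δINx0), the signed perpendicular offset of a start or end point from the centerline can be computed as follows. This is a sketch under those stated assumptions; the helper name is hypothetical:

```python
import math

def centerline_offset(X, Y, slope, x0, y0):
    """Signed perpendicular offset of point (X, Y) from the centerline
    through (x0, y0) with the given slope.

    The centerline is the line aX + bY + c = 0 with a = slope, b = -1,
    c = y0 - slope * x0, i.e. (Y - y0) = slope * (X - x0).
    """
    a, b = slope, -1.0
    c = y0 - slope * x0
    return (a * X + b * Y + c) / math.hypot(a, b)
```

The sign of the result distinguishes points on opposite sides of the centerline, which is what lets start/end points from different driving lanes separate into distinct offset bands.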
Continuing with the above discussion of method 300 of
K=Max(Diffpd)/Wlane.
where Diffpd is a difference among start point centerline offset distances or end point centerline offset distances, and Wlane is a standardized lane width corresponding to the designated road segment. K-means clustering is carried out at process block 307, with cluster centroids determined at process block 309. These two procedures attempt to classify a given data set through a derived number of clusters by defining k centroids, one for each cluster. These centroids may be placed in a calculated fashion; each data point in the data set is then associated with its nearest centroid. The algorithm works iteratively to assign each data point to one of k groups based on the features fixed a priori. Rather than defining groups before examining the data, clustering finds and analyzes the groups that have formed organically. At process block 311, a labeled SP, EP procedure is carried out to assign labels to the trajectory data, namely each SP/EP is assigned to a single cluster. Trip pairing is carried out at process block 313, and trip labels are assigned at process block 315.
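The cluster-count estimate K=Max(Diffpd)/Wlane and the subsequent one-dimensional k-means over centerline offsets can be sketched as below. This is a minimal illustration, not the disclosure's implementation: the 3.7 m lane width, the round-up to a whole cluster count, and the evenly spaced centroid initialization are all assumptions:

```python
import math

def estimate_k(offsets, lane_width=3.7):
    """K = (max spread of centerline offset distances) / standardized lane
    width, rounded up; 3.7 m is an assumed standard lane width."""
    spread = max(offsets) - min(offsets)
    return max(1, math.ceil(spread / lane_width))

def kmeans_1d(values, k, iters=50):
    """Minimal 1-D k-means over offset distances: returns (centroids, labels)."""
    lo, hi = min(values), max(values)
    # place k initial centroids evenly across the observed offset range
    centroids = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assignment step: associate each data point with its nearest centroid
        labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # update step: move each centroid to the mean of its cluster
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels
```

Each start/end point then carries a cluster label identifying the driving lane it belongs to, which feeds the trip-pairing and trip-labeling steps at process blocks 313 and 315.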
Returning again to
Aspects of this disclosure may be implemented, in some embodiments, through a computer-executable program of instructions, such as program modules, generally referred to as software applications or application programs executed by an onboard vehicle computer or a distributed network of resident and remote computing devices. Software may include, in non-limiting examples, routines, programs, objects, components, and data structures that perform particular tasks or implement particular data types. The software may form an interface to allow a resident vehicle controller or control module or other suitable integrated circuit device to react according to a source of input. The software may also cooperate with other code segments to initiate a variety of tasks in response to data received in conjunction with the source of the received data. The software may be stored on any of a variety of memory media, such as CD-ROM, magnetic disk, bubble memory, and semiconductor memory (e.g., various types of RAM or ROM).
Moreover, aspects of the present disclosure may be practiced with a variety of computer-system and computer-network architectures, including multiprocessor systems, microprocessor-based or programmable-consumer electronics, minicomputers, mainframe computers, master-slave, peer-to-peer, or parallel-computation frameworks, and the like. In addition, aspects of the present disclosure may be practiced in distributed-computing environments where tasks are performed by resident and remote-processing devices that are linked through a communications network. In a distributed-computing environment, program modules may be located in both onboard and off-board computer-storage media including memory storage devices. Aspects of the present disclosure may, therefore, be implemented in connection with various hardware, software, or a combination thereof, in a computer system or other processing system.
Any of the methods described herein may include machine-readable instructions for execution by: (a) a processor, (b) a controller, and/or (c) any other suitable processing device. Any algorithm, software, control logic, protocol, or method disclosed herein may be embodied in software stored on a tangible medium such as, for example, a flash memory, a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), or other memory devices. The entire algorithm, control logic, protocol, or method, and/or parts thereof, may alternatively be executed by a device other than a controller and/or embodied in firmware or dedicated hardware in an available manner (e.g., it may be implemented by an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable logic device (FPLD), discrete logic, etc.). Further, although specific algorithms are described with reference to flowcharts depicted herein, there are many other methods for implementing the example machine readable instructions that may alternatively be used.
Aspects of the present disclosure have been described in detail with reference to the illustrated embodiments; those skilled in the art will recognize, however, that many modifications may be made thereto without departing from the scope of the present disclosure. The present disclosure is not limited to the precise construction and compositions disclosed herein; any and all modifications, changes, and variations apparent from the foregoing descriptions are within the scope of the disclosure as defined by the appended claims. Moreover, the present concepts expressly include any and all combinations and subcombinations of the preceding elements and features.
Number | Name | Date | Kind |
---|---|---|---|
6356838 | Paul | Mar 2002 | B1 |
6697730 | Dickerson | Feb 2004 | B2 |
7266438 | Kellum et al. | Sep 2007 | B2 |
7589643 | Dagci et al. | Sep 2009 | B2 |
7739036 | Grimm et al. | Jun 2010 | B2 |
7840427 | O'Sullivan | Nov 2010 | B2 |
8050855 | Coy et al. | Nov 2011 | B2 |
8170739 | Lee | May 2012 | B2 |
8384532 | Szczerba et al. | Feb 2013 | B2 |
8428843 | Lee et al. | Apr 2013 | B2 |
8605011 | Seder et al. | Dec 2013 | B2 |
8612139 | Wang et al. | Dec 2013 | B2 |
8633979 | Szczerba et al. | Jan 2014 | B2 |
8692739 | Mathieu et al. | Apr 2014 | B2 |
8818708 | Mathieu et al. | Aug 2014 | B2 |
8849515 | Moshchuk et al. | Sep 2014 | B2 |
8996273 | Lee et al. | Mar 2015 | B2 |
9014915 | Chatterjee et al. | Apr 2015 | B2 |
9099006 | Mudalige et al. | Aug 2015 | B2 |
9229453 | Lee | Jan 2016 | B1 |
9283967 | Lee | Mar 2016 | B2 |
9443429 | Mathieu et al. | Sep 2016 | B2 |
9487212 | Adam et al. | Nov 2016 | B1 |
9868443 | Zeng et al. | Jan 2018 | B2 |
20090030885 | DePasquale et al. | Jan 2009 | A1 |
20100228415 | Paul | Sep 2010 | A1 |
20110059693 | O'Sullivan | Mar 2011 | A1 |
20110313880 | Paul et al. | Dec 2011 | A1 |
20120101713 | Moshchuk et al. | Apr 2012 | A1 |
20120239452 | Trivedi et al. | Sep 2012 | A1 |
20130032421 | Bonne et al. | Feb 2013 | A1 |
20130035821 | Bonne et al. | Feb 2013 | A1 |
20130054128 | Moshchuk et al. | Feb 2013 | A1 |
20130204676 | Hindi et al. | Aug 2013 | A1 |
20130219294 | Goldman-Shenhar et al. | Aug 2013 | A1 |
20140011522 | Lin et al. | Jan 2014 | A1 |
20150077270 | Rubin | Mar 2015 | A1 |
20150353082 | Lee et al. | Dec 2015 | A1 |
20150353085 | Lee | Dec 2015 | A1 |
20160102986 | Ma | Apr 2016 | A1 |
20160231124 | Nickolaou et al. | Aug 2016 | A1 |
20160260328 | Mishra | Sep 2016 | A1 |
20160320194 | Liu et al. | Nov 2016 | A1 |
20160320195 | Liu et al. | Nov 2016 | A1 |
20160320198 | Liu et al. | Nov 2016 | A1 |
20160321566 | Liu et al. | Nov 2016 | A1 |
20160321771 | Liu et al. | Nov 2016 | A1 |
20170021830 | Feldman et al. | Jan 2017 | A1 |
20170316684 | Jammoussi | Nov 2017 | A1 |
20180257660 | Ibrahim | Sep 2018 | A1 |
20180364700 | Liu | Dec 2018 | A1 |
20180374341 | Branson | Dec 2018 | A1 |
20190369626 | Lui | Dec 2019 | A1 |
20190378412 | Zhu | Dec 2019 | A1 |
Number | Date | Country |
---|---|---|
102014016567 | May 2016 | DE |
Number | Date | Country |
---|---|---|
20200064846 A1 | Feb 2020 | US |