The present disclosure relates to a system for associating a physical identity of a target vehicle that is detected by perception sensors within an ego vehicle to a virtual identity of the target vehicle that is received via wireless communication between the ego vehicle and the target vehicle.
In current systems, an ego vehicle using wireless vehicle to vehicle or vehicle to infrastructure communication channels receives information transmitted from a target vehicle that includes identification information about the target vehicle to allow the ego vehicle to identify the target vehicle. This information provides a virtual identity of the target vehicle. This allows the ego vehicle to locate the position of the target vehicle relative to the ego vehicle so the ego vehicle can take actions such as collaborative maneuvering and positioning and infrastructure-assisted coordination.
In addition, an ego vehicle will use perception sensors, such as lidar, radar and cameras, positioned within the ego vehicle to identify objects, such as target vehicles that are in proximity to the ego vehicle. This provides a physical identity of detected target vehicles. Often, the perception sensors of the ego vehicle detect multiple target vehicles. Current systems generally trust the virtual identity information received, without confirming that the virtual identity information is correlated to the correct physical identity information. In other words, current systems do not verify that information transmitted wirelessly corresponds to the correct one of multiple target vehicles physically identified by the ego vehicle. Thus, while current systems achieve their intended purpose, there is a need for a new and improved system and method for associating a physical identity of a target vehicle that is detected by perception sensors within an ego vehicle to a virtual identity of the target vehicle that is received via wireless communication between the ego vehicle and the target vehicle.
According to several aspects of the present disclosure, a method of associating a physical identity and a virtual identity of a target vehicle includes collecting, with a plurality of perception sensors, data related to a physical identity of the target vehicle and communicating the data related to the physical identity of the target vehicle, via a communication bus, to a data processor, collecting, with the data processor, via a wireless communication channel, data related to a virtual identity of the target vehicle, and associating, with the data processor, the physical identity of the target vehicle with the virtual identity of the target vehicle.
According to another aspect, the associating, with the data processor, the physical identity of the target vehicle with the virtual identity of the target vehicle further includes leveraging, with the data processor, a Bayesian Inference Model and estimating, with the data processor, a probability that the data related to the physical identity and the data related to the virtual identity are for the same target vehicle.
According to another aspect, the associating, with the data processor, the physical identity of the target vehicle with the virtual identity of the target vehicle further includes using the data related to the physical identity of the target vehicle to determine, with the data processor, a relative position of the target vehicle, and to estimate, with the data processor, a real-time status of the target vehicle.
According to another aspect, the data related to the physical identity of the target vehicle includes global satellite positioning coordinates, speed, acceleration, yaw and heading, and the data related to the virtual identity of the target vehicle includes global satellite positioning coordinates, speed, acceleration, yaw and heading.
According to another aspect, computer vision features created for each model of all vehicles are stored on a cloud-based vehicle profile database, and the data related to the virtual identity of the target vehicle includes model information transmitted by the target vehicle, the method including using model information received from the target vehicle and receiving, with the data processor, corresponding vehicle profile data from the cloud-based vehicle profile database.
According to another aspect, the model information transmitted by the target vehicle includes brand, model, year and color.
According to another aspect, the cloud-based vehicle profile database is a deep neural network.
According to another aspect, the data related to the virtual identity of the target vehicle includes data collected by perception sensors on the target vehicle related to the surroundings of the target vehicle.
According to another aspect, the data related to the virtual identity of the target vehicle includes observed lane lines, surrounding vehicles, vulnerable road users (VRUs), street signs, traffic lights and structures.
According to another aspect, the data related to the virtual identity of the target vehicle further includes computer vision features for the target vehicle that are stored on a cloud-based vehicle profile database.
According to several aspects of the present disclosure, a system for associating a physical identity and a virtual identity of a target vehicle includes a data processor, including a wireless communication module, positioned within an ego vehicle, and a plurality of perception sensors, positioned within the ego vehicle and adapted to collect data related to a physical identity of the target vehicle and to communicate the data related to the physical identity of the target vehicle to the data processor via a communication bus, the data processor adapted to receive, via a wireless communication channel, data related to a virtual identity of the target vehicle and to associate the physical identity of the target vehicle with the virtual identity of the target vehicle.
According to another aspect, when associating the physical identity of the target vehicle with the virtual identity of the target vehicle, the data processor is further adapted to leverage a Bayesian Inference Model and estimate a probability that the data related to the physical identity and the data related to the virtual identity are for the same target vehicle.
According to another aspect, when associating the physical identity of the target vehicle with the virtual identity of the target vehicle, the data processor is further adapted to use the data related to the physical identity of the target vehicle to determine a relative position of the target vehicle, and to estimate a real-time status of the target vehicle.
According to another aspect, the data related to the physical identity of the target vehicle includes global satellite positioning coordinates, speed, acceleration, yaw and heading, and the data related to the virtual identity of the target vehicle includes global satellite positioning coordinates, speed, acceleration, yaw and heading.
According to another aspect, the system further includes a cloud-based vehicle profile database that includes computer vision features created for each model of all vehicles, and the data related to the virtual identity of the target vehicle includes model information transmitted by the target vehicle, the data processor further adapted to use model information received from the target vehicle and to receive corresponding vehicle profile data from the cloud-based vehicle profile database.
According to another aspect, the model information transmitted by the target vehicle includes brand, model, year and color.
According to another aspect, the cloud-based vehicle profile database is a deep neural network.
According to another aspect, the data related to the virtual identity of the target vehicle includes data collected by perception sensors on the target vehicle related to the surroundings of the target vehicle, including observed lane lines, surrounding vehicles, vulnerable road users (VRUs), street signs, traffic lights and structures.
According to another aspect, the system further includes a cloud-based vehicle profile database that includes computer vision features created for each model of all vehicles, and the data related to the virtual identity of the target vehicle further includes computer vision features for the target vehicle.
Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
The figures are not necessarily to scale and some features may be exaggerated or minimized, such as to show details of particular components. In some instances, well-known components, systems, materials or methods have not been described in detail in order to avoid obscuring the present disclosure. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure.
The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. As used herein, the term module refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. Although the figures shown herein depict an example with certain arrangements of elements, additional intervening elements, devices, features, or components may be present in actual embodiments. It should also be understood that the figures are merely illustrative and may not be drawn to scale.
As used herein, the term “vehicle” is not limited to automobiles. While the present technology is described primarily herein in connection with automobiles, the technology is not limited to automobiles. The concepts can be used in a wide variety of applications, such as in connection with aircraft, marine craft, other vehicles, and consumer electronic components.
Referring to
The data processor 16 is a non-generalized, electronic control device having a preprogrammed digital computer or processor, memory or non-transitory computer readable medium used to store data such as control logic, software applications, instructions, computer code, data, lookup tables, etc., and a transceiver or input/output ports. Computer readable medium includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device. Computer code includes any type of program code, including source code, object code, and executable code.
The data processor 16 includes a wireless communication module 18 that is adapted to allow wireless communication between the ego vehicle 12 and other vehicles or other external sources. The data processor 16 is adapted to collect information from databases 22 via a wireless data communication network 20 over wireless communication channels such as a WLAN, 4G/LTE or 5G network, or the like. Such databases 22 can be communicated with directly via the internet, or may be cloud-based databases. Information that may be collected by the data processor 16 from such external sources includes, but is not limited to, road and highway databases maintained by the department of transportation, a global positioning system, the internet, other vehicles via V2V communication networks, traffic information sources, vehicle-based support systems such as OnStar, etc.
The wireless communication module 18 enables bi-directional communications between the data processor 16 of the ego vehicle 12 and other vehicles, mobile devices and infrastructure for the purpose of triggering important communications and events.
The system 10 further includes a plurality of perception sensors 24, positioned within the ego vehicle 12. The plurality of perception sensors 24 includes sensors adapted to collect data related to a physical identity of the target vehicle 14. Such sensors 24 include, but are not limited to, Radar, Lidar and cameras, that allow the ego vehicle to “see” nearby objects. The plurality of perception sensors 24 communicate the data related to the physical identity of the target vehicle 14 to the data processor 16 via a communication bus 26 within the ego vehicle 12.
The data processor 16 is further adapted to receive, via a wireless communication channel 20, data related to a virtual identity of the target vehicle 14 and to associate the physical identity of the target vehicle 14 with the virtual identity of the target vehicle 14. The target vehicle 14 includes a plurality of perception sensors 24′ located within the target vehicle 14 and a data processor 16′ that is equipped with a wireless communication module 18′. The plurality of perception sensors 24′ communicate with the data processor 16′ via a communication bus 26′ within the target vehicle 14.
The wireless communication module 18′ within the target vehicle 14 allows the target vehicle 14 to transmit data related to a virtual identity of the target vehicle 14 to the ego vehicle 12 via the wireless communication network 20.
Referring to
When associating the physical identity of the target vehicle 14 with the virtual identity of the target vehicle 14, the data processor 16 is further adapted to leverage a Bayesian Inference Model and estimate a probability that the data related to the physical identity and the data related to the virtual identity are for the same target vehicle 14. In other words, the data processor 16 uses a Bayesian Inference Model to match the data received from the target vehicle 14 to the physical observations of the ego vehicle 12.
When leveraging a Bayesian Inference Model, the data processor 16 builds a two-dimensional discrete probability distribution table, such as:
where Σpi,j=1.
There are m virtual identities (V1 . . . Vm) and n physical identities (P1 . . . Pn). pi,j is the probability that Pj is matched to Vi. For each physical identity, such a state model is created; the multiple state models for all of the physical identities together form the two-dimensional table.
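As a non-limiting illustration (hypothetical Python, not part of the claimed method), such a two-dimensional state table, with one row per physical identity initialized uniformly over the m virtual identities, might be built as:

```python
# Sketch of the two-dimensional discrete probability table described above.
# Rows correspond to physical identities P1..Pn; columns to virtual
# identities V1..Vm. Each row is the state model for one physical identity,
# initialized uniformly so that its entries sum to 1.

def make_state_table(n_physical, m_virtual):
    uniform = 1.0 / m_virtual
    return [[uniform] * m_virtual for _ in range(n_physical)]

table = make_state_table(3, 4)  # e.g. 3 physical and 4 virtual identities
```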
Bayes' theorem is given by:
where D represents data and h represents a hypothesis. The calculation is given by:
where
D represents two sets of sensor observations (physical and virtual);
hj,i represents the hypothesis that Physical j is matched to Virtual i;
P(D|hj,i) is the probability of the sensor data given the hypothesis, or the likelihood probability distribution of observing the two sets of observation data given the hypothesis;
P(hj,i) is the prior hypothesis, or the prior probability distribution of the hypothesis (the state definition at t−1). At the beginning, the prior is initialized uniformly; for example, if there are ten target vehicles identified, each probability would initially be 10%, and would then be updated;
P(D) is the evidence probability of two sets of sensor observations; and
P(hj,i|D) is the posterior hypothesis, or the posterior probability distribution of the hypothesis (the state at t). The sensor observation data are used to update the state table (hypothesis); as new data arrive, the state table is updated to represent a more accurate likelihood that a physical identity is matched to a virtual identity.
A Bayesian Inference Algorithm is as follows:
Step 1: Collect sensor data from two sources. From local perception sensors (physical), and from a wireless communication channel 20 (virtual).
Step 2: Create or update the two-dimensional state table (create new rows/columns if new identities are detected, delete rows/columns if an identity is no longer present). If a new row is created, the columns in the new row are initialized to a uniform probability distribution.
Step 3: Use the state table as the prior probability distribution, P(hj,i).
Step 4: Use the sensor data to calculate P(D|hj,i) and P(D).
Step 5: Update the posterior probability distribution, P(hj,i|D).
Step 6: P(hj,i|D) is used to update the two-dimensional state table.
Step 7: In the state table, find the hypothesis (j,i) with the maximal probability as the algorithm's current output, i.e., physical identity j is matched to virtual identity i with a probability pj,i.
Step 8: return to Step 1.
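The update at the core of the steps above can be sketched in hypothetical Python as follows; the likelihood function is a placeholder for whatever sensor model supplies P(D|hj,i), and the function names are illustrative:

```python
# One Bayesian update of the state-table row for a single physical identity.
# prior_row[i] is the prior P(h_j,i) for virtual identity i (Step 3);
# likelihood(physical_obs, virtual_obs[i]) supplies P(D | h_j,i) (Step 4).

def bayes_update(prior_row, virtual_obs, physical_obs, likelihood):
    unnormalized = [likelihood(physical_obs, v) * p
                    for v, p in zip(virtual_obs, prior_row)]
    evidence = sum(unnormalized)  # P(D), the evidence probability (Step 4)
    if evidence == 0.0:
        return list(prior_row)    # no information in this update; keep prior
    # Posterior P(h_j,i | D) = P(D | h_j,i) * P(h_j,i) / P(D) (Step 5)
    return [u / evidence for u in unnormalized]

def best_match(posterior_row):
    # Step 7: the virtual identity with the maximal posterior probability
    return max(range(len(posterior_row)), key=lambda i: posterior_row[i])
```

In a full implementation, Steps 2 and 6 would also add or delete rows and columns as identities appear and disappear, and the posterior row would be written back into the state table before the next sensor cycle.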
In one exemplary embodiment, when associating the physical identity of the target vehicle 14 with the virtual identity of the target vehicle 14, the data processor is further adapted to use the data related to the physical identity of the target vehicle 14 to determine a relative position of the target vehicle 14, and to estimate a real-time status of the target vehicle 14. The data related to the physical identity of the target vehicle 14 includes global satellite positioning coordinates, speed, acceleration, yaw and heading, and the data related to the virtual identity of the target vehicle 14 includes global satellite positioning coordinates, speed, acceleration, yaw and heading.
In this embodiment, the target vehicle 14 transmits only basic safety information, including global satellite positioning coordinates, speed, acceleration, yaw and heading. The ego vehicle 12 uses the plurality of perception sensors 24 to determine the relative position of one or more target vehicles and to estimate their real-time status, i.e. global satellite positioning coordinates, speed, acceleration, yaw and heading. The ego vehicle 12 receives the basic safety information of the one or more target vehicles, and the data processor 16 within the ego vehicle 12 runs the Bayesian Inference Algorithm and calculates P(D|hj,i) and P(D).
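As a hedged illustration (not part of the disclosure), the per-hypothesis likelihood for such basic safety information can be modeled as a product of Gaussians over the difference of each quantity; the field names and standard deviations below are assumptions chosen for the sketch:

```python
import math

def gaussian(delta, sigma):
    # Gaussian density of the difference between observed and reported values
    return math.exp(-0.5 * (delta / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bsm_likelihood(physical, virtual):
    """P(D | h_j,i) for one physical/virtual pairing, modeled as a product of
    per-quantity Gaussians. The keys and sigmas are illustrative assumptions
    (positions in meters, speed in m/s, heading in degrees)."""
    sigmas = {"x": 2.0, "y": 2.0, "speed": 1.0, "heading": 5.0}
    p = 1.0
    for key, sigma in sigmas.items():
        p *= gaussian(physical[key] - virtual[key], sigma)
    return p
```

A pairing whose reported state closely matches the physically observed state yields a larger likelihood and therefore a larger posterior in the state table.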
Referring to
Referring to
In another exemplary embodiment, the system further includes a cloud-based vehicle profile database 22′ that includes computer vision features, such as SIFT, SURF, BRIEF and ORB features, created for each model of all vehicles. Such databases 22′ are located in the cloud 28 and are accessible via wireless communication channels 20. In an example, a target vehicle 14 transmits its model information (brand, model, year, color, etc.) to other vehicles in the vicinity using wireless communication channels 20 or cellular networks.
The data processor 16 is further adapted to use model information received from the target vehicle 14 and to receive corresponding vehicle profile data from the cloud-based vehicle profile database 22′. The ego vehicle 12 receives this model information and uses this model information to “look-up” corresponding vehicle profile data for the target vehicle 14 from the cloud-based vehicle profile database 22′. The ego vehicle 12 then has a set of physical identities from its camera-based perception sensors 24 and a set of identities and profiles received wirelessly via communication channels 20 and runs the Bayesian Inference Algorithm and calculates P(D|hj,i) and P(D).
In a hypothesis, h1,4, P1 (physical identity) and V4 (virtual identity) are the same identity. P1's feature can be calculated as FEATURE(P1) and V4's feature can be represented by FEATURE(V4). The feature distance can be calculated as: F_Distance(P1,V4)=|FEATURE(P1)−FEATURE(V4)|. The feature distance will follow a certain probability distribution G(distance) 30, which can be created through field measurement, as illustrated in
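A minimal sketch of the feature-distance computation (hypothetical Python; in practice FEATURE(·) would be a SIFT/SURF/BRIEF/ORB descriptor vector, and G would be the field-measured distribution rather than the assumed Gaussian shape below):

```python
import math

def feature_distance(feat_p, feat_v):
    # F_Distance(P, V) = |FEATURE(P) - FEATURE(V)|, here a Euclidean norm
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_p, feat_v)))

def g_distance(distance, sigma=1.0):
    # Stand-in for the measured distribution G(distance): a smaller feature
    # distance maps to a higher probability that the identities match.
    return math.exp(-0.5 * (distance / sigma) ** 2)
```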
In an exemplary embodiment, the cloud-based vehicle profile database 22′ is a deep neural network (DNN). The DNN is adapted to learn unique signature vectors (feature vectors) for each vehicle's captured images. Signature vectors should be robust to various lighting/weather conditions, camera characteristics, viewing perspectives, etc. This is achieved with a well-balanced training dataset and effective data augmentation method.
Given a vehicle image xi, a DNN defines a feature extractor F: xi → vi, such that jointly with a classifier C and target class label yi, C∘F is trained to minimize a family of loss functions, given as:

min Σi=1N loss(C∘F(xi), yi) + αΩ(F),

where Ω(F) is the regularization term, and α is the weighting factor for the regularization term.
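As an illustrative, framework-free instance of this loss family (an assumption for the sketch, not the disclosed training method), the loss can be a cross-entropy over the classifier outputs C(F(xi)) plus an L2 penalty serving as the regularization term Ω(F):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # shifted for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def regularized_loss(logits_batch, labels, weights, alpha=0.01):
    """Cross-entropy over the outputs C(F(x_i)), summed over the batch, plus
    alpha * Omega(F), with Omega(F) taken here as the squared L2 norm of the
    extractor's weights. All names are illustrative."""
    cross_entropy = 0.0
    for logits, y in zip(logits_batch, labels):
        cross_entropy += -math.log(softmax(logits)[y])
    omega = sum(w * w for w in weights)
    return cross_entropy + alpha * omega
```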
In still another exemplary embodiment, the data related to the virtual identity of the target vehicle 14 includes data collected by the perception sensors 24′ on the target vehicle 14 related to the surroundings of the target vehicle 14, including observed lane lines, surrounding vehicles, vulnerable road users (VRUs), street signs, traffic lights and structures. The data related to the virtual identity of the target vehicle 14 may also include computer vision features for the target vehicle 14 that are retrieved from a cloud-based vehicle profile database 22′.
The target vehicle 14 leverages its on-board perception sensors 24′ to observe its surrounding environment. The target vehicle 14 shares its observed environment information with the ego vehicle 12 via a communication channel 20. For example, referring to
The Bayesian Inference Model is used as follows. Referring to
Virtual identity data, V7, from target vehicle, TV2, describes that it observed two dotted lane lines on the right, one target vehicle (TV1) on the left, and one target vehicle (TV3) in front. Target vehicle, TV3, is not visible, and therefore, not physically observed by the ego vehicle 12. One hypothesis is h2,7: P2 and V7 are the same identity, given by:
P(D|h2,7)=G(P2|h2,7)*G(V7|h2,7).
In reference to the lane lines:
G(P2|h2,7) is the probability of the observation by the ego vehicle 12 that there are one solid line and one dotted line on the left of a target vehicle and one dotted line on the right. As shown in
G(V7|h2,7) is the probability of a target's observation that there is one dotted line on the left of the target vehicle and one dotted line on the right of the target vehicle. As shown in
Therefore, P(D|h2,7)=100%*50%=50%. Similarly, P(D|h2,7) can be calculated for other nearby vehicles.
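The worked lane-line example can be reproduced with a short calculation (hypothetical Python; the counts encode how many lane placements are consistent with each observation):

```python
# The ego vehicle's observation of P2 is consistent with exactly one lane
# placement, so G(P2|h2,7) = 1/1; the target's report of its lane lines is
# consistent with two candidate lanes, so G(V7|h2,7) = 1/2.

def lane_likelihood(consistent_placements, total_placements):
    return consistent_placements / total_placements

g_p2 = lane_likelihood(1, 1)
g_v7 = lane_likelihood(1, 2)
p_d_given_h = g_p2 * g_v7  # P(D|h2,7) = 100% * 50% = 50%
```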
Referring to
Referring to
For example, within the cloud-based data processor 22″, a mini database of two vehicles' profiles is created, using SIFT features. When these two vehicles 36 are approaching an intersection, they share their vehicle model information (make, model, color) with the cloud-based data processor 22″ (infrastructure) via the wireless communication network 20″. The cloud-based data processor 22″ retrieves the vehicle features and finds the correct match between physical identities, i.e. images captured by the infrastructure camera, and virtual identities, i.e. information shared via wireless network communication.
Additionally, the system 10 of the present disclosure may be utilized for infrastructure-assisted precise positioning. The infrastructure uses perception sensors, such as the camera 34 shown in
Referring to
The ego vehicle 12 also observes, from its own perception sensors, a physical identity P2 of the second target vehicle TV2, and three lane lines (two dotted and one solid). The physical identities of the first target vehicle TV1 and the third target vehicle TV3 are hidden from the ego vehicle 12. One hypothesis is h2,7: P2 and V7 are the same identity (the second target vehicle TV2).
P(D|h2,7)=G(P2|h2,7)*G(V7|h2,7)
The ego vehicle 12 then uses the confirmation by the virtual identity V7 of the second target vehicle TV2 that the physical identity P1 observed by V7 is V4 (the first target vehicle TV1), and determines that the first target vehicle TV1 is about to cut in ahead of it and slows down. This can be extended to other scenarios where a direct virtual-physical identity association cannot be made (i.e. no line of sight), but an indirect association can.
Referring to
Moving to block 104, the method further includes collecting, with the data processor 16, via a wireless communication channel 20, data related to a virtual identity of the target vehicle 14, and, moving to block 106, associating, with the data processor 16, the physical identity of the target vehicle 14 with the virtual identity of the target vehicle 14. In an exemplary embodiment, the associating, with the data processor 16, the physical identity of the target vehicle 14 with the virtual identity of the target vehicle 14 further includes leveraging, with the data processor 16, a Bayesian Inference Model and estimating, with the data processor 16, a probability that the data related to the physical identity and the data related to the virtual identity are for the same target vehicle 14.
In one exemplary embodiment, moving from block 104 to block 108, the associating, with the data processor 16, the physical identity of the target vehicle 14 with the virtual identity of the target vehicle 14 further includes using the data related to the physical identity of the target vehicle 14 to determine, with the data processor 16, a relative position of the target vehicle 14, and to estimate, with the data processor 16, a real-time status of the target vehicle 14.
The data related to the physical identity of the target vehicle 14 includes global satellite positioning coordinates, speed, acceleration, yaw and heading, and the data related to the virtual identity of the target vehicle 14 includes global satellite positioning coordinates, speed, acceleration, yaw and heading.
In another exemplary embodiment, moving from block 104 to block 110, computer vision features created for each model of all vehicles are stored on a cloud-based vehicle profile database 22′, and the data related to the virtual identity of the target vehicle 14 includes model information transmitted by the target vehicle 14, the method 100 including using model information received from the target vehicle 14 and receiving, with the data processor 16, corresponding vehicle profile data from the cloud-based vehicle profile database 22′. The model information transmitted by the target vehicle 14 includes, but is not limited to, brand, model, year and color. In another exemplary embodiment, the cloud-based vehicle profile database 22′ is a deep neural network.
In still another exemplary embodiment, moving from block 104 to block 112, the data related to the virtual identity of the target vehicle 14 includes data collected by perception sensors 24′ on the target vehicle 14 related to the surroundings of the target vehicle 14. The data related to the virtual identity of the target vehicle 14 may include, but is not limited to, observed lane lines, surrounding vehicles, vulnerable road users (VRUs), street signs, traffic lights and structures, and in some exemplary embodiments, the data related to the virtual identity of the target vehicle 14 further includes computer vision features for the target vehicle 14 that are stored on a cloud-based vehicle profile database 22′.
A system and method of the present disclosure offers several advantages. These include allowing an ego vehicle to correctly associate a physical identity of a target vehicle that is detected by perception sensors within the ego vehicle to a virtual identity of the target vehicle that is received via wireless communication between the ego vehicle and the target vehicle. This ensures that the ego vehicle knows which vehicles it is communicating with, and that the ego vehicle knows the correct positions of the target vehicles nearby. This allows the ego vehicle to properly and safely operate on roadways and highways performing such tasks as collaborative lane changing, infrastructure-coordinated maneuvers, infrastructure-assisted precise positioning and sharing of physical/virtual association information with nearby vehicles.
The description of the present disclosure is merely exemplary in nature and variations that do not depart from the gist of the present disclosure are intended to be within the scope of the present disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the present disclosure.