The present disclosure relates to vehicle position mapping systems using wireless technology.
Wireless signals and visual features are currently used separately for vehicle positioning and mapping. Positioning using wireless signals typically requires the wireless infrastructure to be accurately mapped prior to use. Global Positioning System (GPS) operation for automobile vehicles such as cars, trucks, vans, sport utility vehicles, autonomously operated vehicles, electrically powered vehicles and the like relies on wireless signals but may be negatively impacted by environmental conditions including buildings, structures, reflective surfaces and the like. A precise location, or pose, of an automobile vehicle is nevertheless necessary when the vehicle environment contains such negative environmental conditions, which reduce the accuracy of positioning from wireless signals.
Multipath is also known to degrade performance of a wireless based positioning system. In wireless and radio communication, multipath is a propagation phenomenon that results in signals reaching a receiving antenna by two or more paths. Causes of multipath include atmospheric ducting, ionospheric reflection and refraction, and reflection from water bodies and terrestrial objects such as mountains and buildings. When the same signal is received over more than one path, interference and phase shifting of the received signal may result, and use of the received signal may therefore generate an inaccurate location of an automobile vehicle. Destructive interference causes fading, which may cause a wireless signal to become too weak in certain areas to be received adequately.
Thus, while current automobile vehicle positioning systems achieve their intended purpose, there is a need for a new and improved automobile vehicle position mapping system.
According to several aspects, a system to map an outdoor environment includes at least one map including an access point (AP) position map identifying positions of multiple APs, and a reflector map generated from multiple visual features and multiple wireless signals collected by multiple automobile vehicles. A set of crowd-sourced data is collected from individual ones of the multiple automobile vehicles, derived from multiple perception sensors when at least one of the multiple automobile vehicles passes a mapping area. A group of wireless positioning measurements includes: a time-of-flight, an angle-of-arrival, channel state information, and a power delay profile. A data package is created from the set of crowd-sourced data including a group of wireless positioning samples and a group of visual features, the data package being forwarded to an On-Cloud database where On-Cloud Mapping is conducted. Multiple range measurements yield circular AP candidate positions within a free-space operating window of vehicle operation of at least one of the multiple automobile vehicles, wherein application of the range measurements plus multiple reflectors defined at multiple planar reflective surfaces improves the AP candidate positions.
In another aspect of the present disclosure, the wireless positioning measurements include: a time-of-flight, an angle-of-arrival, channel state information, and a power delay profile.
In another aspect of the present disclosure, the perception sensor data collected includes images from one or more cameras, images from one or more laser imaging detection and ranging (lidar) systems, and images from a radar system.
In another aspect of the present disclosure, additional sensor data is collected including data from a GNSS, a vehicle speed, a vehicle yaw, and vehicle CAN bus data.
In another aspect of the present disclosure, the AP position map and the reflector map individually contain candidate locations of access-points (APs) and AP corresponding media-access-control (MAC) identities.
In another aspect of the present disclosure, locations of potential signal reflectors defining surfaces from which wireless signals may reflect are identified by the AP position map and the reflector map.
In another aspect of the present disclosure, at least one of the multiple automobile vehicles is equipped with a radio receiver, the radio receiver providing range measurements to different ones of the APs, with the range measurements provided as one of line-of-sight (LOS) or non-line-of-sight (NLOS) measurements.
In another aspect of the present disclosure, the AP position map and the reflector map further contain semantic data identifying walls, buildings or other real-world objects.
In another aspect of the present disclosure, at least one aggregate partial map is created from data of the individual automobile vehicles and is used to create optimized global maps of the wireless APs and the planar surfaces, wherein the AP position map and the reflector map may be further combined with data uploaded from one or more prior automobile vehicle generated maps.
In another aspect of the present disclosure, the On-Cloud Mapping Process includes processing data uploaded from individual ones of the multiple automobile vehicles, leveraging visual features and wireless positioning programs to create the AP position map and the reflector map.
According to several aspects, a system to map an outdoor environment includes at least one map generated from multiple wireless signals collected by multiple automobile vehicles. An onboard-processing segment of at least one of the multiple automobile vehicles includes perception sensor data derived from at least one camera, a lidar system, or a radar system, and data from a GPS unit. A semantic feature detection module detects lane edges of a roadway. A 3D position detection module detects 3D positions of planar surfaces proximate to the multiple automobile vehicles. An image feature extraction module identifies objects including corners, and descriptors including pixels about a given vehicle location. An output of the image feature extraction module is forwarded to a 3D feature coordinate module which determines 3D feature coordinates via structure from motion of one of the multiple automobile vehicles. A model generator receives an output from the 3D position detection module and the 3D feature coordinate module, together with vehicle sensor data and range data. An optimizer receives data from the model generator, the optimizer solving for a location of one of the automobile vehicles and any objects identified for input to the at least one map.
In another aspect of the present disclosure, the at least one map includes an access point (AP) position map identifying positions of multiple APs, and a reflector map generated from multiple visual features and multiple wireless signals collected by the multiple automobile vehicles.
In another aspect of the present disclosure, an On-Cloud database is provided where On-Cloud Mapping of the access point (AP) position map and the reflector map are conducted.
In another aspect of the present disclosure, the optimizer defines one of a Kalman filter and a non-linear least squares solver.
In another aspect of the present disclosure, a loop closure detection module recognizes if an object or a surface was previously identified and becomes identified for a second or later time.
In another aspect of the present disclosure, the onboard-processing segment further includes range data derived from an angle-of-arrival (AoA) sensor.
In another aspect of the present disclosure, the onboard-processing segment further includes vehicle sensor data including odometry information, an inertial-measurement-unit (IMU), a wheel-speed-sensor (WSS), and visual-odometry (VO) data.
According to several aspects, a method to collect data and to map an outdoor environment comprises: applying an individual vehicle's data processing step using one or more cameras or a lidar system to detect reflective surfaces, such as via semantic segmentation; collecting the reflective surfaces as a data set; fitting the reflective surfaces of the data set to planar models; creating one or more access point (AP) maps having estimated AP positions and planar surfaces; developing multiple planar surface maps; combining wireless AP range information with planar surface detections to estimate a true AP position; and applying a particle filter to obtain a spatial distribution of AP positions and an automobile vehicle pose.
In another aspect of the present disclosure, the method further includes extracting visual features, and matching and tracking the visual features for odometry and loop closure.
In another aspect of the present disclosure, the method further includes collecting multiple maps created by multiple automobile vehicles.
Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses.
Referring to
According to several aspects, at least one of the vehicles including the host automobile vehicle 18 is equipped with a radio receiver 26 such as but not limited to WiFi fine time measurement (FTM), 5G, and the like. An environment in which the host automobile vehicle 18 operates may hinder global positioning system (GPS) performance and hinder identification of AP positions. The AP position map 12 and the reflector map 14 therefore contain candidate locations of access-points (APs) and their corresponding media-access-control (MAC) IDs. Locations of potential signal reflectors defining surfaces from which wireless signals may reflect are identified by the AP position map 12 and the reflector map 14. The AP position map 12 and the reflector map 14 may further contain image features developed from systems such as scale-invariant feature transform (SIFT) and their coordinates. The AP position map 12 and the reflector map 14 further contain other relevant semantic data identifying for example walls, buildings, roadways, intersections and the like. The radio receiver 26 may also provide range measurements to different APs; however, multiple ranges may be reported due to the above noted signal reflectors, and measurements may be provided as line-of-sight (LOS) or non-line-of-sight (NLOS) measurements as discussed in greater detail in the figures that follow.
A data package 28 is created from the set of crowd-sourced data 16 and includes a group of wireless positioning samples 30 and a group of visual features 32 which is forwarded for example by the radio receiver 26 to an On-Cloud database 34 where an On-Cloud Mapping Process 36 is then conducted. The On-Cloud Mapping Process 36 includes processing individual vehicles' uploaded data, leveraging visual feature algorithms and wireless positioning algorithms jointly to create the AP position map 12 and the reflector map 14. Aggregate partial maps are created for the individual vehicles and are used to create optimized global maps of wireless access points and planar surfaces. The AP position map 12 and the reflector map 14 may be further combined with data uploaded from one or more automobile vehicle generated pre-existing or prior maps 38 created from ground surveys, aerial imagery, and the like.
Referring to
Referring to
Referring to
With continuing reference to
Via perception, planar surfaces are detected which may cause reflections. Possible locations of a transmitter may be determined by defining Equation 1 below:
{χ ∈ R^n | ψ(χ; p1, p2, r) = 0}   Equation 1:
Equation 1 identifies a set of points that would terminate at the origin after reflecting from a line segment with terminal points p1 and p2 after the distance r.
The LOS range model loss is typically L(χ) = (r − |χ|)². The possibility of reflections is considered using Equation 2 below:
L(χ) = ψ(χ; p1, p2, r)²   Equation 2:
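As a minimal illustration of Equations 1 and 2 in two dimensions, the residual ψ may be evaluated by mirroring the receiver (placed here at the origin) across the line through p1 and p2; the reflected path length from a candidate transmitter position χ then equals the straight-line distance to the mirrored receiver. The sketch below is illustrative only: the function names are assumptions, and the check that the reflection point actually falls within the segment p1p2 is omitted for brevity.

```python
import numpy as np

def mirror_point(q, p1, p2):
    """Reflect point q across the infinite line through p1 and p2."""
    d = p2 - p1
    d = d / np.linalg.norm(d)
    v = q - p1
    # Keep the component of v along the line, flip the normal component.
    return p1 + 2.0 * np.dot(v, d) * d - v

def psi(x, p1, p2, r):
    """Reflected-range residual (Equation 1): zero when the path from x,
    reflecting off the line through p1-p2, reaches the origin after
    total distance r."""
    origin_image = mirror_point(np.zeros(2), p1, p2)
    return np.linalg.norm(x - origin_image) - r

def nlos_loss(x, p1, p2, r):
    """Equation 2: squared reflected-range residual."""
    return psi(x, p1, p2, r) ** 2
```

For example, with a reflector along the line y = 1 and a transmitter at (3, 0), the reflected path to the origin has length √13, so ψ evaluates to zero at the true range.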
Referring to
The perception sensor data 72 is transferred to several modules including a semantic feature detection module 84 which detects lane edges for example, a 3D position detection module 86 which detects 3D positions of planar surfaces, and an image feature extraction module 88 which performs operations such as scale-invariant feature transform (SIFT) programming to identify objects such as corners, and descriptors such as pixels about a given location, and the like. An output of the image feature extraction module 88 is forwarded to each of a 3D feature coordinate module 90 which determines 3D feature coordinates via structure from motion of for example the host automobile vehicle 18, and a loop closure detection module 92 which recognizes if an object or surface was previously identified and becomes identified for a second or later time.
An output from individual ones of the 3D position detection module 86, the 3D feature coordinate module 90, the loop closure detection module 92, together with the vehicle sensor data 74 and the range data 82 are forwarded to a model generator 94. Data from the model generator 94 is forwarded to an optimizer 96, which may be for example a Kalman filter or a non-linear least squares solver. The optimizer 96 solves for a location of the automobile vehicle such as the host automobile vehicle 18 and any objects identified. An output from the optimizer 96 is forwarded to and generates a vehicle map 98. An output from the semantic feature detection module 84 is forwarded directly to the vehicle map 98 and added to the vehicle map 98 after a vehicle pose is identified. It is noted the model generator 94, the optimizer 96 and the vehicle map 98 may be processed in either the automobile vehicle or in the cloud.
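The optimizer 96 may, as noted, be a non-linear least squares solver. As a hedged sketch of that choice only, the following reduces the problem to solving a 2D vehicle position from LOS ranges to known AP positions via Gauss-Newton; the function name, the reduced state, and the absence of reflector and feature terms are all simplifying assumptions, not the disclosed implementation.

```python
import numpy as np

def solve_position(ap_positions, ranges, x0, iters=20):
    """Gauss-Newton on LOS range residuals r_i - |x - ap_i|,
    solving for a 2D vehicle position x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - ap_positions                  # (n, 2)
        dists = np.linalg.norm(diffs, axis=1)     # (n,)
        resid = ranges - dists                    # residual vector
        J = -diffs / dists[:, None]               # Jacobian d(resid)/dx
        # Gauss-Newton step: minimize |J @ step + resid|^2
        step, *_ = np.linalg.lstsq(J, -resid, rcond=None)
        x = x + step
    return x
```

A Kalman filter, the other optimizer option named above, would instead fold the same residuals in recursively as measurements arrive.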
Sensor information is used to simultaneously estimate the host pose and the locations of various features (SLAM). A semantic segmentation network is trained to identify planar reflective surfaces. Image features are estimated and mapped. Semantic features, for example lane edges, are added to the map after the vehicle pose is learned.
Referring to
Referring to
In parallel with the analyses and processing performed on the single vehicle uploaded data set 116, a crowd sourced AP mapping data set 132 is separately processed. Data from the set of sample positions 126 is forwarded to a particle filter 134 which functions to update and resample data from the set of sample positions 126. A particle filter initialization module 136 receives an output of the particle filter 134 and initializes the particle filter for a next AP (i.e., an APn) object. A set of AP positions 138 defines a final output of the cloud processing group 114 using an output from the particle filter 134.
It is noted that image processing and the image processing conducted by the feature extraction and reconstruction module 118 may also be performed by one or more of the automobile vehicles in lieu of in the cloud processing group 114. The extracted features and 3D reconstructed image data may then be uploaded together with the data package 28 from the automobile vehicles via the cloud edge 112.
Referring to
On the On-Cloud side there are two mapping processes. A Process 1 defines One Vehicle's Data Preprocessing. Its purpose is to process and integrate one vehicle's sensor measurement samples into three (3) databases: a Global Point Cloud database, a Planar Surface database, and a Sample Positions database.
With continuing reference to
The point cloud registration module 120 uses point registration algorithms to integrate this point cloud into the existing global point cloud data created by other crowd-sourced data.
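The specific point registration algorithm is not identified above. As a hedged sketch, the rigid alignment step shared by ICP-style registration methods (the Kabsch algorithm, here assuming known point correspondences, which a full registration pipeline would have to establish) may be written as:

```python
import numpy as np

def align_point_clouds(src, dst):
    """Rigid alignment (Kabsch): find rotation R and translation t
    minimizing ||R @ src_i + t - dst_i|| over corresponding points;
    one alignment step of an ICP-style registration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered clouds.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

In a full registration loop, correspondences would be re-estimated (e.g., by nearest neighbor) and the alignment repeated until convergence.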
The segmentation module 122 leverages surface algorithms to identify valid planar surfaces from the point cloud. For example, surfaces 142, 144, 146, 148 are detected. The surfaces 142, 144, 146, 148 are saved into the database including multiple planar surface models 130.
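One common way to fit the planar surface models, consistent with the segmentation module's purpose though not necessarily the exact surface algorithm used, is a least-squares plane fit via singular value decomposition (function names here are illustrative):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (n, 3) array of points.
    Returns (centroid, unit normal); the normal is the singular vector
    of the centered points with the smallest singular value."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    return c, n

def plane_inliers(points, c, n, tol=0.05):
    """Boolean mask of points within tol (meters) of the fitted plane,
    a simple validity check for a candidate surface."""
    return np.abs((points - c) @ n) < tol
```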
The sample positioning module 124 leverages the camera image from each frame of data to determine a precise position of the vehicle using positioning algorithms from Visual SLAM or SFM. Once the 3D point is determined, the sample positioning module 124 attaches wireless positioning measurement data such as FTM, channel state information (CSI), power delay profile (PDP), and the like to this 3D sample point. Each frame of data from the uploaded sequence is then processed and the created 3D sample points are saved into the database defined by the set of sample positions 126.
These three databases are later used as input by Process 2, the crowd-sourced AP mapping data set 132, which defines a Crowd-sourced AP Mapping process, wherein the particle filter initialization module 136 initializes the particle filter 134 to locate a specific AP (i.e., APx).
The particle filter 134 updates this particle filter data using the wireless position measurement samples from the set of sample positions 126 database. This update process goes through several iterations until a predetermined finish condition is satisfied. Once it finishes, the final position of an APx is saved into the set of AP positions 138 database.
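The iterative update described above may be sketched as a generic particle-filter step; the weighting function, jitter magnitude, and systematic resampling scheme below are illustrative assumptions rather than the disclosed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def pf_step(particles, weight_fn, jitter=0.1):
    """One particle-filter iteration over candidate AP positions:
    weight each particle, normalize, systematically resample,
    then jitter to preserve diversity."""
    w = np.array([weight_fn(g) for g in particles])
    w = w / w.sum()
    n = len(particles)
    # Systematic resampling: evenly spaced draws over the CDF.
    positions = (np.arange(n) + rng.random()) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx] + rng.normal(scale=jitter, size=particles.shape)
```

Iterating pf_step until a finish condition (e.g., particle spread below a threshold) yields the final APx position estimate, for example the particle mean.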
A Particle Weight Calculation is conducted in the particle filter 134. For a Particle gj, the weight is calculated as Equation 3 below:
Equation 3:
For Equation 3, gj is one of the particles, si is one of the wireless samples, and pk is one of the paths from gj to si. The path can be a direct path (such as p0) or a reflection path (such as p4). PDP(gj, si, pk) is a function which gets the corresponding power level from the wireless measurement for a given path pk between gj and si.
Since si and gj's positions are known, their direct path or reflection path's length is known. The path length can be converted to time of flight based on the known speed of light. The time-of-flight values can be mapped to the Power Delay Profile generated by the wireless measurement from a position sample.
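Since the body of Equation 3 is not reproduced above, the sketch below illustrates only the described mechanics: converting a path length to a time of flight and reading the corresponding power from a sampled power delay profile. The final aggregation into a particle weight (here a simple sum over samples and paths) and all function names are assumptions.

```python
C = 299_792_458.0  # speed of light, m/s

def path_length_to_tof(length_m):
    """Convert a geometric path length to a time of flight."""
    return length_m / C

def pdp_power(pdp_bins, bin_width_s, tof_s):
    """Look up the measured power for a given time of flight in a
    sampled power delay profile (one power value per delay bin)."""
    i = int(tof_s / bin_width_s)
    return pdp_bins[i] if i < len(pdp_bins) else 0.0

def particle_weight(particle, samples, paths_fn, pdp_fn, bin_width_s):
    """Hypothetical Equation-3-style weight: accumulate, over samples and
    candidate direct/reflected paths, the PDP power at each path's ToF."""
    w = 0.0
    for s in samples:
        pdp = pdp_fn(s)  # PDP measured at sample position s
        for length in paths_fn(particle, s):  # candidate path lengths
            w += pdp_power(pdp, bin_width_s, path_length_to_tof(length))
    return w
```

A particle whose direct and reflected path lengths line up with strong taps of the measured PDPs thus receives a large weight.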
Features detected by the perception system may optionally be associated with a mapped feature. States include a pose of the automobile vehicle such as the host automobile vehicle 18, coordinates of reflectors in the area of the host automobile vehicle 18, AP positions for individual ones of the hypotheses and image feature locations. A state model is developed based on equations 4 through 8 below:
x_{t+1}^{host} = x_t^{host} + Δx_t^{odom}   Equation 4:
ϕ_{t+1}^{host} = ϕ_t^{host} + Δϕ_t^{odom}   Equation 5:
p_{t+1}^{reflector,i} = p_t^{reflector,i}   Equation 6:
p_{t+1}^{AP,j} = p_t^{AP,j}   Equation 7:
p_{t+1}^{feat,l} = p_t^{feat,l}   Equation 8:
Observations include: 1) Odometry information, inertial-measurement-unit (IMU) 76, wheel-speed-sensor (WSS) 78, visual-odometry (VO), and the like; 2) Image features; 3) GPS data; 4) Reflector coordinates from perception and 5) Range, MAC address and AoA measurements of APs if available. An observation model is developed based on equations 9 through 12 below:
x̃^{GPS} = x_t^{host}   Equation 9:
p̃^{reflector,i} = R(−ϕ_t)(p_t^{reflector,i} − x_t^{host})   Equation 10:
p̃^{feat,l} = R(−ϕ_t)(p_t^{feat,l} − x_t^{host})   Equation 11:
ψ(R(−ϕ_t)(p_t^{AP,j} − x_t^{host}); p_t^{reflector,i}, r̃_k) = 0   Equation 12:
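Equations 10 and 11 both rotate a global-frame point into the vehicle frame. A direct transcription (function names are illustrative) is:

```python
import numpy as np

def rot(phi):
    """2D rotation matrix R(phi)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

def predict_observation(p_global, x_host, phi_host):
    """Equations 10/11: predicted vehicle-frame observation of a
    global-frame reflector or feature point."""
    return rot(-phi_host) @ (p_global - x_host)
```

For example, with the host at (1, 0) heading along the global +y axis (ϕ = π/2), a point at (1, 2) is predicted at (2, 0) in the vehicle frame, i.e., 2 m straight ahead.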
With the addition of loop-closure constraints, the system above represents a SLAM problem. Multiple AP locations may be estimated for each measurement as it may be unknown if the source is LOS or NLOS. AP information is associated based on MAC addresses. A solution may be obtained using Kalman filters, particle filters or factor graph optimization.
Referring to
In a data aggregation step 164 the map data from the AP map generation step 158, the planar surface map creation step 160 and the map collection step 162 are aggregated. Also in the aggregation step 164, aggregate partial maps are created from the data collected from the individual vehicles and are used to create optimized global wireless AP maps 166 of wireless access points and to create global planar surface maps 168. The mapping creation process may occur onboard any of the multiple automobile vehicles including the host automobile vehicle 18 or in the cloud processing group 114 described in reference to
The following marginal likelihood functions can be defined:
f(y|H0, z, ψ) = P(p1 is LOS) P(p2 is NLOS) P(r1 is LOS) P(r2 is NLOS)
f(y|H1, z, ψ) = P(p1 is NLOS) P(r1 is NLOS) P(p2 is random) P(r2 is random)
f(y|Ha, z, ψ) = P(y is random) = U(y, a, b)
where:
P(pj is LOS) = N(pj − p(|z|, 1, p0), σ²_p)
P(pj is NLOS) = N(pj − p(|z|, α, p0), σ²_p)
P(rj is LOS) = N(rj − |z|, σ²_r)
P(rj is NLOS) = (1/|L|) Σ_i N(|wi − z| − rj, σ²_r) I(angle(z − wi) ∈ θi)
N(x, σ²) is the likelihood of a zero-mean Gaussian with variance σ² at x
U(y, a, b) is the likelihood of a uniform distribution between a and b
|L| is the cardinality of L
(wi, θi) = m(li, x0, ri)
I(cond) is 1 if cond is true, else 0
ψ = (p0, α, σ²_p, σ²_r) are nuisance parameters
Given priors P(z), P(Hi), P(ψ), etc., the goal is to estimate the posterior P(z, Hi, ψ | y).
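The hypothesis part of that posterior may be sketched with Bayes' rule: for fixed z and ψ, P(Hi | y) is proportional to f(y | Hi, z, ψ) P(Hi). The helper gauss mirrors the N(x, σ²) definition above; names are illustrative and the full marginalization over z and ψ is omitted.

```python
import numpy as np

def gauss(x, var):
    """Likelihood of a zero-mean Gaussian with variance var at x."""
    return np.exp(-x * x / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def hypothesis_posterior(likelihoods, priors):
    """Posterior P(H_i | y) proportional to f(y | H_i) * P(H_i),
    normalized over the hypothesis set {H0, H1, Ha}."""
    post = np.asarray(likelihoods, dtype=float) * np.asarray(priors, dtype=float)
    return post / post.sum()
```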
Referring to
Where:
H0 defined by a first line segment 176: z represents the LOS position of the AP.
H1 defined by a second set of line segments 178′, 178″: z represents the first-order reflected position of the AP.
Ha defined by a third set of line segments 180′, 180″, 180′″: z represents a higher order reflection or is from an unknown reflector or is an outlier.
It is assumed that a received power follows an inverse square law p(r, α, p0) = α·p0/(r/r0)², where α = 1 for LOS signals and α < 1 for reflections. For mapping, the state space consists of: a host automobile vehicle pose, a planar surface position, the AP 172 position, a reference power p0 and a reflection loss α.
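The assumed inverse-square power model transcribes directly; the reference distance r0 defaulting to 1 is an assumption for illustration:

```python
def received_power(r, alpha, p0, r0=1.0):
    """Inverse-square model p(r, alpha, p0) = alpha * p0 / (r / r0)**2,
    with alpha = 1 for LOS signals and alpha < 1 for reflections."""
    return alpha * p0 / (r / r0) ** 2
```

A reflected path thus appears both weaker (α < 1) and longer (larger r) than the direct path, which is what the likelihood terms P(pj is LOS/NLOS) above discriminate.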
According to a first aspect, the cloud side such as the cloud processing group 114 defined in reference to
According to a second aspect, applying enhanced visual positioning with wireless signals, the cloud side such as the cloud processing group 114 defined in reference to
According to a third aspect, when the present system is run using wireless signals plus visual feature positioning with On-Board computing on one or more individual vehicles, there is no Cloud Computing involved.
According to a fourth aspect, wherein smartphone positioning with wireless signals only is used, positioning may be challenging for the smartphone, especially in urban canyons or multi-floor parking structures. A smartphone can 1) leverage the reflector models and wireless AP models to compensate for multipath errors and improve positioning accuracy with wireless signals only; and 2) when a camera on the smartphone is active, the smartphone camera may assist positioning by leveraging the visual features in the point cloud.
The system and method for mapping an outdoor environment 10 of the present disclosure leverages crowd-sourced vehicle sensor data to create maps of wireless access points and maps of reflection surfaces. The system and method for mapping an outdoor environment 10 utilizes visual feature algorithms (e.g., SLAM) to create 3D models of the environments and to extract planar surfaces which may cause multipath reflection. Based on the created planar surfaces, wireless reflection paths are modeled, and the precise positions of wireless APs are determined.
The system and method for mapping an outdoor environment 10 of the present disclosure offers several advantages. These include a system that provides for mapping an outdoor environment using a combination of visual features and wireless signals. Multipath sources are identified using visual features from a camera, lidar or other sensors. The maps may be combined with other maps via the cloud and subsequently used by lower tier vehicles (vehicles lacking advanced guidance systems) for functions such as positioning. Visual features are used to identify reflections and dynamic objects in the environment when mapping. Visual features are also used to aid in creation of consistent maps along with wireless measurements.
The description of the present disclosure is merely exemplary in nature and variations that do not depart from the gist of the present disclosure are intended to be within the scope of the present disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the present disclosure.