The present disclosure pertains to wireless networks. More specifically, the present disclosure pertains to measuring location and velocity of wireless devices by detecting sensing signals that carry information about distances to such devices. The present disclosure further pertains to fast and efficient determination and tracking of trajectories of devices using the distance information.
Personal area networks (PAN), such as Bluetooth (BT) and Bluetooth Low Energy (BLE), and wireless local area networks (WLAN), such as Wi-Fi networks and other networks operating under the IEEE 802.11 or other wireless standards, provide wireless connections for various personal, industrial, scientific, and medical applications. Many BT, BLE, and IEEE 802.11 applications use identification and secure communications that are predicated on correct localization of various objects that carry a wireless device. For example, automotive applications deploy passive keyless entry systems that localize a key fob and lock/unlock/start the car based on the proximity of the key fob to the car. Similarly, a tire pressure monitoring system identifies a specific tire whose pressure falls below a certain reading. The BLE specification defines a variety of techniques for performing object localization, such as estimating the signal strength of received wireless signals (e.g., received signal strength indication, RSSI), the angle (direction) of arrival (AoA) of wireless signals, high-accuracy distance measurements (HADM) using time-of-flight (ToF) channel sensing, phase-based ranging (PBR), and other techniques. AoA uses multiple sensors (antennas) that exploit differences in the phases of one or more unmodulated tones arriving at the sensors (positioned at different points in space) to estimate the direction of wave propagation. Similarly, channel sensing (e.g., HADM) estimates a distance to an object (e.g., another BLE device) by measuring the phase delays accumulated by a plurality of signals of different frequencies along a path from an initiator wireless device to a return wireless device and back.
In various applications, wireless devices and various moving (or movable) objects that carry wireless devices, e.g., people, vehicles, cell phones, key fobs, items stored in a warehouse, etc., may be tracked using wireless (e.g., radio) signals. A distance to an object may be measured (e.g., using PBR techniques) for a series of times ti, and the object's trajectory may be determined based on the measured distances di(ti). The trajectory of the object can then be estimated using a variety of techniques, such as deploying a Kalman filter to estimate the actual distance d(t), its first time derivative $\dot d(t)$, its second time derivative $\ddot d(t)$, and so on. In some applications, trajectory estimation is performed using neural networks that process the measurement data, and/or using other techniques. All such approaches typically involve significant post-processing of the measured data, which can exceed the computational resources of the low-power microprocessor devices used in automotive applications, Internet-of-Things applications, sensing and monitoring applications, and the like.
Aspects and implementations of the present disclosure address these and other limitations of the existing technology by enabling systems and methods of trajectory determination and tracking using sensing signals whose phase changes in the course of propagation to and from an object may be detected and analyzed. More specifically, a variety of signal-processing techniques may be used to estimate a distribution P(d), also referred to as a likelihood vector throughout this disclosure, that the distance to an object has a particular value d at the time of sensing. Due to the inherently noisy radio environment, the distribution (likelihood vector) may have a maximum at some value of distance but may possess some uncertainty (width). Unlike conventional techniques, which identify the most likely distance d for each sensing event and then track this most likely distance with additional sensing events to determine the object's trajectory, the techniques disclosed herein first convert the likelihood vector (or a set of likelihood vectors obtained for different sensing events, Pi(d)) into a likelihood tensor, e.g., P(d0, ν). In one non-limiting example, the likelihood tensor P(d0, ν) may be a tensor in the location-velocity space d0-ν whose values (elements) characterize the likelihood that the trajectory of the object, e.g., d(t)=d0+νt, is described by a particular reference distance d0 (the distance to the object at some reference time, e.g., t=0) and velocity ν. The estimation of the trajectory may then be performed by identifying maxima of the likelihood tensor P(d0, ν), which may be updated with each additional set of sensing data, so that changes in the trajectory of the object (e.g., due to acceleration or deceleration) may be tracked in real time. The described techniques allow disambiguation of multiple paths of the returned signals, which appear as multiple maxima of the likelihood tensor. Numerous additional implementations are disclosed herein. For example, a reduced number of elements of the likelihood vector P(ds) for a limited set of distances ds may be computed and then expanded (e.g., using interpolation techniques) to a larger array of values of the likelihood vector. Similar techniques may be used for computation of the elements of the likelihood tensor P(d0, ν).
In one example of BLE systems, during a sensing event, N waves of different frequencies (tones) fj (j∈[1, N]) from the BT bandwidth (i.e., from 2.402 GHz to 2.480 GHz), spaced within the 1-80 MHz interval, may be transmitted by a wireless device that performs trajectory tracking of another wireless device or of an object that transports such a wireless device. In another example of some IEEE 802.11 wireless systems, the tones of a sample set may be subcarriers transmitted simultaneously in a long training field (LTF) of a packet. The transmitted signals may be reflected by a return device (the device whose trajectory is being estimated) along the same path. A sensor (e.g., antenna) may detect the arrival of the N returned signals and extract phase information from these signals, the phase information being representative of the length of the traveled path. Detection of each arriving frequency fj (referred to as a sub-event herein) generates a corresponding sensing value rj. The set of sensing values {rj} may be used to obtain the likelihood vector Pi(d) for this specific sensing event (identified by subscript i).
In some instances, the N signals reflected by the object may follow multiple, e.g., n, paths, including a path that corresponds to line-of-sight propagation and paths that involve reflections from walls, ceilings, and/or other objects, including multiple reflections. Such paths may be identified, e.g., as multiple maxima of the likelihood tensor, which may further allow distinguishing line-of-sight propagation from multipath reflections, as described below. Numerous implementations are disclosed herein that deploy phase-based ranging for tracking the motion of devices in wireless networks. The advantages of the disclosed implementations include computationally efficient trajectory tracking for fast real-time monitoring of the locations of various objects in wireless network environments, which may include multiple wireless devices and various additional objects.
Wireless device 100 may generate and transmit a plurality of sensing signals. In some implementations, the sensing signals may have different frequencies (tones). More specifically, wireless device 100 may generate a signal 106-1 that includes multiple (e.g., N) tones, e.g., f0, f0+Δf1, f0+Δf2 . . . , and may transmit the generated signal to object 104, which may be a responding device belonging to the same wireless network as wireless device 100. The responding device may analyze the received signal 106-1 and evaluate the phase information that is used in returned signal 107-1. Wireless device 100 may similarly evaluate the phase information of the returned signal 107-1 to estimate the distance between wireless device 100 and the responding device (object 104) based on the total phase change. Each tone of the transmitted signal 106-1 (and, correspondingly, of the returned signal 107-1) may carry its own phase information. In particular, the total phase change Δϕj associated with the distance d1 between wireless device 100 and object 104 traveled by signal 106-1 and the same distance d1 traveled by returned signal 107-1 of frequency fj is Δϕj=4πfjd1/c, where c is the speed of light. This phase change is representative of the distance d1(t1) to object 104 at time t1.
At a later time t2, object 104 may move to a different location 105. A new sensing signal 106-2 (e.g., similarly having N sensing tones fj) may be transmitted by wireless device 100 and may cause a returned signal 107-2 carrying phase information representative of the new distance d2(t2) to object 104.
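For illustration, the round-trip phase of each tone may be computed directly from the relation Δϕj=4πfjd/c. The following minimal sketch (Python/NumPy) uses an assumed tone plan and distance, not values prescribed by this disclosure, and shows why a single tone is insufficient: each measured phase is known only modulo 2π, while the phase increment across tones encodes the distance:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Assumed example values: 16 tones in the BT band spaced by 1 MHz, object at 12.3 m.
f0, delta_f, n_tones = 2.402e9, 1e6, 16
tones = f0 + delta_f * np.arange(n_tones)
d1 = 12.3  # distance to the object, m

# Round-trip phase accumulated by each tone: delta_phi_j = 4*pi*f_j*d1/c.
phase = 4 * np.pi * tones * d1 / C

# Each phase is measurable only modulo 2*pi, so no single tone resolves d1;
# the increment between adjacent tones, 4*pi*delta_f*d1/c, does.
wrapped = np.mod(phase, 2 * np.pi)
slope = 4 * np.pi * delta_f * d1 / C
print(wrapped[:4], slope)
```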
The phase changes Δϕj carried by the returned sensing signals may be exploited using the multiple signal classification (MUSIC) algorithm, the generalized cross-correlation (GCC) algorithm, an inverse Fourier transform algorithm, or any other suitable processing algorithms, further improved according to implementations of the present disclosure. The following operations may be performed for each of the sensing events ti to determine a respective likelihood vector Pi(d). Likelihood vector Pi(d) may be a vector (array) in distance space whose specific values indicate the likelihood (a probability, a quantity proportional to a probability, or a quantity otherwise related to a probability) of various distances to the wireless device that is being tracked. Multiple likelihood vectors Pi(d) may then be combined into a likelihood tensor, as described in more detail below.
The sensing values may be modeled as a superposition of the contributions of the n propagation paths, e.g.,

$r_j = \sum_{k=1}^{n} S_k\, a_j(d_k) + n_j + n'_j,$
where Sk represents the amplitude of the wave traveled over the k-th path, nj is the noise associated with forward propagation (and detection) of the j-th frequency (tone, channel) fj, n′j is the noise associated with backward propagation (and detection) of the j-th frequency, and aj(d) is a steering vector (also denoted, in vector notation, as â(d)) that describes the phase change over distance d, which may take one of the values d=d1 . . . dn. In particular, for N equidistant sensing tones, fj=f0+(j−1)Δf, the steering vector may have the form aj(d)=exp[4πi(j−1)Δfd/c].
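As a concrete and purely illustrative sketch of this model (Python/NumPy), sensing values may be synthesized for an assumed two-path scene; the amplitudes, path distances, tone plan, and noise level below are example assumptions, and the forward and backward noises are lumped into a single complex term:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 299_792_458.0     # speed of light, m/s
delta_f, N = 1e6, 40  # tone spacing and number of tones (example values)

def steering(d):
    # a_j(d) = exp[4*pi*i*(j-1)*delta_f*d/c], j = 1..N (round-trip phase)
    j = np.arange(N)
    return np.exp(4j * np.pi * j * delta_f * d / C)

# Two propagation paths: line of sight at 10 m and one reflection at 17 m.
paths = [(1.0, 10.0), (0.5, 17.0)]  # (amplitude S_k, distance d_k)
sigma = 0.1  # per-channel noise level (n_j and n'_j lumped together)

r = sum(S * steering(d) for S, d in paths)  # sum over the n = 2 paths
r = r + sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
```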
In MUSIC algorithm implementations, the sensing values may be used to construct the N×N covariance matrix Rjl=⟨rjrl*⟩, where the angular brackets ⟨ . . . ⟩ denote statistical averaging and rl* stands for the complex conjugate of rl. In some implementations, the covariance matrix may be formed using square roots (with suitably chosen signs) of the sensing values, e.g., Rjl=⟨√rj (√rl)*⟩. In some implementations, statistical averaging may be performed using smoothing in the frequency domain, e.g., using the smooth-MUSIC algorithm. In some implementations, statistical averaging may include averaging in the time domain, e.g., by collecting multiple instances of data. In some implementations, time averaging is not performed. For uncorrelated noise, ⟨njnl*⟩=δjlσ², where σ² is the noise variance in a single sensing channel.
Covariance matrix $\hat R$ may have n signal eigenvectors $\hat s^{(1)} \dots \hat s^{(n)}$ and N−n noise eigenvectors $\hat g^{(n+1)} \dots \hat g^{(N)}$ (which define what is commonly referred to as the noise subspace). For uncorrelated noise, the noise eigenvectors are orthogonal to the steering vector: $\hat a^\dagger(d_m)\cdot\hat g^{(\alpha)} = 0$, where index α enumerates the eigenvectors. Accordingly, the localization vector P(d) (often referred to in MUSIC and GCC applications as the pseudo-spectrum), defined using the noise eigenvectors as,

$P(d) = \Big[\sum_{\alpha=n+1}^{N} \big|\hat a^\dagger(d)\cdot\hat g^{(\alpha)}\big|^2\Big]^{-1},$

has maxima for the actual distances d=d1 . . . dn of signal propagation, some of which may correspond to direct (line-of-sight) signal propagation and some may correspond to paths that include at least one reflection. In some implementations, the localization vector may be defined using the signal eigenvectors, e.g., as

$P(d) = \sum_{\alpha=1}^{n} \lambda^{(\alpha)} \big|\hat a^\dagger(d)\cdot\hat s^{(\alpha)}\big|^2,$

where $\lambda^{(\alpha)}$ is the eigenvalue corresponding to signal eigenvector $\hat s^{(\alpha)}$.
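The following sketch (Python/NumPy, continuing the synthetic example above) illustrates the noise-subspace computation; the sub-band window length L, the distance grid, and the assumption that the number of paths n is known are illustrative choices, with the sliding sub-band averaging standing in for the smooth-MUSIC frequency-domain smoothing mentioned above:

```python
import numpy as np

C = 299_792_458.0
delta_f = 1e6  # tone spacing, consistent with the steering vector above

def music_spectrum(r, n_paths, L, d_grid):
    """Smoothed-MUSIC pseudo-spectrum from per-tone sensing values r."""
    N = len(r)
    # Frequency-domain smoothing: average outer products of length-L windows.
    windows = np.array([r[k:k + L] for k in range(N - L + 1)])
    R = np.einsum('wj,wl->jl', windows, windows.conj()) / len(windows)
    # Eigendecomposition; eigh sorts eigenvalues in ascending order, so the
    # L - n_paths smallest (noise) eigenvectors come first.
    _, eigvecs = np.linalg.eigh(R)
    G = eigvecs[:, :L - n_paths]  # noise eigenvectors g^(alpha)
    j = np.arange(L)
    A = np.exp(4j * np.pi * np.outer(j, delta_f * d_grid) / C)  # steering matrix
    # P(d) = 1 / sum_alpha |a^dagger(d) . g^(alpha)|^2
    return 1.0 / np.sum(np.abs(G.conj().T @ A) ** 2, axis=0)

# d_grid = np.linspace(0.0, 40.0, 801)
# P = music_spectrum(r, n_paths=2, L=20, d_grid=d_grid)  # peaks near 10 m, 17 m
```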
The above example of the MUSIC localization vector is intended to be illustrative. In various implementations, the localization vector P(d) may be obtained using different procedures. For example, in the GCC method, the localization vector may be defined as,

$P(d) = \Big|\sum_{j=1}^{N} r_j\, a_j^*(d)\Big|^2.$

This vector may similarly have maxima for the actual distances d=d1 . . . dn and, for equidistant tones, may be computed using inverse fast Fourier transform (IFFT) techniques. Numerous other ways of defining the localization vector P(d) are also within the scope of the present disclosure.
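Because the steering vector above depends on the tone index j only through the factor exp[4πi(j−1)Δf d/c], the GCC-style sum over tones is a discrete Fourier transform of the sensing values, and the localization vector can be evaluated on an entire distance grid with a single zero-padded FFT; a minimal sketch, with the FFT size as an assumed resolution choice:

```python
import numpy as np

C = 299_792_458.0
delta_f = 1e6  # tone spacing (example value)

def gcc_spectrum(r, n_fft=4096):
    """P(d) = |sum_j r_j a_j*(d)|^2 evaluated at d_m = m*c/(2*n_fft*delta_f)."""
    spec = np.abs(np.fft.fft(r, n_fft)) ** 2
    d_grid = np.arange(n_fft) * C / (2 * n_fft * delta_f)
    return d_grid, spec

# d_grid, P = gcc_spectrum(r)  # peaks near the actual path distances
```

Once per-event likelihood vectors are available, the motion of the object over a short window of sensing events may be approximated with a constant-velocity model,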
d(t)=d0+νt,
where d(t) is the distance, at time t, from a sensor (e.g., antenna) of the wireless device to the object, and d0 is the reference distance to the object at time t=0 (or at any other suitably chosen instant of time).
In some implementations, a likelihood vector P(d) may be transformed into a likelihood tensor P(d0, ν). The transformation P(d)→P(d0, ν) may be performed in a variety of ways; it amounts to replacing the single independent variable d in P(d) with the two independent variables d0 and ν via d→d0+νt. In some implementations, the likelihood vectors from multiple sensing events may be joined into the likelihood tensor as follows,
P(d0,ν)=P1(d0+νt1)+P2(d0+νt2).
In some implementations, different likelihood vectors may be weighted differently, e.g.,
P(d0,ν)=W1P1(d0+νt1)+W2P2(d0+νt2),
with weights W1 and W2. The weights may be used to normalize each likelihood vector P(d) (e.g., so that its total sum is one) or to emphasize particular measurements, e.g., with likelihood vectors corresponding to closer ranges given higher weights. In these formulas, the likelihood vectors are identified with subscripts 1 and 2 to reference the sensing events whose measurements contribute data to the respective likelihood vectors, even though the function used to construct each likelihood vector may be the same, e.g., P(d). The likelihood tensor P(d0, ν) is a quantity defined on the two-dimensional space of distance d0 and velocity ν. The actual values of d0 and ν for the object being tracked may be determined by an optimization procedure, e.g., by finding the maxima of P(d0, ν) or, alternatively, the minima of P−1(d0, ν). Although two likelihood vectors are combined in this example, any number of likelihood vectors may be combined in a similar fashion.
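A minimal sketch (Python/NumPy) of this joining step follows: each likelihood vector, known on a discrete distance grid, is evaluated at d0+νti by linear interpolation and accumulated into the (d0, ν) tensor. All grids, times, and weights below are assumed example choices:

```python
import numpy as np

def accumulate(tensor, P_i, t_i, d_grid, d0_grid, v_grid, weight=1.0):
    """One sensing event's update: P(d0, v) += W_i * P_i(d0 + v * t_i)."""
    d0, v = np.meshgrid(d0_grid, v_grid, indexing='ij')
    # Evaluate P_i at the trajectory distance for every (d0, v) cell by
    # linear interpolation on the distance grid of the likelihood vector.
    tensor += weight * np.interp(d0 + v * t_i, d_grid, P_i, left=0.0, right=0.0)
    return tensor

# d_grid = np.linspace(0.0, 40.0, 401)   # distances where P_i(d) is known
# d0_grid = np.linspace(0.0, 40.0, 201)  # reference-distance axis
# v_grid = np.linspace(-5.0, 5.0, 101)   # velocity axis, m/s
# T = np.zeros((d0_grid.size, v_grid.size))
# T = accumulate(T, P1, 0.0, d_grid, d0_grid, v_grid)
# T = accumulate(T, P2, 0.1, d_grid, d0_grid, v_grid)
# i0, j0 = np.unravel_index(T.argmax(), T.shape)  # most likely (d0, v) cell
```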
The above process may continue with the additional sensing data for each new sensing event, i=3, 4, . . . , being used to update the likelihood tensor using a new likelihood vector Pi(d0+νti):
P(d0,ν)→P(d0,ν)+Wi·Pi(d0+νti).
In some implementations, the number of sensing events that are counted towards the likelihood tensor may be limited to a predetermined number M of sensing events, such that after M contributions into the likelihood tensor are collected, when each additional contribution is added, the earliest contribution is subtracted from the likelihood tensor (e.g., the i=1 contribution in this example):
P(d0,ν)→P(d0,ν)+WM+1·PM+1(d0+νtM+1)−W1·P1(d0+νt1).
The number M may be selected to include multiple sensing events but still be small enough that the velocity of the object is unlikely to change substantially over the duration of the last M sensing events. For example, if one sensing event occurs every 0.1 sec, the maximum number of events included in the likelihood tensor may be M=10. As a result, the optimization of the likelihood tensor provides an accurate estimate of the average velocity (and reference distance) of the object over a sliding window of the last 1 sec, which under many practical conditions may be short enough for the constant-velocity model to be sufficiently accurate.
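One possible sketch of this sliding window keeps the last M weighted per-event contributions so that the oldest can be subtracted when a new one arrives; M and the surrounding structure are illustrative assumptions:

```python
from collections import deque

M = 10            # window length, e.g., ~1 sec of events at 10 events/sec
window = deque()  # stored contributions W_i * P_i(d0 + v * t_i), one per event
T = None          # running likelihood tensor P(d0, v)

def update(contribution):
    """Add the newest event's tensor contribution; evict the oldest beyond M."""
    global T
    T = contribution.copy() if T is None else T + contribution
    window.append(contribution)
    if len(window) > M:
        T = T - window.popleft()
    return T
```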
In the implementation described above, the likelihood tensor for a combination of events is the sum of the likelihood vectors for each event, P=P1+P2+ . . . . In some implementations, the likelihood tensor for a combination of events may instead be the harmonic mean of the likelihood vectors computed for each event, P−1=P1−1+P2−1+ . . . , or any other suitable measure. In some implementations, e.g., where different sensing events have unequal numbers of sub-events, the likelihood vectors computed for individual events may be weighted with suitably chosen weights, e.g., weights proportional to the number of sub-events in each sensing event, weights that are empirically determined, and so on.
In the above examples, the location-velocity determination and tracking is illustrated using the two-dimensional (2D) likelihood tensor P(d0, ν) in location-velocity space. Similar techniques may be used for trajectory determination and tracking in higher dimensions, where multiple coordinates of the object and multiple components of the velocity are determined. For example, in the more general case, the trajectory may be determined using a vector model, $\vec r = \vec r_0 + \vec\nu t$, with a vector reference location $\vec r_0 = (x_0, y_0, \dots)$ and a vector velocity $\vec\nu = (\nu_x, \nu_y, \dots)$, where x, y . . . are any suitable coordinates, including Cartesian coordinates, polar coordinates, elliptical coordinates, spherical coordinates, cylindrical coordinates, and so on. A higher-dimensional (HD) likelihood tensor may be a tensor in the 2m-dimensional space of m (e.g., m=2 or m=3) coordinates x, y . . . and m velocity components νx, νy . . . . More specifically, the likelihood tensor may be

$P(\vec r_0, \vec\nu) = \sum_i W_i\, P_i(|\vec r_0 + \vec\nu t_i|),$

where, in Cartesian coordinates, the distance at time ti is

$|\vec r_0 + \vec\nu t_i| = \sqrt{(x_0+\nu_x t_i)^2 + (y_0+\nu_y t_i)^2}.$
Correspondingly, the HD likelihood tensor $P(\vec r_0, \vec\nu)$ is a tensor in the four-dimensional space, P(x0, y0; νx, νy), whose extrema determine the locations of the objects or of the images of the objects created by reflections from various bodies in the environment. In the instances of reflections, the apparent locations of the objects, $\vec r_0 + \vec\nu t_i$, may be behind the reflecting bodies. If polar coordinates are used, the elements of the steering matrix may be approximated with

$|\vec r_0 + \vec\nu t_i| = d_0 - \nu t_i \cos(\theta - \phi_0),$

where d0 and ϕ0 are the reference distance and azimuthal angle, and ν and θ are the absolute value and the direction of the velocity, respectively. Correspondingly, the HD likelihood tensor is a tensor in the four-dimensional space of polar coordinates, P(d0, ϕ0; ν, θ); its extrema determine the locations of the objects (or images) in substantially the same way as described above.
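The 2D accumulation sketched earlier extends directly to the HD case; in the Cartesian variant below (a sketch with assumed coarse grids), memory and compute scale as the product of all four grid sizes, which motivates the interpolation techniques described next:

```python
import numpy as np

def accumulate_hd(tensor, P_i, t_i, d_grid, x0, y0, vx, vy, weight=1.0):
    """P(x0, y0; vx, vy) += W_i * P_i(|r0 + v * t_i|) on a 4-D grid."""
    X0, Y0, VX, VY = np.meshgrid(x0, y0, vx, vy, indexing='ij')
    dist = np.hypot(X0 + VX * t_i, Y0 + VY * t_i)  # |r0 + v*t_i|, device at origin
    tensor += weight * np.interp(dist, d_grid, P_i, left=0.0, right=0.0)
    return tensor

# x0 = y0 = np.linspace(-20.0, 20.0, 81)  # coarse coordinate grids (example)
# vx = vy = np.linspace(-5.0, 5.0, 21)    # coarse velocity grids (example)
# T = np.zeros((x0.size, y0.size, vx.size, vy.size))
```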
The transformations described above, e.g., from the individual likelihood vectors, which characterize individual sensing events, to 2D likelihood tensors, P(d)→P(d0, ν), or from likelihood vectors to HD likelihood tensors, $P(d) \to P(\vec r_0, \vec\nu)$, may be performed using various interpolation techniques to reduce computational costs and memory use. For example, a 2D likelihood tensor P(d0, ν) may be defined on a discrete set of distances {d0} and a discrete set of velocities {ν}. The corresponding set of values {P(d0, ν)} may include a large number of likelihood tensor elements for the various possible combinations of distances and velocities. To optimize computations, elements of the likelihood vectors and tensors may be computed for some values of distances and velocities, e.g., sparsely distributed values, while the remaining elements may be obtained by interpolation, e.g., using linear interpolation, polynomial splines, or any other suitable methods.
In one specific non-limiting example, after a set of new sensing values for sensing time ti is detected, location-velocity estimator 110 may compute a new likelihood vector Pi(d) (or Pi−1(d)) for a discrete set of values ds=c+sΔd, with starting value c, step Δd, and s=0, 1, 2 . . . . Subsequently, the computed likelihood vectors Pi(ds) may be used to populate a denser array of values of the likelihood tensor Pi(d0, ν=0), e.g., the bottom (ν=0) row of cells of the likelihood tensor grid.
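A sketch of this sparse-then-interpolate expansion follows (Python/NumPy); the start value, step, and densification factor are assumed example choices, and linear interpolation is shown, with polynomial splines being a possible substitute:

```python
import numpy as np

# Sparse evaluation: P_i is computed only at d_s = c + s * delta_d, s = 0..S-1
# (the starting value is named d_start here to avoid clashing with the speed
# of light denoted c elsewhere in this disclosure).
d_start, delta_d, S = 0.0, 0.5, 81
d_sparse = d_start + delta_d * np.arange(S)
# P_sparse = music_spectrum(r, n_paths=2, L=20, d_grid=d_sparse)  # costly step

# Dense expansion by linear interpolation onto a 4x finer grid.
d_dense = np.linspace(d_sparse[0], d_sparse[-1], 4 * (S - 1) + 1)
# P_dense = np.interp(d_dense, d_sparse, P_sparse)
```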
In some implementations, interpolation techniques may be used to reduce the amount of memory required to store the 2D likelihood tensor Pi(d0, ν) and/or the HD likelihood tensor $P(\vec r_0, \vec\nu)$. For example, only the values Pi(ds) may be stored in memory, whereas the values of Pi(d0, ν) and/or $P_i(\vec r_0, \vec\nu)$ may be computed on the fly when the corresponding values are being updated or used for trajectory estimation.
The wireless device may optionally (as depicted with the corresponding dashed boxes) construct, at block 621, the HD likelihood tensor $P(\vec r_0, \vec\nu)$. In some implementations, the HD likelihood tensor may have four dimensions, if the trajectory is two-dimensional (a planar motion), or six dimensions, if the trajectory is three-dimensional. The 2D likelihood tensor P(d0, ν) or the HD likelihood tensor $P(\vec r_0, \vec\nu)$ may be used to estimate the trajectory of the object (block 650) by finding one or more extrema of the respective likelihood tensor.
In some implementations, additional filtering may be performed to increase the accuracy of trajectory determination and tracking, as enabled by location-velocity (d-ν) filters 615 and 631. Filtering may involve combining measurement (sensing) data, e.g., newly determined likelihood vectors and/or tensors, with any suitable model predictions of the dynamics of these likelihood vectors and/or tensors. Filtering may further use estimated statistics of the likelihood vectors and/or tensors. Filtering may include estimating the probable actual values of the likelihood tensors by selecting a suitable (e.g., empirically chosen) set of weights that determine the relative importance placed on model predictions versus measurement data. Filtering may include applying a 2D location-velocity filter 615 to the 2D likelihood tensor P(d0, ν) and/or applying an HD location-velocity filter 631 to the HD likelihood tensor $P(\vec r_0, \vec\nu)$, e.g., in the instances where the HD likelihood tensor is being used. In some embodiments, more than one filter may be used (e.g., both the 2D location-velocity filter 615 and the HD location-velocity filter 631). Operations of the filters are described in more detail below.
Numerous variants of operations 600 may be implemented. In some implementations, the HD likelihood tensor may be generated, at block 621, directly from the likelihood vectors (obtained at block 603), without the intermediate operation of generating, at block 611, the 2D likelihood tensor (and without filter 615, which is described in more detail below), as depicted with the dashed arrow.
Each wireless device may then construct a respective HD likelihood tensor. More specifically, wireless device WA may construct, at block 621, HD likelihood tensor $P_A(\vec r_{0A}, \vec\nu_A)$ and wireless device WB may construct, at block 622, HD likelihood tensor $P_B(\vec r_{0B}, \vec\nu_B)$. In some implementations, the HD likelihood tensors may have four dimensions, if the trajectory is two-dimensional (if only planar motion is being tracked), or six dimensions, if the trajectory is three-dimensional. The separate HD likelihood tensors may subsequently be joined into a combined (C) tensor $P_C(\vec r_0, \vec\nu)$, using any suitable coordinate transformations (block 640) to transform the coordinates and velocity from the reference frame of wireless device WA to some common reference frame, e.g., dA0, ϕA; νA, θA → x0, y0; νx, νy, and to similarly transform the coordinates and velocity from the reference frame of wireless device WB to the same common reference frame, dB0, ϕB; νB, θB → x0, y0; νx, νy. Although in this example the common reference frame uses Cartesian coordinates, any other system of coordinates may alternatively be used, including a reference frame of one of the devices, e.g., of wireless device WA. In such implementations, only the transformation from the reference frame of wireless device WB may have to be performed for tensor $P_B(\vec r_{0B}, \vec\nu_B)$, e.g., using a relative shift $\vec r_{AB}$ between the two reference frames, $P_B(\vec r_{0B}, \vec\nu_B) \to P_B(\vec r_{0B}+\vec r_{AB}, \vec\nu_B)$. After the higher-dimensional tensors are expressed in the combined reference frame, both tensors may be used to obtain a combined tensor, e.g., $P_C = P_A + P_B$, and the combined likelihood tensor $P_C$ may be used to estimate the trajectory of the object (block 650), e.g., by finding one or more extrema of the combined likelihood tensor.
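One way to sketch the combination at block 640 (under an assumed simplification): if each device accumulates its HD tensor directly on a shared Cartesian grid, using its own known position when converting candidate trajectories into distances, the coordinate transformation becomes implicit and the combined tensor reduces to the element-wise sum PC = PA + PB. Device positions and grids below are example assumptions:

```python
import numpy as np

def device_tensor(P_list, t_list, d_grid, grid, device_xy):
    """Accumulate one device's HD likelihood tensor on a common grid."""
    X0, Y0, VX, VY = grid
    T = np.zeros(X0.shape)
    for P_i, t_i in zip(P_list, t_list):
        # Distance from *this* device to the candidate trajectory at time t_i.
        dist = np.hypot(X0 + VX * t_i - device_xy[0],
                        Y0 + VY * t_i - device_xy[1])
        T += np.interp(dist, d_grid, P_i, left=0.0, right=0.0)
    return T

# grid = np.meshgrid(x0, y0, vx, vy, indexing='ij')
# P_C = (device_tensor(P_A_list, t_A, d_grid, grid, (0.0, 0.0)) +
#        device_tensor(P_B_list, t_B, d_grid, grid, (5.0, 0.0)))  # P_C = P_A + P_B
```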
In some implementations, additional filtering may be performed to increase the accuracy of trajectory determination and tracking, as indicated by the various location-velocity (d-ν) filters 615-616, 631-632, and 660. Filtering may involve combining measurement (sensing) data, e.g., newly determined likelihood vectors and/or tensors, with any suitable model predictions of the dynamics of these likelihood vectors and/or tensors. Filtering may further use estimated statistics of the likelihood vectors and/or tensors. Filtering may involve estimating the probable actual values of the likelihood tensors by selecting a suitable (e.g., empirically chosen) set of weights that sets the relative importance of model predictions and measurement data. Filtering may include applying 2D location-velocity filters 615 and 616 to the 2D likelihood tensors, applying HD location-velocity filters 631 and 632 to the HD likelihood tensors, applying a combined location-velocity (d-ν) filter 660 to the combined likelihood tensor, or applying any combination thereof. In some embodiments, only one (or one pair) of the location-velocity filters may be used, e.g., the combined location-velocity filter 660 (or the pair of HD location-velocity filters 631 and 632). In some embodiments, more than one location-velocity filter may be used.
Location-velocity filters may be or include Kalman filters, finite impulse response (FIR) filters, infinite impulse response (IIR) filters, particle filters, or any combination thereof. In one illustrative example, a Kalman filter may be applied to the elements of the 2D and/or HD likelihood tensors, i.e., to each cell of the location-velocity space. In some implementations, each element (cell) of the location-velocity space may be filtered independently of the other elements. More specifically, the state of a particular tensor element (cell) at time ti in the constant-rate-of-change model may be represented as a vector $\hat x(t_i)$, e.g., $\hat x^T(t_i) = (P(t_i), \dot P(t_i))$, where $P(t_i)$ is the current value of the cell and $\dot P(t_i)$ is its rate of change. The evolution of the state from a previous instance of time, $\hat x(t_{i-1})$, may be predicted from the difference equation,
$\hat x(t_i) = \hat F\,\hat x(t_{i-1}),$

where $\hat F$ is a state-transition matrix, e.g., for the constant-rate-of-change model,

$\hat F = \begin{pmatrix} 1 & t_i - t_{i-1} \\ 0 & 1 \end{pmatrix}.$
Measured (sensed) values may be described by a vector $\hat z^T(t_i) = (P_m(t_i), 0)$, where $P_m(t_i)$ is the measured value of the cell. The measurement vector $\hat z(t_i)$ is determined by the state vector,

$\hat z(t_i) = \hat H\,\hat x(t_i) + \hat n,$

in terms of a measurement matrix $\hat H$ (which may be taken as a unit matrix), up to a random measurement noise $\hat n$.
In some implementations, tracking an element of the tensor may include predicting the state vector $\hat x(t_i)$ from the estimate at the previous time, $\hat x(t_{i-1})$; obtaining actual measurement data $\hat z(t_i)$; identifying the difference from the expected measurement data, $\hat z(t_i) - \hat H\hat x(t_i)$, based on the estimated state; and improving (as indicated by the primed values) the estimate of the state vector using the Kalman gain $\hat K$,

$\hat x'(t_i) = \hat x(t_i) + \hat K(t_i)\,[\hat z(t_i) - \hat H\hat x(t_i)],$

i.e., by a correction, weighted by the Kalman gain, that is equal to the difference between the actual measurement and the predicted value. The Kalman gain is selected to minimize the error (mean square deviation) between the predicted value and the actual value of the state vector, which may be achieved by choosing

$\hat K(t_i) = \hat S(t_i)\hat H^T \left(\hat H \hat S(t_i)\hat H^T + \hat R\right)^{-1},$
where $\hat S(t_i)$ is the error covariance matrix and $\hat R$ is a measurement noise covariance matrix. In addition to estimating the state vector $\hat x(t_i)$, the Kalman filtering may also include predicting the estimate of the error covariance matrix,

$\hat S(t_i) = \hat F\,\hat S(t_{i-1})\,\hat F^T.$
The predicted estimate of the error covariance matrix is further improved by the Kalman gain,
$\hat S'(t_i) = (1 - \hat K(t_i)\hat H)\,\hat S(t_i).$
At each time increment, when new measurement data becomes available, Kalman filtering (e.g., in the combined d-ν filter 660) may include generating a new expected state vector $\hat x(t_i)$ based on the previous estimate $\hat x(t_{i-1})$, obtaining a measurement vector $\hat z(t_i)$, obtaining an improved estimate of the state vector $\hat x'(t_i)$ using the Kalman gain matrix $\hat K(t_i)$, retrieving the error covariance matrix $\hat S(t_i)$, obtaining an improved covariance matrix $\hat S'(t_i)$, and generating the new covariance matrix for the next iteration, $t_{i+1}$, based on the improved covariance matrix and using the state-transition matrix. The described procedure may be continued for as long as the trajectory of the object is being tracked (e.g., until the object leaves the environment). The filtered values of the combined HD likelihood tensor may be used to obtain, at block 670, the final estimate of the trajectory, $\vec r(t) = \vec r_0 + \vec\nu t$.
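A per-cell sketch of this iteration follows (Python/NumPy). The time step, the noise covariances, and the initial state are assumed example values; since the second component of the measurement vector (the rate) is not actually measured, it is assigned a large measurement variance here, which is a design assumption rather than a prescription of this disclosure:

```python
import numpy as np

dt = 0.1                               # time between sensing events (example)
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition for (P, dP/dt)
H = np.eye(2)                          # measurement matrix (unit matrix)
R = np.diag([0.05, 1e3])               # measurement noise; large variance on
                                       # the unmeasured rate component
x = np.zeros(2)                        # state estimate for one tensor cell
S = np.eye(2)                          # error covariance estimate

def kalman_step(x, S, z):
    """One predict/correct cycle for a single likelihood-tensor cell."""
    x, S = F @ x, F @ S @ F.T                     # predict state and covariance
    K = S @ H.T @ np.linalg.inv(H @ S @ H.T + R)  # Kalman gain
    x = x + K @ (z - H @ x)                       # correct with the innovation
    S = (np.eye(2) - K @ H) @ S                   # improved covariance
    return x, S

# for P_measured in cell_measurements:
#     x, S = kalman_step(x, S, np.array([P_measured, 0.0]))
```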
The above illustration of a Kalman filter is intended to be illustrative only; numerous other filtering algorithms and techniques may be used instead. For example, in some implementations the state vector may include and track not only the cell's value and its rate of change but also its second (third, etc.) time derivative, e.g., $\hat x^T = (P, \dot P, \ddot P)$.
In some implementations, collection of sensing data (blocks 601, 602) as well as processing corresponding to blocks 603-631 and 604-632 may be performed on the corresponding wireless device before the obtained likelihood tensors are joined on a single device, which may be wireless device WA, wireless device WB, or any additional device. In some implementations, collection of sensing data (blocks 601, 602) may be performed on the corresponding wireless device (e.g., WA and/or WB) while the processing corresponding to blocks 603-631 and 604-632 (or any part of it) may be performed on a single device, which may be wireless device WA, wireless device WB, or any additional device.
Wireless device 704 may use one or more antennas 706 to receive and transmit radio waves. A signal received by antenna(s) 706 may be processed by radio 710, which may include filters (e.g., band-pass filters), low-noise radio-frequency amplifiers, down-conversion mixer(s), intermediate-frequency amplifiers, analog-to-digital converters, inverse Fourier transform modules, deparsing modules, interleavers, error correction modules, scramblers, and other (analog and/or digital) circuitry that may be used to process modulated signals received by antenna(s) 706. Radio 710 may further include a tone (frequency) generator to generate radio signals at selected tones. Radio 710 may also include antenna control circuits to control access to the one or more antennas 706, including switching between antennas. Radio 710 may additionally include radio control circuits, such as phase measurement circuits and a tone selector circuit. The phase measurement circuits can perform phase measurements on received signals, e.g., IQ decomposition, which may include measuring a phase difference between the received signal and a local oscillator signal. The tone selector circuit can select tones for transmission.
Radio 710 may provide the received (and digitized) signals to PHY 720. PHY 720 may support one or more operation modes, e.g., BLE operation modes. Although one PHY 720 is shown, any suitable number of PHY layers (supporting a respective number of operation modes) may be present. PHY 720 may convert the digitized signals received from radio 710 into frames that can be fed into Link Layer 730. Link Layer 730 may have a number of states, such as advertising, scanning, initiating, connection, and standby. Link Layer 730 may transform frames into data packets. During transmission, data processing may occur in the opposite direction, with Link Layer 730 transforming data packets into frames that are then transformed by PHY 720 into digital signals provided to radio 710. Radio 710 may convert the digital signals into radio signals and transmit the radio signals using antenna(s) 706. In some implementations, radio 710, PHY 720, and Link Layer 730 may be implemented as parts of a single integrated circuit.
Wireless device 704 may include a protocol stack 740. The protocol stack 740 may include a number of protocols, e.g., the Logical Link Control Adaptation Protocol (L2CAP), which may perform segmentation and reassembly of data packets that are generated by one or more applications 703 operating on host device 702. Specifically, L2CAP may segment data packets of arbitrary size, as output by the application(s) 703, into packets of the size and format that can be processed by Link Layer 730. L2CAP may also perform error detection operations. The protocol stack 740 may also include a generic access profile (GAP) and a generic attribute profile (GATT). GAP may specify how wireless device 704 advertises itself on the wireless network, discovers other network devices, and establishes wireless links with the discovered devices. GATT may specify how a data exchange between communicating wireless devices is to occur once the connection between the two devices is established. The protocol stack 740 may further include a security manager (SM) that controls how pairing, signing, and encryption of data is performed. GATT may use the attribute protocol (ATT), which specifies how units of data are transferred between devices. Wireless device 704 may also include other components not explicitly shown.
Wireless device 704 may have a controller 750, which may include one or more processors 752, such as central processing units (CPUs), finite state machines (FSMs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or the like. Processor(s) 752 may also include custom logic and/or programmable logic, or any combination thereof. In some embodiments, controller 750 may be a single processing device that supports processes associated with data transmission and reception as well as distance (and/or angle) estimation computations. In some implementations, wireless device 704 may have a dedicated processor for distance (and/or angle) estimation computations that is separate from a processor that executes other operations on wireless device 704 (e.g., processes associated with data transmission and reception).
Wireless device 704 may also include a power management unit (PMU) 770, which manages clock/reset and power resources. Wireless device 704 may further include an input/output (I/O) controller 780 to enable communications with other external devices (including non-network devices) and structures. In some implementations, I/O controller 780 may enable a general purpose I/O (GPIO) interface, a USB interface, a serial digital interface (SDI), a PCM digital audio module, a universal asynchronous receiver transmitter (UART), I2C, I2S, or any other I/O components.
Controller 750 may include a memory 760, which may be (or include) non-volatile memory, e.g., read-only memory (ROM), and/or volatile memory, e.g., random-access memory (RAM). Memory 760 may store codes and supporting data for an object localization engine 762, a tensor-based tracking engine 764, a tone selection engine 766, and other suitable engines. In some implementations, any one or more of these engines may be located on host device 702, as indicated with the respective dashed boxes.
Application 703 may use information about various objects located in the environment of host device 702/wireless device 704 (which may, in some implementations, be mounted on a single platform or in proximity to each other). Such information may include distances to the objects, directions to the objects, orientations of the objects relative to host device 702/wireless device 704, or any other spatial characteristics data. The data may be provided by the object localization engine 762, which receives and processes the locations and velocities of various objects in the environment, as may be determined by the tensor-based tracking engine 764, e.g., as described above.
Selected tones may be provided to protocol stack 740 (and link layer 730 and PHY 720), which may cause radio 710 to generate signals at the selected tones and transmit the generated signals to the outside environment. Radio 710 may then receive the reflected (returned) signals from various objects (other wireless devices) of the environment and determine the phase shifts experienced by the reflected signals, e.g., by comparing the phase information carried by the reflected signals with the phase information of local oscillator copies of the transmitted signals. Radio 710 may further determine the amplitudes of the reflected signals. The amplitude and phase information may be provided to the tensor-based tracking engine 764 (e.g., in the form of sensing values), which computes the covariance matrix. The tensor-based tracking engine 764 may include the location-velocity estimator 110 described above.
At block 820, the processing device may obtain a plurality of likelihood vectors (LVs), e.g., Pi(d). Each of the plurality of LVs may be obtained using a respective set of the plurality of sets of sensing values. For example, a first (second, third, etc.) LV may be generated using sensing values obtained as part of a first (second, third, etc.) sensing event that takes place at time t1 (t2, t3, etc.). Each of the plurality of LVs may characterize a likelihood that the object is located at one of a set of distances (e.g., discrete distances) from the wireless device, e.g., with Pi(d1)>Pi(d2) indicating that at the time of the sensing event ti, the object is more likely located at distance d1 than at distance d2.
In some implementations, each of the plurality of LVs may be obtained using a steering vector having a plurality of elements aj(dk). Each of the plurality of elements may characterize propagation of a respective radio signal (e.g., signal of frequency fj) of the one or more radio signals over a path between the wireless device and the object (e.g., a path of length 2dk). In implementations that deploy MUSIC algorithms, each of the plurality of LVs may be obtained using one or more noise eigenvectors or signal eigenvectors of a covariance matrix of a respective set of the plurality of sets of sensing values.
In some implementations, obtaining each of the plurality of LVs may include interpolation. More specifically, elements of any given LV, e.g., Pi(d) for sensing event ti, may be computed based on measured sensing values for a limited number of distances d, followed by interpolation to additional distances. For example, a given LV may have 100 elements, of which 20 elements (e.g., corresponding to 20 different distances d) may be computed based on the measured sensing values while the remaining 80 elements may be interpolated using the 20 elements.
At block 830, method 800 may continue with the processing device generating, using two or more LVs of the plurality of LVs, a likelihood tensor (LT), e.g., P(d0, ν), $P(\vec r_0, \vec\nu)$, and the like. The LT may be defined on a location-velocity space (e.g., d-ν space or $\vec r_0$-$\vec\nu$ space) that includes one or more distance dimensions and one or more velocity dimensions. The one or more distance dimensions of the location-velocity space may include one (e.g., d0) or two (e.g., any suitable components of $\vec r_0$) spatial coordinates of the object at a reference time (e.g., some time taken as t=0). The distance dimension(s) and the velocity dimension(s) (e.g., ν or any suitable components of $\vec\nu$) may be defined in relation to any coordinates, such as Cartesian coordinates (e.g., x0, y0; νx, νy), polar coordinates (e.g., r0, ϕ0; ν, θ), elliptical coordinates, spherical coordinates, cylindrical coordinates, and the like. The LT may characterize a likelihood that the object is moving along one of a set of trajectories identified by a point in the location-velocity space.
In some implementations, the LT is generated using one or more LVs obtained based on sensing values measured by a different wireless device, e.g., as described above.
At block 840, method 800 may include determining, using the LT, an estimated trajectory of the object. For example, determining the estimated trajectory of the object may include identifying one or more extrema of the LT, and may further include selecting an extremum of the LT that is associated with the shortest distance to the object (e.g., the line-of-sight path, which is unobstructed by other objects). The estimated trajectory of the object may include a velocity of the object ν (or multiple components of the vector velocity νx, νy, νz) and a distance to the object d0 (or a reference vector location with coordinates x0, y0, z0) at a reference time (e.g., zero or any other reference time t0). In some implementations, the estimated trajectory of the object may be provided to an application being executed on a host device that is in communication with the wireless device that estimates the trajectory of the object.
It should be understood that the above description is intended to be illustrative, and not restrictive. Many other implementation examples will be apparent to those of skill in the art upon reading and understanding the above description. Although the present disclosure describes specific examples, it will be recognized that the systems and methods of the present disclosure are not limited to the examples described herein, but may be practiced with modifications within the scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense. The scope of the present disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The implementations of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element. “Memory” includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, “memory” includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices, and any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
Reference throughout this specification to “one implementation” or “an implementation” means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation of the disclosure. Thus, the appearances of the phrases “in one implementation” or “in an implementation” in various places throughout this specification are not necessarily all referring to the same implementation. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations.
In the foregoing specification, a detailed description has been given with reference to specific exemplary implementations. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of “implementation,” “example,” and/or other exemplary language does not necessarily refer to the same implementation or the same example, but may refer to different and distinct implementations, as well as potentially the same implementation.
The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an implementation” or “one implementation” throughout is not intended to mean the same implementation unless described as such. Also, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.