Global navigation satellite system (GNSS) navigation may be aided by or replaced with signals of opportunity (SOPs). Using SOPs with or instead of GNSS navigation requires determining the observability and estimability of the SOP landscape for different numbers of receivers, different numbers of SOPs, and various a priori knowledge scenarios. The SOPs may provide receiver localization and timing. Cellular SOPs may be used to leverage a large number of base transceiver stations in environments where GNSS signals are typically challenged.
Terrestrial SOPs are abundant and are available at varying geometric configurations, and may be used to improve GNSS navigation. The vehicle 110 receives GNSS signals from the first GNSS satellite 120 and calculates a first range 125, where the first range provides an estimated radius for a first range arc 125. Similarly, vehicle 110 calculates a second range arc 135 based on the second GNSS satellite and calculates a third range arc 145 based on the SOP transceiver 140. The SOP transceiver 140 may be used to improve navigation reliability whenever GNSS signals become inaccessible or unreliable.
The SOP transceiver 140 may include a cellular long term evolution (LTE) tower. LTE has become a prominent standard for fourth generation (4G) communication systems. LTE provides multiple-input multiple-output (MIMO) capabilities, which allows higher data rates to be achieved compared to previous generations of wireless standards. The high bandwidths and data rates employed in LTE systems have made LTE signals attractive for navigation. In LTE Release 9, a broadcast positioning reference signal (PRS) was introduced to enable network-based positioning capabilities within the LTE protocol. However, PRS-based positioning suffers from a number of drawbacks: (1) the user's privacy is compromised since the user's location is revealed to the network, (2) localization services are limited to paying subscribers of a particular cellular provider, (3) ambient LTE signals transmitted by other cellular providers are not exploited, and (4) additional bandwidth is required to accommodate the PRS, which has caused the majority of cellular providers to choose not to transmit the PRS in favor of dedicating more bandwidth to traffic channels. To address these drawbacks, user equipment (UE)-based positioning approaches may use the cell-specific reference signal (CRS).
To use signals provided by an LTE tower, vehicle 110 may include a receiver capable of extracting navigation observables from LTE signals. In particular, vehicle 110 may include an LTE-compatible software-defined radio (SDR). There are several challenges associated with navigating with such proposed SDRs, which rely on tracking the primary synchronization signal (PSS) transmitted by the LTE base station, or eNodeB. The first challenge results from the near-far effect created by the strongest PSS, which makes it difficult or impossible for the receiver to individually track the remaining ambient PSSs. Though the SDR could track only the strongest PSSs (up to three), this would give rise to a second challenge: the number of intra-frequency eNodeBs that the receiver can simultaneously use for positioning is limited. Alternatively, other cell-specific signals can be tracked, in which case the receiver must obtain high-level information about the surrounding eNodeBs, such as their cell ID, signal bandwidth, and the number of transmit antennas. A third challenge is that LTE information is assumed to be known a priori; however, in practice, the vehicle 110 SDR must be able to obtain this information in unknown environments.
The systems and methods described herein address the challenges described above. In particular, these systems and methods provide for a navigation cellular LTE SDR architecture with low-level signal models for optimal extraction of relevant navigation and timing information from received signals. In an embodiment, when vehicle 110 enters an unknown LTE environment, the first step it performs to establish communication with the network is synchronizing with at least one surrounding eNodeB (e.g., SOP transceiver 140). This is achieved by acquiring the primary synchronization signal (PSS) and the secondary synchronization signal (SSS) transmitted by the eNodeB. Steps taken by the vehicle 110 to acquire these LTE signals are described below with respect to
The subcarriers in an LTE frame are typically separated by Δf=15 kHz, and the total number of subcarriers, Nc, is set by the operator. This implies that different LTE systems may have different bandwidths, which are summarized in Table I. Unused subcarriers create a guard band between LTE bands.
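For illustration only, the following non-limiting sketch computes the occupied and guard bandwidths implied by the subcarrier count and the 15 kHz spacing. The channel-bandwidth-to-subcarrier pairs are assumptions drawn from typical LTE deployments, not values taken from Table I.

```python
# Minimal sketch: occupied LTE bandwidth from the number of used subcarriers.
# The (channel bandwidth -> Nc) pairs below are assumed typical values;
# Table I in the text is the authoritative source.
SUBCARRIER_SPACING_HZ = 15e3  # delta-f = 15 kHz

USED_SUBCARRIERS = {          # channel bandwidth (MHz) -> Nc (used subcarriers)
    1.4: 72, 3: 180, 5: 300, 10: 600, 15: 900, 20: 1200,
}

for bw_mhz, n_c in USED_SUBCARRIERS.items():
    occupied_hz = n_c * SUBCARRIER_SPACING_HZ
    guard_hz = bw_mhz * 1e6 - occupied_hz   # unused subcarriers form the guard band
    print(f"{bw_mhz:>4} MHz channel: Nc = {n_c:4d}, "
          f"occupied {occupied_hz / 1e6:.2f} MHz, guard {guard_hz / 1e6:.2f} MHz")
```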
To transmit data, the data symbols are first mapped onto the frame resource elements (REs). An inverse fast Fourier transform (IFFT) is applied to each LTE symbol across all the subcarriers. The resulting signal is serialized and appended with a cyclic prefix (CP) before transmission over the wireless channel. This process may be referred to herein as orthogonal frequency division multiplexing (OFDM). The LTE receiver then reverses these steps in order to reconstruct the LTE frame. However, the LTE receiver must first determine the frame timing, which is achieved by acquiring the PSS and SSS, as discussed below.
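The OFDM transmit steps just described may be illustrated with the following non-limiting sketch: map data symbols onto subcarriers, apply an IFFT, serialize, and prepend a cyclic prefix. The FFT size and CP length are assumed example values, and the subcarrier mapping is a simplification rather than the exact LTE resource-grid mapping.

```python
import numpy as np

def ofdm_modulate(data_symbols, n_fft=2048, n_cp=144):
    """Minimal OFDM modulation sketch: map symbols onto subcarriers, take an
    IFFT, and prepend a cyclic prefix.  n_fft and n_cp are assumed example
    values; the mapping (centre subcarriers, DC left empty, even symbol count)
    is simplified and is not the exact LTE resource-grid mapping."""
    data_symbols = np.asarray(data_symbols)
    n_used = len(data_symbols)
    half = n_used // 2
    grid = np.zeros(n_fft, dtype=complex)
    grid[1:half + 1] = data_symbols[half:]   # positive-frequency subcarriers
    grid[-half:] = data_symbols[:half]       # negative-frequency subcarriers
    time_symbol = np.fft.ifft(grid) * np.sqrt(n_fft)           # serialize via IFFT
    return np.concatenate([time_symbol[-n_cp:], time_symbol])  # add cyclic prefix

# Example: one OFDM symbol carrying 600 random QPSK symbols (a 10 MHz-like grid).
qpsk = (np.random.choice([-1, 1], 600) + 1j * np.random.choice([-1, 1], 600)) / np.sqrt(2)
tx_symbol = ofdm_modulate(qpsk)
```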
As shown in
where (·)* is the complex conjugate operator, NCP is the set of CP indices, and Ts is the sampling interval.
A downlink physical channel corresponds to a set of REs carrying high-level system information or communication data. There are typically seven physical channels in the LTE downlink. The master information block (MIB) is transmitted on the physical broadcast channel (PBCH), and the system information block (SIB) is transmitted on the physical downlink shared channel (PDSCH). The physical control format indicator channel (PCFICH) and the physical downlink control channel (PDCCH) are also decoded to extract the SIB, as explained below.
As shown in
Because there is no unique bandwidth for LTE systems, it is important for the UE to determine the bandwidth of the system it is trying to connect to. Furthermore, to provide increased transmission rates, LTE systems employ a MIMO structure in which the number of transmit antennas can be 1, 2, or 4. To decode the LTE signal correctly, the UE must be aware of the actual transmission bandwidth. Information about the number of transmit antennas and the actual transmission bandwidth is provided to the UE in the MIB, which is transmitted in the second slot of the first subframe. The MIB is mapped to the first 6 RBs around the carrier frequency and the first four symbols of the slot. The MIB symbols are not transmitted on the subcarriers reserved for the reference signals, as discussed below.
The control format indicator (CFI) is transmitted on the PCFICH. It indicates the number of OFDM symbols dedicated to the downlink control channel, and can take the values 1, 2, or 3. In order to decode the CFI, the UE first locates the 16 PCFICH REs and demodulates them by reversing the steps in
Knowing the CFI, the UE can identify the REs associated with the PDCCH and demodulate them, resulting in a block of bits corresponding to the downlink control information (DCI) message. The packing of these bits can take one of several formats, and the format is not communicated to the UE. A blind search over the different formats is performed by the UE to unpack these bits. The “candidate” formats reside either in the common search space or in the UE-specific search space; here, there are two candidate formats, both located in the common search space. A cyclic redundancy check (CRC) is used to identify the correct format, as sketched below.
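The blind-search idea may be illustrated with the following non-limiting sketch: each candidate DCI payload length is tried, and the candidate whose appended CRC checks out is accepted. The CRC polynomial, the candidate lengths, and the helper names are illustrative assumptions, not the exact parameters of the LTE PDCCH procedure.

```python
# Illustrative sketch of a blind DCI format search: try each candidate format
# (payload length) and accept the one whose appended CRC matches.  The CRC
# polynomial and candidate lengths here are assumptions for illustration only.
def crc16(bits, poly=0x1021, init=0x0000):
    """Bitwise CRC-16 over a list of 0/1 bits, returned as a 16-bit list."""
    reg = init
    for b in bits:
        reg ^= (b & 1) << 15
        reg = ((reg << 1) ^ poly) & 0xFFFF if reg & 0x8000 else (reg << 1) & 0xFFFF
    return [(reg >> (15 - i)) & 1 for i in range(16)]

def blind_dci_search(decoded_bits, candidate_payload_lengths=(27, 31)):
    """Return (payload, length) for the first candidate whose CRC checks, else None."""
    for n in candidate_payload_lengths:
        if len(decoded_bits) < n + 16:
            continue
        payload, rx_crc = decoded_bits[:n], decoded_bits[n:n + 16]
        if crc16(payload) == rx_crc:
            return payload, n
    return None
```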
The DCI is next parsed to give the configuration of the corresponding PDSCH REs carrying the SIB, which are then demodulated. Next, the received bits on the downlink shared channel (DL-SCH) are decoded, resulting in the SIB bits. Subsequently, these bits are decoded using an ASN.1 decoder, which extracts the system information sent on the SIB by the eNodeB.
During signal acquisition, the frame timing and the eNodeB cell ID are determined. Then, the MIB is decoded and the bandwidth of the system as well as the frame number are extracted. This allows the UE to demodulate the OFDM signal across the entire bandwidth and locate the SIB1 REs. The UE then decodes the SIB1 message, from which the scheduling of SIB4 is deduced; SIB4 is subsequently decoded. SIB4 contains the cell IDs of intra-frequency neighboring cells as well as other information pertaining to these cells. Decoding this information gives the UE the ability to simultaneously track signals from different eNodeBs and produce time-of-arrival (ToA) estimates from each of these eNodeBs. Signal tracking and ToA estimation are described below.
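As a non-limiting illustration of the first acquisition step (obtaining symbol timing and the sector portion of the cell ID from the PSS), the sketch below correlates received baseband samples against locally generated PSS replicas. The Zadoff-Chu construction follows the commonly documented LTE formula, and the sampling parameters (e.g., a 128-point grid) are assumptions; this is a sketch of the general approach rather than the exact receiver described herein.

```python
import numpy as np

def pss_time_domain(root, n_fft=128):
    """Time-domain PSS replica for one of the three roots (25, 29, 34).
    The Zadoff-Chu construction follows the commonly documented LTE formula;
    n_fft=128 corresponds to an assumed 1.92 Msps front end."""
    n1, n2 = np.arange(0, 31), np.arange(31, 62)
    d = np.concatenate([np.exp(-1j * np.pi * root * n1 * (n1 + 1) / 63),
                        np.exp(-1j * np.pi * root * (n2 + 1) * (n2 + 2) / 63)])
    grid = np.zeros(n_fft, dtype=complex)
    grid[1:32] = d[31:]     # positive subcarriers (DC bin left empty)
    grid[-31:] = d[:31]     # negative subcarriers
    return np.fft.ifft(grid)

def acquire_pss(rx, n_fft=128):
    """Return (sector ID, sample offset) maximizing the PSS cross-correlation."""
    best = (None, None, -np.inf)
    for sector_id, root in enumerate((25, 29, 34)):
        replica = pss_time_domain(root, n_fft)
        corr = np.abs(np.correlate(np.asarray(rx), replica, mode="valid"))
        k = int(np.argmax(corr))
        if corr[k] > best[2]:
            best = (sector_id, k, corr[k])
    return best[0], best[1]
```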
The signal tracking block diagram 800 includes a frequency-locked loop (FLL)-assisted phase-locked loop (PLL), which includes a phase discriminator, a phase loop filter, a frequency discriminator, a frequency loop filter, and a numerically-controlled oscillator (NCO). Because no data is modulated on the SSS, an atan2 phase discriminator, which remains linear over the full input error range of ±π, could be used without the risk of introducing phase ambiguities. A third-order PLL may be used to track the carrier phase, with a loop filter transfer function given by
where ωn,p is the undamped natural frequency of the phase loop, which can be related to the PLL noise-equivalent bandwidth Bn,PLL by Bn,PLL=0.7845ωn,p. The output of the phase loop filter is the rate of change of the carrier phase error 2πfDk, expressed in rad/s, where fDk is the Doppler frequency. The phase loop filter transfer function in (1) is discretized and realized in state-space. The noise-equivalent bandwidth Bn,PLL is chosen to range between 4 and 8 Hz. The PLL is assisted by a second-order FLL with an atan2 discriminator for the frequency. The frequency error at time step k is expressed as
where Spk=Ipk+jQpk is the prompt correlation at time step k, and Tsub=0.01 s is the sub-accumulation period, which is chosen to be one frame length. The transfer function of the frequency loop filter is given by
where ωn,f is the undamped natural frequency of the frequency loop, which can be related to the FLL noise-equivalent bandwidth Bn,FLL by Bn,FLL=0.53 ωn,f. The output of the frequency loop filter is the rate of change of the angular frequency 2πfDk expressed in rad/s2. It is therefore integrated and added to the output of the phase loop filter. The frequency loop filter transfer function in (2) is discretized and realized in state-space. The noise-equivalent bandwidth Bn,FLL is chosen to range between 1 and 4 Hz.
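A simplified, non-limiting sketch of the FLL-assisted PLL update is given below. The discriminators follow the atan2 forms described above, and the loop-filter coefficients follow common textbook rules of thumb that are consistent with the quoted bandwidth relations; the state-space realization described in the text is approximated here by simple integrators, so this is an illustration rather than the exact implementation.

```python
import numpy as np

class FllAssistedPll:
    """Simplified discrete-time sketch of an FLL-assisted PLL (third-order PLL,
    second-order FLL).  The coefficients are common textbook rules of thumb
    consistent with Bn,PLL = 0.7845*wn,p and Bn,FLL = 0.53*wn,f; the exact
    state-space realization in the text is approximated by simple integrators."""

    def __init__(self, bn_pll=6.0, bn_fll=2.0, t_sub=0.01):
        self.T = t_sub                  # sub-accumulation period (one frame)
        self.wp = bn_pll / 0.7845       # undamped natural frequency, phase loop
        self.wf = bn_fll / 0.53         # undamped natural frequency, frequency loop
        self.w1 = 0.0                   # loop-filter integrator states
        self.w2 = 0.0

    def update(self, prompt, prev_prompt):
        # atan2 discriminators: phase error from the prompt correlation,
        # frequency error from two consecutive prompts.
        e_phi = np.arctan2(prompt.imag, prompt.real)                       # rad
        cross = prev_prompt.real * prompt.imag - prev_prompt.imag * prompt.real
        dot = prev_prompt.real * prompt.real + prev_prompt.imag * prompt.imag
        e_f = np.arctan2(cross, dot) / (2 * np.pi * self.T)                # Hz

        # Third-order PLL branch assisted by the second-order FLL branch.
        self.w1 += self.T * (self.wp**3 * e_phi + self.wf**2 * 2 * np.pi * e_f)
        self.w2 += self.T * (1.1 * self.wp**2 * e_phi
                             + 1.414 * self.wf * 2 * np.pi * e_f + self.w1)
        return self.w2 + 2.4 * self.wp * e_phi   # rate of change of carrier phase, rad/s
```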
The signal tracking block diagram 800 also includes a carrier-aided DLL, which may use a non-coherent dot product discriminator. To compute the SSS code phase error, the dot product discriminator uses the prompt, early and late correlations, denoted by Spk, Sek, and Slk, respectively. The early and late correlations are calculated by correlating the received signal with an early and a delayed version of the prompt SSS sequence, respectively. The time shift between Sek and Slk is defined by an early-minus-late time teml, expressed in chips. The chip interval Tc for SSS (and PSS), can be expressed as
where BW is the bandwidth of the synchronization signal. Since the SSS and PSS occupy only 62 subcarriers, BW is calculated to be BW=62×15=930 kHz, which gives Tc≈1.0752 μs. The autocorrelation function of the transmitted LTE SSS is wide at its peak, and therefore a wider teml is preferable in order to have a significant difference between Spk and Slk.
The DLL loop filter may include a simple gain K, with a noise-equivalent bandwidth
The output of the DLL loop filter vDLL is the rate of change of the SSS code phase, expressed in s/s. Assuming low-side mixing, the code start time is updated according to
Finally, the SSS code start time estimate is used to reconstruct the transmitted LTE frame.
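A minimal, non-limiting sketch of one carrier-aided DLL step is shown below, assuming a non-coherent dot-product discriminator and a simple-gain loop filter with Bn,DLL=K/4. The discriminator normalization, the assumed carrier frequency, and the exact form of the code start time update are illustrative GNSS-style choices rather than the precise update used herein.

```python
def dll_update(t_start, s_early, s_prompt, s_late, doppler_hz,
               t_sub=0.01, t_chip=1.0752e-6, bn_dll=0.05, fc=2.145e9):
    """Sketch of a carrier-aided DLL step.  The non-coherent dot-product
    discriminator and carrier-aiding term follow common GNSS-style forms;
    the normalization, fc, and Bn,DLL are assumed illustrative values."""
    # Non-coherent dot-product code phase discriminator (output in chips).
    e_code = ((s_early - s_late).real * s_prompt.real +
              (s_early - s_late).imag * s_prompt.imag) / (2 * abs(s_prompt) ** 2)

    k_dll = 4 * bn_dll                  # simple-gain loop filter with Bn,DLL = K/4
    v_dll = k_dll * e_code * t_chip     # rate of change of the code phase (s/s)
    carrier_aid = doppler_hz / fc       # carrier aiding term (s/s), low-side mixing
    # Advance the code start time by one sub-accumulation period, corrected by
    # the loop output and the carrier aiding term.
    return t_start + t_sub * (1.0 - carrier_aid - v_dll)
```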
where S(u)(k) is the u-th eNodeB's CRS sequence, A(u) is the set of subcarriers in which S(u)(k) is transmitted, and D(u)(k) represents some other data signals. Assuming that the transmitted signal propagates in an additive white Gaussian noise (AWGN) channel, the received signal will be
where H(u)(k) is the channel frequency response at the k-th subcarrier and w(u)(k) is an AWGN.
The timing information extraction block diagram 1000 includes an estimation of the channel impulse response. The channel frequency response (CFR) estimate of the desired eNodeB, u′, is obtained according to
From the properties of the CRS sequence, |S(u′)(k)|2=L, hence

Ĥ(u′)(k)=H(u′)(k)+I(u′)(k)+V(k).
The data transmitted by each eNodeB is scrambled by a pseudo-random sequence that is orthogonal to the sequences of other eNodeBs, which would make I(u′)(k) zero. However, since the DC component of the transmitted data is removed, the orthogonality between different pseudo-random codes is lost, and the resulting correlation can be modeled as a zero-mean Gaussian random variable. Letting
Γ(k)≜I(u′)(k)+V(k),

then

Ĥ(u′)(k)=H(u′)(k)+Γ(k), and

ĥ(u′)(n)=IFFT{Ĥ(u′)(k)}=h(u′)(n)+γ(n).   (4)
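The CIR estimation step leading to equation (4) may be illustrated with the following non-limiting sketch: the received CRS resource elements are divided by the known CRS sequence to form a least-squares CFR estimate, and an IFFT yields the CIR estimate. Zero-filling the non-CRS subcarriers before the IFFT is an assumed simplification.

```python
import numpy as np

def estimate_cir(received_fft, crs_sequence, crs_subcarriers, n_fft=2048):
    """Sketch of CIR estimation: least-squares CFR estimate on the CRS
    subcarriers, H_hat(k) = Y(k) / S(k), followed by an IFFT.  Zero-filling
    the non-CRS subcarriers is an assumed simplification."""
    h_freq = np.zeros(n_fft, dtype=complex)
    h_freq[crs_subcarriers] = (np.asarray(received_fft)[crs_subcarriers]
                               / np.asarray(crs_sequence))
    # h_hat(n) = IFFT{H_hat(k)} = h(n) + noise, as in equation (4).
    return np.fft.ifft(h_freq)
```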
In general, a multipath channel can be modeled as
where a(u)(l) and n(u)(l) are the attenuation and the delay of the l-th path to the u-th eNodeB, respectively. Estimating n(u′)(l) is achieved through the following hypothesis test
where it can be shown that |ĥ(u′)(n)| has a Rayleigh distribution under H0 and a Rician distribution under H1. To increase the probability of detection, the channel impulse response estimates at different slots can be added incoherently. Similarly, the channel impulse response estimates for different transmit antennas can also be added incoherently, assuming that they have the same line-of-sight (LOS) path. In an embodiment, the channel frequency response estimates are accumulated across the entire frame to improve the detection performance.
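The non-coherent accumulation and first-peak test may be illustrated with the following non-limiting sketch; the helper names and the simple threshold-crossing rule are assumptions for this illustration.

```python
import numpy as np

def accumulate_cir_energy(cir_estimates):
    """Non-coherent accumulation sketch: sum |h_hat(n)|^2 over CIR estimates
    from different slots and transmit antennas (assumed to share the same LOS
    path).  cir_estimates is an iterable of complex arrays of equal length."""
    acc = None
    for h in cir_estimates:
        energy = np.abs(np.asarray(h)) ** 2
        acc = energy if acc is None else acc + energy
    return acc

def first_peak_index(accumulated, threshold):
    """Earliest tap whose accumulated energy exceeds the detection threshold
    (the LOS candidate); returns None when nothing crosses the threshold."""
    above = np.flatnonzero(accumulated > threshold)
    return int(above[0]) if above.size else None
```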
The LTE signals can be used to extract pseudorange measurements. The receiver state is defined by xr=[rrT, cδtr]T, where rr=[xr, yr, zr]T is the receiver's position vector, δtr is the receiver's clock bias, and c is the speed of light. Similarly, the state of the i-th eNodeB is defined as xsi=[rsiT, cδtsi]T, where rsi=[xsi, ysi, zsi]T is the eNodeB's position vector and δtsi is its clock bias. Subsequently, the pseudorange measurement to the i-th eNodeB at time t, ρi, can be expressed as
where vi is the measurement noise, which is modeled as a zero-mean Gaussian random variable with variance σi2. By drawing pseudorange measurements to four or more eNodeBs, the UE can estimate its state, provided that the position and the clock bias of the eNodeBs are known. The SOP positions can be mapped with a high degree of accuracy collaboratively or non-collaboratively. Additionally, the locations of LTE base stations can be obtained from online databases, ground surveys, satellite imagery, or other sources, which may provide a reliable estimate of the positions of the eNodeBs to the UE. However, the clock bias of these base stations is a stochastic dynamic process and needs to be continually estimated. In an embodiment, only the difference δti≜δtr−δtsi is considered, though additional embodiments may consider the receiver's and eNodeB's individual clock biases. This difference is modeled as a first-order polynomial, i.e., δti(t)=ait+bi, where ai is the clock drift between the receiver and the i-th eNodeB and bi is the corresponding constant bias. The coefficients of δti are calculated from the GPS data and the measured pseudoranges using a least-squares (LS) fit. The pseudorange at time t is re-expressed as ρi≈hi(rr, rsi)+vi, where hi(rr, rsi)≜∥rr−rsi∥2+c·[ait+bi]. By making pseudorange measurements to N≥3 eNodeBs with known position states, the receiver can estimate its position state using an iterative weighted nonlinear LS (WNLS) solver. The receiver's position estimate update at the l-th iteration is given by
is the measurement noise covariance matrix, and H is the Jacobian matrix with respect to the receiver's position, given by
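A non-limiting sketch of such an iterative WNLS position update is shown below, using the measurement model ρi≈∥rr−rsi∥+c(ait+bi) described above. The 2-D example geometry, the stopping rule, and the variable names are illustrative assumptions.

```python
import numpy as np

def wnls_position(pseudoranges, enodeb_positions, clock_terms, sigmas,
                  r0, n_iters=10, tol=1e-4):
    """Illustrative iterative weighted nonlinear LS solver for the receiver
    position.  clock_terms holds the already-computed c*(a_i*t + b_i) values;
    the 2-D geometry and stopping rule are assumptions for this sketch."""
    r = np.asarray(r0, dtype=float)
    p = np.asarray(enodeb_positions, dtype=float)
    rho = np.asarray(pseudoranges, dtype=float)
    W = np.diag(1.0 / np.asarray(sigmas, dtype=float) ** 2)  # inverse measurement covariance
    for _ in range(n_iters):
        ranges = np.linalg.norm(p - r, axis=1)
        residual = rho - (ranges + np.asarray(clock_terms, dtype=float))
        H = (r - p) / ranges[:, None]                 # Jacobian w.r.t. position
        delta = np.linalg.solve(H.T @ W @ H, H.T @ W @ residual)
        r += delta
        if np.linalg.norm(delta) < tol:
            break
    return r

# Hypothetical usage with three eNodeBs in a 2-D plane:
# r_hat = wnls_position(rho, [[0, 0], [500, 800], [900, -200]],
#                       clock_terms=[0, 0, 0], sigmas=[1, 1, 1], r0=[100, 100])
```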
The timing information extraction block diagram 1000 may provide improved performance in the presence of multipath signals. As described above, the LTE receiver may be used to track the SSS; however, the transmission bandwidth of the SSS is less than 1 MHz, leading to low TOA accuracy in a multipath environment. Nonetheless, the SSS can provide computationally low-cost and relatively precise pseudorange information using conventional delay-locked loops (DLLs). A cell-specific reference signal (CRS) receiver may be used to track a channel impulse response (CIR), which may be used to improve the result of the SSS tracking. The CRS is a reference sequence, which may be used to estimate the channel between an eNodeB and the UE. The CRS may provide increased accuracy in estimating the TOA due to its higher transmission bandwidth. The CIR tracking may be improved further by applying an adaptive threshold in the CRS receiver, as described below.
An adaptive threshold may be determined through the use of a constant false alarm rate (CFAR) approach. An initial threshold may depend on the noise variance, σh2; however, the noise variance continuously changes in a dynamic environment, and the threshold must be updated accordingly. Changing the threshold to keep a constant pFA is referred to as CFAR detection. Cell-averaging CFAR (CA-CFAR) is an example CFAR technique. In CA-CFAR, each cell is tested for the presence of a signal. For a given cell under test (CUT), a functional of Nt training cells separated from the CUT by Ng guard cells is computed. In a square-law detector, this functional will be the sum of |ĥ(u)(n)|2, which is proportional to the background noise level given by
where xm is the functional evaluated at the m-th training cell. A threshold can be obtained by multiplying Pn by a constant K, hence η=KPn, which can be shown to have a noncentral chi-square distribution with 2Nt degrees of freedom. The probability of false alarm for a specified threshold is
The pFA in CA-CFAR can be obtained by taking the average of the above over all possible values of the decision threshold. This yields
η = (pFA^(−1/Nt) − 1) Pn.
To improve the probability of detection while maintaining a constant pFA, a non-coherent integration can be used. For this purpose, it is proposed to integrate the squared envelopes of ĥ(u)(n) at different slots and for different transmit antennas (assuming that they have the same LOS path) over one frame duration. Defining ni as the number of non-coherent integrations, averaging is performed over niNt training cells. Therefore, after integration, the threshold will have a noncentral chi-square distribution with 2niNt degrees of freedom. By taking the average of the probability of false alarm given the threshold over the new pdf of this threshold, it can be shown that
where K can be solved for numerically, and the threshold will be determined from η=KPn.
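The CA-CFAR threshold computation may be illustrated with the following non-limiting sketch. Here Pn is taken as the average energy of the training cells and K is passed in directly; in the description above, Pn is defined by the omitted expression and K is solved for numerically from the desired pFA, so the specific values below are placeholders.

```python
import numpy as np

def ca_cfar_threshold(energy, cut, n_train=32, n_guard=4, k_scale=3.0):
    """CA-CFAR sketch: average the squared-envelope values in the training cells
    on both sides of the cell under test (excluding the guard cells) and scale
    by a constant to form the detection threshold.  k_scale stands in for the
    factor K that is solved for numerically from the desired pFA."""
    energy = np.asarray(energy)
    half = n_train // 2
    left = energy[max(0, cut - n_guard - half): max(0, cut - n_guard)]
    right = energy[cut + n_guard + 1: cut + n_guard + 1 + half]
    training = np.concatenate([left, right])
    p_n = training.sum() / max(len(training), 1)   # background noise level estimate
    return k_scale * p_n                           # eta = K * P_n

# Hypothetical usage: declare a detection at the cell under test when
# energy[cut] > ca_cfar_threshold(energy, cut).
```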
Using the proposed method for tracking the TOA, a false alarm in detecting the first peak means that noise is erroneously detected as a valid signal, which can cause significant errors and potentially loss of track. To resolve this problem, a low-pass filter is applied after the CFAR detector, which removes sudden changes in the estimated TOA. The localization error with the proposed method is acceptable for medium to high bandwidth LTE signals (e.g., above 10 MHz). For lower bandwidths, other methods could be exploited. After detecting d(u)(0), the residual TOA, τ=Tsd(u)(0), is fed back to the tracking loops to improve the estimated frame start time t′s.
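A minimal, non-limiting sketch of the low-pass filtering step is shown below, using a first-order filter with an assumed smoothing constant; the filter structure is not specified above, so this is an illustration only.

```python
def smooth_toa(prev_smoothed, new_toa, alpha=0.05):
    """First-order low-pass filter sketch for the TOA estimates produced by the
    CFAR detector, suppressing sudden jumps caused by false first-peak
    detections.  alpha is an assumed smoothing constant."""
    if prev_smoothed is None:
        return new_toa
    return (1.0 - alpha) * prev_smoothed + alpha * new_toa
```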
The position states of these eNodeBs were previously mapped, and are shown on map 1200. As shown on map 1200, all measurements and trajectories were projected onto a two-dimensional (2-D) space, which shows the true and estimated receiver trajectories. Subsequently, only the horizontal position of the receiver was estimated. As shown in
Map 1200 shows 3 eNodeBs tracked in an example experiment; however, various numbers of eNodeBs may be tracked to improve the navigation solution. To estimate the position of the receiver in a two-dimensional (2-D) plane using a static estimator, the pseudoranges to at least three eNodeBs are required and can be obtained by tracking the signal of each eNodeB. However, tracking all signals is computationally involved and can prohibit real-time implementation. The received signal from an eNodeB can be highly attenuated, and therefore it may not be possible to track all ambient SSSs. In an embodiment, the frequency reuse factor of six in the LTE CRS signals may be used to extract the pseudoranges of multiple eNodeBs while tracking only one eNodeB.
The received symbol at the UE can be written as
where r(1)(n) is the received symbol from the main eNodeB, r(u)(n) is the received signal from the u-th eNodeB at time n, and w(n) is modeled as an additive white Gaussian noise with variance σIQ2. Defining the received time delay of the u-th eNodeB as d(u)(0), which in effect measures the TOA and the clock biases, the signal will be received in one of three possible scenarios. The first scenario happens when the difference of the distances to the main eNodeB and to the neighboring eNodeB is less than the duration of the CP. For a CP of length 4.69 μs, this difference must be less than 1406 m. In the second scenario, the difference is more than the length of a CP. In the third scenario, the neighboring eNodeB is closer to the receiver than the main eNodeB. In the second scenario, the neighboring eNodeBs are significantly far, and it is assumed that the received signals from these eNodeBs are highly attenuated. It is also assumed that the third scenario does not happen since the main eNodeB is defined as the eNodeB with the highest power, which is usually the closest eNodeB to the receiver.
Defining nd(u)≜n(u)(0)−n(1)(0) as the time delay difference between the u-th eNodeB and the main eNodeB, it can be concluded that for 0≤nd(u)≤LCP,
By taking the FFT of r(n) and using Ri(k) and r(u)(n), the received signal in the frequency-domain becomes
For the symbols carrying the CRS, Y is defined as
Therefore, the CFR of the main eNodeB can be obtained from
and the estimated CFRs for other eNodeBs are obtained according to
Subsequently, the CIRs are calculated using
ĥ(1)(n)=h(1)(n)+v(1)(n),

ĥ(u)(n)=h(u)(n−nd(u))+v(u)(n),

and the TOA of the u-th eNodeB is related to that of the main eNodeB by

d(u)(0)=d(1)(0)+nd(u).
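The multi-eNodeB extraction described above may be illustrated with the following non-limiting sketch: the neighbor's CRS occupies a different subcarrier shift (frequency reuse of six), so its CFR and CIR can be estimated from the same frame tracked for the main eNodeB, and the first peak of the neighbor CIR approximates nd(u). The v-shift rule (cell ID mod 6), the helper names, and the threshold test are assumptions for this sketch.

```python
import numpy as np

def neighbor_cir(received_fft, crs_sequence, cell_id, n_fft=2048):
    """Sketch of extracting a neighboring eNodeB's CIR from the frame tracked
    for the main eNodeB, exploiting the CRS frequency reuse of six.  The
    v-shift rule (cell ID mod 6) follows the common description of the LTE CRS
    mapping; crs_sequence is assumed known for the neighbor."""
    v_shift = cell_id % 6
    subcarriers = np.arange(v_shift, n_fft, 6)[: len(crs_sequence)]
    crs = np.asarray(crs_sequence)[: len(subcarriers)]
    h_freq = np.zeros(n_fft, dtype=complex)
    h_freq[subcarriers] = np.asarray(received_fft)[subcarriers] / crs
    return np.fft.ifft(h_freq)   # h_hat^(u)(n) = h^(u)(n - n_d^(u)) + noise

def relative_delay(neighbor_cir_est, threshold):
    """First tap of the neighbor CIR above the threshold approximates n_d^(u),
    so d^(u)(0) = d^(1)(0) + n_d^(u) per the relation above."""
    above = np.flatnonzero(np.abs(neighbor_cir_est) ** 2 > threshold)
    return int(above[0]) if above.size else None
```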
FIG. 14 is an LTE receiver architecture block diagram 1400, in accordance with at least one embodiment. As can be seen in
One example computing device in the form of a computer 1510, may include a processing unit 1502, memory 1504, removable storage 1512, and non-removable storage 1514. Although the example computing device is illustrated and described as computer 1510, the computing device may be in different forms in different embodiments. For example, the computing device may instead be a smartphone, a tablet, or other computing device including the same or similar elements as illustrated and described with regard to
Returning to the computer 1510, memory 1504 may include volatile memory 1506 and non-volatile memory 1508. Computer 1510 may include or have access to a computing environment that includes a variety of computer-readable media, such as volatile memory 1506 and non-volatile memory 1508, removable storage 1512 and non-removable storage 1514. Computer storage includes random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD ROM), digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of storing computer-readable instructions. Computer 1510 may include or have access to a computing environment that includes input 1516, output 1518, and a communication connection 1520. The input 1516 may include one or more of a touchscreen, touchpad, mouse, keyboard, camera, and other input devices. The input 1516 may include a navigation sensor input, such as a GNSS receiver, a SOP receiver, an inertial sensor (e.g., accelerometers, gyroscopes), a local ranging sensor (e.g., LIDAR), an optical sensor (e.g., cameras), or other sensors. The computer may operate in a networked environment using a communication connection 1520 to connect to one or more remote computers, such as database servers, web servers, and other computing devices. An example remote computer may include a personal computer (PC), server, router, network PC, a peer device or other common network node, or the like. The communication connection 1520 may be a network interface device such as one or both of an Ethernet card and a wireless card or circuit that may be connected to a network. The network may include one or more of a Local Area Network (LAN), a Wide Area Network (WAN), the Internet, and other networks.
Computer-readable instructions stored on a computer-readable medium are executable by the processing unit 1502 of the computer 1510. A hard drive (magnetic disk or solid state), CD-ROM, and RAM are some examples of articles including a non-transitory computer-readable medium. For example, various computer programs 1525 or apps, such as one or more applications and modules implementing one or more of the methods illustrated and described herein or an app or application that executes on a mobile device or is accessible via a web browser, may be stored on a non-transitory computer-readable medium.
To better illustrate the method and apparatuses disclosed herein, a non-limiting list of embodiments is provided here.
Each of these non-limiting examples can stand on its own, or can be combined in various permutations or combinations with one or more of the other examples.
Conventional terms in the fields of satellite navigation and wireless communications have been used herein. The terms are known in the art and are provided only as a non-limiting example for convenience purposes. Accordingly, the interpretation of the corresponding terms in the claims, unless stated otherwise, is not limited to any particular definition. Thus, the terms used in the claims should be given their broadest reasonable interpretation.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiments shown. Many adaptations will be apparent to those of ordinary skill in the art. Accordingly, this application is intended to cover any adaptations or variations.
The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
All publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer-readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read-only memories (ROMs), and the like.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. § 1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the embodiments should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
This application is related and claims priority to U.S. Provisional Application No. 62/398,403, filed on Sep. 22, 2016 and entitled “PERFORMANCE CHARACTERIZATION OF POSITIONING IN LTE SYSTEMS,” and is related and claims priority to U.S. Provisional Application No. 62/561,023, filed on Sep. 20, 2017 and entitled “MITIGATING MULTIPATH FOR POSITIONING IN LTE SYSTEMS,” each of which is incorporated herein by reference in its entirety.
The invention was made with Government support under Grant No. N00014-16-1-2305, awarded by the Office of Naval Research-N99914. The Government has certain rights in this invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2017/053002 | 9/22/2017 | WO | 00

Number | Date | Country
---|---|---
62561023 | Sep 2017 | US
62398403 | Sep 2016 | US