The following description relates to a wireless communication system, and more particularly to a method and apparatus for reconstructing an image of a target vehicle based on received signals.
Wireless communication systems have been widely deployed to provide various types of communication services such as voice or data. In general, a wireless communication system is a multiple access system that supports communication of multiple users by sharing available system resources (a bandwidth, transmission power, etc.) among them. For example, multiple access systems include a code division multiple access (CDMA) system, a frequency division multiple access (FDMA) system, a time division multiple access (TDMA) system, an orthogonal frequency division multiple access (OFDMA) system, a single carrier frequency division multiple access (SC-FDMA) system, and a multi-carrier frequency division multiple access (MC-FDMA) system.
Device-to-device (D2D) communication is a communication scheme in which a direct link is established between user equipments (UEs) and the UEs exchange voice and data directly without intervention of an evolved Node B (eNB). D2D communication may cover UE-to-UE communication and peer-to-peer communication. In addition, D2D communication may be applied to machine-to-machine (M2M) communication and machine type communication (MTC).
D2D communication is under consideration as a solution to the overhead of an eNB caused by rapidly increasing data traffic. For example, since devices exchange data directly with each other without intervention of an eNB by D2D communication, compared to legacy wireless communication, network overhead may be reduced. Further, it is expected that the introduction of D2D communication will reduce procedures of an eNB, reduce the power consumption of devices participating in D2D communication, increase data transmission rates, increase the accommodation capability of a network, distribute load, and extend cell coverage.
At present, vehicle-to-everything (V2X) communication in conjunction with D2D communication is under consideration. In concept, V2X communication covers vehicle-to-vehicle (V2V) communication, vehicle-to-pedestrian (V2P) communication for communication between a vehicle and a different kind of terminal, and vehicle-to-infrastructure (V2I) communication for communication between a vehicle and a roadside unit (RSU).
Autonomous driving has grown into a reality with progress in object detection, recognition, and mapping based on the fusion of different sensing techniques. 2D or 3D imaging is an important part of autonomous driving, since image information can help vehicles in object recognition and route planning [1]. In recent years, real-time detection systems based on light detection and ranging (LiDAR) and vision data have become quite popular, where laser scanners and cameras are used for data collection. Ultrasonic sensors also play an important role in short-range detection [2]. However, none of the detection and recognition techniques above function well in non-line-of-sight (NLoS) conditions. To handle this problem, some techniques have been proposed that establish vehicle-to-vehicle (V2V) communications, enabling vehicles to share information through cooperation. However, V2V communication may not be stable under dynamic road conditions. In this specification, an imaging system using millimeter waves is proposed, which is able to capture 3D images of surrounding vehicles under both LoS and NLoS conditions. Moreover, in foggy weather, laser scanners and cameras may not perform well, whereas a millimeter-wave (MMW) system is much more robust.
Multisensor setups are widely used in autonomous driving, where information gathered from different sensors helps guarantee driving safety and plan routes wisely [3]. Alignment of information from different sensors improves the accuracy and reliability of sensing. The most widely used sensors include cameras, LiDARs, radar sensors, etc. Among all types of sensors, MMW systems are mostly considered a type of radar sensor in autonomous driving, ensuring driving safety in foggy weather while offering high resolution and decent detection range simultaneously. For MMW imaging, conventional synthetic aperture radar (SAR) [4] and inverse synthetic aperture radar (ISAR) [5] rely on the motion of the sensor or target and have been maturely applied in aircraft and spacecraft. Over the last few years, highly integrated circuits with moderate costs have become available at MMW frequencies. Therefore, the popularity of MMW antenna-array-based imaging techniques is increasing due to their high resolution and fast electronic scanning capability [6]. Array-based imaging techniques can be divided into three categories: monostatic switched arrays, multiple-input multiple-output (MIMO) arrays, and phased arrays. However, all these MMW imaging techniques require an antenna scanning process, where antennas transmit signals sequentially, because the round-trip distances cannot be determined if all transmit and receive antennas are switched on at the same time. The scanning process is quite time-consuming, especially for 3D imaging. Some compressive sensing techniques [6], [7] have been proposed to reduce the scanning time, but they may cause safety issues if applied to autonomous driving.
An object of the present disclosure is to provide a method for acquiring the shape of a target vehicle.
It will be appreciated by persons skilled in the art that the objects that could be achieved with the present disclosure are not limited to what has been particularly described hereinabove and the above and other objects that the present disclosure could achieve will be more clearly understood from the following detailed description.
In one aspect of the present invention, provided herein is a method for performing vehicle image reconstruction by a sensing vehicle (SV) in a wireless communication system, the method comprising: receiving a plurality of stepped-frequency continuous waves (SFCWs) from a target vehicle (TV); receiving signature waveforms in a frequency range different from that of the plurality of SFCWs; performing synchronization using phase-difference-of-arrival (PDoA) based on the signature waveforms; reconstructing one or more virtual images of the TV; and deriving a real image from the one or more reconstructed virtual images.
The signature waveforms correspond to two pairs of signature waveforms, each pair containing two signature waveforms.
Each pair of signature waveforms is transmitted from one of two specific transmit antennas at the TV.
The signature waveforms of each pair are received at different frequencies outside the bandwidth of the SFCW.
The synchronization is performed by deriving a synchronization gap between the TV and the SV.
The synchronization gap is derived based on the phase difference between the two pairs of signature waveforms.
The one or more virtual images are reconstructed using a 3D Fourier transform.
Deriving the real image exploits the fact that the real image is at a position symmetric to the virtual images with respect to a reflecting side of a mirror vehicle.
Two common points of the virtual images are x̃1=(x̃1, ỹ1, z̃1) and x̃2=(x̃2, ỹ2, z̃2).
The two common points correspond to two specific transmit antennas of the TV.
The two common points of the real image are x1=(x1, y1, z1) and x2=(x2, y2, z2).
A relation between the coordinates of (x1, x2) is represented as
wherein the angles denote the directed angles from the x-axis to the virtual lines between the virtual common points and x1 or x2, respectively.
The x-axis corresponds to the SV's moving direction.
In another aspect of the present invention, provided herein is a sensing vehicle (SV) performing vehicle image reconstruction in a wireless communication system, the SV comprising: a transmitting device and a receiving device; and a processor, wherein the processor is configured to: receive a plurality of stepped-frequency continuous waves (SFCWs) from a target vehicle (TV); receive signature waveforms in a frequency range different from that of the plurality of SFCWs; perform synchronization using phase-difference-of-arrival (PDoA) based on the signature waveforms; reconstruct one or more virtual images of the TV; and derive a real image from the one or more reconstructed virtual images.
According to embodiments of the present invention, the shape of a target vehicle that is not visible from the sensing vehicle is acquired.
It will be appreciated by persons skilled in the art that the effects that can be achieved with the present disclosure are not limited to what has been particularly described hereinabove and other advantages of the present disclosure will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings.
The accompanying drawings, which are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this application, illustrate embodiments of the present disclosure and together with the description serve to explain the principle of the disclosure. In the drawings:
The embodiments of the present disclosure described hereinbelow are combinations of elements and features of the present disclosure. The elements or features may be considered selective unless otherwise mentioned. Each element or feature may be practiced without being combined with other elements or features. Further, an embodiment of the present disclosure may be constructed by combining parts of the elements and/or features. Operation orders described in embodiments of the present disclosure may be rearranged. Some constructions or features of any one embodiment may be included in another embodiment and may be replaced with corresponding constructions or features of another embodiment.
In the embodiments of the present disclosure, a description is made, centering on a data transmission and reception relationship between a base station (BS) and a user equipment (UE). The BS is a terminal node of a network, which communicates directly with a UE. In some cases, a specific operation described as performed by the BS may be performed by an upper node of the BS.
Namely, it is apparent that, in a network comprised of a plurality of network nodes including a BS, various operations performed for communication with a UE may be performed by the BS or network nodes other than the BS. The term ‘BS’ may be replaced with the term ‘fixed station’, ‘Node B’, ‘evolved Node B (eNode B or eNB)’, ‘Access Point (AP)’, etc. The term ‘relay’ may be replaced with the term ‘relay node (RN)’ or ‘relay station (RS)’. The term ‘terminal’ may be replaced with the term ‘UE’, ‘mobile station (MS)’, ‘mobile subscriber station (MSS)’, ‘subscriber station (SS)’, etc.
The term “cell”, as used herein, may be applied to transmission and reception points such as a base station (eNB), a sector, a remote radio head (RRH), and a relay, and may also be extensively used by a specific transmission/reception point to distinguish between component carriers.
Specific terms used for the embodiments of the present disclosure are provided to help the understanding of the present disclosure. These specific terms may be replaced with other terms within the scope and spirit of the present disclosure.
In some cases, to prevent the concept of the present disclosure from being ambiguous, structures and apparatuses of the known art will be omitted, or will be shown in the form of a block diagram based on main functions of each structure and apparatus. Also, wherever possible, the same reference numbers will be used throughout the drawings and the specification to refer to the same or like parts.
The embodiments of the present disclosure can be supported by standard documents disclosed for at least one of wireless access systems, Institute of Electrical and Electronics Engineers (IEEE) 802, 3rd Generation Partnership Project (3GPP), 3GPP long term evolution (3GPP LTE), LTE-advanced (LTE-A), and 3GPP2. Steps or parts that are not described to clarify the technical features of the present disclosure can be supported by those documents. Further, all terms as set forth herein can be explained by the standard documents.
Techniques described herein can be used in various wireless access systems such as code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), single carrier-frequency division multiple access (SC-FDMA), etc. CDMA may be implemented as a radio technology such as universal terrestrial radio access (UTRA) or CDMA2000. TDMA may be implemented as a radio technology such as global system for mobile communications (GSM)/general packet radio service (GPRS)/Enhanced Data Rates for GSM Evolution (EDGE). OFDMA may be implemented as a radio technology such as IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, evolved-UTRA (E-UTRA) etc. UTRA is a part of universal mobile telecommunications system (UMTS). 3GPP LTE is a part of Evolved UMTS (E-UMTS) using E-UTRA. 3GPP LTE employs OFDMA for downlink and SC-FDMA for uplink. LTE-A is an evolution of 3GPP LTE. WiMAX can be described by the IEEE 802.16e standard (wireless metropolitan area network (WirelessMAN)-OFDMA Reference System) and the IEEE 802.16m standard (WirelessMAN-OFDMA Advanced System). For clarity, this application focuses on the 3GPP LTE and LTE-A systems. However, the technical features of the present disclosure are not limited thereto.
With reference to
In a cellular orthogonal frequency division multiplexing (OFDM) wireless packet communication system, uplink and/or downlink data packets are transmitted in subframes. One subframe is defined as a predetermined time period including a plurality of OFDM symbols. The 3GPP LTE standard supports a type-1 radio frame structure applicable to frequency division duplex (FDD) and a type-2 radio frame structure applicable to time division duplex (TDD).
The number of OFDM symbols in one slot may vary depending on the cyclic prefix (CP) configuration. There are two types of CPs: extended CP and normal CP. In the case of the normal CP, one slot includes 7 OFDM symbols. In the case of the extended CP, the length of one OFDM symbol is increased and thus the number of OFDM symbols in a slot is smaller than in the case of the normal CP. Thus, when the extended CP is used, for example, 6 OFDM symbols may be included in one slot. If the channel state gets poor, for example, during fast movement of a UE, the extended CP may be used to further decrease inter-symbol interference (ISI).
In the case of the normal CP, one subframe includes 14 OFDM symbols because one slot includes 7 OFDM symbols. The first two or three OFDM symbols of each subframe may be allocated to a physical downlink control channel (PDCCH) and the other OFDM symbols may be allocated to a physical downlink shared channel (PDSCH).
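The symbol counts above follow from a short calculation; the helper below is an illustrative sketch, not part of any standard API.

```python
# Sketch: OFDM symbols per slot and per subframe for the two CP types
# described above (normal CP: 7 symbols/slot, extended CP: 6 symbols/slot).
CP_SYMBOLS_PER_SLOT = {"normal": 7, "extended": 6}

def symbols_per_subframe(cp_type, slots_per_subframe=2):
    """Number of OFDM symbols in one subframe for the given CP type."""
    return CP_SYMBOLS_PER_SLOT[cp_type] * slots_per_subframe

print(symbols_per_subframe("normal"))    # 14
print(symbols_per_subframe("extended"))  # 12
```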
The above-described radio frame structures are purely exemplary and thus it is to be noted that the number of subframes in a radio frame, the number of slots in a subframe, or the number of symbols in a slot may vary.
In a wireless communication system, a packet is transmitted on a radio channel. In view of the nature of the radio channel, the packet may be distorted during the transmission. To receive the signal successfully, a receiver should compensate for the distortion of the received signal using channel information. Generally, to enable the receiver to acquire the channel information, a transmitter transmits a signal known to both the transmitter and the receiver and the receiver acquires knowledge of channel information based on the distortion of the signal received on the radio channel. This signal is called a pilot signal or an RS.
In the case of data transmission and reception through multiple antennas, knowledge of channel states between transmission (Tx) antennas and reception (Rx) antennas is required for successful signal reception. Accordingly, an RS should be transmitted through each Tx antenna.
RSs may be divided into downlink RSs and uplink RSs. In the current LTE system, the uplink RSs include:
i) Demodulation-reference signal (DM-RS) used for channel estimation for coherent demodulation of information delivered on a PUSCH and a PUCCH; and
ii) Sounding reference signal (SRS) used for an eNB or a network to measure the quality of an uplink channel in a different frequency.
The downlink RSs are categorized into:
i) Cell-specific reference signal (CRS) shared among all UEs of a cell;
ii) UE-specific RS dedicated to a specific UE;
iii) DM-RS used for coherent demodulation of a PDSCH, when the PDSCH is transmitted;
iv) Channel state information-reference signal (CSI-RS) carrying CSI, when downlink DM-RSs are transmitted;
v) Multimedia broadcast single frequency network (MBSFN) RS used for coherent demodulation of a signal transmitted in MBSFN mode; and
vi) Positioning RS used to estimate geographical position information about a UE.
RSs may also be divided into two types according to their purposes: RSs for channel information acquisition and RSs for data demodulation. Since its purpose is for a UE to acquire downlink channel information, the former should be transmitted over a broad band and received even by a UE that does not receive downlink data in a specific subframe. This RS is also used in situations like handover. The latter is an RS that an eNB transmits along with downlink data in specific resources. A UE can demodulate the data by measuring the channel using this RS. This RS should be transmitted in the data transmission area.
As shown in
Ri=min(NT,NR)  [Equation 1]
For instance, in a MIMO communication system that uses four Tx antennas and four Rx antennas, a transmission rate four times higher than that of a single-antenna system can be obtained. Since this theoretical capacity increase of the MIMO system was proved in the mid-1990s, many ongoing efforts have been made on various techniques to substantially improve the data transmission rate. In addition, these techniques have already been adopted in part as standards for various wireless communications such as 3G mobile communication, next-generation wireless LAN, and the like.
The trends in MIMO-related studies are as follows. Many ongoing efforts are being made in various aspects: information-theoretic studies on MIMO communication capacity in various channel configurations and multiple-access environments, radio channel measurement and model derivation for MIMO systems, and spatiotemporal signal processing techniques for transmission reliability enhancement and transmission rate improvement.
In order to explain a communicating method in an MIMO system in detail, mathematical modeling can be represented as follows. It is assumed that there are NT Tx antennas and NR Rx antennas.
Regarding a transmitted signal, if there are NT Tx antennas, the maximum number of pieces of information that can be transmitted is NT. Hence, the transmission information can be represented as shown in Equation 2.
s=[s1,s2, . . . ,sNT]T  [Equation 2]
Meanwhile, transmit powers can be set differently for the individual pieces of transmission information s1, s2, . . . , sNT. If the transmit powers are set to P1, P2, . . . , PNT, the power-adjusted transmission information can be represented as shown in Equation 3.
ŝ=[ŝ1,ŝ2, . . . ,ŝNT]T=[P1s1,P2s2, . . . ,PNTsNT]T  [Equation 3]
In addition, ŝ can be represented as Equation 4 using the diagonal matrix P of the transmission powers.
Assuming a case of configuring NT transmitted signals x1, x2, . . . , xNT by applying a weight matrix W to the power-adjusted information vector ŝ, the transmitted signal vector can be represented as shown in Equation 5.
In Equation 5, wij denotes a weight between the ith Tx antenna and the jth piece of information. W is also called a precoding matrix.
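The power scaling and precoding steps of Equations 3 through 5 can be sketched numerically; the vector s, power matrix P, and weight matrix W below are arbitrary example values, not taken from the text.

```python
import numpy as np

# Power scaling followed by precoding, following Equations 3-5.
N_T = 4
s = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])  # information vector s
P = np.diag([1.0, 0.5, 0.5, 0.25])                # per-stream transmit powers (example)
s_hat = P @ s                                     # power-adjusted information (Equation 3)
W = np.eye(N_T)                                   # precoding matrix (identity, for illustration)
x = W @ s_hat                                     # transmitted signal vector (Equation 5)
print(x)
```

With the identity precoder, x equals the power-adjusted vector ŝ; a non-trivial W would mix the streams across antennas.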
If NR Rx antennas are present, the respective received signals y1, y2, . . . , yNR of the antennas can be expressed as a vector as shown in Equation 6.
y=[y1,y2, . . . ,yNR]T  [Equation 6]
If channels are modeled in the MIMO wireless communication system, the channels may be distinguished according to Tx/Rx antenna indexes. A channel from the Tx antenna j to the Rx antenna i is denoted by hij. In hij, it is noted that the indexes of the Rx antennas precede the indexes of the Tx antennas in view of the order of indexes.
hiT=[hi1,hi2, . . . ,hiNT]  [Equation 7]
Accordingly, all channels from the NT Tx antennas to the NR Rx antennas can be expressed as follows.
Additive white Gaussian noise (AWGN) is added to the actual channels after the channel matrix H. The AWGN terms n1, n2, . . . , nNR respectively added to the NR Rx antennas can be expressed as follows.
n=[n1,n2, . . . ,nNR]T  [Equation 9]
Through the above-described mathematical modeling, the received signals can be expressed as follows.
Meanwhile, the number of rows and columns of the channel matrix H indicating the channel state is determined by the number of Tx and Rx antennas. The number of rows of the channel matrix H is equal to the number NR of Rx antennas and the number of columns thereof is equal to the number NT of Tx antennas. That is, the channel matrix H is an NR×NT matrix.
The rank of the matrix is defined by the smaller of the number of rows and the number of columns, which are independent from each other. Accordingly, the rank of the matrix is not greater than the number of rows or columns. The rank rank(H) of the channel matrix H is restricted as follows.
rank(H)≤min(NT,NR) [Equation 11]
Additionally, the rank of a matrix can also be defined as the number of non-zero eigenvalues when the matrix is eigen-decomposed. Similarly, the rank of a matrix can be defined as the number of non-zero singular values when the matrix is singular-value-decomposed. Accordingly, the physical meaning of the rank of a channel matrix is the maximum number of channels through which different pieces of information can be transmitted.
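This rank characterization can be checked numerically; the sketch below uses an arbitrary random channel, counts non-zero singular values, and verifies the bound of Equation 11.

```python
import numpy as np

# The rank of an N_R x N_T channel matrix H equals the number of non-zero
# singular values and cannot exceed min(N_T, N_R) (Equation 11).
rng = np.random.default_rng(0)
N_R, N_T = 4, 2
H = rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))
sv = np.linalg.svd(H, compute_uv=False)   # singular values of H
rank = int(np.sum(sv > 1e-10))            # count the non-zero ones
print(rank, min(N_T, N_R))                # a random continuous H is full rank
```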
In the description of the present document, ‘rank’ for MIMO transmission indicates the number of paths capable of carrying signals independently on specific time and frequency resources, and ‘number of layers’ indicates the number of signal streams transmitted through the respective paths. Generally, since a transmitting end transmits a number of layers corresponding to the rank, the rank has the same meaning as the number of layers unless specified otherwise.
Now, a description will be given of synchronization acquisition between UEs in D2D communication based on the foregoing description in the context of the legacy LTE/LTE-A system. In an OFDM system, if time/frequency synchronization is not acquired, the resulting inter-cell interference (ICI) may make it impossible to multiplex different UEs in an OFDM signal. If each individual D2D UE acquires synchronization by transmitting and receiving a synchronization signal directly, this is inefficient. In a distributed node system such as a D2D communication system, therefore, a specific node may transmit a representative synchronization signal and the other UEs may acquire synchronization using the representative synchronization signal. In other words, some nodes (which may be an eNB, a UE, and a synchronization reference node (SRN, also referred to as a synchronization source)) may transmit a D2D synchronization signal (D2DSS) and the remaining UEs may transmit and receive signals in synchronization with the D2DSS.
D2DSSs may include a primary D2DSS (PD2DSS) or primary sidelink synchronization signal (PSSS) and a secondary D2DSS (SD2DSS) or secondary sidelink synchronization signal (SSSS). The PD2DSS may be configured to have a similar/modified/repeated structure of a Zadoff-Chu sequence of a predetermined length or of a primary synchronization signal (PSS). Unlike a DL PSS, the PD2DSS may use a different Zadoff-Chu root index (e.g., 26, 37). The SD2DSS may be configured to have a similar/modified/repeated structure of an M-sequence or of a secondary synchronization signal (SSS). If UEs synchronize their timing with an eNB, the eNB serves as the SRN and the D2DSS is a PSS/SSS. Unlike the PSS/SSS of DL, the PD2DSS/SD2DSS follows the UL subcarrier mapping scheme.
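A Zadoff-Chu sequence of the kind referenced above can be generated directly. The generator below is a sketch: the length-63 choice is an assumption for illustration, while the root indices 26 and 37 are the PD2DSS examples from the text.

```python
import numpy as np

def zadoff_chu(root, length=63):
    """Generate a Zadoff-Chu sequence of odd length with the given root index."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

# A PD2DSS-style root from the text; constant amplitude is the key
# Zadoff-Chu property exploited by synchronization signals.
seq = zadoff_chu(26)
print(np.allclose(np.abs(seq), 1.0))  # True
```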
The SRN may be a node that transmits a D2DSS and a PD2DSCH. The D2DSS may be a specific sequence and the PD2DSCH may be a sequence representing specific information or a codeword produced by predetermined channel coding. The SRN may be an eNB or a specific D2D UE. In the case of partial network coverage or out of network coverage, the SRN may be a UE.
In a situation illustrated in
A resource pool can be classified into various types. First of all, resource pools can be classified according to the contents of the D2D signal transmitted via each pool. The contents of the D2D signal can be classified into various signals, and a separate resource pool can be configured for each of them. The contents of the D2D signal may include a scheduling assignment (SA, or physical sidelink control channel (PSCCH)), a D2D data channel, and a discovery channel. The SA may correspond to a signal including information on the resource position of a D2D data channel, information on the modulation and coding scheme (MCS) necessary for modulating and demodulating the data channel, information on a MIMO transmission scheme, information on a timing advance (TA), and the like. The SA signal can be transmitted on an identical resource unit, multiplexed with D2D data. In this case, an SA resource pool may correspond to a pool of resources in which an SA and D2D data are transmitted in a multiplexed manner. The SA signal can also be referred to as a D2D control channel or a physical sidelink control channel (PSCCH). The D2D data channel (or physical sidelink shared channel (PSSCH)) corresponds to a resource pool used by a transmitting UE to transmit user data. If an SA and D2D data are multiplexed in an identical resource unit, the D2D data channel excluding the SA information can be transmitted only in the resource pool for the D2D data channel. In other words, REs used to transmit SA information in a specific resource unit of an SA resource pool can also be used for transmitting D2D data in a D2D data channel resource pool. The discovery channel may correspond to a resource pool for a message that enables a neighboring UE to discover a transmitting UE that transmits information such as its ID.
Even when the contents of D2D signals are identical, different resource pools may be used according to the transmission/reception attributes of the signals. For example, in case of the same D2D data channel or the same discovery message, the D2D data channel or the discovery signal can be assigned to different resource pools according to the transmission timing determination scheme of the D2D signal (e.g., whether a D2D signal is transmitted at the time of receiving a synchronization reference signal or at that timing plus a prescribed timing advance), the resource allocation scheme (e.g., whether the transmission resource of an individual signal is designated by an eNB or the individual transmitting UE selects it from a pool), the signal format (e.g., the number of symbols occupied by a D2D signal in a subframe, or the number of subframes used for transmitting a D2D signal), the signal strength from an eNB, the transmit power of a D2D UE, and the like. For clarity, a method in which an eNB directly designates the transmission resource of a D2D transmitting UE is referred to as mode 1 (mode 3 in case of V2X). If a transmission resource region is configured in advance or designated by an eNB and the UE directly selects a transmission resource from the region, this is referred to as mode 2 (mode 4 in case of V2X). In case of D2D discovery, if an eNB directly indicates a resource, it is referred to as type 2. If a UE directly selects a transmission resource from a predetermined resource region or a resource region indicated by the eNB, it is referred to as type 1.
A mode-1 UE may transmit an SA (D2D control signal, or sidelink control information (SCI)) in resources configured by an eNB. For a mode-2 UE, the eNB configures resources for D2D transmission. The mode-2 UE may select time-frequency resources from the configured resources and transmit an SA in the selected time-frequency resources.
An SA period may be defined as illustrated in
In vehicle-to-vehicle communication, a Cooperative Awareness Message (CAM) of a periodic message type, a Decentralized Environmental Notification Message (DENM) of an event triggered message type, and the like may be transmitted. The CAM may contain basic vehicle information such as dynamic state information about a vehicle including the direction and speed, static vehicle data such as dimensions, external lighting conditions, and route history. The size of the CAM message may be 50 to 300 bytes. The CAM message shall be broadcast and the latency shall be shorter than 100 ms. The DENM may be a message generated in an unexpected situation such as a vehicle malfunction or an accident. The size of the DENM may be less than 3000 bytes, and any vehicle within the transmission range may receive the message. In this case, the DENM may have a higher priority than the CAM. Having a high priority may mean that when transmissions are triggered simultaneously from the perspective of a UE, a transmission with a higher priority is preferentially performed, or mean that transmission of a message with a higher priority among multiple messages is preferentially attempted in terms of time. From the perspective of multiple UEs, a message with a higher priority may be set to be less subjected to interference than a message with a lower priority to lower the probability of reception errors. When security overhead is included, CAM may have a larger message size than when the security overhead is not included.
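The CAM/DENM prioritization described above can be sketched as a simple queue rule; the class and field names below are illustrative assumptions, not from any V2X standard API.

```python
from dataclasses import dataclass

@dataclass
class V2XMessage:
    kind: str        # "CAM" or "DENM"
    size_bytes: int
    priority: int    # smaller value = higher priority

def next_to_transmit(pending):
    """When transmissions are triggered simultaneously, pick the
    highest-priority (lowest priority value) message first."""
    return min(pending, key=lambda m: m.priority)

cam = V2XMessage("CAM", size_bytes=300, priority=2)     # periodic, 50-300 bytes
denm = V2XMessage("DENM", size_bytes=1200, priority=1)  # event-triggered, < 3000 bytes
print(next_to_transmit([cam, denm]).kind)  # DENM
```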
Similar to radar systems, conventional MMW imaging systems use coherent signals for detection, where the transmitter and the receiver are connected [8]. However, such a system may not be able to work in real time due to the long scanning process. In our scenario, the target vehicle (TV) transmits signals with antennas located around the vehicle body, while the sensing vehicle (SV) recovers the shape information from the received signals. In this scenario, the imaging algorithm allows the TV to transmit all signals together; thus the scanning process is no longer necessary. Moreover, conventional imaging techniques detect the shapes of objects based on the reflection process, while in our work, the shape information is contained in the locations of the transmit antennas. Thus the whole process is more efficient compared with traditional techniques. With the assistance of multiple mirror vehicles (MVs), we deal with a more general problem where the TV may be invisible to the sensing vehicle. Actually, the LoS case is an easier problem, where the receiver can distinguish the signals in LoS by their power levels and get the location information directly. For the NLoS case, a common-point detection approach is proposed to figure out the location information of the TV from signals reflected along multiple paths. It is also shown in this specification that the LoS case is a special case and can be solved in the same way.
Based on the
Based on the received signal, the SV performs synchronization using phase-difference-of-arrival (PDoA) based on the signature waveforms. The synchronization is performed by deriving a synchronization gap between the TV and the SV. The synchronization gap is derived based on the phase difference between the two pairs of signature waveforms. After synchronization, the SV reconstructs one or more virtual images of the TV and derives a real image from the reconstructed virtual images. The one or more virtual images are reconstructed using a 3D Fourier transform. Deriving the real image exploits the fact that the real image is at a position symmetric to the virtual images with respect to a reflecting side of a mirror vehicle. Two common points of the virtual images are x̃1=(x̃1, ỹ1, z̃1) and x̃2=(x̃2, ỹ2, z̃2); the two common points correspond to two specific transmit antennas of the TV. The two common points of the real image are x1=(x1, y1, z1) and x2=(x2, y2, z2). A relation between the coordinates of (x1, x2) is represented as
wherein θ denotes the directed angle from the x-axis to the virtual line between ({tilde over (x)}1 and x1) or ({tilde over (x)}2 and x2).
Hereinafter, the specific explanations are provided for the above embodiments.
As aforementioned, all transmit antennas at the TV simultaneously broadcast the same SFCW waveform denoted by s(t), which is given as
s(t)=[exp(j2πf1t), . . . ,exp(j2πfKt)]T, [equation 1]
where {fk}k=1K represents the set of frequencies with constant gap Δf such that fk=f1+Δf(k−1) for k=1, . . . , K. The received signals at the SV's receive antenna m is expressed as
where rm(l)(t) represents the signal path reflected by the l-th MV as
Here, Γ(l) is the complex reflection coefficient given as Γ(l)=|Γ(l)|exp(j∠Γ(l)), σ is the synchronization gap between the TV and the SV, and τn,m is the signal travel time from transmit antenna n to receive antenna m, proportional to the travel distance dn,m, i.e., dn,m=c·τn,m, where c=3·108 (in meter/sec) is the speed of light. Note that signal path l=0 represents the LoS path, of which the reflection coefficient Γ(0) is one without loss of generality. Last, we assume that the signals reflected by different MVs come from different directions, facilitating differentiation of signals from different MVs according to the angle-of-arrival (AoA). In other words, it is possible to rewrite equation 2 as a K by L+1 matrix Rm(t) as follows:
Rm(t)=[rm(0)(t),rm(1)(t), . . . ,rm(L)(t)]. [equation 4]
The received signal of equation 4 can be demodulated by multiplying D=diag{s(t)H} as
Ym=[ym(0),ym(1), . . . ,ym(L)]=DRm(t), [equation 5]
where ym(l)=ym(l)(t)=[ym(l,1),ym(l,2), . . . ,ym(l,K)]T with the k-th component ym(l,k) being
Moreover, two representative antennas n1 and n2 are selected from the antenna array in the TV, whose coordinates are denoted as x1=(x1, y1, z1) and x2=(x2, y2, z2), respectively. Using the selected antennas, the TV simultaneously transmits signature waveforms with different frequencies {fK+1, fK+2}∉{fk}k=1K at n1 and {fK+3, fK+4}∉{fk}k=1K at n2 [11]. By using a demodulation step similar to equation 6, the SV can distinguish the waveforms as
Image Reconstruction of Target Vehicle
Assume that the TV and the SV are synchronized (σ=0). The above is then rewritten in the following surface integral form:
where |x is an indicator that becomes one if a transmit antenna exists on point x and zero otherwise. D(x, pm) represents the total propagation distance between point x and the location of receive antenna m, denoted by pm. Recalling that the antennas are densely deployed on the surface of the TV, estimating {|x} is equivalent to reconstructing the TV's image, namely,
It is worth noting that in the case of the LoS path (l=0), the distance D(x, pm) is given as the direct distance between x and pm. Thus, the reconstructed image is at the location of the real TV. In the case of NLoS paths (l=1, . . . , L), on the other hand, D(x, pm) is the total distance from x via the MV to pm. Since the SV has no prior information on the MV's location, the reconstructed image could be located differently from the real one, which is called a virtual image (VI) (see
As a result, the proposed TV imaging technique follows three steps:
1) synchronization between the TV and the SV introduced in the sequel,
2) the reconstruction of the image by solving equation 9, and
3) mapping multiple VIs by solving equation 10.
Case 1: LOS Signal Path
In this section, we consider the case when a LoS path between the TV and the SV exists, making it reasonable to ignore the NLoS paths due to the significant power difference between LoS and NLoS paths. Thus only the 1) synchronization and 2) image reconstruction steps are needed.
Synchronization
We apply the techniques of phase-difference-of-arrival (PDoA) [12] and localization to achieve a zero synchronization gap (σ=0), as illustrated in the following.
For synchronization in the LoS scheme, only the signals from the representative antenna n1 are considered. Thus equation 7 can be rewritten as
ym(k)=exp(j2πfk(σ−τm)), m=1, . . . ,M, k=K+1,K+2, [equation 11]
where the indices of path and the transmit antenna n1 are omitted for brevity.
At each receive antenna, the phase difference between the two waveforms is ηm=2πΔ(τm−σ)+εm, where Δ=fK+2−fK+1 and εm is the phase error at receive antenna m. In other words, the travel distance, denoted by dm, is given in terms of ηm as
Noting that the location of the SV's antenna m is known, dm is calculated once four unknown parameters are estimated: the location of the selected TV's antenna, xn=(xn, yn, zn), and the synchronization gap σ. In other words, we can derive xn and σ together if at least four phase-difference measurements are available.
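The algebra linking the phase difference ηm to the travel distance dm can be checked numerically. The sketch below is a noise-free toy, where Δ, σ, and the distance are assumed values; σ is treated as known here only to verify the inversion, not as part of the estimation procedure.

```python
import numpy as np

# Toy check of the PDoA relation: eta_m = 2*pi*delta*(tau_m - sigma),
# inverted as d_m = c*(eta_m/(2*pi*delta) + sigma). All values assumed.
c = 3e8
delta = 40e6                 # gap between the two signature tones (assumed)
sigma = 5e-9                 # synchronization gap (s), assumed for the check
d_true = 12.0                # true travel distance (m)

tau = d_true / c
eta = 2 * np.pi * delta * (tau - sigma)      # ideal, noise-free phase difference

# Invert: tau = eta/(2*pi*delta) + sigma, then d = c*tau.
d_est = c * (eta / (2 * np.pi * delta) + sigma)
assert np.isclose(d_est, d_true)
```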
Denote the location of the transmit antenna n as xn, and the locations of the receive antennas as {pm}m=1M. With the PDoA information, the SV first derives the location of the transmit antenna xn by using a hyperbolic positioning approach [13]. Then the synchronization gap σ can be obtained straightforwardly.
From equation 12, the travel distance difference between two receive antennas mi and mj can be expressed as
which can be rewritten as
where the partial derivative with respect to xn is
Given the initial value of xn as xn,0=(xn,0, yn,0, zn,0), by using the Newton-Raphson method, the first updated solution can be represented as
where the second and higher order terms of the Taylor expansion of Fm are neglected.
Similarly, with M selected antennas, M(M−1)/2 equations can be established, and thus we have
Rewriting equation 19 in matrix form gives
GΔxn,0=b. [equation 20]
By using the MMSE estimator, the estimated value of Δxn,0 can thus be given as
Δ{circumflex over (x)}n,0=(GTG)−1GTb. [equation 21]
The estimated location of xn is then updated as
xn,1=xn,0+Δ{circumflex over (x)}n,0. [equation 22]
By repeating the updating process several times, a sufficient approximate solution for the transmit antenna location xn is obtained.
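The update loop of equations 19 to 22 can be sketched numerically. The toy below forms pairwise range differences to known receive antennas, linearizes around the current guess, and applies a least-squares step in place of the explicit (GTG)−1GTb product; the antenna geometry, initial guess, and iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal Gauss-Newton sketch of the hyperbolic positioning update
# (equations 19-22). Geometry and iteration count are assumptions.
pm = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0],
               [0.0, 0.0, 5.0], [5.0, 5.0, 0.0], [5.0, 0.0, 5.0]])  # receive antennas
x_true = np.array([3.0, 2.0, 4.0])                                  # transmit antenna

pairs = [(i, j) for i in range(len(pm)) for j in range(i + 1, len(pm))]
zeta = np.array([np.linalg.norm(x_true - pm[i]) - np.linalg.norm(x_true - pm[j])
                 for i, j in pairs])              # measured range differences

x = np.array([2.0, 2.0, 2.0])                     # initial guess x_{n,0}
for _ in range(30):
    di = np.array([np.linalg.norm(x - pm[i]) for i, _ in pairs])
    dj = np.array([np.linalg.norm(x - pm[j]) for _, j in pairs])
    b = zeta - (di - dj)                          # residuals
    # Jacobian rows: d/dx (||x - p_i|| - ||x - p_j||)
    G = np.array([(x - pm[i]) / np.linalg.norm(x - pm[i])
                  - (x - pm[j]) / np.linalg.norm(x - pm[j]) for i, j in pairs])
    dx, *_ = np.linalg.lstsq(G, b, rcond=None)    # least-squares step
    x = x + dx                                    # equation 22 update
assert np.allclose(x, x_true, atol=1e-4)
```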
Proposition 1 (Optimality of the Hyperbolic Positioning Approach). The hyperbolic positioning approach above gives an optimal solution, where the original optimization problem is given as
where the constraints consist of M(M−1)/2 equations if M receive antennas are selected, and ζ=[ζ12, ζ13, . . . , ζ23, ζ24, . . . , ζ(M−1)M]. Under the condition that the phase errors εm are negligible, the approach attains the optimum.
Remark 1 (Sampling Requirement). The synchronization procedures above are based on the assumption that the phase gap estimated at each two adjacent receive antennas is no larger than
Thus the distance between each two adjacent receive antennas at the SV needs to satisfy
This requirement can be easily satisfied in practice.
The measurement of the phase difference ηm=2πΔ(τm−σ)+εm may be affected by phase ambiguity when it is larger than 2π. At the SV, equation 12 can be written as
where 0<{circumflex over (η)}m<2π is the phase detected at the receive antenna, and {circumflex over (σ)} is the corresponding synchronization gap satisfying
However, when Remark 1 is satisfied, the phase difference gap ηm between adjacent receive antennas remains unambiguous. Thus the hyperbolic positioning approach still works and dm can be obtained. According to equation 24, the SV can estimate the value of {circumflex over (σ)} with
where the effect of noise is negligible when M is sufficiently large (e.g., M≥32). By compensating the synchronization gap in equation 6 with {circumflex over (σ)}, the signals ym(l,k) at the SV can be expressed as
It can be observed from equation 27 that
if and only if fk=p·Δ, p∈Z, because the estimated synchronization gap ensures that Δ(σ−{circumflex over (σ)}) is an integer.
Remark 2 (Phase Ambiguity Constraints). The imaging algorithm works only when equation 28 holds, so that the phase of the received signals is relevant only to the travel distance or time. As illustrated above, this requires fk=p·Δ, p∈Z, which means all frequencies used for the imaging procedure need to be integer multiples of Δ. Thus Δ is no larger than the frequency step Δf of the signals for imaging.
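Remark 2 can be checked numerically: the sketch below picks a residual gap with Δ(σ−{circumflex over (σ)}) an integer and verifies that every tone with fk=p·Δ sees a leftover phase that wraps to zero. The Δ value and tone indices are assumptions for illustration.

```python
import numpy as np

# Check of Remark 2: if delta*(sigma - sigma_hat) is an integer, then tones
# with f_k = p*delta (p integer) see a phase offset that is a multiple of
# 2*pi, i.e. no residual distortion. Values are assumed.
delta = 50e6
sigma_res = 3 / delta                    # residual gap: delta*sigma_res = 3 (integer)

p = np.arange(540, 604)                  # f_k = p*delta, roughly 27-30 GHz
fk = p * delta
phase = 2 * np.pi * fk * sigma_res       # leftover phase after compensation

# Wrapped phase is (numerically) zero for every such tone.
wrapped = np.angle(np.exp(1j * phase))
assert np.allclose(wrapped, 0.0, atol=1e-6)
```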
Image Reconstruction
The synchronization gap σ can be removed by the preceding step, facilitating the solution of equation 9 in the following. Recall the synchronized demodulation in equation 8, enabling ym(k) to be expressed in a 3D surface integral form as
where √{square root over ((x−pm)(x−pm)T)} represents the Euclidean distance between point x and the location of the SV antenna m, denoted by pm=[xm, ym, z0]. Let fk=[fk(x), fk(y), fk(z)] denote the vector of which the components represent the spatial frequencies to the corresponding directions, namely,
fk=√{square root over ((fk(x))2+(fk(y))2+(fk(z))2)}. [equation 30]
As the spherical wave can be decomposed into an infinite superposition of plane waves [14], the exponential term in equation 27 can be rewritten in terms of fk such that
Changing the order of the integrals in the above leads to
where FT3D(⋅) represents 3D Fourier transform. Note that the term
can be decomposed into each spatial frequency as
By plugging equation 33 into equation 32, we have
where FT2D−1(⋅) represents the 2D inverse Fourier transform. As a result, the estimated indicator, denoted by {tilde over (|)}x, is estimated in the reverse direction as
where FT3D−1 and FT2D represent the 3D inverse Fourier transform and the 2D Fourier transform, respectively. Note that due to the finite deployment of antennas at the TV, the estimated {tilde over (|)}x could be a continuous value in [0,1]. It is thus necessary to map {tilde over (|)}x into either one or zero, namely,
where v represents the detection threshold, which affects the performance of the image reconstruction as described in the simulation part.
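The detection step of equation 36 amounts to elementwise thresholding; a toy sketch, with the threshold value and reconstruction array assumed for illustration:

```python
import numpy as np

# Equation 36 as elementwise thresholding: map the continuous-valued
# reconstruction in [0, 1] to a binary occupancy map. Values are assumed.
v = 0.5
recon = np.array([[0.05, 0.72, 0.91],
                  [0.40, 0.55, 0.10]])          # toy reconstructed indicators
binary = (recon >= v).astype(int)
assert binary.tolist() == [[0, 1, 1], [0, 1, 0]]
```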
Remark 3 (Resampling in Frequency Domain). To calculate the inverse 3D Fourier transform in equation 35, sampling in the frequency domain with a constant interval is necessary. However, due to the nonlinear relation of the frequency components, √{square root over (fkfkT)}=fk, regular samplings on fk(x) and fk(y) lead to an irregular sampling sequence in the fk(z) domain. It is thus required to resample the irregular sampling sequence by using an interpolation to make it regular.
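One simple realization of the resampling in Remark 3 is linear interpolation of the real and imaginary parts onto a uniform fk(z) grid; the tone frequencies, the chosen (fk(x), fk(y)) bin, and the toy spectrum below are assumptions for illustration.

```python
import numpy as np

# Sketch of Remark 3: regular grids on f_x, f_y yield irregular samples on
# f_z = sqrt(f_k^2 - f_x^2 - f_y^2); interpolate back onto a uniform f_z grid.
fk = np.linspace(27e9, 30e9, 64)                 # tone frequencies (assumed)
fx, fy = 2e9, 3e9                                # one (f_x, f_y) bin (assumed)
fz = np.sqrt(fk**2 - fx**2 - fy**2)              # irregular f_z samples

spectrum = np.exp(-1j * 2 * np.pi * fz / 3e8 * 4.0)   # toy samples on f_z

fz_uniform = np.linspace(fz[0], fz[-1], 64)      # regular grid for the IFFT
re = np.interp(fz_uniform, fz, spectrum.real)    # interpolate real and
im = np.interp(fz_uniform, fz, spectrum.imag)    # imaginary parts separately
resampled = re + 1j * im
```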
Remark 4 (Spatial and Range Resolution). Due to the relation of the components, √{square root over (fkfkT)}=fk, the samplings FT2D({ym(k)}m=1M, fk(x), fk(y)) obtained from the 2D Fourier transform are constrained on fk(x) and fk(y), and need to be set to zero if (fk(x))2+(fk(y))2>(fk)2. Thus the spatial resolution is affected by the selected frequency on fk(z), which limits the bandwidth on fk(x) and fk(y). The spatial resolution [9] in the x and y directions can be approximated as
Moreover, the range resolution in z direction is
1). Let θ(l) denote the directed angle from the x-axis (the SV's moving direction) to the virtual line between {tilde over (x)}1(l) and x1 or between {tilde over (x)}2(l) and x2 (see
2). Let φ(l) denote the directed angle from the x-axis to the line segment of VI l as shown in
Proof: See Appendix A.
Some intuitions follow from Lemma 1. First, it is shown in equation 40 that the coordinates of (x1, x2) can be calculated when θ(l) and φ(l) are given. Second, noting that φ(l) and φ(q) are observable from VIs l and q directly, θ(l) is easily calculated when φ is given. Last, φ and the resultant (x1, x2) are said to be correct if another combination of two VIs, for example VI l and VI q, can yield an equivalent result for (x1, x2). As a result, we arrive at the following feasibility condition.
Proposition 1 (Feasibility Condition of Image Reconstruction). To reconstruct the real image of the TV, at least three VIs are required: L≥3.
Proof: See Appendix B.
Suppose L VIs of the same TV are obtained at the SV. First the SV divides the VIs into different couples {VI 1, VI q}, each composed of VI 1 and VI q for q=2, . . . , L. With a given value of φ, the SV can calculate (x1, x2) with each couple of VIs based on equation 42 and equation 43. Thus (L−1) estimates of (x1, x2) can be obtained at the SV, denoted as {({circumflex over (x)}1(q), {circumflex over (x)}2(q))}q=2L. Then the SV searches the angle φ in [−π, π] to minimize
After the searching process, the optimal resultant (x1*, x2*) can be obtained by taking the average of the (L−1) estimates {({circumflex over (x)}1(q), {circumflex over (x)}2(q))}q=2L determined by the optimal φ*. Moreover, based on the optimal resultant, the line function of the reflection side of the MV can be given by
Therefore, the SV can shift the VI to the symmetric position with respect to the reflection side. Thus the image of the TV is obtained at the SV.
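The final shift amounts to reflecting every VI point across the estimated reflection line; a small 2D sketch, where the line coefficients are assumed and z is left untouched since the mirror is taken as vertical:

```python
import numpy as np

# Reflect VI points across a 2D line a*x + b*y + c = 0 (the estimated
# reflection side of the MV). Line coefficients below are assumptions.
def reflect(points, a, b, c):
    """Reflect (x, y) points across the line a*x + b*y + c = 0."""
    pts = np.asarray(points, dtype=float)
    d = (a * pts[:, 0] + b * pts[:, 1] + c) / (a**2 + b**2)
    out = pts.copy()
    out[:, 0] -= 2 * a * d
    out[:, 1] -= 2 * b * d
    return out

vi = np.array([[2.0, 3.0], [4.0, 1.0]])
real = reflect(vi, a=0.0, b=1.0, c=-2.0)     # mirror: the line y = 2
assert np.allclose(real, [[2.0, 1.0], [4.0, 3.0]])
# Reflecting twice returns the original points (an involution).
assert np.allclose(reflect(real, 0.0, 1.0, -2.0), vi)
```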
Case 2: NLOS Signal Path
In this section we consider a more complicated scheme where the TV is blocked by some trucks and thus is invisible to the SV. To recover the image of the TV at the SV, the synchronization and imaging procedures are also necessary and are similar to the LoS scheme. Moreover, a common-point detection approach is used in this scheme to find the location of the TV, because it cannot be obtained directly.
Synchronization
In the synchronization procedure for the NLoS scheme, signals from both representative antennas n1 and n2 are used at the SV. First we consider the signals from antenna n1, where the SV can distinguish the two waveforms as in equation 7. Moreover, the travel distance is given as
where ηm(l)=2πΔ(τm(l)−σ) is the phase difference between the two waveforms. Similar to the synchronization process in the LoS scheme, σ can be derived by using at least four phase-difference measurements, and synchronization can be achieved as well.
The only difference between the synchronization parts of the two schemes is that the travel distance dm(l) in equation 39 is the distance along path l. Note that dm(l) equals the distance from the virtual point {tilde over (x)}1(l)=({tilde over (x)}1(l), {tilde over (y)}1(l), {tilde over (z)}1(l)), which is symmetric to x1 with respect to the l-th MV, to the receive antenna m. Without knowledge of the MVs, the SV regards {tilde over (x)}1(l) as the location of the transmit antenna. Thus {tilde over (x)}1(l) can be derived together with σ, and {tilde over (x)}2(l) can be derived in the same way. Also, because of the unawareness of the MVs, the SV reconstructs VIs in the following imaging procedures.
Virtual Imaging Reconstruction
The SV can apply the same imaging process as in the LoS scheme. The shape information can still be obtained from the reconstructed VIs, while the location of the TV cannot be determined directly from one image. In the following, we show that the shape information can be obtained similarly.
Without loss of generality, we take the signals from path l as an example. The 3D surface integral form of ym(l,k) can be expressed as
where √{square root over (({tilde over (x)}−pm)({tilde over (x)}−pm)T)} represents the Euclidean distance from point {tilde over (x)}, which is symmetric to x with respect to the l-th MV, to the location of the SV antenna m, denoted by pm=[xm, ym, z0]. Then, decomposing the spherical wave into an infinite superposition of plane waves, the exponential term in equation 40 can be rewritten as
From the comparison of equation 41 and equation 31, it is easy to infer that exp(j∠Γ(l)){|{tilde over (x)}} can be obtained by taking the same procedures as in the LoS scheme. Although these results are affected by the phase change due to reflection, the shape information is still retained because exp(j∠Γ(l)) only affects the phase of the received signals. Thus in the NLoS scheme, the SV is still able to recover the VIs of the TV with the imaging algorithm introduced in the LoS scheme. However, accurate location information of the TV cannot be obtained from only one VI. Therefore, we introduce a common-point detection approach assisted by multiple VIs to locate the TV in the next subsection.
Reconstructing Real Image from Multiple Virtual Images
In this subsection, we aim at reconstructing the TV's real image from multiple VIs. To this end, we utilize the fact that the two representative points in a VI, whose coordinates are {tilde over (x)}1=({tilde over (x)}1, {tilde over (y)}1, {tilde over (z)}1) and {tilde over (x)}2=({tilde over (x)}2, {tilde over (y)}2, {tilde over (z)}2), have geographical relations with the counterpart points on the real image, denoted by x1=(x1, y1, z1) and x2=(x2, y2, z2), which are summarized in Lemma 1. Using these properties, the feasibility condition of real image reconstruction is derived and the corresponding algorithm is designed based on it.
Lemma 1. Consider VIs l and q whose representative points are {{tilde over (x)}i(l)}i=12 and {{tilde over (x)}i(q)}i=12, respectively, which have relations with the counterpart points of the real image, denoted by (x1, x2), as follows.
Remark 5 (Existence of LoS Paths). The SV may not be able to distinguish the signals from LoS under some conditions. Note that the LoS case is a special realization of the NLoS case, where one couple of representative points (x1(0), x2(0)) coincides with the correct resultant (x1, x2). Therefore, all mathematical expressions in the common-point detection approach still hold in the LoS condition, and the resultant (x1*, x2*) can be obtained in the same way. Moreover, a threshold ε with an appropriate value is set at the SV to detect the LoS condition. Once ∥x1*−x1(0)∥+∥x2*−x2(0)∥<ε, the SV detects the LoS condition and treats VI 0 as the image of the TV.
In the simulations, some figures are given to show the performance of the imaging algorithm, which also helps illustrate the whole procedure. Signature waveforms at 6 frequencies are used for the synchronization procedure. The frequencies used in the imaging processing are from 27 GHz to 30 GHz in the Ka band, with Δf=46.87 MHz and K=64. The number of transmit antennas at the TV is 300, and the number of receive antennas in the receive aperture is 16×16=256. The receive aperture size is 1 m×1 m.
Firstly, we introduce the metric used to measure the performance of the images. We use the Hausdorff distance [15] as the performance metric to evaluate the images obtained with appropriate detection thresholds. The Hausdorff distance is widely used in image registration and is given as
H(A,B)=max(h(A,B),h(B,A)), [equation 45]
where
is the direct Hausdorff distance from B to A. The Hausdorff distance measures the similarity between two images. We show that the Hausdorff distance between the VIs and the TV model set in advance is quite small, which means the two images are interchangeable. Thus the SV can capture the shape of the TV from the VIs.
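Equations 45 and 46 translate directly into code for small point sets; the point sets below are toy assumptions for illustration.

```python
import numpy as np

# Direct implementation of equations 45-46: h(A, B) is the largest
# nearest-neighbour distance from A into B, and H = max(h(A,B), h(B,A)).
def hausdorff(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise
    h_ab = d.min(axis=1).max()          # h(A, B)
    h_ba = d.min(axis=0).max()          # h(B, A)
    return max(h_ab, h_ba)

A = [[0.0, 0.0], [1.0, 0.0]]
B = [[0.0, 0.1], [1.0, 0.0], [3.0, 0.0]]
# B's point (3, 0) is 2.0 away from its nearest neighbour in A.
assert np.isclose(hausdorff(A, B), 2.0)
```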
(a) Point model of the TV, reflection surface, and the receive aperture.
(b) Envelope diagram of the TV.
and the aperture of the SV is also plotted. The receive aperture is parallel to the y axis and centered at (0, 0, 0). The envelope diagram is also given in
Without loss of generality, signals received from the direction of the 1-st VI are selected. Therefore, the SV can receive the signals reflected by the 1-st MV and distinguish the signature waveforms from the common points in the 1-st VI. Then the synchronization procedures are performed and the locations of the common points can be determined. After synchronization, the SV can reconstruct the 1-st VI based on the received signals. The performance of the VIs is shown in
When other AoAs are selected, other VIs can be reconstructed in the same way. Note that the synchronization procedure needs to be performed every time before imaging, because the synchronization procedure also helps recognize whether the signals are from the same TV, which indicates whether the images belong to the same vehicle.
After the AoA selection, synchronization, and imaging processing procedures, the SV collects the location information of all the common points from the different VIs and uses the common-point detection approach to locate the TV. Then the relation between the TV and all the VIs can be determined. Here we select the 1-st reconstructed VI and shift it to the estimated location of the TV. The Hausdorff distance between the reconstructed image and the ideal TV model is H(A,B)=0.1262 m, which is relatively small compared to the size of the TV model. The performance is shown in
(a) Reconstructed TV image and the initial TV model.
(b) Envelope diagram of the reconstructed TV image.
APPENDIX A: PROOF OF LEMMA 1
In
are
Thus we have
The general relations in equation 42 can be derived similarly.
Due to the symmetric relation between the VIs and the TV, we have
π−φ=φ(l)−2θ(l)=φ(q)−2θ(q), [equation 49]
which can be observed from
APPENDIX B: PROOF OF PROPOSITION 1
Substituting equation 49 into equation 47, x1 can be simplified as
which indicates that x1 is determined only by the angle φ∈[−π, π]. Given the angle φ, the estimation of x1 from equation 50 is denoted as {circumflex over (x)}1.
Similarly, another estimation of x1 can be obtained from the common points of another couple of VIs, which is given as
The estimation of x1 from equation 51 is denoted as {tilde over (x)}1, and it can be observed from equation 51 that {tilde over (x)}1 is also determined only by φ. Therefore, the SV can search φ in the range [−π, π] to minimize |{circumflex over (x)}1−{tilde over (x)}1|. Two solutions can be obtained with the optimal φ, which are denoted as {circumflex over (x)}1* and {tilde over (x)}1*. Then the optimal solution for x1 can be obtained as
With x1*, y1 and z1 can be calculated according to equation 42. Thus the location of x1 is obtained. The location of the point x2 can be derived in the same way. Therefore, three VIs are enough for the SV to reconstruct the real image of the TV. This finishes the proof.
The above embodiment can be summarized as follows.
1). What signal/message should be transmitted?
The stepped-frequency continuous wave (SFCW) [9] at the millimeter-wave range has been widely used in radar systems to detect targets at different distances. It is composed of continuous wave (CW) signals at multiple frequencies, each separated by a constant amount. In our work, all transmit antennas at the TV simultaneously broadcast the same SFCW waveform denoted by s(t), which is given as
s(t)=[exp(j2πf1t), . . . ,exp(j2πfKt)]T, (2)
where {fk}k=1K represents the set of frequencies with constant gap Δf such that fk=f1+Δf(k−1) for k=1, . . . , K.
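As a concrete illustration, the SFCW vector can be sampled numerically; the sketch below uses assumed parameter values matching the simulation section (27 GHz start frequency, Δf=46.87 MHz, K=64) and is not part of the specification itself.

```python
import numpy as np

# Illustrative sketch of the SFCW vector s(t): K unit-magnitude CW tones
# with constant frequency gap delta_f. Parameter values are assumptions
# taken from the simulation section.
f1 = 27e9
delta_f = 46.87e6
K = 64

fk = f1 + delta_f * np.arange(K)        # f_k = f_1 + (k-1)*delta_f
t = np.linspace(0.0, 1e-6, 8)           # a few sampling instants (s)

# s stacked over time: entry (k, i) is exp(j*2*pi*f_k*t_i).
s = np.exp(1j * 2 * np.pi * np.outer(fk, t))

# Every tone has unit magnitude, as a pure CW component should.
assert np.allclose(np.abs(s), 1.0)
```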
Moreover, two representative antennas n1 and n2 are selected from the antenna array in the TV, whose coordinates are denoted as x1=(x1, y1, z1) and x2=(x2, y2, z2), respectively. Using the selected antennas, the TV simultaneously transmits signature waveforms with different frequencies {fK+1, fK+2}∉{fk}k=1K at n1 and {fK+3, fK+4}∉{fk}k=1K at n2 [11].
2). In 1), is there any specific condition or assumption?
a. The transmitted signals are continuous wave, and should be in the millimeter-wave range to ensure the reflection property on the metal surface or tinted glass.
b. The signals for imaging and the signals for synchronization are transmitted together.
c. After the synchronization procedure, the estimated synchronization gap {circumflex over (σ)} may not be the real synchronization gap σ due to the phase ambiguity. Therefore, only signals at some specific frequencies can achieve synchronization, which sets a constraint on the frequency gap Δ=fK+2−fK+1: the frequency step Δf of the signals for imaging needs to be an integer multiple of Δ.
d. We assume that the signals reflected by different MVs come from different directions, which is generally met in practice.
e. We assume that the signals experience ‘mirror’ reflection on the surfaces of nearby vehicles. This is because signals in the millimeter-wave range are quite directional and have good optical properties. Their wavelength is short compared with signals at lower frequencies, such as WiFi signals, which gives this technique good resolution [14]. Moreover, compared with newer techniques such as LiDAR, millimeter-wave shows better reflection properties on metal surfaces [16], [17] and thus can help handle the NLoS condition. The reflection coefficient of millimeter-wave on a metal surface is approximately 1 in power, and the phase shift is approximately constant if the incident angle is not too close to 0° or 90°. Therefore, considering that the vehicle surface is flat in most cases, millimeter-wave will perform well in capturing the shape information of the TV in both the LoS and NLoS conditions.
3). What receiver behavior should be performed when the signal in 1) is received?
A. Receiver internal operation
B. What response message should be transmitted?
A. In the first step, the receiver demodulates the received signals with asynchronous signals at different frequencies. Therefore, signals at different frequencies can be differentiated easily.
Secondly, the demodulated signals at frequencies {fk}k=1K are stored to await synchronization. The receiver also passes the signals at frequencies fK+1 and fK+2 through the phase detector. The detected phases are used in the synchronization part, and an estimate {circumflex over (σ)} of the synchronization gap σ is obtained. Synchronization can be achieved by compensating the synchronization gap in equation 6 with {circumflex over (σ)}. After synchronization is achieved, the receiver can apply the imaging algorithm to these signals directly.
If it is an NLoS condition (the general condition), then the SV also needs to locate the TV with the location information obtained in the synchronization step, and the recovered image is shifted to the correct location in the end.
The transmission process is illustrated in detail in the system model section, and the synchronization, imaging, and location detection procedures are given in the following sections.
B. The SV does not send any feedback to the TV. To capture the shape information of the TV, the SV only needs to receive the waveforms from the TV. Meanwhile, considering the high mobility of the vehicles, it may not be easy to establish a stable link between the TV and the SV, so utilizing waveforms for sensing is more practical.
4) From above procedures, what benefit can be achieved?
Firstly, the synchronization problem can be solved by using signature waveforms at two frequencies in the LoS condition or four frequencies in the NLoS condition (the LoS condition can be included in the NLoS condition). The synchronization algorithm is easy to apply and works efficiently.
Secondly, we apply the imaging algorithm to capture the shape information. The imaging algorithm evolved from imaging techniques applied in radar systems [4]-[7], where the transmitters and the receivers are at the same terminal. All these radar systems need long scanning times and can hardly achieve real-time results. However, in our work, the shape information is contained in the locations of the transmit antennas, and the reflection occurs on the surfaces of the MVs, which are flat in most cases. Therefore, the TV can transmit all the signals simultaneously, and the SV can recover its shape information quickly, as well as the location information.
Referring to
The processor 13 of the transmission point apparatus 10 may also perform a function of computationally processing information received by the transmission point apparatus 10 and information to be transmitted to the outside, and the memory 14 may store the computationally processed information and the like for a predetermined time, and may be replaced by a component such as a buffer (not shown).
Next, referring to
The processor 23 of the UE 20 according to one embodiment may process necessary details in each of the above-described embodiments. Specifically, the processor may receive error information related to the location of a second UE, determine one or more of a beam direction, a range for performing beam search, and an order in which the beam search is to be performed using the error information about the second UE, perform beam search according to the determination, and transmit a signal to the second UE through beamforming according to a result of the beam search. The processor 23 of the UE 20 may also perform a function of computationally processing information received by the UE 20 and information to be transmitted to the outside, and the memory 24 may store the computationally processed information and the like for a predetermined time and may be replaced by a component such as a buffer (not shown).
The specific configuration of the transmission point apparatus and the UE may be implemented such that the details described in the various embodiments of the present invention may be applied independently or implemented such that two or more of the embodiments are applied at the same time. For clarity, redundant description is omitted.
In the example of
The embodiments of the present disclosure may be implemented through various means, for example, hardware, firmware, software, or a combination thereof.
In a hardware configuration, the embodiments of the present disclosure may be achieved by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, etc.
In a firmware or software configuration, a method according to embodiments of the present disclosure may be implemented in the form of a module, a procedure, a function, etc. Software code may be stored in a memory unit and executed by a processor. The memory unit is located at the interior or exterior of the processor and may transmit and receive data to and from the processor via various known means.
As described before, a detailed description has been given of preferred embodiments of the present disclosure so that those skilled in the art may implement and perform the present disclosure. While reference has been made above to the preferred embodiments of the present disclosure, those skilled in the art will understand that various modifications and alterations may be made to the present disclosure within the scope of the present disclosure. For example, those skilled in the art may use the components described in the foregoing embodiments in combination. The above embodiments are therefore to be construed in all aspects as illustrative and not restrictive. The scope of the disclosure should be determined by the appended claims and their legal equivalents, not by the above description, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.
Those skilled in the art will appreciate that the present disclosure may be carried out in other specific ways than those set forth herein without departing from the spirit and essential characteristics of the present disclosure. The above embodiments are therefore to be construed in all aspects as illustrative and not restrictive. The scope of the disclosure should be determined by the appended claims and their legal equivalents, not by the above description, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein. It is apparent to those skilled in the art that claims that are not explicitly cited in each other in the appended claims may be presented in combination as an embodiment of the present disclosure or included as a new claim by a subsequent amendment after the application is filed.
[1] Y. Maalej, S. Sorour, A. Abdel-Rahim, and M. Guizani, “Vanets meet autonomous vehicles: Multimodal surrounding recognition using manifold alignment,” IEEE Access, vol. 6, pp. 29026-29040, 2018.
[2] G. L. Foresti and C. S. Regazzoni, “Multisensor data fusion for autonomous vehicle navigation in risky environments,” IEEE Transactions on Vehicular Technology, vol. 51, pp. 1165-1185, September 2002.
[3] M. A. Abidi and R. C. Gonzalez, “The use of multisensor data for robotic applications,” IEEE Transactions on Robotics and Automation, vol. 6, pp. 159-177, April 1990.
[4] R. K. Raney, H. Runge, R. Bamler, I. G. Cumming, and F. H. Wong, “Precision SAR processing using chirp scaling,” IEEE Transactions on Geoscience and Remote Sensing, vol. 32, pp. 786-799, July 1994.
[5] J. M. Munoz-Ferreras, J. Calvo-Gallego, F. Perez-Martinez, A. B. del Campo, A. Asensio-Lopez, and B. P. Dorta-Naranjo, “Motion compensation for isar based on the shift-and-convolution algorithm,” in 2006 IEEE Conference on Radar, pp. 5 pp.-, April 2006.
[6] Q. Cheng, A. Alomainy, and Y. Hao, “Near-field millimeter-wave phased array imaging with compressive sensing,” IEEE Access, vol. 5, pp. 18975-18986, 2017.
[7] Y. Álvarez, Y. Rodriguez-Vaqueiro, B. Gonzalez-Valdes, C. M. Rappaport, F. Las-Heras, and J. Martínez-Lorenzo, “Three-dimensional compressed sensing-based millimeter-wave imaging,” IEEE Transactions on Antennas and Propagation, vol. 63, pp. 5868-5873, December 2015.
[8] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, and K. P. Papathanassiou, “A tutorial on synthetic aperture radar,” IEEE Geoscience and Remote Sensing Magazine, vol. 1, pp. 6-43, March 2013.
[9] C. Nguyen and J. Park, Stepped-Frequency Radar Sensors: Theory, Analysis and Design. Springer International Publishing, 2016.
[10] S. R. Saunders and S. R. Simon, Antennas and Propagation for Wireless Communication Systems. New York, N.Y., USA: John Wiley & Sons, Inc., 1st ed., 1999.
[11] D. Cohen, D. Cohen, Y. C. Eldar, and A. M. Haimovich, “Sub-Nyquist pulse doppler MIMO radar,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3201-3205, March 2017.
[12] P. V. Nikitin, R. Martinez, S. Ramamurthy, H. Leland, G. Spiess, and K. V. S. Rao, “Phase based spatial identification of UHF RFID tags,” in 2010 IEEE International Conference on RFID (IEEE RFID 2010), pp. 102-109, April 2010.
[13] K. Fujii, Y. Sakamoto, W. Wang, H. Arie, A. Schmitz, and S. Sugano, “Hyperbolic positioning with antenna arrays and multi-channel pseudolite for indoor localization,” Sensors, vol. 15, no. 10, pp. 25157-25175, 2015.
[14] D. M. Sheen, D. L. McMakin, and T. E. Hall, “Three-dimensional millimeter-wave imaging for concealed weapon detection,” IEEE Transactions on Microwave Theory and Techniques, vol. 49, pp. 1581-1592, September 2001.
[15] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, “Comparing images using the Hausdorff distance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, pp. 850-863, September 1993.
[16] H. Zhao, R. Mayzus, S. Sun, M. Samimi, J. K. Schulz, Y. Azar, K. Wang, G. N. Wong, F. Gutierrez, and T. S. Rappaport, “28 GHz millimeter wave cellular communication measurements for reflection and penetration loss in and around buildings in new york city,” in 2013 IEEE International Conference on Communications (ICC), pp. 5163-5167, June 2013.
[17] I. Cuinas, D. Martinez, M. G. Sanchez, and A. V. Alejos, “Modelling and measuring reflection due to flat dielectric at 5.8 GHz,” IEEE Transactions on Antennas and Propagation, vol. 55, pp. 1139-1147, April 2007.
The above-described embodiments of the present disclosure are applicable to various mobile communication systems.
Number | Date | Country | Kind
---|---|---|---
10-2018-0089012 | Jul 2018 | KR | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/KR2019/009571 | 7/31/2019 | WO | 00