The present disclosure relates generally to an electronic system and method, and, in particular embodiments, to a radar-based tracker for target sensing.
Applications in the millimeter-wave frequency regime have gained significant interest in the past few years due to the rapid advancement in low cost semiconductor technologies, such as silicon germanium (SiGe) and fine geometry complementary metal-oxide semiconductor (CMOS) processes. Availability of high-speed bipolar and metal-oxide semiconductor (MOS) transistors has led to a growing demand for integrated circuits for millimeter-wave applications at e.g., 24 GHz, 60 GHz, 77 GHz, and 80 GHz and also beyond 100 GHz. Such applications include, for example, automotive radar systems and multi-gigabit communication systems.
In some radar systems, the distance between the radar and a target is determined by transmitting a frequency modulated signal, receiving a reflection of the frequency modulated signal (also referred to as the echo), and determining a distance based on a time delay and/or frequency difference between the transmission and reception of the frequency modulated signal. Accordingly, some radar systems include a transmit antenna to transmit the radio-frequency (RF) signal, and a receive antenna to receive the reflected RF signal, as well as the associated RF circuits used to generate the transmitted signal and to receive the RF signal. In some cases, multiple antennas may be used to implement directional beams using phased array techniques. A multiple-input and multiple-output (MIMO) configuration with multiple chipsets can be used to perform coherent and non-coherent signal processing as well.
In accordance with an embodiment, a method for tracking targets includes: receiving data from a radar sensor of a radar; processing the received data to detect targets; identifying a first geometric feature of a first detected target at a first time step, the first detected target being associated to a first track; identifying a second geometric feature of a second detected target at a second time step; determining an error value based on the first and second geometric features; and associating the second detected target to the first track based on the error value.
In accordance with an embodiment, a radar system includes: a millimeter-wave radar sensor including: a transmitting antenna configured to transmit radar signals; first and second receiving antennas configured to receive reflected radar signals; an analog-to-digital converter (ADC) configured to generate, at an output of the ADC, raw digital data based on the reflected radar signals; and a processing system configured to process the raw digital data to: detect targets, identify a first geometric feature of a first detected target at a first time step, identify a second geometric feature of a second detected target at a second time step, determine an error value based on the first and second features, and associate the second detected target to a first track associated to the first detected target based on the error value.
In accordance with an embodiment, a method includes: receiving data from a radar sensor of a radar; processing the received data to detect humans; clustering detected humans into clusters of cells using k-means clustering to generate a plurality of clusters; identifying a first geometric feature of a first cluster of the plurality of clusters at a first time step; identifying a second geometric feature of a second cluster of the plurality of clusters at a second time step, where the first and second time steps are consecutive time steps; determining an error value based on the first and second features; and associating the second cluster to a first track associated to the first cluster based on the error value.
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Corresponding numerals and symbols in different figures generally refer to corresponding parts unless otherwise indicated. The figures are drawn to clearly illustrate the relevant aspects of the preferred embodiments and are not necessarily drawn to scale.
The making and using of the embodiments disclosed are discussed in detail below. It should be appreciated, however, that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use the invention, and do not limit the scope of the invention.
The description below illustrates the various specific details to provide an in-depth understanding of several example embodiments according to the description. The embodiments may be obtained without one or more of the specific details, or with other methods, components, materials and the like. In other cases, known structures, materials or operations are not shown or described in detail so as not to obscure the different aspects of the embodiments. References to “an embodiment” in this description indicate that a particular configuration, structure or feature described in relation to the embodiment is included in at least one embodiment. Consequently, phrases such as “in one embodiment” that may appear at different points of the present description do not necessarily refer exactly to the same embodiment. Furthermore, specific formations, structures or features may be combined in any appropriate manner in one or more embodiments.
Embodiments of the present invention will be described in a specific context, a millimeter-wave radar-based tracker for people sensing. Embodiments of the present invention may be used for tracking other targets (e.g., animals, vehicles, robots, etc.) and/or may operate in regimes different than millimeter-wave.
In an embodiment of the present invention, a millimeter-wave radar is used to track human targets based on features extracted from the detected targets. In some embodiments, some or all of the extracted features used to associate a target to a track are not based on a motion model. Thus, some embodiments are advantageously capable of detecting human targets without knowledge (or with little knowledge) of the motion and/or localization (actual or predicted) of the human targets. Therefore, some embodiments are advantageously capable of tracking human targets using low frame-rates. In some embodiments, using low frame-rates advantageously allows for power savings, which may extend battery life in battery powered applications, and/or may advantageously allow for compliance with regulatory requirements (such as FCC requirements) associated with maximum duty cycle (maximum frame rate) for the radar operation, without sacrificing tracking performance.
In some embodiments, features, such as range, Doppler, and/or angle, are additionally tracked and are also used for associating detected targets to tracks based on a motion model, which may advantageously increase the tracking performance. For example, the range and Doppler velocity at a previous time step may be used to predict the location of the target at a future time step, and such information may be used to increase the confidence that a target assignment is correct (e.g., by using a gating region of expected locations for the target at the future time step).
A radar, such as a millimeter-wave radar, may be used to detect and track humans. For example,
During normal operation, millimeter-wave radar sensor 102 operates as a frequency-modulated continuous-wave (FMCW) radar sensor and transmits a plurality of TX radar signals 106, such as chirps, towards scene 120 using transmitter (TX) antenna 114. The radar signals 106 are generated using RF and analog circuits 130. The radar signals 106 may be in the 20 GHz to 122 GHz range. The objects in scene 120 may include one or more humans, which may be moving or idle, for example. Other moving or static objects, such as furniture, machinery, mechanical structures, walls, etc., may also be present in scene 120.
The radar signals 106 are reflected by objects in scene 120. The reflected radar signals 108, which are also referred to as the echo signal, are received by receiver (RX) antennas 116a and 116b. RF and analog circuits 130 process the received reflected radar signals 108 using, e.g., band-pass filters (BPFs), low-pass filters (LPFs), mixers, low-noise amplifiers (LNAs), and/or intermediate frequency (IF) amplifiers in ways known in the art to generate analog signals xouta(t) and xoutb(t).
The analog signals xouta(t) and xoutb(t) are converted to raw digital data xout_dig(n) using ADC 112. The raw digital data xout_dig(n) is processed by processing system 104 to detect humans and their positions, and to track the detected humans.
Although
Although
Controller 110 controls one or more circuits of millimeter-wave radar sensor 102, such as RF and analog circuit 130 and/or ADC 112. Controller 110 may be implemented, e.g., as a custom digital or mixed signal circuit, for example. Controller 110 may also be implemented in other ways, such as using a general purpose processor or controller, for example. In some embodiments, processing system 104 implements a portion or all of controller 110.
Processing system 104 may be implemented with a general purpose processor, controller or digital signal processor (DSP) that includes, for example, combinatorial circuits coupled to a memory. In some embodiments, processing system 104 may be implemented as an application specific integrated circuit (ASIC). In some embodiments, processing system 104 may be implemented with an ARM, RISC, or x86 architecture, for example. In some embodiments, processing system 104 may include an artificial intelligence (AI) accelerator. Some embodiments may use a combination of hardware accelerator and software running on a DSP or general purpose microcontroller. Other implementations are also possible.
In some embodiments, millimeter-wave radar sensor 102 and a portion or all of processing system 104 may be implemented inside the same integrated circuit (IC). For example, in some embodiments, millimeter-wave radar sensor 102 and a portion or all of processing system 104 may be implemented in respective semiconductor substrates that are integrated in the same package. In other embodiments, millimeter-wave radar sensor 102 and a portion or all of processing system 104 may be implemented in the same monolithic semiconductor substrate. Other implementations are also possible.
As a non-limiting example, RF and analog circuits 130 may be implemented, e.g., as shown in
The TX radar signal 106 transmitted by transmitting antenna 114 is reflected by objects in scene 120 and received by receiving antennas 116a and 116b. The echoes received by receiving antennas 116a and 116b are mixed with a replica of the signal transmitted by transmitting antenna 114 using mixers 146a and 146b, respectively, to produce respective intermediate frequency (IF) signals xIFa(t) and xIFb(t) (also known as beat signals). In some embodiments, the beat signals xIFa(t) and xIFb(t) have a bandwidth between 10 kHz and 1 MHz. Beat signals with a bandwidth lower than 10 kHz or higher than 1 MHz are also possible.
Beat signals xIFa(t) and xIFb(t) are filtered with respective low-pass filters (LPFs) 148a and 148b and then sampled by ADC 112. ADC 112 is advantageously capable of sampling the filtered beat signals xouta(t) and xoutb(t) with a sampling frequency that is much smaller than the frequency of the signal received by receiving antennas 116a and 116b. Using FMCW radars, therefore, advantageously allows for a compact and low cost implementation of ADC 112, in some embodiments.
The raw digital data xout_dig(n), which in some embodiments includes the digitized versions of the filtered beat signals xouta(t) and xoutb(t), is (e.g., temporarily) stored, e.g., in matrices of Nc×Ns per receiver antenna 116, where Nc is the number of chirps considered in a frame and Ns is the number of transmit samples per chirp, for further processing by processing system 104.
In some embodiments, ADC 112 is a 12-bit ADC with multiple inputs. ADCs with higher resolution, such as 14-bits or higher, or with lower resolution, such as 10-bits, or lower, may also be used. In some embodiments, an ADC per receiver antenna may be used. Other implementations are also possible.
As shown in
Frames are repeated every FT time. In some embodiments, FT time is 50 ms. A different FT time may also be used, such as more than 50 ms, such as 60 ms, 100 ms, 200 ms, or more, or less than 50 ms, such as 45 ms, 40 ms, or less.
In some embodiments, the FT time is selected such that the time between the beginning of the last chirp of frame n and the beginning of the first chirp of frame n+1 is equal to PRT. Other embodiments may use or result in a different timing.
The time between chirps of a frame is generally referred to as pulse repetition time (PRT). In some embodiments, the PRT is 5 ms. A different PRT may also be used, such as less than 5 ms, such as 4 ms, 2 ms, or less, or more than 5 ms, such as 6 ms, or more.
The duration of the chirp (from start to finish) is generally referred to as chirp time (CT). In some embodiments, the chirp time may be, e.g., 64 μs. Higher chirp times, such as 128 μs or higher, may also be used. Lower chirp times may also be used.
In some embodiments, the chirp bandwidth may be, e.g., 4 GHz. Higher bandwidth, such as 6 GHz or higher, or lower bandwidth, such as 2 GHz, 1 GHz, or lower, may also be possible.
In some embodiments, the sampling frequency of millimeter-wave radar sensor 102 may be, e.g., 1 MHz. Higher sampling frequencies, such as 2 MHz or higher, or lower sampling frequencies, such as 500 kHz or lower, may also be possible.
In some embodiments, the number of samples used to generate a chirp may be, e.g., 64 samples. A higher number of samples, such as 128 samples, or higher, or a lower number of samples, such as 32 samples or lower, may also be used.
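As an illustrative, non-limiting sketch of how such example parameters relate to radar performance, the snippet below computes the range resolution and the maximum unambiguous Doppler velocity from the example chirp bandwidth and PRT using standard FMCW relations. The center frequency used here is only an assumed example value and is not specified by the embodiments above.

```python
# Illustrative sketch: derived FMCW quantities from the example parameters above.
C0 = 3e8                  # speed of light, m/s

bandwidth_hz = 4e9        # example chirp bandwidth (4 GHz)
prt_s = 5e-3              # example pulse repetition time (5 ms)
center_freq_hz = 60e9     # assumed millimeter-wave center frequency (example only)

# Range resolution of an FMCW chirp: delta_R = c / (2 * B)
range_resolution_m = C0 / (2 * bandwidth_hz)

# Maximum unambiguous Doppler velocity: |v| < lambda / (4 * PRT)
wavelength_m = C0 / center_freq_hz
max_unambiguous_velocity_mps = wavelength_m / (4 * prt_s)

print(f"range resolution: {range_resolution_m * 100:.2f} cm")
print(f"max unambiguous velocity: {max_unambiguous_velocity_mps * 100:.1f} cm/s")
```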
During steps 302a and 302b, raw ADC data xout_dig(n) is received, e.g., from millimeter-wave radar sensor 102. As shown, the raw ADC data xout_dig(n) includes separate baseband radar data from multiple antennas (e.g., 2 in the example shown in
During steps 304a and 304b, signal conditioning, low-pass filtering, and background removal are performed on the raw ADC data of the respective antenna 116. The raw ADC data xout_dig(n) are filtered and DC components are removed to, e.g., remove the Tx-Rx self-interference, and the interference colored noise is optionally pre-filtered. Filtering may include removing data outliers that have significantly different values from other neighboring range-gate measurements. Thus, this filtering also serves to remove background noise from the radar data.
During steps 306a and 306b, 2D moving target indication (MTI) filters are respectively applied to data produced during steps 304a and 304b to remove the response from static targets. The MTI filtering may be performed by subtracting the mean along the fast-time (intra-chirp time) to remove the transmitter-receiver leakage that perturbs the first few range bins, followed by subtracting the mean along the slow-time (inter-chirp time) to remove the reflections from static objects (or zero-Doppler targets).
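A minimal illustrative sketch of this 2D MTI filtering is shown below, assuming a frame is stored as an Nc×Ns complex matrix (chirps × samples per chirp) as described above. It follows the description of steps 306a and 306b but is only an example implementation, not the only possible one.

```python
import numpy as np

def mti_filter(frame: np.ndarray) -> np.ndarray:
    """2D moving-target-indication filter on one radar frame.

    frame: complex array of shape (num_chirps, num_samples), i.e. slow-time x fast-time.
    """
    # Remove the mean along fast-time (per chirp) to suppress Tx-Rx leakage
    # that perturbs the first few range bins.
    out = frame - frame.mean(axis=1, keepdims=True)
    # Remove the mean along slow-time (per range sample) to suppress
    # reflections from static (zero-Doppler) objects.
    out = out - out.mean(axis=0, keepdims=True)
    return out
```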
During steps 308a and 308b, a series of FFTs are performed on the filtered radar data produced during steps 306a and 306b, respectively. A first windowed FFT having a length of the chirp is calculated along each waveform for each of a predetermined number of chirps in a frame of data. The FFTs of each waveform of chirps may be referred to as a "range FFT." A second FFT is calculated across each range bin over a number of consecutive periods to extract Doppler information. After performing each 2D FFT during steps 308a and 308b, range-Doppler images are produced, respectively.
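The 2D FFT of steps 308a and 308b can be sketched as follows: a windowed FFT along fast-time per chirp (the range FFT), followed by an FFT along slow-time per range bin (the Doppler FFT). The window choice (Hann) and the zero-Doppler centering below are illustrative assumptions.

```python
import numpy as np

def range_doppler_image(frame: np.ndarray) -> np.ndarray:
    """Compute a range-Doppler image from one MTI-filtered frame.

    frame: complex array of shape (num_chirps, num_samples).
    Returns a (num_chirps, num_samples) complex range-Doppler map.
    """
    num_chirps, num_samples = frame.shape
    # Range FFT: windowed FFT along fast-time (one FFT per chirp).
    range_window = np.hanning(num_samples)
    range_fft = np.fft.fft(frame * range_window, axis=1)
    # Doppler FFT: FFT along slow-time (one FFT per range bin),
    # shifted so zero Doppler is centered.
    doppler_window = np.hanning(num_chirps)[:, None]
    rdi = np.fft.fftshift(np.fft.fft(range_fft * doppler_window, axis=0), axes=0)
    return rdi
```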
During step 310, a minimum variance distortionless response (MVDR) technique, also known as Capon, is used to determine angle of arrival based on the range and Doppler data from the different antennas. A range-angle image (RAI) is generated during step 310. In some embodiments, a range-Doppler-angle data cube is generated during step 310.
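A sketch of the Capon (MVDR) angle estimate is shown below for a small uniform linear array: for a given range-Doppler cell, a spatial covariance matrix is formed from the per-antenna values and the Capon pseudo-spectrum P(θ) = 1 / (aᴴ R⁻¹ a) is evaluated over candidate angles. The half-wavelength antenna spacing and the diagonal-loading term are illustrative assumptions, not requirements of step 310.

```python
import numpy as np

def capon_angle_spectrum(snapshots: np.ndarray, angles_deg: np.ndarray,
                         spacing_wavelengths: float = 0.5,
                         loading: float = 1e-3) -> np.ndarray:
    """Capon/MVDR pseudo-spectrum for a uniform linear array.

    snapshots: complex array of shape (num_antennas, num_snapshots), e.g. the
               values of one range-Doppler cell across antennas collected over
               several chirps or frames.
    angles_deg: candidate angles of arrival in degrees.
    """
    num_antennas = snapshots.shape[0]
    # Sample spatial covariance with diagonal loading for invertibility.
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]
    r += loading * np.trace(r).real / num_antennas * np.eye(num_antennas)
    r_inv = np.linalg.inv(r)

    spectrum = np.empty(len(angles_deg))
    n = np.arange(num_antennas)
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        # Steering vector for a ULA with the assumed element spacing.
        a = np.exp(2j * np.pi * spacing_wavelengths * n * np.sin(theta))
        spectrum[i] = 1.0 / np.real(a.conj() @ r_inv @ a)
    return spectrum
```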
During step 312, an ordered statistics (OS) Constant False Alarm Rate (OS-CFAR) detector is used to detect targets. The CFAR detector generates a detection image in which, e.g., "ones" represent targets and "zeros" represent non-targets based, e.g., on the power levels of the RAI. For example, the power levels of the RAI are compared with a threshold, points above the threshold being labeled as targets ("ones") while points below the threshold are labeled as non-targets ("zeros").
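An OS-CFAR detector can be sketched in one dimension as follows: for each cell under test, the surrounding training cells (excluding guard cells) are sorted, the k-th ordered statistic serves as the noise estimate, and the cell is marked a target ("one") if it exceeds that estimate scaled by a factor α. The window sizes, k, and α below are illustrative assumptions; the embodiments apply the detector to the 2D range-angle image.

```python
import numpy as np

def os_cfar_1d(power: np.ndarray, num_train: int = 16, num_guard: int = 2,
               k: int = 12, alpha: float = 5.0) -> np.ndarray:
    """1D ordered-statistics CFAR: returns a 0/1 detection mask."""
    n = len(power)
    detections = np.zeros(n, dtype=int)
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        # Training cells on both sides of the cell under test, skipping guard cells.
        left = power[i - half:i - num_guard]
        right = power[i + num_guard + 1:i + half + 1]
        training = np.sort(np.concatenate((left, right)))
        noise_estimate = training[k - 1]          # k-th ordered statistic
        if power[i] > alpha * noise_estimate:
            detections[i] = 1                     # "one" marks a target cell
    return detections
```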
In some embodiments, targets present in the detection image generated during step 312 are clustered during step 314, e.g., based on similar feature characteristics, such as empirical mode decomposition (EMD) and/or scale-invariant feature transform (SIFT) features, associated with the detected targets. In some embodiments, other types of features of the detected targets, such as motion model-based features based on, e.g., range, Doppler, and/or angle, may also be used to cluster cells together. In some embodiments, metrics such as correlation and/or Wasserstein distance may be used to determine the similarities between clusters. In some embodiments, the feature-based clustering is performed using k-means clustering, in which targets are grouped (clustered) by assigning each target to the one of k clusters having the nearest mean of such (e.g., combined) features.
For example, in some embodiments, a vector of features includes a plurality of features (e.g., intrinsic mode functions (IMFs) and/or number of IMFs, which are associated with EMD, and/or magnitude M(m,n) and/or phase ϕ(m,n), which are associated with SIFT), where each channel describes a type of feature (e.g., IMFs, number of IMFs, magnitude M(m,n), and/or phase ϕ(m,n)). Each channel may be described as a Gaussian distribution (taking mean and variance over available vectors of the same feature). A weighted sum over all the different Gaussian distributions over the channels is obtained to provide a descriptor for each cell, where the descriptor is associated with all the feature types and may be a value or vector that is indicative of the characteristics (features) of the associated cluster and that may be used to determine how similar clusters are. Such a descriptor is used for clustering, e.g., using the k-means clustering algorithm.
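An illustrative sketch of such feature-based clustering is shown below, assuming each detected cell already has a per-channel feature value. Each channel is summarized by its Gaussian (mean, variance) statistics and used to standardize the channel before weighting; the resulting descriptors are grouped with k-means. The standardize-and-weight step and the use of scikit-learn's KMeans are illustrative choices, not the exact formulation described above.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_cells(cell_features: np.ndarray, channel_weights: np.ndarray,
                  num_clusters: int) -> np.ndarray:
    """Group detected cells by feature similarity using k-means clustering.

    cell_features: array of shape (num_cells, num_channels), one feature value
                   per channel (e.g., number of IMFs, SIFT magnitude, ...).
    channel_weights: array of shape (num_channels,), relative channel weights.
    Returns the cluster label of each cell.
    """
    # Describe each channel by a Gaussian (mean, variance) over all cells and
    # use it to standardize the channel before applying the channel weight.
    mean = cell_features.mean(axis=0)
    std = cell_features.std(axis=0) + 1e-12
    descriptors = (cell_features - mean) / std * channel_weights
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(descriptors)
    return labels
```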
In some embodiments, a density-based spatial clustering of applications with noise (DBSCAN) algorithm may also be used to associate targets to clusters during step 314. The output of DBSCAN is a grouping of the detected points into particular targets. DBSCAN is a popular unsupervised algorithm, which uses minimum points and minimum distance criteria to cluster targets, and may be implemented in any way known in the art. Other clustering algorithms may also be used.
In some embodiments, thus, clustering results in the radar image (e.g., RAI or RDI) or data cube being divided into groups of cells with similar descriptors. In some embodiments, each cluster corresponds to a (e.g., potential) detected target. Since the spread of features is not necessarily uniform, in some embodiments, each cluster is not necessarily equal. Thus, in some embodiments, the radar image or data cube is divided into clusters of cells, but each cluster of cells is not necessarily of the same size (e.g., does not have the same number of cells/sub-cells).
During step 316, detected (clustered) targets are associated with respective tracks. As will be described in more detail later, in some embodiments, detected targets are associated to respective tracks using feature-based template matching (during step 318). For example, in some embodiments, geometric features are used during step 318 for template matching. A geometric feature may be understood as a feature that is recognizable despite changes in rotation of the target, as well as changes in the range, Doppler velocity, and angle of the centroid of the target. In some embodiments a geometric feature may include a physical geometric feature, such as physical edges of the target (e.g., from the radar image). In some embodiments, additionally or alternatively, a geometric feature may include a metric (e.g., a vector, function, or group of functions) based on the relationship between cells of the raw data (e.g., of the data cube), such as the relationship between range cells, Doppler velocity cells, and/or angle cells. Examples of such metric include functions extracted using functional decomposition of the data cube, gradients of the data cube, and/or statistical properties of the data cube (such as histograms/PDF of the data cube). Examples of geometric features include EMD features and SIFT features.
In some embodiments, geometric features allow for identification of a target without relying on a motion model. In some embodiments, geometric features allow for distinguishing between tracked targets.
In some embodiments, geometric features such as EMD and/or SIFT features are tracked for each target. For each clustered cell (for each detected target), a feature vector is generated for each time step i with values of each feature associated with the clustered cell. Detected targets at time step i+1 are assigned to respective tracks based on the similarities between feature vectors (e.g., based on the error between the feature vectors), e.g., using Hungarian assignment. For example, in some embodiments, a similarity measure is identified between feature clusters at consecutive time steps (e.g., i and i+1), and the assignments that minimize the error (e.g., increase correlation) between feature clusters are selected for track assignment.
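The association described above can be sketched as follows: given one feature vector per cluster at time step i and at time step i+1, a pairwise error matrix is built and Hungarian assignment selects the pairing that minimizes the total error. The mean-square error metric and the use of scipy's linear_sum_assignment are illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_clusters(features_prev: np.ndarray,
                       features_curr: np.ndarray) -> list:
    """Assign clusters at time step i+1 to clusters (tracks) at time step i.

    features_prev: (L, F) feature vectors of the clusters at time step i.
    features_curr: (L, F) feature vectors of the clusters at time step i+1.
    Returns (previous_index, current_index) pairs minimizing the total error.
    """
    # Pairwise mean-square error between every previous and current cluster.
    diff = features_prev[:, None, :] - features_curr[None, :, :]
    cost = np.mean(diff ** 2, axis=-1)
    # Hungarian assignment minimizes the summed error over all pairings.
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```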
In some embodiments, the data association step (316) may additionally include data association methods that do not rely on feature-based template matching.
In some embodiments, the data assignment of detected targets (clusters) to tracks relies on the geometric features of the cluster and does not rely (or does not rely solely) on the actual physical locations and/or velocities of the detected targets.
During step 320, track filtering is performed, e.g., for tracking a target over time. For example, in some embodiments, the unscented Kalman filter is used to perform track filtering during step 320. For example, in some embodiments, the features (e.g., SIFT, EMD, range, Doppler, angle, deep learning-based parameters, and/or other parameters associated with the track) may be used as additional features to perform data association (and may also be tracked by the Kalman filter). The unscented Kalman filter may also track localization of each track and may rely on the track history of such localization to enhance data association. The Kalman filter may be implemented in any way known in the art.
It is understood that although targets may be identified using template matching (during step 316) that may not include spatial and/or movement information (e.g., range, Doppler, angle), such localization information may still be tracked during step 320. Thus, in some embodiments, feature-based template matching (step 318) is an enabler for data association in environments, such as low frame-rate operation, multi-target scenarios, and/or distributed radar implementations, in which relying on localization information alone may be difficult.
During step 324, track management tasks, such as generating tracks and killing tracks, are performed. For example, during step 324, track initialization, re-initialization, and/or track killing may be performed, e.g., based on whether detected targets are no longer in the field-of-view (in scene 120), or re-entered the field-of-view, for example.
In some embodiments, steps 316, 320, and 324 may be implemented in different order. For example, in some embodiments, track initialization (during step 324) may be performed before performing step 316.
As shown in
As shown in
Since some embodiments rely on geometric features that are not based on a motion model (such as EMD and SIFT) to track targets, some embodiments are advantageously suitable for tracking targets using distributed radar, where a human may move between the fields-of-view of different radars and where the radars may lack information about movement of a target outside their own field-of-view. For example,
As shown in
In some embodiments, controller 510 may be implemented as part of processing system 104 of one of the radars 100 of distributed radar system 500. In some embodiments, controller 510 may be implemented externally to processing systems 104 of radar systems 100 of distributed radar system 500, e.g., with a general purpose processor, controller or digital signal processor (DSP) that includes, for example, combinatorial circuits coupled to a memory. In some embodiments, controller 510 may be implemented as an application specific integrated circuit (ASIC). In some embodiments, controller 510 may be implemented with an ARM, RISC, or x86 architecture, for example. In some embodiments, controller 510 may include an artificial intelligence (AI) accelerator. Some embodiments may use a combination of hardware accelerator and software running on a DSP or general purpose microcontroller. Other implementations are also possible.
For example, in some embodiments, the features of the detected target along with spatial and movement parameters are passed to a central processor 510 (external to each radar of distributed radar system 500) for data association and tracking. In some embodiments, one of the processing systems 104 of the distributed radar system 500 may operate as the central processor 510, e.g., for data association and tracking.
As shown in
During step 612, assignments between L clusters at time step i and L clusters at time step i+1 are made, e.g., to minimize the error between error vectors (e.g., so that the summation of the errors of the error vectors between assigned clusters is minimized). In some embodiments, Hungarian assignment is used during step 612 to associate clusters at time step i to clusters at time step i+1. In some embodiments, each cluster at time step i+1 is associated to the track that corresponds to the associated cluster at time step i.
In some embodiments, applying Hungarian assignment comprises:
calculating the cost matrix C, where ci,j is the cost between cluster pi at time step i and cluster yj at time step i+1 according to a metric F (e.g., correlation, difference, Wasserstein distance, Euclidean distance, mean square error, etc.), and may be given by
ci,j=F(pi,yj) (1)
finding the assignment matrix A that minimizes the element-wise product between C and A, e.g., by
reordering the vector y according to the ones in the assignment matrix A, and, e.g., sequentially assigning clusters from vector p to clusters from the ordered vector y.
In some embodiments, the number of clusters at time steps i and i+1 are different (e.g., since a previously detected target disappears, or a new target arrives in the field-of-view). In some such embodiments, assignment is made to minimize the error between vectors at each time step, and the additional vectors that are not associated with a corresponding vector at the other time step are assigned a default error. In some embodiments, for each unassigned cluster or target, an error counter is used to count how many times there are unassigned targets, and corresponding tracks are killed when the counter reaches a predetermined threshold (e.g., 5, 6, etc.).
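One illustrative way to handle a different number of clusters at the two time steps, consistent with the description above, is to pad the rectangular cost matrix with a default error so that unmatched clusters fall onto dummy entries, and to increment a per-track miss counter that kills the track after a threshold number of consecutive misses. The default-error and threshold values below are assumptions, and the use of scipy's linear_sum_assignment is an illustrative choice.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

DEFAULT_ERROR = 10.0   # assumed default cost for an unmatched cluster
KILL_THRESHOLD = 5     # assumed number of misses before a track is killed

def associate_with_padding(cost: np.ndarray) -> list:
    """Assign tracks (rows) to detections (columns) when their counts differ.

    Pads the cost matrix to a square with DEFAULT_ERROR; pairs that land on
    padded entries are treated as unassigned.
    """
    num_tracks, num_detections = cost.shape
    size = max(num_tracks, num_detections)
    padded = np.full((size, size), DEFAULT_ERROR)
    padded[:num_tracks, :num_detections] = cost
    rows, cols = linear_sum_assignment(padded)
    # Keep only pairs between a real track and a real detection.
    return [(r, c) for r, c in zip(rows, cols)
            if r < num_tracks and c < num_detections]

def update_miss_counter(miss_count: int, was_assigned: bool):
    """Return the updated miss counter and whether the track should be killed."""
    miss_count = 0 if was_assigned else miss_count + 1
    return miss_count, miss_count >= KILL_THRESHOLD
```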
In some embodiments, a motion model that relies on features such as range, Doppler, and/or angle, is also used for associating detected targets to tracks, which may advantageously increase the tracking performance. For example, the range and Doppler velocity at a previous time step may be used to predict the location of the target at a future time step, and such information may be used to increase the confidence that a target assignment is correct (e.g., by using a gating region of expected locations for the target at the future time step), and where the confidence level is used to associate the target to a track. Thus, in some embodiments, associating a target to a track is further based on a motion-based model, e.g., based on range, Doppler, and/or angle.
During step 702, EMD feature extraction is performed. EMD may be understood as a way to decompose signal data into intrinsic mode functions (IMFs) of instantaneous frequencies contained in the original signal data. The decomposed signal components form a complete or nearly orthogonal basis of the original signal data. In some embodiments, EMD is performed on raw data (e.g., the Doppler signal) associated with the cluster cell. For example, in some embodiments, EMD is performed on raw data (e.g., from the data cube) associated with a particular cluster cell at time step i.
During step 724, the IMFs having an energy higher than a predetermined value are identified. During step 726, the identified IMFs are sorted (e.g., in ascending order). During step 728, the error between the sorted identified IMFs at time step i (for each cluster cell) and the sorted IMFs of each cluster at time step i+1 is computed to generate, e.g., L error values for each cluster cell at time step i (which may be used as part of the error vectors in step 610).
In some embodiments, the error between IMFs at time steps i and i+1 is determined by using, e.g., mean square error. In some embodiments, the number of IMFs above the threshold may also be used to determine the error between clusters at time steps i and i+1.
In some embodiments, the EMD features (e.g., the sorted IMFs) of all clusters at time step i and all clusters at time step i+1 are compared, and the cluster pairs at time steps i and i+1 resulting in the lowest mean square error are associated.
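A sketch of this EMD-based comparison is shown below, assuming the PyEMD package (distributed as "EMD-signal") for the decomposition: IMFs are extracted from the Doppler signal of a cluster, those above an energy threshold are kept and sorted by energy, and the mean square error between the length-matched sorted IMFs of two clusters serves as the error value. The energy threshold, the energy-based sorting, and the length matching are illustrative assumptions.

```python
import numpy as np
from PyEMD import EMD   # assumed dependency: the "EMD-signal" package

def emd_features(signal: np.ndarray, energy_threshold: float) -> np.ndarray:
    """Extract the sorted high-energy IMFs of a cluster's (real) Doppler signal."""
    imfs = EMD().emd(signal)                      # shape: (num_imfs, len(signal))
    energies = np.sum(np.abs(imfs) ** 2, axis=1)
    mask = energies > energy_threshold
    kept = imfs[mask]
    # Sort the retained IMFs by ascending energy.
    order = np.argsort(energies[mask])
    return kept[order]

def emd_error(imfs_a: np.ndarray, imfs_b: np.ndarray) -> float:
    """Mean square error between two sorted IMF sets.

    Assumes both signals have the same length; only the first min(count) IMFs
    of each set are compared.
    """
    n = min(len(imfs_a), len(imfs_b))
    if n == 0:
        return float("inf")
    return float(np.mean((imfs_a[:n] - imfs_b[:n]) ** 2))
```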
In the embodiment of
During step 922, SIFT feature extraction is performed. SIFT may be understood as a pattern recognition method for detecting features of a radar image (e.g., RDI, RAI) that are, e.g., invariant to scaling, rotation, translation, and geometrical distortion, or any affine distortion of the image. For example, in some embodiments, SIFT feature extraction is obtained by computing gradients between the different cells of an image. For example, for each clustered cell, magnitude M(m,n) and phase ϕ(m,n) may be determined by applying Equations 3 and 4:
where X(m,n) are the sub-cells of the clustered cell of the radar image, m,n being the location of the sub-cell.
In some embodiments, SIFT feature extraction is performed on, e.g., RDI, RAI, or data cube (e.g., from steps 308a, 308b, and/or 310) on a region associated with a particular cluster cell at time step i, by, e.g., using Equations 3 and 4.
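Since Equations 3 and 4 are not reproduced here, the sketch below uses the standard central-difference gradient on which SIFT-style feature extraction is typically built: the magnitude and phase of the gradient at each sub-cell of a clustered region of the radar image. It is an assumption, for illustration only, that the patent's Equations 3 and 4 take this conventional form.

```python
import numpy as np

def sift_like_gradients(patch: np.ndarray):
    """Gradient magnitude M(m, n) and phase phi(m, n) over an image patch.

    patch: 2D real array, the sub-cells X(m, n) of one clustered cell of a
           radar image (e.g., a region of an RAI or RDI).
    Returns (magnitude, phase) arrays of the same shape as patch.
    """
    # Central differences along the two image axes (borders left at zero).
    dx = np.zeros_like(patch)
    dy = np.zeros_like(patch)
    dx[1:-1, :] = patch[2:, :] - patch[:-2, :]
    dy[:, 1:-1] = patch[:, 2:] - patch[:, :-2]
    magnitude = np.sqrt(dx ** 2 + dy ** 2)
    phase = np.arctan2(dy, dx)
    return magnitude, phase
```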
During step 928, the error between the magnitude and/or phase of the cluster cell at time i and of each of the other clusters at time i+1 is computed to generate L error values (which may be used as part of the error vectors in step 610). For example, in some embodiments, a correlation value r may be used, which may be given by
where x is the magnitude M or phase ϕ vector at time step i, and y is the corresponding magnitude or phase vector at time step i+1.
In some embodiments, the correlation r between SIFT features is computed between all clusters at time step i and all clusters at time step i+1, e.g., by using Equation 5. In some embodiments, the clusters at time steps i and i+1 having the highest correlation and greater than a predetermined correlation threshold are associated. In some embodiments, the predetermined correlation threshold is between 0.5 and 0.9.
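Equation 5 is not reproduced here; a common choice for such a correlation value r is the Pearson correlation coefficient, sketched below together with the threshold test described above. Treating r as the Pearson coefficient and the specific threshold value are assumptions for illustration.

```python
import numpy as np

CORRELATION_THRESHOLD = 0.7   # example value within the 0.5-0.9 range noted above

def sift_correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation between two flattened SIFT feature vectors.

    Assumes x and y have been brought to a common length (e.g., by cropping or
    resampling the cluster patches to the same size).
    """
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])

def clusters_match(x: np.ndarray, y: np.ndarray) -> bool:
    """Associate two clusters if their feature correlation exceeds the threshold."""
    return sift_correlation(x, y) > CORRELATION_THRESHOLD
```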
During step 1101, the Wasserstein distance is computed between clusters at time steps i and i+1. The Wasserstein distance (also referred to as earth mover's distance, Wasserstein metric or Kantorovich-Rubinstein metric) is a mathematical function that computes, in addition to the similarities between two probability distributions, the distance between the two probability distributions. It is understood that the term “distance,” as used with respect to the Wasserstein metric, is the distance between distributions and not necessarily a physical distance. For example, the Wasserstein metric may be used to determine the distance between SIFT features of two clusters, and/or between EMD features of two clusters. The Wasserstein metric may be a physical distance in other scenarios, e.g., when used with respect to RAI or RDI data.
In some embodiments, the raw data (e.g., a vector of different feature values, such as SIFT and/or EMD, as well as, e.g., range, Doppler and/or angle) associated with each cluster is modeled (approximated) as a Gaussian distribution having a respective mean μ and standard deviation σ. For example, in an embodiment having a cluster (P1) at time step i having mean μ1 and standard deviation σ1, and a cluster (P2) at time step i+1 having mean μ2 and standard deviation σ2, the Wasserstein metric may be computed as
W12 = √(∥μ1−μ2∥2² + σ1 + σ2 − 2√(σ1σ2))  (6)
where ∥·∥2² is the squared L2 norm.
In some embodiments, the Wasserstein metric is computed between all clusters at time step i and all clusters at time step i+1, e.g., by using Equation 6. In some embodiments, the clusters at time steps i and i+1 having the lowest Wasserstein distance are associated.
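An illustrative sketch of Equation 6 is shown below: each cluster's feature data is summarized by a mean vector and a standard deviation, and the pairwise Wasserstein metric between the Gaussian approximations is minimized over all cluster pairings. Using Hungarian assignment (scipy's linear_sum_assignment) for the final pairing is an illustrative choice.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gaussian_wasserstein(mu1: np.ndarray, sigma1: float,
                         mu2: np.ndarray, sigma2: float) -> float:
    """Wasserstein metric between two Gaussian cluster models (Equation 6)."""
    return float(np.sqrt(np.sum((mu1 - mu2) ** 2)
                         + sigma1 + sigma2 - 2.0 * np.sqrt(sigma1 * sigma2)))

def associate_by_wasserstein(models_prev, models_curr) -> list:
    """Pair clusters at consecutive time steps by minimizing the total distance.

    models_prev, models_curr: lists of (mean_vector, standard_deviation) tuples,
    one per cluster at time steps i and i+1, respectively.
    """
    cost = np.array([[gaussian_wasserstein(m1, s1, m2, s2)
                      for (m2, s2) in models_curr]
                     for (m1, s1) in models_prev])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```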
In some embodiments, image 1201 is a representation of features (e.g., SIFT, EMD) and the Wasserstein metric is used to determine the similarities between features. In some embodiments, image 1201 is a radar image (e.g., RAI, RDI), and the Wasserstein metric is used for motion-based tracking (e.g., using Euclidean distance).
In some embodiments, a single type of feature is used during template matching. For example, in some embodiments, step 316 is performed by performing steps 702 and 710, and assigning clusters at time steps i and i+1, e.g., by minimizing the total mean square error. In some embodiments, step 316 is performed by performing steps 904 and 910, and assigning clusters at time steps i and i+1, e.g., by maximizing the correlation between clusters. In some embodiments, step 316 is performed by performing step 1101 based solely on EMD features or based solely on SIFT features, and assigning clusters at time steps i and i+1, e.g., by minimizing the Wasserstein distance (e.g., minimizing the summation of the Wasserstein distances between matched clusters).
As shown,
As shown in
A similarity metric is then generated, e.g., by using a (e.g., weighted) average of each of the errors (in this example Wasserstein distances) inside each of the error vectors. The similarity metric thus indicates how similar two clusters are. Assignment is then made (e.g., using Hungarian assignment) to minimize the total error. For example, in this example, the sum of errors D402_406 and D404_408 is lower than the sum of errors D404_406 and D402_408, and thus clusters 402 and 406 are matched and clusters 404 and 408 are matched (as also shown in
In some embodiments, such as shown in
As shown in
For example, in some embodiments, the output of the Lth layer of a DCNN may be given by (Wl,Hl,Dl), where Wl and Hl are the width and height of each feature map and Dl is the dimension/number of feature maps at the Lth layer. In some embodiments, instead of one layer, multiple layer outputs can be treated as extracted features, e.g., layer L and layer L+1.
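One common way to read out the feature maps of a chosen layer of a trained DCNN is a forward hook, sketched below with PyTorch. The network architecture and the selected layer are placeholders, and the use of PyTorch is an assumption for illustration rather than the implementation of the embodiments.

```python
import torch
import torch.nn as nn

class SmallDCNN(nn.Module):
    """Placeholder DCNN; the actual network and its training are not specified here."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

def extract_layer_output(model: nn.Module, layer: nn.Module,
                         data_cube_slice: torch.Tensor) -> torch.Tensor:
    """Return the feature maps (W_l, H_l, D_l) produced at the chosen layer."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["maps"] = output.detach()

    handle = layer.register_forward_hook(hook)
    model(data_cube_slice)
    handle.remove()
    return captured["maps"]

# Example usage on a single-channel slice of the data cube (batch of 1):
# model = SmallDCNN()
# maps = extract_layer_output(model, model.features[2], torch.randn(1, 1, 64, 64))
```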
In some embodiments, the DCNN is trained to learn distinct geometric features by using supervised learning with a data set including multiple (e.g., human) targets performing a plurality of activities (e.g., walking, running, standing idle, etc.).
In some embodiments, template matching is performed based on a motion model relying on one or more of range, Doppler, and angle, in addition to one or more of EMD, SIFT, and/or deep learning-based geometric features.
Example embodiments of the present invention are summarized here. Other embodiments can also be understood from the entirety of the specification and the claims filed herein.
Example 1. A method for tracking targets, the method including: receiving data from a radar sensor of a radar; processing the received data to detect targets; identifying a first geometric feature of a first detected target at a first time step, the first detected target being associated to a first track; identifying a second geometric feature of a second detected target at a second time step; determining an error value based on the first and second geometric features; and associating the second detected target to the first track based on the error value.
Example 2. The method of example 1, where associating the second detected target to the first track includes associating the second detected target to the first track when the error value is lower than a predetermined threshold.
Example 3. The method of one of examples 1 or 2, where associating the second detected target to the first track includes associating the second detected target to the first track using Hungarian assignment.
Example 4. The method of one of examples 1 to 3, where: identifying the first geometric feature includes: performing a first empirical mode decomposition (EMD) on received data associated with the first detected target, identifying first intrinsic mode functions (IMFs) from the first EMD that are higher than a predetermined threshold, and sorting the first IMFs; identifying the second geometric feature includes: performing second EMD on received data associated with the second detected target, identifying second IMFs from the second EMD that are higher than the predetermined threshold, and sorting the second IMFs; and determining the error value includes determining a mean square error based on the sorted first and second IMFs.
Example 5. The method of one of examples 1 to 4, where identifying the first geometric feature includes performing scale-invariant feature transform (SIFT) feature extraction on a first radar image associated with the first time step based on the received data to extract first magnitude or first phase associated with the first detected target, where identifying the second geometric feature includes performing SIFT feature extraction on a second radar image associated with the second time step based on the received data to extract second magnitude or second phase associated with the second detected target, and where determining the error value includes determining a correlation between the first magnitude and the second magnitude or between the first phase and the second phase.
Example 6. The method of one of examples 1 to 5, where determining the error value includes determining the correlation between the first magnitude and the second magnitude and between the first phase and the second phase.
Example 7. The method of one of examples 1 to 6, where identifying the first geometric feature includes approximating data associated with the first detected target at the first time step to a Gaussian distribution having a first mean and a first standard deviation, where identifying the second geometric feature includes approximating data associated with the second detected target at the second time step to a Gaussian distribution having a second mean and a second standard deviation, and where determining the error value includes determining the error value based on the first mean, the second mean, the first standard deviation, and the second standard deviation.
Example 8. The method of one of examples 1 to 7, further including clustering detected targets into clusters of cells using k-means clustering to generate a plurality of clusters, where the first and second detected targets are first and second clusters of the plurality of clusters.
Example 9. The method of one of examples 1 to 8, further including tracking the first detected target using an unscented Kalman filter.
Example 10. The method of one of examples 1 to 9, where tracking the first detected target includes tracking localization information of the first detected target over time.
Example 11. The method of one of examples 1 to 10, further including: identifying first range and first Doppler velocity associated with the first detected target at the first time step; and identifying second range and second Doppler velocity associated with the second detected target at the second time step, where associating the second detected target to the first track is further based on the first and second ranges, and on the first and second Doppler velocities.
Example 12. The method of one of examples 1 to 11, where the first and second detected targets are human targets.
Example 13. The method of one of examples 1 to 12, where the first and second time steps are consecutive time steps.
Example 14. The method of one of examples 1 to 13, further including: transmitting radar signals using a transmitter antenna of the radar; receiving reflected radar signals using a receiver antenna of the radar; and generating digital data from the received reflected radar signals using an analog-to-digital converter (ADC), where receiving the data from the radar includes receiving the data from the ADC, and where transmitting the radar signals includes transmitting the radar signals with a frame rate of 10 frames per second or slower.
Example 15. The method of one of examples 1 to 14, where the radar is a millimeter-wave radar.
Example 16. The method of one of examples 1 to 15, further including: receiving further data from a further radar sensor of a further radar; processing the received further data to detect further targets; identifying a further geometric feature of a further detected target at a third time step; determining a further error value based on the first and further geometric features; and associating the further detected target to the first track associated to the first detected target based on the further error value.
Example 17. A radar system including: a millimeter-wave radar sensor including: a transmitting antenna configured to transmit radar signals; first and second receiving antennas configured to receive reflected radar signals; an analog-to-digital converter (ADC) configured to generate, at an output of the ADC, raw digital data based on the reflected radar signals; and a processing system configured to process the raw digital data to: detect targets, identify a first geometric feature of a first detected target at a first time step, identify a second geometric feature of a second detected target at a second time step, determine an error value based on the first and second geometric features, and associate the second detected target to a first track associated to the first detected target based on the error value.
Example 18. The radar system of example 17, where the transmitting antenna is configured to transmit radar signals at a rate of 10 frames per second or slower.
Example 19. A method including: receiving data from a radar sensor of a radar; processing the received data to detect humans; clustering detected humans into clusters of cells using k-means clustering to generate a plurality of clusters; identifying a first geometric feature of a first cluster of the plurality of clusters at a first time step; identifying a second geometric feature of a second cluster of the plurality of clusters at a second time step, where the first and second time steps are consecutive time steps; determining an error value based on the first and second geometric features; and associating the second cluster to a first track associated to the first cluster based on the error value.
Example 20. The method of example 19, where: identifying the first geometric feature includes: performing a first empirical mode decomposition (EMD) on received data associated with the first cluster, identifying first intrinsic mode functions (IMFs) from the first EMD that are higher than a predetermined threshold, and sorting the first IMFs; identifying the second geometric feature includes: performing second EMD on received data associated with the second cluster, identifying second IMFs from the second EMD that are higher than the predetermined threshold; and determining the error value includes determining the error value based on the first and second IMFs.
Example 21. The method of one of examples 19 or 20, where identifying the first and second geometric features includes using a deep convolutional neural network based on a data cube derived from the received data.
While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments.
Number | Name | Date | Kind |
---|---|---|---|
4241347 | Albanese et al. | Dec 1980 | A |
6147572 | Kaminski et al. | Nov 2000 | A |
6414631 | Fujimoto | Jul 2002 | B1 |
6636174 | Arikan et al. | Oct 2003 | B2 |
7048973 | Sakamoto et al. | May 2006 | B2 |
7057564 | Tsai et al. | Jun 2006 | B2 |
7171052 | Park | Jan 2007 | B2 |
7317417 | Arikan et al. | Jan 2008 | B2 |
7596241 | Rittscher et al. | Sep 2009 | B2 |
7692574 | Nakagawa | Apr 2010 | B2 |
7873326 | Sadr | Jan 2011 | B2 |
7889147 | Tam et al. | Feb 2011 | B2 |
8228382 | Pattikonda | Jul 2012 | B2 |
8497805 | Rofougaran et al. | Jul 2013 | B2 |
8659369 | Rofougaran et al. | Feb 2014 | B2 |
8731502 | Salle et al. | May 2014 | B2 |
8836596 | Richards et al. | Sep 2014 | B2 |
8847814 | Himmelstoss et al. | Sep 2014 | B2 |
8860532 | Gong et al. | Oct 2014 | B2 |
8976061 | Chowdhury | Mar 2015 | B2 |
9172132 | Kam et al. | Oct 2015 | B2 |
9182476 | Wintermantel | Nov 2015 | B2 |
9202105 | Wang et al. | Dec 2015 | B1 |
9229102 | Wright | Jan 2016 | B1 |
9413079 | Kamgaing et al. | Aug 2016 | B2 |
9495600 | Heu et al. | Nov 2016 | B2 |
9664779 | Stainvas Olshansky et al. | May 2017 | B2 |
9886095 | Pothier | Feb 2018 | B2 |
9935065 | Baheti et al. | Apr 2018 | B1 |
10795012 | Santra et al. | Oct 2020 | B2 |
20030179127 | Wienand | Sep 2003 | A1 |
20040238857 | Beroz et al. | Dec 2004 | A1 |
20060001572 | Gaucher et al. | Jan 2006 | A1 |
20060049995 | Imaoka et al. | Mar 2006 | A1 |
20060067456 | Ku et al. | Mar 2006 | A1 |
20070210959 | Herd et al. | Sep 2007 | A1 |
20080106460 | Kurtz et al. | May 2008 | A1 |
20080238759 | Carocari et al. | Oct 2008 | A1 |
20080291115 | Doan et al. | Nov 2008 | A1 |
20080308917 | Pressel et al. | Dec 2008 | A1 |
20090073026 | Nakagawa | Mar 2009 | A1 |
20090085815 | Jakab et al. | Apr 2009 | A1 |
20090153428 | Rofougaran et al. | Jun 2009 | A1 |
20090315761 | Walter et al. | Dec 2009 | A1 |
20100152600 | Droitcour | Jun 2010 | A1 |
20100207805 | Haworth | Aug 2010 | A1 |
20110299433 | Darabi et al. | Dec 2011 | A1 |
20120087230 | Guo et al. | Apr 2012 | A1 |
20120092284 | Rofougaran et al. | Apr 2012 | A1 |
20120116231 | Liao et al. | May 2012 | A1 |
20120195161 | Little et al. | Aug 2012 | A1 |
20120206339 | Dahl | Aug 2012 | A1 |
20120265486 | Klofer et al. | Oct 2012 | A1 |
20120268314 | Kuwahara et al. | Oct 2012 | A1 |
20120280900 | Wang et al. | Nov 2012 | A1 |
20130027240 | Chowdhury | Jan 2013 | A1 |
20130106673 | McCormack et al. | May 2013 | A1 |
20140028542 | Lovitt et al. | Jan 2014 | A1 |
20140070994 | Schmalenberg et al. | Mar 2014 | A1 |
20140145883 | Baks et al. | May 2014 | A1 |
20140324888 | Xie et al. | Oct 2014 | A1 |
20150181840 | Tupin, Jr. et al. | Jul 2015 | A1 |
20150185316 | Rao et al. | Jul 2015 | A1 |
20150212198 | Nishio et al. | Jul 2015 | A1 |
20150243575 | Strothmann et al. | Aug 2015 | A1 |
20150277569 | Sprenger et al. | Oct 2015 | A1 |
20150325925 | Kamgaing et al. | Nov 2015 | A1 |
20150346820 | Poupyrev et al. | Dec 2015 | A1 |
20150348821 | Iwanaga et al. | Dec 2015 | A1 |
20150364816 | Murugan et al. | Dec 2015 | A1 |
20160018511 | Nayyar et al. | Jan 2016 | A1 |
20160041617 | Poupyrev | Feb 2016 | A1 |
20160041618 | Poupyrev | Feb 2016 | A1 |
20160061942 | Rao et al. | Mar 2016 | A1 |
20160061947 | Patole et al. | Mar 2016 | A1 |
20160098089 | Poupyrev | Apr 2016 | A1 |
20160103213 | Ikram et al. | Apr 2016 | A1 |
20160109566 | Liu et al. | Apr 2016 | A1 |
20160118353 | Ahrens et al. | Apr 2016 | A1 |
20160135655 | Ahn et al. | May 2016 | A1 |
20160146931 | Rao et al. | May 2016 | A1 |
20160146933 | Rao et al. | May 2016 | A1 |
20160178730 | Trotta et al. | Jun 2016 | A1 |
20160187462 | Altus et al. | Jun 2016 | A1 |
20160191232 | Subburaj et al. | Jun 2016 | A1 |
20160223651 | Kamo et al. | Aug 2016 | A1 |
20160240907 | Haroun | Aug 2016 | A1 |
20160249133 | Sorensen | Aug 2016 | A1 |
20160252607 | Saboo et al. | Sep 2016 | A1 |
20160259037 | Molchanov et al. | Sep 2016 | A1 |
20160266233 | Mansour | Sep 2016 | A1 |
20160269815 | Liao et al. | Sep 2016 | A1 |
20160291130 | Ginsburg et al. | Oct 2016 | A1 |
20160299215 | Dandu et al. | Oct 2016 | A1 |
20160306034 | Trotta et al. | Oct 2016 | A1 |
20160320852 | Poupyrev | Nov 2016 | A1 |
20160320853 | Lien et al. | Nov 2016 | A1 |
20160327633 | Kumar et al. | Nov 2016 | A1 |
20160334502 | Ali et al. | Nov 2016 | A1 |
20160349845 | Poupyrev et al. | Dec 2016 | A1 |
20170033062 | Liu et al. | Feb 2017 | A1 |
20170045607 | Bharadwaj et al. | Feb 2017 | A1 |
20170052618 | Lee et al. | Feb 2017 | A1 |
20170054449 | Mani et al. | Feb 2017 | A1 |
20170060254 | Molchanov et al. | Mar 2017 | A1 |
20170070952 | Balakrishnan et al. | Mar 2017 | A1 |
20170074974 | Rao et al. | Mar 2017 | A1 |
20170074980 | Adib et al. | Mar 2017 | A1 |
20170090014 | Subburaj et al. | Mar 2017 | A1 |
20170090015 | Breen et al. | Mar 2017 | A1 |
20170115377 | Giannini et al. | Apr 2017 | A1 |
20170131395 | Reynolds et al. | May 2017 | A1 |
20170139036 | Nayyar et al. | May 2017 | A1 |
20170141453 | Waelde et al. | May 2017 | A1 |
20170170947 | Yang | Jun 2017 | A1 |
20170176574 | Eswaran et al. | Jun 2017 | A1 |
20170192847 | Rao et al. | Jul 2017 | A1 |
20170201019 | Trotta | Jul 2017 | A1 |
20170212597 | Mishra | Jul 2017 | A1 |
20170364160 | Malysa et al. | Dec 2017 | A1 |
20180046255 | Rothera et al. | Feb 2018 | A1 |
20180071473 | Trotta et al. | Mar 2018 | A1 |
20180101239 | Yin et al. | Apr 2018 | A1 |
20180232947 | Nehmadi | Aug 2018 | A1 |
20180275265 | Bilik et al. | Sep 2018 | A1 |
20190317191 | Santra et al. | Oct 2019 | A1 |
20200126249 | Tang | Apr 2020 | A1 |
20210208247 | John Wilson | Jul 2021 | A1 |
20210255307 | Bongio Karman | Aug 2021 | A1 |
20220076432 | Ramezani | Mar 2022 | A1 |
20220161782 | Laine | May 2022 | A1 |
Number | Date | Country |
---|---|---|
1463161 | Dec 2003 | CN |
1716695 | Jan 2006 | CN |
101490578 | Jul 2009 | CN |
101585361 | Nov 2009 | CN |
102788969 | Nov 2012 | CN |
102967854 | Mar 2013 | CN |
103529444 | Jan 2014 | CN |
203950036 | Nov 2014 | CN |
102008054570 | Jun 2010 | DE |
102011100907 | Jan 2012 | DE |
102011075725 | Nov 2012 | DE |
102014118063 | Jul 2015 | DE |
2247799 | Mar 1992 | GB |
2001174539 | Jun 2001 | JP |
2004198312 | Jul 2004 | JP |
2006234513 | Sep 2006 | JP |
2008029025 | Feb 2008 | JP |
2008089614 | Apr 2008 | JP |
2009069124 | Apr 2009 | JP |
2011529181 | Dec 2011 | JP |
2012112861 | Jun 2012 | JP |
2013521508 | Jun 2013 | JP |
2014055957 | Mar 2014 | JP |
20090063166 | Jun 2009 | KR |
20140082815 | Jul 2014 | KR |
2007060069 | May 2007 | WO |
2013009473 | Jan 2013 | WO |
2016033361 | Mar 2016 | WO |
Entry |
---|
Zhang, et al, “EMD Interval Thresholding Denoising Based on Correlation Coefficient to Select Relevant Modes”; Proceedings of the 34th Chinese Control Conference; Jul. 2015; Hangzhou, China (Year: 2015). |
“BT24MTR11 Using BGT24MTR11 in Low Power Applications 24 GHz Rader,” Application Note AN341, Revision: Rev 1.0, Infineon Technologies AG, Munich, Germany, Dec. 2, 2013, 25 pages. |
Chen, Xiaolong et al., “Detection and Extraction of Marine Target with Micromotion via Short-Time Fractional Fourier Transform in Sparse Domain,” IEEE International Conference on Signal Processing, Communications and Computing, CSPCC, Aug. 5-8, 2016, 5 pages. |
Chen, Xiaolong et al., “Detection and Extraction of Target with Micromotion in Spiky Sea Clutter via Short-Time Fractional Fourier Transform”, IEEE Transactions on Geoscience and Remote Sensing, vol. 52, No. 2, Feb. 2014, pp. 1002-1018. |
Chioukh, Lydia et al., “Noise and Sensitivity of Harmonic Radar Architecture for Remote Sensing and Detection of Vital Signs”, IEEE Transactions on Microwave Theory and Techniques, vol. 62, No. 9, Sep. 2014, pp. 1847-1855. |
Chuanhua, Du, “FMCW Radar Range-Doppler Processing and Beam Formation Technology,” Chinese Doctoral Dissertations & Master's Theses Full Text Database (Masters)—Information Science and Technology Series, China National Knowledge Infrastructure, ISSN 1674-0246, CN 11-9144/G, Dec. 16, 2004-Mar. 2015, 14 pages. |
Deacon, Peter et al., “Frequency Modulated Continuous Wave (FMCW) Radar,” Design Team 6 Technical Lecture, Nov. 9, 2011, 27 pages. |
Dham, Vivek “Programming Chirp Parameters in TI Radar Devices,” Application Report SWRA553, Texas Instruments, May 2017, 15 pages. |
Diederichs, Kailtyn et al., “Wireless Biometric Individual Identification Utilizing Millimeter Waves”, IEEE Sensors Letters, vol. 1, No. 1, IEEE Sensors Council 3500104, Feb. 2017, 4 pages. |
Dooring Alert Systems, “Riders Matter,” http:\\dooringalertsystems.com, printed Oct. 4, 2017, 16 pages. |
Filippelli, Mario et al., “Respiratory dynamics during laughter,” J Appl Physiol, (90), 1441-1446, Apr. 2001, http://iap.physiology.org/content/jap/90/4/1441.full.pdf. |
Fox, Ben, “The Simple Technique That Could Save Cyclists' Lives,” https://www.outsideonline.com/2115116/simple-technique-could-save-cyclists-lives, Sep. 19, 2016, 6 pages. |
Gigie, Andrew et al., “Novel Approach for Vibration Detection Using Indented Radar”, Progess in Electromagnetic Research C, vol. 87, pp. 147-162, Oct. 4, 2018. |
Gouveia, Carolina et al., “A Review on Methods for Random Motion Detection and Compensation in Bio-Radar Systems”, Sensors, MDPI, Jan. 31, 2019, 17 pages. |
Gu, Changzhan et al., “Assessment of Human Respiration Patterns via Noncontact Sensing Using Doppler MultiRadar System”, Sensors Mar. 2015, 15(3), 6383-6398, doi: 10.3390/s150306383, 17 pages. |
Gu, Changzhan et al., “Deep Neural Network based Body Movement Cancellation for Doppler Radar Vital Sign Detection”, IEEE MTT-S International Wireless Symposium (IWS) May 19-22, 2019, 3 pages. |
Gu, Changzhu “Short-Range Noncontact Sensors for Healthcare and Other Emerginng Applications: A Review”, Sensors, MDPI, Jul. 26, 2016, 24 pages. |
Gu, Changzhan et al., “From Tumor Targeting to Speed Monitoring”, IEEE Microwave Magazine, ResearchGate, Jun. 2014, 11 pages. |
Guercan, Yalin “Super-resolution Algorithms for Joint Range-Azimuth-Doppler Estimation in Automotive Radars,” Technische Universitet Delft, TUDelft University of Technology Challenge the Future, Jan. 25, 2017, 72 pages. |
Hu, Wei et al., “Noncontact Accurate Measurement of Cardiopulmonary Activity Using a Compact Quadrature Doppler Radar Sensor”, IEEE Transactions on Biomedical Engineering, vol. 61, No. 3, Mar. 2014, pp. 725-735. |
Immoreev, I. Ya. “Ultrawideband Radars: Features and Capabilities”, Journal of Communications Technology and Electronics, ISSN: 1064-2269, vol. 54, No. 1, Feb. 8, 2009, pp. 1-26. |
Inac, Ozgur et al., “A Phased Array RFIC with Built-In Self-Test Capabilities,” IEEE Transactions on Microwave Theory and Techniques, vol. 60, No. 1, Jan. 2012, 10 pages. |
Killedar, Abdulraheem “XWRIxxx Power Management Optimizations—Low Cost LC Filter Solution,” Application Report SWRA577, Texas Instruments, Oct. 2017, 19 pages. |
Kishore, N. et al., “Millimeter Wave Antenna for Intelligent Transportation Systems Application”, Journal of Microwaves, Optoelectronics and Electromagnetic Applications, vol. 17, No. 1, Mar. 2018, pp. 171-178. |
Kizhakkel, V., “Pulsed Radar Target Recognition Based on Micro-Doppler Signatures Using Wavelet Analysis”, A Thesis, Graduate Program in Electrical and Computer Engineering, Ohio State University, Jan. 2013-May 2013, 118 pages. |
Kuehnke, Lutz, “Phased Array Calibration Procedures Based on Measured Element Patterns,” 2001 Eleventh International Conference on Antennas and Propagation, IEEE Conf., Publ. No. 480, Apr. 17-20, 2001, 4 pages. |
Li, Changzhi et al., “A Review on Recent Advances in Doppler Radar Sensors for Noncontact Healthcare Monitoring”, IEEE Transactions on Microwave Theory and Techniques, vol. 61, No. 5, May 2013, pp. 2046-2060. |
Vinci, Gabor et al., “Microwave Interferometer Radar-Based Vital Sign Detection for Driver Monitoring Systems”, IEEE MTT-S International Conference on Microwaves for Intelligent Mobility, Apr. 27-29, 2015, 4 pages.
Li, Changzhi et al., “A Review on Recent Progress of Portable Short-Range Noncontact Microwave Radar Systems”, IEEE Transactions on Microwave Theory and Techniques, vol. 65, No. 5, May 2017, pp. 1692-1706.
Li, Changzhi et al., “Random Body Movement Cancellation in Doppler Radar Vital Sign Detection”, IEEE Transactions on Microwave Theory and Techniques, vol. 56, No. 12, Dec. 2008, pp. 3143-3152.
Li, Changzhi et al., “Robust Overnight Monitoring of Human Vital Signs by a Non-contact Respiration and Heartbeat Detector”, IEEE Proceedings of the 28th EMBS Annual International Conference, FrA05.5, Aug. 30-Sep. 3, 2006, 4 pages.
Li, Changzhi, “Vital-sign monitoring on the go”, Sensors news and views, www.nature.com/naturelectronics, Nature Electronics, vol. 2, Jun. 2019, 2 pages.
Lim, Soo-Chul et al., “Expansion of Smartwatch Touch Interface from Touchscreen to Around Device Interface Using Infrared Line Image Sensors,” Sensors 2015, ISSN 1424-8220, vol. 15, 16642-16653, doi:10.3390/s150716642, www.mdpi.com/journal/sensors, Jul. 15, 2015, 12 pages.
Lin, Jau-Jr et al., “Design of an FMCW radar baseband signal processing system for automotive application,” SpringerPlus a SpringerOpen Journal, (2016) 5:42, http://creativecommons.org/licenses/by/4.0/, DOI 10.1186/s40064-015-1583-5, Jan. 2016, 16 pages.
Massagram, Wansuree et al., “Assessment of Heart Rate Variability and Respiratory Sinus Arrhythmia via Doppler Radar”, IEEE Transactions on Microwave Theory and Techniques, vol. 57, No. 10, Oct. 2009, pp. 2542-2549.
Mercuri, Marco et al., “Vital-sign monitoring and spatial tracking of multiple people using a contactless radar-based sensor”, Nature Electronics, vol. 2, Articles, https://doi.org/10.1038/s41928-019-0258-6, Jun. 2019, 13 pages.
Microwave Journal Frequency Matters, “Single-Chip 24 GHz Radar Front End,” Infineon Technologies AG, www.microwavejournal.com/articles/print/21553-single-chip-24-ghz-radar-front-end, Feb. 13, 2014, 2 pages.
Mostov, K., et al., “Medical applications of shortwave FM radar: Remote monitoring of cardiac and respiratory motion”, Am. Assoc. Phys. Med., 37(3), Mar. 2010, pp. 1332-1338.
Oguntala, G. et al., “Indoor location identification technologies for real-time IoT-based applications: an inclusive survey”, Elsevier Inc., http://hdl.handle.net/10454/16634, Oct. 2018, 42 pages.
Peng, Zhengyu et al., “A Portable FMCW Interferometry Radar with Programmable Low-IF Architecture for Localization, ISAR Imaging, and Vital Sign Tracking”, IEEE Transactions on Microwave Theory and Techniques, Dec. 15, 2016, 11 pages.
Qadir, Shahida G., et al., “Focused ISAR Imaging of Rotating Target in Far-Field Compact Range Anechoic Chamber,” 14th International Conference on Aerospace Sciences & Aviation Technology, ASAT-14-241-IP, May 24-26, 2011, 7 pages.
Richards, Mark A., “Fundamentals of Radar Signal Processing,” McGraw-Hill Electronic Engineering, ISBN: 0-07-144474-2, Jun. 2005, 93 pages.
Sakamoto, Takuya et al., “Feature-Based Correlation and Topological Similarity for Interbeat Interval Estimation Using Ultrawideband Radar”, IEEE Transactions on Biomedical Engineering, vol. 63, No. 4, Apr. 2016, pp. 747-757.
Santra, Avik et al., “Short-range multi-mode continuous-wave radar for vital sign measurement and imaging”, ResearchGate, Conference Paper, Apr. 2018, 6 pages.
Schroff, Florian et al., “FaceNet: A Unified Embedding for Face Recognition and Clustering,” CVF, CVPR2015, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Mar. 12, 2015, pp. 815-823.
Simon, W., et al., “Highly Integrated KA-Band Tx Frontend Module Including 8×8 Antenna Array,” IMST GmbH, Germany, Asia Pacific Microwave Conference, Dec. 7-10, 2009, 63 pages.
Singh, Aditya et al., “Data-Based Quadrature Imbalance Compensation for a CW Doppler Radar System”, https://www.researchgate.net/publication/258793573, IEEE Transactions on Microwave Theory and Techniques, Apr. 2013, 7 pages.
Suleymanov, Suleyman, “Design and Implementation of an FMCW Radar Signal Processing Module for Automotive Applications,” Master Thesis, University of Twente, Aug. 31, 2016, 61 pages.
Thayaparan, T. et al., “Micro-Doppler Radar Signatures for Intelligent Target Recognition,” Defence Research and Development Canada, Technical Memorandum, DRDC Ottawa TM 2004-170, Sep. 2004, 73 pages.
Thayaparan, T. et al., “Intelligent target recognition using micro-Doppler radar signatures,” Defence R&D Canada, Radar Sensor Technology III, Proc. of SPIE, vol. 7308, 730817, Dec. 9, 2009, 11 pages.
Tu, Jianxuan et al., “Fast Acquisition of Heart Rate in Noncontact Vital Sign Radar Measurement Using Time-Window-Variation Technique”, IEEE Transactions on Instrumentation and Measurement, vol. 65, No. 1, Jan. 2016, pp. 112-122.
Chadha, H. et al., “The Unscented Kalman Filter: Anything EKF can do I can do it better!,” https://towardsdatascience.com/the-unscented-kalman-filter-anything-ekf-can-do-i-can-do-it-better-ce7c773cf88d, Apr. 27, 2018, 14 pages.
Soumekh, M., “SAR-ECCM Using Phase-Perturbed LFM Chirp Signals and DRFM Repeat Jammer Penalization,” IEEE, Jun. 6, 2005, 6 pages.
Zhang, H. et al., “EMD-based Gray Association Algorithm for Group Ballistic Target,” The 2019 6th International Conference on Systems and Informatics (ICSAI 2019), IEEE, Feb. 27, 2020, 6 pages.
Vinci, Gabor et al., “Six-Port Radar Sensor for Remote Respiration Rate and Heartbeat Vital-Sign Monitoring”, IEEE Transactions on Microwave Theory and Techniques, vol. 61, No. 5, May 2013, pp. 2093-2100.
Wang, Fu-Kang et al., “Wrist Pulse Rate Monitor Using Self-Injection-Locked Radar Technology”, Biosensors, MDPI, Oct. 26, 2016, 12 pages.
Wikipedia, “Scale-invariant feature transform”, https://en.wikipedia.org/wiki/Scale-invariant_feature_transform, printed Oct. 20, 2020, 18 pages.
Wikipedia, “Earth mover's distance”, https://en.wikipedia.org/wiki/Earth_mover%27s_distance, printed Oct. 20, 2020, 5 pages.
Wikipedia, “Hilbert-Huang transform”, https://en.wikipedia.org/wiki/Hilbert-Huang_transform, printed Oct. 20, 2020, 11 pages.
Wikipedia, “K-means clustering”, https://en.wikipedia.org/wiki/K-means_clustering, printed Oct. 20, 2020, 16 pages.
Wikipedia, “Kalman Filter”, https://en.wikipedia.org/wiki/Kalman_filter#Unscented_Kalman_filter, printed Oct. 20, 2020, 35 pages.
Wilder, Carol N., et al., “Respiratory patterns in infant cry,” Canadian Journal of Speech, Human Communication, Winter 1974-75, http://cjslpa.ca/files/1974_HumComm_Vol_01/No_03_2-60/Wilder_Baken_HumComm_1974.pdf, pp. 18-34.
Will, Christoph et al., “Advanced Template Matching Algorithm for Instantaneous Heartbeat Detection using Continuous Wave Radar Systems”, ResearchGate, May 2017, 5 pages.
Will, Christoph et al., “Human Target Detection, Tracking, and Classification Using 24-GHz FMCW Radar”, IEEE Sensors Journal, vol. 19, No. 17, Sep. 1, 2019, pp. 7283-7299.
Will, Christoph et al., “Local Pulse Wave Detection using Continuous Wave Radar Systems”, IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology, Oct. 25, 2017, 9 pages.
Will, Christoph et al., “Radar-Based Heart Sound Detection”, Scientific Reports, www.nature.com/scientificreports, Jul. 26, 2018, 15 pages.
Xin, Qin et al., “Signal Processing for Digital Beamforming FMCW SAR,” Hindawi Publishing Corporation, Mathematical Problems in Engineering, vol. 2014, Article ID 859890, http://dx.doi.org/10.1155/2014/859890, Apr. 15, 2014, 11 pages.
Number | Date | Country
---|---|---
20220155434 A1 | May 2022 | US