The field of the invention relates to a millimeter wave radar system for determining an activity record and to related methods and sensor devices.
A portion of the disclosure of this patent document contains material, which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Radars operating in the radio frequency (RF) and millimeter-wave (mm-wave) bands offer the ability to track and monitor position and movement in both indoor and outdoor environments. A wide range of applications exists; however, the number of specific use cases that have been implemented to date is still limited.
Microwave radars have previously been used for detection of vital signs, through analysis of signal amplitude or phase. However, they have not been used for long-term monitoring, prediction, and evaluation of physical wellbeing.
Known solutions for monitoring subject behavior, such as fall detection or prediction, have either used video-based techniques or wearable devices incorporating sensor suites such as gyroscopes or accelerometers. There is a need for a less intrusive solution.
Further, detecting multiple moving targets is a challenge, as dynamic scenes with substantial motion lead to clutter and noise, which interfere with the responses of targets of interest. Additionally, complex indoor environments can clutter scenes, leading to false detections or missed readings.
The present invention addresses the above shortcomings and also other problems not described above.
One aspect of the invention relates to a mm-wave radar system for detecting an activity record from multiple targets comprising:
Aspects of the invention will now be described, by way of example(s), with reference to the following Figures, which each show features of a mm-wave radar system that implements the invention:
Millimeter-wave (mm-wave) radar coupled with advanced processing techniques, including a spatial filtering-based approach and machine learning, is used to make high-resolution tracking, activity classification, and vital signs detection possible, all at low cost, without the use of wearable devices, and at higher precision than is possible with most other wireless approaches. This is due in part to the shorter wavelengths and larger multi-gigahertz bandwidths available, principally at around 60 GHz, which is unlicensed in many regions across the globe.
With reference to
The system is composed of one, or a plurality, of mm-wave radars or sensors, each of which contains an on-board microprocessor together with the necessary analog electronics to generate mm-wave signals (including but not limited to a waveform generator, voltage-controlled oscillator, linear and power amplifiers, multipliers, and phase shifters), to transmit and receive the signals (transmit and receive antennas and their respective delay lines), as well as analog-to-digital/digital-to-analog converters, and the necessary digital electronics to generate the waveforms and process the returned radar signals (including but not limited to microprocessors and memory). A waveform is generated in the digital domain, converted to analog, mixed up to the mm-wave frequencies, amplified and transmitted. The returned signal is then mixed down to baseband, digitized, and analyzed through radar signal processing and other digital processing techniques. The radar is based on a frequency modulated continuous wave (FMCW) architecture, and signal modulation types could include time division multiplexing (TDM) or binary phase modulation (BPM), depending on noise and processing requirements.
The returned signals, in the form of complex in-phase and quadrature (IQ) components, are further processed using the on-board microprocessor, to obtain a radar data cube. The microprocessor stores the radar data cube (RDC) or a portion of the RDC. The size of the complex-valued RDC is given as,
RDC size = 2 × data type (bytes) × N_Rx × N_Tx × N_chirp × N_adc.  (1)
For a 32-bit data type, with N_Rx=4 receive antennas, N_Tx=3 transmit antennas, N_chirp=128 chirps per frame, and N_adc=128 ADC samples per chirp, the RDC will have a size of 1,572,864 bytes. At a common frame rate of 20 fps, which is suitable for capturing human motions and micro-movements, a data transfer rate of 30 MB/s is required. This is excessive, both from the perspective of the data pipeline, as it would be challenging for most wireless transmission protocols (Wi-Fi, etc.), and because data storage and transmission off-site (especially on commercial Cloud storage facilities) would be expensive. Therefore, it is preferable to reduce the data size on-chip (i.e., at the edge). Algorithmic radar signal processing approaches exist to estimate many parameters, such as positions and velocities of targets (people and objects), as well as to remove clutter and to track the targets. Machine learning approaches, including but not limited to k-nearest neighbors (KNN), k-means clustering, multi-layer perceptron (MLP), and artificial neural networks (ANN), can be used to classify activities; however, it is challenging to do this accurately on-chip as these techniques tend to be computationally expensive.
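The RDC size and data-rate calculation of Eq. (1) can be expressed as a short sketch:

```python
def rdc_size_bytes(dtype_bytes, n_rx, n_tx, n_chirp, n_adc):
    # Eq. (1): the factor of 2 accounts for the complex I and Q components
    return 2 * dtype_bytes * n_rx * n_tx * n_chirp * n_adc

frame_bytes = rdc_size_bytes(4, 4, 3, 128, 128)  # 32-bit samples -> 4 bytes
rate_bytes_per_s = frame_bytes * 20              # at 20 frames per second
```

This reproduces the 1,572,864-byte frame size and the approximately 30 MB/s rate quoted above.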
There are various approaches to accurately classifying activities in sensor and Internet-of-Things (IoT) systems, ranging from embedding higher-power processors on-chip (multi-core CPU, GPUs, TPU, etc.), to transmitting data packets for off-chip processing. The first approach minimizes transmitted data and is most responsive; but embedding an additional high-performance processor and relevant subcircuits can be prohibitively expensive (for both price and power requirements), and reduces the potential for using radar data for time-series trend prediction or for multi-sensor fusion approaches, in which the outputs of multiple sensors (radar or non-radar) are combined. The second approach has the aforementioned drawback of large data storage and transmission costs, which can rapidly escalate as more radar-based sensors are deployed. A hybrid approach is therefore deemed most suitable. In this approach, a data minimization approach is initially applied, such that the radar data is first processed to find targets, then target features are analyzed either algorithmically or using lightweight machine learning classifiers to determine whether they are of interest, and subsequently transmitted off-sensor for additional processing or classification. Off-chip could mean a local gateway device, which combines and processes data from one or more sensors before transmitting the data off-site, or a local server, which is not connected to an external network, or to the Cloud.
With reference to
It is necessary to track and classify multiple targets, as otherwise the responses of two or more targets in the detection range of a sensor will interfere, leading to false readings and classifications. This is particularly problematic with highly dynamic motions such as falls. Standard radar techniques are first used to find points of interest (using constant false alarm rate (CFAR) detection, for example), which are aggregated to generate a point cloud of the scene, and to track moving targets (using the extended or unscented Kalman filters, for example). Micro-Doppler signatures are characteristic responses which can be used to distinguish targets based on their micro-motions, particularly small periodic motions such as the swinging of arms and legs, the rotation of a drone's rotary blades, or the movement of a chest due to breathing and a beating heart. They are also recognized as a useful tool for identification of targets, including people, vehicles, drones and birds, and for classification of activities (including fall detection and gestures). Beamforming is first applied to the radar cube, at the range bin corresponding to the location of the primary target. The beamforming weights are determined by the angle-of-arrival of the target from the sensor, as obtained using the clustering and tracking algorithms, and have the purpose of minimizing interference due to secondary targets in other range bins and/or at other angles. The output is the slow-time response of the target, which can then be transformed to the target's complex Doppler (velocity) spectral response via an FFT. The target's complex micro-Doppler signature is then obtained by collecting the spectral response over a series of sequential data cubes (each of which corresponds to a new frame of measurement time with a new timestamp). Optionally, the position that is monitored can track the position of the target, or it can remain in a fixed location.
Furthermore, a window (such as a Hamming window) may be applied to the spectral profile, and the profile may also be transformed to an alternative domain, such as the wavelet domain. These variations are all encompassed by the term ‘micro-Doppler signatures’. It is further notable that the beamforming approach can be applied to each detected target so, for every time segment in which they are detected, they each have an associated micro-Doppler signature.
This micro-Doppler signature of each target, as well as other measurements, can serve as inputs to an ML algorithm. An example of this could be a convolutional neural network (CNN) or multi-layer perceptron (MLP). It is hugely beneficial to be able to analyze this information in real-time on the radar microprocessor, i.e. on the 'edge', and so the CNN for example is simplified to reduce the number of hidden layers, and to minimize the number of neurons present. Because the signatures of different actions (for example standing, sitting, walking, running, falling) are typically easily distinguishable, the ML subsystem complexity can be reduced significantly, which allows real-time 'edge' processing for these scenarios. Alternatively, a dictionary of classifiers based on radar parameters can be calculated for a target, such as but not restricted to positional center-of-mass, velocity, change of position or velocity, envelope of position or velocity, with these parameters serving as inputs to a lightweight ML classifier, such as but not restricted to KNN or k-means clustering. The ML subsystem may also be used to differentiate or distinguish between a person, a pet, a bed, or another object.
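A lightweight classifier of the kind described above can be sketched as follows; the feature choices (mean speed and velocity spread per target) and the training values are hypothetical illustrations, not values from the system.

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Majority vote among the k nearest training samples
    (Euclidean distance): a lightweight 'edge' classifier."""
    dist = np.linalg.norm(np.asarray(train_X) - np.asarray(query), axis=1)
    nearest = np.asarray(train_y)[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical per-target features: [mean |velocity| (m/s), velocity spread]
train_X = [[0.8, 0.3], [1.0, 0.4], [0.9, 0.2],    # walking examples
           [3.0, 2.5], [2.8, 2.2], [3.2, 2.8]]    # falling examples
train_y = ["walking"] * 3 + ["falling"] * 3
```

A query feature vector is then classified as, e.g., `knn_classify(train_X, train_y, [0.95, 0.35])`.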
Additionally, the system may use any stored or collected data from one individual or from a population to predict user behavior. Examples are provided below.
Collected data may also include other known subject-related characteristics such as gender, age, health information or fitness level as well as other environmental measurements such as temperature data.
The system may also include an additional subsystem for wireless transmission (for example via Wi-Fi or Bluetooth) of encrypted radar data to a central server or cloud server. This data could be sent using one of a number of lightweight protocols, such as but not restricted to MQTT. The central server can then further process the data from a plurality of radars, send processed data to additional devices, trigger alarms, or store data for further analysis.
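As a minimal sketch of preparing such a payload for a lightweight protocol like MQTT (the detection fields are hypothetical, and encryption is assumed to be provided by the transport layer, e.g. TLS):

```python
import json
import zlib

def pack_detections(detections):
    """Serialize and compress a list of target detections for
    transmission over a lightweight protocol such as MQTT.
    Encryption (e.g. TLS) is assumed to be handled by the transport."""
    return zlib.compress(json.dumps(detections).encode("utf-8"))

def unpack_detections(payload):
    """Inverse of pack_detections, run on the central/cloud server."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))
```

An MQTT client (such as the paho-mqtt library) would then publish the packed bytes to a topic, and the server would unpack them on receipt.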
The wireless transmission subsystem can also be used to remotely update the configuration of the radar. Commands can be transmitted to the radar to update its configuration, whereby the configuration may include but is not restricted to the waveform configuration (for example the number of chirps per frame, the number of ADC samples per chirp, the sampling rate, the time period of each chirp), or software parameters to assist with processing (for example the orientation of the sensor, the areas or volumes of space in which to track targets, the tracking filter parameters, classification parameters, etc.). Examples of usage include: updating the radar when it is moved to a different room; updating the radar configuration to monitor areas within a room, such as desks and doorways; increasing/decreasing sensitivity of the tracking algorithm if it is missing targets/generating false detections; increasing sensitivity and updating tracking parameters for specific parts of the scene, such as through a wall to monitor an adjoining room; updating the waveform configuration or modulation scheme to improve resolution over a smaller monitoring area; updating the waveform configuration to increase sensitivity for vital signs measurements.
Response of the radar to clutter in the environment is important to its correct operation and functionality. Remote configuration of the radar enables fine-tuning of its performance in response to measurements. This can be performed manually, by updating the waveform and software parameters, or it can be performed algorithmically or through machine learning approaches. In either case, the system responds to inputs, which convey how accurately the radar is performing and subsequently updates the waveform and software parameters. These could be user inputs which may include the number of occupants detected compared to a ground truth, or the location of a false reading or missed detection, and the ground truths could be entered manually by an operator or they could be obtained through some other measurement method. In addition to the waveform parameters, the software parameters which may be updated include, but are not restricted to: number of points in a cluster to assign a track, required signal-to-noise ratio of points to assign a track, point velocity or spatial spread criteria, number of frames before a detected cluster is assigned as a track, etc. While these can be updated through defined guidelines, and specific knowledge of the system combined with user experience, a machine learning approach is suitable as it can be trained to optimize across the full set of parameters.
Multiple radars can be used to improve performance of the system. They can be used independently, with their detections projected onto a single output based on the radars' positions within the global coordinate system. Alternatively, the sensor detections can be combined: e.g. if detected point clouds are transmitted to a central server, gateway, or other processing device, then, using the radars' respective positions in the global coordinate system, the point clouds can be combined, accounting for relative accuracy levels, to generate more precise clusters. Because the radar measurements are more accurate at the antenna array's boresight, this fact can be used to weight the detections toward the more accurate readings. Optionally, a positional calibration process can be run, in which each radar sequentially transmits a known sequence (such as a continuous wave, single-frequency emission), while the other radars detect and localize that position. The radars will then have their relative positions within the error bounds of their measurements.
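The fusion step above can be sketched in 2-D as follows; the Gaussian boresight weighting (and its width) is an illustrative assumption standing in for the sensor's actual angular-accuracy model.

```python
import numpy as np

def to_global(points_local, radar_pos, radar_yaw):
    """Rotate and translate a radar's local 2-D point cloud into the
    global coordinate frame, given the radar's position and heading."""
    c, s = np.cos(radar_yaw), np.sin(radar_yaw)
    R = np.array([[c, -s], [s, c]])
    return np.asarray(points_local) @ R.T + np.asarray(radar_pos)

def boresight_weight(points_local, sigma=np.radians(30)):
    """Down-weight detections far from boresight (the local x-axis),
    where angular accuracy is lower; Gaussian falloff is an assumption."""
    pts = np.asarray(points_local)
    ang = np.arctan2(pts[:, 1], pts[:, 0])
    return np.exp(-(ang ** 2) / (2 * sigma ** 2))

def fuse(global_points, weights):
    """Weighted centroid of co-located detections of the same target."""
    w = np.asarray(weights)[:, None]
    return (np.asarray(global_points) * w).sum(axis=0) / w.sum()
```

For example, two radars facing each other that both detect the same target produce two global points, which `fuse` combines into a single estimate biased toward the radar that saw the target nearer its boresight.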
The sensors can also be connected using mesh networking technology (such as mesh Wi-Fi). This is beneficial because it: enables more efficient sharing of radar data and computational burden between the sensors, gateway devices, and other connected sensors and processors; allows for more reliable and robust connectivity; reduces burden on existing Wi-Fi networks or other communications infrastructure, which is particularly important in environments such as hospitals where connectivity may be poor, or where they are considered critical infrastructure; allows sensor coverage in areas or rooms where there is no existing networking capability, and also extends coverage to other Internet-of-Things (IoT) sensors; provides a simpler but more secure connection to external servers or clouds, as data is sent through a single node, fully encrypted and compressed.
The system may also include a sensor or sensors to detect its orientation or movements, through inertial sensors such as an accelerometer/gyroscope. This can provide notification if the sensor has been knocked over, rotated, or moved to another room. The configuration can then be updated to account for its new environment, or a notification provided for maintenance crews to reposition the sensor.
Hence the mm-wave radar system presented provides coherent wireless sensing that presents an attractive form of environmental monitoring in locations such as hospitals, office buildings, and in the home. This is because it can be used to concurrently track location and monitor activities, and there is significant opportunity to do this without the use of tags, wearables or cameras—which are inappropriate in sensitive areas such as operating theatres or restrooms.
Additionally, homecare monitoring is another attractive proposition, particularly in the case of vulnerable residents living alone, where it is important to be able to monitor for abnormal conditions, emergencies, and degradation of wellbeing, including detection of falls, distress of the resident, and deterioration in mobility. The mm-wave radar system provides many advantages as it can be used to monitor throughout the home, through certain walls, including in sensitive areas such as bathrooms where cameras are not suitable. The system can even operate in many occluded environments, such as through smoke, providing valuable use cases for fire safety. It also avoids the need for pendants and other wearable devices, which are only activated in a fraction of expected cases. Another related application is as support for hospital ‘virtual wards’. Virtual wards aim to keep patients out of hospitals, or to accelerate the hospital discharge process, because increased lengths-of-stay are highly correlated with increased risk of infection and decompensation (a phenomenon which leads to increased recovery times). This is done by providing an environment in the patient's home from which their health and wellbeing can be monitored remotely. This is achieved through monitoring of their movement and mobility, as well as detection of falls, in conjunction with other sensor measurements (oxygen levels, blood pressure, temperature, etc.).
The increasing number of millimeter-wave (mm-wave) band applications—including 5G, IEEE 802.11ad/ay, the 60 GHz ISM band, and automotive radar—are of significant interest as they enable high bandwidth (and thus high resolution) sensing, small package sizes (due to the small wavelengths), and relatively low-cost devices (due to the proliferation of commercialization activities).
Examples of use cases and applications are now described.
Vital Signs Monitoring:
We now describe a specific example with a multi-target tracking and activity classification system based on a digital beamforming approach using MIMO radar. The machine learning model is based on a Deep Neural Network (DNN) that has been configured to recognize six exercise-based classes. The system is able to achieve prediction with over 95% classification accuracy for all classes.
The system is extendable to classify any other use cases, as described in the above section, and can be applied to detection of other activities, such as fall detection.
This system presents a methodology for high-accuracy tracking of multiple targets using a 60 GHz radar system, and a deep neural network (DNN) applied to the micro-Doppler response for classification of exercise activities, which are selected as demonstrators due to their mix of high- and low-dynamic movements that take place in all three spatial dimensions.
The system described provides a range resolution of about 6.4 cm and Doppler resolution of about 0.18 m/s. The system further successfully reduces interference between closely neighboring targets.
Typical system variants may provide different range resolution depending on a number of factors, such as operational bandwidth.
Measurements of individual target micro-Doppler signatures are demonstrated, even in the presence of multiple other moving targets. The signatures are used to train a Deep Neural Network (DNN) for activity classification.
A NodeNs ZERO 60 GHz IQ radar is used for all experiments presented here. For the presented experiments a bandwidth of BW=1.8 GHz is used. It operates using the principles of FMCW in conjunction with a time-division multiplexing (TDM) scheme, in which a linear chirp ramps the frequency up over N_adc ADC samples. These correspond to the fast-time samples in the context of radar signal processing, on which a Fast Fourier Transform is performed to obtain the range profile. A value of N_adc=96 is used, as a tradeoff between maximum range and data volume. The radar therefore has a range resolution of Δr = c/(2·BW),
and a maximum detection range of r_max = N_adc·Δr = 8.1 m. In slow time, the radar emits N_chirp=96 identical, linear chirps. Following a second FFT, this presents a maximum detectable velocity of v_max = λ/(4·N_Tx·t_c), where t_c is the chirp repetition interval,
with a velocity resolution of Δv = 2·v_max/N_chirp.
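The resolution relationships above can be checked numerically. The chirp interval t_c below is an assumed value (not quoted in the text), chosen so that the derived figures agree with the stated resolutions to within rounding:

```python
c = 3e8                        # speed of light (m/s)
BW = 1.8e9                     # chirp bandwidth (Hz)
N_adc, N_chirp, N_tx = 96, 96, 3
wavelength = c / 60e9          # ~5 mm at 60 GHz
t_c = 48e-6                    # chirp repetition interval (assumed)

dr = c / (2 * BW)              # range resolution, ~8.3 cm
r_max = N_adc * dr             # maximum detection range, ~8 m
v_max = wavelength / (4 * N_tx * t_c)   # TDM-MIMO maximum velocity
dv = 2 * v_max / N_chirp       # velocity resolution, ~0.18 m/s
```

With this assumed t_c, the derived velocity resolution matches the 0.18 m/s Doppler resolution quoted earlier, and r_max agrees with the quoted 8.1 m to within rounding of Δr.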
The radar transceiver consists of an antenna array with 3 transmit (Tx) and 4 receive (Rx) antennas, each with dipole-like radiation patterns, resulting in a 12-element virtual MIMO array. In this application it enables detection in two angular dimensions: azimuth ϕ (measured on the projection onto the x-y plane, from the x-axis) and elevation θ (measured from the z-axis).
With reference to
Digital beamforming is used to obtain the angular spectral response P̂(ϕ, θ) with high precision, although we note that the ability to distinguish between objects which are close to one another is limited by the array aperture size. Popular approaches include the Minimum Variance Distortionless Response (MVDR/Capon) beamformer, in which the total collected power is minimized in order to create nulls away from the pattern of the main search beam. The angular response is given as,

P̂(ϕ,θ) = 1/(ν^H(ϕ,θ)·R^(−1)·ν(ϕ,θ)),  (2)
where ν is the steering vector for a specific search angle pair (ϕ, θ), R=E{xx^H} is the spatial covariance matrix, E{⋅} is the expected value, and x(t) is the vector of measurements at each of the 12 virtual antennas. Other suitable approaches exist, such as MUltiple SIgnal Classification (MUSIC), which detects sources by searching the noise subspace; however, this comes at the cost of additional computational complexity and some knowledge of the number of targets. For an antenna at position (x,z)=(nλ/2, mλ/2), the steering vector element is
ν_nm(ϕ,θ) = e^(−jπ[(n−1)sin ϕ + (m−1)sin θ]).  (3)
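A sketch of the MVDR angular spectrum of Eqs. (2)-(3) on simulated snapshots follows. The source angle, noise level, snapshot count, and search grid are illustrative; the indices n, m run from 0 here, which is equivalent to the (n−1), (m−1) terms with 1-based indexing.

```python
import numpy as np

rng = np.random.default_rng(0)
N_x, N_z = 4, 3          # 4 x 3 = 12-element virtual array

def steering(phi, theta):
    # Eq. (3): half-wavelength spacing in x (azimuth) and z (elevation)
    n = np.arange(N_x)[:, None]
    m = np.arange(N_z)[None, :]
    return np.exp(-1j * np.pi * (n * np.sin(phi) + m * np.sin(theta))).ravel()

def mvdr_power(R_inv, phi, theta):
    v = steering(phi, theta)
    return 1.0 / np.real(v.conj() @ R_inv @ v)   # Eq. (2)

# Simulated snapshots for a single source at (20 deg, 10 deg) plus noise
phi0, theta0 = np.radians(20.0), np.radians(10.0)
v0 = steering(phi0, theta0)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.1 * (rng.standard_normal((12, 200)) + 1j * rng.standard_normal((12, 200)))
X = np.outer(v0, s) + noise
R = X @ X.conj().T / 200 + 1e-6 * np.eye(12)   # sample covariance + loading
R_inv = np.linalg.inv(R)

# Locate the peak of the angular spectrum over a coarse search grid
phis = np.radians(np.arange(-60, 61, 2.0))
thetas = np.radians(np.arange(-30, 31, 2.0))
P = np.array([[mvdr_power(R_inv, p, t) for t in thetas] for p in phis])
i, j = np.unravel_index(np.argmax(P), P.shape)
phi_est, theta_est = np.degrees(phis[i]), np.degrees(thetas[j])
```

The spectral peak recovers the simulated source angles to within the grid resolution.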
The radar scans the environment at 20 Hz—which is sufficient to capture most human micro-motions—and associates a timestamp to each scan. The receivers are coherent and measure the complex IQ parameters. The phase sensitivity enables detection of minute motions. This is because one wavelength, λ≈5 mm, perpendicular to the plane of the array corresponds to a phase rotation of 2π, and so even small movements can be detected through phase shift measurements. A phase shift of Δφ=36°, for example, will correspond to a movement of 0.5 mm.
Once the radar cube—which consists of the complex-valued IQ data corresponding to each range, azimuth, elevation and Doppler bin—is calculated, the aim is then to identify relevant subjects in the scene, locate their positions, and then classify their actions. The radar cube, however, is large and not suitable for real-time processing, and so a detection algorithm is necessary to identify areas of interest. The output of the detection algorithm is a point cloud, which is a list of points corresponding to range, azimuth, elevation and velocity. The naïve method is to simply poll the amplitudes of the response at each spatial bin (r,ϕ,θ), assigning a point to each bin if it exceeds a certain threshold value. Alternatively, the Constant False Alarm Rate (CFAR) algorithm is more robust to noise. On selecting a list of points, with their associated range and angles, the velocity is subsequently defined as:
v = arg max |ν|,  (4)
where ν is the vector representing the Doppler spectrum at that spatial point.
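A minimal 1-D cell-averaging detector, as one concrete instance of the CFAR family mentioned above (the training/guard window sizes and threshold scale are illustrative):

```python
import numpy as np

def ca_cfar(power, n_train=8, n_guard=2, scale=4.0):
    """1-D cell-averaging CFAR: declare a detection when a cell exceeds
    `scale` times the mean power of the surrounding training cells
    (guard cells around the cell under test are excluded)."""
    power = np.asarray(power, dtype=float)
    hits = []
    for i in range(len(power)):
        left = power[max(0, i - n_guard - n_train): max(0, i - n_guard)]
        right = power[i + n_guard + 1: i + n_guard + 1 + n_train]
        train = np.concatenate([left, right])
        if train.size and power[i] > scale * train.mean():
            hits.append(i)
    return hits
```

Because the threshold adapts to the local noise estimate, a target spike is detected while uniform noise produces no false alarms.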
With reference to
Targets are then identified through application of the DBSCAN clustering algorithm to the point clouds, such that each point will be assigned either to a specific cluster or as noise, as shown. Each cluster m is then associated to a subject, and will have a corresponding center-of-mass (x_m, y_m, z_m), which is the position of the target with respect to the radar. The nearest appropriate range bin and azimuthal bin are used for 2D tracking, with an additional elevation bin for 3D tracking. An experiment was performed with two subjects near one another, both at a range of 2 m from the radar. The subjects alternately performed dynamic actions (jumping up-and-down in this case). It is important to be able to distinguish between the actions of different users, so as not to confuse an activity classification system with dynamic noise. A potential use case is an environment with thin walls, which may be transparent to RF and mm-waves, in which dynamic motions of a resident in an adjoining room may mask detection of a fall in the primary room, because the Doppler responses are convolved if appropriate processing is not performed. The presented approach avoids this scenario.
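For illustration, a minimal DBSCAN over a 2-D point cloud can be written directly (in practice a library implementation would typically be used; the eps and min_pts values are illustrative):

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=4):
    """Minimal DBSCAN: returns one label per point (-1 = noise).
    Core points have at least min_pts neighbours within radius eps."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbours = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue
        # Grow a new cluster outward from this core point
        labels[i] = cluster
        seeds = list(neighbours[i])
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbours[j]) >= min_pts:
                    seeds.extend(neighbours[j])
        cluster += 1
    return labels
```

Two dense groups of points are assigned distinct cluster labels, while an isolated point is returned as noise.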
A significant challenge in using Doppler responses for activity classification is the cross-contamination of a target's signature with that of a nearby target. This is particularly challenging for targets at similar distances from the radar.
To mitigate this, spatial filtering is used to isolate each target from nearby targets. Analogously to the angle-of-arrival estimations, the corresponding beamforming weights associated with each antenna are calculated using the angular response from Eq. (2). The weighting vector is given by
w = P̂_MVDR(ϕ,θ)·R^(−1)·ν(ϕ,θ).  (5)
If we consider that the time interval between chirps is t_c, then the spatially-filtered response of the target at range r and azimuth angle ϕ, to chirp p, is

z_p = z(p·t_c) = w^T(ϕ)·x(r).  (6)
The Doppler spectrum at time T is subsequently calculated as the discrete Fourier transform of z, i.e.
Z(T)=DFT(z). (7)
On initial detection of a target, its center-of-mass (CoM) (x_m, y_m) is monitored continuously for a period of time (1.5 s, corresponding to 30 frames). The micro-Doppler signature is a time-frequency plot which shows the evolution of the Doppler spectrum Z with time. Note that, whilst it is also possible to update the location to the target's recalculated CoM at each frame, we use a fixed location as this maintains a consistent phase reference, which aids fine-motion detection (including for vital signs monitoring).
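Eqs. (5)-(7) can be sketched end-to-end on simulated data. The azimuth-only array, assumed chirp interval, angles, and Doppler shifts are illustrative simplifications of the 2-D virtual array described above, and the Hermitian form w^H·x is used in place of the transpose of Eq. (6), a common beamforming convention.

```python
import numpy as np

N_ant, N_chirp, t_c = 12, 96, 50e-6   # antennas, chirps, chirp interval (assumed)
frame = N_chirp * t_c

def steer(phi):
    # Azimuth-only steering for a half-wavelength-spaced virtual array
    return np.exp(-1j * np.pi * np.arange(N_ant) * np.sin(phi))

# Two targets in the same range bin, at different angles and Doppler shifts
phi1, phi2 = np.radians(-20.0), np.radians(25.0)
f1, f2 = 6 / frame, -10 / frame       # on-grid Doppler frequencies (Hz)
p = np.arange(N_chirp)
X = (np.outer(steer(phi1), np.exp(2j * np.pi * f1 * p * t_c))
     + np.outer(steer(phi2), np.exp(2j * np.pi * f2 * p * t_c)))

# Eq. (5): beamforming weights steered at the first target
R = X @ X.conj().T / N_chirp + 1e-3 * np.eye(N_ant)
R_inv = np.linalg.inv(R)
v = steer(phi1)
P_mvdr = 1.0 / np.real(v.conj() @ R_inv @ v)   # Eq. (2)
w = P_mvdr * (R_inv @ v)                       # Eq. (5)

# Eq. (6): slow-time response; Eq. (7): Doppler spectrum via the DFT
z = w.conj() @ X
Z = np.fft.fftshift(np.fft.fft(z))
freqs = np.fft.fftshift(np.fft.fftfreq(N_chirp, t_c))
```

The filtered spectrum peaks at the steered target's Doppler shift while the second target's Doppler line is strongly suppressed, mirroring the isolation shown in the figure.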
The figure shows the efficacy of the method, wherein even two very closely spaced targets have relatively little contribution to the other's micro-Doppler signature.
Bilinear interpolation is used to smooth the plots, which helps to highlight features. The 0-Doppler bin (center of the vertical axis) is set to 0, which is equivalent to subtracting out the mean value of the samples. This is a common radar processing technique used to suppress returns from stationary objects, and in particular to mitigate the effects of clutter. From the plots, the Standing signature is clearly evident, with any motion restricted to the near-0 bins. Squats involve relatively slow vertical movements, and so are similarly restricted to the near-0 Doppler bins. Running (performed in one spot) is clearly the most dynamic motion.
There is a significant and growing body of work on using machine learning techniques for classification of activities based on their Doppler responses. The primary innovations here are the approaches for distinguishing between multiple moving targets and their respective activities. A deep neural network (DNN) trained using a transfer learning approach is used to classify human activities. The SqueezeNet DNN is used as it is lightweight, requiring <0.5 MB of memory and 50× fewer parameters than comparable networks such as AlexNet, while maintaining similar accuracy. SqueezeNet was originally trained for computer vision, with around 1,000 classes. It is used here for the classification of six activities using a training set of 1,283 micro-Doppler signatures, with a further 315 signatures (i.e. an approximately 80%-20% training-validation split) kept for validation. The overall classification accuracy is over 99%, with only some misclassifications between running and jumping jacks, which are both highly dynamic movements.
Hence a method for tracking and activity classification of multiple targets using mm-wave radar has been provided, coupled with MIMO radar and beamforming techniques. The DBSCAN clustering algorithm is used to identify targets of interest, and to isolate potential noise detections. This can be extended with a tracking algorithm (such as the Unscented Kalman Filter) to improve tracking precision of moving targets.
Beamforming is then used to digitally reduce the field-of-view of the array to focus on each of the targets, in order to minimize clutter and movements from other targets. Imaging techniques combined with multi-radar fusion can be used to further improve accuracy. The micro-Doppler signatures of the targets are then measured, and are used in conjunction with a Deep Neural Network built upon a SqueezeNet transfer learning model. This shows excellent performance. We note that in future a one-class classifier can be used to discard noisy movements, which may be misidentified as one of the six trained classes, and the classification can be further improved by accounting for motion in the X, Y and Z directions, rather than using just the velocity response.
The present invention offers solutions for the use of a mm-wave radar system for short and long-term tracking of subject activity or behavior such as physical activity, as well as for determining or predicting long-term health and wellbeing trends. Using machine-learning techniques, the mm-wave radar system converts a stream of radar data or backscattered signals into meaningful data outputs which near instantaneously describe the activity of multiple targets within an environment. The data outputs may then be used for immediate analysis of an environment and its occupants in real time. Additionally, they may also be used to determine whether further radar data should be sent for subsequent analysis. The techniques presented are particularly useful in a homecare environment, where the knowledge in near real time of a person's location and current state is of benefit to determine their wellbeing and/or physical fitness.
Advantageously, tracking may be done tag-free (with no wearable device). Tag-free use cases are particularly attractive as wearables are uncomfortable for many people, are easily lost, and require frequent charging. A wearable-free system provides continuous wellbeing and security monitoring without requiring a user to remember to charge or put on a device.
There is no limitation on the number of targets and their associated activities, provided that the number of sensors deployed can be increased to suit the environmental configuration.
In the following sections, we outline key features of the mm-wave radar system; we list also various optional sub-features for each feature. Note that any feature can be combined with one or more other features; any feature can be combined with any one or more sub-features (whether attributed to that feature or not) and every sub-feature can be combined with one or more other sub-features. The invention is however defined in the appended claims.
Feature 1: Multi-Target Classification Using Machine Learning
In one implementation, a mm-wave radar system enables the classification of multiple activities from multiple subjects or targets using a machine learning model. The received radar data or backscattered signal is filtered using digital beamforming in order to determine a micro-Doppler response of at least one target. The micro-Doppler responses of one or more targets are then used as inputs to a machine learning engine for classification purposes. Hence, the system is able to convert a stream of backscattered signals into data outputs or activity records that describe multiple targets. The data outputs are configured to be meaningful such that instantaneous information related to the activity or state of multiple targets in an environment can be used to provide real-time information related to multiple targets. Further, the data outputs can also be used to determine whether specific radar data should be transmitted for subsequent data analytics. Target classification may include, for example: identification; whether they are human or non-human; whether they are an adult or a child; whether they are in a wheelchair. Activity classification may include, for example: walking, running, jumping, exercising, sitting, lying down, standing, falling over, sleeping, or any other activity.
We can generalize as follows:
A mm-wave radar system for determining an activity record from multiple targets comprising:
Backscattered Signals
Radar Data
Activity Record
Output
ML Engine
Vital Signs
Other Applicable Optional Features
Feature 2: Combination of Edge- and Cloud-Classification
We can generalize as follows:
A mm-wave radar system for detecting an activity record from multiple targets comprising:
Optional Features
Feature 3: Remote Configuration
The configuration of the mm-wave radar system can be updated automatically or manually based on analysis of the radar data. The configuration of the radar includes control of the radar waveform parameters (including chirp, ADC sampling rate, waveform duration, sampling time, etc.) and software parameters (tracking filter (e.g. Kalman) parameters, monitoring zones, and further processing parameters such as CFAR, etc.). Configuration updates are sent wirelessly to the sensor, e.g. through Wi-Fi, and are communicated through an application running on a gateway hub, a server, or in the Cloud. Configuration changes can be entered manually by a user, for example to define the monitoring/room area when a sensor is first installed, or to specify particular locations, such as desks, which should be monitored; or they can be made programmatically. In the latter case, the user provides inputs which can include: too many false target detections, not enough detections, detections from neighbouring rooms, or improved sensitivity required in a certain area (such as through a wall). These inputs can be processed through a machine learning model (e.g. reinforcement learning) to update the configuration to achieve the desired performance.
It is hugely beneficial to be able to adaptively recalibrate and reconfigure a radar to optimize performance, especially when trying to monitor moving targets, as dynamic scenes with a lot of motion lead to clutter and noise, or when complex indoor environments are monitored.
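By way of illustrative example only (not part of the claimed invention), a remotely updatable configuration of the kind described above may be sketched as a validated record of waveform and software parameters. All field names and default values here are assumptions chosen for the sketch:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class RadarConfig:
    # Waveform parameters (illustrative values)
    chirp_slope_mhz_per_us: float = 30.0
    adc_sample_rate_ksps: float = 2000.0
    frame_period_ms: float = 50.0
    # Software parameters (illustrative values)
    cfar_threshold_db: float = 12.0
    kalman_process_noise: float = 0.1
    monitoring_zone_m: tuple = ((0.0, 0.0), (4.0, 5.0))  # (x, y) corners

def apply_update(current: RadarConfig, update: dict) -> RadarConfig:
    """Merge a remote configuration update (e.g. received over Wi-Fi
    from a gateway hub or Cloud application) into the active config.

    Unknown keys are rejected so a malformed update cannot silently
    misconfigure the sensor; the frozen dataclass keeps the previous
    configuration intact for rollback.
    """
    unknown = set(update) - set(current.__dataclass_fields__)
    if unknown:
        raise ValueError(f"unknown configuration keys: {unknown}")
    return replace(current, **update)
```

A reinforcement-learning agent of the kind mentioned above would then emit such `update` dictionaries (e.g. raising `cfar_threshold_db` when the user reports too many false detections) rather than a human operator entering them.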
We can generalize as follows:
Optional Features
Feature 4: Detection of Abnormal or Change of Behavior of a Subject, Such as Mobility Deterioration
The mm-wave radar system logs the activities of an occupant (e.g. in the home) over a period of time (days, weeks, months). The mm-wave radar system uses activity classification to identify when a person performs activities such as standing up, walking across a room, or sitting down. The radar detects the location of the occupant and, in combination with timestamps, a record is generated of how long it takes the occupant to perform defined activities, such as getting up from a sofa and walking to the kitchen, or walking from a bedroom to a bathroom. This data is measured over a period of time to generate a time-series, which a physician can then refer to in order to diagnose whether a person's mobility is improving or degrading. A recurrent neural network (RNN) or other machine learning technique can be applied to this data to identify mobility trends, to predict how mobility will change, or to predict whether a person is at risk of falling (degradation in mobility is a leading indicator of fall risk).
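By way of illustrative example only (not part of the claimed invention), the mobility time-series described above can be analysed with a far simpler technique than an RNN: a least-squares trend on the daily activity durations. The function names and the flagging threshold are assumptions chosen for the sketch:

```python
import numpy as np

def mobility_trend(durations_s):
    """Least-squares slope (seconds per day) of daily transit times.

    durations_s: one value per day, e.g. the median time taken to
    walk from the bedroom to the bathroom. A sustained positive
    slope indicates the occupant is slowing down - per the
    description above, a leading indicator of fall risk.
    """
    days = np.arange(len(durations_s))
    slope, _intercept = np.polyfit(days, durations_s, 1)
    return slope

def flag_deterioration(durations_s, threshold_s_per_day=0.2):
    """Flag the record for physician review if the occupant is
    slowing by more than the (illustrative) threshold per day."""
    return mobility_trend(durations_s) > threshold_s_per_day
```

An RNN, as mentioned above, would replace the linear fit when the goal is to predict future mobility rather than merely to detect an existing trend.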
We can generalize as follows:
Optional Features
Note
It is to be understood that the above-referenced arrangements are only illustrative of the application for the principles of the present invention. Numerous modifications and alternative arrangements can be devised without departing from the spirit and scope of the present invention. While the present invention has been shown in the drawings and fully described above with particularity and detail in connection with what is presently deemed to be the most practical and preferred example(s) of the invention, it will be apparent to those of ordinary skill in the art that numerous modifications can be made without departing from the principles and concepts of the invention as set forth herein.
Number | Date | Country | Kind |
---|---|---|---|
2014577.7 | Sep 2020 | GB | national |
2109457.8 | Jun 2021 | GB | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/GB2021/052409 | 9/16/2021 | WO |