This application claims foreign priority to EP 18156277.8, filed Feb. 12, 2018, the content of which is incorporated by reference herein in its entirety.
The present disclosure relates to methods for the determination of a boundary of a space of interest using a radar sensor, and is more particularly, although not exclusively concerned, with the determination of a boundary of an enclosure, such as, a room.
In the domain of home automation, there is a requirement to be able to sense the presence of an individual in a room or space within the home. Passive infrared (PIR) systems are well known for sensing the presence of individuals in a room and can be incorporated into home automation systems. However, such systems tend only to detect movement in the room; no detection of position and no tracking is involved.
If a PIR system is replaced by a millimetre wavelength radar system, due to the penetrating nature of electromagnetic radiation at such wavelengths, it is often difficult to determine the perimeter of a room as some materials are transparent or translucent to the radiation. This results in electromagnetic radiation leaking beyond the boundary of the enclosure or room with the generation of erratic or unintended detections of the radar sensor. In addition, doorways, open doors and open windows can also lead to detections outside of the perimeter of the room.
International Publication No. WO 01/01168 A2 describes a system and method for intrusion detection using a time domain radar array. Two or more radar sensors are located in a sparse array around a building which is to be monitored and are configured to communicate with one another and with a processor. The sensors are installed in a conventional electrical wall socket, and, if forward scattering data is to be used, the sensors need to be synchronised. The processor determines whether an alarm signal should be generated by subtracting a clutter map indicative of the normal situation from a current radar image to produce a differential clutter map. Calibration of the system may be performed by moving a transmitter along a calibration path around the area to be protected, and, each sensor tracks the movement of the transmitter. Calibration may also be performed manually using information about the area being monitored or using radar pulses emitted from each sensor.
U.S. Pat. No. 7,250,853 discloses a surveillance system which combines target detection sensors with adjustable identification sensors to identify targets within a guard zone. The target detection sensors, for example, radar sensors, detect targets entering the guard zone and the adjustable identification sensors, for example, steerable cameras, which classify and identify the targets detected by the target detection sensors. A system controller controls the operation and coordination of the target detection sensor and the adjustable identification sensors in the surveillance system in accordance with predetermined threat levels.
However, each of the systems described above requires multiple radar systems to be able to monitor an area. This can be expensive for home automation. Moreover, these systems do not allow for the penetrating nature of radiation at radar frequencies and may trigger false alarms unless the target is confirmed by some other means.
Moreover, as the systems described above require at least two radar sensors and wireless links therebetween, a complex system is required to be able to derive the boundary of an enclosure. Furthermore, the method cannot readily adjust to changes in the field of view of the radar sensor without having to re-calibrate the system.
It is an object of the present disclosure to provide a method for the determination of a boundary of a space of interest by detecting and tracking targets moving in that space.
It is another object of the present disclosure to provide a method of determining a boundary of a space by identifying a line in a scan plane which is perpendicular to a normal of a view angle of a radar sensor and deriving the boundary based on the determination of the line and the distance of the line from the radar sensor.
It is a further object of the present disclosure to provide a method of determining a boundary of a space of interest where moving targets following a predetermined path are tracked to derive the boundary.
It is yet another object of the present disclosure to provide a method which can refine a boundary of a space of interest by tracking moving targets within a field of view of a radar sensor.
In accordance with one aspect of the present disclosure, there is provided a method of determining a boundary of a space of interest using a radar sensor, the method comprising the steps of:
a) scanning in at least one plane within the field of view of the radar sensor;
b) detecting one or more moving targets within the field of view;
c) tracking the one or more detected moving targets;
d) deriving information relating to the one or more tracked moving targets;
e) determining, from the derived information, regions not traversed by the one or more tracked moving targets; and
f) deriving a boundary of the space of interest from the derived regions.
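By way of illustration only, steps a) to f) above may be sketched in Python as follows; the grid size, cell size and example tracks are assumptions for the sketch and do not form part of the disclosed method.

```python
# Hypothetical example: infer "no-go" cells from tracked target positions.
# The field of view is divided into a coarse grid; any cell never visited
# by a tracked moving target is treated as lying outside the boundary.

GRID = 4          # 4 x 4 grid over the field of view (assumed for illustration)
CELL = 1.0        # cell size in metres (assumed)

def cell_of(x, y):
    """Map a Cartesian position to a grid cell index."""
    return (int(x // CELL), int(y // CELL))

def untraversed_cells(tracks):
    """Steps d)-f): derive regions never traversed by any tracked target."""
    visited = {cell_of(x, y) for track in tracks for (x, y) in track}
    all_cells = {(i, j) for i in range(GRID) for j in range(GRID)}
    return all_cells - visited      # candidate boundary / "no-go" regions

# Two example tracks confined to the lower half of the grid:
tracks = [
    [(0.5, 0.5), (1.5, 0.5), (2.5, 0.5), (3.5, 0.5)],
    [(0.5, 1.5), (1.5, 1.5), (2.5, 1.5), (3.5, 1.5)],
]
no_go = untraversed_cells(tracks)
# Cells with j >= 2 were never visited, so they fall outside the boundary.
```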
In accordance with the present disclosure, the described method is less complex than those of the prior art, and, is relatively inexpensive to implement using off-the-shelf components. Moreover, no extra active mobile devices are required for setting up the radar sensor for operation.
In an embodiment, step d) further comprises deriving positional information relating to the one or more tracked moving targets. The positional information provides an indication of where the one or more tracked moving targets moves within the field of view of the radar sensor.
In an embodiment, step b) further comprises the steps of:
b1) detecting one or more fixed targets present within the field of view of the radar sensor; and
b2) determining positional information for the one or more fixed targets;
and wherein step f) further comprises the step of:
refining the boundary of the space of interest using the positional information relating to the one or more fixed targets.
By determining positional information relating to one or more fixed targets, these can be used to define better the boundary of a space of interest.
In an embodiment, step d) further comprises deriving velocity information relating to movement of the one or more tracked moving targets. The velocity information indicates whether or not a moving target is able to move within a particular region within the field of view of the radar sensor.
In an embodiment, step d) further comprises dividing the field of view into sub-space regions in accordance with range resolution, ∂R, and angle resolution in azimuth, ∂θ. By dividing the field of view into sub-space regions, it is possible to map more accurately the movement of one or more moving targets within the azimuth plane of the field of view of the radar sensor.
In addition, step d) may further comprise dividing the field of view into sub-space regions in accordance with angle resolution in elevation, ∂φ. This increases the resolution of the map as it can be extended to both the azimuth and elevation planes within the field of view of the radar sensor.
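A minimal sketch of this sub-space division is given below; the particular resolution values for ∂R, ∂θ and ∂φ are assumed for the example only.

```python
# Sketch of dividing the field of view into sub-space regions using the
# range resolution dR, azimuth resolution dTheta and elevation resolution
# dPhi. The concrete resolution values are assumptions for the example.

dR, dTheta, dPhi = 0.3, 5.0, 5.0   # metres, degrees, degrees (assumed)

def subspace_index(r, theta_deg, phi_deg):
    """Quantise a detection (range, azimuth, elevation) to a sub-space cell."""
    return (int(r // dR), int(theta_deg // dTheta), int(phi_deg // dPhi))

# A detection at 2.0 m range, 12 deg azimuth, 3 deg elevation:
idx = subspace_index(2.0, 12.0, 3.0)
# -> (6, 2, 0): 7th range bin, 3rd azimuth bin, 1st elevation bin
```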
In an embodiment, step d) further comprises assigning a histogram to each sub-space region. By assigning a histogram to each sub-space region, information relating to the targets within the field of view can readily be identified.
In an embodiment, step d) further comprises mapping the positional information relating to the one or more tracked moving targets to sub-space regions, and, populating each histogram with velocity information relating to movement of the one or more tracked moving targets for a respective sub-space region.
In this way, each histogram contains information relating to movement of one or more tracked targets with respect to an associated sub-space region.
In an embodiment, step e) further comprises identifying one or more histograms for which the velocity information is zero, and, identifying sub-space regions associated with the identified one or more histograms as being regions not traversed by the one or more tracked moving targets.
The identification of histograms which effectively have no data relating to movement of one or more tracked moving targets indicates, by way of associated sub-space regions, regions which comprise obstacles or other “no-go” regions.
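The population of per-cell histograms and the identification of empty histograms may be sketched as follows; the cell labels and speed observations are invented for the example.

```python
from collections import defaultdict

# Sketch: one histogram per sub-space cell, populated with the speeds of
# tracked targets observed in that cell (step d); cells whose histogram
# stays empty are flagged as untraversed regions (step e). Cell labels
# and observations are invented for the example.

histograms = defaultdict(list)
cells = ["A", "B", "C", "D"]            # assumed sub-space cells

observations = [("A", 0.8), ("A", 1.1), ("B", 0.5)]  # (cell, speed in m/s)
for cell, speed in observations:
    histograms[cell].append(speed)

untraversed = [c for c in cells if not histograms[c]]
# -> ["C", "D"]: no target motion was ever recorded in these cells
```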
In accordance with the present disclosure, the method may further comprise the steps of:
g) updating each histogram in accordance with velocity information relating to movement of the one or more tracked targets; and
h) refining the boundary of the space of interest using the updated histograms.
By updating the histograms and using that information to refine the boundary of the space of interest, it is possible to update the positions of obstacles, fixed targets and “no-go” regions without having to re-calibrate the radar sensor.
In accordance with another aspect of the present disclosure, there is provided a method of determining a boundary of a space of interest using a radar sensor, the method comprising the steps of:
i) performing a plurality of scans of the space of interest in at least one of: the azimuth plane and the elevation plane within the field of view of the radar sensor;
ii) determining a range-angle map after each scan;
iii) identifying at least three points forming a line in the scan plane, the line being perpendicular to a normal of a view angle of the radar sensor;
iv) determining the location of one part of the boundary of the space of interest from the identified points forming the line in the scan plane; and
v) deriving an extent of the boundary in the scan plane.
This method provides a simple way of deriving the boundary of a space of interest without having to track any moving targets. Simple processing of the range-angle map after each scan determines one boundary of the space of interest and infers other boundaries therefrom.
In an embodiment, the space of interest may be pre-defined, and step iv) comprises locating a first dimension of the pre-defined space of interest, and step v) comprises locating a second dimension of the pre-defined space of interest. In an embodiment, the pre-defined space of interest comprises a rectangle.
In the case of the determination of a boundary of a room, knowing that the room is rectangular, it is possible to determine the position of a wall opposite to the radar by identifying three points forming a line on that wall from the radar scans. The extent of the wall, width or height depending on whether the scan is a scan in azimuth or elevation, provides another dimension of the rectangular room.
In accordance with a further aspect of the present disclosure, there is provided a method of determining a boundary of a space of interest using a radar sensor, the method comprising the steps of:
I) scanning in at least one plane within a field of view of the radar sensor;
II) detecting one or more moving targets within the field of view;
III) tracking one or more detected moving targets following a path within the field of view; and
IV) deriving the boundary of the space of interest from the path of the one or more tracked moving targets.
Using this method, a boundary of a space of interest can be derived using movement of one or more tracked targets following a chosen path determined to provide information for the radar sensor.
In an embodiment, step III) comprises tracking the one or more moving targets along one of: a random path, a planned path and a perimeter path within the space of interest.
Whilst a perimeter path and a planned path can readily provide information in an empty space of interest, the random path may be used to navigate around obstacles within the space of interest, for example, furniture. The use of such paths provides a calibration of the space of interest for the radar sensor which may be refined by tracking one or more moving targets as indicated above.
Where the path comprises one of: the random path and the planned path, step IV) comprises using a morphological processing operation on the path of the one or more tracked moving targets.
Morphological processing, such as, dilation, effectively increases the accuracy of the determination of the boundary of the space of interest, and, can be refined by tracking one or more moving targets as indicated above.
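A sketch of such a dilation operation on an occupancy grid built from tracked-path cells is given below; the grid contents are assumed for the example, and in practice a library routine (for example, scipy.ndimage.binary_dilation) could be used instead.

```python
# Sketch of the morphological dilation mentioned above, applied to an
# occupancy grid built from tracked-path cells. Grid contents are an
# assumed example.

def dilate(grid):
    """Grow occupied cells into their 4-connected neighbours."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        out[ni][nj] = 1
    return out

path = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
filled = dilate(path)
# The single-cell-wide track is thickened, covering gaps the target
# happened not to traverse.
```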
Where the path comprises the perimeter path, step IV) comprises determining the position of the one or more moving targets at any time on the perimeter path.
For a better understanding of the present disclosure, reference will now be made, by way of example, to the accompanying drawings in which:—
The present disclosure will be described with respect to particular embodiments and with reference to certain drawings but the disclosure is not limited thereto. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn on scale for illustrative purposes.
In accordance with the present disclosure, there are several options for mounting a radar sensor to determine or define a boundary of a space of interest or an enclosure.
The term “space” as used herein refers to a volume or area having no physical boundaries which can be used to define the volume or area. This can include an open space which is only delimited by barriers, cordons, ropes or the like forming “no-go” regions.
The term “space of interest” as used herein refers to a space which is to be defined and monitored by a radar sensor in accordance with the present disclosure.
The term “space model” as used herein refers to a model of the space of interest derived in accordance with methods of the present disclosure. The space model may be two-dimensional or three-dimensional depending on whether scans are performed in both azimuth and elevation.
The term “enclosure” as used herein refers to a volume or area having physical boundaries which define the volume or area. This can include a room or any other area or volume which has walls or other physical delimiters.
The term “room” as used herein refers to a specific type of enclosure which is generally rectangular and has walls. This may also include a room of non-rectangular shape, such as, an L-shape or curved shape.
The term “RCS” as used herein refers to radar cross-section and is a measure of the ability of a target to reflect radar signals in the direction of a radar receiver or the detectability of the target. In effect, this is a measure of the ratio of backscatter power per steradian (unit solid angle) in the direction of the radar (from the target) to the power density that is intercepted by the target. Typically, RCS is calculated in three dimensions and can be expressed as:

σ = 4πr²(Ss/Si)

where σ is the RCS, Si is the incident power density measured at the target, and Ss is the scattered power density seen at a distance r away from the target.
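A numerical illustration of this ratio is given below; the power densities and the distance are invented values for the example, not measurements.

```python
import math

# Numerical illustration of the RCS ratio defined above; the power
# densities and distance are invented values for the example.

def rcs(S_s, S_i, r):
    """sigma = 4 * pi * r**2 * (S_s / S_i), evaluated at distance r."""
    return 4.0 * math.pi * r**2 * (S_s / S_i)

sigma = rcs(S_s=1e-9, S_i=1e-3, r=100.0)   # W/m^2, W/m^2, metres
# sigma is returned in square metres
```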
The terms “MIMO radar system”, “MIMO radar sensor” and “MIMO” as used herein refer to a radar system comprising a plurality of transmit-receive antennas, each transmitter antenna being configured for transmitting a signal and each receiver antenna being configured for receiving the transmitted signal reflected from at least one target.
The terms “monostatic radar system”, “monostatic radar sensor” and “monostatic radar” as used herein refer to a radar sensor which comprises a transmitter configured for transmitting a signal and a receiver configured for receiving a signal corresponding to the transmitted signal when reflected from a target. The transmitter and the receiver are collocated. Typically, a transmitter and a receiver are referred to as being collocated when the distance between the transmitter antenna and the receiver antenna is comparable to the wavelength of the transmitted signal. A monostatic radar may have one transmitter and multiple receivers, for example, eight receivers (termed “1×8” monostatic radar).
The terms “bi-static radar system”, “bi-static radar sensor” and “bi-static radar” as used herein refer to a radar system which comprises a transmitter configured for transmitting a signal and a receiver configured for receiving a signal corresponding to the transmitted signal when reflected from a target. The transmitter and the receiver are separated by a distance which is comparable to the expected distance between the bi-static radar system and the target.
The terms “multi-static radar system”, “multi-static radar sensor” and “multi-static radar” as used herein refer to a radar system including multiple monostatic or bi-static radar subsystems with a shared area of coverage. As such, a multi-static radar system may comprise multiple transmitters with one receiver, i.e. a multiple input single output (MISO) radar system. Further, a multi-static radar system may comprise multiple receivers with one transmitter, i.e. a single input multiple output (SIMO) radar system. Furthermore, a multi-static radar system may comprise a MIMO radar system.
The term “radar sensor” as used herein refers to a radar configured for generating a radar beam, for transmitting the radar beam using a transmit module, for scanning the radar beam in at least one of azimuth or elevation planes (preferably both azimuth and elevation), and for receiving the reflected beam at a plurality of receive modules. In addition, the radar includes a processor/controller and a memory. Beam steering may also be used. Each radar sensor is configured to operate totally independently from any other radar sensor and effectively forms an independent unit.
The term “wall detection” as used herein refers to a method for detecting a wall which detects a straight line or a plane wherein the normal of the detected line or plane and the normal of the view angle of the radar sensor are aligned. In other words, the detected line or the detected plane is perpendicular to the normal of the view angle of the radar. In particular, the straight line or plane may be defined by at least three points on the wall being detected.
The term “supervised boundary estimation” as used herein refers to a method in which a moving target, such as, a person, is used to determine the boundary of a space of interest by moving within the space of interest.
The term “heat mapping” as used herein refers to a method of determining a space of interest using a radar sensor or system due to the movement of a target within that space. There is no monitoring of heat or infrared radiation, only the detection and tracking of a target as it moves within the space of interest. Heat mapping is a form of supervised boundary estimation.
The term “unsupervised boundary estimation” as used herein refers to a method of tracking the movement of targets, for example, people, in and around the space of interest. In some implementations of unsupervised boundary estimation, the target being detected and tracked may be other than people, for example, grain in a silo to which a radar sensor is fitted.
The term “intent mapping” as used herein refers to a method of providing boundary information for a space of interest by tracking the movement of targets, for example, people, within or around a space of interest. Intent mapping is a form of unsupervised boundary estimation.
The term “intent of motion” as used herein refers to whether it is possible for a target to move to a given location in space adjacent to the space currently occupied by the target.
The term “convex polygon” as used herein refers to a simple polygon where none of the sides thereof are self-intersecting, and, in which no line segment between two points on the boundary of the polygon extends outside the polygon. In other words, a convex polygon is a simple polygon whose interior forms a convex set. A rectangular room is an example of a “convex-shaped enclosure”. The term “concave polygon” as used herein refers to a polygon which is not convex. A simple polygon is concave if at least one of its internal angles is greater than 180°. An example of a concave polygon is a star polygon where line segments between two points on the boundary extend outside of the polygon. An example of a “concave-shaped enclosure” is an L-shaped room.
In accordance with the present disclosure, a radar sensor is installed to monitor a space of interest, such as, an enclosure having walls or a space delimited by barriers, for example, cordons or ropes. The radar sensor needs to be calibrated to determine the extent of the space of interest, that is, the positions of “no-go” regions, for example, walls or other barriers which define the space of interest. The determination of the extent of the space of interest can be performed using methods described below. Once calibrated, tracking of detected moving targets within the space of interest can be used to refine further the extent of the space of interest.
One method for determining the boundary of a space of interest in accordance with the present disclosure is wall detection where the boundary of an enclosure or room is derived using trigonometry from a straight line or a plane perpendicular to the normal of the view angle of the radar sensor.
Another method for determining the boundary of a space of interest in accordance with the present disclosure is heat mapping where tracking of a moving target within the space of interest after installation of the radar sensor is used to derive the boundary.
A further method for determining the boundary of a space of interest in accordance with the present disclosure is intent mapping where tracking of moving targets both inside and outside of the space of interest provides information on “no-go” regions from which the boundary can be derived.
In accordance with one method of the present disclosure, a two-step process is used to map a space of interest which has delimiters, such as walls. The first step of the process involves estimating the distance of the wall or the corners formed by walls situated in front of the installed radar. This step is performed once at the time of installation of the radar sensor. In the second step of the process, continuous refinement is performed which involves tracking detected targets to fine tune the extent of the space of interest having delimiters, for example, a room. The second step can be referred to as “intent mapping” and is described in more detail below.
As an alternative to the wall estimation part of the two-step process, heat mapping may be used as described in more detail below.
Intent mapping may be used as a separate method as described in more detail below.
Different embodiments and combinations of embodiments of the present disclosure are described below with respect to a room which has physical barriers (walls). However, the principles of the disclosure can be applied to other situations, such as, spaces or enclosures without physical delimiters or barriers.
The methods of the present disclosure require a radar sensor with beamforming capabilities. The radar sensor may have a single transmitter element with multiple, e.g. eight, receiver elements. The range resolution of the radar sensor must be sufficient in order to distinguish the positions of the walls, as well as targets or objects in front of the wall and behind the wall (outside of the room). For a given angular resolution of the radar sensor, there is a limit on the minimum size of the room below which the angle of detections may become cluttered. The largest dimension of an enclosure is limited by the link budget of the radar sensor, that is, the gains and losses from the radar transmitter through space to the receiver in the radar receiver.
Position A is on wall 16 situated about 1.2 m from the floor 12 (and coincides with one of the heights at which electrical sockets may typically be installed in some countries). In this position, the wall and corners opposite to the radar sensor are within its field of view. There are up to 8 corners including those formed by two adjacent walls and those formed by walls with the ceiling and floor. In
As shown in
From Position B, a top view of the room 10 is obtained. Like Position A, all 9 points including the point opposite to the radar position and the corners formed by the floor with 4 walls of the room are visible to the radar sensor.
Position C is similar to Position A but the radar is located at a higher position. From this position, the radar sensor is most likely able to view the corners on the top portion of the wall, that is, corners a, c and edge b.
Similarly, the radar sensor installed in position B could be oriented to monitor the walls instead of the top view looking at the floor. This orientation is to be considered as Position D.
From the perspective of Position D, with special antenna design, the radar sensor could potentially obtain a 360° view of the enclosure or room. It is to be noted that there could be situations when some of these corners are occluded by objects with poor RCS.
In the positions described with reference to
In one method where an enclosure or room is to be mapped and monitored, it is important that the farthest perimeter of the enclosure is visible to the radar sensor and is situated perpendicular to the normal of the view angle thereof.
In accordance with one method of the present disclosure, wall detection is performed in two steps, namely, scanning the enclosure or room for artefacts and then inferring which of these artefacts define the perimeter. Artefacts, such as fixed targets, which can be detected by the radar sensor may be present at and around the perimeter of the space of interest.
A wall is detected by collocating detections made by the radar sensor on an opposite wall. In order to observe corners, the radar sensor scans in both the azimuth and elevation planes. However, it is not essential to scan in both planes, and, scanning in only one plane is also sufficient. A range-angle map is obtained after each scan. The beam-width of the antenna radiation pattern is inversely proportional to its aperture. In the case of MIMO radar sensors having many antenna elements, a sharp directional beam, also popularly referred to as a pencil-beam, can be generated. If the antenna array is suitably placed, the sharp beam can be steered in both the azimuth and elevation planes.
For example, when a radar 22 installed in a rectangular room at “o” (equivalent to Position A as shown in
Although nine points connected by respective lines are shown in
In an empty enclosure with ideal conditions, all nine points as shown in
Once the opposite wall is identified, it is assumed that the enclosure is a pre-defined shape, for example, rectangular, L-shaped, etc. so that the detected points can be mapped. A user may select or define the shape of the model to assist with the mapping of the detected points. From the detected points located on the identified wall, the extent of the wall (length from corner-corner) and an estimate of the room dimension is deduced by simple trigonometric identities as shown in
In
The extent of the wall H is obtained by using simple trigonometric identities and is described as:

H = Rl sin(θl) + Rr sin(θr)
Here, H is the enclosure extent which can be either the width or height depending upon the scanning planes. Rr, θr and Rl, θl are observed as being points on the wall 210 which are farthest from the radar sensor 200. Ro is computed and corroborated as Ro=Rl cos(θl) and Ro=Rr cos(θr). Further, Ro is also an observed point on the opposite wall at zero degrees in both azimuth and elevation planes.
It is to be noted that Ro gives the length of the enclosure or room and H gives the width (assuming a rectangular enclosure or room).
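The identities above may be checked numerically as follows; the 4 m by 3 m room and the placement of the radar at the middle of one wall are assumptions for the example.

```python
import math

# Sketch of the trigonometric identities above: given the two farthest
# observed points on the opposite wall, (Rl, theta_l) and (Rr, theta_r),
# compute the wall extent H and corroborate the perpendicular range Ro.
# The numeric inputs describe an assumed 4 m (deep) x 3 m (wide) room.

def wall_extent(Rl, theta_l, Rr, theta_r):
    H = Rl * math.sin(theta_l) + Rr * math.sin(theta_r)
    Ro_left = Rl * math.cos(theta_l)
    Ro_right = Rr * math.cos(theta_r)
    return H, Ro_left, Ro_right

# Radar at the middle of one wall, looking at the opposite wall 4 m away:
Rl = Rr = math.hypot(4.0, 1.5)
theta = math.atan2(1.5, 4.0)
H, Ro_l, Ro_r = wall_extent(Rl, theta, Rr, theta)
# H recovers the 3 m width; Ro_l == Ro_r corroborates the 4 m length
```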
It will be understood that
Different room shapes will need to be considered when determining the position of the opposite wall. In
In
In another method, the perimeter of the space or enclosure can be determined by detecting movement of targets within the field of view of the radar sensor using heat mapping as described with reference to
The heat mapping process comprises two parts. After the radar sensor has been installed in the enclosure, in a supervised setup, a moving target is guided within the enclosure in order to validate all spatial points within the enclosure. Here, the target is tracked by the radar sensor and the tracking history is used to infer the spatial coordinates belonging to the enclosure.
Whilst it is preferred that a single moving target is used for heat mapping, it will readily be appreciated that more than one moving target may be used.
The heat mapping for calibrating the radar sensor is initiated by a command to begin calibration or is initiated automatically once the radar sensor is powered on. During this phase, a designated target moves in a predetermined pattern to cover the area and perimeter of the enclosure, and, the target path can begin at any point within the enclosure. Preferably, the target path begins from a point in the enclosure that is very close to the radar. The radar sensor detects and tracks the movement of the designated target in order to map the room. Three path patterns are proposed for this purpose as shown in
In
From the tracked locations as shown in
In
For the tracked locations as shown in
In
For convex-shaped enclosures, walking along the perimeter is sufficient to infer all spatial coordinates within the enclosure as shown in
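For a convex perimeter, the inference of interior points can be sketched with a simple same-side test; the square perimeter below is an assumed example.

```python
# Sketch: for a convex enclosure, a perimeter walk is enough to classify
# every interior point. A point is inside a convex polygon if it lies on
# the same side of every (counter-clockwise ordered) edge; the square
# perimeter below is an assumed example.

def inside_convex(point, perimeter):
    """True if `point` is inside the CCW-ordered convex `perimeter`."""
    px, py = point
    n = len(perimeter)
    for k in range(n):
        x1, y1 = perimeter[k]
        x2, y2 = perimeter[(k + 1) % n]
        # Cross product: negative means the point is right of this edge.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

square = [(0, 0), (4, 0), (4, 3), (0, 3)]   # tracked perimeter corners
assert inside_convex((2, 1), square)        # interior point
assert not inside_convex((5, 1), square)    # point beyond the perimeter
```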
If the enclosure in which the radar is installed is opaque to the radiation frequency of the radar, the heat mapping method provides an estimation of all possible spatial points within the enclosure, and it is sufficient for detecting targets within the enclosure. However, radiation at radar frequencies can penetrate or “see” through several materials (including building materials such as wood and plastic), making enclosures partially transparent. This affects the calibration process as targets outside of the enclosure or space of interest may also be detected during calibration of the radar, affecting the estimate of spatial coverage.
In the second step of this process, the radar sensor continuously tracks movements of several moving targets and determines a distribution which increases the confidence relating to the position of the perimeter over time. This second step also allows for adaptations in the perimeter, for example, due to the movement of generally immovable targets within the enclosure, such as, relocation of furniture etc. as well as to alleviate the difficulties in perimeter detection when the enclosure comprises “see-through” materials as described above.
In this second step, the actual spatial position of the target together with the intent of motion of the target along any path, that is, the random path (
Intent information can also be obtained by tracking one or more moving targets, for example, people, and recording the instantaneous velocity of the targets being tracked over a long period of time. While each target moves, it is tracked and vectors of the spatial locations with respect to the target are recorded. The instantaneous velocity vectors of the targets suggest intent and, by observing many tracks over a long period, the perimeter of the room can be determined with a sufficiently high confidence.
In accordance with the present disclosure, an automatic or unsupervised boundary estimation method may be used to build on top of a supervised boundary estimation method. In the supervised boundary estimation method, the radar sensor continues to track a moving target following a random path, a planned path or a perimeter path, as described above with reference to
In accordance with a method of the present disclosure, intent mapping for one or more targets moving in a rectangular room are described with reference to
In
In
It is to be noted that two orthogonal directions are shown in the
In
In
In
It is to be noted that the position of one or more arrows associated with a particular location does not define the entire wall, and, there need to be many instances in which one or more targets occupy these cells along the wall and do not pass through the wall. Over a long period of time, the histogram is populated to make the inference of a barrier and hence a “no-go” region.
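By way of example only, such a barrier inference may be sketched as follows: a cell is treated as a “no-go” candidate when it has collected no movement samples while at least one neighbouring cell has collected many, suggesting that targets come up to the cell but never pass through it. The function name `no_go_cells` and the threshold `min_neighbour_hits` are illustrative assumptions:

```python
def no_go_cells(histograms, grid_w, grid_h, min_neighbour_hits=10):
    """Classify empty cells adjacent to well-populated cells as 'no-go'
    regions, i.e. candidate barriers such as walls."""
    def hits(c):
        return sum(histograms.get(c, []))
    no_go = set()
    for i in range(grid_w):
        for j in range(grid_h):
            if hits((i, j)) > 0:
                continue  # targets have moved through this cell
            neighbours = [(i + di, j + dj)
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            if any(hits(n) >= min_neighbour_hits for n in neighbours):
                no_go.add((i, j))
    return no_go

# Example: targets repeatedly occupy cell (0, 0) but never its neighbours
observed = {(0, 0): [12, 0, 0, 0]}
barrier_cells = no_go_cells(observed, grid_w=2, grid_h=2)
```

The threshold ensures that a cell is only declared a barrier after many observations of targets occupying the adjacent cells, consistent with the long observation period described above.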
In contrast to
In effect, the method of tracking targets within the field of view of the radar sensor to derive a boundary, and thereby a model of an enclosure, can be used for any space of interest irrespective of whether there are physical barriers defining such a space. However, this method can also be used to refine the boundary and the model of enclosures as described below.
After receiving an initialization input (step 820), that is, the determination of non-moving targets within the space of interest, the histograms are updated based on the initialization input where non-moving targets, such as walls, furniture etc. are determined, if necessary (step 830). The radar sensor scans the field of view for targets (step 840) and then tracks all observable moving targets (step 850).
During tracking, the instantaneous velocity of the moving targets is recorded along with their spatial position within the field of view of the radar sensor. Each spatial position corresponds to, or is mapped to, a sub-space of the field of view of the radar sensor. The histograms are continuously populated with direction information for every sub-space of the field of view traversed by the targets. By mapping each spatial location to a sub-space and identifying histograms in the mapped sub-space regions which are empty, the space of interest can be determined, as these empty histograms indicate sub-space regions in which there is no movement of the one or more tracked moving targets.
In step 850, the histograms are continuously populated with tracking data relating to targets moving within the field of view of the radar sensor. In step 860, it is determined if there is sufficient data collected for use in determining the space boundary. If there is not sufficient data, the method returns to step 850 to gather more data from the tracking of moving targets within the field of view. If there is sufficient data, the next step, step 870, is to extract the histograms having no tracking data which correspond to “no-go” areas or regions. These extracted empty histograms are used to determine and refine the boundaries and to derive a space model, step 880, which is output at 890. The derived space boundary is fed back as an input to step 830 thereby refining the regions in the space of interest in which moving targets should be expected to be detected by updating each histogram.
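A minimal sketch of this loop (steps 840 to 890) is given below, reduced to occupancy counts per sub-space cell for brevity. The function name `derive_space_model`, the track representation and the sufficiency threshold `min_total_samples` are illustrative assumptions only:

```python
def derive_space_model(scan_tracks, grid_cells, min_total_samples=100):
    """Populate per-cell counts from tracked targets (steps 840-850),
    check whether enough data has been collected (step 860), extract the
    empty cells as 'no-go' regions (step 870) and return the remaining
    cells as the derived space model (steps 880-890)."""
    counts = {cell: 0 for cell in grid_cells}
    total = 0
    for track in scan_tracks:              # steps 840-850: scan and track
        for (cell, _velocity) in track:    # populate histograms/counts
            if cell in counts:
                counts[cell] += 1
                total += 1
    if total < min_total_samples:          # step 860: insufficient data
        return None                        # keep tracking
    no_go = {c for c, n in counts.items() if n == 0}    # step 870
    space = set(grid_cells) - no_go                     # step 880
    return space, no_go                                 # step 890: output

# Example: all movement observed in cell (0, 0), none in cell (1, 0)
grid = {(0, 0), (1, 0)}
tracks = [[((0, 0), (1.0, 0.0)), ((0, 0), (1.0, 0.0))]]
model = derive_space_model(tracks, grid, min_total_samples=1)
```

In the method as disclosed, the derived space boundary would additionally be fed back to step 830 so that the expected regions of movement are refined on each pass.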
It will readily be understood that empty histograms contain no velocity data as none of the one or more tracked targets move into the sub-space region associated with such histograms, and, therefore no velocity data is stored therein.
This method of tracking one or more targets within the field of view of the radar sensor may also be used for refining the boundary as well as the space model of a space of interest if the boundary of the space of interest is determined by another method. For example, a boundary of the space of interest, typically a room, can be derived from the presence of an opposite wall as described with reference to
Furthermore, a boundary derived by using positional information relating to the one or more objects detectable by the radar sensor and present within and around the boundary of the space of interest can be refined using this tracking method. Such objects can be placed at different locations around the perimeter of the enclosure at positions observable by the radar sensor, and, conversely, such objects may be used to refine the boundary derived by the tracking method.
Tracking of moving targets in accordance with the present disclosure can readily adjust for changes in the room layout without requiring recalibration. Moreover, the model obtained in this way can readily be adjusted for non-moving targets within the enclosure, and, for the relocation of such targets within the enclosure.
Starting with block 910, the first step after installation of the radar sensor in a suitable position within the space of interest, for example, an enclosure such as a room, is to orient the radar sensor so that it has an optimal field of view in at least one plane, azimuth or elevation (step 912). Ideally, both planes will be used as radar sensors will normally scan in both planes, but this is not essential to be able to determine the location of an opposite wall as described with reference to
In block 920, by scanning with the radar sensor in at least one plane, azimuth or elevation, a rough boundary of the enclosure is obtained (step 922). By way of heat mapping as described above with reference to
In block 930, by scanning with the radar sensor in at least one plane, azimuth or elevation (step 932), data relating to moving targets, such as spatial position and direction of motion, can be obtained as described with reference to
The space boundary derived from inference of the opposite wall (block 910) and the associated space model, as well as the boundary derived from heat mapping (block 920), can be used separately. However, these two models may lack accuracy due to the penetrating properties of the radiation emitted from the radar sensor, and, the use of the tracking of moving targets (block 930) can be used to refine these models.
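By way of example only, the refinement of a heat-map-derived boundary (block 920) by the tracking-derived movement map (block 930) may be expressed as simple set operations over sub-space cells: cells inside the heat-map estimate that were never traversed by a tracked target are pruned as leakage through radar-transparent materials. The names `refine_boundary`, `heat_map_cells` and `movement_cells` are illustrative assumptions:

```python
def refine_boundary(heat_map_cells, movement_cells):
    """Keep only heat-map cells confirmed by observed target movement;
    the remainder are treated as leakage beyond the true boundary."""
    refined = heat_map_cells & movement_cells   # confirmed interior cells
    leakage = heat_map_cells - movement_cells   # detections beyond the wall
    return refined, leakage

# Example: the heat map extends one cell beyond where targets ever moved
rough = {(0, 0), (1, 0), (2, 0)}
moved = {(0, 0), (1, 0)}
refined, leakage = refine_boundary(rough, moved)
```

The refined cell set then defines the space model, while the leakage cells correspond to the erratic detections outside the enclosure discussed above.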
By combining various techniques described above in accordance with the present disclosure, a good level of robustness against disturbances is obtained.
Number | Date | Country | Kind |
---|---|---|---|
18156277.8 | Feb 2018 | EP | regional |