The present disclosure pertains generally to security systems and more particularly to reducing redundant alarm notifications within a security system.
A security system may include a number of video cameras within a monitored area. The monitored area may be indoors or outdoors, for example. Each video camera has a field of view (FOV) that describes what that particular video camera can see. If an object is within the FOV of a particular video camera, and that particular video camera is operating, the object will be captured in the video stream of that particular video camera. It will be appreciated that in some cases, the FOV of a first camera of a security system may overlap with the FOV of a second camera of the security system in an overlapping FOV region. The overlap may be minor, or the overlap may be substantial. If each video camera is executing video analytics on its respective video stream, or if a remote device (e.g. remote server) is executing video analytics on the respective video streams, and a security event occurs in the overlapping FOV region of the respective video streams, the video analytics associated with each of the video streams may issue an alarm for the same security event. These alarms may be considered redundant alarms because they both relate to the same security event, just captured by different cameras. This can significantly increase the workload of a security operator monitoring the security system, and in some cases, may draw the operator's attention away from other security events. What would be beneficial are improved methods and systems for detecting security cameras that have overlapping FOVs, and for reducing or eliminating redundant alarms that correspond to the same security event captured by multiple cameras in an overlapping FOV.
This disclosure relates generally to improved methods and systems for detecting cameras with overlapping FOVs in order to reduce redundant alarm notifications in a security system. An example may be found in a method for reducing alarm notifications from a security system deploying a plurality of cameras within a monitored area. A first camera of the plurality of cameras has a first field of view (FOV) and a second camera of the plurality of cameras has a second FOV, wherein at least part of the first FOV of the first camera includes a first overlapping region that corresponds to where the second FOV of the second camera overlaps with the first FOV of the first camera. At least part of the second FOV of the second camera includes a second overlapping region that corresponds to where the first FOV of the first camera overlaps with the second FOV of the second camera. The method includes processing a first video stream captured by the first camera of the security system to detect an alarm event observed in the first overlapping region of the FOV of the first camera and processing a second video stream captured by the second camera of the security system to detect the same alarm event observed in the second overlapping region of the FOV of the second camera. A combined alarm notification corresponding to the alarm event is sent, wherein the combined alarm notification includes the alarm event and identifies the first camera and the second camera as both detecting the alarm event in their respective FOVs.
Another example may be found in a method for reducing alarm notifications from a security system deploying a plurality of cameras within a monitored area, at least some of the plurality of cameras having a field of view (FOV) that overlaps with that of at least one other of the plurality of cameras. The illustrative method includes receiving video frames from each of a first camera having a first FOV and a second camera having a second FOV, where a determination has been made that the first FOV overlaps with the second FOV. One or more objects are detected within the video frames from the first camera. At the same time, at least one of the same one or more objects is detected within the video frames from the second camera. An overlapping region between the first FOV and the second FOV is determined based at least in part on the one or more detected objects. An alarm event is detected in the overlapping region between the first FOV and the second FOV. A combined alarm notification corresponding to the alarm event is sent.
Another example may be found in a method for finding an overlap region between a field of view (FOV) of a first camera and a FOV of a second camera. The method includes determining that the FOV of the first camera overlaps with the FOV of the second camera. Video frames from the first camera having a first FOV and video frames from the second camera having a second FOV are received. One or more moving people are found within the video frames from the first camera. At least one of the same one or more moving people is found within the video frames from the second camera. Over time, the at least one of the same one or more moving people is tracked through subsequent video frames from each of the first camera and the second camera. The tracking is used to define an overlap region in which the FOV of the first camera overlaps the FOV of the second camera and/or an overlap region in which the FOV of the second camera overlaps the FOV of the first camera.
The preceding summary is provided to facilitate an understanding of some of the features of the present disclosure and is not intended to be a full description. A full appreciation of the disclosure can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
The disclosure may be more completely understood in consideration of the following description of various illustrative embodiments of the disclosure in connection with the accompanying drawings, in which:
While the disclosure is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit aspects of the disclosure to the particular illustrative embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure.
The following description should be read with reference to the drawings wherein like reference numerals indicate like elements. The drawings, which are not necessarily to scale, are not intended to limit the scope of the disclosure. In some of the figures, elements not believed necessary to an understanding of relationships among illustrated components may have been omitted for clarity.
All numbers are herein assumed to be modified by the term “about”, unless the content clearly dictates otherwise. The recitation of numerical ranges by endpoints includes all numbers subsumed within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.80, 4, and 5).
As used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise. As used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise.
It is noted that references in the specification to “an embodiment”, “some embodiments”, “other embodiments”, etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is contemplated that the feature, structure, or characteristic may be applied to other embodiments whether or not explicitly described unless clearly stated to the contrary.
In some cases, the controller 16 may receive video streams from the video cameras 12 over the network 14, and may perform video analytics on those video streams. In some cases, at least some of the video cameras 12 may be configured to perform video analytics on their own video streams. In some cases, the video analytics may be split between the video cameras 12 and the controller 16, depending at least in part upon the capabilities of the video cameras 12. The controller 16 may be located close to at least some of the video cameras 12, such as at the edge. In some instances, the controller 16 may be remote from the video cameras 12, such as on the cloud. In some cases, the security system 10 includes a monitoring station 18 that is operably coupled with the controller 16 via the network 14. This is just one example security system configuration.
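By way of a non-limiting illustration, the following Python sketch shows one possible way to split video analytics duties between the video cameras 12 and the controller 16 based on each camera's reported capabilities. The Camera record, its supports_onboard_analytics flag, and the camera identifiers are hypothetical and are used here for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    camera_id: str
    supports_onboard_analytics: bool  # capability reported by the camera

def assign_analytics(cameras):
    """Split analytics duties between edge cameras and the controller."""
    edge, controller = [], []
    for cam in cameras:
        (edge if cam.supports_onboard_analytics else controller).append(cam.camera_id)
    return {"run_on_camera": edge, "run_on_controller": controller}

cameras = [Camera("cam-01", True), Camera("cam-02", False)]
print(assign_analytics(cameras))
# {'run_on_camera': ['cam-01'], 'run_on_controller': ['cam-02']}
```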
The monitoring station 18 may receive alarms from the controller 16 when the controller 16 detects a possible security event in one or more video streams provided to the controller 16 from one or more of the video cameras 12. In situations in which at least some of the video cameras 12 (or intervening edge devices) are performing video analytics on their own video streams, the monitoring station 18 may receive alarms from those video cameras 12. The monitoring station 18 may be local to where the video cameras 12 are located (e.g. in the same facility), or the monitoring station 18 may be remote (e.g. remote from the facility). The monitoring station 18 may be configured to display video streams, or clips from video streams, for review by security personnel. In some cases, the monitoring station 18 may display video so that the security personnel are able to verify, or perhaps dismiss, possible alarms that have been received by the monitoring station 18, regardless of whether those alarms were raised by one or more video cameras 12 or by the controller 16.
For pan-tilt-zoom cameras, the FOV 26 and/or the FOV 28 may expand more or less rapidly than shown with increasing distance from the first video camera 22 and/or the second video camera 24, depending on the zoom setting of each of the first video camera 22 and/or the second video camera 24. Also, the position/orientation of the FOV 26 and/or the FOV 28 may change depending on a pan and/or tilt setting of each of the first video camera 22 and/or the second video camera 24.
As shown, the FOV 26 (of the first video camera 22) may be divided into a region 30, a region 32 and a region 34 while the FOV 28 (of the second video camera 24) may be divided into a region 36, a region 38 and a region 40. It will be appreciated that the region 32 (of the FOV 26) is the same as the region 38 (of the FOV 28). Accordingly, any activity that occurs within this shared region 32, 38 is visible to both the first video camera 22 and the second video camera 24. Any activity that occurs within the region 30 or the region 34 is visible to the first video camera 22 but not the second video camera 24. Any activity that occurs within the region 36 or the region 40 is visible to the second video camera 24 but not the first video camera 22. Portions of the monitored area 20 that are outside of the FOV 26 and the FOV 28 are not visible to either the first video camera 22 or the second video camera 24, and presumably are within a FOV of other video cameras (not illustrated).
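By way of a non-limiting illustration, the following Python sketch shows one possible way to determine which FOVs an observed activity falls in, by testing a point against polygons describing each FOV on a common ground plane. The polygon coordinates are hypothetical; in practice, the FOV polygons would be derived from the actual camera geometry.

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: True if pt (x, y) lies inside polygon poly [(x, y), ...]."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical ground-plane polygons for FOV 26 and FOV 28 (world coordinates).
fov_26 = [(0, 0), (10, 0), (10, 6), (0, 6)]
fov_28 = [(6, 0), (16, 0), (16, 6), (6, 6)]

event = (8, 3)  # falls in the shared region 32, 38
print(point_in_polygon(event, fov_26), point_in_polygon(event, fov_28))  # True True
```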
If suspicious activity is detected within the region 30 or the region 34, such activity will be detected by the first video camera 22 and, if warranted, reported such as by issuing an alarm. If suspicious activity is detected within the region 36 or the region 40, such activity will be detected by the second video camera 24 and, if warranted, reported such as by issuing an alarm. However, any suspicious activity that is detected within the shared region 32, 38 will be detected by both the first video camera 22 and the second video camera 24, and thus could be reported via separate alarms by both the first video camera 22 and the second video camera 24. It will be appreciated that if both the first video camera 22 and the second video camera 24 report the same event, a single event will appear to be two distinct events reported by two distinct alarms. This can double (or more than double) the number of events that need to be checked out by an operator at the monitoring station 18, for example. In some cases, determining where the FOV of the first video camera 22 overlaps with the FOV of the second video camera 24 (or any other video cameras not shown) is useful in limiting redundant event reporting.
In some instances, the method 42 may further include receiving user input that manually defines the first overlapping region and the second overlapping region, as indicated at block 50. As an example, and in some cases, receiving user input that manually defines the first overlapping region and the second overlapping region includes receiving user inputs relative to the first FOV that define vertices of the first overlapping region, as indicated at block 52, and receiving user inputs relative to the second FOV that define vertices of the second overlapping region, as indicated at block 54.
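By way of a non-limiting illustration, the following Python sketch (using the OpenCV library) shows one possible way to store operator-defined vertices for the first and second overlapping regions and to test whether a detection falls within them. The camera identifiers and vertex coordinates are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical operator-supplied vertices (pixel coordinates) outlining where the
# second camera's FOV appears within the first camera's image, and vice versa.
overlap_regions = {
    ("cam1", "cam2"): np.array([[400, 120], [640, 130], [630, 480], [390, 470]], dtype=np.int32),
    ("cam2", "cam1"): np.array([[0, 100], [250, 110], [240, 460], [10, 450]], dtype=np.int32),
}

def in_overlap(camera, other, point):
    """True if a detection at `point` in `camera`'s image lies inside the
    manually defined region that overlaps `other`'s FOV."""
    poly = overlap_regions[(camera, other)]
    # pointPolygonTest returns >0 inside, 0 on an edge, <0 outside.
    return cv2.pointPolygonTest(poly, point, measureDist=False) >= 0

print(in_overlap("cam1", "cam2", (500.0, 300.0)))  # True for this sketch
```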
The illustrative method 56 includes processing a first video stream captured by the first camera of the security system to detect an alarm event observed in the first overlapping region of the FOV of the first camera, as indicated at block 58. In some instances, the method 56 may further include identifying the nearby cameras of the first camera of the security system in order to identify the second camera of the security system using either manual or automatic self-discovery methods. A second video stream captured by the second camera of the security system is processed to detect the same alarm event (e.g. same object at same time) observed in the second overlapping region of the FOV of the second camera, as indicated at block 60. A combined alarm notification corresponding to the alarm event is sent. In some cases, the combined alarm notification includes the alarm event and identifies the first camera and the second camera as both detecting the alarm event in their respective FOVs, as indicated at block 62, but this is not required. In some cases, the method 56 further includes automatically defining the first overlapping region and the second overlapping region, as indicated at block 64.
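By way of a non-limiting illustration, the following Python sketch shows one possible way to merge per-camera alarms into a combined alarm notification when the alarms arise from cameras known to have overlapping FOVs. The event types, time window, and camera identifiers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AlarmEvent:
    camera_id: str
    event_type: str
    timestamp: float  # seconds

def combine_alarms(events, overlapping_pairs, window=2.0):
    """Merge alarms raised by cameras with overlapping FOVs for the same event.

    `overlapping_pairs` is a set of frozensets of camera ids known to overlap;
    alarms of the same type, within `window` seconds of each other, from
    overlapping cameras are reported once, naming every camera that saw the event."""
    combined, used = [], set()
    for i, a in enumerate(events):
        if i in used:
            continue
        cameras = [a.camera_id]
        for j in range(i + 1, len(events)):
            b = events[j]
            if (j not in used and b.event_type == a.event_type
                    and abs(b.timestamp - a.timestamp) <= window
                    and frozenset((a.camera_id, b.camera_id)) in overlapping_pairs):
                cameras.append(b.camera_id)
                used.add(j)
        combined.append({"event": a.event_type, "time": a.timestamp, "cameras": cameras})
    return combined

events = [AlarmEvent("cam1", "intrusion", 100.0), AlarmEvent("cam2", "intrusion", 100.4)]
print(combine_alarms(events, {frozenset(("cam1", "cam2"))}))
# [{'event': 'intrusion', 'time': 100.0, 'cameras': ['cam1', 'cam2']}]
```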
The light pattern includes a plurality of unique pattern elements that can be uniquely identified, as indicated at block 86. The first video stream captured by the first camera of the security system and the second video stream captured by the second camera of the security system are processed to identify one or more of the plurality of unique pattern elements that are found in both the first FOV and in the second FOV at the same time, as indicated at block 88. Relative positions within the first FOV and the second FOV of each of the plurality of unique pattern elements that are found at the same time in both the first FOV and in the second FOV are determined, as indicated at block 90. The first overlapping region in the first FOV is determined based at least in part on the relative positions within the first FOV of each of the plurality of unique pattern elements found at the same time in both the first FOV and in the second FOV, as indicated at block 92. The second overlapping region in the second FOV is determined based at least in part on the relative positions within the second FOV of each of the plurality of unique pattern elements found at the same time in both the first FOV and in the second FOV, as indicated at block 94.
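By way of a non-limiting illustration, the following Python sketch shows one possible way to identify unique pattern elements found at the same time in both FOVs and to collect their relative positions in each image. The element identifiers, timestamps, and positions are hypothetical.

```python
# Hypothetical per-frame detections: timestamp -> {pattern_element_id: (x, y)}
cam1_detections = {
    10.0: {"E1": (420, 200), "E2": (510, 260), "E3": (120, 80)},
    10.5: {"E1": (425, 205), "E4": (600, 300)},
}
cam2_detections = {
    10.0: {"E1": (60, 210), "E2": (150, 270), "E5": (700, 400)},
    10.5: {"E1": (66, 214)},
}

def shared_pattern_positions(cam1, cam2):
    """For each common timestamp, return pattern elements visible in both FOVs
    along with their positions in each image; these sample the overlapping region."""
    pairs = []
    for ts in cam1.keys() & cam2.keys():
        for elem in cam1[ts].keys() & cam2[ts].keys():
            pairs.append((ts, elem, cam1[ts][elem], cam2[ts][elem]))
    return pairs

for ts, elem, p1, p2 in sorted(shared_pattern_positions(cam1_detections, cam2_detections)):
    print(ts, elem, p1, p2)
```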
In some instances, the illustrative method 96 further includes determining candidate ones of the plurality of cameras as possibly having overlapping FOVs, as indicated at block 104. The method 96 may further include determining whether the candidate ones of the plurality of cameras have overlapping FOVs, as indicated at block 106. In some cases, determining candidate ones of the plurality of cameras as possibly having overlapping FOVs may include identifying cameras that are neighboring cameras in the security system. In some cases, the neighboring cameras may be identified by a self-discovery module. In some cases, the self-discovery module can receive inputs from prior knowledge, a building map, or a spatial or hierarchical mapping of the cameras. Once candidate ones of the plurality of cameras as possibly having overlapping FOVs are identified, one or more of the illustrative methods of, for example,
An overlapping region between the first FOV and the second FOV is determined based at least in part on the one or more detected objects, as indicated at block 116. In some cases, determining the overlapping region may include fine-tuning the overlapping region as additional objects are found within the FOV of the first camera and the same additional objects are found to be present at the same time (e.g. same time stamp) within the FOV of the second camera.
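By way of a non-limiting illustration, the following Python sketch (using the OpenCV library) shows one possible way to fine-tune the overlapping region as additional matched objects accumulate, here by maintaining the convex hull of the matched image locations. The matched points are hypothetical.

```python
import numpy as np
import cv2

class OverlapEstimator:
    """Refines the overlapping region in one camera's image as more objects are
    matched (same object, same time stamp) in the other camera's image."""

    def __init__(self):
        self.points = []  # image locations of matched objects

    def add_match(self, point):
        self.points.append(point)

    def region(self):
        # The convex hull of all matched locations is a simple estimate of the
        # overlapping region; it tightens as more matches accumulate.
        if len(self.points) < 3:
            return None
        pts = np.array(self.points, dtype=np.float32)
        return cv2.convexHull(pts)

est = OverlapEstimator()
for p in [(400, 120), (640, 130), (630, 480), (390, 470), (500, 300)]:
    est.add_match(p)
print(est.region().reshape(-1, 2))  # hull vertices of the estimated overlap
```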
An alarm event is detected in the overlapping region between the first FOV and the second FOV, as indicated at block 118. A combined alarm notification corresponding to the alarm event is sent, as indicated at block 120. In some instances, the combined alarm notification may include the alarm event and may identify the first camera and the second camera as both detecting the alarm event in their respective FOVs.
One or more of the plurality of unique pattern elements that are found at the same time (e.g. same time stamp) in both the first FOV and in the second FOV are identified, as indicated at block 138. Relative positions within the first FOV and the second FOV of each of the plurality of unique pattern elements that are found at the same time (e.g. same time stamp) in both the first FOV and in the second FOV are determined, as indicated at block 140. The overlapping region is determined based at least in part on the extent of the relative positions of each of the plurality of unique pattern elements found at the same time (e.g. same time stamp) in both the first FOV and in the second FOV, as indicated at block 142.
In some instances, defining the overlapping region may continue over time as additional moving people are found within the FOV of the first camera and also found within the FOV of the second camera. In some instances, defining the overlap region is repeated over time as the FOV of the first camera and/or the FOV of the second camera is modified as a result of the first camera and/or the second camera moving (e.g. being accidentally bumped or intentionally repositioned) or being partially blocked by an obstruction.
In some cases, particularly when the FOV of the first camera and the FOV of the second camera each cover at least part of a real world physical space, the illustrative method 144 further includes identifying a plurality of image location pairs, wherein each of the plurality of image location pairs includes a first image location (x, y) in the FOV of the first camera and a corresponding second image location (x, y) in the FOV of the second camera that both correspond to a common physical location in the real world physical space, as indicated at block 158.
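By way of a non-limiting illustration, the following Python sketch (using the OpenCV library) shows one possible way to use such image location pairs: estimating a homography that maps ground-plane locations in the first camera's image to the corresponding locations in the second camera's image. The point pairs shown are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical image location pairs: the same physical floor point seen at
# (x, y) in the first camera's image and at the paired (x, y) in the second's.
pts_cam1 = np.array([[420, 200], [510, 260], [600, 300], [390, 470], [500, 330]], dtype=np.float32)
pts_cam2 = np.array([[60, 210], [150, 270], [240, 310], [30, 480], [140, 340]], dtype=np.float32)

# A homography maps ground-plane points between the two images; RANSAC
# discards mismatched pairs.
H, inlier_mask = cv2.findHomography(pts_cam1, pts_cam2, cv2.RANSAC, 3.0)

# With H, any location in the first camera's overlap region can be transferred
# into the second camera's image to confirm both cameras see the same spot.
p = np.array([[[450.0, 250.0]]], dtype=np.float32)
print(cv2.perspectiveTransform(p, H))
```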
Continuing with
In cases in which a hierarchical or spatial mapping of the cameras is available, the lowest hierarchy level cameras may be considered, as indicated at block 210. In some cases, the cameras that are at the lowest hierarchy level may all be in the same zone or region of a facility, and thus may have a good chance of having overlapping FOVs. In other cases, when the latitude and longitude values of the cameras are known, or when the camera locations are known from a building map, neighboring and nearby cameras may be considered, as indicated at block 214. In some instances, a threshold of several meters may be used in ascertaining whether cameras are neighboring, for example. In either case, this yields a listing of cameras that should be considered as possibly having overlapping FOVs, as indicated at block 212.
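By way of a non-limiting illustration, the following Python sketch shows one possible way to identify neighboring camera candidates from latitude and longitude values using a great-circle distance threshold. The coordinates and the 10 meter threshold are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in meters between two lat/long points."""
    r = 6371000.0  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Hypothetical camera coordinates; pairs within the threshold become
# candidates for the overlap-detection methods described above.
cameras = {"cam1": (51.5007, -0.1246), "cam2": (51.5007, -0.12455), "cam3": (51.5100, -0.1000)}
THRESHOLD_M = 10.0  # "several meters"

candidates = [
    (a, b)
    for i, (a, (la, lo)) in enumerate(cameras.items())
    for b, (lb, lo2) in list(cameras.items())[i + 1:]
    if distance_m(la, lo, lb, lo2) <= THRESHOLD_M
]
print(candidates)  # [('cam1', 'cam2')]
```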
Those skilled in the art will recognize that the present disclosure may be manifested in a variety of forms other than the specific embodiments described and contemplated herein. Accordingly, departures in form and detail may be made without departing from the scope and spirit of the present disclosure as described in the appended claims.