The present invention relates generally to alarm systems. More specifically, the present invention relates to alarm systems with enhanced performance to reduce nuisance alarms.
In conventional alarm systems, nuisance alarms (also referred to as false alarms) are a major problem that can lead to expensive and unnecessary dispatches of security personnel. Nuisance alarms can be triggered by a multitude of causes, including improper installation of sensors, environmental noise, and third party activities. For example, a passing motor vehicle may trigger a seismic sensor, movement of a small animal may trigger a motion sensor, or an air-conditioning system may trigger a passive infrared sensor.
Conventional alarm systems typically do not have on-site alarm verification capabilities, and thus nuisance alarms are sent to a remote monitoring center where an operator either ignores the alarm or dispatches security personnel to investigate the alarm. A monitoring center that monitors a large number of premises may be overwhelmed with alarm data, which reduces the ability of the operator to detect and allocate resources to genuine alarm events.
As such, there is a continuing need for alarm systems that reduce the occurrence of nuisance alarms.
With the present invention, contextual information is extracted from sensor signals of an alarm system monitoring an environment. A contextualized alarm output representative of a situation associated with the monitored environment is produced as a function of the extracted contextual information.
In most situations, remote monitoring system 16 is an off-site call center, staffed with a human operator, that monitors a multitude of conventional alarm panels 14 located at a multitude of different premises. Conventional alarm panels 14 communicate alarm data to remote monitoring system 16, where the data typically appears as text on a computer screen, or as a symbol on a map, indicating that a sensor has detected an alarm event. Conventional alarm systems 10 do not provide contextual information about the facts and circumstances surrounding alarm events, and thus every alarm event must be treated as genuine. This lack of contextual information impairs the ability of remote monitoring system 16 to efficiently allocate security resources to simultaneous alarms.
With a conventional system such as alarm system 10, before making decision 17 about the truth of an alarm event, security personnel must investigate the alarm event to verify whether it is a nuisance alarm event or a genuine alarm event. Such investigations are necessitated by a lack of contextual information about the situation responsible for causing the alarm event, and can entail visiting the premises in which the alarm event occurred or viewing the premises via remote viewing equipment. The alarm system of the present invention can reduce or eliminate the need for security personnel to conduct such investigations to determine whether an alarm event is genuine.
As shown in
In the embodiment of
Situation context output 30 describes or characterizes situation 22 of environment 18 for decision-making purposes by alarm panel 26 or remote monitoring system 16. For example, situation context output 30 may include a location of an activity, a nature of an activity, an identity of a person associated with an activity, a state of environment 18, or combinations of these. In most embodiments, context output 30 is a contextualized alarm message that is directly actionable by security or maintenance personnel. Examples of such contextualized alarm messages include “two unknown people entering building illegally at entrance X”, “motion alarm triggered by 3 human intruders in zone X”, “4 people acting suspiciously detected”, “1 human intruder breaking into the safe”, or “door sensor at location X is faulty and in need of repair”.
Contextual information Ia-In includes one or more context elements, which can be of a variety of forms. Examples of such context elements include statistical information (e.g., a duration of an alarm or a frequency of an alarm over time), spatial/temporal information (e.g., a location of a particular sensor 24a-24n within environment 18 or a location of a particular sensor 24a-24n relative to other sensors 24a-24n or to layout features of environment 18), user information, an acceleration of an object, a number of objects entering or exiting an area, whether an object is a person, a speed of an object, a direction of a movement, an identity of a person, a size of a person, an intention of a person, an identity of possible attack tools, or combinations of these. The nature and number of context elements that can be extracted from a particular sensor 24 depends upon the particular type of sensor.
Any type of conventional sensor or smart sensor may be used with alarm system 20. Examples of sensors 24 for use in alarm system 20 include portable identification devices, motion sensors, temperature sensors, seismic sensors, access readers, scanners, conventional video sensors, video sensors equipped with or in communication with video content analyzers, oxygen sensors, global positioning (GPS) devices, accelerometers, microphones, heat sensors, door contact sensors, proximity sensors, pervasive computing devices, and any other security/alarm sensor known in the art. These sensors can provide information to alarm panel 26 in the form of a "detect" signal (e.g., "1") or a "no detect" signal (e.g., "0"), raw sensor data (e.g., temperature data from a temperature sensor), contextual information, or combinations of these.
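As an illustration only, the heterogeneous signal forms listed above (detect/no-detect flags, raw sensor data, and extracted contextual information) could be normalized into a single record before processing by alarm panel 26. The class and field names in the following Python sketch are assumptions for illustration, not part of the specification.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SensorReading:
    """Illustrative container for the signal forms a sensor 24 may report."""
    sensor_id: str
    detect: Optional[bool] = None           # "detect" (1) / "no detect" (0) flag
    raw_value: Optional[float] = None       # e.g., temperature from a temperature sensor
    context: dict = field(default_factory=dict)  # extracted context elements

# A temperature sensor reporting raw data plus one context element:
reading = SensorReading("temp-07", raw_value=41.5,
                        context={"location": "zone X"})
```

A record of this kind lets a single aggregation routine accept a bare contact-closure signal and a rich smart-sensor report through the same interface.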
After aggregation in step 44, the aggregated categories are then further processed (step 50) to yield situation context output 30. In some embodiments, the context information from different categories is further fused using a context manipulation technique in accordance with the dependencies existing among the contextual information, using methods such as set theory, directed graphs, first-order logic, composite capability/preference profiles, or any other method known in the art. In some embodiments, subjective belief models are used in context aggregation 34 to quantify contextual information Ia-In and/or categories and enhance the reliability of situation context output 30. For example, in some embodiments, each category represents a possible context scenario occurring within environment 18 and an opinion measure is computed for each context scenario. These opinion measures are then used to assess the probability of each context scenario and eliminate context scenarios with low probabilities. Examples of such context scenarios include access violations, intrusion, attack of protected assets, and removal of protected assets. In some embodiments, particularized subsets of these context scenarios relevant to the particular environment 18 being monitored can be included in the categorization process.
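The scenario-elimination step described above can be sketched as follows, using a subjective-logic probability expectation E = b + a·u to score each candidate context scenario. The scenario names, base rate, and threshold below are illustrative assumptions, not values prescribed by the specification.

```python
def filter_scenarios(opinions, base_rate=0.5, threshold=0.3):
    """opinions maps scenario -> (belief, disbelief, uncertainty).

    Each scenario's opinion is projected to a probability expectation
    E = b + a*u; scenarios whose expectation falls below `threshold`
    are eliminated from further consideration.
    """
    kept = {}
    for scenario, (b, d, u) in opinions.items():
        expectation = b + base_rate * u
        if expectation >= threshold:
            kept[scenario] = expectation
    return kept

# Illustrative opinion measures for three candidate context scenarios:
scenarios = {
    "intrusion":        (0.70, 0.10, 0.20),
    "access violation": (0.05, 0.85, 0.10),
    "removal of asset": (0.20, 0.30, 0.50),
}
surviving = filter_scenarios(scenarios)
```

Under these example numbers, "access violation" scores 0.10 and is eliminated, while "intrusion" and "removal of asset" survive for inclusion in situation context output 30.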
The below discussion of categories for use in step 42 is included to further illustrate some of the example categories referenced above. A multitude of additional categories (or variations of the above categories) can also be considered by context aggregation 34, depending upon the particular security needs of environment 18. In some embodiments, some or all of the categories of step 42 are user-defined.
User behavior context categories describe user-behaviors that are associated with an alarm event. Examples of contextual information for classification in a user behavior category include a number of user(s), an identity of a user(s), a status of a user(s) (e.g., authorized vs. non-authorized), a tailgating event, and a mishandling of alarm system 20 by a user(s) (e.g., failure to arm/disarm). Examples of sources of such contextual information include access control devices, smart badges, hand held devices, facial recognition systems, iris readers, walking gesture recognition devices, hand readers, and video behavior analysis systems.
Activity context categories describe specific activities associated with an alarm event. Examples of such activity categories include intrusion, access, property damage, and property removal. Examples of contextual information that may be categorized in such activity context categories include a type of an event, a time of an event, user activities (e.g., an authorized user working late), third party activities (e.g., a cleaning crew working), an intruder breaking into a protected area of environment 18, a protected asset being removed or damaged, and abnormal behaviors (e.g., loitering, sudden changes in speed, people congregating, and person(s) falling). Examples of sources of such contextual information include site models (e.g., information about the physical layout of environment 18), accelerometers, pressure sensors, temperature sensors, oxygen sensors, global positioning devices, motion sensors, and video sensors with video content analysis.
Environmental context categories describe states or ambient conditions of environment 18 associated with an alarm event. Examples of contextual information that may be categorized into environmental context categories include a location of a detected object(s) within environment 18 and a proximity of a detected object(s) to a protected area or asset within environment 18. Examples of sources of such contextual information include sensors for measuring ambient conditions of environment 18, historical records of ambient conditions of environment 18, site models (e.g., physical layout information for environment 18), accelerometers, pressure sensors, temperature sensors, oxygen sensors, global positioning devices, motion sensors, and video sensors with video content analysis.
Device context categories generally describe a condition or health of a device or an identity or other characteristic of a person using a device. Device diagnostics and statistical data (e.g., alarm frequency, sensor alarm duration, and sensor alarm time) can be used to infer a health of a sensor. In some situations, device context categories can be used by context aggregation 34 to filter out nuisance alarms due to device malfunctions and produce situation context outputs 30 to notify maintenance personnel of maintenance issues. In some embodiments, if a sensor continues to indicate detection of an alarm event and no other sensors indicate any changes in environment 18, then the sensor is deemed faulty and data from the sensor is automatically discounted by context aggregation 34. A device context category may play an important role, for example, when a passive infrared (PIR) motion sensor that frequently detects alarm events sends a motion alarm to alarm panel 26. Given the history of the PIR motion sensor for sending motion alarms, alarm panel 26 can use a health-related device category to assess the reliability of the PIR motion alarm. If, for example, no movement patterns are identified by other nearby motion sensors and a nearby temperature sensor detects a high environment temperature but no fire or smoke alarm is received, then the PIR motion alarm can be deemed false by alarm panel 26 due to the fact that PIR motion sensors are less reliable at high ambient temperatures.
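The faulty-sensor heuristic described above (a sensor that continues to alarm while all other sensors report no change in environment 18 is deemed faulty and discounted) can be sketched as follows. The data layout and window length are assumptions for illustration.

```python
def is_likely_faulty(sensor_events, other_sensor_events, window=10):
    """Return True if `sensor_events` (1 = alarm, 0 = quiet) has alarmed
    continuously over the last `window` samples while every other sensor
    stayed quiet over the same window -- the condition under which the
    text deems the sensor faulty and its data discounted."""
    recent = sensor_events[-window:]
    others_quiet = all(not any(ev[-window:]) for ev in other_sensor_events)
    return len(recent) == window and all(recent) and others_quiet

pir = [1] * 12                      # PIR motion sensor alarming continuously
neighbors = [[0] * 12, [0] * 12]    # nearby sensors report no change
flagged = is_likely_faulty(pir, neighbors)   # True under this heuristic
```

A flagged sensor's data could then be discounted by context aggregation 34, and a situation context output 30 such as "door sensor at location X is faulty and in need of repair" issued to maintenance personnel.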
Historical categories describe historical contexts related to environment 18 that can be used to affirm or disaffirm contextual information Ia-In or categories for inclusion in context aggregation 34. Sources of contextual information for categorization in historical categories include, for example, historic security data for alarm events occurring within environment 18, weather patterns, and crime rates.
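As a minimal sketch of the categorization step, extracted context elements could be routed into the five categories described above by a keyword mapping. The element names and the mapping below are illustrative assumptions, not part of the specification.

```python
# Hypothetical routing table from context-element names to the category
# families described in the text (step 42). Real deployments would derive
# this from the particular sensors and security needs of environment 18.
CATEGORY_KEYWORDS = {
    "user_behavior": {"user_count", "user_identity", "tailgating"},
    "activity":      {"event_type", "event_time", "loitering"},
    "environmental": {"object_location", "ambient_temperature"},
    "device":        {"alarm_frequency", "alarm_duration"},
    "historical":    {"crime_rate", "weather_pattern"},
}

def categorize(context):
    """Place each context element into every category that claims its key."""
    result = {cat: {} for cat in CATEGORY_KEYWORDS}
    for key, value in context.items():
        for cat, keys in CATEGORY_KEYWORDS.items():
            if key in keys:
                result[cat][key] = value
    return result

buckets = categorize({"user_count": 2, "event_type": "intrusion",
                      "alarm_frequency": 14})
```

Because a single element may legitimately bear on more than one category, the sketch allows an element to land in several buckets rather than forcing an exclusive assignment.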
To generate alarm decision 62, alarm panel 26 of
In some embodiments, each of sensor decisions 64 represents an opinion ωx about the truth of an alarm event x expressed in terms of belief, disbelief, and uncertainty in the truth of alarm event x. As used herein, a "true" alarm event is defined to be a genuine alarm event that is not a nuisance alarm event. The relationship between these variables can be expressed as follows:
bx+dx+ux=1,   (Equation 1)
where bx represents the belief in the truth of event x, dx represents the disbelief in the truth of event x, and ux represents the uncertainty in the truth of event x.
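A minimal sketch of an opinion triplet that enforces the Equation 1 constraint (bx + dx + ux = 1) might look like the following; the class name and tolerance are assumptions for illustration.

```python
class Opinion:
    """Opinion about the truth of an alarm event, per Equation 1:
    belief + disbelief + uncertainty must sum to one."""

    def __init__(self, belief, disbelief, uncertainty):
        if abs(belief + disbelief + uncertainty - 1.0) > 1e-9:
            raise ValueError("belief + disbelief + uncertainty must equal 1")
        self.b, self.d, self.u = belief, disbelief, uncertainty

# A sensor strongly believing in the truth of an alarm event:
confident = Opinion(0.8, 0.1, 0.1)
```

Enforcing the constraint at construction time ensures that every sensor decision 64 fed into sensor fusion 66 is a well-formed opinion.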
Values for bx, dx, and ux are assigned based upon, for example, empirical testing involving conventional sensors 12a-12n and environment 18. In addition, predetermined values for bx, dx, and ux for a given sensor 12a-12n can be assigned based upon prior knowledge of that particular sensor's performance in environment 18 or based upon manufacturer's information relating to that particular type of sensor. For example, if a first type of sensor is known to be more susceptible to generating false alarms than a second type of sensor, the first type of sensor can be assigned a higher uncertainty ux, a higher disbelief dx, a lower belief bx, or combinations of these.
An opinion ωx having coordinates (bx,dx,ux) can be projected onto a 1-dimensional probability space by computing probability expectation value E(ωx), which is defined by the equation
E(ωx)=bx+axux,   (Equation 2)
where ax is the decision bias, ux is the uncertainty, and bx is the belief. Decision bias ax can be defined by a user to bias the alarm system towards either deciding that an alarm event is a genuine alarm event or a nuisance alarm event.
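The projection described above can be sketched as follows, written in the standard subjective-logic form E(ωx) = bx + ax·ux consistent with the definitions of belief, uncertainty, and decision bias given here; the example values are illustrative only.

```python
def expectation(b, d, u, a):
    """Project opinion (b, d, u) onto a probability expectation using
    decision bias a (Equation 2: E = b + a*u). The opinion must satisfy
    Equation 1 (b + d + u = 1)."""
    assert abs(b + d + u - 1.0) < 1e-9, "Equation 1 violated"
    return b + a * u

# The same uncertain opinion under two different decision biases: a system
# biased toward treating alarms as genuine (a = 0.9) versus one biased
# toward treating them as nuisance alarms (a = 0.1).
lenient = expectation(0.5, 0.2, 0.3, a=0.9)   # higher expectation
strict = expectation(0.5, 0.2, 0.3, a=0.1)    # lower expectation
```

The bias parameter thus gives a user a single knob for trading missed genuine alarms against nuisance dispatches without altering the underlying sensor opinions.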
Sensor fusion 66 can use various fusion operators in various combinations to fuse sensor decisions 64. Examples of such fusion operators include multiplication, co-multiplication, counting, discounting, recommendation, consensus, and negation. In some embodiments, co-multiplication operators can function as "or" fusion operators while multiplication operators can function as "and" fusion operators. For example, the multiplication of two sensor decisions 64 having coordinates (0.8,0.1,0.1) and (0.1,0.8,0.1), whereby each sensor decision 64 is an opinion ωx triplet (bx,dx,ux), yields a fused opinion of (0.08,0.82,0.10), whereas the co-multiplication of the two sensor decisions 64 yields a fused opinion of (0.82,0.08,0.10).
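The worked example above can be reproduced with simplified "and"/"or" operators over opinion triplets. These formulas match the numbers given in the text, but full subjective-logic multiplication includes additional correction terms, so the following is a sketch rather than a definitive implementation.

```python
def multiply(o1, o2):
    """'And' fusion: belief only if both opinions believe."""
    b1, d1, _ = o1
    b2, d2, _ = o2
    b = b1 * b2
    d = d1 + d2 - d1 * d2
    return (b, d, 1.0 - b - d)

def comultiply(o1, o2):
    """'Or' fusion: belief if either opinion believes."""
    b1, d1, _ = o1
    b2, d2, _ = o2
    b = b1 + b2 - b1 * b2
    d = d1 * d2
    return (b, d, 1.0 - b - d)

# The two sensor decisions from the text, as (b, d, u) triplets:
s1, s2 = (0.8, 0.1, 0.1), (0.1, 0.8, 0.1)
fused_and = multiply(s1, s2)     # approx. (0.08, 0.82, 0.10)
fused_or = comultiply(s1, s2)    # approx. (0.82, 0.08, 0.10)
```

Note the asymmetry: "and" fusion lets the skeptical sensor dominate, while "or" fusion lets the believing sensor dominate, with uncertainty preserved at 0.10 in both cases.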
The above subjective belief modeling methods, as well as other belief modeling methods, can be used in conjunction with any fusion method of the present invention. For example, some embodiments of context aggregation 34 incorporate such belief modeling methods in computing situation context output 30.
As shown in
As shown in
In some embodiments, smart video sensor 84 includes facial recognition capabilities to capture the facial images of persons granted access to environment 18. These facial images can be used by alarm system 80 at a later time to determine user errors and filter out resulting nuisance alarms. In some embodiments, smart video sensor 84 includes a video content analyzer to extract contextual features from video data. In some embodiments, smart video sensor 84 includes voice and/or noise pattern recognition capabilities to allow standard voice commands or unusual noise patterns to be used to reinforce detection accuracy. In some embodiments, smart video sensor 84 communicates with one or more sensors and is activated by the other sensor(s).
In one embodiment, to gain access to a restricted area, a user must present smart badge 82 to an access reader and enter a PIN using keypad 100. Smart badge 82 compares the user-entered PIN with a reference PIN stored in flash memory 112. If the user-entered PIN matches the reference PIN, then wireless communication module 120 sends an encrypted command to the access reader and access to the restricted area is granted. If these two PINs do not match, then LCD 102 can display one or more prompt questions to verify the identity of the user and/or remind the user of the reference PIN. These prompt questions can be programmed into smart badge 82 in advance according to the preference of a user.
In another embodiment of smart badge 82, biometric data is used to verify the identity of a user. For example, upon presenting smart badge 82 to an access reader, a user presses a finger onto fingerprint sensor 104. Fingerprint processor 108 then compares the scanned fingerprint to a reference fingerprint stored in flash memory 112 to verify the identity of the user. As shown in
In some embodiments of the present invention, whether a contextualized alarm output such as situation context output 30 is transmitted to remote monitoring system 16 depends upon the probability and uncertainty associated with the contextualized alarm output. Depending upon the uncertainty level associated with the contextualized alarm output, in some embodiments, video data can be attached to the contextualized alarm output for live video verification of an alarm event at remote monitoring system 16. In some circumstances, the contextualized alarm output is automatically sent to remote monitoring system 16 without accompanying video data. This can occur, for example, when the contextualized alarm output includes opinion measures having a high probability of belief in the truth of an alarm event and/or a low uncertainty in the truth of the alarm event. Conversely, when the contextualized alarm output has a high uncertainty in the truth of an alarm event and/or a low belief in the truth of an alarm event, the contextualized alarm output is sent to remote monitoring system 16 along with video data to facilitate visual alarm verification and reduce nuisance alarms. In such situations, the bandwidth of communication is optimized for data transmission from alarm panel 26 to remote monitoring system 16. Such optimizations may include reducing the video data to one or more snapshots.
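The transmission policy described above can be sketched as a simple threshold rule: send the contextualized alarm alone when belief is high and uncertainty low, and attach video (reduced to snapshots to conserve bandwidth) otherwise. The threshold values and field names are assumptions for illustration.

```python
def transmission_policy(belief, uncertainty,
                        belief_high=0.8, uncertainty_low=0.2):
    """Decide whether a contextualized alarm output needs accompanying
    video for visual verification at the remote monitoring system."""
    if belief >= belief_high and uncertainty <= uncertainty_low:
        # High confidence: the contextualized alarm stands on its own.
        return {"send": True, "attach_video": False}
    # Low confidence: attach snapshots for visual alarm verification.
    return {"send": True, "attach_video": True, "video_form": "snapshots"}

clear_case = transmission_policy(0.9, 0.05)   # alarm sent without video
doubtful_case = transmission_policy(0.3, 0.6) # alarm sent with snapshots
```

Tuning the two thresholds lets an installer trade monitoring-center bandwidth against the amount of visual verification material available to the operator.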
As described above with respect to exemplary embodiments, the alarm system of the present invention is capable of extracting contextual information associated with an alarm event to filter out nuisance alarms, facilitate maintenance actions, and/or assist in allocating security resources in response to various alarm events. In some embodiments, the alarm system of the present invention includes one or more smart sensors with on-board intelligence for extracting contextual information for communicating to an alarm panel.
Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US05/08566 | 3/15/2005 | WO | 00 | 9/20/2010 |