The application pertains to systems and methods of presenting primarily non-tabular, multilevel images of alarms or events via a user-friendly graphical user interface. More particularly, the application pertains to such systems and methods which provide a background contextual image that is overlaid, in part, by one or more semitransparent alarm or event indicating elements.
Video surveillance systems are used across almost all business sectors for security and surveillance purposes. Primary uses of these systems include live monitoring, reviewing recorded video, and reviewing events, alarms, bookmarks and incidents. A recurring problem with these systems is that they are primarily list driven. Alarm and event lists are presented in a more or less static configuration and need further improvement.
Alarm lists make for difficult reading in that they often show only a basic description of the alarm details, the alarm device, the time, and the location. Most of the time these alarms will be listed in a tabular format.
Further, most of the time these alarms will be linked with some video data, such as “Motion detection on Camera 39”. For this alarm, the user has to obtain the video from camera 39. Many known types of systems, such as life safety, building automation and management, home automation and residential security systems, are integrated with a regional video system to provide visual alarm verification.
Existing alarm and event views give more or less static information. There may be linkages to other interfaces for retrieving relevant video, but this information needs to be extracted from different interfaces or different subsystems. The problem is more acute for video-based (CCTV) mobile apps, since the screen size is limited. Nowadays, most residential security systems, connected home systems and residential home automation systems are controlled and monitored remotely through mobile devices such as smart phones and tablets.
In summary, known systems provide no contextual view or information for the events to be reviewed, even before video retrieval begins. For example, where a CCTV operator is looking for a person in a red shirt who forcibly opened an access door, the operator must play an entire video clip to locate the exact alarm and the relevant footage.
While disclosed embodiments can take many different forms, specific embodiments thereof are shown in the drawings and will be described herein in detail with the understanding that the present disclosure is to be considered as an exemplification of the principles thereof as well as the best mode of practicing same, and is not intended to limit the application or claims to the specific embodiment illustrated. In one aspect, contextual alarm views are presented after a user reviews a list of events and alarms from the system and selects a type of event. The user interface presents a background screen that shows context information for the selected event or alarm.
Context information can include a video snapshot image at alarm time, a video clip at alarm time, or alarm location information, presented for example as a 2D floor map, a 3D model or a BIM. On an as-needed basis, a user or operator can start playing video directly from the alarm view.
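The selection of background context described above can be sketched as follows. This is an illustrative sketch only; the names `Alarm` and `build_context_view`, the field layout and the returned dictionary are all hypothetical assumptions for explanation, not the disclosed implementation.

```python
# Illustrative sketch: choose background context for a selected alarm.
# All names and structures here are hypothetical, not part of the patent.
from dataclasses import dataclass

@dataclass
class Alarm:
    device_id: str   # e.g. "cam_39" or "smoke_2"
    timestamp: float # alarm time, seconds since epoch
    location: str    # room or zone identifier
    has_video: bool  # whether a camera is associated with the device

def build_context_view(alarm: Alarm) -> dict:
    """Assemble the background screen for a selected alarm.

    Prefers a video snapshot at alarm time; falls back to a 2D floor
    map of the alarm location when no video source is associated.
    """
    if alarm.has_video:
        background = {"type": "snapshot",
                      "device": alarm.device_id,
                      "time": alarm.timestamp}
    else:
        background = {"type": "floor_map",
                      "location": alarm.location}
    # Video playback is offered only when a video source exists.
    return {"background": background, "play_on_demand": alarm.has_video}
```

A non-video detector (for example a smoke detector) thus yields a floor-map background, consistent with the pre-stored background image described later for non-video ambient condition detectors.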
Once the alarm video starts playing, other alarm lists can be suppressed. Each event type can be displayed with text and a semitransparent icon. Selected events can be highlighted by selecting or highlighting the respective icon. The event list can be navigated up and down by touch or by using a mouse (on a workstation or mobile device). Camera views can be selected based on the physical location of the alarmed device or detector which is in an alarm state. In the event of an alarm from a subsystem, such as an invalid access card entry, the system can retrieve the logically associated video feed.
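The camera-selection behavior described above can be sketched as follows. The device tables, coordinates and the nearest-camera distance metric are illustrative assumptions for explanation only, not the disclosed implementation.

```python
# Hypothetical sketch: pick a camera view for an alarmed device.
# Coordinate tables and the distance metric are assumptions.
import math

# Physical coordinates of detectors and cameras, assumed to be
# available from the system database (element 24).
DETECTORS = {"door_7": (10.0, 4.0)}
CAMERAS = {"cam_39": (12.0, 5.0), "cam_12": (40.0, 30.0)}

# Logical associations for subsystem alarms (e.g. access control),
# used when a device has a configured video feed rather than a position.
LOGICAL_FEEDS = {"access_panel_3": "cam_39"}

def camera_for_alarm(device_id: str) -> str:
    """Select the camera view for an alarmed device.

    Subsystem devices with a configured logical feed use that feed;
    otherwise the camera physically nearest the device is chosen.
    """
    if device_id in LOGICAL_FEEDS:
        return LOGICAL_FEEDS[device_id]
    dx, dy = DETECTORS[device_id]
    return min(CAMERAS,
               key=lambda c: math.hypot(CAMERAS[c][0] - dx,
                                        CAMERAS[c][1] - dy))
```

An alarm at `door_7` would select `cam_39` as the nearest camera, while an invalid-card alarm at `access_panel_3` would use its logically mapped feed directly.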
The members of the plurality 12 communicate via a wired or wireless medium, indicated at 12a, with a monitoring, or security, station 14. Station 14 includes control circuits 20 which can be implemented, at least in part, with one or more programmable processors 20a, and software 20b executable by the processor(s) 20a.
An input/output interface 22 coupled to the control circuits 20 facilitates communications with members of the plurality 12. A database 24 provides storage for various types of information, including maps of the region R and information as to the characteristics and locations of members of the plurality 12, which, in addition to the video cameras, can also include various types of ambient condition detectors.
Station 14 also includes a graphical user interface 26, coupled to control circuits 20. Interface 26, which can be driven by software 20c and the processor(s) 20a, can include a visual display panel 28. In addition, it can include manually operable communications elements 30. Elements 30 can include mouse-type devices, keyboards and touch screens.
Station 14, via the interface element 22, can also communicate wirelessly, over a computer network such as the internet I, with a user's displaced communication device 32. The below-described contextual screens can be readily presented on portable communication devices 32, such as smart phones, tablets or the like, all without limitation.
Those of skill will understand that the detailed characteristics of the elements of the station 14 do not constitute limitations hereof, except as described herein. Variations of the components of the station 14 come within the spirit and scope hereof. The graphical user interface 26, as discussed below, can present to a user multi-level contextual displays driven by outputs from members of the plurality 12, and, choices made by the user.
Advantageously, the display 42, associated with monitoring a residence, is not a mere list which might identify a camera, status or location. Instead, the user is presented with a multi-level background contextual display 44 which is associated with a current event. The background display 44 is overlaid by one or more semitransparent foreground event identifiers, which include text and associated icons 46a, b, c, d (from the event list) which represent events, conditions or alarms in the region R.
Each of the event identifiers can include a textual part 46a-1 and an activatable icon 46a-2. Each of the icons 46i can be independently selected for further investigation by the user or system operator.
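The semitransparent overlay of event identifiers over the contextual background can be illustrated with standard "over" alpha compositing. The function below is a minimal sketch of that blending operation; it is not the disclosed rendering implementation.

```python
# Illustrative sketch of standard "over" alpha compositing, as one way
# semitransparent icons (46a-46d) could overlay the background display 44.
def blend_pixel(fg, bg, alpha):
    """Blend a semitransparent foreground pixel over a background pixel.

    fg, bg: (r, g, b) tuples with 0-255 channels; alpha in [0, 1],
    where 0 leaves the background fully visible and 1 hides it.
    """
    return tuple(round(alpha * f + (1 - alpha) * b)
                 for f, b in zip(fg, bg))
```

At an alpha of roughly 0.5, the icon remains legible while the contextual background (for example, the alarm-time snapshot) stays visible beneath it.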
In summary, embodiments hereof enhance the user's experience by reducing the time needed to search the video data for intended clips or the like. Images providing instant visual context are presented to the user while investigating the list of alarms or events. Incident management should be improved due to enhanced and faster understanding of the incident.
Those of skill will understand that it is not necessary that any member of plurality 12 include video-type cameras. Some members of plurality 12 can be non-video ambient condition detectors. In this instance, a pre-stored background image can be displayed with overlying semitransparent event or incident identifiers.
From the foregoing, it will be observed that numerous variations and modifications may be effected without departing from the spirit and scope hereof. It is to be understood that no limitation with respect to the specific apparatus illustrated herein is intended or should be inferred. It is, of course, intended to cover by the appended claims all such modifications as fall within the scope of the claims. Further, logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described embodiments.