The present disclosure relates to devices, methods, and systems for operating a surveillance system.
Surveillance systems can allow enhanced security in various facilities (e.g., buildings, plants, refineries, etc.). Security may be an important factor in managing a facility as various attacks targeting the facility can result in undesirable consequences (e.g., casualty, loss of asset(s), etc.).
Previous approaches to surveillance systems may not be linked to building architecture. Other previous approaches may be linked by manual input (e.g., by a user), for instance. As facilities become larger and more complex, previous approaches to operating surveillance systems may lack contextual information such as where in a facility an intruder may be and/or will be. Further, manually linking surveillance systems to building architecture may become prohibitively costly as architecture continues to become more complex.
Devices, methods, and systems for operating a surveillance system are described herein. For example, one or more embodiments include determining a plurality of parameters of a video camera installed at a particular location in a facility based on a projection of an image captured by the video camera onto a virtual image captured by a virtual video camera placed at a virtual location in a building information model of the facility corresponding to the particular location, determining a two-dimensional geometry of the facility based on the building information model, wherein the geometry includes a plurality of spaces, determining a coverage of the video camera based on a portion of the plurality of parameters and the geometry, determining which spaces of the plurality of spaces are included in the coverage, and associating each space included in the coverage with a respective portion of the image.
Operating a surveillance system in accordance with one or more embodiments of the present disclosure can use a three-dimensional building information model (BIM) associated with a facility to gain information regarding space geometry (e.g., topology), space connection(s), and/or access relationships. Accordingly, embodiments of the present disclosure can allow the linkage of what is captured (e.g., seen) in a video image with a corresponding location in a BIM.
Such linkage can allow embodiments of the present disclosure to determine spaces of a facility where people and/or other items are located. For example, an alarm can be triggered if a person enters a restricted space. Beyond determining a space that a person is in, embodiments of the present disclosure can determine spaces connected to that space and, therefore, where the person is likely to be in the future. Accordingly, embodiments can provide video images of camera(s) covering those connected spaces. Thus, a person's movement through the facility can be monitored by one or more users (e.g., operators, security personnel, etc.). Additionally, embodiments of the present disclosure can lock doors on a path being traveled by a person, for instance.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof. The drawings show by way of illustration how one or more embodiments of the disclosure may be practiced.
These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice one or more embodiments of this disclosure. It is to be understood that other embodiments may be utilized and that process changes may be made without departing from the scope of the present disclosure.
As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, combined, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. The proportion and the relative scale of the elements provided in the figures are intended to illustrate the embodiments of the present disclosure, and should not be taken in a limiting sense.
The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits.
As used herein, “a” or “a number of” something can refer to one or more such things. For example, “a number of spaces” can refer to one or more spaces.
A particular location of a camera can include a position (e.g., a geographic position) identified by geographic coordinates, for instance. A particular location can include a height (e.g., above a floor of the facility, above sea level, etc.). A particular location can also include a location defined with respect to the facility and/or spaces of the facility. Spaces, as referred to herein, can include a room, for instance, though embodiments are not so limited. For example, spaces can represent areas, rooms, sections, etc. of a facility.
As shown in the figure, a display 304 can include an image 300 captured by a video camera installed at a particular location in a facility, a virtual image 302 captured by a virtual video camera placed at a corresponding virtual location in a building information model (BIM) of the facility, and a number of widgets 306-1, 306-2, 306-3, 306-4, and 306-5.
Utilizing widgets 306-1 through 306-5, a user can manipulate (e.g., adjust, modify, etc.) a position and/or a scale of the image 300. Manipulation can allow the user to align (e.g., match) one or more features of the image 300 with one or more corresponding features of the virtual image 302. For example, the user can use widgets 306-1 through 306-5 to align a wall in image 300 with a corresponding virtual wall in virtual image 302. Embodiments of the present disclosure can provide one or more notifications responsive to a correct alignment (e.g., a green line along a wall), for instance.
Once the user has aligned the image 300 with the virtual image 302, embodiments of the present disclosure can determine a plurality of parameters associated with the video camera. Such parameters can be used by embodiments of the present disclosure to determine a coverage (e.g., a coverage area) of the camera (discussed further below). The plurality of parameters can include a name of the camera, a position of the camera, a resolution of the camera, a pan setting of the camera, a tilt setting of the camera, a focal length of the camera, an aspect ratio of the camera, a width of the image, etc. For example, the parameters can appear as a set of named values, such as in the illustrative sketch below.
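By way of a non-limiting illustration, such a parameter set might be represented as follows (the field names and values below are hypothetical and are provided only to show the kind of record that could be produced):

```python
# Hypothetical camera parameter record; all names and values are illustrative only.
camera_parameters = {
    "name": "Camera1",
    "position": {"x": 12.4, "y": 7.9, "height_m": 2.6},  # location within the facility
    "resolution_px": (1920, 1080),                       # image width and height in pixels
    "pan_deg": -35.0,                                    # pan setting
    "tilt_deg": 10.0,                                    # tilt setting
    "focal_length_mm": 4.0,                              # focal length
    "aspect_ratio": 16 / 9,                              # aspect ratio
    "image_width_px": 1920,                              # width of the image
}
```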
A geometry (e.g., a two-dimensional shape and/or cross-section) of the facility can be determined using the BIM associated with the facility. The geometry can include a plurality of spaces. Spaces can represent areas, rooms, sections, etc. of a facility. Each space can be defined by a number of walls, for instance. It is to be understood that though certain spaces are discussed as examples herein, embodiments of the present disclosure are not limited to a particular number and/or type of spaces.
Spaces can be extracted from a three-dimensional BIM associated with a facility and/or from BIM data via a projection method (e.g., by projecting 3D objects of the BIM onto a 2D plan), for instance. Spaces can be polygons, though embodiments of the present disclosure are not so limited. Various information and/or attributes associated with spaces can be extracted along with the spaces themselves (e.g., semantic information, name, Globally Unique Identifier (GUID), etc.).
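By way of a non-limiting illustration, one possible way to obtain two-dimensional space footprints is to project the boundary vertices of each three-dimensional BIM space onto the floor plane. The sketch below assumes spaces are already available as simple records with boundary, name, and GUID fields (a real implementation would read them from the BIM data, e.g., an IFC file) and uses the shapely library for the resulting polygons:

```python
from shapely.geometry import Polygon

def extract_space_footprint(bim_space):
    """Project a 3-D BIM space onto a 2-D floor plan by dropping the vertical coordinate."""
    footprint = Polygon([(x, y) for (x, y, _z) in bim_space["boundary"]])
    return {"name": bim_space["name"], "guid": bim_space["guid"], "polygon": footprint}

# Illustrative input: one rectangular space (name, GUID, and coordinates are hypothetical).
space = extract_space_footprint({
    "name": "1-1102",
    "guid": "space-guid-0001",
    "boundary": [(0.0, 0.0, 0.0), (8.0, 0.0, 0.0), (8.0, 5.0, 0.0), (0.0, 5.0, 0.0)],
})
```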
Connections (e.g., relationships, openings and/or doors) between the spaces can additionally be extracted from BIM data. A given space in a facility may be connected to another space via a door, for instance. Similarly, spaces extracted from BIM data may be connected via a graphical and/or semantic representation of a door. Additionally, spaces extracted from BIM data may be connected by a “virtual door.” For example, though a room may be a contiguous open space (e.g., having no physical doors therein), a BIM model associated with the room may partition the room into multiple (e.g., 2) spaces. Embodiments of the present disclosure can determine a connection between such spaces. The connection can be deemed a virtual door, for instance.
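By way of a non-limiting illustration, the extracted spaces and their connections (including virtual doors) can be held in a simple graph; the sketch below uses the networkx library, and the space and door names are hypothetical:

```python
import networkx as nx

# Space-adjacency graph; each edge records the (possibly virtual) door connecting two spaces.
connections = nx.Graph()
connections.add_edge("1-1102", "1-2451", door="Door1", virtual=False)
# Two BIM spaces that partition one contiguous open room are joined by a "virtual door".
connections.add_edge("1-2451", "1-2452", door="VirtualDoor1", virtual=True)

print(list(connections.neighbors("1-2451")))  # spaces connected to "1-2451"
```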
Once determined, the two-dimensional geometry (including spaces) can be used in conjunction with the determined camera parameters (previously discussed) to determine a coverage of the video camera.
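By way of a non-limiting illustration, a simplified coverage computation might intersect a field-of-view wedge, built from the camera position and pan setting, with the space footprints. The sketch below ignores occlusion by interior walls and assumes hypothetical parameter names (fov_deg, max_range_m) derived from the determined camera parameters:

```python
import math
from shapely.geometry import Polygon
from shapely.ops import unary_union

def fov_wedge(x, y, pan_deg, fov_deg, max_range):
    """Approximate the camera's horizontal field of view as a triangular wedge."""
    left = math.radians(pan_deg - fov_deg / 2.0)
    right = math.radians(pan_deg + fov_deg / 2.0)
    return Polygon([
        (x, y),
        (x + max_range * math.cos(left), y + max_range * math.sin(left)),
        (x + max_range * math.cos(right), y + max_range * math.sin(right)),
    ])

def camera_coverage(camera, space_polygons):
    """Clip the field-of-view wedge to the union of the space footprints."""
    wedge = fov_wedge(camera["x"], camera["y"], camera["pan_deg"],
                      camera["fov_deg"], camera["max_range_m"])
    return wedge.intersection(unary_union(space_polygons))
```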
Display 408 illustrates a portion of a facility including a number of spaces. The potential area of coverage 412 can be a polygon, and can be determined based on the position of the camera 410 and the polygons representing the spaces of the facility. The potential area of coverage 412 can then be refined based on the determined camera parameters and the geometry to determine the coverage 422 of camera 410.
Once determined, the coverage 422 of camera 410 can be used to determine which of the spaces (e.g., a subset of the plurality of spaces) of the two-dimensional geometry are covered by camera 410 (e.g., are included in the coverage 422 of camera 410). Using spatial reasoning, for instance, embodiments of the present disclosure can determine that camera 410 covers (e.g., covers a portion of) a space 424 (shown as 1-1102), a space 426 (shown as 1-2451), and a door 428. The relationship between camera 410 and its covered spaces can be defined as:
“Camera1” covers “1-1102”
“Camera1” covers “1-2451”
“Camera1” covers “Door1”
“1-1102” covered by “Camera1”
“1-2451” covered by “Camera1”
“Door1” covered by “Camera1”
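By way of a non-limiting illustration, relationship statements of this kind could be derived by testing the coverage polygon against each space and door footprint (the names and geometries below are hypothetical and reuse the shapely polygons from the sketches above):

```python
from shapely.geometry import box

def coverage_relationships(camera_name, coverage, named_footprints):
    """Produce 'covers' / 'covered by' statements for footprints intersecting the coverage."""
    relations = []
    for name, footprint in named_footprints.items():
        if coverage.intersects(footprint):
            relations.append((camera_name, "covers", name))
            relations.append((name, "covered by", camera_name))
    return relations

# Illustrative footprints for two spaces and a door, with a coverage polygon over them.
footprints = {"1-1102": box(0, 0, 8, 5), "1-2451": box(8, 0, 14, 5), "Door1": box(7.9, 2, 8.1, 3)}
relations = coverage_relationships("Camera1", box(4, 1, 10, 4), footprints)
```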
Such relationship information can be stored (e.g., in memory) in an information model associated with the security system, using an ontology, for instance, and can be retrieved for various security management purposes and/or scenarios, such as those discussed below.
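The present disclosure does not prescribe a particular storage mechanism; by way of a non-limiting illustration, the relationships could be kept as RDF triples (here using the rdflib library and a hypothetical namespace) so that they can later be queried:

```python
from rdflib import Graph, Namespace

SEC = Namespace("http://example.org/surveillance#")  # hypothetical namespace
model = Graph()
model.add((SEC["Camera1"], SEC["covers"], SEC["1-1102"]))
model.add((SEC["Camera1"], SEC["covers"], SEC["Door1"]))

# Later retrieval, e.g. "which cameras cover space 1-1102?"
covering_cameras = list(model.subjects(SEC["covers"], SEC["1-1102"]))
```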
The relationship information can be associated with (e.g., attached to) the captured image (e.g., the camera video frame) using the determined coverage 422 and one or more of the plurality of camera parameters. For example, embodiments of the present disclosure can project the polygons of the spaces (e.g., space 424, space 426, and/or door 428) covered by camera 410 (e.g., included in coverage 422) into a coordinate system of the captured video image according to the camera parameters (e.g., using a transform matrix).
Each space of the camera coverage 422 can be associated with a respective portion of the captured image. Accordingly, if a person is determined to be in a video image captured by camera 410, embodiments of the present disclosure can determine a space in which that person is located, based on their location in the image, by using the relationship information associated with the captured image.
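By way of a non-limiting illustration, the projection of covered space polygons into the image and the reverse lookup from an image location to a space might be sketched as follows, assuming a pinhole camera model with intrinsic matrix K and extrinsics R, t obtained from the determined camera parameters (all names are hypothetical):

```python
import numpy as np
from shapely.geometry import Point, Polygon

def project_to_image(points_world, K, R, t):
    """Project (x, y, z) world points into pixel coordinates with a pinhole model."""
    pts = np.asarray(points_world, dtype=float)
    cam = R @ pts.T + t.reshape(3, 1)   # world -> camera coordinates
    uvw = K @ cam                       # camera -> homogeneous image coordinates
    return (uvw[:2] / uvw[2]).T         # (N, 2) pixel coordinates

def space_at_pixel(pixel, projected_spaces):
    """Return the space whose projected footprint contains the given pixel, if any."""
    p = Point(pixel)
    for name, image_polygon in projected_spaces.items():
        if Polygon(image_polygon).contains(p):
            return name
    return None
```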
Embodiments of the present disclosure can use the determined space to provide context and/or assistance to a user of the surveillance system in various surveillance scenarios. In one example, a user can specify a particular space as a restricted space. Embodiments of the present disclosure can update the relationship information of the camera(s) covering the restricted space. In the event that the camera(s) covering the restricted space capture an image of a person entering the restricted space, embodiments of the present disclosure can provide a notification (e.g., an alarm) responsive to the person entering the restricted space.
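By way of a non-limiting illustration, the restricted-space check itself can be very small once the person's space has been determined (the space name and notification mechanism below are hypothetical):

```python
def check_restricted(space_name, restricted_spaces, notify=print):
    """Notify (e.g., raise an alarm) when a detected person's space is restricted."""
    if space_name is not None and space_name in restricted_spaces:
        notify(f"ALARM: person detected in restricted space {space_name}")

# Example: a user has marked space "1-1102" as restricted.
check_restricted("1-1102", restricted_spaces={"1-1102"})
```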
In an example, a protected asset (e.g., a valuable device) has gone missing from space 530. However, space 530 is a private space having no surveillance camera coverage. Embodiments of the present disclosure can determine spaces connected to space 530 (e.g., space 532 and space 534). Embodiments of the present disclosure can determine cameras covering the spaces connected to space 530 (e.g., camera 510-1, camera 510-2, camera 510-3, and camera 510-5) based on the relationship information stored in the information model. Once determined, video images captured by the cameras covering the spaces connected to space 530 can be provided (e.g., displayed) to a user (e.g., immediately and/or in real time) such that the user can attempt to locate the missing asset and/or a person suspected of taking it.
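By way of a non-limiting illustration, the lookup of cameras covering the spaces connected to an uncovered space might combine the space-adjacency graph with the stored "covers" relationships (the space and camera names below are hypothetical):

```python
import networkx as nx

def cameras_for_connected_spaces(space, connections, relations):
    """Cameras covering the spaces connected to `space` (relations are 'covers' triples)."""
    neighbours = set(connections.neighbors(space))
    return sorted({s for (s, p, o) in relations if p == "covers" and o in neighbours})

# Illustrative data: the space itself is uncovered, but its neighbouring spaces are not.
graph = nx.Graph()
graph.add_edge("530", "532")
graph.add_edge("530", "534")
covers = [("Camera510-1", "covers", "532"), ("Camera510-3", "covers", "534")]
print(cameras_for_connected_spaces("530", graph, covers))  # ['Camera510-1', 'Camera510-3']
```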
In an example, an intruder has broken into space 634 and is captured in an image by camera 610-3. Embodiments of the present disclosure can log an event and/or can determine spaces and/or doors connected to space 634. Embodiments of the present disclosure can determine cameras covering the spaces and/or doors connected to space 634, and provide video images captured by those cameras in a manner analogous to that previously discussed in connection with the preceding example.
Further, embodiments of the present disclosure can take action with respect to the spaces and/or doors. For example, embodiments of the present disclosure can lock (e.g., automatically lock) door 636, door 638, and/or door 640 to prevent further action (e.g., destruction) by the intruder.
Embodiments of the present disclosure can also manipulate one or more of cameras 610-1, 610-2, 610-3, 610-4, and/or 610-5. Manipulation can include manipulation of orientation and/or one or more parameters of cameras 610-1, 610-2, 610-3, 610-4, and/or 610-5 (e.g., pan, tilt, zoom, etc.). For example, embodiments of the present disclosure can pan camera 610-1 negative 30 degrees in order to capture an image of the intruder using camera 610-1.
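The door and camera control interfaces are facility specific and are not prescribed by the present disclosure; by way of a non-limiting illustration, a response routine might be sketched as follows, assuming hypothetical controller objects exposing lock() and pan() methods:

```python
def respond_to_intrusion(intruded_space, connections, door_controllers, camera_controllers):
    """Lock doors adjacent to the intruded space and re-aim a nearby camera toward it."""
    for neighbour in connections.neighbors(intruded_space):
        door = connections.edges[intruded_space, neighbour].get("door")
        if door in door_controllers:
            door_controllers[door].lock()            # e.g., automatically lock the door
    if "Camera610-1" in camera_controllers:
        camera_controllers["Camera610-1"].pan(-30)   # e.g., pan negative 30 degrees
```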
As shown in the figure, a computing device 742 can include a memory 744.
Memory 744 can be volatile or nonvolatile memory. Memory 744 can also be removable (e.g., portable) memory, or non-removable (e.g., internal) memory. For example, memory 744 can be random access memory (RAM) (e.g., dynamic random access memory (DRAM) and/or phase change random access memory (PCRAM)), read-only memory (ROM) (e.g., electrically erasable programmable read-only memory (EEPROM) and/or compact-disc read-only memory (CD-ROM)), flash memory, a laser disc, a digital versatile disc (DVD) or other optical disk storage, and/or a magnetic medium such as magnetic cassettes, tapes, or disks, among other types of memory.
Further, although memory 744 is illustrated as being located in computing device 742, embodiments of the present disclosure are not so limited. For example, memory 744 can also be located internal to another computing resource (e.g., enabling computer readable instructions to be downloaded over the Internet or another wired or wireless connection).
As shown in the figure, computing device 742 can include a user interface 748 (e.g., including a display).
User interface 748 (e.g., the display of user interface 748) can provide (e.g., display and/or present) information to a user of computing device 742. For example, user interface 748 can provide displays 100, 202, 304, 408, 416, 420, 529, and/or 631 previously described in connection with the preceding figures.
Additionally, computing device 742 can receive information from the user of computing device 742 through an interaction with the user via user interface 748. For example, computing device 742 (e.g., the display of user interface 748) can receive input from the user via user interface 748. The user can enter the input into computing device 742 using, for instance, a mouse and/or keyboard associated with computing device 742, or by touching the display of user interface 748 in embodiments in which the display includes touch-screen capabilities (e.g., embodiments in which the display is a touch screen).
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that any arrangement calculated to achieve the same techniques can be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments of the disclosure.
It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
The scope of the various embodiments of the disclosure includes any other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.
In the foregoing Detailed Description, various features are grouped together in example embodiments illustrated in the figures for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the embodiments of the disclosure require more features than are expressly recited in each claim.
Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.