The embodiments described herein relate to security and surveillance, in particular, technologies related to video recognition threat detection.
A software platform for threat detection solutions is described. This software platform may use radar or other technologies to detect concealed weapons such as guns and knives. Existing systems simply use motion or other triggers to focus cameras in front of a user, and in some cases place a highlight box around the subject of interest.
Currently, many sensors, such as cameras, must be manually monitored by humans (i.e., security personnel), and with the growing number of cameras in facilities it is difficult to track them all. This may lead to information being missed.
There is a desire to incorporate advanced sensing technology and artificial intelligence in a threat detection system to better track and detect potential threats.
Embodiments described herein relate to a threat detection system that shows a user an incident as it develops in real time by leveraging artificial intelligence (AI) to more accurately focus the attention of the user on specific cameras or other sensors and highlight the areas of concern within those feeds, providing a much more efficient user interface to the operator. These annotated feeds and feed-focused triggering events can also be connected to third party systems. This timeline of events and evidence (a small annotated video clip surrounding a detection event when available) is archived and can be reviewed at a later date, containing an accurate timeline of the incident as it progressed.
In a preferred embodiment, a multi-sensor covert threat detection system is disclosed. This covert threat detection system utilizes software, artificial intelligence and integrated layers of diverse sensor technologies (e.g., cameras) to deter, detect and defend against active threats (e.g., detection of guns, knives or fights) before these threat events occur.
The threat detection system enables the system operator (user) to easily determine if the system is operational without requiring testing with actual triggering events. This system also provides more situational information to the operator in real time as the incident is developing, and shows them what they need to know, when they need to know it.
Within this system, a threat could move through a multi-sensor gateway that would not only focus the camera on that gate, but show the operator all of the detections in one place as they happen. The solution amalgamates all sensors and their detections into a single dashboard of focus for the user, whilst providing the ability for forensic review after the event, clearly showing when and where detections took place, with the recorded evidence.
The system features the ability to display to a user all relevant situational sensor information during a threat event, as it develops through a facility in real time. The system uses artificial intelligence (AI) to not only inform a user which cameras should be in focus, but may highlight where in the camera frame they should focus their attention. This more efficient approach will clearly show the user the system's increasing confidence in event detections, so information is less likely to be missed by the user, thus allowing the user to react in real time to an active threat. All of this UI dashboard and integrated AI event tracking combines to create a valuable timeline of an event that can be used for future forensic analysis and reporting.
In one implementation, the sensor feeds will occupy as much space as is available. For example, one sensor feed will occupy the entire space if no other sensor feeds have been brought into focus. Sensor feeds cycle through Area 2 in a first-in, first-out fashion based on the last detection for each sensor.
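The first-in, first-out feed rotation described above can be sketched as follows. This is an illustrative sketch only; the class and method names are assumptions, not part of the disclosure.

```python
from collections import OrderedDict

class FocusArea:
    """Hypothetical sketch of the focus area that cycles sensor feeds
    in a first-in, first-out fashion based on last detection."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        # Maps sensor id -> timestamp of its last detection,
        # ordered from oldest to newest.
        self.feeds = OrderedDict()

    def on_detection(self, sensor_id, timestamp):
        """Bring a sensor feed into focus; evict the stalest feed if full."""
        if sensor_id in self.feeds:
            self.feeds.move_to_end(sensor_id)  # refresh its position
        self.feeds[sensor_id] = timestamp
        if len(self.feeds) > self.capacity:
            self.feeds.popitem(last=False)  # drop feed with oldest detection

    def layout(self):
        """Each in-focus feed takes an equal share of the available space;
        a single feed occupies the entire space."""
        n = len(self.feeds)
        share = 1.0 / n if n else 1.0
        return {sid: share for sid in self.feeds}
```

A single detection yields one feed at full size; as further sensors trigger, the space is divided and the feed with the oldest detection is rotated out first.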
The person count stat shows that the system is working, as the number generally increases as people walk through frames. This count is not intended to provide specific threat information; instead, it shows that things are working. This number can be used as a secondary check for other systems, such as turnstile entry systems, crowding or social distancing indicators. On the left is a list of the sensors and their status. On the right is a quadrant of four sensors. In the neutral (non-threat detected) system state, they will rotate through sensor feeds randomly or in sequence, showing the last frame captured and the last time that sensor triggered an alert.
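The secondary-check use of the person count can be illustrated with a short sketch. The function name and tolerance value are assumptions for illustration; the disclosure does not specify how the comparison is made.

```python
def counts_consistent(camera_person_count, turnstile_count, tolerance=0.1):
    """Hypothetical cross-check of the camera-derived person count
    against another system's count (e.g., a turnstile entry system).
    Returns True when the two counts agree within a relative tolerance."""
    if turnstile_count == 0:
        return camera_person_count == 0
    deviation = abs(camera_person_count - turnstile_count) / turnstile_count
    return deviation <= tolerance
```

A sustained disagreement between the two counts would suggest that one of the systems is not working, which is the "show things are working" role the count plays here.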
The right side of the dashboard shows a list of the sensors and their status. The system will determine the status of a sensor based on the last time the system heard from it. This heartbeat signal allows the system to show when a sensor goes down. Clicking on a sensor will navigate the operator to the alerts view.
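The heartbeat-based status determination can be sketched as follows. The class name and the 30-second timeout are assumptions for illustration; the disclosure does not specify a timeout value.

```python
import time

class SensorStatusMonitor:
    """Minimal sketch of heartbeat-based sensor status: each sensor
    periodically reports in, and a sensor that has not been heard from
    within the timeout is shown as down."""

    def __init__(self, timeout_seconds=30):
        self.timeout = timeout_seconds
        self.last_heard = {}  # sensor id -> time of last heartbeat

    def heartbeat(self, sensor_id, now=None):
        """Record that the sensor was heard from."""
        self.last_heard[sensor_id] = time.time() if now is None else now

    def status(self, sensor_id, now=None):
        """Return 'online' if heard from recently, otherwise 'down'."""
        now = time.time() if now is None else now
        last = self.last_heard.get(sensor_id)
        if last is None or now - last > self.timeout:
            return "down"
        return "online"
```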
Simplicity is one of the goals of this screen, as well as enforcing good workflow. Ideally, there should not be many alerts here. The operator must select each alert before being given the ability to clear it, which forces the operator to evaluate the threat before dismissing it.
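The select-before-clear workflow can be sketched as a small state check: an alert can only be cleared after the operator has opened it for evaluation. Names here are illustrative assumptions.

```python
class AlertQueue:
    """Hypothetical sketch of the alert workflow: clearing an alert is
    refused until the operator has selected (evaluated) it."""

    def __init__(self):
        self.alerts = {}  # alert id -> {"reviewed": bool}

    def add(self, alert_id):
        self.alerts[alert_id] = {"reviewed": False}

    def select(self, alert_id):
        """Operator opens the alert to evaluate the threat."""
        self.alerts[alert_id]["reviewed"] = True

    def clear(self, alert_id):
        """Dismiss the alert; only allowed after it has been selected."""
        if not self.alerts[alert_id]["reviewed"]:
            raise PermissionError("alert must be evaluated before clearing")
        del self.alerts[alert_id]
```

Refusing the clear action until the alert has been selected is what enforces the evaluate-before-dismiss behavior described above.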
As seen in
In a further embodiment, disclosed herein is a multi-sensor threat detection system used for displaying real-time threat detection, the system comprising a processor to compute and process the data, a plurality of video cameras configured to capture image data, a sensor acquisition module, an artificial intelligence (AI) algorithm to provide instructions to focus the cameras on areas of concern and to identify an item as a possible threat, and a graphical user interface (GUI) to provide an update of real-time data feeds based on the processed feeds. The real-time data feed is an annotated feed consisting of a timeline of events and evidence, as well as a small annotated clip of the detection event.
The multi-sensor threat detection system is shown wherein a possible threat includes a gun, knife or a concealed weapon. The multi-sensor threat detection system further comprises a notification/alert module to provide alerts to security personnel or an operator. The multi-sensor threat detection system wherein the events and evidence can be archived and reviewed at a later date, providing an accurate timeline of the incident as the incident progresses.
The multi-sensor threat detection system wherein the graphical user interface (GUI) is further configured to display escalation of a threat with detection by separate sensors. The GUI further comprises a dashboard screen to consolidate all the real-time data feeds. The GUI displays a box around the potential threat item on the dashboard screen. The GUI alerts the user. Preferably, the alert comprises further displaying to the user an alert of “WEAPON DETECTED” in red, initiating an audible notification, or a combination of these.
In a further embodiment, a computer-implemented method for displaying real-time threats using a multi-sensor threat detection system is disclosed. The method comprises receiving image data from cameras (sensors) of the multi-sensor threat detection system, processing the data using an artificial intelligence algorithm, aggregating the data, displaying the data on a graphical user interface (GUI) as a newsfeed, updating the newsfeed with real-time updates, and providing an alert warning when a threat is identified.
According to the computer-implemented method, the threat includes identification of a weapon or a concealed weapon. The graphical user interface (GUI) of the computer-implemented method includes a dashboard screen to consolidate all the real-time data feeds. The GUI displays a box around the potential threat item on the dashboard screen. Furthermore, the GUI further alerts the user, preferably displaying, to the user, an alert of “WEAPON DETECTED” in red, initiating an audible notification, or a combination of these.
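The steps of the computer-implemented method above can be sketched end to end: receive frames, process them with the AI algorithm, aggregate the results into a newsfeed, and raise an alert on a weapon detection. The detector below is a stand-in stub, not the disclosed AI algorithm; in a real system it would be a trained model.

```python
def detect_threats(frame):
    """Stand-in stub for the AI algorithm; returns the frame's detections."""
    return frame.get("detections", [])

def process_frames(frames):
    """Sketch of the method: receive, process, aggregate, display, alert."""
    newsfeed = []
    alerts = []
    for frame in frames:                       # receive image data from sensors
        detections = detect_threats(frame)     # process with the AI algorithm
        entry = {"sensor": frame["sensor"], "detections": detections}
        newsfeed.append(entry)                 # aggregate into the newsfeed
        if any(d["label"] == "weapon" for d in detections):
            alerts.append("WEAPON DETECTED on %s" % frame["sensor"])
    return newsfeed, alerts
```

In the disclosed system the alert would also be rendered in red on the dashboard with a box around the detected item and, optionally, an audible notification.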
The functions described herein may be stored as one or more instructions on a processor-readable or computer-readable medium. The term “computer-readable medium” refers to any available medium that can be accessed by a computer or processor. By way of example, and not limitation, such a medium may comprise RAM, ROM, EEPROM, flash memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. It should be noted that a computer-readable medium may be tangible and non-transitory. As used herein, the term “code” may refer to software, instructions, code or data that is/are executable by a computing device or processor. A “module” can be considered as a processor executing computer-readable code.
A processor as described herein can be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller or microcontroller, combinations of the same, or the like. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, any of the signal processing algorithms described herein may be implemented in analog circuitry. In some embodiments, a processor can be a graphics processing unit (GPU). The parallel processing capabilities of GPUs can reduce the amount of time for training and using neural networks (and other machine learning models) compared to central processing units (CPUs). In some embodiments, a processor can be an ASIC including dedicated machine learning circuitry custom-built for one or both of model training and model inference.
The disclosed or illustrated tasks can be distributed across multiple processors or computing devices of a computer system, including computing devices that are geographically distributed.
The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is required for proper operation of the method that is being described, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
As used herein, the term “plurality” denotes two or more. For example, a plurality of components indicates two or more components. The term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
The phrase “based on” does not mean “based only on,” unless expressly specified otherwise. In other words, the phrase “based on” describes both “based only on” and “based at least on.”
While the foregoing written description of the system enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The system should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the system. Thus, the present disclosure is not intended to be limited to the implementations shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The application claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 63/029606, entitled “SYSTEM AND METHOD FOR SITUATIONAL AWARENESS ASSIST VIEW”, filed on May 25, 2020, the disclosure of which is incorporated herein by reference in its entirety.
| Number | Date | Country |
| --- | --- | --- |
| 63029606 | May 2020 | US |