The embodiments described herein relate to security and surveillance, in particular, technologies related to video recognition threat detection.
After one or more perpetrators commit an offense and flee, how can security find the person(s) of interest? As an example, if a perpetrator brandishes a weapon or assaults another person and then disappears into a crowd, how can a security officer find them?
The current solution is for security or the security team to comb an area on foot and/or manually view various closed-circuit television (CCTV) cameras in order to locate the perpetrator. This is a time-consuming and possibly ineffective method when time is of the essence. In addition, human identification of a person of interest across changes in lighting, viewpoint, and appearance, such as removal of a hat, mask, or coat, is error-prone.
A system and method are disclosed for using all CCTV cameras simultaneously to find any person of interest in real time and alert security to their location. The person of interest may be selected manually by the user or automatically by computer software and algorithms.
In a preferred embodiment, a multi-sensor covert threat detection system is disclosed. This covert threat detection system utilizes software, artificial intelligence, and integrated layers of diverse sensor technologies (e.g., cameras) to deter, detect, and defend against active threats to health and human safety (e.g., detection of guns, knives, or fights, or potential health and safety non-compliance) before these events occur.
A software platform for threat detection solutions is envisioned. This software platform may use cameras and/or closed-circuit televisions (CCTVs), or other technologies, to detect perpetrators and concealed weapons such as guns and knives and to alert security officers to these perpetrators.
In a preferred embodiment, security officers or threat detection system users (i.e., the security team) confirm that they want to track a perpetrator or other person in a video feed scene. The user selects the person(s) of interest, whereby the system is triggered to begin tracking the person(s) of interest. The system then presents the feeds for the location in which the person of interest is located, in order to allow the security team to track and apprehend the person(s) of interest.
According to the disclosure, an identification box indicates to the user that a suspect person (i.e., a perpetrator) has been identified and that the system is now able to track them. This satisfies a use case of tracking a person of interest through a facility that is not necessarily coupled with an associated alert, the alert otherwise being the initial entry point into the tracking feature. In both cases, the system receives an input to start tracking: either a generated alert or a user selection of a person of interest.
According to further disclosure, re-identification can be extended across multiple cameras in a fashion similar to that shown for assisted tracking. This feature can be extended to pull up video feeds as a weapon appears in multiple cameras and to re-identify people or weapons across multiple camera feeds.
A key feature of this disclosure is the ability for the security team to leverage all cameras at one time automatically. The location of person(s) of interest can be tracked across a location without violating the privacy of the person(s) of interest.
This is traditionally known as person tracking or person re-identification. After persons are found in a frame, a signature representing their clothes, body type, skin tone, etc., is created. When a person becomes a perpetrator, their signature is saved. The signature can be generated through known mechanisms such as perceptual hashing, and through more advanced algorithms that provide unique identification of individual attributes by hashing subsections of the frame representing attribute markers, for example clothing color. To further enhance the ability to track persons moving through space, movement probability algorithms can also be employed, noting that a person in a frame is probably close to the place where that person was last identified. As other people are seen in other cameras, their signatures are compared. If a signature is found that is close to the perpetrator's, then security is notified.
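As an illustrative, non-limiting sketch of this signature-and-compare approach, the following Python fragment computes a perceptual (average) hash over subsections of a detected person crop and compares signatures by Hamming distance. The region splits, hash size, and match threshold are assumptions chosen for illustration, not a prescribed implementation.

```python
# A minimal sketch of appearance-signature matching for person re-identification,
# assuming person crops arrive as RGB numpy arrays from an upstream person detector.
import numpy as np

HASH_SIZE = 8          # 8x8 average hash -> 64-bit signature per region (illustrative)
MATCH_THRESHOLD = 12   # max Hamming distance per region to call a match; tune per site

def average_hash(region: np.ndarray) -> np.ndarray:
    """Perceptual (average) hash of an image region: 64 booleans."""
    gray = region.mean(axis=2)                       # collapse RGB to luminance
    h, w = gray.shape
    ys = np.arange(HASH_SIZE + 1) * h // HASH_SIZE   # crude block-average downscale
    xs = np.arange(HASH_SIZE + 1) * w // HASH_SIZE
    small = np.array([[gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(HASH_SIZE)] for i in range(HASH_SIZE)])
    return (small > small.mean()).flatten()

def person_signature(crop: np.ndarray) -> list[np.ndarray]:
    """Hash subsections of the crop carrying attribute markers
    (head region, upper body ~ clothing colour, lower body)."""
    h = crop.shape[0]
    regions = [crop[: h // 4], crop[h // 4 : 3 * h // 4], crop[3 * h // 4 :]]
    return [average_hash(r) for r in regions]

def signatures_match(sig_a: list[np.ndarray], sig_b: list[np.ndarray]) -> bool:
    """Compare per-region Hamming distances against the threshold."""
    return all(int(np.sum(a != b)) <= MATCH_THRESHOLD for a, b in zip(sig_a, sig_b))

# Usage sketch: save the flagged person's signature, then compare new detections:
#   flagged = person_signature(perpetrator_crop)
#   if signatures_match(flagged, person_signature(candidate_crop)): notify security.
```

In practice, the comparison could additionally be weighted by a movement probability, favoring candidate matches seen on cameras near the person's last known location, as noted above.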
According to embodiments of this disclosure, a system is provided for using CCTV cameras simultaneously to find a person of interest in real time, comprising a camera detection system to capture videos, a computer processor to process the video images, a software module to analyze frames of the video images, a means to identify a person of interest, and a notification module to send a notification. Note that in practice, the video image may also be an optical video image, infrared image, LIDAR image, doppler image, an image based on RF scanning, a magnetic signature image, a thermal image, or a composite image formed from combinations of these imaging technologies.
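Purely for illustration, the video image described above may be represented in software as a composite frame that bundles whichever sensor modalities are available; the structure and field names below are assumptions, not a required interface.

```python
# A minimal sketch of a multi-modality frame container, assuming each sensor
# delivers its channel as a numpy array; field names are illustrative only.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class CompositeFrame:
    camera_id: str
    timestamp: float
    optical: Optional[np.ndarray] = None    # standard CCTV RGB frame
    infrared: Optional[np.ndarray] = None   # IR / thermal channel
    lidar: Optional[np.ndarray] = None      # depth or point-cloud projection
    doppler: Optional[np.ndarray] = None    # radar / RF-scan return

    def available_modalities(self) -> list[str]:
        """List which sensor channels are present for downstream analysis."""
        return [name for name in ("optical", "infrared", "lidar", "doppler")
                if getattr(self, name) is not None]
```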
According to the disclosure, the notification module sends the notification to a security team to request confirmation of tracking the person of interest in a video feed scene. Furthermore, upon confirmation by the security team, the system is enabled to continuously track the person of interest in the video feed scene.
The camera detection system further comprises CCTV cameras, and the person of interest is identified manually by a user or automatically by computer software or software algorithms. The software algorithm is executed only if there is a notification event for which the person of interest alert is triggered. The notification event is selected from a list consisting of detection of a weapon, pulling out a weapon, high velocity movements associated with fighting or escaping, abandonment of parcels, participation in unusual crowd activity such as threatening or fighting, throwing objects, proximity to sensitive areas such as restricted access doors, entering restricted areas, and similar. The notification module supports sending notifications by email, text message (SMS), instant message, voice call, a security center user interface, and a mobile application.
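A minimal sketch of this event-gated notification flow follows; the event names, channel identifiers, and stubbed delivery function are illustrative assumptions rather than a fixed interface.

```python
# Illustrative sketch: the identification algorithm runs only when an
# alert-triggering event has occurred, and alerts fan out to all channels.
from enum import Enum, auto
from typing import Optional

class NotificationEvent(Enum):
    WEAPON_DETECTED = auto()
    WEAPON_DRAWN = auto()
    HIGH_VELOCITY_MOTION = auto()       # fighting or escaping
    ABANDONED_PARCEL = auto()
    UNUSUAL_CROWD_ACTIVITY = auto()     # threatening or fighting
    THROWN_OBJECT = auto()
    RESTRICTED_AREA_PROXIMITY = auto()
    RESTRICTED_AREA_ENTRY = auto()

CHANNELS = ("email", "sms", "instant_message", "voice_call",
            "security_center_ui", "mobile_app")

def notify_security(event: NotificationEvent, camera_id: str) -> None:
    """Fan the alert out to every configured channel (delivery is stubbed here)."""
    for channel in CHANNELS:
        print(f"[{channel}] {event.name} on camera {camera_id}")

def on_event(event: Optional[NotificationEvent], camera_id: str) -> bool:
    """Run the person-of-interest identification only for alert-worthy events."""
    if event is None:
        return False                    # no event: identification algorithm not executed
    notify_security(event, camera_id)
    return True                         # caller may now begin tracking
```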
According to further embodiments, a computer-implemented method is provided for using CCTV cameras simultaneously to find a person of interest in real time, the method comprising the steps of receiving a video dataset from a camera detection system, analyzing image frames of the video dataset by a computer processor, identifying a person of interest in the video dataset image frames, sending a notification to a security team, receiving a confirmation from the security team to track the person of interest in video feed scenes, and enabling the system to continuously track the person of interest in the video feed scenes. According to the method, the step of identifying a person of interest is conducted manually by a user or automatically through supplemental computer software.
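These method steps may be sketched as a simple processing loop, shown below in Python. The detector, person-of-interest test, notifier, confirmation, and tracking routines are passed in as callables because the disclosure leaves their implementations open; all names here are illustrative assumptions.

```python
# A minimal sketch of the claimed method steps as a processing loop.
from typing import Any, Callable, Iterable, Tuple

def run_tracking_method(
    video_feeds: Iterable[Tuple[str, Any]],           # (camera_id, frame) pairs
    detect_persons: Callable[[Any], list],            # analyze image frames
    is_person_of_interest: Callable[[Any], bool],     # manual or automatic identification
    notify_security: Callable[[str, Any], None],      # send notification
    await_confirmation: Callable[[], bool],           # security team confirms tracking
    track: Callable[[Any], None],                     # continuous multi-camera tracking
) -> None:
    for camera_id, frame in video_feeds:              # step 1: receive the video dataset
        for person in detect_persons(frame):          # step 2: analyze image frames
            if is_person_of_interest(person):         # step 3: identify person of interest
                notify_security(camera_id, person)    # step 4: notify the security team
                if await_confirmation():              # step 5: receive confirmation
                    track(person)                     # step 6: continuously track
```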
Implementations disclosed herein provide systems, methods and apparatus for generating or augmenting training data sets for machine learning training. The functions described herein may be stored as one or more instructions on a processor-readable or computer-readable medium. The term “computer-readable medium” refers to any available medium that can be accessed by a computer or processor. By way of example, and not limitation, such a medium may comprise RAM, ROM, EEPROM, flash memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. It should be noted that a computer-readable medium may be tangible and non-transitory. As used herein, the term “code” may refer to software, instructions, code or data that is/are executable by a computing device or processor. A “module” can be considered as a processor executing computer-readable code.
A processor as described herein can be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, combinations of the same, or the like. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, any of the signal processing algorithms described herein may be implemented in analog circuitry. In some embodiments, a processor can be a graphics processing unit (GPU). The parallel processing capabilities of GPUs can reduce the amount of time for training and using neural networks (and other machine learning models) compared to central processing units (CPUs). In some embodiments, a processor can be an ASIC including dedicated machine learning circuitry custom-built for one or both of model training and model inference.
The disclosed or illustrated tasks can be distributed across multiple processors or computing devices of a computer system, including computing devices that are geographically distributed. The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is required for proper operation of the method that is being described, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
As used herein, the term “plurality” denotes two or more. For example, a plurality of components indicates two or more components. The term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
The phrase “based on” does not mean “based only on,” unless expressly specified otherwise. In other words, the phrase “based on” describes both “based only on” and “based at least on.” While the foregoing written description of the system enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The system should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the system. Thus, the present disclosure is not intended to be limited to the implementations shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The application claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 63/124,108, entitled “SYSTEM AND METHOD FOR REAL-TIME MULTI-PERSON THREAT TRACKING AND RE-IDENTIFICATION”, filed on Dec. 11, 2020, the disclosure of which is incorporated herein by reference in its entirety.