Embodiments of systems and methods for video monitoring are provided herein. In a first embodiment, a method for providing video monitoring includes three steps. The first step is identifying a target by a computing device, where the target is displayed from a video through a display of the computing device. The second step is receiving a selection of a trigger via a user input to the computing device. The third step is providing a response of the computing device based on recognition of the identified target and the selected trigger from the video.
In a second embodiment, a computer readable storage medium is described. The computer readable storage medium includes instructions for execution by a processor, which cause the processor to provide a response. The processor is coupled to the computer readable storage medium and executes the instructions stored thereon. The processor executes instructions to identify a target by a computing device, where the target is displayed from a video through a display of the computing device. The processor also executes instructions to receive a selection of a trigger via a user input to the computing device. Further, the processor executes instructions to provide the response of the computing device based on recognition of the identified target and the selected trigger from the video.
According to a third embodiment, a system for recognizing targets from a video is provided. The system includes a target identification module, an interface module and a response module. The target identification module is configured for identifying a target from the video supplied to a computing device. The interface module is in communication with the target identification module. The interface module is configured for receiving a selection of a trigger based on a user input to the computing device. The response module is in communication with the target identification module and the interface module. The response module is configured for providing a response based on recognition of the identified target and the selected trigger from the video.
According to a fourth embodiment, a system for providing video monitoring is supplied. The system includes a processor and a computer readable storage medium. The computer readable storage medium includes instructions for execution by the processor, which cause the processor to provide a response. The processor is coupled to the computer readable storage medium and executes the instructions on the computer readable storage medium to identify a target, receive a selection of a trigger, and provide a response based on recognition of the identified target and the selected trigger from a video.
Most video monitoring systems and software programs are difficult to install, utilize, and maintain. Such systems and programs typically require a custom (and sometimes expensive) installation by an expert, and they require constant maintenance and fine-tuning because they are not equipped with intelligent computing that can filter particular aspects or images from a video. Furthermore, existing systems and programs are neither user-extensible nor user-friendly. That is, existing systems and programs cannot be configured to apply a user's own rules or commands to a video using easy-to-learn techniques.
The technology presented herein provides embodiments of systems and methods for conducting video monitoring in a user-friendly, user-extensible manner. Systems and methods for providing user-configurable rules in order to search video metadata, for both real-time and archived searches, are provided herein. The technology may be implemented through a variety of means, such as object recognition, artificial intelligence, hierarchical temporal memory (HTM), any technology that recognizes patterns found in objects, or any technology that can establish categories of objects. However, one skilled in the art will recognize that this list of ways to implement the technology is exemplary, and the technology is not limited to a single type of implementation.
The technology presented herein also allows new objects to be taught and subsequently recognized. By allowing new objects to be recognized, the systems and methods described herein are extensible, flexible, more robust, and not easily fooled by variations. Also, such systems and methods are more tolerant of bad lighting and poor focus because the technology as implemented operates at a high level of object recognition.
Further, one skilled in the art will recognize that although some embodiments are provided herein for video monitoring, any type of monitoring from any data source may be utilized with this technology. For instance, instead of a video source, an external data source (such as a web-based data source in the form of a news feed) may be provided. The technology is flexible enough to utilize any data source, and is not restricted to video sources or video streams.
The technology herein may also utilize, manipulate, or display metadata. In some embodiments, the metadata may be associated with a video. For instance, metadata in a video may be useful to define and/or recognize triggered events according to rules that are established by a user. Metadata may also be useful to provide only those videos or video clips that conform to the parameters set by a user through rules, so that videos or video clips that include triggered events identified by the user may be provided to the user. Thus, the user is not shown hundreds or thousands of videos; rather, the user is provided with a much smaller set of videos that meets the user's requirements as set forth in one or more rules.
Also, metadata in video may be searched using user-configurable rules for both real-time and archive searches. As will be described in greater detail herein, metadata in video may be associated with camera, target and/or trigger attributes of a target that is logged for processing, analyzing, reporting and/or data mining methodologies. Metadata may be extracted, filtered, presented, and used as keywords for searches. Metadata in video may also be accessible to external applications. Further discussion regarding the use of metadata in video will be provided herein.
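For illustration only, the following Python sketch shows one way a keyword search over logged video metadata could work, for real-time and archived records alike. The record fields (camera, target, trigger, start, end) and the function name search_metadata are assumptions made for this example, not elements defined by the embodiments herein.

    # A minimal sketch of keyword-based search over logged video metadata.
    from datetime import datetime

    metadata_log = [
        {"camera": "side camera", "target": "person", "trigger": "inside the garden",
         "start": datetime(2009, 2, 9, 10, 15), "end": datetime(2009, 2, 9, 10, 16)},
        {"camera": "front camera", "target": "pet", "trigger": "crossing a boundary",
         "start": datetime(2009, 2, 9, 11, 0), "end": datetime(2009, 2, 9, 11, 2)},
    ]

    def search_metadata(records, keywords):
        """Return only those records whose attributes match every keyword."""
        hits = []
        for record in records:
            text = " ".join(str(value) for value in record.values()).lower()
            if all(keyword.lower() in text for keyword in keywords):
                hits.append(record)
        return hits

    # e.g., retrieve only the clips of people in the garden
    print(search_metadata(metadata_log, ["person", "garden"]))

The same filtering could run over an archive or over records arriving in real time, which is why the sketch operates on plain metadata records rather than on the video itself.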
The computing device 120 may be a computer, a laptop computer, a desktop computer, a mobile communications device, a personal digital assistant, a video player, an entertainment device, a game console, a GPS device, a networked sensor, a card key reader, a credit card reader, a digital device, a digital computing device, or any combination thereof. The computing device 120 preferably includes a display (not shown). One skilled in the art will recognize that a display may include one or more browsers, one or more user interfaces, and any combination thereof. The display of the computing device 120 may be configured to show one or more videos. A video can be a video feed, a video scene, a captured video, a video clip, a video recording, or any combination thereof.
The network 110 may also be configured to couple to one or more video sources 130. The video may be provided by one or more video sources 130, such as a camera, a fixed security camera, a video camera, a video recording device, a mobile video recorder, a webcam, an IP camera, pre-recorded data (e.g., pre-recorded data on a DVD or a CD), previously stored data (including, but not limited to, previously stored data on a database or server), archived data (including, but not limited to, video archives or historical data), and any combination thereof. The computing device 120 may be a mobile communications device that is configured to receive and transmit signals via one or more optional towers 140.
Still referring to FIG. 1, the network 110 may also be configured to couple to a server 150.
Notably, one skilled in the art will recognize that all the figures herein are exemplary. For all the figures, the layout, arrangement, and number of elements depicted are exemplary only. Any number of elements can be used to implement the technology of the embodiments herein. For instance, in FIG. 1, a single computing device 120 and a single video source 130 are depicted, but any number of computing devices and video sources may be coupled to the network 110.
The system 100 of FIG. 1 may be deployed in a variety of exemplary configurations, several of which are described below.
In one exemplary embodiment, video may be streamed continuously (24 hours a day, 7 days a week) to the server 150. In other words, an IP camera may provide live streaming, which may be uploaded to the server 150. The server 150 may provide the functionalities of search, setup, view, recognition, remote storage, and remote viewing. Then, the server 150 may stream to a client (such as a web client, a mobile client, or a desktop client).
In another exemplary embodiment, video from an IP camera and/or a USB camera may be cached locally to a local PC. The local PC has the capabilities of live streaming and optional local storage. All the video may then be uploaded to a server (such as the server 150). The server 150 may provide the functionalities of search, setup, view, recognition, remote storage, and remote viewing. The server may then stream the video to a client (such as a web client, a mobile client, or a desktop client).
In yet another exemplary embodiment, analytics may be performed locally by the local PC and then triggered events may be uploaded. Analytics refer to recognition and non-recognition components that may be used to identify an object or a motion. An IP camera and/or a USB camera may provide video to a local personal computer. The local personal computer may provide the functionalities of recognition, local storage, setup, search, view and live streaming. The video may then be streamed to a server (such as the server 150). The server has the functionalities of remote storage and remote viewing. The server may then stream triggered events to a client (such as a web client, a mobile client, or a desktop client).
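For illustration only, the three exemplary deployments above may be summarized as declarative settings. The following Python sketch, including its keys and values, is an assumption of how such configurations might be expressed, not a structure defined by the embodiments herein.

    # Illustrative settings for the three exemplary deployments described above.
    DEPLOYMENTS = {
        "continuous_streaming": {      # camera streams 24/7 to the server
            "analytics_at": "server",
            "upload": "continuous stream",
            "server_roles": ["search", "setup", "view", "recognition",
                             "remote storage", "remote viewing"],
        },
        "local_cache": {               # local PC caches video, then uploads all of it
            "analytics_at": "server",
            "upload": "all video",
            "local_pc_roles": ["live stream", "local storage"],
        },
        "local_analytics": {           # local PC recognizes, uploads triggered events
            "analytics_at": "local PC",
            "upload": "triggered events only",
            "local_pc_roles": ["recognition", "local storage", "setup",
                               "search", "view", "live stream"],
        },
    }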
Turning to FIG. 2, a flow chart of an exemplary method 200 for providing video monitoring is shown. The method 200 may include the step 202 of identifying a target, the step 204 of receiving a selection of a trigger, and the step 206 of providing a response based on recognition of the identified target and the selected trigger.
Any aspect of the method 200 may be user-extensible. For example, the target, the trigger, the response, and any combination thereof may be user-extensible. The user may therefore define any aspect of the method 200 to suit his or her requirements for video monitoring. The feature of user-extensibility allows this technology to be more robust and more flexible than existing technology. As will be discussed later herein, the technology described herein can learn to recognize targets. In other words, end users may train the technology to recognize objects that were previously unrecognized or uncategorized by previously known technology.
It should be noted that the method 200 may be viewed as an implemented “if . . . then statement.” For instance, steps 202 and 204 can be viewed as the “if” portion of the statement. In some embodiments, steps 202 and 204 combined may be known as a rule. Rules may be user-extensible, and any portion of the rules may be user-extensible. More details as to the user-extensibility of rules will be discussed later herein. Likewise, step 206 can be viewed as the “then” portion. Step 206 may also be user-extensible, which will also be described herein. More importantly, users may combine targets, triggers and responses in various combinations to achieve customized results.
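For illustration only, the following Python sketch expresses a rule as such an “if . . . then” statement. The Rule class, the detection dictionary, and the representation of triggers and responses as callables are assumptions made for this example, not a definitive implementation of the embodiments herein.

    # A rule: the target and triggers form the "if" part, the responses the "then" part.
    class Rule:
        def __init__(self, target, triggers, responses):
            self.target = target        # e.g., "person" (step 202)
            self.triggers = triggers    # predicates over a detection (step 204)
            self.responses = responses  # actions to perform (step 206)

        def apply(self, detection):
            # "if" the detection matches the target and satisfies every trigger ...
            if detection["target"] == self.target and all(
                    trigger(detection) for trigger in self.triggers):
                # ... "then" provide each response
                for respond in self.responses:
                    respond(detection)

Users combining targets, triggers, and responses in various combinations would then correspond to constructing such rule objects with different arguments.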
Still referring to FIG. 2, at step 202, a target is identified by a computing device, where the target is displayed from a video through a display of the computing device. The target may be any object shown in the video, such as a person, a pet, a vehicle, or a motion sequence.
Also, at step 202, identifying the target from a video may include receiving a selection of a predefined object. For instance, preprogrammed icons depicting certain objects (such as a person, a pet or a vehicle) that have already been learned and/or otherwise identified by the software program may be shown to the user through a display of the computing device 120. Thus, the user may select a predefined object (such as a person, a pet or a vehicle) by selecting the icon that best matches the target. Once a user selects an icon of the target, the user can drag and drop the icon onto another portion of the display of the computing device, such that the icon (sometimes referred to as a block) may be rendered on the display. Thus, the icon becomes part of a rule (such as the rule 405 shown in FIG. 4).
The technology allows for user-extensibility for defining targets. For instance, a user may “teach” the technology how to recognize new objects by assigning information (such as labels or tags) to clips of video that include the new objects. Thus, a software program may “learn” the differences between categories of pets, such as cats and dogs, or even categories of persons, such as adults, infants, men, and women. Alternatively, at step 202, identifying the target from a video may include recognizing an object based on a pattern. For instance, facial patterns (frowns, smiles, grimaces, smirks, and the like) of a person or a pet may be recognized.
Through such recognition based on a pattern, a category may be established. For instance, a category of various human smiles may be established through the learning process of the software. Likewise, a category of various human frowns may be established by the software. Further, a behavior of a target may be recognized. Thus, the software may establish any type of behavior of a target, such as the behavior of a target when the target is resting or fidgeting. The software may be trained to recognize new or previously unknown objects. The software may be programmed to recognize new actions, new behaviors, new states, and/or any changes in actions, behaviors, or states. The software may also be programmed to recognize metadata from video and provide the metadata to the user through the display of a computing device 120.
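For illustration only, the following Python sketch suggests how user-assigned labels on example clips could extend a recognizer. The TrainableRecognizer class and the use of a nearest-neighbor classifier are assumptions made for this example; the embodiments herein do not prescribe a particular learning algorithm, and a real system would derive the feature vectors from the clips via its pattern-recognition component rather than supplying them by hand.

    # A user-taught recognizer: labels assigned to example clips extend it.
    from sklearn.neighbors import KNeighborsClassifier

    class TrainableRecognizer:
        def __init__(self):
            self.features, self.labels = [], []

        def teach(self, clip_features, label):
            # a real system would derive clip_features from the video clip
            # via its pattern-recognition component (HTM, object recognition, etc.)
            self.features.append(clip_features)
            self.labels.append(label)

        def recognize(self, clip_features):
            model = KNeighborsClassifier(n_neighbors=min(3, len(self.labels)))
            model.fit(self.features, self.labels)
            return model.predict([clip_features])[0]

    recognizer = TrainableRecognizer()
    recognizer.teach([0.9, 0.1], "cat")   # user tags example clips
    recognizer.teach([0.2, 0.8], "dog")
    print(recognizer.recognize([0.8, 0.2]))  # -> "cat"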
In the case where the target is a motion sequence, the motion sequence may be a series of actions that are being targeted for identification. One example of a motion sequence is the sequence of lifting a rock and tossing the rock through a window. Such a motion sequence may be preprogrammed as a target. However, as described earlier, targets can be user-extensible. Thus, the technology allows users to extend the set of targets to include targets that were not previously recognized by the program. For instance, in some embodiments, targets can include previously unrecognized motion sequences, such as the motion sequence of kicking a door down. Also, targets may include visual targets, audio targets, and combined audio-visual targets. Thus, the software program may be taught to recognize a baby's face versus an adult female's face. The program may likewise be taught to recognize a baby's voice versus an adult female's voice.
At step 204, receiving the selection of the trigger may include receiving a user input of a predefined trigger icon provided by the computing device. The trigger comprises an attribute of the target relating to at least one of a location, a direction, a clock time, a duration, an event, and any combination thereof. A trigger usually is not a visible object, and therefore a trigger is not a target. Triggers may be related to any targets that are within a location or region (such as “inside a garden” or “anywhere” within the scope of the area that is the subject matter of the video). The trigger may be related to any targets that are moving within a certain direction (such as “coming in through a door” or “crossing a boundary”). The trigger may be related to targets that are visible for a given time period (such as “visible for more than 5 seconds” or “visible for more than 5 seconds but less than 10 seconds”). The trigger may be related to targets that are visible at a given clock time (such as “visible at 2:00 pm on Thursdays”). The trigger may be related to targets that coincide with events. An event is an instance when a target is detected (such as “when a baseball flies over the fence and enters the selected region”).
As mentioned previously, step 204 may be user-extensible insofar as the user may define one or more triggers that are to be part of the rule. For instance, the user can select predefined trigger icons, such as icons that say “inside a garden” and “visible > 5 seconds.” With such a selection, the attributes of the identified targets include those targets inside of a garden (as depicted in a video) that are also visible for more than 5 seconds. Also, the user is not limited to predefined trigger icons. The user may define his or her own trigger icons by teaching the software new attributes based on object attribute recognition. In other words, if the software program does not have a predefined trigger icon (such as “having the color red”), the user may teach the software program to learn what constitutes the color red as depicted in one or more videos, and then can define the trigger “having the color red” for later usage in rules.
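For illustration only, the trigger types described above may be modeled as predicates over the attributes of a detection. In the following Python sketch, the detection fields (region, seconds_visible, direction, time) and the factory function names are assumptions made for this example, not elements defined by the embodiments herein.

    # Factories that build trigger predicates from target attributes.
    from datetime import time

    def inside(region):
        return lambda d: d["region"] == region             # location trigger

    def moving(direction):
        return lambda d: d["direction"] == direction       # direction trigger

    def visible_longer_than(seconds):
        return lambda d: d["seconds_visible"] > seconds    # duration trigger

    def at_clock_time(start, end):
        return lambda d: start <= d["time"] <= end         # clock-time trigger

    # "inside a garden" AND "visible > 5 seconds"
    triggers = [inside("garden"), visible_longer_than(5)]
    detection = {"region": "garden", "seconds_visible": 7,
                 "direction": "left_to_right", "time": time(14, 0)}
    print(all(trigger(detection) for trigger in triggers))  # True

A user-defined trigger such as “having the color red” would simply be one more predicate added to this set once the software has been taught the underlying attribute.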
At step 206, the response may include a recording of the video, a notification, a generation of a report, an alert, a storing of the video on a database associated with the computing device, and any combination thereof. As stated previously, the response may constitute the “then” portion of an “if . . . then statement” such that the response is provided once the “if” condition is satisfied by the rule provided by the user. In other words, if a target has been identified and a trigger selection has been received, then a response based on the recognition of the identified target and the selected trigger may be provided.
A response may include recording one or more videos. The recording may be done by any video recording device, including, but not limited to, video camera recorders, media recorders, and security cameras. A response may include a notification, such as a text message to a cell phone, a multimedia message to a cell phone, a generation of an electronic mail message to a user's email account, or an automated phone call notification.
Another type of response may include a generation of a report. A report may be a summary of metadata that is presented to a user for notification or analysis. A report may be printed and/or delivered, such as a security report to authorities, a printed report of activity, and the like. An alert may be a response, which may include a pop-up alert to the user on his or her desktop computer that suspicious activity is occurring in the area that is the subject of a video. An example of such a pop-up alert is provided in U.S. patent application Ser. No. ______ filed on Feb. 9, 2009, titled “Systems and Methods for Video Analysis,” which is hereby incorporated by reference. Further, a response may be the storing of the video onto a database or other storage means associated with the computing device. A response may be a command initiated by the computing device 120.
As with all aspects of the method 200, the response is user-extensible. Thus, the user may customize a response or otherwise define a response that is not predefined by the software program. For instance, the user may define a response, such as “turn on my house lights,” and associate the system 100 with one or more lighting features within the user's house. Once the user has defined the response, the user may then select a new response icon and designate the icon as a response that reads: “turn on my house lights.” The response icon that reads “turn on my house lights” can then be selected such that it is linked or connected to a rule (such as the rule 405 of FIG. 4).
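For illustration only, the following Python sketch models user-extensible responses as a registry to which new actions may be added at run time. The registry, the response names, and the stand-in print statements (a real deployment would invoke recording hardware, a notification gateway, or a home-automation interface) are assumptions made for this example.

    # Predefined responses live in a registry keyed by name.
    RESPONSES = {
        "record video": lambda d: print("recording clip of", d["target"]),
        "send email":   lambda d: print("emailing alert about", d["target"]),
    }

    def register_response(name, action):
        """Let the user add a response that the program did not predefine."""
        RESPONSES[name] = action

    # the user-defined response from the example above
    register_response("turn on my house lights",
                      lambda d: print("switching house lights on"))  # placeholder

    RESPONSES["turn on my house lights"]({"target": "person"})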
The method 200 may include additional steps that are not shown in FIG. 2.
According to one exemplary embodiment of a system 300 for recognizing targets from a video (FIG. 3), the target identification module 310 is configured for identifying a target from the video supplied to a computing device 120 (FIG. 1). The interface module 320, in communication with the target identification module 310, is configured for receiving a selection of a trigger based on a user input to the computing device 120. The response module 330, in communication with the target identification module 310 and the interface module 320, is configured for providing a response based on recognition of the identified target and the selected trigger from the video.
The system 300 may comprise a processor (not shown) and a computer readable storage medium (not shown). The processor and/or the computer readable storage medium may act as one or more of the three modules (i.e., the target identification module 310, the interface module 320, and the response module 330) of the system 300. It will be appreciated by one of ordinary skill in the art that examples of computer readable storage media may include discs, memory cards, and/or servers. Instructions may be retrieved and executed by the processor. Some examples of instructions include software, program code, and firmware. Instructions are generally operational when executed by the processor to direct the processor to operate in accord with embodiments of the invention. Although various modules may be configured to perform some or all of the various steps described herein, fewer or more modules may be provided and still fall within the scope of various embodiments.
Turning to FIG. 4, an exemplary screenshot of a user interface for constructing a rule 405 is shown.
Still referring to FIG. 4, the user may begin constructing the rule 405 by selecting one or more video sources 440 to which the rule 405 will apply.
Once a video source 440 is selected and displayed as part of the rule 405 (such as the selected side camera video source icon 445), the user can define the target that is to be identified by a computing device. Preferably, the user may select the “Look for” icon 450 on a left portion of the display of the computing device. Then, a selection of preprogrammed targets is provided to the user. The user may select one target (such as the “Look for: People” icon 455 shown in the exemplary rule 405 of FIG. 4).
The user may select one or more triggers. The user may select a trigger via a user input to the computing device 120. A plurality of trigger icons 460 and 465 may be provided to the user for selection. Trigger icons depicted in FIG. 4 include a “Where” icon 460 and a “When” icon 465. If the “Where” icon 460 is selected, then a “Look Where” pane 430 may be provided on the right side of the display, and a bounding box may be displayed over the video.
The bounding box may track an identified target. Preferably, the bounding box tracks a target that has been identified as a result of the application of a rule. The bounding box may resize based on the dimensions of the identified target. The bounding box may move such that it tracks the identified target as the identified target moves in a video. For instance, a clip of a video may be played back, and during playback, the bounding box may surround and/or resize to the dimensions of the identified target. If the identified target moves or otherwise makes an action that causes its dimensions to change, the bounding box may resize such that it surrounds the identified target while the identified target is shown in the video, regardless of the changing dimensions. FIG. 7 of the U.S. patent application Ser. No. ______ filed on Feb. 9, 2009, titled “Systems and Methods for Video Analysis” shows an exemplary bounding box 775. One skilled in the art will appreciate that one or more bounding boxes may be shown to the user to assist in tracking one or more identified targets while a video is played.
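For illustration only, the following Python sketch uses the OpenCV library to draw a bounding box that follows and resizes with an identified target during playback. The detect_target function is a hypothetical stand-in for the recognition component, and the file name clip.avi is an assumption; the embodiments herein do not prescribe OpenCV or any other particular library.

    # Draw a bounding box around the identified target in each frame of a clip.
    import cv2

    def detect_target(frame):
        # hypothetical stand-in: a real system would return the target's
        # (x, y, w, h) for this frame, or None when the target is not visible
        return None

    cap = cv2.VideoCapture("clip.avi")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        box = detect_target(frame)
        if box is not None:
            x, y, w, h = box  # the box resizes as the target's dimensions change
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("playback", frame)
        if cv2.waitKey(30) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()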
Also, the “Look Where” pane 430 may allow the user to select a radio button that defines the location attribute of the identified target as a trigger. The user may select the option that movement “Anywhere” is a trigger. The user may select the option that being “inside” a designated region (such as “the garden”) is a trigger. Similarly, the user may select being “outside” a designated region. The user may select an option that movement “Coming in through a door” is a trigger. The user may select an option that movement “Coming out through a door” is a trigger. The user may select an option that movement “Walking on part of the ground” (not shown) is a trigger. In other words, the technology may recognize when an object is walking on part of the ground. The technology may recognize movement and/or objects in three-dimensional space, even when the movement and/or objects are shown on the video in two dimensions. Further, the user may select the option that “crossing a boundary” is a trigger.
If the “When” icon 465 is selected, then a “Look When” pane (not shown) on the right side of the display may be provided to the user. The “Look When” pane may allow the user to define the boundaries of a time period during which the user wants movements to be monitored. Movement may be monitored when motion is visible for more than a given number of seconds. Alternatively, movement may be monitored when motion is visible for less than a given number of seconds, or within a given range of seconds. In other words, a specific time duration may be selected by a user. One skilled in the art will appreciate that any measurement of time (including, but not limited to, weeks, days, hours, minutes, or seconds) may be utilized. Also, one skilled in the art will appreciate that the user selection may be through any means (including, but not limited to, dragging and dropping icons, checkmarks, selection highlights, radio buttons, text input, and the like).
Still referring to FIG. 4, once the target and the one or more triggers of the rule 405 have been defined, the user may select one or more responses. Response icons depicted in FIG. 4 include a Notify icon 472, a Report icon 474, and an Advanced icon 476.
If the Notify icon 472 is selected, then a notification may be sent to the computing device 120 of the user. A user may select the response of “If seen: Send email” (not shown) as part of the notification. The user may drag and drop a copy of the Notify icon 472 and then connect the Notify icon 472 to the rule 405.
As described earlier, a notification may also include sending a text message to a cell phone, sending a multimedia message to a cell phone, or placing an automated phone call. If the Report icon 474 is selected, then a generation of a report may be the response. If the Advanced icon 476 is selected, the computer may play a sound to alert the user. Alternatively, the computer may store the video onto a database or other storage means associated with the computing device 120, or upload a video directly to a user-designated URL. The computer may interact with external application interfaces, or it may display custom text and/or graphics.
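For illustration only, the “If seen: Send email” response could be implemented with Python's standard library as sketched below. The SMTP host and the email addresses are placeholders, not values defined by the embodiments herein.

    # Send an email notification describing a triggered event.
    import smtplib
    from email.message import EmailMessage

    def send_email_alert(event_description):
        msg = EmailMessage()
        msg["Subject"] = "Video monitoring alert"
        msg["From"] = "monitor@example.com"      # placeholder sender
        msg["To"] = "user@example.com"           # placeholder recipient
        msg.set_content("Triggered event: " + event_description)
        with smtplib.SMTP("smtp.example.com") as server:  # placeholder host
            server.send_message(msg)

    # example call, once a rule's "if" part has been satisfied:
    # send_email_alert("person inside the garden, visible > 5 seconds")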
Another embodiment is one where Boolean language is used to apply multiple triggers to a particular target. For instance, Boolean language may be applied such that the user has instructed the technology to locate a person “in the garden OR (on the sidewalk AND moving left to right).” With this type of instruction, the technology may locate either persons in the garden or persons on the sidewalk who are also moving left to right. As mentioned above, one skilled in the art will recognize that the user may include Boolean language that applies to both one or more target(s) as well as one or more trigger(s).
A further embodiment is a rule 505 that includes Boolean language that provides a sequence (such as “AND THEN”). For instance, a user may select two or more triggers to occur in a sequence (e.g., “Trigger A” happens AND THEN “Trigger B” happens). Further, one skilled in the art will understand that a rule 505 may include one or more nested rules, as well as one or more rules in a sequence, in a series, or in parallel. Rules may be ordered in a tree structure with multiple branches, with one or more responses coupled to the rules.
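For illustration only, the following Python sketch composes triggers with Boolean and sequential operators. The combinator names (AND, OR, AND_THEN) and the detection fields are assumptions made for this example; in particular, AND THEN is modeled as firing only once its first trigger has already fired on an earlier detection.

    # Boolean and sequential composition of trigger predicates.
    def AND(a, b):
        return lambda d: a(d) and b(d)

    def OR(a, b):
        return lambda d: a(d) or b(d)

    def AND_THEN(a, b):
        fired = {"a": False}
        def sequence(d):
            if not fired["a"]:
                fired["a"] = a(d)   # wait until trigger A has happened ...
                return False
            return b(d)             # ... and only then test trigger B
        return sequence

    # "in the garden OR (on the sidewalk AND moving left to right)"
    in_garden = lambda d: d["region"] == "garden"
    on_sidewalk = lambda d: d["region"] == "sidewalk"
    moving_right = lambda d: d["direction"] == "left_to_right"
    rule = OR(in_garden, AND(on_sidewalk, moving_right))
    print(rule({"region": "sidewalk", "direction": "left_to_right"}))  # True

Nested rules, sequences, and tree structures with multiple branches follow naturally from this design, since each combinator returns a predicate of the same form it accepts.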
As shown in FIG. 5, the rule 505 may combine one or more targets, triggers, and responses in such sequences, branches, and nested arrangements.
Now referring to FIG. 6, an exemplary display of results produced by applying rules to video is shown.
In the example provided in FIG. 6, two rule applications are shown: “People—Walking on the lawn” 660 with an associated first timeline 665, and “Pets—In the Pool” 670 with an associated second timeline 675.
The first timeline 665 is from 8 am to 4 pm. The first timeline 665 shows five vertical lines. Each vertical line may represent the amount of time in which movement was detected according to the parameters of the rule application “People—Walking on the lawn” 660. In other words, there were five times during the time period of 8 am to 4 pm in which movement was detected that is likely to be people walking on the lawn. The second timeline 675 is also from 8 am to 4 pm. The second timeline 675 shows only one vertical line, which means that in one time period (around 10:30 am), movement was detected according to the parameters of the rule application “Pets—In the Pool” 670. The timelines thus provide the user with a compact summary of the triggered events detected by each rule application.
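For illustration only, the following Python sketch renders such timelines as text. The detection times for the first rule application are invented for this example (the text above specifies only that there are five of them, and that the single pet detection occurred around 10:30 am), and the rendering is an assumption rather than the actual interface of FIG. 6.

    # Render detections for each rule application as marks on an 8 am-4 pm timeline.
    from datetime import datetime

    detections = {
        "People - Walking on the lawn": [
            datetime(2009, 2, 9, 8, 40), datetime(2009, 2, 9, 9, 55),
            datetime(2009, 2, 9, 11, 20), datetime(2009, 2, 9, 13, 5),
            datetime(2009, 2, 9, 15, 30),
        ],
        "Pets - In the Pool": [datetime(2009, 2, 9, 10, 30)],
    }

    def render_timeline(events, start_hour=8, end_hour=16, width=48):
        """Place one vertical mark per detection on a text timeline."""
        line = ["-"] * width
        span = (end_hour - start_hour) * 60.0
        for event in events:
            minutes = (event.hour - start_hour) * 60 + event.minute
            line[int(minutes / span * (width - 1))] = "|"
        return "".join(line)

    for label, events in detections.items():
        print(f"{label:32s} {render_timeline(events)}")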
As mentioned previously, the technology mentioned herein is not limited to video. External data sources, such as web-based data sources, may be utilized in the system 100 of FIG. 1 in place of, or in addition to, the video sources 130.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.
This application is related to the U.S. patent application Ser. No. ______ filed on Feb. 9, 2009, titled “Systems and Methods for Video Analysis,” which is hereby incorporated by reference.