A camera, such as a camera used for surveillance and/or security, may capture an activity occurring within its field of view and provide a user with a notification of the activity. However, if the camera is placed in a high traffic area, such as a road or street, the camera will capture an excessive amount of activity occurring within its field of view. The camera will notify the user of the excessive amount of activity, even though the user may not be interested in some or any of the activity. The camera is unable to determine a region(s) within its field of view that may or may not be of interest to the user and notify the user of a specific object and/or specific type of activity associated with the object occurring within the region(s), such as a person/animal walking towards/away from the user's private property, vehicles passing by on a road in front of a home, flags/windmills in motion due to heavy wind, etc. These and other considerations are addressed by the methods and systems described herein.
It is to be understood that both the following general description and the following detailed description are explanatory only and are not restrictive. Methods and systems are described for determining object activity within a region of interest. A camera system (e.g., a smart camera, a camera in communication with a computing device, etc.) may identify/detect high frequency activity/motion regions within its field of view. The camera system may be used for long-term analysis of activity/motion events detected at different regions of a scene within its field of view, such as a user's front porch, private property, and the like, over an extended time period (e.g., hours, days, etc.). The camera system may capture/detect similar activity/motion events frequently occurring within a certain region and record (e.g., store, accumulate, etc.) statistics associated with each activity/motion event. Regions within the field of view of the camera system with high frequency activity/motion may be identified/determined and a user may be notified. The camera system may identify/detect regions within its field of view, objects within the regions, actions/motions associated with the objects, or the like. Regions (images of the regions, etc.) within the field of view of the camera system may be tagged with region-labels that identify the regions, such as, “street,” “sidewalk,” “private walkway,” “private driveway,” “private lawn,” “private porch,” and the like. The camera system may determine the regions within its field of view and the information may be used to train the camera system and/or a neural network associated with the camera system to automatically detect/determine regions within its field of view.
Additional advantages will be set forth in part in the description which follows or may be learned by practice. The advantages will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
The accompanying drawings, which are incorporated in and constitute a part of this specification, and together with the description, serve to explain the principles of the methods and systems:
Before the present methods and systems are described, it is to be understood that the methods and systems are not limited to specific methods, specific components, or to particular implementations. It is also to be understood that the terminology used herein is for the purpose of describing particular features only and is not intended to be limiting.
As used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, another range includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another value. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.
“Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes cases where said event or circumstance occurs and cases where it does not.
Throughout the description and claims of this specification, the word “comprise” and variations of the word, such as “comprising” and “comprises,” means “including but not limited to,” and is not intended to exclude other components, integers or steps. “Such as” is not used in a restrictive sense, but for explanatory purposes.
Components that may be used to perform the present methods and systems are described herein, and it is understood that when combinations, subsets, interactions, groups, etc. of these components are described that while specific reference of each various individual and collective combinations and permutation of these may not be explicitly described, each is specifically contemplated and described herein, for all methods and systems. This applies to all sections of this application including, but not limited to, steps in described methods. Thus, if there are a variety of additional steps that may be performed it is understood that each of these additional steps may be performed with any specific step or combination of steps of the described methods.
As will be appreciated by one skilled in the art, the methods and systems may be implemented using entirely hardware, entirely software, or a combination of software and hardware. Furthermore, the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) encoded on the storage medium. More particularly, the present methods and systems may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
The methods and systems are described below with reference to block diagrams and flowchart illustrations of methods, systems, apparatuses and computer program products. It will be understood that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, respectively, may be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create a means for implementing the functions specified in the flowchart block or blocks.
These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, may be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
Note that in various cases described herein reference may be made to a given entity performing some action. It should be understood that this language may in some cases mean that a system (e.g., a computer) owned and/or controlled by the given entity is actually performing the action.
Methods and systems are described for determining activity within a region of interest. A camera (e.g., video camera, etc.) system (e.g., a camera in communication with a computing device) may identify/detect high frequency activity/motion regions within its field of view. The camera system may capture video of a scene within its field of view. For each frame of the video, a change in pixels from a previous frame may be determined. If a change in a pixel (e.g., one or more pixels, etc.) is determined, the frame may be tagged with a motion indication parameter with a predefined value (e.g., 1) at the location in the frame where the change of pixel occurred. If it is determined that no pixels changed, the frame may be tagged with a motion indication parameter with a different predefined value (e.g., 0). A plurality of frames associated with the video may be determined, and a plurality of motion indication parameters may be determined and/or stored over a time period (e.g., a day(s), a week(s), etc.). The plurality of motion indication parameters may be compared to a threshold. An amount of motion indication parameters with a value of 1 may satisfy or exceed a threshold value; for example, 100 motion indication parameters with a value of 1 may exceed a threshold value set at 50 such parameters. A threshold value may be based on any amount or value of motion indication parameters. A region of interest (ROI) within the field of view of the camera may be determined based on the comparison of the plurality of motion indication parameters to the threshold.
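A minimal sketch of the frame-differencing step described above is shown below, assuming the frames are available as grayscale NumPy arrays; the function name tag_motion_parameters and the per-pixel difference threshold are illustrative assumptions rather than part of the described system.

import numpy as np

def tag_motion_parameters(frames, diff_threshold=25):
    """Tag each frame with per-location motion indication parameters.

    frames: sequence of grayscale frames (2-D uint8 NumPy arrays).
    Returns one binary map per frame; a value of 1 marks locations where
    pixels changed relative to the previous frame, and 0 marks locations
    where no pixels changed.
    """
    motion_params = []
    prev = None
    for frame in frames:
        frame = frame.astype(np.int16)
        if prev is None:
            # First frame has no previous frame; tag everything with 0.
            motion_params.append(np.zeros(frame.shape, dtype=np.uint8))
        else:
            changed = np.abs(frame - prev) > diff_threshold
            motion_params.append(changed.astype(np.uint8))
        prev = frame
    return motion_params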
The camera system may be used for long-term analysis of activity/motion events detected at different regions of a scene within its field of view, such as a user's front porch, private property, and the like, over an extended time period (e.g., hours, days, etc.). The camera system may capture/detect similar activity/motion events frequently occurring within a certain region and record (e.g., store, accumulate, etc.) statistics associated with each activity/motion event. Regions within the field of view of the camera system with high frequency activity/motion may be identified/determined and a user may be notified. A notification may be sent to the user that requests the user confirm whether the user desires continued notification of a particular region and/or frequently occurring activity/motion event. A user may be notified that a region within the field of view of the camera is associated with a plurality of motion indication parameters that satisfy, do not satisfy, or exceed a threshold value. The camera system may notify the user (e.g., a user device, etc.) via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The notification may be a text message, a notification/indication via an application, an email, a call, or any type of notification. A user may receive a message via a user device such as “do you want to ignore the event and/or events in this area?” “do you want to be notified of events on the road?” or any other type of message. If the user does not desire continued notification of the particular region and/or the frequently occurring activity/motion event, the camera may cease such notifications and/or filter/cease detection of the particular region and/or the frequently occurring activity/motion event.
The camera system may identify/detect regions within its field of view, objects within the regions, actions/motions associated with the objects, or the like. The camera system may determine regions within its field of view and images of the regions may be tagged with region-labels that identify the regions. Images of the regions may be tagged with labels such as, “street,” “sidewalk,” “private walkway,” “private driveway,” “private lawn,” “private porch,” and the like. The camera system may determine the regions within its field of view based on user provided information. The user may use an interface in communication with and/or associated with the camera system that displays the camera system's field of view to identify (e.g., draw, click, circle, etc.) the regions.
The camera system may determine the regions within its field of view by automatically identifying/detecting the regions and sending notifications to the user when a motion event is detected in an automatically identified/detected region or regions. The user may use the interface in communication with and/or associated with the camera system to view the notifications and to provide feedback indications (e.g., a “Thumbs Up” button indicative of a notification being helpful; a “Thumbs Down” button indicative of a notification being unhelpful, and the like). The feedback indications may be sent through the interface to the camera system. Based on the feedback indications provided from the user after viewing a notification(s), the camera system may continue or may cease providing notifications for the region or regions associated with the notification(s). The camera system may continue providing notifications for the region or regions associated with the notification(s) when the feedback indicates the notification(s) are helpful or desirable (e.g., an indication of a “Thumbs Up” in response to viewing the notification(s); an indication that the notification(s) was viewed at least once; and the like). The camera system may cease providing notifications for the region or regions associated with the notification(s) when the feedback indicates the notification(s) are not helpful or not desirable (e.g., an indication of a “Thumbs Down” in response to viewing the notification(s); an indication that the notification(s) was not viewed; and the like).
A region segmentation map may be generated, based on the identified/detected regions. One or more region segmentation maps and associated information may be used to train the camera system and/or any other camera system (e.g., a camera-based neural network, etc.) to automatically identify/detect regions of interest (ROIs) within a field of view. The camera system may automatically determine that a region within its field of view is a home/porch and whether an object moves towards the home/porch. The camera system may only be concerned (e.g., perform identification/detection, etc.) with region(s) within its field of view determined to be a particular region(s). The camera system may only be concerned with a region within its field of view determined to be a porch or regions connected to and/or associated with the porch, such as a lawn, a walkway, or the like. The camera system may only be concerned (e.g., perform identification/detection, etc.) with a particular region within its field of view to reduce analysis of unnecessary information (e.g., actions, motions, objects, etc.) of other regions within its field of view. The camera system may be configured to detect a particular object and/or action/motion occurring in the particular region within its field of view, such as a person walking towards the front door of a house. The camera system may be configured to ignore (e.g., not detect, etc.) a particular object and/or action/motion occurring in the particular region within its field of view, such as a person walking along a sidewalk. The camera system may use scene recognition to automatically identify regions, objects, and actions/motions occurring in a scene within its field of view that may be a layout that is new to the camera system (e.g., the front yard of a location where the camera of the camera system is newly installed, etc.). The camera system (or any other camera system, etc.) may abstract away appearance variations between scenes within its field of view (e.g., variations in scenes caused by a change in a location of the camera system).
To abstract away appearance variations between scenes within its field of view, the camera system may use a layout-induced video representation (LIVR) method to encode a scene layout based on a region segmentation map determined from a previous scene in the camera system's field of view.
The image capturing device 102 may be an electronic device such as a smart camera, a video recording and analysis device, a communications terminal, a computer, a display device, or other device capable of capturing images, video, and/or audio and communicating with the computing device 104. The image capturing device 102 may include a communication element 106 for providing an interface to a user to interact with the image capturing device 102 and/or the computing device 104. The communication element 106 may be any interface for presenting and/or receiving information to/from the user, such as a notification, confirmation, or the like associated with a region of interest (ROI), an object, or an action/motion within a field of view of the image capturing device 102. An interface may be a communication interface such as a display screen, a touchscreen, an application interface, a web browser (e.g., Internet Explorer®, Mozilla Firefox®, Google Chrome®, Safari®, or the like). Other software, hardware, and/or interfaces may be used to provide communication between the user and one or more of the image capturing device 102 and the computing device 104. The communication element 106 may request or query various files from a local source and/or a remote source. The communication element 106 may send data to a local or remote device such as the computing device 104.
The image capturing device 102 may be associated with a device identifier 108. The device identifier 108 may be any identifier, token, character, string, or the like, for differentiating one image capturing device (e.g., image capturing device 102) from another image capturing device. The device identifier 108 may identify an image capturing device as belonging to a particular class of image capturing devices. The device identifier 108 may be information relating to an image capturing device such as a manufacturer, a model or type of device, a service provider associated with the image capturing device 102, a state of the image capturing device 102, a locator, and/or a label or classifier. Other information may be represented by the device identifier 108.
The device identifier 108 may include an address element 110 and a service element 112. The address element 110 may be or provide an internet protocol address, a network address, a media access control (MAC) address, an Internet address, or the like. The address element 110 may be relied upon to establish a communication session between the image capturing device 102 and the computing device 104 or other devices and/or networks. The address element 110 may be used as an identifier or locator of the image capturing device 102. The address element 110 may be persistent for a particular network.
The service element 112 may be an identification of a service provider associated with the image capturing device 102 and/or with the class of image capturing device 102. The class of the image capturing device 102 may be related to a type of device, capability of device, type of service being provided, and/or a level of service (e.g., business class, service tier, service package, etc.). The service element 112 may be information relating to or provided by a communication service provider (e.g., Internet service provider) that is providing or enabling data flow such as communication services to the image capturing device 102. The service element 112 may be information relating to a preferred service provider for one or more particular services relating to the image capturing device 102. The address element 110 may be used to identify or retrieve data from the service element 112, or vice versa. One or more of the address element 110 and the service element 112 may be stored remotely from the image capturing device 102 and retrieved by one or more devices such as the image capturing device 102 and the computing device 104. Other information may be represented by the service element 112.
The computing device 104 may be a server for communicating with the image capturing device 102. The computing device 104 may communicate with the image capturing device 102 for providing data and/or services. The computing device 104 may provide services such as object activity and region detection services. The computing device 104 may allow the image capturing device 102 to interact with remote resources such as data, devices, and files.
The computing device 104 may manage the communication between the image capturing device 102 and a database 114 for sending and receiving data therebetween. The database 114 may store a plurality of files (e.g., regions of interest, motion indication parameters, etc.), object and/or action/motion detection algorithms, or any other information. The image capturing device 102 may request and/or retrieve a file from the database 114. The database 114 may store information relating to the image capturing device 102 such as the address element 110, the service element 112, regions of interest, motion indication parameters, and the like. The computing device 104 may obtain the device identifier 108 from the image capturing device 102 and retrieve information from the database 114 such as the address element 110 and/or the service elements 112. The computing device 104 may obtain the address element 110 from the image capturing device 102 and may retrieve the service element 112 from the database 114, or vice versa. The computing device 104 may obtain the regions of interest, motion indication parameters, object and/or action/motion detection algorithms, or the like from the image capturing device 102 and retrieve/store information from the database 114, or vice versa. Any information may be stored in and retrieved from the database 114. The database 114 may be disposed remotely from the computing device 104 and accessed via direct or indirect connection. The database 114 may be integrated with the computing device 104 or some other device or system.
A network device 116 may be in communication with a network such as network 105. One or more of the network devices 116 may facilitate the connection of a device, such as the image capturing device 102, to the network 105. The network device 116 may be configured as a wireless access point (WAP). The network device 116 may be configured to allow one or more wireless devices to connect to a wired and/or wireless network using Wi-Fi, BLUETOOTH®, or any desired method or standard.
The network device 116 may be configured as a local area network (LAN). The network device 116 may be a dual band wireless access point. The network device 116 may be configured with a first service set identifier (SSID) (e.g., associated with a user network or private network) to function as a local network for a particular user or users. The network device 116 may be configured with a second service set identifier (SSID) (e.g., associated with a public/community network or a hidden network) to function as a secondary network or redundant network for connected communication devices.
The network device 116 may have an identifier 118. The identifier 118 may be or relate to an Internet Protocol (IP) address (IPv4/IPv6), a media access control (MAC) address, or the like. The identifier 118 may be a unique identifier for facilitating communications on the physical network segment. There may be one or more network devices 116. Each of the network devices 116 may have a distinct identifier 118. An identifier (e.g., the identifier 118) may be associated with a physical location of the network device 116.
The image capturing device 102 may have an input module 111. The input module 111 may be one or more cameras (e.g., video cameras) and/or microphones that may be used to capture one or more images (e.g., video, etc.) and/or audio of a scene within its field of view.
The image capturing device 102 may have an image analysis module 114. The image analysis module 114 may analyze one or more images (e.g., video, frames of video, etc.) determined/captured by the image capturing device 102 and determine a plurality of portions of a scene within a field of view of the image capturing device 102 (e.g., the input module 111). Each portion of the plurality of portions of the scene may be classified/designated as a region of interest (ROI). A plurality of ROIs associated with a scene may be used to generate a region segmentation map of the scene. The image analysis module 114 may use a region segmentation map as a baseline and/or general information for predicting/determining a plurality of portions (e.g., a street, a porch, a lawn, etc.) of a new scene in a field of view of the image capturing device 102.
The image analysis module 114 may use selected and/or user provided information/data associated with one or more scenes to automatically determine a plurality of portions of any scene within a field of view of the image capturing device 102. The selected and/or user provided information/data may be provided to the image capturing device 102 during a training/registration procedure. A user may provide general geometric and/or topological information/data (e.g., user defined regions of interest, user defined geometric and/or topological labels associated with one or more scenes such as “street,” “porch,” “lawn,” etc.) to the image capturing device 102. The communication element 106 may display a scene in the field of view of the image capturing device 102 (e.g., the input module 111). The user may use the communication element 106 (e.g., an interface, a touchscreen, a keyboard, a mouse, etc.) to generate/provide the geometric and/or topological information/data to the image analysis module 114. The user may use an interface to identify (e.g., draw, click, circle, etc.) regions of interest (ROIs) within a scene. The user may tag the ROIs with labels such as, “street,” “sidewalk,” “private walkway,” “private driveway,” “private lawn,” “private porch,” and the like. A region segmentation map may be generated, based on the user defined ROIs. One or more region segmentation maps may be used to train the image analysis module 114 and/or any other camera system (e.g., a camera-based neural network, etc.) to automatically identify/detect regions of interest (ROIs) within a field of view. The image analysis module 114 may use the general geometric and/or topological information/data (e.g., one or more region segmentation maps, etc.) as a template and/or general information to predict/determine portions and/or regions of interest (e.g., a street, a porch, a lawn, etc.) associated with any scene (e.g., a new scene) in a field of view of the image capturing device 102.
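The workflow above (user-drawn, labeled regions converted into a region segmentation map) might be implemented along the lines of the sketch below; the label-to-identifier mapping, the function name build_region_segmentation_map, and the use of OpenCV's fillPoly for rasterization are assumptions added for illustration.

import numpy as np
import cv2  # assumed available for rasterizing user-drawn polygons

# Illustrative label set; any region labels provided by the user could be used.
REGION_LABELS = {"street": 1, "sidewalk": 2, "private walkway": 3,
                 "private driveway": 4, "private lawn": 5, "private porch": 6}

def build_region_segmentation_map(frame_shape, user_regions):
    """Rasterize user-drawn, labeled polygons into a region segmentation map.

    user_regions: list of (label, polygon) pairs, where polygon is an
    (N, 2) array of (x, y) vertices identified through the interface.
    Returns an integer map the size of the frame; 0 means unlabeled.
    """
    seg_map = np.zeros(frame_shape[:2], dtype=np.uint8)
    for label, polygon in user_regions:
        pts = np.asarray(polygon, dtype=np.int32).reshape(-1, 1, 2)
        cv2.fillPoly(seg_map, [pts], int(REGION_LABELS[label]))
    return seg_map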
The image analysis module 114 may determine an area within its field of view to be a region of interest (ROI) (e.g., a region of interest to a user) and/or areas within its field of view that are not regions of interest (e.g., non-ROIs). The image analysis module 114 may determine an area within its field of view to be a ROI or non-ROI based on long-term analysis of events occurring within its field of view. The image analysis module 114 may determine/detect a motion event occurring within an area within its field of view and/or a determined region of interest (ROI), such as a person walking towards a front door of a house within the field of view of the image capturing device 102. The image analysis module 114 may analyze video captured by the input module 111 (e.g., video captured over a period of time, etc.) and determine whether a plurality of pixels associated with a frame of the video is different from a corresponding plurality of pixels associated with a previous frame of the video. The image analysis module 114 may tag the frame with a motion indication parameter based on the determination whether the plurality of pixels associated with the frame is different from a corresponding plurality of pixels associated with a previous frame of the video. If a change in the plurality of pixels associated with the frame is determined, the frame may be tagged with a motion indication parameter with a predefined value (e.g., 1) at the location in the frame where the change of pixel occurred. If it is determined that no pixels changed (e.g., the pixel and its corresponding pixel are the same, etc.), the frame may be tagged with a motion indication parameter with a different predefined value (e.g., 0). A plurality of frames associated with the video may be determined. The image analysis module 114 may determine and/or store a plurality of motion indication parameters.
The image analysis module 114 may determine and/or store a plurality of motion indication parameters over a time period (e.g., a day(s), a week(s), etc.). The plurality of motion indication parameters may be compared to a threshold. An amount of motion indication parameters with a value of 1 may satisfy or exceed a threshold value. The threshold value may be based on any amount or value of motion indication parameters (e.g., 100 motion indication parameters with a value of 1 may exceed a threshold value set at 50).
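A sketch of how the stored motion indication parameters might be accumulated over a time period and compared to a threshold is shown below; the function names and the example threshold of 50 (taken from the example above) are illustrative.

import numpy as np

def accumulate_motion(motion_params):
    """Sum per-frame motion indication parameters into a per-location count."""
    return np.sum(np.stack(motion_params), axis=0)

def high_frequency_locations(motion_counts, threshold=50):
    """Mark locations whose accumulated count of value-1 motion indication
    parameters satisfies or exceeds the threshold (e.g., 100 exceeds 50)."""
    return motion_counts >= threshold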
The image analysis module 114 may perform analysis of activity/motion events detected at different ROIs of a scene, such as a user's front porch, private property, and the like, over an extended time period (e.g., hours, days, etc.). During such extended activity/motion analysis, the image analysis module 114 may determine similar activity/motion events frequently occurring within a particular ROI and record (e.g., store, accumulate, etc.) statistics associated with each activity/motion event. Regions of interest (ROIs) within the field of view of the image capturing device 102 with high frequency activity/motion may be identified/determined and a user may be notified. A notification may be sent to the user (e.g., to a user device) that requests that the user confirm whether the user would like to continue to receive notifications of activity/motion occurring within a particular ROI.
The image analysis module 114 may be trained to continue or to cease providing a notification when an activity/motion event is detected in a ROI based on user provided feedback indications. The user may provide the feedback using an interface of a user device (e.g., a “Thumbs Up” button indicative of a notification being helpful; a “Thumbs Down” button indicative of a notification being unhelpful, and the like). The feedback may be sent by the user device to the image analysis module 114. Based on the feedback provided from the user after viewing a notification, the camera system may continue or may cease providing notifications for the ROI associated with the notification. The camera system may continue providing notifications for the ROI associated with the notification when the feedback indicates the notification is helpful or desirable (e.g., an indication of a “Thumbs Up” in response to viewing the notification; an indication that the notification was viewed at least once; and the like). The camera system may cease providing notifications for the ROI associated with the notification when the feedback indicates the notification is not helpful or not desirable (e.g., an indication of a “Thumbs Down” in response to viewing the notification; an indication that the notification was not viewed; and the like).
The image capturing device 102 may use the communication element 106 to notify the user of activity/motion occurring within a particular ROI. The notification may be sent to the user via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The notification may be a text message, a notification/indication via an application, an email, a call, or any type of notification. A user may receive a message, via a user device, such as “Are you interested in the events in the region in the future?”, “do you want to be notified of events on the road?”, or any other type of message. If the user does not desire continued notification of activity/motion occurring within a particular ROI, the image capturing device 102 may cease such notifications and/or filter/cease detection of activity/motion occurring within the particular ROI. By filtering/ceasing detection of activity/motion occurring within a particular ROI, the image capturing device 102 may avoid/reduce notifications of action/motion events, such as trees/flags moving due to wind, rain/snow, shadows, and the like that may not be of interest to the user.
At 201, the image capturing device may determine/detect a motion event occurring within an area within its field of view 202. The image capturing device may be one or more cameras (e.g., video cameras) and/or microphones that may be used to capture video 203 and/or audio of a scene within its field of view 202. The motion event may be a car driving in front of a house within the field of view 202 of the image capturing device. The motion event may be any type of motion occurring within the field of view 202. The image capturing device may analyze the video 203 and determine whether a plurality of pixels associated with a frame of the video 203 is different from a corresponding plurality of pixels associated with a previous frame of the video 203.
At 204, the image capturing device (e.g., the image capturing device 102, etc.) may tag one or more frames 205, 206, 207 with a motion indication parameter based on the determination whether the plurality of pixels associated with the one or more frames 205, 206, 207 are different from a corresponding plurality of pixels associated with a previous frame of the video 203. The image capturing device may analyze the video 203 and determine whether a plurality of pixels associated with one or more frames 205, 206, 207 of the video 203 is different from a corresponding plurality of pixels associated with a previous frame of the video 203. The image capturing device may analyze the video 203 and determine whether a plurality of pixels associated with a frame 206 is different from a corresponding plurality of pixels associated with a previous frame 205. The image capturing device may analyze the video 203 and determine whether a plurality of pixels associated with any frames of the video 203 is different from a corresponding plurality of pixels associated with a previous frame of the video 203. The image capturing device may determine that one or more pixels of the plurality of pixels associated with the one or more frames 205, 206, 207 changes (e.g., is different, etc.) in reference to a previous frame, and a respective area of each of the one or more frames 205, 206, 207 may be tagged with a motion indication parameter with a predefined value (e.g., 1) at the location in the frame where the change of the one or more pixels occurred. The image capturing device may highlight an area (or one or more areas 208, 209, 210) of the one or more frames 205, 206, 207 that are tagged with a motion indication parameter with a predefined value (e.g., 1). The image capturing device may determine that no pixels of the plurality of pixels associated with the one or more frames 205, 206, 207 change in reference to a previous frame, and a respective area of each of the one or more frames 205, 206, 207 may be tagged with a motion indication parameter with a different predefined value (e.g., 0). The image capturing device may obfuscate/mask an area (or one or more areas 211, 212, 213) of the one or more frames 205, 206, 207 that are tagged with a motion indication parameter with a predefined value (e.g., 0).
The image capturing device may determine, based on the respective pixels and a plurality of motion indication parameters, whether the one or more frames 205, 206, 207 are indicative of a motion event. At 214, the image capturing device may determine (e.g., accumulate, etc.) and/or store a plurality of motion indication parameters. The image capturing device may determine and/or store a plurality of motion indication parameters over a time period (e.g., a day(s), a week(s), etc.).
At 215, the image capturing device may compare the plurality of motion indication parameters accumulated/stored over a period of time (e.g., a day(s), a week(s), etc.) to a threshold. An amount of motion indication parameters with a value of 1 may satisfy or exceed a threshold value. The threshold value may be based on any amount or value of motion indication parameters. For example, 100 motion indication parameters with a value of 1 may exceed a threshold value set at 50. The image capturing device, based on the accumulated/stored motion indication parameters, may determine activity/motion events frequently occurring within a particular region of interest (ROI).
At 216, the image capturing device may determine a region of interest (ROI) within the field of view 202 of the image capturing device with a high frequency of activity/motion (e.g., motion indication parameters exceeding a threshold, etc.) and notify a user. The image capturing device may send a notification 217 to the user that requests that the user confirm whether the user would like to continue to receive notifications of activity/motion occurring within a ROI 218. The notification 217 may be sent to the user via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The notification 217 may be a text message, a notification/indication via an application, an email, a call, or any type of notification. The user may receive a message, via a user device, such as “Are you interested in the events in the region in the future?”, “do you want to be notified of events on the road?”, or any type of message. If the user does not desire continued notification of activity/motion occurring within the ROI 218, the image capturing device may cease notifications and/or filter/cease detection of activity/motion occurring within the ROI 218. By filtering/ceasing detection of activity/motion occurring within a ROI, the image capturing device may avoid/reduce notifications of action/motion events, such as trees/flags moving due to wind, rain/snow, shadows, and the like that may not be of interest to the user.
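The determination of a high-frequency region of interest and the decision to notify the user might look like the sketch below; the bounding-box extraction, the user-preference store, and the send_notification callback are hypothetical details added for illustration.

import numpy as np

def determine_roi(motion_counts, threshold=50):
    """Return a bounding box (x0, y0, x1, y1) enclosing locations whose
    accumulated motion counts meet or exceed the threshold, or None."""
    ys, xs = np.nonzero(motion_counts >= threshold)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def maybe_notify(roi, user_preferences, send_notification):
    """Request confirmation unless the user previously chose to ignore
    activity/motion occurring within this region of interest."""
    if roi is None or user_preferences.get(roi) == "ignore":
        return
    send_notification("Are you interested in the events in the region in the future?")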
The region segmentation map 300 may include tags/labels determined by the user, such as a road 301, a sidewalk 302, a lawn 303, a lawn 304, a driveway 305, or a general area 306. One or more region segmentation maps may be used to train the camera system and/or any other camera system (e.g., a camera-based neural network, etc.) to automatically identify/detect regions of interest (ROIs) associated with a scene within its field of view. The camera system and/or any other camera system (e.g., a camera-based neural network, etc.) may use the general geometric and/or topological information/data (e.g., one or more region segmentation maps, etc.) as a template and/or general information to predict/determine portions and/or regions of interest (e.g., a street, a porch, a lawn, etc.) associated with any scene (e.g., a new scene) in its field of view.
The region segmentation maps 400, 401, 402, and 403 may represent geometry and topology of scene layouts, such as new scenes (e.g., scenes/images not previously captured by and/or introduced to the camera system, etc.) captured in the field of view of a camera system (e.g., the image capturing device 102, etc.). A region map may be generated based on the identified/detected regions. The region map may be used to train a camera system to automatically identify/detect regions within a field of view. The camera system may automatically determine that a region within its field of view is a home/porch, street, or the like. The region segmentation maps 400 and 403 each show different homes/porches, 404 and 405 respectively, that have been automatically determined as such by the camera system. The region segmentation maps 400, 401, 402, and 403 each show different lawns, 406, 407, 408, and 409 respectively, that have been automatically determined as such by the camera system. The region segmentation maps 400, 401, 402, 403 each show different streets, 410, 411, 412, and 413 respectively, that have been automatically determined as such by the camera system.
A camera-based neural network may be used for surveillance. The network must be able to identify actions occurring within a field of view of a camera. Such actions are generally associated with locations and directions. A camera system (e.g., the image capturing device 102, etc.) and/or a camera-based neural network may be configured to detect/identify certain actions occurring within a field of view of a camera and to ignore other actions. A user of a camera system may be interested in detecting (e.g., having an alert or notification generated, etc.) a person walking towards a front door of a house within the field of view of the camera system, and may be uninterested in detecting a person walking along the sidewalk that is also within the field of view. As such, the user's interest may be based on how objects captured in a field of view interact with the geometry and topology of a scene captured by the camera system. However, the layout of scenes captured in a field of view may vary significantly. Therefore, a camera system (e.g., a camera-based neural network) must discriminately identify/determine actions occurring within a field of view of an associated camera. The camera system (e.g., the image capturing device 102, etc.) may use one or more identification algorithms (e.g., a facial recognition algorithm, an object recognition algorithm, a landmark recognition algorithm, a motion recognition algorithm, etc.) to detect a particular object and/or action/motion occurring in a particular region within its field of view. The camera system may use a layout-induced video representation (LIVR) method to detect a particular object and/or action/motion occurring in a particular region within its field of view. The camera system (e.g., a camera-based neural network) may be trained (e.g., trained during a training/registration procedure) to represent geometry and topology of scene layouts (e.g., scenes captured within a field of view of a camera) so that the camera system may use scenes determined during training to generalize/determine unseen layouts.
A semantic component 600 may be represented by characteristic functions (e.g., region-labels) of scene layouts (e.g., a set of bitmaps of regions, referred to as “places,” used for feature aggregation in convolutional layers of a neural network). A geometric component 602 may be represented by a set of coarsely quantized distance transforms of each semantic place incorporated into the convolutional layers of a neural network (NN). A topological component (upper part of 601) may be represented through the connection structure in a dynamically gated fully connected layer of the network, essentially aggregating representations from adjacent places (more generally, places that are h-connected, i.e., reachable within h hops in the adjacency graph of the region segmentation map). The components 600, 601 and 602 require semantic feature decomposition as indicated at 603.
Bitmaps encoded with the semantic labels of places (e.g., “street,” “sidewalk,” “walkway,” “driveway,” “lawn,” “porch,” etc.) may be utilized to decompose video representations of scenes within a field of view of the camera system into different places (e.g., regions of interest, etc.) and train a camera-based neural network to learn/identify place-based feature descriptions (e.g., a street, a sidewalk, a walkway, a driveway, a lawn, a porch, etc.). Such decomposition encourages the camera-based neural network to learn features of generic place-based motion patterns that are independent of scene layouts. As part of the semantic feature decomposition, scene geometry may be encoded to model moving directions by discretizing a place into parts based on a quantized distance transform with regard to another place. The component 602 shows discretized bitmaps of walkway with regard to porch.
The confidence of an action may be predicted by the camera system (e.g. camera-based neural network, etc.) by using place-based feature descriptions. For example, since the actions occurring in one place may also be projected onto adjacent places from a camera field of view, the confidence of an action may be predicted by dynamically aggregating features on the place which is associated with that action and its adjacent places. Topological feature aggregation may control the “on/off” of neuron connections from place-based feature descriptions to action nodes at both training and testing time based on scene topological connectivity.
To evaluate the LIVR, a dataset may be collected. The dataset may be referred to as an Agent-in-Place Action dataset. The Agent-in-Place Action dataset may include over 5,000 15-second videos obtained from different surveillance scenes (e.g., 26 different surveillance scenes) with approximately 7,100 actions from 15 categories. To evaluate the generalization of LIVR, the scenes may be split into observed and unseen scenes. As described later in detail, experiments show that LIVR significantly improves the generalizability of a camera system (e.g., a camera-based neural network, etc.) trained only by observed scenes and tested on unseen scenes (improving the mean average precision (mAP) from around 20% to more than 51%). Consistent improvements are observed on almost all action categories.
Semantic feature decomposition may entail the use of a region segmentation map of each place to decompose features and force the network (e.g., neural network, etc.) to extract place-based feature descriptions individually. Region segmentation maps may be manually constructed, and a labeling tool may be used to annotate each place by drawing points to construct polygons. In addition, to differentiate some of the actions (e.g., person_move toward (home)_walkway and person_move away (home)_walkway), place descriptions may be extended by segmenting a place into several parts based on its distance to an anchor place to allow the network (e.g., neural network, etc.) to explicitly model moving directions with respect to the anchor place.
Given a region segmentation map, place-based feature descriptions (PD) may be extracted, as shown at 801, 802, and 803 by the place-based descriptions “sidewalk,” “street,” and “porch,” respectively. The region segmentation map, represented by a set of binary masks 804, 805, and 806, may be used to decompose feature maps spatially into regions, each capturing the motion occurring in a certain place (e.g., motion occurring on a sidewalk, motion occurring on a street, motion occurring on a porch, etc.). The decomposition may be applied to features instead of raw inputs to retain context information. Given feature maps X_L extracted at the Lth layer of the network, with spatial dimensions w_L×h_L, the feature description of the pth place is:

f_{L,p} = X_L ⊙ [M_L = p],

where M_L is the w_L×h_L region segmentation map downsampled to the spatial resolution of the Lth layer, [M_L = p] is the binary mask of place p, and ⊙ is the tiled element-wise multiplication.
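A sketch of the masking operation f_{L,p} = X_L ⊙ [M_L = p] is shown below using PyTorch; the tensor layout (batch, channels, time, height, width) and the nearest-neighbor downsampling of the segmentation map are assumptions for illustration.

import torch
import torch.nn.functional as F

def place_feature_description(x_l, seg_map, place_id):
    """Decompose feature maps by masking them with one place of the region
    segmentation map, i.e., f_{L,p} = X_L * [M_L == p].

    x_l: feature maps of shape (batch, channels, time, height, width).
    seg_map: integer region segmentation map as a torch tensor of shape (H, W).
    """
    # Downsample the segmentation map to the spatial resolution of layer L.
    m_l = F.interpolate(seg_map[None, None].float(), size=x_l.shape[-2:], mode="nearest")
    mask = (m_l == place_id).float()      # binary mask [M_L = p], shape (1, 1, h, w)
    # Tiled element-wise multiplication, broadcast over channels and time.
    return x_l * mask[:, :, None]         # mask becomes (1, 1, 1, h, w)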
Many actions are naturally associated with moving directions with respect to some scene element (e.g., the house in home surveillance). To teach the camera system (e.g., the image capturing device 102, a camera-based neural network, etc.) general patterns of the motion direction in different scenes, the place segmentation may be further discretized into several parts and features may be extracted from each part. The features may be aggregated to construct the place-based feature description of the place. The word “porch” may be used as the anchor place, as shown by the discretized bitmaps at 602. Given the distance transform D_L(x) of a pixel x with respect to the anchor place, the normalized distance of x within its place is computed as:

D̂_L(x) = (D_L(x) − D_L^min(x)) / (D_L^max(x) − D_L^min(x)),

where D_L^max(x) = max{D_L(x′) | M_L(x′) = M_L(x)} and D_L^min(x) = min{D_L(x′) | M_L(x′) = M_L(x)} are the maximum and minimum of pixel distances in the same place. The max and min of pixel distances in the same place may be efficiently pre-computed. Each place may then be divided into k parts by quantizing the normalized distance, with the ith part indicated by a binary mask M_{L,p,i}. The feature description corresponding to the ith part of the pth place in the Lth layer is:

f_{L,p,i} = X_L ⊙ M_{L,p,i},

where ⊙ is the tiled element-wise multiplication. Discretizing a place into parts at different distances to the anchor place and explicitly separating their spatial-temporal features allows the representation to capture moving agents in spatial-temporal order and extract direction-related abstract features.
Not all places may need to be segmented, since some places (such as a sidewalk or a street) are not associated with any direction-related action (e.g., moving toward or away from the house). For these places, the whole-place feature descriptors f_{L,p} may be extracted. Different choices of place discretization and the number of parts k may be used. To preserve temporal ordering, at 901, 3D-conv blocks with spatial-only max pooling may be applied to extract features from each discretized place, and the features may be concatenated channel-wise. Then, 3D-conv blocks with temporal-only max pooling may be applied to abstract temporal information. A 1-D place-based feature description may be obtained after applying global max pooling (GMP). The final description obtained after distance-based place discretization may have the same dimensions as non-discretized place descriptions.
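A sketch of the distance-based place discretization is shown below; it uses SciPy's Euclidean distance transform as the distance D_L(x) to the anchor place and quantizes the normalized distance into k equal-width parts, which is one plausible reading of the discretization described above. The function name is illustrative.

import numpy as np
from scipy.ndimage import distance_transform_edt

def discretize_place(seg_map, place_id, anchor_id, k=3):
    """Split one place into k parts by quantizing each pixel's distance to the
    anchor place (e.g., the porch), normalized within the place.

    Returns an integer map: 0 outside the place, 1..k for the discretized parts.
    """
    # Distance of every pixel to the nearest pixel of the anchor place.
    dist = distance_transform_edt(seg_map != anchor_id)
    place = seg_map == place_id
    d = dist[place]
    d_min, d_max = d.min(), d.max()
    # Normalized distance within the place, quantized into k parts.
    norm = (d - d_min) / max(d_max - d_min, 1e-6)
    parts = np.zeros(seg_map.shape, dtype=np.uint8)
    parts[place] = np.minimum((norm * k).astype(np.uint8) + 1, k)
    return parts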
Semantic feature decomposition may enable a feature description for each place to be extracted individually. In order for the camera system (e.g., the image capturing device 102, a camera-based neural network, etc.) to predict action labels, the place features may be aggregated. Each action may be mapped to a place. To predict the confidence of an action a occurring in a place p, features extracted far from place p may be considered distractors. To reduce interference by features from irrelevant places, the camera system (e.g., the image capturing device 102, a camera-based neural network, etc.) may ignore far away features. As previously described, the camera system (e.g., the image capturing device 102, a camera-based neural network, etc.) may ignore the far away features via Topological Feature Aggregation, which utilizes the spatial connectivity between places, to guide feature aggregation.
As shown in 902, given a scene segmentation map (e.g., a region segmentation map), a source place p and a constant h, the camera system (e.g., the image capturing device 102, a camera-based neural network, etc.) may employ a Connected Component algorithm to find the h-connected set C_h(p), which considers all places connected to place p within h hops. The constant h specifies the minimum number of steps to walk from the source to a destination place. Given the h-connected place set C_h, a binary action-place matrix T may be constructed, with one row per action and one column per place, in which an entry is 1 if the place belongs to the h-connected set of the place associated with the action and 0 otherwise. The matrix T may be tiled along the feature dimension to form a mask T* matching the dimensions of the concatenated place-based feature descriptions, and the output of the gated fully connected layer is:

y = (W ⊙ T*) f*,
where ⊙ is the element-wise matrix multiplication, and f* is the concatenated feature vector as the input of the layer. For simplicity, bias may be omitted. Let J be the training loss function (cross-entropy loss); considering the derivative with respect to W, the gradient formulation is:
∇_W J = (∇_y J f*ᵀ) ⊙ T*,

which is exactly the usual gradient (∇_y J f*ᵀ) masked by T*. To train the camera-based neural network, the gradients may be back-propagated only to connected neurons.
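A sketch of the gated fully connected layer y = (W ⊙ T*) f* is shown below in PyTorch; because the binary mask multiplies the weights in the forward pass, autograd yields exactly the masked gradient ∇_W J = (∇_y J f*ᵀ) ⊙ T* described above. The class name, initialization, and dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class TopologicallyGatedLinear(nn.Module):
    """Inference layer whose weights are gated by a binary action-place matrix,
    so each action node only aggregates features from its h-connected places."""

    def __init__(self, action_place_matrix, feat_dim_per_place):
        super().__init__()
        n_actions, n_places = action_place_matrix.shape
        # Tile T along the feature dimension to obtain T*.
        t_star = action_place_matrix.float().repeat_interleave(feat_dim_per_place, dim=1)
        self.register_buffer("t_star", t_star)
        self.weight = nn.Parameter(torch.randn(n_actions, n_places * feat_dim_per_place) * 0.01)

    def forward(self, f_star):
        # y = (W ⊙ T*) f*; gradients to masked-out weights are zero.
        return f_star @ (self.weight * self.t_star).t()

A sigmoid may then be applied to the output to obtain per-action probabilities, consistent with the inference layer described below.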
A surveillance video dataset for recognizing agent-in-place actions may be implemented. Outdoor home surveillance videos may be collected from internal donors and webcams over an extended duration (e.g., months) to obtain over 7,100 actions from around 5,000 15-second video clips with 1280*720 resolution. The videos are captured from 26 different outdoor cameras which cover various layouts of typical American families' front yards and back yards. 15 common agent-in-place actions may be selected to label, each represented as a tuple indicative of an action, the agent performing it, and the place where it occurs. The agents, actions, and places involved may be: Agent={person, vehicle, pet}; Action={move along, stay, move away (home), move toward (home), interact with vehicle, move across}; Place={street, sidewalk, lawn, porch, walkway, driveway}.
The duration of each video clip may be 15 s, so multiple actions may be observed from a single agent or multiple agents in one video. Action recognition may be formulated as a multi-label classification task. A group of 26 cameras may be split into two sets, observed scenes and unseen scenes, to balance the number of instances of each action in observed and unseen scenes and at the same time cover more scenes in the unseen set. Training and validation of the model may be performed on observed scenes to test generalization capability on the unseen scenes.
A neural network may include a distinct architecture. In training the neural network, decoupling the pooling into spatial-only and temporal-only pooling was found to produce ideal performance. For each place-specific camera system (e.g., the image capturing device 102, camera-based neural network, etc.) that extracts place-based feature descriptions, nine blocks of 3D ConvNets are utilized, with the first five blocks using spatial-only max pooling and the last four blocks using temporal-only max pooling. The first two blocks have one 3D-conv layer each, and there are two convolutional (conv) layers with ReLU in between for the remaining blocks. For each place-specific network, 64 3*3*3 conv filters may be used per 3D-conv layer. After conducting SGMP on features extracted by each place-specific network, the final concatenated 1-D feature dimension is 6*64 since there are 6 places in total. The inference may be conducted with a gated fully connected layer, whose connections (“on/off” status) may be determined by action labels and scene topology. A sigmoid function may be used to obtain the predicted probability of each action. If feature-level decomposition (L>0) is conducted, a shared network may be used to extract low-level features. The detailed place-specific network structure is shown in the Table (Table 1) below.
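A reduced sketch of one place-specific branch is shown below; the real network uses nine blocks as described above, while this illustration uses fewer blocks and approximate layer counts, so it should be read as a structural outline rather than a reproduction of Table 1.

import torch
import torch.nn as nn

def conv3d_block(in_ch, out_ch, pool):
    """One 3D-conv block with ReLU followed by spatial-only or temporal-only max pooling."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(kernel_size=pool, stride=pool),
    )

class PlaceSpecificNet(nn.Module):
    """Spatial-only pooling in the early blocks, temporal-only pooling in the
    later blocks, then global max pooling to a 64-D place-based description."""

    def __init__(self, in_ch=3, width=64):
        super().__init__()
        spatial = [conv3d_block(in_ch if i == 0 else width, width, pool=(1, 2, 2))
                   for i in range(3)]          # spatial-only max pooling
        temporal = [conv3d_block(width, width, pool=(2, 1, 1))
                    for _ in range(2)]         # temporal-only max pooling
        self.blocks = nn.Sequential(*spatial, *temporal)

    def forward(self, x):                      # x: (batch, channels, time, H, W)
        feats = self.blocks(x)
        return torch.amax(feats, dim=(2, 3, 4))  # global max pooling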
For a dataset, the directions mentioned are all relative to a house location (e.g., where a camera is placed), and a porch is a strong indicator of the house location. A distance transform to the porch may be conducted, but the distance-based place discretization method may be generalized to represent moving direction with respect to any arbitrary anchor place (e.g., camera location).
An action recognition task may be formulated as multi-label classification without mutual exclusion. For input video frames, an FPS of 1 may be used and each frame may be downsampled to 160*90 to construct a 15*160*90*3 tensor for each video as input. A small FPS and low resolution are sufficient to model actions for home surveillance where most agents are large and the motion patterns of actions are relatively simple. The performance of recognizing each action may be evaluated independently. An Average Precision (AP) for each action and a mean Average Precision (mAP) over all categories may be determined.
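A sketch of the evaluation step (per-action Average Precision and mean AP for the multi-label formulation) is shown below; it assumes ground-truth labels and predicted scores arranged as (num_videos, num_actions) arrays and uses scikit-learn's average_precision_score.

import numpy as np
from sklearn.metrics import average_precision_score

def evaluate_map(y_true, y_score):
    """Compute per-action Average Precision (AP) and the mean AP (mAP).

    y_true: (num_videos, num_actions) binary ground-truth labels.
    y_score: (num_videos, num_actions) predicted probabilities (e.g., sigmoid outputs).
    """
    aps = [average_precision_score(y_true[:, a], y_score[:, a])
           for a in range(y_true.shape[1])]
    return aps, float(np.mean(aps))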
26 scenes are split into two sets: observed scenes and unseen scenes. The videos in observed scenes are further split into training and validation sets with a sample ratio of nearly 1:1. A model may be trained on observed scenes and tested on unseen scenes. The validation set may be used for tuning hyper-parameters: semantics may be decomposed after the second conv block (L=2). Distance-based place discretization may be conducted on PLDT={walkway; driveway; lawn} with k=3; for topological feature aggregation, h=1.
3D ConvNets may be used as a baseline (B/L) model. All three baseline models may share the same 3D ConvNets architecture, which is very similar to the architecture of each place-specific network that extracts place-based feature descriptions, except that the last layer is fully connected instead of gated through topological feature aggregation. The difference among baselines 1, 2, and 3 is their respective inputs: B/L1 takes the raw frames as input; B/L2 applies frame difference on two consecutive frames; B/L3 incorporates the scene layout information by directly concatenating the 6 segmentation maps to the RGB channels in each frame, resulting in an input of 9 channels per frame in total. The baseline models were trained using the same setting as in the proposed model, and the performance of the baselines is shown in Columns 2-5 in the Table (Table 2) below.
Notably, adding frame differencing leads to significant performance improvements; only marginal improvements are obtained by incorporating scene layout information using ConcateMap; and the testing performance gap between observed and unseen scenes is large, which reveals the poor generalization of the baseline models. In addition, a B/L3 model was trained with 6 times more filters per layer to evaluate whether model size is the key factor for the performance improvement. The result of this enlarged B/L3 model is shown in Column 5 of Table 2. Overall, the baseline models, which directly extract features jointly from the entire video, suffer from overfitting, and simply enlarging the model size or directly using the segmentation maps as features does not improve their generalization. More details about the baseline models may be found in the supplementary materials.
Columns 6-9 of Table 2 show the mAP of the experimental models on the observed-scene validation set and the unseen-scene testing set. Notably, there are three significant performance gaps, especially on unseen scenes: 1) from B/L3 to Ours-V1, over 20% mAP improvement is obtained by applying the proposed semantic feature decomposition to extract place feature descriptions; 2) from Ours-V1 to Ours-V3, the model is further improved by explicitly modeling moving directions via place discretization; 3) when compared to using a fully connected layer for feature aggregation (V1 and V3), the topological method (V2 and V4) leads to another significant improvement, which shows the efficacy of feature aggregation based on scene layout connectivity. Doubling the resolution (320*180), FPS (2), and number of filters (128) only results in a slight change in the model's accuracy (Columns 10-12 in Table 2).
Place-based Feature Description. The hyper-parameter for PD is the level L, controlling when to decompose semantics in different places.
Different strategies for determining the places to be discretized and the number of parts to discretize (k) per place were reviewed. The anchor place (the porch) forms category C1 and is usually the closest place to the camera; the remaining five places in the experimental dataset may be clustered into two further categories with regard to their distance to the camera: C2 includes the lawn, walkway, and driveway, where actions usually require modeling the moving direction directly; C3 includes the sidewalk and street, which are usually far away from a house and where actions are not sensitive to direction (e.g., “move along”). Two strategies for applying distance-based place discretization (DD) were evaluated with the experimental camera system: first, applying DD to all places in C2 and C3, and second, applying DD only to places in C2. Results are shown at 1301. Applying DD on C3 does not help much, but if DD is applied only on places in C2, the experimental method achieves the best performance. In terms of the number of discretized parts k, values of k from 2 to 5 were evaluated and, as shown at 1301, performance is robust when k≥3.
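A minimal sketch of the distance-based place discretization relative to an anchor place is given below, assuming NumPy and SciPy binary masks; choosing band boundaries by quantiles is an illustrative assumption.

```python
# Illustrative distance-based place discretization relative to an anchor place.
# Assumes binary NumPy masks; uses SciPy's Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt


def discretize_place(place_mask, anchor_mask, k=3):
    # Distance of every pixel to the anchor region (e.g., the porch).
    dist = distance_transform_edt(~anchor_mask.astype(bool))
    d = dist[place_mask.astype(bool)]
    edges = np.quantile(d, np.linspace(0, 1, k + 1))  # k bands, similar pixel counts
    parts = []
    for i in range(k):
        band = (dist >= edges[i]) & (dist <= edges[i + 1]) & place_mask.astype(bool)
        parts.append(band)
    return parts  # k sub-masks of the place, ordered near-to-far from the anchor
```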
Different h values were evaluated to determine the h-connected set, along with different strategies to construct and utilize the action-place mapping T. The results are shown at 1302. Notably, Topo-Agg achieves its best performance when h=1, i.e., for an action occurring in a place P, features extracted from place P and its directly connected places are aggregated. Topo-Agg is compared to a naive fully connected inference layer (FC-Agg: 1 layer) and to two fully connected layers with 384 neurons each and a ReLU layer in between (FC-Agg: 2 layers). Notably, generalizability drops significantly with an extra fully connected layer, which reflects overfitting. Topo-Agg outperforms both methods. Training an ordinary fully connected inference layer and aggregating features based on topology only at testing time (“Topo-Agg: 1-hop test only”) results in worse performance.
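The gated inference layer referenced above may be sketched as a fully connected layer whose weights are masked by a binary connectivity matrix derived from the action-place mapping and the h-connected set; the sketch below assumes PyTorch and an externally supplied mask.

```python
# Illustrative gated (masked) fully connected inference layer, assuming PyTorch.
# `topo_mask` is a binary (num_actions, feature_dim) matrix assumed to be built
# from the action-place mapping and the h-connected set.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedInference(nn.Module):
    def __init__(self, topo_mask):
        super().__init__()
        num_actions, feat_dim = topo_mask.shape
        self.linear = nn.Linear(feat_dim, num_actions)
        self.register_buffer("mask", topo_mask.float())  # "on/off" connections

    def forward(self, features):  # features: (N, feat_dim), e.g. 6*64
        logits = F.linear(features, self.linear.weight * self.mask, self.linear.bias)
        return torch.sigmoid(logits)  # predicted probability of each action
```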
The model described uses a user-provided semantic map. To explore the sensitivity of the network to typical errors that would be encountered with automatically constructed semantic maps, dilation and erosion may be applied on the ground-truth maps to simulate the two types of inaccuracy such maps may exhibit.
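A brief sketch of this robustness test, assuming OpenCV and a binary place mask, is shown below; the kernel size is an example value that controls the magnitude of the simulated error.

```python
# Illustrative simulation of segmentation-map errors (assuming OpenCV):
# dilation imitates over-segmentation, erosion imitates under-segmentation.
import cv2
import numpy as np


def perturb_mask(mask, kernel_size=15, mode="dilate"):
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    if mode == "dilate":
        return cv2.dilate(mask, kernel, iterations=1)
    return cv2.erode(mask, kernel, iterations=1)
```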
The motion may be detected by determining a change in pixels between sequential frames (e.g., a previous frame and a current frame, etc.) of the video of the scene within the field of view. For each frame of the video a change in pixels from a previous frame may be determined. The camera system may analyze the video and determine whether a plurality of pixels associated with a frame of the video is different from a corresponding plurality of pixels associated with a previous frame of the video. Analysis of the video may be performed in incremental durations of the video, such as at every 15-second interval of the video.
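A minimal sketch of this pixel-change detection is given below, assuming OpenCV and NumPy; the per-pixel change threshold and the minimum number of changed pixels are illustrative values.

```python
# Illustrative pixel-change detection between two consecutive frames;
# the thresholds are example values only.
import cv2
import numpy as np


def frame_changed(prev_frame, curr_frame, pixel_thresh=25, min_changed_pixels=500):
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)
    changed = diff > pixel_thresh              # per-pixel change map
    return changed, int(changed.sum()) >= min_changed_pixels
```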
At 1520, a plurality of motion indication parameters (e.g., motion indication values) may be determined/generated. Each frame of the video determined to have a change in pixels from a previous frame may be tagged with a motion indication parameter. If a change in the plurality of pixels associated with a frame is determined, the frame may be tagged with a motion indication parameter with a predefined value (e.g., 1) at the location in the frame where the change of pixel occurred. If it is determined that no pixels changed (e.g., the pixel(s) and its corresponding pixel(s) is the same, etc.), the frame may be tagged with a motion indication parameter with a different predefined value (e.g., 0). A plurality of frames associated with the video may be determined. The camera system may determine and/or store a plurality of motion indication parameters. The plurality of motion indication parameters may be determined and/or stored over a time period (e.g., a day(s), a week(s), etc.).
At 1530, the plurality of motion indication parameters may be compared to a threshold. The camera system may determine that the plurality of motion indication parameters satisfy a threshold. A given number of motion indication parameters with a value of 1 may satisfy or exceed a threshold value, such as 100 motion indication parameters with a value of 1 may satisfy/exceed a threshold value set for 50 motion indication parameters with a value of 1. A threshold value may be based on any amount or value of motion indication parameters. A region of interest (ROI) within the field of view of the camera may be determined based on the plurality of motion indication parameters compared to a threshold.
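Putting the tagging and threshold steps together, the following sketch (assuming NumPy; the grid size and threshold value are illustrative) accumulates 1-valued motion indication parameters over a coarse grid of frame locations and flags cells whose counts satisfy the threshold as a candidate region of interest.

```python
# Illustrative accumulation of motion indication parameters over a coarse grid
# and comparison against a threshold; grid size and threshold are example values.
import numpy as np

GRID = (9, 16)  # rows x cols of coarse cells over the frame


def tag_motion_parameters(changed_map):
    """Return a GRID-shaped array of 1/0 motion indication parameters for one frame."""
    h, w = changed_map.shape
    params = np.zeros(GRID, dtype=np.uint8)
    for r in range(GRID[0]):
        for c in range(GRID[1]):
            cell = changed_map[r * h // GRID[0]:(r + 1) * h // GRID[0],
                               c * w // GRID[1]:(c + 1) * w // GRID[1]]
            params[r, c] = 1 if cell.any() else 0
    return params


def regions_of_interest(accumulated_params, threshold=50):
    """Cells whose accumulated 1-valued parameters satisfy/exceed the threshold."""
    return accumulated_params >= threshold
```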
At 1540, a user may be notified of action/motion events. The camera system may provide a notification to a user device based on the plurality of motion indication parameters satisfying the threshold. The portion of the plurality of portions of the scene within the field of view of the camera system may be determined to be associated with a high frequency of motion events (e.g., as indicated by the motion indication parameters with a value of 1). A notification may be sent to the user device that requests the user to confirm whether the user desires continued notification of motion events occurring within the portion of the plurality of portions of the scene within the field of view of the camera system. The camera system may notify the user (e.g., the user device) via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The notification may be a text message, a notification/indication via an application, an email, a call, or any type of notification.
At 1550, the user may provide an instruction. The instruction may be in response to the notification the user received via the user device. The user may receive a message via the user device such as “do you want to ignore the event and/or events in this area?”, “do you want to be notified of events on the road?” or any type of message. The user may reply to the notification with the instruction. The instruction may be sent to the camera system via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The instruction may be provided via a text message, a reply via an application, an email, a call, etc.
At 1560, the portion of the plurality of portions of the scene within the field of view of the camera system may be excluded from future detections of motion events. The camera system may exclude the portion of the plurality of portions of the scene within the field of view of the camera system from future detections of motion events based on an exclusion threshold being satisfied. The exclusion threshold may be satisfied based on a high frequency of motion events detected in the portion within a short timeframe (e.g., 10 motion events detected within a one-hour timeframe). The exclusion threshold may be temporally based (e.g., a lower threshold during daytime and a higher threshold during nighttime). The excluded portion may be disabled from triggering a notification in response to detecting one or more motion events following the exclusion. Any previously excluded portion may be un-excluded based on the exclusion threshold no longer being satisfied over a timeframe (e.g., 2 motion events detected within a one-hour timeframe).
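The exclusion logic may be sketched as follows; the event counts, the one-hour window, and the assumed daytime hours (6:00-18:00) are illustrative values rather than defined parameters.

```python
# Illustrative exclusion-threshold logic for one portion of the scene; the
# thresholds, the one-hour window, and the 6:00-18:00 daytime window are
# example values, not defined parameters.
from datetime import datetime, timedelta


class PortionExclusion:
    def __init__(self, day_threshold=10, night_threshold=20, window=timedelta(hours=1)):
        self.day_threshold = day_threshold      # lower threshold during daytime
        self.night_threshold = night_threshold  # higher threshold during nighttime
        self.window = window
        self.events = []                        # timestamps of detected motion events
        self.excluded = False

    def _threshold(self, now):
        return self.day_threshold if 6 <= now.hour < 18 else self.night_threshold

    def record_event(self, now=None):
        now = now or datetime.now()
        self.events.append(now)
        self.events = [t for t in self.events if now - t <= self.window]
        # Exclude when the threshold is satisfied; un-exclude when it no longer is.
        self.excluded = len(self.events) >= self._threshold(now)
        return self.excluded  # True: suppress notifications for this portion
```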
The camera system may exclude the portion of the plurality of portions of the scene within the field of view of the camera system from future detections of motion events based on the instruction received from the user/user device. If the user does not desire continued notification of motion events occurring within the portion of the plurality of portions of the scene within the field of view of the camera system (as indicated by the instruction), the camera system may cease such notifications and/or filter/cease detection of motion events occurring within the portion of the plurality of portions of the scene within the field of view of the camera system. Based on the instruction, the camera system may still detect motion events occurring within the portion of the plurality of portions of the scene within the field of view of the camera system, yet the camera system may only notify the user if an amount of associated motion indication parameters satisfies or exceeds a threshold. The threshold may be automatically determined by the camera system and/or determined by the user.
The camera system may downgrade a processing priority of a portion of the plurality of portions of the scene within the field of view of the camera system based on a downgrade threshold being satisfied. The downgrade threshold may be satisfied based on a low frequency of motion events detected in the portion within a timeframe (e.g., 5 motion events detected within a two-hour timeframe). The downgrade threshold may be temporally based. The downgraded portion may trigger a notification in response to determining that one or more security settings have been violated (e.g., a detected motion event violates a security setting(s)). Security settings associated with the camera system may be based on any parameter, such as a number of persons approaching a house, a motion event detected at a certain time of day, a person entering a certain area/region within the field of view of the camera system, and/or the like. The downgraded portion may be excluded based on satisfying the exclusion threshold described above.
The motion may be detected by determining a change in pixels between sequential frames (e.g., a previous frame and a current frame, etc.) of the video of the scene within the field of view. For each frame of the video a change in pixels from a previous frame may be determined. The camera system may analyze the video and determine whether a plurality of pixels associated with a frame of the video is different from a corresponding plurality of pixels associated with a previous frame of the video. Analysis of the video may be performed in incremental durations of the video, such as at every 15-second interval of the video.
At 1620, an object associated with the motion event and an action associated with the motion event may be determined. The camera system (e.g., the image capturing device 102, etc.) may determine the object and action associated with the motion event. The object and action associated with the motion event may be any object and any action, such as a person/animal walking towards/from a user's private property, vehicles passing by on a road in front of a home, flags/windmills in motion due to heavy wind, and the like. The camera system may use scene recognition, facial recognition, landmark recognition, spatial recognition, and the like to automatically identify regions, objects, and the like within a scene within its field of view. The camera system may use a layout-induced video representation (LIVR) method to determine regions of the scene (e.g., a porch, a street, a lawn, etc.), abstract away areas (e.g., regions) of the scene where motion is not detected, and use a combination of semantic, geometric, and topological analysis to determine the object and the action. The camera system may use the LIVR method to determine that a person is walking towards a home. The LIVR method is described in detail in previous sections of the present disclosure.
At 1630, it may be determined that an area within a field of view of a camera is compromised. The camera system (e.g., the image capturing device 102, etc.) may determine that an area within its field of view is compromised based on the determined object and action. The camera system may determine that a person is walking towards a home. The camera system may determine that the person walking towards the home compromises security settings associated with the camera system. Security settings associated with the camera system may be based on any parameter, such as a number of persons approaching a house, a motion event detected at a certain time of day, a person entering a certain area/region within the field of view of the camera system, and/or the like.
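Such a security-settings check may be sketched as follows; the rule fields and the example rules (e.g., a person walking towards the porch) are illustrative assumptions, not a defined configuration format.

```python
# Illustrative security-settings check for a detected (object, action, region,
# time) tuple; the rule structure and example rules are assumptions.
from datetime import datetime

SECURITY_SETTINGS = [
    {"object": "person", "action": "walk_towards", "region": "porch", "hours": (0, 24)},
    {"object": "person", "action": "enter", "region": "lawn", "hours": (22, 6)},
]


def in_hours(hour, start, end):
    return start <= hour < end if start < end else (hour >= start or hour < end)


def violates_security(obj, action, region, when=None):
    when = when or datetime.now()
    for rule in SECURITY_SETTINGS:
        if (obj == rule["object"] and action == rule["action"]
                and region == rule["region"] and in_hours(when.hour, *rule["hours"])):
            return True
    return False
```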
At 1640, a notification may be provided to a user (e.g., a user device). The camera system may provide a notification to the user that the area within the field of view of the camera system is compromised. The camera system may provide a notification to the user/user device based on a detection that a person is walking towards the home. A notification may be sent to the user device that requests the user to confirm whether the user desires continued notification of such motion events occurring within the area within the field of view of the camera system. The camera system may notify the user (e.g., the user device) via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The notification may be a text message, a notification/indication via an application, an email, a call, or any type of notification.
The motion may be detected by determining a change in pixels between sequential frames (e.g., a previous frame and a current frame, etc.) of the video of the scene within the field of view. For each frame of the video a change in pixels from a previous frame may be determined. The camera system may analyze the video and determine whether a plurality of pixels associated with a frame of the video is different from a corresponding plurality of pixels associated with a previous frame of the video. Analysis of the video may be performed in incremental durations of the video, such as at every 15-second interval of the video.
At 1720, a plurality of motion indication parameters may be determined/generated. Each frame of the video determined to have a change in pixels from a previous frame may be tagged with one or more motion indication parameters of the plurality. If a change in the plurality of pixels associated with a frame is determined, the frame may be tagged with a motion indication parameter with a predefined value (e.g., 1) at the location in the frame where the change of pixel occurred. If it is determined that no pixels changed (e.g., the pixel(s) and its corresponding pixel(s) is the same, etc.), the frame may be tagged with a motion indication parameter with a different predefined value (e.g., 0). A plurality of frames associated with the video may be determined. The camera system may determine and/or store a plurality of motion indication parameters. The plurality of motion indication parameters may be determined and/or stored over a time period (e.g., a day(s), a week(s), etc.).
At 1730, the plurality of motion indication parameters may be compared to a threshold. The camera system may determine that the plurality of motion indication parameters satisfy an exclusion threshold. A given number of motion indication parameters with a value of 1 may satisfy or exceed an exclusion threshold value, such as 100 motion indication parameters with a value of 1 may satisfy/exceed an exclusion threshold value set for 50 motion indication parameters with a value of 1. An exclusion threshold value may be based on any amount or value of motion indication parameters. A portion of the plurality of portions within the field of view of the camera may be determined based on the plurality of motion indication parameters compared to the exclusion threshold.
At 1740, the portion of the plurality of portions of the scene within the field of view of the camera system may be disabled from triggering notifications based on one or more subsequent detections of motion events within the portion. The camera system may disable the portion of the plurality of portions of the scene within the field of view of the camera system from triggering notifications based on one or more subsequent detections of motion events when an exclusion threshold is satisfied. The exclusion threshold may be satisfied based on a high frequency of motion events detected in the portion within a short timeframe (e.g., 10 motion events detected within a one-hour timeframe). The exclusion threshold may be temporally based (e.g., a lower threshold during daytime and a higher threshold during nighttime). Any previously excluded portion may be un-excluded based on the exclusion threshold no longer being satisfied over a timeframe (e.g., 2 motion events detected within a one-hour timeframe).
The camera system may exclude the portion of the plurality of portions of the scene within the field of view of the camera system from future detections of motion events based on an instruction received from the user/user device. If the user does not desire continued notification of motion events occurring within the portion of the plurality of portions of the scene within the field of view of the camera system (as indicated by the instruction), the camera system may cease such notifications and/or filter/cease detection of motion events occurring within the portion of the plurality of portions of the scene within the field of view of the camera system. Based on the instruction, the camera system may still detect motion events occurring within the portion of the plurality of portions of the scene within the field of view of the camera system, yet the camera system may only notify the user if an amount of associated motion indication parameters satisfies or exceeds a threshold. The threshold may be automatically determined by the camera system and/or determined by the user.
Based on a detected motion event, a user may be notified. The camera system may provide a notification to a user device based on the plurality of motion indication parameters satisfying the exclusion threshold. The portion of the plurality of portions of the scene within the field of view of the camera system may be determined to be associated with a high frequency of motion events (e.g., as indicated by the motion indication parameters with a value of 1). A notification may be sent to the user device that requests the user to confirm whether the user desires continued notification of motion events occurring within the portion of the plurality of portions of the scene within the field of view of the camera system. The camera system may notify the user (e.g., the user device) via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The notification may be a text message, a notification/indication via an application, an email, a call, or any type of notification.
The user may provide an instruction. The instruction may be in response to the notification the user received via the user device. The user may receive a message via the user device such as “do you want to ignore the event and/or events in this area?”, “do you want to be notified of events on the road?” or any type of message. The user may reply to the notification with the instruction. The instruction may be sent to the camera system via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The instruction may be provided via a text message, a reply via an application, an email, a call, or any type of response comprising the instruction.
The user may use an interface in communication with and/or associated with the camera system to view the notifications and to provide feedback indications in response to receiving the notification (e.g., a “Thumbs Up” button indicative of a notification being helpful; a “Thumbs Down” button indicative of a notification being unhelpful, and the like). The feedback indications may be sent through the interface to the camera system. Based on the feedback indications provided from the user after viewing a notification(s), the camera system may continue or may cease providing notifications for the region or regions associated with the notification(s). The camera system may continue providing notifications for the region or regions associated with the notification(s) when the feedback indicates the notification(s) are helpful or desirable (e.g., an indication of a “Thumbs Up” in response to viewing the notification(s); an indication that the notification(s) was viewed at least once; and the like). The camera system may cease providing notifications for the region or regions associated with the notification(s) when the feedback indicates the notification(s) are not helpful or not desirable (e.g., an indication of a “Thumbs Down” in response to viewing the notification(s); an indication that the notification(s) was not viewed; and the like).
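The feedback loop may be sketched as a small per-region preference store; the region names and feedback labels below are illustrative.

```python
# Illustrative per-region notification preferences driven by user feedback;
# the region names and feedback labels are example values.
notify_enabled = {}  # region -> bool


def handle_feedback(region, feedback):
    """feedback: 'thumbs_up', 'viewed', 'thumbs_down', or 'not_viewed'."""
    if feedback in ("thumbs_up", "viewed"):
        notify_enabled[region] = True       # continue notifying for this region
    elif feedback in ("thumbs_down", "not_viewed"):
        notify_enabled[region] = False      # cease notifying for this region
```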
Further, one or more subsequent motion events in the first portion may be detected. Based on the one or more subsequent motion events, a plurality of motion indication parameters associated with the one or more subsequent motion events may be generated in the manner described above. The camera system may then determine that the plurality of motion indication parameters associated with the one or more subsequent motion events no longer satisfy the exclusion threshold (e.g., 10 motion events detected within a three-hour timeframe). The camera system, based on the plurality of motion indication parameters associated with the one or more subsequent motion events no longer satisfying the exclusion threshold, may then enable the first portion such that it may trigger a notification based on the one or more subsequent detections of motion events.
The camera system may downgrade a processing priority of a portion of the plurality of portions of the scene within the field of view of the camera system based on a downgrade threshold being satisfied. The downgrade threshold may be satisfied based on a high frequency of motion events detected in the portion within an associated timeframe (e.g., 5 motion events detected within a two-hour timeframe). The number of motion events required to satisfy the downgrade threshold, as well as the associated timeframe during which the motion events are detected, may be defined by the camera system or by a user of the camera system. The downgrade threshold may be temporally based such that the downgrade threshold may be satisfied with a fewer number of detected motion events during a first timeframe (e.g., nighttime hours) as compared to a second timeframe (e.g., daytime hours). In this way, a fewer number of detected motion events may satisfy the threshold when they are detected during hours corresponding to a certain timeframe(s) (e.g., nighttime). The number of motion events required to satisfy the downgrade threshold during a certain time period (e.g., nighttime versus daytime), as well as the associated timeframe during which the motion events are detected (e.g., between a time corresponding to dusk and a time corresponding to dawn), may be defined by the camera system or by a user of the camera system.
A downgraded processing priority may be associated with triggering a notification (e.g., a notification sent to a user device) in response to the camera system detecting a motion event with less frequency as compared to a frequency of triggering notifications prior to the processing priority being downgraded. For a portion having a processing priority that has not been downgraded, a notification may be triggered each time a motion event is detected. For a portion having a downgraded processing priority, a notification may be triggered despite the processing priority having been downgraded (e.g., the downgraded processing priority may be overcome) when an average number of motion events detected in the portion in a given timeframe is less than a number of detected motion events that satisfy the downgrade threshold (e.g., 5 motion events detected within a two-hour timeframe may satisfy the downgrade threshold, and a notification may nonetheless be triggered when only 1 motion event is detected in a subsequent four-hour timeframe). The number of motion events required to overcome the downgraded processing priority and to cause a notification to be triggered, as well as the associated timeframe during which the motion events are detected, may be defined by the camera system or by a user of the camera system.
The camera system may also be configured such that a notification is triggered despite the processing priority having been downgraded (e.g., the downgraded processing priority may be overcome) only in response to determining that one or more security settings have been violated (e.g., a detected motion event violates a security setting(s) and a notification is caused to be triggered as a result). Security settings associated with the camera system may be based on any parameter, such as a number of persons approaching a house, a motion event detected at a certain time of day, a person entering a certain area/region within the field of view of the camera system, and/or the like. The downgraded portion may be excluded based on satisfying the exclusion threshold described above.
The motion may be detected by determining a change in pixels between sequential frames (e.g., a previous frame and a current frame, etc.) of the video of the scene within the field of view. For each frame of the video a change in pixels from a previous frame may be determined. The camera system may analyze the video and determine whether a plurality of pixels associated with a frame of the video is different from a corresponding plurality of pixels associated with a previous frame of the video. Analysis of the video may be performed in incremental durations of the video, such as at every 15-second interval of the video.
At 1820, a plurality of motion indication parameters may be determined/generated. Each frame of the video determined to have a change in pixels from a previous frame may be tagged with a motion indication parameter. If a change in the plurality of pixels associated with a frame is determined, the frame may be tagged with a motion indication parameter with a predefined value (e.g., 1) at the location in the frame where the change of pixel occurred. If it is determined that no pixels changed (e.g., the pixel(s) and its corresponding pixel(s) is the same, etc.), the frame may be tagged with a motion indication parameter with a different predefined value (e.g., 0). A plurality of frames associated with the video may be determined. The camera system may determine and/or store a plurality of motion indication parameters. The plurality of motion indication parameters may be determined and/or stored over a time period (e.g., a day(s), a week(s), etc.).
At step 1830, the plurality of motion indication parameters may be compared to a downgrade threshold. The camera system may determine that the plurality of motion indication parameters satisfy the downgrade threshold. A given number of motion indication parameters with a value of 1 may satisfy or exceed the downgrade threshold value, such as 100 motion indication parameters with a value of 1 may satisfy/exceed a downgrade threshold value set for 50 motion indication parameters with a value of 1. The downgrade threshold value may be based on any amount or value of motion indication parameters. A portion of the plurality of portions within the field of view of the camera may be determined based on the plurality of motion indication parameters compared to the downgrade threshold.
At step 1840, the camera system may downgrade a processing priority of a portion of the plurality of portions of the scene within the field of view of the camera system based on the downgrade threshold being satisfied. The downgrade threshold may be satisfied based on a high frequency of motion events detected in the portion within an associated timeframe (e.g., 5 motion events detected within a two-hour timeframe). The number of motion events required to satisfy the downgrade threshold, as well as the associated timeframe during which the motion events are detected, may be defined by the camera system or by a user of the camera system. The downgrade threshold may be temporally based such that the downgrade threshold may be satisfied with a fewer number of detected motion events during a first timeframe (e.g., nighttime hours) as compared to a second timeframe (e.g., daytime hours). In this way, a fewer number of detected motion events may satisfy the threshold when they are detected during hours corresponding to a certain timeframe(s) (e.g., nighttime). The number of motion events required to satisfy the downgrade threshold during a certain time period (e.g., nighttime versus daytime), as well as the associated timeframe during which the motion events are detected (e.g., between a time corresponding to dusk and a time corresponding to dawn), may be defined by the camera system or by a user of the camera system.
A downgraded processing priority may be associated with triggering a notification (e.g., a notification sent to a user device) in response to the camera system detecting a motion event with less frequency as compared to a processing priority that has not been downgraded. For a portion having a processing priority that has not been downgraded, a notification may be triggered each time a motion event is detected. For a portion having a downgraded processing priority, a notification may be triggered, despite the processing priority having been downgraded, only in response to determining that one or more security settings have been violated (e.g., a detected motion event violates a security setting(s)). Security settings associated with the camera system may be based on any parameter, such as a number of persons approaching a house, a motion event detected at a certain time of day, a person entering a certain area/region within the field of view of the camera system, and/or the like. The downgraded portion may be excluded based on satisfying the exclusion threshold described above.
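The per-portion notification policy described above may be sketched as follows; the state labels are illustrative.

```python
# Illustrative per-portion notification policy: excluded portions never notify,
# downgraded portions notify only on a security-setting violation, and
# normal-priority portions notify on every detected motion event.
def should_notify(portion_state, security_violated):
    if portion_state == "excluded":
        return False
    if portion_state == "downgraded":
        return bool(security_violated)
    return True
```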
At 1850, an object associated with a detected subsequent motion event and an action associated with the subsequent motion event may be determined. The camera system (e.g., the image capturing device 102, etc.) may determine the object and action associated with the motion event. The object and action associated with the motion event may be any object and any action, such as a person/animal walking towards/from a user's private property, vehicles passing by on a road in front of a home, flags/windmills in motion due to heavy wind, and the like. The camera system may use scene recognition, facial recognition, landmark recognition, spatial recognition, and the like to automatically identify regions, objects, and the like within a scene within its field of view. The camera system may use a layout-induced video representation (LIVR) method to determine regions of the scene (e.g., a porch, a street, a lawn, etc.), abstract away areas (e.g., regions) of the scene where motion is not detected, and use a combination of semantic, geometric, and topological analysis to determine the object and the action. The camera system may use the LIVR method to determine that a person is walking towards a home. The LIVR method is described in detail in previous sections of the present disclosure.
At 1860, it may be determined that the subsequent motion event violates a security setting. The camera system (e.g., the image capturing device 102, etc.) may determine that the subsequent motion event violates a security setting based on the determined object and action. The camera system may determine that a person is walking towards a home. The camera system may determine that the person walking towards the home compromises security settings associated with the camera system. Security settings associated with the camera system may be based on any parameter, such as a number of persons approaching a house, a motion event detected at a certain time of day, a person entering a certain area/region within the field of view of the camera system, and/or the like.
At 1870, the camera system may cause a notification to be provided to a user (e.g., a user device) using any of the communication protocols discussed above with respect to method 1700. The camera system may downgrade a processing priority of a portion of the plurality of portions of the scene within the field of view of the camera system based on the downgrade threshold being satisfied. The downgrade threshold may be satisfied based on a high frequency of motion events detected in the portion within an associated timeframe (e.g., 5 motion events detected within a two-hour timeframe). The number of motion events required to satisfy the downgrade threshold, as well as the associated timeframe during which the motion events are detected, may be defined by the camera system or by a user of the camera system. The downgrade threshold may be temporally based such that the downgrade threshold may be satisfied with a fewer number of detected motion events during a first timeframe (e.g., nighttime hours) as compared to a second timeframe (e.g., daytime hours). In this way, a fewer number of detected motion events may satisfy the threshold when they are detected during hours corresponding to a certain timeframe(s) (e.g., nighttime). The number of motion events required to satisfy the downgrade threshold during a certain time period (e.g., nighttime versus daytime), as well as the associated timeframe during which the motion events are detected (e.g., between a time corresponding to dusk and a time corresponding to dawn), may be defined by the camera system or by a user of the camera system.
A downgraded processing priority may be associated with triggering a notification (e.g., a notification sent to a user device) in response to the camera system detecting a motion event with less frequency as compared to a processing priority that has not been downgraded. For a portion having a processing priority that has not been downgraded, a notification may be triggered each time a motion event is detected. For a portion having a downgraded processing priority, a notification may be triggered, despite the processing priority having been downgraded, only in response to determining that one or more security settings have been violated (e.g., a detected motion event violates a security setting(s)).
At 1880, the camera system may provide a notification to the user that the subsequent motion event violates a security setting. The camera system may provide a notification to the user/user device based on a detection that a person is walking towards the home. The camera system may notify the user (e.g., the user device) via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The notification may be a text message, a notification/indication via an application, an email, a call, or any type of notification. Security settings associated with the camera system may be based on any parameter, such as a number of persons approaching a house, a motion event detected at a certain time of day, a person entering a certain area/region within the field of view of the camera system, and/or the like. The downgraded portion may be excluded based on satisfying the exclusion threshold described above.
The camera system may detect a motion event in a second portion of the plurality of portions. The motion event may be detected from content (e.g., a plurality of images, video, etc.) captured by a camera system (e.g., the image capturing device 102, etc.). The camera system may capture video of a scene within its field of view (e.g., field of view of the camera, etc.). The motion may be detected in the second portion by determining a change in pixels between sequential frames (e.g., a previous frame and a current frame, etc.) of the video of the scene within the field of view. For each frame of the video a change in pixels from a previous frame may be determined. The camera system may analyze the video and determine whether a plurality of pixels associated with a frame of the video is different from a corresponding plurality of pixels associated with a previous frame of the video. Analysis of the video may be performed in incremental durations of the video, such as at every 15-second interval of the video.
A plurality of motion indication parameters may be determined/generated based on the motion event in the second portion. Each frame of the video determined to have a change in pixels from a previous frame may be tagged with a motion indication parameter. If a change in the plurality of pixels associated with a frame is determined, the frame may be tagged with a motion indication parameter with a predefined value (e.g., 1) at the location in the frame where the change of pixel occurred. If it is determined that no pixels changed (e.g., the pixel(s) and its corresponding pixel(s) is the same, etc.), the frame may be tagged with a motion indication parameter with a different predefined value (e.g., 0). A plurality of frames associated with the video may be determined. The camera system may determine and/or store a plurality of motion indication parameters. The plurality of motion indication parameters may be determined and/or stored over a time period (e.g., a day(s), a week(s), etc.).
The plurality of motion indication parameters based on the motion event in the second portion may be compared to a downgrade threshold. The camera system may determine that the plurality of motion indication parameters based on the motion event in the second portion satisfy the downgrade threshold. A given number of motion indication parameters with a value of 1 may satisfy or exceed the downgrade threshold value, such as 100 motion indication parameters with a value of 1 may satisfy/exceed a downgrade threshold value set for 50 motion indication parameters with a value of 1. The downgrade threshold value may be based on any amount or value of motion indication parameters.
The camera system may downgrade a processing priority of the second portion based on the downgrade threshold being satisfied. The downgrade threshold may be satisfied based on a low frequency of motion events detected in the second portion within a timeframe (e.g., 5 motion events detected within a two-hour timeframe). The downgrade threshold may be temporally based. The downgraded second portion may trigger a notification in response to determining that one or more security settings have been violated (e.g., a detected motion event violates a security setting(s)) based on a detected subsequent motion event in the downgraded second portion. Security settings associated with the camera system may be based on any parameter, such as a number of persons approaching a house, a motion event detected at a certain time of day, a person entering a certain area/region within the field of view of the camera system, and/or the like.
The camera system may determine that the subsequent motion event in the downgraded second portion violates a security setting. The camera system (e.g., the image capturing device 102, etc.) may determine that the subsequent motion event violates a security setting based on the determined object and action. The camera system may determine that a person is walking towards a home. The camera system may determine that the person walking towards the home compromises security settings associated with the camera system. Security settings associated with the camera system may be based on any parameter, such as a number of persons approaching a house, a motion event detected at a certain time of day, a person entering a certain area/region within the field of view of the camera system, and/or the like.
The camera system may cause a notification to be provided to a user (e.g., a user device) once it is determined that the subsequent motion event violates the security setting. The camera system may notify the user (e.g., the user device) via a short range communication technique (e.g., BLUETOOTH®, near-field communication, infrared, etc.) or a long range communication technique (e.g., WIFI, cellular, satellite, Internet, etc.). The notification may be a text message, a notification/indication via an application, an email, a call, or any type of notification.
The camera system may exclude the downgraded second portion based on satisfying an exclusion threshold in response to detecting a future motion event in the downgraded second portion (e.g., the future motion event is detected at a later time than when the subsequent motion event is detected). The future motion event may be detected from content (e.g., a plurality of images, video, etc.) captured by a camera system (e.g., the image capturing device 102, etc.). The camera system may capture video of a scene within its field of view (e.g., field of view of the camera, etc.). The scene within the field of view may be partitioned into different portions, such as a lawn, a porch, a street, and or the like. The future motion event may be detected by determining a change in pixels between sequential frames (e.g., a previous frame and a current frame, etc.) of the video of the scene within the field of view. For each frame of the video a change in pixels from a previous frame may be determined. The camera system may analyze the video and determine whether a plurality of pixels associated with a frame of the video is different from a corresponding plurality of pixels associated with a previous frame of the video. Analysis of the video may be performed in incremental durations of the video, such as at every 15-second interval of the video.
A plurality of motion indication parameters may be determined/generated based on the detected future motion event in the downgraded second portion. Each frame of the video determined to have a change in pixels from a previous frame may be tagged with a motion indication parameter. If a change in the plurality of pixels associated with a frame is determined, the frame may be tagged with a motion indication parameter with a predefined value (e.g., 1) at the location in the frame where the change of pixel occurred. If it is determined that no pixels changed (e.g., the pixel(s) and its corresponding pixel(s) is the same, etc.), the frame may be tagged with a motion indication parameter with a different predefined value (e.g., 0). A plurality of frames associated with the video may be determined. The camera system may determine and/or store a plurality of motion indication parameters. The plurality of motion indication parameters may be determined and/or stored over a time period (e.g., a day(s), a week(s), etc.).
The plurality of motion indication parameters based on the detected future motion event in the downgraded second portion may be compared to an exclusion threshold. The camera system may determine that the plurality of motion indication parameters satisfy an exclusion threshold. A given number of motion indication parameters with a value of 1 may satisfy or exceed an exclusion threshold value, such as 100 motion indication parameters with a value of 1 may satisfy/exceed the exclusion threshold value set for 50 motion indication parameters with a value of 1. The exclusion threshold value may be based on any amount or value of motion indication parameters.
The camera system may disable the downgraded second portion from triggering notifications based on the detected future motion event when the exclusion threshold is satisfied. The exclusion threshold may be satisfied based on a high frequency of motion events detected in the downgraded second portion within a short timeframe (e.g., 10 motion events detected within a one-hour timeframe). The exclusion threshold may be temporally based (e.g., a lower threshold during daytime and a higher threshold during nighttime). The camera system may exclude the downgraded second portion from subsequent detections of motion events.
The methods and systems may be implemented on a computer 1901 as shown in
The present methods and systems may be operational with numerous other general purpose or special purpose computing system environments or configurations. Well-known computing systems, environments, and/or configurations that may be suitable for use with the systems and methods include, but are not limited to, personal computers, server computers, laptop devices, and multiprocessor systems. Additional computing systems, environments, and/or configurations include set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The processing of the present methods and systems may be performed by software components. The described systems and methods may be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices. Generally, program modules are composed of computer code, routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The described methods may also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Further, one skilled in the art will appreciate that the systems and methods described herein may be implemented via a general-purpose computing device in the form of a computer 1901. The components of the computer 1901 may be, but are not limited to, one or more processors 1903, a system memory 1912, and a system bus 1913 that couples various system components including the one or more processors 1903 to the system memory 1912. The system may utilize parallel computing.
The system bus 1913 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, or a local bus using any of a variety of bus architectures. Such architectures may be an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, a Personal Computer Memory Card International Association (PCMCIA) bus, a Universal Serial Bus (USB), and the like. The bus 1913, and all buses specified in this description, may also be implemented over a wired or wireless network connection and each of the subsystems, including the one or more processors 1903, a mass storage device 1904, an operating system 1905, object identification and action determination software 1906, image data 1907, a network adapter 1908, the system memory 1912, an Input/Output Interface 1910, a display adapter 1909, a display device 1911, and a human machine interface 1902, may be contained within one or more remote computing devices 1914a,b,c at physically separate locations, connected through buses of this form, in effect implementing a fully distributed system.
The computer 1901 is typically composed of a variety of computer readable media. Readable media may be any available media that is accessible by the computer 1901 and may be both volatile and non-volatile media, removable and non-removable media. The system memory 1912 may be computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). The system memory 1912 is typically composed of data such as the image data 1907 and/or program modules such as the operating system 1905 and the object identification and action determination software 1906 that are immediately accessible to and/or are presently operated on by the one or more processors 1903.
The computer 1901 may also be composed of other removable/non-removable, volatile/non-volatile computer storage media.
Optionally, any number of program modules may be stored on the mass storage device 1904, such as the operating system 1905 and the object identification and action determination software 1906. Each of the operating system 1905 and the object identification and action determination software 1906 (or some combination thereof) may be elements of the programming and the object identification and action determination software 1906. The image data 1907 may also be stored on the mass storage device 1904. The image data 1907 may be stored in any of one or more databases known in the art. Examples of such databases include DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, MySQL, PostgreSQL, and the like. The databases may be centralized or distributed across multiple systems.
The user may enter commands and information into the computer 1901 via an input device (not shown). Such input devices may be, but are not limited to, a keyboard, pointing device (e.g., a “mouse”), a microphone, a joystick, a scanner, tactile input devices such as gloves and other body coverings, and the like. These and other input devices may be connected to the one or more processors 1903 via the human machine interface 1902 that is coupled to the system bus 1913, but may be connected by other interface and bus structures, such as a parallel port, game port, an IEEE 1394 Port (also known as a Firewire port), a serial port, or a universal serial bus (USB).
The display device 1911 may also be connected to the system bus 1913 via an interface, such as the display adapter 1909. It is contemplated that the computer 1901 may have more than one display adapter 1909 and the computer 1901 may have more than one display device 1911. The display device 1911 may be a monitor, an LCD (Liquid Crystal Display), or a projector. In addition to the display device 1911, other output peripheral devices may be components such as speakers (not shown) and a printer (not shown) which may be connected to the computer 1901 via the Input/Output Interface 1910. Any step and/or result of the methods may be output in any form to an output device. Such output may be any form of visual representation, including, but not limited to, textual, graphical, animation, audio, tactile, and the like. The display device 1911 and computer 1901 may be part of one device, or separate devices.
The computer 1901 may operate in a networked environment using logical connections to one or more remote computing devices 1914a,b,c. A remote computing device may be a personal computer, portable computer, smartphone, a server, a router, a network computer, a peer device or other common network node, and so on. Logical connections between the computer 1901 and a remote computing device 1914a,b,c may be made via a network 1915, such as a local area network (LAN) and/or a general wide area network (WAN). Such network connections may be through the network adapter 1908. The network adapter 1908 may be implemented in both wired and wireless environments. Such networking environments are conventional and commonplace in dwellings, offices, enterprise-wide computer networks, intranets, and the Internet.
Application programs and other executable program components such as the operating system 1905 are shown herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 1901, and are executed by the one or more processors 1903 of the computer. An implementation of the object identification and action determination software 1906 may be stored on or sent across some form of computer readable media. Any of the described methods may be performed by computer readable instructions embodied on computer readable media. Computer readable media may be any available media that may be accessed by a computer. Computer readable media may be “computer storage media” and “communications media.” “Computer storage media” may be composed of volatile and non-volatile, removable and non-removable media implemented in any methods or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Further, computer storage media may be, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by a computer.
The methods and systems may employ Artificial Intelligence techniques such as machine learning and iterative learning. Such techniques include, but are not limited to, expert systems, case based reasoning, Bayesian networks, behavior based AI, neural networks, fuzzy systems, evolutionary computation (e.g. genetic algorithms), swarm intelligence (e.g. ant algorithms), and hybrid intelligent systems (e.g. Expert inference rules generated through a neural network or production rules from statistical learning).
Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its steps or it is not otherwise specifically stated in the claims or descriptions that the steps are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; and the number or type of configurations described in the specification.
It will be apparent to those skilled in the art that various modifications and variations may be made without departing from the scope or spirit. Other configurations will be apparent to those skilled in the art from consideration of the specification and practice described herein. It is intended that the specification and methods and systems described therein be considered exemplary only, with a true scope and spirit being indicated by the following claims.
This application is a continuation of U.S. patent application Ser. No. 16/353,954, filed Mar. 14, 2019, which claims priority to U.S. Provisional Application No. 62/643,093 filed Mar. 14, 2018, both of which are herein incorporated by reference in their entirety.
Provisional application: 62643093, filed Mar. 2018 (US).
Parent application: 16353954, filed Mar. 2019 (US).
Child application: 17691695 (US).