NOTIFICATIONS BY A NETWORK-CONNECTED SECURITY SYSTEM BASED ON CONTENT ANALYSIS

Information

  • Patent Application
  • Publication Number
    20190289263
  • Date Filed
    January 03, 2019
  • Date Published
    September 19, 2019
Abstract
Systems and methods are described for presenting notifications at a client device based on analysis of content generated by electronic devices in a network-connected security system. The introduced technique can be applied as a filtering process to reduce the number of notifications sent by a network-connected security device to a client device. In an example embodiment, content such as video captured by a video surveillance camera is processed to detect events occurring in a surveilled environment. Notifications are then presented at a user device for events that satisfy some specified criterion.
Description
TECHNICAL FIELD

Various embodiments concern computer programs and associated computer-implemented techniques for intelligently processing content generated by electronic devices such as security cameras, security lights, etc.


BACKGROUND

Surveillance is the monitoring of behavior, activities, or other changing information for the purpose of influencing, managing, or protecting people/items in a given environment. Generally, surveillance requires that the given environment be monitored by means of electronic devices such as security cameras, security lights, etc. For example, a variety of electronic devices may be distributed through the home environment to detect activities performed in/around the home.


Wireless security cameras have proved to be very popular among modern consumers due to their low installation costs and flexible installation options. Moreover, many wireless security cameras can be mounted in locations that were previously unavailable to wired security cameras. Thus, consumers can readily set up home security systems for seasonal monitoring/surveillance (e.g., of pools, yards, garages, etc.).





BRIEF DESCRIPTION OF THE DRAWINGS

Various features and characteristics of the technology will become more apparent to those skilled in the art from a study of the Detailed Description in conjunction with the drawings. Embodiments of the technology are illustrated by way of example and not limitation in the drawings, in which like references may indicate similar elements.



FIG. 1 is a diagram illustrating an example environment in which at least some operations described herein can be implemented;



FIG. 2 is a diagram illustrating various functional components of an example electronic device configured to monitor various aspects of a surveilled environment;



FIG. 3 is a diagram illustrating various functional components of an example base station associated with a network-connected security system configured to monitor various aspects of a surveilled environment;



FIG. 4 is a plan view of a surveilled environment (e.g., a home) illustrating an example arrangement of devices associated with a network-connected security system;



FIG. 5A is a diagram illustrating a network environment that includes a base station designed to receive content generated by one or more electronic devices arranged throughout a surveilled environment;



FIG. 5B is a diagram illustrating a network environment that includes a security management platform that is supported by the network-accessible server system;



FIG. 6 is an architecture flow diagram illustrating an environment including an analytics system for presenting security notifications at a client device based on analysis of content generated at electronic devices in a network-connected security system;



FIG. 7 is a flow diagram illustrating an example process for detecting objects in captured image or video content;



FIG. 8 is a flow diagram illustrating an example process for classifying objects detected in captured image or video content;



FIG. 9 is a diagram illustrating how a distributed computing cluster can be utilized to process content;



FIG. 10 is a diagram illustrating how MapReduce™ can be utilized in combination with Apache Hadoop™ in the distributed computing cluster depicted in FIG. 9;



FIG. 11 is a diagram illustrating how content can be processed in batches;



FIG. 12 is a flow diagram illustrating an example process for presenting notifications at a client device based on analysis of content generated at electronic devices in a network-connected security system; and



FIG. 13 is a diagram illustrating an example of a computer processing system in which at least some operations described herein can be implemented.





The drawings depict various embodiments for the purpose of illustration only. Those skilled in the art will recognize that alternative embodiments may be employed without departing from the principles of the technology. Accordingly, while specific embodiments are shown in the drawings, the technology is amenable to various modifications.


DETAILED DESCRIPTION
Overview

Surveillance is the monitoring of behavior, activities, or other changing information for the purpose of influencing, managing, or protecting people/items in a given environment. Surveillance often requires that the given environment be monitored by means of various electronic devices such as security cameras, security lights, etc. In some instances, surveillance systems (also referred to as “security systems”) are connected to a computer server via a network. Some content generated by a security system may be examined locally (i.e., by the security system itself), while other content generated by the security system may be examined remotely (e.g., by the computer server).


Generally, a network-connected surveillance system (also referred to as a “security system”) includes a base station and one or more electronic devices. The electronic device(s) can be configured to monitor various aspects of a surveilled physical environment (also referred to herein as a “surveilled environment”). For example, security cameras may be configured to record video upon detecting movement, while security lights may be configured to illuminate the surveilled environment upon detecting movement. Different types of electronic devices can create different types of content. Here, for example, the security cameras may generate audio data and/or video data, while the security lights may generate metadata specifying a time at which each illumination event occurred, a duration of each illumination event, etc.


The base station, meanwhile, may be responsible for transmitting the content generated by the electronic device(s) to a network-accessible computer server. Thus, each electronic device may provide data to the base station, which in turn provides at least some of the data to the network-accessible computer server.


Nowadays, security systems support features such as high-quality video recording, live video streaming, two-way audio transmission, cloud-based storage of recordings, instant alerts, etc. These features enable individuals to gain an in-depth understanding of what activities are occurring within the environment being surveilled. However, security systems having these features also experience drawbacks.


For example, once an event is detected by an electronic device, the security system may alert an administrator (e.g., a home owner). Certain alerts are unnecessary because the detected event does not pose any security risk. For instance, if the administrator observes that motion detection was triggered by movement of a bird, the administrator may determine that an alert is not needed. Conversely, if the administrator observes that motion detection was triggered by movement of a coyote, the administrator may determine that an alert is needed. Similar conclusions may be drawn for other routine events (e.g., mail delivery by a postal worker). Administrators may simply ignore the alerts that are not needed (e.g., by deleting the corresponding notifications); however, an abundance of false positive notifications tends to reduce the effectiveness of the security system, as an administrator who is overwhelmed by notifications cannot respond to them effectively.


Introduced herein is a technique for analyzing content generated by electronic devices in a network-connected security system to detect events and generate notifications based on the detected events that addresses the challenges discussed above. For example, a base station may employ cloud-based analytics to verify content (e.g., a video clip) before generating a notification or initiating a peer-to-peer stream to deliver the content to a user device. To verify the content, the base station may be required to contact the network-connected computer server, which can perform the processing needed to filter out unnecessary alerts. In some embodiments, the network-connected computer server is one of multiple network-connected computer servers that form a server system. The server system may balance the load amongst the multiple network-connected computer servers (e.g., by intelligently distributing images for processing) to ensure the verification process is completed with low latency. Similar cloud-based analytics can be employed on content generated by electronic devices to visually detect intruders, audibly detect sounds indicative of a break-in (e.g., an unrecognized voice or a window breaking), audibly detect sounds indicative of catastrophic events such as fire or earthquakes, etc. Further, such analysis may be performed in real time (or near real time) as content is generated so that an administrator is able to quickly respond to notifications of detected events.


Terminology

References in this description to “an embodiment” or “one embodiment” means that the particular feature, function, structure, or characteristic being described is included in at least one embodiment. Occurrences of such phrases do not necessarily refer to the same embodiment, nor are they necessarily referring to alternative embodiments that are mutually exclusive of one another.


Unless the context clearly requires otherwise, the words “comprise” and “comprising” are to be construed in an inclusive sense rather than an exclusive or exhaustive sense (i.e., in the sense of “including but not limited to”). The terms “connected,” “coupled,” or any variant thereof are intended to include any connection or coupling between two or more elements, either direct or indirect. The coupling/connection can be physical, logical, or a combination thereof. For example, devices may be electrically or communicatively coupled to one another despite not sharing a physical connection.


The term “based on” is also to be construed in an inclusive sense rather than an exclusive or exhaustive sense. Thus, unless otherwise noted, the term “based on” is intended to mean “based at least in part on.”


The term “module” refers broadly to software components, hardware components, and/or firmware components. Modules are typically functional components that can generate useful data or other output(s) based on specified input(s). A module may be self-contained. A computer program may include one or more modules. Thus, a computer program may include multiple modules responsible for completing different tasks or a single module responsible for completing all tasks.


When used in reference to a list of multiple items, the word “or” is intended to cover all of the following interpretations: any of the items in the list, all of the items in the list, and any combination of items in the list.


The sequences of steps performed in any of the processes described here are exemplary. However, unless contrary to physical possibility, the steps may be performed in various sequences and combinations. For example, steps could be added to, or removed from, the processes described here. Similarly, steps could be replaced or reordered. Thus, descriptions of any processes are intended to be open-ended.


Example Operating Environment


FIG. 1 is a block diagram illustrating an example environment in which the introduced technique for analysis of content can be implemented. The example environment 100 includes a network-connected security system that includes base station 105 and one or more electronic devices 110 such as cameras 110a, audio recorder devices 110b, security lights 110c, or any other types of security devices.


The base station 105 and the one or more electronic devices 110 can be connected to each other via a local network 125. The local network 125 can be a local area network (LAN). In some embodiments, the local network 125 is a WLAN, such as a home Wi-Fi network, created by one or more wireless access points (APs) 120. In some embodiments, functionality associated with the base station 105 and/or wireless AP 120 is implemented in software instantiated at a wireless networking device. In other words, the system may include multiple wireless networking devices as nodes, wherein each of the wireless networking devices is operable as a wireless AP 120 and/or base station 105. The one or more electronic devices 110 and the base station 105 can be connected to each other wirelessly (e.g., over Wi-Fi) or via wired connections. They can be connected wirelessly via the one or more wireless APs 120, or directly with each other without a wireless AP 120, e.g., using Wi-Fi Direct, Wi-Fi ad hoc, or similar wireless connection technologies. Further, the base station 105 can be connected to the local network 125 via a wired connection or wirelessly.


The one or more electronic devices 110 can be battery powered or powered from a wall outlet. In some embodiments, the one or more electronic devices 110 can include one or more sensors such as motion sensors that can activate, for example, the capturing of audio or video, the encoding of captured audio or video, and/or transmission of an encoded audio or video stream when motion is detected.


Cameras 110a may capture video, encode the video as a video stream, and wirelessly transmit the video stream via local network 125 for delivery to a user device 102. In some embodiments, certain cameras may include integrated encoder components. Alternatively, or in addition, the encoder component may be a separate device coupled to the camera 110a. For example, an analog camera may be communicatively coupled to the base station 105 and/or wireless AP 120 via an analog-to-digital encoder device (not shown in FIG. 1). In some embodiments, the base station 105 and/or wireless APs 120 may include encoding components to encode and/or transcode video. Encoder components may include any combination of software and/or hardware configured to encode video information. Such encoders may be based on any number of different standards such as H.264, H.265, VP8, VP9, Daala, MJPEG, MPEG4, Windows Media Video (WMV), etc. for encoding video information. Accordingly, depending on the codec used, the video stream from a given camera 110a may be in one of several different formats such as .AVI, .MP4, .MOV, .WMV, .MKV, etc. The video stream can include audio as well if the camera 110a includes or is communicatively coupled to an audio device 110b (e.g., a microphone). In some embodiments, cameras 110a can include infrared (IR) light emitting diode (LED) sensors, which can provide night-vision capabilities.


Similarly, audio recording devices 110b may capture audio, encode the audio as an audio stream, and wirelessly transmit the audio stream via local network 125 for delivery to a user device 102. In some embodiments, certain audio recording devices may include integrated encoder components. Alternatively, or in addition, the encoder component may be a separate device coupled to the audio recording device 110b. For example, an analog audio recording device may be communicatively coupled to the base station 105 and/or wireless AP 120 via an analog-to-digital encoder device (not shown in FIG. 1). In some embodiments, the base station 105 and/or wireless APs 120 may include encoding components to encode and/or transcode audio. Encoder components may include any combination of software and/or hardware configured to encode audio information. Such encoders may be based on any number of different standards such as Free Lossless Audio Codec (FLAC), MPEG-4 Audio, Windows Media Audio (WMA), etc. for encoding audio information. Accordingly, depending on the codec used, the audio stream from a given audio recording device 110b may be in one of several different formats such as .FLAC, .WMA, .AAC, etc.


Although the example environment 100 illustrates various types of electronic devices 110a-c, the security system can include just a single type of electronic device (e.g., cameras 110a) or two or more different types of electronic devices 110, which can be installed at various locations of a building. The various electronic devices 110 of the security system may include varying features and capabilities. For example, some electronic devices 110 may be battery powered while others are powered from a wall outlet. Similarly, some electronic devices 110 may connect wirelessly to the base station 105 while others rely on wired connections. In some embodiments, electronic devices of a particular type (e.g., cameras 110a) included in the security system may also include varying features and capabilities. For example, in a given security system, a first camera 110a may include integrated night-vision, audio-recording, and motion-sensing capabilities while a second camera 110a only includes video capture capabilities.


The base station 105 can be a computer system that serves as a gateway to securely connect the one or more electronic devices 110 to an external network 135, for example, via one or more wireless APs 120. The external network 135 may comprise one or more networks of any type including packet switched communications networks, such as the Internet, World Wide Web portion of the Internet, extranets, intranets, and/or various other types of telecommunications networks such as cellular phone and data networks, plain old telephone system (POTS) networks, etc.


The base station 105 can provide various features such as long range wireless connectivity to the electronic devices 110, a local storage device 115, a siren, connectivity to network attached storage (NAS), and enhanced battery life for certain electronic devices 110, e.g., by configuring certain electronic devices 110 for efficient operation and/or by maintaining efficient communications between the base station 105 and such electronic devices 110. The base station 105 can be configured to store the content (e.g., audio and/or video) captured by some electronic devices 110 in either the local storage device 115 or a network-accessible storage 148. The base station 105 can be configured to generate a sound alarm from the siren when an intrusion is detected by the base station 105 based on the video streams received from cameras 110a.


In some embodiments, the base station 105 can create its own network within the local network 125, so that the one or more electronic devices 110 do not overload or consume the network bandwidth of the local network 125. In some embodiments, the local network 125 can include multiple access points 120 to increase wireless coverage of the base station 105, which may be beneficial or required in cases where the electronic devices 110 are wirelessly connected and are spread over a large area.


In some embodiments, the local network 125 can provide wired and/or wireless coverage to user devices (e.g., user device 102), for example, via APs 120. In the example environment 100 depicted in FIG. 1, a user device 102 can connect to the base station 105 via the local network 125 if located close to the base station 105 and/or wireless AP 120. Alternatively, the user device 102 can connect to the base station 105 via network 135 (e.g., the Internet). The user device 102 can be any computing device that can connect to a network and play video content, such as a smartphone, a laptop, a desktop, a tablet personal computer (PC), or a smart TV.


In an example embodiment, when a user 103 sends a request (e.g., from user device 102) to access content (e.g., audio and/or video) captured by any of the electronic devices 110, the base station 105 receives the request and, in response, obtains the encoded stream(s) from one or more of the electronic devices 110 and transmits the encoded stream to the user device 102 for presentation. Upon receiving the encoded stream at the user device 102, a playback application in the user device 102 decodes the encoded stream and plays the audio and/or video to the user 103, for example, via speakers and/or a display of the user device 102.


As previously mentioned, in some embodiments, the base station 105 may include an encoding/transcoding component that performs a coding process on audio and/or video received from the electronic devices 110 before streaming to the user device 102. In an example embodiment, a transcoder at the base station 105 transcodes a stream received from an electronic device 110 (e.g., a video stream from a camera 110a), for example, by decoding the encoded stream and re-encoding the stream into another format to generate a transcoded stream that it then streams to the user device 102.


The audio and/or video stream received at the user device 102 may be a real-time stream and/or a recorded stream. For example, in some embodiments, a transcoder may transcode an encoded stream received from an electronic device 110 and stream the transcoded stream to the user device 102 in real time or near real time (i.e., within several seconds) as the audio and/or video is captured at the electronic device 110. Alternatively, or in addition, audio and/or video streamed by base station 105 to the user device 102 may be retrieved from storage such as local storage 115 or a network-accessible storage 148.


The base station 105 can stream audio and/or video to the user device 102 in multiple ways. For example, the base station 105 can stream to the user device 102 using a peer-to-peer (P2P) streaming technique. In P2P streaming, when the playback application on the user device 102 requests the stream, the base station 105 and the user device 102 may exchange signaling information, for example via network 135 or a network-accessible server system 145, to determine location information of the base station 105 and the user device 102, to find the best path, and to establish a P2P connection to route the stream from the base station 105 to the user device 102. After establishing the connection, the base station 105 streams the audio and/or video to the user device 102, eliminating the additional bandwidth cost of delivering the audio and/or video stream from the base station 105 to a network-accessible server computer 146 in a network-accessible server system 145 and of streaming from the network-accessible server computer 146 to the user device 102. In some embodiments, a network-accessible server computer 146 in the network-accessible server system 145 may keep a log of available peer node servers to route streams and establish the connection between the user device 102 and other peers. In such embodiments, instead of streaming content, the server 146 may function as a signaling server or can include signaling software whose function is to maintain and manage a list of peers and handle the signaling between the base station 105 and the user device 102. In some embodiments, the server 146 can dynamically select the best peers based on geography and network topology.


In some embodiments, the network-accessible server system 145 is a network of resources from a centralized third-party provider using Wide Area Networking (WAN) or Internet-based access technologies. In some embodiments, the network-accessible server system 145 is configured as or operates as part of a cloud network, in which the network and/or computing resources are shared across various customers or clients. Such a cloud network is distinct, independent, and different from that of the local network 125.


In some embodiments, the local network 125 may include a multi-band wireless network comprising one or more wireless networking devices (also referred to herein as nodes) that function as wireless APs 120 and/or a base station 105. For example, with respect to the example environment 100 depicted in FIG. 1, base station 105 may be implemented at a first wireless networking device that functions as a gateway and/or router. That first wireless networking device may also function as a wireless AP. Other wireless networking devices may function as satellite wireless APs that are wirelessly connected to each other via a backhaul link. The multiple wireless networking devices provide wireless network connections (e.g., using Wi-Fi) to one or more wireless client devices such as one or more wireless electronic devices 110 or any other devices such as desktop computers, laptop computers, tablet computers, mobile phones, wearable smart devices, game consoles, smart home devices, etc. The wireless networking devices together provide a single wireless network (e.g., network 125) configured to provide broad coverage to the client devices. The system of wireless networking devices can dynamically optimize the wireless connections of the client devices without the need to reconnect. An example of the multi-band wireless networking system is the NETGEAR® Orbi® system. Such systems are exemplified in U.S. patent application Ser. No. 15/287,711, filed Oct. 6, 2016, and Ser. No. 15/271,912, filed Sep. 21, 2016, now issued as U.S. Pat. No. 9,967,884, both of which are hereby incorporated by reference in their entireties for all purposes.


The wireless networking devices of a multi-band wireless networking system can include radio components for multiple wireless bands, such as the 2.4 GHz frequency band, the low 5 GHz frequency band, and the high 5 GHz frequency band. In some embodiments, at least one of the bands can be dedicated to the wireless communications among the wireless networking devices of the system. Such wireless communications among the wireless networking devices of the system are referred to herein as “backhaul” communications. Any other bands can be used for wireless communications between the wireless networking devices of the system and client devices, such as cameras 110a, connecting to the system. The wireless communications between the wireless networking devices of the system and client devices are referred to as “fronthaul” communications.



FIG. 2 shows a high-level functional block diagram illustrating the architecture of an example electronic device 200 (e.g., similar to electronic devices 110 described with respect to FIG. 1) that monitors various aspects of a surveilled environment. As further described below, the electronic device 200 may generate content while monitoring the surveilled environment, and then transmit the content to a base station for further processing.


The electronic device 200 (also referred to as a “recording device”) can include one or more processors 202, a communication module 204, an optical sensor 206, a motion sensing module 208, a microphone 210, a speaker 212, a light source 214, and one or more storage modules 216.


The processor(s) 202 can execute instructions stored in the storage module(s) 216, which can be any device or mechanism capable of storing information. In some embodiments, a single storage module includes multiple computer programs for performing different operations (e.g., image recognition, noise reduction, filtering), while in other embodiments each computer program is hosted within a separate storage module.


The communication module 204 can manage communication between various components of the electronic device 200. The communication module 204 can also manage communications between the electronic device 200 and a base station, another electronic device, etc. For example, the communication module 204 may facilitate communication with a mobile phone, tablet computer, wireless access point (WAP), etc. As another example, the communication module 204 may facilitate communication with a base station responsible for communicating with a network-connected computer server; more specifically, the communication module 204 may be configured to transmit content generated by the electronic device 200 to the base station for processing. As further described below, the base station may examine the content itself or transmit the content to the network-connected computer server for examination.


The optical sensor 206 (also referred to as an “image sensor”) can be configured to generate optical data related to the surveilled environment. Examples of optical sensors include charge-coupled devices (CCDs), complementary metal-oxide-semiconductor (CMOS) sensors, infrared detectors, etc. In some embodiments, the optical sensor 206 is configured to generate a video recording of the surveilled environment responsive to, for example, determining that movement has been detected within the surveilled environment. In other embodiments, the optical data generated by the optical sensor 206 is used by the motion sensing module 208 to determine whether movement has occurred. The motion sensing module 208 may also consider data generated by other components (e.g., the microphone 210) as input. Note that an electronic device 200 may include multiple optical sensors of different types (e.g., visible light sensors and/or IR sensors for night vision).


The microphone 210 can be configured to record sounds within the surveilled environment. The electronic device 200 may include multiple microphones. In such embodiments, the microphones may be omnidirectional microphones designed to pick up sound from all directions. Alternatively, the microphones may be directional microphones designed to pick up sounds coming from a specific direction. For example, if the electronic device 200 is intended to be mounted in a certain orientation (e.g., such that the optical sensor 206 is facing a doorway), then the electronic device 200 may include at least one microphone arranged to pick up sounds originating from near the point of focus.


The speaker 212, meanwhile, can be configured to convert an electrical audio signal into a corresponding sound that is projected into the surveilled environment. Together with the microphone 210, the speaker 212 enables an individual located within the surveilled environment to converse with another individual located outside of the surveilled environment. For example, the other individual may be a homeowner who has a computer program (e.g., a mobile application) installed on her mobile phone for monitoring the surveilled environment.


The light source 214 can be configured to illuminate the surveilled environment. For example, the light source 214 may illuminate the surveilled environment responsive to a determination that movement has been detected within the surveilled environment. The light source 214 may generate metadata specifying a time at which each illumination event occurred, a duration of each illumination event, etc. This metadata can be examined by the processor(s) 202 and/or transmitted by the communication module 204 to the base station for further processing.


As previously discussed with respect to FIG. 1, electronic devices 110 may be configured as different types of devices such as cameras 110a, audio recording devices 110b, security lights 110c, and other types of devices. Accordingly, embodiments of the electronic device 200 may include some or all of these components, as well as other components not shown here. For example, if the electronic device 200 is a security camera 110a, then some components (e.g., the microphone 210, speaker 212, and/or light source 214) may not be included. As another example, if the electronic device 200 is a security light 110c, then other components (e.g., the optical sensor 206, microphone 210, and/or speaker 212) may not be included.



FIG. 3 is a high-level functional block diagram illustrating an example base station 300 configured to process content generated by electronic devices (e.g., electronic device 200 of FIG. 2) and forward the content to other computing devices such as a network-connected computer server, etc.


The base station 300 can include one or more processors 302, a communication module 304, and one or more storage modules 306. In some embodiments, a single storage module includes multiple computer programs for performing different operations (e.g., image recognition, noise reduction, filtering), while in other embodiments each computer program is hosted within a separate storage module. Moreover, the base station 300 may include a separate storage module for each electronic device within its corresponding surveilled environment, each type of electronic device within its corresponding surveilled environment, etc.


Such a categorization enables the base station 300 to readily identify the content/data generated by security cameras, security lights, etc. The content/data generated by each type of electronic device may be treated differently by the base station 300. For example, the base station 300 may locally process sensitive content/data but transmit less sensitive content/data for processing by a network-connected computer server.


Thus, in some embodiments, the base station 300 processes content/data generated by the electronic devices, for example, to analyze the content to understand what events are occurring within the surveilled environment, while in other embodiments the base station 300 transmits the content/data to a network-connected computer server responsible for performing such analysis.


The communication module 304 can manage communication with electronic device(s) within the surveilled environment and/or the network-connected computer server. In some embodiments, different communication modules handle these communications. For example, the base station 300 may include one communication module for communicating with the electronic device(s) via a short-range communication protocol, such as Bluetooth® or Near Field Communication, and another communication module for communicating with the network-connected computer server via a cellular network or the Internet.



FIG. 4 depicts a network security system that includes a variety of electronic devices configured to collectively monitor a surveilled environment 400 (e.g., the interior and exterior of a home). Here, the variety of electronic devices includes multiple security lights 402a-b, multiple external security cameras 404a-b, and multiple internal security cameras 406a-b. However, those skilled in the art will recognize that the network security system could include any number of security lights, security cameras, and other types of electronic devices. Some or all of these electronic devices are communicatively coupled to a base station 408 that can be located in or near the surveilled environment 400. Each electronic device can be connected to the base station 408 via a wired communication channel or a wireless communication channel.



FIG. 5A illustrates an example network environment 500a that includes a base station 502 designed to receive content generated by one or more electronic devices arranged throughout a surveilled environment. The base station 502 can transmit at least some of the content to a network-accessible server system 506. The network-accessible server system 506 may supplement the content based on information inferred from content uploaded by other base stations corresponding to other surveilled environments.


The base station 502 and the network-accessible server system 506 can be connected to one another via a computer network 504a. The computer network 504a may include a personal area network (PAN), local area network (LAN), wide area network (WAN), metropolitan area network (MAN), cellular network, the Internet, or any combination thereof.



FIG. 5B illustrates an example network environment 500b that includes a security management platform 508 that is supported by the network-accessible server system 506. Users can interface with the security management platform 508 via an interface 510. For example, a homeowner may examine content generated by electronic devices arranged proximate her home via the interface 510.


The security management platform 508 may be responsible for parsing content/data generated by electronic device(s) arranged throughout a surveilled environment to detect occurrences of events within the surveilled environment. The security management platform 508 may also be responsible for creating interfaces through which an individual can view content (e.g., video clips and audio clips), initiate an interaction with someone located in the surveilled environment, manage preferences, etc.


As noted above, the security management platform 508 may reside in a network environment 500b. Thus, the security management platform 508 may be connected to one or more networks 504b-c. Similar to network 504a, networks 504b-c can include PANs, LANs, WANs, MANs, cellular networks, the Internet, etc. Additionally, or alternatively, the security management platform 508 can be communicatively coupled to computing device(s) over a short-range communication protocol, such as Bluetooth® or NFC.


The interface 510 is preferably accessible via a web browser, desktop application, mobile application, or over-the-top (OTT) application. Accordingly, the interface 510 may be viewed on a personal computer, tablet computer, personal digital assistant (PDA), mobile phone, game console, music player, wearable electronic device (e.g., a watch or fitness accessory), network-connected (“smart”) electronic device, (e.g., a television or home assistant device), virtual/augmented reality system (e.g., a head-mounted display), or some other electronic device.


Security Notifications Based on Analysis of Content

As described above, one issue with security systems is the overabundance of alerts generated by the security systems. Consequently, individuals (e.g., administrators of the security systems) may lose interest in the security capabilities of these security systems due to too many undesired notifications.


These undesired notifications may be derived from several different sources. For example, in some embodiments, a security system may detect too many false instances of motion because it relies on a signal generated by an overly sensitive passive infrared (PIR) sensor. As another example, in some embodiments a security system may detect too many false instances of audio because it relies on an overly sensitive audio sensor (which is configured to prompt recording by the security camera).


To reduce the quantity of notifications, a network-connected security system can be configured to filter those notifications deemed likely to be unnecessary. In some cases, the “filtering” of notifications may include receiving notifications and only forwarding a portion of the received notifications deemed necessary to a user. Alternatively, or in addition, “filtering” notifications may refer to detecting multiple events that would otherwise result in notifications to a user (e.g., detected motion) and only generating notifications based on a subset of the detected events for presentation to a user. For example, the base station can apply an algorithm that allows it to detect objects included in a video clip. Moreover, the base station can apply an algorithm that allows it to detect the scene depicted in the video clip. The base station can then remove undesired motion from the video clip. Said another way, the base station can ignore those movements that are indicative of events the corresponding individual does not wish to be notified about. Thereafter, the base station can generate notifications only for those events that survive the “filtering” process. Such action ensures that the corresponding individual is only notified about significant events.
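By way of illustration only, the following Python sketch shows one way such a filtering step could operate; the event attributes, ignored classes, and confidence threshold are hypothetical examples rather than limitations of the described technique:

    from dataclasses import dataclass

    @dataclass
    class DetectedEvent:
        object_class: str  # e.g., "person", "bird", "car"
        identity: str      # e.g., "unknown", "household_member"
        score: float       # detector confidence, 0.0-1.0

    # Hypothetical per-user preferences: classes of routine events the
    # administrator does not wish to be notified about.
    IGNORED_CLASSES = {"bird", "cat", "postal_worker"}

    def filter_events(events, min_score=0.7):
        """Return only the events that survive the filtering process."""
        kept = []
        for event in events:
            if event.object_class in IGNORED_CLASSES:
                continue  # routine event; drop silently
            if event.identity == "household_member":
                continue  # known person; not a security risk
            if event.score < min_score:
                continue  # low confidence; likely a false positive
            kept.append(event)
        return kept

    # Notifications are generated only for the surviving events.
    for event in filter_events([DetectedEvent("person", "unknown", 0.92),
                                DetectedEvent("bird", "unknown", 0.88)]):
        print(f"ALERT: {event.object_class} ({event.identity}) detected")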



FIG. 6 shows a flow diagram of a technique for processing content generated by electronic devices 110 before generating a notification and/or initiating a stream for delivery to a client device 102. Some or all of the steps described with respect to FIG. 6 may be executed, at least in part, by an analytics system 604 deployed on a base station 105, at a network-accessible server system 145, at one or more electronic devices 110, or any combination thereof. In other words, the analytics system 604 depicted in FIG. 6 refers to a functional entity that may include hardware and/or software elements at any one or more of the components depicted in the example operating environment 100 of FIG. 1. Further, while the embodiment is described in the context of a security camera, those skilled in the art will recognize that similar techniques could also be employed with other types of electronic devices.


Initially, one or more security cameras 110a generate content 602, for example, by capturing video and encoding the captured video into digital information. The content 602 may include, for example, one or more digital files including the encoded video.


The content 602 is then fed into an analytics system 604 for processing according to the introduced technique. In some embodiments, the step of feeding the content 602 into the analytics system 604 may include a camera 110a transmitting the generated content 602 over a computer network (e.g., a wired or wireless local network 125) to a base station 105. The base station 105 may then forward the received content 602 to a network-accessible server system 145 that implements the analytics system 604. Alternatively, or in addition, the camera 110a and/or base station 105 may include processing components that implement at least a portion of the analytics system 604.


In some embodiments, content 602 is fed into the analytics system 604 continually as it is generated. For example, in some embodiments, the camera 110a may generate a digital video stream that is transmitted to the analytics system 604 for processing by way of the base station 105.


In some embodiments, content 602 is continually generated by the camera 110a. For example, a camera 110a that is powered by a wall outlet may continually capture video, encode the captured video into a digital stream, and transmit that digital stream for processing by the analytics system.


Alternatively, the camera 110a may be configured to generate content 602 at periodic intervals and/or in response to detecting certain conditions or events. For example, the camera 110a may be equipped with, or in communication with, a motion detector that triggers the capturing and encoding of video when motion in the surveilled environment is detected. In response to receiving an indication of detected motion, the camera 110a may begin generating content 602 by capturing video and encoding the captured video. As another illustrative example, instead of transmitting a continuous stream of content 602, the video camera 110a may transmit small portions of content (e.g., short video clips or still images) at periodic intervals (e.g., every few seconds).


Generating content 602 at periodic intervals and/or in response to detected events may conserve energy at the camera 110a, which may be particularly beneficial for battery-powered cameras 110a. Generating content 602 at periodic intervals and/or in response to detected events may also reduce resource requirements to process the content, for example, when generating notifications. For example, in the case of a surveilled environment, the system may be configured based on an assumption that the video of the surveilled environment is of no interest to an administrator unless the video captures an object in motion.


In some embodiments, content 602 is fed into the analytics system 604 periodically (e.g., daily, weekly, or monthly) or in response to detected events. For example, even if the content 602 is continually generated, such content 602 may be held in storage (e.g., at local storage 115 or a NAS 148) before being released (periodically or in response to detected events) for analysis by the analytics system 604.


Thereafter, the analytics system 604 processes the received content 602 to perform the notification filtering technique described herein. For example, the analytics system 604 may process the received content 602 to detect whether an event has occurred that necessitates a notification to a user. As previously mentioned, processing of the received content 602 may be carried out by processors located at the base station 105, a network-accessible server system 145, or any combination thereof.


In some embodiments, processing of content 602 may include a content recognition process 606 to gain some level of understanding of the information captured in the content 602. For example, the content recognition process 606 may apply computer vision techniques to detect physical objects captured in the content 602. FIG. 7 shows a flow diagram that illustrates an example high-level process 700 for image processing-based object detection that involves, for example, processing content 602 to detect identifiable feature points (step 702), identifying putative point matches (step 704), and detecting an object based on the putative point matches (step 706).
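The description does not prescribe a particular feature detector, but the three steps of process 700 can be approximated with an off-the-shelf detector such as ORB in OpenCV, as in the minimal Python sketch below (the frame paths and match-count threshold are placeholders):

    import cv2

    # Two frames of surveillance video (paths are placeholders).
    img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

    # Step 702: detect identifiable feature points and compute descriptors.
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Step 704: identify putative point matches between the frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Step 706: detect an object based on the putative matches; here, a
    # simple heuristic treats enough consistent matches as a detection.
    if len(matches) > 50:
        print("Putative object detected across frames")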


The content recognition process 606 may further classify such detected objects. For example, given one or more classes of objects (e.g., humans, buildings, cars, animals, etc.), the content recognition process 606 may process the video content 602 to identify instances of various classes of physical objects occurring in the captured video of the surveilled environment.


In some embodiments, the content recognition process 606 may employ deep learning-based video recognition to classify detected objects. In an example deep learning-based video recognition process for detecting a face, raw image data is input as a matrix of pixels. A first representational layer may abstract the pixels and encode edges. A second layer may compose and encode arrangements of edges, for example, to detect objects. A third layer may encode identifiable features such as a nose and eyes. A fourth layer may recognize that the image includes a face based on the arrangement of identifiable features.
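As a minimal sketch of this layered abstraction, the following PyTorch model stacks convolutional blocks that loosely mirror the representational layers described above; the architecture, sizes, and input resolution are illustrative assumptions, not a network actually used by the described system:

    import torch
    import torch.nn as nn

    # Illustrative only: each block loosely corresponds to one layer of
    # abstraction (edges -> arrangements of edges -> facial features ->
    # face/no-face decision).
    face_classifier = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # edges
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # edge arrangements
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # feature parts (eyes, nose)
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(64, 2),                                        # face vs. not-face
    )

    # One dummy RGB frame stands in for raw image data input as pixels.
    logits = face_classifier(torch.randn(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 2])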


An example technique for classifying objects detected in images or video is the Haar Cascade classifier. FIG. 8 shows a flow diagram that illustrates an example high-level process 800 applied by a Haar Cascade classifier, specifically for classifying an object in a piece of content 602 as a face. As shown in FIG. 8, the content 602 (or a portion thereof) is fed into a first stage 802, which determines whether an object that can be classified as a face is present in the content 602. If, based on the processing at the first stage 802, it is determined that the content 602 does not include an object that can be classified as a face, that object is immediately eliminated as an instance of a face. If, based on the processing at the first stage 802, it is determined that the content 602 does include an object that can be classified as a face, the process 800 proceeds to the next stage 804 for further processing. Similar processes are applied at each stage 804, 806, and so on, to some final stage 808.


Notably, each stage in the example process 800 may apply increasing levels of processing that require increasingly more computational resources. A benefit of this cascade technique is that objects that are not faces are eliminated at early stages with relatively little processing. To be classified as a particular type of object (e.g., a face), the content must pass each of the stages 802-808 of the classifier.


Note that the example Haar Cascade classifier process 800 depicted in FIG. 8 is for classifying detected objects as faces; however, similar classifiers may be trained to detect other classes of objects (e.g., car, building, cat, tree, etc.).
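For reference, OpenCV ships pretrained Haar Cascade models, so a staged classifier of the kind depicted in FIG. 8 can be exercised in a few lines of Python; the frame path is a placeholder, and the bundled frontal-face cascade merely stands in for whatever classifier a deployed system would use:

    import cv2

    # OpenCV bundles pretrained Haar Cascade models with the library.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    frame = cv2.imread("frame.png")  # placeholder path
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # detectMultiScale runs the full cascade; candidate regions rejected
    # at an early stage never reach the more expensive later stages.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        print(f"Face candidate at ({x}, {y}), size {w}x{h}")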


Returning to FIG. 6, the content recognition process 606 may also include distinguishing between instances of detected objects. For example, a grouping method may be applied to associate pixels corresponding to a particular class of objects with a particular instance of that class by selecting pixels that are substantially similar to certain other pixels corresponding to that instance, pixels that are spatially clustered, pixel clusters that fit an appearance-based model for the object class, etc. Again, this process may involve applying deep learning (e.g., a convolutional neural network) to distinguish individual instances of detected objects. Some example techniques that can be applied for identifying multiple objects include Regions with Convolutional Neural Network Features (R-CNN), Fast R-CNN, Single Shot Detector (SSD), You Only Look Once (YOLO), etc.
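As one concrete, off-the-shelf member of the R-CNN family named above, torchvision provides a COCO-pretrained Faster R-CNN that returns a box, class label, and confidence score per detected object instance; in the sketch below, a random tensor merely stands in for a decoded video frame:

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # COCO-pretrained detector; weights download on first use.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    frame = torch.rand(3, 480, 640)  # stand-in for a decoded video frame
    with torch.no_grad():
        detections = model([frame])[0]

    # One entry per object *instance* rather than per class.
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score > 0.8:
            print(label.item(), [round(v) for v in box.tolist()])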


The content recognition process 606 may also include recognizing the identity of detected objects (e.g., specific people). For example, the analytics system 604 may receive inputs (e.g., captured images/video) to learn the appearances of instances of certain objects (e.g., specific people) by building machine-learning appearance-based models. Instance segmentations identified based on processing of content 602 can then be compared against such appearance-based models to resolve unique identities for one or more of the detected objects. Identity recognition can be particularly useful in this context, as it would allow the system to ignore the detection of certain known individuals in captured images (e.g., members of a household) while focusing notifications on unknown individuals and/or known unwanted individuals that more likely pose a security threat.


The content recognition process 606 may also include fusing information related to detected objects to gain a semantic understanding of the captured scene. For example, the content recognition process 606 may include fusing semantic information associated with a detected object with geometry and/or motion information of the detected object to infer certain information regarding the scene. Information that may be fused may include, for example, an object's category (i.e., class), identity, location, shape, size, scale, pixel segmentation, orientation, inter-class appearance, activity, and pose. As an illustrative example, the content recognition process 606 may fuse information pertaining to one or more detected objects to determine that a clip of video is capturing a known person (e.g., a neighbor) walking their dog past a house. The same process may be applied to another clip to determine that the other clip is capturing an unknown individual peering into a window of a surveilled house. The analytics system 604 can then use such information to generate notifications only for the scene that presents a heightened security risk (i.e., the unknown person looking in the window) despite motion being detected in both.
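A toy Python sketch of such fusion might combine a detected object's class, identity, location, and activity into a scene-level inference; all of the field names and rules below are hypothetical illustrations, not the system's actual logic:

    def infer_scene(obj):
        """Fuse per-object attributes into a scene-level inference."""
        if obj.get("class") == "person" and obj.get("identity") == "known":
            if obj.get("trajectory") == "passing_by":
                return "neighbor walking past; no notification needed"
        if obj.get("class") == "person" and obj.get("identity") == "unknown":
            if obj.get("location") == "window" and obj.get("activity") == "loitering":
                return "unknown person peering into window; notify administrator"
        return "no security inference"

    print(infer_scene({"class": "person", "identity": "unknown",
                       "location": "window", "activity": "loitering"}))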


In some embodiments, labeled image data (e.g., historical video from one or more sources) may be input to train a neural network (or other machine-learning based models) as part of the content recognition process 606. For example, security experts may input previously captured video from a number of different sources as examples of certain classes of objects (e.g., car, building, cat, tree, etc.) to inform the content recognition process 606.


As alluded to above, after performing content recognition to, for example, detect objects or further to gain a semantic understanding of a captured scene, the analytics system 604 will utilize this information as part of an event detection process 608. Event detection may include detecting recognizable events (e.g., a person walking to the front door) and analyzing certain specifics regarding the detected event (e.g., person's identity, time of day, proximity to other detected events, and other contextual information) to determine if the detected event is indicative of a security threat that warrants a notification.


In some embodiments, determining that a detected event is indicative of a security threat may include comparing data associated with the detected event against a database of data associated with candidate threats, for example, defined based on input from industry security experts. As an illustrative example, by processing video content 602, the analytics system 604 may detect an event characterized by an unknown individual approaching the doorway to a residence. The analytics system 604 may then compare this semantic information regarding the detected event to a database of candidate threats. Based on the comparison, the analytics system 604 may identify a particular candidate threat that matches (within some threshold level of certainty) the detected event. In some embodiments, the process of comparing may include generating, by the analytics system 604, a pattern matching score (e.g., a value between 0 and 10) and identifying the detected event as indicative of the particular candidate security threat if the generated score satisfies a threshold criterion (e.g., above 7 on a scale of 0-10).
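Continuing the illustrative example, a pattern matching score on the 0-10 scale could be computed by summing weights for each attribute of a candidate threat that the detected event matches; the database entries, weights, and threshold below are hypothetical:

    # Hypothetical candidate-threat database: each pattern maps an event
    # attribute to an (expected value, weight) pair on a 0-10 scale.
    CANDIDATE_THREATS = {
        "prowler": {"identity": ("unknown", 4),
                    "location": ("doorway", 3),
                    "time_of_day": ("night", 3)},
    }

    def match_score(event, pattern):
        """Sum the weights of all pattern attributes the event matches."""
        return sum(weight for attr, (value, weight) in pattern.items()
                   if event.get(attr) == value)

    event = {"identity": "unknown", "location": "doorway",
             "time_of_day": "night"}
    for threat, pattern in CANDIDATE_THREATS.items():
        score = match_score(event, pattern)
        if score > 7:  # threshold criterion from the example above
            print(f"Event matches candidate threat '{threat}' (score {score})")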


Alternatively, or in addition, the analytics system 604 may employ machine learning to analyze received content 602 to determine if the content is indicative of a security threat. For example, as previously discussed, the analytics system 604 may apply machine-learning-based behavioral analytics to learn the behavior of objects captured in video images and identify when the behavior of such objects is indicative of a security threat. Applying a machine-learning-based approach may be beneficial in certain instances, as it may alleviate the need to develop complex threat detection rules that rely on preexisting knowledge and that are prone to incorrectly identifying unexpected or rare behavior.
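As a minimal sketch of such behavioral analytics, a generic anomaly detector (here, scikit-learn's IsolationForest) can be fit on feature vectors summarizing past events and then used to flag behavior that departs from the learned pattern; the feature choices and values are invented for illustration:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-event features learned from past content:
    # [hour_of_day, seconds_of_motion, distance_to_door_in_meters]
    history = np.array([[14, 5, 12.0], [15, 6, 11.5], [14, 4, 12.3],
                        [16, 5, 11.8], [15, 7, 12.1]])
    model = IsolationForest(contamination=0.1, random_state=0).fit(history)

    # A 3 a.m. event with prolonged motion close to the door departs from
    # the learned behavior and is flagged as anomalous (-1).
    new_event = np.array([[3, 45, 0.5]])
    if model.predict(new_event)[0] == -1:
        print("Unexpected behavior; treat as potential security threat")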


If, based on the analysis of the received content 602, the analytics system 604 determines that the content is indicative of a security threat, the analytics system may apply a notification generation process 610 to generate one or more notifications for delivery to an administrative user at a user device 102.


In some embodiments, notification generation 610 may include generating one or more notifications 614, for example, in the form of messages that are then transmitted over a computer network to a user device 102 associated with an administrative user. Notifications may include emails, text messages (e.g., SMS, MMS, etc.), automated phone calls, alerts within interface 510, or any other communications medium appropriate for delivery at user device 102.


In some embodiments, notification generation may include transmitting processed and/or filtered content 616 for delivery at the client device. For example, in the case of video content 602, the analytics system may process the received video content 602 and forward content 616 based on the processing to the user device 102. Processed content 616 may include, for example, a shortened video clip that specifically depicts the activity upon which the security threat was identified. Processed content 616 may also include transformations to the original content to highlight the activity upon which the security threat was identified. For example, the analytics system 604 may be configured to process content 602 to remove detected motion from the content that is not indicative of a security threat.


Processed content 616 may be transmitted to the user device 102 in real time (or near real time) as the content 602 is generated by the camera 110a. For example, a camera 110a may transmit content 602 in the form of a continuous stream of video to the analytics system 604. The analytics system 604 may process the received video stream as it is received and only forward portions of the video stream (i.e., processed/filtered content 616) as events are detected that are indicative of a security threat. This processing may occur in real time or near real time as the video is captured at the camera 110a so that an administrative user can effectively respond to the security threats.


Alternatively, or in addition, processed/filtered content 616 may represent time-shifted recordings that are delivered to the user device 102 after the events underlying the recordings have already occurred. For example, an administrative user who does not want to be bothered with the delivery of live streams throughout the day may elect instead to review the recordings for the day once at the end of the day.


In some embodiments, one or more of the electronic devices 110 may be configured to individually analyze certain content (e.g., captured video) and generate notifications. In such embodiments, an analytics system 604 operating apart from such an electronic device may be configured to process such notifications as content 602 as part of a notification filtering process. Notifications received by the analytics system 604 for processing may be referred to as provisional notifications in that they are subject to filtering processes that may result in each notification being forwarded to a user device or discarded/ignored.


As an illustrative example, the content 602 depicted in FIG. 6 as being input to analytics system 604 may include a provisional notification from a camera 110a. For example, the camera 110a may be configured to independently analyze captured video to detect motion and generate notifications based on the detected motion. In that case, the processing of content 602 by the analytics system 604 to detect an event (i.e., event detection process 608) may include analyzing the received provisional notification to interpret or otherwise identify an event (e.g., motion detection) as detected at the camera 110a. Further, the process of causing a notification to be presented to a user (i.e., notification generation 610) if the detected event satisfies a specified criterion may include generating a new notification based on the received provisional notification and/or simply forwarding the received provisional notification for delivery to a user device 102.
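

A minimal sketch of this filtering path follows. The message fields and the confidence-based criterion are illustrative assumptions; the actual criterion applied by the analytics system 604 may take any of the forms discussed above.

def filter_provisional(notification: dict, min_confidence: float = 0.7):
    """Interpret a provisional notification from an electronic device
    (event detection 608) and decide its fate (notification generation
    610). The field names and confidence criterion are illustrative."""
    event = notification.get("event")            # e.g., "motion_detected"
    confidence = notification.get("confidence", 0.0)
    if event and confidence >= min_confidence:
        return notification                      # forward to user device 102
    return None                                  # discard/ignore

forwarded = filter_provisional(
    {"device": "camera-110a", "event": "motion_detected", "confidence": 0.91}
)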


In some embodiments, the analytics system 604 may consider user feedback 618 provided by one or more users. For example, if a user indicates that a certain video clip is undesirable, uninteresting, or otherwise not worthy of a notification, then the analytics system 604 may reduce/eliminate notifications related to future video clips having similar characteristics (e.g., generated at the same time, generated based on the same trigger, including the same visual/audible objects, etc.).


The analytics system 604 may also consider feedback provided by a cohort that includes the corresponding user. The cohort may include users that share a characteristic in common, such as geographical location, notification frequency, etc. For example, if the analytics system 604 considers feedback from users within a neighborhood, the analytics system 604 may learn to filter notifications pertaining to a cat that lives in the neighborhood. As another example, if the analytics system 604 considers feedback from users within a city, the analytics system 604 may learn to filter notifications pertaining to events triggered by weather (e.g., wind or rain).
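

One simple way feedback 618 might be folded into the filter, from the user and from cohort members alike, is to count negative reactions against an event signature and suppress future notifications once a threshold is reached. The signature tuple and vote threshold below are assumptions made for illustration.

from collections import Counter

negative_votes = Counter()  # event signature -> "not interesting" votes
SUPPRESS_AFTER = 3          # assumed cohort-vote threshold

def record_feedback(signature: tuple, interesting: bool) -> None:
    """Record a user's (or cohort member's) reaction to a notification."""
    if not interesting:
        negative_votes[signature] += 1

def should_suppress(signature: tuple) -> bool:
    """Suppress future notifications whose signature the user or their
    cohort has repeatedly voted down (e.g., the neighborhood cat)."""
    return negative_votes[signature] >= SUPPRESS_AFTER

cat_at_night = ("cat", "yard", "night")
for _ in range(3):                      # three neighbors dismiss the clip
    record_feedback(cat_at_night, interesting=False)
assert should_suppress(cat_at_night)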


Various programming models and associated techniques for processing and generating data can be applied by the analytics system 604 to process content 602 and generate notifications. For example, in some embodiments, analytics system 604 may utilize a distributed computing cluster to process content 602. Utilizing a distributed computing architecture can be particularly beneficial when processing large amounts of data such as content received from a security system or multiple security systems. FIG. 9 illustrates how various inputs such as content 602 (e.g., video clips, keystrokes) and session metadata may be received from base station(s) 105, for example, via a network-accessible server 146, and fed into a distributed computing cluster 902. In some embodiments, input data from a development cycle 904 such as ticketing/monitoring information and/or information stored in a knowledge base may also be input to the distributed computing cluster 902.


The distributed computing cluster 902 may represent a logical entity that includes sets of host machines (not shown in FIG. 9) that run instances of services configured for distributed processing of data. In an example embodiment, the distributed computing cluster 902 may comprise an Apache Hadoop™ deployment. Apache Hadoop™ is an open-source software framework for reliable, scalable, and distributed processing of large data sets across clusters of commodity machines. Examples of services/utilities that can be deployed in an Apache Hadoop™ cluster include the Apache Hadoop™ Distributed File System (HDFS), MapReduce™, Apache Hadoop™ YARN, and/or the like. The host computing devices comprising the computing cluster 902 can include physical and/or virtual machines that run instances of roles for the various services/utilities. For example, the Apache™ HDFS service can have the following example roles: a NameNode, a secondary NameNode, a DataNode, and a balancer. In a distributed system such as computing cluster 902, one service may run on multiple host machines.


Apache Hadoop™ software utilities can be employed to facilitate the development of filtering algorithm(s), the acquisition of data pertaining to surveilled environments, and the application of the filtering algorithm(s) to improve real-time analytics. Here, for example, the Apache Hadoop™ software utilities may consider content 602 (e.g., video clips) generated by electronic devices deployed in surveilled environments, as well as user feedback specifying which notifications are desired. The Apache Hadoop™ software utilities can also develop a classification model for classifying content by training a supervised machine learning algorithm. Various machine learning and/or artificial intelligence technologies can be employed to facilitate the development of the classification model.
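

The sketch below illustrates the general shape of such a supervised training step: per-clip features paired with labels derived from user feedback. The features, labels, and choice of logistic regression are illustrative assumptions standing in for whatever classification model the deployment actually develops.

from sklearn.linear_model import LogisticRegression

# Hypothetical per-clip features: [motion_area, person_present, hour_of_day]
X = [
    [0.40, 1, 2],   # person at 2 AM      -> users wanted a notification
    [0.35, 1, 3],
    [0.05, 0, 14],  # wind-blown branches -> users dismissed these
    [0.04, 0, 15],
]
y = [1, 1, 0, 0]    # labels derived from user feedback (1 = notify)

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.30, 1, 1]]))  # likely [1]: notify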



FIG. 10 illustrates how MapReduce™ can be utilized in combination with Apache Hadoop™ in a distributed computing cluster 902 to process various sources of information. MapReduce™ is a programming model for processing/generating big data sets with a parallel, distributed algorithm on a cluster. As shown in FIG. 10, MapReduce™ usually splits an input data set (e.g., content 602 comprising video clips) into independent chunks that are processed by the map tasks in a parallel manner. The framework sorts the outputs of the map tasks, which are then input to the reduce tasks. Ultimately, the output of the reduce tasks may be a classification of the content or an event determination that can be utilized by the analytics system 604 to generate notifications and/or filter content being delivered to a user device 102.
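

The map, shuffle/sort, and reduce phases can be pictured in plain Python, here counting detected-object labels across independent chunks of video-clip metadata. This is a single-process illustration of the programming model only; in a real deployment these phases would run as parallel MapReduce™ tasks on the distributed computing cluster 902.

from itertools import groupby

clips = [  # each "chunk" of the input data set is a list of detected labels
    ["person", "cat"], ["person"], ["cat", "cat"], ["person", "vehicle"],
]

# Map: each chunk is processed independently (in parallel on a cluster).
mapped = [(label, 1) for clip in clips for label in clip]

# Shuffle/sort: the framework groups the map output by key.
mapped.sort(key=lambda kv: kv[0])

# Reduce: aggregate per key; the result can feed event determination.
counts = {k: sum(v for _, v in grp)
          for k, grp in groupby(mapped, key=lambda kv: kv[0])}
print(counts)  # {'cat': 3, 'person': 3, 'vehicle': 1}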



FIG. 11 illustrates how content 602 can be processed in batches by the analytics system 604. Here, for example, video clips generated by security cameras may be processed in groups. In some embodiments, all of the video clips corresponding to a certain segment of surveilled environments (e.g., a particular group of homes) are collected on a periodic basis. For example, video clips may be collected every 15 minutes, 30 minutes, 60 minutes, 120 minutes, etc. Thereafter, each batch of video clips can be processed. After processing has been completed, notifications can be generated by the analytics system 604 and transmitted substantially simultaneously. Thus, users may periodically receive reports including one or more notifications rather than a steady stream of notifications throughout the day. Users may be permitted to manually specify the cadence at which they receive these reports.
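

A sketch of this batching behavior follows, grouping clips into 30-minute collection windows and emitting one report per window. The window length is one of the cadences mentioned above and, per the discussion, could be made user-selectable.

from collections import defaultdict
from datetime import datetime

def batch_key(ts: datetime) -> datetime:
    """Map a clip timestamp onto the start of its 30-minute window."""
    return ts.replace(minute=(ts.minute // 30) * 30, second=0, microsecond=0)

batches = defaultdict(list)
for clip in [
    {"ts": datetime(2019, 1, 3, 9, 5),  "event": "person at door"},
    {"ts": datetime(2019, 1, 3, 9, 20), "event": "vehicle in driveway"},
    {"ts": datetime(2019, 1, 3, 9, 40), "event": "person at door"},
]:
    batches[batch_key(clip["ts"])].append(clip)

for window, grouped in sorted(batches.items()):
    # One report per window, so notifications arrive together rather
    # than as a steady stream throughout the day.
    print(window, "->", [c["event"] for c in grouped])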



FIG. 12 shows a flow chart of an example process 1200 for filtering and/or generating notifications based on analysis of content, according to some embodiments. One or more steps of the example process 1200 may be performed by any one or more of the components of the example computer system 1300 described with respect to FIG. 13. For example, the example process 1200 depicted in FIG. 12 may be represented in instructions stored in memory that are then executed by a processing unit. The process 1200 described with respect to FIG. 12 is an example provided for illustrative purposes and is not to be construed as limiting. Other processes may include more or fewer steps than depicted while remaining within the scope of the present disclosure. Further, the steps depicted in example process 1200 may be performed in a different order than is shown.


Example process 1200 begins at step 1202 with receiving content 602 generated by an electronic device 110 located in a physical environment (e.g., a surveilled environment 400). As previously discussed, the electronic device 110 may be one of several electronic devices 110 associated with a network-connected security system. In some embodiments, the content 602 is received at step 1202 via a computer network, from a base station 105 associated with the network-connected security system.


In some embodiments, the network-connected security system is a video surveillance system and the electronic device 110 is a network-connected video camera 110a. In such embodiments, the content 602 may include video files captured by the network-connected video camera 110a.


In some embodiments, the content 602 received from the electronic device 110 at step 1202 may include a provisional notification generated by the electronic device 110. For example, the electronic device 110 may include processing resources to detect an event based on sensory information (e.g., video) and generate a notification based on the detected event.


Example process 1200 continues at step 1204 with processing the received content 602 to detect an event in the surveilled physical environment.


In the case of video content (e.g., from a camera 110a), the processing of content at step 1204 to detect an event may include processing the video to detect one or more instances of physical objects in the physical environment and analyzing data associated with the detected one or more instances of physical objects to detect a scene captured by the video camera 110a. For example, as previously discussed, a content recognition process 606 (described with respect to FIG. 6) may apply computer vision techniques to detect various instances of physical objects, resolve object identities, and fuse various sources of information to gain a semantic understanding of a scene captured by a video camera. The event detected at step 1204 may be based on this scene understanding.
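

The sketch below fuses a track of per-frame object detections into a simple semantic event. The detection tuple format, the normalized door coordinates, and the "approaching the doorway" rule are illustrative stand-ins for the full scene analysis performed by content recognition process 606.

def detect_event(detections, door_x=0.5, door_y=0.9):
    """Fuse a track of (label, identity, x, y) detections into a simple
    semantic event. Coordinates are normalized to the frame; a strictly
    shrinking distance to the door is read as an approach."""
    people = [d for d in detections if d[0] == "person"]
    if not people:
        return None
    dists = [abs(x - door_x) + abs(y - door_y) for _, _, x, y in people]
    approaching = all(a > b for a, b in zip(dists, dists[1:]))
    if approaching and people[-1][1] == "unknown":
        return "unknown individual approaching the doorway"
    return None

track = [
    ("person", "unknown", 0.10, 0.2),
    ("person", "unknown", 0.30, 0.5),
    ("person", "unknown", 0.45, 0.8),
]
print(detect_event(track))  # -> unknown individual approaching the doorway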


If the received content 602 includes a provisional notification (e.g., generated by an electronic device), the step of detecting the event at step 1204 may include processing the received provisional notification (e.g., by reading or interpreting a message included in the notification) and identifying the event (as detected by the electronic device 110) based on the processing.


Example process 1200 continues at step 1206 with determining if the detected event satisfies a specified criterion. As previously discussed, a purpose of processing content generated by a network-connected security system may be to determine if a notification to a user is necessary. In a security context, a notification is generally understood to be necessary when an event that has occurred in a surveilled environment is abnormal or, more specifically, indicative of a security risk. The specified criterion may therefore differ in various embodiments, but is generally established based on a need to selectively notify a user of activity in the surveilled environment that may be of interest to the user, whether that activity is merely outside of a normal baseline or more specifically indicative of a security risk or threat.


In an illustrative embodiment, the step of determining if the detected event satisfies a specified criterion includes comparing data associated with the detected event against a database of data associated with a plurality of candidate security threats, generating a pattern matching score based on the detected event and a particular candidate security threat of the plurality of candidate security threats and identifying the detected event as indicative of the particular candidate security threat if the generated pattern matching score satisfies a threshold criterion. For example, pattern matching scores may be generated on a scale of 0-10 with a threshold criterion set at 7 to indicate that a detected event is indicative of a particular candidate security threat based on the comparison.


Depending on the computer system performing example process 1200, steps 1204 and 1206 may include transmitting the content 602 to another computing system for processing. For example, if a base station 105 is performing process 1200, steps 1204 and 1206 may include transmitting, by the base station 105, the content 602, via an external network, to an external computing system such as a network-accessible server system 145 to process the content 602 to detect an event and determine if the detected event satisfies a specified criterion.


Example process 1200 concludes at step 1208 with causing a notification to be presented at a user device 102 communicatively coupled to the network-connected security system if the detected event satisfies the specified criterion. In some embodiments, the notification is presented at the user device 102 in real time or near real time as the content 602 is generated by the electronic device 110. For example, a notification may be presented at a user device 102 within seconds or fractions of a second after a portion of video content is captured at a camera 110a associated with a network-connected security system. The latency between content generation and presentation of the notification will depend on certain limitations in the system (e.g., processing resources, network speed, etc.).


In some embodiments, the notification presented at the user device 102 at step 1208 may include an alert message (e.g., emails, text messages (e.g., SMS, MMS, etc.), automated phone calls, alerts within interface 510) informing a user of the event. In other words, in some embodiments, step 1208 may include causing the alert message to be transmitted, via a computer network, to the client device 102.


In some embodiments, the notification presented at the user device 102 at step 1208 may include at least a portion of the content 602 generated by an electronic device 110. In such embodiments, step 1208 may include causing at least a portion of the content 602 received from the electronic device 110 to be transmitted, via a computer network, to the client device 102. In some embodiments, causing the at least a portion of the content 602 to be transmitted to the client device 102 may include initiating a peer-to-peer connection between the electronic device 110 and the client device 102 and causing the portion of content 602 to be transmitted via the peer-to-peer connection.


In cases where the content 602 received at step 1202 includes a provisional notification, step 1208 may include forwarding the received provisional notification if the event (e.g., as detected by the electronic device 110) satisfies the specified criterion.


Computer System


FIG. 13 is a block diagram illustrating an example of a computer system 1300 in which at least some operations described herein can be implemented. For example, some components of the computer system 1300 may be hosted on any one or more of the devices described with respect to operating environment 100 in FIG. 1, such as electronic devices 110, base station 105, APs 120, local storage 115, network-accessible server system 145, and user devices 102.


The computer system 1300 may include one or more central processing units (“processors”) 1302, main memory 1306, non-volatile memory 1310, network adapter 1312 (e.g., network interface), video display 1318, input/output devices 1320, control device 1322 (e.g., keyboard and pointing devices), drive unit 1324 including a storage medium 1326, and signal generation device 1330 that are communicatively connected to a bus 1316. The bus 1316 is illustrated as an abstraction that represents one or more physical buses and/or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. The bus 1316, therefore, can include a system bus, a Peripheral Component Interconnect (PCI) bus or PCI-Express bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (also referred to as “Firewire”).


The computer system 1300 may share a similar computer processor architecture as that of a desktop computer, tablet computer, personal digital assistant (PDA), mobile phone, game console, music player, wearable electronic device (e.g., a watch or fitness tracker), network-connected (“smart”) device (e.g., a television or home assistant device), virtual/augmented reality systems (e.g., a head-mounted display), or another electronic device capable of executing a set of instructions (sequential or otherwise) that specify action(s) to be taken by the computer system 1300.


While the main memory 1306, non-volatile memory 1310, and storage medium 1326 (also called a “machine-readable medium”) are shown to be a single medium, the terms “machine-readable medium” and “storage medium” should be taken to include a single medium or multiple media (e.g., a centralized/distributed database and/or associated caches and servers) that store one or more sets of instructions 1328. The terms “machine-readable medium” and “storage medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computer system 1300.


In general, the routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”). The computer programs typically comprise one or more instructions (e.g., instructions 1304, 1308, 1328) set at various times in various memory and storage devices in a computing device. When read and executed by the one or more processors 1302, the instruction(s) cause the computer system 1300 to perform operations to execute elements involving the various aspects of the disclosure.


Moreover, while embodiments have been described in the context of fully functioning computing devices, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms. The disclosure applies regardless of the particular type of machine or computer-readable media used to actually effect the distribution.


Further examples of machine-readable storage media, machine-readable media, or computer-readable media include recordable-type media such as volatile and non-volatile memory devices 1310, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMS), Digital Versatile Disks (DVDs)), and transmission-type media such as digital and analog communication links.


The network adapter 1312 enables the computer system 1300 to mediate data in a network 1314 with an entity that is external to the computer system 1300 through any communication protocol supported by the computer system 1300 and the external entity. The network adapter 1312 can include a network adapter card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, a bridge router, a hub, a digital media receiver, and/or a repeater.


The network adapter 1312 may include a firewall that governs and/or manages permission to access/proxy data in a computer network and tracks varying levels of trust between different machines and/or applications. The firewall can be any number of modules having any combination of hardware and/or software components able to enforce a predetermined set of access rights between a particular set of machines and applications, machines and machines, and/or applications and applications (e.g., to regulate the flow of traffic and resource sharing between these entities). The firewall may additionally manage and/or have access to an access control list that details permissions including the access and operation rights of an object by an individual, a machine, and/or an application, and the circumstances under which the permission rights stand.


The techniques introduced here can be implemented by programmable circuitry (e.g., one or more microprocessors), software and/or firmware, special-purpose hardwired (i.e., non-programmable) circuitry, or a combination of such forms. Special-purpose circuitry can be in the form of one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), etc.


REMARKS

The foregoing description of various embodiments of the claimed subject matter has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed. Many modifications and variations will be apparent to one skilled in the art. Embodiments were chosen and described in order to best describe the principles of the invention and its practical applications, thereby enabling those skilled in the relevant art to understand the claimed subject matter, the various embodiments, and the various modifications that are suited to the particular uses contemplated.


Although the Detailed Description describes certain embodiments and the best mode contemplated, the technology can be practiced in many ways no matter how detailed the Detailed Description appears. Embodiments may vary considerably in their implementation details, while still being encompassed by the specification. Particular terminology used when describing certain features or aspects of various embodiments should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific embodiments disclosed in the specification, unless those terms are explicitly defined herein. Accordingly, the actual scope of the technology encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the embodiments.


The language used in the specification has been principally selected for readability and instructional purposes. It may not have been selected to delineate or circumscribe the subject matter. It is therefore intended that the scope of the technology be limited not by this Detailed Description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of various embodiments is intended to be illustrative, but not limiting, of the scope of the technology as set forth in the following claims.

Claims
  • 1. A method for selectively notifying a user of certain events in a physical environment surveilled by a network-connected security system, the method comprising: receiving, by a computer system, content generated by an electronic device associated with the network-connected security system, the electronic device located in the physical environment;processing, by the computer system, the content to detect an event in the physical environment; andcausing, by the computer system, a notification to be presented at a user device communicatively coupled to the network-connected security system if the detected event satisfies a specified criterion.
  • 2. The method of claim 1, wherein, the notification is presented at the user device in real time or near real time as the content is generated by the electronic device.
  • 3. The method of claim 1, wherein the content includes video files.
  • 4. The method of claim 1, wherein the network-connected security system is a video surveillance system and the electronic device is a network-connected video camera.
  • 5. The method of claim 4, wherein the content includes video captured by the network-connected video camera and wherein processing the content to detect the event includes: processing, by the computer system, the video to detect one or more instances of physical objects in the physical environment; andanalyzing, by the computer system, data associated with the detected one or more instances of physical objects to detect a scene captured by the network-connected video camera;wherein the event is detected based on the detected scene captured by the network-connected video camera.
  • 6. The method of claim 1, further comprising: determining, by the computer system, that the detected event satisfies the specified criterion by: comparing, by the computer system, data associated with the detected event against a database of data associated with a plurality of candidate security threats;generating, by the computer system, a pattern matching score based on the detected event and a particular candidate security threat of the plurality of candidate security threats; andidentifying, by the computer system, the detected event as indicative of the particular candidate security threat if the generated pattern matching score satisfies a threshold criterion.
  • 7. The method of claim 1, wherein the notification includes any of an alert message or a portion of the content generated by the electronic device.
  • 8. The method of claim 1, wherein the computer system is part of a base station associated with the network-connected security system.
  • 9. The method of claim 8, wherein processing the content includes transmitting, by the base station, at least a portion of the content to a network-accessible server system for additional processing.
  • 10. The method of claim 1, wherein causing the notification to be presented at a user device includes: causing, by the computer system, an alert message to be transmitted, via a computer network, to the client device.
  • 11. The method of claim 1, wherein causing the notification to be presented at a user device includes: causing, by the computer system, at least a portion of the content received from the electronic device to be transmitted, via a computer network, to the client device.
  • 12. The method of claim 11, wherein causing the at least portion of the content to be transmitted to the client device includes: initiating, by the computer system, a peer-to-peer connection between the electronic device and the client device, andwherein the at least portion of the content is transmitted via the peer-to-peer connection.
  • 13. The method of claim 1, wherein the content generated by the electronic device includes a provisional notification of the event as detected by the electronic device, and wherein processing the content to detect the event includes identifying the event as detected by the electronic device based on the provisional notification, and wherein causing the notification to be presented at the user device includes forwarding the provisional notification if the detected event satisfies the specified criterion.
  • 14. The method of claim 1, further comprising: receiving, by the computer system, feedback data from the user and/or a cohort of users that share a characteristic in common with the user, anddetermining, by the computer system, whether the detected event satisfies the specified criterion based at least in part on the received feedback data.
  • 15. A method for selectively notifying a user of certain events in a physical environment surveilled by a network-connected security system, the network-connected security system comprising a base station and one or more electronic devices, the base station and one or more electronic devices communicatively coupled over a local network, the base station operable as a gateway to securely connect the one or more electronic devices to an external network, the method comprising: receiving, by the base station, via the local network, content generated by an electronic device associated with the network-connected security system, the electronic device located in the physical environment;processing, by the base station, the content to detect an event in the physical environment; andcausing, by the base station, a notification to be presented at a user device communicatively coupled to the network-connected security system if the detected event satisfies a specified criterion.
  • 16. The method of claim 15, wherein, the notification is presented at the user device in real time or near real time as the content is generated by the electronic device.
  • 17. The method of claim 15, wherein processing the content includes: transmitting, by the base station, the content, via the external network, to a server configured to process the content to determine if the event satisfies the specified criterion; andreceiving, by the base station, a message from the server indicating whether the event satisfies the specified criterion.
  • 18. The method of claim 15, wherein the network-connected security system is a video surveillance system and the electronic device is a network-connected video camera.
  • 19. The method of claim 18, wherein the content includes video captured by the network-connected video camera and wherein processing the content to detect the event includes: processing, by the base station, the video to detect one or more instances of physical objects in the physical environment; andanalyzing, by the base station, data associated with the detected one or more instances of physical objects to detect a scene captured by the network-connected video camera;wherein the event is detected based on the detected scene captured by the network-connected video camera.
  • 20. The method of claim 15, wherein the notification includes any of an alert message or a portion of the content generated by the electronic device.
  • 21. The method of claim 15, wherein the content generated by the electronic device includes a provisional notification of the event as detected by the electronic device and wherein causing the notification to be presented at the user device includes forwarding, by the base station, the provisional notification if the detected event satisfies the specified criterion.
  • 22. A wireless networking device comprising: a network interface for communicating over a local network;a processor; anda memory communicatively coupled to the processor, the memory having instructions stored thereon, which when executed by the processor, cause the wireless networking device to: receive content generated by an electronic device associated with a network-connected security system, the electronic device located in a physical environment;process the content to detect an event in the physical environment; andcause a notification to be presented at a user device communicatively coupled to the network-connected security system if the detected event satisfies a specified criterion.
  • 23. The wireless networking device of claim 22, wherein, the notification is presented at the user device in real time or near real time as the content is generated by the electronic device.
  • 24. The wireless networking device of claim 22, wherein the electronic device is a network-connected video camera, wherein the content includes video captured by the network-connected video camera, and wherein processing the content to detect the event includes: processing the video to detect one or more instances of physical objects in the physical environment; andanalyzing data associated with the detected one or more instances of physical objects to detect a scene captured by the network-connected video camera;wherein the event is detected based on the detected scene captured by the network-connected video camera.
  • 25. The wireless networking device of claim 22, wherein the memory has further instructions stored thereon, which when executed by the processor, cause the wireless networking device to further: determine the detected event satisfies the specified criterion by: comparing data associated with the detected event against a database of data associated with a plurality of candidate security threats;generating a pattern matching score based on the detected event and a particular candidate security threat of the plurality of candidate security threats; andidentifying the detected event as indicative of the particular candidate security threat if the generated pattern matching score satisfies a threshold criterion.
  • 26. The wireless networking device of claim 22, wherein the notification includes any of an alert message or a portion of the content generated by the electronic device.
  • 27. The wireless networking device of claim 22, wherein causing the notification to be presented at a user device includes: causing an alert message and/or at least a portion of the content to be transmitted to the client device.
  • 28. The wireless networking device of claim 22, wherein the memory has further instructions stored thereon, which when executed by the processor, cause the wireless networking device to further: receive feedback data from the user and/or a cohort of users that share a characteristic in common with the user; anddetermine whether the detected event satisfies the specified criterion based at least in part on the received feedback data.
  • 29. The wireless networking device of claim 22, further comprising: a second network interface for communicating over an external network;wherein the instructions for processing the content include instructions for: transmitting the content, via the second network interface, to a server configured to process the content to determine if the event satisfies the specified criterion; andreceiving, via the second network interface, a message from the server indicating whether the event satisfies the specified criterion.
CROSS-REFERENCE TO RELATED APPLICATION(S)

This application is entitled to the benefit and/or right of priority of U.S. Provisional Application No. 62/644,847 (Attorney Docket No. 110729-8095.US00), titled, “ELASTIC PROCESSING FOR VIDEO ANALYSIS AND NOTIFICATION ENHANCEMENTS,” filed Mar. 19, 2018, the contents of which are hereby incorporated by reference in their entirety for all purposes. This application is therefore entitled to a priority date of Mar. 19, 2018.

Provisional Applications (1)
Number Date Country
62644847 Mar 2018 US