The disclosed implementations relate generally to video processing, including, but not limited to, dynamically adapting the encoding bitrate for a video stream.
Video surveillance produces a large amount of continuous video data over the course of hours, days, and even months. Transmitting these large amounts of data for analysis and/or review can be challenging when network conditions are poor or inconsistent. If the encoded bitrate exceeds the capabilities of the network, data may be lost or the camera may cease to operate properly. However, if the encoded bitrate is kept low even when network conditions are good, events occurring within the video may be more difficult to detect and/or analyze.
It is a challenge to select optimal bitrates in view of changing network conditions and the need to effectively detect and analyze captured events.
Current camera devices are agnostic to available bandwidth (e.g., available WiFi bandwidth) and always stream at some predefined video encoding bitrate/frame rate level. Under poor connectivity, where the upload rate cannot match the encoding rate, the camera accumulates media data in a local buffer (e.g., a video buffer). This leads to high end-to-end latency, and eventually the camera may reboot due to buffer overflow or by server command. Thus, it is desirable to have the camera adaptively change its encoding bitrate based on the available bandwidth. This enhances the live-viewing user experience and better preserves video history (no data is lost due to camera reboots) under poor connectivity conditions.
Accordingly, there is a need for dynamic adaptation of encoding bitrate for video streaming. Such methods optionally complement or replace conventional methods for encoding bitrates for video streaming. The various implementations described herein include systems, methods, and/or devices used to dynamically adapt encoding bitrates.
(A1) In one aspect, some implementations include a method performed at a camera device having an image sensor, one or more processors, and memory storing one or more programs for execution by the one or more processors. The method includes: (i) capturing a stream of images using the image sensor; and (ii) while capturing the stream of images: (a) encoding a first portion of the stream of images with a first bitrate; (b) transmitting the encoded first portion of the stream of images to a server system; (c) obtaining one or more transmission metrics for the transmitted first portion of the stream of images; (d) based on the one or more transmission metrics, encoding a second portion of the stream of images with a second bitrate, distinct from the first bitrate; and (e) transmitting the encoded second portion of the stream of images to the server system.
(A2) In some implementations of the method of A1, the one or more transmission metrics include at least one of: (i) a transmission rate for the encoded first portion of the stream of images; and (ii) a buffer latency for the encoded first portion of the stream of images.
(A3) In some implementations of the method of any one of A1-A2, the method further includes obtaining at least one of the one or more transmission metrics from the server system.
(A4) In some implementations of the method of any one of A1-A3: (i) the camera device includes a video buffer; (ii) the method further includes, prior to transmitting the first portion of the stream of images, holding the encoded first portion of the stream of images in the video buffer; and (iii) transmitting the encoded first portion of the stream of images to the server system includes transmitting the encoded first portion of the stream of images from the video buffer to the server system.
(A5) In some implementations of the method of any one of A1-A4, the method further includes: (i) determining an input rate and an output rate for the video buffer; and (ii) calculating a transmission rate for the first portion of the stream of images based on the determined input rate and the determined output rate; where the one or more transmission metrics includes the calculated transmission rate.
(A6) In some implementations of the method of any one of A1-A5: (i) each image in the encoded first portion of the stream of images has an associated timestamp; and (ii) the method further includes calculating a transmission latency for the first portion of the stream of images based on timestamps for images in the video buffer; where the one or more transmission metrics includes the calculated transmission latency.
(A7) In some implementations of the method of any one of A1-A6: (i) the one or more transmission metrics indicate that the first bitrate exceeds transmission bandwidth available to the camera device for transmitting to the server system; and (ii) the second bitrate is lower than the first bitrate.
(A8) In some implementations of the method of any one of A1-A6: (i) the one or more transmission metrics indicate that the first bitrate does not exceed transmission bandwidth available to the camera device for transmitting to the server system; and (ii) the second bitrate is higher than the first bitrate.
(A9) In some implementations of the method of any one of A1-A8, encoding the second portion of the stream of images with the second bitrate, distinct from the first bitrate, comprises adjusting one or more of: (i) a frame rate of the stream of images; and (ii) an image resolution of the stream of images.
(A10) In some implementations of the method of any one of A1-A9, the method further includes selecting the second bitrate based on the one or more transmission metrics and one or more additional factors.
(A11) In some implementations of the method of any one of A1-A10: (i) capturing the stream of images using the image sensor comprises capturing the stream of images at a first resolution; and (ii) the method further includes: (a) obtaining one or more second transmission metrics for the transmitted second portion of the stream of images; (b) based on the one or more second transmission metrics, forgoing capturing the stream of images at the first resolution; and (c) capturing the stream of images at a second resolution.
In another aspect, some implementations include a camera device having an image sensor and one or more controllers coupled to the image sensor. In some implementations, the one or more controllers are configured to perform any of the methods described herein (e.g., A1-A11 described above).
In yet another aspect, some implementations include a non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by a camera device with one or more controllers, cause the camera device to perform any of the methods described herein (e.g., A1-A11 described above).
In yet another aspect, some implementations include a computing system with the means to perform any of the methods described herein (e.g., A1-A11 described above).
Thus, devices, storage mediums, and computing systems are provided with methods for dynamically adapting encoding bitrates for video streaming, thereby increasing the effectiveness, efficiency, and user satisfaction with such systems. Such methods may complement or replace conventional methods for encoding bitrates for video streaming.
For a better understanding of the various described implementations, reference should be made to the Description of Implementations below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
Like reference numerals refer to corresponding parts throughout the several views of the drawings.
Reference will now be made in detail to implementations, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described implementations. However, it will be apparent to one of ordinary skill in the art that the various described implementations may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the implementations.
This disclosure provides example devices, user interfaces, data processing systems, and methods for encoding bitrates.
It is to be appreciated that “smart home environments” may refer to smart environments for homes such as a single-family house, but the scope of the present teachings is not so limited. The present teachings are also applicable, without limitation, to duplexes, townhomes, multi-unit apartment buildings, hotels, retail stores, office buildings, industrial buildings, and more generally any living space or work space.
It is also to be appreciated that while the terms user, customer, installer, homeowner, occupant, guest, tenant, landlord, repair person, and the like may be used to refer to the person or persons acting in the context of some particular situations described herein, these references do not limit the scope of the present teachings with respect to the person or persons who are performing such actions. Thus, for example, the terms user, customer, purchaser, installer, subscriber, and homeowner may often refer to the same person in the case of a single-family residential dwelling, because the head of the household is often the person who makes the purchasing decision, buys the unit, and installs and configures the unit, and is also one of the users of the unit. However, in other scenarios, such as a landlord-tenant environment, the customer may be the landlord with respect to purchasing the unit, the installer may be a local apartment supervisor, a first user may be the tenant, and a second user may again be the landlord with respect to remote control functionality. Importantly, while the identity of the person performing the action may be germane to a particular advantage provided by one or more of the implementations, such identity should not be construed in the descriptions that follow as necessarily limiting the scope of the present teachings to those particular individuals having those particular identities.
The depicted structure 150 includes a plurality of rooms 152, separated at least partly from each other via walls 154. The walls 154 may include interior walls or exterior walls. Each room may further include a floor 156 and a ceiling 158. Devices may be mounted on, integrated with and/or supported by a wall 154, floor 156 or ceiling 158.
In some implementations, the integrated devices of the smart home environment 100 include intelligent, multi-sensing, network-connected devices that integrate seamlessly with each other in a smart home network (e.g., smart home network 202).
In some implementations, the one or more smart thermostats 102 detect ambient climate characteristics (e.g., temperature and/or humidity) and control an HVAC system 103 accordingly. For example, a respective smart thermostat 102 includes an ambient temperature sensor.
The one or more smart hazard detectors 104 may include thermal radiation sensors directed at respective heat sources (e.g., a stove, oven, other appliances, a fireplace, etc.). For example, a smart hazard detector 104 in a kitchen 153 includes a thermal radiation sensor directed at a stove/oven 112. A thermal radiation sensor may determine the temperature of the respective heat source (or a portion thereof) at which it is directed and may provide corresponding blackbody radiation data as output.
The smart doorbell 106 and/or the smart door lock 120 may detect a person's approach to or departure from a location (e.g., an outer door), control doorbell/door locking functionality (e.g., receive user inputs from a portable electronic device 166-1 to actuate a bolt of the smart door lock 120), announce a person's approach or departure via audio or visual means, and/or control settings on a security system (e.g., to activate or deactivate the security system when occupants come and go).
The smart alarm system 122 may detect the presence of an individual within close proximity (e.g., using built-in IR sensors), sound an alarm (e.g., through a built-in speaker, or by sending commands to one or more external speakers), and send notifications to entities or users within/outside of the smart home environment 100. In some implementations, the smart alarm system 122 also includes one or more input devices or sensors (e.g., keypad, biometric scanner, NFC transceiver, microphone) for verifying the identity of a user, and one or more output devices (e.g., display, speaker). In some implementations, the smart alarm system 122 may also be set to an “armed” mode, such that detection of a trigger condition or event causes the alarm to be sounded unless a disarming action is performed.
In some implementations, the smart home environment 100 includes one or more intelligent, multi-sensing, network-connected wall switches 108 (hereinafter referred to as “smart wall switches 108”), along with one or more intelligent, multi-sensing, network-connected wall plug interfaces 110 (hereinafter referred to as “smart wall plugs 110”). The smart wall switches 108 may detect ambient lighting conditions, detect room-occupancy states, and control a power and/or dim state of one or more lights. In some instances, smart wall switches 108 may also control a power state or speed of a fan, such as a ceiling fan. The smart wall plugs 110 may detect occupancy of a room or enclosure and control supply of power to one or more wall plugs (e.g., such that power is not supplied to the plug if nobody is at home).
In some implementations, the smart home environment 100 of
In some implementations, the smart home environment 100 includes one or more network-connected cameras 118 that are configured to provide video monitoring and security in the smart home environment 100. The cameras 118 may be used to determine occupancy of the structure 150 and/or particular rooms 152 in the structure 150, and thus may act as occupancy sensors. For example, video captured by the cameras 118 may be processed to identify the presence of an occupant in the structure 150 (e.g., in a particular room 152). Specific individuals may be identified based, for example, on their appearance (e.g., height, face) and/or movement (e.g., their walk/gait). Cameras 118 may additionally include one or more sensors (e.g., IR sensors, motion detectors), input devices (e.g., microphone for capturing audio), and output devices (e.g., speaker for outputting audio).
The smart home environment 100 may additionally or alternatively include one or more other occupancy sensors (e.g., the smart doorbell 106, smart door locks 120, touch screens, IR sensors, microphones, ambient light sensors, motion detectors, smart nightlights 170, etc.). In some implementations, the smart home environment 100 includes radio-frequency identification (RFID) readers (e.g., in each room 152 or a portion thereof) that determine occupancy based on RFID tags located on or embedded in occupants. For example, RFID readers may be integrated into the smart hazard detectors 104.
The smart home environment 100 may also include communication with devices outside of the physical home but within a proximate geographical range of the home. For example, the smart home environment 100 may include a pool heater monitor 114 that communicates a current pool temperature to other devices within the smart home environment 100 and/or receives commands for controlling the pool temperature. Similarly, the smart home environment 100 may include an irrigation monitor 116 that communicates information regarding irrigation systems within the smart home environment 100 and/or receives control information for controlling such irrigation systems.
By virtue of network connectivity, one or more of the smart home devices of
As discussed above, users may control smart devices in the smart home environment 100 using a network-connected computer or portable electronic device 166. In some examples, some or all of the occupants (e.g., individuals who live in the home) may register their device 166 with the smart home environment 100. Such registration may be made at a central server to authenticate the occupant and/or the device as being associated with the home and to give permission to the occupant to use the device to control the smart devices in the home. An occupant may use their registered device 166 to remotely control the smart devices of the home, such as when the occupant is at work or on vacation. The occupant may also use their registered device to control the smart devices when the occupant is actually located inside the home, such as when the occupant is sitting on a couch inside the home. It should be appreciated that instead of or in addition to registering devices 166, the smart home environment 100 may make inferences about which individuals live in the home and are therefore occupants and which devices 166 are associated with those individuals. As such, the smart home environment may “learn” who is an occupant and permit the devices 166 associated with those individuals to control the smart devices of the home.
In some implementations, in addition to containing processing and sensing capabilities, devices 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, and/or 122 (collectively referred to as “the smart devices”) are capable of data communications and information sharing with other smart devices, a central server or cloud-computing system, and/or other devices that are network-connected. Data communications may be carried out using any of a variety of custom or standard wireless protocols (e.g., IEEE 802.15.4, Wi-Fi, ZigBee, 6LoWPAN, Thread, Z-Wave, Bluetooth Smart, ISA100.11a, WirelessHART, MiWi, etc.) and/or any of a variety of custom or standard wired protocols (e.g., Ethernet, HomePlug, etc.), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
In some implementations, the smart devices serve as wireless or wired repeaters. In some implementations, a first one of the smart devices communicates with a second one of the smart devices via a wireless router. The smart devices may further communicate with each other via a connection (e.g., network interface 160) to a network, such as the Internet 162. Through the Internet 162, the smart devices may communicate with a smart home provider server system 164 (also called a central server system and/or a cloud-computing system herein). The smart home provider server system 164 may be associated with a manufacturer, support entity, or service provider associated with the smart device(s). In some implementations, a user is able to contact customer support using a smart device itself rather than needing to use other communication means, such as a telephone or Internet-connected computer. In some implementations, software updates are automatically sent from the smart home provider server system 164 to smart devices (e.g., when available, when purchased, or at routine intervals).
In some implementations, the network interface 160 includes a conventional network device (e.g., a router), and the smart home environment 100 of
In some implementations, smart home environment 100 includes a local storage device 190 for storing data related to, or output by, smart devices of smart home environment 100. In some implementations, the data includes one or more of: video data output by a camera device (e.g., camera 118), metadata output by a smart device, settings information for a smart device, usage logs for a smart device, and the like. In some implementations, local storage device 190 is communicatively coupled to one or more smart devices via a smart home network (e.g., smart home network 202).
In some implementations, some low-power nodes are incapable of bidirectional communication. These low-power nodes send messages, but they are unable to “listen”. Thus, other devices in the smart home environment 100, such as the spokesman nodes, cannot send information to these low-power nodes.
In some implementations, some low-power nodes are capable of only a limited bidirectional communication. For example, other devices are able to communicate with the low-power nodes only during a certain time period.
As described, in some implementations, the smart devices serve as low-power and spokesman nodes to create a mesh network in the smart home environment 100. In some implementations, individual low-power nodes in the smart home environment regularly send out messages regarding what they are sensing, and the other low-powered nodes in the smart home environment—in addition to sending out their own messages—forward the messages, thereby causing the messages to travel from node to node (i.e., device to device) throughout the smart home network 202. In some implementations, the spokesman nodes in the smart home network 202, which are able to communicate using a relatively high-power communication protocol, such as IEEE 802.11, are able to switch to a relatively low-power communication protocol, such as IEEE 802.15.4, to receive these messages, translate the messages to other communication protocols, and send the translated messages to other spokesman nodes and/or the smart home provider server system 164 (using, e.g., the relatively high-power communication protocol). Thus, the low-powered nodes using low-power communication protocols are able to send and/or receive messages across the entire smart home network 202, as well as over the Internet 162 to the smart home provider server system 164. In some implementations, the mesh network enables the smart home provider server system 164 to regularly receive data from most or all of the smart devices in the home, make inferences based on the data, facilitate state synchronization across devices within and outside of the smart home network 202, and send commands to one or more of the smart devices to perform tasks in the smart home environment.
As described, the spokesman nodes and some of the low-powered nodes are capable of “listening.” Accordingly, users, other devices, and/or the smart home provider server system 164 may communicate control commands to the low-powered nodes. For example, a user may use the electronic device 166 (e.g., a smart phone) to send commands over the Internet to the smart home provider server system 164, which then relays the commands to one or more spokesman nodes in the smart home network 202. The spokesman nodes may use a low-power protocol to communicate the commands to the low-power nodes throughout the smart home network 202, as well as to other spokesman nodes that did not receive the commands directly from the smart home provider server system 164.
In some implementations, a smart nightlight 170 (
Other examples of low-power nodes include battery-operated versions of the smart hazard detectors 104. These smart hazard detectors 104 are often located in an area without access to constant and reliable power and may include any number and type of sensors, such as smoke/fire/heat sensors (e.g., thermal radiation sensors), carbon monoxide/dioxide sensors, occupancy/motion sensors, ambient light sensors, ambient temperature sensors, humidity sensors, and the like. Furthermore, smart hazard detectors 104 may send messages that correspond to each of the respective sensors to the other devices and/or the smart home provider server system 164, such as by using the mesh network as described above.
Examples of spokesman nodes include smart doorbells 106, smart thermostats 102, smart wall switches 108, and smart wall plugs 110. These devices are often located near and connected to a reliable power source, and therefore may include more power-consuming components, such as one or more communication chips capable of bidirectional communication in a variety of protocols.
In some implementations, the smart home environment 100 includes service robots 168 (
In some implementations, the devices and services platform 300 communicates with and collects data from the smart devices of the smart home environment 100. In addition, in some implementations, the devices and services platform 300 communicates with and collects data from a plurality of smart home environments across the world. For example, the smart home provider server system 164 collects home data 302 from the devices of one or more smart home environments 100, where the devices may routinely transmit home data or may transmit home data in specific instances (e.g., when a device queries the home data 302). Example collected home data 302 includes, without limitation, power consumption data, blackbody radiation data, occupancy data, HVAC settings and usage data, carbon monoxide levels data, carbon dioxide levels data, volatile organic compounds levels data, sleeping schedule data, cooking schedule data, inside and outside temperature and humidity data, television viewership data, inside and outside noise level data, pressure data, video data, etc.
In some implementations, the smart home provider server system 164 provides one or more services 304 to smart homes and/or third parties. Example services 304 include, without limitation, software updates, customer support, sensor data collection/logging, remote access, remote or distributed control, and/or use suggestions (e.g., based on collected home data 302) to improve performance, reduce utility cost, increase safety, etc. In some implementations, data associated with the services 304 is stored at the smart home provider server system 164, and the smart home provider server system 164 retrieves and transmits the data at appropriate times (e.g., at regular intervals, upon receiving a request from a user, etc.).
In some implementations, the extensible devices and services platform 300 includes a processing engine 306, which may be concentrated at a single server or distributed among several different computing entities without limitation. In some implementations, the processing engine 306 includes engines configured to receive data from the devices of smart home environments 100 (e.g., via the Internet 162 and/or a network interface 160), to index the data, to analyze the data and/or to generate statistics based on the analysis or as part of the analysis. In some implementations, the analyzed data is stored as derived home data 308.
Results of the analysis or statistics may thereafter be transmitted back to the device that provided home data used to derive the results, to other devices, to a server providing a webpage to a user of the device, or to other non-smart device entities. In some implementations, usage statistics, usage statistics relative to use of other devices, usage patterns, and/or statistics summarizing sensor readings are generated by the processing engine 306 and transmitted. The results or statistics may be provided via the Internet 162. In this manner, the processing engine 306 may be configured and programmed to derive a variety of useful information from the home data 302. A single server may include one or more processing engines.
The derived home data 308 may be used at different granularities for a variety of useful purposes, ranging from explicit programmed control of the devices on a per-home, per-neighborhood, or per-region basis (for example, demand-response programs for electrical utilities), to the generation of inferential abstractions that may assist on a per-home basis (for example, an inference may be drawn that the homeowner has left for vacation and so security detection equipment may be put on heightened sensitivity), to the generation of statistics and associated inferential abstractions that may be used for government or charitable purposes. For example, processing engine 306 may generate statistics about device usage across a population of devices and send the statistics to device users, service providers or other entities (e.g., entities that have requested the statistics and/or entities that have provided monetary compensation for the statistics).
In some implementations, to encourage innovation and research and to increase products and services available to users, the devices and services platform 300 exposes a range of application programming interfaces (APIs) 310 to third parties, such as charities 314, governmental entities 316 (e.g., the Food and Drug Administration or the Environmental Protection Agency), academic institutions 318 (e.g., university researchers), businesses 320 (e.g., providing device warranties or service to related equipment, targeting advertisements based on home data), utility companies 324, and other third parties. The APIs 310 are coupled to and permit third-party systems to communicate with the smart home provider server system 164, including the services 304, the processing engine 306, the home data 302, and the derived home data 308. In some implementations, the APIs 310 allow applications executed by the third parties to initiate specific data processing tasks that are executed by the smart home provider server system 164, as well as to receive dynamic updates to the home data 302 and the derived home data 308.
For example, third parties may develop programs and/or applications (e.g., web applications or mobile applications) that integrate with the smart home provider server system 164 to provide services and information to users. Such programs and applications may be, for example, designed to help users reduce energy consumption, to preemptively service faulty equipment, to prepare for high service demands, to track past service performance, etc., and/or to perform other beneficial functions or tasks.
In some implementations, processing engine 306 includes a challenges/rules/compliance/rewards paradigm 410d that informs a user of challenges, competitions, rules, compliance regulations and/or rewards and/or that uses operation data to determine whether a challenge has been met, a rule or regulation has been complied with and/or a reward has been earned. The challenges, rules, and/or regulations may relate to efforts to conserve energy, to live safely (e.g., reducing the occurrence of heat-source alerts, reducing exposure to toxins or carcinogens), to conserve money and/or equipment life, to improve health, etc. For example, one challenge may involve participants turning down their thermostat by one degree for one week. Those participants that successfully complete the challenge are rewarded, such as with coupons, virtual currency, status, etc. Regarding compliance, an example involves a rental-property owner making a rule that no renters are permitted to access certain of the owner's rooms. The devices in the room having occupancy sensors may send updates to the owner when the room is accessed.
In some implementations, processing engine 306 integrates or otherwise uses extrinsic information 412 from extrinsic sources to improve the functioning of one or more processing paradigms. Extrinsic information 412 may be used to interpret data received from a device, to determine a characteristic of the environment near the device (e.g., outside a structure that the device is enclosed in), to determine services or products available to the user, to identify a social network or social-network information, to determine contact information of entities (e.g., public-service entities such as an emergency-response team, the police or a hospital) near the device, to identify statistical or environmental conditions, trends or other information associated with a home or neighborhood, and so forth.
In some implementations, the smart home provider server system 164 or a component thereof serves as the server system 508. In some implementations, the server system 508 is a dedicated video processing server that provides video processing services to video sources and client devices 504 independent of other services provided by the server system 508.
In some implementations, each of the video sources 522 includes one or more video cameras 118 that capture video and send the captured video to the server system 508 substantially in real-time. In some implementations, each of the video sources 522 optionally includes a controller device (not shown) that serves as an intermediary between the one or more cameras 118 and the server system 508. The controller device receives the video data from the one or more cameras 118, optionally, performs some preliminary processing on the video data, and sends the video data to the server system 508 on behalf of the one or more cameras 118 substantially in real-time. In some implementations, each camera has its own on-board processing capabilities to perform some preliminary processing on the captured video data before sending the processed video data (along with metadata obtained through the preliminary processing) to the controller device and/or the server system 508.
In some implementations, the server-side module 506 includes one or more processors 512, a video storage database 514, device and account databases 516, an I/O interface to one or more client devices 518, and an I/O interface to one or more video sources 520. The I/O interface to one or more clients 518 facilitates the client-facing input and output processing for the server-side module 506. The databases 516 store a plurality of profiles for reviewer accounts registered with the video processing server, where a respective user profile includes account credentials for a respective reviewer account, and one or more video sources linked to the respective reviewer account. The I/O interface to one or more video sources 520 facilitates communications with one or more video sources 522 (e.g., groups of one or more cameras 118 and associated controller devices). The video storage database 514 stores raw video data received from the video sources 522, as well as various types of metadata, such as motion events, event categories, event category models, event filters, and event masks, for use in data processing for event monitoring and review for each reviewer account.
Examples of a representative client device 504 include, but are not limited to, a handheld computer, a wearable computing device, a personal digital assistant (PDA), a tablet computer, a laptop computer, a desktop computer, a cellular telephone, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, a game console, a television, a remote control, a point-of-sale (POS) terminal, a vehicle-mounted computer, an ebook reader, or a combination of any two or more of these data processing devices or other data processing devices.
Examples of the one or more networks 162 include local area networks (LAN) and wide area networks (WAN) such as the Internet. The one or more networks 162 are, optionally, implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Long Term Evolution (LTE), Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol.
In some implementations, the server system 508 is implemented on one or more standalone data processing apparatuses or a distributed network of computers. In some implementations, the server system 508 also employs various virtual devices and/or services of third party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of the server system 508. In some implementations, the server system 508 includes, but is not limited to, a handheld computer, a tablet computer, a laptop computer, a desktop computer, or a combination of any two or more of these data processing devices or other data processing devices.
The server-client environment 500 shown in
It should be understood that operating environment 500 that involves the server system 508, the video sources 522 and the video cameras 118 is merely an example. Many aspects of operating environment 500 are generally applicable in other operating environments in which a server system provides data processing for monitoring and facilitating review of data captured by other types of electronic devices (e.g., smart thermostats 102, smart hazard detectors 104, smart doorbells 106, smart wall plugs 110, appliances 112 and the like).
The electronic devices, the client devices or the server system communicate with each other using the one or more communication networks 162. In an example smart home environment, two or more devices (e.g., the network interface device 160, the hub device 180, and the client devices 504-m) are located in close proximity to each other, such that they could be communicatively coupled in the same sub-network 162A via wired connections, a WLAN or a Bluetooth Personal Area Network (PAN). The Bluetooth PAN is optionally established based on classical Bluetooth technology or Bluetooth Low Energy (BLE) technology. This smart home environment further includes one or more other radio communication networks 162B through which at least some of the electronic devices of the video sources 522-n exchange data with the hub device 180. Alternatively, in some situations, some of the electronic devices of the video sources 522-n communicate with the network interface device 160 directly via the same sub-network 162A that couples devices 160, 180 and 504-m. In some implementations (e.g., in the network 162C), both the client device 504-m and the electronic devices of the video sources 522-n communicate directly via the network(s) 162 without passing the network interface device 160 or the hub device 180.
In some implementations, during normal operation, the network interface device 160 and the hub device 180 communicate with each other to form a network gateway through which data are exchanged with the electronic device of the video sources 522-n. As explained above, the network interface device 160 and the hub device 180 optionally communicate with each other via a sub-network 162A.
Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, the memory 606, optionally, stores a subset of the modules and data structures identified above. Furthermore, the memory 606, optionally, stores additional modules and data structures not described above. In some implementations, video server system 508 comprises smart home provider server system 164. In some implementations, smart home provider server system 164 includes video server system 508.
The memory 706 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. The memory 706, optionally, includes one or more storage devices remotely located from the one or more processing units 702. The memory 706, or alternatively the non-volatile memory within the memory 706, includes a non-transitory computer-readable storage medium. In some implementations, the memory 706, or the non-transitory computer-readable storage medium of memory 706, stores the following programs, modules, and data structures, or a subset or superset thereof:
Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, memory 706, optionally, stores a subset of the modules and data structures identified above. Furthermore, the memory 706, optionally, stores additional modules and data structures not described above.
In some implementations, at least some of the functions of the video server system 508 are performed by the client device 504, and the corresponding sub-modules of these functions may be located within the client device 504 rather than the video server system 508. In some implementations, at least some of the functions of the client device 504 are performed by the video server system 508, and the corresponding sub-modules of these functions may be located within the video server system 508 rather than the client device 504. The client device 504 and the video server system 508 shown in
The memory 806 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. The memory 806, or alternatively the non-volatile memory within the memory 806, includes a non-transitory computer-readable storage medium. In some implementations, the memory 806, or the non-transitory computer-readable storage medium of the memory 806, stores the following programs, modules, and data structures, or a subset or superset thereof:
Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, the memory 806, optionally, stores a subset of the modules and data structures identified above. Furthermore, memory 806, optionally, stores additional modules and data structures not described above.
In some implementations, camera controller 802 includes feedback sub-module 870 for receiving one or more transmission metric(s) (e.g., transmission metric(s) 866 and/or transmission metric(s) 868) and for outputting control signal 874. In some implementations, control signal 874 is utilized by camera controller 802 to set or adjust an encoding bitrate of encoder 854. In some implementations, camera controller 802 includes bandwidth sub-module 872 for setting or adjusting an encoding bitrate of encoder 854 based on control signal 874 and/or external data 878. In some implementations, adjusting the encoding bitrate comprises adjusting one or more of: a frame rate, a frame resolution, and compression. In some implementations, adjusting the encoding bitrate comprises changing an output format of encoded video data 860.
In some implementations, external data 878 includes priority information for the camera, talkback received by the camera, and/or information regarding whether camera software is being updated. In some implementations, bandwidth sub-module 872 sets or adjusts the encoding bitrate based on one or more other factors, such as whether or not motion is being detected by the camera. In some implementations, the one or more other factors include information regarding historical bandwidth availability. For example, historical bandwidth availability may be determined based on time of day, time of week, time of month, etc. In some implementations, feedback sub-module 870 and bandwidth sub-module 872 comprise transmissions module 829 (
Camera 118 also includes video buffer 862 and network interface 804. Video buffer 862 receives encoded data 860-1 and outputs encoded data 860-2. Video buffer 862 receives encoded video data 860-1 at a first rate based on an encoding bitrate and outputs encoded data 860-2 at a second rate based on network conditions. In some implementations, video buffer 862 has a capacity to store approximately one minute of data at a maximum encoding bitrate. In some implementations, video buffer 862 comprises DRAM. In some implementations, video buffer 862 has a capacity of approximately 8 megabytes. Network interface 804 is utilized to communicate with one or more remote devices (e.g., server system 508) via one or more networks (e.g., one or more networks 162). In some implementations, camera 118 in
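By way of illustration only, the following sketch (written in Python, using hypothetical names such as VideoBuffer and EncodedFrame that are not taken from the figures) models one way a video buffer such as video buffer 862 could track the quantities used below as transmission metrics: the bytes entering from the encoder, the bytes leaving toward the network interface, and the timestamps of the earliest and latest buffered frames.

import collections

EncodedFrame = collections.namedtuple("EncodedFrame", ["timestamp", "data"])

class VideoBuffer:
    """Illustrative model of a camera-side video buffer (e.g., video buffer 862).

    Frames arrive from the encoder at a rate set by the encoding bitrate and
    leave at a rate set by current network conditions.
    """

    def __init__(self, capacity_bytes=8 * 1024 * 1024):  # roughly 8 MB, per the description above
        self.capacity_bytes = capacity_bytes
        self.frames = collections.deque()
        self.bytes_in = 0    # total bytes enqueued by the encoder
        self.bytes_out = 0   # total bytes dequeued for transmission

    def push(self, frame):
        """Enqueue an encoded frame; returns False on overflow."""
        size = len(frame.data)
        if self.buffered_bytes() + size > self.capacity_bytes:
            return False  # buffer overflow; in practice this condition may force a reboot
        self.frames.append(frame)
        self.bytes_in += size
        return True

    def pop(self):
        """Dequeue the oldest frame for transmission to the server system."""
        if not self.frames:
            return None
        frame = self.frames.popleft()
        self.bytes_out += len(frame.data)
        return frame

    def buffered_bytes(self):
        return self.bytes_in - self.bytes_out

    def buffer_latency(self):
        """Behind time: timestamp span between the latest and earliest buffered frames."""
        if not self.frames:
            return 0.0
        return self.frames[-1].timestamp - self.frames[0].timestamp

Sampling bytes_in and bytes_out over a fixed interval yields the buffer's input and output rates; comparing the two indicates whether transmission is keeping pace with encoding.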
In some implementations, the encoding bitrate is adjusted based on both the transmission rate and buffer latency (also sometimes called behind time). In some implementations, the encoding bitrate is adjusted based on a linear combination of the transmission rate and buffer latency. In some implementations, the encoding bitrate is adjusted in accordance with an encoding metric. In some implementations, the encoding metric is based on a combination of the transmission rate and buffer latency. For example, the encoding metric is calculated in accordance with Equation 1, below.
encode_metric = ki*buffer_latency + kp*transmission_rate    (Equation 1)
In some implementations, as shown in Equation 1, the encoding metric is a proportional-integral metric measuring whether the camera device has enough available transmission bandwidth (also sometimes called upload bandwidth) for the current encoding bitrate. In some implementations, controller 802 (
In some implementations, when the buffer latency in Equation 1 is positive it indicates that the latency of the video buffer is increasing. In some implementations, when the transmission rate is positive it indicates that the output rate of the video buffer is not keeping pace with the input rate of the video buffer. In some implementations, when the transmission rate is negative it indicates that the output rate exceeds the input rate. In some implementations, when the transmission rate is zero it indicates that the output rate equals the input rate.
In some implementations, the transmission rate is a normalized transmission rate. In some implementations, the transmission rate is normalized by dividing a difference between an input rate and an output rate by the current bitrate. In some implementations, normalization comprises a transmission rate that is independent of the current encoding bitrate and/or current frame resolution. In some implementations, the transmission rate is a relative transmission rate. In some implementations, the transmission rate is relative to the current bitrate and/or current frame resolution. In some implementations, the transmission rate is relative to an input rate or output rate of the video buffer.
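As one concrete, non-limiting reading of Equation 1 together with the normalization just described, the encoding metric may be computed as follows (the function and argument names are illustrative, and ki and kp are tunable gains):

def encode_metric(buffer_latency, input_rate, output_rate, current_bitrate, ki, kp):
    """Proportional-integral style metric per Equation 1 (illustrative only).

    buffer_latency is the behind time of the video buffer (the integral-like
    term). The transmission-rate term is normalized by the current encoding
    bitrate, making it independent of the bitrate and frame resolution in use.
    """
    transmission_rate = (input_rate - output_rate) / current_bitrate
    return ki * buffer_latency + kp * transmission_rate

The gains ki and kp and any thresholds applied to the resulting metric are tuned together, so the example thresholds discussed below presuppose a particular choice of gains and units.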
In some implementations, the encoding bitrate is adjusted when the encoding metric meets one or more predetermined criteria. For example, in accordance with a determination that the encoding metric is below 10, the encoding bitrate is increased (e.g., increased by 5%, 10%, or 15%). In another example, in accordance with a determination that the encoding metric is above 100, the encoding bitrate is decreased (e.g., decreased by 5%, 10%, or 20%). In some implementations, the encoding bitrate is increased only if the encoding metric meets one or more predetermined criteria and a predetermined amount of time has passed since the bitrate was last adjusted (e.g., 10 seconds, 20 seconds, 60 seconds).
In some implementations, the bitrate adaptation or adjustment is to reduce the encoding bitrate in accordance with a determination that the encoding metric is too high. In some implementations, the bitrate adaptation or adjustment includes periodically increasing the encoding bitrate in accordance with a determination that the available bandwidth would support a higher encoding bitrate.
The following pseudocode example illustrates controller logic for adjusting the encoding bitrate based on one or more transmission metrics:
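One illustrative rendering of such logic is sketched below in Python-style pseudocode; the gains, thresholds, step sizes, and bitrate bounds are example values only, and names such as UP_METRIC_THRESHOLD, DOWN_METRIC_THRESHOLD, MIN_BITRATE, and MAX_BITRATE are hypothetical.

# Illustrative controller logic; all parameter values are examples only.
K_P = 50.0                    # gain on the normalized transmission rate
K_I = 10.0                    # gain on the buffer latency (behind time, in seconds)
UP_METRIC_THRESHOLD = 10      # below this, the bandwidth appears to have headroom
DOWN_METRIC_THRESHOLD = 100   # above this, encoding exceeds the available bandwidth
UP_BITRATE_COOLDOWN = 20.0    # seconds to wait between successive bitrate increases
UP_STEP = 1.10                # e.g., increase the bitrate by 10%
DOWN_STEP = 0.80              # e.g., decrease the bitrate by 20%
MIN_BITRATE = 100_000         # illustrative floor (bits per second)
MAX_BITRATE = 2_000_000       # illustrative ceiling (bits per second)

def adjust_bitrate(current_bitrate, buffer_latency, input_rate, output_rate,
                   now, last_increase_time):
    """Return (new_bitrate, last_increase_time) for one control period."""
    transmission_rate = (input_rate - output_rate) / current_bitrate
    metric = K_I * buffer_latency + K_P * transmission_rate   # Equation 1

    if metric > DOWN_METRIC_THRESHOLD:
        # The encoding bitrate exceeds the available upload bandwidth: back off.
        return max(MIN_BITRATE, current_bitrate * DOWN_STEP), last_increase_time

    if metric < UP_METRIC_THRESHOLD and (now - last_increase_time) > UP_BITRATE_COOLDOWN:
        # Bandwidth appears to have headroom: probe upward, but only after the
        # cooldown since the last increase has elapsed.
        return min(MAX_BITRATE, current_bitrate * UP_STEP), now

    return current_bitrate, last_increase_time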
In the above logic, the various parameters may be tuned to adjust performance. For example, UP_BITRATE_COOLDOWN is optionally decreased to explore the bandwidth limit more aggressively, or increased to favor a more stable system. K_P is optionally increased for earlier detection of a bandwidth limit by increasing sensitivity to positive net data flow. However, in some instances, the rate measurement is noisy and being too sensitive will produce false positives. K_I is optionally increased for earlier detection of the bandwidth limit by increasing sensitivity to behind-time growth. However, in some instances, controller resource limitations may lead to a small recoverable behind time and being too sensitive will produce false positives. In some instances, adjusting K_I affects the amount of buffer latency that is present in the system.
In some implementations, the above logic is implemented in camera controller 802 (
Attention is now directed towards implementations of user interfaces and associated processes that may be implemented on a respective client device 504. In some implementations, client device 504 includes one or more speakers enabled to output sound, zero or more microphones enabled to receive sound input, and a touch screen 1006 enabled to receive one or more contacts and display information (e.g., media content, webpages and/or user interfaces for an application).
Although some of the examples that follow will be given with reference to inputs on touch screen 1006 (where the touch sensitive surface and the display are combined), in some implementations, the device detects inputs on a touch-sensitive surface that is separate from the display. In some implementations, the touch sensitive surface has a primary axis that corresponds to a primary axis on the display. In accordance with these implementations, the device detects contacts with the touch-sensitive surface at locations that correspond to respective locations on the display. In this way, user inputs detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display of the device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein.
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures, etc.), it should be understood that, in some implementations, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse based input or stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.
For example, the client device 504 is the portable electronic device 166 (
In some implementations, event timeline 1010 indicates the transmission quality of the recorded video footage at various times. For example, event timeline 1010 optionally includes indicators for points where recorded video footage is unavailable (e.g., due to poor network conditions, camera resets, and the like). In some implementations, event timeline 1010 includes an overlay denoting time periods when the video feed was not transmitted to the server system. In some implementations, event timeline 1010 includes an overlay indicating the quality of the video footage at particular times, or during particular time periods, such as displaying an FPS value and/or a resolution value. In some implementations, second region 1005 includes an overlay for indicating the quality of video footage at a particular location in the timeline, such as a location corresponding to a cursor and/or a location corresponding to indicator 1009. For example, in accordance with some implementations, a user may view one or more transmission quality statistics by hovering over, or clicking on, a particular portion of, or indicator on, event timeline 1010. In some implementations, event timeline 1010 includes at least one display characteristic (e.g., colored indicators or shading) for indicating the transmission quality of the recorded video footage at particular times, or during particular time periods.
The second region 1005 also includes affordances 1013 for changing the scale of the event timeline 1010: 5 minute affordance 1013A for changing the scale of the event timeline 1010 to 5 minutes, 1 hour affordance 1013B for changing the scale of the event timeline 1010 to 1 hour, and 24 hour affordance 1013C for changing the scale of the event timeline 1010 to 24 hours.
In some implementations, control and access to the smart home environment 100 is implemented in the operating environment 500 (
The camera device captures (1102) a stream of images using an image sensor of the camera device. For example, camera 118 in
While capturing the stream of images, the camera device encodes (1104) a first portion of the stream of images with a first bitrate. For example, camera 118 in
In some implementations, prior to transmitting the first portion of the stream of images, the camera device holds (1106) the encoded first portion of the stream of images in a video buffer. For example, camera 118 in
The camera device transmits (1108) the encoded first portion of the stream of images to the server system. For example, camera device 118 in
In some implementations, the camera device transmits the encoded first portion of the stream of images to a storage system (e.g., a storage system in smart home environment 100,
In some implementations, the camera device transmits (1110) the encoded first portion of the stream of images from the video buffer to the server system. For example, camera 118 in
The camera device obtains (1112) one or more transmission metrics for the transmitted first portion of the stream of images. For example, camera 118 in
In some implementations, the one or more transmission metrics include (1114) at least one of: a transmission rate for the encoded first portion of the stream of images, and a buffer latency for the encoded first portion of the stream of images. In some implementations, the one or more transmission metrics include a metric for the amount of data stored in the video buffer at a given time or over a given period. In some implementations, the one or more transmission metrics include a combination (e.g., a linear combination) of the transmission rate and the buffer latency. For example, see Equation 1 above. In some implementations, the one or more transmission metrics include a transmission latency metric. In some implementations, the one or more transmission metrics include a behind time metric defined as the difference between the timestamps of the latest and earliest frames currently in the camera device video buffer (e.g., video buffer 862,
In some implementations, the camera device obtains (1116) at least one of the one or more transmission metrics from the server system. For example, camera 118 in
In some implementations, the camera device calculates (1118) a transmission rate for the first portion of the stream of images based on a determined input rate and a determined output rate of the video buffer. For example, a determined input rate for video buffer 862 and a determined output rate for video buffer 862 in
In some implementations, the camera device calculates (1120) a buffer latency for the first portion of the stream of images based on timestamps for images in the video buffer. For example, the buffer latency is based on timestamps 910 and 913 in video buffer 862 in
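As a brief numerical illustration (the values are hypothetical), suppose the earliest and latest frames in the video buffer carry timestamps of 12.0 seconds and 17.5 seconds, the encoder feeds the buffer at the current bitrate of 600 kbps, and the buffer drains toward the network at 450 kbps:

earliest_ts, latest_ts = 12.0, 17.5   # seconds; hypothetical frame timestamps
current_bitrate = 600_000             # bits per second (buffer input rate)
output_rate = 450_000                 # bits per second (buffer output rate)

buffer_latency = latest_ts - earliest_ts                               # 5.5 s behind time
transmission_rate = (current_bitrate - output_rate) / current_bitrate  # 0.25 (normalized)

Here the buffer is filling 25% faster than it drains and the camera is 5.5 seconds behind; both values feed the encoding metric of Equation 1.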
In some implementations, the camera device selects (1124) a second bitrate based on the one or more transmission metrics and one or more additional factors. For example, camera 118 in
Based on the one or more transmission metrics, the camera device encodes (1126) a second portion of the stream of images with the second bitrate, distinct from the first bitrate. For example, the camera device adjusts the encoding bitrate from Bitrate E to Bitrate F as shown in
In some implementations, the one or more transmission metrics indicate (1128) that the first bitrate exceeds transmission bandwidth available to the camera device for transmitting to the server system; and the second bitrate is lower than the first bitrate. For example, the camera device adjusts the encoding bitrate from Bitrate B to Bitrate C based on the transmission rate decreasing as shown in
In some instances, the transmission bandwidth is limited by local area network bandwidth. In some instances, the transmission bandwidth is limited by a local wireless network (e.g., WiFi) bandwidth. In some instances, the transmission bandwidth is limited by an internet service provider (ISP) or wide area network bandwidth. In some implementations, the bandwidth available to the camera device is determined by a router, modem, cable box, hub device, or other smart device. For example, a router assigns the camera device 50% of the available bandwidth. In some implementations, the amount of bandwidth available to the camera device is based on a priority assigned to the camera device. In some implementations, the priority is assigned based on whether the camera device is detecting motion. In some implementations, the priority is assigned based on whether or not a user is currently viewing the stream of images output by the camera device. In some implementations, the priority is assigned based on one or more user preferences and/or user settings. In some implementations, the priority is assigned based on historical data of the camera device. For example, a camera device that historically detects a lot of motion is prioritized over a camera device that historically does not detect as much motion.
In some implementations, in response to the one or more transmission metrics indicating that the first bitrate exceeds the available transmission bandwidth, a bandwidth assigning device (e.g., a router) assigns more bandwidth to the camera device. For example, the bandwidth assigning device increases the camera device's priority and the increased priority grants the camera device additional bandwidth. In some implementations, the bandwidth assigning device determines whether the available transmission bandwidth is limited by the local area network (e.g., WiFi). In accordance with a determination that the available transmission bandwidth is limited by the local area network, the bandwidth assigning device increases the transmission bandwidth assigned to the camera device. In accordance with a determination that the available transmission bandwidth is not limited by the local area network, the bandwidth assigning device forgoes increasing the transmission bandwidth assigned to the camera device.
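A hedged sketch of the priority-based assignment described in the two preceding paragraphs; the proportional split and the single-step priority boost are assumptions chosen for illustration only.

def assign_camera_bandwidth_bps(total_bandwidth_bps, camera_priority, other_priorities):
    # Give the camera a share of the available bandwidth proportional to its priority.
    total_priority = camera_priority + sum(other_priorities)
    return total_bandwidth_bps * camera_priority / total_priority

def maybe_boost_priority(camera_priority, bitrate_exceeds_bandwidth, limited_by_lan):
    # Raise the camera's priority only when the encoding bitrate exceeds the
    # available bandwidth AND the bottleneck is the local network; otherwise
    # leave the assignment unchanged (forgoing the increase).
    if bitrate_exceeds_bandwidth and limited_by_lan:
        return camera_priority + 1
    return camera_priority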
In some implementations, the one or more transmission metrics indicate (1130) that the first bitrate does not exceed transmission bandwidth available to the camera device for transmitting to the server system; and the second bitrate is higher than the first bitrate. For example, the camera device adjusts the encoding bitrate from Bitrate E to Bitrate F based on the transmission rate increasing as shown in
In some instances, the one or more transmission metrics indicate that the first bitrate does not exceed the transmission bandwidth available. In some implementations, the camera device determines whether the one or more transmission metrics meet one or more predetermined criteria. In accordance with a determination that the one or more transmission metrics meet the predetermined criteria, the camera device increases the encoding bitrate. In accordance with a determination that the one or more transmission metrics do not meet the predetermined criteria, the camera device does not adjust the encoding bitrate.
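The selection and adjustment logic of operations 1124-1130 might look roughly like the following sketch, assuming the camera encodes at one of a fixed ladder of bitrates and that the current bitrate is always a member of that ladder; the thresholds (latency ceiling, throughput headroom) stand in for the "predetermined criteria" and are illustrative, not values from the disclosure.

def select_second_bitrate(current_bitrate_bps, transmission_rate_bps,
                          buffer_latency_s, bitrate_ladder_bps,
                          max_latency_s=2.0, headroom=0.8):
    # Step down when the current bitrate exceeds what the network is actually
    # carrying (or the buffer is falling behind); step up only when the metrics
    # meet the criteria (throughput comfortably above the current bitrate and
    # low buffer latency); otherwise hold the current bitrate.
    ladder = sorted(bitrate_ladder_bps)
    i = ladder.index(current_bitrate_bps)
    if current_bitrate_bps > transmission_rate_bps or buffer_latency_s > max_latency_s:
        return ladder[max(i - 1, 0)]
    if headroom * transmission_rate_bps > current_bitrate_bps and buffer_latency_s < max_latency_s / 2:
        return ladder[min(i + 1, len(ladder) - 1)]
    return current_bitrate_bps

# Example: under a throughput drop, the camera steps from 2 Mbps down to 1 Mbps.
ladder = [500_000, 1_000_000, 2_000_000, 4_000_000]
assert select_second_bitrate(2_000_000, 1_200_000, 0.5, ladder) == 1_000_000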
In some implementations, the camera device adjusts (1132) one or more of: a frame rate of the stream of images; an image resolution of the stream of images; and a compression of the stream of images. For example,
In some instances, the one or more transmission metrics indicate that the first bitrate exceeds the transmission bandwidth available. In some implementations, the camera device determines whether the encoded frame rate meets one or more predetermined criteria. In accordance with a determination that the encoded frame rate meets the one or more predetermined criteria, the camera device lowers the encoded frame rate. In accordance with a determination that the encoded frame rate does not meet the one or more predetermined criteria, the camera device forgoes lowering the encoded frame rate and lowers the encoded image resolution.
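A minimal illustration of the preference described above, lowering the encoded frame rate while it remains above a floor and otherwise stepping down the image resolution; the frame-rate floor, the halving rule, and the resolution ladder are assumptions, and the current resolution is assumed to be an entry in that ladder.

def lower_frame_rate_or_resolution(frame_rate_fps, resolution,
                                   min_frame_rate_fps=15,
                                   resolution_ladder=((1920, 1080), (1280, 720), (640, 360))):
    # Prefer halving the frame rate while the result stays at or above the floor;
    # once at the floor, keep the frame rate and drop to the next lower resolution.
    if frame_rate_fps // 2 >= min_frame_rate_fps:
        return frame_rate_fps // 2, resolution
    ladder = list(resolution_ladder)
    i = ladder.index(tuple(resolution))
    return frame_rate_fps, ladder[min(i + 1, len(ladder) - 1)]

# Example: at 30 fps the camera halves the frame rate; at 15 fps it lowers resolution.
assert lower_frame_rate_or_resolution(30, (1920, 1080)) == (15, (1920, 1080))
assert lower_frame_rate_or_resolution(15, (1920, 1080)) == (15, (1280, 720))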
The camera device transmits (1134) the encoded second portion of the stream of images to the server system. For example, camera 118 in
In some implementations, the camera device transmits the encoded second portion of the stream of images to a storage system (e.g., a storage system in smart home environment 100,
In some implementations, the camera device obtains (1136) one or more second transmission metrics for the transmitted second portion of the stream of images. For example, camera 118 in
In some implementations, based on the one or more second transmission metrics, the camera device forgoes (1138) capturing the stream of images at a first resolution.
In some implementations, the camera device captures (1140) the stream of images at a second resolution. In some implementations, the second resolution is lower than the first resolution. For example, the resolution of the video feed in
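A brief sketch of operations 1136-1140, assuming a simple boolean verdict derived from the second set of transmission metrics; the function and parameter names are hypothetical.

def select_capture_resolution(second_metrics_indicate_congestion,
                              first_resolution, second_resolution):
    # If the second round of metrics still indicates congestion, forgo capturing
    # at the first (higher) resolution and capture at the second (lower) one.
    if second_metrics_indicate_congestion:
        return second_resolution
    return first_resolution

# Example: continued congestion drops capture from 1080p to 720p.
assert select_capture_resolution(True, (1920, 1080), (1280, 720)) == (1280, 720)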
It should be understood that the particular order in which the operations in
For situations in which the systems discussed above collect information about users, the users may be provided with an opportunity to opt in/out of programs or features that may collect personal information (e.g., information about a user's preferences or usage of a smart device). In addition, in some implementations, certain data may be anonymized in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be anonymized so that the personally identifiable information cannot be determined for or associated with the user, and so that user preferences or user interactions are generalized (for example, generalized based on user demographics) rather than associated with a particular user.
It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first camera could be termed a second camera, and, similarly, a second camera could be termed a first camera, without departing from the scope of the various described implementations. The first camera and the second camera are both cameras, but they are not the same camera.
The terminology used in the description of the various described implementations herein is for the purpose of describing particular implementations only and is not intended to be limiting. As used in the description of the various described implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting” or “in accordance with a determination that,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected,” depending on the context.
Although some of the various drawings illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.
The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen in order to best explain the principles underlying the claims and their practical applications, to thereby enable others skilled in the art to best use the implementations with various modifications as are suited to the particular uses contemplated.
This application is a continuation of U.S. patent application Ser. No. 15/167,957, filed May 27, 2016, entitled “Methods and Devices for Dynamic Adaptation of Encoding Bitrate for Video Streaming,” which is hereby incorporated by reference in its entirety. This application is related to U.S. patent application Ser. No. 14/510,042, filed Oct. 8, 2014, entitled “Method and System for Categorizing Detected Motion Events,” which is hereby incorporated by reference in its entirety. This application is related to U.S. Design patent application Ser. No. 29/504,605, filed Oct. 7, 2014, entitled “Video Monitoring User Interface with Event Timeline and Display of Multiple Preview Windows At User-Selected Event Marks,” which is hereby incorporated by reference in its entirety.