At least some embodiments disclosed herein relate to the control of cameras in general and more specifically but not limited to autonomous control of camera operations during the capture of still and/or video images.
Sensor data can be used to determine the motion characteristics of sporting actions, performances of athletes, and/or states of the participants of sporting activities.
U.S. Pat. App. Pub. No. 2013/0346013, entitled “Method and Apparatus for Determining Sportsman Jumps using Fuzzy Logic,” discloses a technique that uses fuzzy logic in the analysis of accelerometer data, generated in response to the motions of a sportsperson, to identify a subset of the data as representing a jump and thus separate the jump from other motions of the sportsperson.
U.S. Pat. No. 8,929,709, entitled “Automatic Digital Curation and Tagging of Action Videos,” discloses a system for automatic digital curation, annotation, and tagging of action videos, where sensor data from a device carried by a sportsperson during a sporting activity is used to identify a sportsperson event which is then stored in a performance database to automatically select, annotate, tag or edit corresponding video data of the sporting activity.
U.S. Pat. No. 9,060,682, entitled “Distributed Systems and Methods to Measure and Process Sport Motions,” discloses a distributed, multi-stage, intelligent system configured to determine action performance characteristics parameters in action sports.
U.S. Pat. App. Pub. No. 2014/0257743, entitled “Systems and Methods for Identifying and Characterizing Athletic Maneuvers,” discloses techniques to automatically identify athletic maneuvers by determining, from sensor data, motion characteristics and then based on the motion characteristics, an athletic maneuver.
U.S. Pat. App. Pub. No. 2014/0257744, entitled “Systems and Methods for Synchronized Display of Athletic Maneuvers,” discloses techniques to synchronize the video streams of different sportspersons based on synchronizing the occurrences of motion characteristics identified from sensor data such that the athletic maneuvers of the sportspersons can be visually compared side by side.
U.S. Pat. App. Pub. No. 2015/0335949, entitled “Use of Gyro Sensors for Identifying Athletic Maneuvers,” discloses techniques to use at least one gyroscopic sensor in identifying athletic maneuvers performed by sportspersons.
U.S. Pat. App. Pub. No. 2015/0340066, entitled “Systems and Methods for Creating and Enhancing Videos,” discloses the use of sensor data to identify motion characteristics of sports video and to combine video selected from multiple video sources to provide a unique and rich viewing experience.
The entire disclosures of the above discussed patent documents are hereby incorporated herein by reference.
The embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure are not necessarily references to the same embodiment; and, such references mean at least one.
Sport events in general, and action sport events in particular, are very popular with their fans and viewing audiences. Sport events such as NFL games, X-Games, Dew Tour, World Surfing Tour, Formula One, etc., attract thousands of fans and real-time spectators.
Many of these fans have the capability, skills, and desire to film these events or parts of the events using their smart phones or point-of-view (POV) cameras. This richness of possible video sources is further increased with the introduction of video drones and intelligent cameras.
Very often a single camera is used to record the activity of multiple athletes. For example, multiple surfers or kite surfers could surf at the same location, or several snowboarders can follow the same course within a short time interval.
Under such scenarios, a camera operator has a choice between using a general wide-angle shot that covers multiple players for the entire duration of the event and using a zoomed view of an individual player that excludes other players from the view.
A zoom view is preferable when an athlete is performing a special activity, such as catching a wave during surfing, making a jump, etc. However, to use the zoom view effectively, the operator needs to know when each athlete is going to perform such a special event (e.g., a jump, catching a wave). While a human operator of a camera can usually predict such events, such prediction during autonomous operation is often very difficult.
At least some embodiments disclosed herein address these and other issues with solutions that allow autonomous camera tracking of an event, along with automatic selection of an optimal zoom level and of the event or athlete to capture.
In exemplary embodiments disclosed herein, each athlete has one or more data collection devices mounted on the body, clothing, and/or equipment of the athlete to measure data related to the activity of the athlete. Such measurements can be performed using any number and type of data collection devices, including a GPS device, an inertial sensor, a magnetic sensor, a pressure sensor, and/or other devices.
A processing unit is configured to process the sensor data generated by the sensors (11) attached to an athlete to determine a state and/or performance of the athlete. The processing unit can be integrated with the sensors (11) within the same device housing as the sensors (11), or be a separate device that is in communication with one or more sensor devices (e.g., via a wireless personal area network, a wireless local area network, or a wired connection). For example, the distributed, multi-stage system to process sensor data as disclosed in U.S. Pat. No. 9,060,682 can be used to process the sensor data generated by the sensors (11) attached to the body, clothing, and/or equipment of the athlete; the disclosure of that patent is hereby incorporated herein by reference.
The processing unit can be co-located with the data collection devices containing the sensors (e.g., coupled to the athlete, the athlete's clothing, and/or the athlete's equipment), or be located remotely from the data collection devices. The data collection devices may provide the sensor data to the processing unit via any suitable combination of wired and/or wireless communication methods.
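As an illustration only, the following is a minimal sketch of how a processing unit might package and transmit the determined state/performance data to a camera controller over a wireless network; the field names, units, and the JSON-over-UDP transport are assumptions made for this example and are not required by the embodiments.

```python
# Hypothetical sketch: a processing unit packages the state/performance it
# derived from sensor data and sends it to the camera controller over UDP.
# Field names, units, and transport are illustrative assumptions only.
import json
import socket
import time

def send_state(controller_addr, actor_id, state, speed_mps, location):
    """Send one state/performance update to the camera controller."""
    message = {
        "actor_id": actor_id,          # e.g., "athlete-1"
        "timestamp": time.time(),      # seconds since the epoch
        "state": state,                # e.g., "riding", "airborne", "idle"
        "speed_mps": speed_mps,        # derived from GPS/inertial data
        "location": location,          # (latitude, longitude) from GPS
    }
    payload = json.dumps(message).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, controller_addr)

# Example: report that athlete-1 is riding at 9.5 m/s.
send_state(("192.0.2.10", 5005), "athlete-1", "riding", 9.5, (33.659, -118.004))
```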
While a single camera controller controlling a single camera is shown receiving data collected from sensors coupled to two different athletes, any number of camera controllers, cameras, athletes, and sensors can be used in various embodiments.
Different states of an athlete engaging in a predetermined sport have different patterns in sensor data (e.g., action performance characteristics parameters, motion characteristics). Thus, by identifying the patterns in the current sensor data of a particular athlete, the controller (13) is configured to automatically identify the current state of the athlete and/or predict a subsequent state of the athlete.
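A minimal sketch of such pattern-based state identification is given below; the specific thresholds, state names, and the use of simple acceleration/speed rules (rather than, for example, the fuzzy-logic analysis of U.S. Pat. App. Pub. No. 2013/0346013) are assumptions made for illustration.

```python
# Hypothetical sketch: classify an athlete's current state from recent sensor
# data using simple threshold rules. Thresholds and state names are
# illustrative assumptions, not values taken from this disclosure.
from statistics import mean

G = 9.81  # gravitational acceleration, m/s^2

def classify_state(accel_magnitudes, speeds_mps):
    """Return a coarse state label from recent accelerometer and GPS samples."""
    avg_accel = mean(accel_magnitudes)
    avg_speed = mean(speeds_mps)
    if avg_accel < 0.3 * G:
        # Near free fall across the window suggests the athlete is airborne.
        return "airborne"
    if avg_speed > 3.0:
        return "riding"
    return "idle"

def about_to_act(speeds_mps, takeoff_speed=8.0):
    """Crude prediction that an action of interest (e.g., a jump) is imminent:
    the athlete is speeding up and has exceeded a takeoff speed."""
    return speeds_mps[-1] > takeoff_speed and speeds_mps[-1] > speeds_mps[0]

# Example with a short window of samples:
print(classify_state([9.6, 9.9, 10.1], [6.2, 6.8, 7.4]))   # -> "riding"
print(about_to_act([6.2, 7.5, 8.4]))                        # -> True
```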
In various embodiments, the states of multiple actors are analyzed by a camera controller (13) to select an actor of interest (e.g., athlete, sportsperson, participant) and focus the camera(s) on the selected actor by directing the camera at the selected actor with a selected camera zoom level.
For example, the controller (13) of one embodiment is configured to adjust the direction of the camera (15). Based on a location sensor attached to the sportsperson to measure the location of the sportsperson and a location sensor attached to the camera, the controller (13) of one embodiment is configured to compute a desired direction of the camera (15) and adjust the camera (15) to the desired direction. Alternatively or in combination, the controller (13) uses image recognition techniques to search, in a captured wide scene, for the selected actor having a predetermined visual characteristic and to direct the camera to the location of the selected actor.
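For instance, when the locations of the camera and of the selected sportsperson are available as latitude/longitude pairs, the desired pan direction can be computed as the bearing from the camera to the sportsperson. The sketch below illustrates one such computation under that assumption; the printed result is only an illustrative stand-in for whatever pan/tilt interface a given camera exposes.

```python
# Hypothetical sketch: compute the compass bearing from the camera's GPS
# location to the selected actor's GPS location, which can serve as the
# desired pan direction for the camera.
import math

def bearing_deg(cam_lat, cam_lon, actor_lat, actor_lon):
    """Great-circle initial bearing from camera to actor, in degrees from north."""
    phi1, phi2 = math.radians(cam_lat), math.radians(actor_lat)
    dlon = math.radians(actor_lon - cam_lon)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

# Example: point the camera at an actor slightly north-east of it.
pan = bearing_deg(33.6595, -118.0046, 33.6601, -118.0038)
print(f"desired pan direction: {pan:.1f} degrees")  # roughly north-east
```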
For example, the controller (13) of one embodiment is configured to control the optical zoom level of a lens (17) of the camera (15) to capture a scene limited to the selected actor and thus exclude other non-selected actors. In general, the zoom level of the camera can be adjusted via a combination of an optical zoom function of the camera (15) and a digital zoom function.
When no actors are selected (e.g., for being important or of interest) at the moment, the controller (13) instructs the camera (15) to use a general pan wide angle view to capture a broader scene than a narrow scene that focuses on one or more selected actors.
In one embodiment, the processing unit may analyze information about a camera controller (e.g., received from the camera controller (13) itself) to identify a target actor and zoom level to utilize, and then instruct the camera controller accordingly.
In other embodiments, the camera controller (13) receives the state information for multiple actors from the processing units associated with each actor; and the camera controller (13) makes the decision as to which actor should be focused on and at what zoom level.
In yet other embodiments, decisions regarding which actors are to be focused on and the appropriate zoom levels to be used are made by a third-party server or other computing device in communication with both the data processing unit(s) and the camera controller(s).
For example, from the current sensor data, the computing apparatus predicts that the actor is about to perform an action of interest and thus directs the camera to zoom in on the actor and/or to place the actor at a predetermined location in a scene captured by the camera.
For example, a first processing unit is configured to: collect (201) sensor data from sensors (11) attached to a first actor; determine (203) a state/performance of the first actor; and send (205) the state/performance of the first actor to a controller (13).
Similarly, a second processing unit is configured to: collect (207) sensor data from sensors (11) attached to a second actor; determine (209) a state/performance of the second actor; and send (211) the state/performance of the second actor to the controller (13).
The controller (13) is configured to: receive (213) the state/performance data of the actors; compare (215) the state/performance data of the actors; select (217) an actor from the actors based on a result of the comparison; select (219) a camera parameter (e.g., direction and zoom) based on the identification of the selected actor; and overlay (221) and tag image data captured by the camera (15) with identification of the selected actor and sensor-based information of the selected actor.
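The controller-side logic of this example can be summarized by the following sketch. The selection rule (prefer an actor predicted to be about to act, otherwise the fastest actor above a minimum speed) and the zoom constants are assumptions chosen for illustration, not requirements of the embodiments.

```python
# Hypothetical sketch of the controller (13) loop: receive state/performance
# data for several actors, pick the actor of focus, and choose camera
# parameters. The selection rule and constants are illustrative assumptions.
WIDE_ANGLE = 1.0   # zoom factor for the general wide-angle view
CLOSE_UP = 4.0     # zoom factor used when focusing on one actor

def select_actor(states):
    """Pick the actor of focus, if any, from {actor_id: state_dict}."""
    # Prefer an actor predicted to be about to perform an action of interest;
    # otherwise pick the fastest actor above a minimum speed.
    imminent = [a for a, s in states.items() if s.get("about_to_act")]
    if imminent:
        return imminent[0]
    fastest = max(states, key=lambda a: states[a]["speed_mps"], default=None)
    if fastest is not None and states[fastest]["speed_mps"] > 5.0:
        return fastest
    return None  # no actor of interest at the moment

def choose_camera_parameters(states):
    """Return (actor_of_focus, zoom, target_location) for the camera."""
    actor = select_actor(states)
    if actor is None:
        return None, WIDE_ANGLE, None          # fall back to the wide view
    return actor, CLOSE_UP, states[actor]["location"]

states = {
    "athlete-1": {"speed_mps": 9.5, "about_to_act": False, "location": (33.66, -118.0)},
    "athlete-2": {"speed_mps": 7.1, "about_to_act": True,  "location": (33.65, -118.1)},
}
print(choose_camera_parameters(states))  # focuses on athlete-2
```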
After the camera (15) captures video segments according to the directions and zoom levels determined according to any embodiment disclosed herein, the video segments captured by the cameras (15) can be enhanced by overlaying information on the segments. Such information may include, for example, the name, date, time, location, and performance characteristics of the athlete. The tagged video segments may be stored in a database, and retrieved using any of the types of information used in tagging the video. For example, the video tagging method disclosed in U.S. Pat. No. 8,929,709 can be used, the disclosure of which patent is hereby incorporated herein by reference.
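As an illustration of such tagging and retrieval, the sketch below stores segment metadata in a small SQLite table so that segments can later be retrieved by any tagged field; the schema and field names are assumptions for this example and are not the method of U.S. Pat. No. 8,929,709.

```python
# Hypothetical sketch: tag captured video segments with actor and performance
# metadata and store the tags in SQLite so segments can be retrieved later by
# any tagged field. Schema and fields are illustrative assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE segments (
    path TEXT, actor TEXT, date TEXT, location TEXT, max_speed_mps REAL)""")

def tag_segment(path, actor, date, location, max_speed_mps):
    db.execute("INSERT INTO segments VALUES (?, ?, ?, ?, ?)",
               (path, actor, date, location, max_speed_mps))

def find_segments(actor):
    """Retrieve tagged segments for one actor, fastest first."""
    return db.execute(
        "SELECT path, max_speed_mps FROM segments WHERE actor = ? "
        "ORDER BY max_speed_mps DESC", (actor,)).fetchall()

tag_segment("clip_0041.mp4", "athlete-1", "2016-02-17", "Huntington Beach", 12.3)
print(find_segments("athlete-1"))
```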
Thus, among other things, embodiments of the present disclosure help provide the autonomous control of unattended cameras (such as drones or stationary cameras) that cover multiple athletes, and thereby provide optimal video coverage with limited video resources.
In one embodiment, each respective actor of a set of actors in a scene has one or more sensors in communication with a respective processing unit among the plurality of processing units; and the respective processing unit is configured to process sensor data from the one or more sensors to identify a state of the respective actor.
A system of one embodiment includes: a plurality of processing units associated with a plurality of actors participating in an activity; at least one interface to communicate with the processing units, receive from the processing units data identifying states of the actors determined from sensors attached to the actors, and communicate with a camera; at least one microprocessor; and a memory storing instructions configured to instruct the at least one microprocessor to adjust, via the at least one interface and based on the data identifying the states of the actors, an operation parameter of the camera, such as a zoom level of the camera and/or a direction of the camera.
The instructions of one embodiment are further configured to instruct the at least one microprocessor to identify an actor of focus from the plurality of actors based on the states of the actors. For example, the actor of focus is selected based at least in part on performances of the plurality of actors measured using the sensors attached to the actors. For example, the actor of focus is selected based at least in part on a prediction, based on the states of the actors, that the actor of focus is about to perform an action of interest.
A method implemented in a computing device of one embodiment includes: receiving, by the computing device, sensor data from one or more sensors attached to an actor; determining, by the computing device, a state of the actor based on the sensor data received from the one or more sensors; and controlling a camera based on the state of the actor determined based on the sensor data.
For example, the controlling of the camera includes: determining an operation parameter of the camera; and adjusting the camera based on the operation parameter. For example, the operation parameter is a zoom level of the camera, or a direction of the camera.
In one embodiment, states of multiple actors are determined based on the sensor data from respective sensors attached to the actors. The states of the actors are compared to select an operation parameter for the camera, such as the camera direction and/or the camera zoom level. For example, the operation parameter can be selected to increase the percentage of an image of a first actor within an image captured by the camera, and/or to reduce (or eliminate) the percentage of an image of a second actor within the image captured by the camera. The desired operation parameter can be computed based on a location of the actor selected as the focus of the camera.
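One simple way to realize such a selection, sketched below, is to choose a focal length so that the selected actor subtends a target fraction of the frame height given the camera-to-actor distance; the pinhole-camera approximation and the numeric values (sensor height, actor height, zoom range) are assumptions made for illustration.

```python
# Hypothetical sketch: pick a focal length (zoom) so the selected actor fills
# a target fraction of the frame height, using a pinhole-camera approximation.
# Sensor size, actor height, and target fraction are illustrative assumptions.
def focal_length_mm(distance_m, actor_height_m=1.8,
                    sensor_height_mm=4.5, target_fraction=0.6):
    """Focal length that makes the actor occupy target_fraction of frame height."""
    # Pinhole model: image_height = focal_length * actor_height / distance.
    desired_image_height_mm = target_fraction * sensor_height_mm
    return desired_image_height_mm * distance_m / actor_height_m

def clamp(value, low, high):
    return max(low, min(high, value))

# Example: actor 40 m away, lens limited to a 4.5-54 mm optical zoom range.
f = clamp(focal_length_mm(40.0), 4.5, 54.0)
print(f"requested focal length: {f:.1f} mm")
```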
In one embodiment, the one or more sensors are attached to a piece of athletic equipment of the actor. Examples of the one or more sensors include: a GPS device, an inertial sensor, a magnetic sensor, and a pressure sensor.
In one embodiment, the computing device causes the camera to capture one or more images based on the controlling of the direction and/or zoom level of the camera, overlays data derived from the sensor data on the one or more images, tags the one or more images with information derived from the sensor data, and/or stores the tagged images in a database, where the tagged images are retrievable via the tagged information.
The present disclosure includes methods performed by the computing device, non-transitory computer-readable media storing instructions that, when executed by such a computing device, cause the computing device to perform such methods, and computing devices configured to perform such methods.
In one embodiment, a set of gyro sensors (119) attached to an actor is configured to measure angular velocities about a plurality of axes. An analog-to-digital (A/D) converter (117) converts the analog signals from the gyro sensors (119) into digital signals for the input/output interface (115) of a computing device that performs bias calibration of the gyro sensors (119). A global positioning system (GPS) receiver (127) is configured to measure the current location of the actor; and a magnetometer having magnetic sensors (123) with a corresponding A/D converter (121), and/or an accelerometer (not shown), can be used to measure the orientation and/or additional motion characteristics of the actor.
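A minimal sketch of one common bias-calibration approach is shown below: average the gyro readings collected while the sensor is known to be stationary and subtract that average from subsequent samples. The window contents and axis ordering are assumptions made for this example.

```python
# Hypothetical sketch: estimate gyro bias from samples taken while the sensor
# is stationary, then subtract the bias from later readings. Window size and
# axis ordering (x, y, z) are illustrative assumptions.
def estimate_bias(stationary_samples):
    """stationary_samples: list of (wx, wy, wz) angular rates in deg/s."""
    n = len(stationary_samples)
    return tuple(sum(axis) / n for axis in zip(*stationary_samples))

def apply_bias(sample, bias):
    """Remove the estimated bias from one (wx, wy, wz) sample."""
    return tuple(v - b for v, b in zip(sample, bias))

stationary = [(0.4, -0.2, 0.1), (0.5, -0.1, 0.2), (0.6, -0.3, 0.0)]
bias = estimate_bias(stationary)
print(apply_bias((10.5, -0.2, 0.1), bias))  # bias-corrected reading
```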
The computing device further includes a bus (103) that connects the input/output interface (115), at least one microprocessor (101), random access memory (105), read only memory (ROM) (107), a data storage device (109), a display device (111), and an input device (113).
The memory devices (e.g., 105, 107, and 109) store instructions; and the microprocessor(s) (101) is (are) configured via the instructions to perform various operations disclosed herein to determine the camera control (131) and/or process the camera data.
The computing device of one embodiment is configured to store at least a portion of the processed sensor data (e.g., location, state, motion, maneuver and/or performance of the actors).
In general, the memory devices (e.g., 105, 107, and 109) of the computing apparatus include one or more of: ROM (Read Only Memory) (107), volatile RAM (Random Access Memory) (105), and non-volatile memory (109), such as a hard drive, flash memory, etc.
Volatile RAM is typically implemented as dynamic RAM (DRAM) which requires power continually in order to refresh or maintain the data in the memory. Non-volatile memory is typically a magnetic hard drive, a magnetic optical drive, an optical drive (e.g., a DVD RAM), or other type of memory system which maintains data even after power is removed from the system. The non-volatile memory may also be a random access memory.
The non-volatile memory can be a local device coupled directly to the rest of the components in the data processing system. A non-volatile memory that is remote from the system, such as a network storage device coupled to the data processing system through a network interface such as a modem or Ethernet interface, can also be used.
At least some of the functions and operations described herein are performed by a microprocessor executing instructions stored in the memory devices (e.g., 105, 107, and 109).
Alternatively, or in combination, the functions and operations as described here can be implemented using special purpose circuitry, with or without software instructions, such as an Application-Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.
While one embodiment can be implemented in fully functioning computers and computer systems, various embodiments are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
Routines executed to implement the embodiments may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” The computer programs typically include one or more instructions, set at various times in various memory and storage devices in a computer, that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.
A machine readable medium can be used to store software and data which when executed by a data processing system causes the system to perform various methods. The executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer to peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer to peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine readable medium in entirety at a particular instance of time.
Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs), etc.), among others. The computer-readable media may store the instructions.
The instructions may also be embodied in digital and analog communication links for electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc. However, such propagated signals are not tangible machine-readable media and are not configured to store instructions.
In general, a machine readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.
The description and drawings are illustrative and are not to be construed as limiting. The present disclosure is illustrative of inventive features to enable a person skilled in the art to make and use the techniques. Various features, as described herein, should be used in compliance with all current and future rules, laws and regulations related to privacy, security, permission, consent, authorization, and others. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure are not necessarily references to the same embodiment; and, such references mean at least one.
The use of headings herein is merely provided for ease of reference, and shall not be interpreted in any way to limit this disclosure or the following claims.
Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, and are not necessarily all referring to separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by one embodiment and not by others. Similarly, various requirements are described which may be requirements for one embodiment but not other embodiments. Unless excluded by explicit description and/or apparent incompatibility, any combination of various features described in this description is also included here. For example, the features described above in connection with “in one embodiment” or “in some embodiments” can be all optionally included in one implementation, except where the dependency of certain features on other features, as apparent from the description, may limit the options of excluding selected features from the implementation, and incompatibility of certain features with other features, as apparent from the description, may limit the options of including selected features together in the implementation.
The entire disclosures of the patent documents discussed above are hereby incorporated herein by reference.
In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
The present application is a continuation application of U.S. patent application Ser. No. 16/277,596, filed Feb. 15, 2019, which is a continuation application of U.S. patent application Ser. No. 15/046,144, filed Feb. 17, 2016 and issued as U.S. Pat. No. 10,212,325 on Feb. 19, 2019, and entitled “Systems And Methods To Control Camera Operations”, which claims the benefit of the filing date of Prov. U.S. Pat. App. Ser. No. 62/117,398, filed Feb. 17, 2015 and entitled “Predictive Camera Targeting,” the entire disclosures of which applications are hereby incorporated in their entirety by reference.
Number | Name | Date | Kind |
---|---|---|---|
4800897 | Nilsson | Jan 1989 | A |
5067717 | Harlan et al. | Nov 1991 | A |
5337758 | Moore et al. | Aug 1994 | A |
5724265 | Hutchings | Mar 1998 | A |
5825667 | Van Den Broek | Oct 1998 | A |
6013007 | Root et al. | Jan 2000 | A |
6167356 | Squadron et al. | Dec 2000 | A |
6436052 | Nikolic et al. | Aug 2002 | B1 |
6445882 | Hirano | Sep 2002 | B1 |
6499000 | Flentov et al. | Dec 2002 | B2 |
6571193 | Unuma et al. | May 2003 | B1 |
6825777 | Vock et al. | Nov 2004 | B2 |
6963818 | Flentov et al. | Nov 2005 | B2 |
7451056 | Flentov et al. | Nov 2008 | B2 |
7602301 | Stirling et al. | Oct 2009 | B1 |
7640135 | Vock et al. | Dec 2009 | B2 |
7827000 | Stirling et al. | Nov 2010 | B2 |
7860666 | Vock et al. | Dec 2010 | B2 |
7991565 | Vock et al. | Aug 2011 | B2 |
8055469 | Kulach et al. | Nov 2011 | B2 |
8239146 | Vock et al. | Aug 2012 | B2 |
8270670 | Chen et al. | Sep 2012 | B2 |
8628453 | Balakrishnan et al. | Jan 2014 | B2 |
8929709 | Lokshin | Jan 2015 | B2 |
9060682 | Lokshin | Jun 2015 | B2 |
9326704 | Lokshin et al. | May 2016 | B2 |
9497407 | Lokshin | Nov 2016 | B2 |
9566021 | Lokshin et al. | Feb 2017 | B2 |
9769387 | Beard et al. | Sep 2017 | B1 |
10008237 | Lokshin et al. | Jun 2018 | B2 |
10212325 | Lokshin et al. | Feb 2019 | B2 |
10659672 | Lokshin | May 2020 | B2 |
20020052541 | Cuce et al. | May 2002 | A1 |
20020115927 | Tsukada et al. | Aug 2002 | A1 |
20030065257 | Mault et al. | Apr 2003 | A1 |
20030163287 | Vock et al. | Aug 2003 | A1 |
20040225467 | Vock et al. | Nov 2004 | A1 |
20050223799 | Murphy | Oct 2005 | A1 |
20050243061 | Liberty et al. | Nov 2005 | A1 |
20060166737 | Bentley | Jul 2006 | A1 |
20060190419 | Bunn et al. | Aug 2006 | A1 |
20060247504 | Tice | Nov 2006 | A1 |
20060291840 | Murata et al. | Dec 2006 | A1 |
20070027367 | Oliver et al. | Feb 2007 | A1 |
20070063850 | Devaul et al. | Mar 2007 | A1 |
20080096726 | Riley et al. | Apr 2008 | A1 |
20080246841 | Chen et al. | Oct 2008 | A1 |
20090009605 | Ortiz | Jan 2009 | A1 |
20090041298 | Sandler et al. | Feb 2009 | A1 |
20090046152 | Aman | Feb 2009 | A1 |
20090063097 | Vock et al. | Mar 2009 | A1 |
20090088204 | Culbert et al. | Apr 2009 | A1 |
20090210078 | Crowley | Aug 2009 | A1 |
20090322540 | Richardson et al. | Dec 2009 | A1 |
20100030482 | Li | Feb 2010 | A1 |
20100081116 | Barasch et al. | Apr 2010 | A1 |
20100113115 | Hightower | May 2010 | A1 |
20100120585 | Quy | May 2010 | A1 |
20100149331 | DiMare et al. | Jun 2010 | A1 |
20100161271 | Shah et al. | Jun 2010 | A1 |
20100191499 | Vock et al. | Jul 2010 | A1 |
20100204615 | Kyle et al. | Aug 2010 | A1 |
20100268459 | O'Shea | Oct 2010 | A1 |
20110071792 | Miner | Mar 2011 | A1 |
20110208822 | Rathod | Aug 2011 | A1 |
20110222766 | Kato et al. | Sep 2011 | A1 |
20110246122 | Iketani et al. | Oct 2011 | A1 |
20110270135 | Dooley et al. | Nov 2011 | A1 |
20110313731 | Vock et al. | Dec 2011 | A1 |
20120004883 | Vock et al. | Jan 2012 | A1 |
20120113274 | Adhikari et al. | May 2012 | A1 |
20120130515 | Homsi et al. | May 2012 | A1 |
20120154557 | Perez et al. | Jun 2012 | A1 |
20120178534 | Ferguson et al. | Jul 2012 | A1 |
20120191705 | Tong et al. | Jul 2012 | A1 |
20120251079 | Meschter et al. | Oct 2012 | A1 |
20130044043 | Abdollahi et al. | Feb 2013 | A1 |
20130176401 | Monari et al. | Jul 2013 | A1 |
20130188067 | Koivukangas et al. | Jul 2013 | A1 |
20130218504 | Fall et al. | Aug 2013 | A1 |
20130242105 | Boyle | Sep 2013 | A1 |
20130265440 | Mizuta | Oct 2013 | A1 |
20130274040 | Coza et al. | Oct 2013 | A1 |
20130278727 | Tamir et al. | Oct 2013 | A1 |
20130316840 | Marks | Nov 2013 | A1 |
20130330054 | Lokshin | Dec 2013 | A1 |
20130346013 | Lokshin et al. | Dec 2013 | A1 |
20140028855 | Pryor | Jan 2014 | A1 |
20140120838 | Lokshin | May 2014 | A1 |
20140257743 | Lokshin et al. | Sep 2014 | A1 |
20140257744 | Lokshin et al. | Sep 2014 | A1 |
20140287389 | Kallmann et al. | Sep 2014 | A1 |
20150050972 | Sarrafzadeh et al. | Feb 2015 | A1 |
20150098688 | Lokshin | Apr 2015 | A1 |
20150154452 | Bentley et al. | Jun 2015 | A1 |
20150335949 | Lokshin et al. | Nov 2015 | A1 |
20150340066 | Lokshin et al. | Nov 2015 | A1 |
20160006974 | Pulkkinen et al. | Jan 2016 | A1 |
20160042493 | MacMillan et al. | Feb 2016 | A1 |
20160042622 | Takiguchi | Feb 2016 | A1 |
20160045785 | Tzovanis et al. | Feb 2016 | A1 |
20160050360 | Fisher | Feb 2016 | A1 |
20160225410 | Lee | Aug 2016 | A1 |
20160241768 | Lokshin et al. | Aug 2016 | A1 |
20170026608 | Lokshin | Jan 2017 | A1 |
20170106238 | Lokshin et al. | Apr 2017 | A1 |
20190182417 | Lokshin et al. | Jun 2019 | A1 |
Number | Date | Country |
---|---|---|
1308505 | Aug 2001 | CN |
1533672 | Sep 2004 | CN |
1907222 | Feb 2007 | CN |
102198317 | Sep 2011 | CN |
0866949 | Sep 1998 | EP |
2001317959 | Nov 2001 | JP |
2003244691 | Aug 2003 | JP |
2005286377 | Oct 2005 | JP |
2005286394 | Oct 2005 | JP |
2006345270 | Dec 2006 | JP |
2009065324 | Mar 2009 | JP |
2009078134 | Apr 2009 | JP |
2010088886 | Apr 2010 | JP |
2012008683 | Jan 2012 | JP |
2012512608 | May 2012 | JP |
2012523900 | Oct 2012 | JP |
2013130808 | Jul 2013 | JP |
2006081395 | Aug 2006 | WO |
2007006346 | Jan 2007 | WO |
2010025467 | Mar 2010 | WO |
2011069291 | Jun 2011 | WO |
2011101858 | Aug 2011 | WO |
2011140095 | Oct 2011 | WO |
2012027626 | Mar 2012 | WO |
2014073454 | May 2014 | WO |
Entry |
---|
Chinese Application No. 201380075986.2, Search Report, dated Mar. 18, 2017. |
Chinese Patent Application No. 201380020481.6, Search Report, dated Apr. 25, 2017. |
European Patent Application No. 12886942.7, Extended Search Report, dated Sep. 22, 2015. |
European Patent Application No. 13877362.7, Supplementary Search Report, dated Sep. 29, 2016. |
European Patent Application No. 13876901.3, European Search Report, dated Mar. 2, 2017. |
International Patent Application PCT/US2012/071867, International Search Report and Written Opinion, dated Jul. 4, 2013. |
International Patent Application PCT/US2012/071869, International Search Report and Written Opinion, dated Apr. 29, 2013. |
International Patent Application PCT/US2013/021122, International Search Report and Written Opinion, dated Apr. 23, 2013. |
International Patent Application PCT/US2013/059410, International Search Report and Written Opinion, dated Dec. 9, 2013. |
International Patent Application PCT/US2013/058807, International Search Report and Written Opinion, dated Dec. 30, 2013. |
International Patent Application PCT/US2015/047251, International Search Report and Written Opinion, dated Dec. 16, 2015. |
International Patent Application No. PCT/US2016/018291, International Search Report and Written Opinion, dated Apr. 26, 2016. |
The Extended European Search Report 13804492.0, dated Nov. 20, 2015. |
Systems and Methods to Control Camera Operations, U.S. Appl. No. 15/046,144, filed Feb. 17, 2016, David Lokshin et al., Patented Case, Jan. 30, 2019. |
Systems and Methods to Control Camera Operations, U.S. Appl. No. 16/277,596, filed Feb. 15, 2019, David Lokshin et al., Patented Case, Apr. 29, 2019. |
Number | Date | Country | |
---|---|---|---|
20200275008 A1 | Aug 2020 | US |
Number | Date | Country | |
---|---|---|---|
62117398 | Feb 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16277596 | Feb 2019 | US |
Child | 15931473 | US | |
Parent | 15046144 | Feb 2016 | US |
Child | 16277596 | US |