The present disclosure relates generally to tracking and surveillance of an object.
Fixed video and still cameras may be used for monitoring an object, such as a person, within a given coverage or monitored area. Examples include security cameras mounted in and around buildings and monitored by security guards. Other cameras may incorporate controllable movement so that a security guard may track an object, for example to follow a person of interest moving about a building. Various surveillance systems are available to detect an object, for example by detecting movement or scene changes. The object may then be tracked or otherwise monitored using fixed and/or moveable cameras.
In particular embodiments a method of tracking an object carrying a wireless location device is provided. The method comprises recording and storing images from a plurality of cameras corresponding to respective coverage areas having predetermined locations, and determining location information associated with the wireless location device, the location information corresponding to one or more of the coverage areas. The method further comprises determining which of the images correspond to the location information, and retrieving these images.
In particular embodiments a system for tracking an object carrying a wireless location device is provided. The system comprises a plurality of cameras arranged to record images from respective coverage areas having predetermined locations, an image server coupled to the cameras and arranged to store the images recorded by the cameras, a location server arranged to determine and store location information associated with the wireless location device, the location information corresponding to one or more of said coverage areas, and a tracking server arranged to determine which of the images from the image server correspond to the location information, and to retrieve these images.
In particular embodiments a method of surveillance of an object is provided. The method comprises receiving a surveillance request comprising a surveillance time window, determining location information associated with the object over this surveillance time window, the location information corresponding to two or more coverage areas having predetermined locations, retrieving images of the two or more coverage areas which correspond to the location information over the surveillance time window, and displaying the images retrieved.
In particular embodiments a tracking server for tracking an object carrying a wireless location device is provided. The tracking server comprises a processor arranged to determine location information associated with the object over a surveillance time window in response to receiving a surveillance request comprising said surveillance time window. The location information corresponds to two or more coverage areas having predetermined locations. The processor is further arranged to retrieve images of the two or more coverage areas which correspond to the location information over the surveillance time window.
Embodiments are described with reference to the following drawings, by way of example only and without intending to be limiting, in which:
Referring to
The system 100 is used to track one or more objects 105a, 105b through the surveillance region. The objects may include a person (105a), a notebook computer (105b), an artwork, or any moveable object. Each object carries a respective wireless location device 110a, 110b, for example in a pocket of a person (105a) or integrated within a notebook computer (105b). The wireless location devices 110a, 110b may be radio frequency identification (RFID) tags, or any wireless device, such as a mobile phone, which can be configured to communicate with the system in order to enable location information associated with the device to be determined.
The cameras 115x-115z periodically record images of their respective coverage areas 120x-120z which may or may not include an object 105a, 105b, and forward these recorded images together with a respective camera identifier (CamID) to the image server 150. For example each recorded image may be sent as an image file and associated camera identifier (170x, 170y, 170z). The cameras 115x-115z and image server 150 may be coupled using a local area network, coaxial cable or any other suitable mechanism as would be appreciated by those skilled in the art. The image server 150 timestamps the received image file and camera identifier (170x, 170y, 170z) using a suitable timestamp such as a time from a common clock (180) also used by the location server 155 or an internal clock sufficiently synchronized with a corresponding internal clock within the location server 155. The time-stamped image files and camera identifiers are then stored on the image server 150.
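As a rough illustration of the storage side of this arrangement, the sketch below keeps time-stamped image records keyed by camera identifier; the class and field names, and the use of a shared clock callable, are illustrative assumptions rather than details taken from the disclosure.

```python
import time
from dataclasses import dataclass

@dataclass
class ImageRecord:
    cam_id: str        # camera identifier (CamID)
    image_file: bytes  # encoded image data received from the camera
    timestamp: float   # time applied by the image server on receipt

class ImageServer:
    """Stores time-stamped images keyed by camera identifier (illustrative sketch)."""

    def __init__(self, clock=time.time):
        self._clock = clock                      # common clock, e.g. shared with the location server
        self._records: list[ImageRecord] = []

    def receive(self, cam_id: str, image_file: bytes) -> None:
        # Timestamp the incoming image file and camera identifier, then store the record.
        self._records.append(ImageRecord(cam_id, image_file, self._clock()))

    def query(self, cam_id: str, t_start: float, t_end: float) -> list[ImageRecord]:
        # Return stored images for one camera within a requested time interval.
        return [r for r in self._records
                if r.cam_id == cam_id and t_start <= r.timestamp <= t_end]
```

Later sketches assume a similar query interface when the tracking server requests images for a coverage area and time interval.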
The base stations 125a-125c periodically determine location information associated with the wireless location devices 110a, 110b, for example by identifying nearby wireless location devices 110a, 110b and measuring the signal strength of signals received from these identified wireless location devices. The wireless location devices 110a, 110b are configured to periodically transmit their own unique device identifier (WLD_ID). The signal strength of this signal from the wireless location devices can then be measured by receiving base stations 125a-125c, as will be appreciated by those skilled in the art. This signal strength measurement can then be used as a proxy for range or distance between the wireless location device 110a, 110b and the respective base station 125a-125c. If the wireless location device signal is picked up by a number of base stations 125a-125c, then the relative measured signal strengths from each base station can be used to determine the relative position of the wireless location devices 110a, 110b using triangulation, as will also be appreciated by those skilled in the art. By knowing the locations of the base stations 125a-125c, the positions of the wireless location devices 110a, 110b can then be estimated. Various system configurations will be available to those skilled in the art in order to coordinate the activities of the base stations 125a-125c and wireless location devices 110a, 110b, for example in order to ensure that the base stations are listening for the wireless location device signal transmissions at the right time. This may be achieved, for example, by arranging the base stations to periodically transmit a common beacon signal to which each of the wireless location devices 110a, 110b is configured to respond.
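As a rough numerical illustration of this signal-strength-based positioning, the sketch below converts a received signal strength into an estimated range with a log-distance path-loss model and then solves a linearized least-squares trilateration; the path-loss parameters and function names are assumptions for illustration only, not values from the disclosure.

```python
import numpy as np

def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -40.0,
                     path_loss_exp: float = 2.5) -> float:
    """Estimate range from a signal strength measurement (log-distance path-loss model)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(base_stations: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Least-squares position estimate from three or more base stations.

    base_stations: (N, 2) array of known base-station (x, y) coordinates
    distances:     (N,) array of estimated ranges to the wireless location device
    """
    # Subtract the last range equation from the others to obtain a linear system.
    A = 2 * (base_stations[:-1] - base_stations[-1])
    b = (distances[-1] ** 2 - distances[:-1] ** 2
         + np.sum(base_stations[:-1] ** 2, axis=1)
         - np.sum(base_stations[-1] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```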
The base stations 125a-125c are typically located in and around the coverage areas 120x-120z so that each coverage area may be “observed” by at least three base stations 125a-125c. In other words, if an object (105a) and hence a respective wireless location device (110a) are located in a coverage area (120x), then at least three base stations (125a, 125b, 125c) would normally receive and be able to measure the signal strength of signals from that wireless location device (110a).
The base stations 125a-125c forward the wireless location device (110a, 110b) identifiers (WLD_ID) and their respective signal strength measures to the location server 155, together with their respective base station identifiers (BS_ID). This location information 175a-175c is received by the location server 155 and corresponds to one or more of the coverage areas 120x-120z. In other words, because the locations of the base stations 125a-125c are known and positioned around the coverage areas 120x-120z, the positions of the wireless location devices 110a, 110b can be estimated and "located" within or near one of the coverage areas 120x-120z. The location information 175a-175c can therefore include the wireless location device (110a, 110b) identifiers (WLD_ID), their respective signal strengths, and the base station identifier (BS_ID) of the base station 125a-125c that received the signal from the wireless location device 110a, 110b. Further location information may include the locations of the respective base stations 125a-125c, received signal angle-of-arrival information, received signal time-of-arrival information, and global positioning system (GPS) coordinates from the wireless location devices 110a, 110b.
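The fields listed above might be grouped into a single report record per base-station observation; the sketch below is one possible shape for such a record, with illustrative field names that are not drawn from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class LocationReport:
    """One base-station observation of a wireless location device (illustrative fields)."""
    wld_id: str                                          # wireless location device identifier (WLD_ID)
    bs_id: str                                           # reporting base station identifier (BS_ID)
    rssi_dbm: float                                      # measured signal strength of the device's transmission
    bs_location: Optional[Tuple[float, float]] = None    # known base-station coordinates, if available
    angle_of_arrival: Optional[float] = None             # received signal angle-of-arrival, if measured
    time_of_arrival: Optional[float] = None              # received signal time-of-arrival, if measured
    gps_coords: Optional[Tuple[float, float]] = None     # GPS coordinates reported by the device itself
```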
The base stations 125a-125c may be coupled to the location server 155 by a local area network (LAN) or any other suitable mechanism. A common LAN (not shown) may be used for coupling the base stations 125a-125c and location server 155, as well as the cameras 115x-115z and image server 150.
The location information received by the location server 155 may simply be time-stamped and stored, for example using the time-stamp functionality 180 used by the image server 150. Alternatively, the location server may further process this location information in order to determine further location information, for example by estimating a position for each wireless location device 110a, 110b. This position estimation may be implemented using the known locations of the base stations 125a-125c which received a signal from the respective wireless location devices 110a, 110b, together with the respective signal strengths of these signals. For example, taking the first object 105a in
The system 100 of this embodiment therefore provides time-stamped images of each coverage area 120x-120z as well as time-stamped location information for each object 105a, 105b, this location information corresponding to one or more of the coverage areas. This allows the tracking server 160 to track a selected object 105a through the coverage areas over time, and hence to retrieve images of that object. Thus given a surveillance time window, the tracking server 160 can determine from the location server the location information of the selected object 105a over that surveillance time window. This location information may simply comprise the coverage area 120x-120z in which the wireless location device 110a carried by the selected object 105a was located at each of a number of time intervals within the surveillance time window. Alternatively, this coverage area information may be determined from other location information stored within the location server 155, for example wireless location device 110a identifiers, corresponding signal strengths and associated base station locations. Once the coverage areas 120x-120z and the respective time intervals during which the wireless location device 110a was located in each coverage area are determined, images corresponding to those coverage areas 120x-120z at those time intervals can be requested from the image server 150. The sequence of coverage areas over the surveillance time window can then be displayed on the screen 165 in order to track the object 105a.
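One way to turn the stored, time-stamped location samples into the per-coverage-area time intervals described above is to collapse consecutive samples that share a coverage area into a single run; the sketch below assumes the location server returns the samples as (timestamp, coverage_area) pairs, which is an illustrative interface rather than one defined here.

```python
def coverage_intervals(samples):
    """Collapse (timestamp, coverage_area) samples into (area, start, end) runs.

    samples: (timestamp, coverage_area) pairs sorted by timestamp, taken over
             the surveillance time window for one wireless location device.
    """
    runs = []
    for t, area in samples:
        if runs and runs[-1][0] == area:
            runs[-1][2] = t            # same coverage area as before: extend the current run
        else:
            runs.append([area, t, t])  # object has moved to a different coverage area
    return [tuple(run) for run in runs]
```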
The system of this embodiment may be used for many applications, for example tracking a lost child in an amusement park or other crowded public area or tracking a notebook computer which has been removed from its last known position. More generally, embodiments may be used for security surveillance, inventory tracking in enterprises, and any application that requires video surveillance.
In alternative embodiments, the wireless location devices 110a, 110b may be arranged to simply forward their estimated coordinates to the location server 155, without the need for signal strength measuring at base stations having known locations. For example, the wireless location devices 110a, 110b may incorporate GPS functionality and periodically forward their respective GPS coordinates to the location server 155 using a cellular network, or using WLAN base stations whose location is not required. In another example, the wireless location devices 110a, 110b may estimate their locations using signals received from base stations having known locations, and forward this location information to the location server 155. In yet a further example, a base station may be positioned within each coverage area 120x-120z such that when a wireless location device hands off from one base station to another, it can be determined that the wireless location device has also moved from one coverage area to another, the locations of the base stations or their correspondence with the coverage areas being known.
In further alternative embodiments, the image server 150, location server 155, tracking server 160, screen 165, and time-stamp function 180 may be implemented in a single computer system, or distributed in any suitable manner as would be appreciated by those skilled in the art. Furthermore, the functionality implemented in the image server 150, location server 155, and tracking server 160 may be combined or distributed differently in other apparatus.
Referring now to
The image server (150) receives the recorded images and camera identifiers from a plurality of cameras (115x-115z) at step 220. Thus the image server 150 receives images of a plurality of fixed or known location coverage areas (120x-120z) over time. The image server (150) then timestamps these image files (and camera identifiers) at step 225. This step may be implemented using timestamp signals received from a time-stamping function (180) also used by the location server (155); however, the time-stamping function does not require a high degree of accuracy or a tight tolerance, given the speed of the objects (105a, 105b), typically people or objects carried by people, moving about within the coverage areas (120x-120z). The image server then stores the time-stamped image files and camera identifiers at step 230. Given the large size of image files, reduced resolution images or a reduced frequency of recorded images may be used in order to reduce the storage requirements in some implementations. Similarly, images may only be stored when a wireless location device (110a, 110b) has been determined to be within the coverage area, as will be described in more detail below.
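Where storage space is a concern, one illustrative way to reduce the frequency of stored images is to keep only every Nth frame per camera; the wrapper below is a sketch under that assumption, with the decimation factor chosen arbitrarily rather than taken from the disclosure.

```python
class DecimatingImageStore:
    """Stores only every Nth image per camera to reduce storage requirements."""

    def __init__(self, image_server, keep_every_n: int = 5):
        self._server = image_server        # underlying store, e.g. the ImageServer sketched earlier
        self._keep_every_n = keep_every_n  # assumed decimation factor
        self._counts = {}                  # frames seen so far, per camera identifier

    def receive(self, cam_id: str, image_file: bytes) -> None:
        count = self._counts.get(cam_id, 0)
        if count % self._keep_every_n == 0:
            self._server.receive(cam_id, image_file)   # keep this frame
        self._counts[cam_id] = count + 1               # otherwise drop it
```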
Referring now to
The location server (155) then determines further location information associated with the wireless location devices (110a, 110b) which corresponds to one or more of the coverage areas (120x-120z) at step 320. For each wireless location device (110a), the location server (155) may identify a signal strength measurement and a corresponding base station location from the base station identifier, and estimate the position of the device (110a) using trilateration, triangulation or any other suitable locating method as would be appreciated by those skilled in the art. The estimated position will typically correspond to the predetermined location of one of the coverage areas; in other words, the estimated position is within one of the coverage areas. The location server then timestamps the determined location information (in this example the estimated position) at step 325. This step may be implemented using timestamp signals received from a time-stamping function (180) also used by the image server (150); however, an internal clock will typically be adequate. The location server (155) then stores the determined location information at step 330. Whilst the determined location information has been described in this embodiment as an estimated position, or base station locations together with wireless location device signal strengths, the location information could simply be an identifier for the coverage area corresponding to the estimated position of the wireless location device.
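A simple way to map an estimated position onto one of the coverage areas, assuming each coverage area's predetermined location can be represented as a rectangular bound, is sketched below; the rectangular representation and the dictionary interface are illustrative assumptions.

```python
def coverage_area_for(position, coverage_areas):
    """Return the identifier of the coverage area containing an estimated position.

    position:       (x, y) estimate for a wireless location device
    coverage_areas: dict mapping a coverage-area id to its (x_min, y_min, x_max, y_max) bounds
    """
    x, y = position
    for area_id, (x_min, y_min, x_max, y_max) in coverage_areas.items():
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return area_id
    return None   # the position falls outside every predetermined coverage area
```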
Referring now to
The tracking server (160) receives this location information and determines which coverage area (120x-120z) the location information corresponds to at each time interval at step 420. The correspondence between the location information and the coverage areas is available using the predetermined locations of the coverage areas (120x-120z). The tracking server (160) then requests images from the image server (150) which correspond to the determined coverage areas and respective time intervals at step 425. The requested times correspond to the timestamps used by the location server 155, and also in some embodiments by the image server (150). The image server (150) receives these requested coverage areas and respective time intervals from the tracking server (160) and returns the corresponding recorded and stored images at step 430. The image server may implement this step by matching the requested coverage areas with respective camera identifiers and searching for image files having these camera identifiers and the requested time intervals. The tracking server (160) retrieves these images from the image server (150) at step 435. The retrieved image files are recorded images of the coverage areas corresponding to the location information of the identified object at each time interval over the surveillance time window. The tracking server (160) may arrange the received images into chronological order at step 440, for example using the timestamps associated with each image. The images of the coverage areas (120x-120z) traversed by the object (105a) are then displayed on the display screen by the tracking server at step 445. Thus the object (105a) can be tracked over the surveillance time window by viewing the images of the coverage areas showing the object. For example, a lost child can be tracked or viewed as he or she moves around an amusement park to determine whether the child has simply got lost or has been abducted.
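Putting the pieces together, steps 420 to 445 might be sketched as follows; the camera_for_area mapping, the location server and image server interfaces, and the display callable are assumed names used only to make the flow concrete, and coverage_intervals and the image server query are as sketched earlier.

```python
def retrieve_track(wld_id, t_start, t_end, location_server, image_server,
                   camera_for_area, display):
    """Retrieve and display images of every coverage area an object passed through."""
    # Steps 415/420: location information over the window, collapsed into per-area intervals.
    samples = location_server.samples(wld_id, t_start, t_end)
    intervals = coverage_intervals(samples)            # (area, start, end) runs, as sketched earlier

    # Steps 425-435: request the stored images matching each coverage area and time interval.
    frames = []
    for area, start, end in intervals:
        cam_id = camera_for_area[area]                 # coverage area -> camera identifier
        frames.extend(image_server.query(cam_id, start, end))

    # Steps 440-445: arrange chronologically and display.
    for record in sorted(frames, key=lambda r: r.timestamp):
        display(record)
```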
The tracking server 160 may additionally be arranged to display images from the coverage area in which an object is currently located. This may be implemented by interrogating the location server on the latest location information for the identified object and wireless location device, and requesting images from the image server of the coverage area corresponding to that location information. Indeed a direct feed from the camera 115x-115z associated with the coverage area may be displayed on the screen 165.
Referring now to
Whilst the embodiment has been described with respect to one object, it may be implemented with respect to many such objects, so that whenever the predetermined location of one of these objects changes, the storing of images is triggered.
In an example implementation, a notebook computer (105b) may have a normal or predetermined location which may or may not be within one of the coverage areas (120x-120z). When the notebook is removed from this predetermined location, the system (100) is configured by methods 500 and 200 to start recording images of the coverage areas in order to enable tracking of the notebook computer. Thus images of the notebook computer (105b) may be used to determine whether the notebook computer was legitimately moved by an authorized person, or has been stolen. If the notebook computer has been stolen, then the thief may be tracked through the coverage areas, and perhaps their identity determined manually or by the public release of suitable images of the thief.
Another method of triggering the image server to start recording images in response to a predetermined event is shown in FIG. 5B. This method 550 may be implemented by the tracking server 160 and the image server 150 of
Whilst the embodiment has been described with respect to one object, it may be implemented with respect to many such objects, so that whenever one of the objects is detected within one of the coverage areas, the storing of images is triggered. Alternatively, storing of images of each of the coverage areas may be triggered independently by the detection of one of a number of objects within the respective coverage area. Such an arrangement reduces the storage space required for the image files, as only images of one or more predetermined objects are stored. Furthermore, in addition or alternatively, the cameras of the respective coverage areas may be arranged to start recording in response to the trigger instructions from the tracking server. In a further arrangement, recording and/or storing of images for a coverage area may be stopped when no objects are detected within the coverage area.
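The triggered storage described above might be sketched as follows, where images for a coverage area are stored only while a tracked object is currently located in it; the class and method names are illustrative assumptions, not part of the disclosed system.

```python
class TriggeredImageStore:
    """Stores images for a coverage area only while a tracked object is located in it."""

    def __init__(self, image_server):
        self._server = image_server
        self._occupied = set()   # coverage areas currently containing at least one tracked object

    def on_location_update(self, device_areas: dict) -> None:
        # device_areas maps each wireless location device id to its current coverage area (or None).
        self._occupied = {area for area in device_areas.values() if area is not None}

    def receive(self, cam_id: str, area_id: str, image_file: bytes) -> None:
        # Store the image only if its coverage area currently contains a tracked object.
        if area_id in self._occupied:
            self._server.receive(cam_id, image_file)
```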
In an example implementation, a person (105a) such as a child in an amusement arcade may receive an RFID tag on a wrist-band when entering. The storing of images from a particular camera may then be triggered upon detection of the child within a corresponding coverage area. In other words, location information associated with the RFID tag (110a) and recorded in the location server (155) is monitored to determine when it corresponds with a coverage area (120x). The image server (150) is then instructed to store images received from the camera (115x) corresponding to the coverage area (120x) which the child (105a) has just entered. Storing of images of the child in different coverage areas may then be triggered as the child enters these areas. Similarly storing of images from other coverage areas may also be triggered when different children enter them. Thus even though there is not continuous image recording of all coverage areas, there is continuous image recording of all objects.
Referring now to
The skilled person will recognise that the above-described apparatus and methods may be embodied as processor control code, for example on a carrier medium such as a disk, CD- or DVD-ROM, programmed memory such as read only memory (Firmware), or on a data carrier such as an optical or electrical signal carrier. For some applications embodiments of the invention may be implemented on a DSP (Digital Signal Processor), ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array). Thus the code may comprise conventional programme code or microcode or, for example code for setting up or controlling an ASIC or FPGA. The code may also comprise code for dynamically configuring re-configurable apparatus such as re-programmable logic gate arrays. Similarly the code may comprise code for a hardware description language such as Verilog™ or VHDL (Very high speed integrated circuit Hardware Description Language). As the skilled person will appreciate, the code may be distributed between a plurality of coupled components in communication with one another. Where appropriate, the embodiments may also be implemented using code running on a field-(re)programmable analogue array or similar device in order to configure analogue hardware.
The skilled person will also appreciate that the various embodiments and specific features described with respect to them could be freely combined with the other embodiments or their specifically described features in general accordance with the above teaching. The skilled person will also recognise that various alterations and modifications can be made to specific examples described without departing from the scope of the appended claims.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
IR Trans—Products and Orders—Ethernet Devices, 2 pages http://www.irtrans.de/en/shop/lan.php, printed on Apr. 22, 2009. |
Isgro, Francesco et al., “Three-Dimensional Image Processing in the Future of Immersive Media,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, No. 3; XP011108796; ISSN: 1051-8215; Mar. 1, 2004; pp. 288-303; 16 pages. |
Itoh, Hiroyasu, et al., “Use of a gain modulating framing camera for time-resolved imaging of cellular phenomena,” SPIE vol. 2979, 1997, pp. 733-740; 8 pages. |
Kauff, Peter, et al., “An Immersive 3D Video-Conferencing System Using Shared Virtual Team User Environments,” Proceedings of the 4th International Conference on Collaborative Virtual Environments, XP040139458; Sep. 30, 2002; 8 pages. |
Kazutake, Uehira, “Simulation of 3D image depth perception in a 3D display using two stereoscopic displays at different depths,” http://adsabs.harvard.edu./abs/2006SPIE.6055.408U; 2006, 2 pgs. |
Keijser, Jeroen, et al., “Exploring 3D Interaction in Alternate Control-Display Space Mappings,” IEEE Symposium on 3D User Interfaces, Mar. 10-11, 2007, pp. 17-24; 8 pages. |
Klint, Josh, “Deterred Rendering in Leadwerks Engine,” Copyright Leadwersk Corporation 2008, 10 pages; http://www.leadwerks.com/files/Deferred—Rendering—in—Leadwerks—Engine.pdf. |
Koyama, S., et al. “A Day and Night Vision MOS Imager with Robust Photonic-Crystal-Based RGB-and-IR,” Mar. 2008, pp. 754-759; ISSN: 0018-9383; IEE Transactions on Electron Devices, vol. 55, No. 3; 6 pages http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4455782&isnumber=4455723. |
Lawson, S., “Cisco Plans TelePresence Translation Next Year,” Dec. 9, 2008; http://www.pcworld.com/ article/155237/.html?ik=rss—news; 2 pages. |
Miller, Gregor, et al., “interactive Free-Viewpoint Video,” Centre for Vision, Speech and Signal Processing, [retrieved Feb. 26, 2009], http://www.ee.surrey.ac.uk/CVSSP/VMRG/ Publications/miller05cvmp.pdf, 10 pages. |
“Minoru from Novo is the worlds first consumer 3D Webcam,” Dec. 11, 2008 [retrieved Feb. 24, 2009], http://www.minoru3d.com, 4 pages. |
Mitsubishi Electric Research Laboratories, copyright 2009 [Retrieved Feb. 26, 2009], http://www.merl.com/projects/3dtv, 2 pages. |
National Training Systems Association Home—Main, Interservice/Industry Training, Simulation & Education Conference, Dec. 1-4, 2008 [retrieved Feb. 26, 2009], http://ntsa.metapress.com/app/ home/main.asp?referrer=default, 1 page. |
OptolQ, “Anti-Speckle Techniques Uses Dynamic Optics,” Jun. 1, 2009, 2 pages; http://www.optoiq.com/index/photonics-technologies-applications/lfw-display/lfw-article-display/363444/articles/optoiq2/photonics-technologies/technology-products/optical-components/optical-mems/2009/12/anti-speckle-technique-uses-dynamic-optics/QP129867/cmpid=EnlOptoLFWJanuary132010.html. |
OptolQ, “Smart Camera Supports Multiple Interfaces,” Jan. 22, 2009, 2 pages; http://www.optoiq.com/index/machine-vision-imaging-processing/display/vsd-article-display/350639/articles/vision-systems-design/daliy-product-2/2009/01/smart-camera-supports-multiple-interfaces.html. |
OptoIQ, “Vision + Automation Products—VideometerLab 2,” 11 pgs.; http://www.optoiq.com/optoiq-2/en-us/index/machine-vision-imaging-processing/display/vsd-articles-tools-template.articles.vision-systems-design.volume-11.issue-10.departments.new-products.vision-automation-products.htmlhtml. |
OptolQ, “Vision Systems Design—Machine Vision and Image Processing Technology,” printed Mar. 18, 2010, 2 pages; http://www.optoiq.com/index/machine-vision-imaging-processing.html. |
PCT “Notification of Transmittal Opinion of the International Searching Report and the Written Opinion of the International Searching Authority, or the Declaration,” PCT/US2009/001070, dated Apr. 8, 2009, 17 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration; PCT/US2009/038310; dated Oct. 10, 2009; 19 pages. |
Radhika, N., et al., “Mobile Dynamic recofigurable Context aware middleware for Adhoc smart spaces,” vol. 22, 2008, 3 pages http://www.acadjournal.com/2008/V22/part6/p7. |
“Rayvel Business-to-Business Products,” copyright 2004 [retrieved Feb. 24, 2009], http://www.rayvel.com/b2b.html, 2 pages. |
“Robust Face Localisation Using Motion, Colour & Fusion” Dec. 10, 2003; Proc. VIIth Digital Image Computing: Techniques and Applications, Sun C. et al (Eds.), Sydney; 10 pgs.; Retrieved from the internet: http://www.cmis.csiro.au/Hugues.Talbot/dicta2003/cdrom/pdf/0899.pdf; pp. 899-908, XP007905630. |
School of Computing, “Bluetooth over IP for Mobile Phones,” 1 page http://www.computing.dcu.ie/wwwadmin/fyp-abstract/list/fyp...details05.jsp?year=2005&number=51470574. |
Sena, “Industrial Bluetooth,” 1 page http://www.sena.com/products/industrial—bluetooth, printed on Apr. 22, 2009. |
Shaffer, Shmuel, “Translation—State of the Art” presentation; Jan. 15, 2009; 22 pages. |
Shi, C. et al., “Automatic Image Quality Improvement for Videoconferencing,” IEEE ICASSP © 2004, 4 pgs. |
SMARTHOME, “IR Extender Expands Your IR Capabilities,” 3 pages http://www.smarthome.com/8121.html, printed Apr. 22, 2009. |
Soohuan, Kim, et al., “Block-based face detection scheme using face color and motion estimation,” Real-Time Imaging VIII; Jan. 20-22, 2004, San Jose, CA; vol. 5297, No. 1; Proceedings of the SPIE—The International Society for Optical Engineering SPIE—Int. Soc. Opt. Eng USA ISSN: 0277-786X; pp. 78-88; XP007905596; 11 pgs. |
“Super Home Inspectors or Super Inspectors,” printed Mar. 18, 2010, 3 pages; http://www.umrt.com/PageManager/Default.aspx/PageID=2120325. |
Total immersion, Video Gallery, copyright 2008-2009 [retrieved Feb. 26, 2009], http://www.t-immersion.com/en,video-gallery,36.html, 1 page. |
Trucco, E., et al., “Real-Time Disparity Maps for Immersive 3-D Teleconferencing by Hybrid Recursive Matching and Census Transform,” 9 pages; retrieved and printed from the website on May 4, 2010 from http://server.cs.ucf.edu/˜vision/papers/VidReg-final.pdf. |
Tsapatsoulis, N., et al., “Face Detection for Multimedia Applications,” Proceedings of the ICIP '00; Vancouver, BC, Canada; Sep. 2000; 4 pages. |
Tsapatsoulis, N., et al., “Face Detection in Color Images and Video Sequences,” 10th Mediterranean Electrotechnical Conference (MELECON), 2000; vol. 2; pp. 498-502; 21 pgs. |
Wang, Hualu, et al., “A Highly Efficient System for Automatic Face Region Detection inMPEG Video,” IEEE Transactions on Circuits and Systems for Video Technology; vol. 7, Issue 4; 1977 pp. 615-628; 26 pgs. |
Wilson, Mark, “Dreamoc 3D Display Turns Any Phone Into Hologram Machine,” Oct. 30, 2008 [retrieved Feb. 24, 2009], http://gizmodo.com/5070906/dreamoc-3d-display-turns-any-phone-into-hologram-machine, 2 pages. |
WirelessDevNet, Melody Launches Bluetooth Over IP, http://www.wirelessdevnet.com/news/2001/ 155/news5.html; 2 pages, printed on Jun. 5, 2001. |
WO 2008/118887 A3 Publication with PCT International Search Report (4 pages), International Preliminary Report on Patentability (1 page), and Written Opinion of the ISA (7 pages); PCT/US2008/058079; dated Sep. 18, 2008. |
Yang, Jie, et al., “A Real-Time Face Tracker,” Proceedings 3rd IEEE Workshop on Applications of Computer Vision; 1996; Dec. 2-4, 1996; pp. 142-147; 6 pgs. |
Yang, Ming-Hsuan, et al., “Detecting Faces in Images: A Survey,” vol. 24, No. 1; Jan. 2002; pp. 34-58; 25 pgs. |
Yang, Ruigang, et al., “Real-Time Consensus-Based Scene Reconstruction using Commodity Graphics Hardware,” Department of Computer Science, University of North Carolina at Chapel Hill, 10 pgs. |
Yoo, Byounghun, et al., “Image-Based Modeling of Urban Buildings Using Aerial Photographs and Digital Maps,” Transactions in GIS, vol. 10 No. 3, p. 377-394, 2006; 18 pages [retrieved May 17, 2010], http://icad,kaist.ac.kr/publication/paper—data/image—based.pdf. |
U.S. Appl. No. 13/036,925, filed Feb. 28, 2011 ,entitled “System and Method for Selection of Video Data in a Video Conference Environment,” Inventor(s) Sylvia Olayinka Aya Manfa N'guessan. |
U.S. Appl. No. 13/096,772, filed Apr. 28, 2011, entitled “System and Method for Providing Enhanced Eye Gaze in a Video Conferencing Environment,” Inventor(s): Charles C. Byers. |
U.S. Appl. No. 13/106,002, filed May 12, 2011, entitled “System and Method for Video Coding in a Dynamic Environment,” Inventors: Dihong Tian et al. |
U.S. Appl. No. 13/098,430, filed Apr. 30, 2011, entitled “System and Method for Transferring Transparency Information in a Video Environment,” Inventors: Eddie Collins et al. |
U.S. Appl. No. 13/096,795, filed Apr. 28, 2011, entitled “System and Method for Providing Enhanced Eye Gaze in a Video Conferencing Environment,” Inventors: Charles C. Byers. |
Design U.S. Appl. No. 29/389,651, filed Apr. 14, 2011, entitled “Video Unit With Integrated Features,” Inventor(s): Kyle A. Buzzard et al. |
Design U.S. Appl. No. 29/389,654, filed Apr. 14, 2011, entitled “Video Unit With Integrated Features,” Inventor(s): Kyle A. Buzzard et al. |
Richardson, Iain, et al., “Video Encoder Complexity Reduction by Estimating Skip Mode Distortion,” Image Communication Technology Group; [Retrieved and printed Oct. 21, 2010] 4 pages; http://www4.rgu.ac.uk/files/ICIP04—richardson—zhao—final.pdf. |
Boros, S., “Policy-Based Network Management with SNMP,” Proceedings of the EUNICE 2000 Summer School Sep. 13-15, 2000, p. 3. |
Cumming, Jonathan, “Session Border Control in IMS, An Analysis of the Requirements for Session Border Control in IMS Networks,” Sections 1.1, 1.1.1, 1.1.3, 1.1.4, 2.1.1, 3.2, 3.3.1, 5.2.3 and pp. 7-8, Data Connection, 2005. |
Dornaika F., et al., “Head and Facial Animation Tracking Using Appearance-Adaptive Models and Particle Filters,” 20040627; 20040627-20040602, Jun. 27, 2004, 22 pages; Heudiasy Research Lab, http://eprints.pascal-network.org/archive/00001231/01/rtvhci—chapter8.pdf. |
EPO Aug. 15, 2011 Response to EPO Communication mailed Feb. 25, 2011 from European Patent Application No. 09725288.6; 15 pages. |
EPO Communication dated Feb. 25, 2011 for EP09725288.6 (published as EP22777308); 4 pages. |
Geys et al., “Fast Interpolated Cameras by Combining a GPU Based Plane Sweep With a Max-Flow Regularisation Algorithm,” Sep. 9, 2004; 3D Data Processing, Visualization and Transmission 2004, pp. 534-541. |
Hammadi, Nait Charif et al., “Tracking the Activity of Participants in a Meeting,” Machine Vision and Applications, Springer, Berlin, De Lnkd—DOI:10.1007/S00138-006-0015-5, vol. 17, No. 2, May 1, 2006, pp. 83-93, XP019323925 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.9832. |
Kwolek, B., “Model Based Facial Pose Tracking Using a Particle Filter,” Geometric Modeling and Imaging—New Trends, 2006 London, England Jul. 5-6, 2005, Piscataway, NJ, USA, IEEE LNKD-DOI: 10.1109/GMAI.2006.34 Jul. 5, 2006, pp. 203-208; XP010927285 [Abstract Only]. |
PCT Sep. 25, 2007 Notification of Transmittal of the International Search Report from PCT/US06/45895. |
PCT Sep. 2, 2008 International Preliminary Report on Patentability (1 page) and the Written Opinion of th ISA (4 pages) from PCT/US2006/045895. |
PCT Sep. 11, 2008 Notification of Transmittal of the International Search Report from PCT/US07/09469. |
PCT Nov. 4, 2008 International Preliminary Report on Patentability (1 page) and the Written Opinion of the ISA (8 pages) from PCT/US2007/009469. |
PCT May 11, 2010 International Search Report from PCT/US2010/024059; 4 pages. |
PCT Aug. 23, 2011 International Preliminary Report on Patentability and Written Opinion of the ISA from PCT/US2010/024059; 6 pages. |
PCT Sep. 13, 2011 International Preliminary Report on Patentability and the Written Opinion of the ISA from PCT/US2010/026456; 5 pages. |