System and method for managing optics in a video environment

Information

  • Patent Grant
  • Patent Number
    8,542,264
  • Date Filed
    Thursday, November 18, 2010
  • Date Issued
    Tuesday, September 24, 2013
Abstract
An apparatus is provided in one example and includes a camera configured to receive image data associated with an end user involved in a video session. The apparatus further includes a display configured to interface with the camera. The camera and the display cooperate such that the apparatus can initiate the video session involving the end user, and activate a retracting mechanism configured to move the camera such that the camera is retracted from a view of the display and the camera moves to an inactive state.
Description
TECHNICAL FIELD

This disclosure relates in general to the field of video and, more particularly, to managing optics in a video environment.


BACKGROUND

Video services have become increasingly important in today's society. In certain architectures, service providers may seek to offer sophisticated video conferencing services for their end users. The video conferencing architecture can offer an “in-person” meeting experience over a network. Video conferencing architectures can deliver real-time, face-to-face interactions between people using advanced visual, audio, and collaboration technologies. Issues have arisen in video conferencing scenarios in which mechanical parts (e.g., a camera mounted in front of a display) can obscure portions of the displayed image. A deficient effective viewpoint can also distort the video images being sent to participants in a video conference. The ability to optimize video environments presents a significant challenge to system designers, device manufacturers, and participants of video conferences.





BRIEF DESCRIPTION OF THE DRAWINGS

To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:



FIG. 1A is a simplified schematic diagram of a system for managing optics in a video environment in accordance with one embodiment of the present disclosure;



FIGS. 1B-1D are simplified schematic diagrams illustrating various example operations associated with the system;



FIG. 1E is a simplified schematic diagram illustrating example illuminating elements associated with the system for managing optics in a video environment;



FIG. 2 is a simplified schematic diagram illustrating one possible design for a camera associated with the system;



FIG. 3 is a simplified schematic diagram illustrating one potential arrangement associated with the camera of FIG. 2;



FIG. 4 is a simplified schematic diagram of a system for controlling optics in a video conferencing environment in accordance with another embodiment of the present disclosure; and



FIGS. 5-6 are simplified flow diagrams illustrating potential operations associated with the system.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview


An apparatus is provided in one example and includes a camera configured to receive image data associated with an end user involved in a video session. The apparatus further includes a display configured to interface with the camera. The camera and the display cooperate such that the apparatus can initiate the video session involving the end user, and activate a retracting mechanism configured to move the camera such that the camera is retracted from a view of the display and the camera moves to an inactive state.


In more particular embodiments, the apparatus can include a housing unit, which includes the retracting mechanism. The retracting mechanism includes a motor configured to provide a retracting force to the camera. The apparatus can further be configured to activate the retracting mechanism such that the camera is moved to a position in the view of the display and the camera moves to an active state.


In yet other embodiments, the display includes a perimeter configured to illuminate when the video session is initiated. The apparatus can also include a motor control element configured to signal a motor to provide a retracting force to the camera. The retracting mechanism includes a sensor configured to monitor a position of the camera. The apparatus can also include a controlling element configured to activate the retracting mechanism; and a retracting module configured to receive a wireless signal in order to activate the retracting mechanism. In more specific implementations, the apparatus can include a telescopic stand coupled to the display and configured to be adjusted in a horizontal plane such that the display moves in concert with adjustments to the telescopic stand.


Example Embodiments


Turning to FIGS. 1A-1B, FIGS. 1A-1B are simplified schematic diagrams of a system 10 for providing a retracting camera 20 in a video conferencing environment. FIG. 1A includes a housing unit 12 and a display 14. In one particular implementation, display 14 may include a stand 18, which can support or otherwise stabilize display 14. FIG. 1B illustrates camera 20 in a deployed state. In accordance with one example embodiment of system 10, a retracting mechanism allows camera 20 to drop down in front of display 14 when video conferencing is initiated. When video conferencing is terminated, the retracting mechanism allows camera 20 to retract from in front of display 14 into housing unit 12.


Returning to FIG. 1A, camera 20 is illustrated in a retracted state (i.e., an inactive state) such that camera 20 is appropriately stowed in housing unit 12. The term ‘inactive state’ is meant to connote any type of dormant status in which camera 20 is not engaged or being used by the architecture. This inactive state can be the result of a retraction operation, or of a general movement of camera 20 such that it does not block a view for a given end user. Also, as used herein in this Specification, the term ‘housing unit’ can include mechanical elements that facilitate its retracting function (e.g., hooks, springs, pins, latches, pinions, gears, screws, levers, snaps, Velcro, etc.). In other embodiments, camera 20 can be retracted in a motorized fashion, using any type of electronics, cable system, etc. As used herein in this Specification, the term ‘retracting mechanism’ is meant to include any type of element capable of reeling, pulling, or providing a general force that moves an object in any direction. Such a direction may be upward, lateral (where a camera and an optics element would be mounted on the side of a display), downward (where a camera and an optics element would be mounted on the bottom of a display), or at any other suitable angle. For purposes of discussion, a set of example retracting approaches is described below with reference to FIGS. 1B-1D.


Note that in most video conferencing systems, a video camera is mounted such that it hangs in front of its associated display, where this arrangement can obscure portions of the display area. For example, in the case of 65″ screens, a small percentage of the display area is obscured. The benefit is that the video camera can be close to the position of the displayed person's eyes, thereby giving better apparent eye contact than if the video camera were mounted farther above (e.g., on a bezel). When this scenario is moved to other types of video conferencing systems (e.g., a desktop system, where the display may be 24″), and the user sits about two to three feet from the display, several problems occur. First, the video camera covers an objectionably larger percentage of the display. Hence, the camera installation (collectively: the custom brackets, the camera, the wires, etc.) obstructs the view of the display. Furthermore, the display is not useful as a general-purpose computer display.


In addition, it should be noted that other problems exist with personal use video conferencing architectures (e.g., webcams). For example, a given end user may be afraid that a counterparty is routinely watching them, regardless of whether a video session is occurring. Also, camera lenses collect dust that inhibits the image quality of the captured video data. Further, most low-cost cameras have small apertures, and typically have noise problems in low light.


System 10 can resolve these issues (and others) by providing an elegant configuration that accommodates several types of users and that captures optimal image data. By utilizing a retractable camera 20 (e.g., as shown in FIG. 1B), system 10 can offer a viable solution for capturing an ideal field of view of a subject. Furthermore, such an arrangement can improve eye contact for the end user of display 14. In operational terms, when camera 20 is not visible to the audience, the architecture is in its inactive state, which positions camera 20 out of the way of display 14. In the inactive state, an end user has an unobstructed view of display 14. When camera 20 is retracted out of the way of display 14, system 10 looks and operates as a display for other potential video applications (e.g., in personal computing). Further, when camera 20 is retracted into housing unit 12, an audience can intuitively appreciate that camera 20 is no longer recording or transmitting images of the audience or their surroundings. Moreover, housing unit 12 protects the retracted camera 20 from dust, dirt, and physical contact.


Turning to FIGS. 1C-1D, these FIGURES are simplified schematic diagrams illustrating possible approaches for retracting camera 20 into housing unit 12. In FIG. 1C, camera 20 is retracted rotationally (e.g., on a pivot) into housing unit 12. Camera 20 may be rotated clockwise or counterclockwise as indicated by dashed lines. Similarly, as illustrated in FIG. 1D, camera 20 may be retracted rotationally toward an audience (i.e., away from display 14) as indicated by dashed lines. Although a rotational retraction is illustrated in three specific directions, camera 20 may be rotationally retracted into housing unit 12 in a variety of directional planes and at any suitable angle.


In one particular implementation, as illustrated in FIG. 1E, the perimeter of display 14 is configured to illuminate when a video conference is initiated and, further, remains illuminated while the video conference is in progress. In one particular implementation, illuminating the perimeter of display 14 signals that a video conference is in progress. When a video conference is terminated, the perimeter of display 14 dulls. In one particular implementation, a dulled perimeter of display 14 indicates that display 14 is operating as a display for other potential video applications (e.g., in personal computing). Although display 14 has been described as having a perimeter that illuminates, other aspects of the display could be illuminated and dulled to indicate additional functional states of display 14. Additionally, display 14 can have illuminating elements of different colors, which can signal different events. For example, a red illuminating perimeter may be indicative of an end user seeking not to be disturbed during the video conference. Similarly, a green illuminating perimeter may signal to other users that the end user in the video conference can receive communications. A blinking perimeter may be indicative of a video call about to end, or to begin. Any such coloring schemes, or other coloring/intermittent flashing schemes, are encompassed within the broad teachings of the present disclosure.
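By way of illustration only (and not as part of the disclosed design), the following is a minimal sketch of how a controller might map session states to the perimeter illumination described above. The `SessionState` names and the `set_perimeter_led()` call are hypothetical stand-ins for whatever LED driver a given display would actually expose.

```python
from enum import Enum, auto

class SessionState(Enum):
    IDLE = auto()            # no video session; perimeter dulled/off
    IN_CALL = auto()         # session in progress; perimeter illuminated
    DO_NOT_DISTURB = auto()  # end user does not wish to be disturbed
    CALL_BOUNDARY = auto()   # a call is about to begin or end

def set_perimeter_led(color: str, blink: bool = False) -> None:
    """Hypothetical hardware hook; a real display would drive its LED circuitry here."""
    print(f"perimeter -> color={color}, blink={blink}")

def update_perimeter(state: SessionState) -> None:
    # Mapping mirrors the example scheme above: green = reachable during a call,
    # red = do not disturb, blinking = a call about to begin or end.
    if state is SessionState.IDLE:
        set_perimeter_led("off")
    elif state is SessionState.IN_CALL:
        set_perimeter_led("green")
    elif state is SessionState.DO_NOT_DISTURB:
        set_perimeter_led("red")
    else:  # CALL_BOUNDARY
        set_perimeter_led("green", blink=True)

update_perimeter(SessionState.IN_CALL)
```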


Before turning to details and operational capabilities of this architecture, a brief discussion is provided about some of the infrastructure of FIGS. 1A-1E. Display 14 offers a screen at which video data can be rendered for the end user. Note that as used herein in this Specification, the term ‘display’ is meant to connote any element that is capable of delivering an image, video data, text, sound, audiovisual data, etc. to an end user during a video session. This would necessarily be inclusive of any panel, plasma element, television, monitor, electronic surface, computer interface, screen, or any other suitable element that is capable of delivering such information. Note also that the term ‘video session’ is meant to connote any type of media or video session (or audio-video) provided in any protocol or format that could be provided in conjunction with display 14. Similarly, the term ‘image data’ is meant to include any type of image information that can be captured by camera 20.


In one particular example, camera 20 is an Internet protocol (IP) camera configured to record, maintain, cache, receive, and/or transmit data. This could include transmitting packets over an IP network to a suitable next destination. Recorded files could be stored in camera 20 itself, or provided in some suitable storage area (e.g., a database, server, etc.). In one particular instance, camera 20 is its own separate network device and it has a separate IP address. Camera 20 could be a wireless camera, a high-definition camera, or any other suitable camera device configured to capture image information associated with a participant positioned in front of display 14.


Camera 20 can be configured to capture the image data and send it to any suitable processing platform, or to a server attached to the network for processing and for subsequent distribution to remote sites (e.g., to other participants in the video session). The server could include an image-processing platform such as a media experience engine (MXE), which is a processing element that can attach to the network. The MXE can simplify media sharing across the network by optimizing its delivery in any format for any device. It could also provide media conversion, real-time postproduction, editing, formatting, and network distribution for subsequent communications. The system can utilize real-time face and eye recognition algorithms to detect the position of the participant's eyes in a video session. Any type of image synthesizer (e.g., within the server, at a remote location, somewhere in the network, etc.) can process the video data captured by camera 20.
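The disclosure does not tie the face and eye recognition to any particular algorithm. As one hedged illustration, assuming the `opencv-python` package and its stock Haar cascades are available, an eye locator along these general lines could serve such an image-processing element:

```python
import cv2

# Stock Haar cascades shipped with opencv-python; a production system could use
# a more robust detector, but this shows the basic idea of locating the eyes.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def locate_eyes(frame):
    """Return (x, y, w, h) eye rectangles in full-frame coordinates for one BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_found = []
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face_roi = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            eyes_found.append((fx + ex, fy + ey, ew, eh))
    return eyes_found
```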



FIG. 2 is a simplified schematic diagram associated with one particular retracting mechanism 30. This particular implementation includes camera 20, a set of position sensors 22, a mounting unit 26, and a set of guides 28. In one particular arrangement, these elements can be included within (or be provided in conjunction with) housing unit 12, which can be configured to store camera 20. Camera 20 is suitably coupled to mounting unit 26. Mounting unit 26 interfaces with guides 28 in order to move camera 20 to various positions (e.g., retracted and deployed). Position sensors 22 can interface with mounting unit 26 and camera 20 to evaluate when camera 20 is positioned at a desired location. In one particular implementation, position sensors 22 (e.g., a high sensor and a low sensor) can be evaluated in order to determine when camera 20 is in the up position (i.e., when camera 20 is in an inactive state) or in the down position (i.e., when camera 20 is in a deployed (active) state). A motor element can be implemented to create a force (e.g., a rotational force) that is translated in order to manipulate mounting unit 26 and camera 20 in a certain direction (e.g., to raise and lower them). In one particular implementation, the motor element can be provided by a linkage drive; however, other motor elements are equally suitable. Alternatives include a linear actuator, a worm gear system, or any other suitable mechanism. Moreover, although camera 20 is described as being suitably coupled to mounting unit 26, camera 20 could easily be designed to provide the interface functions between mounting unit 26 and guides 28. Thus, camera 20 and mounting unit 26 could be implemented as a single element.
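As a rough sketch of the control logic implied by the high/low position sensors and the motor element (not an actual implementation from the disclosure), the following assumes hypothetical `motor`, `high_sensor`, and `low_sensor` hardware interfaces:

```python
import time

class RetractingMechanism:
    """Toy model of the FIG. 2 arrangement: a motor drives the mounting unit along
    the guides until a high (stowed) or low (deployed) position sensor trips.
    The motor and sensor objects are stand-ins for real hardware I/O."""

    def __init__(self, motor, high_sensor, low_sensor, timeout_s=5.0):
        self.motor = motor              # object with raise_(), lower(), stop()
        self.high_sensor = high_sensor  # callable -> True when camera is stowed
        self.low_sensor = low_sensor    # callable -> True when camera is deployed
        self.timeout_s = timeout_s

    def _drive_until(self, drive, sensor):
        deadline = time.monotonic() + self.timeout_s
        drive()
        try:
            while not sensor():
                if time.monotonic() > deadline:
                    raise TimeoutError("position sensor never tripped")
                time.sleep(0.01)
        finally:
            self.motor.stop()   # always remove the force once in position

    def deploy(self):
        """Lower the camera into the view of the display (active state)."""
        self._drive_until(self.motor.lower, self.low_sensor)

    def retract(self):
        """Raise the camera into the housing unit (inactive state)."""
        self._drive_until(self.motor.raise_, self.high_sensor)
```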


It is imperative to note that retracting mechanism 30 of FIG. 2 is not limited to the arrangement of mounting unit 26, guides 28, and position sensors 22 discussed above. For example, an air system could be used in conjunction with any of the previously discussed objects in order to quietly release camera 20 from its retracted position. Other examples could include spring mechanisms that secure camera 20 in place and/or allow camera 20 to extend downward. In other embodiments involving more mechanical systems, a simple latching mechanism could be used to restrain camera 20 at its designated location. Virtually any type of retracting and/or storage mechanism could be employed. For example, a simple hand-crank could be used to retract and, subsequently, store camera 20. Other architectures could be similarly manual, where an individual could simply push camera 20 up and away from display 14 when camera 20 is not being used. In this sense, an individual can rotate camera 20 (e.g., on a pivot) such that it can be stored when not in use. Any of these viable alternatives is included within the broad term ‘retracting mechanism’ as used herein in this Specification.


Retracting mechanism 30 outlined above has several pragmatic advantages for video conferencing systems. For example, by employing such a mechanism, the underlying display can be used for various other purposes (e.g., general personal computing applications, television uses, presentations, etc.). Also, the retractable feature minimizes the dust and debris that can accumulate on the video optics generally. Furthermore, based on its apparent physical state, retracting mechanism 30 can provide a clear indication that the video conferencing system is in use. As video conferencing architectures have become more prevalent, certain users have developed an awareness that camera 20 (regardless of its operational status) may be tracking their movements. When a camera is retracted (and suitably stored), this physical cue offers an assurance that an individual's movements are not being captured by camera 20.



FIG. 3 is a simplified schematic diagram of a printed circuit board (PCB) 40 for offering a retracting camera in a video environment. FIG. 3 includes camera 20, a position sensor 42, an audio multiplexer 44, an audio port 46, and a motor controller 48. A codec of PCB 40 can send a signal to motor controller 48 to initiate a motor element to manipulate camera 20 (e.g., deploy and retract). Position sensor 42, through the codec, can send a signal to motor controller 48 indicating that camera 20 is located in a desired position. Motor controller 48 can also signal the motor element to terminate the force it is applying to camera 20. The codec can send signals to motor controller 48 to both deploy and retract camera 20. Likewise, motor controller 48 can signal a motor element to deploy and retract camera 20. Further, PCB 40 may include an audio multiplexer 44 that suitably combines audio signals received from multiple microphones deployed in system 10. Audio port 46 interfaces with audio multiplexer 44 to send audio signals from PCB 40 to suitable receiver circuits or elements not integrated on PCB 40. Audio port 46 may also be configured to transmit various other signals (e.g., data, power, etc.). Further, audio port 46 may also receive various signals (e.g., audio, data, power, etc.) from sources not integrated on PCB 40.
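For illustration, here is a minimal sketch of the codec-to-motor-controller signaling described above; the command names and the `MotorController` class are assumptions for the sketch, not part of the disclosed PCB design. In this model the codec would send DEPLOY or RETRACT, and position sensor 42 (via the codec) would send AT_POSITION once camera 20 reaches the desired location.

```python
# Hypothetical command set exchanged between the codec and motor controller 48.
DEPLOY, RETRACT, AT_POSITION = "DEPLOY", "RETRACT", "AT_POSITION"

class MotorController:
    """Reacts to codec commands and to the position-sensor report, mirroring the
    signaling described for PCB 40 (all names here are illustrative)."""

    def __init__(self, motor):
        self.motor = motor              # stand-in with raise_(), lower(), stop()

    def handle(self, message: str) -> None:
        if message == DEPLOY:
            self.motor.lower()          # start applying a deploying force
        elif message == RETRACT:
            self.motor.raise_()         # start applying a retracting force
        elif message == AT_POSITION:
            self.motor.stop()           # position sensor 42 reports arrival
        else:
            raise ValueError(f"unknown command: {message}")
```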



FIG. 4 is a simplified schematic diagram of a system 90 for managing optics in a video environment. In addition to the components discussed previously, FIG. 4 also includes a telescopic supporting stand 96, a touchpad 92, and a remote control 94. Telescopic supporting stand 96 can be suitably coupled to display 14 for adjustment in a horizontal plane such that display 14 moves in concert with adjustments to telescopic supporting stand 96. Touchpad 92 and remote control 94 are ‘controlling elements’ that may have overlapping functions, complementary functions, or completely different functions. In one particular example, each of touchpad 92 and remote control 94 can operate the retraction system associated with camera 20. Housing unit 12, touchpad 92, and remote control 94 may include a respective processor 97a-c, a memory element 98a-c, and a retracting module 99a-c. Note that retracting modules 99a-c can be tasked with deployment operations in addition to retraction activities.


Touchpad 92 may include audio features, sharing features (e.g., for sharing data, documents, applications, etc. between video conferencing participants), application features (e.g., where the applications are being executed in conjunction with a video conference), calling/connection features (e.g., transferring calls, bridging calls, initiating calls, connecting parties, receiving calls, etc.) or any other end-user features that can be applicable to a video conference. In one particular arrangement, touchpad 92 and remote control 94 are wireless; however, touchpad 92 and remote control 94 could alternatively be implemented with suitable wires, cables, infrared connections, etc. in order to facilitate the operations thereof.


In operation of one example scenario, an individual can schedule a video conferencing session with a counterparty. This scheduling can be inclusive of designating appropriate times, reminders, location information, invitees, applications to be used during the video conference, etc. The individual uses a touchpad (e.g., touchpad 92 of FIG. 4) to initiate the call. In one particular example, initiating the call triggers housing unit 12 to begin deploying camera 20. For example, touchpad 92 can interface with housing unit 12 and, thereby, receive signals from housing unit 12. In other instances, housing unit 12 can be synchronized with a calendar function such that it (intuitively or automatically) understands when to deploy camera 20 at designated times.


In another embodiment, touchpad 92 can be used to trigger the deployment of camera 20 before the call is initiated. Note that the terms ‘trigger’, ‘initiate’, and ‘activate’ are simply connoting some type of signal being provided to any of the elements discussed herein. This could include simple ON/OFF signaling, retracting activities, deployment activities, etc., all of which could apply to individual components of the described architectures, or collectively to multiple components such that they move in concert with a single signal. Subsequently, the video conference ends, and the individual can use touchpad 92 to retract/store camera 20.
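As a hedged sketch of the deployment triggers just described (touchpad-initiated calls and calendar-synchronized deployment), with the `HousingUnit` class and its method names invented purely for illustration:

```python
from datetime import datetime, timedelta

class HousingUnit:
    """Illustrative only: deploys the camera when a call starts, retracts it when
    the call ends, and can deploy slightly ahead of a scheduled conference."""

    def __init__(self, mechanism, lead_time=timedelta(minutes=1)):
        self.mechanism = mechanism      # e.g., the RetractingMechanism sketched earlier
        self.lead_time = lead_time      # how far ahead of a meeting to deploy

    def on_call_initiated(self):
        self.mechanism.deploy()

    def on_call_ended(self):
        self.mechanism.retract()

    def check_calendar(self, next_meeting_start: datetime, now: datetime) -> None:
        # Deploy shortly before a scheduled video conference begins.
        if now >= next_meeting_start - self.lead_time:
            self.mechanism.deploy()
```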



FIG. 5 is a simplified flowchart 100 illustrating one example embodiment associated with system 10. The flow begins at 110, where a first user seeks to contact a second user for the purpose of conducting a video conference. Using a video capable terminal (e.g., an IP Phone, personal computer, etc.), the first user enters (e.g., dials) the second user's contact information (e.g., phone number). Note that the video conference could have been prescheduled such that a Calendar Invite, a WebEx notification, a Calendar Reminder, etc. could have triggered the first user's contacting activity.


At 120, the second user's video capable terminal (e.g., IP Phone, personal computer, etc.) receives the request to commence a video conference and the second user answers the call. The video conference commences once the second user answers the video capable terminal. Once the video conference commences, there could be an audio prompt, or a graphical illustration that signals to each of the users that the video conference has effectively been established. In this particular example, and as reflected by 130, both displays may be illuminated in order to signify that the call is in session. Note that if the second user chooses to answer the call while he/she is using his/her display for other video purposes (e.g., a software application on a personal computer), then the video call takes over the display screen such that the application is minimized during the call. The second user may still share that application if he/she chooses (e.g., a software prompt, a physical button, etc.), but not necessarily as a default protocol (i.e., the second user needs to suitably authorize this sharing activity before the first individual would see the second user's current screen).


At 140, the camera associated with each of the displays may move from its respective housing into its appropriate position for capturing image data. The deployment of each camera may also indicate to each respective user that the video conference has been initiated. At 150, both users can see each other on their respective displays. An ensuing conversation can occur, where the parties may freely share documents and conduct any appropriate activities associated with video conferencing.


As shown in 160, at the conclusion of the call, both users may end the call by pressing some button (e.g., a software icon, a physical button on an IP Phone, etc.). At 170, the cameras associated with each display may be retracted into their respective housings. At approximately the same time, any illumination elements associated with the displays may be turned off to signify that the video conferencing has ended. Likewise, the retraction of each camera may indicate to each respective user that the video conference session has terminated.



FIG. 6 is a simplified flowchart 200 illustrating one generic example operation associated with system 10. The flow begins at 210, where a signal is sent from a given endpoint (e.g., a remote control) to housing unit 12. At 220, the signal is received at housing unit 12, which reacts to the signal by triggering a force to deploy camera 20 (shown by operation 230). A perimeter of display 14 is illuminated at 240 to indicate that the video session is active. At 250, another signal is sent to housing unit 12. At 260, housing unit 12 activates a retracting mechanism configured to move camera 20 such that it is retracted from a view of the display. Camera 20 moves to an inactive state at 270, and the perimeter is dulled, or turned off.
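The FIG. 6 operations can be tied together in one short sketch, assuming illustrative `endpoint`, `housing`, and `display` objects with the hypothetical methods shown (none of these names come from the disclosure):

```python
def run_fig6_flow(endpoint, housing, display):
    """Walk through the generic FIG. 6 operations with illustrative objects:
    the endpoint sends signals, the housing deploys/retracts the camera, and
    the display illuminates or dulls its perimeter."""
    endpoint.send("START")           # 210: a signal is sent to the housing unit
    housing.on_call_initiated()      # 220-230: housing reacts; camera is deployed
    display.illuminate_perimeter()   # 240: perimeter indicates an active session

    endpoint.send("STOP")            # 250: another signal is sent
    housing.on_call_ended()          # 260-270: camera retracts to an inactive state
    display.dull_perimeter()         #          and the perimeter is dulled or turned off
```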


Note that in certain example implementations, the retracting functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an application specific integrated circuit (ASIC), digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.). In some of these instances, a memory element can store data used for the operations described herein. This includes the memory element (e.g., as shown in FIG. 4) being able to store software, logic, code, or processor instructions that are executed to carry out the activities described in this Specification. A processor (e.g., as shown in FIG. 4) can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, the processor could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.


In one example implementation, retracting mechanism 30, PCB 40, and/or housing unit 12 include software (e.g., provisioned as retracting module 99c, and/or in any suitable location of PCB 40) in order to achieve the retracting/deployment functions outlined herein. These activities can be facilitated by motor controller 48. Additionally, each of these devices may include a processor that can execute software or an algorithm to perform the retracting/deployment activities, as discussed in this Specification. These devices may further keep information in any suitable memory element (random access memory (RAM), ROM, EPROM, EEPROM, ASIC, etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein (e.g., database, table, cache, key, etc.) should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’ Each of retracting mechanism 30, PCB 40, and/or housing unit 12 can also include suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment.


Note that with the example provided above, as well as numerous other examples provided herein, interaction may be described in terms of two or three components. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of components. It should be appreciated that system 10 (and its teachings) are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of system 10 as potentially applied to a myriad of other architectures.


It is also important to note that the operations in the preceding flow diagrams illustrate only some of the possible video conferencing scenarios and patterns that may be executed by, or within, system 10. Some of these operations may be deleted or removed where appropriate, or these operations may be modified or changed considerably without departing from the scope of the present disclosure. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by system 10 in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.


For example, although camera 20 has been described as being mounted in a particular fashion, camera 20 could be mounted in any suitable manner in order to capture image data from an effective viewpoint. Other configurations could include suitable wall mountings, aisle mountings, furniture mountings, cabinet mountings, etc., or arrangements in which the camera would be appropriately spaced or positioned to perform its functions. Additionally, system 10 can have direct applicability in Telepresence environments (both large and small) such that quality image data can be collected during video sessions. Moreover, although system 10 has been illustrated with reference to particular elements and operations that facilitate the communication process, these elements and operations may be replaced by any suitable architecture or process that achieves the intended functionality of system 10.

Claims
  • 1. A method, comprising: initiating a video session involving an end user, wherein a camera is configured to receive image data associated with the video session; and activating a retracting mechanism configured to move the camera such that the camera is retracted from a view of a display into a housing unit and the camera moves to an inactive state; wherein the display is configured to illuminate a portion of the display when the video session is initiated.
  • 2. The method of claim 1, further comprising: activating the retracting mechanism such that the camera is moved to a position in the view of the display and the camera moves to an active state.
  • 3. The method of claim 1, wherein the housing unit includes the retracting mechanism, which includes a motor configured to provide a retracting force to the camera.
  • 4. The method of claim 1, wherein a motor control element signals a motor to provide a retracting force to the camera.
  • 5. The method of claim 1, wherein the retracting mechanism includes a sensor configured to monitor a position of the camera.
  • 6. The method of claim 1, wherein a wireless controlling element is configured to activate the retracting mechanism.
  • 7. Logic encoded in one or more non-transitory tangible media that includes code for execution and when executed by a processor operable to perform operations comprising: initiating a video session involving an end user, wherein a camera is configured to receive image data associated with the video session; and activating a retracting mechanism configured to move the camera such that the camera is retracted from a view of a display into a housing unit and the camera moves to an inactive state; wherein the display is configured to illuminate a portion of the display when the video session is initiated.
  • 8. The logic of claim 7, the operations further comprising: activating the retracting mechanism such that the camera is moved to a position in the view of the display and the camera moves to an active state.
  • 9. The logic of claim 7, wherein a motor control element signals a motor to provide a retracting force to the camera.
  • 10. The logic of claim 7, wherein the retracting mechanism includes a sensor configured to monitor a position of the camera.
  • 11. The logic of claim 7, wherein a wireless controlling element is configured to activate the retracting mechanism.
  • 12. An apparatus, comprising: a camera configured to receive image data associated with an end user involved in a video session; and a display configured to interface with the camera, wherein the camera and the display cooperate such that the apparatus is configured to: initiate the video session involving the end user; and activate a retracting mechanism configured to move the camera such that the camera is retracted from a view of the display into a housing unit and the camera moves to an inactive state; wherein the display is configured to illuminate a portion of the display when the video session is initiated.
  • 13. The apparatus of claim 12, further comprising: the housing unit that includes the retracting mechanism, wherein the retracting mechanism includes a motor configured to provide a retracting force to the camera.
  • 14. The apparatus of claim 12, wherein the apparatus is further configured to: activate the retracting mechanism such that the camera is moved to a position in the view of the display and the camera moves to an active state.
  • 15. The apparatus of claim 12, wherein the display includes a perimeter configured to illuminate when the video session is initiated.
  • 16. The apparatus of claim 12, further comprising: a motor control element configured to signal a motor to provide a retracting force to the camera, and wherein the retracting mechanism includes a sensor configured to monitor a position of the camera.
  • 17. The apparatus of claim 12, further comprising: a controlling element configured to activate the retracting mechanism; anda retracting module configured to receive a wireless signal from the controlling element in order to activate the retracting mechanism.
  • 18. The apparatus of claim 12, further comprising: a telescopic stand coupled to the display and configured to be adjusted in a horizontal plane such that the display moves in concert with adjustments to the telescopic stand.
US Referenced Citations (557)
Number Name Date Kind
2911462 Brady Nov 1959 A
D212798 Dreyfuss Nov 1968 S
3793489 Sank Feb 1974 A
3909121 De Mesquita Cardoso Sep 1975 A
D270271 Steele Aug 1983 S
4400724 Fields Aug 1983 A
4473285 Winter Sep 1984 A
4494144 Brown Jan 1985 A
4750123 Christian Jun 1988 A
4815132 Minami Mar 1989 A
4827253 Maltz May 1989 A
4853764 Sutter Aug 1989 A
4890314 Judd et al. Dec 1989 A
4961211 Tsugane et al. Oct 1990 A
4994912 Lumelsky et al. Feb 1991 A
5003532 Ashida et al. Mar 1991 A
5020098 Celli May 1991 A
5033969 Kamimura Jul 1991 A
5136652 Jibbe et al. Aug 1992 A
5187571 Braun et al. Feb 1993 A
5200818 Neta et al. Apr 1993 A
5243697 Hoeber et al. Sep 1993 A
5249035 Yamanaka Sep 1993 A
5255211 Redmond Oct 1993 A
D341848 Bigelow et al. Nov 1993 S
5268734 Parker et al. Dec 1993 A
5317405 Kuriki et al. May 1994 A
5337363 Platt Aug 1994 A
5347363 Yamanaka Sep 1994 A
5351067 Lumelsky et al. Sep 1994 A
5359362 Lewis et al. Oct 1994 A
D357468 Rodd Apr 1995 S
5406326 Mowry Apr 1995 A
5423554 Davis Jun 1995 A
5446834 Deering Aug 1995 A
5448287 Hull Sep 1995 A
5467401 Nagamitsu et al. Nov 1995 A
5495576 Ritchey Feb 1996 A
5502481 Dentinger et al. Mar 1996 A
5502726 Fischer Mar 1996 A
5506604 Nally et al. Apr 1996 A
5532737 Braun Jul 1996 A
5541639 Takatsuki et al. Jul 1996 A
5541773 Kamo et al. Jul 1996 A
5570372 Shaffer Oct 1996 A
5572248 Allen et al. Nov 1996 A
5587726 Moffat Dec 1996 A
5625410 Washino et al. Apr 1997 A
5666153 Copeland Sep 1997 A
5673401 Volk et al. Sep 1997 A
5675374 Kohda Oct 1997 A
5689663 Williams Nov 1997 A
5708787 Nakano et al. Jan 1998 A
5713033 Sado Jan 1998 A
5715377 Fukushima et al. Feb 1998 A
D391558 Marshall et al. Mar 1998 S
D391935 Sakaguchi et al. Mar 1998 S
D392269 Mason et al. Mar 1998 S
5729471 Jain et al. Mar 1998 A
5737011 Lukacs Apr 1998 A
5745116 Pisutha-Arnond Apr 1998 A
5748121 Romriell May 1998 A
D395292 Vu Jun 1998 S
5760826 Nayar Jun 1998 A
D396455 Bier Jul 1998 S
D396456 Bier Jul 1998 S
5790182 St. Hilaire Aug 1998 A
5796724 Rajamani et al. Aug 1998 A
D397687 Arora et al. Sep 1998 S
D398595 Baer et al. Sep 1998 S
5815196 Alshawi Sep 1998 A
D399501 Arora et al. Oct 1998 S
5818514 Duttweiler et al. Oct 1998 A
5821985 Iizawa Oct 1998 A
5825362 Retter Oct 1998 A
D406124 Newton et al. Feb 1999 S
5889499 Nally et al. Mar 1999 A
5894321 Downs et al. Apr 1999 A
D409243 Lonergan May 1999 S
D410447 Chang Jun 1999 S
5929857 Dinallo et al. Jul 1999 A
5940118 Van Schyndel Aug 1999 A
5940530 Fukushima et al. Aug 1999 A
5953052 McNelley et al. Sep 1999 A
5956100 Gorski Sep 1999 A
5996003 Namikata et al. Nov 1999 A
D419543 Warren et al. Jan 2000 S
D420995 Imamura et al. Feb 2000 S
6069648 Suso et al. May 2000 A
6069658 Watanabe May 2000 A
6088045 Lumelsky et al. Jul 2000 A
6097390 Marks Aug 2000 A
6101113 Paice Aug 2000 A
6124896 Kurashige Sep 2000 A
6137485 Kawai et al. Oct 2000 A
6148092 Qian Nov 2000 A
D435561 Pettigrew et al. Dec 2000 S
6167162 Jacquin et al. Dec 2000 A
6172703 Lee Jan 2001 B1
6173069 Daly et al. Jan 2001 B1
D438873 Wang et al. Mar 2001 S
D440575 Wang et al. Apr 2001 S
6211870 Foster Apr 2001 B1
6226035 Korein et al. May 2001 B1
6243130 McNelley et al. Jun 2001 B1
6249318 Girod et al. Jun 2001 B1
6256400 Takata et al. Jul 2001 B1
6259469 Ejima et al. Jul 2001 B1
6266082 Yonezawa et al. Jul 2001 B1
6266098 Cove et al. Jul 2001 B1
D446790 Wang et al. Aug 2001 S
6285392 Satoda et al. Sep 2001 B1
6292188 Carlson et al. Sep 2001 B1
6292575 Bortolussi et al. Sep 2001 B1
D450323 Moore et al. Nov 2001 S
D453167 Hasegawa et al. Jan 2002 S
D454574 Wasko et al. Mar 2002 S
6356589 Gebler et al. Mar 2002 B1
6380539 Edgar Apr 2002 B1
6396514 Kohno May 2002 B1
6424377 Driscoll, Jr. Jul 2002 B1
D461191 Hickey et al. Aug 2002 S
6430222 Okada Aug 2002 B1
6459451 Driscoll et al. Oct 2002 B2
6462767 Obata et al. Oct 2002 B1
6493032 Wallerstein et al. Dec 2002 B1
D468322 Walker et al. Jan 2003 S
6507356 Jackel et al. Jan 2003 B1
D470153 Billmaier et al. Feb 2003 S
6515695 Sato et al. Feb 2003 B1
D474194 Kates et al. May 2003 S
6573904 Chun et al. Jun 2003 B1
6577333 Tai et al. Jun 2003 B2
6583808 Boulanger et al. Jun 2003 B2
6590603 Sheldon et al. Jul 2003 B2
6591314 Colbath Jul 2003 B1
6593955 Falcon Jul 2003 B1
6593956 Potts et al. Jul 2003 B1
D478090 Nguyen et al. Aug 2003 S
D478912 Johnson Aug 2003 S
6611281 Strubbe Aug 2003 B2
D482368 den Toonder et al. Nov 2003 S
6680856 Schreiber Jan 2004 B2
6693663 Harris Feb 2004 B1
6694094 Partynski et al. Feb 2004 B2
6704048 Malkin et al. Mar 2004 B1
6710797 McNelley et al. Mar 2004 B1
6751106 Zhang et al. Jun 2004 B2
D492692 Fallon et al. Jul 2004 S
6763226 McZeal Jul 2004 B1
6768722 Katseff et al. Jul 2004 B1
D494186 Johnson Aug 2004 S
6771303 Zhang et al. Aug 2004 B2
6774927 Cohen et al. Aug 2004 B1
D495715 Gildred Sep 2004 S
6795108 Jarboe et al. Sep 2004 B2
6795558 Matsuo et al. Sep 2004 B2
6798834 Murakami et al. Sep 2004 B1
6806898 Toyama et al. Oct 2004 B1
6807280 Stroud et al. Oct 2004 B1
6809724 Shiraishi et al. Oct 2004 B1
6831653 Kehlet et al. Dec 2004 B2
6844990 Artonne et al. Jan 2005 B2
6853398 Malzbender et al. Feb 2005 B2
6867798 Wada et al. Mar 2005 B1
6882358 Schuster et al. Apr 2005 B1
6888358 Lechner et al. May 2005 B2
D506208 Jewitt et al. Jun 2005 S
6909438 White et al. Jun 2005 B1
6911995 Ivanov et al. Jun 2005 B2
6917271 Zhang et al. Jul 2005 B2
6922718 Chang Jul 2005 B2
6925613 Gibson Aug 2005 B2
6963653 Miles Nov 2005 B1
D512723 Wirz Dec 2005 S
6980526 Jang et al. Dec 2005 B2
6989754 Kisacanin et al. Jan 2006 B2
6989836 Ramsey Jan 2006 B2
6989856 Firestone et al. Jan 2006 B2
6990086 Holur et al. Jan 2006 B1
7002973 MeLampy et al. Feb 2006 B2
7028092 MeLampy et al. Apr 2006 B2
7030890 Jouet et al. Apr 2006 B1
7031311 MeLampy et al. Apr 2006 B2
7036092 Sloo et al. Apr 2006 B2
D521521 Jewitt et al. May 2006 S
7043528 Schmitt et al. May 2006 B2
7046862 Ishizaka et al. May 2006 B2
D522559 Naito et al. Jun 2006 S
7057636 Cohen-Solal et al. Jun 2006 B1
7057662 Malzbender Jun 2006 B2
7058690 Maehiro Jun 2006 B2
7061896 Jabbari et al. Jun 2006 B2
D524321 Hally et al. Jul 2006 S
7072504 Miyano et al. Jul 2006 B2
7080157 McCanne Jul 2006 B2
7092002 Ferren et al. Aug 2006 B2
7111045 Kato et al. Sep 2006 B2
7131135 Virag et al. Oct 2006 B1
7136651 Kalavade Nov 2006 B2
7139767 Taylor et al. Nov 2006 B1
D533525 Arie Dec 2006 S
D533852 Ma Dec 2006 S
D534511 Maeda et al. Jan 2007 S
D535954 Hwang et al. Jan 2007 S
D536001 Armstrong et al. Jan 2007 S
7158674 Suh Jan 2007 B2
7161942 Chen et al. Jan 2007 B2
7164435 Wang et al. Jan 2007 B2
D536340 Jost et al. Feb 2007 S
D539243 Chiu et al. Mar 2007 S
D540336 Kim et al. Apr 2007 S
D541773 Chong et a May 2007 S
D542247 Kinoshita et al. May 2007 S
D544494 Cummins Jun 2007 S
D545314 Kim Jun 2007 S
D547320 Kim et al. Jul 2007 S
7239338 Krisbergh et al. Jul 2007 B2
7246118 Chastain et al. Jul 2007 B2
D548742 Fletcher Aug 2007 S
7254785 Reed Aug 2007 B2
D550635 DeMaio et al. Sep 2007 S
D551184 Kanou et al. Sep 2007 S
D551672 Wirz Sep 2007 S
7269292 Steinberg Sep 2007 B2
7274555 Kim et al. Sep 2007 B2
D554664 Van Dongen et al. Nov 2007 S
D555610 Yang et al. Nov 2007 S
D559265 Armstrong et al. Jan 2008 S
D560225 Park et al. Jan 2008 S
D560681 Fletcher Jan 2008 S
D561130 Won et al. Feb 2008 S
7336299 Kostrzewski Feb 2008 B2
D563965 Van Dongen et al. Mar 2008 S
D564530 Kim et al. Mar 2008 S
D567202 Rieu Piquet Apr 2008 S
7352809 Wenger et al. Apr 2008 B2
7353279 Durvasula et al. Apr 2008 B2
7353462 Caffarelli Apr 2008 B2
7359731 Choksi Apr 2008 B2
7399095 Rondinelli Jul 2008 B2
D574392 Kwag et al. Aug 2008 S
7411975 Mohaban Aug 2008 B1
7413150 Hsu Aug 2008 B1
7428000 Cutler et al. Sep 2008 B2
D578496 Leonard Oct 2008 S
7440615 Gong et al. Oct 2008 B2
D580451 Steele et al. Nov 2008 S
7471320 Malkin et al. Dec 2008 B2
D585453 Chen et al. Jan 2009 S
7477322 Hsieh Jan 2009 B2
7477657 Murphy et al. Jan 2009 B1
7480870 Anzures et al. Jan 2009 B2
D588560 Mellingen et al. Mar 2009 S
D589053 Steele et al. Mar 2009 S
7505036 Baldwin Mar 2009 B1
D591306 Setiawan et al. Apr 2009 S
7518051 Redmann Apr 2009 B2
D592621 Han May 2009 S
7529425 Kitamura et al. May 2009 B2
7532230 Culbertson et al. May 2009 B2
7532232 Shah et al. May 2009 B2
7534056 Cross et al. May 2009 B2
7545761 Kalbag Jun 2009 B1
7551432 Bockheim et al. Jun 2009 B1
7555141 Mori Jun 2009 B2
D595728 Scheibe et al. Jul 2009 S
D596646 Wani Jul 2009 S
7575537 Ellis Aug 2009 B2
D602033 Vu et al. Oct 2009 S
D602453 Ding et al. Oct 2009 S
D602495 Um et al. Oct 2009 S
7610352 AlHusseini et al. Oct 2009 B2
7610599 Nashida et al. Oct 2009 B1
7616226 Roessler et al. Nov 2009 B2
D608788 Meziere Jan 2010 S
7646419 Cernasov Jan 2010 B2
D610560 Chen Feb 2010 S
7661075 Lahdesmaki Feb 2010 B2
7664750 Frees et al. Feb 2010 B2
D612394 La et al. Mar 2010 S
7676763 Rummel Mar 2010 B2
7679639 Harrell et al. Mar 2010 B2
7692680 Graham et al. Apr 2010 B2
7707247 Dunn et al. Apr 2010 B2
D615514 Mellingen et al. May 2010 S
7710448 De Beer et al. May 2010 B2
7710450 Dhuey et al. May 2010 B2
7715657 Lin et al. May 2010 B2
7719605 Hirasawa et al. May 2010 B2
7719662 Bamji et al. May 2010 B2
7720277 Hattori May 2010 B2
D617806 Christie et al. Jun 2010 S
D619608 Meziere Jul 2010 S
D619609 Meziere Jul 2010 S
D619610 Meziere Jul 2010 S
D619611 Meziere Jul 2010 S
7752568 Park et al. Jul 2010 B2
D621410 Verfuerth et al. Aug 2010 S
D626102 Buzzard et al. Oct 2010 S
D626103 Buzzard et al. Oct 2010 S
D628175 Desai et al. Nov 2010 S
7839434 Ciudad et al. Nov 2010 B2
D628968 Desai et al. Dec 2010 S
7861189 Watanabe et al. Dec 2010 B2
D631891 Vance et al. Feb 2011 S
D632698 Judy et al. Feb 2011 S
7889851 Shah et al. Feb 2011 B2
7890888 Glasgow et al. Feb 2011 B2
7894531 Cetin et al. Feb 2011 B1
D634726 Harden et al. Mar 2011 S
D634753 Loretan et al. Mar 2011 S
D635569 Park Apr 2011 S
D635975 Seo et al. Apr 2011 S
D637199 Brinda May 2011 S
D638025 Saft et al. May 2011 S
D638850 Woods et al. May 2011 S
D638853 Brinda May 2011 S
7939959 Wagoner May 2011 B2
D640268 Jones et al. Jun 2011 S
D642184 Brouwers et al. Jul 2011 S
7990422 Ahiska et al. Aug 2011 B2
7996775 Cole et al. Aug 2011 B2
8000559 Kwon Aug 2011 B2
D646690 Thai et al. Oct 2011 S
D648734 Christie et al. Nov 2011 S
D649556 Judy et al. Nov 2011 S
8077857 Lambert Dec 2011 B1
D652050 Chaudhri Jan 2012 S
D652429 Steele et al. Jan 2012 S
D654926 Lipman et al. Feb 2012 S
D656513 Thai et al. Mar 2012 S
8130256 Trachtenberg et al. Mar 2012 B2
8132100 Seo et al. Mar 2012 B2
8135068 Alvarez Mar 2012 B1
D656948 Knudsen et al. Apr 2012 S
D660313 Williams et al. May 2012 S
8179419 Girish et al. May 2012 B2
8209632 Reid et al. Jun 2012 B2
8219920 Langoulant et al. Jul 2012 B2
D664985 Tanghe et al. Aug 2012 S
D669086 Boyer et al. Oct 2012 S
D669088 Boyer et al. Oct 2012 S
D669913 Maggiotto et al. Oct 2012 S
D670723 Khan et al. Nov 2012 S
D671136 Barnett et al. Nov 2012 S
D671141 Peters et al. Nov 2012 S
8339499 Ohuchi Dec 2012 B2
20020047892 Gonsalves Apr 2002 A1
20020106120 Brandenburg et al. Aug 2002 A1
20020108125 Joao Aug 2002 A1
20020113827 Perlman et al. Aug 2002 A1
20020118890 Rondinelli Aug 2002 A1
20020131608 Lobb et al. Sep 2002 A1
20020149672 Clapp et al. Oct 2002 A1
20020163538 Shteyn Nov 2002 A1
20020186528 Huang Dec 2002 A1
20030017872 Oishi et al. Jan 2003 A1
20030048218 Milnes et al. Mar 2003 A1
20030072460 Gonopolskiy et al. Apr 2003 A1
20030160861 Barlow et al. Aug 2003 A1
20030179285 Naito Sep 2003 A1
20030185303 Hall Oct 2003 A1
20030197687 Shetter Oct 2003 A1
20040003411 Nakai et al. Jan 2004 A1
20040032906 Lillig Feb 2004 A1
20040038169 Mandelkern et al. Feb 2004 A1
20040039778 Read et al. Feb 2004 A1
20040061787 Liu et al. Apr 2004 A1
20040091232 Appling, III May 2004 A1
20040118984 Kim et al. Jun 2004 A1
20040119814 Clisham et al. Jun 2004 A1
20040164858 Lin Aug 2004 A1
20040165060 McNelley et al. Aug 2004 A1
20040178955 Menache et al. Sep 2004 A1
20040189463 Wathen Sep 2004 A1
20040189676 Dischert Sep 2004 A1
20040196250 Mehrotra et al. Oct 2004 A1
20040207718 Boyden et al. Oct 2004 A1
20040218755 Marton et al. Nov 2004 A1
20040221243 Twerdahl et al. Nov 2004 A1
20040246962 Kopeikin et al. Dec 2004 A1
20040254982 Hoffman et al. Dec 2004 A1
20040260796 Sundqvist et al. Dec 2004 A1
20050007954 Sreemanthula et al. Jan 2005 A1
20050022130 Fabritius Jan 2005 A1
20050024484 Leonard Feb 2005 A1
20050034084 Ohtsuki et al. Feb 2005 A1
20050039142 Jalon et al. Feb 2005 A1
20050050246 Lakkakorpi et al. Mar 2005 A1
20050081160 Wee et al. Apr 2005 A1
20050099492 Orr May 2005 A1
20050110867 Schulz May 2005 A1
20050117022 Marchant Jun 2005 A1
20050129325 Wu Jun 2005 A1
20050147257 Melchior et al. Jul 2005 A1
20050149872 Fong et al. Jul 2005 A1
20050154988 Proehl et al. Jul 2005 A1
20050223069 Cooperman et al. Oct 2005 A1
20050235209 Morita et al. Oct 2005 A1
20050248652 Firestone et al. Nov 2005 A1
20050251760 Sato et al. Nov 2005 A1
20050268823 Bakker et al. Dec 2005 A1
20060013495 Duan et al. Jan 2006 A1
20060017807 Lee et al. Jan 2006 A1
20060028983 Wright Feb 2006 A1
20060038878 Takashima et al. Feb 2006 A1
20060048070 Taylor et al. Mar 2006 A1
20060066717 Miceli Mar 2006 A1
20060072813 Matsumoto et al. Apr 2006 A1
20060082643 Richards Apr 2006 A1
20060093128 Oxford May 2006 A1
20060100004 Kim et al. May 2006 A1
20060104470 Akino May 2006 A1
20060120307 Sahashi Jun 2006 A1
20060120568 McConville et al. Jun 2006 A1
20060125691 Menache et al. Jun 2006 A1
20060126878 Takumai et al. Jun 2006 A1
20060152489 Sweetser et al. Jul 2006 A1
20060152575 Amiel et al. Jul 2006 A1
20060158509 Kenoyer et al. Jul 2006 A1
20060168302 Boskovic et al. Jul 2006 A1
20060170769 Zhou Aug 2006 A1
20060181607 McNelley et al. Aug 2006 A1
20060200518 Sinclair et al. Sep 2006 A1
20060233120 Eshel et al. Oct 2006 A1
20060256187 Sheldon et al. Nov 2006 A1
20060284786 Takano et al. Dec 2006 A1
20060289772 Johnson et al. Dec 2006 A1
20070022388 Jennings Jan 2007 A1
20070039030 Romanowich et al. Feb 2007 A1
20070040903 Kawaguchi Feb 2007 A1
20070070177 Christensen Mar 2007 A1
20070074123 Omura et al. Mar 2007 A1
20070080845 Amand Apr 2007 A1
20070112966 Eftis et al. May 2007 A1
20070120971 Kennedy May 2007 A1
20070121353 Zhang et al. May 2007 A1
20070140337 Lim et al. Jun 2007 A1
20070153712 Fry et al. Jul 2007 A1
20070157119 Bishop Jul 2007 A1
20070159523 Hillis et al. Jul 2007 A1
20070162866 Matthews et al. Jul 2007 A1
20070183661 El-Maleh et al. Aug 2007 A1
20070188597 Kenoyer et al. Aug 2007 A1
20070192381 Padmanabhan Aug 2007 A1
20070206091 Dunn et al. Sep 2007 A1
20070206556 Yegani et al. Sep 2007 A1
20070217406 Riedel et al. Sep 2007 A1
20070217500 Gao et al. Sep 2007 A1
20070229250 Recker et al. Oct 2007 A1
20070240073 McCarthy et al. Oct 2007 A1
20070247470 Dhuey et al. Oct 2007 A1
20070250567 Graham et al. Oct 2007 A1
20070250620 Shah et al. Oct 2007 A1
20070273752 Chambers et al. Nov 2007 A1
20070279483 Beers et al. Dec 2007 A1
20070279484 Derocher et al. Dec 2007 A1
20080046840 Melton et al. Feb 2008 A1
20080077390 Nagao Mar 2008 A1
20080077883 Kim et al. Mar 2008 A1
20080084429 Wissinger Apr 2008 A1
20080119211 Paas et al. May 2008 A1
20080134098 Hoglund et al. Jun 2008 A1
20080136896 Graham et al. Jun 2008 A1
20080148187 Miyata et al. Jun 2008 A1
20080151038 Khouri et al. Jun 2008 A1
20080167078 Eibye Jul 2008 A1
20080208444 Ruckart Aug 2008 A1
20080212677 Chen et al. Sep 2008 A1
20080215974 Harrison et al. Sep 2008 A1
20080215993 Rossman Sep 2008 A1
20080218582 Buckler Sep 2008 A1
20080232692 Kaku Sep 2008 A1
20080240237 Tian et al. Oct 2008 A1
20080240571 Tian et al. Oct 2008 A1
20080246833 Yasui et al. Oct 2008 A1
20080256474 Chakra et al. Oct 2008 A1
20080261569 Britt et al. Oct 2008 A1
20080266380 Gorzynski et al. Oct 2008 A1
20080267282 Kalipatnapu et al. Oct 2008 A1
20080276184 Buffet et al. Nov 2008 A1
20080297586 Kurtz et al. Dec 2008 A1
20080298571 Kurtz et al. Dec 2008 A1
20080303901 Variyath et al. Dec 2008 A1
20090009593 Cameron et al. Jan 2009 A1
20090012633 Liu et al. Jan 2009 A1
20090037827 Bennetts Feb 2009 A1
20090051756 Trachtenberg Feb 2009 A1
20090115723 Henty May 2009 A1
20090119603 Stackpole May 2009 A1
20090122867 Mauchly et al. May 2009 A1
20090172596 Yamashita Jul 2009 A1
20090174764 Chadha et al. Jul 2009 A1
20090183122 Webb et al. Jul 2009 A1
20090193345 Wensley et al. Jul 2009 A1
20090204538 Ley et al. Aug 2009 A1
20090207233 Mauchly et al. Aug 2009 A1
20090207234 Chen et al. Aug 2009 A1
20090217199 Hara et al. Aug 2009 A1
20090228807 Lemay Sep 2009 A1
20090244257 MacDonald et al. Oct 2009 A1
20090256901 Mauchly et al. Oct 2009 A1
20090260060 Smith et al. Oct 2009 A1
20090265628 Bamford et al. Oct 2009 A1
20090279476 Li et al. Nov 2009 A1
20090324023 Tian et al. Dec 2009 A1
20100005419 Miichi et al. Jan 2010 A1
20100027907 Cherna et al. Feb 2010 A1
20100030389 Palmer et al. Feb 2010 A1
20100049542 Benjamin et al. Feb 2010 A1
20100082557 Gao et al. Apr 2010 A1
20100118112 Nimri et al. May 2010 A1
20100123770 Friel et al. May 2010 A1
20100171807 Tysso Jul 2010 A1
20100171808 Harrell et al. Jul 2010 A1
20100183199 Smith et al. Jul 2010 A1
20100199228 Latta et al. Aug 2010 A1
20100201823 Zhang et al. Aug 2010 A1
20100205281 Porter et al. Aug 2010 A1
20100205543 Von Werther et al. Aug 2010 A1
20100208078 Tian et al. Aug 2010 A1
20100225732 De Beer et al. Sep 2010 A1
20100225735 Shaffer et al. Sep 2010 A1
20100259619 Nicholson Oct 2010 A1
20100262367 Riggins et al. Oct 2010 A1
20100268843 Van Wie et al. Oct 2010 A1
20100277563 Gupta et al. Nov 2010 A1
20100283829 De Beer et al. Nov 2010 A1
20100302345 Baldino et al. Dec 2010 A1
20100306703 Bourganel et al. Dec 2010 A1
20100313148 Hochendoner et al. Dec 2010 A1
20100316232 Acero et al. Dec 2010 A1
20100325547 Keng et al. Dec 2010 A1
20110008017 Gausereide Jan 2011 A1
20110029868 Moran et al. Feb 2011 A1
20110037636 Alexander Feb 2011 A1
20110063467 Tanaka Mar 2011 A1
20110082808 Beykpour et al. Apr 2011 A1
20110085016 Kristiansen et al. Apr 2011 A1
20110109642 Chang et al. May 2011 A1
20110113348 Twiss et al. May 2011 A1
20110164106 Kim Jul 2011 A1
20110202878 Park et al. Aug 2011 A1
20110225534 Wala Sep 2011 A1
20110242266 Blackburn et al. Oct 2011 A1
20110249081 Kay et al. Oct 2011 A1
20110249086 Guo et al. Oct 2011 A1
20110276901 Zambetti et al. Nov 2011 A1
20110279627 Shyu Nov 2011 A1
20110319885 Skwarek et al. Dec 2011 A1
20120026278 Goodman et al. Feb 2012 A1
20120038742 Robinson et al. Feb 2012 A1
20120226997 Pang Sep 2012 A1
20120266082 Webber Oct 2012 A1
20120297342 Jang et al. Nov 2012 A1
20120327173 Couse et al. Dec 2012 A1
Foreign Referenced Citations (42)
Number Date Country
101953158 Jan 2011 CN
102067593 May 2011 CN
502600 Sep 1992 EP
0 650 299 Oct 1994 EP
0 714 081 Nov 1995 EP
0 740 177 Apr 1996 EP
1143745 Oct 2001 EP
1 178 352 Jun 2002 EP
1 589 758 Oct 2005 EP
1701308 Sep 2006 EP
1768058 Mar 2007 EP
2073543 Jun 2009 EP
2255531 Dec 2010 EP
2277308 Jan 2011 EP
2 294 605 May 1996 GB
2336266 Oct 1999 GB
2355876 May 2001 GB
WO 9416517 Jul 1994 WO
WO 9621321 Jul 1996 WO
WO 9708896 Mar 1997 WO
WO 9847291 Oct 1998 WO
WO 9959026 Nov 1999 WO
WO 0133840 May 2001 WO
WO 2005013001 Feb 2005 WO
WO 2006072755 Jul 2006 WO
WO2007106157 Sep 2007 WO
WO2007123946 Nov 2007 WO
WO 2007123960 Nov 2007 WO
WO2008039371 Apr 2008 WO
WO 2008040258 Apr 2008 WO
WO 2008101117 Aug 2008 WO
WO 2008118887 Oct 2008 WO
WO 2009102503 Aug 2009 WO
WO 2009120814 Oct 2009 WO
WO 2010059481 May 2010 WO
WO2010096342 Aug 2010 WO
WO 2010104765 Sep 2010 WO
WO 2010132271 Nov 2010 WO
WO2012033716 Mar 2012 WO
WO2012068008 May 2012 WO
WO2012068010 May 2012 WO
WO2012068485 May 2012 WO
Non-Patent Literature Citations (201)
Entry
Design U.S. Appl. No. 29/358,006, filed Mar. 21, 2010, entitled “Mounted Video Unit,” Inventor(s): Ashok T. Desai et al.
Design U.S. Appl. No. 29/375,624, filed Sep. 24, 2010, entitled “Mounted Video Unit,” Inventor(s): Ashok T. Desai et al.
Design U.S. Appl. No. 29/375,627, filed Sep. 24, 2010, entitled “Mounted Video Unit,” Inventor(s): Ashok T. Desai et al.
Design U.S. Appl. No. 29/369,951, filed Sep. 15, 2010, entitled “Video Unit With Integrated Features,” Inventor(s): Kyle A. Buzzard et al.
Design U.S. Appl. No. 29/375,458, filed Sep. 22, 2010, entitled “Video Unit With Integrated Features,” Inventor(s): Kyle A. Buzzard et al.
Design U.S. Appl. No. 29/358,009, filed Mar. 21, 2010, entitled “Free-Standing Video Unit,” Inventor(s): Ashok T. Desai et al.
Design U.S. Appl. No. 29/375,619, filed Sep. 24, 2010, entitled “Free- Standing Video Unit,” Inventor(s): Ashok T. Desai et al.
U.S. Appl. No. 12/781,722, filed May 17, 2010, entitled “System and Method for Providing Retracting Optics in a Video Conferencing Environment,” Inventor(s): Joseph T. Friel, et al.
Joshua Gluckman and S.K. Nayar, “Rectified Catadioptric Stereo Sensors,” 8 pages, retrieved and printed on May 17, 2010; http://cis.poly.edu/˜gluckman/papers/cypr00.pdf.
France Telecom R&D, “France Telecom's Magic Telepresence Wall—Human Productivity Lab,” 5 pages, retrieved and printed on May 17, 2010; http://www.humanproductivitylab.com/archive_blogs/2006/07/11/france_telecoms_magic_telepres_1.php.
Digital Video Enterprises, “DVE Eye Contact Silhouette,” 1 page, © DVE 2008; http://www.dvetelepresence.com/products/eyeContactSilhouette.asp.
R.V. Kollarits, et al., “34.3: An Eye Contact Camera/Display System for Videophone Applications Using a Conventional Direct-View LCD,” © 1995 SID, ISSN0097-0966X/95/2601, pp. 765-768; http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=47A1E7E028C26503975E633895D114EC?doi=10.1.1.42.1772&rep=rep1&type=pdf.
Trevor Darrell, “A Real-Time Virtual Mirror Display,” 1 page, Sep. 9, 1998; http://people.csail.mit.edu/trevor/papers/1998-021/node6.html.
3G, “World's First 3G Video Conference Service with New TV Commercial,” Apr. 28, 2005, 4 pages; http://www.3g.co.uk/PR/April2005/1383.htm.
U.S. Appl. No. 13/096,772, filed Apr. 28, 2011, entitled “System and Method for Providing Enhanced Eye Gaze in a Video Conferencing Environment,” Inventor(s): Charles C. Byers.
U.S. Appl. No. 13/106,002, filed May 12, 2011, entitled “System and Method for Video Coding in a Dynamic Environment,” Inventors: Dihong Tian et al.
U.S. Appl. No. 13/098,430, filed Apr. 30, 2011, entitled “System and Method for Transferring Transparency Information in a Video Environment,” Inventors: Eddie Collins et al.
U.S. Appl. No. 13/096,795, filed Apr. 28, 2011, entitled “System and Method for Providing Enhanced Eye Gaze in a Video Conferencing Environment,” Inventors: Charles C. Byers.
U.S. Appl. No. 13/298,022, filed Nov. 16, 2011, entitled “System and Method for Alerting a Participant in a Video Conference,” Inventor(s): TiongHu Lian, et al.
Design U.S. Appl. No. 29/389,651, filed Apr. 14, 2011, entitled “Video Unit With Integrated Features,” Inventor(s): Kyle A. Buzzard et al.
Design U.S. Appl. No. 29/389,654, filed Apr. 14, 2011, entitled “Video Unit With Integrated Features,” Inventor(s): Kyle A. Buzzard et al.
EPO Communication dated Feb. 25, 2011 for EP09725288.6 (published as EP2277308); 4 pages.
EPO Aug. 15, 2011 Response to EPO Communication mailed Feb. 25, 2011 from European Patent Application No. 09725288.6; 15 pages.
PCT Sep. 25, 2007 Notification of Transmittal of the International Search Report from PCT/US06/45895.
PCT Sep. 2, 2008 International Preliminary Report on Patentability (1 page) and the Written Opinion of the ISA (4 pages) from PCT/US2006/045895.
PCT Sep. 11, 2008 Notification of Transmittal of the International Search Report from PCT/US07/09469.
PCT Nov. 4, 2008 International Preliminary Report on Patentability (1 page) and the Written Opinion of the ISA (8 pages) from PCT/US2007/009469.
PCT May 11, 2010 International Search Report from PCT/US2010/024059; 4 pages.
PCT Aug. 23, 2011 International Preliminary Report on Patentability and Written Opinion of the ISA from PCT/US2010/024059; 6 pages.
PCT Sep. 13, 2011 International Preliminary Report on Patentability and the Written Opinion of the ISA from PCT/US2010/026456; 5 pages.
PCT Oct. 12, 2011 International Search Report and Written Opinion of the ISA from PCT/US2011/050380.
PCT Nov. 24, 2011 International Preliminary Report on Patentability from International Application No. PCT/US2010/033880; 6 pages.
Dornaika, F., et al., “Head and Facial Animation Tracking Using Appearance-Adaptive Models and Particle Filters,” Jun. 27, 2004, 22 pages; HEUDIASYC Research Lab, http://eprints.pascal-network.org/archive/00001231/01/rtvhci_chapter8.pdf.
Kwolek, B., “Model Based Facial Pose Tracking Using a Particle Filter,” Geometric Modeling and Imaging—New Trends, 2006 London, England Jul. 5-6, 2005, Piscataway, NJ, USA, IEEE LNKD-DOI: 10.1109/GMAI.2006.34 Jul. 5, 2006, pp. 203-208; XP010927285 [Abstract Only].
U.S. Appl. No. 12/366,593, filed Feb. 5, 2009, entitled “System and Method for Depth Perspective Image Rendering,” Inventor(s): J. William Mauchly et al.
U.S. Appl. No. 12/727,089, filed Mar. 18, 2010, entitled “System and Method for Enhancing Video Images in a Conferencing Environment,” Inventor: Joseph T. Friel.
U.S. Appl. No. 12/877,733, filed Sep. 8, 2010, entitled “System and Method for Skip Coding During Video Conferencing in a Network Environment,” Inventor(s): Dihong Tian, et al.
U.S. Appl. No. 12/870,687, filed Aug. 27, 2010, entitled “System and Method for Producing a Performance Via Video Conferencing in a Network Environment,” Inventor(s): Michael A. Arnao et al.
U.S. Appl. No. 12/912,556, filed Oct. 26, 2010, entitled “System and Method for Provisioning Flows in a Mobile Network Environment,” Inventors: Balaji Venkat Venkataswami, et al.
U.S. Appl. No. 12/873,100, filed Aug. 31, 2010, entitled “System and Method for Providing Depth Adaptive Video Conferencing,” Inventor(s): J. William Mauchly et al.
U.S. Appl. No. 12/946,679, filed Nov. 15, 2010, entitled “System and Method for Providing Camera Functions in a Video Environment,” Inventors: Peter A.J. Fornell, et al.
U.S. Appl. No. 12/946,695, filed Nov. 15, 2010, entitled “System and Method for Providing Enhanced Audio in a Video Environment,” Inventors: Wei Li, et al.
U.S. Appl. No. 12/907,914, filed Oct. 19, 2010, entitled “System and Method for Providing Videomail in a Network Environment,” Inventors: David J. Mackie et al.
U.S. Appl. No. 12/950,786, filed Nov. 19, 2010, entitled “System and Method for Providing Enhanced Video Processing in a Network Environment,” Inventor(s): David J. Mackie.
U.S. Appl. No. 12/907,919, filed Oct. 19, 2010, entitled “System and Method for Providing Connectivity in a Network Environment,” Inventors: David J. Mackie et al.
U.S. Appl. No. 12/946,704, filed Nov. 15, 2010, entitled “System and Method for Providing Enhanced Graphics in a Video Environment,” Inventors: John M. Kanalakis, Jr., et al.
U.S. Appl. No. 12/957,116, filed Nov. 30, 2010, entitled “System and Method for Gesture Interface Control,” Inventors: Shaun K. Kirby, et al.
U.S. Appl. No. 13/036,925, filed Feb. 28, 2011, entitled “System and Method for Selection of Video Data in a Video Conference Environment,” Inventor(s): Sylvia Olayinka Aya Manfa N'guessam.
U.S. Appl. No. 12/907,925, filed Oct. 19, 2010, entitled “System and Method for Providing a Pairing Mechanism in a Video Environment,” Inventors: Gangfeng Kong et al.
U.S. Appl. No. 12/939,037, filed Nov. 3, 2010, entitled “System and Method for Managing Flows in a Mobile Network Environment,” Inventors: Balaji Venkat Venkataswami et al.
U.S. Appl. No. 12/946,709, filed Nov. 15, 2010, entitled “System and Method for Providing Enhanced Graphics in a Video Environment,” Inventors: John M. Kanalakis, Jr., et al.
U.S. Appl. No. 12/784,257, filed May 20, 2010, entitled “Implementing Selective Image Enhancement,” Inventors: Dihong Tian et al.
Design U.S. Appl. No. 29/381,245, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,250, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,254, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,256, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis Jr., et al.
Design U.S. Appl. No. 29/381,259, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,260, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,262, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,264, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
“3D Particles Experiments in AS3 and Flash CS3,” [retrieved and printed on Mar. 18, 2010]; 2 pages; http://www.flashandmath.com/advanced/fourparticles/notes.html.
“Cisco Expo Germany 2009 Opening,” Posted on YouTube on May 4, 2009; http://www.youtube.com/watch?v=SDKsaSlz4MK; 2 pages.
“Eye Tracking,” from Wikipedia, (printed on Aug. 31, 2011) 12 pages; http://en.wikipedia.org/wiki/Eye_tracker.
“Infrared Cameras TVS-200-EX,” [retrieved and printed on May 24, 2010] http://www.electrophysics.com/Browse/Brw_ProductLineCategory.asp?CategoryID=184&Area=IS; 2 pages.
“RoundTable, 360 Degrees Video Conferencing Camera unveiled by Microsoft,” TechShout, Jun. 30, 2006, 1 page; http://www.techshout.com/gadgets/2006/30/roundtable-360-degrees-video-conferencing-camera-unveiled-by-microsoft/#.
“Vocative Case,” from Wikipedia, [retrieved and printed on Mar. 3, 2011] 11 pages; http://en.wikipedia.org/wiki/Vocative_case.
“Custom 3D Depth Sensing Prototype System for Gesture Control,” 3D Depth Sensing, GestureTek, 3 pages; [Retrieved and printed on Dec. 1, 2010] http://www.gesturetek.com/3ddepth/introduction.php.
“Eye Gaze Response Interface Computer Aid (Erica) tracks Eye movement to enable hands-free computer operation,” UMD Communication Sciences and Disorders Tests New Technology, University of Minnesota Duluth, posted Jan. 19, 2005; 4 pages http://www.d.umn.edu/unirel/homepage/05/eyegaze.html.
“Real-time Hand Motion/Gesture Detection for HCI-Demo 2,” video clip, YouTube, posted Dec. 17, 2008 by smmy0705, 1 page; www.youtube.com/watch?v=mLT4CFLIi8A&feature=related.
“Simple Hand Gesture Recognition,” video clip, YouTube, posted Aug. 25, 2008 by pooh8210, 1 page; http://www.youtube.com/watch?v=F8GVeV0dYLM&feature=related.
Active8-3D—Holographic Projection—3D Hologram Retail Display & Video Project, [retrieved and printed on Feb. 24, 2009], http://www.activ8-3d.co.uk/3d_holocubes; 1 page.
Andersson, L., et al., “LDP Specification,” Network Working Group, RFC 3036, Jan. 2001, 133 pages; http://tools.ietf.org/html/rfc3036.
Andreopoulos, Yiannis, et al., “In-Band Motion Compensated Temporal Filtering,” Signal Processing: Image Communication 19 (2004) 653-673, 21 pages http://medianetlab.ee.ucla.edu/papers/011.pdf.
Arrington, Michael, “eJamming—Distributed Jamming,” TechCrunch; Mar. 16, 2006; http://www.techcrunch.com/2006/03/16/ejamming-distributed-jamming/; 1 page.
Arulampalam, M. Sanjeev, et al., “A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking,” IEEE Transactions on Signal Processing, vol. 50, No. 2, Feb. 2002, 15 pages; http://www.cs.ubc.ca/˜murphyk/Software/Kalman/ParticleFilterTutorial.pdf.
Hock, Hans Henrich, “Prosody vs. Syntax: Prosodic rebracketing of final vocatives in English,” 4 pages; [retrieved and printed on Mar. 3, 2011] http://speechprosody2010.illinois.edu/papers/100931.pdf.
Holographic Imaging, “Dynamic Holography for scientific uses, military heads up display and even someday HoloTV Using TI's DMD,” [retrieved and printed on Feb. 26, 2009] http://innovation.swmed.edu/research/instrumentation/res_inst_dev3d.html; 5 pages.
Hornbeck, Larry J., “Digital Light Processing™: A New MEMS-Based Display Technology,” [retrieved and printed on Feb. 26, 2009] http://focus.ti.com/pdfs/dlpdmd/17_Digital_Light_Processing_MEMS_display_technology.pdf; 22 pages.
IR Distribution Category @ Envious Technology, “IR Distribution Category,” [retrieved and printed on Apr. 22, 2009] http://www.envioustechnology.com.au/products/product-list.php?CID=305; 2 pages.
IR Trans—Products and Orders—Ethernet Devices, [retrieved and printed on Apr. 22, 2009] http://www.irtrans.de/en/shop/lan.php; 2 pages.
Isgro, Francesco et al., “Three-Dimensional Image Processing in the Future of Immersive Media,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, No. 3; XP011108796; ISSN: 1051-8215; Mar. 1, 2004; pp. 288-303.
Itoh, Hiroyasu, et al., “Use of a gain modulating framing camera for time-resolved imaging of cellular phenomena,” SPIE vol. 2979, 1997, pp. 733-740.
Jamoussi, Bamil, “Constraint-Based LSP Setup Using LDP,” MPLS Working Group, Sep. 1999, 34 pages; http://tools.ietf.org/html/draft-ietf-mpls-cr-ldp-03.
Jeyatharan, M., et al., “3GPP TFT Reference for Flow Binding,” MEXT Working Group, Mar. 2, 2010, 11 pages; http://www.ietf.org/id/draft-jeyatharan-mext-flow-tftemp-reference-00.txt.
Jiang, Minqiang, et al., “On Lagrange Multiplier and Quantizer Adjustment for H.264 Frame-layer Video Rate Control,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, Issue 5, May 2006, pp. 663-669.
Jong-Gook Ko et al., “Facial Feature Tracking and Head Orientation-Based Gaze Tracking,” ITC-CSCC 2000, International Technical Conference on Circuits/Systems, Jul. 11-13, 2000, 4 pages http://www.umiacs.umd.edu/˜knkim/paper/itc-cscc-2000-jgko.pdf.
Kannangara, C.S., et al., “Complexity Reduction of H.264 Using Lagrange Multiplier Methods,” IEEE Int. Conf. on Visual Information Engineering, Apr. 2005; www.rgu.ac.uk/files/h264_complexity_kannangara.pdf; 6 pages.
Kannangara, C.S., et al., “Low Complexity Skip Prediction for H.264 through Lagrangian Cost Estimation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, No. 2, Feb. 2006; www.rgu.ac.uk/files/h264_skippredict_richardson_final.pdf; 20 pages.
Kauff, Peter, et al., “An Immersive 3D Video-Conferencing System Using Shared Virtual Team User Environments,” Proceedings of the 4th International Conference on Collaborative Virtual Environments, XP040139458; Sep. 30, 2002; http://ip.hhi.de/imedia_G3/assets/pdfs/CVE02.pdf; 8 pages.
Kazutake, Uehira, “Simulation of 3D image depth perception in a 3D display using two stereoscopic displays at different depths,” Jan. 30, 2006; http://adsabs.harvard.edu/abs/2006SPIE.6055.408U; 2 pages.
Keijser, Jeroen, et al., “Exploring 3D Interaction in Alternate Control-Display Space Mappings,” IEEE Symposium on 3D User Interfaces, Mar. 10-11, 2007, pp. 17-24.
Kim, Y.H., et al., “Adaptive mode decision for H.264 encoder,” Electronics letters, vol. 40, Issue 19, pp. 1172-1173, Sep. 2004; 2 pages.
Klint, Josh, “Deferred Rendering in Leadwerks Engine,” Copyright Leadwerks Corporation © 2008; http://www.leadwerks.com/files/Deferred_Rendering_in_Leadwerks_Engine.pdf; 10 pages.
Kolsch, Mathias, “Vision Based Hand Gesture Interfaces for Wearable Computing and Virtual Environments,” A Dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science, University of California, Santa Barbara, Nov. 2004, 288 pages http://fulfillment.umi.com/dissertations/b7afbcb56ba72fdb14d26dfccc6b470f/1291487062/3143800.pdf.
Koyama, S., et al., “A Day and Night Vision MOS Imager with Robust Photonic-Crystal-Based RGB-and-IR,” Mar. 2008, pp. 754-759; ISSN: 0018-9383; IEEE Transactions on Electron Devices, vol. 55, No. 3; http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4455782&isnumber=4455723.
Lambert, “Polycom Video Communications,” © 2004 Polycom, Inc., Jun. 20, 2004; http://www.polycom.com/global/documents/whitepapers/video_communications_h.239_people_content_polycom_patented_technology.pdf.
Lawson, S., “Cisco Plans TelePresence Translation Next Year,” Dec. 9, 2008; http://www.pcworld.com/article/155237/.html?tk=rss_news; 2 pages.
Lee, J. and Jeon, B., “Fast Mode Decision for H.264,” ISO/IEC MPEG and ITU-T VCEG Joint Video Team, Doc. JVT-J033, Dec. 2003; http://media.skku.ac.kr/publications/paper/IntC/ljy_ICME2004.pdf; 4 pages.
Liu, Shan, et al., “Bit-Depth Scalable Coding for High Dynamic Range Video,” SPIE Conference on Visual Communications and Image Processing, Jan. 2008; 12 pages http://www.merl.com/papers/docs/TR2007-078.pdf.
Liu, Z., “Head-Size Equalization for Better Visual Perception of Video Conferencing,” Proceedings, IEEE International Conference on Multimedia & Expo (ICME2005), Jul. 6-8, 2005, Amsterdam, The Netherlands; http://research.microsoft.com/users/cohen/HeadSizeEqualizationICME2005.pdf; 4 pages.
Mann, S., et al., “Virtual Bellows: Constructing High Quality Still from Video,” Proceedings, First IEEE International Conference on Image Processing ICIP-94, Nov. 13-16, 1994, Austin, TX; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.8405; 5 pages.
Marvin Imaging Processing Framework, “Skin-colored pixels detection using Marvin Framework,” video clip, YouTube, posted Feb. 9, 2010 by marvinproject, 1 page; http://www.youtube.com/user/marvinproject#p/a/u/0/3ZuQHYNIcrl.
Miller, Gregor, et al., “Interactive Free-Viewpoint Video,” Centre for Vision, Speech and Signal Processing, [retrieved and printed on Feb. 26, 2009], http://www.ee.surrey.ac.uk/CVSSP/VMRG/Publications/miller05cvmp.pdf, 10 pages.
Miller, Paul, “Microsoft Research patents controller-free computer input via EMG muscle sensors,” Engadget.com, Jan. 3, 2010, 1 page; http://www.engadget.com/2010/01/03/microsoft-research-patents-controller-free-computer-input-via-em/.
Minoru from Novo is the world's first consumer 3D Webcam, Dec. 11, 2008; http://www.minoru3d.com; 4 pages.
Mitsubishi Electric Research Laboratories, copyright 2009 [retrieved and printed on Feb. 26, 2009], http://www.merl.com/projects/3dtv, 2 pages.
Nakaya, Y., et al. “Motion Compensation Based on Spatial Transformations,” IEEE Transactions on Circuits and Systems for Video Technology, Jun. 1994, Abstract Only http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F76%2F7495%2F00305878.pdf%3Farnumber%3D305878&authDecision=-203.
National Training Systems Association Home—Main, Interservice/Industry Training, Simulation & Education Conference, Dec. 1-4, 2008; http://ntsa.metapress.com/app/home/main.asp?referrer=default; 1 page.
Oh, Hwang-Seok, et al., “Block-Matching Algorithm Based on Dynamic Search Window Adjustment,” Dept. of CS, KAIST, 1997, 6 pages; http://citeseerx.ist.psu.edu/viewdoc/similar?doi=10.1.1.29.8621&type=ab.
Opera Over Cisco TelePresence at Cisco Expo 2009, in Hannover Germany—Apr. 28, 29, posted on YouTube on May 5, 2009; http://www.youtube.com/watch?v=xN5jNH5E-38; 1 page.
OptoIQ, “Vision + Automation Products—VideometerLab 2,” [retrieved and printed on Mar. 18, 2010], http://www.optoiq.com/optoiq-2/en-us/index/machine-vision-imaging-processing/display/vsd-articles-tools-template.articles.vision-systems-design.volume-11.issue-10.departments.new-products.vision-automation-products.html; 11 pages.
OptoIQ, “Anti-Speckle Technique Uses Dynamic Optics,” Jun. 1, 2009; http://www.optoiq.com/index/photonics-technologies-applications/Ifw-display/Ifw-article-display/363444/articles/optoiq2/photonics-technologies/technology-products/optical-components/optical-mems/2009/12/anti-speckle-technique-uses-dynamic-optics/QP129867/cmpid=EnIOptoLFWJanuary132010.html; 2 pages.
OptoIQ, “Smart Camera Supports Multiple Interfaces,” Jan. 22, 2009; http://www.optoiq.com/index/machine-vision-imaging-processing/display/vsd-article-display/350639/articles/vision-systems-design/daily-product-2/2009/01/smart-camera-supports-multiple-interfaces.html; 2 pages.
OptoIQ, “Vision Systems Design—Machine Vision and Image Processing Technology,” [retrieved and printed on Mar. 18, 2010], http://www.optoiq.com/index/machine-vision-imaging-processing.html; 2 pages.
Patterson, E.K., et al., “Moving-Talker, Speaker-Independent Feature Study and Baseline Results Using the CUAVE Multimodal Speech Corpus,” EURASIP Journal on Applied Signal Processing, vol. 11, Oct. 2002, 15 pages; http://www.clemson.edu/ces/speech/papers/CUAVE_Eurasip2002.pdf.
Payatagool, Chris, “Orchestral Manoeuvres in the Light of Telepresence,” Telepresence Options, Nov. 12, 2008; http://www.telepresenceoptions.com/2008/11/orchestral_manoeuvres; 2 pages.
PCT Jun. 29, 2010 PCT “International Search Report and the Written Opinion of the International Searching Authority, or the Declaration,” PCT/US2010/026456, dated Jun. 29, 2010, 11 pages.
PCT May 15, 2006 International Preliminary Report on Patentability dated May 15, 2006, for PCT International Application PCT/US2004/021585, 6 pages.
PCT Sep. 18, 2008 PCT International Search Report (4 pages), International Preliminary Report on Patentability (1 page), and Written Opinion of the ISA (7 pages); PCT/US2008/058079; dated Sep. 18, 2008.
PCT Oct. 10, 2009 PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration; PCT/US2009/038310; dated Oct. 10, 2009; 19 pages.
PCT Apr. 4, 2009 Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration from PCT/US2009/001070, 17 pages.
PCT Oct. 7, 2010 PCT International Preliminary Report on Patentability mailed Oct. 7, 2010 for PCT/US2009/038310; 10 pages.
PCT Feb. 23, 2010 PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration for PCT/US2009/064061 mailed Feb. 23, 2010; 14 pages.
PCT Aug. 24, 2010 PCT International Search Report mailed Aug. 24, 2010 for PCT/US2010/033880; 4 pages.
PCT Aug. 26, 2010 International Preliminary Report on Patentability mailed Aug. 26, 2010 for PCT/US2009/001070; 10 pages.
PCT Jan. 23, 2012 International Search Report and Written Opinion of the ISA from International Application Serial No. PCT/US2011/060579; 10 pages.
PCT Jan. 23, 2010 International Search Report and Written Opinion of the ISA from International Application Serial No. PCT/US2011/060584; 11 pages.
PCT Feb. 20, 2012 International Search Report and Written Opinion of the ISA from International Application Serial No. PCT/US2011/061442; 12 pages.
Perez, Patrick, et al., “Data Fusion for Visual Tracking with Particles,” Proceedings of the IEEE, vol. XX, No. XX, Feb. 2004, 18 pages http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.2480.
Pixel Tools, “Rate Control and H.264: H.264 rate control algorithm dynamically adjusts encoder parameters,” [retrieved and printed on Jun. 10, 2010] http://www.pixeltools.com/rate_control_paper.html; 7 pages.
Potamianos, G., et al., “An Image Transform Approach for HMM Based Automatic Lipreading,” in Proceedings of IEEE ICIP, vol. 3, 1998, 5 pages; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.6802.
Radhika, N., et al., “Mobile Dynamic reconfigurable Context aware middleware for Adhoc smart spaces,” vol. 22, 2008, http://www.acadjournal.com/2008/V22/part6/p7; 3 pages.
Rayvel Business-to-Business Products, copyright 2004 [retrieved and printed on Feb. 24, 2009], http://www.rayvel.com/b2b.html; 2 pages.
Richardson, I.E.G., et al., “Fast H.264 Skip Mode Selection Using an Estimation Framework,” Picture Coding Symposium, (Beijing, China), Apr. 2006; www.rgu.ac.uk/files/richardson_fast_skip_estmation_pcs06.pdf; 6 pages.
Richardson, Iain, et al., “Video Encoder Complexity Reduction by Estimating Skip Mode Distortion,” Image Communication Technology Group; [Retrieved and printed Oct. 21, 2010] 4 pages; http://www4.rgu.ac.uk/files/ICIP04_richardson_zhao_final.pdf.
Rikert, T.D., et al., “Gaze Estimation using Morphable models,” IEEE International Conference on Automatic Face and Gesture Recognition, Apr. 1998; 7 pgs http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.9472.
Robust Face Localisation Using Motion, Colour & Fusion; Proc. VIIth Digital Image Computing: Techniques and Applications, Sun C. et al (Eds.), Sydney; XP007905630; pp. 899-908; Dec. 10, 2003; http://www.cmis.csiro.au/Hugues.Talbot/dicta2003/cdrom/pdf/0899.pdf.
Satoh, Kiyohide et al., “Passive Depth Acquisition for 3D Image Displays”, IEICE Transactions on Information and Systems, Information Systems Society, Tokyo, JP, Sep. 1, 1994, vol. E77-D, No. 9, pp. 949-957.
School of Computing, “Bluetooth over IP for Mobile Phones,” 2005; http://www.computing.dcu.ie/wwwadmin/fyp-abstract/list/fyp—details05.jsp?year=2005&number=51470574; 1 page.
Schroeder, Erica, “The Next Top Model—Collaboration,” Collaboration, The Workspace: A New World of Communications and Collaboration, Mar. 9, 2009; http://blogs.cisco.com/collaboration/comments/the_next_top_model; 3 pages.
Sena, “Industrial Bluetooth,” [retrieved and printed on Apr. 22, 2009] http://www.sena.com/products/industrial_bluetooth; 1 page.
Shaffer, Shmuel, “Translation—State of the Art” presentation; Jan. 15, 2009; 22 pages.
Shi, C. et al., “Automatic Image Quality Improvement for Videoconferencing,” IEEE ICASSP May 2004; http://research.microsoft.com/pubs/69079/0300701.pdf; 4 pages.
Shum, H.-Y., et al., “A Review of Image-Based Rendering Techniques,” in SPIE Proceedings vol. 4067(3); Proceedings of the Conference on Visual Communications and Image Processing 2000, Jun. 20-23, 2000, Perth, Australia; pp. 2-13; https://research.microsoft.com/pubs/68826/review_image_rendering.pdf.
Smarthome, “IR Extender Expands Your IR Capabilities,” [retrieved and printed on Apr. 22, 2009], http://www.smarthome.com/8121.html; 3 pages.
Soliman, H., et al., “Flow Bindings in Mobile IPv6 and NEMO Basic Support,” IETF MEXT Working Group, Nov. 9, 2009, 38 pages; http://tools.ietf.org/html/draft-ietf-mext-flow-binding-04.
Sonoma Wireworks Forums, “Jammin on Rifflink,” [retrieved and printed on May 27, 2010] http://www.sonomawireworks.com/forums/viewtopic.php?id=2659; 5 pages.
Sonoma Wireworks Rifflink, [retrieved and printed on Jun. 2, 2010] http://www.sonomawireworks.com/rifflink.php; 3 pages.
Soohuan, Kim, et al., “Block-based face detection scheme using face color and motion estimation,” Real-Time Imaging VIII; Jan. 20-22, 2004, San Jose, CA; vol. 5297, No. 1; Proceedings of the SPIE—The International Society for Optical Engineering, SPIE—Int. Soc. Opt. Eng. USA; ISSN: 0277-786X; XP007905596; pp. 78-88.
Sudan, Ranjeet, “Signaling in MPLS Networks with RSVP-TE-Technology Information,” Telecommunications, Nov. 2000, 3 pages; http://findarticles.com/p/articles/min—mOTLC/is—11—34/ai—67447072/.
Sullivan, Gary J., et al., “Video Compression—From Concepts to the H.264/AVC Standard,” Proceedings IEEE, vol. 93, No. 1, Jan. 2005; http://ip.hhi.de/imagecom—G1/assets/pdfs/pieee—sullivan—wiegand—2005.pdf; 14 pages.
Sun, X., et al., “Region of Interest Extraction and Virtual Camera Control Based on Panoramic Video Capturing,” IEEE Trans. Multimedia, Oct. 27, 2003; http://vision.ece.ucsb.edu/publications/04mmXdsun.pdf; 14 pages.
Super Home Inspectors or Super Inspectors, [retrieved and printed on Mar. 18, 2010] http://www.umrt.com/PageManager/Default.aspx/PageID=2120325; 3 pages.
Tan, Kar-Han, et al., “Appearance-Based Eye Gaze Estimation,” In Proceedings IEEE WACV'02, 2002, 5 pages http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.19.8921.
Total immersion, Video Gallery, copyright 2008-2009 [retrieved and printed on Feb. 26, 2009], http://www.t-immersion.com/en,video-gallery,36.html, 1 page.
Trucco, E., et al., “Real-Time Disparity Maps for Immersive 3-D Teleconferencing by Hybrid Recursive Matching and Census Transform,” [retrieved and printed on May 4, 2010] http://server.cs.ucf.edu/˜vision/papers/VidReg-final.pdf; 9 pages.
Tsapatsoulis, N., et al., “Face Detection for Multimedia Applications,” Proceedings of the ICIP Sep. 10-13, 2000, Vancouver, BC, Canada; vol. 2, pp. 247-250.
Avrithis, Y., et al., “Color-Based Retrieval of Facial Images,” European Signal Processing Conference (EUSIPCO '00), Tampere, Finland; Sep. 2000; http://www.image.ece.ntua.gr/˜ntsap/presentations/eusipco00.ppt#256; 18 pages.
Awduche, D., et al., “Requirements for Traffic Engineering over MPLS,” Network Working Group, RFC 2702, Sep. 1999, 30 pages; http://tools.ietf.org/pdf/rfc2702.pdf.
Bakstein, Hynek, et al., “Visual Fidelity of Image Based Rendering,” Center for Machine Perception, Czech Technical University, Proceedings of the Computer Vision, Winter 2004, http://www.benogo.dk/publications/Bakstein-Pajdla-CVWW04.pdf; 10 pages.
Beesley, S.T.C., et al., “Active Macroblock Skipping in the H.264 Video Coding Standard,” in Proceedings of 2005 Conference on Visualization, Imaging, and Image Processing, Sep. 7-9, 2005, Benidorm, Spain, Paper 480-261. ACTA Press, ISBN: 0-88986-528-0; 5 pages.
Berzin, O., et al., “Mobility Support Using MPLS and MP-BGP Signaling,” Network Working Group, Apr. 28, 2008, 60 pages; http://www.potaroo.net/ietf/all-/draft-berzin-malis-mpls-mobility-01.txt.
Boccaccio, Jeff; CEPro, “Inside HDMI CEC: The Little-Known Control Feature,” Dec. 28, 2007; http://www.cepro.com/article/print/inside_hdmi_cec_the_little_known_control_feature; 2 pages.
Boros, S., “Policy-Based Network Management with SNMP,” Proceedings of the EUNICE 2000 Summer School Sep. 13-15, 2000, p. 3.
Bücken, R., “Bildfernsprechen: Videokonferenz vom Arbeitsplatz aus” [Video telephony: videoconferencing from the workplace], Funkschau, Weka Fachzeitschriften Verlag, Poing, DE, No. 17, Aug. 14, 1986, pp. 41-43, XP002537729; ISSN: 0016-2841, p. 43, left-hand column, line 34—middle column, line 24.
Chan, Eric, et al., “Experiments on block-matching techniques for video coding,” Multimedia Systems, © Springer-Verlag 1994; Multimedia Systems (1994) 2, pp. 228-241.
Chen et al., “Toward a Compelling Sensation of Telepresence: Demonstrating a Portal to a Distant (Static) Office,” Proceedings Visualization 2000; VIS 2000; Salt Lake City, UT, Oct. 8-13, 2000; Annual IEEE Conference on Visualization, Los Alamitos, CA; IEEE Comp. Soc., US, Jan. 1, 2000, pp. 327-333; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.1287.
Chen, Jason, “iBluetooth Lets iPhone Users Send and Receive Files Over Bluetooth,” Mar. 13, 2009; http://i.gizmodo.com/5169545/ibluetooth-lets-iphone-users-send-and-receive-files-over-bluetooth; 1 page.
Chen, Qing, et al., “Real-time Vision-based Hand Gesture Recognition Using Haar-like Features,” Instrumentation and Measurement Technology Conference, Warsaw, Poland, May 1-3, 2007, 6 pages; http://www.google.com/url?sa=t&source=web&cd=1&ved=0CB4QFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.93.103%26rep%3Drep1%26type%3Dpdf&ei=A28RTLKRDeftnQeXzZGRAw&usg=AFQjCNHpwj5MwjgGp-3goVzSWad6CO-Jzw.
Chien et al., “Efficient moving Object Segmentation Algorithm Using Background Registration Technique,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, No. 7, Jul. 2002, 10 pages.
Cisco: Bill Mauchly and Mod Marathe; UNC: Henry Fuchs, et al., “Depth-Dependent Perspective Rendering,” Apr. 15, 2008; 6 pages.
Costa, Cristina, et al., “Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map,” EURASIP Journal on Applied Signal Processing, Jan. 7, 2004, vol. 2004, No. 12; © 2004 Hindawi Publishing Corp.; XP002536356; ISSN: 1110-8657; pp. 1899-1911; http://downloads.hindawi.com/journals/asp/2004/470826.pdf.
Criminisi, A., et al., “Efficient Dense-Stereo and Novel-view Synthesis for Gaze Manipulation in One-to-one Teleconferencing,” Technical Rpt MSR-TR-2003-59, Sep. 2003 [retrieved and printed on Feb. 26, 2009], http://research.microsoft.com/pubs/67266/criminis_techrep2003-59.pdf, 41 pages.
Cumming, Jonathan, “Session Border Control in IMS, An Analysis of the Requirements for Session Border Control in IMS Networks,” Sections 1.1, 1.1.1, 1.1.3, 1.1.4, 2.1.1, 3.2, 3.3.1, 5.2.3 and pp. 7-8, Data Connection, 2005.
Daly, S., et al., “Face-based visually-optimized image sequence coding,” Image Processing, 1998. ICIP 98. Proceedings; 1998 International Conference on Chicago, IL; Oct. 4-7, 1998, Los Alamitos; IEEE Computing; vol. 3, Oct. 4, 1998; ISBN: 978-0-8186-8821-8; XP010586786; pp. 443-447.
Diaz, Jesus, “Zcam 3D Camera is Like Wii Without Wiimote and Minority Report Without Gloves,” Dec. 15, 2007; http://gizmodo.com/gadgets/zcam-depth-camera-could-be-wii-challenger/zcam-3d-camera-is-like-wii-without-wiimote-and-minority-report-without-gloves-334426.php; 3 pages.
Diaz, Jesus, “iPhone Bluetooth File Transfer Coming Soon (Yes!),” Jan. 26, 2009; http://i.gizmodo.com/5138797/iphone-bluetooth-file-transfer-coming-soon-yes; 1 page.
DVE Digital Video Enterprises, “DVE Tele-Immersion Room,” [retrieved and printed on Feb. 5, 2009] http://www.dvetelepresence.com/products/immersion_room.asp; 2 pages.
Dynamic Displays, copyright 2005-2008 [retrieved and printed on Feb. 24, 2009] http://www.zebraimaging.com/html/lighting_display.html, 2 pages.
ECmag.com, “IBS Products,” Published Apr. 2009; http://www.ecmag.com/index.cfm?fa=article&articleID=10065; 2 pages.
Eisert, Peter, “Immersive 3-D Video Conferencing: Challenges, Concepts and Implementations,” Proceedings of SPIE Visual Communications and Image Processing (VCIP), Lugano, Switzerland, Jul. 2003; 11 pages; http://iphome.hhi.de/eisert/papers/vcip03.pdf.
eJamming Audio, Learn More; [retrieved and printed on May 27, 2010] http://www.ejamming.com/learnmore/; 4 pages.
Electrophysics Glossary, “Infrared Cameras, Thermal Imaging, Night Vision, Roof Moisture Detection,” [retrieved and printed on Mar. 18, 2010] http://www.electrophysics.com/Browse/Brw_Glossary.asp; 11 pages.
Farrukh, A., et al., “Automated Segmentation of Skin-Tone Regions in Video Sequences,” Proceedings IEEE Students Conference, ISCON '02; Aug. 16-17, 2002; pp. 122-128.
Fiala, Mark, “Automatic Projector Calibration Using Self-Identifying Patterns,” National Research Council of Canada, Jun. 20-26, 2005; http://www.procams.org/procams2005/papers/procams05-36.pdf; 6 pages.
Foote, J., et al., “Flycam: Practical Panoramic Video and Automatic Camera Control,” in Proceedings of IEEE International Conference on Multimedia and Expo, vol. III, Jul. 30, 2000; pp. 1419-1422; http://citeseerx.ist.psu.edu/viewdoc/versions?doi=10.1.1.138.8686.
Freeman, Professor Wilson T., Computer Vision Lecture Slides, “6.869 Advances in Computer Vision: Learning and Interfaces,” Spring 2005; 21 pages.
Garg, Ashutosh, et al., “Audio-Visual Speaker Detection Using Dynamic Bayesian Networks,” IEEE International Conference on Automatic Face and Gesture Recognition, 2000 Proceedings, 7 pages; http://www.ifp.illinois.edu/˜ashutosh/papers/FG00.pdf.
Gemmell, Jim, et al., “Gaze Awareness for Video-conferencing: A Software Approach,” IEEE MultiMedia, Oct.-Dec. 2000; vol. 7, No. 4, pp. 26-35.
Geys et al., “Fast Interpolated Cameras by Combining a GPU Based Plane Sweep With a Max-Flow Regularisation Algorithm,” Sep. 9, 2004; 3D Data Processing, Visualization and Transmission 2004, pp. 534-541.
Gotchev, Atanas, “Computer Technologies for 3D Video Delivery for Home Entertainment,” International Conference on Computer Systems and Technologies; CompSysTech, Jun. 12-13, 2008; http://ecet.ecs.ru.acad.bg/cst08/docs/cp/Plenary/P.1.pdf; 6 pages.
Gries, Dan, “3D Particles Experiments in AS3 and Flash CS3, Dan's Comments,” [retrieved and printed on May 24, 2010] http://www.flashandmath.com/advanced/fourparticles/notes.html; 3 pages.
Guernsey, Lisa, “Toward Better Communication Across the Language Barrier,” Jul. 29, 1999; http://www.nytimes.com/1999/07/29/technology/toward-better-communication-across-the-language-barrier.html; 2 pages.
Guili, D., et al., “Orchestra!: A Distributed Platform for Virtual Musical Groups and Music Distance Learning over the Internet in Java™ Technology,” [retrieved and printed on Jun. 6, 2010] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=778626; 2 pages.
Gundavelli, S., et al., “Proxy Mobile IPv6,” Network Working Group, RFC 5213, Aug. 2008, 93 pages; http://tools.ietf.org/pdf/rfc5213.pdf.
Gussenhoven, Carlos, “Chapter 5 Transcription of Dutch Intonation,” Nov. 9, 2003, 33 pages; http://www.ru.nl/publish/pages/516003/todisun-ah.pdf.
Gvili, Ronen et al., “Depth Keying,” 3DV System Ltd., [Retrieved and printed on Dec. 5, 2011] 11 pages; http://research.microsoft.com/en-us/um/people/eyalofek/Depth%20Key/DepthKey.pdf.
Habili, Nariman, et al., “Segmentation of the Face and Hands in Sign Language Video Sequences Using Color and Motion Cues,” IEEE Transactions on Circuits and Systems for Video Technology, IEEE Service Center, vol. 14, No. 8, Aug. 1, 2004; ISSN: 1051-8215; XP011115755; pp. 1086-1097.
Hammadi, Nait Charif et al., “Tracking the Activity of Participants in a Meeting,” Machine Vision and Applications, Springer, Berlin, DE, LNKD-DOI: 10.1007/S00138-006-0015-5, vol. 17, No. 2, May 1, 2006, pp. 83-93, XP019323925; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.9832.
He, L., et al., “The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing,” Proc. SIGGRAPH, © 1996; http://research.microsoft.com/en-us/um/people/lhe/papers/siggraph96.vc.pdf; 8 pages.
Related Publications (1)
Number Date Country
20120127257 A1 May 2012 US