System and method for providing enhanced video processing in a network environment

Information

  • Patent Grant
  • Patent Number: 8,723,914
  • Date Filed: Friday, November 19, 2010
  • Date Issued: Tuesday, May 13, 2014
  • US Classifications (Field of Search): 348/14.01-14.16; 382/118; 382/170; 382/231; 382/225; 382/162; 375/240.08
  • CPC: H04N7/15; H04N7/142
  • International Classifications: H04N7/15
  • Term Extension: 529 days
Abstract
A method is provided in one example and includes receiving a video input from a camera element; using change detection statistics to identify background image data; using the background image data as a temporal reference to determine foreground image data of a particular video frame within the video input; using a selected foreground image for a background registration of a subsequent video frame; and providing at least a portion of the subsequent video frame to a next destination.
Description
TECHNICAL FIELD

This disclosure relates in general to the field of communications and, more particularly, to providing enhanced video processing in a network environment.


BACKGROUND

Video services have become increasingly important in today's society. In certain architectures, service providers may seek to offer sophisticated video conferencing services for their end users. The video conferencing architecture can offer an “in-person” meeting experience over a network. Video conferencing architectures can deliver real-time, face-to-face interactions between people using advanced visual, audio, and collaboration technologies. The ability to optimize video communications provides a significant challenge to system designers, device manufacturers, and service providers alike.





BRIEF DESCRIPTION OF THE DRAWINGS

To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:



FIG. 1 is a simplified block diagram of a system for providing a video session in a network environment in accordance with one embodiment of the present disclosure;



FIG. 2 is a simplified block diagram illustrating one example implementation of certain components associated with the system;



FIG. 3 is a simplified block diagram illustrating one example implementation of network traffic management associated with the system;



FIG. 4 is a simplified block diagram illustrating another example implementation of network traffic management associated with the system;



FIG. 5 is a simplified schematic diagram illustrating another example of the system for providing a video session in accordance with one embodiment of the present disclosure;



FIG. 6 is a simplified block diagram illustrating an example flow of data within an endpoint in accordance with one embodiment of the present disclosure;



FIG. 7 is a simplified diagram showing a multi-stage histogram in accordance with one embodiment of the present disclosure;



FIG. 8 is a simplified schematic diagram illustrating an example decision tree for making a skip coding determination for a portion of input video; and



FIG. 9 is a simplified flow diagram illustrating potential operations associated with the system of FIG. 5.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview


A method is provided in one example and includes receiving a video input from a camera element; using change detection statistics to identify background image data; using the background image data as a temporal reference to determine foreground image data of a particular video frame within the video input; using a selected foreground image for a background registration of a subsequent video frame; and providing at least a portion of the subsequent video frame to a next destination.
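
To make the flow above concrete, the following is a minimal sketch of background registration and foreground extraction, assuming 8-bit grayscale frames delivered as NumPy arrays. The class name and every threshold are illustrative assumptions for this sketch, not values taken from this disclosure.

```python
import numpy as np

STABLE_DIFF = 8       # assumption: max per-pixel change to count as "unchanged"
REGISTER_AFTER = 30   # assumption: frames of stability before registration
FG_DIFF = 20          # assumption: distance from background marking foreground

class BackgroundRegistry:
    """Tracks per-pixel change-detection statistics and registers stable
    pixels into a background reference buffer, which then serves as the
    temporal reference for foreground extraction (illustrative sketch)."""

    def __init__(self, first_frame: np.ndarray):
        self.prev = first_frame.astype(np.int16)
        self.background = first_frame.astype(np.int16)  # initial reference
        self.stable_count = np.zeros(first_frame.shape, dtype=np.int32)

    def update(self, frame: np.ndarray) -> np.ndarray:
        cur = frame.astype(np.int16)
        # Change-detection statistic: has this pixel moved since last frame?
        unchanged = np.abs(cur - self.prev) < STABLE_DIFF
        self.stable_count = np.where(unchanged, self.stable_count + 1, 0)
        # Register pixels that have been stable long enough as background.
        register = self.stable_count >= REGISTER_AFTER
        self.background[register] = cur[register]
        self.prev = cur
        # Foreground = pixels far from the registered background reference.
        return np.abs(cur - self.background) > FG_DIFF
```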


In more specific implementations, the method can include an advanced skip coding technique that comprises identifying values of pixels from noise within the video input; creating a skip-reference video image associated with the identified pixel values; comparing a portion of a current video image to the skip-reference video image; and determining a macroblock associated with the current video image to be skipped before an encoding operation occurs.


Other embodiments can include evaluating video data from the video input to determine whether a particular element within a plurality of elements in the video data is part of a stationary image. Portions of stationary images can be skipped before certain encoding operations occur. The foreground image data can further include a face and a torso image of a participant in a video session. The method can also include encoding non-skipped macroblocks associated with the current video image based on a noise level being above a designated noise threshold.
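
One way the comparison and skip determination described in the two preceding paragraphs could be implemented is sketched below: each 16×16 macroblock of the current image is compared against the skip-reference image, and the block is marked to be skipped when its difference is small enough to be explained by temporal noise. The SAD measure, the noise-scaled budget, and all names here are assumptions for illustration, not the claimed implementation.

```python
import numpy as np

MB = 16           # macroblock size used by H.264-class codecs
NOISE_GAIN = 2.0  # assumption: scale from noise level to the skip budget

def skip_decision_map(frame: np.ndarray, skip_reference: np.ndarray,
                      noise_level: float) -> np.ndarray:
    """Mark macroblocks to skip before encoding by comparing each 16x16
    block of the current image to the skip-reference image (sketch)."""
    h, w = frame.shape                           # assumed multiples of MB
    budget = NOISE_GAIN * noise_level * MB * MB  # per-block SAD allowance
    skip = np.zeros((h // MB, w // MB), dtype=bool)
    for by in range(h // MB):
        for bx in range(w // MB):
            cur = frame[by*MB:(by+1)*MB, bx*MB:(bx+1)*MB].astype(np.int32)
            ref = skip_reference[by*MB:(by+1)*MB, bx*MB:(bx+1)*MB].astype(np.int32)
            sad = np.abs(cur - ref).sum()        # sum of absolute differences
            skip[by, bx] = sad <= budget
    # Blocks left False proceed to motion estimation, transform/quantization,
    # and entropy coding as usual.
    return skip
```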


Additionally, certain embodiments may include generating a plurality of histograms to represent variation statistics between a current input video frame and a temporally preceding video frame. Each of the histograms includes differing levels of luminance, and if a selected one of the histograms reaches a certain level of luminance, a corresponding pixel of an associated video image is marked to be registered to a reference buffer.
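
A simple reading of this histogram scheme can be sketched as follows, with a small per-pixel set of coarse luminance bins accumulating statistics frame over frame. The bin count and registration level are hypothetical values chosen for illustration.

```python
import numpy as np

N_BINS = 8           # assumption: coarse luminance bins per pixel
REGISTER_COUNT = 24  # assumption: hits before a pixel is marked for registration

class MultiStageHistogram:
    """Per-pixel luminance histograms that accumulate variation statistics
    across successive frames; once one bin has been hit often enough, the
    pixel is marked to be registered to the reference buffer (an
    illustrative reading of the scheme described above)."""

    def __init__(self, shape):
        self.hist = np.zeros(shape + (N_BINS,), dtype=np.int32)
        self.rows, self.cols = np.indices(shape)

    def accumulate(self, frame: np.ndarray) -> np.ndarray:
        # Coarse luminance bin for each pixel of the current frame.
        bins = (frame.astype(np.int32) * N_BINS) // 256
        self.hist[self.rows, self.cols, bins] += 1
        # Mark pixels whose dominant bin has reached the registration level.
        return self.hist.max(axis=-1) >= REGISTER_COUNT
```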


Example Embodiments


Turning to FIG. 1, FIG. 1 is a simplified block diagram of a system 10 for providing a video session in a network environment. In this particular example, system 10 may include a display 12, a camera element 14, a user interface (UI) 18, a console element 20, a handset 28, and a network 30. A series of speakers 16 are provisioned in conjunction with camera element 14 in order to transmit and receive audio data. In one particular example implementation, a wireless microphone 24 is provided in order to receive audio data in a surrounding environment (e.g., from one or more audience members). Note that this wireless microphone 24 is purely optional, as speakers 16 are capable of sufficiently capturing audio data in a surrounding environment during any number of videoconferencing applications (which are detailed below).


In general terms, system 10 can be configured to capture video image data and/or audio data in the context of videoconferencing. System 10 may include a configuration capable of transmission control protocol/internet protocol (TCP/IP) communications for the transmission and/or reception of packets in a network. System 10 may also operate in conjunction with a user datagram protocol/IP (UDP/IP) or any other suitable protocol, where appropriate and based on particular communication needs.


In certain implementations, handset 28 can be used as a remote control for system 10. For example, handset 28 can offer a wireless remote control that allows it to communicate with display 12, camera element 14, and/or console element 20 via a wireless network link (e.g., infrared, Bluetooth, any type of IEEE 802.11-based protocol, etc.). Handset 28 can further be provisioned as a wireless mobile phone (e.g., a speakerphone device) with various dial pads: some of which are shown by way of example in FIG. 1. In other implementations, handset 28 operates as a learning mechanism and/or a universal remote controller, which allows it to readily control display 12, camera element 14, console element 20, and/or any audiovisual (AV) receiver device (e.g., managing functions such as ON/OFF, volume, input select, etc. to enhance the overall video experience). In a particular set of examples, a specific button on handset 28 can launch UI 18 for navigating through any number of options provided in submenus of the UI software. Additionally, a dedicated button can be used to make/answer calls, end calls, turn on/off camera element 14, turn on/off the microphone, turn on/off console element 20, etc. Furthermore, a set of playback controls can be provided on handset 28 in order to control the video data being rendered on display 12.


Note that handset 28 can be configured to launch, control, and/or manage UI 18. In one particular instance, UI 18 includes a clover design having four separate functions along its perimeter (i.e., up, down, left, right). The center of UI 18 can be used to initiate calls or to configure call options. The lower widget icon may be used to adjust settings, inclusive of controlling profile information, privacy settings, console settings, etc. The right-hand icon (when selected) can be used to view video messages sent to a particular user. The upper icon can be used to manage contacts (e.g., add, view, and connect to other individuals). The director's card (provided as the left icon) can be used to record and send video messages to other individuals. It is imperative to note that these menu choices can be changed considerably without departing from the scope of the present disclosure. Additionally, these icons may be customized, changed, or managed in any suitable fashion. Furthermore, the icons of UI 18 are not exhaustive, as any other suitable features may be provided in the context of UI 18. Along similar lines, the submenu navigation choices provided beneath each of these icons can include any suitable parameter applicable to videoconferencing, networking, user data management, profiles, etc.


In operation of an example implementation, system 10 can be used to conduct video calls (e.g., supporting both inbound and outbound directional call flows). For the inbound call scenario, on reception of an inbound call request, console element 20 is configured to contact the paired handset(s) 28 (e.g., waking it from sleep, where appropriate). Handset 28 can be configured to play a ringtone, turn on an LED indicator, and/or display UI 18 (e.g., including the incoming caller's contact information). If configured to do so, UI 18 can also be displayed over any passthrough video sources on console element 20. If the callee chooses to answer the call with one of the call control buttons, console element 20 offers its media capabilities to the caller's endpoint. In certain example implementations, by default, audio media can be offered at the start of the call. At any time during a voice call, both parties can agree to enter into a full video session (e.g., referred to as a “go big” protocol) at which point video media is negotiated. As a shortcut, the intention to “go big” can be pre-voted at the start of the call. At any time after video media is flowing, the call can also be de-escalated back to an audio-only call. In certain instances, there could be an option to automatically answer incoming calls as immediate full-video sessions.


In the case of an ad hoc outbound call, the user can select a callee from their contact list, select a callee via a speed dial setting, or alternatively the user can enter any type of identifier (e.g., a telephone number, a name, a videoconferencing (e.g., Telepresence, manufactured by Cisco, Inc. of San Jose, Calif.) number directly). If the callee answers, the call scenario proceeds, similar to that of an inbound call. In the case of a hold and resume scenario, an in-call UI 18 signal can be provided to put a call on hold, and subsequently the call can be resumed at a later time. Note that in other instances, system 10 can be used to execute scheduled calls, call transfer functions, multipoint calls, and/or various other conferencing capabilities.


In the case of the consumer user attempting a communication with a business entity, certain parameters may be changed based on interoperability issues. For example, secure business endpoints may be supported, where signaling and media would be secure (both audio and video). Appropriate messages can be displayed in UI 18 to inform the user of the reason for any security-forced call drops. Signaling can be considered secure by having both a business exchange and consumer networks physically co-located, or by using a secure tunnel (e.g., a site-to-site virtual private network (VPN) tunnel) between the two entities.


Before turning to additional flows associated with system 10, FIG. 2 is introduced in order to illustrate some of the potential arrangements and configurations for system 10. In the particular example implementation of FIG. 2, camera element 14 includes a processor 40a and a memory element 42a. Camera element 14 is coupled to console element 20, which similarly includes a processor 40b and a memory element 42b. A power cord 36 is provided between an outlet and console element 20. Any suitable connections (wired or wireless) can be used in order to connect any of the components of FIG. 2. In certain examples, the cables used may include Ethernet cables, High-Definition Multimedia Interface (HDMI) cables, universal serial bus (USB) cables, or any other suitable link configured for carrying data or energy between two devices.


In regards to a physical infrastructure, camera element 14 can be configured to fasten to any edge (e.g., a top edge) of display 12 (e.g., a flat-screen HD television). Camera element 14 can be included as part of an integrated component (i.e., a single component, a proprietary element, a set-top box, console element 20, etc.) that could include speakers 16 (e.g., an array microphone). Thus, all of these elements (camera element 14, speakers 16, console element 20) can be combined and/or be suitably consolidated into an integrated component that rests on (or that is fixed to, or that is positioned near) display 12. Alternatively, each of these elements are their own separate devices that can be coupled (or simply interact with each other), or be adequately positioned in any appropriate fashion.


Also provided in FIG. 2 are a router 34 and a set-top box 32: both of which may be coupled to console element 20. In a particular example, router 34 can be a home wireless router configured for providing a connection to network 30. Alternatively, router 34 can employ a simple Ethernet cable in order to provide network connectivity for data transmissions associated with system 10. Handset 28 can be recharged through a cradle dock 26 (as depicted in FIG. 2). [Handset 28 can be functional while docked.] Alternatively, handset 28 may be powered by batteries, solar charging, a cable, or by any power source, or any suitable combination of these mechanisms.


In one particular example, the call signaling of system 10 can be provided by a session initiation protocol (SIP). In addition, the media for the videoconferencing platform can be provided by Secure Real-time Transport Protocol (SRTP), or any other appropriate real-time protocol. SRTP addresses security for RTP and, further, can be configured to add confidentiality, message authentication, and replay protection to that protocol. SRTP is preferred for protecting voice over IP (VoIP) traffic because it can be used in conjunction with header compression and, further, it generally has no effect on IP quality of service (QoS). For network address translation (NAT)/firewall (FW) traversal, any suitable mechanism can be employed by system 10. In one particular example, these functions can be provided by a split-tunneled VPN with session traversal utilities for NAT (STUN) and Interactive Connectivity Establishment (ICE).


Signaling can propagate to a call agent via the VPN. Additionally, media can be sent directly from the endpoint to another endpoint (i.e., from one videoconferencing platform to another). Note that as used herein, the term ‘media’ is inclusive of audio data (which may include voice data) and video data (which may include any type of image data). The video data can include any suitable images (such as that which is captured by camera element 14, by a counterparty's camera element, by a Webcam, by a smartphone, by an iPad, etc.). The term ‘smartphone’ as used herein includes any type of mobile device capable of operating in conjunction with a video service. This would naturally include items such as the Google Droid, the iPhone, an iPad, etc. In addition, the term ‘signaling data’ is inclusive of any appropriate control information that can be sent toward a network. This may be inclusive of traffic used to establish a video session initially, along with any type of negotiations (e.g., for bit rates, for bandwidth, etc.) that may be appropriate for the particular videoconference. This may further be inclusive of items such as administrative traffic, account traffic (for user account management, contact lists [which include buddy lists, as detailed below], etc.), and/or other types of traffic, which are not provided as part of the media data.


In order to handle symmetric NAT, Traversal Using Relay NAT (TURN) can be used by system 10 in particular embodiments. User names for the videoconferencing platform can be provided by E.164 numbers in a particular example. Alternatively, the user naming can be a simple user ID (e.g., assigned by the service provider, selected by the user, etc.), a full name of the user (or a group name), an avatar, or any other symbol, number, or letter combination that can be used to distinguish one user from another. Note that a single name can also be associated with a group (e.g., a family, a business unit, etc.). The security for communications of system 10 can be addressed a number of ways. In one implementation, the video services (i.e., cloud services) can be protected by any suitable security protocol (e.g., security software, adaptive security appliances (ASA), etc.). Additionally, intrusion protection systems, firewalls, and anti-denial-of-service mechanisms can be provided for the architecture (both out in the network, and/or locally within a residential environment).


Turning to details associated with the infrastructure of system 10, in one particular example, camera element 14 is a video camera configured to capture, record, maintain, cache, receive, and/or transmit image data. This could include transmitting packets over network 30 to a suitable next destination. The captured/recorded image data could be stored in camera element 14 itself, or be provided in some suitable storage area (e.g., a database, a server, console element 20, etc.). In one particular instance, camera element 14 can be its own separate network device and have a separate IP address. Camera element 14 could include a wireless camera, a high-definition camera, or any other suitable camera device configured to capture image data.


Camera element 14 may interact with (or be inclusive of) devices used to initiate a communication for a video session, such as a switch, console element 20, a proprietary endpoint, a microphone, a dial pad, a bridge, a telephone, a computer, or any other device, component, element, or object capable of initiating video, voice, audio, media, or data exchanges within system 10. Camera element 14 can also be configured to include a receiving module, a transmitting module, a processor, a memory, a network interface, a call initiation and acceptance facility such as a dial pad, one or more displays, etc. Any one or more of these items may be consolidated, combined, eliminated entirely, or varied considerably and those modifications may be made based on particular communication needs.


Camera element 14 can include a high-performance lens and an optical zoom, where camera element 14 is capable of performing panning and tilting operations. The video and the audio streams can be sent from camera element 14 to console element 20, where they are mixed into the HDMI stream. In certain implementations, camera element 14 can be provisioned as a light sensor such that the architecture can detect whether the shutter of the camera is open or closed (or whether the shutter is partially open). An application program interface (API) can be used to control the operations of camera element 14.


Display 12 offers a screen on which video data can be rendered for the end user. Note that as used herein in this Specification, the term ‘display’ is meant to connote any element that is capable of delivering image data (inclusive of video information), text, sound, audiovisual data, etc. to an end user. This would necessarily be inclusive of any panel, plasma element, television (which may be high-definition), monitor, computer interface, screen, Telepresence devices (inclusive of Telepresence boards, panels, screens, surfaces, etc.), or any other suitable element that is capable of delivering/rendering/projecting such information.


Network 30 represents a series of points or nodes of interconnected communication paths for receiving and transmitting packets of information that propagate through system 10. Network 30 offers a communicative interface between any of the components of FIGS. 1-2 and remote sites, and may be any local area network (LAN), wireless local area network (WLAN), metropolitan area network (MAN), wide area network (WAN), VPN, Intranet, Extranet, or any other appropriate architecture or system that facilitates communications in a network environment.


Console element 20 is configured to receive information from camera element 14 (e.g., via some connection that may attach to an integrated device (e.g., a set-top box, a proprietary box, etc.) that sits atop (or near) display 12 and that includes (or is part of) camera element 14). Console element 20 may also be configured to control compression activities, or additional processing associated with data received from camera element 14. Alternatively, the actual integrated device can perform this additional processing before image data is sent to its next intended destination. Console element 20 can also be configured to store, aggregate, process, export, or otherwise maintain image data and logs in any appropriate format, where these activities can involve processor 40b and memory element 42b. Console element 20 is a video element that facilitates data flows between endpoints and a given network. As used herein in this Specification, the term ‘video element’ is meant to encompass servers, proprietary boxes, network appliances, set-top boxes, or other suitable device, component, element, or object operable to exchange video information with camera element 14.


Console element 20 may interface with camera element 14 through a wireless connection, or via one or more cables or wires that allow for the propagation of signals between these elements. These devices can also receive signals from an intermediary device, a remote control, handset 28, etc. and the signals may leverage infrared, Bluetooth, WiFi, electromagnetic waves generally, or any other suitable transmission protocol for communicating data (e.g., potentially over a network) from one element to another. Virtually any control path can be leveraged in order to deliver information between console element 20 and camera element 14. Transmissions between these two devices can be bidirectional in certain embodiments such that the devices can interact with each other. This would allow the devices to acknowledge transmissions from each other and offer feedback where appropriate. Any of these devices can be consolidated with each other, or operate independently based on particular configuration needs. In one particular instance, camera element 14 is intelligently powered using a USB cable. In a more specific example, video data is transmitted over an HDMI link, and control data is communicated over a USB link.


In certain examples, console element 20 can have an independent light sensor provisioned within it to measure the lighting in a given room. Subsequently, the architecture can adjust camera exposure, shuttering, lens adjustments, etc. based on the light that is detected in the room. Camera element 14 may also attempt to provide this function; however, having a separate light sensor offers a more deterministic way of adjusting these parameters based on the light that is sensed in the room. An algorithm (e.g., within camera element 14 and/or console element 20) can be executed to make camera adjustments based on light detection. In an IDLE mode, the lens of camera element 14 can close automatically. The lens of camera element 14 can open for an incoming call, and can close when the call is completed (or these operations may be controlled by handset 28). The architecture can also account for challenging lighting environments for camera element 14. For example, in the case of bright sunlight behind an individual, system 10 can optimize the exposure of the individual's face.


In regards to audio data (inclusive of voice data), in one particular example, speakers 16 are provisioned as a microphone array, which can be suitably calibrated. Note that in certain consumer applications, the consumer's home system is the variant, which is in contrast to most enterprise systems that have fixed (predictable) office structures. Camera element 14 can include an array of eight microphones in a particular example, but alternatively any number of microphones can be provisioned to suitably capture audio data. The microphones can be spaced linearly or logarithmically in order to achieve a desired audio capture function. MicroElectrical-Mechanical System (MEMS) technology can be employed for each microphone in certain implementations. The MEMS microphones represent variations of the condenser microphone design, having built-in analog-to-digital converter (ADC) circuits.


The audio mechanisms of system 10 can be configured to add a delay to the system in order to ensure that the acoustics function properly. In essence, the videoconferencing architecture does not inherently know the appropriate delay because of the unique domain of the consumer. For example, there could be a home theater system being used for acoustic purposes. Hence, system 10 can determine the proper delay, which would be unique to that particular environment. In one particular instance, the delay can be measured, where the echoing effects from the existing speakers are suitably canceled. An embedded watermarking signature can also be provided in each of the speakers, where the signature can be detected in order to determine an appropriate delay. Note that there is also some additional delay added by display 12 itself because the clocking mechanism is generally not deterministic. The architecture can dynamically update the delay to account for this issue. Many of these functions can be accomplished by console element 20 and/or camera element 14: both of which can be intelligently configured for performing these adjustments.


The architecture can also send out a signal (e.g., white noise) as a test for measuring delay. In certain instances, this function is done automatically without having to prompt the user. The architecture can also employ wireless microphone 24, which can use a dedicated link in certain implementations. Wireless microphone 24 can be paired (akin to Bluetooth pairing) such that privacy issues can be suitably addressed. Wireless microphone 24 can be taken anywhere (e.g., in the room, in the house, etc.) and still provide appropriate audio functions, where multiplexing would occur at console element 20 for this particular application. Similarly, there could be an incarnation of the same for a given speaker (or the speaker/microphone can be provided together as a mobile unit, which is portable). The speaker could be similarly used anywhere in the room, in the house, etc. It should be noted that this is not only a convenience issue, but also a performance issue in suitably capturing/delivering audio signals having the proper strength and quality.
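
As one example of how such a delay measurement could work, the emitted test signal can be cross-correlated with the microphone capture, with the lag of the correlation peak giving the delay. This is a standard estimation technique, sketched here under assumed parameters (48 kHz sampling, a half-second noise burst); it is not necessarily the exact method used by the platform.

```python
import numpy as np

def estimate_delay(reference: np.ndarray, captured: np.ndarray, rate: int) -> float:
    """Estimate playback-to-capture delay by cross-correlating the emitted
    test signal (e.g., white noise) with the microphone capture (sketch)."""
    # Full cross-correlation; the lag of the peak is the delay in samples.
    corr = np.correlate(captured, reference, mode="full")
    lag = corr.argmax() - (len(reference) - 1)
    return max(lag, 0) / rate   # delay in seconds

# Hypothetical usage: 48 kHz capture, 0.5 s white-noise burst, 20 ms delay.
rate = 48000
noise = np.random.randn(rate // 2)
room = np.concatenate([np.zeros(960),
                       noise * 0.6 + np.random.randn(len(noise)) * 0.05])
print(f"measured delay: {estimate_delay(noise, room, rate)*1000:.1f} ms")  # ~20 ms
```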


In terms of call answering and video messaging, handset 28 allows an individual to have the option of taking a voice call instead of answering a videoconferencing call. This is because handset 28 can have the intelligence to operate purely as a mobile phone. For this reason, handset 28 can readily be substituted/replaced by various types of smartphones, which could have an application provisioned thereon for controlling the videoconferencing activities. Handset 28 also affords the ability to be notified (through the handset itself) of an incoming videoconferencing call, with the option of rendering that call on display 12. A simple visual alert (e.g., an LED, a vibration, etc.) can be used to indicate a video message is waiting to be heard/watched.


The video messaging can include snapshots of video frames that would be indicative of the actual message images. In the user's video Inbox, the current videomail can include images of the actual messages being stored for future playback. For example, if the message were from the user's mother, the videomail would include a series of snapshots of the mother speaking during that videomail. In one particular example, the actual videomail is sampled at certain time intervals (e.g., every 10 seconds) in order to generate these images, which serve as a preview of the videomail message. Alternatively, the snapshots can be limited in number. In other instances, the snapshots are arbitrarily chosen, or selected at the beginning, the middle, and the end of the video message. In other implementations, the snapshots are taken as a percentage of the entire video message (e.g., at the 20% mark, at the 40% mark, and at the 100% mark). In other examples, the videomail in the Inbox is previewed by just showing the image associated with that particular user ID that authored the video message.
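
The snapshot schedules described above reduce to simple arithmetic; a small sketch follows, with all function names hypothetical.

```python
def snapshot_times(duration_s: float, marks=(0.2, 0.4, 1.0)) -> list:
    """Preview-snapshot timestamps as percentages of a videomail's length
    (the 20%/40%/100% scheme mentioned above; marks are configurable)."""
    return [round(duration_s * m, 1) for m in marks]

def periodic_snapshots(duration_s: float, interval_s: float = 10.0) -> list:
    """Alternative scheme: sample at fixed intervals (e.g., every 10 seconds)."""
    return list(range(0, int(duration_s) + 1, int(interval_s)))

print(snapshot_times(75.0))      # [15.0, 30.0, 75.0]
print(periodic_snapshots(35.0))  # [0, 10, 20, 30]
```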


In operation of an example involving a user watching a normal television program on display 12, an incoming call can be received by the videoconferencing platform. The notification can arrive even if the television is off (e.g., through speakers of system 10). If an individual chooses to answer the call, then the videoconferencing platform takes over the television. In one example involving a digital video recorder (DVR), the programming can be paused. In other examples, the user can keep the call minimized so (for example) a user could speak with a friend while watching a football game. Console element 20 can be configured to record a message, and then send that message to any suitable next destination. For example, the user can send a link to someone for a particular message. The user can also use Flip Share or YouTube technology to upload/send a message to any appropriate destination. In a general sense, the messages can be resident in a network cloud such that they could still be accessed (e.g., over a wireless link) even if the power were down at the residence, or if the user were not at the residence.


The user can also switch from a video call to handset 28, and from handset 28 back to a video call. For example, the user can initiate a call on a smartphone and subsequently transition it to the videoconferencing display 12. The user can also do the reverse, where the user starts at the videoconferencing platform and switches to a smartphone. Note that wireless microphone 24 can operate in a certain, preferred range (e.g., 12 to 15 feet), where if the individual moves further away from that range, users could elect to transition to handset 28 (in a more conventional telephony manner). Consider the case where the room becomes noisy due to family members, and the user on the videoconferencing call elects to simply switch over to a smartphone, to a given landline, etc.


Motion detection can also be used in order to initiate, or to answer video calls. For example, in the case where a remote control is difficult to find in a living room, a simple hand-waving gesture could be used to answer an incoming video call. Additionally, the system (e.g., camera element 14 cooperating with console element 20) can generally detect particular body parts in order to execute this protocol. For example, the architecture can distinguish between a dog running past display 12, versus handwaving being used to answer an incoming call. Along similar lines, the user can use different gestures to perform different call functions (e.g., clasping his hands to put a call on hold, clapping his hands to end the call, pointing in order to add a person to a contact list, etc.).


Note that Wi-Fi is fully supported by system 10. In most videoconferencing scenarios, there can be massive amounts of data (much of which is time critical) propagating into (or out of) the architecture. Video packets (i.e., low-latency data) propagating over a Wi-Fi connection can be properly accommodated by system 10. In one particular example, nonmoving (static) background images can be segmented out of the video image, which is being rendered by display 12. The architecture (e.g., through console element 20) can then lower the bit rate significantly on those images. Allocations can then be made for other images that are moving (i.e., changing in some way). In certain example implementations, face-detection algorithms can also be employed, where the video is optimized based on those algorithm results.
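
One plausible way to lower the bit rate on static regions is to map a foreground mask to a per-macroblock quantization parameter (QP): coarse quantization for static background blocks, fine quantization for moving ones. The QP values and names below are illustrative assumptions, not taken from this disclosure.

```python
import numpy as np

QP_FOREGROUND = 26  # assumption: finer quantization for moving regions
QP_BACKGROUND = 38  # assumption: coarser quantization for static regions

def qp_map(foreground_mask: np.ndarray, mb: int = 16) -> np.ndarray:
    """Derive a per-macroblock quantization map from a foreground mask:
    spend bits on moving content, starve the static background (one way
    to realize the bit-rate reduction described above)."""
    h, w = foreground_mask.shape  # dimensions assumed multiples of mb
    moving = foreground_mask.reshape(h // mb, mb, w // mb, mb).any(axis=(1, 3))
    return np.where(moving, QP_FOREGROUND, QP_BACKGROUND)
```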


Certain phone features allow for handset 28 to offer speed dialing, and a mechanism for saving contacts into a contact list. Calls can be made to users on the speed dial list or the contact list with a single button push on handset 28. Additionally, calls can be initiated using either the UI of handset 28, or the on-screen UI 18. Furthermore, calls can be initiated from a web portal, where the caller can confirm call initiation at the endpoint by pressing voice-only, or a video call button on handset 28. Also, calls can be initiated from other web pages via a call widget (e.g., calling a person by clicking on his Facebook object). In addition, the caller can look up a recipient in an online directory (e.g., a directory of all Telepresence users stored in a database), place a call to that recipient, and save the recipient's contact information into the contact list. In terms of receiving videoconferencing calls, incoming calls can be accepted with a single button push on handset 28. Call recipients have the opportunity to accept or reject a call. Rejected calls can be routed to videomail (if permitted by the recipient's safety settings).


In regards to call quality, if the available bandwidth decreases during a call, the video resolution is scaled down, as appropriate. If the available bandwidth increases during a call, the video resolution can be scaled up. An on-screen icon can be provided on display 12 to inform the user of the quality of his videoconferencing experience. The purpose of this information can be to inform the user of a poor experience, potentially being caused by network conditions, and that the user can improve his experience by upgrading his broadband service. When communicating with a Webcam, the picture on display 12 can be windowed inside a black frame: regardless of the actual quality of the Webcam video.


In regards to videomail, when a call cannot be answered in real time, it is not lost, but rather, forwarded automatically to videomail. Videomail can be accessed from the videoconferencing system, a web portal, a smartphone, a laptop, or any other suitable endpoint device. Note that the user is afforded the ability to set a designated interval for when an incoming counterparty would be relegated to the user's videomail Inbox. The term ‘designated interval’ is inclusive of a number of rings, a certain time period (e.g., in seconds), or a zero interval, in which case the counterparty's video call request would be immediately routed to the user's videomail. In certain embodiments, the ‘designated interval’ has a default configured by an administrator.


Videomail can be stored in the network (e.g., in the cloud) in particular implementations of system 10. Alternatively, the videomail can be stored locally at the consumer's residence (e.g., at a laptop, a personal computer, an external hard drive, a server, or in any other appropriate data storage device). Videomail can be played with the following minimum set of playback controls: Play, Pause, Stop, Fast or Skip Forward, Fast or Skip Reverse, Go Back to Start. In a particular implementation, videomail is only viewed by the intended recipient. Notifications of new videomail can be sent to other devices by short message service (SMS) text message (e.g., to a mobile device) or by email. An immediate notification can also be shown on handset 28. For video recordings, videos can be recorded and stored in the network for future viewing and distribution (e.g., as part of video services, which are detailed below with reference to FIG. 3). Calls can similarly be recorded in real time and stored in the network for future viewing and distribution. When sharing recorded videos with videoconferencing users, the architecture can specify exactly which videoconferencing users have access to the video data. When the share list contains one or more email addresses, access control is not enabled in particular implementations (e.g., any individual who has the URL could access the video).


In terms of media sharing, system 10 can provide a simple mechanism for sharing digital photos and videos with removable flash media, flash and hard-drive high definition digital camcorders, digital still cameras, and other portable storage devices. This can be fostered by supporting an external USB connection for these devices to the USB port, which can be provisioned at console element 20, display 12, camera element 14, a proprietary device, or at any other suitable location.


The media sharing application (e.g., resident in console element 20) supports playback of compressed AV file media that is stored on the USB device. Furthermore, this media sharing can be supported via an external HDMI connection for these devices to the HDMI port. System 10 can also provide a mechanism for sharing digital photos and videos that are on a computer, on a Network Attached Storage (NAS) device, on the local network, etc. The mechanism can be universal plug and play (UPnP)/digital living network alliance (DLNA) renderer compliant. The media sharing application can also provide a mechanism for sharing digital photos and videos that are on either a photo or video sharing site (e.g., Flickr, YouTube, etc.), as discussed herein.


System 10 can also provide a mechanism for viewing broadcast HDTV programs (e.g., watching the Super Bowl) with the HDTV set-top box HDMI AV feed displayed in picture-in-picture (PIP) with the call video. Continuing with this example, the Super Bowl broadcast feed can be from a local set-top box 32 and not be shared. Only the call video and voice would be shared in this example. The audio portion of the call can be redirected to handset 28 (e.g., speakerphone by default). The audio from the local TV can be passed through to HDMI and optical links (e.g., TOSlink outputs).


In an example scenario, initially the game video can fill the main screen and the call video could be in the smaller PIP. The audio for the game can pass through the box to the television, or to an AV receiver surround-sound system. The audio for the video call would be supported by handset 28. In a different scenario, while watching the game, where one caller prefers to switch the main screen from the game to the video call (e.g., during halftime), then the following activities would occur. [Note that this is consistent with the other PIP experiences.] The call video can fill the main screen, where the game fills the smaller PIP window. The audio for the video call can move to the TV or to the AV receiver surround-sound system, and the game audio can switch to handset 28. Note that none of these activities requires the user to be “off camera” to control the experience: meaning, the user would not have to leave his couch in order to control/coordinate all of these activities.


In one particular example, console element 20 and camera element 14 can support any suitable frame rate (e.g., a 50-60 frames/second (fps) rate) for HD video for local, uncompressed inputs and outputs. Additionally, the video (e.g., the HDMI 1.3 video) can be provided as a digital signal input/output for local, uncompressed inputs and outputs. There is a passthrough for High-bandwidth Digital Content Protection (HDCP) data for local, uncompressed inputs and outputs from HDMI.


In regards to audio support, HDMI audio can be provided as a digital signal input/output. There can also be a stereo analog line-level output to support legacy devices in the environment. This is in addition to a digital audio output, which may be in the form of an optical link output such as a TOSlink output. For the audiovisual switching activities, audio and video can be patched from inputs, videoconferencing video, or other generated sources, to a local full-screen output. The architecture can offer a protocol for automatically turning on and selecting the correct source of the HDTV (along with any external audio system, when the audiovisual configuration allows for this while answering a call). This feature (and the other features of handset 28) can be implemented via infrared, Bluetooth, any form of the IEEE 802.11 protocol, HDMI-Consumer Electronics Control (CEC), etc.


In regards to camera element 14, the architecture can provide a full-motion video (e.g., at 30 fps). Participants outside of the range may be brought into focus via autofocus. Camera element 14 can provide identification information to console element 20, a set-top satellite, and/or any other suitable device regarding its capabilities. Camera element 14 can be provisioned with any suitable pixel resolution (e.g., 1280×720 pixel (720 p) resolution, 1920×1080 pixel (1080 p) resolution, etc.). If depth of focus is greater than or equal to two meters, then manual focus can be suggested for setup activities, and the autofocus feature/option would be desirable for the user. In operation, the user can manually focus camera element 14 on his sofa (or to any other target area) during setup. If successful, this issue would not have to be revisited. If depth of focus is less than or equal to one meter (which is commonly the case) then autofocus can be implemented. A digital people-action finder may also be provisioned for system 10 using camera element 14. Both pan and tilt features are available manually at setup, and during a video call. Similarly, zoom is available manually at set-up time, and during a video call.


Handset 28 may be equipped with any suitable microphone. In one particular implementation, the microphone is a mono-channel mouthpiece microphone optimized for capturing high quality audio in a voice range. The microphone may be placed to optimize audio capture with standard ear-mouth distance. Handset 28 can have a 3.5 mm jack for a headphone with microphone. Note that system 10 can support Home Network Administration Protocol (HNAP) and, further, be compatible with Network Magic, Linksys Easy-Link Advisor, or any other suitable home network management tool.


In one example, handset 28 has an infrared transmitter for controlling standard home theatre components. The minimum controls for handset 28 in this example can be power-on, input select, volume up/down, and audio output mute of the TV and AV receiver. Console element 20 (along with camera element 14) can have an infrared receiver to facilitate pairing of the videoconferencing system with other remote controls, which can allow other remotes to control the videoconferencing system. Suitable pairing can occur either by entering infrared codes into handset 28, or by pointing a remote from the target system at an infrared receiver of the videoconferencing system (e.g., similar to how universal remotes learn and are paired).


For call management, system 10 can allow a user to initiate, accept, and disconnect calls to and from voice-only telephones (e.g., using handset 28 in a voice-only mode). Call forwarding can also be provided such that video calls are forwarded between console elements 20 at each endpoint of the video session. Additionally, announcements can be provided such that a default announcement video can be played to callers who are leaving a videomail. A self-view is available at any time, and the self-view can be triggered through a user demand by the user pressing a button on handset 28. The self-view can be supported with a mirror mode that shows the reverse image of the camera, as if the user was looking in a mirror. This can occur at any time, including while idle, while on a videoconferencing call, while on an audio-only call, etc.



FIG. 3 is a simplified block diagram illustrating one potential operation associated with system 10. In this particular implementation, console element 20 is provisioned with a VPN client module 44, and a media module 46. Console element 20 is coupled to a home router 48, which can provide connectivity to another videoconferencing endpoint 50 via a network 52. Home router 48 can also provide connectivity to a network that includes a number of video services 56. In this example, video services 56 include a consumer database 58, a videomail server 60, a call control server 62, web services 64, and a session border controller 66.


Any number of traffic management features can be supported by system 10. In a simple example, system 10 can allow a point-to-point connection to be made between two home videoconferencing systems. A connection can also be made between a home videoconferencing system and an enterprise videoconferencing system. The packets associated with the call may be routed through a home router, which can direct the packets to an exchange or a gateway in the network. The consumer endpoint does not need to support the second data channel; any shared content can be merged into the main data stream. A multipoint connection can be made between a combination of three or more home and enterprise videoconferencing systems.


In operation, the VPN is leveraged in order to transmit administrative and signaling traffic to the network. Additionally, the media data (e.g., voice and video) can be exchanged outside of that link (e.g., it can be provisioned to flow over a high bandwidth point-to-point link). This linking can be configured to protect administrative and signaling traffic (which may be inclusive of downloads), while simultaneously conducting high-speed data communications over the point-to-point pathway.


In the particular example of FIG. 3, secure signaling and administrative data is depicted as propagating between home router 48 and video services 56. A number of VPN ports are also illustrated in FIG. 3. The ports can be associated with any appropriate security protocol (e.g., associated with IPsec, secure socket layer (SSL), etc.). Additionally, media data can propagate between network 52 and home router 48, where RTP ports are being provisioned for this particular exchange involving a counterparty endpoint 50. Semantically, multiple pathways can be used to carry the traffic associated with system 10. In contrast to other applications that bundle their traffic (i.e., provide a single hole into the firewall), certain implementations of system 10 can employ two different pathways in the firewall: two pathways for carrying two different types of data.


The objects within video services 56 are network elements that route or that switch (or that cooperate with each other in order to route or switch) traffic and/or packets in a network environment. As used herein in this Specification, the term ‘network element’ is meant to encompass servers, switches, routers, gateways, bridges, loadbalancers, firewalls, inline service nodes, proxies, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment. This network element may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange (reception and/or transmission) of data or information.


Note that videomail server 60 may share (or coordinate) certain processing operations between any of the elements of video services 56. Using a similar rationale, their respective memory elements may store, maintain, and/or update data in any number of possible manners. In one example implementation, videomail server 60 can include software to achieve the video processing applications involving the user, as described herein. In other embodiments, these features may be provided externally to any of the aforementioned elements, or included in some other network element to achieve this intended functionality. Alternatively, several elements may include software (or reciprocating software) that can coordinate in order to achieve the operations, as outlined herein. In still other embodiments, any of the devices of the FIGURES may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate these switching operations.


In certain instances, videomail server 60 can be provisioned in a different location, or some of its functionality can be provided directly within the videoconferencing platform (e.g., within console element 20, camera element 14, display 12, etc.). This could be the case in scenarios in which console element 20 has been provisioned with increased intelligence to perform similar tasks, or to manage certain repositories of data for the benefit of the individual user.



FIG. 4 is a simplified block diagram illustrating additional details associated with call signaling and call media. In this particular instance, the call media links are provided in broken lines, whereas the call signaling links are provided as solid lines. More specifically, call signaling propagates from a set of endpoints 74a-b over a broadband network, where these links have a suitable connection at video services 56. These links are labeled 70a-b in the example of FIG. 4. Video services 56 include many of the services identified previously with respect to FIG. 3. Call media between endpoints 74a-b propagate over the broadband network, where these links are identified as 72a-b. Endpoints 74a-b are simply videoconferencing entities that are leveraging the equipment of system 10.



FIG. 5 is a simplified schematic diagram illustrating a system 100 for providing video sessions in accordance with another embodiment of the present disclosure. In this particular implementation, system 100 is representative of an architecture for facilitating a video conference over a network utilizing advanced skip-coding protocols (or any suitable variation thereof). System 100 includes two distinct communication systems that are represented as endpoints 112 and 113, which are provisioned in different geographic locations. Endpoint 112 may include a display 114, a plurality of speakers 121, a camera element 116, and a video processing unit 117. Note that the equipment and infrastructure of FIG. 5 are similar to that of FIG. 1, where FIG. 5 (and the ensuing FIGURES) can be used to discuss enhanced video processing operations (e.g., face detection, background registration, advanced skip coding, etc.).


Endpoint 113 may include a display 124, a plurality of speakers 123, a camera element 126, and a video processing unit 127. Additionally, endpoints 112 and 113 may be coupled to console elements 120 and 122, respectively, where the endpoints are connected to each other via a network 30. Each video processing unit 117, 127 may further include a respective processor 130a, 130b, a respective memory element 132a, 132b, a respective video encoder 134a, 134b, and a respective advanced skip coding module 136a, 136b. The function and operation of these elements are discussed in detail below. In the context of a conference involving a participant 119 (present at endpoint 112) and a participant 129 (present at endpoint 113), packet information may propagate over network 30 during the conference. As each participant 119 and 129 communicates, camera elements 116, 126 suitably capture video images as data. Each video processing unit 117, 127 evaluates this video data and then determines which data to send to the other location for rendering on displays 114, 124.


Note that for purposes of illustrating certain example techniques of system 100, it is important to understand the data issues present in many video applications. Video processing units can be configured to skip macroblocks of a video signal during encoding of a video sequence. This means that no coded data would be transmitted for these macroblocks. This can include codecs (e.g., MPEG-4, H.263, etc.) for which bandwidth and network congestion present significant concerns. Additionally, for mobile video-telephony and for computer-based conferencing, processing resources are at a premium. This includes personal computer (PC) applications, as well as more robust systems for video conferencing (e.g., Telepresence).


Coding performance is often constrained by computational complexity. Computational complexity can be reduced by not processing macroblocks of video data (e.g., prior to encoding) when they are expected to be skipped. Skipping macroblocks saves significant computational resources because the subsequent processing of the macroblock (e.g., motion estimation, transform and quantization, entropy encoding, etc.) can be avoided. Some software video applications control processor utilization by dropping frames during encoding activities: often resulting in a jerky motion in the decoded video sequence. Distortion is also prevalent when macroblocks are haphazardly (or incorrectly) skipped. It is important to reduce computational complexity and to manage bandwidth, while simultaneously delivering a video signal that is adequate for the participating viewer (i.e., the video signal has no discernible deterioration, distortion, etc.).


In accordance with the teachings of the present disclosure, system 100 employs an advanced skip coding (ASC) methodology that effectively addresses the aforementioned issues. In particular, the protocol can include three significant components that can collectively address problems presented by temporal video noise. First, system 100 can efficiently represent the variation statistics of the temporally preceding frames. Second, system 100 can identify the most likely “skip-able” values of each picture element. Third, system 100 can determine whether the current encoded picture element should be coded as skip, in conjunction with being provided with the reference picture. Each of these components is further discussed in detail below.
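
A compact sketch of how these three components could chain together for a single frame follows. The thresholds, names, and structure here are illustrative assumptions rather than the patented implementation; frame dimensions are assumed to be multiples of the macroblock size.

```python
import numpy as np

def process_frame(frame, prev, skip_ref, hits, mb=16, noise=2, register=24):
    """One-frame pass through the three ASC components (illustrative):
    (1) accumulate variation statistics against the preceding frame,
    (2) update the most likely skip-able value of each picture element,
    (3) decide per macroblock whether it can be coded as skip."""
    cur = frame.astype(np.int16)
    diff = np.abs(cur - prev.astype(np.int16))
    # (1) Variation statistics: count consecutive within-noise observations.
    hits = np.where(diff <= noise, hits + 1, 0)
    # (2) Pixels stable long enough define the skip-able reference value.
    settled = hits >= register
    skip_ref[settled] = frame[settled]
    # (3) Skip a macroblock when every pixel sits within the noise band
    # around its skip-reference value.
    within = np.abs(cur - skip_ref.astype(np.int16)) <= noise
    h, w = frame.shape
    skip_mb = within.reshape(h // mb, mb, w // mb, mb).all(axis=(1, 3))
    return skip_mb, skip_ref, hits
```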


Operating together, these coding components can be configured to determine which new data should be encoded and sent to the other counterparty endpoint and, further, which data (having already been captured and encoded) can be used as reference data. By minimizing the amount of new data that is to be encoded, the architecture can minimize processing power and bandwidth consumption in the network between endpoints 112, 113. Before detailing additional operations associated with the present disclosure, some preliminary information is provided about the corresponding infrastructure of FIG. 5.


Each video processing unit 117, 127 is configured to evaluate video data and make determinations as to which data should be rendered, coded, skipped, manipulated, analyzed, or otherwise processed within system 100. As used herein in this Specification, the term ‘video element’ is meant to encompass any suitable unit, module, software, hardware, server, program, application, application program interface (API), proxy, processor, field programmable gate array (FPGA), erasable programmable read only memory (EPROM), electrically erasable programmable ROM (EEPROM), application specific integrated circuit (ASIC), digital signal processor (DSP), or any other suitable device, component, element, or object configured to process video data. This video element may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange (reception and/or transmission) of data or information. The video element may be included in a camera element, or a console element (shown in FIGS. 1 and 5), or distributed across both of these devices.


Note that each video processing unit 117, 127 may also share (or coordinate) certain processing operations (e.g., with respective endpoints 112, 113). Using a similar rationale, their respective memory elements may store, maintain, and/or update data in any number of possible manners. Additionally, because some of these video elements can be readily combined into a single unit, device, or server (or certain aspects of these elements can be provided within each other), some of the illustrated processors may be removed, or otherwise consolidated such that a single processor and/or a single memory location could be responsible for certain activities associated with skip coding controls. In a general sense, the arrangement depicted in FIG. 5 may be more logical in its representations, whereas a physical architecture may include various permutations/combinations/hybrids of these elements.


In one example implementation, video processing units 117, 127 include software (e.g., as part of advanced skip coding modules 136a-b and video encoders 134a-b respectively, or a face preferred coding module 135 shown in FIG. 6) to achieve the intelligent video enhancement operations, as outlined herein in this document. In other embodiments, this feature may be provided externally to any of the aforementioned elements, or included in some other video element or endpoint (either of which may be proprietary) to achieve this intended functionality. Alternatively, several elements may include software (or reciprocating software) that can coordinate in order to achieve the operations, as outlined herein. In still other embodiments, any of the devices of the illustrated FIGURES may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate these skip coding management operations, as disclosed herein.


Integrated video processing unit 117 is configured to receive information from camera element 116 via some connection, which may attach to an integrated device (e.g., a set-top box, a proprietary box, etc.) that can sit atop a display. Video processing unit 117 may also be configured to control compression activities, or additional processing associated with data received from the cameras. Alternatively, a physically separate device can perform this additional processing before image data is sent to its next intended destination. Video processing unit 117 can also be configured to store, aggregate, process, export, and/or otherwise maintain image data and logs in any appropriate format, where these activities can involve processor 130a and memory element 132a. In certain example implementations, video processing units 117 and 127 are part of set-top box configurations and/or camera elements 116 and 126. In other instances, video processing units 117, 127 are part of a server (e.g., console elements 120 and 122). In yet other examples, video processing units 117, 127 are network elements that facilitate a data flow with their respective counterparty. This includes proprietary elements equally, which can be provisioned with particular features to satisfy a unique scenario or a distinct environment.


Video processing unit 117 may interface with camera element 116 through a wireless connection, or via one or more cables or wires that allow for the propagation of signals between these two elements. These devices can also receive signals from an intermediary device, a remote control, etc., where the signals may leverage infrared, Bluetooth, WiFi, electromagnetic waves generally, or any other suitable transmission protocol for communicating data (e.g., potentially over a network) from one element to another. Virtually any control path can be leveraged in order to deliver information between video processing unit 117 and camera element 116. Transmissions between these two sets of devices can be bidirectional in certain embodiments such that the devices can interact with each other (e.g., dynamically, real-time, etc.). This would allow the devices to acknowledge transmissions from each other and offer feedback, where appropriate. Any of these devices can be consolidated with each other, or operate independently based on particular configuration needs. For example, a single box may encompass audio and video reception capabilities (e.g., a set-top box that includes video processing unit 117, along with camera and microphone components for capturing video and audio data).


Turning to FIG. 6, FIG. 6 is a simplified block diagram illustrating an example flow of data within a single endpoint in accordance with one embodiment of the present disclosure. In this particular implementation, camera element 116 and video processing unit 117 are being depicted. Video processing unit 117 includes a change test 142, a threshold determination 144, a histogram update 146, a reference registration 148, and a reference 150. Video processing unit 117 may also include the aforementioned video encoder 134a, advanced skip coding module 136a, and a face preferred coding module 135. Note that the dashed lines of FIG. 6 indicate paths that are optional and, therefore, may be skipped.


In operational terms, camera element 116 can capture the input video associated with participant 119. This data can flow from camera element 116 to video processing unit 117. The data flow can be directed to video encoder 134a (which can include advanced skip coding module 136a) and subsequently propagate to threshold determination 144 and to change test 142. The data can be analyzed as a series of still images or frames, which are temporally displaced from each other. These images are analyzed by threshold determination 144 and change test 142, as detailed below.


Referring now to FIG. 7, FIG. 7 is a simplified diagram showing a multi-stage histogram in accordance with one embodiment of the present disclosure. This particular activity can take place within threshold determination 144 and change test 142. In this embodiment, the data is analyzed in multi-stage histograms to represent the variation statistics of every two consecutive frames. It should be noted that this concept is based on the inherent knowledge that typical videoconferencing scenes (e.g., Telepresence scenes) do not change frequently and/or significantly. Each histogram can record the variation statistics of one picture element (i.e., a video image). A picture element can be considered to be one pixel in the original image, or a resolution-reduced (downscaled) image. Pixels can be combined to form macroblocks of the image, and the image can be partitioned into a grid of 16×16-pixel macroblocks in this particular example. Other groupings can readily be used, where such groupings or histogram configurations may be based on particular needs.


In this embodiment, the multi-stage histogram has three stages 160, 162, 164. Each stage contains 8 bins in this example. First stage histogram 160 divides the 256 luminance levels into 8 bins: each bin corresponding to 32 luminance levels (256/8=32). Second stage histogram 162 corresponds to the best two adjacent bins of the first-stage histogram and, further, divides the corresponding 64 luminance levels into 8 bins (i.e., 8 levels each). Similarly, third stage histogram 164 divides the best two adjacent bins of the second stage histogram 162 into 8 bins: each corresponding to 2 luminance levels (16/8=2). This breakdown of data occurs for both change test 142 and threshold determination 144.
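By way of a non-limiting illustration (and not as part of the original disclosure), the following Python sketch models one such three-stage histogram for a single picture element. The class name, the saturation height, and the rule for selecting the "best two adjacent bins" (here, the adjacent pair with the most accumulated counts) are illustrative assumptions.

```python
import numpy as np

BINS = 8

class MultiStageHistogram:
    """Three-stage quantized histogram for one picture element (a sketch;
    bin counts and widths follow the 8-bins-per-stage example above)."""
    def __init__(self):
        self.counts = np.zeros((3, BINS), dtype=np.uint16)
        self.lo = [0, 0, 0]        # lower luminance bound of each stage
        self.width = [32, 8, 2]    # 256/8, 64/8, and 16/8 levels per bin

    def record_unchanged(self, luma):
        """Increment the bin containing `luma` at every stage that covers
        it (called when the change test finds the element unchanged)."""
        for s in range(3):
            b = (luma - self.lo[s]) // self.width[s]
            if 0 <= b < BINS:
                self.counts[s, b] += 1

    def refine(self):
        """Re-anchor each later stage on the 'best two adjacent bins' of
        the stage before it (the adjacent pair with the most counts)."""
        for s in (0, 1):
            pair = int(np.argmax(self.counts[s, :-1] + self.counts[s, 1:]))
            self.lo[s + 1] = self.lo[s] + pair * self.width[s]

    def saturated(self, max_height=64):
        """True once a third-stage bin reaches its maximum height, which
        marks the element as 'to be registered'."""
        return bool(self.counts[2].max() >= max_height)
```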


Referring again to FIG. 6, within threshold determination 144, the images can be analyzed in accordance with the estimated temporal noise level. This level is estimated by evaluating the current environment: more specifically, by evaluating light levels, such as the amount of background light. Once the temporal noise level is suitably determined, a threshold determination can be made, where this data is sent to change test 142. For every two consecutive frames, a change test can be conducted for each picture element. The test can compare each image to the previous image, using the threshold from threshold determination 144. If a picture element is detected as unchanged from the previous frame, the corresponding bins of the histogram can be incremented by 1. When a third stage bin in a histogram reaches its maximum height, the corresponding picture element is marked as "to be registered" for the process detailed below.
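A minimal sketch of this per-frame change test follows, assuming the MultiStageHistogram sketch above, a noise-adaptive threshold supplied by threshold determination 144, and a mapping `hists` from each element's (row, column) position to its histogram; all names are illustrative.

```python
import numpy as np

def change_test(prev, curr, hists, threshold):
    """Compare two consecutive (downscaled) luminance frames element by
    element; unchanged elements feed their histograms, and a saturated
    histogram marks its element 'to be registered'."""
    to_register = []
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    for idx in zip(*np.nonzero(diff <= threshold)):
        h = hists[idx]                   # one histogram per element
        h.record_unchanged(int(curr[idx]))
        if h.saturated():
            to_register.append(idx)      # value goes to registration
    return to_register
```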


Note that with the ability to look over a much longer history than simply two frames, the multi-stage histograms described above can offer a memory-efficient method to identify the noise-free values of the "most stationary" pixels in the video. When a picture element is marked "to be registered", the data can be sent to reference registration 148. A value of the corresponding pixel can be registered to a reference buffer. The bins of histograms 160, 162, 164 are then reset, and the entire process can be repeated.


Any suitable number of reference buffers may be used. By employing a single buffer, the registered reference can be systematically replaced by a newer value. Alternatively, by employing multiple buffers, more than one reference can be stored. A newer value that differs from the old values may be registered to a new buffer. These values can be determined in reference registration 148, and subsequently sent to video encoder 134a, where they are stored in an appropriate storage location (e.g., reference 150) for use during the skip coding decision process.
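The following hypothetical helper illustrates one way such multi-buffer registration could behave; the matching tolerance, buffer count, and eviction rule are assumptions, not details fixed by this disclosure.

```python
def register_reference(buffers, idx, value, match_tol=4, n_buffers=2):
    """Register a newly stabilized value for element `idx`. A value close
    to an existing reference refreshes it; a genuinely new value occupies
    a free slot, or evicts the oldest one when all slots are taken."""
    slots = buffers.setdefault(idx, [])
    for i, old in enumerate(slots):
        if abs(old - value) <= match_tol:
            slots[i] = value             # refresh the matching reference
            return
    if len(slots) >= n_buffers:
        slots.pop(0)                     # evict the oldest reference
    slots.append(value)                  # register into a new buffer
```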


Referring now to FIG. 8, FIG. 8 is a simplified schematic diagram illustrating an example decision tree 170 for making a skip coding determination for a section of input video. Decision tree 170 shows the logic process that occurs within advanced skip coding module 136a of video encoder 134a in this particular implementation. Advanced skip coding module 136a can receive data from three sources: a prediction reference 172 from video encoder 134a (which is a copy of an encoded preceding image), a current image 174 from camera element 116, and a skip reference 176 from a storage element (e.g., reference 150) that can comprise pixels registered from reference registration 148. Prediction reference 172 and current image 174 can be compared in order to create a frame difference 182. Current image 174 and skip reference 176 can be compared to create a first reference difference 184. Prediction reference 172 and skip reference 176 can be compared to create a second reference difference 186.


When coding a video frame, skip reference 176 can be used to aid skip-coding decisions. In this embodiment, a single reference buffer is employed; however, multiple reference buffers can readily be employed, as well. In this embodiment of FIG. 8, a video block is considered for skip coding when motion search in its proximate neighborhood favors a direct prediction (i.e., zero motion). In such cases, a metric for frame difference 182 is evaluated against two strict thresholds. Depending on the noise level, these thresholds can be selected such that the video block can be coded as skip with confidence, provided the frame difference metric is below the lower threshold at a decision block 188. Alternatively, the video block can be coded as non-skip with confidence, if the frame difference metric is above the upper threshold at a decision block 190. For metrics that fall between these values, the first reference difference 184 (between current image 174 and skip reference 176) is further evaluated at a decision block 192. Subsequently, the second reference difference 186 (between the reference picture used for inter-frame prediction and skip reference 176) is evaluated at a decision block 194, against another properly defined threshold. If for both comparisons the metric is below the threshold, the video block can be coded as a skip candidate.
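The decision tree can be summarized in code. The sketch below assumes a sum-of-absolute-differences (SAD) block metric and caller-supplied thresholds (t_low, t_high, t_ref); the disclosure does not mandate a particular metric or particular threshold values.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks (an assumed
    metric; the disclosure does not fix one)."""
    return int(np.abs(a.astype(np.int16) - b.astype(np.int16)).sum())

def skip_decision(current, pred_ref, skip_ref, zero_motion,
                  t_low, t_high, t_ref):
    if not zero_motion:             # motion search did not favor
        return "non-skip"           # direct (zero-motion) prediction
    fd = sad(current, pred_ref)     # frame difference 182
    if fd < t_low:                  # decision block 188
        return "skip"
    if fd > t_high:                 # decision block 190
        return "non-skip"
    # In between: consult the skip reference (decision blocks 192, 194).
    if sad(current, skip_ref) < t_ref and sad(pred_ref, skip_ref) < t_ref:
        return "skip-candidate"
    return "non-skip"
```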


Referring now to FIG. 9, FIG. 9 is a simplified flow diagram illustrating one potential operation associated with system 200. The flow may begin at step 210, where a video signal is captured as a series of temporally displaced images. At step 212, the raw image data may be sent to a suitable video processing unit. Step 214 can include analyzing the data for variation statistics. At step 216, reference frames can be registered and stored for subsequent comparison. At the start of the video capture, the first images can form the first reference frames.


The skip coding decision can be made at step 218, and the non-skipped frames can be encoded at step 220. The newly encoded data, along with the reference-encoded data from skipped portions, can be sent to the second location via a network in step 222. This data is then displayed as video imagery on the display at the second location, as shown in step 224. In some embodiments, a similar process occurs at the second location (i.e., the counterparty endpoint), where video data is also being sent from the second location to the first.
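For orientation only, the earlier sketches could be strung together per captured frame roughly as follows; `encode` and `send` are placeholders (assumptions of this sketch) for the video encoder and the network transport.

```python
def process_frame(prev, curr, hists, buffers, encode, send, threshold=6):
    """One pass of the FIG. 9 flow (steps 214-222), reusing the earlier
    change_test and register_reference sketches."""
    for idx in change_test(prev, curr, hists, threshold):   # steps 214-216
        register_reference(buffers, idx, int(curr[idx]))
    send(encode(curr, buffers))                             # steps 218-222
```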


Turning to another aspect of the video processing capabilities of the present disclosure, face detection activities, background/foreground optimizations, etc. may be accommodated by the architectures of the present disclosure. (Note that the entire content of U.S. Ser. No. 12/164,292 entitled Combined Face Detection and Background Registration (filed Jun. 30, 2008) is hereby incorporated by reference into this disclosure.) In streaming video systems such as videoconferencing, the video image can be regarded as a composition of a background image and a foreground image. The background image can include various stationary objects, while the foreground image can include objects that are moving. Particularly in videoconferencing, the foreground image can refer to the people in the actual conference, and the background image can refer to the video image that would otherwise be captured by the camera if there were no participants in front of the camera.


In accordance with certain aspects of the present disclosure, the construction of the background reference picture can be based on change detection. With a stationary background and with a substantially constant illumination, the change detection algorithm (e.g., provided within a given camera element's encoder) addresses the camera and quantization noise. A threshold technique, adaptive to noise statistics, can be configured to test if a picture element (e.g., inclusive of a pixel or a small block of pixels) is moving or stationary. This can be based on the differences between two consecutive frames.
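One plausible (but assumed) realization of such a noise-adaptive test is sketched below: the temporal noise level is estimated from the frame difference via a median absolute deviation, and a picture element is declared moving when its delta exceeds a multiple of that estimate. Neither the estimator nor the constant k is specified by the disclosure.

```python
import numpy as np

def estimate_sigma(prev, curr):
    """Crude temporal-noise estimate from two consecutive frames: median
    absolute deviation of the frame difference, scaled by the Gaussian
    MAD-to-sigma constant 0.6745."""
    d = curr.astype(np.float32) - prev.astype(np.float32)
    return float(np.median(np.abs(d - np.median(d)))) / 0.6745

def is_moving(block_prev, block_curr, sigma, k=3.0):
    """Noise-adaptive change test for one picture element (a pixel or a
    small block): 'moving' when the mean frame delta exceeds k*sigma."""
    delta = np.abs(block_curr.astype(np.float32) -
                   block_prev.astype(np.float32)).mean()
    return delta > k * sigma
```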


In more general terms, example embodiments of the present disclosure can include camera elements 14, 116, 126 having dynamic intrinsic properties (e.g., auto exposure, auto white balance, and auto focus) and extrinsic properties (unpredictable lighting, etc.). Operationally, a method being performed by camera elements 14, 116, 126 can include analyzing the camera output and identifying stationary parts in the images from temporal variations, which may be a combination of a sensor's temporal noise and variation due to the camera's adjustment of its intrinsic properties (e.g., focus).


Such operations may further include utilizing the output to perform advanced skip coding on identified stationary/background regions for incoming frames. Additionally, an operation can be performed that similarly utilizes the output to locate head contours (e.g., faces) in segmented foreground regions. This method may be done by simply processing the segmented foreground, by combining the segmented foreground with frame-to-frame temporal difference (i.e., motion), or by further combining texture from the color space. This operation can further include optimizing the coding of the foreground regions by preferentially spending bits on located face areas. In a different operational aspect, an operation can be performed in camera elements 14, 116, 126 to take the output (the faces) and perform intrinsic adjustments with an emphasized measurement from those regions.


Turning to additional details relating to these activities, change detection results of video data can be accumulated along a temporal axis. A histogram of the averaged luminance value (Y) can be constructed for each picture element in the plurality of picture elements. Each bin of a histogram can correspond to a level between 0 and 255. When a picture element is identified as stationary for a predefined number of consecutive frames (L), it can be marked as a static element, and the associated bin in its histogram is incremented by one. Additionally, the associated chrominance (U and V) values can be averaged and stored for each bin. This histogram construction process can be performed repeatedly for every frame. In an embodiment, a picture element is registered into the background buffer when one bin in its histogram reaches a predefined value.
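A minimal sketch of this per-element background histogram follows; the values of L and the registration count are illustrative, as is the use of running means for the chrominance averages.

```python
import numpy as np

L_FRAMES, REG_COUNT = 4, 30   # illustrative values of L and the
                              # predefined registration count

class BackgroundHistogram:
    """Per-element luminance histogram with per-bin chroma averages."""
    def __init__(self):
        self.count = np.zeros(256, dtype=np.uint16)
        self.u_avg = np.zeros(256, dtype=np.float32)
        self.v_avg = np.zeros(256, dtype=np.float32)
        self.still_run = 0            # consecutive stationary frames

    def update(self, stationary, y, u, v):
        """Return (Y, U, V) when the element should be registered into
        the background buffer; otherwise return None."""
        self.still_run = self.still_run + 1 if stationary else 0
        if self.still_run < L_FRAMES:       # not yet a static element
            return None
        self.count[y] += 1
        n = int(self.count[y])
        self.u_avg[y] += (u - self.u_avg[y]) / n   # running chroma means
        self.v_avg[y] += (v - self.v_avg[y]) / n
        if n >= REG_COUNT:
            return (y, float(self.u_avg[y]), float(self.v_avg[y]))
        return None
```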


After performing the background registration for a predefined number of frames, an initial registration of the background can be used for face detection. A background registration mask is maintained: indicative of the availability of background information of a picture element. For each input frame, a difference image can be produced by subtracting the background from the frame and then filtering the noise. Where a complete background picture is available, or where the background difference aligns with the unregistered portion of the image, an object mask is derived from the background difference image. Alternatively, the background difference image can be combined with noise-filtered frame differences and with the background registration mask to determine the foreground object, and to generate the object mask. In one example embodiment, when the difference between the present frame and the previous frame is minor, the most recent significant frame difference image is used.
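A simplified rendering of this object-mask logic is sketched below; the 3×3 median filter used as the noise filter and the difference threshold are illustrative choices, not requirements of the disclosure. Here `reg_mask` is the background registration mask (True where background is registered).

```python
import numpy as np
from scipy.ndimage import median_filter

def object_mask(frame, background, prev_frame, reg_mask, thr=12):
    """Foreground (object) mask from the background difference, falling
    back on the frame difference where no background is registered."""
    bg_diff = median_filter(
        np.abs(frame.astype(np.int16) - background.astype(np.int16)), 3) > thr
    if reg_mask.all():                 # complete background available
        return bg_diff
    fr_diff = median_filter(
        np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)), 3) > thr
    # Trust the background difference where registered; otherwise fall
    # back on the (noise-filtered) frame difference.
    return np.where(reg_mask, bg_diff, fr_diff)
```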


In certain example implementations, the object mask can be applied to face detection with complex backgrounds in order to limit the edge-based and color-based face detection activities to the object mask (as opposed to the entire frame). A complex background can refer to a background picture with non-uniform color (e.g., containing texture with variable luminance values), resulting in numerous edges when performing edge detection. A simple background refers to a background with clean and uniform textures and colors: resulting in fewer edges when performing edge detection.


In one example operation, the detected head and torso contour can be used in a background registration to adjust the histograms. For example, when a picture element is within the detected face contour, the statistics of the corresponding bin in its histogram are reset to zero. To account for noise, the statistics of neighboring bins can be reduced to a fraction of their previous values: in proportion to their distances from the actual bin. This method can be performed to reduce false registrations of still face and torso data as background. Alternatively, when a picture element is not within the detected contour, it generally is part of the uncovered background. By adjusting the histograms to reflect such probabilities, background that is temporarily revealed by moving face and torso objects can be quickly registered.


Semantically, in order to minimize false registrations of a still head and torso as being part of the background, the detected head and shoulder contour is fed back to the background registration (to adjust the histograms). For instance, if an element is within the detected contour, the corresponding bin in its histogram is reset to zero. In one embodiment, when portions of the face and torso area are being falsely registered as part of the background, the background registration may still be used until a new registration with adjusted histograms is available. Alternatively, the background registration may be cleared. To account for noise, the neighboring bins are reduced to a fraction of their previous values: dependent on their distances from the actual bin, where the fraction follows a function that is adaptive to the noise variance.


In terms of particular processing alternatives, an algorithm using multi-stage, quantized histograms can be leveraged to reduce the memory usage from 256 bytes per pixel to approximately 1.5 bytes per pixel. A three-stage histogram can be constructed for each 4×4 block: three stages of eight one-byte bins occupy 24 bytes per block, or 24/16 = 1.5 bytes per pixel. To reduce the noise associated with the background, the background registration may be processed for a period of time, where the results are averaged with a new value if both are within a threshold. When the averaged results and a new value are not within the threshold, a previous value could replace the new value.


To adjust the histograms, when a picture element is within the face and body contour, the statistics of its corresponding bin in the first-stage histogram are left unchanged (as opposed to being increased by one). Subsequently, the statistics of its corresponding bin in the second-stage histogram can be halved, and its corresponding bin in the third-stage histogram is cleared. Additionally, depending on the noise variance, the neighboring one or two bins of the third-stage histogram may be reduced to one-quarter or one-half of their previous values.
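A sketch of this contour-driven adjustment for one element follows, reusing the (3, 8) counts array from the earlier multi-stage histogram sketch; the mapping from noise variance to the neighbor reach and reduction factor is an assumption.

```python
def adjust_for_face(counts, bins, high_noise=False):
    """Adjust one element's three-stage histogram when the element falls
    within the detected face/body contour. `counts` is the (3, 8) array
    from the MultiStageHistogram sketch; `bins` gives the bin containing
    the element's value at each stage."""
    # Stage-1 bin: left unchanged (rather than incremented by one).
    counts[1, bins[1]] //= 2                 # stage-2 bin halved
    counts[2, bins[2]] = 0                   # stage-3 bin cleared
    # Assumed mapping: higher noise reaches two neighbors and quarters
    # them; lower noise reaches one neighbor and halves it.
    reach, factor = (2, 4) if high_noise else (1, 2)
    for nb in range(max(0, bins[2] - reach), min(8, bins[2] + reach + 1)):
        if nb != bins[2]:
            counts[2, nb] //= factor         # neighbors quartered/halved
```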


In operation of an example scenario, for each input frame, the architecture generates several results: an object (foreground) mask, a head and torso detection result, and an updated background picture. The latter two results can be fed back into the encoder to improve the coding of the subsequent frames. This combined face detection and background registration architecture has several benefits. First, it improves the face detection with complex background by limiting the color and edge-based algorithm to the object mask. Second, it improves the background construction by forcing the head and the torso to be a non-background area, which avoids false registration for those picture elements as background.


The input to both background registration and face detection can be the original frame, in which case the architecture is independent of a video encoder. Alternatively, the input for face detection can be the original frame, while the input to the background registration can be the reconstructed output of a video encoder. The encoder can use the results of face detection and background registration to improve the coding of face areas (and the uncovered background). As a further alternative, the input to both the background registration and the face detection can be the reconstructed frame. This structure allows the encoder to perform face quantization ramping and uncovered background coding. The entire process can be replicated at both the encoder and the decoder, where the decoder can construct and update the background reference picture in synchronization with the encoder, which saves the bandwidth overhead of transmitting the constructed background picture.


The construction of the background reference picture can be based on a change detection. While a stationary background and constant illumination are assumed, the change detection algorithm (e.g., provisioned in a camera element, a console element, etc.) can effectively address the camera and quantization noise. A thresholding technique that is adaptive to noise statistics can be deployed to test if a picture element (a pixel or a small block of pixels) is moving or stationary, using the difference of two consecutive frames.


During startup, detection can begin with an initial registration of the background (after running background registration for a certain number of frames), which may have a certain portion of unavailable background (i.e., not yet registered). A background registration mask can be maintained to indicate whether the background information of a picture element is available. For each input frame, a difference image is produced by subtracting the background from the frame and then filtering noise. If a complete background picture is available, or the background difference aligns with the unregistered portion of the image, an object mask can be derived directly from the background difference image. Otherwise, the background difference image can be combined with frame differences (also noise filtered) and the background registration mask to determine the foreground object and generate the object mask. In the case that the difference between the present and the previous frame is negligible, the most recent significant frame difference image can be used. Finally, the object mask can be applied to face detection to address the complex background, which is achieved by limiting the edge- and color-based face detection algorithm to the object mask as opposed to the entire frame.


In a particular implementation, a combined face detection and background registration architecture can be integrated into a videoconferencing video encoder (e.g., such as that depicted in FIG. 5). The improved face detection result and the constructed background picture can be used in a variety of applications, such as face quantization ramping and uncovered background prediction, as detailed herein.


Note that in certain example implementations, the video processing functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an application specific integrated circuit [ASIC], digital signal processor [DSP] instructions, software [potentially inclusive of object code and source code] to be executed by a processor, or any other similar machine, etc.). In some of these instances, a memory element [as shown in FIG. 5] can store data used for the video enhancement operations described herein (e.g., skip coding, face detection, background registration, etc.). This includes the memory element being able to store software, logic, code, or processor instructions that are executed to carry out the activities described in this Specification. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, the processor [as shown in FIG. 5] could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the video enhancement activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array [FPGA], an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.


Note that the equipment of FIG. 5 may share (or coordinate) certain processing operations. Using a similar rationale, their respective memory elements may store, maintain, and/or update data in any number of possible manners. In a general sense, the arrangements depicted in the preceding FIGURES may be more logical in their representations, whereas a physical architecture may include various permutations/combinations/hybrids of these elements. In one example implementation, camera elements 116, 126 include software (e.g., as part of the modules of FIG. 5) to achieve the video enhancement operations, as outlined herein in this document. In other embodiments, these features may be provided externally to any of the aforementioned elements (e.g., included in console elements 120, 122), or included in some other device to achieve these functionalities. Alternatively, several elements may include software (or reciprocating software) that can coordinate in order to achieve the operations, as outlined herein. In still other embodiments, any of the devices of the FIGURES may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate these video enhancement operations.


All of the aforementioned devices may further keep information in any suitable memory element (e.g., random access memory (RAM), ROM, EPROM, EEPROM, ASIC, etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein (e.g., database, table, key, queue, etc.) should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’ Console elements 20, 120, 122 and/or camera elements 14, 116, 126 can also include suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment.


Note that with the examples provided herein, interaction may be described in terms of two, three, or four elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of elements. It should be appreciated that systems 10, 100 (and their teachings) are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of systems 10, 100 as potentially applied to a myriad of other architectures.


It is also important to note that the steps in the preceding flow diagrams illustrate only some of the possible signaling scenarios and patterns that may be executed by, or within, systems 10, 100. Some of these steps may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the present disclosure. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by systems 10, 100 in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.


Although the present disclosure has been described in detail with reference to particular arrangements and configurations, these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. For example, although the present disclosure has been described with reference to particular communication exchanges involving certain server components, systems 10, 100 may be applicable to other protocols and arrangements (e.g., those involving any type of videoconferencing scenarios). Additionally, although camera element 14 has been described as being mounted in a particular fashion, camera element 14 could be mounted in any suitable manner in order to suitably capture video images. Other configurations could include suitable wall mountings, aisle mountings, furniture mountings, cabinet mountings, upright (standing) assemblies, etc., or arrangements in which cameras would be appropriately spaced or positioned to perform their functions.


Furthermore, the users described herein are simply individuals within the proximity, or within the field of view, of display 12, 114, 124. Audience members can be persons engaged in a video conference involving other individuals at a remote site. Audience members can be associated with corporate scenarios, consumer scenarios, residential scenarios, etc., or with any other suitable environment to which systems 10, 100 may be applicable.


Moreover, although the previous discussions have focused on videoconferencing associated with particular types of endpoints, handheld devices that employ video applications could readily adopt the teachings of the present disclosure. For example, iPhones, iPads, Google Droids, personal computing applications (i.e., desktop video solutions), etc. can readily adopt and use the skip coding, face-detection, and enhanced video processing operations detailed above. Any communication system or device that encodes video data would be amenable to the skip coding features discussed herein. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.


Additionally, systems 10, 100 can involve different types of counterparties, where there can be asymmetry in the technologies being employed by the individuals. For example, one user may be using a laptop, while another user is using the architecture of systems 10, 100. Similarly, a smartphone could be used as one individual endpoint, while another user continues to use the architecture of systems 10, 100. Also, Webcams can readily be used in conjunction with systems 10, 100. Along similar lines, multiparty calls can readily be achieved using the teachings of the present disclosure. Moreover, although systems 10, 100 have been illustrated with reference to particular elements and operations that facilitate the communication process, these elements and operations may be replaced by any suitable architecture or process that achieves the intended functionality of systems 10, 100.

Claims
  • 1. A method, comprising: receiving a video input from a camera element; using change detection statistics to identify background image data; using the background image data as a temporal reference to determine foreground image data of a particular video frame within the video input; using a selected foreground image for a background registration of a subsequent video frame; providing at least a portion of the subsequent video frame to a next destination; and generating a plurality of histograms to represent variation statistics between a current input video frame and a temporally preceding video frame.
  • 2. The method of claim 1, further comprising: identifying values of pixels from noise within the video input; creating a skip-reference video image associated with the identified pixel values; comparing a portion of a current video image to the skip-reference video image; and determining a macroblock associated with the current video image to be skipped before an encoding operation occurs.
  • 3. The method of claim 2, further comprising: evaluating video data from the video input to determine whether a particular element within a plurality of elements in the video data is part of a stationary image.
  • 4. The method of claim 3, wherein portions of stationary images are skipped before certain encoding operations occur.
  • 5. The method of claim 3, wherein the foreground image data further comprises a face and a torso image of a participant in a video session.
  • 6. The method of claim 3, further comprising: encoding non-skipped macroblocks associated with the current video image based on a noise level being above a designated noise threshold.
  • 7. The method of claim 3, further comprising: aggregating non-skipped macroblocks and the skipped macroblock associated with the current video image; and communicating the macroblocks over a network connection to a console element associated with a video session.
  • 8. The method of claim 1, wherein each of the histograms includes differing levels of luminance, and wherein if a selected one of the histograms reaches a certain level of luminance, a corresponding pixel of an associated video image is marked to be registered to a reference buffer.
  • 9. Logic encoded in one or more non-transitory tangible media that includes code for execution and when executed by a processor operable to perform operations comprising: receiving a video input from a camera element; using change detection statistics to identify background image data; using the background image data as a temporal reference to determine foreground image data of a particular video frame within the video input; using a selected foreground image for a background registration of a subsequent video frame; providing at least a portion of the subsequent video frame to a next destination; identifying values of pixels from noise within the video input; creating a skip-reference video image associated with the identified pixel values; comparing a portion of a current video image to the skip-reference video image; and determining a macroblock associated with the current video image to be skipped before an encoding operation occurs.
  • 10. The logic of claim 9, the operations further comprising: evaluating video data from the video input to determine whether a particular element within a plurality of elements in the video data is part of a stationary image.
  • 11. The logic of claim 9, wherein the foreground image data further comprises a face and a torso image of a participant in a video session.
  • 12. The logic of claim 9, the operations further comprising: generating a plurality of histograms to represent variation statistics between a current input video frame and a temporally preceding video frame.
  • 13. The logic of claim 12, wherein each of the histograms includes differing levels of luminance, and wherein if a selected one of the histograms reaches a certain level of luminance, a corresponding pixel of an associated video image is marked to be registered to a reference buffer.
  • 14. An apparatus, comprising: a memory element configured to store data; and a processor operable to execute instructions associated with the data, wherein the processor and the memory element cooperate such that the apparatus is configured to: receive a video input from a camera element; use change detection statistics to identify background image data; use the background image data as a temporal reference to determine foreground image data of a particular video frame within the video input; use a selected foreground image for a background registration of a subsequent video frame; provide at least a portion of the subsequent video frame to a next destination; and generate a plurality of histograms to represent variation statistics between a current input video frame and a temporally preceding video frame.
  • 15. The apparatus of claim 14, wherein the apparatus is configured to: identify values of pixels from noise within the video input; create a skip-reference video image associated with the identified pixel values; compare a portion of a current video image to the skip-reference video image; and determine a macroblock associated with the current video image to be skipped before an encoding operation occurs.
  • 16. The apparatus of claim 14, wherein the apparatus is further configured to: evaluate video data from the video input to determine whether a particular element within a plurality of elements in the video data is part of a stationary image.
  • 17. The apparatus of claim 14, further comprising: a console element coupled to the camera element, wherein the apparatus is further configured to: record a video message; select a particular identifier string associated with a particular user; and communicate the video message to a destination associated with the particular user.
  • 18. The apparatus of claim 14, wherein each of the histograms includes differing levels of luminance, and wherein if a selected one of the histograms reaches a certain level of luminance, a corresponding pixel of an associated video image is marked to be registered to a reference buffer.
US Referenced Citations (585)
Number Name Date Kind
2911462 Brady Nov 1959 A
3793489 Sank Feb 1974 A
3909121 De Mesquita Cardoso Sep 1975 A
D270271 Steele Aug 1983 S
4400724 Fields Aug 1983 A
4473285 Winter Sep 1984 A
4494144 Brown Jan 1985 A
4750123 Christian Jun 1988 A
4815132 Minami Mar 1989 A
4827253 Maltz May 1989 A
4853764 Sutter Aug 1989 A
4890314 Judd et al. Dec 1989 A
4961211 Tsugane et al. Oct 1990 A
4994912 Lumelsky et al. Feb 1991 A
5003532 Ashida et al. Mar 1991 A
5020098 Celli May 1991 A
5033969 Kamimura Jul 1991 A
5136652 Jibbe et al. Aug 1992 A
5187571 Braun et al. Feb 1993 A
5200818 Neta et al. Apr 1993 A
5243697 Hoeber et al. Sep 1993 A
5249035 Yamanaka Sep 1993 A
5255211 Redmond Oct 1993 A
5268734 Parker et al. Dec 1993 A
5317405 Kuriki et al. May 1994 A
5337363 Platt Aug 1994 A
5347363 Yamanaka Sep 1994 A
5351067 Lumelsky et al. Sep 1994 A
5359362 Lewis et al. Oct 1994 A
D357468 Rodd Apr 1995 S
5406326 Mowry Apr 1995 A
5423554 Davis Jun 1995 A
5446834 Deering Aug 1995 A
5448287 Hull Sep 1995 A
5467401 Nagamitsu et al. Nov 1995 A
5495576 Ritchey Feb 1996 A
5502481 Dentinger et al. Mar 1996 A
5502726 Fischer Mar 1996 A
5506604 Nally et al. Apr 1996 A
5532737 Braun Jul 1996 A
5541639 Takatsuki et al. Jul 1996 A
5541773 Kamo et al. Jul 1996 A
5570372 Shaffer Oct 1996 A
5572248 Allen et al. Nov 1996 A
5587726 Moffat Dec 1996 A
5612733 Flohr Mar 1997 A
5625410 Washino et al. Apr 1997 A
5666153 Copeland Sep 1997 A
5673401 Volk et al. Sep 1997 A
5675374 Kohda Oct 1997 A
5689663 Williams Nov 1997 A
5708787 Nakano et al. Jan 1998 A
5713033 Sado Jan 1998 A
5715377 Fukushima et al. Feb 1998 A
D391558 Marshall et al. Mar 1998 S
D391935 Sakaguchi et al. Mar 1998 S
5729471 Jain et al. Mar 1998 A
5737011 Lukacs Apr 1998 A
5745116 Pisutha-Arnond Apr 1998 A
5748121 Romriell May 1998 A
D395292 Vu Jun 1998 S
5760826 Nayar Jun 1998 A
D396455 Bier Jul 1998 S
D396456 Bier Jul 1998 S
5790182 Hilaire Aug 1998 A
5796724 Rajamani et al. Aug 1998 A
D397687 Arora et al. Sep 1998 S
D398595 Baer et al. Sep 1998 S
5815196 Alshawi Sep 1998 A
D399501 Arora et al. Oct 1998 S
5818514 Duttweiler et al. Oct 1998 A
5821985 Iizawa Oct 1998 A
5825362 Retter Oct 1998 A
5889499 Nally et al. Mar 1999 A
5894321 Downs et al. Apr 1999 A
D409243 Lonergan May 1999 S
D410447 Chang Jun 1999 S
5929857 Dinallo et al. Jul 1999 A
5940118 Van Schyndel Aug 1999 A
5940530 Fukushima et al. Aug 1999 A
5953052 McNelley et al. Sep 1999 A
5956100 Gorski Sep 1999 A
5996003 Namikata et al. Nov 1999 A
D419543 Warren et al. Jan 2000 S
6069648 Suso et al. May 2000 A
6069658 Watanabe May 2000 A
6088045 Lumelsky et al. Jul 2000 A
6097390 Marks Aug 2000 A
6097441 Allport Aug 2000 A
6101113 Paice Aug 2000 A
6124896 Kurashige Sep 2000 A
6137485 Kawai et al. Oct 2000 A
6148092 Qian Nov 2000 A
D435561 Pettigrew et al. Dec 2000 S
6167162 Jacquin et al. Dec 2000 A
6172703 Lee Jan 2001 B1
6173069 Daly et al. Jan 2001 B1
D440575 Wang et al. Apr 2001 S
6211870 Foster Apr 2001 B1
6226035 Korein et al. May 2001 B1
6243130 McNelley et al. Jun 2001 B1
6249318 Girod et al. Jun 2001 B1
6256400 Takata et al. Jul 2001 B1
6259469 Ejima et al. Jul 2001 B1
6266082 Yonezawa et al. Jul 2001 B1
6266098 Cove et al. Jul 2001 B1
D446790 Wang et al. Aug 2001 S
6285392 Satoda et al. Sep 2001 B1
6292188 Carlson et al. Sep 2001 B1
6292575 Bortolussi et al. Sep 2001 B1
D450323 Moore et al. Nov 2001 S
D453167 Hasegawa et al. Jan 2002 S
D454574 Wasko et al. Mar 2002 S
6356589 Gebler et al. Mar 2002 B1
6380539 Edgar Apr 2002 B1
6396514 Kohno May 2002 B1
6424377 Driscoll, Jr. Jul 2002 B1
D461191 Hickey et al. Aug 2002 S
6430222 Okada Aug 2002 B1
6459451 Driscoll et al. Oct 2002 B2
6462767 Obata et al. Oct 2002 B1
6493032 Wallerstein et al. Dec 2002 B1
D468322 Walker et al. Jan 2003 S
6507356 Jackel et al. Jan 2003 B1
D470153 Billmaier et al. Feb 2003 S
6515695 Sato et al. Feb 2003 B1
D474194 Kates et al. May 2003 S
6573904 Chun et al. Jun 2003 B1
6577333 Tai et al. Jun 2003 B2
6583808 Boulanger et al. Jun 2003 B2
6590603 Sheldon et al. Jul 2003 B2
6591314 Colbath Jul 2003 B1
6593955 Falcon Jul 2003 B1
6593956 Potts et al. Jul 2003 B1
D478090 Nguyen et al. Aug 2003 S
6611281 Strubbe Aug 2003 B2
D482368 den Toonder et al. Nov 2003 S
6680856 Schreiber Jan 2004 B2
6693663 Harris Feb 2004 B1
6694094 Partynski et al. Feb 2004 B2
6704048 Malkin et al. Mar 2004 B1
6710797 McNelley et al. Mar 2004 B1
6751106 Zhang et al. Jun 2004 B2
D492692 Fallon et al. Jul 2004 S
6763226 McZeal Jul 2004 B1
6768722 Katseff et al. Jul 2004 B1
D494186 Johnson Aug 2004 S
6771303 Zhang et al. Aug 2004 B2
6774927 Cohen et al. Aug 2004 B1
D495715 Gildred Sep 2004 S
6795108 Jarboe et al. Sep 2004 B2
6795558 Matsuo et al. Sep 2004 B2
6798834 Murakami et al. Sep 2004 B1
6806898 Toyama et al. Oct 2004 B1
6807280 Stroud et al. Oct 2004 B1
6809724 Shiraishi et al. Oct 2004 B1
6831653 Kehlet et al. Dec 2004 B2
6844990 Artonne et al. Jan 2005 B2
6853398 Malzbender et al. Feb 2005 B2
6867798 Wada et al. Mar 2005 B1
6882358 Schuster et al. Apr 2005 B1
6888358 Lechner et al. May 2005 B2
D506208 Jewitt et al. Jun 2005 S
6909438 White et al. Jun 2005 B1
6911995 Ivanov et al. Jun 2005 B2
6917271 Zhang et al. Jul 2005 B2
6922718 Chang Jul 2005 B2
6925613 Gibson Aug 2005 B2
6963653 Miles Nov 2005 B1
D512723 Wirz Dec 2005 S
6980526 Jang et al. Dec 2005 B2
6989754 Kisacanin et al. Jan 2006 B2
6989836 Ramsey Jan 2006 B2
6989856 Firestone et al. Jan 2006 B2
6990086 Holur et al. Jan 2006 B1
7002973 MeLampy et al. Feb 2006 B2
7023855 Haumont et al. Apr 2006 B2
7028092 MeLampy et al. Apr 2006 B2
7030890 Jouet et al. Apr 2006 B1
7031311 MeLampy et al. Apr 2006 B2
7036092 Sloo et al. Apr 2006 B2
D521521 Jewitt et al. May 2006 S
7043528 Schmitt et al. May 2006 B2
7046862 Ishizaka et al. May 2006 B2
D522559 Naito et al. Jun 2006 S
7057636 Cohen-Solal et al. Jun 2006 B1
7057662 Malzbender Jun 2006 B2
7058690 Maehiro Jun 2006 B2
7061896 Jabbari et al. Jun 2006 B2
D524321 Hally et al. Jul 2006 S
7072504 Miyano et al. Jul 2006 B2
7072833 Rajan Jul 2006 B2
7080157 McCanne Jul 2006 B2
7092002 Ferren et al. Aug 2006 B2
7111045 Kato et al. Sep 2006 B2
7126627 Lewis et al. Oct 2006 B1
7131135 Virag et al. Oct 2006 B1
7136651 Kalavade Nov 2006 B2
7139767 Taylor et al. Nov 2006 B1
D533525 Arie Dec 2006 S
D533852 Ma Dec 2006 S
D534511 Maeda et al. Jan 2007 S
D535954 Hwang et al. Jan 2007 S
D536001 Armstrong et al. Jan 2007 S
7158674 Suh Jan 2007 B2
7161942 Chen et al. Jan 2007 B2
7164435 Wang et al. Jan 2007 B2
D536340 Jost et al. Feb 2007 S
D539243 Chiu et al. Mar 2007 S
7197008 Shabtay et al. Mar 2007 B1
D540336 Kim et al. Apr 2007 S
D541773 Chong et al. May 2007 S
D542247 Kinoshita et al. May 2007 S
7221260 Berezowski et al. May 2007 B2
D544494 Cummins Jun 2007 S
D545314 Kim Jun 2007 S
D547320 Kim et al. Jul 2007 S
7239338 Krisbergh et al. Jul 2007 B2
7246118 Chastain et al. Jul 2007 B2
D548742 Fletcher Aug 2007 S
7254785 Reed Aug 2007 B2
D550635 DeMaio et al. Sep 2007 S
D551184 Kanou et al. Sep 2007 S
D551672 Wirz Sep 2007 S
7269292 Steinberg Sep 2007 B2
7274555 Kim et al. Sep 2007 B2
D554664 Van Dongen et al. Nov 2007 S
D555610 Yang et al. Nov 2007 S
D559265 Armstrong et al. Jan 2008 S
D560225 Park et al. Jan 2008 S
D560681 Fletcher Jan 2008 S
D561130 Won et al. Feb 2008 S
7336299 Kostrzewski Feb 2008 B2
D564530 Kim et al. Mar 2008 S
D567202 Rieu Piquet Apr 2008 S
7352809 Wenger et al. Apr 2008 B2
7353279 Durvasula et al. Apr 2008 B2
7353462 Caffarelli Apr 2008 B2
7359731 Choksi Apr 2008 B2
7399095 Rondinelli Jul 2008 B2
D574392 Kwag et al. Aug 2008 S
7411975 Mohaban Aug 2008 B1
7413150 Hsu Aug 2008 B1
7428000 Cutler et al. Sep 2008 B2
D578496 Leonard Oct 2008 S
7440615 Gong et al. Oct 2008 B2
D580451 Steele et al. Nov 2008 S
7450134 Maynard et al. Nov 2008 B2
7471320 Malkin et al. Dec 2008 B2
D585453 Chen et al. Jan 2009 S
7477322 Hsieh Jan 2009 B2
7477657 Murphy et al. Jan 2009 B1
7480870 Anzures et al. Jan 2009 B2
D588560 Mellingen et al. Mar 2009 S
D589053 Steele et al. Mar 2009 S
7505036 Baldwin Mar 2009 B1
D591306 Setiawan et al. Apr 2009 S
7518051 Redmann Apr 2009 B2
D592621 Han May 2009 S
7529425 Kitamura et al. May 2009 B2
7532230 Culbertson et al. May 2009 B2
7532232 Shah et al. May 2009 B2
7534056 Cross et al. May 2009 B2
7545761 Kalbag Jun 2009 B1
7551432 Bockheim et al. Jun 2009 B1
7555141 Mori Jun 2009 B2
D595728 Scheibe et al. Jul 2009 S
D596646 Wani Jul 2009 S
7575537 Ellis Aug 2009 B2
7577246 Idan et al. Aug 2009 B2
D602033 Vu et al. Oct 2009 S
D602453 Ding et al. Oct 2009 S
D602495 Um et al. Oct 2009 S
7610352 AlHusseini et al. Oct 2009 B2
7610599 Nashida et al. Oct 2009 B1
7616226 Roessler et al. Nov 2009 B2
D608788 Meziere Jan 2010 S
7646419 Cernasov Jan 2010 B2
D610560 Chen Feb 2010 S
7661075 Lahdesmaki Feb 2010 B2
7664750 Frees et al. Feb 2010 B2
D612394 La et al. Mar 2010 S
7676763 Rummel Mar 2010 B2
7679639 Harrell et al. Mar 2010 B2
7692680 Graham Apr 2010 B2
7707247 Dunn et al. Apr 2010 B2
D615514 Mellingen et al. May 2010 S
7710448 De Beer et al. May 2010 B2
7710450 Dhuey et al. May 2010 B2
7714222 Taub et al. May 2010 B2
7715657 Lin et al. May 2010 B2
7719605 Hirasawa et al. May 2010 B2
7719662 Bamji et al. May 2010 B2
7720277 Hattori May 2010 B2
7725919 Thiagarajan et al. May 2010 B1
D619608 Meziere Jul 2010 S
D619609 Meziere Jul 2010 S
D619610 Meziere Jul 2010 S
D619611 Meziere Jul 2010 S
7752568 Park et al. Jul 2010 B2
D621410 Verfuerth et al. Aug 2010 S
D626102 Buzzard et al. Oct 2010 S
D626103 Buzzard et al. Oct 2010 S
D628175 Desai et al. Nov 2010 S
7839434 Ciudad et al. Nov 2010 B2
D628968 Desai et al. Dec 2010 S
7855726 Ferren et al. Dec 2010 B2
7861189 Watanabe et al. Dec 2010 B2
D632698 Judy et al. Feb 2011 S
7889851 Shah et al. Feb 2011 B2
7890888 Glasgow et al. Feb 2011 B2
7894531 Cetin et al. Feb 2011 B1
D634726 Harden et al. Mar 2011 S
D634753 Loretan et al. Mar 2011 S
D635569 Park Apr 2011 S
D635975 Seo et al. Apr 2011 S
D637199 Brinda May 2011 S
D638025 Saft et al. May 2011 S
7939959 Wagoner May 2011 B2
D640268 Jones et al. Jun 2011 S
D642184 Brouwers et al. Jul 2011 S
7990422 Ahiska et al. Aug 2011 B2
7996775 Cole et al. Aug 2011 B2
8000559 Kwon Aug 2011 B2
D646690 Thai et al. Oct 2011 S
D648734 Christie et al. Nov 2011 S
D649556 Judy et al. Nov 2011 S
8077857 Lambert Dec 2011 B1
8081346 Anup et al. Dec 2011 B1
8086076 Tian et al. Dec 2011 B2
D652050 Chaudhri Jan 2012 S
D652429 Steele et al. Jan 2012 S
D653245 Buzzard et al. Jan 2012 S
D655279 Buzzard et al. Mar 2012 S
D656513 Thai et al. Mar 2012 S
8130256 Trachtenberg et al. Mar 2012 B2
8132100 Seo et al. Mar 2012 B2
8135068 Alvarez Mar 2012 B1
D660313 Williams et al. May 2012 S
8179419 Girish et al. May 2012 B2
8209632 Reid et al. Jun 2012 B2
8219404 Weinberg et al. Jul 2012 B2
8219920 Langoulant et al. Jul 2012 B2
8259155 Marathe et al. Sep 2012 B2
D669086 Boyer et al. Oct 2012 S
D669088 Boyer et al. Oct 2012 S
8299979 Rambo et al. Oct 2012 B2
8315466 El-Maleh et al. Nov 2012 B2
8363719 Nakayama Jan 2013 B2
8436888 Baldino et al. May 2013 B1
20020047892 Gonsalves, Jr. Apr 2002 A1
20020106120 Brandenburg et al. Aug 2002 A1
20020108125 Joao Aug 2002 A1
20020113827 Perlman et al. Aug 2002 A1
20020114392 Sekiguchi et al. Aug 2002 A1
20020118890 Rondinelli Aug 2002 A1
20020131608 Lobb et al. Sep 2002 A1
20020140804 Colmenarez et al. Oct 2002 A1
20020149672 Clapp et al. Oct 2002 A1
20020163538 Shteyn Nov 2002 A1
20020186528 Huang Dec 2002 A1
20020196737 Bullard Dec 2002 A1
20030017872 Oishi et al. Jan 2003 A1
20030048218 Milnes et al. Mar 2003 A1
20030071932 Tanigaki Apr 2003 A1
20030072460 Gonopolskiy et al. Apr 2003 A1
20030160861 Barlow et al. Aug 2003 A1
20030179285 Naito Sep 2003 A1
20030185303 Hall Oct 2003 A1
20030197687 Shetter Oct 2003 A1
20040003411 Nakai et al. Jan 2004 A1
20040032906 Lillig Feb 2004 A1
20040038169 Mandelkern et al. Feb 2004 A1
20040039778 Read et al. Feb 2004 A1
20040061787 Liu et al. Apr 2004 A1
20040091232 Appling, III May 2004 A1
20040118984 Kim et al. Jun 2004 A1
20040119814 Clisham et al. Jun 2004 A1
20040164858 Lin Aug 2004 A1
20040165060 McNelley et al. Aug 2004 A1
20040178955 Menache et al. Sep 2004 A1
20040189463 Wathen Sep 2004 A1
20040189676 Dischert Sep 2004 A1
20040196250 Mehrotra et al. Oct 2004 A1
20040207718 Boyden et al. Oct 2004 A1
20040218755 Marton et al. Nov 2004 A1
20040221243 Twerdahl et al. Nov 2004 A1
20040246962 Kopeikin et al. Dec 2004 A1
20040246972 Wang et al. Dec 2004 A1
20040254982 Hoffman et al. Dec 2004 A1
20040260796 Sundqvist et al. Dec 2004 A1
20050007954 Sreemanthula et al. Jan 2005 A1
20050022130 Fabritius Jan 2005 A1
20050024484 Leonard Feb 2005 A1
20050034084 Ohtsuki et al. Feb 2005 A1
20050039142 Jalon et al. Feb 2005 A1
20050050246 Lakkakorpi et al. Mar 2005 A1
20050081160 Wee et al. Apr 2005 A1
20050099492 Orr May 2005 A1
20050110867 Schulz May 2005 A1
20050117022 Marchant Jun 2005 A1
20050129325 Wu Jun 2005 A1
20050147257 Melchior et al. Jul 2005 A1
20050149872 Fong et al. Jul 2005 A1
20050154988 Proehl et al. Jul 2005 A1
20050223069 Cooperman et al. Oct 2005 A1
20050248652 Firestone et al. Nov 2005 A1
20050251760 Sato et al. Nov 2005 A1
20050268823 Bakker et al. Dec 2005 A1
20060013495 Duan et al. Jan 2006 A1
20060017807 Lee et al. Jan 2006 A1
20060028983 Wright Feb 2006 A1
20060029084 Grayson Feb 2006 A1
20060038878 Takashima et al. Feb 2006 A1
20060048070 Taylor et al. Mar 2006 A1
20060066717 Miceli Mar 2006 A1
20060072813 Matsumoto et al. Apr 2006 A1
20060082643 Richards Apr 2006 A1
20060093128 Oxford May 2006 A1
20060100004 Kim et al. May 2006 A1
20060104297 Buyukkoc et al. May 2006 A1
20060104470 Akino May 2006 A1
20060120307 Sahashi Jun 2006 A1
20060120568 McConville et al. Jun 2006 A1
20060125691 Menache et al. Jun 2006 A1
20060126878 Takumai et al. Jun 2006 A1
20060152489 Sweetser et al. Jul 2006 A1
20060152575 Amiel et al. Jul 2006 A1
20060158509 Kenoyer et al. Jul 2006 A1
20060168302 Boskovic et al. Jul 2006 A1
20060170769 Zhou Aug 2006 A1
20060181607 McNelley et al. Aug 2006 A1
20060200518 Sinclair et al. Sep 2006 A1
20060233120 Eshel et al. Oct 2006 A1
20060256187 Sheldon et al. Nov 2006 A1
20060284786 Takano et al. Dec 2006 A1
20060289772 Johnson et al. Dec 2006 A1
20070019621 Perry et al. Jan 2007 A1
20070022388 Jennings Jan 2007 A1
20070039030 Romanowich et al. Feb 2007 A1
20070040903 Kawaguchi Feb 2007 A1
20070070177 Christensen Mar 2007 A1
20070074123 Omura et al. Mar 2007 A1
20070080845 Amand Apr 2007 A1
20070112966 Eftis et al. May 2007 A1
20070120971 Kennedy May 2007 A1
20070121353 Zhang et al. May 2007 A1
20070140337 Lim et al. Jun 2007 A1
20070153712 Fry et al. Jul 2007 A1
20070157119 Bishop Jul 2007 A1
20070159523 Hillis et al. Jul 2007 A1
20070162866 Matthews et al. Jul 2007 A1
20070183661 El-Maleh et al. Aug 2007 A1
20070188597 Kenoyer et al. Aug 2007 A1
20070189219 Navali et al. Aug 2007 A1
20070192381 Padmanabhan Aug 2007 A1
20070206091 Dunn et al. Sep 2007 A1
20070206556 Yegani et al. Sep 2007 A1
20070206602 Halabi et al. Sep 2007 A1
20070217406 Riedel et al. Sep 2007 A1
20070217500 Gao et al. Sep 2007 A1
20070229250 Recker et al. Oct 2007 A1
20070240073 McCarthy et al. Oct 2007 A1
20070247470 Dhuey et al. Oct 2007 A1
20070250567 Graham et al. Oct 2007 A1
20070250620 Shah et al. Oct 2007 A1
20070273752 Chambers et al. Nov 2007 A1
20070279483 Beers et al. Dec 2007 A1
20070279484 Derocher et al. Dec 2007 A1
20070285505 Korneliussen Dec 2007 A1
20080043041 Hedenstroem et al. Feb 2008 A2
20080044064 His Feb 2008 A1
20080046840 Melton et al. Feb 2008 A1
20080077390 Nagao Mar 2008 A1
20080077883 Kim et al. Mar 2008 A1
20080084429 Wissinger Apr 2008 A1
20080119211 Paas et al. May 2008 A1
20080134098 Hoglund et al. Jun 2008 A1
20080136896 Graham et al. Jun 2008 A1
20080148187 Miyata et al. Jun 2008 A1
20080151038 Khouri et al. Jun 2008 A1
20080153537 Khawand et al. Jun 2008 A1
20080167078 Elbye Jul 2008 A1
20080198755 Vasseur et al. Aug 2008 A1
20080208444 Ruckart Aug 2008 A1
20080212677 Chen et al. Sep 2008 A1
20080215974 Harrison et al. Sep 2008 A1
20080215993 Rossman Sep 2008 A1
20080218582 Buckler Sep 2008 A1
20080219268 Dennison Sep 2008 A1
20080232688 Senior et al. Sep 2008 A1
20080232692 Kaku Sep 2008 A1
20080240237 Tian et al. Oct 2008 A1
20080240571 Tian et al. Oct 2008 A1
20080246833 Yasui et al. Oct 2008 A1
20080256474 Chakra et al. Oct 2008 A1
20080261569 Britt et al. Oct 2008 A1
20080266380 Gorzynski et al. Oct 2008 A1
20080267282 Kalipatnapu et al. Oct 2008 A1
20080276184 Buffet et al. Nov 2008 A1
20080297586 Kurtz et al. Dec 2008 A1
20080298571 Kurtz et al. Dec 2008 A1
20080303901 Variyath et al. Dec 2008 A1
20090009593 Cameron et al. Jan 2009 A1
20090012633 Liu et al. Jan 2009 A1
20090037827 Bennetts Feb 2009 A1
20090051756 Trachtenberg Feb 2009 A1
20090115723 Henty May 2009 A1
20090119603 Stackpole May 2009 A1
20090122867 Mauchly et al. May 2009 A1
20090129753 Wagenlander May 2009 A1
20090172596 Yamashita Jul 2009 A1
20090174764 Chadha et al. Jul 2009 A1
20090183122 Webb et al. Jul 2009 A1
20090193345 Wensley et al. Jul 2009 A1
20090204538 Ley et al. Aug 2009 A1
20090207179 Huang et al. Aug 2009 A1
20090207233 Mauchly et al. Aug 2009 A1
20090207234 Chen et al. Aug 2009 A1
20090217199 Hara et al. Aug 2009 A1
20090228807 Lemay Sep 2009 A1
20090244257 MacDonald et al. Oct 2009 A1
20090256901 Mauchly et al. Oct 2009 A1
20090260060 Smith et al. Oct 2009 A1
20090265628 Bamford et al. Oct 2009 A1
20090279476 Li et al. Nov 2009 A1
20090324023 Tian et al. Dec 2009 A1
20100005419 Miichi et al. Jan 2010 A1
20100008373 Xiao et al. Jan 2010 A1
20100014530 Cutaia Jan 2010 A1
20100027907 Cherna et al. Feb 2010 A1
20100030389 Palmer et al. Feb 2010 A1
20100042281 Filla Feb 2010 A1
20100049542 Benjamin et al. Feb 2010 A1
20100082557 Gao et al. Apr 2010 A1
20100118112 Nimri et al. May 2010 A1
20100123770 Friel et al. May 2010 A1
20100149301 Lee et al. Jun 2010 A1
20100153853 Dawes et al. Jun 2010 A1
20100171807 Tysso Jul 2010 A1
20100171808 Harrell et al. Jul 2010 A1
20100183199 Smith et al. Jul 2010 A1
20100199228 Latta et al. Aug 2010 A1
20100201823 Zhang et al. Aug 2010 A1
20100202285 Cohen et al. Aug 2010 A1
20100205281 Porter et al. Aug 2010 A1
20100205543 Von Werther et al. Aug 2010 A1
20100208078 Tian et al. Aug 2010 A1
20100225732 De Beer et al. Sep 2010 A1
20100225735 Shaffer et al. Sep 2010 A1
20100241845 Alonso Sep 2010 A1
20100259619 Nicholson Oct 2010 A1
20100262367 Riggins et al. Oct 2010 A1
20100268843 Van Wie et al. Oct 2010 A1
20100277563 Gupta et al. Nov 2010 A1
20100283829 De Beer et al. Nov 2010 A1
20100306703 Bourganel et al. Dec 2010 A1
20100313148 Hochendoner et al. Dec 2010 A1
20100316232 Acero et al. Dec 2010 A1
20100325547 Keng et al. Dec 2010 A1
20110008017 Gausereide Jan 2011 A1
20110029868 Moran et al. Feb 2011 A1
20110039506 Lindahl et al. Feb 2011 A1
20110063467 Tanaka Mar 2011 A1
20110082808 Beykpour et al. Apr 2011 A1
20110085016 Kristiansen et al. Apr 2011 A1
20110090303 Wu et al. Apr 2011 A1
20110105220 Hill et al. May 2011 A1
20110109642 Chang et al. May 2011 A1
20110113348 Twiss et al. May 2011 A1
20110164106 Kim Jul 2011 A1
20110202878 Park et al. Aug 2011 A1
20110225534 Wala Sep 2011 A1
20110242266 Blackburn et al. Oct 2011 A1
20110249081 Kay et al. Oct 2011 A1
20110249086 Guo et al. Oct 2011 A1
20110276901 Zambetti et al. Nov 2011 A1
20110279627 Shyu Nov 2011 A1
20110319885 Skwarek et al. Dec 2011 A1
20120026278 Goodman et al. Feb 2012 A1
20120038742 Robinson et al. Feb 2012 A1
20120106428 Schlicht et al. May 2012 A1
20120143605 Thorsen et al. Jun 2012 A1
20120169838 Sekine Jul 2012 A1
20120226997 Pang Sep 2012 A1
Foreign Referenced Citations (43)
Number Date Country
101953158 Jan 2011 CN
102067593 May 2011 CN
502600 Sep 1992 EP
0650299 Oct 1994 EP
0714081 Nov 1995 EP
0740177 Apr 1996 EP
1143745 Oct 2001 EP
1178352 Jun 2002 EP
1589758 Oct 2005 EP
1701308 Sep 2006 EP
1768058 Mar 2007 EP
2073543 Jun 2009 EP
2255531 Dec 2010 EP
2277308 Jan 2011 EP
2294605 May 1996 GB
2336266 Oct 1999 GB
2355876 May 2001 GB
WO 9416517 Jul 1994 WO
WO 9621321 Jul 1996 WO
WO 9708896 Mar 1997 WO
WO 9847291 Oct 1998 WO
WO 9959026 Nov 1999 WO
WO 2001033840 May 2001 WO
WO 2005013001 Feb 2005 WO
WO 2005031001 Feb 2005 WO
WO 2006072755 Jul 2006 WO
WO 2007106157 Sep 2007 WO
WO 2007123946 Nov 2007 WO
WO 2007123960 Nov 2007 WO
WO 2008039371 Apr 2008 WO
WO 2008040258 Apr 2008 WO
WO 2008101117 Aug 2008 WO
WO 2008118887 Oct 2008 WO
WO 2009102503 Aug 2009 WO
WO 2009120814 Oct 2009 WO
WO 2010059481 May 2010 WO
WO 2010096342 Aug 2010 WO
WO 2010104765 Sep 2010 WO
WO 2010132271 Nov 2010 WO
WO 2012033716 Mar 2012 WO
WO 2012068008 May 2012 WO
WO 2012068010 May 2012 WO
WO 2012068485 May 2012 WO
Non-Patent Literature Citations (269)
U.S. Appl. No. 12/234,291, filed Sep. 19, 2008, entitled “System and Method for Enabling Communication Sessions in a Network Environment,” Inventors: Yifan Gao et al.
U.S. Appl. No. 12/366,593, filed Feb. 5, 2009, entitled “System and Method for Depth Perspective Image Rendering,” Inventors: J. William Mauchly et al.
U.S. Appl. No. 12/475,075, filed May 29, 2009, entitled “System and Method for Extending Communications Between Participants in a Conferencing Environment,” Inventors: Brian J. Baldino et al.
U.S. Appl. No. 12/400,540, filed Mar. 9, 2009, entitled “System and Method for Providing Three Dimensional Video Conferencing in a Network Environment,” Inventors: Karthik Dakshinamoorthy et al.
U.S. Appl. No. 12/400,582, filed Mar. 9, 2009, entitled “System and Method for Providing Three Dimensional Imaging in a Network Environment,” Inventors: Shmuel Shaffer et al.
U.S. Appl. No. 12/539,461, filed Aug. 11, 2009, entitled “System and Method for Verifying Parameters in an Audiovisual Environment,” Inventor: James M. Alexander.
U.S. Appl. No. 12/463,505, filed May 11, 2009, entitled “System and Method for Translating Communications Between Participants in a Conferencing Environment,” Inventors: Marthinus F. De Beer et al.
U.S. Appl. No. 12/727,089, filed Mar. 18, 2010, entitled “System and Method for Enhancing Video Images in a Conferencing Environment,” Inventor: Joseph T. Friel.
U.S. Appl. No. 12/781,722, filed May 17, 2010, entitled “System and Method for Providing Retracting Optics in a Video Conferencing Environment,” Inventor(s): Joseph T. Friel, et al.
U.S. Appl. No. 12/877,833, filed Sep. 8, 2010, entitled “System and Method for Skip Coding During Video Conferencing in a Network Environment,” Inventors: Dihong Tian et al.
U.S. Appl. No. 12/870,687, filed Aug. 27, 2010, entitled “System and Method for Producing a Performance Via Video Conferencing in a Network Environment,” Inventors: Michael A. Arnao et al.
U.S. Appl. No. 12/912,556, filed Oct. 26, 2010, entitled “System and Method for Provisioning Flows in a Mobile Network Environment,” Inventors: Balaji Venkat Venkataswami, et al.
U.S. Appl. No. 12/949,614, filed Nov. 18, 2010, entitled “System and Method for Managing Optics in a Video Environment,” Inventors: Torence Lu, et al.
U.S. Appl. No. 12/873,100, filed Aug. 31, 2010, entitled “System and Method for Providing Depth Adaptive Video Conferencing,” Inventors: J. William Mauchly et al.
U.S. Appl. No. 12/946,679, filed Nov. 15, 2010, entitled “System and Method for Providing Camera Functions in a Video Environment,” Inventors: Peter A.J. Fornell, et al.
U.S. Appl. No. 12/946,695, filed Nov. 15, 2010, entitled “System and Method for Providing Enhanced Audio in a Video Environment,” Inventors: Wei Li, et al.
U.S. Appl. No. 12/907,914, filed Oct. 19, 2010, entitled “System and Method for Providing Videomail in a Network Environment,” Inventors: David J. Mackie et al.
U.S. Appl. No. 12/907,919, filed Oct. 19, 2010, entitled “System and Method for Providing Connectivity in a Network Environment,” Inventors: David J. Mackie et al.
U.S. Appl. No. 12/946,704, filed Nov. 15, 2010, entitled “System and Method for Providing Enhanced Graphics in a Video Environment,” Inventors: John M. Kanalakis, Jr., et al.
U.S. Appl. No. 12/907,925, filed Oct. 19, 2010, entitled “System and Method for Providing a Pairing Mechanism in a Video Environment,” Inventors: Gangfeng Kong et al.
U.S. Appl. No. 12/939,037, filed Nov. 3, 2010, entitled “System and Method for Managing Flows in a Mobile Network Environment,” Inventors: Balaji Venkat Venkataswami et al.
U.S. Appl. No. 12/946,709, filed Nov. 15, 2010, entitled “System and Method for Providing Enhanced Graphics in a Video Environment,” Inventors: John M. Kanalakis, Jr., et al.
U.S. Appl. No. 12/784,257, filed May 20, 2010, entitled “Implementing Selective Image Enhancement,” Inventors: Dihong Tian et al.
Design U.S. Appl. No. 29/375,624, filed Sep. 24, 2010, entitled “Mounted Video Unit,” Inventor(s): Ashok T. Desai et al.
Design U.S. Appl. No. 29/375,627, filed Sep. 24, 2010, entitled “Mounted Video Unit,” Inventor(s): Ashok T. Desai et al.
Design U.S. Appl. No. 29/369,951, filed Sep. 15, 2010, entitled “Video Unit With Integrated Features,” Inventor(s): Kyle A. Buzzard et al.
Design U.S. Appl. No. 29/375,458, filed Sep. 22, 2010, entitled “Video Unit With Integrated Features,” Inventor(s): Kyle A. Buzzard et al.
Design U.S. Appl. No. 29/358,009, filed Mar. 21, 2010, entitled “Free-Standing Video Unit,” Inventor(s): Ashok T. Desai et al.
Design U.S. Appl. No. 29/375,619, filed Sep. 24, 2010, entitled “Free-standing Video Unit,” Inventor(s): Ashok T. Desai et al.
PCT “International Search Report and the Written Opinion of the International Searching Authority, or the Declaration,” PCT/US2010/026456, dated Jun. 29, 2010; 11 pages.
PCT “Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration,” PCT/US2009/001070, dated Apr. 4, 2009; 14 pages.
PCT “Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration,” PCT/US2009/038310; dated Oct. 10, 2009; 17 pages.
PCT “International Preliminary Report on Patentability and Written Opinion of the International Searching Authority,” PCT/US2009/038310; dated Sep. 28, 2010; 10 pages.
PCT “International Preliminary Report on Patentability dated Sep. 29, 2009, International Search Report, and Written Opinion,” for PCT International Application PCT/US2008/058079; dated Sep. 18, 2008, 10 pages.
Joshua Gluckman and S.K. Nayar, “Rectified Catadioptric Stereo Sensors,” 8 pages, retrieved and printed on May 17, 2010; http://cis.poly.edu/˜gluckman/papers/cvpr00.pdf.
Digital Video Enterprises, “DVE Eye Contact Silhouette,” 1 page, © DVE 2008; http://www.dvetelepresence.com/products/eyeContactSilhouette.asp.
R.V. Kollarits, et al., “34.3: An Eye Contact Camera/Display System for Videophone Applications Using a Conventional Direct-View LCD,” © 1995 SID, ISSN 0097-0966X/95/2601, pp. 765-768; http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=47A1E7E028C26503975E633895D114EC?doi=10.1.1.42.1772&rep=rep1&type=pdf.
3G, “World's First 3G Video Conference Service with New TV Commercial,” Apr. 28, 2005, 4 pages; http://www.3g.co.uk/PR/April2005/1383.htm.
U.S. Appl. No. 12/957,116, filed Nov. 30, 2010, entitled “System and Method for Gesture Interface Control,” Inventors: Shuan K. Kirby, et al.
Design U.S. Appl. No. 29/381,245, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,250, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,254, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,256, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,259, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,260, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,262, filed Dec. 26, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
Design U.S. Appl. No. 29/381,264, filed Dec. 16, 2010, entitled “Interface Element,” Inventor(s): John M. Kanalakis, Jr., et al.
“3D Particles Experiments in AS3 and Flash CS3,” [retrieved and printed on Mar. 18, 2010]; 2 pages; http://www.flashandmath.com/advanced/fourparticles/notes.html.
Active8-3D - Holographic Projection - 3D Hologram Retail Display & Video Project, [retrieved and printed on Feb. 24, 2009], http://www.activ8-3d.co.uk/3d_holocubes; 1 page.
Andersson, L., et al., “LDP Specification,” Network Working Group, RFC 3036, Jan. 2001, 133 pages; http://tools.ietf.org/html/rfc3036.
Arrington, Michael, “eJamming—Distributed Jamming,” TechCrunch; Mar. 16, 2006; http://www.techcrunch.com/2006/03/16/ejamming-distributed-jamming/; 1 page.
Avrithis, Y., et al., “Color-Based Retrieval of Facial Images,” European Signal Processing Conference (EUSIPCO '00), Tampere, Finland; Sep. 2000; http://www.image.ece.ntua.gr/˜ntsap/presentations/eusipco00.ppt#256; 18 pages.
Awduche, D., et al., “Requirements for Traffic Engineering over MPLS,” Network Working Group, RFC 2702, Sep. 1999, 30 pages; http://tools.ietf.org/pdf/rfc2702.pdf.
Bakstein, Hynek, et al., “Visual Fidelity of Image Based Rendering,” Center for Machine Perception, Czech Technical University, Proceedings of the Computer Vision, Winter 2004, http://www.benogo.dk/publications/Bakstein-Pajdla-CVWW04.pdf; 10 pages.
Beesley, S.T.C., et al., “Active Macroblock Skipping in the H.264 Video Coding Standard,” in Proceedings of 2005 Conference on Visualization, Imaging, and Image Processing—VIIP 2005, Sep. 7-9, 2005, Benidorm, Spain, Paper 480-261. ACTA Press, ISBN: 0-88986-528-0; 5 pages.
Berzin, O., et al., “Mobility Support Using MPLS and MP-BGP Signaling,” Network Working Group, Apr. 28, 2008, 60 pages; http://www.potaroo.net/ietf/all-ids/draft-berzin-malis-mpls-mobility-01.txt.
Boccaccio, Jeff; CEPro, “Inside HDMI CEC: The Little-Known Control Feature,” Dec. 28, 2007; http://www.cepro.com/article/print/inside_hdmi_cec_the_little_known_control_feature; 2 pages.
Bücken R: “Bildfernsprechen: Videokonferenz vom Arbeitsplatz aus” [Video Telephony: Videoconferencing from the Workplace], Funkschau, Weka Fachzeitschriften Verlag, Poing, DE, No. 17, Aug. 14, 1986, pp. 41-43, XP002537729; ISSN: 0016-2841, p. 43, left-hand column, line 34-middle column, line 24.
Chen, Eric, et al., “Experiments on block-matching techniques for video coding,” Multimedia Systems, © Springer-Verlag, 1994; 2 pages.
Chen et al., “Toward a Compelling Sensation of Telepresence: Demonstrating a Portal to a Distant (Static) Office,” Proceedings Visualization 2000; VIS 2000; Salt Lake City, UT, Oct. 8-13, 2000; Annual IEEE Conference on Visualization, Los Alamitos, CA; IEEE Comp. Soc., US, Jan. 1, 2000, pp. 327-333; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.1287.
Chen, Jason, “iBluetooth Lets iphone Users Send and Receive Files Over Bluetooth,” Mar. 13, 2009; http://i.gizmodo.com/5169545/ibluetooth-lets-iphone-users-send-and-receive-files-over-bluetooth; 1 page.
Chen, Qing, et al., “Real-time Vision-based Hand Gesture Recognition Using Haar-like Features,” Instrumentation and Measurement Technology Conference, Warsaw, Poland, May 1-3, 2007, 6 pages; http://www.google.com/url?sa=t&source=web&cd=1&ved=0CB4QFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.93.103% 26rep%3Drep1%26type%3Dpdf&ei=A28RTLKDeftnQeXzZGRAw&usg=AFQjCNHpwj5MwjgGp-3goVzSWad6CO-Jzw.
“Cisco Expo Germany 2009 Opening,” Posted on YouTube on May 4, 2009; http://www.youtube.com/watch?v=SDKsaSlz4MK; 2 pages.
Cisco: Bill Mauchly and Mod Marathe; UNC: Henry Fuchs, et al., “Depth-Dependent Perspective Rendering,” Apr. 15, 2008; 6 pages.
Costa, Cristina, et al., “Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map,” EURASIP Journal on Applied Signal Processing, Jan. 7, 2004, vol. 2004, No. 12; © 2004 Hindawi Publishing Corp.; XP002536356; ISSN: 1110-8657; pp. 1899-1911; http://downloads.hindawi.com/journals/asp/2004/470826.pdf.
Criminisi, A., et al., “Efficient Dense-Stereo and Novel-view Synthesis for Gaze Manipulation in One-to-one Teleconferencing,” Technical Rpt MSR-TR-2003-59, Sep. 2003 [retrieved and printed on Feb. 26, 2009], http://research.microsoft.com/pubs/67266/criminis_techrep2003-59.pdf, 41 pages.
“Custom 3D Depth Sensing Prototype System for Gesture Control,” 3D Depth Sensing, GestureTek, 3 pages; [Retrieved and printed on Dec. 1, 2010] http://www.gesturetek.com/3ddepth/introduction.php.
Daly, S., et al., “Face-based visually-optimized image sequence coding,” Image Processing, 1998. ICIP 98. Proceedings; 1998 International Conference on Chicago, IL; Oct. 4-7, 1998, Los Alamitos; IEEE Computing; vol. 3, Oct. 4, 1998; ISBN: 978-0-8186-8821-8; XP010586786; pp. 443-447.
Diaz, Jesus, “Zcam 3D Camera is Like Wii Without Wiimote and Minority Report Without Gloves,” Dec. 15, 2007; http://gizmodo.com/gadgets/zcam-depth-camera-could-be-wii-challenger/zcam-3d-camera-is-like-wii-without-wiimote-and-minority-report-without-gloves-334426.php; 3 pages.
Diaz, Jesus, “iPhone Bluetooth File Transfer Coming Soon (YES!),” Jan. 26, 2009; http://i.gizmodo.com/5138797/iphone-bluetooth-file-transfer-coming-soon-yes; 1 page.
DVE Digital Video Enterprises, “DVE Tele-Immersion Room,” [retrieved and printed on Feb. 5, 2009] http://www.dvetelepresence.com/products/immersion_room.asp; 2 pages.
“Dynamic Displays,” copyright 2005-2008 [retrieved and printed on Feb. 24, 2009] http://www.zebraimaging.com/html/lighting_display.html, 2 pages.
ECmag.com, “IBS Products,” Published Apr. 2009; http://www.ecmag.com/index.cfm?fa=article&articleID=10065; 2 pages.
eJamming Audio, Learn More; [retrieved and printed on May 27, 2010] http://www.ejamming.com/learnmore/; 4 pages.
Electrophysics Glossary, “Infrared Cameras, Thermal Imaging, Night Vision, Roof Moisture Detection,” [retrieved and printed on Mar. 18, 2010] http://www.electrophysics.com/Browse/Brw_Glossary.asp; 11 pages.
U.S. Appl. No. 13/036,925, filed Feb. 28, 2011, entitled “System and Method for Selection of Video Data in a Video Conference Environment,” Inventor(s): Sylvia Olayinka Aya Manfa N'guessan.
U.S. Appl. No. 13/096,772, filed Apr. 28, 2011, entitled “System and Method for Providing Enhanced Eye Gaze in a Video Conferencing Environment,” Inventor(s): Charles C. Byers.
U.S. Appl. No. 13/106,002, filed May 12, 2011, entitled “System and Method for Video Coding in a Dynamic Environment,” Inventors: Dihong Tian et al.
U.S. Appl. No. 13/098,430, filed Apr. 30, 2011, entitled “System and Method for Transferring Transparency Information in a Video Environment,” Inventors: Eddie Collins et al.
U.S. Appl. No. 13/096,795, filed Apr. 28, 2011, entitled “System and Method for Providing Enhanced Eye Gaze in a Video Conferencing Environment,” Inventors: Charles C. Byers.
U.S. Appl. No. 13/298,022, filed Nov. 16, 2011, entitled “System and Method for Alerting a Participant in a Video Conference,” Inventor(s): TiongHu Lian, et al.
“Eye Tracking,” from Wikipedia, (printed on Aug. 31, 2011) 12 pages; http://en.wikipedia.org/wiki/Eye_tracker.
“Infrared Cameras TVS-200-EX,” [retrieved and printed on May 24, 2010] http://www.electrophysics.com/Browse/Brw_ProductLineCategory.asp?CategoryID=184&Area=IS; 2 pages.
“RoundTable, 360 Degrees Video Conferencing Camera unveiled by Microsoft,” TechShout, Jun. 30, 2006, 1 page; http://www.techshout.com/gadgets/2006/30/roundtable-360-degrees-video-conferencing-camera-unveiled-by-microsoft/#.
“Vocative Case,” from Wikipedia, [retrieved and printed on Mar. 3, 2011] 11 pages; http://en.wikipedia.org/wiki/Vocative_case.
“Eye Gaze Response Interface Computer Aid (Erica) tracks Eye movement to enable hands-free computer operation,” UMD Communication Sciences and Disorders Tests New Technology, University of Minnesota Duluth, posted Jan. 19, 2005; 4 pages http://www.d.umn.edu/unirel/homepage/05/eyegaze.html.
“Real-time Hand Motion/Gesture Detection for HCI-Demo 2,” video clip, YouTube, posted Dec. 17, 2008 by smmy0705, 1 page; www.youtube.com/watch?v=mLT4CFLIi8A&feature=related.
“Simple Hand Gesture Recognition,” video clip, YouTube, posted Aug. 25, 2008 by pooh8210, 1 page; http://www.youtube.com/watch?v=F8GVeV0dYLM&feature=related.
Andreopoulos, Yiannis, et al., “In-Band Motion Compensated Temporal Filtering,” Signal Processing: Image Communication 19 (2004) 653-673, 21 pages http://medianetlab.ee.ucla.edu/papers/011.pdf.
Arulampalam, M. Sanjeev, et al., “A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking,” IEEE Transactions on Signal Processing, vol. 50, No. 2, Feb. 2002, 15 pages; http://www.cs.ubc.ca/˜murphyk/Software/Kalman/ParticleFilterTutorial.pdf.
Boros, S., “Policy-Based Network Management with SNMP,” Proceedings of the EUNICE 2000 Summer School Sep. 13-15, 2000, p. 3.
Cumming, Jonathan, “Session Border Control in IMS, an Analysis of the Requirements for Session Border Control in IMS Networks,” Sections 1.1, 1.1.1, 1.1.3, 1.1.4, 2.1.1, 3.2, 3.3.1, 5.2.3 and pp. 7-8, Data Connection, 2005.
Dornaika F., et al., “Head and Facial Animation Tracking Using Appearance-Adaptive Models and Particle Filters,” Jun. 27, 2004, 22 pages; Heudiasyc Research Lab; http://eprints.pascal-network.org/archive/00001231/01/rtvhci_chapter8.pdf.
Eisert, Peter, “Immersive 3-D Video Conferencing: Challenges, Concepts and Implementations,” Proceedings of SPIE Visual Communications and Image Processing (VCIP), Lugano, Switzerland, Jul. 2003; 11 pages; http://iphome.hhi.de/eisert/papers/vcip03.pdf.
Farrukh, A., et al., “Automated Segmentation of Skin-Tone Regions in Video Sequences,” Proceedings IEEE Students Conference, ISCON '02; Aug. 16-17, 2002; pp. 122-128.
Fiala, Mark, “Automatic Projector Calibration Using Self-Identifying Patterns,” National Research Council of Canada, Jun. 20-26, 2005; http://www.procams.org/procams2005/papers/procams05-36.pdf; 6 pages.
Foote, J., et al., “Flycam: Practical Panoramic Video and Automatic Camera Control,” in Proceedings of IEEE International Conference on Multimedia and Expo, vol. III, Jul. 30, 2000; pp. 1419-1422; http://citeseerx.ist.psu.edu/viewdoc/versions?doi=10.1.1.138.8686.
France Telecom R&D, “France Telecom's Magic Telepresence Wall - Human Productivity Lab,” 5 pages, retrieved and printed on May 17, 2010; http://www.humanproductivitylab.com/archive_blogs/2006/07/11/france_telecoms_magic_telepres_1.php.
Freeman, Professor Wilson T., Computer Vision Lecture Slides, “6.869 Advances in Computer Vision: Learning and Interfaces,” Spring 2005; 21 pages.
Garg, Ashutosh, et al., “Audio-Visual Speaker Detection Using Dynamic Bayesian Networks,” IEEE International Conference on Automatic Face and Gesture Recognition, 2000 Proceedings, 7 pages; http://www.ifp.illinois.edu/˜ashutosh/papers/FG00.pdf.
Gemmell, Jim, et al., “Gaze Awareness for Video-conferencing: A Software Approach,” IEEE MultiMedia, Oct.-Dec. 2000; vol. 7, No. 4, pp. 26-35.
Gluckman, Joshua, et al., “Rectified Catadioptric Stereo Sensors,” 8 pages, retrieved and printed on May 17, 2010; http://cis.poly.edu/˜gluckman/papers/cvpr00.pdf.
Gotchev, Atanas, “Computer Technologies for 3D Video Delivery for Home Entertainment,” International Conference on Computer Systems and Technologies; CompSysTech, Jun. 12-13, 2008; http://ecet.ecs.ru.acad.bg/cst08/docs/cp/Plenary/P.1.pdf; 6 pages.
Gries, Dan, “3D Particles Experiments in AS3 and Flash CS3, Dan's Comments,” [retrieved and printed on May 24, 2010] http://www.flashandmath.com/advanced/fourparticles/notes.html; 3 pages.
Guernsey, Lisa, “Toward Better Communication Across the Language Barrier,” Jul. 29, 1999; http://www.nytimes.com/1999/07/29/technology/toward-better-communication-across-the-language-barrier.html; 2 pages.
Guili, D., et al., “Orchestra!: A Distributed Platform for Virtual Musical Groups and Music Distance Learning over the Internet in Java™ Technology”; [retrieved and printed on Jun. 6, 2010] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=778626; 2 pages.
Gundavelli, S., et al., “Proxy Mobile IPv6,” Network Working Group, RFC 5213, Aug. 2008, 93 pages; http://tools.ietf.org/pdf/rfc5213.pdf.
Gussenhoven, Carlos, “Chapter 5 Transcription of Dutch Intonation,” Nov. 9, 2003, 33 pages; http://www.ru.nl/publish/pages/516003/todisun-ah.pdf.
Gvili, Ronen et al., “Depth Keying,” 3DV System Ltd., [Retrieved and printed on Dec. 5, 2011] 11 pages; http://research.microsoft.com/en-us/um/people/eyalofek/Depth%20Key/DepthKey.pdf.
Habili, Nariman, et al., “Segmentation of the Face and Hands in Sign Language Video Sequences Using Color and Motion Cues,” IEEE Transactions on Circuits and Systems for Video Technology, IEEE Service Center, vol. 14, No. 8, Aug. 1, 2004; ISSN: 1051-8215; XP011115755; pp. 1086-1097.
Hammadi, Nait Charif et al., “Tracking the Activity of Participants in a Meeting,” Machine Vision and Applications, Springer, Berlin, DE LNKD-DOI:10.1007/S00138-006-0015-5, vol. 17, No. 2, May 1, 2006, pp. 83-93, XP019323925; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.9832.
He, L., et al., “The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing,” Proc. SIGGRAPH, © 1996; http://research.microsoft.com/en-us/um/people/lhe/papers/siggraph96.vc.pdf; 8 pages.
Hepper, D., “Efficiency Analysis and Application of Uncovered Background Prediction in a Low BitRate Image Coder,” IEEE Transactions on Communications, vol. 38, No. 9, pp. 1578-1584, Sep. 1990.
Chien et al., “Efficient moving Object Segmentation Algorithm Using Background Registration Technique,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, No. 7, Jul. 2002, 10 pages.
EPO Feb. 25, 2011 Communication for EP09725288.6 (published as EP2277308); 4 pages.
EPO Aug. 15, 2011 Response to EPO Communication mailed Feb. 25, 2011 from European Patent Application No. 09725288.6; 15 pages.
EPO Nov. 3, 2011 Communication from European Application EP10710949.8; 2 pages.
EPO Mar. 12, 2012 Response to EP Communication dated Nov. 3, 2011 from European Application EP10710949.8; 15 pages.
EPO Mar. 20, 2012 Communication from European Application 09725288.6; 6 pages.
EPO Jul. 10, 2012 Response to EP Communication from European Application EP10723445.2.
EPO Sep. 24, 2012 Response to Mar. 20, 2012 EP Communication from European Application EP09725288.6.
Geys et al., “Fast Interpolated Cameras by Combining a GPU Based Plane Sweep With a Max-Flow Regularisation Algorithm,” Sep. 9, 2004; 3D Data Processing, Visualization and Transmission 2004, pp. 534-541.
PRC Aug. 3, 2012 SIPO First Office Action from Chinese Application No. 200980119121.5; 16 pages.
Hock, Hans Henrich, “Prosody vs. Syntax: Prosodic rebracketing of final vocatives in English,” 4 pages; [retrieved and printed on Mar. 3, 2011] http://speechprosody2010.illinois.edu/papers/100931.pdf.
Holographic Imaging, “Dynamic Holography for scientific uses, military heads up display and even someday HoloTV Using TI's DMD,” [retrieved and printed on Feb. 26, 2009] http://innovation.swmed.edu/research/instrumentation/res_inst_dev3d.html; 5 pages.
Hornbeck, Larry J., “Digital Light Processing™: A New MEMS-Based Display Technology,” [retrieved and printed on Feb. 26, 2009] http://focus.ti.com/pdfs/dlpdmd/17_Digital_Light_Processing_MEMS_display_technology.pdf; 22 pages.
IR Distribution Category @ Envious Technology, “IR Distribution Category,” [retrieved and printed on Apr. 22, 2009] http://www.envioustechnology.com.au/products/product-list.php?CID=305; 2 pages.
IR Trans - Products and Orders - Ethernet Devices, [retrieved and printed on Apr. 22, 2009] http://www.irtrans.de/en/shop/lan.php; 2 pages.
Isgro, Francesco et al., “Three-Dimensional Image Processing in the Future of Immersive Media,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, No. 3; XP011108796; ISSN: 1051-8215; Mar. 1, 2004; pp. 288-303.
Itoh, Hiroyasu, et al., “Use of a gain modulating framing camera for time-resolved imaging of cellular phenomena,” SPIE vol. 2979, 1997, pp. 733-740.
Jamoussi, Bamil, “Constraint-Based LSP Setup Using LDP,” MPLS Working Group, Sep. 1999, 34 pages; http://tools.ietf.org/html/draft-ietf-mpls-cr-ldp-03.
Jeyatharan, M., et al., “3GPP TFT Reference for Flow Binding,” MEXT Working Group, Mar. 2, 2010, 11 pages; http://www.ietf.org/id/draft-jeyatharan-mext-flow-tftemp-reference-00.txt.
Jiang, Minqiang, et al., “On Lagrange Multiplier and Quantizer Adjustment for H.264 Frame-layer Video Rate Control,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, Issue 5, May 2006, pp. 663-669.
Jong-Gook Ko et al., “Facial Feature Tracking and Head Orientation-Based Gaze Tracking,” ITC-CSCC 2000, International Technical Conference on Circuits/Systems, Jul. 11-13, 2000, 4 pages http://www.umiacs.umd.edu/˜knkim/paper/itc-cscc-2000-jgko.pdf.
Kannangara, C.S., et al., “Complexity Reduction of H.264 Using Lagrange Multiplier Methods,” IEEE Int. Conf. on Visual Information Engineering, Apr. 2005; www.rgu.ac.uk/files/h264_complexity_kannangara.pdf; 6 pages.
Kannangara, C.S., et al., “Low Complexity Skip Prediction for H.264 through Lagrangian Cost Estimation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, No. 2, Feb. 2006; www.rgu.ac.uk/files/h264_skippredict_richardson_final.pdf; 20 pages.
Kauff, Peter, et al., “An Immersive 3D Video-Conferencing System Using Shared Virtual Team User Environments,” Proceedings of the 4th International Conference on Collaborative Virtual Environments, XP040139458; Sep. 30, 2002; http://ip.hhi.de/imedia_G3/assets/pdfs/CVE02.pdf; 8 pages.
Kazutake, Uehira, “Simulation of 3D image depth perception in a 3D display using two stereoscopic displays at different depths,” Jan. 30, 2006; http://adsabs.harvard.edu/abs/2006SPIE.6055.408U; 2 pages.
Keijser, Jeroen, et al., “Exploring 3D Interaction in Alternate Control-Display Space Mappings,” IEEE Symposium on 3D User Interfaces, Mar. 10-11, 2007, pp. 17-24.
Kollarits, R.V., et al., “34.3: An Eye Contact Camera/Display System for Videophone Applications Using a Conventional Direct-View LCD,” © 1995 SID, ISSN 0097-0966X/95/2601, pp. 765-768; http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=47A1E7E028C26503975E633895D114EC?doi=10.1.1.42.1772&rep=rep1&type=pdf.
Kim, Y.H., et al., “Adaptive mode decision for H.264 encoder,” Electronics letters, vol. 40, Issue 19, pp. 1172-1173, Sep. 2004; 2 pages.
Koyama, S., et al. “A Day and Night Vision MOS Imager with Robust Photonic-Crystal-Based RGB-and-IR,” Mar. 2008, pp. 754-759; ISSN: 0018-9383; IEEE Transactions on Electron Devices, vol. 55, No. 3; http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4455782&isnumber=4455723.
Kwolek, B., “Model Based Facial Pose Tracking Using a Particle Filter,” Geometric Modeling and Imaging—New Trends, 2006 London, England Jul. 5-6, 2005, Piscataway, NJ, USA, IEEE LNKD-DOI: 10.1109/GMAI.2006.34 Jul. 5, 2006, pp. 203-208; XP010927285 [Abstract Only].
Lambert, “Polycom Video Communications,” © 2004 Polycom, Inc., Jun. 20, 2004; http://www.polycom.com/global/documents/whitepapers/video_communications_h.239_people_content_polycom_patented_technology.pdf.
Lawson, S., “Cisco Plans TelePresence Translation Next Year,” Dec. 9, 2008; http://www.pcworld.com/article/155237/.html?tk=rss_news; 2 pages.
Lee, J. and Jeon, B., “Fast Mode Decision for H.264,” ISO/IEC MPEG and ITU-T VCEG Joint Video Team, Doc. JVT-J033, Dec. 2003; http://media.skku.ac.kr/publications/paper/IntC/Ijy_ICME2004.pdf; 4 pages.
Liu, Shan, et al., “Bit-Depth Scalable Coding for High Dynamic Range Video,” SPIE Conference on Visual Communications and Image Processing, Jan. 2008; 12 pages http://www.merl.com/papers/docs/TR2007-078.pdf.
Liu, Z., “Head-Size Equalization for Better Visual Perception of Video Conferencing,” Proceedings, IEEE International Conference on Multimedia & Expo (ICME2005), Jul. 6-8, 2005, Amsterdam, The Netherlands; http://research.microsoft.com/users/cohen/HeadSizeEqualizationICME2005.pdf; 4 pages.
Mann, S., et al., “Virtual Bellows: Constructing High Quality Still from Video,” Proceedings, First IEEE International Conference on Image Processing ICIP-94, Nov. 13-16, 1994, Austin, TX; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.8405; 5 pages.
Marvin Imaging Processing Framework, “Skin-colored pixels detection using Marvin Framework,” video clip, YouTube, posted Feb. 9, 2010 by marvinproject, 1 page; http://www.youtube.com/user/marvinproject#p/a/u/0/3ZuQHYNlcrl.
Miller, Gregor, et al., “Interactive Free-Viewpoint Video,” Centre for Vision, Speech and Signal Processing, [retrieved and printed on Feb. 26, 2009], http://www.ee.surrey.ac.uk/CVSSP/VMRG/Publications/miller05cvmp.pdf, 10 pages.
Miller, Paul, “Microsoft Research patents controller-free computer input via EMG muscle sensors,” Engadget.com, Jan. 3, 2010, 1 page; http://www.engadget.com/2010/01/03/microsoft-research-patents-controller-free-computer-input-via-em/.
Minoru from Novo is the world's first consumer 3D Webcam, Dec. 11, 2008; http://www.minoru3d.com; 4 pages.
Mitsubishi Electric Research Laboratories, copyright 2009 [retrieved and printed on Feb. 26, 2009], http://www.merl.com/projects/3dtv, 2 pages.
Nakaya, Y., et al. “Motion Compensation Based on Spatial Transformations,” IEEE Transactions on Circuits and Systems for Video Technology, Jun. 1994, Abstract Only http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5% 2F76%2F7495%2F00305878.pdf%3Farnumber%3D305878&authDecision=-203.
National Training Systems Association Home—Main, Interservice/Industry Training, Simulation & Education Conference, Dec. 1-4, 2008; http://ntsa.metapress.com/app/home/main.asp?referrer=default; 1 page.
Oh, Hwang-Seok, et al., “Block-Matching Algorithm Based on Dynamic Search Window Adjustment,” Dept. of CS, KAIST, 1997, 6 pages; http://citeseerx.ist.psu.edu/viewdoc/similar?doi=10.1.1.29.8621&type=ab.
Opera Over Cisco TelePresence at Cisco Expo 2009, in Hannover Germany—Apr. 28, 29, posted on YouTube on May 5, 2009; http://www.youtube.com/watch?v=xN5jNH5E-38; 1 page.
Patterson, E.K., et al., “Moving-Talker, Speaker-Independent Feature Study and Baseline Results Using the CUAVE Multimodal Speech Corpus,” EURASIP Journal on Applied Signal Processing, vol. 11, Oct. 2002, 15 pages; http://www.clemson.edu/ces/speech/papers/CUAVE_Eurasip2002.pdf.
Payatagool, Chris, “Orchestral Manoeuvres in the Light of Telepresence,” Telepresence Options, Nov. 12, 2008; http://www.telepresenceoptions.com/2008/11/orchestral_manoeuvres; 2 pages.
PCT May 15, 2006 International Preliminary Report on Patentability for PCT International Application PCT/US2004/021585, 6 pages.
PCT Sep. 25, 2007 Notification of Transmittal of the International Search Report from PCT/US06/45895.
PCT Sep. 2, 2008 International Preliminary Report on Patentability (1 page) and the Written Opinion of the ISA (4 pages) from PCT/US2006/045895.
PCT Sep. 11, 2008 Notification of Transmittal of the International Search Report from PCT/US07/09469.
PCT Nov. 4, 2008 International Preliminary Report on Patentability (1 page) and the Written Opinion of the ISA (8 pages) from PCT/US2007/009469.
PCT Nov. 5, 2010 International Search Report from PCT/US2010/024059; 4 pages.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration for PCT/US2009/064061 mailed Feb. 23, 2010; 14 pages.
PCT International Search Report mailed Aug. 24, 2010 for PCT/US2010/033880; 4 pages.
PCT International Preliminary Report on Patentability mailed Aug. 26, 2010 for PCT/US2009/001070; 10 pages.
PCT Oct. 12, 2011 International Search Report and Written Opinion of the ISA from PCT/US2011/050380.
PCT Nov. 24, 2011 International Preliminary Report on Patentability from International Application Serial No. PCT/US2010/033880; 6 pages.
PCT Aug. 23, 2011 International Preliminary Report on Patentability and Written Opinion of the ISA from PCT/US2010/024059; 6 pages.
PCT Sep. 13, 2011 International Preliminary Report on Patentability and the Written Opinion of the ISA from PCT/US2010/026456; 5 pages.
PCT Jan. 23, 2012 International Search Report and Written Opinion of the ISA from International Application Serial No. PCT/US2011/060579; 10 pages.
PCT Jan. 23, 2012 International Search Report and Written Opinion of the ISA from International Application Serial No. PCT/US2011/060584; 11 pages.
PCT Feb. 20, 2012 International Search Report and Written Opinion of the ISA from International Application Serial No. PCT/US2011/061442; 12 pages.
Kolsch, Mathias, “Vision Based Hand Gesture Interfaces for Wearable Computing and Virtual Environments,” A Dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science, University of California, Santa Barbara, Nov. 2004, 288 pages; http://fulfillment.umi.com/dissertations/b7afbcb56ba72fdb14d26dfccc6b470f/1291487062/3143800.pdf.
OptoIQ, “Vision + Automation Products - VideometerLab 2,” [retrieved and printed on Mar. 18, 2010], http://www.optoiq.com/optoiq-2/en-us/index/machine-vision-imaging-processing/display/vsd-articles-tools-template.articles.vision-systems-design.volume-11.issue-10.departments.new-products.vision-automation-products.html; 11 pages.
OptoIQ, “Anti-Speckle Technique Uses Dynamic Optics,” Jun. 1, 2009; http://www.optoiq.com/index/photonics-technologies-applications/lfw-display/lfw-article-display/363444/articles/optoiq2/photonics-technologies/technology-products/optical-components/optical-mems/2009/12/anti-speckle-technique-uses-dynamic-optics/QP129867/cmpid=EnlOptoLFWJanuary132010.html; 2 pages.
OptoIQ, “Smart Camera Supports Multiple Interfaces,” Jan. 22, 2009; http://www.optoiq.com/index/machine-vision-imaging-processing/display/vsd-article-display/350639/articles/vision-systems-design/daily-product-2/2009/01/smart-camera-supports-multiple-interfaces.html; 2 pages.
OptoIQ, “Vision Systems Design—Machine Vision and Image Processing Technology,” [retrieved and printed on Mar. 18, 2010], http://www.optoiq.com/index/machine-vision-imaging-processing.html; 2 pages.
Perez, Patrick, et al., “Data Fusion for Visual Tracking with Particles,” Proceedings of the IEEE, vol. XX, No. XX, Feb. 2004, 18 pages http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.2480.
Pixel Tools, “Rate Control and H.264: H.264 rate control algorithm dynamically adjusts encoder parameters,” [retrieved and printed on Jun. 10, 2010] http://www.pixeltools.com/rate_control_paper.html; 7 pages.
Potamianos, G., et al., “An Image Transform Approach for HMM Based Automatic Lipreading,” in Proceedings of IEEE ICIP, vol. 3, 1998, 5 pages; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.6802.
Radhika, N., et al., “Mobile Dynamic reconfigurable Context aware middleware for Adhoc smart spaces,” vol. 22, 2008, http://www.acadjournal.com/2008/V22/part6/p7; 3 pages.
Rayvel Business-to-Business Products, copyright 2004 [retrieved and printed on Feb. 24, 2009], http://www.rayvel.com/b2b.html; 2 pages.
Richardson, I.E.G., et al., “Fast H.264 Skip Mode Selection Using an Estimation Framework,” Picture Coding Symposium (Beijing, China), Apr. 2006; www.rgu.ac.uk/files/richardson_fast_skip_estmation_pcs06.pdf; 6 pages.
Richardson, Iain, et al., “Video Encoder Complexity Reduction by Estimating Skip Mode Distortion,” Image Communication Technology Group; [Retrieved and printed Oct. 21, 2010] 4 pages; http://www4.rgu.ac.uk/files/ICIP04_richardson_zhao_final.pdf.
Rikert, T.D., et al., “Gaze Estimation using Morphable models,” IEEE International Conference on Automatic Face and Gesture Recognition, Apr. 1998; 7 pgs. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.9472.
Robust Face Localisation Using Motion, Colour & Fusion; Proc. VIIth Digital Image Computing: Techniques and Applications, Sun C. et al (Eds.), Sydney; XP007905630; pp. 899-908; Dec. 10, 2003; http://www.cmis.csiro.au/Hugues.Talbot/dicta2003/cdrom/pdf/0899.pdf.
Satoh, Kiyohide et al., “Passive Depth Acquisition for 3D Image Displays”, IEICE Transactions on Information and Systems, Information Systems Society, Tokyo, JP, Sep. 1, 1994, vol. E77-D, No. 9, pp. 949-957.
School of Computing, “Bluetooth over IP for Mobile Phones,” 2005; http://www.computing.dcu.ie/wwwadmin/fyp-abstract/list/fyp—details05.jsp?year=2005&number=51470574; 1 page.
Schroeder, Erica, “The Next Top Model—Collaboration,” Collaboration, The Workspace: A New World of Communications and Collaboration, Mar. 9, 2009; http://blogs.cisco.com/collaboration/comments/the_next_top_model; 3 pages.
Sena, “Industrial Bluetooth,” [retrieved and printed on Apr. 22, 2009] http://www.sena.com/products/industrial_bluetooth; 1 page.
Shaffer, Shmuel, “Translation—State of the Art” presentation; Jan. 15, 2009; 22 pages.
Shi, C. et al., “Automatic Image Quality Improvement for Videoconferencing,” IEEE ICASSP May 2004; http://research.microsoft.com/pubs/69079/0300701.pdf; 4 pages.
Shum, H.-Y, et al., “A Review of Image-Based Rendering Techniques,” in SPIE Proceedings vol. 4067(3); Proceedings of the Conference on Visual Communications and Image Processing 2000, Jun. 20-23, 2000, Perth, Australia; pp. 2-13; https://research.microsoft.com/pubs/68826/review_image_rendering.pdf.
Smarthome, “IR Extender Expands Your IR Capabilities,” [retrieved and printed on Apr. 22, 2009], http://www.smarthome.com/8121.html; 3 pages.
Soliman, H., et al., “Flow Bindings in Mobile IPv6 and NEMO Basic Support,” IETF MEXT Working Group, Nov. 9, 2009, 38 pages; http://tools.ietf.org/html/draft-ietf-mext-flow-binding-04.
Sonoma Wireworks Forums, “Jammin on Rifflink,” [retrieved and printed on May 27, 2010] http://www.sonomawireworks.com/forums/viewtopic.php?id=2659; 5 pages.
Sonoma Wireworks Rifflink, [retrieved and printed on Jun. 2, 2010] http://www.sonomawireworks.com/rifflink.php; 3 pages.
Soohuan, Kim, et al., “Block-based face detection scheme using face color and motion estimation,” Real-Time Imaging VIII; Jan. 20-22, 2004, San Jose, CA; vol. 5297, No. 1; Proceedings of the SPIE—The International Society for Optical Engineering SPIE—Int. Soc. Opt. Eng USA ISSN: 0277-786X; XP007905596; pp. 78-88.
Sudan, Ranjeet, “Signaling in MPLS Networks with RSVP-TE-Technology Information,” Telecommunications, Nov. 2000, 3 pages; http://findarticles.com/p/articles/mi_m0TLC/is_11_34/ai_67447072/.
Sullivan, Gary J., et al., “Video Compression—From Concepts to the H.264/AVC Standard,” Proceedings IEEE, vol. 93, No. 1, Jan. 2005; http://ip.hhi.de/imagecom_G1/assets/pdfs/pieee_sullivan_wiegand_2005.pdf; 14 pages.
Sun, X., et al., “Region of Interest Extraction and Virtual Camera Control Based on Panoramic Video Capturing,” IEEE Trans. Multimedia, Oct. 27, 2003; http://vision.ece.ucsb.edu/publications/04mmXdsun.pdf; 14 pages.
Super Home Inspectors or Super Inspectors, [retrieved and printed on Mar. 18, 2010] http://www.umrt.com/PageManager/Default.aspx/PageID=2120325; 3 pages.
Tan, Kar-Han, et al., “Appearance-Based Eye Gaze Estimation,” In Proceedings IEEE WACV'02, 2002, 5 pages; http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.19.8921.
Total immersion, Video Gallery, copyright 2008-2009 [retrieved and printed on Feb. 26, 2009], http://www.t-immersion.com/en,video-gallery,36.html, 1 page.
Trevor Darrell, “A Real-Time Virtual Mirror Display,” 1 page, Sep. 9, 1998; http://people.csail.mit.edu/trevor/papers/1998-021/node6.html.
Trucco, E., et al., “Real-Time Disparity Maps for Immersive 3-D Teleconferencing by Hybrid Recursive Matching and Census Transform,” [retrieved and printed on May 4, 2010] http://server.cs.ucf.edu/˜vision/papers/VidReg-final.pdf; 9 pages.
Tsapatsoulis, N., et al., “Face Detection for Multimedia Applications,” Proceedings of the ICIP Sep. 10-13, 2000, Vancouver, BC, Canada; vol. 2, pp. 247-250.
Tsapatsoulis, N., et al., “Face Detection in Color Images and Video Sequences,” 10th Mediterranean Electrotechnical Conference (MELECON), May 29-31, 2000; vol. 2; pp. 498-502.
Veratech Corp., “Phantom Sentinel,” © VeratechAero 2006, 1 page; http://www.veratechcorp.com/phantom.html.
Vertegaal, Roel, et al., “GAZE-2: Conveying Eye Contact in Group Video Conferencing Using Eye-Controlled Camera Direction,” CHI 2003, Apr. 5-10, 2003, Fort Lauderdale, FL; Copyright 2003 ACM 1-58113-630-7/03/0004; 8 pages; http://www.hml.queensu.ca/papers/vertegaalchi0403.pdf.
Wachs, J., et al., “A Real-time Hand Gesture System Based on Evolutionary Search,” Vision, 3rd Quarter 2006, vol. 22, No. 3, 18 pages; http://web.ics.purdue.edu/˜jpwachs/papers/3q06vi.pdf.
Wang, Hualu, et al., “A Highly Efficient System for Automatic Face Region Detection in MPEG Video,” IEEE Transactions on Circuits and Systems for Video Technology; vol. 7, Issue 4; 1997; pp. 615-628.
Wang, Robert and Jovan Popovic, “Bimanual rotation and scaling,” video clip, YouTube, posted by rkeltset on Apr. 14, 2010, 1 page; http://www.youtube.com/watch?v=7TPFSCX79U.
Wang, Robert and Jovan Popovic, “Desktop virtual reality,” video clip, YouTube, posted by rkeltset on Apr. 8, 2010, 1 page; http://www.youtube.com/watch?v=9rBtm62Lkfk.
Wang, Robert and Jovan Popovic, “Gestural user input,” video clip, YouTube, posted by rkeltset on May 19, 2010, 1 page; http://www.youtube.com/watch?v=3JWYTtBjdTE.
Wang, Robert and Jovan Popovic, “Manipulating a virtual yoke,” video clip, YouTube, posted by rkeltset on Jun. 8, 2010, 1 page; http://www.youtube.com/watch?v=UfgGOO2uM.
Wang, Robert and Jovan Popovic, “Real-Time Hand-Tracking with a Color Glove, ACM Transaction on Graphics,” 4 pages, [Retrieved and printed on Dec. 1, 2010] http://people.csail.mit.edu/rywang/hand.
Wang, Robert and Jovan Popovic, “Real-Time Hand-Tracking with a Color Glove, ACM Transaction on Graphics” (SIGGRAPH 2009), 28(3), Aug. 2009; 8 pages http://people.csail.mit.edu/rywang/handtracking/s09-hand-tracking.pdf.
Wang, Robert and Jovan Popovic, “Tracking the 3D pose and configuration of the hand,” video clip, YouTube, posted by rkeltset on Mar. 31, 2010, 1 page; http://www.youtube.com/watch?v=JOXwJkWP6Sw.
Weinstein et al., “Emerging Technologies for Teleconferencing and Telepresence,” Wainhouse Research 2005; http://www.ivci.com/pdf/whitepaper-emerging-technologies-for-teleconferencing-and-telepresence.pdf.
Westerink, P.H., et al., “Two-pass MPEG-2 variable-bitrate encoding,” IBM Journal of Research and Development, Jul. 1991, vol. 43, No. 4; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.128.421; 18 pages.
Wiegand, T., et al., “Efficient mode selection for block-based motion compensated video coding,” Proceedings, 2005 International Conference on Image Processing IIP 2005, pp. 2559-2562; citeseer.ist.psu.edu/wiegand95efficient.html.
Wiegand, T., et al., “Rate-distortion optimized mode selection for very low bit rate video coding and the emerging H.263 standard,” IEEE Trans. Circuits Syst. Video Technol., Apr. 1996, vol. 6, No. 2, pp. 182-190.
Wi-Fi Protected Setup, from Wikipedia, Sep. 2, 2010, 3 pages; http://en.wikipedia.org/wiki/Wi-Fi_Protected_Setup.
Wilson, Mark, “Dreamoc 3D Display Turns Any Phone Into Hologram Machine,” Oct. 30, 2008; http://gizmodo.com/5070906/dreamoc-3d-display-turns-any-phone-into-hologram-machine; 2 pages.
WirelessDevNet, Melody Launches Bluetooth Over IP, [retrieved and printed on Jun. 5, 2010] http://www.wirelessdevnet.com/news/2001/155/news5.html; 2 pages.
Xia, F., et al., “Home Agent Initiated Flow Binding for Mobile IPv6,” Network Working Group, Oct. 19, 2009, 15 pages; http://tools.ietf.org/html/draft-xia-mext-ha-init-flow-binding-01.txt.
Xin, Jun, et al., “Efficient macroblock coding-mode decision for H.264/AVC video coding,” Technical Report MERL 2004-079, Mitsubishi Electric Research Laboratories, Jan. 2004; www.merl.com/publications/TR2004-079/; 12 pages.
Yang, Jie, et al., “A Real-Time Face Tracker,” Proceedings 3rd IEEE Workshop on Applications of Computer Vision; 1996; Dec. 2-4, 1996; pp. 142-147; http://www.ri.cmu.edu/pub_files/pub1/yang_jie_1996_1/yang_jie_1996_1.pdf.
Yang, Ming-Hsuan, et al., “Detecting Faces in Images: A Survey,” vol. 24, No. 1; Jan. 2002; pp. 34-58; http://vision.ai.uiuc.edu/mhyang/papers/pami02a.pdf.
Yang, Ruigang, et al., “Real-Time Consensus-Based Scene Reconstruction using Commodity Graphics Hardware,” Department of Computer Science, University of North Carolina at Chapel Hill; 2002; http://www.cs.unc.edu/Research/stc/publications/yang_pacigra2002.pdf; 10 pages.
Yang, Xiaokang, et al., Rate Control for H.264 with Two-Step Quantization Parameter Determination but Single-Pass Encoding, EURASIP Journal on Applied Signal Processing, Jun. 2006; http://downloads.hindawi.com/journals/asp/2006/063409.pdf; 13 pages.
Yegani, P. et al., “GRE Key Extension for Mobile IPv4,” Network Working Group, Feb. 2006, 11 pages; http://tools.ietf.org/pdf/draft-yegani-gre-key-extension-01.pdf.
Yoo, Byounghun, et al., “Image-Based Modeling of Urban Buildings Using Aerial Photographs and Digital Maps,” Transactions in GIS, 2006, 10(3): p. 377-394.
Zhong, Ren, et al., “Integration of Mobile IP and MPLS,” Network Working Group, Jul. 2000, 15 pages; http://tools.ietf.org/html/draft-zhong-mobile-ip-mpls-01.
PCT Mar. 21, 2013 International Preliminary Report on Patentability from International Application Serial No. PCT/US2011/050380.
PRC Apr. 3, 2013 SIPO Second Office Action from Chinese Application No. 200980119121.5; 16 pages.
PCT May 30, 2013 International Preliminary Report on Patentability and Written Opinion from the International Searching Authority for International Application Serial No. PCT/US2011/061442; 8 pages.
PCT May 30, 2013 International Preliminary Report on Patentability and Written Opinion from the International Searching Authority for International Application Serial No. PCT/US2011/060579; 6 pages.
PCT May 30, 2013 International Preliminary Report on Patentability and Written Opinion from the International Searching Authority for International Application Serial No. PCT/US2011/060584; 7 pages.
PRC Jun. 18, 2013 Response to SIPO Second Office Action from Chinese Application No. 200980119121.5; 5 pages.
PRC Jul. 9, 2013 SIPO Third Office Action from Chinese Application No. 200980119121.5; 15 pages.
PRC Sep. 24, 2013 Response to SIPO Third Office Action from Chinese Application No. 200980119121.5; 5 pages.
PRC Dec. 18, 2012 Response to SIPO First Office Action from Chinese Application No. 200980119121.5; 16 pages.
PRC Jan. 7, 2013 SIPO Second Office Action from Chinese Application Serial No. 200980105262.1.
Klint, Josh, “Deferred Rendering in Leadwerks Engine,” Copyright Leadwerks Corporation © 2008; http://www.leadwerks.com/files/Deferred_Rendering_in_Leadwerks_Engine.pdf; 10 pages.
“Oblong Industries is the developer of the g-speak spatial operation environment,” Oblong Industries Information Page, 2 pages, [Retrieved and printed on Dec. 1, 2010] http://oblong.com.
Underkoffler, John, “G-Speak Overview 1828121108,” video clip, Vimeo.com, 1 page, [Retrieved and printed on Dec. 1, 2010] http://vimeo.com/2229299.
Kramer, Kwindla, “Mary Ann de Lares Norris at Thinking Digital,” Oblong Industries, Inc. Web Log, Aug. 24, 2010; 1 page; http://oblong.com/articles/OBS6hEeJmoHoCwgJ.html.
“Mary Ann de Lares Norris,” video clip, Thinking Digital 2010 Day Two, Thinking Digital Videos, May 27, 2010, 3 pages; http://videos.thinkingdigital.co.uk/2010/05/mary-ann-de-lares-norris-oblong/.
Kramer, Kwindla, “Oblong at TED,” Oblong Industries, Inc. Web Log, Jun. 6, 2010, 1 page; http://oblong.com/article/OB22LFIS1NVyrOmR.html.
Video on TED.com, Pranav Mistry: the Thrilling Potential of SixthSense Technology (5 pages) and Interactive Transcript (5 pages), retrieved and printed on Nov. 30, 2010; http://www.ted.com/talks/pranav_mistry_the_thrilling_potential_of_sixthsense_technology.html.
“John Underkoffler points to the future of UI,” video clip and interactive transcript, Video on TED.com, Jun. 2010, 6 pages; http://www.ted.com/talks/john_underkoffler_drive_3d_data_with_a_gesture.html.
Kramer, Kwindla, “Oblong on Bloomberg TV,” Oblong Industries, Inc. Web Log, Jan. 28, 2010, 1 page; http://oblong.com/article/OAN_1KD9q990PEnw.html.
Kramer, Kwindla, “g-speak at RISD, Fall 2009,” Oblong Industries, Inc. Web Log, Oct. 29, 2009, 1 page; http://oblong.com/article/09uW060q6xRIZYvm.html.
Kramer, Kwindla, “g-speak + TMG,” Oblong Industries, Inc. Web Log, Mar. 24, 2009, 1 page; http://oblong.com/article/08mM77zpYMm7kFtv.html.
“g-stalt version 1,” video clip, YouTube.com, posted by zigg1es on Mar. 15, 2009, 1 page; http://youtube.com/watch?v=k8ZAql4mdvk.
Underkoffler, John, “Carlton Sparrell speaks at MIT,” Oblong Industries, Inc. Web Log, Oct. 30, 2009, 1 page; http://oblong.com/article/09usAB4l1Ukb6CPw.html.
Underkoffler, John, “Carlton Sparrell at MIT Media Lab,” video clip, Vimeo.com, 1 page, [Retrieved and printed Dec. 1, 2010] http://vimeo.com/7355992.
Underkoffler, John, “Oblong at Altitude: Sundance 2009,” Oblong Industries, Inc. Web Log, Jan. 20, 2009, 1 page; http://oblong.com/article/08Sr62ron_2akg0D.html.
Underkoffler, John, “Oblong's tamper system 1801011309,” video clip, Vimeo.com, 1 page, [Retrieved and printed Dec. 1, 2010] http://vimeo.com/2821182.
Feld, Brad, “Science Fact,” Oblong Industries, Inc. Web Log, Nov. 13, 2008, 2 pages, http://oblong.com/article/084H-PKI5Tb9l4Ti.html.
Kwindla Kramer, “g-speak in slices,” Oblong Industries, Inc. Web Log, Nov. 13, 2008, 6 pages; http://oblong.com/article/0866JqfNrFg1NeuK.html.
Underkoffler, John, “Origins: arriving here,” Oblong Industries, Inc. Web Log, Nov. 13, 2008, 5 pages; http://oblong.com/article/085zBpRSY9JeLv2z.html.
Rishel, Christian, “Commercial overview: Platform and Products,” Oblong Industries, Inc., Nov. 13, 2008, 3 pages; http://oblong.com/article/086E19gPvDcktAf9.html.
Related Publications (1)
Number Date Country
20120127259 A1 May 2012 US