System and method for depth-guided filtering in a video conference environment

Information

  • Patent Grant
  • Patent Number
    9,681,154
  • Date Filed
    Thursday, December 6, 2012
  • Date Issued
    Tuesday, June 13, 2017
  • Inventors
  • Original Assignees
    • PATENT CAPITAL GROUP (San Jose, CA, US)
  • Examiners
    • Dang; Hung
  • Agents
    • Patent Capital Group
Abstract
A method is provided in one example embodiment that includes generating a depth map that corresponds to a video image and filtering the depth map with the video image to create a filtered depth map. The video image can be filtered with the filtered depth map to create an image. In one example implementation, the video image is filtered using extended depth-guided filtering that is incorporated into a video encoding-decoding loop.
Description
TECHNICAL FIELD

This disclosure relates in general to the field of communications, and more particularly, to a system and a method for depth-guided filtering in a video conference environment.


BACKGROUND

Video architectures have grown in complexity in recent times. Some video architectures can deliver real-time, face-to-face interactions between people using advanced visual, audio, and collaboration technologies. In certain architectures, service providers may offer sophisticated video conferencing services for their end users, which can simulate an “in-person” meeting experience over a network. The ability to enhance video encoding and decoding under bitrate constraints during a video conference presents a significant challenge to developers and designers, who attempt to offer a video conferencing solution that is realistic and that mimics a real-life meeting.





BRIEF DESCRIPTION OF THE DRAWINGS

To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:



FIG. 1 is a simplified block diagram illustrating an example embodiment of a communication system in accordance with one embodiment of the present disclosure;



FIG. 2A is a simplified block diagram illustrating possible example details associated with one embodiment of the present disclosure;



FIG. 2B is a simplified block diagram illustrating possible example details associated with one embodiment of the present disclosure;



FIG. 3 is a simplified block diagram illustrating possible example details associated with one embodiment of the present disclosure;



FIG. 4 is a simplified block diagram illustrating possible example details associated with one embodiment of the present disclosure; and



FIG. 5 is a simplified flowchart illustrating potential operations associated with the communication system in accordance with one embodiment of the present disclosure.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview


A method is provided in one example embodiment that includes generating a depth map that corresponds to a video image and filtering the depth map with the video image to create a filtered depth map. The video image can be filtered with the filtered depth map to create an image. In one example implementation, the video image is filtered using extended depth-guided filtering that is incorporated into a video encoding-decoding loop.


In more detailed embodiments, the method may also include using the video image to create a pyramid of video images and using the depth map to create a pyramid of depth maps. The pyramid of video images may be combined with the pyramid of depth maps, layered into multiple bit streams, and cascaded to create the image. The term ‘pyramid’ in this context refers to any suitable combination, integration, formulation, arrangement, hierarchy, organization, composition, make-up, structure, format, or design associated with one or more video images. The method may further include using upsampling when the depth map has a lower resolution than the video image. The term ‘upsampling’ in this context simply refers to a process of increasing a sampling rate of a signal (by any suitable level).


In another example, filtering the depth map may include removing noise from the depth map. In yet another example, filtering the depth map may include filling in missing data from the depth map. The method may also include encoding the image into a bit stream for transmission over a network.


Example Embodiments


Turning to FIG. 1, FIG. 1 is a simplified schematic diagram illustrating a communication system 10 for conducting a video conference in accordance with one embodiment of the present disclosure. Communication system 10 includes a multipoint manager module 18 and multiple endpoints associated with various end users of the video conference. Multipoint manager module 18 includes an extended depth-guided filtering (EDGF) module 20, a video encoder 22, a processor 24a, and a memory 26a.


In general, endpoints may be geographically separated, where in this particular example, a plurality of endpoints 12a-12c are located in San Jose, Calif. and remote endpoints (not shown) are located in Chicago, Ill. Note that the numerical and letter designations assigned to the endpoints do not connote any type of hierarchy; the designations are arbitrary and have been used for purposes of teaching only. These designations should not be construed in any way to limit their capabilities, functionalities, or applications in the potential environments that may benefit from the features of communication system 10.


In this example, each endpoint 12a-12c is fitted discreetly along a desk and is proximate to its associated participant. Such endpoints could be provided in any other suitable location, as FIG. 1 only offers one of a multitude of possible implementations for the concepts presented herein. In one example implementation, the endpoints are videoconferencing endpoints, which can assist in receiving and communicating video and audio data. Other types of endpoints are certainly within the broad scope of the outlined concepts, and some of these example endpoints are further described below. Each endpoint 12a-12c is configured to interface with the multipoint manager module 18, which helps to coordinate and to process information being transmitted by the end users.


As illustrated in FIG. 1, a number of image capture devices 14a-14c and displays 16a-16c are provided to interface with endpoints 12a-12c, respectively. Displays 16a-16c render images to be seen by conference participants and, in this particular example, reflect a three-display design (e.g., a ‘triple’). Note that as used herein in this Specification, the term ‘display’ is meant to connote any element that is capable of delivering image data (inclusive of video information), text, sound, audiovisual data, etc. to participants. This would necessarily be inclusive of any screen-cubes, panel, plasma element, television (which may be high-definition), monitor, computer interface, screen, Telepresence devices (inclusive of Telepresence boards, panels, screens, surfaces, etc.), or any other suitable element that is capable of delivering/rendering/projecting (from front or back) such information.


The components of communication system 10 may use specialized applications and hardware to create a system that can leverage a network. Communication system 10 can use standard IP technology and can operate on an integrated voice, video, and data network. The system can also support high-quality, real-time voice and video communications using broadband connections. It can further offer capabilities for ensuring quality of service (QoS), security, reliability, and high availability for high-bandwidth applications such as video. Power and Ethernet connections for all end users can be provided. Participants can use their laptops to access data for the meeting, join a meeting place protocol or a Web session, or stay connected to other applications throughout the meeting.


For purposes of illustrating certain example techniques of communication system 10, it is important to understand certain image processing techniques and the communications that may be traversing the network. The following foundational information may be viewed as a basis from which the present disclosure may be properly explained.


Conceptually, an image may be described as any electronic element (e.g., an artifact) that reproduces the form of a subject, such as an object or a scene. In many contexts, an image may be an optically formed duplicate or reproduction of a subject, such as a two-dimensional photograph of an object or scene. In a broader sense, an image may also include any two-dimensional representation of information, such as a drawing, painting, or map. A video is a sequence of images, in which each still image is generally referred to as a “frame.”


A digital image, in general terms, is a numeric representation of an image. A digital image is most commonly represented as a grid (rows and columns) of numeric values, in which each value is a picture element (i.e., a “pixel”). A pixel holds quantized values that represent the intensity (or “brightness”) of a given color at any specific point in the two-dimensional space of the image. A digital image can be classified generally according to the number and nature of those values (samples), such as binary, grayscale, or color. Typically, pixels are stored in a computer memory as a two-dimensional array of small integers (i.e., a raster image or a raster map).
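
For illustration only, the following sketch (in Python with NumPy, neither of which is required by the disclosure) shows such a raster image stored as a two-dimensional array of small integers:

```python
import numpy as np

# A raster image: rows x columns of quantized intensity values (pixels).
height, width = 4, 6
frame = np.zeros((height, width), dtype=np.uint8)   # 8-bit grayscale, all black
frame[1, 2] = 255                                    # one bright pixel
print(frame.shape, frame.dtype, int(frame.max()))    # (4, 6) uint8 255
```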


An image (or video) may be captured by optical devices having a sensor that converts light into electrical charges, such as a digital camera or a scanner, for example. The electrical charges can then be converted into digital values. Some digital cameras give access to almost all the data captured by the camera, using a raw image format. An image can also be synthesized from arbitrary non-image information, such as mathematical functions or three-dimensional geometric models.


Images from digital image capture devices often receive further processing to improve their quality and/or to reduce the consumption of resources, such as memory or bandwidth. For example, a digital camera frequently includes a dedicated digital image-processing unit (or chip) to convert the raw data from the image sensor into a color-corrected image in a standard image file format. Image processing in general includes any form of signal processing for which the input is an image, such as a photograph or video frame. The output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.


Digital images can be coded (or compressed) to reduce or remove irrelevance and redundancy from the image data to improve storage and/or transmission efficiency. For example, general-purpose compression generally includes entropy encoding to remove statistical redundancy from data. However, entropy encoding is frequently not very effective for image data without an image model that attempts to represent a signal in a form that is more readily compressible. Such models exploit the subjective redundancy of images (and video). A motion model that estimates and compensates for motion can also be included to exploit significant temporal redundancy usually found in video.


An image encoder usually processes image data in blocks of samples. Each block can be transformed (e.g., with a discrete cosine transform) into spatial frequency coefficients. Energy in the transformed image data tends to be concentrated in a few significant coefficients; other coefficients are usually close to zero or insignificant. The transformed image data can be quantized by dividing each coefficient by an integer and discarding the remainder, typically leaving very few non-zero coefficients, which can readily be encoded with an entropy encoder. In video, the amount of data to be coded can be reduced significantly if the previous frame is subtracted from the current frame.
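
As a rough illustration of this block transform and quantization step (a sketch only; the disclosure does not prescribe a particular transform library or quantization step size), the following Python code applies a 2-D DCT to an 8x8 block and quantizes the coefficients by integer division:

```python
import numpy as np
from scipy.fft import dctn

def transform_and_quantize(block, q_step=16):
    """Apply a 2-D DCT to an 8x8 block, then quantize by dividing each
    coefficient by an integer step and discarding the remainder."""
    coeffs = dctn(block.astype(np.float64), norm='ortho')
    return np.trunc(coeffs / q_step).astype(np.int32)

# A smooth block (typical of natural image content) concentrates its energy in
# a few low-frequency coefficients, so most quantized values come out zero.
ramp = np.arange(8, dtype=np.float64)
block = 100 + 4 * ramp[None, :] + 2 * ramp[:, None]
q = transform_and_quantize(block)
print(np.count_nonzero(q), "non-zero coefficients out of 64")
```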


Digital image processing often also includes some form of filtering intended to improve the quality of an image, such as by reducing noise and other unwanted artifacts. Image noise can be generally defined as random variation of brightness or color information in images not present in the object imaged. Image noise is usually an aspect of electronic noise, which can be produced by the sensor and/or other circuitry of a capture device. Image noise can also originate during quantization. In video, noise can also refer to the random dot pattern that is superimposed on the picture as a result of electronic noise. Interference and static are other forms of noise, in the sense that they are unwanted, which can affect transmitted signals.


Smoothing filters attempt to preserve important patterns in an image, while reducing or eliminating noise or other fine-scale structures. Many different algorithms can be implemented in filters to smooth an image. One of the most common algorithms is the “moving average”, often used to try to capture important trends in repeated statistical surveys. Noise filters, for example, generally attempt to determine whether the actual differences in pixel values constitute noise or real photographic detail, and average out the former while attempting to preserve the latter. However, there is often a tradeoff made between noise removal and preservation of fine, low-contrast detail that may have characteristics similar to noise. Other filters (e.g., a deblocking filter) can be applied to improve visual quality and prediction performance, such as by smoothing the sharp edges that can form between macroblocks when block-coding techniques are used.
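
For example, a 3x3 moving-average (box) filter replaces each pixel with the mean of its neighborhood, which suppresses isolated noise at the cost of some blurring. A minimal sketch (Python/SciPy, used here purely for illustration):

```python
import numpy as np
from scipy.ndimage import uniform_filter

noisy = np.array([[10., 12., 11., 250.],   # the 250 is an isolated noise spike
                  [11., 10., 12.,  11.],
                  [12., 11., 10.,  12.],
                  [10., 12., 11.,  10.]])
smoothed = uniform_filter(noisy, size=3, mode='reflect')  # 3x3 moving average
print(np.round(smoothed, 1))   # the spike is spread out and attenuated
```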


Image textures can also be calculated in image processing to quantify the perceived texture of an image. Image texture data provides information about the spatial arrangement of color or intensities in an image or a selected region of an image. Edge detection can be used to determine the number of edge pixels in a specified region, which characterizes the texture complexity of that region. After edges have been found, the direction of the edges can also be applied as a characteristic of texture and can be useful in determining patterns in the texture. These directions can be represented as an average or in a histogram. Image textures may also be useful for classification and segmentation of images. In general, there are two primary types of segmentation based on image texture: region-based and boundary-based. Region-based segmentation generally attempts to group or cluster pixels with similar texture properties, while boundary-based segmentation attempts to find the edges between pixels that come from different texture properties. Though image texture is not always a perfect measure for segmentation, it can be used effectively along with other measures, such as color, to facilitate image segmentation.
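
One simple way to turn the edge-pixel count into a texture-complexity measure is sketched below (illustrative only; the Sobel operator and the gradient threshold are choices made for this example, not requirements of the disclosure):

```python
import numpy as np
from scipy.ndimage import sobel

def edge_density(region, threshold=50.0):
    """Fraction of pixels whose Sobel gradient magnitude exceeds a threshold;
    higher values indicate a more complex (richer) texture."""
    gx = sobel(region.astype(np.float64), axis=1)
    gy = sobel(region.astype(np.float64), axis=0)
    return np.count_nonzero(np.hypot(gx, gy) > threshold) / region.size

flat = np.full((16, 16), 128.0)                        # smooth region: density near 0
stripes = np.tile([0.0, 0.0, 255.0, 255.0], (16, 4))   # striped region: high density
print(edge_density(flat), edge_density(stripes))
```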


In 3-D imaging, an image may be accompanied by a depth map that contains information corresponding to a third dimension of the image: the distances of objects in the scene from the viewpoint. In this sense, depth is a broad term indicative of any type of measurement within a given image. Each depth value in a depth map can correspond to a pixel in an image, which can be correlated with other image data (e.g., intensity values). Depth maps may be used for virtual view synthesis in 3-D video systems (e.g., 3DTV) or for gesture recognition in human-computer interaction (e.g., MICROSOFT KINECT).


From a video coding perspective, depth maps may also be used for segmenting images into multiple regions, usually along large depth discontinuities. Each region may then be encoded separately, with possibly different parameters. One example is segmenting each image into foreground and background in which foreground objects in closer proximity to the viewpoint are differentiated from background objects that are relatively far away from the viewpoint. Such segmentation can be especially meaningful for Telepresence and video conferencing, in which scenes comprise primarily meeting participants (i.e., people).
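
As a simple illustration of such foreground/background segmentation (a sketch only; the 1.5-meter threshold and the use of NumPy are assumptions made for the example, not part of the disclosure):

```python
import numpy as np

def split_by_depth(image, depth_map, depth_threshold=1.5):
    """Pixels closer to the viewpoint than the threshold become foreground;
    the rest become background. The two regions could then be encoded
    separately, e.g., with different quantization parameters."""
    mask = depth_map < depth_threshold           # True for near (foreground) pixels
    foreground = np.where(mask, image, 0)
    background = np.where(mask, 0, image)
    return foreground, background

image = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
depth = np.random.uniform(0.5, 5.0, (480, 640))  # distance from viewpoint, in meters
fg, bg = split_by_depth(image, depth)
```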


However, merely using depth maps for image segmentation does not fully exploit the information to enhance image coding. In general, pixels within a region have been treated equally after segmentation in coding, regardless of their locations in the region with respect to other regions. In the foreground-background case, for example, a block of pixels in a color image is encoded as either foreground or background, which lacks a fine-grained approach for improving image coding using depth. Further, a depth map typically has a lower resolution and contains considerable noise and missing values, which may introduce artifacts if the depth map is directly used in depth-guided filtering.


In accordance with embodiments disclosed herein, communication system 10 can resolve the aforementioned issue (and potentially others) by providing depth-guided image filtering. More specifically, communication system 10 can provide a system and method for processing a sequence of images using depth maps that are generated to correspond to the images. Depth maps and texture data of images can be used to develop a filter, which can be applied to the images. Such a system and method may be particularly advantageous for a conferencing environment such as communication system 10, in which images are encoded under a bitrate constraint and transported over a network, but the filter may also be applied advantageously independent of image encoding.


At its most general level, the system and method described herein may include receiving an image and a depth map, such as from a 3-D camera, and filtering the image according to the depth map such that details in the image that correspond to depth discontinuity and intensity variation can be preserved while substantially reducing or eliminating noise in the image. When coupled with a video encoder, the image may be further filtered such that details of objects closer to a viewpoint are preserved preferentially over objects further away, which may be particularly useful when the bitrate for encoding the image is constrained. For a block-based video encoder such as H.264 or MPEG-4, for example, the filtering may operate to reduce coding artifacts, such as artifacts introduced by quantization errors. When coupled with a video encoder, depth-guided filtering may further operate to conceal errors from partial image corruption, such as might occur with data loss during transmission.


Before turning to some of the additional operations of communication system 10, a brief discussion is provided about some of the infrastructure of FIG. 1. Endpoint 12a may be used by someone wishing to participate in a video conference in communication system 10. The term “endpoint” is inclusive of devices used to initiate a communication, such as a receiver, a computer, a set-top box, an Internet radio device (IRD), a cell phone, a smart phone, a tablet, a personal digital assistant (PDA), a Google droid, an iPhone, an iPad, or any other device, component, element, or object capable of initiating voice, audio, video, media, or data exchanges within communication system 10. Endpoints 12a-c may also be inclusive of a suitable interface to the human user, such as a display, a keyboard, a touchpad, a remote control, or other terminal equipment. Endpoints 12a-c may also be any device that seeks to initiate a communication on behalf of another entity or element, such as a program, a database, or any other component, device, element, or object capable of initiating an exchange within communication system 10. Data, as used herein in this document, refers to any type of numeric, voice, video, media, or script data, or any type of source or object code, or any other suitable information in any appropriate format that may be communicated from one point to another. In some embodiments, image capture devices may be integrated with an endpoint, particularly mobile endpoints.


In operation, multipoint manager module 18 can be configured to establish, or to foster a video session between one or more end users, which may be located in various other sites and locations. Multipoint manager module 18 can also coordinate and process various policies involving endpoints 12a-12c. In general, multipoint manager module 18 may communicate with endpoints 12a-12c through any standard or proprietary conference control protocol. Multipoint manager module 18 can include a switching component that determines which signals are to be routed to individual endpoints 12a-12c. Multipoint manager module 18 can also determine how individual end users are seen by others involved in the video conference. Furthermore, multipoint manager module 18 can control the timing and coordination of this activity. Multipoint manager module 18 can also include a media layer that can copy information or data, which can be subsequently retransmitted or simply forwarded along to one or more endpoints 12a-12c.


Endpoints 12a-c, image capture devices 14a-c, and multipoint manager module 18 are network elements that can facilitate the extended depth-guided filtering activities discussed herein. As used herein in this Specification, the term ‘network element’ is meant to encompass any of the aforementioned elements, as well as routers, switches, cable boxes, gateways, bridges, loadbalancers, firewalls, inline service nodes, proxies, servers, processors, modules, or any other suitable device, component, element, proprietary appliance, or object operable to exchange information in a network environment. These network elements may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information.


In one implementation, endpoints 12a-c, image capture devices 14a-c, and/or multipoint manager module 18 include software to achieve (or to foster) the image processing activities discussed herein. This could include the implementation of instances of an EDGF module 20 at any appropriate location. Additionally, each of these elements can have an internal structure (e.g., a processor, a memory element, etc.) to facilitate some of the image processing operations described herein. In other embodiments, these activities may be executed externally to these elements, or included in some other network element to achieve the intended functionality. Alternatively, endpoints 12a-c, image capture devices 14a-c, and/or multipoint manager module 18 may include software (or reciprocating software) that can coordinate with other network elements in order to achieve the activities described herein. In still other embodiments, one or several devices may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof.


Turning to FIG. 2A, FIG. 2A is a simplified block diagram illustrating additional details that may be associated with EDGF module 20. EDGF module 20 includes a depth-guided filtering module 28, a depth map filtering module 30, a processor 24b, and a memory 26b. In one example, a depth map 80 is generated to correspond to a video image 78. In this example, depth map 80 has a lower resolution than video image 78 and contains noise and missing depth values.


Depth-guided filtering module 28 can be configured to combine depth information with texture data, and the depth-guided filtering can help reduce coding artifacts, such as those that can be introduced by quantization errors. The depth-guided filtering can be applied to an image after inverse transform and inverse quantization, deblocking filtering, and prediction compensation. Depth map filtering module 30 can be configured to filter depth map 80 using the higher-resolution video image 78 to create a filtered depth map 82. Filtered depth map 82 is then sent to depth-guided filtering module 28, where video image 78 is filtered using filtered depth map 82 to create a filtered image 84. The two filtering procedures may be performed sequentially, or they may be applied at once in a joint form on video image 78 using (unfiltered) depth map 80. Filtered image 84 may be sent to video encoder 22 for encoding into a bit stream.
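
The two-stage flow of FIG. 2A can be summarized with the sketch below. The stand-in operations (bilinear upsampling and a median filter) merely mark where depth map filtering module 30 and depth-guided filtering module 28 would apply the joint filters described in Equations 1 through 4 below; they are not the claimed filters themselves.

```python
import numpy as np
from scipy.ndimage import median_filter, zoom

def process_frame(video_image, raw_depth_map):
    # Stage 1 (depth map filtering module 30): denoise/upsample the lower-
    # resolution depth map so it matches the color image (stand-in shown).
    scale = (video_image.shape[0] / raw_depth_map.shape[0],
             video_image.shape[1] / raw_depth_map.shape[1])
    filtered_depth = median_filter(zoom(raw_depth_map, scale, order=1), size=3)
    # Stage 2 (depth-guided filtering module 28): filter the color image,
    # guided by the filtered depth map (stand-in shown), then pass the
    # filtered image to video encoder 22.
    filtered_image = median_filter(video_image, size=3)
    return filtered_image, filtered_depth

image = np.random.randint(0, 256, (480, 640)).astype(np.float64)
depth = np.random.uniform(0.5, 5.0, (240, 320))
out_img, out_depth = process_frame(image, depth)
```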


In an embodiment, EDGF module 20 can be configured to account for original depth values that may be missing due to limitations at depth acquisition or that contain considerable noise. More specifically, in Equations 1 and 2 below, p is the central pixel to be filtered, and q is a neighboring pixel in a window S. Dp, Ip and Dq, Iq denote the depth and color intensity values of the two pixels, respectively. Gk, Gd, and Gr are functions (e.g., Gaussian functions) of the corresponding distance measures, respectively. Wp is a normalization factor.










$$\mathrm{EDGF}(p) = \frac{1}{W_p} \sum_{q \in S} G_k\big(\lVert p - q \rVert\big) \cdot G_d\big(\lvert f_I(D_p) - f_I(D_q) \rvert\big) \cdot G_r\big(\lvert I_p - I_q \rvert\big) \cdot I_q \qquad \text{(Equation 1)}$$

$$W_p = \sum_{q \in S} G_k\big(\lVert p - q \rVert\big) \cdot G_d\big(\lvert f_I(D_p) - f_I(D_q) \rvert\big) \cdot G_r\big(\lvert I_p - I_q \rvert\big) \qquad \text{(Equation 2)}$$
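
A direct, per-pixel sketch of Equations 1 and 2 follows (Python/NumPy for illustration; the Gaussian forms for Gk, Gd, Gr and the specific standard deviations and window radius are assumptions, since the disclosure only requires them to be functions of the corresponding distance measures):

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-(x ** 2) / (2.0 * sigma ** 2))

def edgf_pixel(I, fD, py, px, radius=3, sk=2.0, sd=10.0, sr=15.0):
    """Equations 1 and 2 at pixel p = (py, px): I is the color intensity image
    and fD the pre-filtered depth map f_I(D)."""
    h, w = I.shape
    y0, y1 = max(0, py - radius), min(h, py + radius + 1)
    x0, x1 = max(0, px - radius), min(w, px + radius + 1)
    yy, xx = np.mgrid[y0:y1, x0:x1]                       # window S around p
    weights = (gaussian(np.hypot(yy - py, xx - px), sk)   # G_k(||p - q||)
               * gaussian(fD[yy, xx] - fD[py, px], sd)    # G_d(f_I(D_p) - f_I(D_q))
               * gaussian(I[yy, xx] - I[py, px], sr))     # G_r(I_p - I_q)
    Wp = weights.sum()                                    # Equation 2
    return float((weights * I[yy, xx]).sum() / Wp)        # Equation 1

I = np.random.randint(0, 256, (32, 32)).astype(np.float64)
fD = np.random.uniform(0.5, 5.0, (32, 32))
print(edgf_pixel(I, fD, 16, 16))
```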







The filtering of the depth value at pixel p may take a similar form using corresponding color intensity values, as shown below in equations three and four.












$$f_I(D_p) = \frac{1}{K_p} \sum_{q \in T} H_k\big(\lVert p - q \rVert\big) \cdot H_d\big(\lvert D_p - D_q \rvert\big) \cdot H_r\big(\lvert I_p - I_q \rvert\big) \cdot D_q \qquad \text{(Equation 3)}$$

$$K_p = \sum_{q \in T} H_k\big(\lVert p - q \rVert\big) \cdot H_d\big(\lvert D_p - D_q \rvert\big) \cdot H_r\big(\lvert I_p - I_q \rvert\big) \qquad \text{(Equation 4)}$$
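
A corresponding sketch of Equations 3 and 4 (again Python/NumPy for illustration, with Gaussian Hk, Hd, Hr as one reasonable, but not mandated, choice):

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-(x ** 2) / (2.0 * sigma ** 2))

def prefilter_depth_pixel(I, D, py, px, radius=3, sk=2.0, sd=10.0, sr=15.0):
    """Equations 3 and 4 at pixel p: filter the depth value D_p using spatial
    distance (H_k), depth difference (H_d), and color intensity difference (H_r)."""
    h, w = D.shape
    y0, y1 = max(0, py - radius), min(h, py + radius + 1)
    x0, x1 = max(0, px - radius), min(w, px + radius + 1)
    yy, xx = np.mgrid[y0:y1, x0:x1]                       # window T around p
    weights = (gaussian(np.hypot(yy - py, xx - px), sk)   # H_k(||p - q||)
               * gaussian(D[yy, xx] - D[py, px], sd)      # H_d(D_p - D_q)
               * gaussian(I[yy, xx] - I[py, px], sr))     # H_r(I_p - I_q)
    Kp = weights.sum()                                    # Equation 4
    return float((weights * D[yy, xx]).sum() / Kp)        # Equation 3
```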







The filtering of depth values has two-fold implications. First, when the depth map has a lower resolution than the color image, the filtering includes upsampling such that a depth value is generated for every pixel in the color image. Second, it also encompasses noise reduction, so that a higher-quality depth map may be achieved.
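
A minimal illustration of the upsampling aspect is shown below (bilinear interpolation only; the joint filter of Equations 3 and 4 would additionally be guided by the color intensities):

```python
import numpy as np
from scipy.ndimage import zoom

color = np.random.randint(0, 256, (480, 640))           # full-resolution color image
depth_lowres = np.random.uniform(0.5, 5.0, (240, 320))  # quarter-size depth map
scale = (color.shape[0] / depth_lowres.shape[0],
         color.shape[1] / depth_lowres.shape[1])
depth_full = zoom(depth_lowres, scale, order=1)         # one depth value per color pixel
assert depth_full.shape == color.shape
```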


Although the depth-guided filtering and the pre-filtering of the depth map may have similar parametric forms, as presented above, they have different implications and are applied to achieve different goals. The pre-filtering of the depth map removes noise and generates, by interpolation, missing depth values, and hence increases the resolution and quality of the depth map. In short, it is for improving the objective quality of the depth map and bringing it closer to ground truth. On the other hand, the application of depth-guided filtering is primarily for improving the subjective quality of video images, especially when the images are going through a video encoder under bit-rate constraints.


Such a distinction would be reflected when determining the functions {Gk, Gd, Gr} and {Hk, Hd, Hr} for the two filters. In depth-guided filtering, for example, {Gk, Gd, Gr} may be zero-mean Gaussian functions with standard deviations selected such that, in the filtered color image, more fine details are preserved where depth values are small, while only strong edges are retained in areas that have large depth values. In contrast, in pre-filtering a depth map, {Hk, Hd, Hr} should be selected to preserve edges in the depth map even when two objects on either side of a depth discontinuity have similar color intensity, and to avoid creating non-existing discontinuities on a (supposedly) smooth surface in the depth map due to noise contained in the depth map and rich texture in the corresponding region of the color image.


In an embodiment, extended depth-guided filtering may be incorporated into a video encoding-decoding loop. In this instance, instead of operating as a stand-alone filter and having an explicit parametric form, the depth-guided filtering may take an indirect form and operate as a filter parameter controller (FPC) for existing in-loop filters. The in-loop filters may include a deblocking filter (DF) in H.264, and may also include other filters (e.g., the Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) filters that have been introduced in the development of the next-generation video coding standard known as High Efficiency Video Coding (HEVC)).


Turning to FIG. 2B, FIG. 2B is a simplified block diagram illustrating additional details that may be associated with video encoder 22. Video encoder 22 includes a processor 24c, memory 26c, a transform and quantization module 32, an entropy encoding module 34, an inverse transform and inverse quantization module 36, an intra/inter prediction module 38, a deblocking filter 40, and a reference buffer 44. Video encoder 22 is generally configured to receive image information (e.g., filtered image 84) as a signal via some connection, which may be a wireless connection, or via one or more cables or wires that allow for the propagation of signals. In an embodiment, an image can be processed in blocks or macroblocks of samples. In general, video encoder 22 may be configured to transform each block into a block of spatial frequency coefficients, divide each coefficient by an integer, and discard the remainder, such as in a transform and quantization module 32. The resulting coefficients can then be encoded using entropy encoding module 34.


Intra/inter prediction module 38 may be configured to enhance encoding, such as with motion compensation. A prediction can be formed based on previously encoded data, either from the current time frame (intra-prediction) or from other frames that have already been coded (inter-prediction). For example, inverse transform and inverse quantization module 36 can be configured to rescale the quantized transform coefficients. Each coefficient can be multiplied by an integer value to restore its original scale. An inverse transform can combine the standard basis patterns, weighted by the rescaled coefficients, to re-create each block of data. These blocks can be combined together to form a macroblock, and the prediction can be subtracted from the current macroblock to form a residual. In a video encoder with an in-loop filter, deblocking filter 40 can also be applied to blocks in decoded video to improve visual quality and prediction performance by smoothing the sharp edges that can form between macroblocks when block-coding techniques are used.
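
The rescaling and inverse transform can be sketched as follows (an illustrative counterpart to the block-transform description above; the DCT, step size, and rounding are example choices, not requirements of the encoder):

```python
import numpy as np
from scipy.fft import dctn, idctn

q_step = 16
prediction = np.full((8, 8), 120.0)                # from intra/inter prediction
current = np.linspace(90, 160, 64).reshape(8, 8)   # current block of samples
residual = current - prediction

quantized = np.rint(dctn(residual, norm='ortho') / q_step)   # forward transform + quantization
rescaled = quantized * q_step                                # inverse quantization (rescaling)
reconstructed = prediction + idctn(rescaled, norm='ortho')   # inverse transform + prediction
print(float(np.abs(reconstructed - current).max()))          # small residual quantization error
```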


Turning to FIG. 3, FIG. 3 is a simplified block diagram illustrating another embodiment of EDGF module 20. EDGF module 20 includes processor 24b, memory 26b, depth-guided filtering module 28, depth map filtering module 30, transform and quantization module 32, entropy encoding module 34, inverse transform and inverse quantization module 36, intra/inter prediction module 38, and reference buffer 44. Depth-guided filtering module 28 includes a processor 24d, memory 26d, deblocking filter 40, filter parameter controller 56, sample adaptive offset module 60, and adaptive loop filter 62.


In an embodiment, EDGF module 20 can be configured for high efficiency video coding (HEVC) that includes depth-guided in-loop filtering, where a filtered depth map is sent to filter parameter controller 56. Based on depth values and available bit rates, filter parameter controller 56 can determine parameters for the in-loop filters (e.g., deblocking filter 40, adaptive loop filter 62, etc.) such that the subjective quality of video may be improved while the objective quality is preserved.
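
One way such a filter parameter controller could be sketched, purely as a hypothetical mapping (the disclosure does not specify the mapping, the strength scale, or the bit-rate breakpoint used here):

```python
def in_loop_filter_strength(mean_block_depth, max_depth, available_kbps):
    """Hypothetical FPC rule: filter far-away blocks more aggressively, and
    filter everything more aggressively when the available bit rate is low,
    so that detail on near objects is preferentially preserved."""
    depth_ratio = min(mean_block_depth / max_depth, 1.0)   # 0 = near, 1 = far
    rate_factor = 1.0 if available_kbps < 1500 else 0.5    # example breakpoint
    return round(6 * depth_ratio * rate_factor)            # 0 (weak) .. 6 (strong)

# A far background block on a constrained link gets the strongest smoothing.
print(in_loop_filter_strength(mean_block_depth=4.5, max_depth=5.0,
                              available_kbps=1000))
```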


Turning to FIG. 4, FIG. 4 is a simplified block diagram illustrating an embodiment of video encoder 22 having depth map filtering functionality. Video encoder 22 includes processor 24c, memory 26c, an image pyramid generation module 64, depth map filtering module 30, a depth map pyramid generation module 70, a plurality of depth-guided pre-filtering modules 72a-c, a plurality of layer encoding modules 74a-c, and a bitstream multiplexer 76. Video encoder 22 can be configured to incorporate both depth-guided pre-filtering and depth-guided in-loop filtering in a scalable video coding context.


In an embodiment, a hierarchical set of lower-resolution images is first generated from the original, high-resolution image. Next, the original depth map is filtered to generate a new depth map that has the same high resolution as the original image and a higher quality. The new depth map is then downscaled to generate a hierarchical set of depth maps, each depth map corresponding to an image in the hierarchical image set. Finally, upon encoding, each video image is filtered using the corresponding depth map in the hierarchy.
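
The hierarchy construction can be sketched as follows (Python/NumPy with bilinear downscaling as an example choice; the number of levels and the 2:1 scaling factor are assumptions):

```python
import numpy as np
from scipy.ndimage import zoom

def build_pyramids(image, filtered_depth, levels=3):
    """Each level halves the resolution; every image level keeps a depth map
    of matching size so that it can be filtered with its own depth map."""
    images, depths = [image], [filtered_depth]
    for _ in range(levels - 1):
        images.append(zoom(images[-1], 0.5, order=1))
        depths.append(zoom(depths[-1], 0.5, order=1))
    return images, depths

image = np.random.randint(0, 256, (480, 640)).astype(np.float64)
depth_full = np.random.uniform(0.5, 5.0, (480, 640))   # already filtered/upsampled
imgs, dmaps = build_pyramids(image, depth_full)
print([i.shape for i in imgs])   # [(480, 640), (240, 320), (120, 160)]
```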


More specifically, image pyramid generation module 64 can be configured to receive video image 78 and generate an original image signal and one or more scaled image signals. The original image signal and each of the one or more scaled image signals are sent to one of the plurality of depth-guided pre-filtering modules. For example, the original image signal may be sent to depth-guided pre-filtering module 72a, one scaled image signal may be sent to depth-guided pre-filtering module 72b, and a second scaled image signal may be sent to depth-guided pre-filtering module 72c.


Depth map pyramid generation module 70 can be configured to receive filtered depth map 82 from depth map filtering module 30 and generate a scaled filtered depth map signal and one or more original filtered depth map signals. The scaled filtered depth map signal and each of the one or more original filtered depth map signals are sent to one of the plurality of depth-guided pre-filtering modules. For example, the scaled filtered depth map signal may be sent to depth-guided pre-filtering module 72a, one original filtered depth map signal may be sent to depth-guided pre-filtering module 72b, and a second original filtered depth map signal may be sent to depth-guided pre-filtering module 72c.


Each depth-guided pre-filtering module combines the original image signal or one of the scaled image signals (whichever was received) and the scaled filtered depth map signal or one of the original filtered depth map signals (whichever was received) into a single signal that is sent to a corresponding layer encoding module 74a-c. For example, the original image signal and the scaled filtered depth map signal sent to depth-guided pre-filtering module 72a may be combined and sent to layer encoding module 74a. In another example, the scaled image signal and the original filtered depth map signal sent to depth-guided pre-filtering module 72b may be combined and sent to layer encoding module 74b. Each layer encoding module 74a-c processes the signal and sends the processed signal to bitstream multiplexer 76, where all the signals are combined.



FIG. 5 is a simplified flowchart 500 illustrating example potential operations that may be associated with extended depth-guided filtering. At 502, a depth map is generated that corresponds to a video image. At 504, the depth map is filtered using the video image to create a filtered depth map. At 506, the video image is filtered using the filtered depth map.


Any of these elements (e.g., the network elements of FIG. 1 such as multipoint manager module 18, video encoder 22, etc.) can include memory elements for storing information to be used in achieving the extended depth-guided filtering activities as outlined herein. Additionally, each of these devices may include a processor that can execute software or an algorithm to perform the extended depth-guided filtering management activities as discussed in this Specification.


A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, processors (e.g., processor 24a, 24b) could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM)), an ASIC that includes digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD ROMs, magnetic or optical cards, other types of machine-readable mediums suitable for storing electronic instructions, or any suitable combination thereof.


In operation, components in communication system 10 can include one or more memory elements (e.g., memory element 26a, 26b) for storing information to be used in achieving operations as outlined herein. This includes the memory element being able to store instructions (e.g., software, logic, code, etc.) in non-transitory media such that the instructions are executed to carry out the activities described in this Specification.


These devices may further keep information in any suitable type of non-transitory storage medium (e.g., random access memory (RAM), read only memory (ROM), field programmable gate array (FPGA), erasable programmable read only memory (EPROM), electrically erasable programmable ROM (EEPROM), etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs.


The information being tracked, sent, received, or stored in communication system 10 could be provided in any database, register, table, cache, queue, control list, or storage structure, based on particular needs and implementations, all of which could be referenced in any suitable timeframe. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’


Components of communication system 10 described and shown herein may also include suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment. Additionally, some of the processors and memories associated with the various components may be removed, or otherwise consolidated such that a single processor and a single memory location are responsible for certain activities. In a general sense, the arrangements depicted in the FIGURES may be more logical in their representations, whereas a physical architecture may include various permutations, combinations, and/or hybrids of these elements. It is imperative to note that countless possible design configurations can be used to achieve the operational objectives outlined here. Accordingly, the associated infrastructure has a myriad of substitute arrangements, design choices, device possibilities, hardware configurations, software implementations, equipment options, etc.


The elements discussed herein may be configured to keep information in any suitable memory element (random access memory (RAM), ROM, EPROM, EEPROM, ASIC, etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein (e.g., database, table, cache, key, etc.) should be construed as being encompassed within the broad term “memory element.” Similarly, any of the potential processing elements, modules, and machines described herein should be construed as being encompassed within the broad term “processor.”


Note that with the examples provided above, interaction may be described in terms of two, three, or four elements or components. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functions or operations by only referencing a limited number of components. It should be appreciated that the principles described herein are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings provided herein as potentially applied to a myriad of other architectures. Additionally, although described with reference to particular scenarios, where a particular module is provided within an element, these modules can be provided externally, or consolidated and/or combined in any suitable fashion. In certain instances, such modules may be provided in a single proprietary unit.


It is also important to note that operations in the appended diagrams illustrate only some of the possible scenarios and patterns that may be executed by, or within elements of communication system 10. Some of these operations may be deleted or removed where appropriate, or these operations may be modified or changed considerably without departing from the scope of teachings provided herein. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings provided herein.


Although a system and method for depth-guided image filtering has been described in detail with reference to particular embodiments, it should be understood that various other changes, substitutions, and alterations may be made hereto without departing from the spirit and scope of this disclosure. For example, although the previous discussions have focused on video conferencing associated with particular types of endpoints, handheld devices that employ video applications could readily adopt the teachings of the present disclosure. For instance, iPhones, iPads, Android devices, and personal computing applications (e.g., desktop video solutions, Skype, etc.) can readily adopt and use the depth-guided filtering operations detailed above. Any communication system or device that encodes video data would be amenable to the features discussed herein.


It is also imperative to note that the systems and methods described herein can be used in any type of imaging or video application. This can include standard video rate transmissions, adaptive bit rate (ABR), variable bit rate (VBR), constant bit rate (CBR), or any other imaging technology in which image encoding can be utilized. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.


In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.

Claims
  • 1. A method, comprising: generating a depth map that corresponds to a video image;filtering the depth map with the video image to create a filtered depth map, wherein filtering the depth map includes filling in missing data from the depth map and upsampling when the depth map has a lower resolution than the video image, wherein the video image is a higher resolution image than the depth map, wherein details of objects closer to a viewpoint are preserved preferentially over objects further away;using the video image to create a pyramid of video images; andusing the depth map to create a pyramid of depth maps, wherein the pyramid of video images are filtered with the pyramid of depth maps, layered into multiple bit streams, and cascaded to create an image.
  • 2. The method of claim 1, wherein the filtering the depth map and filtering the video image are performed sequentially.
  • 3. The method of claim 1, further comprising: using extended depth-guided filtering that is incorporated into a video encoding-decoding loop, and wherein filtering the depth map includes removing noise from the depth map.
  • 4. The method of claim 1, wherein zero-mean Gaussian function with standard deviations are used for filtering the depth map.
  • 5. The method of claim 1, further comprising: encoding the multiple bit streams for transmission over a network.
  • 6. The method of claim 5, wherein encoding the multiple bit streams includes using a deblocking filter in H.264.
  • 7. Logic encoded in non-transitory media that includes instructions for execution and when executed by a processor, is operable to perform operations comprising: generating a depth map that corresponds to a video image, wherein the video image is filtered;filtering the depth map with the video image to create a filtered depth map, wherein filtering the depth map includes filling in missing data from the depth map and upsampling when the depth map has a lower resolution than the video image, wherein the video image is a higher resolution image than the depth map, wherein details of objects closer to a viewpoint are preserved preferentially over objects further away;using the video image to create a pyramid of video images; andusing the depth map to create a pyramid of depth maps, wherein the pyramid of video images are filtered with the pyramid of depth maps, layered into multiple bit streams, and cascaded to create an image.
  • 8. The logic of claim 7, the operations further comprising: encoding the multiple bit streams for transmission over a network.
  • 9. An apparatus, comprising: a memory element for storing data;a processor that executes instructions associated with the data;a manager module configured to interface with the processor and the memory element such that the apparatus is configured to: generate a depth map that corresponds to a video image, wherein the video image is filtered;filter the depth map with the video image to create a filtered depth map, wherein filtering the depth map includes filling in missing data from the depth map and upsampling when the depth map has a lower resolution than the video image, wherein the video image is a higher resolution image than the depth map, wherein details of objects closer to a viewpoint are preserved preferentially over objects further away;using the video image to create a pyramid of video images; andusing the depth map to create a pyramid of depth maps, wherein the pyramid of video images are filtered with the pyramid of depth maps, layered into multiple bit streams, and cascaded to create an image.
WO 2011138472 Nov 2011 WO
WO 2012033716 Mar 2012 WO
WO 2012068008 May 2012 WO
WO 2012068010 May 2012 WO
WO 2012068485 May 2012 WO
WO2013096041 Jun 2013 WO
Non-Patent Literature Citations (102)
Entry
U.S. Appl. No. 13/106,002, filed May 12, 2011, entitled “System and Method for Video Coding in a Dynamic Environment,” Inventor: Dihong Tian.
U.S. Appl. No. 13/329,943, filed Dec. 19, 2011, entitled “System and Method for Depth-Guided Image Filtering in a Video Conference Environment,” Inventor: Dihong Tian.
“3D Particles Experiments in AS3 and Flash CS3,” [retrieved and printed on Mar. 18, 2010]; 2 pages; http://www.flashandmath.com/advanced/fourparticles/notes.html.
3G, “World's First 3G Video Conference Service with New TV Commercial,” Apr. 28, 2005, 4 pages; http://www.3g.co.uk/PR/Apri12005/1383.htm.
Digital Video Enterprises, “DVE Eye Contact Silhouette,” 1 page, © DVE 2008; http://www.dvetelepresence.com/products/eyeContactSilhouette.asp.
“Eye Gaze Response Interface Computer Aid (Erica) tracks Eye movement to enable hands-free computer operation,” UMD Communication Sciences and Disorders Tests New Technology, University of Minnesota Duluth, posted Jan. 19, 2005; 4 pages http://www.d.umn.edu/unirel/homepage/05/eyegaze.html.
France Telecom R&D, “France Telecom's Magic Telepresence Wall—Human Productivity Lab,” 5 pages, retrieved and printed on May 17, 2010; http://www.humanproductivitylab.com/archive—blogs/2006/07/11/france—telecoms—magic—telepres—1.php.
Joshua Gluckman and S.K. Nayar, “Rectified Catadioptric Stereo Sensors,” 8 pages, retrieved and printed on May 17, 2010; http://cis.poly.edu/˜gluckman/papers/cvpr00.pdf.
R.V. Kollarits, et al., “34.3: An Eye Contact Camera/Display System for Videophone Applications Using a Conventional Direct-View LCD,” © 1995 SID, ISSNN0097-0966X/95/2601, pp. 765-768; http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=47A1E7E028C26503975E633895D114EC?doi=10.1.1.42.1772&rep=rep1&type=pdf.
Trevor Darrell, “A Real-Time Virtual Mirror Display,” 1 page, Sep. 9, 1998; http://people.csail.mit.edu/trevor/papers/1998-021/node6.html.
Video on TED.com, Pranav Mistry: the Thrilling Potential of SixthSense Technology (5 pages) and Interactive Transcript (5 pages), retrieved and printed on Nov. 30, 2010; http://www.ted.com/talks/pranav—mistry—the—thrilling—potential—of—sixthsense—technology.html.
Andersson, L., et al., “LDP Specification,” Network Working Group, RFC 3036, Jan. 2001, 133 pages; http://tools.ietf.org/html/rfc3036.
Andreopoulos, Yiannis, et al., “In-Band Motion Compensated Temporal Filtering,” Signal Processing: Image Communication 19 (2004) 653-673, 21 pages http://medianetlab.ee.ucla.edu/papers/011.pdf.
Arrington, Michael, “eJamming—Distributed Jamming,” TechCrunch; Mar. 16, 2006; http://www.techcrunch.com/2006/03/16/ejamming-distributed-jamming/; 1 page.
Arulampalam, M. Sanjeev, et al., “A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking,” IEEE Transactions on Signal Processing, vol. 50, No. 2, Feb. 2002, 15 pages; http://www.cs.ubc.ca/˜murphyk/Software/Kalman/ParticleFilterTutorial.pdf.
Avrithis, Y., et al., “Color-Based Retrieval of Facial Images,” European Signal Processing Conference (EUSIPCO '00), Tampere, Finland; Sep. 2000; http://www.image.ece.ntua.gr/˜ntsap/presentations/eusipco00.ppt#256; 18 pages.
Awduche, D., et al., “Requirements for Traffic Engineering over MPLS,” Network Working Group, RFC 2702, Sep. 1999, 30 pages; http://tools.ietf.org/pdf/rfc2702.pdf.
Bakstein, Hynek, et al., “Visual Fidelity of Image Based Rendering,” Center for Machine Perception, Czech Technical University, Proceedings of the Computer Vision, Winter 2004, http://www.benogo.dk/publications/Bakstein-Pajdla-CVWW04.pdf; 10 pages.
Beesley, S.T.C., et al., “Active Macroblock Skipping in the H.264 Video Coding Standard,” in Proceedings of 2005 Conference on Visualization, Imaging, and Image Processing—VIIP 2005, Sep. 7-9, 2005, Benidorm, Spain, Paper 480-261. ACTA Press, ISBN: 0-88986-528-0; 5 pages.
Berzin, O., et al., “Mobility Support Using MPLS and MP-BGP Signaling,” Network Working Group, Apr. 28, 2008, 60 pages; http://www.potaroo.net/ietf/all-/draft-berzin-malis-mpls-mobility-01.txt.
Boccaccio, Jeff; CEPro, “Inside HDMI CEC: The Little-Known Control Feature,” Dec. 28, 2007; http://www.cepro.com/article/print/inside—hdmi—cec—the—little—known—control—feature; 2 pages.
Boros, S., “Policy-Based Network Management with SNMP,” Proceedings of the EUNICE 2000 Summer School Sep. 13-15, 2000, p. 3.
Bücken R: “Bildfernsprechen: Videokonferenz vom Arbeitsplatz aus” Funkschau, Weka Fachzeitschriften Verlag, Poing, DE, No. 17, Aug. 14, 1986, pp. 41-43, XP002537729; ISSN: 0016-2841, p. 43, left-hand column, line 34-middle column, line 24.
Chan, Eric, et al., “Experiments on block-matching techniques for video coding,” Multimedia Systems; 9 Springer-Verlag 1994, Multimedia Systems (1994) 2 pages 228-241.
Chen et al., “Toward a Compelling Sensation of Telepresence: Demonstrating a Portal to a Distant (Static) Office,” Proceedings Visualization 2000; VIS 2000; Salt Lake City, UT, Oct. 8-13, 2000; Annual IEEE Conference on Visualization, Los Alamitos, CA; IEEE Comp. Soc., US, Jan. 1, 2000, pp. 327-333; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.1287.
Chen, Jason, “iBluetooth Lets iPhone Users Send and Receive Files Over Bluetooth,” Mar. 13, 2009; http://i.gizmodo.com/5169545/ibluetooth-lets-iphone-users-send-and-receive-files-over-bluetooth; 1 page.
Chen, Qing, et al., “Real-time Vision-based Hand Gesture Recognition Using Haar-like Features,” Instrumentation and Measurement Technology Conference, Warsaw, Poland, May 1-3, 2007, 6 pages;http://www.google.com/url?sa=t&source=web&cd=1&ved=0CB4QFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.93.103%26rep%3Drep1%26type%3Dpdf&ei=A28RTLKRDeftnQeXzZGRAw&usg=AFQjCNHpwj5MwjgGp-3goVzSWad6CO-Jzw.
Chien et al., “Efficient moving Object Segmentation Algorithm Using Background Registration Technique,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, No. 7, Jul. 2002, 10 pages.
Cisco: Bill Mauchly and Mod Marathe; UNC: Henry Fuchs, et al., “Depth-Dependent Perspective Rendering,” Apr. 15, 2008; 6 pages.
Costa, Cristina, et al., “Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distorion Map,” EURASIP Journal on Applied Signal Processing, Jan. 7, 2004, vol. 2004, No. 12; © 2004 Hindawi Publishing Corp.; XP002536356; ISSN: 1110-8657; pp. 1899-1911; http://downloads.hindawi.com/journals/asp/2004/470826.pdf.
Criminisi, A., et al., “Efficient Dense-Stereo and Novel-view Synthesis for Gaze Manipulation in One-to-one Teleconferencing,” Technical Rpt MSR-TR-2003-59, Sep. 2003 [retrieved and printed on Feb. 26, 2009], http://research.microsoft.com/pubs/67266/criminis—techrep2003-59.pdf, 41 pages.
Cumming, Jonathan, “Session Border Control in IMS, An Analysis of the Requirements for Session Border Control in IMS Networks,” Sections 1.1, 1.1.1, 1.1.3, 1.1.4, 2.1.1, 3.2, 3.3.1, 5.2.3 and pp. 7-8, Data Connection, 2005.
Daly, S., et al., “Face-based visually-optimized image sequence coding,” Image Processing, 1998. ICIP 98. Proceedings; 1998 International Conference on Chicago, IL; Oct. 4-7, 1998, Los Alamitos; IEEE Computing; vol. 3, Oct. 4, 1998; ISBN: 978-0-8186-8821-8; XP010586786; pp. 443-447.
Diaz, Jesus, “Zcam 3D Camera is Like Wii Without Wiimote and Minority Report Without Gloves,” Dec. 15, 2007; http://gizmodo.com/gadgets/zcam-depth-camera-could-be-wii-challenger/zcam-3d-camera-is-like-wii-without-wiimote-and-minority-report-without-gloves-334426.php; 3pages.
Diaz, Jesus, iPhone Bluetooth File Transfer Coming Soon (YES!); Jan. 26, 2009; http://i.gizmodo.com/5138797/iphone-bluetooth-file-transfer-coming-soon-yes; 1 page.
Dornaika F., et al., “Head and Facial Animation Tracking Using Appearance-Adaptive Models and Particle Filters,” Jun. 27, 2004; 20040627-20040602, Jun. 27, 2004, 22 pages; HEUDIASY Research Lab; http://eprints.pascal-network.org/archive/00001231/01/rtvhci—chapter8.pdf.
DVE Digital Video Enterprises, “DVE Tele-Immersion Room,” [retrieved and printed on Feb. 5, 2009] http://www.dvetelepresence.com/products/immersion—room.asp; 2 pages.
Dynamic Displays, copyright 2005-2008 [retrieved and printed on Feb. 24, 2009] http://www.zebraimaging.com/html/lighting—display.html, 2 pages.
ECmag.com, “IBS Products,” Published Apr. 2009; http://www.ecmag.com/index.cfm?fa=article&articleID=10065; 2 pages.
Eisert, Peter, “Immersive 3-D Video Conferencing: Challenges, Concepts and Implementations,” Proceedings of SPIE Visual Communications and Image Processing (VCIP), Lugano, Switzerland, Jul. 2003; 11 pages; http://iphome.hhi.de/eisert/papers/vcip03.pdf.
eJamming Audio, Learn More; [retrieved and printed on May 27, 2010] http://www.ejamming.com/learnmore/; 4 pages.
Electrophysics Glossary, “Infrared Cameras, Thermal Imaging, Night Vision, Roof Moisture Detection,” [retrieved and printed on Mar. 18, 2010] http://www.electrophysics.com/Browse/Brw—Glossary.asp; 11 pages.
Farrukh, A., et al., Automated Segmentation of Skin-Tone Regions in Video Sequences, Proceedings IEEE Students Conference, ISCON—apos—02; Aug. 16-17, 2002; pp. 122-128.
Fiala, Mark, “Automatic Projector Calibration Using Self-Identifying Patterns,” National Research Council of Canada, Jun. 20-26, 2005; http://www.procams.org/procams2005/papers/procams05-36.pdf; 6 pages.
Foote, J., et al., “Flycam: Practical Panoramic Video and Automatic Camera Control,” in Proceedings of IEEE International Conference on Multimedia and Expo, vol. III, Jul. 30, 2000; pp. 1419-1422; http://citeseerx.ist.psu.edu/viewdoc/versions?doi=10.1.1.138.8686.
Freeman, Professor Wilson T., Computer Vision Lecture Slides, “6.869 Advances in Computer Vision: Learning and Interfaces,” Spring 2005; 21 pages.
Garg, Ashutosh, et al., “Audio-Visual ISpeaker Detection Using Dynamic Bayesian Networks,” IEEE International Conference on Automatic Face and Gesture Recognition, 2000 Proceedings, 7 pages; http://www.ifp.illinois.edu/˜ashutosh/papers/FG00.pdf.
Gemmell, Jim, et al., “Gaze Awareness for Video-conferencing: A Software Approach,” IEEE MultiMedia, Oct.-Dec. 2000; vol. 7, No. 4, pp. 26-35.
Geys et al., “Fast Interpolated Cameras by Combining a GPU Based Plane Sweep With a Max-Flow Regularisation Algorithm,” Sep. 9, 2004; 3D Data Processing, Visualization and Transmission 2004, pp. 534-541.
Gluckman, Joshua, et al., “Rectified Catadioptric Stereo Sensors,” 8 pages, retrieved and printed on May 17, 2010; http://cis.poly.edu/˜gluckman/papers/cvpr00.pdf.
PCT Mar. 14, 2013 International Search Report and Written Opinion from International Application PCT/US2012/069111.
Kim, Jae Hoon et al., “New Coding Tools for Illumination and Focus Mismatch Compensation in Multiview Video Coding,” IEEE Transactions on Circuits and Systems for Video Technology, IEEE Service Center, Piscataway, NJ, US. vol. 17, No. 11, Nov. 1, 2007; 17 pages.
Hong, Min-Cheol, et al., “A Reduced Complexity Loop Filter Using Coded Block Pattern and Quantization Step Size for H.26L Video Coder,” International Conference on Consumer Electronics, 2001 Digest of Technical Papers, ICCE, Los Angelos, CA, Jun. 19-21, 2001; 2 pages.
USPTO May 24, 2013 Non-Final Office Action from U.S. Appl. No. 13/329,943.
USPTO Aug. 24, 2013 Response to May 24, 2013 Non-Final Office Action from U.S. Appl. No. 13/329,943.
USPTO Sep. 19, 2013 Notice of Allowance from U.S. Appl. No. 13/329,943.
Gotchev, Atanas, “Computer Technologies for 3D Video Delivery for Home Entertainment,” International Conference on Computer Systems and Technologies; CompSysTech, Jun. 12-13, 2008; http://ecet.ecs.ru.acad.bg/cst08/docs/cp/Plenary/P.1.pdf; 6 pages.
Gries, Dan, “3D Particles Experiments in AS3 and Flash CS3, Dan's Comments,” [retrieved and printed on May 24, 2010] http://www.flashandmath.com/advanced/fourparticles/notes.html; 3 pages.
Guernsey, Lisa, “Toward Better Communication Across the Language Barrier,” Jul. 29, 1999; http://www.nytimes.com/1999/07/29/technology/toward-better-communication-across-the-language-barrier.html; 2 pages.
Guili, D., et al., “Orchestra!: A Distributed Platform for Virtual Musical Groups and Music Distance Learning over the Internet in JavaTM Technology”; [retrieved and printed on Jun. 6, 2010] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=778626; 2 pages.
Gundavelli, S., et al., “Proxy Mobile IPv6,” Network Working Group, RFC 5213, Aug. 2008, 93 pages; http://tools.ietf.org/pdf/rfc5213.pdf.
Gussenhoven, Carlos, “Chapter 5 Transcription of Dutch Intonation,” Nov. 9, 2003, 33 pages; http://www.ru.nl/publish/pages/516003/todisun-ah.pdf.
Gvili, Ronen et al., “Depth Keying,” 3DV System Ltd., [Retrieved and printed on Dec. 5, 2011] 11 pages; http://research.microsoft.com/en-us/um/people/eyalofek/Depth%20Key/DepthKey.pdf.
Habili, Nariman, et al., “Segmentation of the Face and Hands in Sign Language Video Sequences Using Color and Motion Cues” IEEE Transaction on Circuits and Systems for Video Technology, IEEE Service Center, vol. 14, No. 8, Aug. 1, 2004; ISSN: 1051-8215; XP011115755; pp. 1086-1097.
Hammadi, Nait Charif et al., “Tracking the Activity of Participants in a Meeting,” Machine Vision and Applications, Springer, Berlin, De Lnkd—DOI:10.1007/S00138-006-0015-5, vol. 17, No. 2, May 1, 2006, pp. 83-93, XP019323925 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.9832.
He, L., et al., “The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing,” Proc. SIGGRAPH, Aug. 1996; http://research.microsoft.com/en-us/um/people/lhe/papers/siggraph96.vc.pdf; 8 pages.
Hepper, D., “Efficiency Analysis and Application of Uncovered Background Prediction in a Low BitRate Image Coder,” IEEE Transactions on Communications, vol. 38, No. 9, pp. 1578-1584, Sep. 1990.
Hock, Hans Henrich, “Prosody vs. Syntax: Prosodic rebracketing of final vocatives in English,” 4 pages; [retrieved and printed on Mar. 3, 2011] http://speechprosody2010.illinois.edu/papers/100931.pdf.
Holographic Imaging, “Dynamic Holography for scientific uses, military heads up display and even someday HoloTV Using TI's DMD,” [retrieved and printed on Feb. 26, 2009] http://innovation.swmed.edu/ research/instrumentation/res—inst—dev3d.html; 5 pages.
Hornbeck, Larry J., “Digital Light ProcessingTM: A New MEMS-Based Display Technology,” [retrieved and printed on Feb. 26, 2009] http://focus.ti.com/pdfs/dlpdmd/17—Digital—Light—Processing—MEMS—display—technology.pdf; 22 pages.
IR Distribution Category @ Envious Technology, “IR Distribution Category,” [retrieved and printed on Apr. 22, 2009] http://www.envioustechnology.com.au/ products/product-list.php?CID=305; 2 pages.
IR Trans—Products and Orders—Ethernet Devices, [retrieved and printed on Apr. 22, 2009] http://www.irtrans.de/en/shop/lan.php; 2 pages.
Isgro, Francesco et al., “Three-Dimensional Image Processing in the Future of Immersive Media,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, No. 3; XP011108796; ISSN: 1051-8215; Mar. 1, 2004; pp. 288-303.
Itoh, Hiroyasu, et al., “Use of a gain modulating framing camera for time-resolved imaging of cellular phenomena,” SPIE vol. 2979, Feb. 2, 1997, pp. 733-740.
Jamoussi, Bamil, “Constraint-Based LSP Setup Using LDP,” MPLS Working Group, Sep. 1999, 34 pages; http://tools.ietf.org/html/draft-ietf-mpls-cr-ldp-03.
Jeyatharan, M., et al., “3GPP TFT Reference for Flow Binding,” MEXT Working Group, Mar. 2, 2010, 11 pages; http:/www.ietf.org/id/draft-jeyatharan-mext-flow-tftemp-reference-00.txt.
Jiang, Minqiang, et al., “On Lagrange Multiplier and Quantizer Adjustment for H.264 Frame-layer Video Rate Control,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, Issue 5, May 2006, pp. 663-669.
Jong-Gook Ko et al., “Facial Feature Tracking and Head Orientation-Based Gaze Tracking,” ITC-CSCC 2000, International Technical Conference on Circuits/Systems, Jul. 11-13, 2000, 4 pages http://www.umiacs.umd.edu/˜knkim/paper/itc-cscc-2000-jgko.pdf.
Kannangara, C.S., et al., “Complexity Reduction of H.264 Using Lagrange Multiplier Methods,” IEEE Int. Conf. on Visual Information Engineering, Apr. 2005; www.rgu.ac.uk/files/h264—complexity—kannangara.pdf; 6 pages.
Kannangara, C.S., et al., “Low Complexity Skip Prediction for H.264 through Lagrangian Cost Estimation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, No. 2, Feb. 2006; www.rgu.ac.uk/files/h264—skippredict—richardson—final.pdf; 20 pages.
Kauff, Peter, et al., “An Immersive 3D Video-Conferencing System Using Shared Virtual Team User Environments,” Proceedings of the 4th International Conference on Collaborative Virtual Environments, XP040139458; Sep. 30, 2002; http://ip.hhi.de/imedia—G3/assets/pdfs/CVE02.pdf; 8 pages.
Kazutake, Uehira, “Simulation of 3D image depth perception in a 3D display using two stereoscopic displays at different depths,” Jan. 30, 2006; http://adsabs.harvard.edu/abs/2006SPIE.6055.408U; 2 pages.
Keijser, Jeroen, et al., “Exploring 3D Interaction in Alternate Control-Display Space Mappings,” IEEE Symposium on 3D User Interfaces, Mar. 10-11, 2007, pp. 17-24.
Kim, Y.H., et al., “Adaptive mode decision for H.264 encoder,” Electronics letters, vol. 40, Issue 19, pp. 1172-1173, Sep. 2004; 2 pages.
Klint, Josh, “Deferred Rendering in Leadwerks Engine,” Copyright Leadwerks Corporation, Dec. 9, 2008; http://www.leadwerks.com/files/Deferred—Rendering—in—Leadwerks—Engine.pdf; 10 pages.
Kollarits, R.V., et al., “34.4: An Eye Contact Camera/Display System for Videophone Applications Using a Conventional Direct-View LCD,” Society for Information Display, International Symposium 1996 Digest of Technical Papers, vol. XXVI, May 23, 1995, pp. 765-768; http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=47A1E7E028C26503975E633895D114EC?doi=10.1.1.42.1772&rep=rep1&type=pdf.
Kolsch, Mathias, “Vision Based Hand Gesture Interfaces for Wearable Computing and Virtual Environments,” A Dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science, University of California, Santa Barbara, Nov. 2004, 288 pages. http://fulfillment.umi.com/dissertations/b7afbcb56ba72fdb14d26dfccc6b470f/1291487062/3143800.pdf.
Koyama, S., et al. “A Day and Night Vision MOS Imager with Robust Photonic-Crystal-Based RGB-and-IR,” Mar. 2008, pp. 754-759; ISSN: 0018-9383; IEE Transactions on Electron Devices, vol. 55, No. 3; http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4455782&isnumber=4455723.
Kwolek, B., “Model Based Facial Pose Tracking Using a Particle Filter,” Geometric Modeling and Imaging—New Trends, 2006 London, England Jul. 5-6, 2005, Piscataway, NJ, USA, IEEE LNKD-DOI: 10.1109/GMAI.2006.34 Jul. 5, 2006, pp. 203-208; XP010927285 [Abstract Only].
Lambert, “Polycom Video Communications,” © 2004 Polycom, Inc., Jun. 20, 2004 http://www.polycom.com/global/documents/whitepapers/video—communications—h.239—people—content—polycom—patented—technology.pdf.
Lawson, S., “Cisco Plans TelePresence Translation Next Year,” Dec. 9, 2008; http://www.pcworld.com/article/155237/.html?tk=rss—news; 2 pages.
Lee, J. and Jeon, B., “Fast Mode Decision for H.264,” ISO/IEC MPEG and ITU-T VCEG Joint Video Team, Doc. JVT-J033, Dec. 2003; http://media.skku.ac.kr/publications/paper/IntC/ljy—ICME2004.pdf; 4 pages.
Liu, Shan, et al., “Bit-Depth Scalable Coding for High Dynamic Range Video,” SPIE Conference on Visual Communications and Image Processing, Jan. 2008; 12 pages http://www.merl.com/papers/docs/TR2007-078.pdf.
Liu, Z., “Head-Size Equalization for Better Visual Perception of Video Conferencing,” Proceedings, IEEEInternational Conference on Multimedia & Expo (ICME2005), Jul. 6-8, 2005, Amsterdam, The Netherlands; http://research.microsoft.com/users/cohen/HeadSizeEqualizationICME2005.pdf; 4 pages.
Mann, S., et al., “Virtual Bellows: Constructing High Quality Still from Video,” Proceedings, First IEEE International Conference on Image Processing ICIP-94, Nov. 13-16, 1994, Austin, TX; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.8405; 5 pages.
Marvin Imaging Processing Framework, “Skin-colored pixels detection using Marvin Framework,” video clip, YouTube, posted Feb. 9, 2010 by marvinproject, 1 page; http://www.youtube.com/user/marvinproject#p/a/u/0/3ZuQHYNIcrl.
Miller, Gregor, et al., “Interactive Free-Viewpoint Video,” Centre for Vision, Speech and Signal Processing, [retrieved and printed on Feb. 26, 2009], http://www.ee.surrey.ac.uk/CVSSP/VMRG/Publications/miller05cvmp.pdf, 10 pages.
Miller, Paul, “Microsoft Research patents controller-free computer input via EMG muscle sensors,” Engadget.com, Jan. 3, 2010, 1 page; http://www.engadget.com/2010/01/03/microsoft-research-patents-controller-free-computer-input-via-em/.
Minoru from Novo is the worlds first consumer 3D Webcam, Dec. 11, 2008; http://www.minoru3d.com; 4 pages.
Mitsubishi Electric Research Laboratories, copyright 2009 [retrieved and printed on Feb. 26, 2009], http://www.merl.com/projects/3dtv, 2 pages.
Nakaya, Y., et al. “Motion Compensation Based on Spatial Transformations,” IEEE Transactions on Circuits and Systems for Video Technology, Jun. 1994, Abstract Only http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F76%2F7495%2F00305878.pdf%3Farnumber%3D305878&authDecision=-203.
National Training Systems Association Home—Main, Interservice/Industry Training, Simulation & Education Conference, Dec. 1-4, 2008; http://ntsa.metapress.com/app/home/main.asp?referrer=default; 1 page.
Related Publications (1)
Number Date Country
20140160239 A1 Jun 2014 US