With the recent surge in popularity of digital video, the demand for video compression has increased dramatically. Video compression reduces the number of bits required to store and transmit digital media. Video data contains spatial and temporal redundancy, and these spatial and temporal similarities can be encoded by registering differences within a frame (spatial) and between frames (temporal). The hardware or software that performs compression is called a codec (coder/decoder): a device or program capable of encoding and decoding a digital signal. As data-intensive digital video applications have become ubiquitous, so has the need for more efficient ways to encode signals. Video compression has therefore become a central component of storage and communication technology.
Codecs are used in many different technologies, such as videoconferencing, videoblogging, and other streaming media applications, e.g., video podcasts. Typically, a videoconferencing or videoblogging system provides digital compression of audio and video streams in real time. One of the problems with videoconferencing and videoblogging is that many participants suffer from appearance consciousness. The burden of presenting an acceptable on-screen appearance, however, is not an issue in audio-only communication.
Another problem videoconferencing and videoblogging present is that the compression of information can result in decreased video quality. The compression ratio is one of the most important factors in video conferencing because the higher the compression ratio, the faster the video conferencing information is transmitted. Unfortunately, with conventional video compression schemes, the higher the compression ratio, the lower the video quality. Compressed video streams often result in poor images and poor sound quality.
In general, conventional video compression schemes suffer from a number of inefficiencies, which are manifested in the form of slow data communication speeds, large storage requirements, and disturbing perceptual effects. These impediments can pose serious problems for a variety of users who need to manipulate video data easily, efficiently, and without sacrificing quality, which is particularly important in light of the innate sensitivity people have to some forms of visual information.
In video compression, a number of critical factors are typically considered, including video quality, bit rate, the computational complexity of the encoding and decoding algorithms, robustness to data losses and errors, and latency. As an increasing amount of video data surges across the Internet, not just to computers but also to televisions, cell phones, and other handheld devices, a technology that could significantly relieve congestion or improve quality represents a significant breakthrough.
Systems and methods for processing video are provided to create computational and analytical advantages over existing state-of-the-art methods. Video compression schemes are provided to reduce the number of bits required to store and transmit digital media in video conferencing or videoblogging applications. A photorealistic avatar representation of a video conference participant is created. The avatar representation can be based on portions of a video stream that depict the conference participant. An object-based video compression algorithm can use a face detector, such as a Viola-Jones face detector, to detect, track, and classify the face of the conference participant. Object models for structure, deformation, appearance, and illumination are created based on the detected face in conjunction with registration of pre-defined object models for general faces. These object models are used to create an implicit representation and thus to generate the photorealistic avatar representation of the video conference participant.
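As a rough illustration of the detection step only, the following sketch uses OpenCV's Haar-cascade detector, which implements a Viola-Jones style face detector. It is not this system's implementation, and the object-model construction described above is omitted; the cascade file and the frame source are assumptions of the sketch.

```python
# Minimal sketch: Viola-Jones style face detection with OpenCV's Haar cascade.
import cv2

# The frontal-face cascade ships with the opencv-python package.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def detect_faces(frame_bgr):
    """Return a list of (x, y, w, h) rectangles for faces found in one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return list(faces)
```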
This depiction can be a lifelike version of the face of the video conference participant. It can be accurate in terms of the user's appearance and expression. Other parts of the originally captured frame can be depicted, possibly with lower accuracy. A short calibration session, executed once per unique user, can take place. This would enable the system to initialize the compression algorithms and create the object models. Preferably, subsequent video conferencing sessions would not need additional calibration.
Should the user require a video representation that is as faithful as a conventional video depiction, the system might require an additional calibration period to adjust the stored models to better match the user's appearance. Alternatively, the user may choose to use a preferred object model rather than the current object model. The preferred model may be some advantageous representation of the user, for example one captured in a calibration session with the best lighting and a neater appearance of the user. Another preferred object model would be a calibration model that has been “re-lit” and had “smoothing” applied to the face, both processing steps intended to achieve a “higher quality” representation of the subject.
A video conferencing/blogging system can be provided using a client-server framework. A user at a client node can initiate a video conferencing session, communicating through the use of a video camera and headset. A photorealistic avatar representation of each user's face can be generated. The photorealistic avatar representation created can be an implicit representation of the face of the video conference participant.
The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
A description of example embodiments of the invention follows.
Creating Object Models
In video signal data, frames of video are assembled into a sequence of images. The subject of the video is usually a three-dimensional scene projected onto the camera's two-dimensional imaging surface. In the case of synthetically generated video, a “virtual” camera is used for rendering, and in the case of animation, the animator performs the role of managing this camera frame of reference. Each frame, or image, is composed of picture elements (pels) that represent an imaging sensor response to the sampled signal. Often, the sampled signal corresponds to some reflected, refracted, or emitted energy (e.g., electromagnetic, acoustic, etc.) sampled through the camera's components onto a two-dimensional sensor array. Successive sequential sampling results in a spatiotemporal data stream with two spatial dimensions per frame and a temporal dimension corresponding to the frame's order in the video sequence. This process is commonly referred to as the “imaging” process.
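For concreteness, one way to hold such a spatiotemporal stream in memory is sketched below; the dimensions and data type are illustrative assumptions, not values taken from the system.

```python
# Illustrative layout of a sampled video signal: a temporal dimension (frame
# order) plus two spatial dimensions per frame, with per-pel sensor responses.
import numpy as np

num_frames, height, width, channels = 30, 480, 640, 3   # assumed dimensions
video = np.zeros((num_frames, height, width, channels), dtype=np.uint8)

frame_10 = video[10]          # one frame: a 2D grid of pels (per channel)
pel = video[10, 240, 320]     # one pel: the sensor response at row 240, col 320
```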
The invention provides a means by which video signal data can be efficiently processed into one or more beneficial representations. The present invention is efficient at processing many commonly occurring data sets in the video signal. The video signal is analyzed, and one or more concise representations of that data are provided to facilitate its processing and encoding. Each new, more concise data representation allows reduction in computational processing, transmission bandwidth, and storage requirements for many applications, including, but not limited to: encoding, compression, transmission, analysis, storage, and display of the video signal. Noise and other unwanted parts of the signal are identified as lower priority so that further processing can be focused on analyzing and representing the higher priority parts of the video signal. As a result, the video signal can be represented more concisely than was previously possible, with the loss in accuracy concentrated in the parts of the video signal that are perceptually unimportant.
As described in U.S. application Ser. No. 11/336,366, filed Jan. 20, 2006, and U.S. application No. 60/881,966, titled “Computer Method and Apparatus for Processing Image Data,” filed Jan. 23, 2007, the entire teachings of which are incorporated by reference, video signal data is analyzed and salient components are identified. The analysis of the spatiotemporal stream reveals salient components that are often specific objects, such as faces. The identification process qualifies the existence and significance of the salient components and chooses one or more of the most significant of those qualified salient components. This does not limit the identification and processing of other, less salient components after or concurrently with the presently described processing. The aforementioned salient components are then further analyzed, identifying the variant and invariant subcomponents. The identification of invariant subcomponents is the process of modeling some aspect of the component, thereby revealing a parameterization of the model that allows the component to be synthesized to a desired level of accuracy.
In one embodiment, the PCA/wavelet encoding techniques are applied to a preprocessed video signal to form a desired compressed video signal. The preprocessing reduces complexity of the video signal in a manner that enables principal component analysis (PCA)/wavelet encoding (compression) to be applied with increased effect. PCA/wavelet encoding is discussed at length in co-pending applications U.S. application Ser. No. 11/336,366, filed Jan. 20, 2006, and U.S. application No. 60/881,966, titled “Computer Method and Apparatus for Processing Image Data,” filed Jan. 23, 2007.
Segmenter 103 analyzes an image gradient over time and/or space using temporal and/or spatial differences in derivatives of pels. For the purposes of coherence monitoring, parts of the video signal that correspond to each other across sequential frames of the video signal are tracked and noted. The finite differences of the derivative fields associated with those coherent signal components are integrated to produce the portions of the video signal that use disproportionate bandwidth relative to other portions (i.e., to determine the components of interest). In a preferred embodiment, if a spatial discontinuity in one frame is found to correspond to a spatial discontinuity in a succeeding frame, then the abruptness or smoothness of the image gradient is analyzed to yield a unique correspondence (temporal coherency). Further, collections of such correspondences are also employed in the same manner to uniquely attribute temporal coherency of discrete components of the video frames. For an abrupt image gradient, an edge is determined to exist. If two such edge-defining spatial discontinuities exist, then a corner is defined. These identified spatial discontinuities are combined with the gradient flow, which produces motion vectors between corresponding pels across frames of the video data. When a motion vector is coincident with an identified spatial discontinuity, the invention segmenter 103 determines that a component of interest (salient object) exists.
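A rough sketch of these gradient cues is given below for a pair of single-channel (grayscale) frames; the thresholds and the simple pel-wise temporal difference are illustrative assumptions that stand in for the fuller coherence analysis described above.

```python
# Rough sketch: spatial and temporal finite differences of pel intensities,
# flagging pels where an abrupt spatial gradient (edge) coincides with
# temporal change. Thresholds are arbitrary illustrative values.
import numpy as np

def saliency_mask(prev_frame, next_frame, edge_thresh=30.0, motion_thresh=10.0):
    """prev_frame, next_frame: 2D grayscale arrays of the same shape."""
    f0 = prev_frame.astype(np.float32)
    f1 = next_frame.astype(np.float32)

    # Spatial derivatives (finite differences) of the current frame.
    d_row, d_col = np.gradient(f1)
    spatial_edge = np.hypot(d_row, d_col) > edge_thresh

    # Temporal derivative between successive frames.
    temporal_change = np.abs(f1 - f0) > motion_thresh

    # Candidate components of interest: pels where both cues coincide.
    return spatial_edge & temporal_change
```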
Other segmentation techniques are suitable for implementing segmenter 103. Returning to
The structural object model 107 may be mathematically represented as:
SM(σ) = {vx,y, Δt, Z}  (Equation 1)
where σ is the salient object (determined component of interest) and SM(σ) is the structural model of that object;
vx,y are the 2D mesh vertices of a piece-wise linear regularized mesh over the object σ registered over time;
Δt are the changes in those vertices over time t, representing scaling (or local deformation), rotation, and translation of the object between video frames; and
Z is global motion.
From Equation 1, a global rigid structural model, global motion, pose, and locally derived deformation of the model can be derived. Known techniques for estimating structure from motion are employed and are combined with motion estimation to determine candidate structures for the structural parts (component of interest of the video frame over time). This results in defining the position and orientation of the salient object in space and hence provides a structural model 107 and a motion model 111.
The appearance model 108 then represents characteristics and aspects of the salient object that are not collectively modeled by the structural model 107 and the motion model 111. In one embodiment, the appearance model 108 is a linear decomposition of structural changes over time and is defined by removing global motion and local deformation from the structural model 107. Applicant takes the object appearance at each video frame and, using the structural model 107, reprojects it to a “normalized pose.” The “normalized pose” will also be referred to as one or more “cardinal” poses. The reprojection represents a normalized version of the object and captures any variation in appearance. As the given object rotates or is spatially translated between video frames, the appearance is positioned in a single cardinal pose (i.e., the average normalized representation). The appearance model 108 also accounts for cardinal deformation of a cardinal pose (e.g., eyes opened/closed, mouth opened/closed, etc.). Thus, the appearance model 108, AM(σ), is represented by the cardinal pose Pc and the cardinal deformation Δc in the cardinal pose Pc.
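As a small illustration of the “average normalized representation” mentioned above, the sketch below forms a single cardinal pose by averaging appearance samples that have already been reprojected to the normalized pose; the input layout is an assumption, and the reprojection step itself is not shown.

```python
# Sketch: a cardinal pose as the average of appearance samples that have been
# reprojected to the normalized pose (reprojection itself is not shown).
import numpy as np

def cardinal_pose(normalized_appearances):
    """normalized_appearances: array of shape (num_frames, num_pels)."""
    return normalized_appearances.mean(axis=0)   # average normalized representation
```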
The pels in the appearance model 108 are preferably biased based on their distance and angle of incidence to the camera projection axis. Biasing determines the relative weight of the contribution of an individual pel to the final formulation of a model. Therefore, preferably, this “sampling bias” can factor into all processing of all models. Tracking of the candidate structure (from the structural model 107) over time can form or enable a prediction of the motion of all pels by implication from the pose, motion, and deformation estimates.
Further, with regard to appearance and illumination modeling, one of the persistent challenges in image processing has been tracking objects under varying lighting conditions. In image processing, contrast normalization is a process that models changes of pixel intensity values as attributable to changes in lighting/illumination rather than to other factors. The preferred embodiment estimates a salient object's arbitrary changes in illumination conditions under which the video was captured (i.e., modeling the illumination incident on the object). This is achieved by combining principles from Lambertian Reflectance Linear Subspace (LRLS) theory with optical flow. According to LRLS theory, when an object is fixed and only the illumination changes, the set of reflectance images can be approximated by a linear combination of the first nine spherical harmonics; thus the image lies close to a 9D linear subspace in an ambient “image” vector space. In addition, the reflectance intensity for an image pixel (x, y) can be approximated as follows.
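The approximation referred to here is the standard LRLS form (as formulated by Basri and Jacobs); the notation below is the common textbook formulation and is not necessarily the exact expression used by this system.

```latex
% LRLS approximation of reflectance intensity at pixel (x, y):
% a linear combination of the first nine harmonic images.
I(x, y) \;\approx\; \sum_{i=1}^{9} l_i \, b_i(x, y),
\qquad b_i(x, y) = \rho(x, y)\, h_i\!\big(\mathbf{n}(x, y)\big)
```

Here ρ(x, y) is the surface albedo, n(x, y) is the surface normal, the h_i are the first nine spherical harmonics, and the l_i are lighting coefficients determined by the illumination.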
Using LRLS and optical flow, expectations are computed to determine how lighting interacts with the object. These expectations serve to constrain the possible object motion that can explain changes in the optical flow field. When using LRLS to describe the appearance of the object through illumination modeling, it is still necessary to allow an appearance model to handle any appearance changes that fall outside of the illumination model's predictions.
Other mathematical representations of the appearance model 108 and structural model 107 are suitable as long as the complexity of the components of interest is substantially reduced from the corresponding original video signal but saliency of the components of interest is maintained.
Returning to
PCA encoding is applied to the normalized pel data on both sides 232 and 236, which builds the same set of basis vectors on each side 232, 236. In a preferred embodiment, PCA/wavelet encoding is applied to the basis function during image processing to produce the desired compressed video data. Wavelet techniques (DWT) transform the entire image and sub-images and linearly decompose the appearance model 108 and structural model 107; this decomposed model is then truncated gracefully to meet desired threshold goals (a la EZT or SPIHT). This enables scalable video data processing, unlike prior art systems/methods, due to the “normalized” nature of the video data.
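The sketch below shows one way a shared PCA basis could be built from normalized pel data and used to produce compact per-frame coefficients; the array shapes, the SVD-based implementation, and the number of retained components are assumptions for illustration, and the wavelet/truncation stage is omitted.

```python
# Minimal sketch: build a PCA basis from normalized pel data (one row per
# frame, one column per pel) and encode each frame as a few coefficients.
import numpy as np

def build_pca_basis(normalized_frames, num_components=10):
    """normalized_frames: array of shape (num_frames, num_pels)."""
    mean = normalized_frames.mean(axis=0)
    centered = normalized_frames - mean
    # Right singular vectors of the centered data are the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:num_components]              # (num_components, num_pels)
    coeffs = centered @ basis.T              # compact encoding per frame
    return mean, basis, coeffs

def reconstruct(mean, basis, coeffs):
    """Synthesize approximations of the normalized frames from the basis."""
    return coeffs @ basis + mean
```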
As shown in
Creating a Photorealistic Avatar Representation
At 304, the system 100 determines whether the face has been calibrated before. If there is no existing calibration, then at 306 the face is calibrated. Calibration information can include information about face orientation (x, y positions specifying where the face is centered), scale information, and structure, deformation, appearance, and illumination information. These parameters can be derived using a hybrid three-dimensional morphable model and LRLS algorithm together with the structure, deformation, appearance, and illumination models. These models are discussed in U.S. application Ser. No. 11/336,366, filed Jan. 20, 2006, and U.S. application No. 60/881,966, titled “Computer Method and Apparatus for Processing Image Data,” filed Jan. 23, 2007, the entire teachings of which are incorporated by reference. Other known modeling technologies may also be used to determine these parameters, such as three-dimensional morphable modeling, active appearance models, etc. These approximations can be used to estimate the pose and structure of the face, and the illumination conditions, for each frame in the video. Once the structure, deformation, appearance, and illumination basis (i.e., the calibration information) for the individual's face has been resolved, then at 308 these explicit models can be used to detect, track, and model the individual's face.
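For illustration only, the calibration information listed above could be grouped in a structure like the following; the field names and array contents are assumptions rather than the system's actual data layout.

```python
# Illustrative container for per-user face calibration information.
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceCalibration:
    center_xy: tuple                # (x, y) position where the face is centered
    scale: float                    # face scale within the frame
    structure_basis: np.ndarray     # basis describing 3D structure
    deformation_basis: np.ndarray   # basis describing local deformation
    appearance_basis: np.ndarray    # basis describing appearance variation
    illumination_basis: np.ndarray  # e.g., nine LRLS harmonic images
```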
At 310, these parameters (e.g., the structure, deformation, appearance, and illumination basis) can be used to initialize the implicit modeling. The implicit modeling builds its model relative to the information obtained from the explicit modeling and provides a compact encoding of the individual's face. The parameters obtained from the explicit modeling are used as a ground truth for estimating the implicit model. For example, the explicit modeling parameters are used to build expectations about how lighting interacts with the structure of the face, and then the face is sampled; these constraints provide a means of limiting the search space for the implicit algorithm. At 312, the individual's face is detected, tracked, and classified using the implicit model, and a photorealistic avatar representation is generated. The frames generated using the implicit modeling use less encoding per frame and require fewer parameters than the explicit model. The photorealistic avatar representation is a synthetic representation of the face (e.g., a proxy avatar) of the conference participant. The fidelity of the synthetic representation can range from a faithful representation of the participant in the original video capture all the way to a representation supported by a previous calibration session.
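One simple way to express the idea of limiting the implicit algorithm's search space around the explicit-model estimates is sketched below; the parameter vectors and the deviation bound are hypothetical.

```python
# Sketch: keep only candidate parameter vectors that stay close to the
# explicit-model estimate, which acts as ground truth for the implicit model.
def constrain_candidates(explicit_params, candidate_params, max_deviation=0.1):
    """explicit_params and each candidate are equal-length parameter vectors."""
    return [cand for cand in candidate_params
            if all(abs(c - e) <= max_deviation
                   for c, e in zip(cand, explicit_params))]
```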
The system 300 performs periodic checking to ensure that it is basing its modeling on realistic approximations. Thus, at step 314, the system 300 checks to confirm that its implicit object modeling is working properly. The system may determine that the implicit object modeling is working if the reprojection error is low for a certain amount of time. If the reprojection error is low and there is a significant amount of motion, then it is likely that the implicit object modeling is working properly. If, however, the reprojection error is high, then the system 300 may determine that the implicit modeling is not working optimally. Similarly, if the system 300 detects a disproportionate amount of bandwidth usage, it may determine that the implicit modeling is not working optimally.
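A condensed sketch of this check is given below; the thresholds, the error window, and the bandwidth measure are illustrative assumptions.

```python
# Sketch of the periodic implicit-model health check.
def implicit_model_ok(recent_reprojection_errors, bandwidth_used,
                      error_thresh=2.0, bandwidth_budget=1.0):
    """Return True if the implicit model appears to be tracking properly."""
    if bandwidth_used > bandwidth_budget:        # disproportionate bandwidth
        return False
    if max(recent_reprojection_errors) > error_thresh:
        return False                             # reprojection error too high
    # Reprojection error has stayed low over the recent window; together with
    # significant face motion this indicates the implicit model is working.
    return True
```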
If it is determined that the implicit modeling is not working, then at step 316, the system 300 checks to determine whether a face can be detected. If a face can be detected, then at step 304, the system 300 finds the existing calibration information for the face and proceeds accordingly. If a face cannot be detected, then the system proceeds to step 302 to detect the face using the Viola-Jones face detector.
In another preferred embodiment, the present invention uses the explicit modeling to re-establish the implicit modeling. The explicit modeling re-establishes the model parameters necessary to re-initialize the implicit model. The full re-establishment involving running the face detector is performed if the explicit modeling cannot re-establish modeling of the participant.
It should be noted that face detection can also lead to using the implicit modeling for calibration. In this case, the implicit model is used to “calibrate” the explicit model. Then, the explicit model starts its processing, which in turn leads to an initialization of the implicit model as well.
This periodic checking enables the system 300 to reconfirm that it is in fact modeling a real object, a human face, and causes the system 300 to reset its settings periodically. This arrangement provides a tight coupling between the face detector 402, the calibrator 404, the explicit modeler 406 and the implicit modeler 408. In this way, periodically, the feedback from the explicit modeler 406 is used to reinitialize the implicit modeler 408. A block diagram illustrating an example implementation of this system 300 is shown in
Photorealistic Avatar Preferences
The photorealistic avatar generation system 300 can provide a host of preferences to conference participants to make their video conference experience more enjoyable. For example, a conference participant can select a preference requiring that their photorealistic avatar representation always look directly into the camera, such that the avatar representation appears to be looking directly at the other conference participant. Since the modeling employed allows for the re-posing of any model relative to a virtual camera, the gaze adjustment required for non-co-located cameras and monitors can be compensated for. The conference participant can also select a specific background model. By selecting a consistent background model, the system 300 is able to provide an even more efficiently compressed version of the video stream. The model may be a predefined background or a low-resolution version of the actual background, for example. During face detection and calibration, the conference participant can also customize features associated with their personal attributes in their photorealistic avatar representation, such as removal of wrinkles, selection of hair style/effects, selection of clothing, etc.
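These preferences might be captured in a simple settings structure like the sketch below; the option names and defaults are hypothetical.

```python
# Illustrative per-participant preference settings for avatar generation.
from dataclasses import dataclass

@dataclass
class AvatarPreferences:
    correct_gaze: bool = True          # re-pose the model toward a virtual camera
    background: str = "predefined"     # or "low_res_actual"
    smooth_skin: bool = False          # "smoothing" applied to the face
    relight: bool = False              # use a "re-lit" calibration model
    hair_style: str = "as_captured"
    clothing: str = "as_captured"

prefs = AvatarPreferences(correct_gaze=True, background="low_res_actual")
```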
By providing a photorealistic avatar representation of the conference participant, the system 300 provides an added layer of security that is not typically available in conventional video conference systems. In particular, because the photorealistic avatar representation is a synthetic representation, the conference participant does not need to worry about the other conference participant learning potentially confidential information, such as confidential documents that the conference participant is looking at during the video conference, or other confidential information that might be derived from being able to view the specific environment in which the video conference is being recorded.
Video Conferencing System
The asynchronous or semi-synchronous messaging system environment 500 provides a means by which multiple participants are able to interact with each other. This is an important element of usability. The instant messaging session aspect allows the users to “edit” their own video and review it prior to “sending” it to the other side. There are aspects of both control and bandwidth reduction that are critical. The editing and control aspects may also be used to generate “higher” quality video segments that can later be used for other purposes (e.g., by associating the phonemes, or audio phrase patterns, with the video, a video session can be provided without a camera by stitching together “previous” segments).
Processing Environment
In one embodiment, the processor routines 92 and data 94 are a computer program product, including a computer readable medium (e.g., a removable storage medium, such as one or more DVD-ROMs, CD-ROMs, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the invention system. The computer program product can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable, communication, and/or wireless connection. In other embodiments, the invention programs are a computer program propagated signal product embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network, such as the Internet, or other network(s)). Such carrier medium or signals provide at least a portion of the software instructions for the present invention routines/program 92.
In alternate embodiments, the propagated signal is an analog carrier wave or digital signal carried on the propagated medium. For example, the propagated signal may be a digitized signal propagated over a global network (e.g., the Internet), a telecommunications network, or other network. In one embodiment, the propagated signal is a signal that is transmitted over the propagation medium over a period of time, such as the instructions for a software application sent in packets over a network over a period of milliseconds, seconds, minutes, or longer. In another embodiment, the computer readable medium of computer program product is a propagation medium that the computer system may receive and read, such as by receiving the propagation medium and identifying a propagated signal embodied in the propagation medium, as described above for computer program propagated signal product.
Generally speaking, the term “carrier medium” or transient carrier encompasses the foregoing transient signals, propagated signals, propagated medium, storage medium and the like.
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
For example, the present invention may be implemented in a variety of computer architectures. The computer networks illustrated in
The invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Some examples of optical disks include compact disk—read only memory (CD-ROM), compact disk—read/write (CD-R/W) and DVD.
A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories, which provide temporary storage of at least some program code in order to reduce the number of times code is retrieved from bulk storage during execution.
Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
This application is the U.S. National Stage of International Application No. PCT/US2008/000092, filed Jan. 4, 2008, which designates the U.S., published in English, and claims the benefit of U.S. Provisional Application Ser. No. 60/881,979 filed Jan. 23, 2007. This application is related to U.S. Provisional Application Ser. No. 60/881,966, titled “Computer Method and Apparatus for Processing Image Data,” filed Jan. 23, 2007, U.S. Provisional Application Ser. No. 60/811,890, titled “Apparatus And Method For Processing Video Data,” filed Jun. 8, 2006. This application is related to U.S. application Ser. No. 11/396,010 filed Mar. 31, 2006, which is a continuation-in-part of U.S. application Ser. No. 11/336,366 filed Jan. 20, 2006, which is a continuation-in-part of U.S. application Ser. No. 11/280,625 filed Nov. 16, 2005, which is a continuation-in-part of U.S. application Ser. No. 11/230,686, filed Sep. 20, 2005, which is a continuation-in-part of U.S. application Ser. No. 11/191,562, filed Jul. 28, 2005, now U.S. Pat. No. 7,158,680. The entire teachings of the above applications are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US2008/000092 | 1/4/2008 | WO | 00 | 7/7/2009 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---
WO2008/091485 | 7/31/2008 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5117287 | Koike et al. | May 1992 | A |
5710590 | Ichige et al. | Jan 1998 | A |
5760846 | Lee | Jun 1998 | A |
5774591 | Black et al. | Jun 1998 | A |
5774595 | Kim | Jun 1998 | A |
5933535 | Lee et al. | Aug 1999 | A |
5969755 | Courtney | Oct 1999 | A |
6307964 | Lin et al. | Oct 2001 | B1 |
6546117 | Sun et al. | Apr 2003 | B1 |
6574353 | Schoepflin | Jun 2003 | B1 |
6611628 | Sekiguchi et al. | Aug 2003 | B1 |
6625310 | Lipton et al. | Sep 2003 | B2 |
6625316 | Maeda | Sep 2003 | B1 |
6661004 | Aumond et al. | Dec 2003 | B2 |
6711278 | Gu et al. | Mar 2004 | B1 |
6731799 | Sun et al. | May 2004 | B1 |
6731813 | Stewart | May 2004 | B1 |
6738424 | Allmen et al. | May 2004 | B1 |
6751354 | Foote et al. | Jun 2004 | B2 |
6774917 | Foote et al. | Aug 2004 | B1 |
6792154 | Stewart | Sep 2004 | B1 |
6870843 | Stewart | Mar 2005 | B1 |
6912310 | Park et al. | Jun 2005 | B1 |
6925122 | Gorodnichy | Aug 2005 | B2 |
6950123 | Martins | Sep 2005 | B2 |
7043058 | Cornog et al. | May 2006 | B2 |
7088845 | Gu et al. | Aug 2006 | B2 |
7158680 | Pace | Jan 2007 | B2 |
7162055 | Gu et al. | Jan 2007 | B2 |
7162081 | Timor et al. | Jan 2007 | B2 |
7164718 | Maziere et al. | Jan 2007 | B2 |
7184073 | Varadarajan et al. | Feb 2007 | B2 |
7352386 | Shum et al. | Apr 2008 | B1 |
7415527 | Varadarajan et al. | Aug 2008 | B2 |
7424157 | Pace | Sep 2008 | B2 |
7424164 | Gondek et al. | Sep 2008 | B2 |
7426285 | Pace | Sep 2008 | B2 |
7436981 | Pace | Oct 2008 | B2 |
7457435 | Pace | Nov 2008 | B2 |
7457472 | Pace et al. | Nov 2008 | B2 |
7508990 | Pace | Mar 2009 | B2 |
7574406 | Varadarajan et al. | Aug 2009 | B2 |
7630522 | Popp et al. | Dec 2009 | B2 |
8036464 | Sridhar et al. | Oct 2011 | B2 |
8065302 | Sridhar et al. | Nov 2011 | B2 |
8068677 | Varadarajan et al. | Nov 2011 | B2 |
8086692 | Sridhar et al. | Dec 2011 | B2 |
8090670 | Sridhar et al. | Jan 2012 | B2 |
8140550 | Varadarajan et al. | Mar 2012 | B2 |
20010038714 | Masumoto et al. | Nov 2001 | A1 |
20020054047 | Toyama et al. | May 2002 | A1 |
20020085633 | Kim et al. | Jul 2002 | A1 |
20020164068 | Yan | Nov 2002 | A1 |
20020196328 | Piotrowski | Dec 2002 | A1 |
20030011589 | Desbrun et al. | Jan 2003 | A1 |
20030063778 | Rowe et al. | Apr 2003 | A1 |
20030103647 | Rui et al. | Jun 2003 | A1 |
20030122966 | Markman et al. | Jul 2003 | A1 |
20030163690 | Stewart | Aug 2003 | A1 |
20030194134 | Wenzel et al. | Oct 2003 | A1 |
20030235341 | Gokturk et al. | Dec 2003 | A1 |
20040013286 | Viola et al. | Jan 2004 | A1 |
20040107079 | MacAuslan | Jun 2004 | A1 |
20040135788 | Davidson et al. | Jul 2004 | A1 |
20040246336 | Kelly, III et al. | Dec 2004 | A1 |
20060029253 | Pace | Feb 2006 | A1 |
20060067585 | Pace | Mar 2006 | A1 |
20060133681 | Pace | Jun 2006 | A1 |
20060177140 | Pace et al. | Aug 2006 | A1 |
20060233448 | Pace et al. | Oct 2006 | A1 |
20070025373 | Stewart | Feb 2007 | A1 |
20070071336 | Pace | Mar 2007 | A1 |
20090067719 | Sridhar et al. | Mar 2009 | A1 |
20090292644 | Varadarajan et al. | Nov 2009 | A1 |
20100008424 | Pace | Jan 2010 | A1 |
20100049739 | Varadarajan et al. | Feb 2010 | A1 |
20100086062 | Pace | Apr 2010 | A1 |
20100167709 | Varadarajan | Jul 2010 | A1 |
20110055266 | Varadarajan et al. | Mar 2011 | A1 |
20110087703 | Varadarajan et al. | Apr 2011 | A1 |
20110182352 | Pace | Jul 2011 | A1 |
Number | Date | Country |
---|---|---|
0 614 318 | Sep 1994 | EP |
1 124 379 | Aug 2001 | EP |
1 426 898 | Jun 2004 | EP |
1 779 294 | May 2007 | EP |
5-244585 | Sep 1993 | JP |
2001-100731 | Apr 2001 | JP |
2001-103493 | Apr 2001 | JP |
2002-525735 | Aug 2002 | JP |
2004-94917 | Mar 2004 | JP |
2006-521048 | Sep 2006 | JP |
WO 9827515 | Jun 1998 | WO |
WO 9859497 | Dec 1998 | WO |
WO 9926415 | May 1999 | WO |
WO 0016563 | Mar 2000 | WO |
WO 0045600 | Aug 2000 | WO |
WO02 102084 | Dec 2002 | WO |
WO 03041396 | May 2003 | WO |
WO 2005055602 | Jun 2005 | WO |
WO 2005107116 | Nov 2005 | WO |
WO 2006015092 | Feb 2006 | WO |
WO 2006034308 | Mar 2006 | WO |
WO 2006055512 | May 2006 | WO |
WO 2006083567 | Aug 2006 | WO |
WO 2006105470 | Oct 2006 | WO |
WO 2007146102 | Dec 2007 | WO |
WO 2008091483 | Jul 2008 | WO |
WO 2008091484 | Jul 2008 | WO |
WO 2008091485 | Jul 2008 | WO |
WO 2010042486 | Apr 2010 | WO |
Number | Date | Country | |
---|---|---
20100073458 A1 | Mar 2010 | US |
Number | Date | Country | |
---|---|---
60881979 | Jan 2007 | US |