This patent application is related to concurrently filed U.S. patent application Ser. No. 09/608,991, titled “Model-Based Video Image Coding,” by Acharya et al., filed on Jun. 30, 2000, and concurrently filed U.S. patent application Ser. No. 09/607,724, titled “Method of Video Coding Shoulder Movement from a Sequence of Images,” by Acharya et al., filed on Jun. 30, 2000, both assigned in part to the assignee of the present invention and herein incorporated by reference.
The present disclosure is related to video coding and, more particularly, to coding the movement of a head from a sequence of images.
As is well-known, motion estimation is a common or frequently encountered problem in digital video processing. A number of approaches are known and have been employed. One approach, for example, identifies features located on the object and tracks them from frame to frame, as described, for example, in “Two-View Facial Movement Estimation” by H. Li and R. Forchheimer, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 4, No. 3, pp. 276–287, June 1994. In this approach, the features are tracked through the two-dimensional correspondence between successive frames. From this correspondence, the three-dimensional motion parameters are estimated. Another approach estimates the motion parameters from an optical flow and affine motion model. See, for example, “Analysis and Synthesis of Facial Image Sequences in Model-Based Coding,” by C. S. Choi, K. Aizawa, H. Harashima and T. Takebe, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 4, No. 3, pp. 257–275, June 1994. This optical flow approach estimates the motion parameters without establishing a two-dimensional correspondence. This latter approach, therefore, tends to be more robust and accurate, but typically imposes a heavier computational load. A need, therefore, exists for an approach that is more accurate than the two-dimensional correspondence approach, but computationally less burdensome than the optical flow and affine motion model.
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
As previously described, motion estimation is a common problem in video image processing. However, state-of-the-art techniques, such as those previously described, suffer from some disadvantages. For example, the previously described technique, referred to here as the “two-dimensional correspondence approach,” although computationally less burdensome, tends to be prone to errors due to mismatches of the two-dimensional correspondences. Another approach, referred to here as the “optical flow and affine motion model,” such as described in “3-D Motion Estimation and Wireframe Adaptation Including Photometric Effects for Model-Based Coding of Facial Image Sequences,” by G. Bozdagi, A. Murat Tekalp and L. Onural, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 4, No. 3, pp. 246–256, June 1994, although more accurate and robust, is typically computationally burdensome. Therefore, a need exists for an approach that is more accurate than the former, but less computationally burdensome than the latter.
In this particular context, the motion that is being tracked or coded is the movement of a head or face in a sequence of images. The ability to track this motion and code it may be desirable for a number of reasons. As just one example, it may be desirable in video conferencing, where a camera at one end may transmit the appropriate motion or movement of a face to a display at the other end. However, the communications channel over which such video conferencing takes place sometimes has a relatively low or limited bandwidth, so that only a limited amount of signal information may be communicated in real time.
An embodiment of a method of video coding the movement of a human head or face from a sequence of images includes the following. A limited number of feature points are selected from an image of the face whose movement is to be video coded. Using at least two images or frames from the sequence, changes in intensity at the selected feature points, such as spatio-temporal rates of change, are estimated. Using the feature points and the estimated rates, translation and rotation parameters of the face are then estimated. The estimated translation and rotation parameters are coded and/or transmitted across the communications channel. It is noted, of course, that instead of communicating the coded signal information, it may, alternatively, be stored and later read from memory, or used in some way other than transmitting it.
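Purely as an illustration of this flow, the following sketch strings the steps together in Python. Every name here (code_head_movement, select_feature_points, estimate_gradients, estimate_motion_parameters, frame_prev, frame_curr, model) is a hypothetical placeholder rather than anything named in the disclosure, and the helpers are themselves sketched, under stated assumptions, in later examples.

```python
import numpy as np

def code_head_movement(frame_prev, frame_curr, model):
    """Hypothetical outline of the embodiment: select a limited number of
    feature points, estimate spatio-temporal intensity gradients at them,
    then estimate and code the global translation/rotation parameters."""
    # 1. Select feature points from the 3D model's triangular patches,
    #    together with their class-dependent weighting factors.
    points, weights = select_feature_points(model)

    # 2. Estimate rates of change of intensity at those points from a
    #    pair of immediately sequential frames.
    Ix, Iy, It = estimate_gradients(frame_prev, frame_curr, points)

    # 3. Estimate the translation and rotation of the head, treated as a
    #    rigid body, by weighted least squares.
    A = estimate_motion_parameters(points, Ix, Iy, It, weights)

    # 4. Code the parameters for transmission or storage.
    return A.astype(np.float32).tobytes()
```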
Although the invention is not limited in scope in this respect, in this particular embodiment, the face is coded from at least one of the images or frames by employing a three-dimensional (3D) based coding technique to produce what shall be referred to here as a 3D model. Movement of the face from at least two, typically sequential, images of the sequence is estimated using this 3D model of the face or head. In particular, as shall be described in more detail hereinafter, the movement of the face is estimated by treating the 3D model of the head as a rigid body in the sequence of images.
In this embodiment, although the invention is not limited in scope in this respect, the 3D model applied comprises planar triangular patches. This is illustrated, for example, in the accompanying drawings.
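A minimal data-structure sketch of such a model follows, assuming nothing beyond what the text states: a set of vertices, triangular patches, and a per-patch class that is used later for weighting. The field names are illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class HeadModel:
    """Minimal stand-in for a 3D head model built from planar triangular
    patches; names and layout are illustrative assumptions only."""
    vertices: np.ndarray         # shape (V, 3): x, y, z coordinates of mesh vertices
    triangles: np.ndarray        # shape (T, 3): vertex indices of each triangular patch
    patch_is_global: np.ndarray  # shape (T,): True where the patch is expected to
                                 # follow the global (rigid) head motion
```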
In this embodiment, a limited number of feature points are selected from an image of the head. In this embodiment, enough feature points are selected from different triangular patches to obtain the desired amount of accuracy or robustness without being computationally burdensome. Furthermore, a weighting factor is assigned to each feature point, depending upon the class of triangular patch to which it belongs. The weighting factor assigned to a feature point selected from the ith triangular patch is given by the following relationship.
The weighting factors are used in the Least Mean Square estimation of the global motion parameters in this particular embodiment, as described in more detail later; there, the facial regions contributing more to the global motion are given larger weighting factors than those predominantly contributing to local motion. However, the invention is not restricted in scope to this embodiment. For example, estimation approaches other than Least Mean Square may be employed, other approaches to weighting may be employed, or, alternatively, weighting may not necessarily be employed in alternative embodiments. For this embodiment, the ranges of the weighting factors were determined from experimentation, although, again, the invention is not restricted in scope to this particular range of weights. Here, nonetheless, Wg varies in the range of approximately 0.6 to approximately 0.9 and Wl varies in the range of approximately 0.1 to approximately 0.3.
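Since the exact weighting relationship is not reproduced here, the sketch below simply assumes a two-class assignment: feature points drawn from global-motion patches receive a weight Wg and those from predominantly local-motion patches a weight Wl, with illustrative values picked from the approximate ranges quoted above. The helper name, the one-point-per-patch sampling, and the concrete values are assumptions.

```python
import numpy as np

# Illustrative values drawn from the quoted ranges (Wg roughly 0.6-0.9,
# Wl roughly 0.1-0.3); the exact values are an assumption.
W_GLOBAL = 0.75
W_LOCAL = 0.2

def select_feature_points(model, points_per_patch=1, rng=None):
    """Pick a limited number of feature points from the triangular patches
    and assign each point a weight according to its patch class."""
    rng = rng or np.random.default_rng(0)
    points, weights = [], []
    for tri, is_global in zip(model.triangles, model.patch_is_global):
        corners = model.vertices[tri]                  # (3, 3) patch corners
        for _ in range(points_per_patch):
            b = rng.dirichlet(np.ones(3))              # random barycentric weights
            points.append(b @ corners)                 # a point inside the patch
            weights.append(W_GLOBAL if is_global else W_LOCAL)
    return np.asarray(points), np.asarray(weights)
```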
Once feature points are selected, the rate of change of intensity at the selected feature points is estimated from the sequence of images. It is noted that it takes at least two images to estimate a rate of change; however, in this embodiment a rate of change is calculated for each pair of immediately sequential images in the sequence. It is also noted that a distinguishing feature of this approach is the selection of a limited number of feature points, which reduces the computational burden.
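The disclosure does not specify how these rates of change are computed; one common choice, assumed here purely for illustration, is a central difference in space and a forward difference in time, sampled at the feature point locations (whose x and y coordinates are used as image coordinates, consistent with the orthographic projection assumed below).

```python
import numpy as np

def estimate_gradients(frame_prev, frame_curr, points):
    """Estimate spatial (Ix, Iy) and temporal (It) intensity derivatives at
    the feature point locations; finite differences are an assumption, not
    the method mandated by the disclosure."""
    f0 = frame_prev.astype(np.float64)
    f1 = frame_curr.astype(np.float64)
    Iy_img, Ix_img = np.gradient(f0)   # central differences along rows (y) and columns (x)
    It_img = f1 - f0                   # forward difference in time
    # Sample at integer-rounded feature point locations, clipped to the frame.
    cols = np.clip(np.round(points[:, 0]).astype(int), 0, f0.shape[1] - 1)
    rows = np.clip(np.round(points[:, 1]).astype(int), 0, f0.shape[0] - 1)
    return Ix_img[rows, cols], Iy_img[rows, cols], It_img[rows, cols]
```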
The relationship between rate of change in intensity at the selected feature points and estimating the translation and rotation of the face is as follows. The gradient between two consecutive or immediately sequential frames is described as follows.
$$I_{x_k} V_{x_k} + I_{y_k} V_{y_k} + I_{t_k} = 0 \qquad (1)$$
Under the assumption of orthographic projection of the human face, for this particular embodiment, $V_{x_k}$ and $V_{y_k}$ are considered to be the optical flow field components, with the z-directional component assumed to be zero. The following linearized estimation equation may, therefore, be derived from equation (2) above by equating the x- and y-directional components of the velocities and then using these relations in equation (1) to evaluate $I_{t_k}$ as
$$H_k = F_k A$$

where $A$ denotes the vector of global translation and rotation parameters that is estimated, in this particular embodiment, by the weighted Least Mean Square procedure described above.
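As a hedged illustration of how such a system might be solved, the sketch below stacks one row per feature point and computes a weighted least-squares estimate of a five-parameter motion vector A = [tx, ty, wx, wy, wz]. The rigid-body, orthographic parameterization used to build each row is a textbook assumption consistent with the description above, not a verbatim reproduction of the patent's equations.

```python
import numpy as np

def estimate_motion_parameters(points, Ix, Iy, It, weights):
    """Weighted least-squares estimate of A = [tx, ty, wx, wy, wz].
    Assumed flow model (rigid body, z-directional flow ignored):
        Vx = tx + wy*z - wz*y
        Vy = ty + wz*x - wx*z
    substituted into Ix*Vx + Iy*Vy + It = 0 at each feature point."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # One row F_k per feature point so that F_k @ A approximates H_k = -It_k.
    F = np.stack([Ix, Iy, -Iy * z, Ix * z, Iy * x - Ix * y], axis=1)
    H = -It
    # Weight each row: feature points from global-motion patches exert more influence.
    sw = np.sqrt(weights)
    A, *_ = np.linalg.lstsq(F * sw[:, None], H * sw, rcond=None)
    return A
```

Solving in this weighted sense corresponds to the Least Mean Square estimation of the global motion parameters described earlier, with the larger weights Wg letting the global-motion facial regions dominate the estimate.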
It will, of course, be understood that, although particular embodiments have just been described, the invention is not limited in scope to a particular embodiment or implementation. For example, one embodiment may be in hardware, whereas another embodiment may be in software. Likewise, an embodiment may be in firmware, or any combination of hardware, software, or firmware, for example. Likewise, although the invention is not limited in scope in this respect, one embodiment may comprise an article, such as a storage medium. Such a storage medium, such as, for example, a CD-ROM, or a disk, may have stored thereon instructions, which when executed by a system, such as a host computer or computing system or platform, or an imaging system, may result in a method of video coding the movement of a human face from a sequence of images in accordance with the invention, such as, for example, one of the embodiments previously described. Likewise, a hardware embodiment may comprise an imaging system including an imager and a computing platform, such as one adapted to perform or execute coding in accordance with the invention, for example.
While certain features of the invention have been illustrated and detailed herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.