The success of the MPEG-1 and MPEG-2 coding standards was driven by their ability to deliver digital audiovisual services with high quality and compression efficiency. However, the scope of these two standards is restricted to representing audiovisual information much as analog systems do, with video limited to a sequence of rectangular frames. MPEG-4 (ISO/IEC JTC1/SC29/WG11) is the first international standard designed for true multimedia communication, and its goal is to provide a new kind of standardization that supports the evolution of information technology.
MPEG-4 provides a unified audiovisual representation framework. In this representation, a scene is described as a composition of arbitrarily shaped audiovisual objects (AVOs). These AVOs can be organized in a hierarchical fashion, and in addition to providing support for coding individual objects, MPEG-4 also provides facilities to compose that hierarchical structure.
One of these AVOs is the Face Object, which allows animation of synthetic faces, sometimes called Talking Heads. It consists of a 2D representation of a 3D synthetic visual object representing a human face, a synthetic audio object, and some additional information required for the animation of the face. Such a scene can be defined using the BInary Format for Scenes (BIFS), a language that allows composition of 2D and 3D objects, as well as animation of the objects and their properties.
The face model is defined by BIFS through the use of nodes. The Face Animation Parameter (FAP) node defines the part of the face that is to be animated. The Face Description Parameter (FDP) node defines the rules used to animate the face model. The audio object can be natural audio, or it can be created at the decoder with a proprietary Text-To-Speech (TTS) synthesizer. In the case of an encoded stream containing natural audio, an independent FAP stream drives the animation, and time stamps included in the streams enable synchronization between the audio and the animation. A synthesizer is a device that creates an output based on a set of inputs and a set of rules. Two synthesizers that are subject to different rules may generate perfectly acceptable but markedly different outputs in response to a given set of inputs; for example, one synthesizer might generate a talking head of a blond woman, while the other might generate a talking head of a dark-haired woman.
A TTS synthesizer is a system that accepts text as input and produces an intermediate signal comprising phonemes and a final signal comprising audio samples corresponding to the text. MPEG-4 does not standardize the TTS synthesizer itself, but it provides a Text-To-Speech Interface (TTSI). When text is sent to the decoder, the animation is driven both by the FAP stream and by the TTS synthesizer.
MPEG-4 defines a set of 68 Face Animation Parameters (FAPs), each corresponding to a particular facial action that deforms a face from its neutral state. These FAPs are based on the study of minimal perceptible actions and are closely related to muscle actions. The value of a particular FAP indicates the magnitude of the corresponding action. The 68 parameters are categorized into 10 groups, as shown in Table 1 of the appendix. Other than the first group, all groups relate to different parts of the face. The first group contains two high-level parameters (FAP 1 and FAP 2): visemes and expressions. A viseme is the visual counterpart of a phoneme; it describes the visually distinguishable speech posture involving the lips, teeth, and tongue. Some phonemes, such as “p” and “b,” are pronounced with very similar mouth postures, and therefore a single viseme can be related to more than one phoneme. Table 2 in the appendix shows the relation between visemes and their corresponding phonemes.
In order to allow the visualization of mouth movement produced by coarticulation, transitions from one viseme to the next are defined by blending the two visemes with a weighting factor that changes with time along some selected trajectory.
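By way of illustration, the following sketch shows one way such viseme blending could be realized; the vertex-displacement representation, the function name, and all numeric values are assumptions for illustration, not part of the MPEG-4 standard.

```python
# Illustrative sketch of viseme blending during coarticulation: each viseme is
# modeled here as a vector of mouth-region displacements, and the transition
# mixes two visemes with a time-varying weight.

def blend_visemes(viseme_a, viseme_b, w):
    """Blend two viseme displacement vectors with weight w in [0, 1].

    w = 0 yields pure viseme_a; w = 1 yields pure viseme_b.
    """
    return [(1.0 - w) * a + w * b for a, b in zip(viseme_a, viseme_b)]

# Example: move from one viseme toward the next along a linear trajectory,
# sampled at the video frame rate (all displacement values are made up).
viseme_pbm = [0.0, -0.8, 0.1]   # hypothetical displacements for /p, b, m/
viseme_fv  = [0.3, -0.2, 0.6]   # hypothetical displacements for /f, v/
frames = 5
for k in range(frames + 1):
    w = k / frames              # linear weighting; any trajectory could be used
    print(blend_visemes(viseme_pbm, viseme_fv, w))
```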
The expression parameter (FAP 2) defines six high-level facial expressions, such as joy, sadness, and anger. They are described in Table 3 of the appendix. The nine other FAP groups, which comprise FAP 3 through FAP 68, are low-level parameters, such as “move left mouth corner up.”
Each FAP (except FAP 1 and FAP 2) is defined in a unit, which can vary from one parameter to another. Unlike visemes and expressions, each low-level FAP characterizes only a single action. Therefore, a low-level action is completely defined by only two numbers: the FAP number and the amplitude of the action to apply. In the case of the high-level parameters, a third number, called FAPselect, is required to determine which viseme (for FAP 1) or which expression (for FAP 2) is to be applied.
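The encoding just described — two numbers for a low-level FAP, three for a high-level one — could be captured in a structure such as the following sketch; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for the FAP encoding described above: a low-level FAP
# is fully defined by its number and amplitude; FAP 1 (viseme) and FAP 2
# (expression) additionally carry a FAPselect index.
@dataclass
class FAP:
    number: int                   # 1..68, per the MPEG-4 FAP tables
    amplitude: int                # magnitude of the action, in the FAP's unit
    select: Optional[int] = None  # FAPselect; needed only for FAP 1 and FAP 2

    def __post_init__(self):
        if self.number in (1, 2) and self.select is None:
            raise ValueError("FAP 1 and FAP 2 require a FAPselect value")

low_level = FAP(number=12, amplitude=120)           # a hypothetical low-level action
expression = FAP(number=2, select=6, amplitude=40)  # a hypothetical expression
```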
For each frame, the receiver applies the deformations specified by all FAPs to the face model. Once all actions have been applied to the model, the face is rendered.
MPEG-4 allows the receiver to use a proprietary face model with its own animation rules. The encoder controls the animation of the face by sending FAPs, but it has no knowledge of the size and proportions of the head to be animated, or of any other characteristic of the decoding arrangement. The decoder, for its part, needs to interpret the FAP values in such a way that they produce reasonable deformations. Because the encoder is not aware of the decoder that will be employed, the MPEG-4 standard contemplates providing normalized FAP values in face animation parameter units (FAPU). The FAPU are computed from spatial distances between key facial features on the model in its neutral state: iris diameter, eye separation, eye-to-nose separation, mouth-to-nose separation, and mouth width.
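A minimal sketch of FAPU derivation appears below. The division by 1024 follows the MPEG-4 definition of the spatial FAPUs; the function and key names are illustrative, not taken from the standard.

```python
# Sketch of FAPU computation from a model's neutral-state key distances.
def compute_fapus(iris_diameter, eye_separation, eye_nose_separation,
                  mouth_nose_separation, mouth_width):
    """Return the five spatial FAPUs of a face model in its neutral state."""
    return {
        "IRISD": iris_diameter / 1024.0,          # iris diameter unit
        "ES":    eye_separation / 1024.0,         # eye separation unit
        "ENS":   eye_nose_separation / 1024.0,    # eye-to-nose separation unit
        "MNS":   mouth_nose_separation / 1024.0,  # mouth-to-nose separation unit
        "MW":    mouth_width / 1024.0,            # mouth width unit
    }

# Because FAP amplitudes are expressed in FAPUs, the same FAP stream produces
# proportionate deformations on faces of any size or shape.
fapu = compute_fapus(30.0, 180.0, 120.0, 60.0, 150.0)  # made-up model distances
displacement = 100 * fapu["MW"]  # a FAP value of 100 in mouth-width units
```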
Providing for this synchronization (between what is said and the desired facial expressions) on the encoder side is not trivial, and the problem certainly does not become easier when a TTS arrangement is contemplated. The reason is that, whereas faces are animated at a constant frame rate, the timing behavior of a TTS synthesizer on the decoder side is usually unknown. A very large number of commercial applications can be expected in which it will be desirable to drive the animation from text. Therefore, solving the synchronization problem is quite important.
An enhanced arrangement for a talking head driven by text is achieved by sending FAP information to a rendering arrangement in a manner that allows the rendering arrangement to employ the received FAPs in synchronism with the synthesized speech. In accordance with one embodiment, FAPs that correspond to visemes which can be developed from phonemes generated by a TTS synthesizer in the rendering arrangement are not included in the sent FAPs, allowing such FAPs to be generated locally. In a further enhancement, a process is included in the rendering arrangement for creating a smooth transition from one FAP specification to the next. This transition can follow any selected function. In accordance with one embodiment, a separate FAP value is evaluated for each of the rendered video frames.
One enhancement that is possible, when employing the arrangement described above, is to generate the viseme FAPs locally: since the TTS synthesizer produces the phonemes, a converter in the rendering arrangement can derive the corresponding viseme FAPs directly from those phonemes, in step with the synthesized speech.
As indicated above, the synchronization between the locally generated visemes and the speech is fairly good. The only significant variable that is unknown to FRM 110 is the delay between the time the phonemes are available and the time the speech signal is available. However, this delay can be measured and compensated for in the terminal. By comparison, the synchronization between the incoming FAP stream and the synthesized speech is much more problematic. MPEG-4 does not specify a standard for the operation of TTS equipment; it specifies only a TTS Interface (TTSI). Therefore, the precise characteristics of the TTS synthesizer that may be employed in the decoder, in particular its timing behavior, are not known in advance.
We have concluded that a better approach for ensuring synchronization between TTS synthesizer 120 and the output of FRM 110 is to communicate prosody and timing information to TTS synthesizer 120 along with the text, and in synchronism with it. In our experimental embodiment this is accomplished by sending the necessary FAP stream (i.e., the entire FAP stream, minus the viseme FAPs that are generated locally by converter 140) embedded in the TTS stream. The FAP information effectively forms bookmarks in the TTS ASCII stream that appears on line 10. The embedding is advantageously arranged so that a receiving end can easily cull out the FAP bookmarks from the incoming stream.
This enhanced arrangement is illustrated in the accompanying figure.
Illustratively, the syntax of the FAP bookmarks is <FAP # (FAPselect) FAPval FAPdur>, where # is a number that specifies the FAP, in accordance with Table 4 in the appendix. When # is a “1”, indicating that it represents a viseme, the FAPselect number selects the viseme from Table 2. When # is a “2”, indicating that it represents an expression, the FAPselect number selects the expression from Table 3. FAPval specifies the magnitude of the FAP action, and FAPdur specifies its duration.
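The following sketch shows how a receiving end might cull FAP bookmarks with this syntax out of the TTS text stream. The exact token grammar (whitespace, optional FAPselect, signed FAPval) and the dictionary layout are assumptions for illustration.

```python
import re

# Bookmarks of the form <FAP # (FAPselect) FAPval FAPdur>; the FAPselect group
# is optional because it is present only for FAP 1 and FAP 2.
BOOKMARK = re.compile(r"<FAP\s+(\d+)(?:\s+\((\d+)\))?\s+(-?\d+)\s+(\d+)>")

def cull_bookmarks(stream):
    """Split a bookmarked TTS stream into plain text and a list of bookmarks."""
    bookmarks = []
    for m in BOOKMARK.finditer(stream):
        number, select, val, dur = m.groups()
        bookmarks.append({
            "fap": int(number),                         # FAP number
            "select": int(select) if select else None,  # FAPselect, if present
            "val": int(val),                            # FAPval
            "dur": int(dur),                            # FAPdur
        })
    text = BOOKMARK.sub(" ", stream)
    return " ".join(text.split()), bookmarks

text, marks = cull_bookmarks("<FAP 2 (6) 40 1000> really? You don't say!")
# text  -> "really? You don't say!"
# marks -> [{'fap': 2, 'select': 6, 'val': 40, 'dur': 1000}]
```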
Simply applying a FAP of a constant value and removing it after a certain amount of time does not give a realistic face motion. Smoothly transitioning from one FAP specification to the next is much better. Accordingly, it is advantageous to include a transitioning schema in the rendering arrangement.
To reset the action, a FAP with FAPval equal to 0 may be applied.
While having a linear transition trajectory from one FAP to the next is much better than an abrupt change, we realized that any complex trajectory can be effected. This is achieved by specifying a FAP value for each frame, and a function that specifies the transition trajectory of the FAP from frame to frame. For example, when synthesizing a phrase such as “ . . . really? You don't say!” it is likely that an expression of surprise will be assigned to, or associated with, the word “really,” and will perhaps persist for some time after the next word or words are synthesized. Thus, this expression may need to last for a second or more, but the FAP that specifies surprise is specified only once by the source.
A trajectory for fading of the previous expression and for establishment of the “surprise” expression needs to be developed for the desired duration, recognizing that the next expression may be specified before the desired duration expires, or some time after it expires. Furthermore, for real-time systems it is advantageous if the current shape of the face can be computed from information available up to the present moment, without depending on information that becomes available only in the future or after significant delay. This requirement prevents us from using splines, where knowledge of future points is necessary in order to guarantee smooth transitions. Thus, the transition trajectory functions considered below depend only on information that is available when a FAP bookmark arrives.
We have identified a number of useful transition trajectory functions. They are:
f(t) = a_s + (a - a_s) t, (1)

f(t) = a_s + (1 - e^{-λt})(a - a_s), (2)

f(t) = a_s (2t^3 - 3t^2 + 1) + (-2t^3 + 3t^2) a + (t^3 - 2t^2 + t) g_s, (4)
with t ∈ [0, 1], where a_s is the FAP amplitude at the beginning of the transition (at t = 0), a is the target FAP amplitude, λ is a control parameter, and g_s is the gradient of f at t = 0. If the transition time T ≠ 1, the time axis of the functions needs to be scaled. Since these functions depend only on a_s, a, λ, g_s, and T, they are completely determined as soon as the FAP bookmark is known.
The most important criterion for selecting a transition trajectory function is the resulting quality of the animation. Experimental results suggest that with linear interpolation (equation (1)) and with equation (2), sharp transitions appear in the combined transition trajectory, which does not yield a realistic rendering for some facial motions. Equations (3) and (4) yield better results. On balance, we have concluded that the function of equation (4) gives the best results in terms of realistic behavior and shape prediction. This function enables one to match the tangent at the beginning of a segment with the tangent at the end of the previous segment, so that a smooth curve can be guaranteed. The computation of this function requires four parameters as input: the value of the first point of the curve (startVal), its tangent (startTan), the value to be reached at the end of the curve (equal to FAPval), and the tangent at that end point.
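For concreteness, the trajectory functions of equations (1), (2), and (4) can be written as straightforward Python, as in the sketch below; the function names are ours, and for a transition time T ≠ 1 the caller would evaluate at t/T.

```python
import math

# Transition trajectory functions: a_s is the starting amplitude, a the target
# amplitude, lam the control parameter, g_s the starting gradient; t in [0, 1].

def linear(t, a_s, a):                 # equation (1)
    return a_s + (a - a_s) * t

def exponential(t, a_s, a, lam):       # equation (2)
    return a_s + (1.0 - math.exp(-lam * t)) * (a - a_s)

def hermite(t, a_s, a, g_s):           # equation (4)
    # Cubic Hermite segment: value a_s and tangent g_s at t = 0; value a and
    # zero tangent at t = 1, so consecutive segments can join smoothly.
    return (a_s * (2 * t**3 - 3 * t**2 + 1)
            + a * (-2 * t**3 + 3 * t**2)
            + g_s * (t**3 - 2 * t**2 + t))
```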
For each FAP #, the first curve (due to FAP # bookmark_0) has a starting value of 0 (startVal_0 = 0) and a starting tangent of 0 (startTan_0 = 0). The values of startVal_i and startTan_i for i > 0 depend on t_{i-1,i}, which is the time elapsed between FAP # bookmark_{i-1} and FAP # bookmark_i. Thus, in accordance with one acceptable schema,
If t_{i-1,i} > FAPdur_{i-1}, then:

startVal_i = FAPval_{i-1}
startTan_i = 0

and the resulting amplitude of the FAP to be sent to the renderer is computed with equation (5):

FAPAmp_i(t) = startVal_i (2t^3 - 3t^2 + 1) + (-2t^3 + 3t^2) FAPval_i + (t^3 - 2t^2 + t) startTan_i. (5)
FAPdur_i is used to relocate and scale the time parameter t from [0, 1] to [t_i, t_i + FAPdur_i], with t_i being the instant when the word following FAP # bookmark_i in the text is pronounced. Equation (6) gives the exact rendering time:

Rendering time for FAPAmp_i(t) = t_i + t · FAPdur_i. (6)
If t_{i-1,i} < FAPdur_{i-1}, then:

startVal_i = FAPAmp_{i-1}(t_{i-1,i} / FAPdur_{i-1})
startTan_i = tan_{i-1}(t_{i-1,i} / FAPdur_{i-1})
where tan_i(t), the tangent of the equation (5) curve, is computed with equation (7):

tan_i(t) = startVal_i (6t^2 - 6t) + (-6t^2 + 6t) FAPval_i + (3t^2 - 4t + 1) startTan_i, (7)
and the resulting amplitude of the FAP is again computed with equation (5).
Thus, even if the user does not properly estimate the duration of each bookmark, the equation (4) function, more than any other function investigated, yields the smoothest overall resulting curve.
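By way of illustration, this chaining schema might be realized as in the following sketch. The function names and the dictionary layout are ours; the tangent function is the derivative of equation (5), i.e., equation (7).

```python
# Each bookmark i opens a curve computed with equation (5); its start value and
# tangent come either from the previous bookmark's final state (previous action
# completed) or from the previous curve sampled mid-flight (action interrupted).

def fap_amp(t, start_val, fap_val, start_tan):   # equation (5)
    return (start_val * (2 * t**3 - 3 * t**2 + 1)
            + fap_val * (-2 * t**3 + 3 * t**2)
            + start_tan * (t**3 - 2 * t**2 + t))

def fap_tan(t, start_val, fap_val, start_tan):   # equation (7), d/dt of (5)
    return (start_val * (6 * t**2 - 6 * t)
            + fap_val * (-6 * t**2 + 6 * t)
            + start_tan * (3 * t**2 - 4 * t + 1))

def next_segment(prev, elapsed):
    """Start value and tangent for bookmark i, given bookmark i-1.

    prev holds start_val, start_tan, fap_val, and dur for bookmark i-1;
    elapsed is t_{i-1,i}, the time between the two bookmarks.
    """
    if elapsed > prev["dur"]:
        # Previous action ran to completion: start from its target, tangent 0.
        return prev["fap_val"], 0.0
    # Previous action interrupted: sample its curve and tangent where it stands.
    u = elapsed / prev["dur"]
    args = (prev["start_val"], prev["fap_val"], prev["start_tan"])
    return fap_amp(u, *args), fap_tan(u, *args)
```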
The above disclosed a number of principles and presented an illustrative embodiment. It should be understood, however, that skilled artisans can make various modifications without departing from the spirit and scope of this invention. For example, while the functions described by equations (1) through (4) are monotonic, there is no reason why an expression must be monotonic from its beginning to its end. One can imagine, for example, that a person might start a smile, freeze it for a moment, and then proceed with a broad smile. Alternatively, one might conclude that a smile held longer than a certain time will appear stale, and would want the synthesized smile to reach a peak and then relax somewhat. It is also possible to define a triangle function in order to easily describe motions like an eye blink. Any such modulation can be effected by employing other functions, or by dividing the duration into segments and applying different functions, or different target magnitudes, in the different segments.
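As one example of such segment-wise modulation, a triangle profile for an eye-blink FAP could be defined as in the following sketch; this is illustrative only and not a construct defined by the source.

```python
# Triangle profile: the amplitude rises linearly to its peak at mid-duration
# and falls back to zero, which suits a brief motion such as an eye blink.
def triangle(t):
    """Blink profile over normalized time t in [0, 1]; peak at t = 0.5."""
    return 2.0 * t if t < 0.5 else 2.0 * (1.0 - t)
```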
Table 2 (excerpt): visemes and their corresponding phonemes.

Viseme | Phonemes | Example words
---|---|---
1 | p, b, m | put, bed, mill
2 | f, v | far, voice
3 | T, D | think, that
4 | t, d | tip, doll
5 | k, g | call, gas
6 | tS, dZ, S | chair, join, she
7 | s, z | sir, zeal
8 | n, l | lot, not
9 | r | red
This invention claims the benefit of provisional application No. 60/082,393, filed Apr. 20, 1998, titled “FAP Definition Syntax for TTS Input,” and of provisional application No. 60/073,185, filed Jan. 30, 1998, titled “Advanced TTS For Facial Animation.”