Claims
- 1. A method for automated computerized audio visual dubbing of a movie, comprising:
(a) generating a three-dimensional head model of an actor in a movie for at least one frame in said movie, wherein said actor head model is representative of specific facial features of said actor in said frame; (b) generating a three-dimensional head model of a dubber making target sounds for said at least one frame in said movie, wherein said dubber head model is representative of specific facial features of said dubber, as said target sounds are made; and (c) modifying at least a portion of said specific facial features of said actor head model according to said dubber head model such that said actor appears to be producing said target sounds made by said dubber.
- 2. A method according to claim 1, further comprising the step of modifying the face of the actor in said frame according to said actor head model as modified in step (c) so as to obtain a frame wherein at least a portion of the specific facial features of the actor correspond to specific facial features of said dubber and replacing target sounds made by said actor with target sounds made by said dubber.
- 3. A method according to claim 1, wherein said specific facial features comprise the lips and mouth area.
- 4. A method according to claim 3, wherein said specific facial features further comprise secondary facial muscles including the cheeks and eyebrows.
- 5. A method according to claim 1, wherein said at least one frame comprises multiple frames of a movie.
- 6. A method according to claim 1, wherein said movie is one of a photographed movie, an animated movie, or any combination thereof.
- 7. A method according to claim 1, wherein steps (a) and (b) of generating the three-dimensional head models are accomplished by computer fitting a generic three-dimensional head model to the actor's or dubber's picture using significant facial features of said actor or dubber.
- 8. A method according to claim 7, further comprising computer tracking specific facial features of said three-dimensional head model of said actor or said dubber through a plurality of frames in said movie so as to create a library of reference similarity frames.
- 9. A method according to claim 8, further comprising mapping, for a plurality of frames in the movie, the face of the actor to the three-dimensional head model of said actor wherein said mapping employs a computerized texture mapping technique that uses said reference similarity frames.
- 10. A method according to claim 9, wherein the step of modifying at least a portion of said three-dimensional head model of said actor comprises replacing, on a frame by frame basis, at least a portion of specific facial features of the actor head model with those of the dubber head model.
- 11. A method according to claim 1, wherein the actor is one of a human, an animal, or any object made to appear to be speaking.
- 12. A method according to claim 1, wherein the dubber is one of a human, an animal, or any object made to appear to be speaking.
- 13. A method according to claim 1, wherein the target sounds comprise at least one spoken word, or at least one sound.
- 14. A method for automated computerized audio visual dubbing of a movie, comprising:
(a) providing a three-dimensional graphical model of a head; (b) providing a video sequence of a speaker having at least one frame; (c) tracking features of said speaker through said video sequence, and extracting changing head parameters of said speaker; (d) applying said changing head parameters to said three-dimensional graphical head model so as to create a “speaking” three-dimensional head model; and (e) generating a video sequence, having at least one frame, of said speaking three-dimensional head model using computer graphic methods.
- 15. A method according to claim 14, wherein the head parameters comprise parameters of the lips and mouth area.
- 16. A method according to claim 15, wherein the head parameters further comprise parameters of secondary facial muscles including the cheeks and the eyebrows.
- 17. A method according to claim 14 wherein a sound track is added to said video sequence of said three-dimensional head model.
- 18. A method according to claim 14, wherein the speaker is one of a human, an animal, or any object made to appear to be speaking.
- 19. A system for automated computerized audio visual dubbing of a movie, comprising:
(a) means for providing a three-dimensional graphical model of a head; (b) means for tracking features of a speaker through a video sequence, and for extracting changing head parameters of said speaker through said video sequence; (c) means for applying said changing head parameters to said three-dimensional graphical head model so as to create a “speaking” three-dimensional head model and for generating a video sequence of said speaking three-dimensional head model using computer graphic methods; and (d) means for adding a sound track to said video sequence of said three-dimensional head model.
- 20. A system for automated computerized audio visual dubbing of a movie, comprising:
(a) means for generating a three-dimensional head model of an actor in a movie for at least one frame in said movie, wherein said actor head model is representative of specific facial features of said actor in said frame; (b) means for generating a three-dimensional head model of a dubber making target sounds for said at least one frame in said movie, wherein said dubber head model is representative of specific facial features of said dubber, as said target sounds are made; and (c) means for modifying at least a portion of said specific facial features of said actor head model according to said dubber head model such that said actor appears to be producing said target sounds made by said dubber.
- 21. A method for automated computerized audio visual dubbing of a movie, comprising the steps of:
(a) selecting from the movie a frame having a picture of the head of an actor; (b) marking on the actor's head a number of significant feature points and measuring their locations in the selected frame; (c) fitting a generic three-dimensional head model to the actor's two-dimensional head picture by adapting the data of the significant feature points, as measured in step (b), to their corresponding locations in the model;
(d) (d1) tracking parameters of the actor fitted three-dimensional head model throughout a substantial part of the movie frame-by-frame, iteratively in an automated computerized manner, and (d2) creating a library of reference similarity frames; (e) taking a dubber's movie of a dubber wherein the dubber speaks a target text; (f) repeating steps (a), (b), (c), and (d1) with the dubber's movie, whereby a dubber fitted three-dimensional head model is obtained and the parameters of the dubber fitted three-dimensional head model are tracked throughout a substantial part of the dubber's movie; (g) for each of the parameters, normalizing minimum and maximum values of the dubber fitted three-dimensional head model to minimum and maximum values of the actor fitted three-dimensional head model, respectively; (h) for each frame in the movie where the actor needs to be dubbed, texture mapping a two-dimensional picture of the actor's face appearing in said frame onto the actor fitted three-dimensional head model, making use of the reference similarity frames; and (i) changing the actor fitted and texture mapped three-dimensional model obtained in step (h) by replacing mouth parameters of the two-dimensional picture of the actor's face with mouth parameters among the parameters tracked in step (d1) for the dubber's movie, thereby obtaining the parametric description for a dubbed picture, with identical values to the two-dimensional picture of the actor's face, except that a mouth status of the actor resembles a mouth status of the dubber.
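Step (c) of claim 21 fits a generic three-dimensional head model to measured two-dimensional feature-point locations. As a simplifying illustration, the fit below is a scale-plus-translation alignment of the model's projected landmarks to the marked points, solved in closed form by least squares; the claim itself does not prescribe this particular fitting method, and the function and point names are illustrative assumptions.

```python
# Sketch of claim 21, step (c): align a generic model's projected 2-D
# landmarks to marked feature points via least-squares scale + translation.
# This is a deliberately reduced fit; a full implementation would also
# estimate rotation and per-feature model deformation.

def fit_scale_translation(model_pts, image_pts):
    """Find scale s and translation (tx, ty) minimizing
    sum ||s * m + t - p||^2 over corresponding landmark pairs."""
    n = len(model_pts)
    mx = sum(x for x, _ in model_pts) / n
    my = sum(y for _, y in model_pts) / n
    px = sum(x for x, _ in image_pts) / n
    py = sum(y for _, y in image_pts) / n
    # Cross-covariance of model with image points, over model variance.
    num = sum((x - mx) * (u - px) + (y - my) * (v - py)
              for (x, y), (u, v) in zip(model_pts, image_pts))
    den = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in model_pts)
    s = num / den
    return s, (px - s * mx, py - s * my)

# Example: hypothetical image points equal to the model landmarks
# scaled by 2 and shifted by (5, 3).
model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
image = [(5.0, 3.0), (7.0, 3.0), (5.0, 5.0), (7.0, 5.0)]
s, (tx, ty) = fit_scale_translation(model, image)
```

The closed form follows from setting the gradient of the squared error to zero: scale is the cross-covariance divided by the model variance, and translation matches the centroids after scaling.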
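Steps (g) and (i) of claim 21 can be sketched as data transformations: each dubber parameter track is linearly remapped onto the actor's observed range, and then only the mouth parameters of each actor frame are overwritten. The parameter names and the dictionary-of-tracks layout below are illustrative assumptions, not the patent's actual data format.

```python
# Sketch of claim 21, steps (g) and (i): normalize the dubber's tracked
# parameters to the actor's min/max range, then substitute only the mouth
# parameters into the actor's per-frame parameter set.

MOUTH_PARAMS = {"jaw_open", "lip_width"}  # hypothetical parameter names

def normalize(dubber_tracks, actor_tracks):
    """Linearly map each dubber parameter track onto the actor's
    observed [min, max] range for the same parameter (step g)."""
    normalized = {}
    for name, values in dubber_tracks.items():
        d_min, d_max = min(values), max(values)
        a_min, a_max = min(actor_tracks[name]), max(actor_tracks[name])
        span = (d_max - d_min) or 1.0  # guard against flat tracks
        normalized[name] = [
            a_min + (v - d_min) * (a_max - a_min) / span for v in values
        ]
    return normalized

def dub_frame(actor_frame, dubber_frame):
    """Replace only the mouth parameters of the actor's frame with the
    dubber's (step i); all other head parameters keep the actor's values."""
    return {
        name: (dubber_frame[name] if name in MOUTH_PARAMS else value)
        for name, value in actor_frame.items()
    }

# Example: two hypothetical parameter tracks over three frames.
actor = {"jaw_open": [0.0, 0.4, 0.2], "head_yaw": [0.1, 0.1, 0.2]}
dubber = {"jaw_open": [2.0, 6.0, 4.0], "head_yaw": [0.5, 0.5, 0.5]}

norm = normalize(dubber, actor)
frame0 = dub_frame(
    {name: track[0] for name, track in actor.items()},
    {name: track[0] for name, track in norm.items()},
)
```

The normalization keeps the dubber's mouth motion within the actor's natural articulation range, which is the stated purpose of step (g); everything outside `MOUTH_PARAMS` (here, head pose) remains the actor's own.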
Priority Claims (1)
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 115552 | Oct 1995 | IL | |
Parent Case Info
[0001] This application is a continuation-in-part of application Ser. No. 09/051,417, filed Apr. 7, 1998, which is now pending.
Continuation in Parts (1)
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09051417 | Jul 1998 | US |
| Child | 10279097 | Oct 2002 | US |