Claims
- 1. A method for generating an output image associated with a point in time between a first image and a second image, comprising:
determining a motion vector for each pixel in an image at a map time between the first image and the second image, wherein the map time is different from the point in time of the output image, wherein the motion vector describes motion of a pixel of the image at the map time to a first point in the first image and a second point in the second image; calculating a factor that represents the point in time between the first image and the second image at which the output image occurs; warping the first image according to the determined motion vectors and the factor; warping the second image according to the determined motion vectors and the factor; and blending the warped first image and the warped second image according to the factor to obtain the output image.
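The factor, warp, and blend steps recited in claim 1 can be illustrated with a minimal NumPy sketch. All function and variable names here are assumptions, and the warp shown is a simple rounded forward displacement per pixel rather than the triangle-based warp detailed in claim 6:

```python
import numpy as np

def interpolation_factor(t_out, t_first, t_second):
    """Fraction of the way from the first image to the second at which
    the output image occurs (0.0 at the first image, 1.0 at the second)."""
    return (t_out - t_first) / (t_second - t_first)

def warp_by_vectors(image, vectors, scale):
    """Forward-warp: move each pixel along its motion vector, scaled.

    `vectors` has shape (H, W, 2) holding (dy, dx) per pixel; pixels
    that land outside the frame contribute nothing (compare claim 38).
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    ty = np.round(ys + scale * vectors[..., 0]).astype(int)
    tx = np.round(xs + scale * vectors[..., 1]).astype(int)
    valid = (ty >= 0) & (ty < h) & (tx >= 0) & (tx < w)
    out[ty[valid], tx[valid]] = image[ys[valid], xs[valid]]
    return out

def blend(warped_first, warped_second, factor):
    """Cross-dissolve the two warped images according to the factor."""
    return (1.0 - factor) * warped_first + factor * warped_second
```

With a factor of 0 the result is the warped first image; with a factor of 1, the warped second image.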
- 2. The method of claim 1, wherein the first image is in a first sequence of images and the second image is in a second sequence of images such that the first image is not contiguous with the second image in a sequence of images.
- 3. The method of claim 2, wherein the first sequence has associated audio and the second sequence has associated audio, the method further comprising:
dissolving the audio associated with the first sequence to the audio associated with the second sequence.
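An audio dissolve of the kind recited can be sketched as an equal-gain linear crossfade. The fade law and names are assumptions; the claim does not mandate a particular dissolve shape:

```python
import numpy as np

def dissolve_audio(a, b, n_overlap):
    """Crossfade: fade the tail of `a` out while fading the head of `b`
    in over `n_overlap` samples, using equal-gain linear ramps."""
    ramp = np.linspace(0.0, 1.0, n_overlap)
    cross = a[-n_overlap:] * (1.0 - ramp) + b[:n_overlap] * ramp
    return np.concatenate([a[:-n_overlap], cross, b[n_overlap:]])
```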
- 4. The method of claim 1, wherein the first sequence has associated audio and the second sequence has associated audio, the method further comprising:
dissolving the audio associated with the first sequence to the audio associated with the second sequence.
- 5. The method of claim 1, wherein the output image and the first and second images are an output sequence of images with a duration at playback different from a duration of an input sequence of images containing the first and second images at playback, and wherein the input sequence of images has associated audio with a duration, the method further comprising:
adjusting the duration of the audio to match the duration of the output sequence of images.
- 6. The method of claim 1, wherein warping the first image and the second image and blending the warped images comprises:
determining a primary transform for each triangle in a set of triangles, defined in an image at the map time, from the map time to the output time using the determined motion vectors; for each triangle, identifying any pixels in the output image that are contained within the triangle using the primary transform; determining a first transform for each triangle in the set of triangles from the output time to a time of the first image; for each pixel in each triangle at the output time, identifying a point in the first image using the first transform and spatially sampling the first image around the point; determining a second transform for each triangle in the set of triangles from the output time to a time of the second image; for each pixel in each triangle at the output time, identifying a point in the second image using the second transform and spatially sampling the second image around the point; and for each pixel in each triangle at the output time, combining the spatially sampled first image and the spatially sampled second image to obtain a value for the pixel in the output image.
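The "identifying any pixels in the output image that are contained within the triangle" step can be sketched with barycentric coordinates. This is one conventional point-in-triangle test; the claim does not fix the method, and the names are assumptions:

```python
import numpy as np

def pixels_in_triangle(tri, shape):
    """Return (x, y) integer pixel coordinates inside a triangle.

    `tri` holds three (x, y) vertices; `shape` is (rows, cols) of the
    output image. Barycentric coordinates l0, l1, l2 are all >= 0 for
    points inside or on the triangle, regardless of vertex winding.
    """
    (x0, y0), (x1, y1), (x2, y2) = tri
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    det = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    l0 = ((y1 - y2) * (xs - x2) + (x2 - x1) * (ys - y2)) / det
    l1 = ((y2 - y0) * (xs - x2) + (x0 - x2) * (ys - y2)) / det
    l2 = 1.0 - l0 - l1
    inside = (l0 >= 0) & (l1 >= 0) & (l2 >= 0)
    return np.column_stack([xs[inside], ys[inside]])
```

Each returned pixel would then be mapped through the first and second transforms to find its sampling points in the two source images.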
- 7. A method for generating a plurality of output images, wherein each output image is associated with a different point in time between a first image and a second image, the method comprising:
determining a motion vector for each pixel in an image at a map time between the first image and the second image, wherein the motion vector describes motion of a pixel of the image at the map time to a first point in the first image and a second point in the second image; for each output image, calculating a factor that represents the point in time between the first image and the second image at which the output image occurs; for each output image, warping the first image according to the determined motion vectors and the factor for the output image; for each output image, warping the second image according to the determined motion vectors and the factor for the output image; and for each output image, blending the warped first image and the warped second image according to the factor for the output image.
- 8. The method of claim 7, wherein the first image is in a first sequence of images and the second image is in a second sequence of images such that the first image is not contiguous with the second image in a sequence of images.
- 9. The method of claim 8, wherein the first sequence has associated audio and the second sequence has associated audio, the method further comprising:
dissolving the audio associated with the first sequence to the audio associated with the second sequence.
- 10. The method of claim 7, wherein the first sequence has associated audio and the second sequence has associated audio, the method further comprising:
dissolving the audio associated with the first sequence to the audio associated with the second sequence.
- 11. The method of claim 7, wherein an output sequence of images includes the plurality of output images and has a duration at playback different from a duration of an input sequence of images containing the first and second images at playback, and wherein the input sequence of images has associated audio with a duration, the method further comprising:
adjusting the duration of the audio to match the duration of the output sequence of images.
- 12. The method of claim 7, wherein warping the first image and the second image and blending the warped images comprises:
determining a primary transform for each triangle in a set of triangles, defined in an image at the map time, from the map time to the output time using the determined motion vectors; for each triangle, identifying any pixels in the output image that are contained within the triangle using the primary transform; determining a first transform for each triangle in the set of triangles from the output time to a time of the first image; for each pixel in each triangle at the output time, identifying a point in the first image using the first transform and spatially sampling the first image around the point; determining a second transform for each triangle in the set of triangles from the output time to a time of the second image; for each pixel in each triangle at the output time, identifying a point in the second image using the second transform and spatially sampling the second image around the point; and for each pixel in each triangle at the output time, combining the spatially sampled first image and the spatially sampled second image to obtain a value for the pixel in the output image.
- 13. A method for generating a plurality of output images, wherein each output image is associated with a different point in time between a first image of a first sequence of one or more images and a second image of a second sequence of one or more images, the method comprising:
for each output image, selecting a pair of a first image from the first sequence and a second image from the second sequence; for each selected pair of first and second images, determining a motion vector for each pixel in an image at a map time between the first image and the second image, wherein the motion vector describes motion of a pixel of the image at the map time to a first point in the first image and a second point in the second image; for each output image, calculating a factor that represents the point in time, between the first and second images selected for the output image, at which the output image occurs; for each output image, warping the first image selected for the output image according to the factor for the output image and the motion vectors determined for the first and second images selected for the output image; for each output image, warping the second image selected for the output image according to the factor for the output image and the motion vectors determined for the first and second images selected for the output image; and for each output image, blending the warped first image and the warped second image according to the factor for the output image.
- 14. The method of claim 13, wherein the first sequence has associated audio and the second sequence has associated audio, the method further comprising:
dissolving the audio associated with the first sequence to the audio associated with the second sequence.
- 15. The method of claim 13, wherein an output sequence of images includes the plurality of output images and has a duration at playback different from a duration of an input sequence of images containing the first and second images at playback, and wherein the input sequence of images has associated audio with a duration, the method further comprising:
adjusting the duration of the audio to match the duration of the output sequence of images.
- 16. The method of claim 13, wherein warping the first image and the second image and blending the warped images comprises:
determining a primary transform for each triangle in a set of triangles, defined in an image at the map time, from the map time to the output time using the determined motion vectors; for each triangle, identifying any pixels in the output image that are contained within the triangle using the primary transform; determining a first transform for each triangle in the set of triangles from the output time to a time of the first image; for each pixel in each triangle at the output time, identifying a point in the first image using the first transform and spatially sampling the first image around the point; determining a second transform for each triangle in the set of triangles from the output time to a time of the second image; for each pixel in each triangle at the output time, identifying a point in the second image using the second transform and spatially sampling the second image around the point; and for each pixel in each triangle at the output time, combining the spatially sampled first image and the spatially sampled second image to obtain a value for the pixel in the output image.
- 17. A method for generating a transition of a plurality of output images from a first sequence of images to a second sequence of images wherein an image at an end of the first sequence is not contiguous with an image at a beginning of the second sequence, the method comprising:
for each output image, selecting a pair of a first image from the first sequence and a second image from the second sequence such that the output image has a point in time between the first image and the second image in the transition; for each selected pair of first and second images, determining a set of motion vectors that describes motion between the first and second images; for each output image, calculating a factor that represents the point in time, between the first and second images selected for the output image, at which the output image occurs; for each output image, performing motion compensated interpolation to generate the output image according to the determined set of motion vectors and the calculated factor.
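The pair-selection and factor steps of claim 17 can be sketched as follows. The bracketing rule (latest first-sequence image at or before the output time, earliest second-sequence image at or after it) is an assumption, since the claim does not specify how pairs are chosen:

```python
def transition_plan(first_times, second_times, out_times):
    """For each output time, pick a bracketing pair of source image
    times and the interpolation factor between them."""
    plan = []
    for t in out_times:
        t1 = max(x for x in first_times if x <= t)   # from first sequence
        t2 = min(x for x in second_times if x >= t)  # from second sequence
        factor = (t - t1) / (t2 - t1) if t2 != t1 else 0.0
        plan.append((t1, t2, factor))
    return plan
```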
- 18. The method of claim 17, wherein the first sequence has associated audio and the second sequence has associated audio, the method further comprising:
dissolving the audio associated with the first sequence to the audio associated with the second sequence.
- 19. The method of claim 17, wherein a combination of the output images and the first and second images provides an output sequence of images with a duration at playback different from a duration of an input sequence of images containing the first and second images at playback, and wherein the input sequence of images has associated audio with a duration, the method further comprising:
adjusting the duration of the audio to match the duration of the output sequence of images.
- 20. A method for processing a jump cut between a first image at an end of a first segment of a sequence of images and corresponding audio and a second image at a beginning of a second segment in the sequence of images and corresponding audio, comprising:
processing the corresponding audio to identify an audio break between the audio corresponding to the first segment and the audio corresponding to the second segment; determining a set of motion vectors that describes motion between the first and second images; and performing motion compensated interpolation to generate one or more images between the first image and the second image according to the determined set of motion vectors at a point in time corresponding to the audio break.
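The "identify an audio break" step can be sketched as a minimum-energy search, one simple heuristic among many; the claim does not specify the detection method, and the frame size and threshold-free argmin rule here are assumptions:

```python
import numpy as np

def find_audio_break(samples, frame=256):
    """Return the sample index of the start of the quietest fixed-size
    frame, taken as the break between the two audio segments."""
    n = len(samples) // frame
    energies = [float(np.mean(samples[i * frame:(i + 1) * frame] ** 2))
                for i in range(n)]
    return int(np.argmin(energies)) * frame
```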
- 21. The method of claim 20, further comprising:
dissolving the audio associated with the first sequence to the audio associated with the second sequence around the audio break.
- 22. The method of claim 21, wherein determining a set of motion vectors comprises determining a motion vector for each pixel in an image at a map time between the first image and the second image, wherein the map time is different from the point in time of the output image, wherein the motion vector describes motion of a pixel of the image at the map time to a first point in the first image and a second point in the second image, and wherein performing motion compensated interpolation comprises:
calculating a factor that represents the point in time between the first image and the second image at which the output image occurs; warping the first image according to the determined motion vectors and the factor; warping the second image according to the determined motion vectors and the factor; and blending the warped first image and the warped second image according to the factor to obtain the output image.
- 23. The method of claim 21, wherein determining a set of motion vectors comprises determining a motion vector for each pixel in an image at a map time between the first image and the second image, wherein the motion vector describes motion of a pixel of the image at the map time to a first point in the first image and a second point in the second image, and wherein performing motion compensated interpolation comprises:
for each output image, calculating a factor that represents the point in time between the first image and the second image at which the output image occurs; for each output image, warping the first image according to the determined motion vectors and the factor for the output image; for each output image, warping the second image according to the determined motion vectors and the factor for the output image; and for each output image, blending the warped first image and the warped second image according to the factor for the output image.
- 24. A method for warping a first image and a second image to obtain an output image at an output time between the first image and the second image, comprising:
determining a set of motion vectors at a map time that describes motion between the first and second images; determining a primary transform for each triangle in a set of triangles, defined in an image at the map time, from the map time to the output time using the determined set of motion vectors; for each triangle, identifying any pixels in the output image that are contained within the triangle using the primary transform; determining a first transform for each triangle in the set of triangles from the output time to a time of the first image; for each pixel in each triangle at the output time, identifying a point in the first image using the first transform and spatially sampling the first image around the point; determining a second transform for each triangle in the set of triangles from the output time to a time of the second image; for each pixel in each triangle at the output time, identifying a point in the second image using the second transform and spatially sampling the second image around the point; and for each pixel in each triangle at the output time, combining the spatially sampled first image and the spatially sampled second image to obtain a value for the pixel in the output image.
- 25. The method of claim 24, wherein the map time is between the first image and the second image.
- 26. The method of claim 24, wherein the map time is different from the output time.
- 27. A method for warping a first image and a second image to obtain an output image at an output time between the first image and the second image, comprising:
determining a set of motion vectors at a map time that describes motion between the first and second images; determining a primary transform for each triangle in a set of triangles, defined in an image at the map time, from the map time to the output time using the determined set of motion vectors; for each triangle, identifying any pixels in the output image that are contained within the triangle at the output time using the primary transform; for each pixel in each triangle at the output time, spatially sampling the first image and the second image at points corresponding to the pixel and combining the spatially sampled first image and the spatially sampled second image to obtain a value for the pixel in the output image.
- 28. The method of claim 27, wherein the map time is between the first image and the second image.
- 29. The method of claim 27, wherein the map time is different from the output time.
- 30. A method for changing duration of an input sequence of images with associated audio, wherein the input sequence of images and associated audio has a duration, comprising:
receiving an indication of a selection of an operation by an operator indicative of a desired duration of an output sequence of images, and, in response to the received indication:
selecting a first image and a second image in the sequence of images; determining a set of motion vectors that describes motion between the first and second images; performing motion compensated interpolation to generate one or more images between the first image and the second image according to the determined set of motion vectors; repeating the selecting, determining and performing steps for multiple pairs of first and second images in the sequence of images to provide the output sequence of images; and adjusting the duration of the associated audio to retain synchronization with the output sequence of images.
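The resampling adjustment of claim 32 can be sketched with linear interpolation. A production resampler would use a band-limited filter; this minimal version (names assumed) only illustrates the duration change:

```python
import numpy as np

def resample_audio(samples, out_duration, in_duration):
    """Stretch or squeeze audio to a new duration by linear-interpolation
    resampling. Note this shifts pitch; time scaling (claim 33) would
    change duration while preserving pitch."""
    n_out = int(round(len(samples) * out_duration / in_duration))
    src_pos = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(src_pos, np.arange(len(samples)), samples)
```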
- 31. The method of claim 30, further comprising:
playing back the output sequence of images with the audio.
- 32. The method of claim 30, wherein adjusting comprises resampling of the audio.
- 33. The method of claim 30, wherein adjusting comprises time scaling of the audio.
- 34. A method for performing color correction, comprising:
generating a first color histogram from a first image from a first sequence of images; generating a second color histogram from a second image from a second sequence of images; determining a set of motion vectors from the first and second color histograms that describes motion between the first color histogram and the second color histogram; generating a table of color correction values from the set of motion vectors; and applying the table of color correction values to a sequence of images.
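For illustration only: the claim derives the correction table from motion vectors between the two histograms, which is not sketched here; the related, conventional technique of cumulative-histogram (CDF) matching stands in for it below, producing the same kind of per-level lookup table:

```python
import numpy as np

def correction_lut(src_hist, ref_hist):
    """Build a 256-entry lookup table mapping source levels to reference
    levels by matching cumulative histograms (classic histogram
    matching, used as a stand-in for the motion-vector formulation)."""
    src_cdf = np.cumsum(src_hist) / np.sum(src_hist)
    ref_cdf = np.cumsum(ref_hist) / np.sum(ref_hist)
    return np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)

def apply_lut(image, lut):
    """Apply the correction table to every pixel of an 8-bit image."""
    return lut[image]
```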
- 35. A method for reducing artifacts in an image created using motion compensated interpolation of a first image and a second image, comprising:
determining a set of motion vectors that describes motion between the first and second images; identifying a foreground region and a background region in the first and second images; performing tracking on at least one of the foreground region and the background region to determine a motion model for the tracked region; changing the set of motion vectors corresponding to the tracked region according to the motion model for the tracked region; and performing motion compensated interpolation to generate one or more images between the first image and the second image according to the changed set of motion vectors.
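The "motion model for the tracked region" can be, for example, an affine model fit to the region's motion vectors by least squares; the model choice and names are assumptions, as the claim does not name a model. Replacing the region's per-pixel vectors with the model's predictions suppresses spurious outlier vectors:

```python
import numpy as np

def fit_affine_motion(points, vectors):
    """Least-squares affine motion model v = A @ [x, y, 1] for a
    tracked region; `points` is (N, 2), `vectors` is (N, 2)."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    a, _, _, _ = np.linalg.lstsq(pts_h, vectors, rcond=None)
    return a.T  # shape (2, 3)

def model_vectors(a, points):
    """Evaluate the fitted model at the given points."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    return pts_h @ a.T
```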
- 36. The method of claim 35, wherein performing motion compensated interpolation comprises:
determining a combination map using the changed set of motion vectors to indicate which pixels of the first and second images are used to contribute to a pixel in an output image.
- 37. The method of claim 1, wherein determining motion vectors comprises processing the first and second images to remove invalid image data.
- 38. The method of claim 1, wherein warping comprises:
identifying any motion vector that transforms a point in the output image to an area outside of one of the first and second images; and providing no contribution to the output image for the identified motion vector from one of the first and second images.
- 39. The method of claim 1, wherein blending comprises initializing an output image to a blend of the first and second images according to the determined factor.
- 40. A method for processing two fields of interlaced video, comprising:
computing motion vectors describing motion of image characteristics from a field to another field of opposite sense; removing from the motion vectors an offset corresponding to one half of a line and having a sign according to an orientation of the y-axis of the image space and which field includes the top line; using the motion vectors to generate a sampling region at a desired output time; transforming the sampling region using the motion vectors to a sample time at one of the fields; transforming the sampling region using the motion vectors to a sample time at the other of the fields; determining a field sense of an output field to be generated at the desired output time; translating the transformed sampling region for the field with a field sense opposite the field sense of the output field by an offset of one half of a line and having a sign determined by an orientation of the y-axis of the image space and which field includes the top line; and generating the output field using the transformed and translated sampling regions and the input fields.
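The half-line offset's sign, as recited, depends on the y-axis orientation and on which field carries the top line. A tiny sketch of that sign rule follows; the specific convention encoded here (positive half line when the y-axis points down and the first field holds the top line) is an assumption, as the claim fixes only the dependencies, not the convention:

```python
def half_line_offset(y_axis_down, top_field):
    """Half-line vertical offset between fields of opposite sense.

    `top_field` is 'first' or 'second' per which field includes the
    top line of the frame. Assumed convention: +0.5 lines when the
    y-axis points down and the first field holds the top line.
    """
    sign = 1.0 if y_axis_down else -1.0
    if top_field == 'second':
        sign = -sign
    return sign * 0.5
```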
- 41. The method of claim 40, wherein the desired output time is the time of one of the fields, the method further comprising:
combining the generated output field with the field at the desired output time to generate a progressive image.
- 42. The method of claim 41, further comprising:
performing an effect on the progressive image.
- 43. The method of claim 42, further comprising:
vertically decimating the progressive image with the effect to produce a field at the desired output time.
- 44. A method for processing two fields of interlaced video, comprising:
computing motion vectors describing motion of image characteristics from a field to another field of opposite sense; removing from the motion vectors an offset corresponding to one half of a line and having a sign according to an orientation of the y-axis of the image space and which field includes the top line; selecting a time of one of the fields as a desired output time; transforming a sampling region specified at the selected time using the motion vectors to a sample time at the field that is being warped; translating the transformed sampling region by an offset of one half of a line and having a sign determined by an orientation of the y-axis of the image space and which field includes the top line; and generating the output field at the desired output time using the transformed and translated sampling region and the field that is being warped.
- 45. The method of claim 44, further comprising:
combining the generated output field with the field at the desired output time to generate a progressive image.
- 46. The method of claim 45, further comprising:
performing an effect on the progressive image.
- 47. The method of claim 46, further comprising:
vertically decimating the progressive image with the effect to produce a field at the desired output time.
- 48. The method of claim 45, further comprising:
exporting the progressive image to a file.
- 49. The method of claim 45, further comprising:
displaying the progressive image as a still frame.
- 50. The method of claim 49, wherein displaying the progressive image as a still frame includes displaying the progressive image in a freeze frame effect.
- 51. The method of claim 49, wherein displaying the progressive image as a still frame includes displaying the progressive image upon stopping playback.
- 52. The method of claim 44, further comprising:
warping another field of the opposite sense to the desired output time; and blending the two warped fields to produce a field at the desired output time.
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit under 35 U.S.C. 120 and is a continuing application of U.S. patent application Ser. No. 09/657,699, filed Sep. 8, 2000, now pending, which is hereby incorporated by reference. This application also is related to U.S. patent application entitled “Analyzing Motion of Characteristics in Images,” by Katherine Cornog and Randy Fayan, filed on even date herewith, and “Correcting Motion Vector Maps for Image Processing,” by Katherine Cornog and Randy Fayan, filed on even date herewith, both of which are hereby incorporated by reference.
Continuations (1)

|        | Number   | Date     | Country |
| ------ | -------- | -------- | ------- |
| Parent | 09657699 | Sep 2000 | US      |
| Child  | 09839160 | Apr 2001 | US      |