Media pipeline with multichannel video processing and playback

Information

  • Patent Grant
  • Patent Number: 6,357,047
  • Date Filed: June 30, 1997
  • Date Issued: March 12, 2002
Abstract
The system processes sequences of digital still images to provide real-time digital video effects, and includes first and second channels for communicating first and second sequences of digital still images at a rate for simulating video. A controller directs still images to one of the first and second channels. A blender, having a first input connected to the first channel, a second input connected to the second channel, and an output, provides a combination of the first and second sequences of digital still images at a rate for simulating video.
Description




BACKGROUND OF THE INVENTION




Technology for manipulating digital video has progressed to the point where it can be readily processed and handled on computers. For example, the Avid/1 Media Composer, available from Avid Technology, Inc. of Tewksbury, Mass., is a system wherein digital video can be readily captured, edited, and displayed for various purposes, such as broadcast television and film and video program post-production.




The Avid/1 Media Composer uses a media pipeline to provide real-time digital video output on a computer display. This media pipeline 30 is shown in FIG. 1 and is described in more detail in U.S. Pat. No. 5,045,940, issued Sep. 3, 1991. In this media pipeline 30, a permanent storage 40 stores sequences of digital still images which represent digital video and are played back at a rate which provides the appearance of video. The sequences of digital still images do not include any frame synchronization or other type of timing information of the kind typically found in television signals. The still images also typically are stored in compressed form. The stored sequences are accessed and placed in a data buffer 42, from which they are provided to a compression/decompression system 44. The output of the compression/decompression system 44 is applied to a frame buffer 46, which converts the still image to a typical video signal that is then applied to an input/output unit 48. Each of the systems 40, 42, 44, 46, and 48 in this media pipeline 30 operates bi-directionally. That is, the output process discussed above can be reversed, and video signals can be input via the input/output unit 48 to the frame buffer 46, where they are converted to a sequence of digital still images. The images in the sequence are compressed by the compression/decompression system 44, stored in the data buffer 42 and then transferred to the permanent storage 40.




Although the media pipeline provides many advantages for digital video, including enabling broadcast and editing from the stored digital images in a computer system, this media pipeline is not able to provide real-time digital video effects. These include complex arbitrary three-dimensional effects; simpler two-dimensional effects such as resizing, x-y translation, rotation and layering (an appearance of picture-in-picture); and simple effects such as dissolves, wipes, fades and luma and/or chroma keying. In order to view such effects on the computer, the effect generally must first be generated (not in real time), then digitized and stored if generated on tape, and finally played back.




SUMMARY OF THE INVENTION




The invention improves over the prior art by providing a media pipeline with two channels for processing sequences of digital still images. A blender is provided so as to enable simple effects on these two streams of video data such as dissolves, wipes and chroma keys. Complex arbitrary three-dimensional effects and other effects may also be provided using an external interface.




Thus, a system for processing sequences of digital still images to provide real-time digital video effects includes first and second channels for communicating first and second sequences of digital still images at a rate for simulating video. A controller directs still images to one of the first and second channels. A blender, having a first input connected to the first channel, a second input connected to the second channel, and an output, provides a combination of the first and second sequences of digital still images at a rate for simulating video.











BRIEF DESCRIPTION OF THE DRAWING




In the drawing,

FIG. 1 is a block diagram of a media pipeline as is used in the prior art;

FIG. 2 is a block diagram of a modified media pipeline in accordance with the present invention;

FIG. 3 is a more detailed block diagram of a modified compression/decompression subsystem of the media pipeline in accordance with the present invention;

FIG. 4 is a block diagram of a modified media pipeline in accordance with the present invention to provide real-time digital video effects;

FIG. 5 is a block diagram of the α generator for box wipes;

FIG. 6 is a flow chart describing the operation of a state machine for each scan line in a frame for a box wipe;

FIG. 7 is a flow chart describing the operation of a state machine for each pixel in a scan line for a box wipe; and

FIG. 8 is a diagram showing how α is determined for different regions of an image for a box wipe.











DETAILED DESCRIPTION




The present invention will be more completely understood through the following detailed description which should be read in conjunction with the attached drawing in which similar reference numbers indicate similar structures. All references cited herein, including pending patent applications, are hereby expressly incorporated by reference.




A media pipeline 35 with two channels of digital video for providing effects will now be described in connection with FIG. 2. The media pipeline 30 shown in FIG. 1 is modified to include a compression/decompression (CODEC) unit 58, which is a modification of the compression/decompression system 44 of FIG. 1. The CODEC unit 58 has two CODEC channels 50 and 52. One is used for compression and decompression, i.e., for both recording and playback, while the other is used only for playback. The outputs of these channels are fed to a blender 54, which combines them according to the desired effect. It is not necessary to use compressed data; however, compression is preferable to reduce storage requirements. This compression/decompression unit 58 is described in more detail in British provisional specification 9307894.7, filed Apr. 16, 1993, under U.S. foreign filing license 504287 granted Apr. 13, 1993.




This CODEC unit 58 will now be described in more detail in connection with FIG. 3. In this figure, a control unit 60 controls two channels of coder/decoders. The modification to the media pipeline 30 is made by assigning, in the control unit 60, different sections of the compressed data buffer 42 to each channel. A sequence of digital still images is also assigned to a channel. Thus, when the sequence is read into the compressed data buffer 42, it is input to the section assigned to the channel for that sequence. Thus, reading and writing of data into the FIFOs 62 and 64 for the CODECs 66 and 68 is based on the assignment of a channel to a selected sequence of digital still images.
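As a concrete illustration of this buffer-section scheme, the following sketch (in Python; the class, the method names, and the deque-backed sections are illustrative assumptions, not the patent's hardware) routes each sequence's compressed frames into the section assigned to its channel, from which only that channel's CODEC reads:

```python
from collections import deque

class CompressedDataBuffer:
    """Models data buffer 42: one section (a deque here) per CODEC channel."""
    def __init__(self, channels=("A", "B")):
        self.sections = {ch: deque() for ch in channels}
        self.assignment = {}  # sequence id -> channel

    def assign(self, sequence_id, channel):
        # The control unit binds a sequence of still images to a channel,
        # so all of its compressed frames land in that channel's section.
        self.assignment[sequence_id] = channel

    def write(self, sequence_id, compressed_frame):
        self.sections[self.assignment[sequence_id]].append(compressed_frame)

    def read(self, channel):
        # A channel's FIFO/CODEC drains only its own section.
        section = self.sections[channel]
        return section.popleft() if section else None

buf = CompressedDataBuffer()
buf.assign("foreground", "A")
buf.assign("background", "B")
buf.write("foreground", b"...jpeg frame...")
assert buf.read("A") == b"...jpeg frame..."
```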




Each channel has a separate CODEC, either a first CODEC 66 or a second CODEC 68. The CODECs typically use the Joint Photographic Experts Group (JPEG) proposed standard for still image compression. Such CODECs are commercially available, such as the CL550 available from C-Cube of Milpitas, Calif. Each CODEC has a respective first-in, first-out (FIFO) memory element 62 or 64. The FIFO memory elements 62 and 64 feed respectively the CODECs 66 and 68, whose outputs are applied to field buffers 70 and 72, which are also preferably FIFO memory elements. These two channels may be blended using a blender 74 which is controlled by an addressing and alpha information unit 76, as will be described in more detail below. The blender 74 and the alpha and addressing information unit 76 are preferably implemented using a field reprogrammable gate array such as the XC3090 manufactured by Xilinx.




Alternatively, a first output sequence may be provided by output A from the FIFO 70 for CODEC 66, and a second output sequence may then be provided by the output of the blender, when no blending is performed, as output B. Thus, FIFO 70 and blender 74 act as first and second sources of sequences of digital still images. The outputs A and B may be applied to a digital video effects system 59, as shown in FIG. 4. This embodiment is useful for providing arbitrary three-dimensional video effects, as are described in U.S. patent application entitled "Media Pipeline with Mechanism for Real-Time Addition of Digital Video Effects," filed Mar. 18, 1994 by Harry Der et al. and assigned to Avid Technology, Inc. of Tewksbury, Mass.




More complex, two-dimensional effects can also be made using techniques known in the art, including X-Y translation, rotation and scaling. An additional effects board, similar to that for the three-dimensional, arbitrary effects, can be provided so as to perform an operation on a single stream. To provide this operation, the output A as shown in FIG. 4 is applied to such an effects generator, the output of which would be applied to the input of the blender originally designed to receive channel A. When this capability is provided, the digital effects typically produce an output using the YUV data format with four bits for each of the Y, U and V parameters (4:4:4). In contrast, the normal data format for channels A and B is 4:2:2. Thus, in this instance, the blender 76 should be designed so as to optionally process channel A in either 4:4:4 format or 4:2:2 format, according to whether such digital effects are being provided.
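The format mismatch is easy to picture: in 4:2:2 each horizontal pair of pixels shares one U and one V sample, so a blender accepting both formats must first bring channel A up to one chroma sample per pixel. A minimal Python sketch of that upsampling (the function name and nearest-sample duplication are assumptions; real hardware might interpolate instead):

```python
def yuv422_to_yuv444(y, u, v):
    """y has one sample per pixel; u and v have one sample per pixel pair.
    Duplicate each chroma sample so every pixel gets its own U and V."""
    assert len(u) == len(v) == (len(y) + 1) // 2
    u444 = [u[i // 2] for i in range(len(y))]
    v444 = [v[i // 2] for i in range(len(y))]
    return y, u444, v444

y, u, v = [16, 32, 48, 64], [100, 110], [120, 130]
print(yuv422_to_yuv444(y, u, v))
# ([16, 32, 48, 64], [100, 100, 110, 110], [120, 120, 130, 130])
```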




Provision of simpler video effects in real time, such as box wipes and chroma and luma keys, using the blender 74 and the alpha and addressing information unit 76, will now be described. Blending of two streams (A and B) of video typically involves the application of the function αA + (1−α)B to the streams of video information, where α is a value which can vary from pixel to pixel in an image, and where A and B, at any given point in time, are pixels in corresponding frames in the two streams (A and B) of video. Each effect is thus applied to one frame from each of the two streams. (One normally does not perform an effect on only a fraction of a frame.) Given α at any point in time and the addresses for pixels A and B, the output image can be generated. The blender 74, which performs this operation by determining the result of the combination αA + (1−α)B, can be implemented using standard digital hardware design techniques. Preferably, a field reprogrammable gate array is used to implement the function (A−B)α + B.
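A software model of this blend is straightforward. The sketch below (pure Python with 8-bit samples; the names are illustrative) computes (A−B)α + B per sample, which equals αA + (1−α)B but needs only a single multiply per sample, a plausible reason for that factoring in a gate array:

```python
def blend_pixel(a, b, alpha):
    """a, b: 0-255 samples from streams A and B; alpha: 0.0-1.0."""
    return round((a - b) * alpha + b)

def blend_line(line_a, line_b, alphas):
    # One alpha per pixel, as supplied by the alpha and addressing unit.
    return [blend_pixel(a, b, al) for a, b, al in zip(line_a, line_b, alphas)]

# A 50% dissolve of two 4-pixel scan lines:
print(blend_line([200, 200, 200, 200], [0, 100, 50, 255], [0.5] * 4))
# [100, 150, 125, 228]
```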




The value of α applied to two pixels depends upon the kind of effect to be provided. For example, a dissolve uses the same α for all pixels in one frame, and α is gradually decreased for subsequent frames in the dissolve. The pair of pixels to be combined also depends upon the kind of effect to be provided. The indication of which pair of pixels to use is called the addressing information. For each kind of effect to be provided, a state machine and state variables can be defined for processing one frame of output video. The two general types of effects are chroma and/or luma keys and box wipes, which include dissolves and fades.
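For instance, a dissolve's addressing is trivial (pixels at the same location in both frames), and only α changes from frame to frame. A sketch of such a per-frame α schedule, assuming a linear ramp and a given frame count (the helper name is illustrative):

```python
def dissolve_alphas(num_frames):
    """Frame 0 shows stream A (alpha = 1.0); the last frame shows B."""
    return [1.0 - i / (num_frames - 1) for i in range(num_frames)]

print([round(a, 2) for a in dissolve_alphas(5)])
# [1.0, 0.75, 0.5, 0.25, 0.0]
```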




For example, in order to implement chroma and/or luma keying, two threshold values D1 and D2 and a key point Kc are defined by the user for each parameter of a pixel. These effects are typically applied to the YUV representation of an image. Thus, an incoming image is processed by comparing the pixel Y, U and V values to the key points and threshold values defined for Y, U, and V. In particular, using the parameter U as an example, ∥Kc−U∥ is calculated. If this value is less than D1, α is set to be the maximum possible value. If this value is greater than D2, α is set to be the minimum possible value. When the value is somewhere between D1 and D2, a value for α is determined according to this value. In one embodiment of the invention, the difference D2−D1 is required to be some fixed number, e.g., 16. The magnitude of this fixed number represents the desired number of possible α values in the transition. In this embodiment, when the value of ∥Kc−U∥ is between D1 and D2, the value ∥Kc−U∥−D1 is applied to a lookup table (stored in a random access, preferably rewritable, memory), which stores corresponding values of α to be used. The values of α may be any function of the input ∥Kc−U∥−D1, such as a step function, a sigmoid function, a ramp or any function desired by a user. Typically, only Y or U,V are keyed and processed. One could apply keying to all of Y, U and V at once and combine the resulting α values, for example, by using the function (½(α_u + α_v)) AND α_y.
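The per-parameter keying rule maps directly onto a small lookup. The sketch below (pure Python; ALPHA_MAX, the table contents and all names are assumptions for illustration) returns the maximum α within D1 of the key point, the minimum beyond D2, and a table-driven value in between, as described above:

```python
ALPHA_MAX = 255
D1, D2 = 16, 32           # D2 - D1 fixed at 16 table entries in this sketch
KC = 128                  # key point for the U parameter

# Lookup table for the transition band; a linear ramp here, but the
# patent allows any user-defined function (step, sigmoid, ...).
LUT = [ALPHA_MAX - (ALPHA_MAX * i) // (D2 - D1 - 1) for i in range(D2 - D1)]

def key_alpha(u):
    dist = abs(KC - u)            # ||Kc - U||
    if dist < D1:
        return ALPHA_MAX          # inside the key: full alpha
    if dist > D2:
        return 0                  # outside the key: minimum alpha
    return LUT[min(dist - D1, len(LUT) - 1)]

for u in (130, 150, 170):         # inside, in transition, outside
    print(u, key_alpha(u))        # -> 255, 153, 0
```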




Box wipes with a border can also be provided. A box wipe is a transition between two streams defined by a rectangular shape. Within the rectangle, information from one channel is provided. Outside the rectangle, information from the other channel is provided. The transition region can be strictly defined by the border of the rectangle, or a border color can be provided. The transition can be described as a linear ramp (defined by a ratio of the channels to each other). The transition is thus defined by the lower and upper limits of the ramp, the step size, and the duration. All of these parameters should be user definable. Also, the coordinates of the box should be programmable to provide a horizontal wipe, a vertical wipe, or some corner-to-corner wipe. Typically, a blend is performed from the first channel to the border, from the border to the next channel, or among both channels. A state machine can readily be defined according to the variables defining the wipe so as to provide an output α value for each pair of pixels to be combined. There are three values used to define the final α. The α_init values define the initial α_X and α_Y values, where α_X and α_Y are accumulated values according to the state machine. In the simplest wipe, a dissolve, the initial values are held, i.e., not changed, throughout a whole frame. In the other box wipes, α_X and α_Y may change, according to the desired wipe. In this process, the final α value is typically taken to be α_X, subject to a limiting function defined by α_Y. That is, the final α typically is α_X when α_X is less than α_Y, and typically is α_Y when α_X is greater than α_Y.




A wipe is defined by two sets of parameters. The first set is parameters for the X direction in a frame; the second set is parameters for the Y direction, defining changes between scan lines in the effect. Both the X and Y parameters include four groups of four parameters, each group representing an operation and including offset, control, interval, and delta information. The offset information defines where blending is to begin. In the X direction, it identifies the pixel in the scan line where the first blend begins. In the Y direction, it identifies the scan line where blending is to begin. The next information is control information, identifying whether further operations in the scan line or in the frame will follow. For the X parameter, this control information is represented by two bits, wherein the first bit represents whether video is swapped between the A and B channels, and the other bit indicates whether another operation in the X direction will appear. After the control information is the interval over which the blend is to be performed. The interval identifies either the number of scan lines or a number of pixels within one scan line. Finally, the delta information represents an increment to be added to α_X or α_Y for each pixel over the defined interval. Thus, a wipe is defined by four groups of four operations in each of the X and Y directions. The first operation signifies the transition from channel A to the border; the second, from the border to the second channel; the third, from the second channel to the border; and the fourth, from the border to the first channel. If there is no border, only two operations are used, and the second operation indicates that there is no further operation to be performed, either for the scan line or for the frame.
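These per-direction operation groups have a natural record layout. A sketch of that layout (a Python dataclass; the field names are assumptions), together with the two-operation case of a borderless wipe:

```python
from dataclasses import dataclass

@dataclass
class WipeOp:
    offset: int      # pixel (X) or scan line (Y) where this blend begins
    swap: bool       # control bit: swap video between channels A and B
    more: bool       # control bit: another operation follows
    interval: int    # pixels (X) or scan lines (Y) the blend spans
    delta: int       # increment added to alpha_x / alpha_y per step

# A borderless horizontal wipe needs only two X operations: ramp alpha
# down across 64 pixels starting at pixel 100, then stop.
x_ops = [
    WipeOp(offset=100, swap=False, more=True, interval=64, delta=-4),
    WipeOp(offset=0, swap=False, more=False, interval=0, delta=0),  # stop
]
```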




Given the operations defining the wipe to be performed, including the four groups of operational information for each of the X and Y parameters, a state machine can be used to determine α for each pixel in the frame. These state machines will now be described in connection with FIGS. 5 through 8.





FIG. 5 is a block diagram illustrating some structures controlled by a state machine. The state machine's operation will be described in connection with the flow charts of FIGS. 6 and 7. In FIG. 5, an X parameter memory 82 and a Y parameter memory 80 store the operations to be performed. An address pointer is stored in registers 84 and 86 for each of these memories as X and Y address pointers. Initial X and Y delta values are also stored in registers 88 and 90. These are fed to accumulators for the X and Y values 92 and 94 via switches 96 and 98. The outputs of the accumulators 94 and 92 are fed to a compare and switch unit 100, the output of which provides the α value in a manner to be described below in connection with FIG. 8. There is also a loop counter 102, which indicates the part of the frame on which the effect is being performed. The significance of this will also be discussed further below in connection with FIG. 8. There are also a Y position counter 104 and an X position counter 106, which are used by the control 108, which operates in accordance with the flow charts of FIGS. 6 and 7.





FIGS. 6 and 7 will now be described.




Upon a horizontal reset (HRST) or vertical sync (VSYNC), as indicated at step 110, the Y accumulator 94 and Y address pointer 86 are cleared. An initial Y delta value 90 is then loaded into the accumulator 94 via switch 98 (step 112). An offset is then read from the Y parameter memory 80 into the Y position counter 104 (step 114). Control information is then read from the Y parameter memory 80 into the loop counter 102 (step 116).




When valid data is available to be processed, operations on a scan line are performed in step 118, as will be discussed below in connection with FIG. 7, and the Y position counter 104 is decremented after each scan line is processed. When the Y position counter reaches zero, the interval is read from the Y parameter memory 80 and loaded into the Y position counter 104 (step 120). A delta value is then read from the Y parameter memory 80 into the Y accumulator 94 and is added to the current value therein (step 122). This value is added for each scan line until the Y position counter again reaches zero. Each scan line is processed in accordance with the steps described below in connection with FIG. 7. When the Y position counter 104 is zero, the control information is examined in step 124 to determine whether further Y operations are to be performed. If there are further operations to be performed, processing returns to step 114. Otherwise, the system waits until another horizontal reset or vertical sync occurs.
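Putting the FIG. 6 flow into software form, the sketch below (pure Python; the structure and names are assumptions) reuses the WipeOp records from the earlier sketch, holding α_Y through each operation's offset and ramping it by delta over the interval. It calls a process_scan_line helper, sketched after the FIG. 7 discussion below:

```python
def run_frame(y_ops, x_ops, width, init_delta_y=0):
    alpha_y = init_delta_y            # steps 110-112: clear, load initial delta
    rows = []
    for op in y_ops:                  # steps 114-116: next Y operation
        for _ in range(op.offset):    # step 118: hold alpha_y until offset spent
            rows.append(process_scan_line(x_ops, width, alpha_y))
        for _ in range(op.interval):  # steps 120-122: ramp alpha_y per line
            alpha_y += op.delta
            rows.append(process_scan_line(x_ops, width, alpha_y))
        if not op.more:               # step 124: no further Y operations
            break
    return rows                       # one list of per-pixel alphas per line
```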




Operations on one scan line will now be described in connection with FIG. 7. These operations begin in step 130 upon the receipt of a horizontal sync or reset. In step 132, the X accumulator 92 and X address pointer 84 are cleared, and an initial delta value 88 is then loaded into the X accumulator 92 via switch 96. An offset is then loaded from the X parameter memory 82 in step 134 into the X position counter 106. Next, control information is read from the X parameter memory 82 into the loop counter 102 in step 136. The X position counter is decremented when valid data is available, until the X position counter 106 is zero (step 138). The interval is then read from the X parameter memory 82 into the X position counter 106 in step 140. The delta value is then read from the X parameter memory 82 into the X accumulator 92 and is added to the current value in the accumulator 92 until the X position counter is zero (step 142). The X address pointer and Y address pointer are incremented during this process to identify the correct operation. If the control information indicates that more X operations are to be performed, as determined in step 144, processing returns to step 134. Otherwise, the system waits in step 146 until another horizontal sync or reset occurs.
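The FIG. 7 flow is the per-scan-line counterpart. This companion sketch (same assumptions as above; the min() stands in for the full compare and switch rules of FIG. 8, sketched later) holds α_X through the offset, ramps it over the interval, and emits one α per pixel:

```python
def process_scan_line(x_ops, width, alpha_y, init_delta_x=0):
    alpha_x = init_delta_x             # step 132: clear, load initial delta
    alphas = []
    for op in x_ops:                   # steps 134-136: next X operation
        for _ in range(op.offset):     # step 138: hold alpha_x until offset spent
            alphas.append(min(alpha_x, alpha_y))
        for _ in range(op.interval):   # steps 140-142: ramp alpha_x per pixel
            alpha_x += op.delta
            alphas.append(min(alpha_x, alpha_y))
        if not op.more:                # step 144: no further X operations
            break
    # Pad the remainder of the line with the final held value.
    alphas += [min(alpha_x, alpha_y)] * (width - len(alphas))
    return alphas
```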




For each pixel in the scan line, as indicated by each decrement operation on the X position counter 106, an α value is output from the compare and switch unit 100. How this operation is provided was discussed above. Further details of this selection for more complicated box wipes will now be provided in connection with FIG. 8.




As indicated in FIG. 5, the compare and switch unit 100 receives an α_Y value from the Y accumulator 94, an α_X value from the X accumulator 92, and the value of the loop counter 102. The loop counter indicates which quadrant in a box wipe with a border is being processed. The indication of a quadrant can readily be determined by the status of the state machine and a counter. Because the X operations and Y operations are each defined by four groups of four parameters, wherein each group identifies an operation to be performed on a portion of an image, there are precisely sixteen combinations of X parameters and Y parameters, each identifying a quadrant of the resulting image. The α value for each quadrant has a predetermined relationship with the α_X and α_Y values. Thus, according to the loop control 102, the appropriate selection among α_X and α_Y can be provided.




The relationships of α to α_X and α_Y for each quadrant will now be described in connection with FIG. 8.





FIG. 8 illustrates 25 regions of a box wipe with a border and transitions between one image, a border color, and another image. There are sixteen general types of regions to be considered in this effect, each being numbered accordingly in the upper left hand corner of the box. For example, the first region 200 is labeled zero, as indicated at 202. A box 204 is shown in each region identifying the source of image data (where CH0 corresponds to the channel applied to input A of the blender 76 and CH1 corresponds to the input applied to input B of the blender). The first line in box 204 indicates the α value to be provided to the blender. "A" indicates that α_X is supplied as α, as in regions 4, 7, 8 and 11. "AL" indicates that α_Y is supplied as the α value, as in regions 1, 2, 13 and 14. "AL:A<AL" indicates that α_X is provided when it is less than α_Y, and α_Y is provided otherwise, as in regions 0, 3, 10, 12 and 15. "AL:A>=AL" indicates that α_Y is provided unless α_X is greater than α_Y, in which case α_X is provided, as in regions 5, 6 and 9.
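These four rules are easy to tabulate. The sketch below (the mode strings mirror FIG. 8's labels; the dispatch itself is an illustrative assumption about the compare and switch unit) derives the final α for a region from the accumulated α_X and α_Y:

```python
def final_alpha(alpha_x, alpha_y, mode):
    if mode == "A":            # regions 4, 7, 8, 11
        return alpha_x
    if mode == "AL":           # regions 1, 2, 13, 14
        return alpha_y
    if mode == "AL:A<AL":      # regions 0, 3, 10, 12, 15
        return alpha_x if alpha_x < alpha_y else alpha_y
    if mode == "AL:A>=AL":     # regions 5, 6, 9
        return alpha_x if alpha_x >= alpha_y else alpha_y
    raise ValueError(mode)

print(final_alpha(40, 100, "AL:A<AL"))   # 40: alpha_x below the limit
print(final_alpha(140, 100, "AL:A<AL"))  # 100: alpha_y caps the value
```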




Having now described a few embodiments of the invention, it should be apparent to those skilled in the art that the foregoing is merely illustrative and not limiting, having been presented by way of example only. Numerous modifications and other embodiments are within the scope of one of ordinary skill in the art and are contemplated as falling within the scope of the invention as defined by the appended claims and equivalents thereto.



Claims
  • 1. A method of generating a third sequence of digital still images from a first sequence of digital still images and a second sequence of digital still images during playback, wherein the first and second sequences are stored in data files in a file system, each digital image of the first and second sequence including a plurality of pixels, the method comprising:controlling the transfer of the first and second sequences from the data files to a first and a second data buffer, respectively; receiving a transition signal defining a transition from the first sequence to the second sequence; controlling reading of the first and second sequences from the first and second buffers, respectively; and generating the third sequence of digital still images from the read first sequence and the read second sequence in accordance with the defined transition, wherein the step of controlling the transfer of the first and second sequences from the data files to the first and second buffers, respectively, includes transferring to the first and second buffers in accordance with the amount of space available in each of first and second buffers, and wherein the first sequence is associated with the first data buffer and the second sequence is associated with the second data buffer, and wherein transferring comprises: determining an amount of data in the first data buffer and in the second data buffer; selecting one of the first and second sequences, wherein the selected sequence has the least amount of data in the associated data buffer, and selecting a desired amount of data to be read for the sequence; for the selected sequence, reading the desired amount of data from the data file stored in the file system; and repeating steps of determining, selecting and reading during playback of the generated third sequence of digital still images.
  • 2. The method of claim 1, wherein the transition is user-defined.
  • 3. The method of claim 1, further comprising:encoding the third sequence of digital still images in a motion video signal.
  • 4. The method of claim 1, further comprising:transferring the third sequence of digital still images to a video encoder to be encoded in a motion video signal, wherein the transition is generated in response to demands from the video encoder.
  • 5. The method of claim 1, wherein the first and second buffers both receive and output data on a first-in, first-out basis.
  • 6. The method of claim 1, wherein the data in the files is compressed, the method further comprising:decompressing the data.
  • 7. The method of claim 1, further comprising:transferring the first sequence of digital still images to a digital video effects system operative to perform three-dimensional video effects on sequences of digital still images; and transferring the third sequence of digital still images to the digital video effects system, wherein, in accordance with the transition defined by the transition signal, the third sequence is the same as the second sequence.
  • 8. The method of claim 1, wherein the third sequence is generated at a real-time rate.
  • 9. The method of claim 1, wherein the third sequence is generated at a user-selectable rate.
  • 10. The method of claim 1, further comprising:determining one or more digital still images of the first sequence and one or more digital still images of the second sequence to which the transition applies based on the transition signal.
  • 11. A system for generating a third sequence of digital still images from a first sequence of digital still images and a second sequence of digital still images during playback, wherein the first and second sequences are stored in data files in a file system, each digital image of the first and second sequence including a plurality of pixels, the system comprising:means for controlling the transfers of the first and second sequences from the data files to a first and a second data buffer, respectively; means for receiving a transition signal defining a transition from the first sequence to the second sequence; means for controlling reading the first and second sequences from the first and second buffers, respectively; and means for generating the third sequence of digital still images from the read first sequence and the read second sequence in accordance with the defined transition, wherein the means for controlling the transfers of the first and second sequences from the data files to the first and second buffers controls the transfers in accordance with the amount of space available in each of first and second buffers, and wherein the first sequence is associated with the first data buffer and the second sequence is associated with the second data buffer, and wherein the means for controlling the transfers of the first and second sequences from the data files to a first and a second data buffer comprises: means for determining an amount of data in the first data buffer and in the second data buffer; means for selecting one of the first and second sequences, wherein the selected sequence has the least amount of data in the associated data buffer, and selecting a desired amount of data to be read for the sequence; means for reading the desired amount of data from the data file for the selected sequence stored in the file system; and wherein the means for determining, means for selecting and means for reading cooperate during playback and generation of the third sequence of digital still images.
  • 12. The system of claim 11, wherein the transition is user-defined.
  • 13. The system of claim 11, further comprising:means for encoding the third sequence of digital still images in a motion video signal.
  • 14. The system of claim 11, further comprising:means for transferring the third sequence of digital still images to a video encoder to be encoded in a motion video signal, wherein the means for generating generates the transition between the first and second sequences of digital still images in response to demands from the video encoder.
  • 15. The system of claim 11, wherein each of the first and second buffers receives and outputs data on a first-in, first-out basis.
  • 16. The system of claim 11, wherein the data files are in a compressed state, the system further comprising:means for decompressing the data files.
  • 17. The system of claim 11 further comprising:means for transferring the first sequence of digital still images to a digital video effects system operative to perform three-dimensional video effects on sequences of digital still images; and means for transferring the third sequence of digital still images to the digital video effects system, wherein, in accordance with the transition defined by the transition signal the third sequence is the same as the second sequence.
  • 18. The system of claim 11, wherein the system is operative to generate the third sequence at a real-time rate.
  • 19. The system of claim 11, wherein the system is operative to generate the third sequence at a user-selectable rate.
  • 20. The system of claim 11, further comprising:means for determining the first digital still image and the second digital still image between which to generate the transition based on the transition signal.
  • 21. A system for generating a third sequence of digital still images from a first sequence of digital still images and a second sequence of digital still images during playback, wherein the first and second sequences are stored in data files in a file system, each digital image of the first and second sequence including a plurality of pixels, the system comprising:a first data buffer and a second data buffer; a first controller to control the transfers of the first and second sequences from the data files to the first and second data buffers, respectively; a second controller having a first input to receive a transition signal defining the transition from the first sequence to the second sequence, a first output to control a read of one or more digital still images of the first sequence and one or more digital still images of the second sequences from the first buffer and the second buffer, respectively, to the processing module in accordance with the transition signal, and a second output to produce at an output a control signal, wherein the control signal indicates the transition to be performed based on the transition signal; and a digital video processing module for generating the third sequence of digital still images, the processing module having a first input to receive the one or more digital still images of the first sequence, a second input to receive the one or more digital still images of the second sequence, a third input to receive the control signal, and an output to provide the third sequence, the digital video processing module generating the third sequence from the one or more digital still images of the first sequence and the one or more digital still images of the second sequences in accordance with the control signal, wherein the first controller is operative to transfer the first and second sequences to the first and second buffers, respectively, in accordance with the amount of space available in each of first and second buffers, and wherein the first sequence is associated with the first data buffer and the second sequence is associated with the second data buffer, and wherein the first controller comprises: means for determining an amount of data in the first data buffer and in the second data buffer; means for selecting one of the first and second sequences, wherein the selected sequence has the least amount of data in the associated data buffer, and selecting a desired amount of data to be read for the sequence; means for reading the desired amount of data from the data file for the selected sequence stored in the file system; and wherein the means for determining, means for selecting and means for reading cooperate during playback and generation of the third sequence of digital still images.
  • 22. The system of claim 21, wherein the transition is user-defined.
  • 23. The system of claim 21, further comprising:a video encoder to receive the third sequence of digital still images and encode the third sequence in a motion video signal, wherein the second controller is operative to transfer the first and second sequences of digital still images from the first and second data buffers, respectively, to the processing module in response to demands from the video encoder.
  • 24. The system of claim 21, wherein the first and second buffers both receive and output data on a first-in, first-out basis.
  • 25. The system of claim 21, wherein the data files are in a compressed state, the system further comprising:a decompressor to decompress the data files.
  • 26. The system of claim 21, further comprising:a digital video effects system having first input to receive the first sequence of digital still images and a second input to receive the third sequence of digital still images, the digital video effects system to perform three-dimensional video effects on the first and third sequences of digital still images, wherein, in accordance with the transition defined by the transition signal, the third sequence is the same as the second sequence.
  • 27. The system of claim 21, wherein the system is operative to generate the third sequence at a real-time rate.
  • 28. The system of claim 21, wherein the system is operative to generate the third sequence at a user-selectable rate.
  • 29. The system of claim 21, wherein the second controller is operative to determine the first image and the second image between which to generate the transition based on the transition signal.
  • 30. A process for transferring video data for first and second sequences of digital still images from one or more video data files stored in a file system to an effects processing device during playback of a third sequence of digital still images output by the effects processing device, wherein the effects processing device has a first buffer for storing video data before processing for the first sequence and a second buffer for storing video data before processing for the second sequence, and a third buffer for storing the generated third sequence before playback by an output device, the process comprising:determining an amount of data in the first buffer and in the second buffer; selecting one of the first and second sequences, wherein the selected sequence has the least amount of data in the buffer associated with the sequence, and selecting a desired amount of data to be read for the sequence; for the selected sequence, reading the desired amount of data from the data file for the selected sequence in the file system; processing the video data in the first and second buffers received from the file system using the effects processing device to provide the third sequence to the third buffer for playback by the output device; and repeating steps of determining, selecting, reading and processing during playback of the third sequence.
  • 31. The method of claim 30, wherein processing the video data comprises:receiving a transition signal defining a transition from the first sequence to the second sequence; controlling reading of the first and second sequences from the first and second buffers, respectively; and generating the third sequence of digital still images from the read first sequence and the read second sequence in accordance with the defined transition.
  • 32. The method of claim 31, further comprising:transferring the third sequence of digital still images to a video encoder to be encoded in a motion video signal, wherein generating data for the third sequence is performed in response to demands from the video encoder during playback.
  • 33. The method of claim 30, wherein the first and second buffers both receive and output data on a first-in, first-out basis.
  • 34. The method of claim 31, wherein processing comprises:transferring the first sequence of digital still images to a digital video effects system operative to perform three-dimensional video effects on sequences of digital still images; processing the second sequence of digital still images in accordance with the transition defined by the transition signal such that the third sequence is the same as the second sequence; and transferring the third sequence of digital still images to the digital video effects system.
  • 35. The method of claim 30, wherein the third sequence is generated at a real-time rate.
  • 36. The method of claim 30, wherein the third sequence is generated at a user-selectable rate.
  • 37. An apparatus for transferring video data for first and second sequences of digital still images from one or more video data files stored in a file system to an effects processing device during playback of a third sequence of digital still images output by the effects processing device, wherein the effects processing device has a first buffer for storing video data before processing for the first sequence and a second buffer for storing video data before processing for the second sequence, and a third buffer for storing the generated third sequence before playback by an output device, the apparatus comprising:means for determining an amount of data in the first buffer and in the second buffer; means for selecting one of the first and second sequences, wherein the selected sequence has the least amount of data in the buffer associated with the sequence, and selecting a desired amount of data to be read for the sequence; means for reading, for the selected sequence, the desired amount of data from the data file for the selected sequence in the file system; means for processing the video data in the first and second buffers received from the file system using the effects processing device to provide the third sequence to the third buffer for playback by the output device; and wherein the means for determining, selecting, reading and processing cooperate during playback of the third sequence.
  • 38. The apparatus of claim 37, wherein the means for processing the video data comprises:means for receiving a transition signal defining a transition from the first sequence to the second sequence; means for controlling reading of the first and second sequences from the first and second buffers, respectively; and means for generating the third sequence of digital still images from the read first sequence and the read second sequence in accordance with the defined transition.
  • 39. The apparatus of claim 38, further comprising:means for transferring the third sequence of digital still images to a video encoder to be encoded in a motion video signal, wherein the means for generating data for the third sequence is performed in response to demands from the video encoder during playback.
  • 40. The apparatus of claim 37, wherein the first and second buffers both receive and output data on a first-in, first-out basis.
  • 41. The apparatus of claim 37, wherein the means for processing comprises:means for transferring the first sequence of digital still images to a digital video effects system operative to perform three-dimensional video effects on sequences of digital still images; means for processing the second sequence of digital still images in accordance with the transition defined by the transition signal such that the third sequence is the same as the second sequence; and means for transferring the third sequence of digital still images to the digital video effects system.
  • 42. The apparatus of claim 37, wherein the third sequence is generated at a real-time rate.
  • 43. The apparatus of claim 37, wherein the third sequence is generated at a user-selectable rate.
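Outside the claim language, the buffer-refill policy recited in claims 1 and 30 amounts to a simple loop: during playback, repeatedly find whichever channel's buffer holds the least data and read the next chunk of that channel's sequence from its file. A sketch of one pass of that policy (pure Python; the names and chunk size are assumptions):

```python
from collections import deque

def refill_step(buffers, files, chunk_size=4096):
    """buffers: dict channel -> deque of byte chunks; files: dict channel -> file.
    Refill the most-starved channel's buffer with the next chunk of its data."""
    starved = min(buffers, key=lambda ch: sum(len(b) for b in buffers[ch]))
    data = files[starved].read(chunk_size)   # the desired amount for this pass
    if data:
        buffers[starved].append(data)
    return starved
```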
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 08/347,394 filed Mar. 6, 1995 under 35 U.S.C. §371 based on PCT Application Ser. No. PCT/US94/04253 filed Apr. 18, 1994.

US Referenced Citations (109)
Number Name Date Kind
2927154 Wolfe et al. Mar 1960 A
3084215 Bounsall Apr 1963 A
3123668 Silva Mar 1964 A
3342949 Wessels Sep 1967 A
3721757 Ettlinger Mar 1973 A
3740463 Youngstrom et al. Jun 1973 A
3748381 Strobele et al. Jul 1973 A
3787617 Fiori Jan 1974 A
3824336 Gould et al. Jul 1974 A
3925815 Lemelson Dec 1975 A
4001882 Fiori et al. Jan 1977 A
4080626 Hurst et al. Mar 1978 A
4100607 Skinner Jul 1978 A
4149242 Pirz Apr 1979 A
4205346 Ross May 1980 A
4209843 Hyatt Jun 1980 A
4272790 Bates Jun 1981 A
4283745 Kuper et al. Aug 1981 A
4311998 Matherat Jan 1982 A
4363049 Ohtsuki et al. Dec 1982 A
4375083 Maxemchuk Feb 1983 A
4538188 Barker et al. Aug 1985 A
4541008 Fishman et al. Sep 1985 A
4550437 Kobayashi et al. Oct 1985 A
4558302 Welch Dec 1985 A
4602286 Kellar et al. Jul 1986 A
4612569 Ichinose Sep 1986 A
4663730 Ikeda May 1987 A
4685003 Westland Aug 1987 A
4694343 Flora Sep 1987 A
4698664 Nichols et al. Oct 1987 A
4698682 Astle Oct 1987 A
4717971 Sawyer Jan 1988 A
4746994 Ettlinger May 1988 A
4750050 Belmares-Sarabia et al. Jun 1988 A
4785349 Keith et al. Nov 1988 A
4858011 Jackson et al. Aug 1989 A
4937685 Barker et al. Jun 1990 A
4956725 Kozuki et al. Sep 1990 A
4964004 Barker Oct 1990 A
4970663 Bedell et al. Nov 1990 A
4972274 Becker et al. Nov 1990 A
4979050 Westland et al. Dec 1990 A
4991013 Kobayashi Feb 1991 A
4994914 Wiseman et al. Feb 1991 A
4999807 Akashi Mar 1991 A
5040066 Arbeiter et al. Aug 1991 A
5045940 Peters et al. Sep 1991 A
5048012 Gulick et al. Sep 1991 A
5050066 Myers et al. Sep 1991 A
5068785 Sugiyama Nov 1991 A
5077610 Searby et al. Dec 1991 A
5081701 Silver Jan 1992 A
5109482 Bohrman Apr 1992 A
5111409 Gasper et al. May 1992 A
5119442 Brown Jun 1992 A
5121210 Hirayama Jun 1992 A
5126851 Yoshimura et al. Jun 1992 A
5161019 Emanuel Nov 1992 A
5164839 Lang Nov 1992 A
5189516 Angell et al. Feb 1993 A
5191645 Carlucci et al. Mar 1993 A
5192999 Graczyk et al. Mar 1993 A
5194952 Pelley Mar 1993 A
5206929 Langford et al. Apr 1993 A
5216755 Walker et al. Jun 1993 A
5220425 Enari et al. Jun 1993 A
5222219 Stumpf et al. Jun 1993 A
5227863 Bilbrey et al. Jul 1993 A
5237567 Nay et al. Aug 1993 A
5237648 Mills et al. Aug 1993 A
5241389 Bilbrey Aug 1993 A
5243447 Bodenkamp et al. Sep 1993 A
5255373 Brockman et al. Oct 1993 A
5255375 Crook et al. Oct 1993 A
5257113 Chen et al. Oct 1993 A
5260695 Gengler et al. Nov 1993 A
5267351 Reber et al. Nov 1993 A
5274750 Shiina et al. Dec 1993 A
5274760 Schneider Dec 1993 A
5287420 Barrett Feb 1994 A
5305438 MacKay et al. Apr 1994 A
5315326 Sugiyama May 1994 A
5315390 Windrem May 1994 A
5321500 Capitant et al. Jun 1994 A
5353391 Cohen et al. Oct 1994 A
5373327 McGee et al. Dec 1994 A
5388197 Rayner Feb 1995 A
5410354 Uz Apr 1995 A
5412773 Carlucci et al. May 1995 A
5426467 Moriwake et al. Jun 1995 A
5459517 Kunitake et al. Oct 1995 A
5459529 Searby Oct 1995 A
5499050 Baldes et al. Mar 1996 A
5508940 Rossmere et al. Apr 1996 A
5526132 Tsubota et al. Jun 1996 A
5528310 Peters et al. Jun 1996 A
5559641 Kajimoto et al. Sep 1996 A
5577190 Peters Nov 1996 A
5589993 Naimpally Dec 1996 A
5638501 Gough et al. Jun 1997 A
5644364 Kurtze et al. Jul 1997 A
5646750 Collier Jul 1997 A
5654737 Der et al. Aug 1997 A
5682326 Klingler et al. Oct 1997 A
5732239 Tobagi et al. Mar 1998 A
5754186 Tam et al. May 1998 A
5907692 Wise et al. May 1999 A
6223211 Hamilton et al. Apr 2001 B1
Foreign Referenced Citations (29)
Number Date Country
42 01 335 Jul 1993 DE
42 29 394 Mar 1994 DE
0 113 993 Jul 1984 EP
0 268 270 May 1988 EP
0 336 712 Oct 1989 EP
0 339 948 Nov 1989 EP
0 357 413 Sep 1990 EP
0 390 048 Oct 1990 EP
0 390 421 Oct 1990 EP
0 438 299 Jul 1991 EP
0 440 408 Aug 1991 EP
0 450 471 Oct 1991 EP
0 469 850 Feb 1992 EP
0 473 322 Mar 1992 EP
0 476 985 Mar 1992 EP
0 480 625 Apr 1992 EP
0 488 673 Jun 1992 EP
0 526 064 Feb 1993 EP
0 585 903 Mar 1994 EP
0 599 607 Jun 1994 EP
0 645 765 Mar 1995 EP
0 715 460 Jun 1996 EP
2 235 815 Mar 1991 GB
2 260 458 Apr 1993 GB
3 136 480 Jun 1991 JP
WO 8707108 Nov 1987 WO
WO 9013879 Nov 1990 WO
WO 9321636 Oct 1993 WO
WO 9424815 Oct 1994 WO
Non-Patent Literature Citations (17)
Entry
Chang, Shih-Fu, et al., A New Approach to Decoding and Compositing Motion-Compensated DCT-Based Images, ICASSP '93, Minneapolis, Minnesota, Apr., 1993.
Chang, Shih-Fu, et al., Compositing Motion-Compensated Video within the Network, IEEE 3rd International Workshop on Multimedia Communication, Apr. 1992.
Shae, Z.-Y. and Chen, M.-S., Mixing and Playback of JPEG Compressed Packet Videos, Proceedings IEEE Globecom '92 Conference, pp. 245-249, Dec. 6, 1992.
Davidoff, Frank, The All-Digital Television Studio, SMPTE Journal, vol. 89, No. 6 Jun. 1980, pp. 445-449.
Haeberli, Paul and Akeley, Kurt, The Accumulation Buffer: Hardware Support for High-Quality Rendering, Computer Graphics, vol. 24, No. 4, Aug. 1990.
Smith, B.C. and Rowe, L.A., Algorithms for Manipulating Compressed Images, IEEE Computer Graphics and Applications, vol. 13, Issue 5, pp. 34-42, Sep. 1993.
C.A. Pantuso, Reducing Financial Aliasing in HDTV Production, Better Video Images, 23rd Annual SMPTE Conference, Feb. 3-4, 1989, San Francisco, CA. pp. 157-169.
Rangan, P.V. et al, A Window-Based Editor for Digital Video and Audio, 1992, pp. 640-648 IEEE.
Mackay, W.E. and Davenport, G., Virtual Video Editing in Interactive Multimedia Applications Jul. 1989, pp. 802-810, vol. 32, No. 7, Comm of ACM.
Krieg, P., Multimedia-Computer und die Zukunft des Film/Videoschnitts, 1991, pp. 252-258, Fernseh-und Kino-Technik.
Norton, M.J., A Visual EDL System (English Abstract).
Green, J.L., The Evolution of DVI System Software, Jan. 1992, pp. 53-57, vol. 35, No. 1, Comm of ACM.
The O.L.E. Partnership, Lightworks Editor (Advertisement).
Kroeker, E.J. Challenges in Full Motion Video/Audio for Personal Computers, Jan. 1993 SMPTE Journal, vol. 102, No. 1, pp. 24-31.
Vicki de Mey and Simon Gibbs, "A Multimedia Component Kit: Experiences with Visual Composition of Applications," Centre Universitaire d'Informatique, Université de Genève.
Michael Kass, "Condor: Constraint-Based Dataflow," Computer Graphics, vol. 26, no. 2, Jul. 1992.
Ulrich Schmidt and Knut Caesar, "Datawave: A Single-Chip Multiprocessor for Video Applications," IEEE Micro, vol. 11, Jun. 1991.
J. Serot, G. Quenot and B. Zavidovique, "Functional Data-flow Architecture dedicated to Real-time Image Processing," IFIP WG10.3 Working Conference, pp. 129-140, 1993.
Continuations (1)
Number Date Country
Parent 08/347394 US
Child 08/885006 US