Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation

Information

  • Patent Grant
  • Patent Number
    10,116,959
  • Date Filed
    Tuesday, December 27, 2016
  • Date Issued
    Tuesday, October 30, 2018
Abstract
Several improvements for use with Bidirectionally Predictive (B) pictures within a video sequence are provided. In certain improvements, Direct Mode encoding and/or Motion Vector Prediction are enhanced using spatial prediction techniques. In other improvements, Motion Vector prediction includes temporal distance and subblock information, for example, for more accurate prediction. Such improvements and others presented herein significantly improve the performance of any applicable video coding system/logic.
Description
TECHNICAL FIELD

This invention relates to video coding, and more particularly to methods and apparatuses for providing improved coding and/or prediction techniques associated with different types of video data.


BACKGROUND

The motivation for increased coding efficiency in video coding has led to the adoption in the Joint Video Team (JVT) (a standards body) of more refined and complicated models and modes describing motion information for a given macroblock. These models and modes tend to take better advantage of the temporal redundancies that may exist within a video sequence. See, for example, ITU-T, Video Coding Experts Group (VCEG), “JVT Coding—(ITU-T H.26L & ISO/IEC JTC 1 Standard)—Working Draft Number 2 (WD-2)”, ITU-T JVT-B 118, March 2002; and/or Heiko Schwarz and Thomas Wiegand, “Tree-structured macroblock partition”, Doc. VCEG-N17, December 2001.


There is continuing need for further improved methods and apparatuses that can support the latest models and modes and also possibly introduce new models and modes to take advantage of improved coding techniques.


SUMMARY

The above stated needs and others are addressed, for example, by a method for use in encoding video data within a sequence of video frames. The method includes identifying at least a portion of at least one video frame to be a Bidirectionally Predictive (B) picture, and selectively encoding the B picture using at least spatial prediction to encode at least one motion parameter associated with the B picture. In certain exemplary implementations the B picture may include a block, a macroblock, a subblock, a slice, or other like portion of the video frame. For example, when a macroblock portion is used, the method produces a Direct Macroblock.


In certain further exemplary implementations, the method further includes employing linear or non-linear motion vector prediction for the B picture based on at least one reference picture that is at least another portion of the video frame. By way of example, in certain implementations, the method employs median motion vector prediction to produce at least one motion vector.


In still other exemplary implementations, in addition to spatial prediction, the method may also process at least one other portion of at least one other video frame to further selectively encode the B picture using temporal prediction to encode at least one temporal-based motion parameter associated with the B picture. In some instances the temporal prediction includes bidirectional temporal prediction, for example based on at least a portion of a Predictive (P) frame.


In certain other implementations, the method also selectively determines applicable scaling for a temporal-based motion parameter based at least in part on a temporal distance between the predictor video frame and the frame that includes the B picture. In certain implementations temporal distance information is encoded, for example, within a header or other like data arrangement associated with the encoded B picture.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings. The same numbers are used throughout the figures to reference like components and/or features.



FIG. 1 is a block diagram depicting an exemplary computing environment that is suitable for use with certain implementations of the present invention.



FIG. 2 is a block diagram depicting an exemplary representative device that is suitable for use with certain implementations of the present invention.



FIG. 3 is an illustrative diagram depicting spatial prediction associated with portions of a picture, in accordance with certain exemplary implementations of the present invention.



FIG. 4 is an illustrative diagram depicting Direct Prediction in B picture coding, in accordance with certain exemplary implementations of the present invention.



FIG. 5 is an illustrative diagram depicting what happens when a scene change occurs or when the collocated block is intra-coded, in accordance with certain exemplary implementations of the present invention.



FIG. 6 is an illustrative diagram depicting handling of collocated intra within existing codecs wherein motion is assumed to be zero, in accordance with certain exemplary implementations of the present invention.



FIG. 7 is an illustrative diagram depicting how Direct Mode is handled when the reference picture of the collocated block in the subsequent P picture is other than zero, in accordance with certain exemplary implementations of the present invention.



FIG. 8 is an illustrative diagram depicting an exemplary scheme wherein MVFW and MVBW are derived from spatial prediction, in accordance with certain exemplary implementations of the present invention.



FIG. 9 is an illustrative diagram depicting how spatial prediction solves the problem of scene changes and the like, in accordance with certain exemplary implementations of the present invention.



FIG. 10 is an illustrative diagram depicting joint spatio-temporal prediction for Direct Mode in B picture coding, in accordance with certain exemplary implementations of the present invention.



FIG. 11 is an illustrative diagram depicting Motion Vector Prediction of a current block considering reference picture information of predictor macroblocks, in accordance with certain exemplary implementations of the present invention.



FIG. 12 is an illustrative diagram depicting how to use more candidates for Direct Mode prediction especially if bidirectional prediction is used within the B picture, in accordance with certain exemplary implementations of the present invention.



FIG. 13 is an illustrative diagram depicting how B pictures may be restricted in using future and past reference pictures, in accordance with certain exemplary implementations of the present invention.



FIG. 14 is an illustrative diagram depicting projection of collocated Motion Vectors to a current reference for temporal direct prediction, in accordance with certain exemplary implementations of the present invention.



FIGS. 15a-c are illustrative diagrams depicting Motion Vector Predictors for one MV in different configurations, in accordance with certain exemplary implementations of the present invention.



FIGS. 16a-c are illustrative diagrams depicting Motion Vector Predictors for one MV with 8×8 partitions in different configurations, in accordance with certain exemplary implementations of the present invention.



FIGS. 17a-c are illustrative diagrams depicting Motion Vector Predictors for one MV with additional predictors for 8×8 partitioning, in accordance with certain exemplary implementations of the present invention.





DETAILED DESCRIPTION

Several improvements for use with Bidirectionally Predictive (B) pictures within a video sequence are described below and illustrated in the accompanying drawings. In certain improvements, Direct Mode encoding and/or Motion Vector Prediction are enhanced using spatial prediction techniques. In other improvements, Motion Vector prediction includes temporal distance and subblock information, for example, for more accurate prediction. Such improvements and others presented herein significantly improve the performance of any applicable video coding system/logic.


While these and other exemplary methods and apparatuses are described, it should be kept in mind that the techniques of the present invention are not limited to the examples described and shown in the accompanying drawings, but are also clearly adaptable to other similar existing and future video coding schemes, etc.


Before introducing such exemplary methods and apparatuses, an introduction is provided in the following section for suitable exemplary operating environments, for example, in the form of a computing device and other types of devices/appliances.


Exemplary Operational Environments:


Turning to the drawings, wherein like reference numerals refer to like elements, the invention is illustrated as being implemented in a suitable computing environment. Although not required, the invention will be described in the general context of computer-executable instructions, such as program modules, being executed by a personal computer.


Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held devices, multi-processor systems, microprocessor based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, portable communication devices, and the like.


The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.



FIG. 1 illustrates an example of a suitable computing environment 120 on which the subsequently described systems, apparatuses and methods may be implemented. Exemplary computing environment 120 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the improved methods and systems described herein. Neither should computing environment 120 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in computing environment 120.


The improved methods and systems herein are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable include, but are not limited to, personal computers, server computers, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.


As shown in FIG. 1, computing environment 120 includes a general-purpose computing device in the form of a computer 130. The components of computer 130 may include one or more processors or processing units 132, a system memory 134, and a bus 136 that couples various system components including system memory 134 to processor 132.


Bus 136 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus also known as Mezzanine bus.


Computer 130 typically includes a variety of computer readable media. Such media may be any available media that is accessible by computer 130, and it includes both volatile and non-volatile media, removable and non-removable media.


In FIG. 1, system memory 134 includes computer readable media in the form of volatile memory, such as random access memory (RAM) 140, and/or non-volatile memory, such as read only memory (ROM) 138. A basic input/output system (BIOS) 142, containing the basic routines that help to transfer information between elements within computer 130, such as during start-up, is stored in ROM 138. RAM 140 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processor 132.


Computer 130 may further include other removable/non-removable, volatile/non-volatile computer storage media. For example, FIG. 1 illustrates a hard disk drive 144 for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”), a magnetic disk drive 146 for reading from and writing to a removable, non-volatile magnetic disk 148 (e.g., a “floppy disk”), and an optical disk drive 150 for reading from or writing to a removable, non-volatile optical disk 152 such as a CD-ROM/R/RW, DVD-ROM/R/RW/+R/RAM or other optical media. Hard disk drive 144, magnetic disk drive 146 and optical disk drive 150 are each connected to bus 136 by one or more interfaces 154.


The drives and associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules, and other data for computer 130. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 148 and a removable optical disk 152, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and the like, may also be used in the exemplary operating environment.


A number of program modules may be stored on the hard disk, magnetic disk 148, optical disk 152, ROM 138, or RAM 140, including, e.g., an operating system 158, one or more application programs 160, other program modules 162, and program data 164.


The improved methods and systems described herein may be implemented within operating system 158, one or more application programs 160, other program modules 162, and/or program data 164.


A user may provide commands and information into computer 130 through input devices such as keyboard 166 and pointing device 168 (such as a “mouse”). Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, serial port, scanner, camera, etc. These and other input devices are connected to the processing unit 132 through a user input interface 170 that is coupled to bus 136, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB).


A monitor 172 or other type of display device is also connected to bus 136 via an interface, such as a video adapter 174. In addition to monitor 172, personal computers typically include other peripheral output devices (not shown), such as speakers and printers, which may be connected through output peripheral interface 175.


Computer 130 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 182. Remote computer 182 may include many or all of the elements and features described herein relative to computer 130.


Logical connections shown in FIG. 1 are a local area network (LAN) 177 and a general wide area network (WAN) 179. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.


When used in a LAN networking environment, computer 130 is connected to LAN 177 via network interface or adapter 186. When used in a WAN networking environment, the computer typically includes a modem 178 or other means for establishing communications over WAN 179. Modem 178, which may be internal or external, may be connected to system bus 136 via the user input interface 170 or other appropriate mechanism.


Depicted in FIG. 1, is a specific implementation of a WAN via the Internet. Here, computer 130 employs modem 178 to establish communications with at least one remote computer 182 via the Internet 180.


In a networked environment, program modules depicted relative to computer 130, or portions thereof, may be stored in a remote memory storage device. Thus, e.g., as depicted in FIG. 1, remote application programs 189 may reside on a memory device of remote computer 182. It will be appreciated that the network connections shown and described are exemplary and other means of establishing a communications link between the computers may be used.


Attention is now drawn to FIG. 2, which is a block diagram depicting another exemplary device 200 that is also capable of benefiting from the methods and apparatuses disclosed herein. Device 200 is representative of any one or more devices or appliances that are operatively configured to process video and/or any related types of data in accordance with all or part of the methods and apparatuses described herein and their equivalents. Thus, device 200 may take the form of a computing device as in FIG. 1, or some other form, such as, for example, a wireless device, a portable communication device, a personal digital assistant, a video player, a television, a DVD player, a CD player, a karaoke machine, a kiosk, a digital video projector, a flat panel video display mechanism, a set-top box, a video game machine, etc. In this example, device 200 includes logic 202 configured to process video data, a video data source 204 configured to provide video data to logic 202, and at least one display module 206 capable of displaying at least a portion of the video data for a user to view. Logic 202 is representative of hardware, firmware, software and/or any combination thereof. In certain implementations, for example, logic 202 includes a compressor/decompressor (codec), or the like. Video data source 204 is representative of any mechanism that can provide, communicate, output, and/or at least momentarily store video data suitable for processing by logic 202. Video data source 204 is illustratively shown as being within and/or outside of device 200. Display module 206 is representative of any mechanism that a user might view directly or indirectly and see the visual results of video data presented thereon. Additionally, in certain implementations, device 200 may also include some form or capability for reproducing or otherwise handling audio data associated with the video data. Thus, an audio reproduction module 208 is shown.


With the examples of FIGS. 1 and 2 in mind, and others like them, the next sections focus on certain exemplary methods and apparatuses that may be at least partially practiced using such environments and devices.


Encoding Bidirectionally Predictive (B) Pictures And Motion Vector Prediction


This section describes several exemplary improvements that can be implemented to encode Bidirectionally Predictive (B) pictures and Motion Vector prediction within a video coding system or the like. The exemplary methods and apparatuses can be applied to predict motion vectors and enhancements in the design of a B picture Direct Mode. Such methods and apparatuses are particularly suitable for multiple picture reference codecs, such as, for example, JVT, and can achieve considerable coding gains especially for panning sequences or scene changes.


Bidirectionally Predictive (B) pictures are an important part of most video coding standards and systems since they tend to increase the coding efficiency of such systems, for example, when compared to only using Predictive (P) pictures. This improvement in coding efficiency is mainly achieved by the consideration of bidirectional motion compensation, which can effectively improve motion compensated prediction and thus allow the encoding of significantly reduced residue information. Furthermore, the introduction of the Direct Prediction mode for a Macroblock/block within such pictures can further increase efficiency considerably (e.g., more than 10-20%) since no motion information is encoded. Such may be accomplished, for example, by allowing the prediction of both forward and backward motion information to be derived directly from the motion vectors used in the corresponding macroblock of a subsequent reference picture.


By way of example, FIG. 4 illustrates Direct Prediction in the coding of a B picture at time t+1 based on P frames at times t and t+2, and the applicable motion vectors (MVs). Here, an assumption is made that an object in the picture is moving with constant speed. This makes it possible to predict a current position inside a B picture without having to transmit any motion vectors. The Direct Mode motion vectors ({right arrow over (MV)}fw,{right arrow over (MV)}bw) are basically calculated from the motion vector {right arrow over (MV)} of the collocated MB in the first subsequent P reference picture as:









{right arrow over (MV)}fw=(TRB·{right arrow over (MV)})/TRD and {right arrow over (MV)}bw=((TRB−TRD)·{right arrow over (MV)})/TRD,




where TRB is the temporal distance between the current B picture and the reference picture pointed to by the forward MV of the collocated MB, and TRD is the temporal distance between the future reference picture and the reference picture pointed to by the forward MV of the collocated MB.
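

As an illustration only, this scaling can be expressed in a few lines of code. The following is a hedged sketch, not the patent's or any reference software's implementation; the function name, the (x, y) tuple representation of motion vectors, and the use of floating-point math are assumptions.

```python
# Illustrative sketch of the temporal Direct Mode scaling described above.
# TRB and TRD are the temporal distances defined in the text; a real codec
# would use the standard's integer arithmetic rather than plain division.

def temporal_direct_mvs(mv, tr_b, tr_d):
    """Derive forward/backward Direct Mode MVs from the collocated MV."""
    mv_fw = (tr_b * mv[0] / tr_d, tr_b * mv[1] / tr_d)
    mv_bw = ((tr_b - tr_d) * mv[0] / tr_d, (tr_b - tr_d) * mv[1] / tr_d)
    return mv_fw, mv_bw

# With the B picture midway between its references (TRB=1, TRD=2), the
# collocated MV (8, -4) yields mv_fw=(4.0, -2.0) and mv_bw=(-4.0, 2.0),
# consistent with the constant-speed assumption.
print(temporal_direct_mvs((8, -4), 1, 2))
```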


Unfortunately there are several cases where the existing Direct Mode does not provide an adequate solution, thus not efficiently exploiting the properties of this mode. In particular, existing designs of this mode usually force the motion parameters of the Direct Macroblock, in the case of the collocated Macroblock in the subsequent P picture being Intra coded, to be zero. For example, see FIG. 6, which illustrates handling of collocated intra within existing codecs wherein motion is assumed to be zero. This essentially means that, for this case, the B picture Macroblock will be coded as the average of the two collocated Macroblocks in the first subsequent and past P references. This immediately raises the following concern: if a Macroblock is Intra-coded, then how does one know how much relationship it has with the collocated Macroblock of its reference picture? In some situations, there may be little if any actual relationship. Hence, it is possible that the coding efficiency of the Direct Mode may be reduced. An extreme case can be seen in the case of a scene change as illustrated in FIG. 5. FIG. 5 illustrates what happens when a scene change occurs in the video sequence and/or when the collocated block is intra. Here, in this example, obviously no relationship exists between the two reference pictures given the scene change. In such a case, bidirectional prediction would provide little, if any, benefit. As such, the Direct Mode could be completely wasted. Unfortunately, conventional implementations of the Direct Mode restrict it to always perform a bidirectional prediction of a Macroblock.



FIG. 7 is an illustrative diagram depicting how Direct Mode is handled when the reference picture of the collocated block in the subsequent P picture is other than zero, in accordance with certain implementations of the present invention.


An additional issue with Direct Mode Macroblocks exists when multi-picture reference motion compensation is used. Until recently, for example, the JVT standard provided the timing distance information (TRB and TRD), thus allowing for the proper scaling of the parameters. Recently, this was changed in the new revision of the codec (see, e.g., Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, “Joint Committee Draft (CD) of Joint Video Specification (ITU-T Rec. H.264| ISO/IEC 14496-10 AVC)”, ITU-T JVT-C167, May 2002, which is incorporated herein by reference). In the new revision, the motion vector parameters of the subsequent P picture are to be scaled equally for the Direct Mode prediction, without taking into account the reference picture information. This could lead to significant performance degradation of the Direct Mode, since the constant motion assumption is no longer followed.


Nevertheless, even if the temporal distance parameters were available, it is not always certain that the usage of the Direct Mode as defined previously is the most appropriate solution. In particular, for B pictures which are closer to the first forward reference picture, the correlation might be much stronger with that picture than with the subsequent reference picture. An extreme example which could contain such cases could be a sequence where scene A changes to scene B, and then switches back to scene A (e.g., as may happen in a news bulletin, etc.). All of the above could degrade the performance of B picture encoding considerably, since Direct Mode will not be effectively exploited within the encoding process.


With these and other concerns in mind, unlike the previous definitions of the Direct Mode where only temporal prediction was used, in accordance with certain aspects of the present invention, a new Direct Macroblock type is introduced wherein temporal and/or spatial prediction is considered. The type(s) of prediction used can depend on the type of reference picture information of the first subsequent P reference picture, for example.


In accordance with certain other aspects of the present invention, one may also further considerably improve motion vector prediction for both P and B pictures when multiple picture references are used, by taking into consideration temporal distances, if such are available.


These enhancements are implemented in certain exemplary methods and apparatuses as described below. The methods and apparatuses can achieve significant bitrate reductions while achieving similar or better quality.


Direct Mode Enhancements:


In most conventional video coding systems, Direct Mode is designed as a bidirectional prediction scheme where motion parameters are always predicted in a temporal way from the motion parameters in the subsequent P images. In this section, an enhanced Direct Mode technique is provided in which spatial information may also/alternatively be considered for such predictions.


One or more of the following exemplary techniques may be implemented as needed, for example, depending on the complexity and/or specifications of the system.


One technique is to implement spatial prediction of the motion vector parameters of the Direct Mode without considering temporal prediction. Spatial prediction can be accomplished, for example, using existing Motion Vector prediction techniques used for motion vector encoding (such as, e.g., median prediction). If multiple picture references are used, then the reference picture of the adjacent blocks may also be considered (even though there is no such restriction and the same reference, e.g. 0, could always be used).


Motion parameters and reference pictures could be predicted as follows and with reference to FIG. 3, which illustrates spatial prediction associated with portions A-E (e.g., macroblocks, slices, etc.) assumed to be available and part of a picture. Here, E is predicted in general from A, B, C as Median (A,B,C). If C is actually outside of the picture, then D is used instead. If B, C, and D are outside of the picture, then only A is used, whereas if A does not exist, it is replaced with (0,0). Those skilled in the art will recognize that spatial prediction may be done at a subblock level as well.
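

A minimal sketch of this spatial prediction rule follows; it is illustrative only, and the (x, y) tuple motion vectors, the use of None for unavailable neighbors, the component-wise median, and the treatment of any remaining missing neighbor as (0,0) are assumptions rather than anything mandated by the text.

```python
# Hedged sketch of the Median(A, B, C) spatial prediction with the fallbacks
# described above. A = left, B = above, C = above-right, D = above-left;
# None marks a neighbor that is unavailable / outside the picture.

def median_mv(a, b, c):
    """Component-wise median of three (x, y) motion vectors."""
    mid = lambda xs: sorted(xs)[1]
    return (mid((a[0], b[0], c[0])), mid((a[1], b[1], c[1])))

def predict_mv(a, b, c, d):
    if c is None:                       # C outside the picture: use D instead
        c = d
    if b is None and c is None:         # B, C, and D all outside: only A is used
        return a if a is not None else (0, 0)
    if a is None:                       # A does not exist: replaced with (0, 0)
        a = (0, 0)
    # Simplifying assumption: any other missing neighbor is treated as (0, 0).
    b = b or (0, 0)
    c = c or (0, 0)
    return median_mv(a, b, c)

print(predict_mv((4, 0), (2, 2), None, (6, -2)))   # C outside, D used: (4, 0)
```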


In general spatial prediction can be seen as a linear or nonlinear function of all available motion information calculated within a picture or a group of macroblocks/blocks within the same picture.


There are various methods that may be used to predict the reference picture for Direct Mode. For example, one method may be to select the minimum reference picture among the predictors. In another method, the median reference picture may be selected. In certain methods, a selection may be made between the minimum reference picture and the median reference picture, e.g., depending on whether the minimum is zero. In still other implementations, a higher priority could also be given to either the vertical or horizontal predictors (A and B) due to their possibly stronger correlation with E.
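

One way to express these reference-picture choices, purely as an illustration, is shown below; the function name, the neighbor-index list, and the particular combination of the "minimum" and "median" rules are assumptions.

```python
# Hedged sketch: picking the Direct Mode reference index from the reference
# indices of the available neighboring predictors (e.g. A, B, C).

def direct_mode_reference(neighbor_refs):
    if not neighbor_refs:
        return 0                                  # nothing to predict from
    minimum = min(neighbor_refs)                  # "minimum reference picture"
    median = sorted(neighbor_refs)[len(neighbor_refs) // 2]  # "median reference"
    # One possible selection rule from the text: prefer the minimum when it
    # is zero, otherwise fall back to the median reference.
    return minimum if minimum == 0 else median

print(direct_mode_reference([0, 2, 1]))   # -> 0
```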


If one of the predictions does not exist (e.g., all surrounding macroblocks are predicted with only the same direction, FW or BW, or are intra), then only the existing one is used (single-direction prediction), or the missing one could be predicted from the one available. For example, if forward prediction is available, then:








{right arrow over (MV)}bw=((TRB−TRD)·{right arrow over (MV)}fw)/TRB






Temporal prediction is used for Macroblocks if the subsequent P reference is non-intra, as in existing codecs. Attention is now drawn to FIG. 8, in which MVFW and MVBW are derived from spatial prediction (Median MV of surrounding Macroblocks). If either one is not available (i.e., there are no predictors), then a single direction is used. If a subsequent P reference is intra, then spatial prediction can be used instead, as described above. Assuming that no restrictions exist, if one of the predictions is not available, then Direct Mode becomes a single-direction prediction mode.
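

Bringing these pieces together, the following is a hedged sketch of how the enhanced Direct Mode decision might look; the flags, names, and the simple fallback policy are illustrative assumptions rather than the patent's normative procedure.

```python
# Hedged sketch: temporal Direct Mode when the collocated MB in the subsequent
# P reference is inter-coded; spatial (median) prediction when it is intra.
# spatial_fw / spatial_bw are the median MVs of the surrounding macroblocks
# (None if no predictor exists in that direction).

def direct_mode_mvs(collocated_is_intra, collocated_mv, tr_b, tr_d,
                    spatial_fw, spatial_bw):
    if not collocated_is_intra:
        # temporal prediction, as in existing codecs
        fw = (tr_b * collocated_mv[0] / tr_d, tr_b * collocated_mv[1] / tr_d)
        bw = ((tr_b - tr_d) * collocated_mv[0] / tr_d,
              (tr_b - tr_d) * collocated_mv[1] / tr_d)
        return fw, bw
    # collocated MB is intra (e.g. scene change): use spatial prediction;
    # if one direction has no predictors, Direct Mode becomes single-direction.
    return spatial_fw, spatial_bw
```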


This could considerably benefit video coding when the scene changes, for example, as illustrated in FIG. 9, and/or even when fading exists within a video sequence. As illustrated in FIG. 9, spatial prediction may be used to solve the problem of a scene change.


If temporal distance information is not available within a codec, temporal prediction will not be as efficient in the direct mode for blocks when the collocated P reference block has a non-zero reference picture. In such a case, spatial prediction may also be used as above. As an alternative, one may estimate scaling parameters if one of the surrounding macroblocks also uses the same reference picture as the collocated P reference block. Furthermore, special handling may be provided for the case of zero motion (or close to zero motion) with a non-zero reference. Here, regardless of temporal distance, forward and backward motion vectors could always be taken as zero. The best solution, however, may be to always examine the reference picture information of surrounding macroblocks and, based thereon, decide how the direct mode should be handled in such a case.


More particularly, for example, given a non-zero reference, the following sub-cases may be considered (see the decision sketch after this list):

    • Case A: Temporal prediction is used if the motion vectors of the collocated P block are zero.
    • Case B: If all surrounding macroblocks use different reference pictures than the collocated P reference, then spatial prediction appears to be a better choice and temporal prediction is not used.
    • Case C: If motion flow inside the B picture appears to be quite different than the one in the P reference picture, then spatial prediction is used instead.
    • Case D: Spatial or temporal prediction of Direct Mode macroblocks could be signaled inside the image header. A pre-analysis of the image could be performed to decide which should be used.
    • Case E: Correction of the temporally predicted parameters based on spatial information (or vice versa). Thus, for example, if both appear to have the same or approximately the same phase information then the spatial information could be a very good candidate for the direct mode prediction. A correction could also be done on the phase, thus correcting the sub pixel accuracy of the prediction.
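

The sub-cases above can be organized as a simple decision function. The sketch below is only illustrative; the boolean inputs (whether the collocated MV is zero, whether the neighbors share the collocated reference, motion-flow similarity, and header signaling) are assumptions about how a codec might expose this information.

```python
# Hedged decision sketch for the non-zero-reference sub-cases listed above.

def choose_direct_prediction(header_signals_spatial,
                             collocated_mv_is_zero,
                             neighbors_share_collocated_ref,
                             motion_flow_similar):
    if header_signals_spatial:                 # Case D: signaled in image header
        return "spatial"
    if collocated_mv_is_zero:                  # Case A
        return "temporal"
    if not neighbors_share_collocated_ref:     # Case B
        return "spatial"
    if not motion_flow_similar:                # Case C
        return "spatial"
    return "temporal"   # otherwise keep temporal (Case E could further
                        # correct it using the spatial information)

print(choose_direct_prediction(False, False, False, True))   # -> "spatial"
```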



FIG. 10 illustrates a joint spatio-temporal prediction for Direct Mode in B picture coding. Here, in this example, Direct Mode can be a 1- to 4-direction mode depending on information available. Instead of using Bi-directional prediction for Direct Mode macroblocks, a multi-hypothesis extension of such mode can be done and multiple predictions used instead.


Combined with the discussion above, Direct Mode macroblocks can be predicted using from one up to four possible motion vectors depending on the information available. Such can be decided, for example, based on the mode of the collocated P reference image macroblock and on the surrounding macroblocks in the current B picture. In such a case, if the spatial prediction is too different from the temporal one, one of them could be selected as the only prediction in favor of the other. Since spatial prediction, as described previously, might favor a different reference picture than the temporal one, the same macroblock might be predicted from more than two reference pictures.


The JVT standard does not restrict the first future reference to be a P picture. Hence, in such a standard, the first future reference can be a B picture, as illustrated in FIG. 12, or even a Multi-Hypothesis (MH) picture. This implies that more motion vectors are assigned per macroblock. This means that one may also use this property to increase the efficiency of the Direct Mode by more effectively exploiting the additional motion information.


In FIG. 12, the first subsequent reference picture is a B picture (pictures B8 and B9). This enables one to use more candidates for Direct Mode prediction especially if bidirectional prediction is used within the B picture.


In particular one may perform the following:

    • a.) If the collocated reference block in the first future reference is using bidirectional prediction, the corresponding motion vectors (forward or backward) are used for calculating the motion vectors of the current block (see the scaling sketch after this list). Since the backward motion vector of the reference corresponds to a future reference picture, special care should be taken in the estimate of the current motion parameters. Attention is drawn, for example, to FIG. 12. Thus, as illustrated, the backward motion vector of B8 {right arrow over (MV)}B8bw can be calculated as 2×{right arrow over (MV)}B7bw due to the temporal distance between B8, B7, and P6. Similarly, for B9 the backward motion vector can be taken as {right arrow over (MV)}B7bw, even though these refer to B7. One may also restrict these to refer to the first subsequent P picture, in which case these motion vectors can be scaled accordingly. A similar conclusion can be deduced about the forward motion vectors. Multiple picture reference or intra macroblocks can be handled similarly to the previous discussion.
    • b.) If bidirectional prediction for the collocated block is used, then, in this example, one may estimate four possible predictions for one macroblock for the direct mode case by projecting and inverting the backward and forward motion vectors of the reference.
    • c.) Selective projection and inversion may be used depending on temporal distance. According to this solution, one selects the motion vectors from the reference picture which are more reliable for the prediction. For example, considering the illustration in FIG. 12, one will note that B8 is much closer to P2 than P6. This implies that the backward motion vector of B7 may not be a very reliable prediction. In this case, direct mode motion vectors can therefore be calculated only from the forward prediction of B7. For B9, however, both motion vectors seem to be adequate enough for the prediction and therefore may be used. Such decisions/information may also be decided/supported within the header of the image. Other conditions and rules may also be implemented. For example, additional spatial confidence of a prediction and/or a motion vector phase may be considered. Note, in particular, that if the forward and backward motion vectors have no relationship, then the backward motion vector might be too unreliable to use.
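

As a small worked example of the scaling mentioned in item a.) above, the helper below is an assumption for illustration only and is not part of any standard.

```python
# Hedged sketch: scaling a reference B picture's backward MV by the ratio of
# temporal distances, as in the FIG. 12 example where the backward MV of B8
# is taken as 2 x the backward MV of B7.

def scale_mv(mv, dist_current, dist_reference):
    s = dist_current / dist_reference
    return (mv[0] * s, mv[1] * s)

mv_b7_bw = (6, -2)                  # backward MV of the collocated block in B7
print(scale_mv(mv_b7_bw, 2, 1))     # estimated backward MV for B8: (12.0, -4.0)
```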


Single Picture Reference for B Pictures:


A special case exists with the usage of only one picture reference for B pictures (although typically a forward and a backward reference are necessary), regardless of how many reference pictures are used in P pictures. From observations of encoding sequences in the current JVT codec, for example, it was noted that, if one compares the single-picture reference versus the multi-picture reference case using B pictures, even though encoding performance of P pictures for the multi-picture case is almost always superior to that of the single-picture case, the same is not always true for B pictures.


One reason for this observation is the overhead of the reference picture used for each macroblock. Considering that B pictures rely more on motion information than P pictures, the reference picture information overhead reduces the number of bits that are transmitted for the residue information at a given bitrate, which thereby reduces efficiency. A rather easy and efficient solution could be the selection of only one picture reference for either backward or forward motion compensation, thus not needing to transmit any reference picture information.


This is considered with reference to FIGS. 13 and 14. As illustrated in FIG. 13, B pictures can be restricted to using only one future and one past reference picture. Thus, for direct mode motion vector calculation, projection of the motion vectors is necessary. A projection of the collocated MVs to the current reference for temporal direct prediction is illustrated in FIG. 14 (note that it is possible that TDD,0>TDD,1). Thus, in this example, Direct Mode motion parameters are calculated by projecting motion vectors that refer to other reference pictures to the two reference pictures, or by using spatial prediction as in FIG. 13. Note that such options not only allow for possible reduced encoding complexity of B pictures, but also tend to reduce memory requirements since fewer B pictures (e.g., a maximum of two) need to be stored if B pictures are allowed to reference B pictures.


In certain cases, a reference picture of the first future reference picture may no longer be available in the reference buffer. This could immediately generate a problem for the estimation of Direct Mode macroblocks, and special handling of such cases is required. Obviously there is no such problem if a single picture reference is used. However, if multiple picture references are desired, then possible solutions include projecting the motion vector(s) to the first forward reference picture and/or to the reference picture that was closest to the unavailable picture. Either solution could be viable, while spatial prediction could again be an alternative solution.


Refinements of Motion Vector Prediction for Single- and Multi-Picture Reference Motion Compensation:


Motion vector prediction for multi-picture reference motion compensation can significantly affect the performance of both B and P picture coding. Existing standards, such as, for example, JVT, do not always consider the reference pictures of the macroblocks used in the prediction. The only consideration such standards do make is when only one of the prediction macroblocks uses the same reference. In such a case, only that predictor is used for the motion prediction. There is no consideration of the reference picture if only one or all predictors are using a different reference.


In such a case, for example, and in accordance with certain further aspects of the present invention, one can scale the predictors according to their temporal distance versus the current reference. Attention is drawn to FIG. 11, which illustrates Motion Vector prediction of a current block (C) considering the reference picture information of predictor macroblocks (Pr) and performance of proper adjustments (e.g., scaling of the predictors).


If predictors A, B, and C use reference pictures with temporal distance TRA, TRB, and TRC respectively, and the current reference picture has a temporal distance equal to TR, then the median predictor is calculated as follows:








{right arrow over (MV)}pred=TR×Median({right arrow over (MV)}A/TRA,{right arrow over (MV)}B/TRB,{right arrow over (MV)}C/TRC)






If integer computation is to be used, it may be easier to place the multiplication inside the median, thus increasing accuracy. The division could also be replaced with shifting, but that reduces performance, and it might be necessary to handle signed shifting as well (−1>>N=−1). It is thus very important in such cases to have the temporal distance information available for performing the appropriate scaling. Such could also be available within the header, if not predictable otherwise.
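

A compact sketch of this scaled median, with the multiplication placed inside the median as suggested for integer computation, is shown below; the names, the component-wise median, and the use of Python's floor division in place of shifting are assumptions.

```python
# Hedged sketch of the temporally scaled median MV predictor described above.
# Integer floor division (//) stands in for the shifting discussed in the
# text; a real codec would need to define rounding for negative values.

def median3(a, b, c):
    return sorted((a, b, c))[1]

def scaled_median_predictor(mv_a, tr_a, mv_b, tr_b, mv_c, tr_c, tr):
    """mv_* are (x, y) predictors with temporal distances tr_*;
    tr is the temporal distance of the current reference picture."""
    return tuple(median3(tr * mv_a[i] // tr_a,
                         tr * mv_b[i] // tr_b,
                         tr * mv_c[i] // tr_c) for i in range(2))

print(scaled_median_predictor((4, 4), 1, (8, 0), 2, (12, -3), 3, 2))   # -> (8, 0)
```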


Motion Vector prediction as discussed previously is basically median biased, meaning that the median value among a set of predictors is selected for the prediction. If one only uses one type of macroblock (e.g., 16×16) with one Motion Vector (MV), then these predictors can be defined, for example, as illustrated in FIG. 15. Here, MV predictors are shown for one MV. In FIG. 15a, the MB is not in the first row or the last column. In FIG. 15b, the MB is in the last column. In FIG. 15c, the MB is in the first row.


The JVT standard improves on this further by also considering the case that only one of the three predictors exists (i.e. Macroblocks are intra or are using a different reference picture in the case of multi-picture prediction). In such a case, only the existing or same reference predictor is used for the prediction and all others are not examined.


Intra coding does not always imply that a new object has appeared or that the scene changes. It might instead, for example, be the case that motion estimation and compensation is inadequate to represent the current object (e.g., search range, motion estimation algorithm used, quantization of residue, etc.) and that better results could be achieved through Intra Coding instead. The available motion predictors could still be adequate to provide a good motion vector predictor solution.


What is intriguing is the consideration of subblocks within a Macroblock, with each one being assigned different motion information. The MPEG-4 and H.263 standards, for example, can have up to four such subblocks (e.g., with size 8×8), whereas the JVT standard allows up to sixteen subblocks while also being able to handle variable block sizes (e.g., 4×4, 4×8, 8×4, 8×8, 8×16, 16×8, and 16×16). In addition, JVT also allows for 8×8 Intra subblocks, thus complicating things even further.


Considering the common cases of JVT and MPEG-4/H.263 (8×8 and 16×16), the predictor set for a 16×16 macroblock is illustrated in FIGS. 16a-c having a similar arrangement to FIGS. 15a-c, respectively. Here, Motion Vector predictors are shown for one MV with 8×8 partitions. Even though the described predictors could give reasonable results in some cases, it appears that they may not adequately cover all possible predictions.


Attention is drawn next to FIGS. 17a-c, which are also in a similar arrangement to FIGS. 15a-c, respectively. Here, in FIGS. 17a-c there are two additional predictors that could also be considered in the prediction phase (C1 and A2). If 4×4 blocks are also considered, this increases the possible predictors by four.


Instead of employing a median of the three predictors A, B, and C (or A1, B, and C2), one may now have some additional, and apparently more reliable, options. Thus, for example, one can observe that predictors A1 and C2 are essentially too close to one another, and it may be the case that they are not very representative in the prediction phase. Instead, selecting predictors A1, C1, and B seems to be a more reliable solution due to their separation. An alternative could also be the selection of A2 instead of A1, but that may again be too close to predictor B. Simulations suggest that the first case is usually a better choice. For the last column, A2 could be used instead of A1. For the first row, either one of A1 and A2 or even their average value could be used. Gains of up to 1% were noted within JVT with this implementation.


The previous case adds some tests for the last column. By examining FIG. 17b, for example, it is obvious that such a selection tends to provide the best partitioning available. Thus, an optional solution could be the selection of A2, C1, and B (from the upper-left position). This may not always be recommended, however, since such an implementation may adversely affect the performance of the right predictors.


An alternative solution would be the usage of averages of predictors within a Macroblock. The median may then be performed as follows:

{right arrow over (MV)}pred=Median(Ave({right arrow over (MV)}C1,{right arrow over (MV)}C2),Ave({right arrow over (MV)}A1,{right arrow over (MV)}A2),{right arrow over (MV)}B).


For median row/column calculation, the median can be calculated as:

{right arrow over (MV)}pred=Median(Median({right arrow over (MV)}C1,{right arrow over (MV)}C2,{right arrow over (MV)}D),Median({right arrow over (MV)}D,{right arrow over (MV)}A1,{right arrow over (MV)}C2),Median({right arrow over (MV)}B,{right arrow over (MV)}A1,{right arrow over (MV)}A2))


Another possible solution is a Median5 solution. This is probably the most complicated solution due to computation (quick-sort or bubble-sort could for example be used), but could potentially yield the best results. If 4×4 blocks are considered, for example, then Median9 could also be used:

{right arrow over (MV)}pred=Median({right arrow over (MV)}C1,{right arrow over (MV)}C2,{right arrow over (MV)}D,{right arrow over (MV)}B,{right arrow over (MV)}A1,{right arrow over (MV)}A2)
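

Purely as an illustration of the alternative combinations above, the sketch below shows a median of per-direction averages and a larger median over all available subblock predictors; the averaging, the component-wise medians, the handling of an even-sized set, and the sample values are all assumptions.

```python
# Hedged sketch of the predictor combinations discussed above: median of
# per-direction averages, and a larger median over all subblock predictors.

def average(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def median_n(mvs):
    """Component-wise median; for an even count the upper-middle value is used."""
    n = len(mvs)
    return (sorted(v[0] for v in mvs)[n // 2], sorted(v[1] for v in mvs)[n // 2])

def median_of_averages(c1, c2, a1, a2, b):
    return median_n([average(c1, c2), average(a1, a2), b])

preds = {"C1": (2, 2), "C2": (4, 0), "A1": (0, 2), "A2": (2, 4), "B": (2, 0), "D": (6, 2)}
print(median_of_averages(preds["C1"], preds["C2"], preds["A1"], preds["A2"], preds["B"]))  # -> (2, 1.0)
print(median_n(list(preds.values())))   # larger median over all six predictors: (2, 2)
```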


Considering that JVT allows the existence of Intra subblocks within an Inter Macroblock (e.g., tree macroblock structure), such could also be taken into consideration within the Motion Prediction. If a subblock (e.g., from Macroblocks above or left only) to be used for the MV prediction is Intra, then the adjacent subblock may be used instead. Thus, if A1 is intra but A2 is not, then A1 can be replaced by A2 in the prediction. A further possibility is to replace one missing Intra Macroblock with the MV predictor from the upper-left position. In FIG. 17a, for example, if C1 is missing, then D may be used instead.


In the above sections, several improvements on B picture Direct Mode and on Motion Vector Prediction were presented. It was illustrated that spatial prediction can also be used for Direct Mode macroblocks, whereas Motion Vector prediction should consider temporal distance and subblock information for more accurate prediction. Such considerations should significantly improve the performance of any applicable video coding system.


CONCLUSION

Although the description above uses language that is specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the invention.

Claims
  • 1. A computing device comprising one or more processing units, volatile memory, non-volatile memory, and storage, the non-volatile memory and/or storage having stored therein computer-executable instructions for causing the computing device, when programmed thereby, to perform operations comprising: encoding a current frame in a sequence of video frames, including, for a given block of the current frame: determining a predicted motion vector (“MV”) of a given MV for the given block using multiple MV predictors from the current frame, the multiple MV predictors including: a first MV predictor from a first surrounding block of the current frame, the first surrounding block being above and to the left of the given block; a second MV predictor from a second surrounding block of the current frame, the second surrounding block being above the given block and separated from the first surrounding block; a third MV predictor from a third surrounding block of the current frame, the third surrounding block being above the given block and adjacent the second surrounding block; a fourth MV predictor from a fourth surrounding block of the current frame, the fourth surrounding block being left of the given block and separated from the first surrounding block; and a fifth MV predictor from a fifth surrounding block of the current frame, the fifth surrounding block being left of the given block and adjacent the fourth surrounding block; and performing motion compensation for the given block, using the given MV for the given block, relative to a reference frame to determine a motion-compensated prediction for the given block; and outputting encoded data for the current frame.
  • 2. The computing device of claim 1, wherein the determining the predicted MV of the given MV for the given block includes: calculating a first intermediate MV using at least some of the multiple MV predictors that are left of the given block; calculating a second intermediate MV using at least some of the multiple MV predictors that are above the given block; and determining the predicted MV of the given MV for the given block using the first and second intermediate MVs.
  • 3. The computing device of claim 1, wherein the motion compensation uses the predicted MV as the given MV for the given block, and wherein the encoding further includes, for the given block, selecting the reference frame from among reference frames used for the multiple MV predictors.
  • 4. The computing device of claim 1, wherein the encoding further includes, for the given block: calculating a MV difference between the given MV for the given block and the predicted MV of the given MV for the given block; and encoding the MV difference.
  • 5. The computing device of claim 1, wherein at least some of the multiple MV predictors use different reference frames.
  • 6. The computing device of claim 1, wherein the encoding further includes, for the given block: determining a residual for the given block; and encoding the residual.
  • 7. The computing device of claim 1, wherein the determining the predicted MV of the given MV for the given block further includes replacing one of the multiple MV predictors if the surrounding block for that MV predictor is intra.
  • 8. A method comprising: receiving encoded data for a current frame in a sequence of video frames; and decoding the current frame, including, for a given block of the current frame: determining a predicted motion vector (“MV”) of a given MV for the given block using multiple MV predictors from the current frame, the multiple MV predictors including: a first MV predictor from a first surrounding block of the current frame, the first surrounding block being above and to the left of the given block; a second MV predictor from a second surrounding block of the current frame, the second surrounding block being above the given block and separated from the first surrounding block; a third MV predictor from a third surrounding block of the current frame, the third surrounding block being above the given block and adjacent the second surrounding block; a fourth MV predictor from a fourth surrounding block of the current frame, the fourth surrounding block being left of the given block and separated from the first surrounding block; and a fifth MV predictor from a fifth surrounding block of the current frame, the fifth surrounding block being left of the given block and adjacent the fourth surrounding block; and performing motion compensation for the given block, using the given MV for the given block, relative to a reference frame to determine a motion-compensated prediction for the given block.
  • 9. The method of claim 8, wherein the determining the predicted MV of the given MV for the given block includes: calculating a first intermediate MV using at least some of the multiple MV predictors that are left of the given block; calculating a second intermediate MV using at least some of the multiple MV predictors that are above the given block; and determining the predicted MV of the given MV for the given block using the first and second intermediate MVs.
  • 10. The method of claim 8, wherein the motion compensation uses the predicted MV as the given MV for the given block, and wherein the decoding further includes, for the given block, selecting the reference frame from among reference frames used for the multiple MV predictors.
  • 11. The method of claim 8, wherein the decoding further includes, for the given block: decoding a MV difference between the given MV for the given block and the predicted MV of the given MV for the given block; and combining the MV difference and the predicted MV of the given MV for the given block.
  • 12. The method of claim 8, wherein at least some of the multiple MV predictors use different reference frames.
  • 13. The method of claim 8, wherein the decoding further includes, for the given block: decoding a residual for the given block; and combining the residual and the motion-compensated prediction.
  • 14. The method of claim 8, wherein the determining the predicted MV of the given MV for the given block further includes replacing one of the multiple MV predictors if the surrounding block for that MV predictor is intra.
  • 15. A computing device comprising one or more processing units, volatile memory, non-volatile memory, and storage, the non-volatile memory and/or storage having stored therein computer-executable instructions for causing the computing device, when programmed thereby, to perform operations comprising: receiving encoded data for a current frame in a sequence of video frames; and decoding the current frame, including, for a given block of the current frame: determining a predicted motion vector (“MV”) of a given MV for the given block using multiple MV predictors from the current frame, the multiple MV predictors including: a first MV predictor from a first surrounding block of the current frame, the first surrounding block being above and to the left of the given block; a second MV predictor from a second surrounding block of the current frame, the second surrounding block being above the given block and separated from the first surrounding block; a third MV predictor from a third surrounding block of the current frame, the third surrounding block being above the given block and adjacent the second surrounding block; a fourth MV predictor from a fourth surrounding block of the current frame, the fourth surrounding block being left of the given block and separated from the first surrounding block; and a fifth MV predictor from a fifth surrounding block of the current frame, the fifth surrounding block being left of the given block and adjacent the fourth surrounding block; and performing motion compensation for the given block, using the given MV for the given block, relative to a reference frame to determine a motion-compensated prediction for the given block.
  • 16. The computing device of claim 15, wherein the determining the predicted MV of the given MV for the given block includes: calculating a first intermediate MV using at least some of the multiple MV predictors that are left of the given block; calculating a second intermediate MV using at least some of the multiple MV predictors that are above the given block; and determining the predicted MV of the given MV for the given block using the first and second intermediate MVs.
  • 17. The computing device of claim 15, wherein the motion compensation uses the predicted MV as the given MV for the given block, and wherein the decoding further includes, for the given block, selecting the reference frame from among reference frames used for the multiple MV predictors.
  • 18. The computing device of claim 15, wherein at least some of the multiple MV predictors use different reference frames.
  • 19. The computing device of claim 15, wherein the decoding further includes, for the given block: decoding a residual for the given block; and combining the residual and the motion-compensated prediction.
  • 20. The computing device of claim 15, wherein the determining the predicted MV of the given MV for the given block further includes replacing one of the multiple MV predictors if the surrounding block for that MV predictor is intra.
RELATED PATENT APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 14/875,164, filed Oct. 5, 2015, which is a continuation of U.S. patent application Ser. No. 14/502,529, filed Sep. 30, 2014, now U.S. Pat. No. 9,185,427, which is a continuation of U.S. patent application Ser. No. 13/760,486, filed Feb. 6, 2013, now U.S. Pat. No. 8,873,630, which is a continuation of U.S. patent application Ser. No. 11/525,059, filed Sep. 20, 2006, now U.S. Pat. No. 8,374,245, which is a continuation of U.S. patent application Ser. No. 10/444,511, filed May 23, 2003, the disclosure of which is hereby incorporated by reference. U.S. patent application Ser. No. 10/444,511 claims the benefit of U.S. Provisional Patent Application No. 60/385,965, filed Jun. 3, 2002, the disclosure of which is hereby incorporated by reference.

US Referenced Citations (258)
Number Name Date Kind
4454546 Mori Jun 1984 A
4661849 Hinman Apr 1987 A
4661853 Roeder et al. Apr 1987 A
4695882 Wada et al. Sep 1987 A
4796087 Guichard et al. Jan 1989 A
4849812 Borgers et al. Jul 1989 A
4862267 Gillard et al. Aug 1989 A
4864393 Harradine et al. Sep 1989 A
5021879 Vogel Jun 1991 A
5068724 Krause et al. Nov 1991 A
5089887 Robert et al. Feb 1992 A
5089889 Sugiyama Feb 1992 A
5091782 Krause et al. Feb 1992 A
5103306 Weiman et al. Apr 1992 A
5111292 Kuriacose et al. May 1992 A
5117287 Koike et al. May 1992 A
5132792 Yonemitsu et al. Jul 1992 A
5157490 Kawai et al. Oct 1992 A
5175618 Ueda Dec 1992 A
5185819 Ng et al. Feb 1993 A
5193004 Wang et al. Mar 1993 A
5223949 Honjo Jun 1993 A
5227878 Puri et al. Jul 1993 A
5235618 Sakai et al. Aug 1993 A
5260782 Hui Nov 1993 A
5287420 Barrett Feb 1994 A
5298991 Yagasaki et al. Mar 1994 A
5317397 Odaka et al. May 1994 A
5343248 Fujinami Aug 1994 A
5347308 Wai Sep 1994 A
5386234 Veltman et al. Jan 1995 A
5400075 Savatier Mar 1995 A
5412430 Nagata May 1995 A
5412435 Nakajima May 1995 A
RE34965 Sugiyama Jun 1995 E
5424779 Odaka Jun 1995 A
5428396 Yagasaki Jun 1995 A
5442400 Sun Aug 1995 A
5448297 Alattar et al. Sep 1995 A
5453799 Yang et al. Sep 1995 A
5461421 Moon Oct 1995 A
RE35093 Wang et al. Nov 1995 E
5467086 Jeong Nov 1995 A
5467134 Laney et al. Nov 1995 A
5467136 Odaka Nov 1995 A
5477272 Zhang et al. Dec 1995 A
RE35158 Sugiyama Feb 1996 E
5510840 Yonemitsu et al. Apr 1996 A
5539466 Igarashi et al. Jul 1996 A
5565922 Krause Oct 1996 A
5594504 Ebrahimi Jan 1997 A
5598215 Watanabe Jan 1997 A
5598216 Lee Jan 1997 A
5612732 Yuyama et al. Mar 1997 A
5617144 Lee Apr 1997 A
5619281 Jung Apr 1997 A
5621481 Yasuda et al. Apr 1997 A
5623311 Phillips et al. Apr 1997 A
5648819 Tranchard Jul 1997 A
5666461 Igarashi et al. Sep 1997 A
5677735 Ueno et al. Oct 1997 A
5687097 Mizusawa et al. Nov 1997 A
5691771 Oishi et al. Nov 1997 A
5699476 Van Der Meer Dec 1997 A
5701164 Kato Dec 1997 A
5717441 Serizawa et al. Feb 1998 A
5731850 Maturi et al. Mar 1998 A
5734755 Ramchandran et al. Mar 1998 A
5748784 Sugiyama May 1998 A
5754239 Wilkinson May 1998 A
5767898 Urano et al. Jun 1998 A
5786860 Kim et al. Jul 1998 A
5787203 Lee et al. Jul 1998 A
5796438 Hosono Aug 1998 A
5798788 Meehan et al. Aug 1998 A
RE35910 Nagata et al. Sep 1998 E
5822541 Nonomura et al. Oct 1998 A
5835144 Matsumura et al. Nov 1998 A
5844613 Chaddha Dec 1998 A
5847776 Khmelnitsky Dec 1998 A
5874995 Naimpally et al. Feb 1999 A
5886742 Hibi et al. Mar 1999 A
5901248 Fandrianto et al. May 1999 A
5905535 Kerdranvat May 1999 A
5923375 Pau Jul 1999 A
5926573 Kim et al. Jul 1999 A
5929940 Jeannin Jul 1999 A
5946042 Kato Aug 1999 A
5949489 Nishikawa et al. Sep 1999 A
5959673 Lee et al. Sep 1999 A
5963258 Nishikawa et al. Oct 1999 A
5963673 Kodama et al. Oct 1999 A
5970173 Lee et al. Oct 1999 A
5970175 Nishikawa et al. Oct 1999 A
5973743 Han Oct 1999 A
5973755 Gabriel Oct 1999 A
5974183 Wilkinson Oct 1999 A
5982438 Lin et al. Nov 1999 A
5990960 Murakami et al. Nov 1999 A
5991447 Eifrig et al. Nov 1999 A
6002439 Murakami et al. Dec 1999 A
6005980 Eifrig et al. Dec 1999 A
RE36507 Iu Jan 2000 E
6011596 Burl Feb 2000 A
6026195 Eifrig et al. Feb 2000 A
6040863 Kato Mar 2000 A
6055012 Haskell et al. Apr 2000 A
6067322 Wang May 2000 A
6081209 Schuyler et al. Jun 2000 A
6091460 Hatano et al. Jul 2000 A
6094225 Han Jul 2000 A
RE36822 Sugiyama Aug 2000 E
6097759 Murakami et al. Aug 2000 A
6130963 Uz et al. Oct 2000 A
6154495 Yamaguchi et al. Nov 2000 A
6167090 Iizuka Dec 2000 A
6175592 Kim et al. Jan 2001 B1
6188725 Sugiyama Feb 2001 B1
6188794 Nishikawa et al. Feb 2001 B1
6192081 Chiang et al. Feb 2001 B1
6201927 Comer Mar 2001 B1
6205176 Sugiyama Mar 2001 B1
6205177 Girod et al. Mar 2001 B1
RE37222 Yonemitsu et al. Jun 2001 E
6243418 Kim Jun 2001 B1
6263024 Matsumoto Jul 2001 B1
6263065 Durinovic-Johri et al. Jul 2001 B1
6269121 Kwak Jul 2001 B1
6271885 Sugiyama Aug 2001 B2
6272179 Kadono Aug 2001 B1
6282243 Kazui et al. Aug 2001 B1
6295376 Nakaya Sep 2001 B1
6301301 Isu et al. Oct 2001 B1
6307887 Gabriel Oct 2001 B1
6307973 Nishikawa et al. Oct 2001 B2
6320593 Sobel Nov 2001 B1
6324216 Igarashi et al. Nov 2001 B1
6377628 Schultz et al. Apr 2002 B1
6381279 Taubman Apr 2002 B1
6404813 Haskell et al. Jun 2002 B1
6414992 Sriram et al. Jul 2002 B1
6427027 Suzuki et al. Jul 2002 B1
6459812 Suzuki et al. Oct 2002 B2
6483874 Panusopone et al. Nov 2002 B1
6496601 Migdal et al. Dec 2002 B1
6519287 Hawkins et al. Feb 2003 B1
6529632 Nakaya et al. Mar 2003 B1
6539056 Sato et al. Mar 2003 B1
6563953 Lin et al. May 2003 B2
6614442 Ouyang et al. Sep 2003 B1
6633611 Sekiguchi et al. Oct 2003 B2
6636565 Kim Oct 2003 B1
6647061 Panusopone et al. Nov 2003 B1
6650781 Nakaya Nov 2003 B2
6654419 Sriram et al. Nov 2003 B1
6654420 Snook Nov 2003 B1
6671319 Chang et al. Dec 2003 B1
6683987 Sugahara Jan 2004 B1
6697430 Yasunari et al. Feb 2004 B1
6697431 Yoneyama Feb 2004 B1
6704360 Haskell et al. Mar 2004 B2
6728317 Demos Apr 2004 B1
6735345 Lin et al. May 2004 B2
6765965 Hanami et al. Jul 2004 B1
RE38563 Eifrig et al. Aug 2004 E
RE38564 Eifrig et al. Aug 2004 E
6785331 Jozawa et al. Aug 2004 B1
6798364 Chen et al. Sep 2004 B2
6798837 Uenoyama et al. Sep 2004 B1
6807231 Wiegand et al. Oct 2004 B1
6816552 Demos Nov 2004 B2
6873657 Yang et al. Mar 2005 B2
6876703 Ismaeil et al. Apr 2005 B2
6900846 Lee et al. May 2005 B2
6920175 Karczewicz et al. Jul 2005 B2
6975680 Demos Dec 2005 B2
6980596 Wang et al. Dec 2005 B2
6999513 Sohn et al. Feb 2006 B2
7003035 Tourapis et al. Feb 2006 B2
7023922 Xu et al. Apr 2006 B1
7054494 Lin et al. May 2006 B2
7092576 Srinivasan et al. Aug 2006 B2
7154952 Tourapis et al. Dec 2006 B2
7233621 Jeon Jun 2007 B2
7280700 Tourapis et al. Oct 2007 B2
7317839 Holcomb Jan 2008 B2
7346111 Winger et al. Mar 2008 B2
7362807 Kondo et al. Apr 2008 B2
7388916 Park et al. Jun 2008 B2
7567617 Holcomb Jul 2009 B2
7609763 Mukerjee et al. Oct 2009 B2
7630438 Mukerjee et al. Dec 2009 B2
7646810 Tourapis et al. Jan 2010 B2
7733960 Kondo et al. Jun 2010 B2
8189666 Wu et al. May 2012 B2
8254455 Wu et al. Aug 2012 B2
8379722 Tourapis et al. Feb 2013 B2
8406300 Tourapis et al. Mar 2013 B2
8774280 Tourapis et al. Jul 2014 B2
20010019586 Kang et al. Sep 2001 A1
20010040926 Hannuksela et al. Nov 2001 A1
20020025077 De Haan et al. Feb 2002 A1
20020105596 Selby Aug 2002 A1
20020114388 Ueda Aug 2002 A1
20020122488 Takahashi et al. Sep 2002 A1
20020154693 Demos Oct 2002 A1
20020186890 Lee et al. Dec 2002 A1
20030016755 Tahara et al. Jan 2003 A1
20030039308 Wu et al. Feb 2003 A1
20030053537 Kim et al. Mar 2003 A1
20030099292 Wang et al. May 2003 A1
20030099294 Wang et al. May 2003 A1
20030112864 Karczewicz et al. Jun 2003 A1
20030113026 Srinivasan et al. Jun 2003 A1
20030142748 Tourapis Jul 2003 A1
20030142751 Hannuksela Jul 2003 A1
20030156646 Hsu et al. Aug 2003 A1
20030202590 Gu et al. Oct 2003 A1
20030206589 Jeon Nov 2003 A1
20040001546 Tourapis et al. Jan 2004 A1
20040008899 Tourapis et al. Jan 2004 A1
20040047418 Tourapis et al. Mar 2004 A1
20040101059 Joch et al. May 2004 A1
20040136457 Funnell et al. Jul 2004 A1
20040139462 Hannuksela et al. Jul 2004 A1
20040141651 Hara et al. Jul 2004 A1
20040146109 Kondo et al. Jul 2004 A1
20040228413 Hannuksela Nov 2004 A1
20040234143 Hagai et al. Nov 2004 A1
20050013497 Hsu et al. Jan 2005 A1
20050013498 Srinivasan Jan 2005 A1
20050036759 Lin et al. Feb 2005 A1
20050053137 Holcomb Mar 2005 A1
20050053147 Mukerjee et al. Mar 2005 A1
20050053149 Mukerjee et al. Mar 2005 A1
20050100093 Holcomb May 2005 A1
20050129120 Jeon Jun 2005 A1
20050135484 Lee Jun 2005 A1
20050147167 Dumitras et al. Jul 2005 A1
20050185713 Winger et al. Aug 2005 A1
20050207490 Wang Sep 2005 A1
20050249291 Gordon Nov 2005 A1
20050254584 Kim et al. Nov 2005 A1
20060013307 Olivier et al. Jan 2006 A1
20060072662 Tourapis et al. Apr 2006 A1
20060120464 Hannuksela Jun 2006 A1
20060280253 Tourapis et al. Dec 2006 A1
20070064801 Wang et al. Mar 2007 A1
20070177674 Yang Aug 2007 A1
20080043845 Nakaishi Feb 2008 A1
20080069462 Abe Mar 2008 A1
20080075171 Suzuki Mar 2008 A1
20080117985 Chen et al. May 2008 A1
20090002379 Baeza et al. Jan 2009 A1
20090003446 Wu et al. Jan 2009 A1
20090003447 Christoffersen et al. Jan 2009 A1
20090238269 Pandit et al. Sep 2009 A1
20120213286 Wu et al. Aug 2012 A1
Foreign Referenced Citations (75)
Number Date Country
0 279 053 Aug 1988 EP
0 397 402 Nov 1990 EP
0 526 163 Feb 1993 EP
0 535 746 Apr 1993 EP
0 540 350 May 1993 EP
0 588 653 Mar 1994 EP
0 614 318 Sep 1994 EP
0 625 853 Nov 1994 EP
0 771 114 May 1997 EP
0 782 343 Jul 1997 EP
0 786 907 Jul 1997 EP
0 830 029 Mar 1998 EP
0 863 673 Sep 1998 EP
0 863 674 Sep 1998 EP
0 863 675 Sep 1998 EP
0 874 526 Oct 1998 EP
0 884 912 Dec 1998 EP
0 901 289 Mar 1999 EP
0 944 245 Sep 1999 EP
1 006 732 Jul 2000 EP
1335609 Aug 2003 EP
1369820 Mar 2004 EP
1418762 May 2004 EP
1 427 216 Jun 2004 EP
2328337 Feb 1999 GB
2332115 Jun 1999 GB
2343579 May 2000 GB
61-205086 Sep 1986 JP
1869940 Sep 1986 JP
62 213 494 Sep 1987 JP
3-001688 Jan 1991 JP
3 129 986 Mar 1991 JP
05-137131 Jun 1993 JP
6 078 298 Mar 1994 JP
6-078295 Mar 1994 JP
06-276481 Sep 1994 JP
06-276511 Sep 1994 JP
6-292188 Oct 1994 JP
07-274171 Oct 1995 JP
07-274181 Oct 1995 JP
08-140099 May 1996 JP
08-251601 Sep 1996 JP
09-121355 May 1997 JP
09-322163 Dec 1997 JP
1005664 Feb 1998 JP
10-224800 Aug 1998 JP
11-055672 Feb 1999 JP
11-088888 Mar 1999 JP
11 136683 May 1999 JP
11-164305 Jun 1999 JP
2000-513167 Oct 2000 JP
2000-307672 Nov 2000 JP
2000-308064 Nov 2000 JP
2001-025014 Jan 2001 JP
2002-118598 Apr 2002 JP
2002-121053 Apr 2002 JP
2002-156266 May 2002 JP
2002-177889 Jun 2002 JP
2002-193027 Jul 2002 JP
2002-204713 Jul 2002 JP
2003-513565 Apr 2003 JP
2004-208259 Jul 2004 JP
2004-228259 Aug 2004 JP
2182727 May 2002 RU
WO 0033581 Aug 2000 WO
WO 0195633 Dec 2001 WO
WO 0237859 May 2002 WO
WO 0243399 May 2002 WO
WO 02062074 Aug 2002 WO
WO 03026296 Mar 2003 WO
WO 03047272 Jun 2003 WO
WO 03090473 Oct 2003 WO
WO 03090475 Oct 2003 WO
WO 2005004491 Jan 2005 WO
WO 2008023967 Feb 2008 WO
Non-Patent Literature Citations (108)
Entry
U.S. Appl. No. 60/341,674, filed Dec. 17, 2001, Lee et al.
U.S. Appl. No. 60/488,710, filed Jul. 18, 2003, Srinivasan et al.
U.S. Appl. No. 60/501,081, filed Sep. 7, 2003, Srinivasan et al.
Abe et al., “Clarification and Improvement of Direct Mode,” JVT-D033, 9 pp. (document marked Jul. 16, 2002).
Anonymous, “DivX Multi Standard Video Encoder,” 2 pp. (document marked Nov. 2005).
Chalidabhongse et al., “Fast motion vector estimation using multiresolution spatio-temporal correlations,” IEEE Transactions on Circuits and Systems for Video Technology, pp. 477-488 (Jun. 1997).
Chujoh et al., “Verification result on the combination of spatial and temporal,” JVT-E095, 5 pp. (Oct. 2002).
Decision on Grant dated Apr. 2, 2007, from Russian Patent Application No. 2003116281, 5 pp.
Ericsson, “Fixed and Adaptive Predictors for Hybrid Predictive/Transform Coding,” IEEE Transactions on Comm., vol. COM-33, No. 12, pp. 1291-1302 (1985).
European Official Communication dated Oct. 28, 2005, from European Patent Application No. 03 011 935.8, 3 pp.
European Search Report dated Dec. 4, 2003, from European Patent Application No. 03 011 935.8, 5 pp.
Examiner's First Report dated Nov. 29, 2007, from Australian Patent Application No. 2003204477, 2 pp.
Examiner's Report No. 2 dated Jun. 24, 2008, from Australian Patent Application No. 2003204477, 2 pp.
Examiner's Report dated Oct. 12, 2012, from Canadian Patent Application No. 2,430,460, 2 pp.
Flierl et al., “Multihypothesis Motion Estimation for Video Coding,” Proc. DCC, 10 pp. (Mar. 2001).
Fogg, “Survey of Software and Hardware VLC Architectures,” SPIE, vol. 2186, pp. 29-37 (Feb. 9-10, 1994).
Girod, “Efficiency Analysis of Multihypothesis Motion-Compensated Prediction for Video Coding,” IEEE Transactions on Image Processing, vol. 9, No. 2, pp. 173-183 (Feb. 2000).
Grigoriu, “Spatio-temporal compression of the motion field in video coding,” 2001 IEEE Fourth Workshop on Multimedia Signal Processing, pp. 129-134 (Oct. 2001).
Gu et al., “Introducing Direct Mode P-picture (DP) to reduce coding complexity,” Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, Document No. JVT-0044, 10 pp. (Mar. 2002).
Horn et al., “Estimation of Motion Vector Fields for Multiscale Motion Compensation,” Proc. Picture Coding Symp. (PCS 97), pp. 141-144 (Sep. 1997).
Hsu et al., “A Low Bit-Rate Video Codec Based on Two-Dimensional Mesh Motion Compensation with Adaptive Interpolation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, No. 1, pp. 111-117 (Jan. 2001).
Huang et al., “Hardware architecture design for variable block size motion estimation in MPEG-4 AVC/JVT/ITU-T H.264,” Proc. of the 2003 Int'l Symposium on Circuits & Sys. (ISCAS '03), vol. 2, pp. 796-799 (May 2003).
Ismaeil et al., “Efficient Motion Estimation Using Spatial and Temporal Motion Vector Prediction,” IEEE Int'l Conf. on Image Processing, pp. 70-74 (Oct. 1999).
ISO/IEC, “MPEG-4 Video Verification Model Version 18.0,” ISO/IEC JTC1/SC29/WG11 N3908, Pisa, pp. 1-10, 299-311 (Jan. 2001).
ISO/IEC, “ISO/IEC 11172-2: Information Technology—Coding of Moving Pictures and Associated Audio for Storage Media at up to About 1.5 Mbit/s,” 122 pp. (Aug. 1993).
ISO/IEC, “Information Technology—Coding of Audio-Visual Objects: Visual, ISO/IEC 14496-2, Committee Draft,” 330 pp. (Mar. 1998).
ISO/IEC, “MPEG-4 Video Verification Model Version 10.0,” ISO/IEC JTC1/SC29/WG11, MPEG98/N1992, 305 pp. (Feb. 1998).
ITU—Q15-F-24, “MVC Video Codec—Proposal for H.26L,” Study Group 16, Video Coding Experts Group (Question 15), 28 pp. (document marked as generated in Oct. 1998).
ITU-T, “ITU-T Recommendation H.261: Video Codec for Audiovisual Services at p × 64 kbits,” 28 pp. (Mar. 1993).
ITU-T, “ITU-T Recommendation H.262: Information Technology—Generic Coding of Moving Pictures and Associated Audio Information: Video,” 218 pp. (Jul. 1995).
ITU-T, “ITU-T Recommendation H.263: Video Coding for Low Bit Rate Communication,” 167 pp. (Feb. 1998).
Jeon et al., “B picture coding for sequence with repeating scene changes,” JVT-C120, 9 pp. (document marked May 1, 2002).
Jeon, “Clean up for temporal direct mode,” JVT-E097, 13 pp. (Oct. 2002).
Jeon, “Direct mode in B pictures,” JVT-D056, 10 pp. (Jul. 2002).
Jeon, “Motion vector prediction and prediction signal in B pictures,” JVT-D057, 5 pp. (Jul. 2002).
Ji et al., “New Bi-Prediction Techniques for B Pictures Coding,” IEEE Int'l Conf. on Multimedia and Expo, pp. 101-104 (Jun. 2004).
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, Working Draft No. 2, Revision 2 (WD-2), JVT-B118r2, 106 pp. (Jan. 2002).
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, Working Draft No. 2, Revision 0 (WD-2), JVT-B118r1, 105 pp. (Jan. 2002).
Joint Video Team of ISO/IEC MPEG and ITU-T VCEG, “Text of Committee Draft of Joint Video Specification (ITU-T Rec. H.264, ISO/IEC 14496-10 AVC),” Document JVT-C167, 142 pp. (May 2002).
Joint Video Team of ISO/IEC MPEG and ITU-T VCEG, “Joint Final Committee Draft (JFCD) of Joint Video Specification (ITU-T Recommendation H.264, ISO/IEC 14496-10 AVC,” JVT-D157 (Aug. 2002).
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, “Joint Model No. 1, Revision 1 (JM-1r1),” JVT-A003r1, Pattaya, Thailand, 80 pp. (Dec. 2001) [document marked “Generated: Jan. 18, 2002”].
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, “Study of Final Committee Draft of Joint Video Specification,” JVT-F100, Awaji Island, 242 pp. (Dec. 2002).
Kadono et al., “Memory Reduction for Temporal Technique of Direct Mode,” JVT-E076, 12 pp. (Oct. 2002).
Ko et al., “Fast Intra-Mode Decision Using Inter-Frame Correlation for H.264/AVC,” Proc. IEEE ISCE 2008, 4 pages (Apr. 2008).
Kondo et al., “New Prediction Method to Improve B-picture Coding Efficiency,” VCEG-O26, 9 pp. (document marked Nov. 26, 2001).
Kondo et al., “Proposal of Minor Changes to Multi-frame Buffering Syntax for Improving Coding Efficiency of B-pictures,” JVT-B057, 10 pp. (document marked Jan. 23, 2002).
Konrad et al., “On Motion Modeling and Estimation for Very Low Bit Rate Video Coding,” Visual Comm. & Image Processing (VCIP '95), 12 pp. (May 1995).
Kossentini et al., “Predictive RD Optimized Motion Estimation for Very Low Bit-rate Video Coding,” IEEE J. on Selected Areas in Communications, vol. 15, No. 9 pp. 1752-1763 (Dec. 1997).
Ku et al., “Investigation of a Visual Telephone Prototyping on Personal Computers,” IEEE Trans. on Consumer Electronics, vol. 42, No. 3, pp. 750-759 (Aug. 1996).
Lainema et al., “Skip Mode Motion Compensation,” Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), Document JVT-C027, 8 pp. (May 2002).
Microsoft Corporation, “Microsoft Debuts New Windows Media Players 9 Series, Redefining Digital Media on the PC,” 4 pp. (Sep. 4, 2002) [Downloaded from the World Wide Web on May 14, 2004].
Mook, “Next-Gen Windows Media Player Leaks to the Web,” BetaNews, 17 pp. (Jul. 19, 2002) [Downloaded from the World Wide Web on Aug. 8, 2003].
Notice of Acceptance dated Feb. 26, 2009, from Australian Patent Application No. 2003204477, 3 pp.
Notice of Allowance dated Apr. 12, 2010, from Korean Patent Application No. 10-2003-0035240, 2 pp.
Notice of Allowance dated Apr. 1, 2011, from Japanese Patent Application No. 2003157240, 6 pp.
Notice of Allowance dated Feb. 15, 2013, from Canadian Patent Application No. 2,430,460, 1 p.
Notice of Preliminary Rejection dated Sep. 24, 2009, from Korean Patent Application No. 10-2003-0035240, 4 pp.
Notice of Rejection dated Dec. 8, 2009, from Japanese Patent Application No. 2003-157240, 5 pp.
Notice of Rejection dated Sep. 17, 2010, from Japanese Patent Application No. 2003-157240, 3 pp.
Notice on Grant of Patent Right for Invention dated Sep. 19, 2008, from Chinese Patent Application No. 03141275.0, 4 pp.
Notice on Office Action dated Sep. 1, 2006, from Chinese Patent Application No. 03141275.0, 9 pp.
Official Action dated Mar. 22, 2012, from Canadian Patent Application No. 2,430,460, 3 pp.
Official Action dated Sep. 21, 2011, from Canadian Patent Application No. 2,430,460, 3 pp.
Official Action dated Dec. 11, 2006, from Russian Patent Application No. 2003116281, 10 pp.
Official Notice of Final Rejection dated Apr. 9, 2010, from Japanese Patent Application No. 2003-157240, 4 pp.
Panusopone et al., “Direct Prediction for Predictive (P) Picture in Field Coding mode,” Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, Document JVT-D046, 8 pp. (Jul. 2002).
Pourazad et al., “An H.264-based Video Encoding Scheme for 3D TV,” EURASIP European Signal Processing Conference—EUSIPCO, Florence, Italy, 5 pages (Sep. 2006).
Printouts of FTP directories from http://ftp3.itu.ch, 8 pp. (downloaded from the World Wide Web on Sep. 20, 2005).
Reader, “History of MPEG Video Compression—Ver. 4.0,” 99 pp. (document marked Dec. 16, 2003).
Schwarz et al., “Tree-structured macroblock partition,” ITU-T SG16/Q.6 VCEG-O17, 6 pp. (Dec. 2001).
Schwarz et al., “Core Experiment Results on Improved Macroblock Prediction Modes,” Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q6), Document JVT-B054, 10 pp. (Jan.-Feb. 2002).
Sullivan et al., “The H.264/AVC Advanced Video Coding Standard: Overview and Introduction to the Fidelity Range Extensions,” 21 pp. (Aug. 2004).
Suzuki, “Handling of reference pictures and MVs for direct mode,” JVT-D050, 11 pp. (Jul. 2002).
Suzuki et al., “Study of Direct Mode,” JVT-E071r1, 7 pp. (Oct. 2002).
The Second Office Action dated Nov. 16, 2007, from Chinese Patent Application No. 03141275.0, 8 pp.
“The TML Project Web-Page and Archive,” (including pages of code marked “image.cpp for H.26L decoder, Copyright 1999” and “image.c”), 24 pp. (document marked Sep. 2001).
Tourapis et al., “B picture and ABP Finalization,” JVT-E018, 2 pp. (Oct. 2002).
Tourapis et al., “Direct Mode Coding for Bipredictive Slices in the H.264 Standard,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 15, No. 1, pp. 119-126. (Jan. 2005).
Tourapis et al., “Direct Prediction for Predictive (P) and Bidirectionally Predictive (B) frames in Video Coding,” Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), Document JVT-C128, 11 pp. (May 2002).
Tourapis et al., “Motion Vector Prediction in Bidirectionally Predictive (B) frames with regards to Direct Mode,” Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), Document JVT-C127, 7 pp. (May 2002).
Tourapis et al., “Timestamp Independent Motion Vector Prediction for P and B frames with Division Elimination,” Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), Document JVT-D040, 18 pp. (Jul. 2002).
Tourapis et al., “Performance Comparison of Temporal and Spatial Direct mode,” Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), Document JVT-E026, 7 pp. (Oct. 2002).
Tourapis et al., “Temporal Interpolation of Video Sequences Using Zonal Based Algorithms,” IEEE, pp. 895-898 (Oct. 2001).
Wang et al., “Adaptive frame/field coding for JVT Video Coding,” ITU-T SG16 Q.6 JVT-B071, 24 pp. (Jan. 2002).
Wang et al., “Interlace Coding Tools for H.26L Video Coding,” ITU-T SG16/Q.6 VCEG-O37, pp. 1-20 (Dec. 2001).
Wiegand et al., “Motion-compensating Long-term Memory Prediction,” Proc. Int'l Conf. on Image Processing, 4 pp. (Oct. 1997).
Wiegand et al., “Long-term Memory Motion Compensated Prediction,” IEEE Transactions on Circuits & Systems for Video Technology, vol. 9, No. 1, pp. 70-84 (Feb. 1999).
Wiegand, “H.26L Test Model Long-Term No. 9 (TML-9) draft 0,” ITU-Telecommunications Standardization Sector, Study Group 16, VCEG-N83, 74 pp. (Dec. 2001).
Wien, “Variable Block-Size Transforms for Hybrid Video Coding,” Dissertation, 182 pp. (Feb. 2004).
Winger et al., “HD Temporal Direct-Mode Verification & Text,” JVT-E037, 8 pp. (Oct. 2002).
Wu et al., “Joint estimation of forward and backward motion vectors for interpolative prediction of video,” IEEE Transactions on Image Processing, vol. 3, No. 5, pp. 684-687 (Sep. 1994).
Yu et al., “Two-Dimensional Motion Vector Coding for Low Bitrate Videophone Applications,” Proc. Int'l Conf. on Image Processing, Los Alamitos, US, pp. 414-417, IEEE Comp. Soc. Press (Oct. 1995).
Communication pursuant to Article 94(3) EPC dated May 22, 2017, from European Patent Application No. 03011935.8, 6 pp.
Final Office Action Issued in U.S. Appl. No. 11/525,059, dated Dec. 6, 2011, 12 Pages.
Final Office Action Issued in U.S. Appl. No. 13/760,486, dated Feb. 14, 2014, 13 Pages.
Girod, “Motion-Compensation: Visual Aspects, Accuracy, and Fundamental Limits,” Motion Analysis and Image Sequence Processing, Kluwer Academic Publishers, pp. 125-152 (1993).
Non-final Office Action Issued in U.S. Appl. No. 10/444,511, dated Mar. 20, 2006, 13 Pages.
Non-final Office Action Issued in U.S. Appl. No. 11/525,059, dated Sep. 30, 2010, 9 Pages.
Non-final Office Action Issued in U.S. Appl. No. 11/525,059, dated Apr. 1, 2011, 10 Pages.
Non-final Office Action Issued in U.S. Appl. No. 13/760,486, dated Sep. 30, 2013, 15 Pages.
Non-final Office Action Issued in U.S. Appl. No. 14/502,529, dated Feb. 12, 2015, 8 Pages.
Non-final Office Action Issued in U.S. Appl. No. 14/875,164, dated Apr. 1, 2016, 8 Pages.
Notice of Allowance Issued in U.S. Appl. No. 11/525,059, dated Jun. 25, 2012, 11 Pages.
Notice of Allowance Issued in U.S. Appl. No. 11/525,059, dated Oct. 9, 2012, 10 Pages.
Notice of Allowance Issued in U.S. Appl. No. 13/760,486, dated Jun. 25, 2014, 9 Pages.
Notice of Allowance Issued in U.S. Appl. No. 14/502,529, dated Jun. 26, 2015, 11 Pages.
Notice of Allowance Issued in U.S. Appl. No. 14/875,164, dated Sep. 29, 2016, 8 Pages.
Office Action dated Mar. 19, 2018, from Brazilian Patent Application No. PI0302164-5, 6 pp.
Related Publications (1)
Number Date Country
20170118488 A1 Apr 2017 US
Provisional Applications (1)
Number Date Country
60385965 Jun 2002 US
Continuations (5)
Number Date Country
Parent 14875164 Oct 2015 US
Child 15391652 US
Parent 14502529 Sep 2014 US
Child 14875164 US
Parent 13760486 Feb 2013 US
Child 14502529 US
Parent 11525059 Sep 2006 US
Child 13760486 US
Parent 10444511 May 2003 US
Child 11525059 US