Video compression can be considered the process of representing digital video data in a form that uses fewer bits when stored or transmitted. Video compression algorithms can achieve compression by exploiting redundancies in the video data, whether spatial, temporal, or color-space. Video compression algorithms typically segment the video data into portions, such as groups of frames and groups of pels, to identify areas of redundancy within the video that can be represented with fewer bits than required by the original video data. When these redundancies in the data are exploited, greater compression can be achieved. An encoder can be used to transform the video data into an encoded format, while a decoder can be used to transform encoded video back into a form comparable to the original video data. The implementation of the encoder/decoder is referred to as a codec.
Standard encoders divide a given video frame into non-overlapping coding units or macroblocks (rectangular regions of contiguous pels) for encoding. The macroblocks (herein referred to more generally as “input blocks” or “data blocks”) are typically processed in a traversal order of left to right and top to bottom in a video frame. Compression can be achieved when input blocks are predicted and encoded using previously-coded data. The process of encoding input blocks using spatially neighboring samples of previously-coded blocks within the same frame is referred to as intra-prediction. Intra-prediction attempts to exploit spatial redundancies in the data. The encoding of input blocks using similar regions from previously-coded frames, found using a motion estimation algorithm, is referred to as inter-prediction. Inter-prediction attempts to exploit temporal redundancies in the data. The motion estimation algorithm can generate a motion vector that specifies, for example, the location of a matching region in a reference frame relative to an input block that is being encoded. Most motion estimation algorithms consist of two main steps: initial motion estimation, which provides a first, rough estimate of the motion vector (and corresponding temporal prediction) for a given input block, and fine motion estimation, which performs a local search in the neighborhood of the initial estimate to determine a more precise estimate of the motion vector (and corresponding prediction) for that input block.
The encoder may measure the difference between the data to be encoded and the prediction to generate a residual. The residual can provide the difference between a predicted block and the original input block. The predictions, motion vectors (for inter-prediction), residuals, and related data can be combined with other processes such as a spatial transform, a quantizer, an entropy encoder, and a loop filter to create an efficient encoding of the video data. The residual that has been transformed and quantized can be inverse-processed and added back to the prediction, assembled into a decoded frame, and stored in a framestore. Details of such encoding techniques for video will be familiar to a person skilled in the art.
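As a rough illustration of the residual round trip described above, the sketch below (pure Python; all names are hypothetical) computes a residual, quantizes it, and adds it back to the prediction on the decoder side. A real encoder applies a spatial transform before quantizing; a plain uniform quantizer stands in here.

```python
# Illustrative residual round trip for one small block of pels (names
# hypothetical). With step=1 the round trip is lossless; larger steps
# discard precision, which is the source of quantization loss.

def residual(block, prediction):
    """Element-wise difference between the input block and its prediction."""
    return [[b - p for b, p in zip(rb, rp)] for rb, rp in zip(block, prediction)]

def quantize(res, step):
    """Uniform quantization of residual values (illustrative only)."""
    return [[round(v / step) for v in row] for row in res]

def reconstruct(prediction, qres, step):
    """Decoder side: scale the quantized residual back up and add it to
    the prediction."""
    return [[p + q * step for p, q in zip(rp, rq)]
            for rp, rq in zip(prediction, qres)]

block = [[10, 12], [14, 16]]
pred = [[9, 12], [15, 15]]
res = residual(block, pred)
recon = reconstruct(pred, quantize(res, 1), 1)
```

With a quantization step of 1 the reconstruction recovers the original block exactly; coarser steps trade fidelity for fewer bits.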
MPEG-2 (H.262) and H.264 (MPEG-4 Part 10, Advanced Video Coding [AVC]), hereafter referred to as MPEG-2 and H.264, respectively, are two codec standards for video compression that achieve high quality video representation at relatively low bitrates. The basic coding units for MPEG-2 and H.264 are 16×16 macroblocks. H.264 is the most recent widely-accepted standard in video compression and is generally thought to be twice as efficient as MPEG-2 at compressing video data.
The basic MPEG standard defines three types of frames (or pictures), based on how the input blocks in the frame are encoded. An I-frame (intra-coded picture) is encoded using only data present in the frame itself. Generally, when the encoder receives video signal data, the encoder creates I-frames first and segments the video frame data into input blocks that are each encoded using intra-prediction. An I-frame consists of only intra-predicted blocks. I-frames can be costly to encode, as the encoding is done without the benefit of information from previously-decoded frames. A P-frame (predicted picture) is encoded via forward prediction, using data from previously-decoded I-frames or P-frames, also known as reference frames. P-frames can contain either intra blocks or (forward-)predicted blocks. A B-frame (bi-predicted picture) is encoded via bi-directional prediction, using data from both previous and subsequent frames. B-frames can contain intra, (forward-)predicted, or bi-predicted blocks.
A particular set of reference frames is termed a Group of Pictures (GOP). The GOP contains only the decoded pels within each reference frame and does not include information as to how the input blocks or frames themselves were originally encoded (I-frame, B-frame, or P-frame). Older video compression standards such as MPEG-2 use one reference frame (in the past) to predict P-frames and two reference frames (one past, one future) to predict B-frames. By contrast, more recent compression standards such as H.264 and HEVC (High Efficiency Video Coding) allow the use of multiple reference frames for P-frame and B-frame prediction. While reference frames are typically temporally adjacent to the current frame, the standards also allow reference frames that are not temporally adjacent.
Conventional inter-prediction is based on block-based motion estimation and compensation (BBMEC). The BBMEC process searches for the best match between the target block (the current input block being encoded) and same-sized regions within previously-decoded reference frames. When such a match is found, the encoder may transmit a motion vector. The motion vector may include a pointer to the best match's position in the reference frame. One could conceivably perform exhaustive searches in this manner throughout the video “datacube” (height×width×frame index) to find the best possible matches for each input block, but exhaustive search is usually computationally prohibitive and increases the chances of selecting particularly poor motion vectors. As a result, the BBMEC search process is limited, both temporally in terms of reference frames searched and spatially in terms of neighboring regions searched. This means that “best possible” matches are not always found, especially with rapidly changing data.
The simplest form of the BBMEC algorithm initializes the motion estimation using a (0, 0) motion vector, meaning that the initial estimate of a target block is the co-located block in the reference frame. Fine motion estimation is then performed by searching in a local neighborhood for the region that best matches (i.e., has lowest error in relation to) the target block. The local search may be performed by exhaustive query of the local neighborhood (termed here full block search) or by any one of several “fast search” methods, such as a diamond or hexagonal search.
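A minimal sketch of the full block search just described, assuming frames are 2-D arrays of pel values and using the sum of absolute differences (SAD) as the matching error (function names are illustrative):

```python
# Minimal full block search sketch (frame layout and names assumed).
# Initialization with the (0, 0) vector centers the search on the
# co-located block; SAD scores each candidate offset.

def sad(frame, block, top, left):
    """Sum of absolute differences between `block` and the same-sized
    region of `frame` with top-left corner (top, left)."""
    return sum(abs(block[r][c] - frame[top + r][left + c])
               for r in range(len(block)) for c in range(len(block[0])))

def full_block_search(ref_frame, block, top, left, radius):
    """Exhaustively test every offset within `radius` of the co-located
    position; return the (dy, dx) motion vector with the lowest SAD."""
    best = (float("inf"), (0, 0))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r, c = top + dy, left + dx
            if (0 <= r <= len(ref_frame) - len(block)
                    and 0 <= c <= len(ref_frame[0]) - len(block[0])):
                best = min(best, (sad(ref_frame, block, r, c), (dy, dx)))
    return best[1]
```

A diamond or hexagonal "fast search" replaces the nested loop with a pattern of test points around the current best, trading exhaustiveness for speed.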
An improvement on the BBMEC algorithm that has been present in standard codecs since later versions of MPEG-2 is the enhanced predictive zonal search (EPZS) algorithm [Tourapis, A., 2002, “Enhanced predictive zonal search for single and multiple frame motion estimation,” Proc. SPIE 4671, Visual Communications and Image Processing, pp. 1069-1078]. The EPZS algorithm considers a set of motion vector candidates for the initial estimate of a target block, based on the motion vectors of neighboring blocks that have already been encoded, as well as the motion vectors of the co-located block (and neighbors) in the previous reference frame. The algorithm hypothesizes that the video's motion vector field has some spatial and temporal redundancy, so it is logical to initialize motion estimation for a target block with motion vectors of neighboring blocks, or with motion vectors from nearby blocks in already-encoded frames. Once the set of initial estimates has been gathered, the EPZS algorithm narrows the set via approximate rate-distortion analysis, after which fine motion estimation is performed.
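The EPZS candidate-gathering and pruning steps might be sketched as follows (a simplified, hypothetical structure; a production EPZS implementation orders and thresholds its candidates more carefully):

```python
# Simplified EPZS-style candidate gathering (hypothetical structure):
# initial estimates come from the (0, 0) vector, the median predictor,
# the co-located block in the previous reference frame, and already-
# encoded spatial neighbors; an approximate cost then narrows the set.

def gather_epzs_candidates(neighbor_mvs, colocated_mv, median_mv):
    """Collect initial motion vector candidates, dropping duplicates
    while preserving order."""
    candidates = [(0, 0), median_mv, colocated_mv] + list(neighbor_mvs)
    seen, unique = set(), []
    for mv in candidates:
        if mv not in seen:
            seen.add(mv)
            unique.append(mv)
    return unique

def prune(candidates, approx_cost, keep=1):
    """Keep the `keep` candidates with the lowest approximate
    rate-distortion cost."""
    return sorted(candidates, key=approx_cost)[:keep]
```

Here `approx_cost` stands in for the approximate rate-distortion analysis; fine motion estimation then runs only on the surviving candidates.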
Historically, model-based compression schemes have also been proposed to avoid the limitations of BBMEC prediction. These model-based compression schemes (the most well-known of which is perhaps the MPEG-4 Part 2 standard) rely on the detection and tracking of objects or features (defined generally as “components of interest”) in the video and a method for encoding those features/objects separately from the rest of the video frame. These model-based compression schemes, however, suffer from the challenge of segmenting video frames into object vs. non-object (feature vs. non-feature) regions. First, because objects can be of arbitrary size, their shapes need to be encoded in addition to their texture (color content). Second, the tracking of multiple moving objects can be difficult, and inaccurate tracking causes incorrect segmentation, usually resulting in poor compression performance. A third challenge is that not all video content is composed of objects or features, so there needs to be a fallback encoding scheme when objects/features are not present.
The present invention recognizes fundamental limitations in the inter-prediction process of conventional codecs and applies higher-level modeling to overcome those limitations and provide improved inter-prediction, while maintaining the same general processing flow and framework as conventional encoders. Higher-level modeling provides an efficient way of navigating more of the prediction search space (the video datacube) to produce better predictions than can be found through conventional BBMEC and its variants. However, the modeling in the present invention does not require feature or object detection and tracking, so the model-based compression scheme presented herein does not encounter the challenges of segmentation that previous model-based compression schemes faced.
The present invention focuses on model-based compression via continuous block tracking (CBT). CBT assumes that the eventual blocks of data to be encoded are macroblocks or input blocks, the basic coding units of the encoder (which can vary in size depending on the codec), but CBT can begin by tracking data blocks of varying size. In one embodiment, hierarchical motion estimation (HME) [Bierling, M., 1988, “Displacement estimation by hierarchical blockmatching,” Proc. SPIE 1001, Visual Communications and Image Processing, pp. 942-951] is applied to begin tracking data blocks much larger than the typical input block size. The HME tracking results for the larger blocks are then propagated to successively smaller blocks until motion vectors are estimated for the input blocks. HME provides the ability to track data at multiple resolutions, expanding the ability of the encoder to account for data at different scales.
The present invention generates frame-to-frame tracking results for each input block in the video data by application of conventional block-based motion estimation (BBME). If HME is applied, BBME is performed first on larger blocks of data and the resulting motion vectors are propagated to successively smaller blocks, until motion vectors for input blocks are calculated.
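The coarse-to-fine propagation step can be sketched as below, assuming two pyramid levels (32×32 parent blocks feeding 16×16 input blocks; block sizes and names are assumptions). The `refine` callback stands in for the local fine-estimation search applied to each inherited vector.

```python
# Hedged sketch of HME motion vector propagation: a vector estimated for
# a large parent block is handed down as the starting point for each of
# its four sub-blocks, which `refine` then adjusts locally.

def hme_propagate(coarse_mvs, refine, child=16):
    """Propagate each parent-block motion vector to its four child
    sub-blocks, refining each inherited vector."""
    fine_mvs = {}
    for (top, left), mv in coarse_mvs.items():
        children = [(top, left), (top, left + child),
                    (top + child, left), (top + child, left + child)]
        for pos in children:
            fine_mvs[pos] = refine(pos, mv)
    return fine_mvs

# One 32x32 block at (0, 0) with vector (3, -2); identity refinement
# stands in for the local search.
fine = hme_propagate({(0, 0): (3, -2)}, lambda pos, mv: mv)
```

Repeating this step level by level carries tracking information from the coarsest resolution down to the input-block size.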
Frame-to-frame tracking results are then used to generate continuous tracking results for each input block in the video data, motion vectors that specify an input block's best match in reference frames that are not temporally adjacent to the current frame. In a typical GOP structure of IBBPBBP (consisting of I-frames, B-frames, and P-frames), for example, the reference frame can be as far away as three frames from the frame being encoded. Because frame-to-frame tracking results only specify motion vectors beginning at an input block location and likely point to a region in the previous frame that is not necessarily centered on an input block location, the frame-to-frame tracking results for all neighboring blocks in the previous frame must be combined to continue the “block track.” This is the essence of continuous block tracking.
For a given input block, the motion vector from the CBT provides an initial estimate for the present invention's motion estimation. The initial estimate may be followed by a local search in the neighborhood of the initial estimate to obtain a fine estimate. The local search may be undertaken by full block search, diamond or hexagon search, or other fast search methods. The local estimate may be further refined by rate-distortion optimization to account for the best encoding mode (e.g., quantization parameter, subtiling, and reference frame, etc.), and then by subpixel refinement.
In an alternative embodiment, the CBT motion vector may be combined with EPZS candidates to form a set of initial estimates. The candidate set may be pruned through a preliminary “competition” that determines (via an approximate rate-distortion analysis) which candidate is the best one to bring forward. This “best” initial estimate then undergoes fine estimation (local search and subpixel refinement) and the later (full) rate-distortion optimization steps to select the encoding mode, etc. In another embodiment, multiple initial estimates may be brought forward to the subsequent encoding steps, for example the CBT motion vector and the “best” EPZS candidate. Full rate-distortion optimization at the final stage of encoding then selects the best overall candidate.
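The candidate "competition" reduces to minimizing the rate-distortion metric D + λR; a toy selection routine (the distortion and rate values in the test are placeholders) makes the tradeoff concrete:

```python
# Toy rate-distortion selection: each candidate is scored as
# D + lambda * R, where D is the prediction error, R the estimated
# coding bits, and lambda the Lagrange multiplier trading the two off.
# Candidate labels and values are illustrative.

def rd_score(distortion, rate, lam):
    """Rate-distortion metric D + lambda * R."""
    return distortion + lam * rate

def select_best(candidates, lam):
    """`candidates` maps a label to a (distortion, rate) pair; the label
    with the lowest D + lambda * R score wins."""
    return min(candidates, key=lambda k: rd_score(*candidates[k], lam))
```

A higher λ favors candidates that are cheaper to encode; a lower λ favors candidates with lower prediction error, so the same candidate set can yield different winners at different operating points.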
In another embodiment, the trajectories from continuous tracking results in past frames can be used to generate predictions in the current frame being encoded. This trajectory-based continuous block tracking (TB-CBT) prediction, which does not require new frame-to-frame tracking in the current frame, can either be added to an existing set of prediction candidates (which may include the CBT and EPZS candidates) or can replace the CBT in the candidate set. Regions in intermediate frames along trajectory paths may also be used as additional predictions. In a further embodiment, mode decisions along trajectory paths may be used to predict or prioritize mode decisions in the current input block being predicted.
In further embodiments, information about the relative quality of the tracks, motion vectors, and predictions generated by the CBT or TB-CBT can be computed at different points in the encoding process and then fed back into the encoder to inform future encoding decisions. Metrics such as motion vector symmetry and flat block detection may be used to assess how reliable track-based predictions from the CBT or TB-CBT are and to promote or demote those predictions relative to non-track-based predictions or intra-predictions accordingly.
In additional embodiments, motion vector directions and magnitudes along a CBT or TB-CBT track may be used to determine whether the motion of the input block being tracked is close to translational. If so, a translational motion model may be determined for that track, and points on the track may be analyzed for goodness-of-fit to the translational motion model. This can lead to better selection of reference frames for predicting the region. Translational motion model analysis may be extended to all input blocks in a frame as part of an adaptive picture type selection algorithm. To do this, one may determine whether a majority of blocks in the frame fit well to a frame-average translational motion model, leading to a determination of whether the motion in the frame is “well-modeled” and of which picture type would be most appropriate (B-frames for well-modeled motion, P-frames for poorly-modeled motion).
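A simplified version of the frame-level check might look like the following, with illustrative tolerance and majority thresholds (the description does not fix exact values):

```python
# Hedged sketch of the frame-level translational-model check. Each
# block's motion vector is compared to the frame-average vector; if
# most blocks fit, the frame's motion is deemed well-modeled.

def frame_average_mv(mvs):
    """Frame-average translational motion model: the mean motion vector."""
    n = len(mvs)
    return (sum(v[0] for v in mvs) / n, sum(v[1] for v in mvs) / n)

def well_modeled(mvs, tol=1.0, majority=0.5):
    """True when more than `majority` of the blocks deviate from the
    frame-average model by at most `tol` per component (thresholds are
    illustrative assumptions)."""
    ay, ax = frame_average_mv(mvs)
    fits = sum(1 for vy, vx in mvs
               if abs(vy - ay) <= tol and abs(vx - ax) <= tol)
    return fits / len(mvs) > majority

def pick_picture_type(mvs):
    """B-frame for well-modeled motion, P-frame for poorly-modeled motion."""
    return "B" if well_modeled(mvs) else "P"
```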
Other embodiments may apply look-ahead tracking (LAT) to provide rate control information (in the form of quantization parameter settings) and scene change detection for the current frame being encoded. The LAT of the present invention is distinguished from other types of look-ahead processing because the complexity calculations that determine the look-ahead parameters are dependent on the continuous tracking results (CBT or TB-CBT).
The present invention is structured so that the resulting bitstream is compliant with any standard codec—including but not limited to MPEG-2, H.264, and HEVC—that employs block-based motion estimation followed by transform, quantization, and entropy encoding of residual signals.
Computer-based methods, codecs, and other computer systems and apparatus for processing video data may embody the foregoing principles of the present invention.
Methods, systems, and computer program products for encoding video data may be provided using continuous block tracking (CBT). A plurality of source video frames having non-overlapping input blocks may be encoded. For each input block to be encoded, CBT may be applied for initial motion estimation within a model-based inter-prediction process to produce CBT motion vector candidates. Frame-to-frame tracking of each input block in a current frame referenced to a source video frame may be applied, which results in a set of frame-to-frame CBT motion vectors. The CBT motion vectors may be configured to specify, for each input block, a location of a matching region in a temporally-adjacent source video frame.
Continuous tracking over multiple reference frames may be provided by relating frame-to-frame motion vectors over the multiple reference frames. The continuous tracking may result in a set of continuous tracking motion vectors that are configured to specify, for each input block, a location of a matching region in a temporally non-adjacent source video frame. The continuous tracking motion vectors may be derived from frame-to-frame motion vectors by interpolating neighboring frame-to-frame motion vectors, in which the neighboring frame-to-frame motion vectors are weighted according to their overlap with the matching region indicated by the frame-to-frame motion vector.
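The overlap-weighted interpolation can be sketched as follows, assuming 16×16 blocks addressed by their top-left corners (the names and the area-based weighting details are illustrative, and at least one neighbor is assumed to overlap the matched region):

```python
# Sketch of overlap-weighted interpolation of frame-to-frame motion
# vectors. The matched region in the previous frame generally straddles
# several block positions, so the continued vector averages those
# blocks' frame-to-frame vectors, weighted by shared area.

def overlap_area(region, block, size=16):
    """Overlap in pels between two size x size squares given by their
    (top, left) corners."""
    (rt, rl), (bt, bl) = region, block
    return max(0, size - abs(rt - bt)) * max(0, size - abs(rl - bl))

def continued_mv(region, neighbor_mvs, size=16):
    """Area-weighted average of neighboring blocks' frame-to-frame
    motion vectors; `neighbor_mvs` maps (top, left) corners to (dy, dx)."""
    total = dy = dx = 0
    for block, (vy, vx) in neighbor_mvs.items():
        w = overlap_area(region, block, size)
        total += w
        dy += w * vy
        dx += w * vx
    return (dy / total, dx / total)
```

Chaining such interpolated vectors frame by frame yields the continuous tracking motion vectors that point into temporally non-adjacent reference frames.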
CBT motion vector candidates may be combined with enhanced predictive zonal search (EPZS) motion vector candidates to form an aggregate set of initial CBT/EPZS motion vector candidates. The initial set of CBT/EPZS motion vector candidates may be filtered separately using an approximate rate-distortion criterion, which results in a “best” CBT candidate and a “best” EPZS candidate. Fine motion estimation may be performed on the best CBT and best EPZS candidates. The best initial inter-prediction motion vector candidate may then be selected from between the best CBT and the best EPZS motion vector candidates by means of rate-distortion optimization.
CBT motion vector candidates may be combined with enhanced predictive zonal search (EPZS) motion vector candidates, and this may be done at an earlier stage via approximate rate-distortion optimization. In this way, the CBT motion vector candidates and EPZS motion vector candidates may be unified, which results in a single “best” CBT/EPZS candidate. The fine motion estimation may be performed on the single best CBT/EPZS candidate. Encoding mode generation and final rate-distortion analysis may be used to determine the best inter-prediction motion vector candidate.
Methods, systems, and computer program products for encoding video data may be provided using trajectory-based continuous block tracking (TB-CBT) prediction. Continuous tracking motion vectors may be selected that correspond to at least one subject data block over multiple reference frames. The centers of the regions in the reference frames corresponding to the selected continuous tracking motion vectors may be related to form a trajectory-based (TB) motion model that models a motion trajectory of the respective centers of the regions over the multiple reference frames. Using the formed trajectory motion model, a region in the current frame may be predicted. The predicted region may be determined based on a computed offset between the trajectory landing location in the current frame and the nearest data block in the current frame to determine TB-CBT predictions.
The TB-CBT predictions may be combined with enhanced predictive zonal search (EPZS) motion vector candidates to form an aggregate set of initial TB-CBT/EPZS motion vector candidates. The initial set of TB-CBT/EPZS motion vector candidates may be filtered separately by an approximate rate-distortion criterion, which results in a “best” TB-CBT candidate and a “best” EPZS candidate. Fine motion estimation may be applied to the best TB-CBT and best EPZS candidates. The best initial inter-prediction motion vector candidate may then be selected from between the best TB-CBT and the best EPZS motion vector candidates by means of rate-distortion optimization.
The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, with emphasis instead placed on illustrating embodiments of the present invention.
The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety. A description of example embodiments of the invention follows.
The invention can be applied to various standard encodings. In the following, unless otherwise noted, the terms “conventional” and “standard” (sometimes used together with “compression,” “codecs,” “encodings,” or “encoders”) can refer to MPEG-2, MPEG-4, H.264, or HEVC. “Input blocks” are referred to without loss of generality as the basic coding unit of the encoder and may also sometimes be referred to interchangeably as “data blocks” or “macroblocks.”
Standard Inter-Prediction
The encoding process may convert video data into a compressed, or encoded, format. Likewise, the decompression process, or decoding process, may convert compressed video back into an uncompressed, or raw, format. The video compression and decompression processes may be implemented as an encoder/decoder pair commonly referred to as a codec.
Most inter-prediction algorithms begin with initial motion estimation (110 in
Next, for a given fine motion vector 135, a mode generation module 140 generates a set of candidate predictions 145 based on the possible encoding modes of the encoder. These modes vary depending on the codec. Different encoding modes may account for (but are not limited to) interlaced vs. progressive (field vs. frame) motion estimation, direction of the reference frame (forward-predicted, backward-predicted, bi-predicted), index of the reference frame (for codecs such as H.264 and HEVC that allow multiple reference frames), inter-prediction vs. intra-prediction (certain scenarios allowing reversion to intra-prediction when no good inter-predictions exist), different quantization parameters, and various subpartitions of the input block. The full set of prediction candidates 145 undergoes “final” rate-distortion analysis 150 to determine the best single candidate. In “final” rate-distortion analysis, a precise rate-distortion metric D+λR is used, computing the prediction error D for the distortion portion and the actual encoding bits R (from the entropy encoding 90 in
As noted in the Introduction section, conventional inter-prediction is based on block-based motion estimation and compensation (BBMEC). The BBMEC process searches for the best match between the input block 10 and same-sized regions within previously-decoded reference frames. The simplest form of the BBMEC algorithm initializes the motion estimation using a (0, 0) motion vector, meaning that the initial estimate of the input block is the co-located block in the reference frame. Fine motion estimation is then performed by searching in a local neighborhood for the region that best matches (i.e., has lowest error in relation to) the input block. The local search may be performed by exhaustive query of the local neighborhood (termed here full block search) or by any one of several “fast search” methods, such as a diamond or hexagonal search.
As also noted in the Introduction section, the enhanced predictive zonal search (EPZS) algorithm [Tourapis, A., 2002] considers a set of initial motion estimates for a given input block, based on the motion vectors of neighboring blocks that have already been encoded, as well as the motion vectors of the co-located block (and neighbors) in the previous reference frame. The algorithm hypothesizes that the video's motion vector field has some spatial and temporal redundancy, so it is logical to initialize motion estimation for an input block with motion vectors of neighboring blocks. Once the set of initial estimates has been gathered (115 in
Inter-Prediction Via Continuous Block Tracking
The first step in CBT is to perform frame-to-frame tracking (210 in
An alternative embodiment of frame-to-frame tracking via hierarchical motion estimation (HME) is illustrated in
As shown in
Returning to
In an alternative embodiment, the CBT and EPZS motion vector candidates 255 and 265 in
In a further embodiment, one or more of the candidates from the five or more streams may be filtered using approximate rate-distortion analysis as described above, to save on computations for the final rate-distortion analysis. Any combination of candidates from the five or more streams may be filtered or passed on to the remaining inter-prediction steps.
In another embodiment, the proximity of multiple initial estimates (the outputs of 110 in
While a few specific embodiments of unified motion estimation have been detailed above, the number and type of candidates, as well as the candidate filtering method, may vary depending on the application.
Extensions to Continuous Block Tracking
The trajectory 750 can then be used to predict what region 705 in the current Frame t (700) should be associated with the motion of the content in the data block 715. The region 705 may not (and probably will not) be aligned with a data block (macroblock) in Frame t, so one can determine the nearest data block 706, with an offset 707 between the region 705 and the nearest data block 706.
Different types of TB-CBT predictors are possible, depending on how many reference frames are used to form the trajectory. In the trivial case, just the data block 715 in Frame t-1 is used, resulting in a 0th-order prediction, the (0, 0) motion vector, between Frame t and Frame t-1. Using Frames t-2 and t-1 to form the trajectory results in a 1st-order (linear) prediction. Using Frames t-3, t-2, and t-1 to form the trajectory results in a 2nd-order prediction, which is what is depicted for the trajectory 750 in
The TB-CBT prediction for the data block 706 is then determined by following the trajectory backward to the furthest reference frame in the trajectory. In
The TB-CBT predictors are derived without the need for any additional frame-to-frame tracking between the current Frame t and the most recent reference frame, Frame t-1, thus making the TB-CBT predictor computationally efficient. The TB-CBT predictor may either be added to the basic CBT predictor as another candidate in the rate-distortion optimization or can replace the CBT candidate.
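The trajectory extrapolation and block-snapping steps might be sketched as below; the 0th/1st/2nd-order formulas are the standard polynomial extrapolations for equally spaced frames, and the block size and function names are assumptions:

```python
# Hedged sketch of trajectory-based prediction: extrapolate the matched
# region centers from recent reference frames into the current frame,
# then snap to the nearest data block and record the offset.

def extrapolate(centers):
    """`centers` lists (y, x) region centers ordered oldest to newest
    (e.g. Frames t-3, t-2, t-1); returns the predicted center in Frame t."""
    if len(centers) == 1:                            # 0th order: (0, 0) motion
        return centers[-1]
    if len(centers) == 2:                            # 1st order (linear)
        (y2, x2), (y1, x1) = centers[-2:]
        return (2 * y1 - y2, 2 * x1 - x2)
    (y3, x3), (y2, x2), (y1, x1) = centers[-3:]      # 2nd order (quadratic)
    return (3 * y1 - 3 * y2 + y3, 3 * x1 - 3 * x2 + x3)

def nearest_block_and_offset(center, block_size=16):
    """Snap a predicted center to the nearest block-aligned position and
    return (block_top_left, offset_from_that_block_center)."""
    by = round((center[0] - block_size / 2) / block_size) * block_size
    bx = round((center[1] - block_size / 2) / block_size) * block_size
    offset = (center[0] - (by + block_size / 2),
              center[1] - (bx + block_size / 2))
    return (by, bx), offset
```

For content in steady motion the quadratic case simply continues the existing trajectory; the returned offset is what relates the landing location to the nearest data block.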
In a further embodiment, the history of encoding mode selections (e.g., macroblock type, subpartition choice) along a trajectory can be used to prioritize or filter the encoding modes for the current data block being encoded. The encoding modes associated with a given trajectory would be derived from the encoding modes for the data blocks nearest the regions along the trajectory. Any encoding modes used in these data blocks would gain priority in the rate-distortion optimization (RDO) process for the current data block, since it is likely that the content of the data represented by the trajectory could be efficiently encoded in the same way. Other encoding modes that are very different from the prioritized encoding modes, and thus unlikely to be chosen, could be eliminated (filtered) from the RDO process for the current data block, thereby saving computations.
In further embodiments, information about the relative quality of the tracks, motion vectors, and predictions generated by the CBT or TB-CBT can be computed at different points in the encoding process and then used to inform current and future tracking and encoding decisions.
In one embodiment, rate-distortion “scores” (the values of the final rate-distortion metric D+λR) from neighboring, already-encoded input blocks may be fed back to the encoding of the current input block to determine how many motion vector candidates should be passed forward to final rate-distortion analysis. For example, low rate-distortion scores indicate good prediction in the neighboring input blocks, meaning that the random motion vector candidate, the median predictor, and the (0, 0) candidate may not be needed for the current input block. By contrast, high rate-distortion scores indicate poor prediction in the neighboring input blocks, meaning that all candidate types—and possibly multiple EPZS candidates—should be sent to final rate-distortion analysis. In a further embodiment, the number of candidates to pass forward to final rate-distortion analysis may be scaled inversely to the rate-distortion scores. In this embodiment, the candidates are ranked according to their approximate rate-distortion metric values, with lower values indicating higher priority.
In an alternative embodiment, statistics for rate-distortion scores can be accumulated for the most recent reference frame[s], and these statistics can be used to calculate a threshold for filtering the prediction candidates in the current frame being encoded. For example, one could derive a threshold as the 75th or 90th percentile of rate-distortion scores (sorted from largest to smallest) in the most recent encoded frame[s]. In the current frame, any candidates whose approximate rate-distortion scores are higher than the threshold could then be removed from consideration for final rate-distortion analysis, thereby saving computations.
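A simplified version of the percentile-threshold filter follows (nearest-rank percentile over ascending scores; a production encoder might interpolate, or sort descending as described above):

```python
# Simplified percentile-threshold candidate filter (names assumed).
# Scores from recently encoded frames set the cutoff; current-frame
# candidates with approximate scores above it are dropped before final
# rate-distortion analysis.

def percentile(scores, pct):
    """Nearest-rank percentile (simplified; real systems may interpolate)."""
    ordered = sorted(scores)
    idx = min(len(ordered) - 1, int(pct / 100.0 * len(ordered)))
    return ordered[idx]

def filter_candidates(candidates, recent_scores, pct=75):
    """Keep (mv, score) pairs whose approximate rate-distortion score is
    at or below the percentile threshold from recent frames."""
    threshold = percentile(recent_scores, pct)
    return [(mv, s) for mv, s in candidates if s <= threshold]
```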
In another embodiment, the quality of the tracks generated by the CBT (or the TB-CBT) and the quality of the corresponding CBT-based motion vectors can be measured and used to inform current and future tracking and encoding decisions. For example, motion vector symmetry [Bartels, C. and de Haan, G., 2009, “Temporal symmetry constraints in block matching,” Proc. IEEE 13th Int'l. Symposium on Consumer Electronics, pp. 749-752], defined as the relative similarity of pairs of counterpart motion vectors when the temporal direction of the motion estimation is switched, is a measure of the quality of calculated motion vectors (the higher the symmetry, the better the motion vector quality). The “symmetry error vector” is defined as the difference between the motion vector obtained through forward-direction motion estimation and the motion vector obtained through backward-direction motion estimation. Low motion vector symmetry (a large symmetry error vector) is often an indicator of the presence of complex phenomena such as occlusions (one object moving in front of another, thus either covering or revealing the background object), motion of objects on or off the video frame, and illumination changes, all of which make it difficult to derive accurate motion vectors.
In one embodiment, motion vector symmetry is measured for frame-to-frame motion vectors in the HME framework, so that coarse motion vectors in the upper levels of the HME pyramid with high symmetry are more likely to be propagated to the lower levels of the HME pyramid; whereas low-symmetry motion vectors in the upper levels of the HME pyramid are more likely to be replaced with alternative motion vectors from neighboring blocks that can then be propagated to the lower levels of the HME pyramid. In one embodiment, low symmetry is declared when the symmetry error vector is larger in magnitude than half the extent of the data block being encoded (e.g., larger in magnitude than an (8, 8) vector for a 16×16 macroblock). In another embodiment, low symmetry is declared when the symmetry error vector is larger in magnitude than a threshold based on motion vector statistics derived during the tracking process, such as the mean motion vector magnitude plus a multiple of the standard deviation of the motion vector magnitude in the current frame or some combination of recent frames.
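The symmetry test can be sketched as follows, with the half-block-extent threshold of the first embodiment above; the backward vector is compared sign-flipped (i.e., added rather than subtracted) so that a perfectly symmetric forward/backward pair yields a zero error vector, which is an assumption about the intended convention:

```python
# Sketch of the motion vector symmetry check. A forward vector and its
# backward counterpart for the same content should be approximate
# negatives of each other; their mismatch is the symmetry error vector.

def symmetry_error(forward_mv, backward_mv):
    """Symmetry error vector: mismatch between the forward vector and
    the negated backward vector."""
    return (forward_mv[0] + backward_mv[0], forward_mv[1] + backward_mv[1])

def low_symmetry(forward_mv, backward_mv, block_size=16):
    """Low symmetry: the error vector magnitude exceeds that of an
    (8, 8) vector for a 16x16 block (half the block extent)."""
    ey, ex = symmetry_error(forward_mv, backward_mv)
    half = block_size / 2
    return (ey * ey + ex * ex) ** 0.5 > (2 * half * half) ** 0.5
```

In the HME pyramid, a coarse vector flagged by `low_symmetry` would be a candidate for replacement by a neighboring block's vector before propagation to lower levels.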
In another embodiment, the motion vector symmetry measured during HME frame-to-frame tracking may be combined with prediction error measurements to detect the presence of occlusions and movement of objects onto or off the video frame (the latter henceforth referred to as “border motion” for brevity). Prediction error may be calculated, for example, as the sum of absolute differences (SAD) or sum of squared differences (SSD) between pixels of the data block being encoded and pixels of a region in a reference frame pointed to by a motion vector. When occlusion or border motion occurs, the motion vector symmetry will be low, while the error in one direction (either forward, where the reference frame is later than the current frame, or backward, where the reference frame is previous to the current frame) will be significantly lower than the error in the other. In this case, the motion vector that produces the lower error is the more reliable one.
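A minimal sketch of the error measurement and direction selection described above follows; the `ratio` margin used to decide that one direction's error is "significantly lower" is an assumed tuning parameter.

```python
def sad(block, ref_region):
    # Sum of absolute differences between the data block being encoded
    # and the reference-frame region pointed to by a motion vector.
    return sum(abs(a - b)
               for row_a, row_b in zip(block, ref_region)
               for a, b in zip(row_a, row_b))

def reliable_direction(err_forward, err_backward, ratio=2.0):
    # Under low motion vector symmetry, the direction whose prediction
    # error is markedly lower is treated as the reliable one.
    if err_forward * ratio <= err_backward:
        return "forward"
    if err_backward * ratio <= err_forward:
        return "backward"
    return None  # neither direction clearly dominates
```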
In a further embodiment for low motion vector symmetry cases, data blocks in a “higher error” direction (as defined above) may be encoded with high fidelity using intra-prediction. Such a scenario likely occurs because the content of the data block has been revealed (after being occluded) or has come onto the video frame, making good inter-predictions for that data block unlikely and dictating intra-prediction instead.
In a further embodiment for low motion vector symmetry cases, identification of data blocks in a “lower error” direction (as defined above) may be used to eliminate regions and reference frames in the other, “higher error” direction as future candidates for motion estimation. This elimination also removes bidirectional motion estimation candidates (in which predictions are a combination of regions in the forward and backward directions) from consideration, since one direction is unreliable. Besides eliminating candidates that are likely to be inaccurate (because of occlusions or motion off the video frame), this process has the additional benefit of reducing the number of candidates considered during rate-distortion optimization, thus reducing computation time.
In another embodiment, the motion vector symmetry measured during HME frame-to-frame tracking may be combined with prediction error measurements to detect the presence of illumination changes such as flashes, fades, dissolves, or scene changes. In contrast to the occlusion/border motion scenario above, illumination changes may be indicated by low motion vector symmetry and high error in both directions (forward and backward). In a further embodiment, detection of such illumination changes may dictate a de-emphasis of tracking-based candidates (such as from the CBT or TB-CBT) in favor of non-tracking-based candidates such as EPZS candidates, the (0, 0) motion vector, or the median predictor. In an alternative embodiment, detection of illumination changes may be followed by weighted bidirectional prediction using CBT or TB-CBT motion vectors (and corresponding reference frame regions) in both directions, with the weightings determined by measurements of average frame intensities in the forward and backward directions.
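The distinction drawn above between occlusion/border motion (one-sided error) and illumination change (high error in both directions), together with the intensity-based bidirectional weighting, may be sketched as follows. The error threshold `err_high`, the `ratio` margin, and the weighting rule are illustrative assumptions.

```python
def classify_low_symmetry_block(err_fwd, err_bwd, err_high, ratio=2.0):
    # Given a block already flagged as low-symmetry, distinguish the two
    # scenarios described in the text.
    if err_fwd > err_high and err_bwd > err_high:
        return "illumination_change"        # high error in both directions
    if err_fwd * ratio < err_bwd or err_bwd * ratio < err_fwd:
        return "occlusion_or_border_motion" # error clearly one-sided
    return "indeterminate"

def bidirectional_weights(mean_fwd, mean_bwd, mean_cur):
    # Weight each prediction direction by how close its average frame
    # intensity is to the current frame's: the closer side (smaller
    # intensity distance) receives the larger weight.
    d_f = abs(mean_cur - mean_fwd)
    d_b = abs(mean_cur - mean_bwd)
    total = d_f + d_b
    if total == 0:
        return 0.5, 0.5
    return d_b / total, d_f / total  # (forward weight, backward weight)
```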
In another embodiment, a flat block detection algorithm can be applied during the first stages (upper levels) of HME tracking to determine the presence of “flat blocks” in the data, homogeneous (or ambiguous) regions in the data that usually result in inaccurate motion vectors. Flat blocks may be detected, for example, using an edge detection algorithm (where a flat block would be declared if no edges are detected in a data block) or by comparing the variance of a data block to a threshold (variance less than the threshold would indicate a flat block). Similar to the use of the motion vector symmetry metric, flat block detection would dictate replacing the motion vectors for those blocks with motion vectors from neighboring blocks, prior to propagation to the lower levels of the HME pyramid. In another embodiment, flat block detection would dictate an emphasis on the (0, 0) motion vector candidate or the median predictor, since it is likely that, in a flat region, many different motion vectors will produce similar errors. In this case, the (0, 0) motion vector is attractive because it requires few bits to encode and is unlikely to produce larger prediction error than other motion vectors with larger magnitudes. The median predictor is desirable in flat block regions because it provides a consensus of motion vectors in neighboring blocks, preventing the motion vector field from becoming too chaotic due to small fluctuations in the pixels in the flat block region.
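The variance-based flat block test and the median predictor mentioned above may be sketched as follows; the variance threshold value and function names are illustrative assumptions.

```python
def is_flat_block(block, var_threshold=25.0):
    # Variance test: a block whose pixel variance falls below the
    # threshold is declared "flat" (a homogeneous/ambiguous region).
    pixels = [p for row in block for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return var < var_threshold

def median_predictor(mv_left, mv_top, mv_topright):
    # Component-wise median of neighboring motion vectors, the usual
    # form of the median predictor: a consensus of neighboring blocks.
    med = lambda a, b, c: sorted((a, b, c))[1]
    return (med(mv_left[0], mv_top[0], mv_topright[0]),
            med(mv_left[1], mv_top[1], mv_topright[1]))
```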
In further embodiments, metrics such as motion vector symmetry and flat block detection could be accumulated from multiple frame-to-frame tracking results associated with a continuous track, to determine a cumulative track quality measure that could be associated with the resulting CBT motion vector. This track quality measure could then be used to determine the relative priority of the CBT motion vector in the rate-distortion analysis compared to other (non-tracker-based) candidates. A high quality track (corresponding to high motion vector symmetry and no flat block detection for the motion vectors and regions along the track) would indicate higher priority for the CBT candidate. Additionally, a high track quality score could be used to override a “skip” mode decision from the encoder for the data block being encoded, in favor of the CBT candidate.
Additional statistics based on motion vector directions and magnitudes along CBT tracks may be used to improve encoding choices. In one embodiment, motion vector directions and magnitudes along a CBT track may be used to determine whether the motion of the region being tracked is close to translational, in which case the directions and magnitudes of the motion vectors along the track would be nearly constant. Non-constant motion vector magnitudes would indicate motion acceleration, a violation of the constant-velocity assumption of translational motion. Non-constant motion vector directions would violate the “straight-line” assumption of translational motion. If most points along a CBT track fit well to a particular translational motion model, one could observe the points that do not fit the model well. In one embodiment, the reference frames corresponding to the points along a CBT track that do not fit the translational motion model for the track may be excluded from rate-distortion analysis, as the regions in those reference frames would be unlikely to provide good predictions for the data block being encoded in the current frame. Goodness of fit of a particular point on a track to the translational motion model for that track may be determined, for example, by percentage offset of the motion vector magnitude from the constant velocity of the translational motion model and by percentage offset of the motion vector direction from the direction indicated by the translational motion model. The exclusion of certain reference frames from rate-distortion analysis, besides eliminating candidates that are likely to be inaccurate (because of poor fit to the motion of the region being tracked), will also reduce the number of candidates considered during rate-distortion optimization, thus reducing computation time.
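The goodness-of-fit test described above may be sketched as follows, using fractional offsets of magnitude and direction from the track-average (constant-velocity) motion; the tolerance values are illustrative assumptions, not values fixed by the text.

```python
import math

def translational_fit(track_mvs, mag_tol=0.2, dir_tol=0.2):
    """Check each frame-to-frame motion vector on a track against the
    track's average motion (a simple constant-velocity model).

    Returns a list of booleans: True where the point fits the model,
    False for outliers whose reference frames could be excluded from
    rate-distortion analysis.
    """
    n = len(track_mvs)
    avg_x = sum(v[0] for v in track_mvs) / n
    avg_y = sum(v[1] for v in track_mvs) / n
    avg_mag = math.hypot(avg_x, avg_y)
    avg_dir = math.atan2(avg_y, avg_x)
    fits = []
    for vx, vy in track_mvs:
        mag_off = abs(math.hypot(vx, vy) - avg_mag) / (avg_mag or 1.0)
        dir_off = abs(math.atan2(vy, vx) - avg_dir)
        fits.append(mag_off <= mag_tol and dir_off <= dir_tol)
    return fits
```

Note that a robust implementation would handle angle wrap-around near ±π; the sketch omits this for brevity.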
In another embodiment, translational motion model analysis as above may be extended to the CBT tracks for all data blocks in a frame, as part of an adaptive picture type selection algorithm. In one embodiment, each CBT track is examined for translational motion model fit, and an average translational motion model is determined for the entire frame (tracks not fitting a translational motion model are excluded from the frame average motion model calculation). If a majority of data blocks in the frame show translational motion close to the frame average motion model (or the “global motion” of the frame), the motion in that frame is determined to be “well-modeled,” indicating that the frame should be encoded as a B-frame. If most of the data blocks in the frame do not show translational motion or show multiple translational motion models not close to the frame average motion model, the motion in that frame is determined to be “not well-modeled,” indicating that the frame should be encoded as a P-frame.
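The adaptive picture type decision above may be sketched as follows; the closeness and majority thresholds are illustrative assumptions, and blocks without a translational fit are marked `None`.

```python
import math

def pick_frame_type(block_models, frame_model, closeness=1.0, majority=0.5):
    # Count data blocks whose per-track translational motion is close to
    # the frame-average ("global") motion; a well-modeled frame (clear
    # majority of close blocks) is encoded as a B-frame, otherwise P.
    close = 0
    for model in block_models:
        if model is None:
            continue  # no translational fit for this block's track
        if math.hypot(model[0] - frame_model[0],
                      model[1] - frame_model[1]) <= closeness:
            close += 1
    return "B" if close > majority * len(block_models) else "P"
```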
In another embodiment, either trajectory analysis or translational motion model analysis as described above may be used to provide additional predictors in cases where motion vectors from frame-to-frame motion estimation are unreliable. Trajectory-based candidates are desirable in cases when the best predictor for the current data block is not nearby temporally (i.e., regions in the most recent reference frames) but resides in a more distant reference frame. Such cases may include periodic motion (e.g., a carousel), periodic illumination changes (e.g., strobe lights), and occlusions. Translational motion model analysis can provide better predictors through the estimated global motion of the frame when the best prediction for the current data block is not available through either frame-to-frame motion estimation or through motion vectors in neighboring blocks, but is better indicated by the overall motion in the frame. Such cases may include chaotic foreground motion against steady background motion (e.g., confetti at a parade) and flat blocks.
Once the CBT motion vectors are generated from the application of CBT 830, the next step is to perform a frame complexity analysis 840 on each of the future frames, based on the relative accuracy of the motion estimation. In one embodiment, the complexity of a frame is measured by summing the error of each input block in the frame (measured using sum-of-absolute-differences or mean-squared error) when compared with its matching region (the region pointed to by its motion vector) in the previous frame. The frame complexity is thus the sum of all the block error values. Rate control 860 then updates the quantization parameter (QP) 865 for the current frame according to the ratio of the complexity of the future frame to the complexity of the current frame. The idea is that if it is known that a more complex frame is upcoming, the current frame should be quantized more (resulting in fewer bits spent on the current frame) so that more bits are available to encode the future frame. The updated QP value 865 modifies both the encoding modes 140 for inter- and intra-prediction 870 as well as the later quantization step 60 where the residual signal is transformed and quantized. The LAT of the present invention is distinguished from other types of look-ahead processing because the complexity calculations that determine the look-ahead parameters are dependent on continuous tracking results.
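The complexity-driven QP update may be sketched as follows. The summation of block errors follows the text; the logarithmic mapping from the complexity ratio to a QP offset, and the `strength` parameter, are illustrative assumptions (actual rate control would also clip QP to the codec's valid range).

```python
import math

def frame_complexity(block_errors):
    # Frame complexity = sum of per-block prediction errors (e.g., SAD
    # or MSE against the matched region in the previous frame).
    return sum(block_errors)

def update_qp(qp_current, complexity_future, complexity_current, strength=6.0):
    # If a more complex frame is upcoming (ratio > 1), raise the current
    # frame's QP (coarser quantization, fewer bits spent now), saving
    # bits for the future frame; if less complex, lower it.
    ratio = complexity_future / max(complexity_current, 1e-9)
    return qp_current + strength * math.log2(ratio)
```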
In another embodiment, the frame complexity analysis 840 in the look-ahead processing module 815 is used to detect scene changes. This is particularly important for encoding based on the CBT, because tracking through scene changes results in inaccurate motion vectors. One way to use the frame complexity analysis 840 for scene change detection is to monitor the frame error as a time series over several frames and look for local peaks in the frame error time series. In one embodiment, a frame is declared a local peak (and a scene change detected) if the ratio of that frame's error to the surrounding frames' errors is higher than some threshold. Small windows (for example, up to three frames) can be applied in calculating the surrounding frames' errors to make that calculation more robust. Once a scene change is detected, the encoder is instructed to encode that frame as an I-frame (intra-prediction only) and to reset all trackers (frame-to-frame and continuous trackers). Again, this type of scene change detection is distinguished from conventional types of scene change detection because the LAT outputs are dependent on continuous tracking results.
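The local-peak scene change test above may be sketched as follows; the ratio threshold and window size are illustrative assumptions.

```python
def detect_scene_changes(frame_errors, ratio_threshold=3.0, window=3):
    # Flag frame i as a scene change when its error is a local peak:
    # larger than ratio_threshold times the mean error of up to
    # 'window' frames on each side (the small window makes the
    # surrounding-error estimate more robust).
    changes = []
    for i, err in enumerate(frame_errors):
        left = frame_errors[max(0, i - window):i]
        right = frame_errors[i + 1:i + 1 + window]
        neighbors = left + right
        if neighbors and err > ratio_threshold * (sum(neighbors) / len(neighbors)):
            changes.append(i)
    return changes
```

On detection, the encoder would encode the flagged frame as an I-frame and reset all frame-to-frame and continuous trackers, as described above.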
Digital Processing Environment
Example embodiments of the present invention may be implemented in a software, firmware, or hardware environment.
Client computer(s)/devices 950 can also be linked through communications network 970 to other computing devices, including other client devices/processes 950 and server computer(s) 960. Communications network 970 can be part of a remote access network, a global network (e.g., the Internet), a worldwide collection of computers, local area or wide area networks, and gateways that currently use respective protocols (TCP/IP, Bluetooth, etc.) to communicate with one another. Other electronic devices/computer network architectures are suitable.
Embodiments of the invention may include means for encoding, tracking, modeling, decoding, or displaying video or data signal information.
Disk storage 995 provides non-volatile storage for computer software instructions 998 (equivalently “OS program”) and data 994 used to implement an embodiment of the present invention; it can also be used to store the video in compressed format for long-term storage. Central processor unit 984 is also attached to system bus 979 and provides for the execution of computer instructions.
In one example, an encoder may be configured with computer readable instructions 992 to provide continuous block tracking (CBT) in a model-based inter-prediction and encoding scheme. The CBT may be configured to provide a feedback loop to an encoder (or elements thereof) to optimize the encoding of video data.
In one embodiment, the processor routines 992 and data 994 are a computer program product that includes a CBT engine (generally referenced 992) and a computer readable medium, capable of being stored on a storage device 994, that provides at least a portion of the software instructions for the CBT.
The computer program product 992 can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the CBT software instructions may also be downloaded over a cable, communication, and/or wireless connection. In other embodiments, the CBT system software is a computer program propagated signal product 907 (in
In alternate embodiments, the propagated signal is an analog carrier wave or digital signal carried on the propagated medium. For example, the propagated signal may be a digitized signal propagated over a global network (e.g., the Internet), a telecommunications network, or other network. In one embodiment, the propagated signal is transmitted over the propagation medium over a period of time, such as the instructions for a software application sent in packets over a network over a period of milliseconds, seconds, minutes, or longer. In another embodiment, the computer readable medium of computer program product 992 is a propagation medium that the computer system 950 may receive and read, such as by receiving the propagation medium and identifying a propagated signal embodied in the propagation medium, as described above for the computer program propagated signal product.
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 61/950,784, filed Mar. 10, 2014 and U.S. Provisional Application No. 62/049,342, filed Sep. 11, 2014. This application is related to U.S. application Ser. No. 13/797,644 filed on Mar. 12, 2013, which is a continuation-in-part of U.S. application Ser. No. 13/725,940 filed on Dec. 21, 2012, which claims the benefit of U.S. Provisional Application No. 61/615,795 filed on Mar. 26, 2012 and U.S. Provisional Application No. 61/707,650 filed on Sep. 28, 2012. This application is also related to U.S. patent application Ser. No. 13/121,904, internationally filed on Oct. 6, 2009, which is a U.S. National Stage of PCT/US2009/059653 filed Oct. 6, 2009, which claims the benefit of U.S. Provisional Application No. 61/103,362, filed Oct. 7, 2008. The '904 application is also a continuation-in part of U.S. patent application Ser. No. 12/522,322, internationally filed on Jan. 4, 2008, which claims the benefit of U.S. Provisional Application No. 60/881,966, filed Jan. 23, 2007, is related to U.S. Provisional Application No. 60/811,890, filed Jun. 8, 2006, and is a continuation-in-part of U.S. application Ser. No. 11/396,010, filed Mar. 31, 2006, now U.S. Pat. No. 7,457,472, which is a continuation-in-part of U.S. application Ser. No. 11/336,366 filed Jan. 20, 2006, now U.S. Pat. No. 7,436,981, which is a continuation-in-part of U.S. application Ser. No. 11/280,625 filed Nov. 16, 2005, now U.S. Pat. No. 7,457,435, which is a continuation-in-part of U.S. application Ser. No. 11/230,686 filed Sep. 20, 2005, now U.S. Pat. No. 7,426,285, which is a continuation-in-part of U.S. application Ser. No. 11/191,562 filed Jul. 28, 2005, now U.S. Pat. No. 7,158,680. U.S. application Ser. No. 11/396,010 also claims the benefit of U.S. Provisional Application No. 60/667,532, filed Mar. 31, 2005 and U.S. Provisional Application No. 60/670,951, filed Apr. 13, 2005. This present application is also related to U.S. 
Provisional Application No. 61/616,334, filed Mar. 27, 2012, U.S. Provisional Application No. 61/650,363 filed May 22, 2012 and U.S. application Ser. No. 13/772,230 filed Feb. 20, 2013, which claims the benefit of the '334 and '363 Provisional Applications. The entire teachings of the above application(s) are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5117287 | Koike et al. | May 1992 | A |
5586200 | Devaney et al. | Dec 1996 | A |
5608458 | Chen et al. | Mar 1997 | A |
5710590 | Ichige et al. | Jan 1998 | A |
5748247 | Hu | May 1998 | A |
5760846 | Lee | Jun 1998 | A |
5774591 | Black et al. | Jun 1998 | A |
5774595 | Kim | Jun 1998 | A |
5826165 | Echeita et al. | Oct 1998 | A |
5917609 | Breeuwer et al. | Jun 1999 | A |
5933535 | Lee et al. | Aug 1999 | A |
5969755 | Courtney | Oct 1999 | A |
5991447 | Eifrig et al. | Nov 1999 | A |
6044168 | Tuceryan et al. | Mar 2000 | A |
6061400 | Pearlstein et al. | May 2000 | A |
6069631 | Tao et al. | May 2000 | A |
6088484 | Mead | Jul 2000 | A |
6249318 | Girod et al. | Jun 2001 | B1 |
6256423 | Krishnamurthy et al. | Jul 2001 | B1 |
6307964 | Lin et al. | Oct 2001 | B1 |
6381275 | Fukuhara et al. | Apr 2002 | B1 |
6418166 | Wu et al. | Jul 2002 | B1 |
6546117 | Sun et al. | Apr 2003 | B1 |
6574353 | Schoepflin | Jun 2003 | B1 |
6608935 | Nagumo et al. | Aug 2003 | B2 |
6611628 | Sekiguchi et al. | Aug 2003 | B1 |
6614466 | Thomas | Sep 2003 | B2 |
6625310 | Lipton et al. | Sep 2003 | B2 |
6625316 | Maeda | Sep 2003 | B1 |
6640145 | Hoffberg et al. | Oct 2003 | B2 |
6646578 | Au | Nov 2003 | B1 |
6661004 | Aumond et al. | Dec 2003 | B2 |
6664956 | Erdem | Dec 2003 | B1 |
6711278 | Gu et al. | Mar 2004 | B1 |
6731799 | Sun et al. | May 2004 | B1 |
6731813 | Stewart | May 2004 | B1 |
6738424 | Allmen et al. | May 2004 | B1 |
6751354 | Foote et al. | Jun 2004 | B2 |
6774917 | Foote et al. | Aug 2004 | B1 |
6792154 | Stewart | Sep 2004 | B1 |
6842483 | Au | Jan 2005 | B1 |
6870843 | Stewart | Mar 2005 | B1 |
6909745 | Puri et al. | Jun 2005 | B1 |
6912310 | Park et al. | Jun 2005 | B1 |
6925122 | Gorodnichy | Aug 2005 | B2 |
6950123 | Martins | Sep 2005 | B2 |
7003117 | Kacker et al. | Feb 2006 | B2 |
7027599 | Entwistle | Apr 2006 | B1 |
7043058 | Cornog et al. | May 2006 | B2 |
7088845 | Gu et al. | Aug 2006 | B2 |
7095786 | Schonfeld et al. | Aug 2006 | B1 |
7158680 | Pace | Jan 2007 | B2 |
7162055 | Gu et al. | Jan 2007 | B2 |
7162081 | Timor et al. | Jan 2007 | B2 |
7164718 | Maziere et al. | Jan 2007 | B2 |
7173925 | Dantu et al. | Feb 2007 | B1 |
7184073 | Varadarajan et al. | Feb 2007 | B2 |
7227893 | Srinivasa et al. | Jun 2007 | B1 |
7352386 | Shum et al. | Apr 2008 | B1 |
7356082 | Kuhn | Apr 2008 | B1 |
7415527 | Varadarajan et al. | Aug 2008 | B2 |
7424157 | Pace | Sep 2008 | B2 |
7424164 | Gondek et al. | Sep 2008 | B2 |
7426285 | Pace | Sep 2008 | B2 |
7436981 | Pace | Oct 2008 | B2 |
7457435 | Pace | Nov 2008 | B2 |
7457472 | Pace et al. | Nov 2008 | B2 |
7508990 | Pace | Mar 2009 | B2 |
7574406 | Varadarajan et al. | Aug 2009 | B2 |
7606305 | Rault | Oct 2009 | B1 |
7630522 | Popp et al. | Dec 2009 | B2 |
7715597 | Costache et al. | May 2010 | B2 |
7738550 | Kuhn | Jun 2010 | B2 |
7788191 | Jebara | Aug 2010 | B2 |
7869518 | Kim | Jan 2011 | B2 |
8019170 | Wang | Sep 2011 | B2 |
8036464 | Sridhar et al. | Oct 2011 | B2 |
8065302 | Sridhar et al. | Nov 2011 | B2 |
8068677 | Varadarajan et al. | Nov 2011 | B2 |
8086692 | Sridhar et al. | Dec 2011 | B2 |
8090670 | Sridhar et al. | Jan 2012 | B2 |
8135062 | Cote | Mar 2012 | B1 |
8140550 | Varadarajan et al. | Mar 2012 | B2 |
8149915 | Novotny | Apr 2012 | B1 |
8243118 | Pace | Aug 2012 | B2 |
8259794 | Bronstein et al. | Sep 2012 | B2 |
8290038 | Wang et al. | Oct 2012 | B1 |
8290049 | Kondo | Oct 2012 | B2 |
8379712 | Park | Feb 2013 | B2 |
8737464 | Zhang et al. | May 2014 | B1 |
8902971 | Pace et al. | Dec 2014 | B2 |
8908766 | Pace | Dec 2014 | B2 |
8942283 | Pace | Jan 2015 | B2 |
8964835 | Pace | Feb 2015 | B2 |
9106977 | Pace | Aug 2015 | B2 |
9532069 | Pace et al. | Dec 2016 | B2 |
9578345 | DeForest et al. | Feb 2017 | B2 |
20010038714 | Masumoto et al. | Nov 2001 | A1 |
20020016873 | Gray et al. | Feb 2002 | A1 |
20020025001 | Ismaeil et al. | Feb 2002 | A1 |
20020054047 | Toyama et al. | May 2002 | A1 |
20020059643 | Kitamura et al. | May 2002 | A1 |
20020073109 | Toriumi | Jun 2002 | A1 |
20020085633 | Kim et al. | Jul 2002 | A1 |
20020114392 | Sekiguchi et al. | Aug 2002 | A1 |
20020116529 | Hayden | Aug 2002 | A1 |
20020164068 | Yan | Nov 2002 | A1 |
20020196328 | Piotrowski | Dec 2002 | A1 |
20030011589 | Desbrun et al. | Jan 2003 | A1 |
20030058943 | Zakhor et al. | Mar 2003 | A1 |
20030063778 | Rowe et al. | Apr 2003 | A1 |
20030103647 | Rui et al. | Jun 2003 | A1 |
20030112243 | Garg et al. | Jun 2003 | A1 |
20030122966 | Markman et al. | Jul 2003 | A1 |
20030163690 | Stewart | Aug 2003 | A1 |
20030169812 | Maziere et al. | Sep 2003 | A1 |
20030194134 | Wenzel et al. | Oct 2003 | A1 |
20030195977 | Liu et al. | Oct 2003 | A1 |
20030206589 | Jeon | Nov 2003 | A1 |
20030231769 | Bolle et al. | Dec 2003 | A1 |
20030235341 | Gokturk et al. | Dec 2003 | A1 |
20040013286 | Viola et al. | Jan 2004 | A1 |
20040017852 | Garrido et al. | Jan 2004 | A1 |
20040022320 | Kawada et al. | Feb 2004 | A1 |
20040028139 | Zaccarin et al. | Feb 2004 | A1 |
20040037357 | Bagni et al. | Feb 2004 | A1 |
20040081359 | Bascle et al. | Apr 2004 | A1 |
20040085315 | Duan et al. | May 2004 | A1 |
20040091048 | Youn | May 2004 | A1 |
20040107079 | MacAuslan | Jun 2004 | A1 |
20040113933 | Guier | Jun 2004 | A1 |
20040135788 | Davidson et al. | Jul 2004 | A1 |
20040246336 | Kelly, III et al. | Dec 2004 | A1 |
20040264574 | Lainema | Dec 2004 | A1 |
20050015259 | Thumpudi et al. | Jan 2005 | A1 |
20050128306 | Porter et al. | Jun 2005 | A1 |
20050185823 | Brown et al. | Aug 2005 | A1 |
20050193311 | Das et al. | Sep 2005 | A1 |
20050281335 | Ha | Dec 2005 | A1 |
20060013450 | Shan et al. | Jan 2006 | A1 |
20060029253 | Pace | Feb 2006 | A1 |
20060045185 | Kiryati | Mar 2006 | A1 |
20060067585 | Pace | Mar 2006 | A1 |
20060120571 | Tu et al. | Jun 2006 | A1 |
20060120613 | Su et al. | Jun 2006 | A1 |
20060133681 | Pace | Jun 2006 | A1 |
20060177140 | Pace | Aug 2006 | A1 |
20060204115 | Burazerovic | Sep 2006 | A1 |
20060233448 | Pace et al. | Oct 2006 | A1 |
20060274949 | Gallagher et al. | Dec 2006 | A1 |
20070025373 | Stewart | Feb 2007 | A1 |
20070053513 | Hoffberg | Mar 2007 | A1 |
20070071100 | Shi et al. | Mar 2007 | A1 |
20070071336 | Pace | Mar 2007 | A1 |
20070153025 | Mitchell et al. | Jul 2007 | A1 |
20070183661 | El-Maleh et al. | Aug 2007 | A1 |
20070185946 | Basri et al. | Aug 2007 | A1 |
20070239778 | Gallagher | Oct 2007 | A1 |
20070268964 | Zhao | Nov 2007 | A1 |
20070297645 | Pace et al. | Dec 2007 | A1 |
20080027917 | Mukherjee et al. | Jan 2008 | A1 |
20080040375 | Vo et al. | Feb 2008 | A1 |
20080043848 | Kuhn | Feb 2008 | A1 |
20080101652 | Zhao et al. | May 2008 | A1 |
20080117977 | Lee | May 2008 | A1 |
20080152008 | Sun et al. | Jun 2008 | A1 |
20080232477 | Wang et al. | Sep 2008 | A1 |
20080240247 | Lee et al. | Oct 2008 | A1 |
20090040367 | Zakrzewski et al. | Feb 2009 | A1 |
20090055417 | Hannuksela | Feb 2009 | A1 |
20090067719 | Sridhar et al. | Mar 2009 | A1 |
20090080855 | Senftner et al. | Mar 2009 | A1 |
20090112905 | Mukerjee et al. | Apr 2009 | A1 |
20090129474 | Pandit et al. | May 2009 | A1 |
20090158370 | Li et al. | Jun 2009 | A1 |
20090168884 | Lu | Jul 2009 | A1 |
20090175538 | Bronstein et al. | Jul 2009 | A1 |
20090262804 | Pandit et al. | Oct 2009 | A1 |
20090292644 | Varadarajan et al. | Nov 2009 | A1 |
20100008424 | Pace | Jan 2010 | A1 |
20100027861 | Shekhar et al. | Feb 2010 | A1 |
20100049739 | Varadarajan et al. | Feb 2010 | A1 |
20100073458 | Pace | Mar 2010 | A1 |
20100074600 | Putterman et al. | Mar 2010 | A1 |
20100086062 | Pace | Apr 2010 | A1 |
20100088717 | Candelore et al. | Apr 2010 | A1 |
20100135575 | Guo et al. | Jun 2010 | A1 |
20100135590 | Yang et al. | Jun 2010 | A1 |
20100167709 | Varadarajan et al. | Jul 2010 | A1 |
20100271484 | Fishwick | Oct 2010 | A1 |
20100272185 | Gao et al. | Oct 2010 | A1 |
20100278275 | Yang et al. | Nov 2010 | A1 |
20100290524 | Lu et al. | Nov 2010 | A1 |
20100316131 | Shanableh et al. | Dec 2010 | A1 |
20100322300 | Li et al. | Dec 2010 | A1 |
20100322309 | Huang et al. | Dec 2010 | A1 |
20110019026 | Kameyama | Jan 2011 | A1 |
20110055266 | Varadarajan et al. | Mar 2011 | A1 |
20110058609 | Chaudhury et al. | Mar 2011 | A1 |
20110087703 | Varadarajan et al. | Apr 2011 | A1 |
20110182352 | Pace | Jul 2011 | A1 |
20110221865 | Hyndman | Sep 2011 | A1 |
20110285708 | Chen et al. | Nov 2011 | A1 |
20110286627 | Takacs et al. | Nov 2011 | A1 |
20120044226 | Singh et al. | Feb 2012 | A1 |
20120079004 | Herman | Mar 2012 | A1 |
20120105654 | Kwatra et al. | May 2012 | A1 |
20120155536 | Pace | Jun 2012 | A1 |
20120163446 | Pace | Jun 2012 | A1 |
20120281063 | Pace | Nov 2012 | A1 |
20130027568 | Zou et al. | Jan 2013 | A1 |
20130035979 | Tenbrock | Feb 2013 | A1 |
20130083854 | Pace | Apr 2013 | A1 |
20130107948 | DeForest et al. | May 2013 | A1 |
20130114703 | DeForest et al. | May 2013 | A1 |
20130170541 | Pace et al. | Jul 2013 | A1 |
20130230099 | DeForest et al. | Sep 2013 | A1 |
20140286433 | He | Sep 2014 | A1 |
20140355687 | Takehara | Dec 2014 | A1 |
20150124874 | Pace | May 2015 | A1 |
20150189318 | Pace | Jul 2015 | A1 |
20160073111 | Lee et al. | Mar 2016 | A1 |
Number | Date | Country |
---|---|---|
0 614 318 | Sep 1994 | EP |
1 124 379 | Aug 2001 | EP |
1 250 012 | Oct 2002 | EP |
1 426 898 | Jun 2004 | EP |
1 779 294 | May 2007 | EP |
H03253190 | Nov 1991 | JP |
H05244585 | Sep 1993 | JP |
07-038873 | Feb 1995 | JP |
H0795587 | Apr 1995 | JP |
07-288789 | Oct 1995 | JP |
08-235383 | Sep 1996 | JP |
08-263623 | Oct 1996 | JP |
2000-20955 | Jul 2000 | JP |
2001-100731 | Apr 2001 | JP |
2001-103493 | Apr 2001 | JP |
2002-525735 | Aug 2002 | JP |
2004-94917 | Mar 2004 | JP |
2004 356747 | Dec 2004 | JP |
2006-521048 | Sep 2006 | JP |
2007-504696 | Mar 2007 | JP |
2009-501479 | Jan 2009 | JP |
2010-517426 | May 2010 | JP |
200521885 | Jul 2005 | TW |
200527327 | Aug 2005 | TW |
200820782 | May 2008 | TW |
WO 9827515 | Jun 1998 | WO |
WO 9859497 | Dec 1998 | WO |
WO 9926415 | May 1999 | WO |
WO 0016563 | Mar 2000 | WO |
WO 0045600 | Aug 2000 | WO |
WO 02102084 | Dec 2002 | WO |
WO 03041396 | May 2003 | WO |
WO 2005055602 | Jun 2005 | WO |
WO 2005107116 | Nov 2005 | WO |
WO 2006015092 | Feb 2006 | WO |
WO 2006034308 | Mar 2006 | WO |
WO 2006055512 | May 2006 | WO |
WO 2006083567 | Aug 2006 | WO |
WO 2006105470 | Oct 2006 | WO |
WO 2007007257 | Jan 2007 | WO |
WO 2007146102 | Dec 2007 | WO |
WO 2008091483 | Jul 2008 | WO |
WO 2008091484 | Jul 2008 | WO |
WO 2008091485 | Jul 2008 | WO |
WO 2010042486 | Apr 2010 | WO |
WO 2010118254 | Oct 2010 | WO |
WO 2011156250 | Dec 2011 | WO |
WO 2012033970 | Mar 2012 | WO |
WO 2013148002 | Oct 2013 | WO |
WO 2013148091 | Oct 2013 | WO |
WO 2014051712 | Apr 2014 | WO |
WO 2015138008 | Sep 2015 | WO |
WO 2016040116 | Mar 2016 | WO |
Entry |
---|
Notification Concerning Transmittal of International Preliminary Report on Patentability for PCT/US2014/063913, titled “Continuous Block Tracking for Temporal Prediction in Video Encoding,” Mailing Date: Sep. 22, 2016. |
Amit, Yali, 2D Object Detection and Recognition: Models, Algorithms, and Networks, The MIT Press, Cambridge, Massachusetts, pp. 147-149 (Sections 7.3: Detecting Pose and 7.4: Bibliographical Notes and Discussion) (2002). |
Antoszczyszyn, P.M., et al., “Tracking of the Motion of Important Facial Features in Model-Based Coding,” Signal Processing, 66(2):249-260, (Apr. 30, 1998). |
Bay, H., et al., “SURF: Speeded Up Robust Features”, ETH Zurich {bay, vangool}@vision.ee.ethz.ch, 1-14 (Date Not Provided). |
“Bit-Torrent: Introduction”, Retrieved on: Jan. 18, 2006, retrieved online at: http://web.archive.org/web/20060118042451/http://www.bittorrent.com/introduction.html. |
Brenneman, A., et al., “x264”, Wikipedia—The Free Encyclopedia: http://en.wikipedia.org/wiki/X264, 1-5 (Date Not Provided). |
Cho, J-H., et al., “Object detection using multi-resolution mosaic in image sequences,” Signal Processing. Image Communication, Elsevier Science Publishers, Amsterdam, vol. 20, No. 3, pp. 233-253, (Mar. 1, 2005). |
Dodgson, N. A., “Image resampling,” Technical Report, UCAM-CL-TR-261, ISSN 1476-2986, University of Cambridge, Computer Laboratory, (264 pp.) (Aug. 1992). |
Doenges, P. K., “MPEG-4: Audio/Video and Synthetic Graphics/Audio for Mixed Media,” Signal Processing: Image Communication, No. 9, pp. 433-463 (1997). |
Ebrahimi, T., et al. “MPEG-4 natural video coding—An Overview”, Signal Processing: Image Communication 15:365-385 (2000). |
Extended European Search Report for 06 73 3758.4, dated Mar. 8, 2011 (17 pages). |
Extended European Search Report for 06 74 0318.8, dated May 6, 2011 (14 pages). |
Fischler, M.A., et al., “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Communications of the Association for Computing Machinery, 24(6):381-395 (1981). |
Fukuhara, T., et al., “3-D Motion Estimation of Human Head for Model-Based Image Coding,” IEEE Proceedings-I, 140(1):26-35, (Feb. 1, 1993). |
Garrett-Glaser, J., “Diary of an x264 Developer”, http://x264dev.multimedia.cx/, 1-7 (2008). |
Gorodinchy, et al., “Seeing faces in video by computers. Editorial for Special Issue on Face Processing in Video Sequences,” Image and Vision Computing, Guilford, GB, vol. 24, No. 6, pp. 551-556 (Jun. 1, 2006). |
Gunsel, B. et al., “Content based access to video objects: Temporal segmentation, visual summarization, and feature extraction,” Signal Processing, vol. 66, pp. 261 280 (1998). |
“H.264/MPEG-4 AVC”, Wikipedia—The Free Encyclopedia: http://en.wikipedia.org/wiki/X264, 1-17 (Date Not Provided). |
Harris, C., et al., “A Combined Corner and Edge Detector,” Alvey Vision Conference, Proceedings of the Alvey Vision Conference, p. 147 (1988). |
Huang, R. et al., “Sparse representation of images with hybrid linear models,” in Proc. ICIP '04, 2(1281 1284) Oct. 2004. |
Huang, T.S. et al., “Chapter 5: Three-Dimensional Model-Based Image Communication,” Visual Information Representation, Communication, and Image Processing, Editors: Chen, Chang Wen, et al., Marcel Dekker, Inc., New York, New York, pp. 97-117 (1999). |
Intel Integrated Performance Primitives—Documentation, http://software.intel.com/en-us-articles/intel-integrated-performance-primitives-documentation/ (Retrieved on Dec. 21, 2012). |
International Search Report for International Application No. PCT/US2009/059653, 8 pp., mailed Feb. 2, 2010. |
Invitation to Pay Additional Fees and, Where Applicable, Protest Fee, for International Application No. PCT/US2008/000090, mailed Jun. 2, 2010. |
Irani, M., et al., “Detecting and Tracking Multiple Moving Objects Using Temporal Integration,” European Conference on Computer Vision, 282-287 (1992). |
Jolliffe, I.T., “Principal Component Analysis, Second Edition,” Springer, 518 pp., Apr. 2002. |
Jones, M. and P. Viola, “Fast Multi View Face Detection,” Mitsubishi Electrical Laboratories, Jul. 2003 (10 pp.). |
Kass, Michael, Andrew Witzin, and Demetri Terzopoulos, “Snakes: Active contour Models,” International Journal of Computer Vision (1988). |
Keysers, et al., “Deformation Models for Image Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(8):1422-1435 (2007). |
Lowe, D.G., “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision, 1-28 (2004). |
Miners, B. W., et al., “Dynamic Facial Expression Recognition Using Fuzzy Hidden Markov Models,” Systems, Man and Cybernetics, 2005 IEEE International Conference on, IEEE, Piscataway, N.J., USA, vol. 2, pp. 1417-1422 (Oct. 10, 2005). |
Neff, et al., “Matching-Pursuit Based Video Compression”, Department of Electrical Engineering and Computer Science, MPEG Meeting, Mar. 11, 1995. |
Notification and Transmittal of International Search Report and Written Opinion dated Jun. 10, 2013 for PCT/US2013/029297, entitled “Video Compression Repository and Model Reuse”. |
Notification Concerning Transmittal of International Preliminary Report on Patentability, PCT/US2013/025123, “Video Compression Repository and Model Reuse,” date of mailing Oct. 1, 2014. |
Notification Concerning Transmittal of International Preliminary Report on Patentability, PCT/US2013/025123, “Context Based Video Encoding and Decoding,” date of mailing Oct. 9, 2014. |
Notification Concerning Transmittal of International Preliminary Report on Patentability, in International Application No. PCT/US2008/000092, pp. 9, mailed Aug. 6, 2009. |
Notification Concerning Transmittal of International Preliminary Report on Patentability, in International Application No. PCT/US2008/000091, pp. 9, mailed Aug. 6, 2009. |
Notification Concerning Transmittal of International Preliminary Report on Patentability (Chapter I of the Patent Cooperation Treaty), for International Application No. PCT/US2008/00090, mailed Sep. 2, 2010. |
Notification Concerning Transmittal of the International Preliminary Report on Patentability for PCT/US2009/059653, mailed Apr. 21, 2011 (10 pages). |
Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for International Application No. PCT/US2008/000090, 19 pp., mailed Aug. 18, 2010. |
OpenCV Documentation Page, http://docs.openev.org/ (Retrieved on Dec. 21, 2012). |
Osama, et al., “Video Compression Using Matching Pursuits”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, No. 1, Feb. 1999. |
Park, et al., “Qualitative Estimation of Camera Motion Parameters From the Linear Composition of Optical Flow,” Pattern Recognition: The Journal of the Pattern Recognition Society, 37:767-779 (2004). |
Pati, Y.C., et al., “Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition”, 27th Annual Asilomar conference on Signals systems and Computers ,1-5 (1993). |
PCT International Search Report, for International Application No. PCT/US2008/000091, dated Sep. 23, 2008, 5 pages. |
PCT International Search Report, for International Application No. PCT/US2008/000092, dated Sep. 23, 2008, 5 pages. |
Piamsa nga, P. and N. Babaguchi, “Motion estimation and detection of complex object by analyzing resampled movements of parts,” in Proc. ICIP '04, 1 (365 368), Oct. 2004. |
Pique, R. et al., “Efficient Face Coding in Video Sequences Combining Adaptive Principal Component Analysis and a Hybrid Codec Approach,” Proceedings of International Conference on Acoustics, Speech and Signal Processing, 3:629-632(2003). |
Rehg, J. M. and Witkin, A. P., “Visual Tracking with Deformation Models,” Proc. IEEE Int'l. Conf. on Robotics and Automation, pp. 844-850 (Apr. 1991). |
Reinders, M.J.T., et al., “Facial Feature Localization and Adaptation of a Generic Face Model for model-Based Coding,” Signal Processing: Image Communication, No. 7, pp. 57-74 (1995). |
Richardson, I., “Vcodex White Paper: Video Compression Patents,” Vcodex Ltd., pp. 3-6 (2008-2011). |
Rong, S. et al., “Efficient spatiotemporal segmentation and video object generation for highway surveillance video,” in Proc. IEEE Int'l, Conf. Communications, Circuits and Systems and West Sino Expositions, 1(580 584), Jun. Jul. 2002. |
Schröder, K., et al., “Combined Description of Shape and Motion in an Object Based Coding Scheme Using Curved Triangles,” Proceedings of the International Conference on Image Processing, 2:390-393 (1995). |
“Series H: Audiovisual and Multimedia Systems: Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services”, ITU-T, H.264: 1-657 (2012). |
Shin, J. et al., “Optical flow-based real-time object tracking using non-prior training active feature model,” Academic Press Limited, GB, vol. 11, No. 3, pp. 204-218 (Jun. 1, 2005). |
Tabatabai, A. J., et al., “Motion Estimation Methods for Video Compression—A Review,” Journal of the Franklin Institute, 335(8): 1411-1441 (1998). |
Tao, H., et al., “Compression of MPEG-4 Facial Animation Parameters for Transmission of Talking Heads,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, No. 2, pp. 264-276 (Mar. 1999). |
Toklu, C. et al., “Simultaneous Alpha Map Generation and 2 D Mesh Tracking for Multimedia Applications,” Proceedings of the International Conference on Image Processing: 1997, (113 116) (Oct. 1997). |
Urban, M., “Harris Interest Operator,” Jan. 28, 2003, http://cmp.felk.cvut.cz/cmp/courses/dzo/resources/lecture—harris—urban.pdf (23 pp.). |
Vidal, R. and R. Hartley, “Motion segmentation with missing data using PowerFactorization and GPCA,” in Proc. CVPR 04, 2 (II-310-316), Jun.-Jul. 2004. |
Vidal, R. et al., “Generalized principal component analysis (GPCA)”, in Proc. CVPR '03, 1 (I621-628), Jun. 2003. |
Viola, P. and Jones, M.J., “Robust Real-Time Face Detection,” International Journal of Computer Vision, 57(2):137-154 (2004). |
Viola, P. and M. Jones, “Rapid Object Detection using a Boosted Cascade of Simple Features,” Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001, vol. 1, pp. 511 518. |
Wang, Y., “Use of Two-Dimensional Deformable Mesh Strucutures for Video Coding, Part II—The Analysis Problem and a Region-Based Coder Employing an Active Mesh Representation” IEEE Transactions on Circuits and Systems for Video Technology, 6(6):1051-8215 (1996). |
Wang, Y., “Use of Two-Dimensional Deformable Mesh Structures for Video Coding, Part I—The Synthesis Problem: Mesh-Based Function Approximation and Mapping” IEEE Transactions on Circuits and Systems for Video Technology, 6(6):1051-8215 (1996). |
Wiegand, T., et al., “Overview of the H.264/AVC Video Coding Standard”, IEEE Transactions on Circuits and Systems for Video Technology, 13(7):560-576 (2003). |
Written Opinion of the International Searching Authority for International Application No. PCT/US2009/059653, 8 pp., mailed Feb. 2, 2010. |
Zhang, et al., “A Novel Video Coding Framework by Perceptual Representation and Macroblock-Based Matching Pursuit Algorithm”, Department of Computer Science and Technology, pp. 322-331 (2007). |
Braspenning, R., et al., “True-Motion Estimation using Features Correspondences,” Visual Communications and Image Processing, SPIE vol. 5308, (2004). |
Chen, M., et al., “Efficient Multi-Frame Motion Estimation Algorithms for MPEG-4 AVC/JVT/H.264,” IEEE International Symposium on Circuits and Systems, pp. III-737 (May 2004). |
Lee, T., et al., “A New Motion Vector Composition Algorithm for H.264 Multiple Reference Frame Motion Estimation,” retrieved from the Internet on Jan. 16, 2015: http://eprints.lib.hokudai.ac.jp/dspace/bitstream/2115/39749/1/TA-P2-7. |
Smith, L., et al., “A tutorial on Principal Components Analysis,” Feb. 26, 2002. |
Su, Y., et al., “Fast Multiple Reference Frame Motion Estimation for H.264/AVC,” IEEE Transactions on Circuits and Systems for Video Technology, IEE Service Center, vol. 16(3), pp. 447-452 (Mar. 2006). |
Wikipedia, Motion Perception; 6 pages; downloaded on Aug. 24, 2015; See https://en.wikipedia.org/wiki/Motion—perception#The—aperture—problem. |
Notification of Transmittal of The International Search Report and The Written Opinion of the International Searching Authority, or the Declaration, for International Application No. PCT/US2014/063913, “Continuous Block Tracking for Temporal Prediction in Video Encoding,” 11 pages, mailed May 27, 2015. |
Bulla, C. et al., “High Quality Video Conferencing: Region of Interest Encoding and Joint Video/Audio Analysis,” International Journal on Advances in Telecommunications, 6(3-4): 153-163 (Dec. 2013). |
Chen, Z. et al., “Perception-oriented video coding based on foveated JND model A,” Picture Coding Symposium 2009, Section 2 (May 2009). |
Li, Z., et al., “Visual attention guided bit allocation in video compression,” Image and Vision Computing, 29(1): 1-14 (Jan. 2011). |
Naccari, M. et al., “Improving HEVC Compression Efficiency by Intensity Dependant Spatial Quantisation,” MPEG Meeting (Jul. 2012). |
Richardson, Iain E., The H.264 Advanced Video Compression Standard, 2nd Edition, Chapter 7: H.264 Transform and Coding, Apr. 20, 2010. |
Tang, C-W., “Spatiotemporal Visual Considerations for Video Coding,” IEEE Transactions on Multimedia, 9(2): 231-238 (Feb. 2007). |
Wikipedia, “Lumi masking,” Retrieved from the Internet: https://web.archive.org/web/20061124153834/http://en.wikipedia.org/wiki/Lumi—masking, Retrieved on: Nov. 8, 2006, 1 page. |
Number | Date | Country
---|---|---
20150256850 A1 | Sep 2015 | US

Number | Date | Country
---|---|---
61950784 | Mar 2014 | US
62049342 | Sep 2014 | US