The amount of video data needed to depict even a relatively short film can be substantial, which may result in difficulties when the data is to be streamed or otherwise communicated across a communications network with limited bandwidth capacity. Thus, video data is generally compressed before being communicated across modern day telecommunications networks. Video compression devices often use software and/or hardware at the source to code the video data prior to transmission, thereby decreasing the quantity of data needed to represent digital video images. The compressed data is then received at the destination by a video decompression device that decodes the video data. With limited network resources and ever-increasing demands for higher video quality, improved compression and decompression techniques that improve compression ratio with little to no sacrifice in image quality are desirable.
For example, video compression may use intra-frame prediction, in which a pixel may be predicted from a reference pixel in the same video frame or slice. The accuracy of prediction may depend on the distance between the predicted pixel and its reference pixel. Thus, if the distance is relatively large, the accuracy of intra prediction may decrease, and the bit rate needed for the compressed video may increase.
In one embodiment, the disclosure includes a video codec having a memory and a processor operably coupled to the memory. The processor is configured to compute a reconstructed pixel based on a residual pixel and a first prediction pixel and compute a second prediction pixel in a directional intra prediction mode based on the reconstructed pixel, wherein the first and second prediction pixels are located in a same block of a video frame.
In another embodiment, the disclosure includes a method comprising computing a reconstructed pixel based on a residual pixel and a first prediction pixel, and computing a second prediction pixel in a directional intra prediction mode based on the reconstructed pixel, wherein the first and second prediction pixels are located in a same block of a video frame.
In yet another embodiment, the disclosure includes a video codec comprising a processor configured to use intra prediction to generate a prediction pixel adaptively based on a plurality of reconstructed neighboring pixels, wherein a distance between the prediction pixel and each of the plurality of reconstructed neighboring pixels is one.
In yet another embodiment, the disclosure includes a method for intra prediction comprising computing a prediction pixel adaptively based on a plurality of reconstructed neighboring pixels, wherein a distance between the prediction pixel and each of the plurality of reconstructed neighboring pixels is one.
These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
It should be understood at the outset that, although an illustrative implementation of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
Typically, video media involves displaying a sequence of still images or frames in relatively quick succession, thereby causing a viewer to perceive motion. Each frame may comprise a plurality of picture samples or pixels, each of which may represent a single reference point in the frame. During digital processing, each pixel may be assigned an integer value (e.g., 0, 1, . . . or 255) that represents an image quality or characteristic, such as luminance (luma or Y) or chrominance (chroma including U and V), at the corresponding reference point. In use, an image or video frame may comprise a large number of pixels (e.g., 2,073,600 pixels in a 1920×1080 frame); thus, it may be cumbersome and inefficient to encode and decode (referred to hereinafter simply as code) each pixel independently. To improve coding efficiency, a video frame is usually broken into a plurality of rectangular blocks or macroblocks, which may serve as basic units of processing such as prediction, transform, and quantization. For example, a typical N×N block may comprise N2 pixels, where N is an integer and often a multiple of four.
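As a quick numeric illustration of the block arithmetic above (a sketch; the block size N below is an arbitrary example value):

```python
# Sketch: partitioning a frame into N x N blocks (illustrative values only).
frame_w, frame_h = 1920, 1080
total_pixels = frame_w * frame_h          # 2,073,600 pixels, as noted above
N = 8                                     # an example block size, a multiple of four
pixels_per_block = N * N                  # N^2 pixels per block
blocks = (frame_w // N) * (frame_h // N)  # number of whole blocks in the frame
print(total_pixels, pixels_per_block, blocks)
```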
In working drafts of the International Telecommunications Union (ITU) Telecommunications Standardization Sector (ITU-T) and the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) high efficiency video coding (HEVC), which is poised to be a future video standard, new block concepts have been introduced. For example, coding unit (CU) may refer to a sub-partitioning of a video frame into rectangular blocks of equal or variable size. In HEVC, a CU may replace a macroblock structure of previous standards. Depending on a mode of inter or intra prediction, a CU may comprise one or more prediction units (PUs), each of which may serve as a basic unit of prediction. For example, for intra prediction, a 64×64 CU may be symmetrically split into four 32×32 PUs. For another example, for an inter prediction, a 64×64 CU may be asymmetrically split into a 16×64 PU and a 48×64 PU. Similarly, a PU may comprise one or more transform units (TUs), each of which may serve as a basic unit for transform and/or quantization. For example, a 32×32 PU may be symmetrically split into four 16×16 TUs. Multiple TUs of one PU may share a same prediction mode, but may be transformed separately. Herein, the term block may generally refer to any of a macroblock, CU, PU, or TU.
Depending on the application, a block may be coded in either a lossless mode (i.e., no distortion or information loss) or a lossy mode (i.e., with distortion). In use, high quality videos may be coded using a lossless mode, while low quality videos may be coded using a lossy mode. Sometimes, a single video frame or slice (e.g., with YUV subsampling of either 4:4:4 or 4:2:0) may employ both lossless and lossy modes to code a plurality of regions, which may be rectangular or irregular in shape. Each region may comprise a plurality of blocks. For example, a compound video may comprise a combination of different types of contents, such as texts, computer graphics, and natural-view content (e.g., camera-captured video). In a compound frame, regions of texts and graphics may be coded in a lossless mode, while regions of natural-view content may be coded in a lossy mode. Lossless coding of texts and graphics may be desired, e.g., in computer screen sharing applications, since lossy coding may lead to poor quality or fidelity of texts and graphics and cause eye fatigue. Due to the lack of a lossless coding mode, the coding efficiency of current HEVC test models (HMs) for certain videos (e.g., compound video) may be limited.
Within a video frame or slice, a pixel may be correlated with other pixels within the same frame such that pixel values within a block or across some blocks may vary only slightly and/or exhibit repetitious textures. Modern video compression methods exploit these spatial correlations using various techniques, which may be known collectively as intra-frame prediction (or, in short, intra prediction). Intra-frame prediction may reduce spatial redundancies between neighboring blocks in the same frame, thereby compressing the video data without greatly reducing image quality.
In practice, intra-frame prediction may be implemented by video encoders/decoders (codecs) to interpolate a prediction block (or predicted block) from one or more previously coded/decoded neighboring blocks, thereby creating an approximation of the current block. Hence, the encoder and decoder may interpolate the prediction block independently, thereby enabling a substantial portion of a frame and/or image to be reconstructed from the communication of a relatively few number of reference blocks, e.g., blocks positioned in (and extending from) the upper-left hand corner of the frame. However, intra-frame prediction alone may not reproduce an image of sufficient quality for modern video, and consequently an error correction message, e.g., a residual message, may be communicated between the encoder and decoder to correct differences between the prediction block and the current block. For instance, an encoder may subtract the prediction block from the current block, or vice versa, to produce a residual block, which then may be transformed, quantized, and scanned before being coded into the coded data stream. Upon reception of the coded data stream, a decoder may add the reconstructed residual block to the independently generated prediction block to reconstruct the current block. Although the reconstructed current block may be an imperfect version of the original current block, their differences may be hardly perceptible to the human eye. Thus, substantial bit savings may be obtained without significantly degrading the quality of the reconstructed image.
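The subtract-then-add round trip described above can be sketched as follows (a minimal sketch: all pixel values are hypothetical, and the residual is used as-is, without the transform, quantization, and scanning steps):

```python
import numpy as np

# Sketch of the residual mechanism: the encoder subtracts the prediction block
# from the current block to form a residual; the decoder adds the residual back
# to its independently generated prediction block to reconstruct the current block.
current = np.array([[52, 55, 61, 66],
                    [63, 59, 55, 90],
                    [62, 59, 68, 113],
                    [63, 58, 71, 122]], dtype=np.int32)   # hypothetical current block
prediction = np.full((4, 4), 60, dtype=np.int32)          # assumed prediction block

residual = current - prediction           # encoder side
reconstructed = prediction + residual     # decoder side (no transform/quantization here)

assert (reconstructed == current).all()   # lossless round trip in this sketch
```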
The residual block may comprise few differences between the prediction block and the current block, and therefore many of the residual block's discrete values, e.g., pixel data, may comprise zero and/or near-zero coefficients, e.g., in areas where the prediction block is identical and/or near-identical to the current block. Furthermore, transformation, quantization, and/or scanning of the residual block may remove many of the zero and/or near-zero coefficients from the data stream, thereby resulting in further compression of the video data. More accurate predictions of the original image may lead to higher coding efficiencies. To harness these coding efficiencies, conventional video/image coding standards may improve prediction accuracy by using a plurality of prediction modes during intra prediction, e.g., each of which may generate a unique texture.
In use, an encoder may use a rate-distortion optimization (RDO) process to select a prediction mode that generates the most accurate prediction for each current block. For example, the sum of absolute difference (SAD) may be calculated for each mode in the intra prediction modes 100, and the one with the least SAD may be selected. In general, more accurate intra prediction may result from a larger number of intra prediction modes. For example, recent research has shown that intra prediction schemes using 35 intra-frame prediction modes, such as the intra prediction modes 100, may more accurately predict complex textures than schemes using fewer prediction modes, such as ITU-T H.264/advanced video coding (AVC), which uses only 9 intra prediction modes.
In current intra prediction schemes, pixels surrounding a current block may be used as reference pixels (or prediction samples) to generate a prediction block. The quality of intra prediction may be affected by factors such as block size and prediction mode. For example, as the size of the prediction block increases, pixels in the prediction block that are farther away from the reference pixels may have less spatial correlation with the reference pixels, thus the prediction accuracy of the farther pixels may be degraded. This degradation of prediction accuracy may result in more residual data, which in turn may increase the data to be encoded, stored, and/or transmitted.
Disclosed herein are systems and methods for improved intra prediction in video coding. Instead of the block-by-block intra prediction currently used, the disclosure teaches intra prediction schemes based on pixel-by-pixel (or sample-by-sample) differential pulse code modulation (DPCM). Embodiments of the DPCM intra prediction schemes may be used in any intra prediction mode, such as one of the 33 directional modes shown in
In contrast with current intra prediction schemes, which may generate all prediction pixels (or predicted pixels) of the current block in a parallel fashion, the disclosed intra prediction may generate multiple sets of prediction pixels in a pipeline or sequential fashion. If lossless coding is used, the disclosed intra prediction may generate all prediction pixels of the current block in a parallel fashion, since in this case the original pixels are the same as their reconstructed pixels. In an embodiment, a portion of the external reference blocks 220 may be used to generate an initial set of prediction pixels. The initial set herein may refer to pixels that are predicted first according to a particular intra mode, such as a top/bottom row and/or a leftmost/rightmost column of the current block 210. For example, in a diagonal mode numbered as mode 7 in
After obtaining the initial set of prediction pixels, the initial set of residual or differential pixels may be generated by subtracting original pixel values from the prediction values, or vice versa. Then, continuing sets of prediction pixels may be based on reference pixels located within the current block 210, which may be referred to hereafter as internal reference pixels 230. To avoid potential decoding drift, the same external/internal reference pixels should be used in an encoder and a decoder. Thus, reconstructed pixels rather than original pixels may be used as external/internal reference pixels. Note that in lossless coding, the reconstructed pixels may be the same as the original pixels.
The continuing sets of prediction pixels may be computed from reconstructed pixels within the current block. For example, in a vertical mode, the second row to the top may be predicted from a reconstructed top row, the third row to the top may be predicted from a reconstructed second row to the top, and so forth. Taking into account all prediction modes, all pixels in the current block, except one in a bottom-right corner, may be potentially used as internal reference pixels. As shown in
Prediction pixels, denoted as P(i,j) for i=0, . . . , 3 and j=0, . . . , 3, may be generated for the current block, which may be estimates of the original pixels Z(i,j). DPCM intra prediction may be fulfilled in any of a plurality of intra prediction modes (e.g., the 35 modes in
To explain DPCM intra prediction in a directional mode, we define an angle of the directional mode (referred to hereafter as prediction angle and denoted as α) as an angle between the directional mode and a vertical upright direction (i.e., vertical direction pointing up, which overlaps with the vertical mode). One skilled in the art will recognize that other definitions of α may be similarly applied without departing from principles disclosed herein. As shown in
In HEVC, α may vary between −135 degrees and 45 degrees (i.e., −135≤α≤45). Since α consistently uses a unit of degrees herein, the term “degree” may sometimes be omitted after α (e.g., α=45 means α=45 degrees). Note that α>0 when line CA is on the right side of the line BA, which applies to modes 23, 13, 24, 6, 25, 14, 26, and 7 in
A continuing set of prediction pixels may be predicted by using reconstructed pixels within the current block as reference pixels. As shown in
Although not shown in the figures, when 0<α<45, the rightmost column of prediction pixels may be computed using the following equations, wherein i=0, . . . , 3:
u=└(i+1)·tan(α)┘ (1)
s=(i+1)·tan(α)−u (2)
P(i,3)=(1−s)·X(3+u)+s·X(3+u+1), (3)
where └w┘ represents a flooring operation, which obtains a greatest integer that is no greater than w. Note that intermediate variables u and s may be skipped, in which case the equations may also be expressed as one equation. In addition, the top row of prediction pixels (except the top-rightmost pixel, which is included in the rightmost column) may be computed using the following equations, wherein j=0,1,2:
s=tan(α) (4)
P(0,j)=(1−s)·X(j)+s·X(j+1). (5)
Further, continuing sets of prediction pixels may be computed using internal reference pixels. Specifically, a prediction pixel located in an i-th row and a j-th column may be computed using the following equations, wherein i=1,2,3 and j=0,1,2:
s=tan(α) (6)
P(i,j)=(1−s)·R(i−1,j)+s·R(i−1,j+1). (7)
Note that when 0<α<45, each prediction pixel is computed as a weighted linear combination of two reference pixels that are adjacent to each other. The two reference pixels may be external or internal reference pixels, depending on the location of the prediction pixel. Further, for each prediction pixel, the two weights of the two reference pixels depend on α, and the two weights add up to one. For example, if 0<α<45, in equation (7), a first weight of a first internal reference pixel R(i−1, j) equals (1−tan(α)) and a second weight of a second internal reference pixel R(i−1, j+1) equals tan(α).
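Equations (1) through (7) can be sketched as follows for a 4×4 block in the lossless case (where the reconstructed pixels R equal the original pixels Z); the angle, the external reference row X(0)..X(7), and the block values are all hypothetical:

```python
import math
import numpy as np

# Sketch of directional DPCM intra prediction for 0 < alpha < 45 (lossless case).
alpha = math.radians(30)                        # assumed prediction angle
t = math.tan(alpha)
X = np.arange(100.0, 108.0)                     # X(0)..X(7): external refs above the block
Z = np.array([[101, 102, 103, 104],
              [102, 103, 104, 105],
              [103, 104, 105, 106],
              [104, 105, 106, 107]], dtype=float)  # hypothetical original block
R = Z                                           # lossless coding: R(i,j) == Z(i,j)

P = np.zeros((4, 4))
for i in range(4):                              # rightmost column, equations (1)-(3)
    u = math.floor((i + 1) * t)                 # equation (1)
    s = (i + 1) * t - u                         # equation (2)
    P[i, 3] = (1 - s) * X[3 + u] + s * X[3 + u + 1]  # equation (3)
s = t                                           # equations (4) and (6)
for j in range(3):                              # top row except top-right, equation (5)
    P[0, j] = (1 - s) * X[j] + s * X[j + 1]
for i in range(1, 4):                           # continuing sets from internal refs, eq (7)
    for j in range(3):
        P[i, j] = (1 - s) * R[i - 1, j] + s * R[i - 1, j + 1]

# The two weights (1 - s) and s always sum to one, as noted above.
assert abs((1 - s) + s - 1.0) < 1e-12
```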
In fact, it can be seen that α=45 may be considered a special case of the general equations (1) to (7). Specifically, for prediction pixels in the rightmost column, we may derive the following equations, wherein i=0, . . . , 3:
u=└(i+1)·tan(α)┘=i+1 (8)
s=(i+1)·tan(α)−u=i+1−(i+1)=0 (9)
P(i,3)=(1−s)·X(3+u)+s·X(3+u+1)=X(3+i+1)=X(i+4). (10)
In addition, for prediction pixels in the top row, we may derive the following equations, wherein j=0,1,2:
s=tan(α)=1 (11)
P(0,j)=(1−s)·X(j)+s·X(j+1)=X(j+1). (12)
In addition, for a prediction pixel located in an i-th row and a j-th column, wherein i=1,2,3 and j=0,1,2, we may derive the following equations:
s=tan(α)=1 (13)
P(i,j)=(1−s)·R(i−1,j)+s·R(i−1,j+1)=R(i−1,j+1). (14)
It can be seen that the equations (10), (12), and (14) represent the same algorithm as the DPCM intra prediction scheme shown in
When α=0, the intra prediction is in the vertical mode, and the prediction pixels may be computed as P(0,j)=X(j) for j=0,1,2,3 and P(i,j)=R(i−1,j) for i=1,2,3 and j=0,1,2,3.
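The vertical-mode formulas above can be sketched as a pixel-by-pixel lossless pipeline, in which each reconstructed row serves as the reference for the next; the reference row and block values are hypothetical:

```python
import numpy as np

# Sketch of the vertical mode (alpha = 0) in a lossless pipeline:
# P(0,j) = X(j) and P(i,j) = R(i-1,j).
N = 4
X = np.array([10.0, 20.0, 30.0, 40.0])         # external refs above the block
Z = np.array([[12, 22, 31, 39],
              [13, 24, 33, 38],
              [15, 25, 35, 37],
              [16, 27, 36, 36]], dtype=float)  # hypothetical original block

# Encoder: predict row by row, emit residuals D = Z - P, reconstruct R = P + D.
P = np.zeros((N, N)); D = np.zeros((N, N)); R = np.zeros((N, N))
for i in range(N):
    P[i] = X if i == 0 else R[i - 1]   # P(0,j) = X(j); P(i,j) = R(i-1,j)
    D[i] = Z[i] - P[i]
    R[i] = P[i] + D[i]                 # lossless: R equals Z exactly

assert (R == Z).all()
```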
When −45<α<0, an initial set of prediction pixels for the current block including the top row and the leftmost column may be computed using external reference pixels. In this case, the external reference pixels (denoted as Y(i)) are located in neighboring blocks to the left of the current block. In an embodiment, the leftmost column of prediction pixels may be computed using the following equations, wherein i=0, . . . , 3:
u=└(i+1)·tan(−α)┘ (15)
s=(i+1)·tan(−α)−u (16)
P(i,0)=(1−s)·Y(└(−u+1)·tan−1(α)┘−1)+s·Y(└(−u)·tan−1(α)┘−1). (17)
In addition, the top row of prediction pixels (except the top-leftmost pixel which is included in the leftmost column) may be computed using the following equations, wherein j=1,2,3:
s=tan(−α) (18)
P(0,j)=(1−s)·X(j)+s·X(j−1). (19)
Further, continuing sets of prediction pixels may be computed using internal reference pixels. Specifically, a prediction pixel located in an i-th row and a j-th column may be computed using the following equations, wherein i=1,2,3 and j=1,2,3:
s=tan(−α) (20)
P(i,j)=(1−s)·R(i−1,j)+s·R(i−1,j−1). (21)
It can be seen that α=−45 may be considered a special case of the equations (15) to (21). Specifically, for prediction pixels in the leftmost column, we may derive the following equations, wherein i=0, . . . , 3:
u=└(i+1)·tan(−α)┘=i+1 (22)
s=(i+1)·tan(−α)−u=i+1−(i+1)=0 (23)
In addition, for prediction pixels in the top row, we may derive the following equations, wherein j=1,2,3:
s=tan(−α)=1 (25)
P(0,j)=(1−s)·X(j)+s·X(j−1)=X(j−1) (26)
In addition, for a prediction pixel located in an i-th row and a j-th column, wherein i=1,2,3 and j=1,2,3, we may derive the following equations:
s=tan(−α)=1 (27)
P(i,j)=(1−s)·R(i−1,j)+s·R(i−1,j−1)=R(i−1,j−1). (28)
When −135≤α≤−45, the current block and its external reference pixels may be transposed, after which the DPCM intra prediction scheme described above may be applied.
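The transposition step can be sketched as follows; the block and reference values are hypothetical, and only the transpose round trip itself is shown (the directional prediction in between is elided):

```python
import numpy as np

# Sketch: for -135 <= alpha <= -45, transpose the current block and swap the
# roles of its top/left external references, apply the -45..45 scheme to the
# transposed block, then transpose the prediction back.
block = np.arange(16.0).reshape(4, 4)       # hypothetical current block
top_refs = np.arange(9.0)                   # refs above the block (hypothetical)
left_refs = np.arange(9.0) + 50             # refs left of the block (hypothetical)

t_block = block.T                           # rows and columns exchange
t_top, t_left = left_refs, top_refs         # reference roles swap as well

# Whatever prediction the -45..45 scheme produces for t_block, transposing it
# back restores the original orientation; the round trip is exact:
assert np.array_equal(t_block.T, block)
```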
It should be noted that embodiments of the disclosed DPCM scheme may be applied to any N×N block. In an embodiment, when −45≤α≤45, intra prediction may be implemented using the following algorithm, wherein similar equations may be used when −135≤α≤−45 after transposing the current block and its external reference pixels.
Note that when 0<α<45 or −45<α<0, each prediction pixel is computed as a weighted linear combination of two adjacent reference pixels. The two reference pixels may be external or internal reference pixels, depending on the location of the prediction pixel. Further, for each prediction pixel, the two weights of the two reference pixels depend on α, and the two weights add up to one. For example, if 0<α<45, in equation (35), a first weight of a first external reference pixel X(j) equals (1−tan(α)) and a second weight of a second external reference pixel X(j+1) equals tan(α).
In the present disclosure, each computed predicted pixel in a continuing set may be based on an adjacent reference pixel. For example, in the diagonal mode 7 as shown in
As mentioned above, intra prediction may also be implemented using a non-directional mode, such as the DC and planar modes described in
In an embodiment of a local mean predictor, the prediction pixel may be computed via equation:
P(X)=(A+B+C+D)/4. (47)
In an embodiment of a local median predictor, the prediction pixel may be computed via equation:
P(X)=median(A,B,C,D). (48)
Suppose that A<B<C<D; then, according to equation (48), P(X) may take the average of the two middle values, i.e., (B+C)/2. For example,
In an embodiment of a local adaptive predictor, the prediction pixel may be computed via equation:
In the various predictors, the prediction pixel may be computed adaptively based on its left and upper neighboring pixels. In other words, the prediction pixel may be generated to have similar values to its adjacent neighbors. Regardless of the size of the current block, and regardless of the position of pixel X in the current block, P(X) may be generated from its adjacent reference pixels, wherein a distance between pixel X and each of its reference pixels is one. Thus, the accuracy of intra prediction may be improved compared with the current DC mode. Further, the implementation of the disclosed pixel-based predictors may be relatively simpler than that of the current planar mode.
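Equations (47) and (48) can be sketched directly; the neighbor values A, B, C, D are hypothetical:

```python
import statistics

# Sketch of the pixel-based non-directional predictors: equation (47) (local
# mean) and equation (48) (local median) over the reconstructed neighbors of
# pixel X at distance one.
A, B, C, D = 50, 52, 58, 61                   # hypothetical neighbor values

p_mean = (A + B + C + D) / 4                  # equation (47)
p_median = statistics.median([A, B, C, D])    # equation (48): with four values,
                                              # the average of the two middle ones

print(p_mean, p_median)  # 55.25 and (52 + 58) / 2 = 55.0
```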
As mentioned previously, the DPCM or pixel-based intra prediction schemes disclosed herein may be implemented in a variety of coding schemes. Depending on the application, lossy (i.e., with distortion or information loss) and/or lossless (i.e., no distortion or information loss) encoding may be implemented in a video encoder. For example, when encoding a video frame or slice, the video encoder may bypass or skip a transform step and/or a quantization step for some or all of the blocks in the video frame. Herein, a lossy encoding mode may include quantization without transform encoding (i.e., bypassing a transform step), and the lossless encoding mode may include transform bypass encoding (i.e., bypassing a transform step and a quantization step) and transform without quantization encoding (i.e., bypassing a quantization step), in which the transform operation is invertible or lossless. Likewise, based on information contained in a received bitstream, a video decoder may decode a video frame using a lossless mode and/or a lossy mode. The lossy decoding mode may include quantization without transform decoding, and the lossless decoding mode may include transform bypass decoding and transform without quantization decoding.
Depending on whether a coding scheme is lossless or lossy, a reconstructed pixel may be an exact or lossy version of an original pixel. Since the reconstructed pixel may be used as a reference pixel for intra prediction of other pixels, accuracy of intra prediction may vary with the lossless/lossy scheme. Further, since coding modules used in a lossless and a lossy scheme may be different, the disclosed DPCM intra prediction may be implemented differently. In the interest of clarity, the application of various embodiments of DPCM intra prediction in lossy and lossless coding schemes is described in the paragraphs below.
The RDO module 510 may be configured to make logic decisions for one or more of other modules. For example, the RDO module 510 may coordinate the prediction module 520 by determining an optimal intra prediction mode for a current block (e.g., a PU) from a plurality of available prediction modes. The RDO module 510 may select an optimal intra mode based on various algorithms. For example, the RDO module 510 may calculate a SAD for each prediction mode, and select a prediction mode that results in the smallest SAD. The disclosed pixel-based intra prediction may be implemented to replace one or more existing intra prediction modes. Alternatively, they may be added as new prediction modes to accompany the existing modes.
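The SAD-based selection performed by the RDO module can be sketched as follows; the per-mode predictor here is a hypothetical stand-in for illustration, not an actual intra prediction implementation:

```python
import numpy as np

# Sketch: predict the current block under each candidate mode and keep the
# mode with the smallest sum of absolute differences (SAD).
current = np.array([[3, 3], [4, 4]], dtype=np.int32)  # hypothetical current block

def predict(mode):
    # Hypothetical per-mode prediction blocks, for illustration only.
    return {0: np.full((2, 2), 3, dtype=np.int32),
            1: np.full((2, 2), 4, dtype=np.int32),
            2: np.zeros((2, 2), dtype=np.int32)}[mode]

def sad(a, b):
    return int(np.abs(a - b).sum())

best_mode = min(range(3), key=lambda m: sad(current, predict(m)))
print(best_mode)  # modes 0 and 1 both give SAD 2; min keeps the first, mode 0
```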
Based on logic decisions made by the RDO module 510, the prediction module 520 may utilize both external reference pixels and internal reference pixels to generate prediction pixels for the current block, according to embodiments of the pixel-based intra prediction schemes disclosed herein. Each prediction pixel may be subtracted from a corresponding original pixel in the current block, thereby generating a residual pixel. To facilitate continuous encoding of pixels, the residual pixels may also be fed into the reconstruction module 540, which may generate reconstructed pixels to serve as reference pixels for intra prediction of future pixels. Furthermore, after a de-blocking filter operation and other in-loop filter operations, such as a sample adaptive offset operation, the modified reconstructed pixels can serve as reference pixels for inter prediction of future pixels.
Then, after all residual pixels have been generated for the current block, the residual pixels may be scanned, and non-zero residual pixels may be encoded by the entropy encoder 530 into an encoded bitstream. The entropy encoder 530 may employ any entropy encoding scheme, such as context-adaptive binary arithmetic coding (CABAC) encoding, truncated Golomb-Rice (TR) coding, exponential Golomb (EG) encoding, or fixed length encoding, or any combination thereof. In the transform bypass encoding scheme 500, since the residual block is encoded without a transform step or a quantization step, no information loss may be induced in the encoding process. It should be noted that
When encoding an N×N current block via the transform bypass encoding scheme 500, since a residual pixel (i.e., D(i,j) for i=0, . . . , N−1 and j=0, . . . , N−1) may be directly added to a prediction pixel (i.e., P(i,j)) to generate a reconstructed pixel (e.g., R(i,j)=D(i,j)+P(i,j)) without any additional processing, no distortion or information loss may be induced in the reconstruction process. Consequently, the reconstructed pixel (R(i,j)) may be exactly the same as the original pixel (Z(i,j)). In an embodiment, an initial set of predicted pixels may be generated based on a set of external reference pixels, while a continuing set of predicted pixels may be generated based on a set of internal reference pixels, which are previously reconstructed pixels. For example, in a diagonal mode (i.e., mode 4 in
For a current block being decoded, a residual block may be generated after the execution of the entropy decoder 610. In addition, information containing a prediction mode of the current block may also be decoded by the entropy decoder 610. Then, based on the prediction mode as well as a plurality of external reference pixels located in one or more previously decoded neighboring blocks, the prediction module 620 may generate an initial set of prediction pixels. Then, the reconstruction module 630 may combine the initial set of prediction pixels with a corresponding set of residual pixels to generate an initial set of reconstructed pixels. The initial set of reconstructed pixels may also serve as reference pixels for decoding of continuing sets of pixels. Specifically, by using the initial set of reconstructed pixels, a second set of prediction pixels may be generated. Thus, the second set of prediction pixels may be added to a second set of residual pixels to obtain a second set of reconstructed pixels. This iterative process may continue until all reconstructed pixels for the current block have been obtained. Then, the decoder may proceed to reconstruct a next block.
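The decoder-side iteration described above can be sketched for a vertical-mode, transform-bypass block; the decoded residual values and the external references are hypothetical:

```python
import numpy as np

# Sketch of the decoder pipeline: combine each set of prediction pixels with the
# corresponding residual pixels; each reconstructed row then serves as the
# reference for the next set of prediction pixels.
N = 4
X = np.array([10.0, 20.0, 30.0, 40.0])         # external refs from decoded neighbors
D = np.array([[ 2,  2,  1, -1],
              [ 1,  2,  2, -1],
              [ 2,  1,  2, -1],
              [ 1,  2,  1, -1]], dtype=float)  # hypothetical decoded residual block

R = np.zeros((N, N))
for i in range(N):
    P_i = X if i == 0 else R[i - 1]   # initial set, then continuing sets
    R[i] = P_i + D[i]                 # reconstruct; R[i] is the next reference row

print(R[0])  # [12. 22. 31. 39.]
```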
In use, if an original block is encoded and decoded using lossless schemes, such as the transform bypass encoding scheme 500 and the transform bypass decoding scheme 600, it is possible that no information loss may be induced in the entire coding process. Thus, barring distortion caused during transmission, a reconstructed block may be exactly the same as the original block.
During lossless coding of certain blocks in a video frame, sometimes it may be desirable to include a transform step into the coding process, if the transform operation is invertible (i.e., after transform and inverse transform operations, the input pixel values equal the output pixel values). For example, for some blocks of a text region, an added transform step may generate a shorter bitstream compared to a transform bypass coding scheme.
The transform without quantization encoding scheme 700 may be implemented in a video encoder, which may receive an input video comprising a sequence of video frames. The RDO module 710 may be configured to control one or more of the other modules, and may be the same or similar to the RDO module 510 in
An initial set of residual pixels may be generated by the prediction module 720 based on a plurality of external reference pixels. Then, the initial set of residual pixels may be first transformed from a spatial domain to a frequency domain by the transform module 730, which may be implemented using any invertible transform. For example, in a diagonal mode (i.e., mode 4 in
To facilitate encoding of continuing sets in the current block, the transform coefficient matrix may be fed into the inverse transform module 750, which may perform the inverse of the transform module 730 and convert transform coefficients from a frequency domain to a spatial domain. Since the transform is fully invertible, we may have D′(0,j)=D(0,j). Thus, D′(0,j) may be used by the reconstruction module 760 to generate a set of reconstructed pixels as R(0,j)=D′(0,j)+P(0,j)=Z(0,j), where j is between 0 and N−1. D′(i,0) may be used by the reconstruction module 760 to generate a set of reconstructed pixels as R(i,0)=D′(i,0)+P(i,0)=Z(i,0), where i is between 1 and N−1. Then, R(0,j) may serve as internal reference pixels to generate a second set of prediction pixels as P(1,j)=R(0,j−1), where j is between 0 and N−1. A second set of residual pixels D(1,j) may again be generated. Other pixels may be assigned with arbitrary values and the constructed N×N block may be transformed and inverse transformed, generating a block containing D′(1,j). D′(1,j) may then serve as internal reference pixels for continuing intra prediction. This iterative process may continue until all residual pixels have been generated for the current block. Certain aspects of this iterative prediction process may be similar to the process in the transform bypass encoding scheme 500, thus the similar aspects will not be further described in the interest of conciseness. The transform (e.g., in transform module 730) may be a one-dimensional “line-transform” either in a vertical or horizontal direction, and the transform may be performed one row or column at a time. DPCM prediction may be performed without drift in a transform without quantization scheme when the transform is fully invertible.
Alternatively, since the transform operation is invertible, the reconstructed pixels are the same as the original pixels. Accordingly, the original pixels may be used as reference pixels, and the transform/inverse transform steps may be performed after all residual pixels in the current block have been generated. In this case, there may be no need to transform each set of residual pixels separately, which simplifies the encoding process.
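The invertibility requirement can be illustrated with a one-dimensional line-transform; a 4-point Hadamard matrix is used here purely as an example of a fully invertible transform (not necessarily the transform of any particular codec), and the residual values are hypothetical:

```python
import numpy as np

# Sketch: a fully invertible 1-D "line-transform" applied to one row of
# residual pixels. H is a 4-point Hadamard matrix; its rows are orthogonal
# with squared norm 4, so the inverse is H.T / 4.
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=float)

d = np.array([3.0, -1.0, 0.0, 2.0])   # one row of residual pixels (hypothetical)
coeffs = H @ d                        # forward line-transform
d_back = (H.T @ coeffs) / 4           # inverse line-transform

assert np.array_equal(d_back, d)      # no information loss: D'(i,j) == D(i,j)
```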
After all transform coefficients have been generated for the current block, the transform coefficients may be scanned, and non-zero transform coefficients may be encoded by the entropy encoder 740 into an encoded bitstream. The entropy encoder 740 may be the same as or similar to the entropy encoder 530. Prior to transmission from the encoder, the encoded bitstream may be further configured to include other information, such as video resolution, frame rate, block partitioning information (sizes, coordinates), prediction modes, etc., so that the encoded sequence of video frames may be properly decoded.
For a current block being decoded, a residual block may be generated after the execution of the inverse transform module 820. In addition, information containing a prediction mode of the current block may also be decoded by the entropy decoder 810. Then, based on the prediction mode as well as a plurality of external reference pixels located in one or more previously decoded neighboring blocks, the prediction module 830 may generate an initial set of prediction pixels. Then, the reconstruction module 840 may combine the initial set of prediction pixels with a corresponding set of residual pixels to generate a set of reconstructed pixels. The reconstructed pixels may also serve as reference pixels for decoding of continuing sets of pixels. Specifically, by using the initial set of reconstructed pixels, a second set of prediction pixels may be generated. Thus, the second set of prediction pixels may be added to a second set of residual pixels to obtain a second set of reconstructed pixels. This iterative process may continue until all reconstructed pixels for the current block have been obtained. Then, the decoder may proceed to reconstruct a next block.
In use, it may sometimes be desirable to incorporate a quantization step, instead of a transform step, into the coding process.
The quantization without transform encoding scheme 900 may be implemented in a video encoder, which may receive an input video comprising a sequence of video frames. The RDO module 910 may be configured to control one or more of the other modules, and may be the same as or similar to the RDO module 510 described previously.
An initial set of residual pixels may be generated by the prediction module 920 based on a plurality of external reference pixels. Then, the initial set of residual pixels may first be quantized or re-scaled by the quantization module 930 to generate an initial set of quantized residual pixels. Depending on the application, the quantization module 930 may employ any appropriate quantization parameter (QP). For example, in a vertical intra mode, residual pixels, denoted as D(0,j) where j is between 0 and N−1, in the 0-th row may be converted to quantized residual pixels, denoted as q_D(0,j). The quantization may use the equation q_D(0,j)=floor(D(0,j)/qp_scale), where qp_scale denotes a quantization step size determined by the QP.
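The quantization and its inverse reduce to two one-line operations. A sketch of the stated equations, with the caveat that the sign handling of negative residuals (which practical codecs typically treat separately) is omitted here:

```python
import math

def quantize(d, qp_scale):
    # q_D = floor(D / qp_scale), per the equation above; handling of the
    # sign of D is simplified relative to a practical codec.
    return math.floor(d / qp_scale)

def dequantize(q, qp_scale):
    # D'' = q_D * qp_scale; the recovered residual is a multiple of
    # qp_scale, so the per-pixel error is bounded by qp_scale.
    return q * qp_scale
```

For example, with qp_scale = 4, a residual of 13 quantizes to 3 and de-quantizes to 12, losing at most qp_scale − 1 per pixel.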
To facilitate encoding of other rows in the current block, the initial set of quantized residual pixels may be fed into the de-quantization module 950, which may perform the inverse of the quantization module 930 and recover the scale of the residual pixels. The de-quantization module 950 may generate another set of residual pixels, denoted as D″(0,j) where j is between 0 and N−1, via the equation D″(0,j)=q_D(0,j)*qp_scale=floor(D(0,j)/qp_scale)*qp_scale. D″(0,j), a lossy version of D(0,j), may be used by the reconstruction module 960 to generate a set of reconstructed pixels as R(0,j)=D″(0,j)+P(0,j). Then, the 0-th row of reconstructed pixels R(0,j) may serve as internal reference pixels to generate a second set (i.e., the first row) of prediction pixels as P(1,j)=R(0,j). A second set of residual pixels D(1,j) may again be generated, quantized, and de-quantized, generating a block containing D″(1,j) and a block containing the reconstructed pixels R(1,j)=D″(1,j)+P(1,j) for j=0, . . . , N−1. R(1,j) may then serve as internal reference pixels for continuing intra prediction. This iterative process may continue until all residual pixels have been generated for the current block. Certain aspects of this iterative prediction process may be similar to the process in the transform bypass encoding scheme 500; thus, the similar aspects will not be further described in the interest of conciseness.
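The encoder-side loop above can be sketched as follows. The key point is that the encoder predicts from the *lossy* reconstruction R(i,j), exactly as the decoder will, so quantization error does not accumulate across rows. The function name is hypothetical and the vertical mode is assumed:

```python
import numpy as np

def dpcm_quant_vertical(block, top_ref, qp_scale):
    """Vertical DPCM with quantized residuals: the encoder reconstructs
    from de-quantized residuals so that its internal reference pixels
    match the decoder's, keeping the scheme drift-free."""
    N = block.shape[0]
    q_resid = np.zeros_like(block)
    recon = np.zeros_like(block)
    pred_row = top_ref
    for i in range(N):
        d = block[i] - pred_row                          # D(i, j)
        q = np.floor(d / qp_scale).astype(block.dtype)   # q_D(i, j)
        d2 = q * qp_scale                                # D''(i, j), lossy
        recon[i] = d2 + pred_row                         # R(i, j)
        q_resid[i] = q
        pred_row = recon[i]                              # P(i+1, j) = R(i, j)
    return q_resid, recon
```

Each reconstructed pixel differs from the original by less than qp_scale, regardless of the block height, because each row's error is corrected before it can propagate.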
After all quantized residual pixels have been generated for the current block, the quantized residual pixels may be scanned, and non-zero quantized residual pixels may be encoded by the entropy encoder 940 into an encoded bitstream. The entropy encoder 940 may be the same as or similar to the entropy encoder 530. Prior to transmission from the encoder, the encoded bitstream may be further configured to include other information, such as video resolution, frame rate, block partitioning information (sizes, coordinates), prediction modes, etc., so that the encoded sequence of video frames may be properly decoded.
For a current block being decoded, a residual block may be generated after the execution of the de-quantization module 1020. In addition, information containing a prediction mode of the current block may also be decoded by the entropy decoder 1010. Then, based on the prediction mode as well as a plurality of external reference pixels located in one or more previously decoded neighboring blocks, the prediction module 1030 may generate an initial set of prediction pixels. Then, the reconstruction module 1040 may combine the initial set of prediction pixels with a corresponding set of residual pixels to generate a set of reconstructed pixels. The reconstructed pixels may also serve as reference pixels for decoding of continuing sets of pixels. Specifically, by using the initial set of reconstructed pixels, a second set of prediction pixels may be generated. Thus, the second set of prediction pixels may be added to a second set of residual pixels to obtain a second set of reconstructed pixels. This iterative process may continue until all reconstructed pixels for the current block have been obtained. Then, the decoder may proceed to reconstruct a next block.
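The decoder-side iteration described in this paragraph mirrors the encoder. A minimal sketch for the quantization-without-transform case, assuming the vertical mode and a hypothetical function name:

```python
import numpy as np

def dpcm_decode_vertical(q_resid, top_ref, qp_scale):
    # Decoder-side loop: de-quantize a row of residuals, add the prediction
    # (the row reconstructed just above), and iterate down the block.
    recon = np.zeros_like(q_resid)
    pred_row = top_ref                      # external reference pixels first
    for i in range(q_resid.shape[0]):
        d2 = q_resid[i] * qp_scale          # D''(i, j)
        recon[i] = d2 + pred_row            # R(i, j) = D''(i, j) + P(i, j)
        pred_row = recon[i]                 # next row predicts from this one
    return recon
```

Because the decoder forms exactly the same reconstructed references the encoder used, the two stay in lockstep and the decoder can proceed block by block.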
Next, in step 1130, the initial set of residual pixels and prediction pixels may be used to generate an initial set of reconstructed pixels for the current block. In step 1140, a continuing set of prediction pixels may be computed based on internal reconstructed pixels using the algorithms described herein. For example, the initial set of reconstructed pixels may be used to generate a second set of prediction pixels. If intra prediction is in a directional mode with a preconfigured direction, a pixel in each continuing set of prediction pixels is one position behind a corresponding pixel in the preceding set of prediction pixels along that direction. For example, a pixel in the second set of prediction pixels is one position behind a corresponding pixel in the initial set of prediction pixels.
In step 1150, a continuing set of residual pixels may be generated in a manner similar to the initial set of residual pixels. Next, in block 1160, the method 1100 may determine whether more prediction pixels need to be computed for the current block. If the condition in block 1160 is met, the method 1100 may proceed to step 1170; otherwise, the method 1100 may end. In step 1170, a continuing set of reconstructed pixels may be generated, which may be used again to generate continuing sets of prediction pixels until all prediction pixels for the current block have been computed. It should be understood that the method 1100 may include only a portion of all necessary coding steps; other steps, such as scanning, encoding, transmitting, and filtering of residual pixels, may also be incorporated into the encoding process wherever appropriate.
The schemes described above may be implemented on a general-purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it.
The secondary storage 1304 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an over-flow data storage device if the RAM 1308 is not large enough to hold all working data. The secondary storage 1304 may be used to store programs that are loaded into the RAM 1308 when such programs are selected for execution. The ROM 1306 is used to store instructions and perhaps data that are read during program execution. The ROM 1306 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage 1304. The RAM 1308 is used to store volatile data and perhaps to store instructions. Access to both the ROM 1306 and the RAM 1308 is typically faster than to the secondary storage 1304.
The transmitter/receiver 1312 may serve as an output and/or input device of the computer system 1300. For example, if the transmitter/receiver 1312 is acting as a transmitter, it may transmit data out of the computer system 1300. If the transmitter/receiver 1312 is acting as a receiver, it may receive data into the computer system 1300. The transmitter/receiver 1312 may take the form of modems, modem banks, Ethernet cards, universal serial bus (USB) interface cards, serial interfaces, token ring cards, fiber distributed data interface (FDDI) cards, wireless local area network (WLAN) cards, radio transceiver cards such as code division multiple access (CDMA), global system for mobile communications (GSM), long-term evolution (LTE), worldwide interoperability for microwave access (WiMAX), and/or other air interface protocol radio transceiver cards, and other well-known network devices. The transmitter/receiver 1312 may enable the processor 1302 to communicate with the Internet or one or more intranets. I/O devices 1310 may include a video monitor, liquid crystal display (LCD), touch screen display, or other type of video display for displaying video, and may also include a video recording device for capturing video. I/O devices 1310 may also include one or more keyboards, mice, track balls, or other well-known input devices.
It is understood that by programming and/or loading executable instructions onto the computer system 1300, at least one of the processor 1302, the RAM 1308, and the ROM 1306 are changed, transforming the computer system 1300 in part into a particular machine or apparatus (e.g., a video codec having the novel functionality taught by the present disclosure). It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and numbers of units to be produced rather than any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a design that is stable that will be produced in large volume may be preferred to be implemented in hardware, for example in an application specific integrated circuit (ASIC), because for large production runs the hardware implementation may be less expensive than the software implementation. Often a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an application specific integrated circuit that hardwires the instructions of the software. In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus.
While several embodiments have been provided in the present disclosure, it may be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and may be made without departing from the spirit and scope disclosed herein.
The present application is a continuation of U.S. patent application Ser. No. 13/668,094 filed Nov. 2, 2012 by Wen Gao, et al., and entitled “Differential Pulse Code Modulation Intra Prediction for High Efficiency Video Coding,” and claims priority to U.S. Provisional Patent Application No. 61/556,014 filed Nov. 4, 2011 by Wen Gao, et al., and entitled “Lossless Coding Tools for Compound Video,” each of which is incorporated herein by reference as if reproduced in its entirety.
Number | Date | Country
---|---|---
20160112720 A1 | Apr 2016 | US

Number | Date | Country
---|---|---
61556014 | Nov 2011 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 13668094 | Nov 2012 | US
Child | 14978426 | | US