A media player may output moving images to a display device. For example, a media player might retrieve locally stored image information or receive a stream of image information from a media server (e.g., a content provider might transmit a stream that includes high-definition image frames to a television, a set-top box, or a digital video recorder through a cable or satellite network). In some cases, the image information is encoded to reduce the amount of data used to represent the image. For example, an image might be divided into smaller image portions, such as macroblocks, so that information encoded with respect to one image portion does not need to be repeated with respect to another image portion (e.g., because neighboring image portions may frequently have similar color and brightness characteristics). As a result, information about neighboring image portions may need to be locally stored and accessed by the media player when a particular image portion is decoded. As the size and shape of image portions become more complex, however, storing information about these neighboring image portions might require a significant amount of storage space or be otherwise impractical.
A media player may receive image information, decode the information, and output a signal to a display device. For example, a Digital Video Recorder (DVR) might retrieve locally stored image information, or a set-top box might receive a stream of image information from a remote device (e.g., a content provider might transmit a stream that includes high-definition image frames to the set-top box through a cable or satellite network).
An encoder 114 may reduce the amount of data that is used to represent image content 112 before the data is transmitted by a transmitter 116 as a stream of image information. As used herein, information may be encoded and/or decoded in accordance with any of a number of different protocols. For example, image information may be processed in connection with International Telecommunication Union-Telecommunications Standardization Sector (ITU-T) recommendation H.264 entitled “Advanced Video Coding for Generic Audiovisual Services” (2004) or the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Motion Picture Experts Group (MPEG) standard entitled “Advanced Video Coding (Part 10)” (2004). As other examples, image information may be processed in accordance with ISO/IEC document number 14496 entitled “MPEG-4 Information Technology—Coding of Audio-Visual Objects” (2001) or the MPEG-2 protocol as defined by ISO/IEC document number 13818-1 entitled “Information Technology—Generic Coding of Moving Pictures and Associated Audio Information” (2000).
An image may be divided into smaller image portions, and information encoded with respect to one image portion might be re-used with respect to another image portion. As a result, an output engine 122 at the media player 120 may store information about neighboring portions into, and access that information from, a block-based parameter buffer 124 while decoding a received stream of image information. The block-based parameter buffer 124 might comprise, for example, a memory structure located locally at, or external to, the output engine 122.
Consider, for example, H.264 image information. As illustrated in FIG. 2, an image may be divided into a number of macroblocks 210 (e.g., 16×16 sample areas).
When a particular macroblock 210 is being decoded and/or decompressed, information about that macroblock 210 might therefore be derived using a predicted value from one or more neighboring blocks. In some cases, a predicted parameter is derived from a single neighboring block's parameter while in other cases it is derived from parameters associated with multiple neighboring blocks. A difference between the predicted value and the actual value may be determined from the received stream of image information and then be used by the output engine 122 to generate an output that represents the original image content 112.
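For a concrete, if simplified, illustration of this pattern: H.264 predicts many motion vector components as the median of the corresponding components from neighboring blocks and then adds a difference decoded from the bitstream. The following C sketch shows that reconstruction; the function names are illustrative only and do not appear in the embodiments described herein.

/* Illustrative sketch: reconstruct a motion-vector component from a
 * prediction derived from neighboring blocks plus a decoded difference.
 * H.264 predicts many motion vectors as the median of the left (A),
 * top (B), and top-right (C) neighbors; names here are hypothetical. */
static int median3(int a, int b, int c)
{
    if (a > b) { int t = a; a = b; b = t; }  /* ensure a <= b        */
    if (b > c) { b = c; }                    /* b = min(max(a,b), c) */
    return (a > b) ? a : b;                  /* middle value         */
}

/* mvd is the difference parsed from the received stream. */
int reconstruct_mv_component(int mv_a, int mv_b, int mv_c, int mvd)
{
    int predicted = median3(mv_a, mv_b, mv_c);
    return predicted + mvd;  /* actual = prediction + decoded difference */
}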
As a result, information about neighboring macroblocks may be stored and accessed while a particular macroblock 210 is being decoded. As the size and partitioning of macroblocks become more complex, however, storing information about neighboring macroblocks might require a significant amount of storage space or be otherwise impractical.
For example, an entire macroblock might be encoded as a single 16×16 partition.
Depending on the original image, a macroblock might instead be partitioned into smaller areas, such as two 16×8 or two 8×16 macroblock partitions.
More complex areas of a display can be further divided into 8×8 sub-macroblocks.
Each of these sub-macroblocks can be further divided into sub-macroblock partitions (e.g., 8×4, 4×8, or 4×4 sample areas).
Note that different types of image parameters may be defined with respect to different size areas of a macroblock. For example, some types of image parameters might always apply to a whole macroblock, other types might apply to a particular sub-macroblock (or macroblock partition), and still others might apply to a sub-macroblock partition.
To decode the image information, the output engine 910 may store information into and/or access information from a local context buffer. For example, the context buffer might store H.264 parameters associated with macroblocks A, B, C, and D adjacent to the macroblock (“*”) currently being decoded. The context buffer may also store information about additional macroblocks (e.g., an entire row of macroblock information might be stored in the context buffer). According to some embodiments, the context buffer is formed on the same die as the output engine 910. A memory unit 920 external to the output engine 910 may also be provided and may store information in accordance with any of the embodiments described herein. The external memory unit 920 may be, for example, a Double Data Rate (DDR) Synchronous Dynamic Random Access Memory (SDRAM) unit.
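As a loose illustration (not a definitive implementation), the following C sketch shows how a one-row context buffer might supply the A, B, C, and D neighbor contexts; the type and field names are hypothetical, and boundary-availability checks are omitted for brevity.

/* Hypothetical per-macroblock context record (fields are placeholders). */
typedef struct {
    unsigned char pmode;     /* intra/inter prediction mode information */
    unsigned char intrlcmb;  /* frame/field flag                        */
    /* ... additional context areas would follow ...                   */
} mb_context_t;

/* One row of contexts, plus one saved "left" and one saved "top-left"
 * entry, is enough to recover the A (left), B (top), C (top-right),
 * and D (top-left) neighbors of the macroblock being decoded. */
typedef struct {
    mb_context_t row[120];   /* enough for a 1920-sample-wide frame    */
    mb_context_t left;       /* A: context of the previous macroblock  */
    mb_context_t top_left;   /* D: saved before row[x] is overwritten  */
    int mb_width;
} context_row_t;

void fetch_neighbors(const context_row_t *ctx, int x,
                     mb_context_t *a, mb_context_t *b,
                     mb_context_t *c, mb_context_t *d)
{
    *a = ctx->left;                                    /* left      */
    *b = ctx->row[x];                                  /* top       */
    *c = ctx->row[(x + 1 < ctx->mb_width) ? x + 1 : x];/* top-right */
    *d = ctx->top_left;                                /* top-left  */
}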
According to some embodiments, the context buffer and/or the external memory unit 920 includes a first context area 921 associated with a first type of parameter. In particular, the macroblock being decoded is potentially divisible into a first set of sub-portions, and different values of the first parameter type may be associated with different sub-portions of the first set. By way of example, parameters that can be specified to a sub-macroblock level (e.g., a particular 8×8 sample area) might be stored in the first context area 921.
Similarly, the context buffer and/or the external memory unit 920 includes a second context area 922 associated with a second type of parameter for that macroblock. In this case, the macroblock can also be divided into a second set of sub-portions, wherein different values of the second parameter type may be associated with different sub-portions of the second set. Moreover, the number of sub-portions in the second set may be greater than the number of sub-portions in the first set. For example, parameters that can be specified to a sub-macroblock partition level (e.g., a particular 4×4 sample area) might be stored in the second context area 922. Note that although two context areas 921, 922 are illustrated in FIG. 9, embodiments may be associated with any number of context areas.
Moreover, the context areas 921, 922 may not be contiguous. For example, the first context area 921 might be physically stored between portions of the second context area 922. In addition, the first context area of one macroblock might be physically stored remote from the first context area of another macroblock.
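For illustration only, the following C sketch shows one hypothetical structure-of-arrays layout that would produce such a non-contiguous arrangement: all of the smaller context areas for a row of macroblocks are kept together, separate from all of the larger context areas, so a single macroblock's two areas are not adjacent in memory. The type names, array sizes, and parameter choices are assumptions, not part of the embodiments above.

#define MAX_MB_PER_ROW 120

/* Hypothetical structure-of-arrays layout: all first-area (e.g.,
 * 8x8-granularity) values for a row are stored together, and all
 * second-area (e.g., 4x4-granularity) values are stored together.
 * A given macroblock's two context areas are therefore separated in
 * memory, and the areas differ in size (4 vs. 16 contexts). */
typedef struct {
    short area1[MAX_MB_PER_ROW][4];   /* e.g., reference indices       */
    short area2[MAX_MB_PER_ROW][16];  /* e.g., motion vector components */
} row_context_areas_t;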
The output engine 910 may then decode received image information (e.g., received from a remote media server or a local storage device) in accordance with information in the context buffer (e.g., based in part on parameter values from context areas of neighboring macroblocks). According to some embodiments, the context buffer is located on the same die as the output engine 910.
At 1002, a first value of a first parameter type is received. The first parameter type might be associated with, for example, a macroblock representing a portion of an image. Moreover, the macroblock is divisible into a first set of sub-portions (e.g., sub-macroblocks), and different values of the first parameter type might be associated with different sub-portions of the first set.
At 1004, a second value of a second parameter type is received for the macroblock. The macroblock is also divisible into a second set of sub-portions (e.g., sub-macroblock partitions), and different values of the second parameter type might be associated with different sub-portions of the second set. In addition, sub-portions of the first set represent a larger area of the image as compared to sub-portions of the second set.
At 1006, the first and second parameter types are mapped into a context buffer. In particular, the context buffer has a first context area associated with the first parameter type and a second context area associated with the second parameter type. Moreover, the first context area is adapted to store fewer values for each parameter type as compared to the second context area.
At 1008, the first value is stored into the first context area and the second value is stored into the second context area based on the mapping. According to some embodiments, information in the context buffer is then used to decode the macroblock and to generate an output associated with the image.
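As a rough C sketch of steps 1002 through 1008 (the types, sizes, and names below are hypothetical, with the context counts chosen to match the examples in this description):

/* Two context areas for one macroblock: the first area holds fewer
 * contexts per parameter than the second, as described above. */
typedef struct {
    short first_area[4];    /* first parameter type: 4 contexts   */
    short second_area[16];  /* second parameter type: 16 contexts */
} mb_context_areas_t;

/* Steps 1002/1004: values are received; steps 1006/1008: each value
 * is mapped to its own context area and stored at the sub-portion
 * index to which it applies. */
void store_parameters(mb_context_areas_t *buf,
                      short first_value, int first_sub,   /* 0..3  */
                      short second_value, int second_sub) /* 0..15 */
{
    buf->first_area[first_sub]   = first_value;
    buf->second_area[second_sub] = second_value;
}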
The first type of parameter in the context buffer 1100 is referred to herein as a “group I” parameter. A group I parameter may be, for example, a parameter that can only be defined on a macroblock basis. That is, a single value for that parameter will always apply to an entire macroblock. As a result, only a single value or “context” for each parameter of this type needs to be stored in the context buffer 1100. With respect to H.264 decoding, examples of group I parameters might include SKIPMB (e.g., the macroblock is to be skipped), PMODE (e.g., intra or inter prediction mode information), and/or INTRLCMB (e.g., frame or field mode information associated with the macroblock).
The second type of parameter stored in the context buffer 1100 is referred to herein as a “group II” parameter. A group II parameter might be, for example, a parameter that can apply to samples that map to an 8×8 area irrespective of actual macroblock partitioning. That is, up to four different values for this type of parameter can apply to a macroblock. Thus, four values or “contexts” for each of these parameters are stored in the context buffer 1100 (e.g., cntx_0 through cntx_3). With respect to H.264 decoding, examples of group II parameters might include a reference index and/or an inference flag.
The third type of parameter stored in the context buffer 1100 is referred to herein as a “group III” parameter. A group III parameter might be, for example, a parameter that can apply to samples that map to a 4×4 area irrespective of actual macroblock partitioning. That is, up to sixteen values for that parameter could apply to a macroblock. Thus, sixteen values or “contexts” for each of these parameters are stored in the context buffer 1100 (e.g., cntx_0 through cntx_15). With respect to H.264 decoding, examples of group III parameters might include motion vectors in the x or y direction, intra prediction mode information, and/or a coded bit flag.
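The three groups might be laid out along the following lines. This C sketch is only an illustration, with field widths and names chosen for readability rather than taken from any particular implementation; the parameter names follow the examples above.

/* Hypothetical per-macroblock context buffer: one context per group I
 * parameter, four per group II parameter, sixteen per group III. */
typedef struct {
    /* group I: one value for the whole macroblock */
    unsigned char skip_mb;       /* SKIPMB   */
    unsigned char pmode;         /* PMODE    */
    unsigned char intrlc_mb;     /* INTRLCMB */

    /* group II: one value per 8x8 area (cntx_0 .. cntx_3) */
    signed char   ref_idx[4];    /* reference index */
    unsigned char infer_flag[4]; /* inference flag  */

    /* group III: one value per 4x4 area (cntx_0 .. cntx_15) */
    short         mv_x[16];      /* motion vector, x direction */
    short         mv_y[16];      /* motion vector, y direction */
    unsigned char coded_flag[16];
} h264_mb_context_t;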
By partitioning the parameter or context buffer 1100 in this way, embodiments may reduce the size of the storage structures that are used to facilitate decoding. Some ways to map parameter values into and out of the context buffer 1100 will now be described.
Although four different values of a group II parameter could potentially apply to a single macroblock, in some cases a single value will apply to the whole macroblock (e.g., when the macroblock is associated with a background area of an image and a larger partition is chosen). In this case, the single value will be stored into all four contexts in the context buffer (e.g., as illustrated by “(1-3)” in the mapping 1300 illustrated in FIG. 13).
Similarly, a single group II parameter value might apply to an entire macroblock partition. Consider, for example, a macroblock divided into two 16×8 macroblock partitions: the value associated with each partition might be stored into the two contexts that map to that partition.
When the macroblock is divided into four different sub-macroblocks, each of the four contexts for a group II parameter may store a different value. For example, the value associated with each 8×8 sub-macroblock might be stored into the single context that maps to that sub-macroblock.
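A hypothetical C helper for this group II mapping might look as follows, assuming the four contexts are numbered 0 (upper left), 1 (upper right), 2 (lower left), and 3 (lower right); the enum and function names are illustrative.

typedef enum { P16x16, P16x8, P8x16, P8x8 } mb_partition_t;

/* vals holds one group II value per partition, in partition order. */
void fill_group2(short cntx[4], mb_partition_t part, const short *vals)
{
    int i;
    switch (part) {
    case P16x16:  /* one value replicated into all four contexts */
        cntx[0] = cntx[1] = cntx[2] = cntx[3] = vals[0];
        break;
    case P16x8:   /* top partition -> {0,1}, bottom -> {2,3} */
        cntx[0] = cntx[1] = vals[0];
        cntx[2] = cntx[3] = vals[1];
        break;
    case P8x16:   /* left partition -> {0,2}, right -> {1,3} */
        cntx[0] = cntx[2] = vals[0];
        cntx[1] = cntx[3] = vals[1];
        break;
    case P8x8:    /* one value per 8x8 sub-macroblock */
        for (i = 0; i < 4; i++) cntx[i] = vals[i];
        break;
    }
}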
Although sixteen different values for a group III parameter could potentially apply to a single macroblock, in some cases a single value will apply to the whole macroblock (e.g., when the macroblock is associated with a background area of an image). In this case, the single value will be stored into all sixteen contexts in the context buffer (e.g., as illustrated by “(1-15)” in the mapping 1900 illustrated in FIG. 19).
Similarly, a single group III parameter value might apply to an entire macroblock partition. Consider, for example, a macroblock divided into two 16×8 macroblock partitions: the value associated with each partition might be stored into the eight contexts that map to that partition.
A group III parameter might also be defined to a sub-macroblock level. In this case, as illustrated by the mapping 2200 of FIG. 22, the value associated with each 8×8 sub-macroblock might be stored into the four contexts that map to that sub-macroblock.
Finally, consider a macroblock in which each sub-macroblock has itself been partitioned. The upper left and upper right sub-macroblocks might map into contexts 0 through 3 and 4 through 7, respectively.
The lower left sub-macroblock has been partitioned into sub-macroblock partition 8 (mapping into contexts 8 and 10) and sub-macroblock partition 9 (mapping into contexts 9 and 11). The lower right sub-macroblock has been partitioned into four sub-macroblock partitions 12 through 15, each being stored in the associated context. At worst, a single group III parameter will be associated with sixteen different values (not illustrated).
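A hypothetical C helper for the group III mapping of a single sub-macroblock might look as follows. It assumes the four 4×4 contexts of a sub-macroblock are numbered base+0 (top left), base+1 (top right), base+2 (bottom left), and base+3 (bottom right), which matches the example above in which 4×8 partition 8 maps into contexts 8 and 10; the names are illustrative.

typedef enum { S8x8, S8x4, S4x8, S4x4 } sub_partition_t;

/* Fill the four 4x4 contexts of one 8x8 sub-macroblock. base is the
 * first context of that sub-macroblock (0, 4, 8, or 12), and vals
 * holds one value per sub-macroblock partition, in partition order. */
void fill_group3_submb(short cntx[16], int base,
                       sub_partition_t part, const short *vals)
{
    int i;
    switch (part) {
    case S8x8:  /* one value covers the whole sub-macroblock */
        cntx[base] = cntx[base+1] = cntx[base+2] = cntx[base+3] = vals[0];
        break;
    case S8x4:  /* top 8x4 -> {base, base+1}, bottom -> {base+2, base+3} */
        cntx[base]   = cntx[base+1] = vals[0];
        cntx[base+2] = cntx[base+3] = vals[1];
        break;
    case S4x8:  /* left 4x8 -> {base, base+2}, right -> {base+1, base+3} */
        cntx[base]   = cntx[base+2] = vals[0];
        cntx[base+1] = cntx[base+3] = vals[1];
        break;
    case S4x4:  /* one value per 4x4 sub-macroblock partition */
        for (i = 0; i < 4; i++) cntx[base + i] = vals[i];
        break;
    }
}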
Note that the mapping described in these examples is only one approach; other mappings between parameter values and contexts may be used.
The system 2400 includes a data storage device 2420, such as an on-chip buffer or an external SDRAM unit, that may operate in accordance with any of the embodiments described herein. For example, the data storage device 2420 may include an overall area storage portion associated with an overall area parameter type for a moving image area (e.g., a macroblock), wherein a single value of the overall area parameter type is to be associated with the image area. The overall area storage portion might, for example, be used to store group I H.264 parameter values as described herein.
The data storage device 2420 may also include a first storage portion associated with a first parameter type for the image area, the image area being potentially divisible into a first set of sub-areas. Note that different values of the first parameter type may be associated with different sub-areas of the first set. The first storage portion might, for example, be used to store group II H.264 parameter values as described herein (e.g., which can apply to a sub-macroblock). The data storage device 2420 may further include a second storage portion associated with a second parameter type for the image area, the image area being potentially divisible into a second set of sub-areas. Moreover, different values of the second parameter type may be associated with different sub-areas of the second set. Note that the number of sub-areas in the second set may be different than the number of sub-areas in the first set. The second storage portion might, for example, be used to store group III H.264 parameter values as described herein (e.g., which can apply to a sub-macroblock partition). The data storage device may store the first, second, and third portions illustrated in FIG. 11.
The system 2400 may further include an output engine 2410, such as an H.264 decoder, to decode a received stream of image information in accordance with information in the data storage device 2420. For example, the output engine 2410 may decode an H.264 macroblock, or portion of an H.264 macroblock, based at least in part on parameters associated with neighboring areas of the display. According to some embodiments, the output engine 2410 generates information that is provided to a display device via a digital output 2430.
The following illustrates various additional embodiments. These do not constitute a definition of all possible embodiments, and those skilled in the art will understand that many other embodiments are possible. Further, although the following embodiments are briefly described for clarity, those skilled in the art will understand how to make any changes, if necessary, to the above description to accommodate these and other embodiments and applications.
For example, although a particular context buffer numbering and mapping scheme has been described herein, embodiments may be associated with any other types of buffer numbering and mapping techniques. For example, a particular decoding approach might include different sized blocks of image information than those that have been described herein as examples.
Moreover, although particular image processing protocols and networks have been used herein as examples (e.g., H.264 and MPEG-4), embodiments may be used in connection with any other type of image processing protocols or networks, such as Digital Terrestrial Television Broadcasting (DTTB) and Community Access Television (CATV) systems.
The several embodiments described herein are solely for the purpose of illustration. Persons skilled in the art will recognize from this description that other embodiments may be practiced with modifications and alterations limited only by the claims.