The present invention relates to information technology in general, and, more particularly, to video decoding and memory management in video decoding systems.
There are techniques, however, for reducing, on average, the number of bytes that must be transmitted. One such technique is known as H.264. In accordance with H.264, some of the pixels in a frame are transmitted explicitly while others are not, but instead are derived or extrapolated from those that are.
To accomplish this, the pixels in the video frame are organized in a hierarchy of data structures. First, the frame is partitioned into a two-dimensional array of 45 by 30 macroblocks, as shown in
All 1350 macroblocks in the frame are established in row-column order, as depicted in
The pixels in a luma block are designated p[x,y] through p[x+3, y+3] as depicted in
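By way of illustration, this hierarchy can be sketched as C data structures. The type names (frame_t, macroblock_t, luma_block_t) are illustrative assumptions rather than terms of the H.264 specification; the 16×16 macroblock dimension is the standard H.264 luma macroblock size.

```c
#include <stdint.h>

#define MB_COLS  45              /* macroblocks per frame row             */
#define MB_ROWS  30              /* macroblock rows per frame             */
#define MB_SIZE  16              /* a macroblock covers 16x16 luma pixels */
#define BLK_SIZE 4               /* intra prediction works on 4x4 blocks  */

/* One 4x4 luma block: the pixels p[x, y] through p[x+3, y+3]. */
typedef struct {
    uint8_t p[BLK_SIZE][BLK_SIZE];
} luma_block_t;

/* One macroblock holds a 4-by-4 grid of 4x4 luma blocks. */
typedef struct {
    luma_block_t blk[MB_SIZE / BLK_SIZE][MB_SIZE / BLK_SIZE];
} macroblock_t;

/* A frame is a 45-by-30 array of macroblocks (1350 in all),
   processed in row-column order. */
typedef struct {
    macroblock_t mb[MB_ROWS][MB_COLS];
} frame_t;
```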
This can wreak havoc on the speed with which the video frame can be decoded, and, therefore, the need exists for a technique for ensuring the expedient decoding of a video frame.
The present invention presents a method for memory management in video decoding systems that avoids some of the costs and disadvantages of video decoding systems in the prior art. Some embodiments of the present invention are especially well-suited for use with the H.264 video decoding standard.
The illustrative embodiment is a memory management technique that controls which data is in the fastest memory available to a processor performing video decoding. In particular, the technique seeks to ensure that the data the processor will need is in the primary memory and expunges data that the processor will not need. The technique is based upon an analysis of predictive video decoding standards, such as H.264. By employing this technique, the illustrative embodiment ensures the expedient decoding of video frames.
The illustrative embodiment comprises: retaining pixel[x−1, y−1] in a first memory until pixel[x, y] has been established; and expunging pixel[x, y] from the first memory before pixel[x+3, y+3] is expunged from the first memory; wherein x ∈ {x : x = 4n} and n ∈ {n : n is a non-negative integer}; and wherein y ∈ {y : y = 4m} and m ∈ {m : m is a non-negative integer}.
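A minimal sketch of this retention discipline is shown below, building on the frame_t type sketched earlier. The helpers retain_in_primary(), expunge_from_primary(), and establish_block() are hypothetical names introduced here for illustration; how pixels are actually moved between memories is left open.

```c
/* Hypothetical helpers; their implementations are not part of this sketch. */
void retain_in_primary(frame_t *f, int x, int y);
void expunge_from_primary(frame_t *f, int x, int y);
void establish_block(frame_t *f, int x, int y);   /* derives p[x,y]..p[x+3,y+3] */

/* Decode the 4x4 block whose top-left pixel is p[x, y], with x = 4n, y = 4m. */
void decode_block(frame_t *f, int x, int y)
{
    /* pixel[x-1, y-1] is a prediction input for this block, so it is
       retained in primary memory until pixel[x, y] has been established. */
    if (x > 0 && y > 0)
        retain_in_primary(f, x - 1, y - 1);

    establish_block(f, x, y);

    /* The block's top-left pixel is not a neighbor of any block decoded
       later, so it is expunged first; the bottom-right pixel p[x+3, y+3]
       remains a prediction neighbor of later blocks and is expunged
       only afterward.                                                     */
    expunge_from_primary(f, x, y);
}
```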
a depicts a graphical illustration of the H.264 Intra_4×4_Horizontal_Up prediction mode.
b depicts a graphical illustration of the H.264 Intra_4×4_Horizontal prediction mode.
c depicts a graphical illustration of the H.264 Intra_4×4_Horizontal_Down prediction mode.
d depicts a graphical illustration of the H.264 Intra_4×4_Diagonal_Down_Right prediction mode.
e depicts a graphical illustration of the H.264 Intra_4×4_Vertical_Right prediction mode.
f depicts a graphical illustration of the H.264 Intra_4×4_Vertical prediction mode.
g depicts a graphical illustration of the H.264 Intra_4×4_Vertical_Left prediction mode.
h depicts a graphical illustration of the H.264 Intra_4×4_Diagonal_Down_Left prediction mode.
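To make the dependence on neighboring pixels concrete, the sketch below implements the Intra_4×4_Vertical mode depicted above, in which every pixel of the block is a copy of the reconstructed pixel directly above its column. The function and parameter names are illustrative and not drawn from any reference decoder.

```c
#include <stdint.h>

/* Intra_4x4_Vertical: each pixel of the 4x4 block is predicted by copying
   the reconstructed pixel directly above its column.  `above' holds the
   four neighbor pixels p[x..x+3, y-1], which must therefore still be
   resident in fast memory when the block at (x, y) is established.        */
static void intra4x4_vertical(uint8_t pred[4][4], const uint8_t above[4])
{
    for (int row = 0; row < 4; row++)
        for (int col = 0; col < 4; col++)
            pred[row][col] = above[col];
}
```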
Video decoding system 900 comprises: processor 901, primary memory 911, secondary memory 912, tertiary memory 913, and memory management unit 902, interconnected as shown.
Processor 901 is a general-purpose processor that can read and write to primary memory 911 and that performs the functionality described herein.
Primary memory 911 is the fastest addressable memory in the system, and processor 901 can access data within primary memory 911 in one clock cycle. Primary memory 911 is not a content-addressable memory with a hardwired cache-retention discipline; in other words, it is not a cache that is invisible to the system programmer. In accordance with the illustrative embodiment, processor 901 and primary memory 911 are on the same monolithic die. In accordance with the illustrative embodiment, primary memory 911 has its own address space, which is distinct from the address space of secondary memory 912. It will be clear to those skilled in the art, however, how to make and use alternative embodiments of the present invention in which primary memory 911 and secondary memory 912 are in the same address space.
Secondary memory 912 is the second-fastest addressable memory in the system. Processor 901 can access data within secondary memory 912 in approximately 100 clock cycles. In accordance with the illustrative embodiment, secondary memory 912 is semiconductor memory, but secondary memory 912 is not on the same die as processor 901. This accounts for the substantial difference in speed between secondary memory 912 and primary memory 911.
Tertiary memory 913 is the slowest addressable memory. Processor 901 can access data within tertiary memory 913 in approximately 10,000 clock cycles. In accordance with the illustrative embodiment, tertiary memory 913 is a mass storage device, such as a hard drive, and this accounts for the substantial difference in speed between tertiary memory 913 and secondary memory 912.
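Because primary memory 911 occupies its own address space rather than acting as a transparent cache, residency in it is controlled by explicit copies under software control. The sketch below illustrates the idea; the base addresses and the function name are placeholders introduced for illustration, not actual parameters of any particular hardware platform.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Placeholder base addresses for the two address spaces; real values would
   come from the particular hardware platform.                              */
#define PRIMARY_BASE   ((uint8_t *)0x10000000u)   /* on-die, 1-cycle access     */
#define SECONDARY_BASE ((uint8_t *)0x80000000u)   /* off-die, ~100-cycle access */

/* Copy `len' bytes of frame data from secondary memory 912 (at offset `src')
   into primary memory 911 (at offset `dst'), making them available to
   processor 901 at single-cycle speed.                                      */
static uint8_t *fetch_to_primary(size_t src, size_t dst, size_t len)
{
    memcpy(PRIMARY_BASE + dst, SECONDARY_BASE + src, len);
    return PRIMARY_BASE + dst;
}
```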
Primary memory 911 costs substantially more, per bit, than does secondary memory 912, and secondary memory 912 costs substantially more, per bit, than does tertiary memory 913. For this reason, primary memory 911 comprises substantially fewer bytes than secondary memory 912, and secondary memory 912 comprises substantially fewer bytes than tertiary memory 913.
When processor 901 seeks a word of data and the word is in primary memory 911, processor 901 can continue processing very quickly. In contrast, when processor 901 seeks a word of data and the word is not in primary memory 911, processor 901 waits until the word can be retrieved. Given that secondary memory 912 is 1/100th of the speed of primary memory 911, processing can become very slow if processor 901 must regularly wait for data to be retrieved from secondary memory 912.
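The penalty can be quantified with a standard effective-access-time calculation. In the sketch below, the 1-cycle and 100-cycle figures are the latencies given above, while the hit rates are purely illustrative assumptions.

```c
#include <stdio.h>

/* Effective access time (in cycles) given the fraction of accesses that hit
   primary memory 911 (1 cycle) versus fall through to secondary memory 912
   (approximately 100 cycles).                                               */
static double effective_access_cycles(double hit_rate)
{
    return hit_rate * 1.0 + (1.0 - hit_rate) * 100.0;
}

int main(void)
{
    /* Illustrative hit rates: even a modest miss rate dominates the cost. */
    printf("99%% hits: %.2f cycles/access\n", effective_access_cycles(0.99)); /* 1.99  */
    printf("90%% hits: %.2f cycles/access\n", effective_access_cycles(0.90)); /* 10.90 */
    printf("50%% hits: %.2f cycles/access\n", effective_access_cycles(0.50)); /* 50.50 */
    return 0;
}
```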
One solution is to ensure that the size of primary memory 911 is large because this reduces, probabilistically, the frequency with which a desired word is not in primary memory 911. This approach is problematic, however, because it is expensive and because some applications, such as video decoding, use such large quantities of data that any reasonably-sized primary memory would be ineffective.
To overcome this problem, the illustrative embodiment employs memory management unit 902 which controls what data is in primary memory 911 and what is not. In other words, memory management unit 902 retains in primary memory 911 data that will be needed by processor 901 soon and expunges from primary memory 911 data that will not be needed by processor 901 again soon. By retaining in primary memory 911 data that will be needed soon, the illustrative embodiment reduces the frequency and likelihood that processor 901 must wait until data can be retrieved from secondary memory 912, and by expunging from primary memory 911 data that will not be needed by processor 901 again soon, the illustrative embodiment frees up space in primary memory 911 for data that will be needed by processor 901 soon.
In many cases, a memory management unit cannot predict what data the processor will need again soon and what data it will not, but there are applications, such as video decoding, in which reasonable predictions can be made.
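For 4×4 intra prediction, such a prediction follows directly from the neighborhoods shown in the figures above: once a block has been established, only its bottom row and its rightmost column can still serve as prediction inputs for blocks decoded later. The predicate below sketches this; it is a simplification that considers 4×4 intra prediction only and ignores other H.264 tools, such as inter prediction and deblocking, that would enlarge the set of pixels worth retaining.

```c
#include <stdbool.h>

/* Returns true if pixel p[px, py] of the 4x4 block whose top-left pixel is
   p[bx, by] can still be read as an intra-prediction neighbor by a block
   decoded later in row-column order.  Only the block's bottom row (read by
   blocks in the row below) and its rightmost column (read by blocks to its
   right and below-right) qualify; interior pixels can be expunged as soon
   as the block itself has been established.                                */
static bool needed_by_later_blocks(int px, int py, int bx, int by)
{
    return (py == by + 3) || (px == bx + 3);
}
```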
As can be seen in
It is to be understood that the above-described embodiments are merely illustrative of the present invention and that many variations of the above-described embodiments can be devised by those skilled in the art without departing from the scope of the invention. It is therefore intended that such variations be included within the scope of the following claims and their equivalents.