Many different types of memory devices are used to read and write media data for various computers and similar systems. Decoding high resolution media data is a primary memory bandwidth consumer for a System-on-Chip (SoC). In particular, memory bandwidth for SoCs is often the bottleneck that limits expansion of system use. As such, increasing efficiency of memory bandwidth usage for both compression encoders that write media data to memory and compression decoders that read media data from memory is a common target for improving system performance.
Memory bandwidth refers to a rate at which data can be read from or written to (stored in) memory devices, such as dynamic random access memory (DRAM) devices, static random access memory (SRAM) devices, electrically erasable programmable read-only memory (EEPROM), NOR flash memory, NAND flash memory, and so on. Memory bandwidth is commonly expressed in units of bytes per second for systems implementing 8-bit bytes.
Approaches to increasing efficiency of a system's memory bandwidth usage include applying transforms to media data followed by entropy coding the transformed media data. However, these approaches remain inefficient and there is a desire to reduce memory bandwidth used when writing media data to, or reading media data from, system memory.
This summary is provided to introduce subject matter that is further described below in the Detailed Description and Drawings. Accordingly, this Summary should not be considered to describe essential features nor used to limit the scope of the claimed subject matter.
A tile-based run-length compression and decompression system is described that reduces memory bandwidth when encoding media data to memory and decoding media data from memory. As discussed herein, “media data” refers to video data, image data, graphics data, or combinations thereof. The described system employs various pre-processing modules and a run-length coding module for compressing frames of media data to compressed bits that reduce memory bandwidth when the media data is written to memory. Pre-processing modules include a Differential Pulse Code Modulation (DPCM) module, a gray coding module, a bitplane decomposition module, and a bitstream extractor module, which are configured to prepare media data frames for run-length encoding. Similarly, the described system employs a run-length decoding module and various post-processing modules for decompressing compressed bits of media data from memory into media data frames in a manner that reduces memory bandwidth when the media data is read from memory. Post-processing modules include a bitplane organizer module, a pixel assembly module, an inverse gray coding module, and a reverse DPCM module, which are configured to construct media data frames from compressed media data bits that have been decoded by the run-length decoding module.
The details of one or more implementations are set forth in the accompanying figures and the detailed description below. In the figures, the left-most digit of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different instances in the description and the figures indicates like elements.
This disclosure describes apparatuses and techniques for run-length compression and decompression of media content that reduce memory bandwidth usage for the system in which they are incorporated when reading media content from or writing media content to memory. The described run-length compression system and techniques implement a series of processing steps followed by run-length encoding to compress one or more media frames into a plurality of compressed bits in the form of a run-length encoded bitstream. The described run-length decompression system and techniques implement run-length decoding followed by a series of processing steps to decompress a compressed run-length encoded bitstream into one or more media frames. The described run-length compression and decompression systems can be used to compress media content from embedded SoC memory to external memory and/or vice versa. In various implementations, processing steps applied by run-length compression and decompression systems include one or more of DPCM steps, gray coding steps, bitplane decomposition steps, pixel assembly steps, bitstream extraction steps, or bitplane organization steps. The processing steps, run-length encoding, and run-length decoding techniques discussed herein increase system compression ratios and reduce memory bandwidth usage for a system implementing the techniques when reading media data from, or writing media data to, system memory. The described run-length compression and decompression systems can be used to compress and decompress entire pieces of media content, or portions thereof. For example, in a scenario where only a subset of a plurality of frames of a piece of media content needs to be edited, played back, and so on, the described run-length compression and decompression systems can be used to compress and/or decompress only the subset of the plurality of frames. By only compressing and/or decompressing a subset of media content, the processing steps, run-length encoding, and run-length decoding techniques discussed herein further increase system compression ratios and reduce memory bandwidth usage when reading media data from, or writing media data to, system memory.
DPCM module 104 is configured to receive one or more media content frames 102 that are to be compressed and written to memory. DPCM module 104 is configured to organize the one or more media content frames 102 into one or more tiles. As discussed herein, a tile represents a local region of media content. For example, DPCM module 104 may organize a single frame of the one or more media content frames 102 into a plurality of tiles. A tile size of an individual tile is measured in terms of pixels of media content. For example, in accordance with one or more embodiments, DPCM module 104 is configured to organize media content frames 102 into individual tiles that are each sized 64 pixels×4 pixels. However, any suitable tile shape and tile size can be used. By organizing media content frames 102 into individual tiles, DPCM module 104 improves data locality when compressing the media content. For example, access to memory is fastest when accessed addresses are sequential within one memory page. By organizing media content frames 102 into tiles, DPCM module 104 creates a local region of a media content frame, in two dimensions, that corresponds to a linear sequence of addresses in memory, thereby enabling fast access to a local media content region stored in memory. Although discussed herein as being performed by DPCM module 104, organizing media content frames 102 into tiles may be performed before the media content frames are received by the DPCM module 104, such as by a computing device implementing the run-length media compression system 100.
After media content frames have been organized into tiles, DPCM module 104 is configured to reduce magnitudes of pixel values in the individual tiles using DPCM. To accomplish this, DPCM module 104 selects a hard-coded starting value as a first previous value. For example, the hard-coded starting value may be a pixel value representing an average color, such as a mid-gray. Alternatively, the hard-coded starting value may be a pixel value representing black. In accordance with one or more embodiments where pixel color values for a tile are known, the hard-coded starting value may be a pixel value representing the most frequent color in the tile or a pixel value representing the average color of the tile. DPCM module 104 then loops through each pixel in a tile. During the loop, the DPCM module 104 replaces an individual pixel's value with the difference between the current value of the individual pixel and the value of the previous pixel. If the individual pixel is the first pixel in the tile, DPCM module 104 selects the hard-coded starting value as the previous pixel value. By applying DPCM to individual pixels of each tile, DPCM module 104 reduces entropy in higher-order pixel bits. The DPCM module 104 may process all tiles or only some tiles. After the DPCM module 104 processes the tiles, run-length media compression system 100 gray codes each tile processed by DPCM module 104 using gray coding module 106.
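By way of a simplified, non-limiting sketch only, DPCM over a single tile might be expressed in C as follows. The tile dimensions (64 pixels by 4 pixels), the 8-bit pixel depth, and the mid-gray starting value of 128 are assumptions chosen for illustration rather than requirements of the described system:

#include <stdint.h>

#define TILE_WIDTH  64
#define TILE_HEIGHT 4
#define DPCM_START  128  /* assumed hard-coded starting value: mid-gray for 8-bit pixels */

/* Replace each pixel with its difference from the previous pixel (modulo 256),
 * which reduces value magnitudes, and thus the entropy of higher-order bits,
 * in locally smooth regions of the tile. */
void dpcm_encode_tile(uint8_t tile[TILE_HEIGHT][TILE_WIDTH])
{
    uint8_t prev = DPCM_START;
    for (int r = 0; r < TILE_HEIGHT; r++) {
        for (int c = 0; c < TILE_WIDTH; c++) {
            uint8_t cur = tile[r][c];
            tile[r][c] = (uint8_t)(cur - prev);  /* wraps modulo 256 */
            prev = cur;
        }
    }
}

Subtracting modulo 256 keeps the differences in the same 8-bit storage as the original pixels, so the tile can be transformed in place.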
Gray coding module 106 is representative of functionality that applies gray coding to each pixel in a tile, which reduces Hamming distances between different pixel values for a tile. The Hamming distance between two pixels refers to the number of bit positions in which the binary values of the two pixels differ. By processing individual tiles using gray coding, gray coding module 106 reduces bad or undesirable compression spots near certain value boundaries. For example, bad or undesirable compression spots may occur when there is a small variation between two pixel values but there is a large number of bit flips, such as the variation between hexadecimal values 0x7f and 0x80. 0x7f corresponds to 127 when expressed in decimal format and 01111111 when expressed in binary format. Conversely, 0x80 corresponds to 128 when expressed in decimal format and 10000000 when expressed in binary format. Thus, although there is a small decimal variation between the two pixel values, there is a significant number of bit flips, which creates a bad or undesirable compression spot.
In accordance with one or more embodiments, gray coding module 106 is configured to reduce the Hamming distance between two pixel values using Binary Reflected Gray Code, represented, for example, by the following C code:
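/* Illustrative binary-reflected Gray code encoding: adjacent integer values map
 * to codewords that differ in exactly one bit position. */
unsigned int gray_encode(unsigned int value)
{
    return value ^ (value >> 1);
}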
The above code is merely illustrative, and gray coding module 106 can be configured to perform gray coding using a variety of approaches. Gray coding module 106 performs gray coding for each pixel of each tile generated from the media content frames 102. After each tile has been gray coded, run-length media compression system 100 decomposes bits of each tile into bitplanes using the bitplane decomposition module 108.
Bitplane decomposition module 108 is representative of functionality that separates bits of tile pixel values into bitplanes. As discussed herein, an individual bitplane represents a bit integer value for a specific bit integer position for each pixel in a tile. Functionality of bitplane decomposition module 108 is discussed in further detail below with respect to
Bitstream extractor module 110 is representative of functionality that converts two-dimensional bitplanes into a one-dimensional bitstream. Bitstream extractor module 110 converts two-dimensional bitplanes into a one-dimensional bitstream by extracting bitplane pixel values from each bitplane. The order and method in which bitstream extractor module 110 extracts bitplane pixel values can vary in accordance with one or more embodiments. Functionality of bitstream extractor module 110 is discussed in further detail below with respect to
Run-length encoding module 112 is representative of functionality that identifies runs of identical bits in a bitstream that are longer than or equal to a given minimum length of bits and generates an output bitstream that does not contain the runs of identical bits. As discussed herein, a run of identical bits may also be referred to as a repeat run. Functionality of run-length encoding module 112 is discussed in further detail below with respect to
Run-length decoding module 204 is representative of functionality that receives (i.e., reads) compressed media content bits of the input bitstream 202 from memory and decompresses the received media content bits. Run-length decoding module 204 decompresses media content bits by performing the reverse operations of run-length encoding module 112 illustrated in
Bitplane organizer module 206 is representative of functionality that converts a one-dimensional bitstream into two-dimensional bitplanes. Using a similar order and method to an order and method that was used to generate the one-dimensional bitstream, bitplane organizer module 206 sequentially extracts bitstream values to create corresponding bitplanes for media content associated with the decompressed media bits. Thus, bitplane organizer module 206 is configured to perform the reverse operations of bitstream extractor module 110 illustrated in
Pixel assembly module 208 is representative of functionality that extracts bitplane values to create tiles that each include pixels. Pixel assembly module 208 generates a pixel value for each pixel in a tile by combining a bit value from a location corresponding to the pixel from each bitplane. Recall that each bitplane represents a bit integer position, thus enabling pixel assembly module 208 to generate the proper pixel value for a pixel using each of the bitplanes. Pixel assembly module 208 generates a pixel value for each pixel in a tile by performing the reverse operations of bitplane decomposition module 108 illustrated in
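As an illustrative, self-contained sketch (not a definitive implementation), pixel assembly from bitplanes might be written in C as follows, assuming 8-bit pixels, 64 pixel×4 pixel tiles, and bitplanes stored one bit per byte for readability:

#include <stdint.h>

#define TILE_WIDTH   64
#define TILE_HEIGHT  4
#define BITS_PER_PIX 8

/* Rebuild each pixel by gathering bit b from location (r, c) of bitplane b
 * and placing it at bit position b of the assembled pixel value. */
void assemble_pixels(const uint8_t planes[BITS_PER_PIX][TILE_HEIGHT][TILE_WIDTH],
                     uint8_t tile[TILE_HEIGHT][TILE_WIDTH])
{
    for (int r = 0; r < TILE_HEIGHT; r++) {
        for (int c = 0; c < TILE_WIDTH; c++) {
            uint8_t px = 0;
            for (int b = 0; b < BITS_PER_PIX; b++)
                px |= (uint8_t)((planes[b][r][c] & 1u) << b);
            tile[r][c] = px;
        }
    }
}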
Inverse gray coding module 210 is representative of functionality that recovers original pixel values from the gray-coded pixel values of the one or more tiles received from pixel assembly module 208. In accordance with one or more embodiments, inverse gray coding module 210 restores a Hamming distance between two pixel values in a tile using, for example, the following C code:
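/* Illustrative inverse of the binary-reflected Gray code: the original binary
 * value is recovered by XOR-ing together successive right shifts of the code. */
unsigned int gray_decode(unsigned int gray)
{
    unsigned int value = 0;
    while (gray != 0) {
        value ^= gray;
        gray >>= 1;
    }
    return value;
}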
The above code is merely illustrative and the inverse gray coding module 210 can perform inverse gray coding using any suitable inverse gray coding approach. Although illustrated as a separate module that is implemented in a different system, functionality of inverse gray coding module 210 may be performed by gray coding module 106 illustrated in
Reverse DPCM module 212 is representative of functionality that generates media content frames 214 from tiles associated with media content. Reverse DPCM module 212 increases pixel value magnitudes in tiles to an original pixel value magnitude for each pixel. Reverse DPCM module 212 restores original pixel value magnitudes for pixels in a tile by performing the reverse operations of DPCM module 104 illustrated in
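Continuing the illustrative DPCM sketch above, and assuming the same tile dimensions and the same hard-coded starting value used during encoding, reverse DPCM might be expressed as:

#include <stdint.h>

#define TILE_WIDTH  64
#define TILE_HEIGHT 4
#define DPCM_START  128  /* must match the starting value assumed by the encoder */

/* Undo DPCM by accumulating the stored differences back into absolute pixel
 * values, starting from the same hard-coded value used during encoding. */
void dpcm_decode_tile(uint8_t tile[TILE_HEIGHT][TILE_WIDTH])
{
    uint8_t prev = DPCM_START;
    for (int r = 0; r < TILE_HEIGHT; r++) {
        for (int c = 0; c < TILE_WIDTH; c++) {
            uint8_t cur = (uint8_t)(tile[r][c] + prev);  /* wraps modulo 256 */
            tile[r][c] = cur;
            prev = cur;
        }
    }
}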
Each pixel in a media tile includes a pixel value having a length of n bits. The number of different colors in the media content depends on the media content's depth of color, which is represented by bits per pixel. For example, pixel 310 is illustrated as including 8 bits, which indicates that the media content represented by media tile 302 is of an 8-bit per pixel format. Thus, if the media content represented by media tile 302 was configured in a 10-bit per pixel format, pixel 310 would include a pixel value having a length of 10 bits.
Table 312 illustrates how pixel bit values are decomposed into n bitplanes for n bit positions. Using the illustrated example media tile 302, the media content represented by media tile 302 is of an 8-bit format, which results in decomposing media tile 302 into eight different bitplanes. As discussed herein, a bitplane is a set of bits, corresponding to a given bit position in a binary number representing a pixel value, for each pixel in a media tile. Individual bitplanes are created for each bit position in a pixel value. Accordingly, a bitplane can be illustrated as an array including rows of bits 306 and columns of bits 308 corresponding to a single bit position for a binary number representing a pixel value.
For example, a bit value in the 0th bit position (i.e., the least significant bit) for each pixel value in media tile 302 is extracted and decomposed into bitplane 314, illustrated generally at 304. Similarly, a bit value in the 1st bit position for each pixel in media tile 302 is extracted and decomposed into bitplane 316. This decomposition is performed for each bit position until the final bit position n−1 is extracted and decomposed into bitplane 318. In accordance with one or more embodiments, decomposing pixel values into individual bitplanes is performed by bitplane decomposition module 108, illustrated in
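Purely for illustration, decomposition of an 8-bit, 64 pixel×4 pixel tile into eight bitplanes might be sketched in C as follows; storing one bit per byte is an assumption made for readability, and a practical implementation would likely pack bits:

#include <stdint.h>

#define TILE_WIDTH   64
#define TILE_HEIGHT  4
#define BITS_PER_PIX 8

/* planes[b][r][c] receives bit b of the pixel at row r, column c, so that
 * plane 0 holds the least significant bits and plane BITS_PER_PIX-1 holds the
 * most significant bits of every pixel in the tile. */
void decompose_bitplanes(const uint8_t tile[TILE_HEIGHT][TILE_WIDTH],
                         uint8_t planes[BITS_PER_PIX][TILE_HEIGHT][TILE_WIDTH])
{
    for (int b = 0; b < BITS_PER_PIX; b++)
        for (int r = 0; r < TILE_HEIGHT; r++)
            for (int c = 0; c < TILE_WIDTH; c++)
                planes[b][r][c] = (uint8_t)((tile[r][c] >> b) & 1u);
}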
The size and number of the luminance and chrominance bitplanes for media content depend on both a media tile size and a data format of the media content. As discussed above, the number of bits per pixel determines a number of bitplanes that will be generated from a media tile. For example, assume that media content is in a 4:2:0 8-bit video pixel format and that frames of the media content are organized into 64 pixel×4 pixel tiles, such as media tile 304 illustrated in
Each Y bitplane includes a plurality of bit rows 410 and a plurality of bit columns 412. A number of bit rows 410 is equivalent to a number of pixel rows in a media tile, such as pixel rows 306 in media tile 304, as illustrated in
Using the bitplane decomposition techniques described herein, a plurality of luminance bitplanes 404, 406 . . . 408 illustrated at 402 are stored in a two-dimensional region spanning bit rows 410 by bit columns 414. Because a number of bitplanes generated from a media tile is dependent on a number of bits per pixel in associated media content, a resulting number of bit columns 414 in the two-dimensional region is equivalent to a number of media tile columns, such as media tile columns 308 illustrated in
Continuing to the plurality of chrominance bitplanes illustrated at 402, each U bitplane includes a plurality of bit rows 410 and a plurality of bit columns 428. A number of bit rows 410 is equivalent to a number of pixel rows in a media tile constructed from media content, such as pixel rows 306 in media tile 304, as illustrated in
Using the bitplane decomposition techniques described herein, a plurality of chrominance bitplanes 416, 418, 420, 422 . . . 424, and 426 are organized in a two-dimensional region spanning bit rows 410 by bit columns 414. As illustrated at 402, chrominance U and V bitplanes for a media tile are stored sequentially corresponding to a bit position for the bitplane. Although U bitplanes 416, 420, and 424 are illustrated as preceding corresponding V bitplanes 418, 422, and 426, respectively, V bitplanes 418, 422, and 426 may be stored as preceding U bitplanes 416, 420, and 424 in the two-dimensional region illustrated at 402 in accordance with one or more embodiments. The resulting number of bit columns 414 will be the same as the number of bit columns generated for the Y bitplane and is equivalent to a number of media tile columns multiplied by a number of bits per pixel of the media content. For example, assuming that media content is of an 8-bit format and organized into 64 pixel×4 pixel tiles, a resulting plurality of chrominance bitplanes would span 4 bit rows by 512 bit columns. If media content is of a 10-bit format and organized into 64 pixel×4 pixel tiles, a resulting plurality of chrominance bitplanes would span 4 bit rows by 640 bit columns.
Although the example bitplanes of
In the illustrated example 500 of
A linear bitstream can be created from a two-dimensional region of bitplanes 506 using any path; paths 510 and 512 are merely illustrative examples. For example, bit values may be extracted from a region of bitplanes 506 into a linear bitstream following path 514 illustrated in example 504. In the example 504, callouts "A" and "B" represent transitions between various boxes in the continuous progression of path 514. For example, path 514 begins at block 508 and proceeds to the right along the X-Axis until reaching callout A of the same row as block 508, then proceeds without interruption to the block below block 508 in the Y-Axis, represented by the callout A below block 508. Similarly, callout B represents a transition between blocks in the progression of path 514. The size of compressed media content will vary based on visual features included in uncompressed media content and the corresponding path used to generate a linear bitstream from a two-dimensional region of bitplanes that were decomposed from the media content.
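As one illustrative possibility, a serpentine traversal (one of the path types discussed in this disclosure) might be written in C as follows; the 4 bit-row by 512 bit-column luminance region from the earlier 8-bit example, the specific path, and the one-bit-per-byte storage layout are all assumptions made for this sketch:

#include <stdint.h>

#define REGION_ROWS 4    /* bit rows: one per tile pixel row */
#define REGION_COLS 512  /* bit columns: 64 tile columns x 8 bits per pixel */

/* Walk the region in serpentine (boustrophedon) order: left to right on even
 * rows, right to left on odd rows, appending each bit to the linear bitstream. */
void extract_serpentine(const uint8_t region[REGION_ROWS][REGION_COLS],
                        uint8_t bitstream[REGION_ROWS * REGION_COLS])
{
    int out = 0;
    for (int r = 0; r < REGION_ROWS; r++) {
        if ((r & 1) == 0) {
            for (int c = 0; c < REGION_COLS; c++)
                bitstream[out++] = region[r][c];
        } else {
            for (int c = REGION_COLS - 1; c >= 0; c--)
                bitstream[out++] = region[r][c];
        }
    }
}

Keeping consecutive bitstream bits spatially close in the bitplane region tends to lengthen the runs of identical bits that the run-length encoder can remove.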
In accordance with one or more embodiments, a run-length encoded bitstream is constructed from a linear bitstream by identifying runs of identical bits (i.e., repeat runs) that are longer than or equal to a minimum run length. The run-length encoded bitstream does not contain these repeat runs, thereby reducing memory bandwidth overhead when writing the run-length encoded bitstream to memory or reading the run-length encoded bitstream from memory.
An encoder that outputs the run-length encoded bitstream, such as run-length encoding module 112 illustrated in
A run-length encoded bitstream is given a single bit initial value "v", illustrated at portion 602 in the example run-length encoded bitstream at 600. The initial value v as discussed herein represents the inverse of a first bit of an input linear bitstream from which the run-length encoded bitstream is constructed. The initial value v is then followed by one or more literal runs and one or more repeat runs of bits. As discussed herein, a literal run of bits identifies bits that are copied from an input bitstream to an output bitstream during decoding. A length of the literal run of bits is identified by lengthm and the contents of the literal run are identified by bitsm. A repeat run of bits means that constant bits are inserted into an output bitstream and a length of a repeat run is identified by repeatm.
The bit value to be included for bits in a repeat run during decoding is defined as the inverted value of the last bit produced. For example, if a repeat run follows a last bit produced having a value of “1”, the repeat run would insert a designated length of “0” value bits. If the first run following the initial value of a run-length encoded bitstream is a repeat run, the initial value v is used as the last bit produced. The length of a repeat run, by definition, must be greater than or equal to MIN_REPEAT. Accordingly, the value repeatm, indicating the length of a repeat run, identifies the number of bits minus MIN_REPEAT in accordance with one or more embodiments.
In one or more embodiments, the length of a literal run can be zero. In this case, there are no bits included in the contents of the literal run identified by bitsm. A literal run having zero length can occur if the first run following the initial value v is a repeat run, where a zero length literal run is inserted to preserve output format during decoding. Additionally or alternatively, a literal run having zero length can occur if a first repeat run is adjacent to a second repeat run, where the values of the first and second repeat runs are inverse of one another. Inserting a zero-length literal run for adjacent inverse repeat runs similarly preserves the output format of the run-length encoded bitstream during decoding.
For example, in the example run-length encoded bitstream illustrated at 600, portion 604 identifies a first length of bits that are to be copied literally from the run-length encoded bitstream during decoding and portion 606 identifies the bit values spanning the first length that are to be copied literally during decoding. Portion 608 identifies a length of repeat bits that are to be inserted into a run-length bitstream during decoding. Similarly, portion 610 identifies a second length of bits that are to be copied literally from the run-length encoded bitstream during decoding and portion 612 identifies the bit values spanning the second length that are to be copied literally during decoding. Portion 614 identifies a length of repeat bits that are to be inserted into a run-length bitstream during decoding. The run-length encoded bitstream either may end with a literal run, illustrated as portions 616 and 618, or may end with a repeat run, illustrated as portion 614.
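To make the format described above concrete, the following C sketch decodes such a bitstream. The fixed 8-bit width of the length and repeat fields, the MIN_REPEAT value of 8, and the one-bit-per-byte representation of both streams are assumptions made solely for illustration; the actual field widths and minimum repeat length are implementation choices not specified here:

#include <stdint.h>
#include <stddef.h>

#define MIN_REPEAT 8   /* assumed minimum repeat-run length */
#define LEN_BITS   8   /* assumed fixed width of length and repeat fields */

/* Read a fixed-width unsigned field from the encoded bit array (MSB first). */
static unsigned read_field(const uint8_t *in, size_t *pos, int width)
{
    unsigned v = 0;
    for (int i = 0; i < width; i++)
        v = (v << 1) | (in[(*pos)++] & 1u);
    return v;
}

/* Decode a run-length encoded bit array of in_len bits into out_len output bits.
 * Layout per the description above: initial value v, then alternating literal
 * runs (length, literal bits) and repeat runs (repeat length minus MIN_REPEAT). */
void rle_decode(const uint8_t *in, size_t in_len, uint8_t *out, size_t out_len)
{
    size_t pos = 0, written = 0;
    uint8_t last = in[pos++] & 1u;                      /* initial value v */
    while (written < out_len && pos < in_len) {
        unsigned lit = read_field(in, &pos, LEN_BITS);  /* literal run length, may be zero */
        for (unsigned i = 0; i < lit && written < out_len; i++) {
            last = in[pos++] & 1u;                      /* copy literal bits verbatim */
            out[written++] = last;
        }
        if (written >= out_len || pos >= in_len)
            break;                                      /* bitstream ended with a literal run */
        unsigned rep = read_field(in, &pos, LEN_BITS) + MIN_REPEAT;
        last ^= 1u;                                     /* repeat bits invert the last bit produced */
        for (unsigned i = 0; i < rep && written < out_len; i++)
            out[written++] = last;
    }
}

An encoder performing the mirror operations would emit the same sequence of fields, inserting a zero-length literal run wherever two runs of the same kind would otherwise be adjacent.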
By replacing repeat runs of an input bitstream, a run-length encoded bitstream generated using the techniques described herein will be shorter than a length of the input bitstream. This decreased bitstream length reduces memory bandwidth usage, thereby increasing system performance when reading media data from, or writing media data to, memory.
At 702, one or more tiles are determined based on one or more media frames. In at least some embodiments, one or more media frames are organized into tiles spanning 64 pixels by 4 pixels. At 704, pixel values for pixels in the one or more tiles are determined based on differential pulse code modulation (DPCM). At 706, gray coded pixel values are determined for the tiles based on gray coding the DPCM processed tiles. At 708, a plurality of bitplanes are determined based on the gray coded pixel values of the one or more tiles. A number of bitplanes that will be produced from the one or more processed tiles is defined by a number of bits per pixel in the media frames. At 710, a linear bitstream is determined from the bitplanes. In at least some embodiments, this is performed by traversing a two-dimensional array of the bitplanes in a serpentine path. In other embodiments, this is performed by traversing the two-dimensional array of bitplanes in a diagonal path. At 712, a compressed media bitstream is determined based on run-length encoding of the linear bitstream. In at least some embodiments, run-length encoding of the extracted linear bitstream comprises identifying repeat runs of bits and removing the repeat runs of bits from the compressed media bitstream. At 714, the compressed media bitstream produced from the run-length encoding is output. In at least some embodiments, outputting the compressed media bitstream includes writing the compressed media bitstream to memory.
At 802, a compressed media bitstream is received. In at least some embodiments, receiving a compressed media bitstream includes reading the compressed media bitstream from memory. At 804, a linear bitstream is determined based on run-length decoding the compressed media bitstream. In at least some embodiments, applying run-length decoding is performed by copying literal runs of bits from the compressed media bitstream to a decompressed media bitstream during decoding and inserting repeat runs of bits into the decompressed media bitstream during decoding. At 806, bitplanes are determined based on the decompressed linear bitstream. In at least some embodiments, bitplanes are determined by traversing a two-dimensional array of the bitplanes in a path that is a reverse of a path used to create the compressed media bitstream. At 808, media tiles are determined based on the bitplanes. In at least some embodiments, these media tiles are determined by identifying binary integer values for a pixel value from corresponding pixel locations in the bitplanes and aggregating these identified binary integer values into a pixel value for each pixel represented by the bitplanes. At 810, a pixel value for each pixel in the media tiles is determined based on inverse gray coding the media tiles. In at least some embodiments, this inverse gray coding recovers original pixel values from the gray coded pixel values. At 812, one or more media frames are determined based on reverse DPCM of the inverse gray coded media tiles. In at least some embodiments, the one or more media frames are determined by recreating original pixel values using DPCM data associated with the compressed media bitstream. Recreating original pixel values from DPCM data restores magnitudes of pixel values to their original values that existed prior to compression. At 814, decompressed media frames are output. In at least some embodiments, outputting decompressed media frames includes organizing one or more media frames from one or more media tiles.
The above-described techniques and embodiments can be implemented to reduce memory bandwidth usage associated with reading media data from memory and writing media data to memory.
Devices 902 can include one or more of desktop computer 904, laptop computer 906, server 908, television 910, a set top box communicatively coupled with television 910, mobile computing device 912, as well as a variety of other devices.
Each device 902 includes processor(s) 914 and computer-readable storage media 916. Computer-readable storage media 916 may include any type and/or combination of suitable storage media, such as memory 918 and storage drive(s) 920. Memory 918 may include memory such as dynamic random-access memory (DRAM), static random access memory (SRAM), read-only memory (ROM), or Flash memory useful to store data of applications and/or an operating system of the device 902. Storage drive(s) 920 may include hard disk drives and/or solid-state drives (not shown) useful to store code or instructions associated with an operating system and/or applications of the device. Processor(s) 914 can be any suitable type of processor, either single-core or multi-core, for executing instructions or commands of the operating system or applications stored on storage drive(s) 920.
Devices 902 can also each include I/O ports 922, media engine 924, system microcontroller 926, network interface(s) 928, and a run-length media compression and decompression system 930 that includes or otherwise makes use of one or both of a run-length media compression system 932 or a run-length media decompression system 934 that operate as described herein. In accordance with one or more embodiments, run-length media compression system 932 may be configured as run-length media compression system 100, illustrated in
I/O ports 922 allow device 902 to interact with other devices and/or users. I/O ports 922 may include any combination of internal or external ports, such as audio inputs and outputs, USB ports, Serial ATA (SATA) ports, PCI-express based ports or card-slots, and/or other legacy ports. I/O ports 922 may also include or be associated with a packet-based interface, such as a USB host controller, digital audio processor, or SATA host controller. Various peripherals may be operatively coupled with I/O ports 922, such as human-input devices (HIDs), external computer-readable storage media, or other peripherals.
Media engine 924 processes and renders media for device 902, including user interface elements of an operating system, applications, command interface, or system administration interface. System microcontroller 926 manages low-level system functions of device 902. Low-level system functions may include power status and control, system clocking, basic input/output system (BIOS) functions, switch input (e.g., keyboard and button), sensor input, system status/health, and other various system "housekeeping" functions. Network interface(s) 928 provides connectivity to one or more networks.
The SoC 1000 can be integrated with electronic circuitry, microprocessor, memory, input-output (I/O) logic control, communication interfaces and components, other hardware, firmware, and/or software needed to provide communicative coupling for a device, such as any of the above-listed devices. The SoC 1000 can also include an integrated data bus (not shown) that couples the various components of the SoC for data communication between the components. A wireless communication device that includes the SoC 1000 can also be implemented with many combinations of differing components. In some cases, these differing components may be configured to implement concepts described herein over a wireless connection or interface.
In this example, SoC 1000 includes various components such as an input-output (I/O) logic control 1002 (e.g., to include electronic circuitry) and a microprocessor 1004 (e.g., any of a microcontroller or digital signal processor). SoC 1000 also includes a memory 1006, which can be any type of RAM, SRAM, low-latency nonvolatile memory (e.g., flash memory), ROM, and/or other suitable electronic data storage. SoC 1000 can also include various firmware and/or software, such as an operating system 1008, which can be computer-executable instructions maintained by memory 1006 and executed by microprocessor 1004. SoC 1000 can also include other various communication interfaces and components, communication components, other hardware, firmware, and/or software.
SoC 1000 includes a run-length media compression and decompression system 930 that includes or otherwise makes use of one or both of a run-length media compression system 932 or a run-length media decompression system 934 that operate as described herein.
A tile-based run-length compression and decompression system is described that reduces memory bandwidth when encoding media data to memory and decoding media data from memory. The described run-length compression system and techniques implement a series of processing steps followed by run-length encoding to compress one or more media frames into a plurality of compressed bits in the form of a run-length encoded bitstream. The described run-length decompression system and techniques implement run-length decoding followed by a series of processing steps to decompress a compressed run-length encoded bitstream into one or more media frames. In various implementations, processing steps applied by run-length compression and decompression systems include one or more of Differential Pulse Code Modulation steps, gray coding steps, bitplane decomposition steps, pixel assembly steps, bitstream extraction steps, or bitplane organization steps. The processing steps, run-length encoding, and run-length decoding techniques discussed herein increase system compression ratios and reduce memory bandwidth usage for a system implementing the techniques when reading media data from, or writing media data to, system memory.
Although the subject matter has been described in language specific to structural features and/or methodological operations, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or operations described above, including orders in which they are performed.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/252,722 filed Nov. 9, 2015, the disclosure of which is incorporated by reference herein in its entirety.