This disclosure generally relates to video coding, and in particular to intra prediction video coding.
Digital video coding is used in a wide range of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recording devices, video gaming devices, cellular or satellite radio telephones, and the like. Digital video devices implement video compression techniques, such as MPEG-2, MPEG-4, or H.264/MPEG-4 Advanced Video Coding (AVC), to transmit and receive digital video more efficiently.
Video compression techniques generally perform spatial prediction, motion estimation, and motion compensation to reduce or remove redundancy inherent in video data. Intra prediction video coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame.
The systems, methods, and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.
One aspect of this disclosure provides an apparatus for coding video data including video blocks, the apparatus comprising a processor and a memory. The processor is configured to: divide a video block into a plurality of video sub-blocks having a first size, the video block comprising video units and having an intra prediction mode, each video sub-block of the plurality of video sub-blocks comprising at least a non-zero integer number of video units of the video block; determine prediction variables for a first video sub-block of the plurality of video sub-blocks based on the intra prediction mode of the video block; and determine a predicted video unit for each video unit of the first video sub-block based on the intra prediction mode of the video block and the prediction variables for the first video sub-block. The memory is configured to store the predicted video units.
Another aspect of this disclosure provides a method for coding video data including video blocks, the method comprising: dividing a video block into a plurality of video sub-blocks having a first size, the video block comprising video units and having an intra prediction mode, each video sub-block of the plurality of video sub-blocks comprising at least a non-zero integer number of video units of the video block; determining prediction variables for a first video sub-block of the plurality of video sub-blocks based on the intra prediction mode of the video block; and determining a predicted video unit for each video unit of the first video sub-block based on the intra prediction mode of the video block and the prediction variables for the first video sub-block.
One aspect of this disclosure provides an apparatus for coding video data including video blocks, the apparatus comprising: means for dividing a video block into a plurality of video sub-blocks having a first size, the video block comprising video units and having an intra prediction mode, each video sub-block of the plurality of video sub-blocks comprising at least a non-zero integer number of video units of the video block; means for determining prediction variables for a first video sub-block of the plurality of video sub-blocks based on the intra prediction mode of the video block; and means for determining a predicted video unit for each video unit of the first video sub-block based on the intra prediction mode of the video block and the prediction variables for the first video sub-block.
Another aspect of this disclosure provides a non-transitory computer-readable storage medium comprising instructions that upon execution in a processor cause the processor to: divide a video block into a plurality of video sub-blocks, the video block comprising video units and having an intra prediction mode, each video sub-block of the plurality of video sub-blocks comprising at least a non-zero integer number of video units of the video block; determine prediction variables for a first video sub-block of the plurality of video sub-blocks based on the intra prediction mode of the video block; and determine a predicted video unit for each video unit of the first video sub-block based on the intra prediction mode of the video block and the prediction variables for the first video sub-block.
In general, this disclosure is directed to architectures and techniques for intra prediction video coding. The term “coding,” as used herein, may refer to encoding, decoding or both. Although the techniques described in this disclosure may be applicable to a wide variety of practical applications, the disclosure will refer to digital video encoding and decoding for purposes of example and illustration.
In the illustrated example, source device 12 generates video for transmission to receive device 14. In some cases, however, devices 12, 14 may operate in a substantially symmetrical manner. For example, each of devices 12, 14 may include video encoding and decoding components. Hence, system 10 may support one-way or two-way video transmission between video devices 12, 14, e.g., for video streaming, video broadcasting, or video telephony. For other data compression and coding applications, devices 12, 14 could be configured to send and receive, or exchange, other types of data, such as image, speech or audio data, or combinations of two or more of video, image, speech and audio data. Accordingly, discussion of video encoding and decoding applications is provided for purposes of illustration and should not be considered limiting of the various aspects of the disclosure as broadly described herein.
Video source 18 may include a video capture device, such as one or more video cameras, a video archive containing previously captured video, or a live video feed from a video content provider. As a further alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video and computer-generated video. In some cases, if video source 18 is a camera, source device 12 and receive device 14 may form so-called camera phones or video phones. Hence, in some aspects, source device 12, receive device 14 or both may form a wireless communication device handset, such as a mobile telephone handset. In each case, the captured, pre-captured or computer-generated video may be encoded by video encoder 20 for transmission from video source device 12 to video decoder 26 of video receive device 14 via transmitter 22, channel 16, and receiver 24. Display device 28 may include any of a variety of display devices such as a liquid crystal display (LCD), plasma display, or organic light emitting diode (OLED) display.
Video encoder 20 and video decoder 26 may be configured to support scalable video coding (SVC) for spatial, temporal, and/or signal-to-noise ratio (SNR) scalability. In some aspects, video encoder 20 and video decoder 26 may be configured to support fine granularity SNR scalability (FGS) coding for SVC. Encoder 20 and decoder 26 may support various degrees of scalability by supporting encoding, transmitting, and decoding of a base layer and one or more scalable enhancement layers. For scalable video coding, a base layer carries video data with a minimum level of quality. One or more enhancement layers carry additional bitstream data to support higher spatial, temporal, and/or SNR levels.
Video encoder 20 and video decoder 26 may operate in part according to techniques described herein and in part according to a video compression standard, such as MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Advanced Video Coding (AVC), or High Efficiency Video Coding (HEVC). For example, the techniques described herein may be used to augment or replace the corresponding techniques of a video compression standard. Although not shown in the figure, video encoder 20 and video decoder 26 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams.
The H.264/MPEG-4 (AVC) standard was formulated by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a collective partnership known as the Joint Video Team (JVT). The H.264 standard is described in ITU-T Recommendation H.264, Advanced video coding for generic audiovisual services, by the ITU-T Study Group, and dated March 2005, which may be referred to herein as the H.264 standard or H.264 specification, or the H.264/AVC standard or specification.
In some aspects, for video broadcasting, the techniques described in this disclosure may be applied to Enhanced H.264 video coding for delivering real-time video services in terrestrial mobile multimedia multicast (TM3) systems using the Forward Link Only (FLO) Air Interface Specification, “Forward Link Only Air Interface Specification for Terrestrial Mobile Multimedia Multicast,” to be published as Technical Standard TIA-1099 (the “FLO Specification”), e.g., via a wireless video broadcast server or wireless communication device handset. The FLO Specification includes examples defining bitstream syntax and semantics and decoding processes suitable for the FLO Air Interface. Alternatively, video may be broadcast according to other standards such as DVB-H (digital video broadcast-handheld), ISDB-T (integrated services digital broadcast-terrestrial), or DMB (digital media broadcast). Hence, source device 12 may be a mobile wireless terminal, a video streaming server, or a video broadcast server. However, techniques described in this disclosure are not limited to any particular type of broadcast, multicast, or point-to-point system. In the case of broadcast, source device 12 may broadcast several channels of video data to multiple receive devices, each of which may be similar to receive device 14.
Video encoder 20 and video decoder 26 each may be implemented as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. Hence, each of video encoder 20 and video decoder 26 may be implemented at least partially as an integrated circuit (IC) chip or device, and included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective mobile device, subscriber device, broadcast device, server, or the like. In addition, source device 12 and receive device 14 each may include appropriate modulation, demodulation, frequency conversion, filtering, and amplifier components for transmission and reception of encoded video, as applicable, including radio frequency (RF) wireless components and antennas sufficient to support wireless communication. For ease of illustration, however, such components are not shown.
A video sequence includes a series of video frames. Video encoder 20 operates on blocks composed of video units, such as pixels, within individual video frames in order to encode the video data. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. Each video frame includes a series of slices. Each slice may include a series of macroblocks (MBs) or coding units (CUs), which may be arranged into video blocks or sub-blocks. As an example, the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16×16, 8×8, 4×4 for luma components, and 8×8 for chroma components, as well as inter prediction in various block sizes, such as 16×16, 16×8, 8×16, 8×8, 8×4, 4×8 and 4×4 for luma components and corresponding scaled sizes for chroma components.
Smaller video blocks can provide better resolution, and may be used for locations of a video frame that include higher levels of detail. In general, MBs, CUs, and the various sub-blocks may be considered to be video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit. After prediction, a transform may be performed on the 8×8 residual block or 4×4 residual block, and an additional transform may be applied to the DC coefficients of the 4×4 blocks for chroma components, or for the luma component if the intra 16×16 prediction mode is used.
Video encoder 20 of system 10 may perform intra prediction coding of video blocks in accordance with the techniques of this disclosure. As shown in the figure, video encoder 20 includes prediction unit 32, memory 36, transform unit 38, quantization unit 40, inverse quantization unit 42, inverse transform unit 44, entropy encoding unit 46, and adders 48 and 51.
During the encoding process, video encoder 20 receives a video block including video units to be coded, and prediction unit 32 performs predictive coding techniques. For inter coding, prediction unit 32 may compare the video block to be encoded to various blocks in one or more video reference frames or slices in order to define a predictive block. For intra coding, prediction unit 32 may determine whether the video block is a block having a baseline block size or a baseline total number of video units. If the video block is larger than the baseline block size or has more video units than the baseline total number of video units, prediction unit 32 may divide the video block into sub-blocks having the baseline block size or the baseline total number of video units.
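For illustration only, the following minimal sketch shows one way the division step described above might be expressed, assuming a 4×4 baseline size and a NumPy array representation of video units; the name divide_into_sub_blocks is hypothetical and not part of the disclosed apparatus:

```python
import numpy as np

BASELINE = 4  # assumed baseline sub-block dimension (4x4)

def divide_into_sub_blocks(block: np.ndarray, size: int = BASELINE):
    """Yield (row, col, sub_block) tuples covering the whole block."""
    rows, cols = block.shape
    for r in range(0, rows, size):
        for c in range(0, cols, size):
            yield r, c, block[r:r + size, c:c + size]

block_16x16 = np.arange(256, dtype=np.uint8).reshape(16, 16)
assert len(list(divide_into_sub_blocks(block_16x16))) == 16  # sixteen 4x4 sub-blocks
```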
Intra prediction unit 34 of prediction unit 32 may generate a prediction block or prediction sub-block for each block or sub-block having the baseline block size or the baseline total number of video units. In some aspects, intra prediction unit 34 may only generate prediction blocks or prediction sub-blocks for blocks or sub-blocks having the baseline block size or the baseline total number of video units. Intra prediction unit 34 may generate a prediction block or prediction sub-block based on neighboring video units of at least one neighboring video block of the video block to be encoded. One or more intra prediction modes (e.g., a directional mode, a mean mode, or a planar mode) may determine how an intra prediction block or sub-block may be defined. In addition, if prediction unit 32 generates prediction sub-blocks for multiple sub-blocks of a video block, intra prediction unit 34 may construct a prediction block having a size equal to the size of the video block using one or more of the generated prediction sub-blocks. Prediction unit 32 may output the prediction block, and adder 48 subtracts the prediction block from the video block being coded in order to generate a residual block. In some aspects, prediction unit 32 includes more than one intra prediction unit 34 to enable substantially parallel processing of multiple sub-blocks and thus faster processing of video blocks. For instance, each intra prediction unit 34 may separately process one sub-block of one video block so that multiple sub-blocks of the one video block can be encoded in parallel. In such cases, predicted video units or prediction variables for multiple sub-blocks can be determined in parallel.
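The parallel sub-block processing noted above might be sketched as follows. This assumes, purely for illustration, that each sub-block's neighboring pixels are already reconstructed so the sub-blocks can be predicted independently; predict_sub_block is a hypothetical stand-in for one intra prediction unit 34:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def predict_sub_block(task):
    r, c, sub = task
    # Placeholder: a real predictor would apply the block's intra mode
    # to neighboring pixels rather than averaging the sub-block itself.
    return r, c, float(sub.mean())

block = np.zeros((16, 16), dtype=np.uint8)
tasks = [(r, c, block[r:r + 4, c:c + 4])
         for r in range(0, 16, 4) for c in range(0, 16, 4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(predict_sub_block, tasks))  # sub-blocks handled in parallel
```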
After prediction unit 32 outputs the prediction block and adder 48 subtracts the prediction block from the video block being coded in order to generate a residual block, transform unit 38 applies a transform to the residual block. The transform may comprise a discrete cosine transform (DCT) or a conceptually similar transform, such as that defined by the H.264 or HEVC standard. Wavelet transforms, integer transforms, sub-band transforms or other types of transforms may be used. Transform unit 38 applies the transform to the residual block, producing a block of residual transform coefficients. The transform may convert the residual information from a pixel domain to a frequency domain.
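As a conceptual illustration of the transform step, the sketch below computes a separable floating-point 2-D DCT-II on a residual block; the actual H.264 and HEVC transforms are integer approximations of this, so the sketch is not the standardized transform:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)   # frequency index
    i = np.arange(n).reshape(1, -1)   # sample index
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2)             # DC row scaling for orthonormality
    return m

def forward_transform(residual: np.ndarray) -> np.ndarray:
    d = dct_matrix(residual.shape[0])
    return d @ residual @ d.T         # rows then columns (separable 2-D DCT)

residual = np.random.randint(-16, 16, size=(4, 4)).astype(float)
coeffs = forward_transform(residual)
d = dct_matrix(4)
assert np.allclose(d.T @ coeffs @ d, residual)  # the transform is invertible
```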
Quantization unit 40 quantizes the residual transform coefficients to further reduce bit rate. Quantization unit 40, for example, may limit the number of bits used to code each of the coefficients. After quantization, entropy encoding unit 46 scans the quantized coefficient block from a two-dimensional representation to one or more serialized one-dimensional vectors. The scan order may be pre-programmed to occur in a defined order (such as zig-zag scanning or another pre-defined order), or adaptively defined based on previous coding statistics, for instance.
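A sketch of uniform quantization followed by a zig-zag scan appears below, under the simplifying assumption of a single scalar step size (real codecs scale per frequency and per quantization parameter):

```python
import numpy as np

def quantize(coeffs: np.ndarray, step: float) -> np.ndarray:
    return np.round(coeffs / step).astype(int)  # uniform scalar quantizer

def zigzag_scan(block: np.ndarray) -> list:
    """Serialize a square coefficient block along anti-diagonals,
    alternating direction so low frequencies come first."""
    n = block.shape[0]
    coords = [(r, c) for r in range(n) for c in range(n)]
    coords.sort(key=lambda rc: (rc[0] + rc[1],
                                rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))
    return [int(block[r, c]) for r, c in coords]

coeffs = np.arange(16, dtype=float).reshape(4, 4)
print(zigzag_scan(quantize(coeffs, step=1.0)))
# [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
```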
Following this scanning process, entropy encoding unit 46 may encode the quantized transform coefficients (along with any syntax elements) according to an entropy coding methodology, such as context-adaptive variable length coding (CAVLC) or context-adaptive binary arithmetic coding (CABAC), to further compress the data. Syntax elements included in the entropy coded bitstream may include prediction syntax from prediction unit 32, such as motion vectors for inter coding or prediction modes for intra coding. Syntax elements included in the entropy coded bitstream may also include filter information or other data that may be used in the decoding process.
Following the entropy coding by entropy encoding unit 46, the encoded video may be transmitted to another device or archived for later transmission or retrieval. Again, the encoded video may comprise the entropy coded motion vectors and various other syntax elements that may be used by the decoder to properly configure the decoding process. Inverse quantization unit 42 and inverse transform unit 44 apply inverse quantization and inverse transform, respectively, to reconstruct the residual block in the pixel domain. Adder 51 adds the reconstructed residual block to the prediction block produced by prediction unit 32 to produce a reconstructed video block for storage in memory 36. Prior to such storage, filtering may also be applied on the video block to improve video quality. Such filtering may reduce blockiness or other artifacts, and may be performed in loop (in which case the data used for prediction may be filtered data) or post loop (in which case the data used for prediction may be unfiltered data).
Although intra prediction unit 34A is illustrated as including H.264 intra predictor 310 and VP 7/8 intra predictor 320, intra prediction unit 34A may include one or more other predictors configured to perform other intra prediction standards or approaches. In some aspects, intra prediction unit 34A includes shared or common hardware and/or software for performing one or more functions of some or all predictors. For example, shared hardware or software may perform functions of DC/planar calculator 312 of H.264 intra predictor 310 and functions of DC calculator 322 of VP 7/8 intra predictor 320. As another example, shared hardware or software may perform functions of neighbor fetch module 314 of H.264 intra predictor 310 and functions of neighbor fetch module 324 of VP 7/8 intra predictor 320. Similarly, shared hardware or software may perform functions of intermediate value calculator module 316 and intermediate value calculator module 326, and shared hardware and/or software may perform functions of pixel predictor module 318 and pixel predictor module 328.
Intra prediction unit 34A may include task FIFO (first-in first-out) module 302 configured to control processing of video units by intra prediction unit 34A. Task FIFO module 302 may receive video blocks or sub-blocks having a baseline block size or a baseline total number of video units and control the intra prediction processing of each block or sub-block by the modules of H.264 intra predictor 310 and VP 7/8 intra predictor 320.
Intra prediction unit 34A may include neighbor pixel manager module 304. Neighbor pixel manager module 304 may determine, store, and/or provide neighboring video unit information such as pixel values of neighboring video units of neighboring video blocks previously processed by task FIFO module 302. For instance, neighbor pixel manager module 304 may determine a video block currently being processed by task FIFO module 302 and obtain neighboring video unit information for the video block from memory 36.
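A minimal sketch of the neighbor fetch just described follows, assuming the reconstructed frame is available as an array; the name fetch_neighbors is hypothetical:

```python
import numpy as np

def fetch_neighbors(frame: np.ndarray, r: int, c: int, size: int):
    """Return the reconstructed row above and column left of a block,
    or None where the block touches a frame boundary."""
    top = frame[r - 1, c:c + size].copy() if r > 0 else None
    left = frame[r:r + size, c - 1].copy() if c > 0 else None
    return top, left
```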
H.264 intra predictor 310 may include DC/planar calculator module 312, neighbor fetch module 314, intermediate value calculator module 316, and pixel predictor module 318. DC/planar calculator module 312 and neighbor fetch module 314 may receive inputs from task FIFO module 302 and neighbor pixel manager module 304, including a video block or sub-block to be predicted and neighboring video unit information for the video block. Depending on the intra prediction mode for the video block to be predicted, DC/planar calculator module 312 may calculate parameters based on neighboring video units. Intermediate value calculator module 316 may receive from DC/planar calculator module 312 and/or neighbor fetch module 314 prediction variables, including neighboring video unit information or parameters calculated based on neighboring video units. Intermediate value calculator module 316 may use the prediction variables to determine one or more predicted video units based on the intra prediction mode for the video block, for example, by filtering and generating a minimum number of unique video units. Pixel predictor module 318 may receive from DC/planar calculator module 312 and intermediate value calculator 316 the prediction variables and the one or more predicted video units and thereby determine and output a predicted video unit for each video unit of the video block or sub-block to be predicted. Pixel predictor module 318 may further include a memory for storing the predicted video units.
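As one concrete example of the parameters such a DC/planar calculator may derive, DC prediction for a 4×4 luma block in H.264 averages whichever neighbors are available; this sketch follows that rule but is not the disclosed module itself:

```python
import numpy as np

def dc_predict_4x4(top, left, bit_depth=8):
    """H.264-style DC mode for a 4x4 block: average the available
    neighbor pixels, or use the mid-range value if none exist."""
    if top is not None and left is not None:
        dc = (int(np.sum(top)) + int(np.sum(left)) + 4) >> 3
    elif top is not None:
        dc = (int(np.sum(top)) + 2) >> 2
    elif left is not None:
        dc = (int(np.sum(left)) + 2) >> 2
    else:
        dc = 1 << (bit_depth - 1)  # 128 for 8-bit video
    return np.full((4, 4), dc, dtype=np.uint8)
```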
VP 7/8 intra predictor 320 may include DC calculator module 322, neighbor fetch module 324, intermediate value calculator module 326, and pixel predictor module 328. DC calculator module 322 and neighbor fetch module 324 may receive inputs from task FIFO module 302 and neighbor pixel manager module 304, including a video block or sub-block to be predicted and neighboring video unit information for the video block. Depending on the intra prediction mode for the video block or sub-block to be predicted, DC calculator module 322 may calculate parameters based on neighboring video units. Intermediate value calculator module 326 may receive from DC calculator module 322 and/or neighbor fetch module 324 prediction variables, including neighboring video unit information or parameters calculated based on neighboring video units. Intermediate value calculator module 326 may use the prediction variables to determine one or more predicted video units based on the intra prediction mode for the video block, for example, by filtering and generating a minimum number of unique video units. Pixel predictor module 328 may receive from DC calculator module 322 and intermediate value calculator 326 the prediction variables and the one or more predicted video units and thereby determine and output a predicted video unit for each video unit of the video block or sub-block to be predicted. Pixel predictor module 328 may further include a memory for storing the predicted video units.
HEVC intra predictor 330 may include DC/planar calculator module 332, neighbor fetch module 334, intermediate value calculator module 336, and pixel predictor module 338. DC/planar calculator module 332 and neighbor fetch module 334 may receive inputs from task FIFO module 302 and neighbor pixel manager module 304, including a video block or sub-block to be predicted and neighboring video unit information for the video block. Depending on the intra prediction mode for the video block or sub-block to be predicted, DC/planar calculator module 332 may calculate parameters based on neighboring video units. Intermediate value calculator module 336 may receive from DC/planar calculator module 332 and/or neighbor fetch module 334 prediction variables, including neighboring video unit information or parameters calculated based on neighboring video units. Intermediate value calculator module 336 may use the prediction variables to determine one or more predicted video units based on the intra prediction mode for the video block, for example, by filtering and generating a minimum number of unique video units. Pixel predictor module 338 may receive from DC/planar calculator module 332 and intermediate value calculator module 336 the prediction variables and the one or more predicted video units and thereby determine and output a predicted video unit for each video unit of the video block or sub-block to be predicted. Pixel predictor module 338 may further include a memory for storing the predicted video units.
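For illustration of the planar case, the sketch below implements the HEVC planar formula, a bilinear blend of the top row, left column, and the two corner neighbors; the array conventions are assumptions of this sketch rather than part of the disclosed predictor:

```python
import numpy as np

def planar_predict(top: np.ndarray, left: np.ndarray,
                   top_right: int, bottom_left: int) -> np.ndarray:
    """HEVC-style planar prediction for an n x n block (n a power of two)."""
    n = len(top)
    shift = n.bit_length()  # equals log2(n) + 1 for power-of-two n
    pred = np.empty((n, n), dtype=int)
    for y in range(n):
        for x in range(n):
            pred[y, x] = ((n - 1 - x) * left[y] + (x + 1) * top_right +
                          (n - 1 - y) * top[x] + (y + 1) * bottom_left +
                          n) >> shift
    return pred
```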
At node 705, method 700 determines whether a video block is an 8×8, 16×16, or 32×32 block. In some aspects, the video block may be a luminance block or a chrominance block, for instance.
If the video block is an 8×8, 16×16, or 32×32 block, at node 710, the 8×8, 16×16, or 32×32 block is divided into a plurality of 4×4 sub-blocks. At node 715, prediction variables for one 4×4 sub-block of the plurality of 4×4 sub-blocks are determined. The prediction variables may be determined based on neighboring video unit information and/or parameters calculated based on neighboring video units of at least one neighboring block of the 8×8, 16×16, or 32×32 block. At node 720, a predicted video unit is determined for each video unit of the one 4×4 sub-block. Intra prediction unit 34 of prediction unit 32 may determine the predicted video units.
At node 725, method 700 decides whether to repeat nodes 715 and 720 and determine predicted video units for additional 4×4 sub-blocks of the plurality of 4×4 sub-blocks. In some aspects, method 700 decides to determine predicted video units once for each 4×4 sub-block of the 8×8, 16×16, or 32×32 block. In other aspects, method 700 decides to determine predicted video units once for some 4×4 sub-blocks but not others (e.g., because some sub-blocks may be known to be identical to a previously determined predicted video sub-block). Prediction unit 32, for example, may determine whether to repeat nodes 715 and 720 for additional 4×4 sub-blocks.
If method 700 decides to determine predicted video units for additional 4×4 sub-blocks, method 700 moves to node 715 and determines prediction variables for another 4×4 sub-block of the plurality of sub-blocks. If method 700 decides not to determine predicted video units for additional 4×4 sub-blocks, method 700 terminates.
If the video block is not an 8×8, 16×16, or 32×32 block, the video block accordingly is a 4×4 block in this example. At node 730, prediction variables for the 4×4 block are determined. The prediction variables may be determined based on neighboring video unit information and/or parameters calculated based on neighboring video units of at least one neighboring block of the 4×4 block. At node 735, a predicted video unit is determined for each video unit of the 4×4 block. After node 735, method 700 terminates.
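The control flow of method 700 might be sketched as below; determine_prediction_variables and predict_units are hypothetical placeholders for nodes 715/730 and 720/735, respectively, not the disclosed implementations:

```python
import numpy as np

BASELINE = 4  # the 4x4 baseline of method 700

def determine_prediction_variables(blk):           # nodes 715 / 730
    return {"dc": int(blk.mean())}                 # placeholder variables

def predict_units(blk, variables):                 # nodes 720 / 735
    return np.full(blk.shape, variables["dc"], dtype=np.uint8)

def method_700(block: np.ndarray):
    n = block.shape[0]
    if n in (8, 16, 32):                           # node 705: larger block
        for r in range(0, n, BASELINE):            # node 710: divide into 4x4s
            for c in range(0, n, BASELINE):        # nodes 715-725: per sub-block
                sub = block[r:r + BASELINE, c:c + BASELINE]
                predict_units(sub, determine_prediction_variables(sub))
    else:                                          # a 4x4 block
        predict_units(block, determine_prediction_variables(block))
```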
Although aspects of method 700 are discussed using 4×4, 8×8, 16×16, and 32×32 video blocks as examples, method 700 is not limited to those block sizes. Instead, the 4×4 block (including 16 video units) of method 700 may represent a video block having a baseline block size or a baseline total number of video units. For example, the baseline block size can be 2×4 (including 8 video units), 8×5 (including 40 video units), 20×20 (including 400 video units), and the like, in some implementations. The 8×8 block (including 64 video units), 16×16 block (including 256 video units), and 32×32 block (including 1024 video units) of method 700 may represent video blocks having a block size or a total number of video units greater than the baseline block size or the baseline total number of video units, respectively. For example, the block size greater than the baseline block size can be 8×4 (including 32 video units), 15×6 (including 90 video units), 64×64 (including 4096 video units), and the like, in some implementations. The video blocks having the block size or total number of video units greater than the baseline block size or baseline total number of video units may be divided into sub-blocks having a baseline block size or a baseline total number of video units for processing according to the approaches in this disclosure. In some aspects, video units may be discarded from video blocks, or additional video units may be added as padding video units, so that the video block may be processed according to method 700, as in the sketch below.
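The padding option just mentioned could look like the following sketch, which repeats edge pixels until both block dimensions are multiples of an assumed 4×4 baseline:

```python
import numpy as np

def pad_to_baseline(block: np.ndarray, size: int = 4) -> np.ndarray:
    """Repeat edge pixels so both dimensions become multiples of size."""
    rows, cols = block.shape
    return np.pad(block, ((0, (-rows) % size), (0, (-cols) % size)),
                  mode="edge")

assert pad_to_baseline(np.zeros((5, 6))).shape == (8, 8)  # 5x6 padded to 8x8
```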
At node 805, a video block is divided into a plurality of video sub-blocks. The video block may include video units and have an intra prediction mode. Each video sub-block of the plurality of video sub-blocks may include at least a non-zero integer number of video units of the video block. In some aspects, the video block is divided into a plurality of video sub-blocks having a first size. The first size may correspond to a baseline block size or a baseline total number of video units for hardware and/or software used to determine prediction variables and/or predicted video units. Prediction unit 32, for example, may divide the video block.
At node 810, prediction variables for a video sub-block are determined based on the intra prediction mode of the video block. In some aspects, the prediction variables include values calculated from neighboring video unit pixel values or values received from a neighbor pixel manager that stores neighboring video block pixel information. Further, in some aspects, a predictor may be used to determine the prediction variables, and the predictor may be configured to determine the prediction variables only for video sub-blocks of the baseline block size or video blocks of the baseline block size. Intra prediction unit 34 of prediction unit 32, for example, may determine the prediction variables.
At node 815, a predicted video unit for each video unit of the video sub-block is determined based on the intra prediction mode of the video block and the prediction variables for the video sub-block. In some aspects, a minimum number of unique predicted video units may first be generated or fetched for the video sub-block. The unique predicted video units may then be used to determine predicted video units for the other video units of the video sub-block. Further, in some aspects, a predictor may be used to determine the predicted video units, and the predictor may be configured to determine the predicted video units only for video sub-blocks of the baseline block size or video blocks of the baseline block size. Intra prediction unit 34, for example, may determine the predicted video units.
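As one hedged example of generating a minimum number of unique predicted video units and reusing them, vertical intra prediction needs only the n pixels above the block; every row of the prediction simply repeats them:

```python
import numpy as np

def vertical_predict(top: np.ndarray) -> np.ndarray:
    """Vertical mode: the top neighbors are the only unique predicted
    units; the remaining rows are copies of them."""
    n = len(top)
    return np.tile(top, (n, 1))

pred = vertical_predict(np.array([10, 20, 30, 40], dtype=np.uint8))
assert (pred == [[10, 20, 30, 40]] * 4).all()
```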
Moreover, in one aspect, means for determining whether a video block has a first size comprises the determining block size module 905. In another aspect, means for dividing the video block into a plurality of video sub-blocks comprises the dividing block module 910. In yet another aspect, means for determining prediction variables for a first video sub-block of the plurality of video sub-blocks comprises the determining prediction variables module 915. In a further aspect, means for determining a predicted video unit for each video unit of the first video sub-block comprises the determining predicted video units module 920.
Information and signals disclosed herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code including instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise memory or data storage media, such as random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer, such as propagated signals or waves.
The program code may be executed by a processor, which may include one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured for encoding and decoding, or incorporated in a combined video encoder-decoder (CODEC).
Various embodiments of the disclosure have been described. These and other embodiments are within the scope of the following claims.
This application claims benefit under 35 U.S.C. §119(e) to U.S. Provisional Patent Application No. 61/646,725, entitled “SYSTEMS AND METHODS FOR INTRA PREDICTION VIDEO ENCODING” and filed on May 14, 2012, the disclosure of which is hereby incorporated by reference in its entirety. In addition, this application claims benefit under 35 U.S.C. §119(e) to U.S. Provisional Patent Application No. 61/734,086, entitled “SYSTEMS AND METHODS FOR INTRA PREDICTION VIDEO CODING” and filed on Dec. 6, 2012, the disclosure of which is hereby incorporated by reference in its entirety.