Method and device for encoding and decoding video image data

Information

  • Patent Grant
  • Patent Number
    9,330,060
  • Date Filed
    Thursday, April 15, 2004
  • Date Issued
    Tuesday, May 3, 2016
Abstract
A method and device for encoding and decoding video image data. An MPEG decoding and encoding process using a data flow pipeline architecture implemented entirely in dedicated logic is provided. A plurality of fixed-function data processors are interconnected with at least one pipelined data transmission line. At least one of the fixed-function processors performs a predefined encoding/decoding function upon receiving a set of predefined data from said transmission line. Pipeline stages are synchronized on data without requiring a central traffic controller. This architecture provides better performance with smaller size, lower power consumption and better use of memory bandwidth.
Description
FIELD OF THE INVENTION

The present invention relates to the field of digital video image processing. More particularly, embodiments of the present invention relate to methods and devices for encoding and decoding video image data without requiring a separate digital signal processor (DSP) or an embedded processor to perform the main data-stream management.


BACKGROUND OF THE INVENTION

The conventional art of designing and configuring a Motion Pictures Experts Group (MPEG) encoding and decoding system is confronted with several technical limitations and difficulties. In particular, processing video image data under the MPEG video standard involves many complex algorithms and requires several processing stages. Each of these algorithms consists of many computationally intensive tasks, and all of the complex encoding and decoding procedures must execute in real time. For the purpose of generating real-time video images, conventional methods of configuring an MPEG system generally require a very high performance solution. A conventional configuration usually requires a digital signal processor (DSP) or embedded processor to handle the mainstream processes and may also require additional hardware assist logic circuits.


However, the conventional configurations create several technical challenges and difficulties. Implementing a conventional configuration first requires selecting an appropriate high performance DSP platform to support the high processing demand, which increases the production cost of such a system. The processor selected on this DSP platform then fetches and executes software programs stored in memory, which increases the size and power consumption of the system and degrades the processing bandwidth due to the data transfer operations between the memory and the processor. The handling and control of data transfer sequencing and synchronization further adds to the DSP overhead, which further slows down the MPEG encode/decode operations.


Even though current digital video encoding and compression techniques are able to take advantage of redundancies inherent in natural imagery to dramatically improve the efficiency of video image data storage and processing and to allow for faster transmission of images, there is still a need to lower power consumption, to increase processing speed and to achieve more compact video storage. This is a particularly challenging task because decoding MPEG compressed video data involves five basic operations: 1) bit stream parsing and variable-length decoding; 2) inverse scan and run-level code decoding; 3) de-quantization and the inverse discrete cosine transform (IDCT); 4) motion compensation; and 5) YUV to RGB color conversion.


For example, FIG. 1 shows a functional block diagram of a conventional MPEG video image display system, in accordance with the prior art. In particular, DSP/RISC 110 controls, manages and co-processes the fixed functions necessary for the image data processing, such as the discrete cosine transform (DCT), motion estimation (ME) and motion compensation, and other components of the codec functions. These functions include the five operations described above. A quantitative estimate of the complexity of the general MPEG video real-time decoding process, in terms of the number of required instruction cycles per second, reveals that for a typical general-purpose RISC processor all of the resources of the microprocessor are exhausted by, for example, the color conversion operation alone. Real-time decoding refers to decoding at the rate at which the video signals were originally recorded (e.g., 30 frames per second). An exemplary digital television signal generates about 10.4 million picture elements (pixels) per second. Since each pixel has three independent color components (primary colors: red, green and blue), the total data element rate is more than 30 million per second, which is of the same order of magnitude as current CPU clock speeds. Thus, even at the highest current CPU clock speed of 200 MHz, there are only 20 clock cycles available for processing each pixel, and fewer than 7 clocks per color component.


Furthermore, to convert the video signals of a digital television signal from YUV format to RGB format in real time, for example, even the fastest conventional microprocessors require approximately 200 million instruction cycles per second (nearly all of the data processing bandwidth of such a microprocessor). Depending on the type of processor used and several other factors such as bit rate, average symbol rate, etc., implementing each of the IDCT and motion compensation functions in real time may require, for example, anywhere from approximately 90 million operations per second (MOPS) to 200 MOPS for full-resolution images. Existing general-purpose microprocessors are extremely inefficient at handling real-time decompression of full-size digital motion video signals compressed according to MPEG standards. Typically, additional hardware is needed for such real-time decompression, which adds to system complexity and cost.
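The cycle-budget arithmetic quoted above can be reproduced directly from the stated figures (10.4 million pixels per second, three color components, a 200 MHz clock); the short sketch below simply re-derives those numbers and is not part of the patented apparatus:

```python
# Back-of-the-envelope cycle budget from the figures quoted above:
# a 10.4 Mpixel/s digital TV signal, 3 color components per pixel,
# and a 200 MHz CPU clock.
PIXEL_RATE = 10.4e6         # pixels per second
COMPONENTS = 3              # red, green, blue
CPU_CLOCK = 200e6           # cycles per second

element_rate = PIXEL_RATE * COMPONENTS           # > 30 million elements/s
cycles_per_pixel = CPU_CLOCK / PIXEL_RATE        # about 19 cycles per pixel
cycles_per_component = CPU_CLOCK / element_rate  # fewer than 7 per component

print(round(cycles_per_pixel, 1))      # ~19.2
print(round(cycles_per_component, 1))  # ~6.4
```

The result matches the text: roughly 20 clocks per pixel and fewer than 7 per color component, which is why a software-only decoder saturates the processor.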


The requirement to perform these tasks using a processor that executes software programs increases the cost, power consumption, and size of the system and further degrades the bandwidth and speed of video image data processing. For these reasons, there is a need for a more efficient implementation of real-time decompression of digital motion video compressed according to MPEG standards, such that the difficulties and limitations of the conventional techniques can be resolved.


SUMMARY OF THE INVENTION

Various embodiments of the present invention provide a device configuration and method for carrying out a video image data encoding/decoding function implemented with pipelined, data-driven functional blocks. This eliminates the need for a digital signal processor (DSP) as a central processor and thereby overcomes the above-mentioned prior art difficulties and limitations. In one embodiment, the functional blocks may be fixed functions.


In one embodiment, the present invention provides an MPEG-4 video image data encoding/decoding device including fixed-function processors connected in a pipelined configuration. A fixed-function processor carries out a predefined encoding/decoding function upon receiving a set of predefined data in a data-driven manner, such that a central processor is not required. By configuring the fixed-function processors in a pipelined architecture, a high degree of parallel processing capability can be achieved. The pipelined functional blocks can be configured such that they are highly portable, conveniently maintained, easily scalable, and implementable in different encoding/decoding devices. As the configuration and operations are significantly simplified, the encoding/decoding device can achieve low power consumption, and a functional block that is not in use can be powered down in an idle state until it is activated again when data is received. Since each functional block may be a dedicated processor, the memory size can be optimally designed to minimize the resource waste arising from storing the large amount of data required for performing multiple logic functions.


In one embodiment, the present invention provides a video image data encoding/decoding device including a plurality of fixed-function data processors interconnected with at least one pipelined data transmission line. The fixed-function processors perform predefined encoding/decoding functions upon receiving a set of predefined data from the transmission line. In one embodiment, the plurality of fixed function data processors may include a data buffer queue for receiving a set of predefined data from the transmission line. In another embodiment, the plurality of fixed function data processors may include a control queue for initiating a performance of the predefined encoding/decoding function upon receiving a set of predefined data from the transmission line.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention:



FIG. 1 shows a functional block diagram of a conventional MPEG video image display system, in accordance with the prior art.



FIG. 2 shows a functional block diagram of a data-driven MPEG encode-decode system, in accordance with an embodiment of the present invention.



FIG. 3 shows a functional block diagram of a data-driven MPEG decoder architecture, in accordance with an embodiment of the present invention.



FIG. 4 shows a functional block diagram of a data-driven MPEG encoder architecture, in accordance with an embodiment of the present invention.



FIG. 5 is a functional block diagram for showing a pipelined fixed function, in accordance with an embodiment of the present invention.



FIG. 6 is a functional block diagram showing a general pipelined architecture of a system implemented with pipelined fixed functions, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION


FIG. 2 shows an exemplary functional block diagram of a data-driven MPEG encode/decode system 200, in accordance with an embodiment of the present invention. In one embodiment, MPEG encode/decode system 200 is a data-driven pipelined configuration of MPEG-4 decoder architecture. MPEG encode/decode system 200 includes bit stream fixed function logic 210, IDCT fixed function logic 220, motion estimation/compensation fixed function logic 230, and post process fixed function logic 240.



FIG. 2 shows the pipelined data flow where the data transfers from one functional block to the next, e.g., from bit stream fixed function logic 210 to inverse discrete cosine transform (IDCT) fixed function logic 220 and motion estimation/compensation fixed function logic 230, and then to post process fixed function logic 240, are automatically controlled by a data-driven process. The configuration is significantly simplified because there is no need to employ a high performance digital signal processor (DSP) to control and coordinate the overall data flow. The fixed function logic of data-driven MPEG encode/decode system 200 includes dedicated logic that operates independently of the other fixed functions. Fixed function logic carries out a predefined encoding/decoding function upon receiving a set of predefined data in a data-driven manner, such that a central processor is not required.
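The data-driven behavior described above — each stage fires as soon as data arrives on its input, with no central controller sequencing the stages — can be modeled in software. The sketch below is an illustration only (the patent describes dedicated hardware logic, not software threads), and the three stage transforms are hypothetical stand-ins:

```python
# Software model of the FIG. 2 data flow: each stage blocks on its own
# input queue and fires when data arrives; no central controller exists.
import queue
import threading

def stage(fn, q_in, q_out):
    """Apply fn to every item pulled from q_in; push results downstream."""
    while True:
        item = q_in.get()
        if item is None:          # end-of-stream marker
            q_out.put(None)
            return
        q_out.put(fn(item))

# Hypothetical stand-ins for the fixed-function blocks 210, 220/230, 240.
bitstream = lambda x: x + 1       # e.g. bit stream decode
idct_mc   = lambda x: x * 2       # e.g. IDCT / motion compensation
postproc  = lambda x: x - 1       # e.g. post processing

q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
for fn, q_in, q_out in [(bitstream, q0, q1),
                        (idct_mc, q1, q2),
                        (postproc, q2, q3)]:
    threading.Thread(target=stage, args=(fn, q_in, q_out), daemon=True).start()

for v in [1, 2, 3]:
    q0.put(v)
q0.put(None)

results = []
while (r := q3.get()) is not None:
    results.append(r)
print(results)   # [3, 5, 7]
```

Note that all three stages run concurrently once data is flowing, which is the source of the parallelism claimed for the pipelined architecture.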


It should be appreciated that the dedicated logic processors can also achieve higher performance without a high-cost implementation, because the simplified configuration does not require synchronization or complicated check-and-branch operations. The speed of carrying out the encoding/decoding function is improved because each of the pipelined functional blocks can perform its assigned dedicated function simultaneously. These significant benefits are achieved because less overhead is wasted on tracking and synchronizing the data flows among the many processors required in encoding/decoding devices implemented with conventional configurations.



FIG. 3 shows a functional block diagram of data-driven MPEG decoder architecture 300, in accordance with an embodiment of the present invention. In one embodiment, data-driven MPEG decoder architecture 300 is operable to decode MPEG-4 video image data. In one embodiment, the pipelined configuration is constructed by employing blocks, each carrying out fixed logic, that represent pipeline stages. The smaller functional block inside each larger functional block represents the fixed function (e.g., the pipelined stage implemented inside that larger functional block).


The MPEG encoded data is stored in external memory 310. In one embodiment, bit stream decoder stage 320 accesses the MPEG encoded data. The data bits are fetched from external memory 310 to bit stream decoder function 322, where the MPEG data bits are decoded. In one embodiment, a data bus allows communication as required among the stages and functions of data-driven MPEG decoder architecture 300 and external memory 310. As understood by those skilled in the art, a “bus” may comprise a shared set of wires or electrical signal paths to which other elements connect. However, as also understood by those skilled in the art, required communication paths may also be provided by other structures, such as individual point-to-point connections from each element to a switch, dedicated connections for each pair of elements that communicate with each other, or any combination of dedicated and shared paths. Therefore, it should be appreciated that the term “bus” refers to any structure that provides the communication paths required by the methods and device described below.


The decoded MPEG data bits are pushed to two other stages, preprocess stage 330 and motion compensation stage 350, for further computation. In one embodiment, preprocess stage 330 comprises five functions: DC scalar calculation function 332 for determining the discrete transform value; predict direction function 334 for determining the prediction direction; AC/DC prediction function 336 for calculating the predicted AC and DC values; de-quantization function 338 for reversing the quantization and calculating the result value; and Run Level Coding (RLC) and Inverse Scan (I-Scan) function 340 for decoding the RLC and reversing the scan process to lay out the correct order of values.
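The RLC decode and inverse-scan step of function 340 can be illustrated in software. The sketch below is a generic model of run-level expansion followed by an inverse zig-zag scan, as used in MPEG-style coding generally; it is an illustration, not the patented logic, and the specific (run, level) input is hypothetical:

```python
# Illustrative run-level decode plus inverse zig-zag scan for an 8x8
# block: (run, level) pairs expand into 64 coefficients, which are then
# written back into the block in zig-zag order.
def zigzag_order(n=8):
    """(row, col) pairs in standard zig-zag scan order for an n x n block."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def rlc_decode(pairs, n=64):
    """Expand (run, level) pairs: 'run' zeros followed by one 'level'."""
    coeffs = []
    for run, level in pairs:
        coeffs.extend([0] * run)
        coeffs.append(level)
    coeffs.extend([0] * (n - len(coeffs)))   # remaining zeros (end of block)
    return coeffs

def inverse_scan(coeffs, n=8):
    """Lay the linear coefficient list back out as an n x n block."""
    block = [[0] * n for _ in range(n)]
    for val, (r, c) in zip(coeffs, zigzag_order(n)):
        block[r][c] = val
    return block

# Hypothetical input: DC value 50, then a run of one zero before -3, then 7.
block = inverse_scan(rlc_decode([(0, 50), (1, -3), (0, 7)]))
print(block[0][0])   # 50 (the DC coefficient lands at the top-left corner)
```

The output of this step is the “correct order of values” the text refers to: a block matrix ready for de-quantization and the IDCT.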


From preprocess stage 330, a decoded block matrix is pushed to inverse discrete cosine transform (IDCT) stage 360. IDCT stage 360 performs the IDCT function 362 of transforming the matrix from the frequency domain into the time domain. The decoded block matrix elements represent the correct color space values.
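The transform performed by IDCT function 362 can be written out from the standard 8×8 inverse DCT definition. The direct form below is a reference sketch only — the patented IDCT stage uses dedicated logic, and practical implementations use fast separable algorithms rather than this quadruple loop:

```python
# Direct (non-optimized) 8x8 inverse DCT from the standard definition:
# f(x,y) = (1/4) * sum_u sum_v C(u) C(v) F(u,v)
#          * cos((2x+1) u pi / 16) * cos((2y+1) v pi / 16),
# with C(0) = 1/sqrt(2) and C(k) = 1 otherwise.
import math

def idct_8x8(F):
    """Inverse 2-D DCT: frequency-domain block F -> spatial-domain block."""
    C = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    f = [[0.0] * 8 for _ in range(8)]
    for x in range(8):
        for y in range(8):
            s = 0.0
            for u in range(8):
                for v in range(8):
                    s += (C(u) * C(v) * F[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            f[x][y] = s / 4.0
    return f

# Sanity check: a DC-only block (F[0][0] = 8) decodes to a flat block of 1.0s.
F = [[0.0] * 8 for _ in range(8)]
F[0][0] = 8.0
f = idct_8x8(F)
print(round(f[0][0], 6))   # 1.0
```

The DC-only check reflects the intuition in the text: the frequency-domain block is mapped back into the time (spatial) domain, where the elements represent color-space values.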


While bit stream decoder stage 320 sends data to preprocess stage 330, the decoded bit stream is also sent as motion vectors to motion compensation stage 350. At motion compensation function 352 of motion compensation stage 350, the previous frame data is retrieved from external memory 310 and processed into a block matrix for the next stage copy and transfer.


The final pipelined stage may be copy and transfer stage 370 that is implemented to receive the block matrices sent from motion compensation stage 350 and IDCT stage 360. At copy and retire function 372, the block matrices are combined if necessary, and the final decoded picture is written back to external memory 310 to complete the data flow that drives the functions implemented as pipelined stages to carry out the functions sequentially.



FIG. 4 shows a functional block diagram of a data-driven MPEG encoder architecture 400, in accordance with an embodiment of the present invention. In one embodiment, data-driven MPEG encoder architecture 400 is operable to encode MPEG-4 video image data. In one embodiment, the pipelined configuration is constructed by employing blocks, each carrying out fixed logic, that represent pipeline stages. The smaller functional block inside each larger functional block represents the fixed function (e.g., the pipelined stage implemented inside that larger functional block).


The data of the original picture is stored in external memory 410. In one embodiment, motion estimation stage 420 accesses the original picture data. Motion estimation function 422 is operable to retrieve the picture data, search for the optimal block matrix, and send the optimal block matrix to discrete cosine transform (DCT) stage 430. In one embodiment, the motion estimation function is also operable to transmit the motion vector of the picture data to bit stream encoding stage 440. Also, the pipelined process transfers the decoder motion compensation data to DCT stage 430 and to copy and retire stage 490 for decoded picture reconstruction.


DCT function 432 of DCT stage 430 is implemented to transform the matrix from the time domain to the frequency domain upon receiving the data from motion estimation stage 420. The result is transmitted to quantization stage 450. Quantization function 452 of quantization stage 450 is operable to calculate and quantize the values of the received data. The quantized data is then forwarded to inverse preprocess stage 460 and to de-quantization stage 470. De-quantization function 472, IDCT function 482, and copy and retire function 492 operate in a manner similar to de-quantization function 338, IDCT function 362, and copy and retire function 372 of FIG. 3, respectively, for an MPEG-4 decoder. The reconstructed picture may be saved back to external memory 410 for future use, completing the data flow and driving the pipelined stages to perform the encoder functions in a sequential pipelined fashion.
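The quantize/de-quantize round trip between stages 450 and 470 can be sketched generically: the encoder divides each coefficient by a step size and rounds, and the decoder multiplies back, which is where precision is deliberately discarded. This is a simplified illustration with a hypothetical step size, not the specific quantizer of the patent:

```python
# Simplified quantization / de-quantization round trip: divide by a
# step size and round on the encode side, multiply back on the decode
# side. The rounding is where the coding loss occurs.
def quantize(block, q):
    return [[round(v / q) for v in row] for row in block]

def dequantize(block, q):
    return [[v * q for v in row] for row in block]

coeffs = [[123.0, -7.0], [16.0, 2.0]]   # hypothetical DCT coefficients
q = 8                                    # hypothetical quantizer step
recon = dequantize(quantize(coeffs, q), q)
print(recon)   # [[120, -8], [16, 0]] -- close to, but not equal to, the input
```

The decoder-side path (de-quantization stage 470 followed by IDCT stage 480) exists inside the encoder precisely so that motion estimation works against the same reconstructed picture the real decoder will see.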


Inverse preprocess stage 460 includes AC/DC prediction function 464 and RLC and scan function 462. The quantized block matrix from quantization stage 450 is combined with the AC/DC predictions and scanned to find all the run-level codes (RLC). The RLC is then pushed to bit stream encoding stage 440, which gathers all the information about the picture, including the RLC and motion vectors, from inverse preprocess stage 460 and motion estimation stage 420. Bit stream encoder function 442 encodes the final MPEG bit stream and stores it back to external memory 410 to complete the data flow. Bit stream encoding stage 440 is also implemented with bit rate control function 444 to prepare the compression ratio for the next frame of the video.



FIG. 5 is a functional block diagram showing a pipelined fixed function 500, in accordance with an embodiment of the present invention. In one embodiment, the stage blocks shown in FIGS. 2, 3, and 4 include the functional elements shown in pipelined fixed function 500 of FIG. 5. In one embodiment, the function logic is a fixed function circuit (e.g., IDCT function 362 of FIG. 3). Each function requires two inputs from the previous stage: a control input and a data input. A function block includes at least one control queue 510 for queuing control inputs and at least one data buffer queue 520 for queuing data inputs. When control queue 510 has a command, function logic 530 starts to process the data from data buffer queue 520. Upon completion, the functional stage stores the result to the data buffer queue of the next stage. Meanwhile, a command is sent to the control queue of the next functional stage to initiate the functional performance designated for that stage.
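The two-queue handshake of FIG. 5 can be modeled compactly: a stage fires only when its control queue holds a command, and on completion it pushes both result data and a new command to the next stage's queues. This is a software illustration of the described behavior, not the patented circuit, and the stage transforms are hypothetical:

```python
# Model of pipelined fixed function 500: a stage fires only when its
# control queue (510) has a command, consumes its data buffer queue
# (520), then hands off data + command to the next stage's queues.
from collections import deque

class Stage:
    def __init__(self, logic):
        self.logic = logic          # the fixed-function transform
        self.ctrl = deque()         # control queue (cf. 510)
        self.data = deque()         # data buffer queue (cf. 520)
        self.next = None            # downstream stage, if any

    def step(self):
        """Fire once if a command and data are pending; return the result."""
        if not self.ctrl or not self.data:
            return None             # nothing to do: stage stays idle
        self.ctrl.popleft()         # consume the command
        result = self.logic(self.data.popleft())
        if self.next:               # hand off data and command together
            self.next.data.append(result)
            self.next.ctrl.append("go")
        return result

# Two chained stages with hypothetical stand-in logic.
a, b = Stage(lambda x: x + 1), Stage(lambda x: x * 10)
a.next = b
a.data.append(4)
a.ctrl.append("go")
a.step()          # stage A fires: pushes 5 and a "go" command to stage B
out = b.step()    # stage B fires once its control queue holds a command
print(out)        # 50
```

Because a stage with an empty control queue simply does nothing, this model also shows why an unused functional block can sit idle (or be powered down) until data arrives.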



FIG. 6 is a functional block diagram showing a general pipelined architecture 600 of a system implemented with pipelined fixed functions, in accordance with an embodiment of the present invention. In one embodiment, pipelined architecture 600 is a data driven processing system that is configured by sequentially connecting a plurality of fixed function blocks (e.g., pipelined fixed function 500 of FIG. 5). The configuration as shown can be flexibly and effectively implemented in many different data processing systems to simplify system configuration, to lower the implementation cost and to minimize the complicated control and synchronization problems often encountered in the conventional central control processing systems.


The encoding and decoding systems as detailed in the described embodiments are divided into blocks of pipeline stages. Each block automatically synchronizes passing and buffering of data, and the system is completely data driven. There is no need for a central processor to control the sequence and data. Thus, the streamlined design provides a very efficient and high performance engine.


In one embodiment, pipeline stages are partitioned to follow the logic sequence in an MPEG encoding/decoding process. A stage of the pipeline is programmed to look at the control queue. For example, with reference to FIG. 6, if there is a request for data computation received at control queue 610a, fixed function 600a starts to process the data in data buffer queue 620a. When fixed function 600a finishes the data computation, the data is stored in data buffer queue 620b of fixed function 600b (e.g., the next stage). At the same time, fixed function 600a transmits the sequence information into control queue 610b of fixed function 600b.


In one embodiment, the data buffer queues and control queues shown in FIGS. 5 and 6 are implemented using a dual buffer scheme (ping-pong buffer): one buffer for the current process and one for the previous stage to use. The advantage of using dual buffers is to ensure that the process can continue without being stalled by the next stage.
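The dual-buffer scheme can be sketched as follows; this is a generic ping-pong buffer illustration under the assumptions above, not the patent's hardware implementation:

```python
# Generic ping-pong (dual) buffer: while the consumer reads one half,
# the producer fills the other; swapping flips the roles so neither
# side stalls the other.
class PingPongBuffer:
    def __init__(self):
        self.buffers = [[], []]
        self.write_idx = 0          # which half the producer fills

    def write(self, items):
        self.buffers[self.write_idx] = list(items)

    def swap(self):
        """Flip halves: the just-written half becomes readable."""
        self.write_idx ^= 1

    def read(self):
        return self.buffers[self.write_idx ^ 1]

pp = PingPongBuffer()
pp.write([1, 2, 3])   # producer fills buffer 0
pp.swap()             # hand buffer 0 to the consumer
pp.write([4, 5, 6])   # producer fills buffer 1 while buffer 0 is read
print(pp.read())      # [1, 2, 3] -- the consumer still sees the old half
```

In the pipeline of FIG. 6, the "producer" is the previous stage writing results forward and the "consumer" is the current stage's function logic, which is exactly the decoupling the text describes.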


Various embodiments of the present invention, a device and method for encoding and decoding video image data, are described. In one embodiment, the present invention includes a plurality of fixed-function data processors interconnected with at least one pipelined data transmission line, wherein each of the fixed-function processors performs a predefined encoding/decoding function upon receiving a set of predefined data from the transmission line. In one embodiment, the plurality of fixed-function data processors includes a data buffer queue for receiving a set of predefined data from the transmission line. In another embodiment, the plurality of fixed-function data processors includes a control queue for initiating a performance of the predefined encoding/decoding function upon receiving a set of predefined data from the transmission line.


In another embodiment, the present invention provides a method for encoding and/or decoding a video image. The method includes sequentially pipelining a set of data via a data transmission line connected between a plurality of fixed-function data processors for sequentially performing a predefined encoding/decoding function upon receiving the set of data from the data transmission line.


Various embodiments of the invention, a method and device for encoding and decoding video image data, are thus described. While the present invention has been described in particular embodiments, it should be appreciated that the invention should not be construed as limited by such embodiments, but rather construed according to the below claims.

Claims
  • 1. A video image data encoding/decoding device comprising: a plurality of fixed-function data processors interconnected with at least one pipelined data transmission line wherein each of said plurality of fixed-function data processors performs a predefined encoding/decoding function upon receiving a set of predefined data from another of said plurality of fixed-function data processors, wherein said plurality of fixed-function data processors are synchronized on data without a central controller, wherein each of said plurality of fixed-function data processors is data driven, and each of said plurality of fixed-function data processors comprises dedicated logic that operates independently from the remaining said plurality of fixed-function data processors, and each of said plurality of fixed-function data processors comprises a first queue to queue a set of predefined data and a second queue to queue a set of predefined control data, said set of predefined data and set of predefined control data are received from a previous fixed-function data processor of said plurality of fixed-function data processors, and each of said plurality of fixed-function data processors is operable to simultaneously store a set of predefined data to a first queue of a subsequent fixed-function data processor of said plurality of fixed-function data processors and to send a set of predefined control data to a second queue of said subsequent fixed-function data processor, wherein said first queue comprises a ping-pong buffer and said second queue comprises a ping-pong buffer.
  • 2. The video image data encoding/decoding device of claim 1 wherein said first queue is operable for receiving a set of predefined data from said transmission line.
  • 3. The video image data encoding/decoding device of claim 1 wherein said second queue is operable for initiating a performance of said predefined encoding/decoding function upon receiving a set of predefined control data from said transmission line.
  • 4. The video image data encoding/decoding device of claim 1 wherein said first queue is operable for receiving a set of predefined data from said transmission line and said second queue is operable for initiating a performance of said predefined encoding/decoding function upon receiving a set of predefined control data.
  • 5. The video image data encoding/decoding device of claim 1 wherein at least one of said plurality of fixed-function data processors comprises a bit-stream decoder.
  • 6. The video image data encoding/decoding device of claim 5 wherein at least one of said plurality of fixed-function data processors comprises a motion compensation processor.
  • 7. The video image data encoding/decoding device of claim 1 wherein at least one of said plurality of fixed-function data processors comprises a discrete cosine transformation (DCT) logic processor.
  • 8. The video image data encoding/decoding device of claim 1 wherein at least one of said plurality of fixed-function data processors comprises an inverse discrete cosine transformation (IDCT) logic processor.
  • 9. The video image data encoding/decoding device of claim 1 wherein at least one of said plurality of fixed-function data processors comprises a direction prediction processor.
  • 10. The video image data encoding/decoding device of claim 1 wherein at least one of said plurality of fixed-function data processors comprises a de-quantization processor.
  • 11. The video image data encoding/decoding device of claim 1 wherein at least one of said plurality of fixed-function data processors comprises an AC/DC prediction processor.
  • 12. The video image data encoding/decoding device of claim 1 wherein at least one of said plurality of fixed-function data processors comprises a Run Level Coding (RLC) and Inverse Scan (I-Scan) logic processor.
  • 13. The video image data encoding/decoding device of claim 1 wherein at least one of said plurality of fixed-function data processors comprises a copy and retire processor operable to combine data block matrices.
  • 14. The video image data encoding/decoding device of claim 1 further comprising a data bus for transmitting data between said video image data encoding/decoding device and an external memory.
  • 15. The video image data encoding/decoding device of claim 1, wherein at least one of said plurality of fixed-function data processors is powered down.
  • 16. The video image data encoding/decoding device of claim 1, wherein at least one of said plurality of fixed-function data processors comprises a bit stream encoder processor.
  • 17. The video image data encoding/decoding device of claim 1, wherein each of said plurality of fixed-function data processors is operable to automatically synchronize passing and buffering of data.
  • 18. The video image data encoding/decoding device of claim 1, wherein a pipeline architecture comprises said plurality of fixed-function data processors.
  • 19. A method for encoding/decoding video image data, said method comprising: receiving a first set of predefined image data at a first data driven processor for performing a first predefined encoding/decoding function, wherein said set of predefined image data is queued by a first queue of said first data driven processor and wherein a first set of predefined control data associated with said first set of predefined data is queued by a second queue of said first data driven processor; performing said first predefined encoding/decoding function via said first data driven processor; and transmitting via said first data driven processor a second set of predefined image data to at least a second data driven processor for performing a second predefined encoding/decoding function, said second set of predefined image data is queued by a first queue of said second data driven processor, said first data driven processor and said second data driven processor are synchronized on data without a central controller, and said first data driven processor is operable to simultaneously perform said transmitting and send a second set of predefined control data to said second data driven processor, wherein said second set of predefined control data is queued by a second queue of said second data driven processor; wherein said first queue of said first data driven processor comprises a ping-pong buffer, said second queue of said first data driven processor comprises a ping-pong buffer, said first queue of said second data driven processor comprises a ping-pong buffer, and said second queue of said second data driven processor comprises a ping-pong buffer.
  • 20. The method as recited in claim 19 wherein said second queue of said first data driven processor is operable for initiating said performing upon receiving said first set of predefined control data.
  • 21. The method as recited in claim 20 further comprising: performing said second predefined encoding/decoding function via said second data driven processor, said second queue of said second data driven processor is operable for initiating said performing said second predefined encoding/decoding function upon receiving said second set of predefined control data.
  • 22. A data encoding/decoding system comprising: a first data driven processor operable to receive a first set of predefined image data and a first set of predefined control data from a previous data driven processor, said first data driven processor is operable to perform a first predefined encoding/decoding function, said first set of predefined image data is queued by a data buffer queue of said first data driven processor and a first set of predefined control data associated with said first set of predefined image data is queued by a control queue of said first data driven processor; a second data driven processor connected to said first data driven processor, said second data driven processor comprises a data buffer queue and a control queue; and wherein said first data driven processor is operable to simultaneously store a second set of predefined image data to said data buffer queue of said second data driven processor and send a second set of predefined control data to said control queue of said second data driven processor that is associated with said second set of predefined image data, said second data driven processor is operable to perform a second predefined encoding/decoding function, said first data driven processor and said second data driven processor are synchronized on data without a central controller, wherein said data buffer queue of said first data driven processor comprises a ping-pong buffer, said control queue of said first data driven processor comprises a ping-pong buffer, said data buffer queue of said second data driven processor comprises a ping-pong buffer, and said control queue of said second data driven processor comprises a ping-pong buffer.
  • 23. The system as recited in claim 22 wherein each of said first and second data driven processors is operable to synchronize passing and buffering of data automatically.
  • 24. The system as recited in claim 22 further comprising a third data driven processor connected to said second data driven processor, said third data driven processor comprises a data buffer queue and a control queue, said data buffer queue of said third data driven processor comprises a ping-pong buffer and said control queue of said third data driven processor comprises a ping-pong buffer.
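Claims 22-24 describe a chain of data driven processors, each holding a ping-pong (double-buffered) data queue and a matching control queue, where a stage fires its encoding/decoding function once both its image data and associated control data arrive, with no central traffic controller. A minimal sketch of that synchronization pattern follows; the class names, the semaphore-based double buffer, and the end-of-stream convention are illustrative assumptions, not taken from the patent:

```python
import threading

class PingPongBuffer:
    """Two alternating slots: the upstream stage fills one while the
    downstream stage drains the other, so transfer and processing overlap."""
    def __init__(self):
        self.slots = [None, None]
        self.ready = [threading.Semaphore(0), threading.Semaphore(0)]
        self.free = [threading.Semaphore(1), threading.Semaphore(1)]
        self.w = 0  # producer's current slot
        self.r = 0  # consumer's current slot

    def put(self, item):
        self.free[self.w].acquire()     # wait until this slot has been drained
        self.slots[self.w] = item
        self.ready[self.w].release()    # signal data available
        self.w ^= 1                     # flip to the other slot

    def get(self):
        self.ready[self.r].acquire()    # block until data arrives
        item = self.slots[self.r]
        self.free[self.r].release()     # slot may now be refilled
        self.r ^= 1
        return item

class Stage(threading.Thread):
    """A data driven processor: runs its function only once both image data
    and the associated control data are queued, then forwards results to the
    next stage's queues. Synchronization is purely on data availability."""
    def __init__(self, func, data_in, ctrl_in, data_out=None, ctrl_out=None):
        super().__init__(daemon=True)
        self.func = func
        self.data_in, self.ctrl_in = data_in, ctrl_in
        self.data_out, self.ctrl_out = data_out, ctrl_out

    def run(self):
        while True:
            data = self.data_in.get()   # image data queue
            ctrl = self.ctrl_in.get()   # associated control data queue
            if data is None:            # illustrative end-of-stream marker
                if self.data_out:
                    self.data_out.put(None)
                    self.ctrl_out.put(None)
                break
            out_data, out_ctrl = self.func(data, ctrl)
            if self.data_out:
                self.data_out.put(out_data)
                self.ctrl_out.put(out_ctrl)
```

Because each stage blocks on its own input queues, back-pressure and stage-to-stage handshaking fall out of the buffer semantics; no central controller schedules the pipeline, which mirrors the "synchronized on data" language of claim 22.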
RELATED U.S. APPLICATION

This application claims priority to the provisional patent application, Ser. No. 60/463,017, entitled “Data Flow Pipeline Architecture for MPEG Video Codec,” with filing date Apr. 15, 2003, and assigned to the assignee of the present application.

US Referenced Citations (221)
Number Name Date Kind
3679821 Schroeder Jul 1972 A
3794984 Deerfield et al. Feb 1974 A
4177514 Rupp Dec 1979 A
4583164 Tolle Apr 1986 A
4591979 Iwashita May 1986 A
4644461 Jennings Feb 1987 A
4745544 Renner et al. May 1988 A
4755810 Knierim Jul 1988 A
4814978 Dennis Mar 1989 A
4843540 Stolfo Jun 1989 A
4992857 Williams Feb 1991 A
5045940 Peters et al. Sep 1991 A
5130797 Murakami et al. Jul 1992 A
5146324 Miller et al. Sep 1992 A
5212742 Normile et al. May 1993 A
5224213 Dieffenderfer Jun 1993 A
5225875 Shapiro et al. Jul 1993 A
5233689 Rhoden et al. Aug 1993 A
5267334 Normille et al. Nov 1993 A
5267344 Nelson, III Nov 1993 A
5369744 Fukushima et al. Nov 1994 A
5371896 Gove et al. Dec 1994 A
5448745 Okamoto Sep 1995 A
5586281 Miyama et al. Dec 1996 A
5596369 Chau Jan 1997 A
5598514 Purcell et al. Jan 1997 A
5608652 Astle Mar 1997 A
5613146 Gove et al. Mar 1997 A
5623311 Phillips et al. Apr 1997 A
5630033 Purcell et al. May 1997 A
5646692 Bruls Jul 1997 A
5657465 Davidson et al. Aug 1997 A
5682491 Pechanek et al. Oct 1997 A
5768429 Jabbi et al. Jun 1998 A
5790881 Nguyen Aug 1998 A
5809538 Pollmann et al. Sep 1998 A
5821886 Son Oct 1998 A
5845083 Hamadani et al. Dec 1998 A
5870310 Malladi Feb 1999 A
5883823 Ding Mar 1999 A
5889949 Charles Mar 1999 A
5898881 Miura et al. Apr 1999 A
5909224 Fung Jun 1999 A
5923375 Pau Jul 1999 A
5926643 Miura Jul 1999 A
5954786 Volkonsky Sep 1999 A
5969728 Dye et al. Oct 1999 A
5999220 Washino Dec 1999 A
6035349 Ha et al. Mar 2000 A
6049818 Leijten et al. Apr 2000 A
6073185 Meeker Jun 2000 A
6088355 Mills et al. Jul 2000 A
6098174 Baron et al. Aug 2000 A
6144362 Kawai Nov 2000 A
6145073 Cismas Nov 2000 A
6148109 Boon et al. Nov 2000 A
6157751 Olson et al. Dec 2000 A
6175594 Strasser et al. Jan 2001 B1
6188799 Tan et al. Feb 2001 B1
6195389 Rodriguez et al. Feb 2001 B1
6222883 Murdock et al. Apr 2001 B1
6269174 Koba et al. Jul 2001 B1
6272281 De Vos et al. Aug 2001 B1
6305021 Kim Oct 2001 B1
6311204 Mills Oct 2001 B1
6317124 Reynolds Nov 2001 B2
6356945 Shaw et al. Mar 2002 B1
6360234 Jain et al. Mar 2002 B2
6418166 Wu et al. Jul 2002 B1
6459738 Wu et al. Oct 2002 B1
6526500 Yumoto et al. Feb 2003 B1
6539060 Lee et al. Mar 2003 B1
6539120 Sita et al. Mar 2003 B1
6560629 Harris May 2003 B1
6570579 MacInnis May 2003 B1
6647062 Mackinnon Nov 2003 B2
6665346 Lee et al. Dec 2003 B1
6687788 Vorbach et al. Feb 2004 B2
6690835 Brockmeyer et al. Feb 2004 B1
6690836 Natarajan et al. Feb 2004 B2
6708246 Ishihara et al. Mar 2004 B1
6721830 Vorbach et al. Apr 2004 B2
6751721 Webb et al. Jun 2004 B1
6760478 Adiletta et al. Jul 2004 B1
6782052 Sun et al. Aug 2004 B2
6799192 Handley Sep 2004 B1
6807317 Mathew et al. Oct 2004 B2
6823443 Horiyama et al. Nov 2004 B2
6950473 Kim et al. Sep 2005 B2
6993639 Schlansker et al. Jan 2006 B2
6996645 Wiedenman et al. Feb 2006 B1
7038687 Booth, Jr. et al. May 2006 B2
7095783 Sotheran et al. Aug 2006 B1
7173631 Anderson Feb 2007 B2
7181594 Wilkinson et al. Feb 2007 B2
7215823 Miura et al. May 2007 B2
7260148 Sohm Aug 2007 B2
7277101 Zeng Oct 2007 B2
7289672 Sun et al. Oct 2007 B2
7379501 Lainema May 2008 B2
7394284 Vorbach Jul 2008 B2
7403564 Laksono Jul 2008 B2
7450640 Kim et al. Nov 2008 B2
7499491 Lee et al. Mar 2009 B2
7548586 Mimar Jun 2009 B1
7548596 Yen et al. Jun 2009 B2
7551671 Tyldesley et al. Jun 2009 B2
7565077 Rai et al. Jul 2009 B2
7581076 Vorbach Aug 2009 B2
7581182 Herz Aug 2009 B1
7630097 Kodama et al. Dec 2009 B2
7689000 Kazama Mar 2010 B2
7693219 Yan Apr 2010 B2
7720311 Sriram May 2010 B1
7721069 Ramchandran et al. May 2010 B2
7792194 Zhong et al. Sep 2010 B2
7924923 Lee et al. Apr 2011 B2
7996827 Vorbach et al. Aug 2011 B2
8009923 Li et al. Aug 2011 B2
8369402 Kobayashi et al. Feb 2013 B2
8442334 Drugeon et al. May 2013 B2
8660182 Zhong et al. Feb 2014 B2
8660380 Bulusu et al. Feb 2014 B2
8666166 Bulusu et al. Mar 2014 B2
8666181 Venkatapuram et al. Mar 2014 B2
8724702 Bulusu et al. May 2014 B1
8731071 Kimura May 2014 B1
8756482 Goel Jun 2014 B2
8873625 Goel Oct 2014 B2
20010020941 Reynolds Sep 2001 A1
20010024448 Takase et al. Sep 2001 A1
20010028353 Cheng Oct 2001 A1
20010028354 Cheng et al. Oct 2001 A1
20020015445 Hashimoto Feb 2002 A1
20020015513 Ando et al. Feb 2002 A1
20020025001 Ismaeil et al. Feb 2002 A1
20020041626 Yoshioka et al. Apr 2002 A1
20020109790 Mackinnon Aug 2002 A1
20020114394 Ma Aug 2002 A1
20020118743 Jiang Aug 2002 A1
20030020835 Petrescu Jan 2003 A1
20030048361 Safai Mar 2003 A1
20030078952 Kim et al. Apr 2003 A1
20030141434 Ishikawa et al. Jul 2003 A1
20030161400 Dinerstein et al. Aug 2003 A1
20040056864 Valmiki Mar 2004 A1
20040057523 Koto et al. Mar 2004 A1
20040095998 Luo et al. May 2004 A1
20040100466 Deering May 2004 A1
20040150841 Lieberman et al. Aug 2004 A1
20040156435 Itoh et al. Aug 2004 A1
20040174998 Youatt et al. Sep 2004 A1
20040181564 MacInnis et al. Sep 2004 A1
20040181800 Rakib et al. Sep 2004 A1
20040190613 Zhu et al. Sep 2004 A1
20040190617 Shen et al. Sep 2004 A1
20040202245 Murakami et al. Oct 2004 A1
20040213348 Kim et al. Oct 2004 A1
20040218626 Tyldesley et al. Nov 2004 A1
20040218675 Kim et al. Nov 2004 A1
20040228415 Wang Nov 2004 A1
20040257434 Davis et al. Dec 2004 A1
20040268088 Lippincott et al. Dec 2004 A1
20050008254 Ouchi et al. Jan 2005 A1
20050033788 Handley Feb 2005 A1
20050047502 McGowan Mar 2005 A1
20050066205 Holmer Mar 2005 A1
20050079914 Kaido et al. Apr 2005 A1
20050105618 Booth et al. May 2005 A1
20050123040 Bjontegard Jun 2005 A1
20050190976 Todoroki et al. Sep 2005 A1
20050238102 Lee et al. Oct 2005 A1
20050238103 Subramaniyan et al. Oct 2005 A1
20050265447 Park Dec 2005 A1
20050265454 Muthukrishnan et al. Dec 2005 A1
20050276493 Xin et al. Dec 2005 A1
20050281337 Kobayashi et al. Dec 2005 A1
20050286630 Tong et al. Dec 2005 A1
20060002466 Park Jan 2006 A1
20060017802 Yoo et al. Jan 2006 A1
20060056513 Shen et al. Mar 2006 A1
20060056708 Shen et al. Mar 2006 A1
20060109910 Nagarajan May 2006 A1
20060115001 Wang et al. Jun 2006 A1
20060133501 Lee et al. Jun 2006 A1
20060133506 Dang Jun 2006 A1
20060176299 Subbalakshmi et al. Aug 2006 A1
20060176962 Arimura et al. Aug 2006 A1
20060203916 Chandramouly et al. Sep 2006 A1
20060291563 Park et al. Dec 2006 A1
20070002945 Kim Jan 2007 A1
20070002950 Yang Jan 2007 A1
20070036215 Pan et al. Feb 2007 A1
20070070080 Graham et al. Mar 2007 A1
20070133689 Park et al. Jun 2007 A1
20070171981 Qi Jul 2007 A1
20070217506 Yang et al. Sep 2007 A1
20070230564 Chen et al. Oct 2007 A1
20070274389 Kim et al. Nov 2007 A1
20070286284 Ito et al. Dec 2007 A1
20070286508 Le Leannec et al. Dec 2007 A1
20080069203 Karczewicz et al. Mar 2008 A1
20080117214 Perani et al. May 2008 A1
20080137726 Chatterjee et al. Jun 2008 A1
20080151997 Oguz Jun 2008 A1
20080285444 Diab et al. Nov 2008 A1
20080291209 Sureka et al. Nov 2008 A1
20080310509 Goel Dec 2008 A1
20090060277 Zhang et al. Mar 2009 A1
20090086827 Wu et al. Apr 2009 A1
20090116549 Shen et al. May 2009 A1
20090122864 Palfner et al. May 2009 A1
20090161763 Rossignol et al. Jun 2009 A1
20090196350 Xiong Aug 2009 A1
20090268974 Takagi Oct 2009 A1
20100034268 Kusakabe et al. Feb 2010 A1
20100118943 Shiodera et al. May 2010 A1
20100128797 Dey May 2010 A1
20130170553 Chen et al. Jul 2013 A1
20130294507 Song et al. Nov 2013 A1
20150195522 Li Jul 2015 A1
Foreign Referenced Citations (16)
Number Date Country
1489391 Apr 2004 CN
1283640 Feb 2003 EP
1283640 Dec 2003 EP
2348559 Apr 2000 GB
2348559 Oct 2000 GB
04162893 Jun 1992 JP
11-96138 Apr 1999 JP
2001184323 Jul 2001 JP
2005192232 Jul 2005 JP
2005354686 Dec 2005 JP
2006287315 Oct 2006 JP
WO 9827742 Jun 1998 WO
0233650 Apr 2002 WO
2005001625 Jan 2005 WO
2005096168 Oct 2005 WO
2006085137 Aug 2006 WO
Non-Patent Literature Citations (9)
Entry
The Merriam-Webster Dictionary, 2005 ed. Springfield, MA: Merriam-Webster Inc., 2005.
Seongmo Park, Seongmin Kim, Igkyun Kim, Kyungjin Byun, Jin Jong Cha, and Hanjin Cho, "A Single-Chip Video/Audio Codec for Low Bit Rate Application," ETRI Journal, vol. 22, No. 1, Mar. 2000, pp. 20-29.
Tung-Chien Chen; Yu-Wen Huang; Liang-Gee Chen, "Analysis and design of macroblock pipelining for H.264/AVC VLSI architecture," Circuits and Systems, 2004. ISCAS '04. Proceedings of the 2004 International Symposium on, vol. 2, pp. II-273-6, May 23-26, 2004.
Iwasaki, I.; Naganuma, J.; Nitta, K.; Nakamura, K.; Yoshitome, T.; Ogura, M.; Nakajima, Y.; Tashiro, Y.; Onishi, T.; Ikeda, M.; Endo, M., “Single-chip MPEG-2 422P@HL CODEC LSI with multi-chip configuration for large scale processing beyond HDTV level,” Design, Automation and Test in Europe Conference and Exhibition, Mar. 2003.
Mizuno, M. et al.; "A 1.5-W single-chip MPEG-2 MP@ML video encoder with low power motion estimation and clocking," Solid-State Circuits, IEEE Journal of, vol. 32, No. 11, pp. 1807-1816, Nov. 1997.
Shih-Hao Wang et al.; "A platform-based MPEG-4 advanced video coding (AVC) decoder with block level pipelining," Information, Communications and Signal Processing, 2003 and the Fourth Pacific Rim Conference on Multimedia. Proceedings of the 2003 Joint Conference of the Fourth International Conference on, vol. 1, pp. 51-55, Dec. 2003.
Tu, C., Liang, J., and Tran, T. "Adaptive Runlength Coding," IEEE Signal Processing Letters, vol. 10, No. 3, pp. 61-64, Mar. 2003.
National Semiconductor Corp., “USBN9603/4—Increased Data Transfer Rate Using Ping-Pong Buffering,” Application Note 1222, Mar. 2002, Revision 1.0, Texas Instruments, Literature No. SNOA417.
Jong, et al., “Accuracy Improvement and Cost Reduction of 3-Step Search Block Matching Algorithm for Video Coding”, Feb. 1, 1994, IEEE Transaction on Circuits and Systems for Video Technology, vol. 4 No. 1, pp. 88-90, XP000439487.
Provisional Applications (1)
Number Date Country
60463017 Apr 2003 US