Rewind-enabled hardware encoder

Information

  • Patent Grant
    8,923,385
  • Date Filed
    Thursday, May 1, 2008
  • Date Issued
    Tuesday, December 30, 2014
  • CPC
    • H04N19/00521
    • H04N19/00309
    • H04N19/00781
    • H04N19/00278
  • US Classifications (Field of Search)
    • 375/240.01
    • 375/240.03
    • 375/240.12
    • 375/240.16
    • 375/240.18
    • 382/173
    • 382/232
    • 382/239
    • 382/240
    • 382/248
    • 382/260
  • International Classifications
    • G06K9/36
    • H04N19/436
    • H04N19/184
    • H04N19/61
    • H04N19/176
  • Term Extension
    1583 days
Abstract
Described herein are a number of approaches for implementing a video encoder with hardware-enabled rewind functionality. In several embodiments, rewind functionality can be implemented in hardware, in a manner which allows the transform engine of the encoder to reprocess video data without requesting data from other stages in the encoder. Such rewind functionality is useful in implementing some video standards, such as the H.264 standard, in a pipeline architecture. In one embodiment, a method of encoding video data is described, which involves obtaining a first portion of video data from a first location in a buffer, and performing an encoding operation on it. A second portion of video data is obtained from a second location in the buffer, and encoding operations begin on the second portion. The first portion of video data can be retrieved from the first location, in order to reprocess it if necessary.
Description
FIELD OF THE INVENTION

The present invention is generally related to encoding digital video data.


BACKGROUND

The continuing spread of digital media has led to a proliferation of video encoding standards, such as MPEG-4, H.263, H.264, DIVX, and XVID. These video standards attempt to balance compression of raw data and quality of video playback. Most video compression techniques use temporal and spatial prediction to compress raw video streams. However, each of the standards calls for different specific operations.


In addition to the proliferation of competing video standards, more devices are being marketed which include video encoding or decoding functionality. The manufacturers of these devices must decide which video standards to support, which requires balancing the costs associated with supporting a given video standard against the value added by supporting that standard.


Typically, support for a video standard can be implemented in one of two ways: either via software, or via specialized hardware. Software implementations require that the processor in the device perform all of the encoding or decoding operations, which can be a computationally expensive task, and often cannot be performed in real time by a general-purpose processor. Hardware implementations typically require a completely separate encoder for each video standard supported, with the associated expenses of developing, manufacturing, and powering the related hardware.


SUMMARY

Described herein are a number of approaches for implementing a video encoder with hardware-enabled rewind functionality. In several embodiments, rewind functionality can be implemented in hardware, in a manner which allows the transform engine of the encoder to reprocess video data without requesting data from other stages in the encoder. Such rewind functionality is useful in implementing some video standards, such as the H.264 standard, in a pipeline architecture. In one embodiment, a method of encoding video data is described, which involves obtaining a first portion of video data from a first location in a buffer, and performing an encoding operation on it. A second portion of video data is obtained from a second location in the buffer, and encoding operations begin on the second portion. The first portion of video data can be retrieved from the first location, in order to reprocess it if necessary.


Another embodiment describes a system for encoding video data, which includes a transform buffer for storing processed macroblocks, a transform engine for transforming the processed macroblocks into quantized macroblocks, and a rewind control module for causing the transform engine to reprocess one of the processed macroblocks.


A further embodiment describes a handheld computer system device, which includes a system memory, a central processing unit (CPU), and a graphics processing unit (GPU). The GPU includes an encoder for encoding video data, which is configured to obtain a first portion of video data from a first location in a buffer, and perform an encoding operation on it. The encoder is further configured to obtain a second portion of video data from a second location in the buffer, and begin performing encoding operations on it. The encoder is also configured to retrieve the first portion of video data from the first location in the buffer, in order to reprocess the first portion of video data, as needed.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the Figures of the accompanying drawings and in which like reference numerals refer to similar elements.



FIG. 1 depicts a block diagram of a computer system in accordance with one embodiment of the present invention.



FIG. 2 depicts a block diagram of a video encoder, in accordance with one embodiment.



FIG. 3 depicts a block diagram of a multistandard video encoder, in accordance with one embodiment.



FIG. 4 depicts a flowchart of a method of video encoding, in accordance with one embodiment.



FIG. 5 depicts a block diagram of an encoder with hardware-enabled rewind functionality, in accordance with one embodiment.



FIG. 6 depicts a flowchart of a method of rewind-enabled hardware encoding, in accordance with one embodiment.





DETAILED DESCRIPTION

Reference will now be made in detail to several embodiments of the invention. While the invention will be described in conjunction with the alternative embodiment(s), it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications, and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims.


Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. However, it will be recognized by one skilled in the art that embodiments may be practiced without these specific details or with equivalents thereof. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects and features of the subject matter.


Portions of the detailed description that follows are presented and discussed in terms of a method. Although steps and sequencing thereof are disclosed in figures herein (e.g., FIG. 4) describing the operations of this method, such steps and sequencing are exemplary. Embodiments are well suited to performing various other steps or variations of the steps recited in the flowcharts of the figures herein, and in a sequence other than that depicted and described herein.


Some portions of the detailed description are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits that can be performed on computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer-executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.


It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout, discussions utilizing terms such as “accessing,” “writing,” “including,” “storing,” “transmitting,” “traversing,” “associating,” “identifying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.


Computing devices typically include at least some form of computer readable media. Computer readable media can be any available media that can be accessed by a computing device. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computing device. Communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.


Some embodiments may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.


Although embodiments described herein may make reference to a CPU and a GPU as discrete components of a computer system, those skilled in the art will recognize that a CPU and a GPU can be integrated into a single device, and a CPU and GPU may share various resources such as instruction logic, buffers, functional units and so on; or separate resources may be provided for graphics and general-purpose operations. Accordingly, any or all of the circuits and/or functionality described herein as being associated with a GPU could also be implemented in and performed by a suitably configured CPU.


Further, while embodiments described herein may make reference to a GPU, it is to be understood that the circuits and/or functionality described herein could also be implemented in other types of processors, such as general-purpose or other special-purpose coprocessors, or within a CPU.


Basic Computing System


Referring now to FIG. 1, a block diagram of an exemplary computer system 112 is shown. It is appreciated that computer system 112 described herein illustrates an exemplary configuration of an operational platform upon which embodiments may be implemented to advantage. Nevertheless, other computer systems with differing configurations can also be used in place of computer system 112 within the scope of the present invention. That is, computer system 112 can include elements other than those described in conjunction with FIG. 1. Moreover, embodiments may be practiced on any system which can be configured to support them, not just computer systems like computer system 112, and it is understood that such systems can take many different forms. System 112 can be implemented as, for example, a desktop computer system or server computer system having a powerful general-purpose CPU coupled to a dedicated graphics rendering GPU. In such an embodiment, components can be included that add peripheral buses, specialized audio/video components, IO devices, and the like. Similarly, system 112 can be implemented as a handheld device (e.g., cellphone, etc.) or a set-top video game console device such as, for example, the Xbox®, available from Microsoft Corporation of Redmond, Wash., or the PlayStation3®, available from Sony Computer Entertainment Corporation of Tokyo, Japan. System 112 can also be implemented as a “system on a chip”, where the electronics (e.g., the components 101, 103, 105, 106, and the like) of a computing device are wholly contained within a single integrated circuit die. Examples include a hand-held instrument with a display, a car navigation system, a portable entertainment system, and the like.


Computer system 112 comprises an address/data bus 100 for communicating information; a central processor 101 coupled with bus 100 for processing information and instructions; a volatile memory unit 102 (e.g., random access memory [RAM], static RAM, dynamic RAM, etc.) coupled with bus 100 for storing information and instructions for central processor 101; and a non-volatile memory unit 103 (e.g., read only memory [ROM], programmable ROM, flash memory, etc.) coupled with bus 100 for storing static information and instructions for processor 101. Moreover, computer system 112 also comprises a data storage device 104 (e.g., hard disk drive) for storing information and instructions.


Computer system 112 also comprises an optional graphics subsystem 105, an optional alphanumeric input device 106, an optional cursor control or directing device 107, and signal communication interface (input/output device) 108. Optional alphanumeric input device 106 can communicate information and command selections to central processor 101. Optional cursor control or directing device 107 is coupled to bus 100 for communicating user input information and command selections to central processor 101. Signal communication interface (input/output device) 108, which is also coupled to bus 100, can be a serial port. Communication interface 108 may also include wireless communication mechanisms. Using communication interface 108, computer system 112 can be communicatively coupled to other computer systems over a communication network such as the Internet or an intranet (e.g., a local area network), or can receive data (e.g., a digital television signal). Computer system 112 may also comprise graphics subsystem 105 for presenting information to the computer user, e.g., by displaying information on an attached display device 110, connected by a video cable 111. In some embodiments, graphics subsystem 105 is incorporated into central processor 101. In other embodiments, graphics subsystem 105 is a separate, discrete component. In other embodiments, graphics subsystem 105 is incorporated into another component. In other embodiments, graphics subsystem 105 is included in system 112 in other ways.


Multistandard Video Encoder


The embodiments detailed herein describe a multistandard encoder, where expensive redundant elements can be shared across different video standards. In some embodiments, for example, buffers between stages in the encoding pipeline can be used regardless of the video standard being used, while standard-specific hardware data paths are used to perform the necessary manipulation of the data stored in these buffers. In this way, these embodiments eliminate the need to duplicate the expensive buffers across separate hardware encoders for each supported video standard. Embodiments utilizing this approach require fewer hardware elements to implement, are more modular in design such that support for a given standard is easier to add or remove, and require less power than the traditional approach of completely separate hardware encoders for every video standard.


Moreover, some of the embodiments described herein describe a rewind-enabled hardware encoder. Several modern video standards, such as H.264, describe a “rewind” functionality, where data can be reprocessed under a number of different circumstances. In these embodiments, multiple buffers are used to store data after it has been processed by the transform engine in an encoder, in order to allow the data to be easily reprocessed.


One embodiment described herein combines the functionality detailed above, to create a multistandard encoder which supports hardware rewind. This embodiment offers the advantages of multistandard hardware video encoding, in combination with the processing time advantage of hardware-enabled rewind, to support the goal of real-time encoding.


Encoder Architecture


With reference now to FIG. 2, a block diagram of encoder 200 is depicted, in accordance with one embodiment of the present invention. While encoder 200 is shown as incorporating specific, enumerated features, elements, and arrangements, it is understood that embodiments are well suited to applications involving additional, fewer, or different features, elements, or arrangements.


Encoder 200, in the depicted embodiment, is representative of a typical hardware encoder for a video standard using temporal and spatial prediction to compress raw video streams. Raw video data is placed in memory 210. Motion search module 220 retrieves the raw video data and processes it, often in macroblocks of 16×16 pixels. Each processed macroblock is loaded into transform buffer 225. Transform engine 230 retrieves the processed macroblock from transform buffer 225, performs additional operations, and outputs data to quantization buffer 235. Entropy encoder 240 takes the data from quantization buffer 235, and outputs an encoded bitstream.


Buffers, such as transform buffer 225 and quantization buffer 235, are used in encoding to increase hardware efficiency. Buffers allow the various encoding stages to work simultaneously and relatively independently of the other stages. For example, rather than requiring motion search module 220 to wait for transform engine 230 to complete operations, motion search module 220 loads a completed macroblock into transform buffer 225, and begins processing the next macroblock.
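
By way of example, and not limitation, the pipeline of FIG. 2 can be pictured as the following C sketch. All names and types here are illustrative placeholders rather than anything taken from the patent or from real encoder hardware, and each stage body is a stub standing in for the actual processing.

```c
#include <stdint.h>
#include <stdio.h>

#define MB_SIZE (16 * 16)   /* samples in one 16x16 macroblock */

typedef struct { uint8_t pix[MB_SIZE]; }  Macroblock;   /* processed macroblock */
typedef struct { int16_t coef[MB_SIZE]; } QuantizedMB;  /* quantized macroblock */

/* Buffers between stages (cf. transform buffer 225 and quantization buffer
 * 235) let each stage hand off its output and move on to the next block. */
static Macroblock  transform_buf;
static QuantizedMB quant_buf;

static void motion_search(const uint8_t *raw, Macroblock *out) {
    for (int i = 0; i < MB_SIZE; i++) out->pix[i] = raw[i];           /* stub */
}

static void transform_engine(const Macroblock *in, QuantizedMB *out) {
    for (int i = 0; i < MB_SIZE; i++) out->coef[i] = in->pix[i] >> 1; /* stub */
}

static size_t entropy_encode(const QuantizedMB *in, uint8_t *bits) {
    size_t n = 0;                              /* stub: emit nonzero values */
    for (int i = 0; i < MB_SIZE; i++)
        if (in->coef[i]) bits[n++] = (uint8_t)in->coef[i];
    return n;
}

int main(void) {
    uint8_t raw[MB_SIZE] = {0}, bitstream[MB_SIZE];
    motion_search(raw, &transform_buf);                           /* stage 1 */
    transform_engine(&transform_buf, &quant_buf);                 /* stage 2 */
    printf("%zu bytes\n", entropy_encode(&quant_buf, bitstream)); /* stage 3 */
    return 0;
}
```

In hardware, of course, the three stages run concurrently; the buffers are what make that decoupling possible.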


Multistandard Encoder with Shared Buffers


With reference now to FIG. 3, a block diagram of multistandard encoder 300 is depicted, in accordance with one embodiment. While encoder 300 is shown as incorporating specific, enumerated features, elements, and arrangements, it is understood that embodiments are well suited to applications involving additional, fewer, or different features, elements, or arrangements.


The depicted embodiment shows a portion of a multistandard encoder, to illustrate the approach used therein. As with encoder 200, motion search module 320 processes macroblocks, and outputs them to transform buffers 325. Transform engine 330 retrieves the macroblocks from transform buffers 325, processes them, and outputs quantized macroblock data to quantization buffers 335. Entropy encoder 340 retrieves the quantized macroblock data, and uses it to produce an encoded bitstream.


In this embodiment, transform buffers 325 include source data buffer 326, prediction data buffer 327, and input parameter buffer 328. Motion search module 320, in this embodiment, populates these buffers. Source data buffer 326 stores the raw video pixels of the current macroblock. Prediction data buffer 327 stores the predicted video pixels generated for the current macroblock by motion search module 320, which transform engine 330 will use when processing macroblock information from source data buffer 326. Input parameter buffer 328 stores parameters of the current macroblock, such as motion vectors, quantization parameters, etc., which are used by transform engine 330 in determining how to process macroblock information, e.g., what bit rate the video should be encoded at.


In this embodiment, quantization buffers 335 include quantization data buffer 336, and output parameter buffer 337. Quantization data buffer 336 is used to store quantized macroblock pixels or coefficients produced by transform engine 330, and used by entropy encoder 340. Output parameter buffer 337 is used to pass encoding parameters to entropy encoder 340, for use in processing the quantized macroblock information.
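
By way of illustration, the shared buffers of FIG. 3 might be modeled with the following C structures. The field names and the particular parameters shown are assumptions drawn from the description above, not the actual hardware layout.

```c
#include <stdint.h>

#define MB_SIZE (16 * 16)

/* Transform buffers 325, populated by motion search module 320. */
typedef struct {
    uint8_t source[MB_SIZE];      /* source data buffer 326: raw pixels       */
    uint8_t prediction[MB_SIZE];  /* prediction data buffer 327: predicted px */
    struct {                      /* input parameter buffer 328               */
        int16_t mv_x, mv_y;       /*   motion vector (illustrative)           */
        uint8_t qp;               /*   quantization parameter (illustrative)  */
    } params;
} TransformBuffers;

/* Quantization buffers 335, populated by transform engine 330. */
typedef struct {
    int16_t coeffs[MB_SIZE];      /* quantization data buffer 336             */
    struct {                      /* output parameter buffer 337              */
        uint8_t qp;               /*   parameters for entropy encoder 340     */
    } params;
} QuantizationBuffers;
```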


In the depicted embodiment, transform engine 330 includes a number of standard-specific datapaths, e.g., MPEG-4 transform datapath 331, H.263 transform datapath 332, and H.264 transform datapath 333. In different embodiments, different, fewer, or additional video standards may be supported by inclusion of different, fewer, or additional hardware datapaths.


Under this approach, buffers can be shared between different hardware datapaths, e.g., both the MPEG-4 and H.264 transform datapaths can read from the same set of transform buffers 325, and write to the same set of quantization buffers 335. In some embodiments, the encoder can be instructed, e.g., by driver software executing on a processor, as to which video standard to use when encoding the raw video data. This instruction, in turn, will determine which transform datapath is used by transform engine 330 when encoding data. Similarly, motion search module 320 and/or entropy encoder 340 may include several hardware datapaths, in order to support and select between multiple video standards.
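
A rough sketch of this selection, reusing the illustrative buffer types above: each standard-specific datapath reads the same shared transform buffers and writes the same shared quantization buffers, and only the datapath invoked differs. The enum, function names, and function-pointer dispatch are assumptions made for illustration; in the encoder itself the selection is a hardware matter.

```c
typedef enum { STD_MPEG4, STD_H263, STD_H264 } VideoStandard;

/* One function per hardware datapath (cf. 331, 332, and 333). */
typedef void (*TransformDatapath)(const TransformBuffers *in,
                                  QuantizationBuffers *out);

void mpeg4_datapath(const TransformBuffers *in, QuantizationBuffers *out);
void h263_datapath (const TransformBuffers *in, QuantizationBuffers *out);
void h264_datapath (const TransformBuffers *in, QuantizationBuffers *out);

/* The standard chosen by the driver determines which datapath runs. */
TransformDatapath select_datapath(VideoStandard std) {
    switch (std) {
    case STD_MPEG4: return mpeg4_datapath;
    case STD_H263:  return h263_datapath;
    case STD_H264:  return h264_datapath;
    }
    return 0;  /* unsupported standard */
}
```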


Method of Video Encoding


With reference now to FIG. 4, a flowchart 400 of a method of video encoding is depicted, in accordance with one embodiment. Although specific steps are disclosed in flowchart 400, such steps are exemplary. That is, embodiments of the present invention are well suited to performing various other (additional) steps or variations of the steps recited in flowchart 400. It is appreciated that the steps in flowchart 400 may be performed in an order different than presented, and that not all of the steps in flowchart 400 may be performed.


With reference to step 410, a driver instructs a processor to encode video data. In some embodiments, a graphics processor or GPU is utilized, incorporating an encoder such as that described in FIG. 3; in other embodiments, other implementations are utilized. The encoder is instructed to encode video data, e.g., by driver software executing on a processor.


With reference now to step 415, the driver provides a context for encoding video frame data. In some embodiments, as previously discussed, the encoder may be capable of encoding video data in accordance with a number of different video encoding standards. In one such embodiment, the driver software instructs the encoder as to which video standard to use in encoding the video data. In one such embodiment, the encoder supports changing the encoding standard on a frame-by-frame basis.
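
For illustration only, such a per-frame context might be modeled as follows, reusing the VideoStandard enum from the earlier sketch; the structure and setter are hypothetical, not a real driver API.

```c
/* Encoding context supplied by the driver for each frame (step 415). */
typedef struct {
    VideoStandard standard;  /* selects the transform datapath for this frame */
    int           bitrate;   /* illustrative additional parameter             */
} EncodeContext;

/* Because the context is supplied per frame, a multistandard encoder can
 * switch encoding standards on a frame-by-frame basis. */
void set_frame_context(EncodeContext *ctx, VideoStandard std, int bitrate) {
    ctx->standard = std;
    ctx->bitrate  = bitrate;
}
```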


With reference now to step 420, a motion search module obtains and processes raw video data. In some embodiments, a motion search module performs some encoding tasks. In several such embodiments, the motion search module may be configured to perform different tasks, depending upon the video standard specified in step 415.


With reference now to step 425, a motion search module loads processed video data into shared transform buffers. In these embodiments, a single set of transform buffers is shared by a number of different encoding datapaths. Regardless of which video standard is specified, the motion search module outputs processed video data to the same shared transform buffers.


For example, with reference to FIG. 3, motion search module 320 obtains raw video data from memory, and performs tasks related to encoding the raw video data. Motion search module 320 outputs processed macroblocks to transform buffers 325.


With reference now to step 430, a transform engine selects an appropriate transform datapath. As discussed previously, several embodiments incorporate hardware support for multiple video encoding standards, and include multiple hardware datapaths in the encoder. Depending upon the video standard specified in step 415, an appropriate hardware transform datapath may be selected. Moreover, in some embodiments, software encoding may be supported for several video standards; in such an embodiment, software instructions executing on a processor may be utilized during the encoding process. These embodiments allow for expandability in supported video encoding standards, particularly for standards which are computationally less demanding.


With reference now to step 435, the transform engine passes data from the shared transform buffers through the selected datapath. In different embodiments, and depending upon the selected video standard, different operations may be performed by the selected transform datapath.


With reference now to step 440, the transform engine loads the output from the transform datapath into shared quantization buffers. In some embodiments, the output from the transform datapath consists of quantized macroblock information, e.g., quantized coefficients. This quantized macroblock information can be loaded into shared quantization buffers.


Continuing the preceding example, transform engine 330 selects the appropriate transform datapath for the desired video standard, e.g., MPEG-4 transform datapath 331 is used if the video is to be encoded using the MPEG-4 standard, or H.264 transform datapath 333 may be selected for H.264 video encoding. The selected transform datapath is connected to source data buffer 326, prediction data buffer 327, and input parameter buffer 328. The data is processed in accordance with the selected video standard, and output to quantization data buffer 336 and output parameter buffer 337.
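
Continuing the sketches above, steps 430 through 440 amount to selecting a datapath and running the shared buffers through it. The transform_stage() function below is a hypothetical composition of the earlier illustrative pieces, not a function named in the patent.

```c
/* Steps 430-440: pick the datapath for the frame's standard, pass the
 * shared transform buffers through it, and deposit the output in the
 * shared quantization buffers. */
void transform_stage(const EncodeContext *ctx,
                     const TransformBuffers *tb,
                     QuantizationBuffers *qb) {
    TransformDatapath dp = select_datapath(ctx->standard);
    if (dp)
        dp(tb, qb);  /* e.g., h264_datapath when encoding H.264 */
}
```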


With reference now to step 445, an entropy encoder processes data from the shared quantization buffers. In some embodiments, an entropy encoder is used to further process video data during the encoding process. The operations performed by the entropy encoder may vary, depending upon the embodiment and the selected video standard. As with the motion search module and the transform engine, the entropy encoder may include multiple hardware datapaths, to support multiple video standards. Also as with the motion search module and the transform engine, the entropy encoder may use software instructions executing on a processor to support a video encoding standard. The shared quantization buffers are accessible to the various datapaths included in the entropy encoder.


With reference now to step 450, the entropy encoder outputs an encoded bit stream. In some embodiments, the entropy encoder outputs a packetized bit stream, which may be written to memory, to a buffer, and/or output to a display.


Hardware-Enabled Rewind Functionality


With reference now to FIG. 5, a block diagram of an encoder 500 is depicted, in accordance with one embodiment. Encoder 500 provides hardware support for a rewind operation, as specified in a number of video standards, including the H.264 standard. While encoder 500 is shown as incorporating specific, enumerated features, elements, and arrangements, it is understood that embodiments are well suited to applications involving additional, fewer, or different features, elements, or arrangements.


As with FIG. 3, FIG. 5 depicts a portion of an encoder, such as may be incorporated into a graphics processor. As in encoders 200 and 300, motion search module 520 processes macroblocks, and outputs them to transform buffers 525. In the depicted embodiment, the various transform buffers 525, such as source data buffer 526, prediction data buffer 527, and input parameter buffer 528, can store data associated with multiple macroblocks; each of these buffers can store three macroblocks' worth of data. These additional entries can be used to retain data associated with a previously processed macroblock. As such, when H.264 transform engine 530 is processing macroblock n, data associated with macroblock n−1 is still stored in the transform buffers, while motion search module 520 is writing data associated with macroblock n+1 into the transform buffers. This allows support for macroblock rewind, which can aid in implementing the H.264 video standard in a macroblock processing pipeline, in such a way that the transform engine can perform the rewind function without requesting data from the motion search module.
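
One way to picture these three-entry buffers is as a small ring of macroblock slots. The indexing scheme below is an assumption made for illustration, reusing the TransformBuffers type sketched earlier; the patent does not specify how the buffer entries are addressed.

```c
#define SLOTS 3  /* data for macroblocks n-1, n, and n+1 held at once */

typedef struct {
    TransformBuffers slot[SLOTS];
    int write_idx;  /* slot motion search is filling (macroblock n+1)      */
    int read_idx;   /* slot the transform engine is reading (macroblock n) */
} RewindableTransformBuffers;

/* Normal forward progress: the transform engine moves to the next slot. */
void advance_read(RewindableTransformBuffers *b) {
    b->read_idx = (b->read_idx + 1) % SLOTS;
}

/* Rewind: point the transform engine back at macroblock n-1, whose data
 * is still resident, without requesting anything from the motion search
 * module. */
void rewind_read(RewindableTransformBuffers *b) {
    b->read_idx = (b->read_idx + SLOTS - 1) % SLOTS;
}
```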


Transform engine 530 is shown as incorporating forward transform module 531, inverse transform module 533, and reconstructed frame buffer 534. For the H.264 standard, as with a number of other video standards, the operations performed by this collection of modules are standardized, though the organization and naming of modules may vary across different embodiments. Forward transform module 531 loads data into quantization buffers 535, where entropy encoder 540 can retrieve it.


In order to implement some video standards, such as H.264, in a macroblock pipeline architecture, rewind functionality is utilized, such that the entropy encoder can reject a processed macroblock. Such rejection typically occurs for one of two reasons. If the processed macroblock data, as produced by the transform datapath, is larger than the unprocessed macroblock data, the entropy encoder will report an IPCM error. If the processed macroblock data does not fit in the current video data packet, the entropy encoder will return a bit-based error. If both of these conditions occur, the entropy encoder will report both errors.
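
These two conditions can be summarized as a pair of flags. The names and the size comparisons below are illustrative; the patent does not specify how the entropy encoder measures either condition internally.

```c
#include <stddef.h>

enum { ERR_NONE = 0, ERR_IPCM = 1, ERR_BIT = 2 };

/* Error flags the entropy encoder would report for a processed macroblock;
 * both flags may be set at once. */
int check_macroblock(size_t processed_bits, size_t raw_bits,
                     size_t packet_space_bits) {
    int err = ERR_NONE;
    if (processed_bits > raw_bits)           /* compression grew the data */
        err |= ERR_IPCM;
    if (processed_bits > packet_space_bits)  /* does not fit this packet  */
        err |= ERR_BIT;
    return err;
}
```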


Depending upon the configuration of the encoder, as well as the video standard being utilized, the transform engine may react in a number of different ways to these errors. In one embodiment, the transform engine will respond to an IPCM error by sending the unprocessed video data instead, rather than passing the data through the forward transform module. In another embodiment, the transform engine may reprocess the data, using a different set of parameters, to attempt to produce acceptable processed macroblock data. In some embodiments, the transform engine responds to a bit-based error by reprocessing the data for the rejected macroblock. In one embodiment, the transform engine responds to the combination of an IPCM error and a bit-based error by responding as per an IPCM error.


Encoder 500, in the depicted embodiment, includes rewind control module 590. Rewind control module 590 receives the rewind signal from entropy encoder 540. In some embodiments, entropy encoder 540 outputs a rewind signal for every macroblock processed; in other embodiments, entropy encoder 540 might only output a rewind signal when a macroblock is rejected. If a rewind condition occurs, rewind control module 590 utilizes the control functionality present in each of the transform buffers 525 to alter which buffers transform engine 530 is accessing, e.g., by selecting the buffers corresponding to the rejected macroblock.


In some embodiments, the rewind signal is also passed to driver software (not pictured) which controls encoder 500. In one such embodiment, the driver software instructs the transform engine to stop processing its current macroblock, and to process the macroblock in the currently-designated buffers, e.g., the buffers associated with the rejected macroblock. For example, if macroblock n−1 was rejected by entropy encoder 540, the driver would instruct the transform engine to stop processing macroblock n. Rewind control module 590 would alter the pointers for transform buffers 525 to point to the buffers containing data for macroblock n−1, and the driver software would instruct H.264 transform engine 530 to reprocess the data. If only a bit-based error was reported by entropy encoder 540, the macroblock would be reprocessed with the original parameters. If an IPCM error was reported, the unprocessed macroblock data would be written to quantization buffers 535.
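
Putting these behaviors together, the rewind handling might be sketched as follows. The functions stop_transform_engine(), run_transform_engine(), and write_raw_macroblock() are hypothetical driver/hardware hooks introduced only for this sketch, and the error flags and buffer types come from the earlier illustrative fragments.

```c
/* Hypothetical hooks into the encoder hardware. */
void stop_transform_engine(void);
void run_transform_engine(const TransformBuffers *in, QuantizationBuffers *out);
void write_raw_macroblock(const TransformBuffers *in, QuantizationBuffers *out);

/* React to a rewind signal from the entropy encoder, per the behavior
 * described above for encoder 500. */
void on_rewind_signal(RewindableTransformBuffers *bufs,
                      QuantizationBuffers *qb, int err) {
    if (err == ERR_NONE)
        return;                   /* macroblock n-1 was accepted */

    stop_transform_engine();      /* abandon work on macroblock n */
    rewind_read(bufs);            /* point back at macroblock n-1 */

    if (err & ERR_IPCM)
        /* IPCM error (alone or with a bit-based error): send the raw,
         * unprocessed data rather than re-running the forward transform. */
        write_raw_macroblock(&bufs->slot[bufs->read_idx], qb);
    else
        /* Bit-based error only: reprocess with the original parameters so
         * the macroblock can go into the next video data packet. */
        run_transform_engine(&bufs->slot[bufs->read_idx], qb);
}
```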


Method of Rewind-Enabled Encoding


With reference now to FIG. 6, a flowchart 600 of a method of rewind-enabled hardware encoding is depicted, in accordance with one embodiment. Although specific steps are disclosed in flowchart 600, such steps are exemplary. That is, embodiments of the present invention are well suited to performing various other (additional) steps or variations of the steps recited in flowchart 600. It is appreciated that the steps in flowchart 600 may be performed in an order different than presented, and that not all of the steps in flowchart 600 may be performed.


With reference to step 610, a transform engine processes a first macroblock. As previously discussed, the steps performed in conjunction with processing macroblock data may vary, across different video encoding standards and different embodiments.


With reference now to step 615, the transform engine writes the processed first macroblock to the quantization buffers and the reconstructed frame buffer. As with step 610, the specific buffers involved, as well as the format and type of data involved, may vary across different video encoding standards and different embodiments.


With reference now to step 620, the transform engine begins processing a second macroblock. As noted earlier, one advantage of including buffers between modules is to enable them to operate independently, and hence more efficiently. The transform engine is not forced to wait for the entropy encoder to accept the first macroblock, before beginning work on the second.


With reference now to step 622, if the entropy encoder detects an error, it sends a rewind signal indicating the nature of the error. The entropy encoder may routinely send a signal, providing status information regarding the processing of macroblock data, and including a status flag to indicate any errors; alternatively, the entropy encoder may only send a signal when an error occurs.


With reference now to step 624, the transform engine stops processing the second macroblock. In many video standards, the processing of a macroblock depends upon how the preceding macroblocks were processed, such that it may not be possible to complete the processing of the second macroblock if the first was rejected, since the first may change during reprocessing. In different embodiments, different actions may be involved in this step. For example, the software driver controlling the encoder may instruct the transform engine to cease processing; alternatively, a hardware rewind control module may be able to stop the transform engine, in response to a rewind signal from the entropy encoder.


With reference now to step 626, the transform engine reads from the buffers associated with the first macroblock. In different embodiments, this step may be accomplished in different ways. In one embodiment, for example, the software driver may force a reload of the necessary data into the transform buffers. In another embodiment, such as that of FIG. 5, the data for the first macroblock is still available, and a rewind control module directs the transform engine to the appropriate buffers.


With reference now to step 630, the transform engine reprocesses the first macroblock. In different embodiments, different error types may result in different actions.


With reference to step 632, if the rewind signal was the result of an IPCM error (or both an IPCM error and a bit-based error), the processed data produced by the transform engine was unacceptably large, e.g., larger than the unprocessed data. In one embodiment, the transform engine provides the unprocessed data instead. In another embodiment, the transform engine may reprocess the first macroblock, using different input parameters to attempt to produce an acceptable output.


With reference to step 634, if the rewind signal was the result of a bit-based error, the current video data packet being prepared by the entropy encoder cannot include the processed first macroblock data. The first macroblock should be reprocessed, such that it can be included in the next video data packet.


With reference now to step 635, the reprocessed first macroblock is written to the quantization buffers.


With reference now to step 640, the transform engine begins processing the second macroblock. In some embodiments, the transform engine may be able to resume processing from a partially-processed state. In most embodiments, however, the processing of the second macroblock depends upon the first one, such that changes in how the first macroblock was processed will result in changes to how the second macroblock is processed.
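
The whole of flowchart 600 can be traced through the earlier sketches as follows. poll_entropy_encoder() is one more hypothetical hook, and the strictly sequential ordering here is a simplification of what would be concurrent hardware stages.

```c
int poll_entropy_encoder(void);  /* hypothetical: returns ERR_* flags */

void encode_with_rewind(RewindableTransformBuffers *bufs,
                        QuantizationBuffers *qb) {
    /* Steps 610-615: process the first macroblock and write it out. */
    run_transform_engine(&bufs->slot[bufs->read_idx], qb);

    /* Step 620: begin the second macroblock without waiting for the
     * entropy encoder to accept the first. */
    advance_read(bufs);
    run_transform_engine(&bufs->slot[bufs->read_idx], qb);

    /* Step 622: the entropy encoder reports any error on the first block. */
    int err = poll_entropy_encoder();
    if (err != ERR_NONE) {
        /* Steps 624-635: stop, rewind to the first macroblock, reprocess
         * it (or substitute the raw data), and write it out again. */
        on_rewind_signal(bufs, qb, err);

        /* Step 640: return to the second macroblock and start it over,
         * since its processing depends on how the first was encoded. */
        advance_read(bufs);
        run_transform_engine(&bufs->slot[bufs->read_idx], qb);
    }
}
```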


Multistandard Rewind-Enabled Architecture


In some embodiments, multistandard video encoding support, such as previously described, can be combined with the hardware-enabled rewind functionality just described. In one such embodiment, the shared buffers include the multiple entries and control functionality necessary to enable the rewind function, as well as including the rewind signaling in the entropy encoder and the rewind control module.


Embodiments such as these provide the advantages of multistandard video encoding support, where redundant hardware can be limited and support for individual encoding standards can be more readily added or removed. These embodiments also provide hardware support for the rewind functionality described in several video encoding standards, which is helpful in attempting to provide real-time encoding for standards such as H.264. Those video standards which do not require a hardware rewind are not affected by including support for those standards which do.


Embodiments of the present invention are thus described. While the present invention has been described in particular embodiments, it should be appreciated that the present invention should not be construed as limited by such embodiments, but rather construed according to the following claims.

Claims
  • 1. A method of encoding video data, comprising: storing a first portion of video data at a first location in a buffer; accessing said first portion of video data from said first location in said buffer; performing an encoding operation on said first portion of video data; obtaining a second portion of video data from a second location in said buffer; beginning said encoding operation on said second portion of video data; and retrieving said first portion of video data from said first location in said buffer; re-performing said encoding operation on said first portion of video data retrieved from said first location, wherein said performing said encoding operation on said first portion of video data further comprises generating a first quantized portion of video data; writing said first quantized portion of video data to a quantization buffer; passing said first quantized portion of video data through an entropy encoder; and receiving a rewind signal from said entropy encoder.
  • 2. The method of claim 1, wherein said first portion of video data and said second portion of video data comprise macroblocks.
  • 3. The method of claim 1, wherein said encoding operation comprises a forward transform operation.
  • 4. The method of claim 1, wherein said retrieving and reprocessing are performed in response to said rewind signal.
  • 5. The method of claim 1, wherein said rewind signal comprises an error signal indicating a bit-based encoding error.
  • 6. The method of claim 1, wherein said rewind signal comprises an error signal indicating an intra pulse code modulation (IPCM) encoding error.
  • 7. The method of claim 1, further comprising: restarting said encoding operation on said second portion of video data.
  • 8. A system for encoding video data, comprising: a transform buffer for storing a plurality of processed macroblocks; a transform engine, coupled to said transform buffer, for transforming said plurality of processed macroblocks into a plurality of quantized macroblocks, wherein said transform engine is further configured to: write said first quantized portion of video data to a quantization buffer; pass said first quantized portion of video data through an entropy encoder; and receive a rewind signal from said entropy encoder; and a rewind control module, coupled to said transform buffer, for causing said transform engine to retrieve one of said plurality of processed macroblocks stored in said transform buffer and transformed into a quantized macroblock, and re-transform said retrieved processed macroblock into a re-quantized macroblock.
  • 9. The system of claim 8, wherein said rewind control module is operable to cause said transform engine to retrieve said one of said plurality of processed macroblocks from a specific location in said transform buffer.
  • 10. The system of claim 8, further comprising: a motion search module, for processing a plurality of raw video macroblocks into said plurality of processed macroblocks.
  • 11. The system of claim 8, further comprising: said entropy encoder, coupled to said transform engine, for encoding said plurality of quantized macroblocks.
  • 12. The system of claim 11, wherein said entropy encoder is operable to send a rewind signal to said rewind control module.
  • 13. The system of claim 8, wherein said system is operable to encode said video data in accordance with a version of the H.264 video compression standard.
  • 14. The system of claim 8, wherein said transform engine comprises: a plurality of transform datapaths operable to encode said video data in accordance with a plurality of video compression standards.
  • 15. A handheld computer system device, comprising: a system memory; a central processing unit (CPU) communicatively coupled to said system memory; and a graphics processing unit (GPU) communicatively coupled to said CPU, wherein said GPU includes an encoder for encoding video data, and wherein said encoder is configured to: store a first portion of video data at a first location in a buffer; access said first portion of video data from said first location in said buffer; perform an encoding operation on said first portion of video data; obtain a second portion of video data from a second location in said buffer; begin said encoding operation on said second portion of video data; and retrieve said first portion of video data from said first location in said buffer; re-perform said encoding operation on said first portion of video data, wherein said encoder is further configured to perform said encoding operation on said first portion of video data by generating a first quantized portion of video data; write said first quantized portion of video data to a quantization buffer; pass said first quantized portion of video data through an entropy encoder; and receive a rewind signal from said entropy encoder.
  • 16. The handheld computer system device of claim 15, wherein said retrieving and reprocessing are performed in response to said rewind signal.
US Referenced Citations (382)
Number Name Date Kind
3091657 Stuessel May 1963 A
3614740 Delagi et al. Oct 1971 A
3987291 Gooding et al. Oct 1976 A
4101960 Stokes et al. Jul 1978 A
4208810 Rohner et al. Jun 1980 A
4541046 Nagashima et al. Sep 1985 A
4566005 Apperley et al. Jan 1986 A
4748585 Chiarulli et al. May 1988 A
4897717 Hamilton et al. Jan 1990 A
4918626 Watkins et al. Apr 1990 A
4958303 Assarpour et al. Sep 1990 A
4965716 Sweeney Oct 1990 A
4965751 Thayer et al. Oct 1990 A
4985848 Pfeiffer et al. Jan 1991 A
5040109 Bowhill et al. Aug 1991 A
5047975 Patti et al. Sep 1991 A
5081594 Horsley Jan 1992 A
5175828 Hall et al. Dec 1992 A
5179530 Genusov et al. Jan 1993 A
5197130 Chen et al. Mar 1993 A
5210834 Zurawski et al. May 1993 A
5263136 DeAguiar et al. Nov 1993 A
5287438 Kelleher Feb 1994 A
5313287 Barton May 1994 A
5327369 Ashkenazi Jul 1994 A
5357623 Megory-Cohen Oct 1994 A
5375223 Meyers et al. Dec 1994 A
5388206 Poulton et al. Feb 1995 A
5388245 Wong Feb 1995 A
5418973 Ellis et al. May 1995 A
5421029 Yoshida May 1995 A
5430841 Tannenbaum et al. Jul 1995 A
5430884 Beard et al. Jul 1995 A
5432898 Curb et al. Jul 1995 A
5432905 Hsieh et al. Jul 1995 A
5446836 Lentz et al. Aug 1995 A
5452104 Lee Sep 1995 A
5452412 Johnson, Jr. et al. Sep 1995 A
5483258 Cornett et al. Jan 1996 A
5517666 Ohtani et al. May 1996 A
5522080 Harney May 1996 A
5543935 Harrington Aug 1996 A
5560030 Guttag et al. Sep 1996 A
5561808 Kuma et al. Oct 1996 A
5570463 Dao Oct 1996 A
5574944 Stager Nov 1996 A
5594854 Baldwin et al. Jan 1997 A
5610657 Zhang Mar 1997 A
5623692 Priem et al. Apr 1997 A
5627988 Oldfield May 1997 A
5633297 Valko et al. May 1997 A
5644753 Ebrahim et al. Jul 1997 A
5649173 Lentz Jul 1997 A
5664162 Dye Sep 1997 A
5666169 Ohki et al. Sep 1997 A
5682552 Kuboki et al. Oct 1997 A
5682554 Harrell Oct 1997 A
5706478 Dye Jan 1998 A
5708511 Gandhi et al. Jan 1998 A
5754191 Mills et al. May 1998 A
5761476 Martell Jun 1998 A
5764243 Baldwin Jun 1998 A
5784590 Cohen et al. Jul 1998 A
5784640 Asghar et al. Jul 1998 A
5796974 Goddard et al. Aug 1998 A
5802574 Atallah et al. Sep 1998 A
5809524 Singh et al. Sep 1998 A
5812147 Van Hook et al. Sep 1998 A
5815162 Levine Sep 1998 A
5835740 Wise et al. Nov 1998 A
5835788 Blumer et al. Nov 1998 A
5848192 Smith et al. Dec 1998 A
5848254 Hagersten Dec 1998 A
5854631 Akeley et al. Dec 1998 A
5854637 Sturges Dec 1998 A
5872902 Kuchkuda et al. Feb 1999 A
5920352 Inoue Jul 1999 A
5925124 Hilgendorf et al. Jul 1999 A
5940090 Wilde Aug 1999 A
5940858 Green Aug 1999 A
5949410 Fung Sep 1999 A
5950012 Shiell et al. Sep 1999 A
5977987 Duluk, Jr. Nov 1999 A
5978838 Mohamed et al. Nov 1999 A
5999199 Larson Dec 1999 A
6009454 Dummermuth Dec 1999 A
6016474 Kim et al. Jan 2000 A
6028608 Jenkins Feb 2000 A
6034699 Wong et al. Mar 2000 A
6041399 Terada et al. Mar 2000 A
6049672 Shiell et al. Apr 2000 A
6072500 Foran et al. Jun 2000 A
6073158 Nally et al. Jun 2000 A
6092094 Ireton Jul 2000 A
6104407 Aleksic et al. Aug 2000 A
6104417 Nielsen et al. Aug 2000 A
6108766 Hahn et al. Aug 2000 A
6112019 Chamdani et al. Aug 2000 A
6115049 Winner et al. Sep 2000 A
6118394 Onaya Sep 2000 A
6128000 Jouppi et al. Oct 2000 A
6131152 Ang et al. Oct 2000 A
6137918 Harrington et al. Oct 2000 A
6141740 Mahalingaiah et al. Oct 2000 A
6144392 Rogers Nov 2000 A
6150610 Sutton Nov 2000 A
6160557 Narayanaswami Dec 2000 A
6188394 Morein et al. Feb 2001 B1
6189068 Witt et al. Feb 2001 B1
6192073 Reader et al. Feb 2001 B1
6192458 Arimilli et al. Feb 2001 B1
6201545 Wong et al. Mar 2001 B1
6204859 Jouppi et al. Mar 2001 B1
6208361 Gossett Mar 2001 B1
6209078 Chiang et al. Mar 2001 B1
6219070 Baker et al. Apr 2001 B1
6222552 Haas et al. Apr 2001 B1
6230254 Senter et al. May 2001 B1
6239810 Van Hook et al. May 2001 B1
6247094 Kumar et al. Jun 2001 B1
6249853 Porterfield Jun 2001 B1
6252610 Hussain Jun 2001 B1
6259460 Gossett et al. Jul 2001 B1
6292886 Makineni et al. Sep 2001 B1
6301299 Sita et al. Oct 2001 B1
6301600 Petro et al. Oct 2001 B1
6314493 Luick Nov 2001 B1
6317819 Morton Nov 2001 B1
6323874 Gossett Nov 2001 B1
6351808 Joy et al. Feb 2002 B1
6359623 Larson Mar 2002 B1
6362819 Dalal et al. Mar 2002 B1
6366289 Johns Apr 2002 B1
6370617 Lu et al. Apr 2002 B1
6429877 Stroyan Aug 2002 B1
6437780 Baltaretu et al. Aug 2002 B1
6437789 Tidwell et al. Aug 2002 B1
6438664 McGrath et al. Aug 2002 B1
6452595 Montrym et al. Sep 2002 B1
6469707 Voorhies Oct 2002 B1
6480205 Greene et al. Nov 2002 B1
6480927 Bauman Nov 2002 B1
6490654 Wickeraad et al. Dec 2002 B2
6496902 Faanes et al. Dec 2002 B1
6499090 Hill et al. Dec 2002 B1
6501564 Schramm et al. Dec 2002 B1
6504542 Voorhies et al. Jan 2003 B1
6522329 Ihara et al. Feb 2003 B1
6525737 Duluk, Jr. et al. Feb 2003 B1
6529201 Ault et al. Mar 2003 B1
6529207 Landau et al. Mar 2003 B1
6587506 Noridomi et al. Jul 2003 B1
6597357 Thomas Jul 2003 B1
6603481 Kawai et al. Aug 2003 B1
6606093 Gossett et al. Aug 2003 B1
6611272 Hussain et al. Aug 2003 B1
6614444 Duluk, Jr. et al. Sep 2003 B1
6614448 Garlick et al. Sep 2003 B1
6624818 Mantor et al. Sep 2003 B1
6624823 Deering Sep 2003 B2
6629188 Minkin et al. Sep 2003 B1
6631423 Brown et al. Oct 2003 B1
6631463 Floyd et al. Oct 2003 B1
6633197 Sutardja Oct 2003 B1
6633297 McCormack et al. Oct 2003 B2
6646639 Greene et al. Nov 2003 B1
6657635 Hutchins et al. Dec 2003 B1
6658447 Cota-Robles Dec 2003 B2
6671000 Cloutier Dec 2003 B1
6674841 Johns et al. Jan 2004 B1
6693637 Koneru et al. Feb 2004 B2
6693639 Duluk, Jr. et al. Feb 2004 B2
6697063 Zhu Feb 2004 B1
6700588 MacInnis et al. Mar 2004 B1
6715035 Colglazier et al. Mar 2004 B1
6717576 Duluk, Jr. et al. Apr 2004 B1
6717578 Deering Apr 2004 B1
6732242 Hill et al. May 2004 B2
6734861 Van Dyke et al. May 2004 B1
6741247 Fenney May 2004 B1
6747057 Ruzafa et al. Jun 2004 B2
6765575 Voorhies et al. Jul 2004 B1
6778177 Furtner Aug 2004 B1
6788301 Thrasher Sep 2004 B2
6798410 Redshaw et al. Sep 2004 B1
6803916 Ramani et al. Oct 2004 B2
6809732 Zatz et al. Oct 2004 B2
6812929 Lavelle et al. Nov 2004 B2
6819332 Baldwin Nov 2004 B2
6825843 Allen et al. Nov 2004 B2
6825848 Fu et al. Nov 2004 B1
6833835 van Vugt Dec 2004 B1
6839062 Aronson et al. Jan 2005 B2
6862027 Andrews et al. Mar 2005 B2
6891543 Wyatt May 2005 B2
6906716 Moreton et al. Jun 2005 B2
6915385 Leasure et al. Jul 2005 B1
6938176 Alben et al. Aug 2005 B1
6940514 Wasserman et al. Sep 2005 B1
6944744 Ahmed et al. Sep 2005 B2
6947057 Nelson et al. Sep 2005 B2
6952214 Naegle et al. Oct 2005 B2
6956579 Diard et al. Oct 2005 B1
6961057 Van Dyke et al. Nov 2005 B1
6965982 Nemawarkar Nov 2005 B2
6975324 Valmiki et al. Dec 2005 B1
6976126 Clegg et al. Dec 2005 B2
6978149 Morelli et al. Dec 2005 B1
6978317 Anantha et al. Dec 2005 B2
6978457 Johl et al. Dec 2005 B1
6981106 Bauman et al. Dec 2005 B1
6985151 Bastos et al. Jan 2006 B1
7002591 Leather et al. Feb 2006 B1
7009607 Lindholm et al. Mar 2006 B2
7009615 Kilgard et al. Mar 2006 B1
7015909 Morgan, III et al. Mar 2006 B1
7031330 Bianchini, Jr. Apr 2006 B1
7032097 Alexander et al. Apr 2006 B2
7035979 Azevedo et al. Apr 2006 B2
7061495 Leather Jun 2006 B1
7064771 Jouppi et al. Jun 2006 B1
7075542 Leather Jul 2006 B1
7081902 Crow et al. Jul 2006 B1
7119809 McCabe Oct 2006 B1
7126600 Fowler et al. Oct 2006 B1
7148888 Huang Dec 2006 B2
7151544 Emberling Dec 2006 B2
7154066 Talwar et al. Dec 2006 B2
7154500 Heng et al. Dec 2006 B2
7158148 Toji et al. Jan 2007 B2
7159212 Schenk et al. Jan 2007 B2
7170515 Zhu Jan 2007 B1
7184040 Tzvetkov Feb 2007 B1
7185178 Barreh et al. Feb 2007 B1
7202872 Paltashev et al. Apr 2007 B2
7224364 Yue et al. May 2007 B1
7260677 Vartti et al. Aug 2007 B1
7305540 Trivedi et al. Dec 2007 B1
7307628 Goodman et al. Dec 2007 B1
7307638 Leather et al. Dec 2007 B2
7321787 Kim Jan 2008 B2
7334110 Faanes et al. Feb 2008 B1
7369815 Kang et al. May 2008 B2
7373478 Yamazaki May 2008 B2
7382368 Molnar et al. Jun 2008 B1
7406698 Richardson Jul 2008 B2
7412570 Moll et al. Aug 2008 B2
7453466 Hux et al. Nov 2008 B2
7483029 Crow et al. Jan 2009 B2
7486290 Kilgariff et al. Feb 2009 B1
7487305 Hill et al. Feb 2009 B2
7493452 Eichenberger et al. Feb 2009 B2
7545381 Huang et al. Jun 2009 B2
7548996 Baker et al. Jun 2009 B2
7551174 Iourcha et al. Jun 2009 B2
7564460 Boland et al. Jul 2009 B2
7633506 Leather et al. Dec 2009 B1
7634637 Lindholm et al. Dec 2009 B1
7714747 Fallon May 2010 B2
7750913 Parenteau et al. Jul 2010 B1
7777748 Bakalash et al. Aug 2010 B2
7791617 Crow et al. Sep 2010 B2
7852341 Rouet et al. Dec 2010 B1
7869835 Zu Jan 2011 B1
7957465 Dei et al. Jun 2011 B2
7965902 Zelinka et al. Jun 2011 B1
8020169 Yamasaki Sep 2011 B2
8063903 Vignon et al. Nov 2011 B2
20010005209 Lindholm et al. Jun 2001 A1
20010026647 Morita Oct 2001 A1
20010028747 Sato et al. Oct 2001 A1
20010031092 Zeck et al. Oct 2001 A1
20010055336 Krause et al. Dec 2001 A1
20020050979 Oberoi et al. May 2002 A1
20020061184 Miyamoto May 2002 A1
20020097241 McCormack et al. Jul 2002 A1
20020114402 Doetsch et al. Aug 2002 A1
20020116595 Morton Aug 2002 A1
20020130863 Baldwin Sep 2002 A1
20020130874 Baldwin Sep 2002 A1
20020140655 Liang et al. Oct 2002 A1
20020144061 Faanes et al. Oct 2002 A1
20020158885 Brokenshire et al. Oct 2002 A1
20020194430 Cho Dec 2002 A1
20020196251 Duluk, Jr. et al. Dec 2002 A1
20030001847 Doyle et al. Jan 2003 A1
20030003943 Bajikar Jan 2003 A1
20030014457 Desai et al. Jan 2003 A1
20030016217 Vlachos et al. Jan 2003 A1
20030016844 Numaoka Jan 2003 A1
20030031258 Wang et al. Feb 2003 A1
20030067468 Duluk, Jr. et al. Apr 2003 A1
20030067473 Taylor et al. Apr 2003 A1
20030076325 Thrasher Apr 2003 A1
20030122815 Deering Jul 2003 A1
20030163589 Bunce et al. Aug 2003 A1
20030172326 Coffin, III et al. Sep 2003 A1
20030188118 Jackson Oct 2003 A1
20030194116 Wong et al. Oct 2003 A1
20030201994 Taylor et al. Oct 2003 A1
20030204673 Venkumahanti et al. Oct 2003 A1
20030204680 Hardage, Jr. Oct 2003 A1
20030227461 Hux et al. Dec 2003 A1
20030228063 Nakayama et al. Dec 2003 A1
20040012597 Zatz et al. Jan 2004 A1
20040049379 Thumpudi et al. Mar 2004 A1
20040073771 Chen et al. Apr 2004 A1
20040073773 Demjanenko Apr 2004 A1
20040085313 Moreton et al. May 2004 A1
20040103253 Kamei et al. May 2004 A1
20040130552 Duluk, Jr. et al. Jul 2004 A1
20040183801 Deering Sep 2004 A1
20040193837 Devaney et al. Sep 2004 A1
20040196285 Rice et al. Oct 2004 A1
20040205326 Sindagi et al. Oct 2004 A1
20040207642 Crisu et al. Oct 2004 A1
20040212730 MacInnis et al. Oct 2004 A1
20040215887 Starke Oct 2004 A1
20040221117 Shelor Nov 2004 A1
20040246251 Fenney et al. Dec 2004 A1
20040263519 Andrews et al. Dec 2004 A1
20050002454 Ueno et al. Jan 2005 A1
20050012759 Valmiki et al. Jan 2005 A1
20050024369 Xie Feb 2005 A1
20050030314 Dawson Feb 2005 A1
20050041037 Dawson Feb 2005 A1
20050066148 Luick Mar 2005 A1
20050071722 Biles Mar 2005 A1
20050088448 Hussain et al. Apr 2005 A1
20050094729 Yuan et al. May 2005 A1
20050122338 Hong et al. Jun 2005 A1
20050123057 MacInnis et al. Jun 2005 A1
20050134588 Aila et al. Jun 2005 A1
20050134603 Iourcha et al. Jun 2005 A1
20050147166 Shibata et al. Jul 2005 A1
20050147308 Kitamura Jul 2005 A1
20050179698 Vijayakumar et al. Aug 2005 A1
20050239518 D'Agostino et al. Oct 2005 A1
20050259100 Teruyama Nov 2005 A1
20050262332 Rappoport et al. Nov 2005 A1
20050280652 Hutchins et al. Dec 2005 A1
20060020843 Frodsham et al. Jan 2006 A1
20060034369 Mohsenian Feb 2006 A1
20060044317 Bourd et al. Mar 2006 A1
20060045359 Chen et al. Mar 2006 A1
20060064517 Oliver Mar 2006 A1
20060064547 Kottapalli et al. Mar 2006 A1
20060103659 Karandikar et al. May 2006 A1
20060152519 Hutchins et al. Jul 2006 A1
20060152520 Gadre et al. Jul 2006 A1
20060170690 Leather Aug 2006 A1
20060176308 Karandikar et al. Aug 2006 A1
20060176309 Gadre et al. Aug 2006 A1
20060203005 Hunter Sep 2006 A1
20060245001 Lee et al. Nov 2006 A1
20060267981 Naoi Nov 2006 A1
20070002946 Bouton et al. Jan 2007 A1
20070025441 Ugur et al. Feb 2007 A1
20070076010 Swamy et al. Apr 2007 A1
20070130444 Mitu et al. Jun 2007 A1
20070139440 Crow et al. Jun 2007 A1
20070153907 Mehta et al. Jul 2007 A1
20070253491 Ito et al. Nov 2007 A1
20070268298 Alben et al. Nov 2007 A1
20070273689 Tsao Nov 2007 A1
20070285427 Morein et al. Dec 2007 A1
20070286280 Saigo et al. Dec 2007 A1
20070286289 Arai et al. Dec 2007 A1
20070296725 Steiner et al. Dec 2007 A1
20080016327 Menon et al. Jan 2008 A1
20080024497 Crow et al. Jan 2008 A1
20080024522 Crow et al. Jan 2008 A1
20080100618 Woo et al. May 2008 A1
20080181522 Hosaka et al. Jul 2008 A1
20080226187 Sato Sep 2008 A1
20080273218 Kitora et al. Nov 2008 A1
20080278509 Washizu et al. Nov 2008 A1
20090102686 Fukuhara et al. Apr 2009 A1
20090154557 Zhao et al. Jun 2009 A1
20090168899 Schlanger et al. Jul 2009 A1
20090235051 Codrescu et al. Sep 2009 A1
20120023149 Kinsman et al. Jan 2012 A1
Foreign Referenced Citations (19)
Number Date Country
101093578 Dec 2007 CN
29606102 Apr 1996 DE
06-180758 Jun 1994 JP
07-101885 Apr 1995 JP
H0877347 Mar 1996 JP
08-153032 Jun 1996 JP
09-287217 Nov 1997 JP
H09325759 Dec 1997 JP
10-134198 May 1998 JP
11-190447 Jul 1999 JP
11-195132 Jul 1999 JP
2000148695 May 2000 JP
2003-178294 Jun 2003 JP
2005-182547 Jul 2005 JP
100262453 Aug 2000 KR
413766 Dec 2000 TW
436710 May 2001 TW
442734 Jun 2001 TW
0013145 Mar 2000 WO
Non-Patent Literature Citations (62)
Entry
“Alpha Testing State”; http://msdn.microsoft.com/library/en-us/directx9—c/directx/graphics/programmingguide/GettingStarted/Direct3Kdevices/States/renderstates/alphatestingstate.asp Mar. 25, 2005.
“Anti-aliasing”; http://en.wikipedia.org/wiki/Anti-aliasing; Mar. 27, 2006.
“Vertex Fog”; http://msdn.microsoft.com/library/en-us/directx9—cVertex—fog.asp?frame=true Mar. 27, 2006.
A Hardware Assisted Design Rule Check Architecture, Larry Seiler, Jan. 1982, Proceedings of the 19th Conference on Design Automation, DAC '82, Publisher: IEEE Press.
A Parallel Algorithm for Polygon Rasterization, Juan Pineda, Jun. 1988, ACM.
A VLSI Architecture for Updating Raster-Scan Displays, Satish Gupta, Robert F. Sproull, Ivan E. Sutherland, Aug. 1981, ACM SIGGRAPH Computer Graphics, Proceedings of the 8th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '81, vol. 15, Publisher: ACM Press.
Blythe, OpenGL section 3.4.1, Basic Line Segment Rasterization, Mar. 29, 1997, pp. 1-3.
Boyer, et al.; “Discrete Analysis for Antialiased Lines,” Eurographics 2000; 3 Pages.
Brown, Brian; “Data Structure And Number Systems”; 2000; http://www.ibilce.unesp.br/courseware/datas/data3.htm.
Crow; "The Use of Grayscale for Improved Raster Display of Vectors and Characters"; University of Texas, Austin, Texas; work supported by the National Science Foundation under Grants MCS 76-83889; pp. 1-5; ACM Press, 1978.
Definition of “block” from FOLDOC, http://foldoc.org/index.cgi?block, Sep. 23, 2004.
Definition of “first-in first-out” from FOLDOC, http://foldoc.org/index.cgi?query=fifo&action=Search, Dec. 6, 1999.
Definition of “queue” from Free on-Line Dictionary of Computing (FOLDOC), http://foldoc.org/index.cgi?query=queue&action=Search, May 15, 2007.
Definition of “Slot,” http://www.thefreedictionary.com/slot, Oct. 2, 2012 .
Dictionary of Computers, Information Processing & Technology, 2nd Edition, 1984.
Duca et al., A Relational Debugging Engine for the Graphics Pipeline, International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH 2005, pp. 453-463, ISSN: 0730-0301.
Fisher, Joseph A., Very Long Instruction Word Architectures and the ELI-512, ACM, 1983, pp. 140-150.
FOLDOC (Free On-Line Dictionary of Computing), definition of X86, Feb. 27, 2004.
FOLDOC, definition of “frame buffer”, from foldoc.org/index.cgi?query=frame+buffer&action=Search, Oct. 3, 1997.
FOLDOC, definition of “motherboard”, from foldoc.org/index.cgi?query=motherboard&action=Search, Aug. 10, 2000.
FOLDOC, definition of “separate compilation”, from foldoc.org/index.cgi?query=separate+compilation&action=Search, Feb. 19, 2005.
FOLDOC, definition of “superscalar”, http://foldoc.org/, Jun. 22, 2009.
FOLDOC, definition of “vector processor”, http://foldoc.org/, Sep. 11, 2003.
FOLDOC, definition of Pentium, Sep. 30, 2003.
FOLDOC, Free Online Dictionary of Computing, definition of SIMD, foldoc.org/index.cgi?query=simd&action=Search, Nov. 4, 1994.
Foley, J., "Computer Graphics: Principles and Practice", 1987, Addison-Wesley Publishing, 2nd Edition, pp. 545-546.
Free On-Line Dictionary of Computing (FOLDOC), definition of “video”, from foldoc.org/index.cgi?query=video&action=Search, May 23, 2008.
Fuchs; "Fast Spheres, Shadows, Textures, Transparencies, and Image Enhancements in Pixel-Planes"; ACM; 1985; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514.
Gadre, S., Patent Application Entitled “Separately Schedulable Condition Codes for a Video Processor”, U.S. Appl. No. 11/267,793, filed Nov. 4, 2005.
Gadre, S., Patent Application Entitled “Stream Processing in a Video Processor”, U.S. Appl. No. 11/267,599, filed Nov. 4, 2005.
Gadre, S., Patent Application Entitled “Video Processor Having Scalar and Vector Components With Command FIFO for Passing Function Calls From Scalar to Vector”, U.S. Appl. No. 11/267,700, filed Nov. 4, 2005.
gDEBugger, graphicRemedy, http://www.gremedy.com, Aug. 8, 2006.
Graf, Rudolf F., Modern Dictionary of Electronics, Howard W. Sams & Company, 1984, p. 566.
Graf, Rudolf F., Modern Dictionary of Electronics, Howard W. Sams & Company, 1988, p. 273.
Graham, Susan L. et al., Getting Up to Speed: The Future of Supercomputing, The National Academies Press, 2005, glossary.
Graston et al. (Software Pipelining Irregular Loops On the TMS320C6000 VLIW DSP Architecture); Proceedings of the ACM SIGPLAN workshop on Languages, compilers and tools for embedded systems; pp. 138-144; Year of Publication: 2001.
Hamacher, V. Carl et al., Computer Organization, Second Edition, McGraw-Hill, 1984, pp. 1-9.
Heirich; Optimal Automatic Multi-pass Shader Partitioning by Dynamic Programming; Eurographics-Graphics Hardware (2005); Jul. 2005.
HPL-PD: A Parameterized Research Approach; May 31, 2004; http://web.archive.org/web/*/www.trimaran.org/docs/5_hpl-pd.pdf.
Hutchins, E., SC10: A Video Processor and Pixel-Shading GPU for Handheld Devices; presented at the Hot Chips conference on Aug. 23, 2004.
IBM TDB, Device Queue Management, vol. 31 Iss. 10, pp. 45-50, Mar. 1, 1989.
Intel, Intel Architecture Software Developer's Manual, vol. 1: Basic Architecture, 1997, p. 8-1.
Intel, Intel Architecture Software Developer's Manual, vol. 1: Basic Architecture, 1999, pp. 8-1, 9-1.
Intel, Intel MMX Technology at a Glance, Jun. 1997.
Intel, Intel Pentium III Xeon Processor at 500 and 550 MHz, Feb. 1999.
Intel, Pentium Processor Family Developer's Manual, 1997, pp. 2-13.
Intel, Pentium Processor with MMX Technology at 233 MHz Performance Brief, Jan. 1998, pp. 3 and 8.
Rosenberg, Jerry M., Dictionary of Computers, Information Processing & Telecommunications, 2nd Edition, John Wiley & Sons, 1987, pp. 102 and 338.
Merriam-Webster Dictionary Online; Definition for “program”; retrieved Dec. 14, 2010.
Rosenberg, Jerry M., Dictionary of Computers, Information Processing & Telecommunications, 2nd Edition, John Wiley & Sons, 1987, pp. 305.
Woods, J., Nvidia GeForce FX Preview, at http://www.tweak3d.net/reviews/nvidia/nv30preview/1.shtml; dated Nov. 18, 2002; retrieved Jun. 16, 2011.
Wilson, D., Nvidia's Tiny 90nm G71 and G73: GeForce 7900 Debut; at http://www.anandtech.com/show/1967/2; dated Sep. 3, 2006, retrieved Jun. 16, 2011.
NVIDIA Corporation, Technical Brief: Transform and Lighting; dated 1999; month unknown.
Parhami, Behrooz, Computer Arithmetic: Algorithms and Hardware Designs, Oxford University Press, Jun. 2000, pp. 413-418.
Pcreview, article entitled "What is a Motherboard", from www.pcreview.co.uk/articles/Hardware/What_is_a_Motherboard, Nov. 22, 2005.
Quinnell, Richard A., "New DSP Architectures Go 'Post-Harvard' for Higher Performance and Flexibility", TechOnline; posted May 1, 2002.
SearchStorage.com Definitions, "Pipelining Burst Cache," Jul. 31, 2001, url: http://searchstorage.techtarget.com/sDefinition/0,,sid5_gci214414,00.html.
Wikipedia, definition of “scalar processor”, Apr. 4, 2009.
Wikipedia, entry page defining term “SIMD”, last modified Mar. 17, 2007.
Wikipedia, definition of Multiplication, accessed from en.wikipedia.org/w/index.php?title=Multiplication&oldid=1890974, published Oct. 13, 2003.
Wikipedia, definition of “subroutine”, published Nov. 29, 2003, four pages.
Wikipedia, definition of “vector processor”, http://en.wikipedia.org/, May 14, 2007.
Related Publications (1)
Number Date Country
20090273606 A1 Nov 2009 US