GRAPHICS PROCESSING UNIT AND GRAPHICS PROCESSING METHOD

Abstract
A graphics processing unit and a graphics processing method in which hardware for parameter setting provides adaptive video coding. The hardware for parameter setting has a storage unit storing a parameter database. The hardware for parameter setting further performs a graphic analysis on a video received by the graphics processing unit to extract graphics features, and looks up a matching set of parameters in the parameter database in accordance with the graphics features. Video coding hardware within the graphics processing unit then encodes the video based on the matching set of parameters.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority of China Patent Application No. 201410765122.X, filed on Dec. 11, 2014, the entirety of which is incorporated by reference herein.


BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention relates to a graphics processing unit and a graphics processing method, and in particular to high-resolution video recording, compression, and release technologies such as advanced video coding (AVC, i.e., H.264) and high efficiency video coding (HEVC, i.e., H.265).


2. Description of the Related Art


High-resolution video recording, compression, and release technologies such as advanced video coding (AVC, i.e., H.264) and high efficiency video coding (HEVC, i.e., H.265) are widely used in applications such as videoconferencing, video surveillance, consumer electronics, video storage, and video-on-demand (VoD). A dedicated high-resolution video coding and decoding chip is generally used in these applications.


BRIEF SUMMARY OF THE INVENTION

Graphics processing technology with a parameter database for adaptive video coding is introduced in this disclosure.


A graphics processing unit in accordance with an exemplary embodiment of the disclosure comprises hardware for parameter setting and hardware for video coding. The hardware for parameter setting uses a storage unit to store a parameter database, performs a graphic analysis on a video received by the graphics processing unit to extract graphics features, and looks up a set of parameters matching the graphics features in the parameter database. The hardware for video coding encodes the video based on the matching set of parameters.


In another exemplary embodiment, a graphics processing method using the aforementioned hardware for parameter setting and video coding is disclosed.


A detailed description is given in the following embodiments with reference to the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:



FIG. 1 depicts how a graphics processing unit 100 is used in different applications;



FIG. 2 is a block diagram depicting the internal hardware of the graphics processing unit 100 in accordance with an exemplary embodiment of the disclosure;



FIG. 3A is a flowchart depicting a graphics processing method with respect to the architecture of FIG. 2, for adaptive video coding;



FIG. 3B is a flowchart depicting a graphics processing method in accordance with another exemplary embodiment of the disclosure; and



FIG. 4 is a flowchart depicting a parameter training step S308 in accordance with an exemplary embodiment of the disclosure.





DETAILED DESCRIPTION OF THE INVENTION

The following description shows several exemplary embodiments carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.



FIG. 1 depicts how a graphics processing unit 100 is used in different applications. The graphics processing unit 100 may be fabricated by a system-on-chip (SOC) technology for advanced video coding (i.e., H.264) or high efficiency video coding (i.e., HEVC/H.265). The graphics processing unit 100 may retrieve video data from a high-definition multimedia interface (HDMI) 102, a YPbPr input 104, a multiple CVBS input 106, an SCART input 108, an RGB input 110, a hybrid tuner 112, a USB 3.0 input 114, an RJ45 input 116, and so on, and store the encoded and compressed video in a storage device 118. The adaptive graphics processing unit 100 may further recover a high-definition video from the compressed video and display the high-definition video on the display 120.



FIG. 2 is a block diagram depicting the internal hardware of the graphics processing unit 100 in accordance with an exemplary embodiment of the disclosure, which comprises hardware 200 for video coding and hardware 220 for parameter setting.


First, the hardware 200 for video coding is discussed. There are two types of coding algorithms: intra prediction coding and inter prediction coding. Intra prediction coding generates predicted pixels P from the pixels within the current field F(n). In inter prediction coding, the reconstructed field F′(n−1) of the previous field, also known as a reference field, is also taken into account to generate the predicted pixels P. Residual values D(n) are calculated as the difference between the current field F(n) and the predicted pixels P. The residual values D(n) are transformed into transform coefficients C by DCT hardware 202 and quantization hardware 204. The transform coefficients C are then converted into a video coding stream 208 by entropy coding hardware 206. Furthermore, the transform coefficients C are converted by inverse quantization hardware 210 and inverse DCT hardware 212 to generate reconstructed residual values D′(n). The reconstructed residual values D′(n) are added back to the predicted pixels P to form unfiltered reconstructed pixels uF′(n). The unfiltered reconstructed pixels uF′(n) are processed by deblocking filter hardware 214 to produce a reconstructed field F′(n), which serves as the reference field for coding the next field.
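
The forward and reconstruction paths described above can be summarized in software form. The following C++ sketch is illustrative only: the DCT/quantization stage is collapsed into a simple scalar quantizer, entropy coding and deblocking are omitted, and all names and values (kQStep, the sample data, and so on) are hypothetical rather than taken from the hardware 200.

    // Illustrative sketch of the encode/reconstruct loop described above. The
    // real hardware applies a block DCT (202), quantization (204), entropy
    // coding (206), inverse quantization (210), inverse DCT (212) and a
    // deblocking filter (214); here the transform stage is collapsed into a
    // scalar quantizer so the example stays short and runnable.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    constexpr int kQStep = 8;  // hypothetical quantization step (stands in for QP)

    // Forward path: residual -> "transform coefficient" (here: a quantized level).
    int quantize(int residual) { return static_cast<int>(std::lround(residual / double(kQStep))); }
    // Inverse path: level -> reconstructed residual D'(n).
    int dequantize(int level)  { return level * kQStep; }

    int main() {
        // F(n): current field samples; P: predicted pixels from intra or inter prediction.
        std::vector<int> F = {52, 55, 61, 66, 70, 61, 64, 73};
        std::vector<int> P = {50, 50, 60, 60, 70, 60, 60, 70};

        std::vector<int> uF(F.size());  // unfiltered reconstruction uF'(n)
        for (size_t i = 0; i < F.size(); ++i) {
            int Dn     = F[i] - P[i];        // residual D(n)
            int C      = quantize(Dn);       // coefficient passed to entropy coding
            int Dn_rec = dequantize(C);      // D'(n) after inverse quant / inverse DCT
            uF[i]      = P[i] + Dn_rec;      // uF'(n); deblocking then yields F'(n)
            std::printf("pixel %zu: F=%d P=%d D=%d C=%d D'=%d uF'=%d\n",
                        i, F[i], P[i], Dn, C, Dn_rec, uF[i]);
        }
        // uF would be deblock-filtered and kept as the reference field F'(n) for
        // inter prediction of the next field, exactly as a decoder would build it.
        return 0;
    }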


The hardware 200 for video coding encodes a video based on a matching set of parameters that the hardware 220 for parameter setting provides in accordance with the graphics features extracted from the video. The hardware 220 for parameter setting includes hardware 222 for graphic analysis, hardware 224 for matching parameter analysis, hardware 226 for parameter training, and a storage unit 228 storing a parameter database. The hardware 222 for graphic analysis extracts graphics features from a video received by the graphics processing unit 100. Based on the graphics features, the hardware 224 for matching parameter analysis accesses the storage unit 228 to search the parameter database stored therein. When the parameter database contains a set of parameters matching the graphics features extracted from the video, the hardware 224 for matching parameter analysis retrieves that set of parameters as the matching set of parameters and, accordingly, drives the hardware 200 for video coding. When the parameter database does not contain any set of parameters matching the graphics features extracted from the video, the hardware 226 for parameter training is driven to test the video coding efficiency of a variety of parameter combinations, thereby evaluating a matching set of parameters for the graphics features of the video, and stores the matching set of parameters in the parameter database. The matching set of parameters is thus available to the hardware 224 for matching parameter analysis to drive the hardware 200 for video coding.
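
The data flow through the hardware 220 for parameter setting can be modeled in software as follows. This C++ sketch is a hypothetical model only; the class and function names (ParameterDatabase, select_parameters, train_parameters, and so on) and the example values are not taken from the disclosure, and the training step is reduced to a stub.

    // Hypothetical software model of the parameter-setting path: graphic analysis
    // (222) -> matching parameter analysis (224) -> either a database hit (228) or
    // parameter training (226). All names, types and values are illustrative.
    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    using FeatureCategory = std::string;             // e.g. "static_flat"
    struct ParameterSet { int qp; int search_range; };

    class ParameterDatabase {                        // storage unit 228
    public:
        std::optional<ParameterSet> lookup(const FeatureCategory& c) const {
            auto it = table_.find(c);
            if (it == table_.end()) return std::nullopt;
            return it->second;
        }
        void store(const FeatureCategory& c, const ParameterSet& p) { table_[c] = p; }
    private:
        std::map<FeatureCategory, ParameterSet> table_;
    };

    // Stand-in for parameter training hardware 226; the real block sweeps
    // parameter combinations and measures coding efficiency.
    ParameterSet train_parameters(const FeatureCategory&) { return {32, 16}; }

    ParameterSet select_parameters(ParameterDatabase& db, const FeatureCategory& cat) {
        if (auto hit = db.lookup(cat)) return *hit;   // hit: drive the encoder directly
        ParameterSet trained = train_parameters(cat); // miss: evaluate a matching set
        db.store(cat, trained);                       // and record it for later reuse
        return trained;
    }

    int main() {
        ParameterDatabase db;
        db.store("static_flat", {26, 8});
        ParameterSet p = select_parameters(db, "fast_motion_flat");  // miss -> training
        std::cout << "QP=" << p.qp << " search_range=" << p.search_range << "\n";
    }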


The hardware 222 for graphic analysis may analyze the color space data (YUV) of images to extract motion features, complex spatial texture features, and so on. Based on the color space data, an average momentum may be evaluated to represent the motion in the graphics, and a mean absolute difference (MAD, the average pixel difference between the original image and the predicted image) may be evaluated to represent the graphic complexity. In an exemplary embodiment, based on the color space data, the images may be classified as static and flat images, static and complex images, flat images with slow motion, complex images with slow motion, or flat images with fast motion. The different categories of graphics features correspond to different sets of parameters for video coding.
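
A minimal C++ sketch of such a feature extraction is shown below, assuming luma (Y) samples and block motion vectors are available; the MAD measure stands in for graphic complexity, an average motion-vector magnitude stands in for the average momentum, and all thresholds and category labels are hypothetical.

    // Illustrative feature extraction on luma (Y) samples: a mean absolute
    // difference (MAD) between the original and predicted image stands in for
    // spatial/texture complexity, and an average motion-vector magnitude stands
    // in for the average momentum. Thresholds and category labels are made up.
    #include <cmath>
    #include <cstdlib>
    #include <iostream>
    #include <string>
    #include <vector>

    struct MotionVector { int dx, dy; };

    double mad(const std::vector<int>& original, const std::vector<int>& predicted) {
        double sum = 0.0;
        for (size_t i = 0; i < original.size(); ++i)
            sum += std::abs(original[i] - predicted[i]);
        return sum / original.size();
    }

    double average_motion(const std::vector<MotionVector>& mvs) {
        double sum = 0.0;
        for (const auto& mv : mvs) sum += std::hypot(mv.dx, mv.dy);
        return mvs.empty() ? 0.0 : sum / mvs.size();
    }

    // The five categories named in the description; fast complex content is
    // mapped to the nearest listed category for simplicity.
    std::string classify(double motion, double complexity) {
        const bool is_static = motion < 0.5, slow = motion < 4.0;
        const bool flat = complexity < 3.0;
        if (is_static) return flat ? "static_flat" : "static_complex";
        if (slow)      return flat ? "slow_motion_flat" : "slow_motion_complex";
        return "fast_motion_flat";
    }

    int main() {
        std::vector<int> orig = {52, 55, 61, 66}, pred = {50, 54, 62, 60};
        std::vector<MotionVector> mvs = {{1, 0}, {2, 1}, {0, 1}, {1, 1}};
        double m = average_motion(mvs), c = mad(orig, pred);
        std::cout << "motion=" << m << " MAD=" << c
                  << " category=" << classify(m, c) << "\n";
    }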


In addition to analyzing the color space data (YUV) of the current field, the hardware 222 for graphic analysis may compare the current field with the previous field (e.g., by comparing their histograms) for a similarity analysis. The similarity analysis is also named I-frame detection, and it may be used for scene-change detection. The set of parameters to be used in video coding may depend on the scene content.
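
A minimal C++ sketch of such a histogram-based similarity analysis is given below; the bin count, the difference measure, and the decision threshold are assumptions for illustration, not values taken from the disclosure.

    // Illustrative scene-change (I-frame) detection: compare the luma histograms
    // of the current and previous fields with a sum-of-absolute-differences
    // measure. The bin count and threshold are hypothetical.
    #include <array>
    #include <cstdlib>
    #include <iostream>
    #include <vector>

    constexpr int kBins = 16;

    std::array<int, kBins> histogram(const std::vector<int>& luma) {
        std::array<int, kBins> h{};              // zero-initialized bins
        for (int y : luma) h[(y * kBins) / 256]++;
        return h;
    }

    // Returns true when the histogram difference exceeds a fraction of the
    // pixel count, suggesting a scene change (encode the field as an I-frame).
    bool scene_change(const std::vector<int>& cur, const std::vector<int>& prev,
                      double threshold = 0.5) {
        auto hc = histogram(cur), hp = histogram(prev);
        long diff = 0;
        for (int b = 0; b < kBins; ++b) diff += std::abs(hc[b] - hp[b]);
        return diff > threshold * static_cast<long>(cur.size());
    }

    int main() {
        std::vector<int> prev(64, 40);           // dark, flat field
        std::vector<int> cur(64, 200);           // suddenly bright field
        std::cout << (scene_change(cur, prev) ? "scene change -> I-frame\n"
                                              : "same scene\n");
    }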


In an exemplary embodiment, the adaptive parameters include: a frame type, a quantization parameter (abbreviated to QP), a search range, a coding unit size (CU size), a coding unit depth (CU depth), a discrete cosine transform unit size (TU size, for H.265/HEVC applications), a discrete cosine transform unit split depth (TU split depth, for H.265/HEVC applications), and so on. The parameter database in the storage unit 228 may use a lookup table to record the sets of parameters for the different categories of graphics features.
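
One possible in-memory layout for such a lookup table is sketched below in C++; the field names follow the parameters listed above, while the struct layout, the category keys, and the stored values are hypothetical examples rather than values from the disclosure.

    // Hypothetical layout of one database entry and of the lookup table that maps
    // a graphics-feature category to a set of coding parameters (cf. storage unit
    // 228). Field names follow the parameters listed above; values are examples.
    #include <iostream>
    #include <map>
    #include <string>

    enum class FrameType { I, P, B };

    struct CodingParameters {
        FrameType frame_type;
        int qp;              // quantization parameter
        int search_range;    // motion-estimation search range
        int cu_size;         // coding unit size
        int cu_depth;        // coding unit depth
        int tu_size;         // transform unit size
        int tu_split_depth;  // transform unit split depth
    };

    int main() {
        std::map<std::string, CodingParameters> table = {
            {"static_flat",      {FrameType::P, 22,  8, 64, 1, 32, 1}},
            {"fast_motion_flat", {FrameType::P, 30, 64, 32, 3, 16, 2}},
        };
        const CodingParameters& p = table.at("fast_motion_flat");
        std::cout << "QP=" << p.qp << " search=" << p.search_range
                  << " CU=" << p.cu_size << "x" << p.cu_size << "\n";
    }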


For a category of graphics features that has not been matched to any set of parameters stored in the parameter database, the hardware 226 for parameter training may use a variety of parameter combinations to test the video coding efficiency for this category of graphics features and store the set of parameters with the best performance in the parameter database. The video coding efficiency may be evaluated by a PSNR/bitrate evaluation, e.g., a BD-rate (Bjøntegaard Delta rate) measurement. The sets of parameters for the different categories of graphics features may be stored in the storage unit 228 as a lookup table.
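
The C++ sketch below illustrates one way such a coding-efficiency measure could be computed, assuming 8-bit samples; it uses a single-point PSNR and a Lagrangian rate-distortion cost as a simplification of the full PSNR/bitrate (BD-rate) evaluation named above, and the lambda value is hypothetical.

    // Illustrative coding-efficiency measure used to rank parameter combinations.
    // The description names a PSNR/bitrate (BD-rate) evaluation over a full
    // rate-distortion curve; the single-point Lagrangian cost below is a
    // simplification, and the lambda value is hypothetical.
    #include <cmath>
    #include <iostream>
    #include <vector>

    double mse(const std::vector<int>& ref, const std::vector<int>& rec) {
        double sum = 0.0;
        for (size_t i = 0; i < ref.size(); ++i) {
            double d = ref[i] - rec[i];
            sum += d * d;
        }
        return sum / ref.size();
    }

    double psnr(double mean_squared_error) {                 // 8-bit samples
        if (mean_squared_error == 0.0) return 99.0;          // identical images
        return 10.0 * std::log10(255.0 * 255.0 / mean_squared_error);
    }

    // Classic D + lambda*R cost: lower is better. A full BD-rate comparison would
    // instead integrate the gap between two PSNR/bitrate curves.
    double rd_cost(double distortion_mse, double bits, double lambda = 0.05) {
        return distortion_mse + lambda * bits;
    }

    int main() {
        std::vector<int> ref = {52, 55, 61, 66}, rec = {50, 56, 60, 66};
        double d = mse(ref, rec);
        std::cout << "PSNR=" << psnr(d) << " dB, cost=" << rd_cost(d, 1200) << "\n";
    }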



FIG. 3A is a flowchart depicting a graphics processing method with respect to the architecture of FIG. 2, for adaptive video coding. In step S302, the hardware 222 for graphic analysis extracts graphics features. In step S304, based on the graphics features extracted by the hardware 222, the hardware 224 for matching parameter analysis searches the parameter database of the storage unit 228 for a matching set of parameters corresponding to the graphics features. When it is determined in step S306 that the parameter database in the storage unit 228 does not include any set of parameters matching the current graphics features, step S308 is performed, in which the hardware 226 for parameter training varies the combination of parameters to obtain a set of parameters matching the current graphics features. In step S310, the set of parameters matching the current graphics features is stored in the storage unit 228 and recorded in the parameter database. In step S312, the video coding hardware 200 executes video coding in accordance with the set of parameters matching the current graphics features. In step S314, it is checked, e.g., by the hardware 222 for graphic analysis, whether the graphics features have changed. When a change in the graphics features is detected, the flow goes back to step S304.
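
The flow of FIG. 3A can be summarized by the following C++ sketch, with the step numbers above noted as comments; the feature extraction, training routine, and encoder are reduced to hypothetical stubs so that the control flow of steps S302 through S314 stays visible.

    // Sketch of the adaptive loop of FIG. 3A: extract features (S302), consult the
    // database (S304/S306), train and store on a miss (S308/S310), encode (S312),
    // and re-consult only when the features change (S314). All types and functions
    // here are hypothetical stand-ins for the hardware blocks.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    using Frame = std::vector<int>;
    using Category = std::string;
    struct ParameterSet { int qp; };

    Category extract_features(const Frame& f) {            // S302, hardware 222
        long sum = 0;
        for (int y : f) sum += y;
        return (sum / static_cast<long>(f.size())) > 128 ? "bright_flat" : "dark_flat";
    }
    ParameterSet train(const Category&) { return {32}; }    // S308, hardware 226
    void encode(const Frame&, const ParameterSet& p) {      // S312, hardware 200
        std::cout << "encode with QP=" << p.qp << "\n";
    }

    int main() {
        std::map<Category, ParameterSet> database = {{"dark_flat", {26}}};  // 228
        std::vector<Frame> video = {Frame(16, 40), Frame(16, 42), Frame(16, 230)};

        Category current;                                   // starts empty
        ParameterSet params{};
        for (const Frame& f : video) {
            Category cat = extract_features(f);
            if (cat != current) {                           // S314: features changed?
                current = cat;
                auto it = database.find(cat);               // S304
                if (it == database.end()) {                 // S306: no match found
                    params = train(cat);                    // S308
                    database[cat] = params;                 // S310
                } else {
                    params = it->second;
                }
            }
            encode(f, params);                              // S312
        }
    }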


Furthermore, when a matching set of parameters corresponding to the current graphics features is obtained from the parameter database stored in the storage unit 228, steps S308 to S310 are bypassed and step S312 is performed for video coding in accordance with the set of parameters matching the current graphics features.


Furthermore, in some exemplary embodiments, the parameter database is not dynamically updated in response to the user's operations. Instead, the hardware 226 for parameter training is implemented in a factory machine rather than being fabricated in the graphics processing unit 100. FIG. 3B is a flowchart depicting a graphics processing method for such a design in accordance with an exemplary embodiment of the disclosure. Unlike the flowchart of FIG. 3A, when it is determined in step S306 that the parameter database in the storage unit 228 does not include any set of parameters matching the current graphics features, step S322 is performed in the flowchart of FIG. 3B. In step S322, the present parameters continue to be used and are regarded as the matching set of parameters. Then, in step S312, the video coding hardware 200 executes video coding in accordance with the set of parameters that is regarded as the matching set of parameters.



FIG. 4 is a flowchart depicting the parameter training step S308 in accordance with an exemplary embodiment of the disclosure. To build a parameter database in the storage unit 228, a training video with a series of training images may be provided to evaluate matching sets of parameters for several categories of graphics features. In step S402, the training image is updated. In step S404, a graphic analysis is performed on the training image to analyze the color space data (YUV) and extract motion features, complex spatial texture features, and so on, for graphics classification into the following categories: static and flat images, static and complex images, flat images with slow motion, complex images with slow motion, and flat images with fast motion. Furthermore, in some exemplary embodiments, step S404 further performs I-frame detection for scene-change detection. The result of the analysis in step S404 may be used to determine the category of graphics features of the current training image. In step S406, video coding is performed. For example, a default set of parameters is initially used in the video coding of the training image. In step S408, the video coding efficiency is evaluated and collected. In step S410, it is determined whether any possible combination of parameters has not been tested. When a combination of parameters has not yet been tested, step S412 is performed to change the combination of parameters, and step S406 is repeated to execute the video coding with the changed combination of parameters. When it is determined in step S410 that there are no other possible combinations of parameters to be tested, step S414 is performed: the set of parameters with the best video coding efficiency is recorded in the storage unit 228 to build the parameter database and is regarded as the matching set of parameters corresponding to the category of graphics features determined in step S404.
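
The training loop of FIG. 4 can be summarized in C++ as follows for a single category of graphics features; the candidate parameter values and the cost model are hypothetical stand-ins for running the actual encoder and the PSNR/bitrate evaluation of step S408.

    // Sketch of the training flow of FIG. 4 for one category of graphics features:
    // every candidate parameter combination is used to encode the training image
    // (S406), its efficiency is measured (S408), and the best combination is
    // recorded in the database (S414). Candidate values and the cost model are
    // hypothetical; real training would run the full encoder and a PSNR/bitrate
    // (BD-rate) evaluation.
    #include <iostream>
    #include <limits>
    #include <map>
    #include <string>

    struct ParameterSet { int qp; int cu_size; };

    // Stand-in for "encode the training image and measure efficiency":
    // returns a cost where lower is better (purely illustrative model).
    double coding_cost(const ParameterSet& p) {
        return 0.2 * (p.qp - 30) * (p.qp - 30) + 64.0 / p.cu_size + p.cu_size / 8.0;
    }

    int main() {
        std::map<std::string, ParameterSet> database;       // storage unit 228
        const std::string category = "slow_motion_complex"; // result of S404

        const int qp_candidates[] = {22, 27, 32, 37};
        const int cu_candidates[] = {16, 32, 64};

        ParameterSet best{};
        double best_cost = std::numeric_limits<double>::max();
        for (int qp : qp_candidates) {                      // S410/S412: next combination
            for (int cu : cu_candidates) {
                ParameterSet candidate{qp, cu};
                double cost = coding_cost(candidate);       // S406 + S408
                if (cost < best_cost) { best_cost = cost; best = candidate; }
            }
        }
        database[category] = best;                          // S414
        std::cout << category << ": QP=" << best.qp
                  << " CU=" << best.cu_size << " cost=" << best_cost << "\n";
    }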


Any technique using the aforementioned concepts in graphics processing is within the scope of the invention. The invention further involves graphics processing methods, which are not limited to any specific hardware architecture.


While the invention has been described by way of example and in terms of the preferred embodiments, it should be understood that the invention is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims
  • 1. A graphics processing unit, comprising: hardware for parameter setting, using a storage unit to store a parameter database, performing a graphic analysis on a video received by the graphics processing unit to extract graphics features, and looking up the parameter database in accordance with the graphics features to find a matching set of parameters; and hardware for video coding, encoding the video based on the matching set of parameters.
  • 2. The graphics processing unit as claimed in claim 1, wherein: when not finding any set of parameters in the parameter database matching the graphics features, the hardware for parameter setting uses a variety of parameter combinations to evaluate the matching set of parameters based on video coding efficiency.
  • 3. The graphics processing unit as claimed in claim 2, wherein: the hardware for video coding further fills the evaluated matching set of parameters into the parameter database.
  • 4. The graphics processing unit as claimed in claim 3, wherein: the hardware for video coding further updates the matching set of parameters with respect to a change in the graphics features of the video.
  • 5. The graphics processing unit as claimed in claim 1, wherein: the hardware for video coding further extracts motion features and complex spatial texture features from color space data, YUV, of the video received by the graphics processing unit, and consults the parameter database accordingly.
  • 6. The graphics processing unit as claimed in claim 5, wherein: the hardware for video coding further detects scene changes based on the color space data, YUV, of the video received by the graphics processing unit, and consults the parameter database accordingly.
  • 7. The graphics processing unit as claimed in claim 1, wherein: the parameter database uses a lookup table to record multiple sets of parameters; and each set of parameters includes at least a frame type, a quantization parameter, a search range, a coding unit size, a coding unit depth, a discrete cosine transform unit size, or a discrete cosine transform unit depth.
  • 8. A graphics processing method, comprising: using hardware for parameter setting to provide a storage unit to store a parameter database, and further using the hardware for parameter setting to perform a graphic analysis on a video received by a graphics processing unit to extract graphics features and consulting the parameter database in accordance with the graphics features to find a matching set of parameters; and using hardware for video coding to encode the video based on the matching set of parameters.
  • 9. The graphics processing method as claimed in claim 8, further comprising: when the hardware for parameter setting does not find any set of parameters in the parameter database matching the graphics features, further using the hardware for parameter setting to use a variety of parameter combinations to evaluate the matching set of parameters based on video coding efficiency.
  • 10. The graphics processing method as claimed in claim 9, further comprising: driving the hardware for video coding to further fill the evaluated matching set of parameters into the parameter database.
  • 11. The graphics processing method as claimed in claim 10, further comprising: driving the hardware for video coding to further update the matching set of parameters with respect to a change in the graphics features of the video.
  • 12. The graphics processing method as claimed in claim 8, further comprising: using the hardware for video coding to extract motion features and complex spatial texture features from color space data, YUV, of the video received by the graphics processing unit, and consulting the parameter database accordingly.
  • 13. The graphics processing method as claimed in claim 12, further comprising: using the hardware for video coding to detect scene changes based on the color space data, YUV, of the video received by the graphics processing unit, and consulting the parameter database accordingly.
  • 14. The graphics processing method as claimed in claim 8, further comprising: using the parameter database to build a lookup table to record multiple sets of parameters, wherein each set of parameters includes at least a frame type, a quantization parameter, a search range, a coding unit size, a coding unit depth, a discrete cosine transform unit size, or a discrete cosine transform unit depth.
Priority Claims (1)
Number: 201410765122.X; Date: Dec. 2014; Country: CN; Kind: national