This relates generally to video display controllers. A video display controller handles the merging and blending of various display planes.
The final picture on a display screen may consist of various content types. In addition, the final display may include one, two, or more video display windows, menus, television guides, closed captioned text, volume bars, channel numbers, and other overlays. Each of these display content types is rendered separately and then merged or blended with the others in the video display controller.
Referring to
A video display controller 18 receives video content from various sources and blends and merges it for display on a video display 20. The video display 20 can be any type of video display, including a television.
A memory storage 22 is also coupled to the system bus 16.
Video data sources may be coupled to the system bus 16. The video data may be received from a media player, from a broadcast source, from a cable source, or from a network, to mention a few examples.
Referring to
Each stage has the flexibility to choose the relevant two pixels to be blended and their alpha values. In one embodiment, one of the pixels is always received directly from an attached plane. The previous source pixel is selectable from one of two other sources, called the left blender out and the right blender out.
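Purely by way of illustration, the per stage input selection just described might be modeled in software as follows; the type and field names (blend_stage_in_t, prev_src_sel_t, and so on) are hypothetical and are not taken from any figure:

    #include <stdint.h>

    typedef struct {
        uint32_t argb;            /* pixel value, e.g. packed ARGB8888 */
    } pixel_t;

    typedef enum {
        PREV_FROM_LEFT_BLENDER_OUT,
        PREV_FROM_RIGHT_BLENDER_OUT
    } prev_src_sel_t;

    typedef struct {
        pixel_t pixel_pipe_in;    /* pixel received directly from the attached plane */
        pixel_t left_blender_out; /* output of a neighboring blend stage */
        pixel_t right_blender_out;/* output of another neighboring blend stage */
        prev_src_sel_t prev_sel;  /* which previous-source input this stage uses */
    } blend_stage_in_t;

    /* Select the "previous source" pixel to be blended with the attached plane pixel. */
    static pixel_t select_prev_source(const blend_stage_in_t *in)
    {
        return (in->prev_sel == PREV_FROM_LEFT_BLENDER_OUT)
                   ? in->left_blender_out
                   : in->right_blender_out;
    }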
Thus, in the embodiment shown in
The output from the blend stage 24a is provided to the left blender out of the next stage 24b. The next stage also receives the alpha pipe and right blender out inputs in the same way as the previous stage. Its pixel pipe is connected to the universal pixel plane 0, and the output of the blend stage 24b is coupled to the next blend stage 24g. That stage is connected to receive the same right blender out and alpha pipe inputs as the previous stages. Its pixel pipe input is provided from the universal pixel plane 1. Its output goes to a multiplexer 30 that feeds a first output window TG0. That output also goes to the next blend stage 24e and to another blend stage 24c.
The blend stage 24c receives its pixel plane data from the index-alpha plane 0. The right blender out comes from the blend stage 24e and the output is provided both to the multiplexer 30 and the blend stage 24d.
The blend stage 24e receives its alpha pipe input from index-alpha plane 1. The pixel pipe input is received from the universal pixel plane 2 and the right blender out comes from CColor1. The output is provided to the multiplexer 26 and to the multiplexer 30.
The blend stage 24f has an output connected to the multiplexer 28, which may provide the second video window TG1. The right blender out is connected to CColor1. The input pixel pipe is connected to universal pixel plane 3. The alpha pipe is coupled to the index-alpha plane 1. The output from the blend stage 24f goes to the multiplexer 28 and the multiplexer 26 for selective display in either the window TG0 or the window TG1.
The processing and hardware of each blend stage 24 may be the same, with only the inputs being different. Thus, as shown in
The blending operation basically uses the alpha value to adjust the relative transparency between two pixels to be blended. The blending can be done in any domain, including the RGB or YCbCr domains, to mention two examples.
The multiplexer 34 selects either per pixel alpha values or alpha pipe values. The constant alpha value is basically a scaling ratio that can be used alone or together with a per pixel alpha value. In some embodiments, the constant alpha is used only to scale the selected per pixel alpha value and is not used alone. When the selected per pixel alpha value is always a constant “1” (in which case neither the pixel pipe nor the alpha pipe really supplies an alpha source), the scaled alpha value is simply the constant alpha value; in this sense, the constant alpha value appears to be used alone. The resulting alpha value “a” may be used in the multiplier 38 or the multiplier 40, as appropriate.
Alpha-blending is used to create a semi-transparent look. The color components of the prior stage picture pixels (the output of multiplexer 32) are multiplied by (1-alpha) and added to this pipe's color components (normally pre-multiplied by alpha) in one embodiment. When alpha=0, the new pixel is completely transparent and therefore invisible in one embodiment. When alpha=1, this pipe's pixel is opaque and the prior pixel is invisible in one example.
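A minimal sketch of this per-component operation, assuming color components normalized to the range 0 to 1 and a pipe color already pre-multiplied by alpha (the function name blend_component is illustrative only):

    /* out = prev * (1 - alpha) + pipe, where pipe is assumed pre-multiplied by alpha.
     * With alpha = 0 the result is the prior pixel; with alpha = 1 it is this pipe's pixel. */
    static float blend_component(float prev, float pipe_premul, float alpha)
    {
        return prev * (1.0f - alpha) + pipe_premul;
    }

The same operation applies per component whether the blend is performed in the RGB or the YCbCr domain.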
The alpha value used for blending may have two sources. The alpha value may come with pixels from the pixel pipe (PP input), which is the output of a Universal Pixel Plane (UPP). In this case, every UPP output pixel includes an alpha value. As an example, for the ARGB8888 video format, each pixel has 4 components: 8 bit alpha, 8 bit R, 8 bit G, 8 bit B. As another option, the alpha value may come from a separate alpha pipe (AP input), which is the output of an Alpha-Index plane (IAP). In this case, every IAP output contains only an alpha value. As an example, under the ARIB standard, every output of the switching plane corresponds to a pixel position and a one bit alpha value is used to select a pixel either from a still picture or from the video plane (blending has only two effects: transparent and opaque). See Association of Radio Industries and Businesses, Video Coding, Audio Coding and Multiplexing Specifications for Digital Broadcasting (ARIB STD-B32) Ver. 2.1 (Mar. 14, 2007).
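By way of illustration only, the per pixel alpha carried in an ARGB8888 pixel could be unpacked as follows; the helper name unpack_argb8888 is an assumption, with alpha taken from the most significant byte per the component order listed above:

    #include <stdint.h>

    /* Unpack the 8 bit components of a packed ARGB8888 pixel.
     * Alpha occupies the most significant byte in this packing. */
    static void unpack_argb8888(uint32_t px, uint8_t *a, uint8_t *r,
                                uint8_t *g, uint8_t *b)
    {
        *a = (px >> 24) & 0xFF;
        *r = (px >> 16) & 0xFF;
        *g = (px >>  8) & 0xFF;
        *b =  px        & 0xFF;
    }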
For both of these alpha value sources, the alpha value is pixel based, i.e., it changes pixel by pixel. Each pixel has its own alpha value. That is why it is called a per pixel alpha value.
A constant alpha value is a programmable constant and is plane-based (it comes from the attached plane and does not change within a given plane). It is used to scale the selected alpha value from either of the alpha sources described above.
A pseudo code functional description for the embodiment of
The multiplexer 34 in
(1) it selects an alpha value from either of the per pixel alpha (PP) or alpha pipe (AP);
(2) it scales the result of (1) above with a constant alpha; and/or
(3) it selects whether to apply scaling or not.
Thus, an alpha value can come from three different sources: a per pixel alpha from the attached plane, a constant alpha, or a per pixel output from a separate alpha plane. In addition, if either of the per pixel alpha sources is selected, there is an additional option to scale it by the constant alpha value. The selected alpha value is then used in the blending operation. For the current plane pixels, the alpha value optionally is not multiplied in, since those pixels are assumed to be pre-multiplied. The previous source pixel is always multiplied by (1-alpha).
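A minimal sketch of this alpha selection and optional scaling, assuming values normalized to the range 0 to 1; this is not the pseudo code referred to above, and the names (alpha_sel_t, select_alpha, use_const_scale) are illustrative assumptions:

    /* Select the alpha value for blending: per pixel alpha from the attached
     * plane (PP), per pixel alpha from the separate alpha plane (AP), or the
     * plane-based constant alpha; a selected per pixel alpha may optionally
     * be scaled by the constant alpha. */
    typedef enum { ALPHA_FROM_PP, ALPHA_FROM_AP, ALPHA_CONSTANT } alpha_sel_t;

    static float select_alpha(alpha_sel_t sel, float pp_alpha, float ap_alpha,
                              float const_alpha, int use_const_scale)
    {
        float a;
        switch (sel) {
        case ALPHA_FROM_PP: a = pp_alpha; break;
        case ALPHA_FROM_AP: a = ap_alpha; break;
        default:            return const_alpha;   /* constant alpha used alone */
        }
        return use_const_scale ? a * const_alpha : a;
    }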
The configuration shown in
Through the use of a flexible blender architecture, a variety of applications, including high definition (HD) DVD and Direct TV® satellite broadcasting, can be supported in some embodiments. The seven blend stages 24 can be partitioned into two separate data paths to support two simultaneous display outputs, indicated as TG0 and TG1 in one embodiment. A flexible number of planes can be assigned to these paths to get different effects.
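As an illustrative sketch only, such a partition might be represented in software as a simple assignment of blend stages to output paths; the particular assignment shown is arbitrary and does not reflect the wiring of any figure:

    typedef enum { PATH_TG0, PATH_TG1 } display_path_t;

    /* Hypothetical assignment of the seven blend stages to the two display outputs. */
    static const display_path_t stage_path[7] = {
        PATH_TG0, PATH_TG0, PATH_TG0, PATH_TG0,   /* stages feeding window TG0 */
        PATH_TG1, PATH_TG1, PATH_TG1              /* stages feeding window TG1 */
    };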
The graphics processing techniques described herein may be implemented in various hardware architectures. For example, graphics functionality may be integrated within a chipset. Alternatively, a discrete graphics processor may be used. As still another embodiment, the graphics functions may be implemented by a general purpose processor, including a multicore processor.
References throughout this specification to “one embodiment” or “an embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present invention. Thus, appearances of the phrase “one embodiment” or “in an embodiment” are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in suitable forms other than the particular embodiment illustrated and all such forms may be encompassed within the claims of the present application.
While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.