1. Field of the Invention
The present invention relates to video processing devices employing video data of sprites, e.g. moving elements of graphical objects used in video games, displayed on screens of visual displays.
The present application claims priority on Japanese Patent Application No. 2009-176766 (Filing Date: Jul. 29, 2009), the content of which is incorporated herein by reference.
2. Description of the Related Art
Conventionally-known video processing devices employing sprites need to perform rendering on display buffers storing video data of sprites, which are displayed on screens of visual displays.
Alpha blending is a known technique for rendering sprites with video processing devices. Alpha blending uses color values representing the brightness or intensity of the primary colors, i.e. red (R), green (G), and blue (B), of pixels in a source picture (i.e. an original picture subjected to alpha blending) and a destination picture (i.e. a background picture serving as the rendering destination of the source picture), as well as alpha data representing the transparency of each pixel. Based on these color values and alpha data, alpha blending calculates the color values of pixels of a rendering-completed picture in which the source picture is embedded in the destination picture, i.e. the color values of pixels in the overlapped area in which the source picture overlaps the destination picture. A concrete explanation of alpha blending will be given with reference to
D′=(1−α)·SC+α·D (1)
The above calculation makes it possible to produce a rendering-completed picture which visualizes the destination picture prior to rendering through the source picture with the transparency designated by alpha data.
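The per-pixel calculation of Equation (1) can be sketched as follows; this is a minimal Python sketch, not part of the disclosure, and the function name and the normalization of color values and alpha data to the range [0.0, 1.0] are illustrative assumptions.

```python
def alpha_blend(src, dst, alpha):
    """Per-pixel alpha blend per Equation (1): D' = (1 - a) * SC + a * D.

    src, dst: (R, G, B) tuples with components in [0.0, 1.0],
              corresponding to SC (source) and D (destination).
    alpha:    transparency designated by the alpha data, in [0.0, 1.0];
              0.0 renders the source fully opaque, while 1.0 leaves the
              destination picture unchanged.
    """
    return tuple((1.0 - alpha) * s + alpha * d for s, d in zip(src, dst))

# With alpha = 0.5, the destination picture is visualized through the
# source picture at half strength.
```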
Alpha blending is not only used as a visual effect visualizing a back-side picture through a front-side picture, but also as an anti-aliasing filter that renders high-quality outlines of sprites. Next, alpha blending serving as an anti-aliasing filter will be described with reference to
The anti-aliasing filter is used to smooth the outline of the colored region by smoothing out the jagged boundary. Specifically, the anti-aliasing filter performs alpha blending by use of sprite data of
In the alpha table of
Patent Documents 1 and 2 disclose alpha blending serving as anti-aliasing filters.
Patent Document 1: Japanese Patent Application Publication No. 2004-213464
Patent Document 2: Japanese Patent Application Publication No. 2008-198065
Conventionally-known video processing devices employing sprites are designed to use a single alpha table per one type of sprite; hence, alpha blending is performed using a single alpha table dedicated to the rendering of one type of sprite. Since conventionally-known video processing devices are equipped with alpha tables inseparably ascribed to sprites, they are limited in the range of rendering achievable via alpha blending. Since a single alpha table is dedicated to one type of sprite, it is difficult to adequately demonstrate an anti-aliasing effect even when alpha blending is used as an anti-aliasing filter. This drawback occurs when expanding or reducing sprites via rendering. Patent Document 2 may handle this drawback by expanding or reducing alpha tables in conformity with the expanded or reduced sizes of sprites, wherein rendering is performed on expanded or reduced sprites with reference to correspondingly expanded or reduced alpha tables. However, even when alpha tables are expanded or reduced in conformity with the expanded or reduced sizes of sprites, areas of intermediate alpha data (which are intermediate between non-transparent alpha data and transparent alpha data) do not necessarily overlap the jagged boundaries of colored regions of expanded or reduced sprites; hence, it is difficult to adequately achieve an anti-aliasing effect.
It is an object of the present invention to provide a video processing device which improves alpha blending so as to achieve rich variations in rendering.
A video processing device of the present invention is constituted of a display buffer that stores video data displayed on the screen of a visual display, an alpha buffer that stores alpha data used for alpha blending of video data involved in rendering in units of pixels, and a sprite rendering processor that alternately performs first rendering and second rendering. In the first rendering, alpha blending is performed on video data of sprites, rendering-destination video data of the display buffer, and rendering-destination alpha data of the alpha buffer (the alpha data having been produced in the alpha buffer via the second rendering), thus producing resultant video data, which are written over the rendering-destination video data in the display buffer.
The video processing device is equipped with an attribute data table that stores sprite attribute data representing rendering conditions of sprites with respect to the first rendering, and alpha attribute data representing rendering conditions of alpha data with respect to the second rendering.
Since the second rendering of alpha data is performed independently of the first rendering of video data of sprites, it is possible to arbitrarily change the alpha data used for alpha blending between sprites and their background pictures. Thus, it is possible to achieve a wide variety of rendering effects using alpha blending on the screen of a visual display.
These and other objects, aspects, and embodiments of the present invention will be described in more detail with reference to the following drawings.
The present invention will be described in further detail by way of examples with reference to the accompanying drawings.
Upon receipt of an instruction from a CPU 201, the video processing device 100 reads sprite pattern data from a pattern memory 202 (i.e. an external memory such as a ROM) so as to produce video data, which are displayed on a screen of a liquid crystal display (LCD) 203. As shown in
Conventionally-known video processing devices employ one-to-one correspondence between sprites and alpha tables. In contrast, the video processing device 100 of the present embodiment does not necessarily need one-to-one correspondence between sprite pattern data and alpha tables stored in the pattern memory 202. It is possible to store multiple types of alpha tables in correspondence with one type of sprite pattern data in the pattern memory 202. Alternatively, it is possible to additionally store alpha tables dedicated to unspecified sprite pattern data in the pattern memory 202. The present embodiment does not impose restrictions on the correspondence between sprite pattern data and alpha tables, thus making it possible to freely select alpha tables for use in alpha blending in rendering sprites.
Next, the constitution of the video processing device 100 will be described in detail. A register 101 stores control information controlling several parts of the video processing device 100. The CPU 201 and a sprite rendering processor 110 of the video processing device 100 write the control information into the register 101.
An attribute data table 102 (e.g. a RAM) stores sprite attribute data and alpha attribute data. Sprite attribute data designate rendering conditions of sprites, while alpha attribute data designate rendering conditions of alpha tables. The present embodiment is characterized in that alpha tables are subjected to rendering according to alpha attribute data independently of sprites, wherein a lower layer (representing a picture which precedes the alpha table in rendering) and an upper layer (representing a picture which is succeeded by the alpha table in rendering) are subjected to alpha blending. Details will be discussed later.
One sprite attribute data designating one rendering condition of each sprite is a collection of information such as a display position of the sprite in the display screen of the LCD 203 (e.g. an upper-left corner position of the sprite), a pattern memory address designating a storage area of sprite pattern data in the pattern memory 202, X-direction and Y-direction dimensions of the sprite, and an expansion/reduction factor of the sprite. One alpha attribute data designating one rendering condition of each alpha table is a collection of information such as a position of an area adopting the alpha table in the display screen of the LCD 203 (e.g. an upper-left corner position of the area adopting the alpha table), a pattern memory address designating a storage area storing the alpha table in the pattern memory 202, X-direction and Y-direction dimensions of the alpha table, and an expansion/reduction factor of the alpha table.
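The two collections of information above can be sketched as data structures; this is an illustrative Python sketch only, and the field names and types are assumptions rather than part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SpriteAttribute:
    """Sprite attribute data: one rendering condition of one sprite."""
    x: int                 # display position (upper-left corner) on the LCD 203
    y: int
    pattern_address: int   # storage area of the sprite pattern data in the pattern memory 202
    width: int             # X-direction dimension of the sprite
    height: int            # Y-direction dimension of the sprite
    scale: float           # expansion/reduction factor

@dataclass
class AlphaAttribute:
    """Alpha attribute data: one rendering condition of one alpha table."""
    x: int                 # position (upper-left corner) of the area adopting the alpha table
    y: int
    pattern_address: int   # storage area of the alpha table in the pattern memory 202
    width: int             # X-direction dimension of the alpha table
    height: int            # Y-direction dimension of the alpha table
    scale: float           # expansion/reduction factor
```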
A pattern data decoder 103 reads sprite pattern data (involved in rendering) from the pattern memory 202 so as to decode them into video data representing colors of pixels constituting sprites.
A color palette 104 is referenced using color codes designating colors of pixels of sprite pattern data (involved in rendering). The color palette 104 serves as a conversion table for converting sprite pattern data into video data of sprites (i.e. sprite data) representing the brightness or intensity of the color components of pixels by way of a predetermined color representation consisting of three color components, such as R, G, and B or Y, U, and V. The CPU 201 is able to update the content of the color palette 104.
Line buffers 105A and 105B store video data to be displayed on the screen of the LCD 203, wherein each has the storage capacity for storing video data of one line (or one horizontal scanning line). Specifically, one of the line buffers 105A and 105B is exclusively used for rendering, while the other is exclusively used for displaying video data. They switch over their functions every time one horizontal scanning line is switched over to the next horizontal scanning line, thus interchanging the "rendering" line buffer and the "display" line buffer. The line buffer serving as the "rendering" line buffer is allocated to the rendering of video data.
An alpha buffer 106 stores alpha data which are used to perform alpha blending with video data of one horizontal scanning line (stored in the rendering line buffer) and video data involved in rendering. The alpha buffer 106 includes a plurality of areas storing alpha data in connection with a plurality of pixels constituting one horizontal scanning line. The alpha buffer 106 serves as a rendering destination of each alpha table.
A display control unit 107 generates synchronizing signals, such as a vertical synchronizing signal VSYNC_N and a horizontal synchronizing signal HSYNC_N, which are used for display control of the LCD 203, so as to supply them to the LCD 203 and the sprite rendering processor 110. In addition, the display control unit 107 reads video data of one horizontal scanning line (constituting a plurality of pixels) from the display line buffer so as to supply them to the LCD 203 in each horizontal scanning period.
The sprite rendering processor 110 performs first rendering or second rendering in accordance with sprite/alpha attribute data which are written into the attribute data table 102 under the control of the CPU 201. In the first rendering, the sprite rendering processor 110 produces sprite data designated by sprite attribute data of the attribute data table 102 so as to store them in the rendering line buffer (i.e. one of the line buffers 105A and 105B). In the second rendering, the sprite rendering processor 110 stores alpha data of the alpha table designated by alpha attribute data of the attribute data table 102 in the alpha buffer 106. In the first rendering, the sprite rendering processor 110 performs alpha blending using alpha data of the alpha buffer 106 together with rendering of sprite data.
The sprite rendering processor 110 performs the first rendering or the second rendering by means of the alpha blending unit 111 and the calculation unit 112. Detailed procedures will be described below.
In the first rendering, the alpha blending unit 111 executes the following tasks a1 to a4.
(a1) Read rendering-destination video data (i.e. video data allocated to an area of rendering destination) with respect to sprite data (i.e. video data of sprites which will be displayed on one horizontal scanning line in the next horizontal scanning period) among video data of the rendering line buffer.
(a2) Read alpha data corresponding to the same pixels as rendering-destination video data among alpha data of the alpha buffer 106.
(a3) Perform alpha blending with sprite data, rendering-destination video data, and alpha data read from the alpha buffer 106, thus producing resultant video data, which are written over video data of the rendering line buffer.
(a4) Set alpha data dedicated to alpha blending (i.e. alpha data corresponding to the same pixels as rendering-destination video data among alpha data of the alpha buffer 106) to "0" representing "perfect non-transparency".
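Tasks a1 to a4 can be sketched as follows; this is a minimal Python sketch, with illustrative function and variable names, assuming color values and alpha data are normalized to [0.0, 1.0] and that the sprite occupies a contiguous pixel span on the line.

```python
def first_rendering(sprite_pixels, line_buffer, alpha_buffer, x0):
    """Tasks a1-a4 of the first rendering for one sprite span starting at pixel x0.

    sprite_pixels: list of (R, G, B) tuples for the sprite on this line.
    line_buffer:   list of (R, G, B) tuples (the "rendering" line buffer).
    alpha_buffer:  list of floats in [0.0, 1.0], one per pixel.
    """
    for i, sp in enumerate(sprite_pixels):
        x = x0 + i
        dst = line_buffer[x]      # (a1) read rendering-destination video data
        a = alpha_buffer[x]       # (a2) read alpha data for the same pixel
        # (a3) alpha blending per Equation (2): Rs1' = (1 - Ra1)*SP + Ra1*Rs1,
        # written over the video data of the rendering line buffer
        line_buffer[x] = tuple((1.0 - a) * s + a * d for s, d in zip(sp, dst))
        alpha_buffer[x] = 0.0     # (a4) consume the alpha data (perfect non-transparency)
```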
In the second rendering, the calculation unit 112 executes the following tasks b1 and b2.
(b1) Read rendering-destination alpha data from the alpha buffer 106.
(b2) Write alpha data involved in rendering over rendering-destination alpha data whose value is zero in the alpha buffer 106. Alternatively, write a multiplication result of alpha data, in which alpha data involved in rendering is multiplied by rendering-destination alpha data, over rendering-destination alpha data whose value is not zero in the alpha buffer 106.
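Tasks b1 and b2 can be sketched in the same illustrative style (function and variable names are assumptions; alpha data are normalized to [0.0, 1.0]):

```python
def second_rendering(table_alphas, alpha_buffer, x0):
    """Tasks b1-b2 of the second rendering: render one alpha-table span starting at pixel x0.

    table_alphas: list of floats (alpha data involved in rendering).
    alpha_buffer: list of floats (rendering-destination alpha data).
    """
    for i, a in enumerate(table_alphas):
        x = x0 + i
        dst = alpha_buffer[x]          # (b1) read rendering-destination alpha data
        if dst == 0.0:
            alpha_buffer[x] = a        # (b2) overwrite a zero-valued destination
        else:
            alpha_buffer[x] = a * dst  # (b2) multiply into a nonzero destination
```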
The sprite rendering processor 110 executes one-line rendering every time the display control unit 107 outputs one horizontal synchronizing signal HSYNC_N.
The sprite rendering processor 110 starts the one-line rendering in connection with the issuance of horizontal synchronizing signals HSYNC_N, wherein it performs line counting in step S1. Herein, the sprite rendering processor 110 counts the number of horizontal synchronizing signals HSYNC_N after a vertical synchronizing signal VSYNC_N, thus determining the number of the present horizontal scanning line, counted from the uppermost horizontal scanning line on the display screen, in the present horizontal scanning period. In step S2, the sprite rendering processor 110 initializes all the video data of the rendering line buffer with video data of the background color (e.g. white) while initializing all the alpha data of the alpha buffer 106 with “0” representing the perfect non-transparency.
In step S3, the sprite rendering processor 110 reads one attribute data from the attribute data table 102. As shown in
In step S4, the sprite rendering processor 110 discriminates whether sprite attribute data or alpha attribute data was read from the attribute data table 102 in step S3. When sprite attribute data was read from the attribute data table 102, the sprite rendering processor 110 performs the first rendering in step S5. When alpha attribute data was read from the attribute data table 102, the sprite rendering processor 110 performs the second rendering in step S6. Upon completion of the first rendering of step S5 or the second rendering of step S6, the sprite rendering processor 110 makes a decision in step S7 as to whether or not it has read all the attribute data from the attribute data table 102. When the decision result of step S7 is "NO", the flow proceeds back to step S3 so that the sprite rendering processor 110 reads the next attribute data from the attribute data table 102. Subsequently, the sprite rendering processor 110 performs the first rendering of step S5 based on sprite attribute data or the second rendering of step S6 based on alpha attribute data. When the sprite rendering processor 110 has completely read all the attribute data from the attribute data table 102 so that the decision result of step S7 turns to "YES", it switches over the rendering line buffer and the display line buffer in step S8. Thus, the one-line rendering is completed.
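The flow of steps S2 to S8 can be sketched as follows; this is an illustrative Python sketch in which the first and second rendering are passed in as callbacks, the attribute-kind tags are assumptions, and step S1 (line counting) and the buffer swap of step S8 are left to the caller.

```python
def render_one_line(attribute_table, line_buffer, alpha_buffer,
                    first_render, second_render, background=(1.0, 1.0, 1.0)):
    """One-line rendering, mirroring steps S2-S8 of the flow.

    attribute_table: sequence of ("sprite", data) or ("alpha", data)
    entries, read in order as in steps S3-S7.
    """
    # S2: initialize all video data with the background color (e.g. white)
    # and all alpha data with 0 representing perfect non-transparency.
    for x in range(len(line_buffer)):
        line_buffer[x] = background
    for x in range(len(alpha_buffer)):
        alpha_buffer[x] = 0.0
    # S3-S7: read each attribute datum and dispatch on its kind (S4);
    # sprite attribute data trigger the first rendering (S5), while
    # alpha attribute data trigger the second rendering (S6).
    for kind, data in attribute_table:
        if kind == "sprite":
            first_render(data, line_buffer, alpha_buffer)
        else:
            second_render(data, alpha_buffer)
    # S8: the caller then swaps the rendering and display line buffers.
```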
As the attribute data table 102 stores a series of attribute data shown in
(a) First rendering based on sprite attribute data SP1 (step S5)
(b) Second rendering based on alpha attribute data A1 (step S6)
(c) First rendering based on sprite attribute data SP2 (step S5)
(d) Second rendering based on alpha attribute data A2 (step S6)
(e) First rendering based on sprite attribute data SP3 (step S5)
Video data written into the rendering line buffer 105 according to first rendering and alpha data written into the alpha buffer 106 according to second rendering are referred to as layers. Hence, a plurality of layers is accumulated with a hierarchical relationship therebetween. Considering two types of rendering, data written into the rendering line buffer 105 or the alpha buffer 106 according to preceding rendering are classified as lower layers, while data written into the rendering line buffer 105 or the alpha buffer 106 according to subsequent rendering are classified as upper layers.
In the present horizontal scanning period, the sprite rendering processor 110 performs first rendering of step S5 (in the one-line rendering of
In the task a1, the sprite rendering processor 110 reads rendering-destination video data Rs1, which serves as a rendering destination for video data of the sprite SP2, from the rendering line buffer 105. In the task a2, the sprite rendering processor 110 reads alpha data Ra1 corresponding to the same pixels as the rendering-destination video data Rs1 from the alpha buffer 106. Specifically, it reads a part of the alpha table A1 shown in
Rs1′=(1−Ra1)·SP2+Ra1·Rs1 (2)
In the task a4, alpha data Ra1 used in the alpha blending (i.e. alpha data corresponding to the same pixels as rendering-destination video data Rs1) is set to “0” representing the perfect non-transparency.
Next, a working example of the second rendering of step S6 will be described with respect to the alpha table A2 shown in
Subsequently, the first rendering is performed on video data of the sprite SP3 such that alpha blending is performed using rendering-destination alpha data of the alpha buffer 106.
The attribute data table 102 stores sprite attribute data and alpha attribute data such that, for example, sprite attribute data designating an expansion or a reduction of a sprite with a certain expansion/reduction factor is preceded by alpha attribute data allocating an alpha table of an anti-aliasing filter on the expanded/reduced sprite to the same rendering destination as the expanded/reduced sprite. Thus, it is possible to adequately demonstrate an anti-aliasing effect with respect to expanded/reduced sprites.
Upon rendering the sprite SP2, alpha blending is performed on the area R4 according to Equation (3) so as to produce the alpha-blending result of video data SP2′, which is written into the rendering line buffer 105.
SP2′=(1−A1)·SP2+A1·SP1 (3)
Upon rendering the sprite SP3, alpha blending is performed on the area R4 according to Equation (4) so as to produce the alpha-blending result of video data SP3′, which is written into the rendering line buffer 105.
SP3′=(1−A2)·SP3+A2·SP2′=(1−A2)·SP3+A2·{(1−A1)·SP2+A1·SP1}=(1−A2)·SP3+A2·(1−A1)·SP2+A2·A1·SP1 (4)
Alpha data of the alpha table A1 are not used in rendering the sprite SP2 with respect to the areas R5 and R6. Upon rendering the sprite SP3 with respect to the areas R5 and R6, alpha blending is performed using the multiplication result “A2·A1”, which is produced by multiplying “previously-rendered” alpha data of the alpha table A1 by “subsequently-rendered” alpha data of the alpha table A2, according to Equation (5) so as to produce the alpha-blending result of video data SP3′, which is written into the rendering line buffer.
SP3′=(1−A2·A1)·SP3+A2·A1·SP1 (5)
Equations (4) and (5) above explicitly show that, upon completion of the alpha blending of rendering, the same weight coefficient A2·A1 is applied to the sprite SP1 both in the video data SP3′ in the area R4 and in the video data SP3′ in the areas R5 and R6. Thus, it is possible to appropriately adjust the see-through manner of the background visualized through sprites in the areas R4, R5, and R6 on the display screen of the LCD 203. Specifically, the sprite SP1 (serving as the background) is visualized through the other sprites SP2 and SP3, which are aligned in front-back directions, in the area R4 on the display screen, while the sprite SP1 (serving as the background) is visualized through the sprite SP3 in the areas R5 and R6 on the display screen.
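The agreement of the SP1 weight coefficients in Equations (4) and (5) can be checked numerically; the following Python sketch uses arbitrary illustrative values for the alpha data and extracts the weight that SP1 carries as the change in the output when SP1 goes from 0 to 1.

```python
def sp3_r4(sp1, sp2, sp3, a1, a2):
    """Area R4: two successive blends, Equations (3) then (4)."""
    sp2p = (1 - a1) * sp2 + a1 * sp1        # Equation (3)
    return (1 - a2) * sp3 + a2 * sp2p       # Equation (4)

def sp3_r56(sp1, sp3, a1, a2):
    """Areas R5 and R6: one blend with the multiplied alpha, Equation (5)."""
    return (1 - a2 * a1) * sp3 + a2 * a1 * sp1

A1, A2 = 0.25, 0.5   # illustrative alpha values

# The weight carried by SP1 is the change in the output as SP1 goes
# from 0 to 1; it equals A2 * A1 in both areas.
w_r4 = sp3_r4(1.0, 0.5, 0.9, A1, A2) - sp3_r4(0.0, 0.5, 0.9, A1, A2)
w_r56 = sp3_r56(1.0, 0.9, A1, A2) - sp3_r56(0.0, 0.9, A1, A2)
```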
It is possible to create variations in relation to the present embodiment as follows.
Lastly, the present invention is not necessarily limited to the present embodiment and variations, which can be further modified within the scope of the invention as defined in the appended claims.