This application claims the benefit of Korean Patent Application No. 10-2004-0027154, filed on Apr. 20, 2004, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field of the Invention
The present invention relates to an apparatus for rendering 3D (three-dimensional) graphics data, and more particularly, to an apparatus and method for reconstructing 3D graphics data.
2. Description of the Related Art
VRML (Virtual Reality Modeling Language), the MPEG (Moving Picture Experts Group) standards, and file formats defined by general commercial programs (e.g., 3D Studio Max, Maya, etc.) are used by an apparatus for rendering 3D graphics data on a screen. 3D graphics data includes: geometric information of an object positioned in 3D space (e.g., information about the positions of the 3D points constituting the object and the connection information thereof); material information of the object (e.g., information about the texture, transparency, and color of the object and the shininess of the object's surface); and information about how such information changes depending on position, characteristics of a light source, and time. Such 3D graphics data is expressed with a structure that is intuitively or logically understandable so that a user may easily generate and modify the graphics data. A structure in which 3D graphics data is expressed in such an intuitively understandable manner is called a scene graph.
To read in such 3D graphics data and output it to a screen, an apparatus that analyzes the meaning of the read 3D graphics data and performs data conversion is required. Generally, such an apparatus is called a 3D graphics rendering engine. The 3D graphics rendering engine includes a parser and a renderer.
The parser reads in and interprets 3D graphics data. Namely, the parser identifies whether the read data is geometric information of an object, material information of an object, or information about a subordinate relationship between objects originating from the scene graph structure, and interprets and evaluates that information.
The renderer changes the scene graph parsed by the parser into a form appropriate for display on a screen of an output device. Since a screen is appropriate for displaying 2D information, the 3D scene graph cannot be used directly. A primary role of the renderer is to convert 3D graphics data into 2D graphics data by performing a coordinate transformation on each object expressed in 3D space. Therefore, the data output from the renderer is 2D graphics data.
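For illustration only, the core of such a coordinate transformation can be sketched as a perspective projection of a camera-space point onto a 2D screen; the pinhole-camera model, parameter names, and screen size below are assumptions of this sketch, not part of the disclosure:

```python
# Minimal sketch: projecting a 3D point to 2D screen coordinates with a
# perspective divide. The pinhole-camera model and all names here are
# illustrative assumptions.
def project_point(point, focal_length=1.0, screen_w=640, screen_h=480):
    x, y, z = point                      # point given in camera space, z > 0
    if z <= 0:
        return None                      # behind the camera; not drawable
    # perspective divide: farther points move toward the screen centre
    sx = (focal_length * x / z) * (screen_w / 2) + screen_w / 2
    sy = (-focal_length * y / z) * (screen_h / 2) + screen_h / 2
    return (sx, sy)

print(project_point((1.0, 0.5, 4.0)))    # -> (400.0, 210.0)
```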
However, a problem with such a conventional 3D graphics rendering engine is that it produces a 2D output image by performing the 2D conversion process on the 3D graphics data as it is, without changing the 3D graphics data structure. The scene graph, which is a method of expressing the data structure of the 3D graphics data, is a structure that can be easily and intuitively understood and modified by a user as described above; however, storage space is wasted, generation of an output image is delayed, and the image quality of the output image deteriorates. As shown in
Also, when the material information changes between two neighboring nodes during the rendering process, the renderer discards the original material information and reads the new material information into the storage space of the terminal. Generally, since the process of reading data from and/or writing data to the storage space requires more time than other processes such as mathematical computation, such an information change causes a large delay in the whole process of converting 3D information into 2D information. Since the conventional 3D graphics rendering engine repeatedly reads material information due to a discontinuous arrangement of the same material information, the generation of an output image is delayed.
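For illustration, the cost of such a discontinuous arrangement can be approximated by counting how often the material state must be reloaded; the node list and the one-reload-per-change cost model below are illustrative assumptions:

```python
# Sketch: counting how often the renderer must reload material (texture) state.
# The texture names and the cost model are illustrative assumptions.
nodes = ["brick", "wood", "brick", "wood", "brick"]   # texture used by each shape node

def texture_switches(order):
    switches, current = 0, None
    for tex in order:
        if tex != current:               # material changed between neighbours
            switches += 1                # -> costly read from storage
            current = tex
    return switches

print(texture_switches(nodes))           # 5 reloads in scene-graph order
print(texture_switches(sorted(nodes)))   # 2 reloads once nodes are grouped
```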
Also, information representing the transparency of an object may be included in the 3D graphics data. When the nodes having this information are arranged in a discontinuous and irregular order, an exact conversion of 3D information into 2D information cannot be guaranteed. When 3D objects having a transparent property are positioned in an overlapping manner, the conversion of 3D information into 2D information should reflect characteristics such as the transparency of, and the distance to, the other 3D objects existing in the viewing frustum. However, since the conventional 3D graphics rendering engine does not reflect the transparency of and the distance between the 3D objects that belong to each node when determining a rendering order, the image quality of the output image deteriorates.
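For illustration, this order dependence can be seen with the standard "over" alpha-blending operator, which the sketch below assumes as the compositing rule; the colors and alpha values are illustrative:

```python
# Sketch: the standard "over" alpha blend. The final pixel colour depends on
# the order in which overlapping transparent surfaces are drawn, so an
# incorrect rendering order degrades the output image.
def blend(dst, src, alpha):
    return tuple(alpha * s + (1 - alpha) * d for s, d in zip(src, dst))

background = (0.0, 0.0, 0.0)
red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)

# back-to-front (correct when the red surface lies behind the blue one)
print(blend(blend(background, red, 0.5), blue, 0.5))   # (0.25, 0.0, 0.5)
# the reverse order with the same operator gives a different, incorrect result
print(blend(blend(background, blue, 0.5), red, 0.5))   # (0.5, 0.0, 0.25)
```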
An aspect of the present invention provides an apparatus and method for reconstructing 3D graphics data that are capable of grouping the nodes of a scene graph according to material information such as transparency information and texture information.
Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the invention.
According to an aspect of the present invention, there is provided an apparatus for reconstructing 3D graphics data including: a parsing unit to parse 3D graphics data; an aligning unit to align nodes storing geometric information and material information that corresponds to the geometric information included in the 3D graphics data parsed by the parsing unit, and objects of a scene graph having hierarchy information regarding the nodes, using predetermined detail information included in the material information as a reference; and a rendering unit to interpret the objects of the scene graph aligned by the aligning unit to convert the 3D graphics data into 2D graphics data.
According to another aspect of the present invention, there is provided a method of reconstructing 3D graphics data including: parsing 3D graphics data; aligning nodes storing geometric information and material information that corresponds to the geometric information included in the 3D graphics data, and objects of a scene graph having hierarchy information regarding the nodes, using predetermined detail information included in the material information as a reference; and interpreting the aligned objects of the scene graph to convert the 3D graphics data into 2D graphics data.
These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below to explain the present invention by referring to the figures.
The invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the invention to those skilled in the art.
The parsing unit 100 reads in and parses 3D graphics data such as VRML (Virtual Reality Modeling Language) data or MPEG (Moving Picture Experts Group) data. The parsing unit 100 parses the scene graph of the 3D graphics data, which is input through an input terminal IN1, and outputs information regarding the parsed scene graph to the aligning unit 120. The scene graph has nodes that include geometric information and material information that corresponds to the geometric information, and also has hierarchy information of the nodes. Here, the geometric information and the material information, together with the detail information of the material information, are referred to as objects of the scene graph. The geometric information is 3D information representing the appearance of an object. The material information includes detail information, which represents the texture, transparency, and color of an object and the shininess of the object's surface. Since the specific functions of the parsing unit 100 are similar to those of the conventional art, a description thereof will be omitted.
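For illustration only, the structure of such scene-graph objects might be sketched as follows; the class and field names are assumptions introduced here and do not appear in the original disclosure:

```python
# Sketch of the scene-graph objects described above: a shape node stores
# geometric information plus its corresponding material (texture, transparency,
# colour, shininess); a group node carries the hierarchy information.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Material:
    texture: Optional[str]   # e.g. an image file name
    transparency: float      # 0.0 = opaque, 1.0 = fully transparent
    color: tuple
    shininess: float

@dataclass
class ShapeNode:             # leaf node: geometry + corresponding material
    vertices: List[tuple]
    material: Material

@dataclass
class GroupNode:             # inner node: holds sub-nodes of the hierarchy
    children: List[object] = field(default_factory=list)
```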
The aligning unit 120 aligns the objects of the scene graph parsed by the parsing unit 100 using, as a reference, predetermined detail information included in the material information, and outputs the objects of the aligned scene graph to the rendering unit 140. In one embodiment, the aligning is performed on the transparency information and the texture information included in the predetermined detail information. According to the transparency information, the geometric information is divided into transparent geometric information and opaque geometric information. The transparent geometric information and the opaque geometric information are separated and individually stored. The opaque geometric information is aligned according to the texture information, and the transparent geometric information is aligned in consideration of the distance between a camera and the position of the transparent geometric information.
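A minimal sketch of this alignment step, building on the node sketch above, might look as follows; the helper names and the Euclidean distance to the node centroid are illustrative assumptions:

```python
# Sketch of the alignment step: opaque shape nodes are grouped by texture,
# transparent ones are ordered by distance from the camera (farthest first).
from collections import defaultdict

def centroid(vertices):
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

def distance(node, camera_pos):
    cx, cy, cz = centroid(node.vertices)
    return ((cx - camera_pos[0]) ** 2 + (cy - camera_pos[1]) ** 2
            + (cz - camera_pos[2]) ** 2) ** 0.5

def align(shape_nodes, camera_pos):
    opaque_by_texture = defaultdict(list)
    transparent = []
    for node in shape_nodes:
        if node.material.transparency > 0.0:
            transparent.append(node)
        else:
            opaque_by_texture[node.material.texture].append(node)
    # farthest-first order so that later blending is correct (back to front)
    transparent.sort(key=lambda n: distance(n, camera_pos), reverse=True)
    return opaque_by_texture, transparent
```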
The first scene graph interpreting unit 200 receives objects of the scene graph parsed by the parsing unit 100 via an input terminal IN2, and interprets the objects of the input scene graph and outputs the same to the detail information classifying unit 220.
When the scene graph is received via an input terminal IN3, the shape node determination unit 300 determines whether a node is a shape node of the scene graph, and outputs the determination results to the sub-node detecting unit 320 or to the detail information classifying unit 220 via an output terminal OUT3. A shape node is a node storing the geometric information and material information that corresponds to the geometric information. Since the shape node is a leaf node, the shape node does not have sub-nodes. If the node inspected by the shape node determination unit 300 is not a shape node, the inspected node is a group node having sub-nodes. If the inspected node is a shape node, the shape node determination unit 300 outputs the determination results to the detail information classifying unit 220 via the output terminal OUT3. However, if the inspected node is not a shape node, the shape node determination unit 300 outputs the determination results to the sub-node detecting unit 320.
The sub-node detecting unit 320 detects sub-nodes connected to a sub-hierarchy of the inspected node. If the determination results indicate that the inspected node has sub-nodes, the sub-node detecting unit 320 detects the sub-nodes in the sub-hierarchy of the inspected node, and outputs the detected results to the shape node determination unit 300.
When the result of detection is received from the sub-node detecting unit 320, the shape node determination unit 300 determines whether the detected sub-nodes are shape nodes.
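For illustration, the cooperation of the shape node determination unit 300 and the sub-node detecting unit 320 can be approximated by a depth-first traversal; the function names and the type test below are assumptions of this sketch, continuing the node classes introduced earlier:

```python
# Sketch: a depth-first walk that hands every shape (leaf) node to the
# classifier and descends into the sub-nodes of group nodes.
def collect_shape_nodes(node, classify):
    if isinstance(node, ShapeNode):      # leaf: geometry + material
        classify(node)
    else:                                # group node: detect its sub-nodes
        for child in node.children:
            collect_shape_nodes(child, classify)
```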
When objects of the interpreted scene graph are received from the first scene graph interpreting unit 200, the detail information classifying unit 220 classifies and stores the interpreted objects using predetermined detail information as a reference, and outputs the stored results to the classification completion determination unit 260.
The matrix storage unit 400 determines which hierarchy of the scene graph the read shape node belongs to, computes a GTM (Global Transformation Matrix) capable of performing conversion from local coordinates of the relevant hierarchy to global coordinates, stores the results in the read shape node, and outputs the read shape node to the detail information access unit 420.
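For illustration only, one way to compute such a GTM is to multiply the local transformation matrices along the path from the scene-graph root down to the shape node; the 4x4 row-major representation and helper names below are assumptions of this sketch:

```python
# Sketch: composing a Global Transformation Matrix (GTM) from the local
# transforms along the hierarchy, so local coordinates can be mapped to
# global coordinates.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def global_transform(path_of_local_matrices):
    gtm = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # identity
    for local in path_of_local_matrices:   # root ... parent ... shape node
        gtm = matmul(gtm, local)
    return gtm
```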
The detail information access unit 420 reads predetermined detail information of the material information of the interpreted shape node and outputs the shape node to the transparent geometric storage unit 440 or to the opaque geometric storage unit 460 according to the predetermined detail information.
In particular, the detail information access unit 420 reads the texture information and/or the transparency information of the predetermined detail information of the material information of the shape node. If the transparency information indicates transparent geometric information, that is, geometric information representing something transparent, the detail information access unit 420 outputs the read shape node to the transparent geometric storage unit 440; if the transparency information indicates opaque geometric information, that is, geometric information representing something opaque, the detail information access unit 420 outputs the read shape node to the opaque geometric storage unit 460.
The transparent geometric storage unit 440 uses a one-dimensional array storage structure to store the read shape node. After the distance between the read shape node and a camera viewing the transparent geometric information is computed, the shape nodes are stored in the array such that shape nodes that are distant from the camera are stored first and shape nodes that are close to the camera are stored later. Such a storing order is intended to prevent image quality deterioration due to an incorrect rendering order of the transparent geometric information.
The opaque geometric storage unit 460 has a ring structure for storing the texture information included in the detail information. The ring structure, in which the first data and the last data are stored in adjacent storage units, denotes a linked-list storage structure.
The opaque geometric storage unit 460 determines whether texture information included in the detail information of the read shape node is already included in the ring. If the texture information is not included in the ring, the texture information is added to the ring and the opaque geometric information of the shape node is stored in a sub-ring of the texture information. If the texture information is already included in the ring, the texture information is not added to the ring, and the opaque geometric information of the shape node is stored in a sub-ring of the texture information which is already stored in the ring.
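A minimal sketch of such a texture ring, assuming a circular singly linked list with one entry per distinct texture and a plain list standing in for each sub-ring, might look as follows; the class layout is illustrative, not mandated by the disclosure:

```python
# Sketch: a circular linked list with one entry per distinct texture; each
# entry keeps the opaque shape nodes that use that texture in its sub-ring
# (modelled here as a plain list).
class TextureRing:
    class _Entry:
        def __init__(self, texture):
            self.texture = texture
            self.shapes = []          # sub-ring of opaque geometry
            self.next = self          # circular: a lone entry links to itself

    def __init__(self):
        self.head = None

    def add(self, shape_node):
        tex = shape_node.material.texture
        entry = self._find(tex)
        if entry is None:             # texture not in the ring yet: append it
            entry = self._Entry(tex)
            if self.head is None:
                self.head = entry
            else:                     # splice in just before the head
                tail = self.head
                while tail.next is not self.head:
                    tail = tail.next
                tail.next = entry
                entry.next = self.head
        entry.shapes.append(shape_node)   # reuse the existing texture entry

    def _find(self, texture):
        if self.head is None:
            return None
        entry = self.head
        while True:
            if entry.texture == texture:
                return entry
            entry = entry.next
            if entry is self.head:
                return None
```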
The scene graph shown in
If the shape node includes opaque geometric information, the texture ring is constructed using the texture information included in the shape nodes and the geometric information that includes the texture information is stored in the sub-rings of the texture information, as shown in
The classification completion determination unit 260 determines whether all the objects of the interpreted scene graph are classified, and outputs the determination results via the output terminal OUT2. At this point, if not all nodes of the interpreted scene graph are classified completely with the predetermined detail information used as a reference, the classification completion determination unit 260 outputs the determination results to the first scene graph interpreting unit 200. The first scene graph interpreting unit 200 reinterprets the scene graph in response to the determination results of the classification completion determination unit 260.
As described above, the scene graph is aligned based on each piece of texture information by the aligning unit, and thus the same texture information need not be stored repeatedly.
The rendering unit 140 interprets the objects of the scene graph aligned by the aligning unit 120 to convert the 3D graphics data into 2D graphics data, and outputs the converted 2D graphics data via the output terminal OUT1.
The aligned scene graph interpreting unit 500 interprets predetermined detail information received via an input terminal IN5 and the transparent and opaque geometric information that corresponds to the predetermined detail information, and outputs the interpreted results to the converting unit 520.
The aligned scene graph interpreting unit 500 receives the texture information and the opaque geometric information stored in the sub-ring of the received texture information from the aligning unit 120, and interprets the received texture information and the opaque geometric information. Also, the aligned scene graph interpreting unit 500 receives the transparent geometric information from the aligning unit 120 and interprets the received transparent geometric information.
The converting unit 520 converts the predetermined detail information, the transparent geometric information, and the opaque geometric information into 2D graphics data, and outputs the converted results via an output terminal OUT5. In particular, with respect to the interpreted transparent geometric information, the converting unit 520 converts the 3D graphics data into the 2D graphics data starting with the transparent geometric information farthest from the camera and finishing with the transparent geometric information closest to the camera.
The rendering unit 140 interprets and converts the opaque geometric information first, and then interprets and converts the transparent geometric information. If rendering is performed on the aligned transparent object after rendering is performed on the aligned opaque object, image quality deterioration that may be generated due to the transparent object can be prevented.
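For illustration, the resulting conversion order can be sketched as follows, where bind_texture() and draw() stand in for the actual material loading and 3D-to-2D conversion steps and are assumptions of this sketch:

```python
# Sketch: every texture is loaded once and all opaque geometry that uses it is
# converted; then the transparent geometry is converted from the farthest node
# to the closest one.
def render(opaque_by_texture, transparent_far_to_near, bind_texture, draw):
    for texture, shapes in opaque_by_texture.items():
        bind_texture(texture)            # material state is read only once
        for shape in shapes:
            draw(shape)                  # 3D -> 2D conversion of the geometry
    for shape in transparent_far_to_near:
        draw(shape)                      # back-to-front, so blending is correct
```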
A method for reconstructing the 3D graphics data according to an embodiment of the present invention will now be described with reference to the accompanying drawings.
First, the 3D graphics data is parsed (operation 1000). The scene graph among the 3D graphics data is also parsed. The scene graph includes the nodes that include geometric information and material information that corresponds to each piece of the geometric information. The scene graph also includes hierarchy information for the nodes.
Next, the objects of the scene graph of the 3D graphics data are aligned using the predetermined detail information of the material information as a reference (operation 1002). At this point, the aligned predetermined detail information is the transparency information and the texture information.
First, the objects of the scene graph are interpreted (operation 1100).
First, it is determined whether one of the nodes is a shape node having the geometric and material information (operation 1200).
If the inspected node is not a shape node, the sub-nodes connected to the inspected node are detected (operation 1202), and operation 1200 is performed again.
However, if the inspected node is a shape node, the operation 1102 is performed.
After the operation 1100, the objects of the interpreted scene graph are classified and stored using the predetermined detail information as a reference (operation 1102).
First, a hierarchical position of the shape node is determined and a GTM is computed (operation 1300).
Then, the predetermined detail information of the material information stored in the shape node is read (operation 1302).
Subsequently, the transparent and opaque geometric information included in the material information of the shape node are stored (operation 1304). At this point, if the material information indicates opaque geometric information, the opaque geometric information is stored in the sub-ring structure of the texture information; and if the material information indicates transparent geometric information, the transparent geometric information is stored in the array structure. That is, depending on the transparency information included in the material information stored in the shape node, the geometric information is divided into a transparent object list and an opaque object list, and the transparent object list is aligned according to the distance between each transparent object and a camera viewing the transparent object. One ring is constructed from the textures used by the opaque object list, and a sub-ring is constructed and stored for each group of objects that include the same texture information.
Then, it is determined whether the objects of the scene graph are all classified (operation 1104).
If the objects of the scene graph are all classified, operation 1004 is performed. However, if the objects of the scene graph are not all classified, operations 1100 through 1104 are repeated.
Next, the aligned scene graph is interpreted and the 3D graphics data is converted into the 2D graphics data (operation 1004). At this point, after the opaque geometric information included in the material information is interpreted and converted, the transparent geometric information is interpreted and converted.
First, the predetermined detail information, and the transparent and opaque geometric information that correspond to the predetermined detail information are interpreted (operation 1400).
Then, the predetermined detail information, the transparent geometric information and the opaque geometric information are converted into the 2D graphics data (operation 1402). At this point, the 3D graphics data is converted into the 2D graphics data based on the order in which the transparent geometric information is aligned in operation 1304.
Unlike conventional methods, in the present invention, 3D graphics data is classified according to its characteristics and is reconstructed according to those characteristics, whereby a faster and more precise conversion of the 3D graphics data into 2D graphics data is guaranteed and the 3D graphics data is managed more efficiently. Such characteristics of the present invention can be applied to a PC (personal computer) used in a fixed place, or to a mobile device such as a PDA (Personal Digital Assistant) or a cellular phone.
As described above, an apparatus and method for reconstructing 3D graphics data according to the present invention can omit operations of repeatedly storing the same texture information by using an aligning unit to classify each node of a scene graph into groups according to texture information.
Also, according to the present invention, since the operation of converting the 3D graphics data into the 2D data can be performed for each group of the texture information, time loss in the data conversion due to changes of the texture information during the conversion process can be minimized.
Also, the present invention classifies the geometric information by using the transparency information and sequentially performs the data conversion according to the distance from a viewing point, thereby converting the 3D graphics data into the 2D data in a more precise manner.
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.