DISCRETE OBJECTS FOR BUILDING VIRTUAL ENVIRONMENTS

Abstract
Described is a virtual environment built by drawing stacks of three-dimensional objects (e.g., discrete blocks) as manipulated by a user. A user manipulates one or more objects, resulting in stack heights being changed, e.g., by adding, removing or moving objects to/from stacks. The stack heights are maintained as sample points, e.g., each point indexed by its associated horizontal location. A graphics processor expands height-related information into visible objects or stacks of objects by computing the vertices for each stack to draw that stack's top surface, front surface and/or side surface based upon the height-related information for that stack. Height information for neighboring stacks may be associated with the sample point, whereby a stack is only drawn to where it is occluded by a neighboring stack, that is, by computing the lower vertices for a surface according to the height of a neighboring stack where appropriate.
Description
BACKGROUND

Computer simulated environments such as virtual worlds are one of the ways that users interact with computer systems and gaming machines. To support real-time interaction, such systems need to be efficient in rendering scenes and in how they handle user interaction, particularly manipulation of data by users to build the environment.


In contemporary technologies related to simulated environments, large scale terrain data and constructive solid geometry (CSG) techniques may be used. Large scale terrain data is frequently represented as heightmaps of sample points, with the terrain surface generated essentially by laying a “sheet” over the sample points. While convenient for rendering geographic information system (GIS) data as obtained from satellites, it is difficult for users to manipulate such data; for example, heightmap surfaces cannot represent vertical walls, and are especially unsuited for use in interior environments, such as buildings. Constructive solid geometry techniques are generally used for interior spaces, but suffer from extreme limitations. For example, CSG modeling tools are non-intuitive and require extensive training, as well as considerable talent to generate desired results. Further, they are generally not suited for exterior spaces such as landscapes.


SUMMARY

This Summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.


Briefly, various aspects of the subject matter described herein are directed towards a technology by which a virtual environment is built by drawing stacks of three-dimensional objects (e.g., blocks) as manipulated by a user. A user provides interaction corresponding to an object being manipulated in the computer-simulated environment, which results in height-related information of the object being determinable. Graphics are rendered to output an updated representation of the computer-simulated environment as a result of the user interaction. For example, a stack of one or more objects is increased in height or decreased in height as a result of the object being added, deleted or moved.


In one aspect, vertices used in rendering the stack are determined based upon a sample point including the height-related information of each stack. The horizontal position of the stack is determined from the index for that sample point. Further, height information for neighboring stacks may be associated with the sample point, whereby when partially occluded by a neighboring stack, the stack only needs to be drawn until it starts being occluded by that neighboring stack, that is, by computing the lower vertices according to the base height of the underlying surface or the height of a neighboring stack, whichever is higher.


Other advantages may become apparent from the following detailed description when taken in conjunction with the drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:



FIG. 1 is a block diagram showing a computing environment having example components for processing user interaction data to build virtual environments with discrete objects.



FIG. 2 is a representation of a virtual environment built with discrete objects.



FIG. 3 is a representation of rendered discrete objects (blocks) stacked for use in a virtual environment.



FIG. 4 is a representation of computing vertices of top surfaces from sample points/height information for rendering the discrete objects of FIG. 3.



FIG. 5 is a representation of computing vertices of front surfaces from sample points/height information for rendering the discrete objects of FIG. 3.



FIG. 6 is a representation of computing vertices of side surfaces from sample points/height information for rendering the discrete objects of FIG. 3.



FIGS. 7 and 8 are representations of how sets of discrete objects may be layered in levels (e.g., as in floors of a building) by changing an underlying base height for each level.



FIG. 9 is a flow diagram showing example steps that may be taken to process user interaction data directed to manipulating discrete objects for rendering in a virtual environment as three-dimensional representations of those objects.





DETAILED DESCRIPTION

Various aspects of the technology described herein are generally directed towards a computer-simulated environment technology in which the “world” is represented to the user as stacks of three-dimensional objects such as blocks (typically cubes or cuboids, also called rectangular prisms) of various materials. In general, an environment (or part of the environment) is built and manipulated by user interaction directed towards piling the objects into stacks, removing objects, and pushing objects around, e.g., as discrete objects. As will be understood, an extremely efficient representation is described herein that allows such manipulation to be rendered in real time.


Note that while cube-like blocks are described and shown herein as examples, particularly because they stack and tile well, other suitable polyhedrons or geometric objects, such as hexagonal prisms, or even non-tiling objects, may be used instead of cubes or in addition to cubes. Thus, it should be understood that any of the examples described herein are non-limiting examples. As such, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in computing and computer-simulated environments in general.


Turning to FIG. 1, there is shown an example system, such as integrated into a computing machine (e.g., a computer system or game console), in which various components handle various stages of discrete object manipulation for building a virtual environment. In general, a user interacts via a mechanism such as a computer program controlled via a human interface device, and sees the results on a display or the like. This user manipulation is represented in FIG. 1 by the input 102 to a virtual environment program 104 running on an operating system 106. The operating system 106 runs on a CPU 108, which is coupled to graphics hardware including a GPU 110.


As described below, based upon the input 102, the program 104 generates data which is received as object vertex buffer stream data 112 and associated object data 114 (e.g., various constants and the like, described below) at a vertex shader program 116 running on the GPU 110. The stream data include height-related information for each object, as also described below. The vertex shader program 116 uses the vertex buffer stream data 112 and the constants 114 to compute a full vertex set 118, which is then rasterized via a pixel shader 120 into output 122, e.g., viewed as rendered objects on a display in a window or other viewing area associated with the program 104.


As exemplified in FIG. 2, the system presents an environment (or some part thereof) via stacks of objects; note that gentle slopes are formed from a series of progressively taller stacks. In general, the use of object stacks allows the efficiency of heightmaps to be combined with the intuitive ease of volumetric mechanisms. Having a landscape built from piles of three-dimensional objects conveys to the user that the interaction is with solid objects (rather than a two-dimensional deforming surface, for example). In other words, because the environment is presented as discrete chunks of matter, there are no user perception or interaction issues that arise from trying to stretch a flat texture over a two-dimensional surface in three-dimensional space, as in other approaches; as a result, textures remain undistorted. This engages the user's real world intuition, as people are used to interacting with solid objects.


In one implementation, the environment is stored and accessed by the program 104 as a series of heightmaps of sample points, e.g., each point indexed according to a horizontal (x, y) grid location, such as in an array. Each sample point thus includes the data that determines the height of the stack of objects centered at that point, which in one implementation is relative to a base height. Each sample point also may contain a material tag, which determines the appearance and physical properties of that stack of object(s), wherein a stack comprises one or more objects. For example, the tag may identify a particular set of texture, shading data and so forth that is maintained and/or computed from part of the associated object data 114.
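
By way of illustration only, the following is a minimal C++ sketch of one possible in-memory layout for such a heightmap of sample points; the type and member names (e.g., SamplePoint, HeightMap) are hypothetical assumptions made for this example and are not part of any particular implementation described herein.

    #include <cstdint>
    #include <vector>

    // Hypothetical sample point: one entry per horizontal (x, y) grid location.
    struct SamplePoint {
        float   height;    // top of the stack, relative to the base height
        uint8_t material;  // tag selecting texture, shading and physical properties
    };

    // Hypothetical heightmap: a flat array of sample points indexed by grid location.
    class HeightMap {
    public:
        HeightMap(int width, int depth)
            : width_(width), depth_(depth), samples_(width * depth, SamplePoint{0.0f, 0}) {}

        // The horizontal position is implicit in the index; no (x, y) is stored per point.
        SamplePoint&       at(int x, int y)       { return samples_[y * width_ + x]; }
        const SamplePoint& at(int x, int y) const { return samples_[y * width_ + x]; }

        int width() const { return width_; }
        int depth() const { return depth_; }

    private:
        int width_, depth_;
        std::vector<SamplePoint> samples_;
    };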


In one implementation, as part of the vertex buffer stream data 112, a single dataset (e.g., point) per stack of objects is passed to the hardware for rendering. The point contains the height of the center of the top of that stack relative to the base, e.g., corresponding to stack 301 in FIG. 3, as well as the heights of the four neighboring stacks in the cardinal directions; (note that only three stacks are visible in FIG. 3 given this particular viewing angle, namely stacks 302-304, but as can be readily appreciated, one stack may be behind the stack 301 and visible from a different viewing angle). Thus, one example of such a structure for representing a stack of one or more objects (e.g., indexed by its x, y coordinate) is <height, left neighbor height, right neighbor height, front neighbor height, rear neighbor height, material tag>, in any suitable ordering. For an implementation in which all of the objects have the same height, the height-related data may be equivalently represented by the count of objects in each stack and computed by a simple multiplication of the per-block height; also, negative heights/object counts are feasible to represent depressions below the base height. Other data may be present in each sample point. In any event, as can be readily appreciated, such a small amount of data per sample point provides an extremely efficient internal representation of a landscape that consumes very small amounts of storage. As described below, in one implementation this is all of the data needed to facilitate rendering in a manner that is very suitable for contemporary consumer graphics hardware.
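
Continuing the hypothetical C++ sketch above (the names and packing order are assumptions, not a normative format), a per-stack element of the vertex buffer stream might be built from the heightmap by gathering the four cardinal neighbor heights; here, neighbors that fall outside the grid are treated as being at the base height, which is one possible convention.

    #include <cstdint>

    // Hypothetical per-stack element streamed to the GPU: five heights plus a material tag.
    struct StackStreamElement {
        float   height;       // height of the center of the stack's top
        float   leftHeight;   // neighbor in the -x direction
        float   rightHeight;  // neighbor in the +x direction
        float   frontHeight;  // neighbor in the -y direction
        float   rearHeight;   // neighbor in the +y direction
        uint8_t material;     // material tag for the whole stack
    };

    // Builds one stream element for the stack at (x, y); HeightMap is the
    // hypothetical container from the earlier sketch.
    StackStreamElement makeElement(const HeightMap& map, int x, int y) {
        auto h = [&](int nx, int ny) -> float {
            if (nx < 0 || ny < 0 || nx >= map.width() || ny >= map.depth())
                return 0.0f;  // assumed convention: off-grid neighbors are at the base height
            return map.at(nx, ny).height;
        };
        return { h(x, y), h(x - 1, y), h(x + 1, y), h(x, y - 1), h(x, y + 1),
                 map.at(x, y).material };
    }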


In this example scheme, the horizontal (x, y) position of the stack is inferred from the vertex's index. Note that this is based upon each block having the same fixed height, length and width; (the length and width may be the same as one another, thus making the top of each stack a square as in FIG. 3; the objects are cubes if the height of each is also the same as the length and width). In a more complex implementation, different height, length and/or width objects may be allowed, possibly with non-rectangular (e.g., hexagonal) tops and/or faces. Similarly, it is feasible to have an implementation in which objects in a stack may have different material tags. Stacking upon fractional parts of objects is another alternative, including a system which allows stacks that are not necessarily multiples of the object length, width and/or height. In such implementations, more data needs to be passed to represent each object and/or its position, or at least for differing subsets of a stack.
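
For instance, with a row-major ordering of the sample points (an assumption made only for illustration; the actual mapping is implementation dependent), the grid location and the world-space stack center can be recovered from a flat index roughly as follows.

    #include <utility>

    // Hypothetical mapping from a flat sample index back to a grid location,
    // assuming row-major order over a grid of the given width.
    std::pair<int, int> gridLocationFromIndex(int index, int gridWidth) {
        return { index % gridWidth, index / gridWidth };
    }

    // The world-space center of the stack then follows from the fixed block
    // length and width (assumed equal here, making each stack top a square).
    std::pair<float, float> stackCenter(int index, int gridWidth, float blockSize) {
        auto [x, y] = gridLocationFromIndex(index, gridWidth);
        return { x * blockSize, y * blockSize };
    }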


As represented in FIGS. 4-6, in which the stacks are built from uniformly-sized blocks of uniform materials per stack, from these five floating point values the hardware generates the vertices of each stack, comprising the position, normal, and texture coordinates that are needed for rendering. Using these five values to generate each stack represents a compression of 192:5 (e.g., fully expanding a cube into six faces of four vertices each, at eight floating point values per vertex for position, normal and texture coordinates, corresponds to 192 values), which significantly reduces memory usage and bus bottlenecks.


In general, each vertex of the height map is expanded into a full cube (conceptually), with those cubes selected from a library of detailed cube types or the like, providing a graphically detailed visualization (in contrast to a straight rendering of the heightmap). In this example, the rendering is done in up to five passes, one pass for each object facing direction that is visible based on the viewing angle (that is, visible to the “camera”). There is always one face direction which is on the far side of the object stacks, and thus need not be rendered. Further, when the camera is above the base height for a group of stacks, the bottom faces also need not be rendered, whereby only four passes are needed. Similarly, when the camera is below the base height, the top faces need not be rendered.
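
One hypothetical way to select these passes on the CPU is sketched below in C++; the enumeration, bounds structure and comparisons are assumptions for illustration, not a description of any particular renderer. A face direction is skipped when the camera lies entirely on its far side of the group of stacks, and the bottom or top faces are skipped depending on whether the camera is above or below the base height.

    #include <vector>

    // Hypothetical face directions and a bounding region for a group of stacks.
    enum class Face { Top, Bottom, PosX, NegX, PosY, NegY };

    struct GroupBounds { float minX, maxX, minY, maxY, baseHeight; };

    // Sketch of pass selection: at most five of the six directions are drawn,
    // and only four when the camera lies outside the group's footprint.
    std::vector<Face> passesToRender(const GroupBounds& g,
                                     float camX, float camY, float camZ) {
        std::vector<Face> passes;
        if (camZ >= g.baseHeight) passes.push_back(Face::Top);
        else                      passes.push_back(Face::Bottom);
        if (camX > g.minX) passes.push_back(Face::PosX);  // some +x faces may be visible
        if (camX < g.maxX) passes.push_back(Face::NegX);  // some -x faces may be visible
        if (camY > g.minY) passes.push_back(Face::PosY);
        if (camY < g.maxY) passes.push_back(Face::NegY);
        return passes;
    }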


For example, FIG. 4 shows computing (and rendering) the top surfaces' positions given the stack heights for four of the stacks. FIG. 5 shows computing/rendering the front faces, and FIG. 6 computing/rendering the side faces. The camera position is known, and thus the angles for the various faces are known for that camera position.


Note that drawing each stack in its full height may result in impractical levels of overdraw, which may prevent rendering from reaching interactive rates (and is inefficient even if not impractical on a given system). To avoid such overdraw processing, the heights of the four cardinal neighbors are provided as part of the sample point's data, whereby the side faces need only be extended down far enough to reach the top of the neighboring stack, or the base height if none. As a result, no processing is wasted drawing a front or side face down to the base when some of it would be occluded by the neighboring stack. Thus, as can be seen, the front face of stack 301 only needs to be drawn to the top of the block 303, which extends to the base surface 330, which is accomplished when computing the vertices for that front face. Similarly, the right side of the block 301 only needs to be drawn to the top of the block 304. Note that this assumes opaque blocks; in an implementation in which a stack or portion thereof is allowed to be somewhat translucent or fully transparent, more drawing is needed behind such a stack. Further, it is feasible to stop drawing based on side occlusion, e.g., the top, front and right surfaces of the stack 302 are occluded in part or in whole by the stacks 301 and 303, whereby drawing the full stack 302 is not necessary to see its visible portion; a shorter stack behind the block 301 need not be (and is not) drawn at all. Note that in one implementation, CPU processing can determine an order to draw the stacks and/or which stacks not to send for drawing based upon the camera angle. Indeed, the overall processing may be divided in virtually any way between the CPU and GPU. As can be readily appreciated, however, significant advantages arise from having the CPU 108 provide only small amounts of data (e.g., the five heights plus the material tag) in the stream data 112 to the GPU 110, including that the memory usage is small and bus bottlenecks are avoided by letting the highly parallel GPU architecture compute and render the various blocks from these small amounts of data.
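
The lower-vertex computation just described might be sketched (hypothetically, in C++) as follows: the bottom edge of a front or side face is raised to the neighboring stack's top, or to the base height if that is higher, and the face is skipped entirely when the neighbor is at least as tall as the stack itself. The names are assumptions, and the sketch assumes opaque blocks as noted above.

    #include <algorithm>
    #include <optional>

    // Hypothetical vertical extent of one front or side face of a stack.
    struct FaceExtent { float top; float bottom; };

    // Computes how far down a front or side face must be drawn, given this
    // stack's height, the neighboring stack's height in that direction, and
    // the base height (all relative to the same reference). Returns no value
    // when the face is completely occluded by the neighbor.
    std::optional<FaceExtent> sideFaceExtent(float stackHeight,
                                             float neighborHeight,
                                             float baseHeight) {
        float bottom = std::max(baseHeight, neighborHeight);
        if (bottom >= stackHeight)
            return std::nullopt;  // the neighbor hides this face entirely
        return FaceExtent{stackHeight, bottom};
    }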


In sum, the system is based on a height field, but unlike other height field methods, the system supports purely vertical walls; (note that heightmap techniques in general do not allow for such vertical walls). Further, vertical walls are natural to construct from the user's perspective, as if placing cinder blocks or toy blocks upon one another. While in one implementation the stacks are restricted to alignment with an underlying virtual grid, objects may be moved from stack to stack, added or deleted. The perception is that the interaction is with solid physical material, rather than deforming a virtual surface.


Turning to another aspect, multiple levels are easy to construct, as each group of blocks can have its own base height. For example, with successive layering by changing the base height for each layer, multiple floors of a building may be presented as in FIGS. 7 and 8. In general, to construct a building with multiple floors, for example, the user lays out the first floor and its walls on the ground (as Base Height0), selects the top of the wall as the next base height (Base Height1) for the next group of objects, and then lays out the next floor and its walls with those objects, and so on. The base heights may be varied by changing the associated object data 114 when appropriate, for example. Because the approach represents solids rather than surfaces, there are no complications resulting from missing bottoms or insides.
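
One hypothetical way to organize such levels, continuing the earlier C++ sketch (the Layer and Building names are assumptions for illustration), is to keep an independent heightmap per level, each with its own base height that offsets all of that level's stacks.

    #include <vector>

    // Hypothetical layer: a group of stacks sharing one base height, e.g., one
    // floor of a building; HeightMap is the container from the earlier sketch.
    struct Layer {
        float     baseHeight;  // e.g., Base Height0 for the ground level
        HeightMap stacks;      // stack heights relative to this layer's base
    };

    // World-space top of a stack within a layer: its relative height plus the
    // layer's base height.
    float worldTop(const Layer& layer, int x, int y) {
        return layer.baseHeight + layer.stacks.at(x, y).height;
    }

    // A multi-story building is then a list of layers, e.g., with each new
    // floor's base height set to the top of the walls of the floor below it.
    using Building = std::vector<Layer>;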


There is thus provided a technology in which, via objects, a user is able to predict what is going to happen to the terrain as it is edited, unlike a typical virtual environment system in which unpredictable visual artifacts such as creases, unusual shadows, and the like are regularly present. With an object-centric user interface, the system is able to provide a user experience that matches, to a significant extent, user expectations and on-screen results. Most users can predict the change to a terrain as they move objects up and down, and the system meets their expectations.


For example, the program may provide a “cube” (or other block) tool from which a type of cube to place may be chosen, so as to start placing cubes into the world. Cubes may be chosen and/or changed by their (or their stack's) material tag, e.g., materials such as grass, dirt, plastic and so forth may be present. When a terrain is formed by the stacks, a shaping tool may be used to reshape (e.g., level, push, and so forth) the terrain. Cubes may be deleted. Further, existing cubes which are touched by a brush or the like may be converted to a different (e.g., a currently active) material.


Water is one possible cube type. In general, the user may place water by activating the cube tool, selecting the water material, and moving a brush to a desired start point before activating. The height of the start point is stored as the water level for the current body of water. A flood fill is initiated from that point in the horizontal plane. The fill stops when it reaches terrain higher than the start point, or when it reaches the edge of the world; (the edge of the world is defined by terrain boundaries, and water can only exist as a layer on top of terrain). The perceived effect is that of water having been poured into the level, having flowed downhill to fill contiguous depressions, and having been poured continuously until the water reached the level of the start point.
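
The flood fill described above might be sketched (hypothetically, in C++) as a breadth-first fill over the heightmap from the earlier sketch, bounded by terrain higher than the start point and by the edge of the grid; the function name and return value are assumptions for illustration.

    #include <queue>
    #include <utility>
    #include <vector>

    // Hypothetical water flood fill: returns the horizontal cells covered by
    // the new body of water; a real implementation would also record the
    // stored water level for that body.
    std::vector<std::pair<int, int>> floodFillWater(const HeightMap& terrain,
                                                    int startX, int startY) {
        const float waterLevel = terrain.at(startX, startY).height;
        std::vector<std::pair<int, int>> filled;
        std::vector<bool> visited(terrain.width() * terrain.depth(), false);
        std::queue<std::pair<int, int>> frontier;
        frontier.push({startX, startY});
        visited[startY * terrain.width() + startX] = true;

        while (!frontier.empty()) {
            auto [x, y] = frontier.front();
            frontier.pop();
            filled.push_back({x, y});
            const int dx[] = {1, -1, 0, 0};
            const int dy[] = {0, 0, 1, -1};
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= terrain.width() || ny >= terrain.depth())
                    continue;  // the fill stops at the edge of the world
                int idx = ny * terrain.width() + nx;
                if (visited[idx] || terrain.at(nx, ny).height > waterLevel)
                    continue;  // the fill stops at terrain higher than the start point
                visited[idx] = true;
                frontier.push({nx, ny});
            }
        }
        return filled;
    }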



FIG. 9 summarizes various operations of one such system, beginning at step 902 where user interaction data (e.g., commands and directional data) are received, such as corresponding to adding, removing or moving a discrete object. The program accordingly adjusts the data to account for the change in height of each stack involved. Step 904 represents the height information of the sample point being provided from the CPU to the GPU. Note that when the last object of a stack is removed, no data need be sent for that sample point in an implementation in which the visible stacks are redrawn in each frame (or at least each changed frame); that is, there is no need to “erase” a deleted object, as it is simply not redrawn in the next frame.
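
A hypothetical C++ sketch of this step (the command set, function name and per-block height parameter are assumptions for illustration) adjusts the affected sample points in the heightmap from the earlier sketch before the updated height information is streamed to the GPU.

    // Hypothetical user interaction handling for FIG. 9, step 902: each command
    // adjusts the affected stack heights by one block height; the updated sample
    // points are then streamed to the GPU (step 904).
    enum class Command { Add, Remove, Move };

    void applyCommand(HeightMap& map, Command cmd, float blockHeight,
                      int x, int y, int destX = 0, int destY = 0) {
        switch (cmd) {
        case Command::Add:                        // one block placed on the stack
            map.at(x, y).height += blockHeight;
            break;
        case Command::Remove:                     // one block taken off the stack
            map.at(x, y).height -= blockHeight;
            break;
        case Command::Move:                       // one block moved between stacks
            map.at(x, y).height         -= blockHeight;
            map.at(destX, destY).height += blockHeight;
            break;
        }
    }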


Step 906 represents the top surface of the stack being drawn by computing the vertices, which is based upon the stack height information and the camera angle. Any shading may also be performed. Steps 908 and 910 repeat the computations and rendering for the front and side surfaces.
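
As a purely illustrative C++ sketch of the top-surface computation of step 906 (the vertex layout and names are assumptions), the four corner vertices of a stack's top face can be derived from the stack's grid location, its streamed height and the fixed block size; the front and side faces of steps 908 and 910 follow the same pattern, with their lower edges determined by the neighbor heights as described above.

    #include <array>

    // Hypothetical vertex and top-face expansion for one stack: the center
    // comes from the grid location, the height from the streamed sample, and
    // the four corners are half a block away from the center in x and y.
    struct Vertex { float x, y, z; };

    std::array<Vertex, 4> topFaceVertices(int gridX, int gridY,
                                          float stackHeight, float baseHeight,
                                          float blockSize) {
        float cx = gridX * blockSize;
        float cy = gridY * blockSize;
        float z  = baseHeight + stackHeight;  // world-space height of the top
        float h  = blockSize * 0.5f;
        return {{ { cx - h, cy - h, z }, { cx + h, cy - h, z },
                  { cx + h, cy + h, z }, { cx - h, cy + h, z } }};
    }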


While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.

Claims
  • 1. In a computing machine that generates output representing a computer-simulated environment, a method comprising, receiving user interaction corresponding to an object being manipulated in the computer-simulated environment, communicating data to graphics hardware, the data corresponding to height-related information of the object as a result of the manipulation, and rendering graphics based on the height-related information to output an updated representation of the computer-simulated environment as a result of the user interaction.
  • 2. The method of claim 1 wherein receiving the user interaction comprises receiving manipulation instructions to add the object at a horizontal location associated with the computer-simulated environment, and further comprising, increasing height-related information corresponding to that location to account for the object being added.
  • 3. The method of claim 1 wherein receiving the user interaction comprises receiving manipulation instructions to delete the object from a horizontal location associated with the computer-simulated environment, and further comprising, decreasing height-related information corresponding to that location to account for the object being deleted.
  • 4. The method of claim 1 wherein receiving the user interaction comprises receiving manipulation instructions to move the object from a first horizontal location associated with the computer-simulated environment to a second horizontal location, and further comprising, decreasing first height-related information corresponding to the first location and increasing second height-related information corresponding to the second location to account for the object being moved.
  • 5. The method of claim 1 further comprising, computing vertices for a top surface of the object to be rendered based upon the height-related information.
  • 6. The method of claim 1 further comprising, computing vertices for a front or side surface to be rendered based upon the height-related information.
  • 7. The method of claim 6 wherein computing the vertices for the front or side surface comprises, determining the vertices based upon height-related information of at least one neighboring location.
  • 8. The method of claim 6 wherein computing the vertices for a front or side surface to be rendered based upon the height-related information comprises determining the vertices according to a current base height.
  • 9. The method of claim 8 further comprising, changing the current base height to provide a new base height corresponding to another level.
  • 10. The method of claim 1 further comprising, providing a user interface tool by which user interaction manipulates a plurality of objects.
  • 11. In a computing machine that generates output representing a computer-simulated environment, a system comprising, a mechanism including a user interface by which a user stacks a representation of an object at a horizontal location, the object stacked at the horizontal location upon a base surface or upon one or more other objects or fractional parts of objects, the mechanism communicating height-related information of the object to graphics hardware that renders the object at a vertical position that corresponds to the height-related information.
  • 12. The system of claim 11 wherein the object is added upon a stack of one or more other objects, and further comprising, rendering the stack to include the object and at least part of the stack below the object.
  • 13. The system of claim 11 wherein the system determines the horizontal location from an index associated with the height-related information.
  • 14. The system of claim 11 wherein the data corresponding to the height-related information of an object comprises a set of data indexed by a horizontal location, the set of data including the height-related information including an object height and one or more neighboring heights, and wherein the rendering mechanism draws a side surface by computing a set of vertices that is determined at least in part by the one or more neighboring heights.
  • 15. One or more computer-readable media having computer-executable instructions, which when executed perform steps, comprising: receiving a stream of height-related information at a graphics processor, the stream corresponding to stack heights at various horizontal locations of a virtual environment, and for each horizontal location, processing the height-related information to render a stack at that horizontal location with a stack height determined from the height-related information.
  • 16. The one or more computer-readable media of claim 15 having further computer-executable instructions comprising, receiving user input that varies the height of a stack by adding an object to a stack or deleting an object from a stack.
  • 17. The one or more computer-readable media of claim 15 wherein processing the height-related information comprises determining the horizontal location from an index associated with the height-related information.
  • 18. The one or more computer-readable media of claim 15 wherein processing the height-related information comprises expanding the height-related information into a conceptual stack of at least one object, including drawing a top surface of each stack by computing vertices based upon the height-related information corresponding to that stack.
  • 19. The one or more computer-readable media of claim 15 wherein processing the height-related information comprises expanding the height-related information into a conceptual stack of at least one object, including drawing a front surface or a side surface of each stack, or both a front surface and a side surface, by computing vertices based upon the height-related information corresponding to that stack.
  • 20. The one or more computer-readable media of claim 19 wherein computing the vertices comprises determining lower vertices based upon height-related information corresponding to one or more neighboring stacks.