Parallel graphics system employing multiple graphics processing pipelines with multiple graphics processing units (GPUs) and supporting an object division mode of parallel graphics processing using programmable pixel or vertex processing resources provided with the GPUs

Information

  • Patent Grant
  • Patent Number
    8,497,865
  • Date Filed
    Sunday, December 31, 2006
  • Date Issued
    Tuesday, July 30, 2013
Abstract
A multiple graphics processing unit (GPU) based parallel graphics system comprising multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation. Each GPU comprises video memory, a geometry processing subsystem and a pixel processing subsystem. According to the principles of the present invention, pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and the video memory and the pixel processing subsystem in the primary GPU are used to carry out the image recomposition process, without the need for dedicated or specialized apparatus.
Description
BACKGROUND OF THE INVENTION

1. Field of Invention


The present invention relates to new and improved ways of and means for carrying out the object division method of parallel graphics rendering on multiple GPU-based graphics platforms associated with diverse types of computing machinery.


2. Brief Description of the State of the Knowledge in the Art


There is a great demand for high-performance three-dimensional (3D) computer graphics systems in the fields of product design, simulation, virtual reality, video gaming, scientific research, and personal computing (PC). Clearly, a major goal of the computer graphics industry is to realize real-time photo-realistic 3D imagery on PC-based workstations, desktops, laptops, and mobile computing devices.


In general, there are two fundamentally different classes of machines in the 3D computer graphics field, namely: (1) Graphical Display List (GDL) based systems, wherein 3D scenes and objects are represented as a complex of geometric models (primitives) in 3D continuous geometric space, and 2D views or images of such 3D scenes are computed using geometrical projection, ray tracing, and light scattering/reflection/absorption modeling techniques, typically based upon laws of physics; and (2) VOlume ELement (VOXEL) based systems, wherein 3D scenes and objects are represented as a complex of voxels (x,y,z volume elements) represented in 3D Cartesian Space, and 2D views or images of such 3D voxel-based scenes are also computed using geometrical projection, ray tracing, and light scattering/reflection/absorption modeling techniques, again typically based upon laws of physics. Examples of early GDL-based graphics systems are disclosed in U.S. Pat. No. 4,862,155, whereas examples of early voxel-based 3D graphics systems are disclosed in U.S. Pat. No. 4,985,856, each incorporated herein by reference in its entirety.


In the contemporary period, most PC-based computing systems include a 3D graphics subsystem based upon the “graphics display list (GDL)” system design. In such a graphics system design, “objects” within a 3D scene are represented by 3D geometrical models, and these geometrical models are typically constructed from continuous-type 3D geometric representations including, for example, 3D straight line segments, planar polygons, polyhedra, cubic polynomial curves, surfaces, volumes, circles, and quadratic objects such as spheres, cones, and cylinders. These 3D geometrical representations are used to model various parts of the 3D scene or object, and are expressed in the form of mathematical functions evaluated over particular values of coordinates in continuous Cartesian space. Typically, the 3D geometrical representations of the 3D geometric model are stored in the format of a graphical display list (i.e. a structured collection of 2D and 3D geometric primitives). Currently, planar polygons, mathematically described by a set of vertices, are the most popular form of 3D geometric representation.
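For concreteness, a graphical display list of the kind described above might be organized along the following lines. This C sketch is purely illustrative; the type and field names are hypothetical and are not drawn from any particular prior art system.

/* Purely illustrative sketch of a graphical display list: a structured
 * collection of geometric primitives, each described by a set of
 * vertices in continuous Cartesian space. All names are hypothetical. */
#include <stddef.h>

typedef struct { float x, y, z; } Vertex3D;

typedef enum { PRIM_LINE, PRIM_POLYGON, PRIM_QUADRIC } PrimitiveType;

typedef struct {
    PrimitiveType type;     /* kind of geometric primitive               */
    size_t        nverts;   /* vertex count (e.g. 3 or more per polygon) */
    Vertex3D     *verts;    /* vertex coordinates in continuous space    */
} Primitive;

typedef struct {
    size_t     nprims;      /* number of primitives in the 3D scene      */
    Primitive *prims;       /* the display list proper                   */
} DisplayList;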


Once modeled using continuous 3D geometrical representations, the 3D scene is graphically displayed (as a 2D view of the 3D geometrical model) along a particular viewing direction, by repeatedly scan-converting the graphical display list. At the current state of the art, the scan-conversion process can be viewed as a “computational geometry” process which involves the use of (i) a geometry processor (i.e. geometry processing subsystem or engine) and (ii) a pixel processor (i.e. pixel processing subsystem or engine), which together transform (i.e. project, shade and color) the display-list objects and bit-mapped textures, respectively, into an unstructured matrix of pixels. The composed set of pixel data is stored within a 2D frame buffer (together with its associated Z depth buffer) before being transmitted to and displayed on the surface of a display screen.


A video processor/engine refreshes the display screen using the pixel data stored in the 2D frame buffer. Any change in the 3D scene requires that the geometry and pixel processors repeat the whole computationally-intensive pixel-generation pipeline process, again and again, to meet the requirements of the graphics application at hand. For every small change or modification in the viewing direction of the human system user, the graphical display list must be manipulated and repeatedly scan-converted. This, in turn, causes both computational and buffer contention challenges which slow down the working rate of the graphics system. To accelerate this computationally-intensive pipeline process, custom hardware, including geometry, pixel and video engines, has been developed and incorporated into most conventional “graphics display-list” system designs.


In high-performance graphics applications, the number of computations required to render a 3D scene (from its underlying graphical display lists) and produce high-resolution graphical projections greatly exceeds the capabilities of systems employing a single graphics processing unit (GPU). Consequently, the use of parallel graphics pipelines, and multiple graphics processing units (GPUs), have become the rule for high-performance graphics system architecture and design.


In order to distribute the computational workload associated with interactive parallel graphics rendering processes, three different methods of graphics rendering have been developed over the years. These three basic methods of parallel graphics rendering are illustrated in FIGS. 1A through 1C. While these three methods of parallel graphics rendering are different in ways which will be described below, they each have five (5) basic stages or phases in common, namely:


(1) the Decomposition Phase, wherein the 3D scene or object is analyzed and its corresponding graphics display list data and commands are assigned to particular graphics pipelines available on the parallel multiple GPU-based graphics platform;


(2) the Distribution Phase, wherein the graphics display list data and commands are distributed to particular available graphics pipelines determined during the Decomposition Phase;


(3) the Rendering Phase, wherein the geometry processing subsystem/engine and the pixel processing subsystem/engine along each graphics pipeline of the parallel graphics platform use the graphics display list data and commands distributed to their pipeline, and transform (i.e. project, shade and color) the display-list objects and bit-mapped textures into a subset of the unstructured matrix of pixels;


(4) the Recomposition Phase, wherein the parallel graphics platform uses the multiple sets of pixel data generated by each graphics pipeline to synthesize (or compose) a final set of pixels that are representative of the 3D scene (taken along the specified viewing direction), and this final set of pixel data is then stored in a frame buffer; and


(5) the Display Phase, wherein the final set of pixel data is retrieved from the frame buffer and provided to the screen of the display device of the system. As will be explained below with reference to FIGS. 1A through 1C, each of these methods of parallel graphics rendering has both advantages and disadvantages.


Image Division Method of Parallel Graphics Rendering


As illustrated in FIG. 1A, the Image Division (Sort-First) Method of Parallel Graphics Rendering distributes all graphics display list data and commands to each of the graphics pipelines, and decomposes the final view (i.e. projected 2D image) in Screen Space, so that each graphical contributor (e.g. graphics pipeline and GPU) renders a 2D tile of the final view. This mode has limited scalability due to the parallel overhead caused by objects rendered on multiple tiles.


Time Division (DPlex) Method of Parallel Graphics Rendering


As illustrated in FIG. 1B, the Time Division (DPlex) Method of Parallel Graphics Rendering distributes all display list graphics data and commands associated with a first scene to the first graphics pipeline, and all graphics display list data and commands associated with a second/subsequent scene to the second graphics pipeline, so that each graphics pipeline (and its individual rendering node or GPU) handles the processing of a full, alternating image frame. Notably, while this method scales very well, the latency between user input and final display increases with scale, which is often irritating for the user.


Object Division (Sort-Last) Method of Parallel Graphics Rendering


As illustrated in FIG. 1C, the Object Division (Sort-Last) Method of Parallel Graphics Rendering decomposes the 3D scene (i.e. rendered database), distributes the graphics display list data and commands associated with a portion of the scene to a particular graphics pipeline (i.e. rendering unit), and recombines the partially rendered pixel frames during recomposition. This mode scales the rendering process very well, but implementation of the recomposition step is very expensive due to the amount of pixel data processing required during recomposition. Consequently, the practice of the Object Division Method of Parallel Graphics Rendering has not been commercially feasible in the affordable PC computing marketplace, while the Image and Time Division Methods of Parallel Graphics Rendering are widely practiced in commercial PC-based graphics products, as indicated above.


A primary and highly desirable advantage associated with the Object Division Method of Parallel Graphics Rendering stems from dividing the stream of graphic display commands and data into partial streams, targeted to different GPUs, thereby removing traditional bottlenecks associated with polygon and texture data processing. Applications with massive polygon data (such as CAD) or massive texture data (such as high-quality video games) are able to take the greatest advantage of this kind of graphics rendering parallelism. Thus, there is a real need to provide CAD workers and video gamers, who typically use PC-based computing systems and workstations, with access to computer graphics subsystems that support the Object Division Method of Parallel Graphics Rendering.


Particular Prior Art Examples of Time Division and Image Division Methods of Parallel Graphics Rendering


In order to increase the level of parallelism, and thus the rendering performance, of conventional PC-based graphics systems (i.e. beyond the limitations of a single GPU), it is now popular for conventional PC computing platforms to practice the Image and Time Division Methods of Parallel Graphics Rendering using either multiple GPU-based graphics cards, or multiple GPU chips on a graphics card. As shown in FIGS. 2A and 2B, this parallel graphics processing technique is practiced today in a number of commercial products (e.g. the SLI™ product design by Nvidia, and the Crossfire™ product design by ATI), employing a dual-card graphics subsystem, and supporting both the Image and Time Division Methods/Modes of Parallel Graphics Rendering.


As shown in FIG. 2A, the PC motherboard is populated with a CPU (201) that is equipped with a memory bridge (i.e. “chipset,” 203) (e.g. nForce 680 by Nvidia). The memory bridge supports two PCI Express buses (207, 208) which are capable of driving two external graphics cards. As shown in FIG. 2A, the system comprises a primary graphics card (205), to which a display device (206) such as an LCD panel is attached, and a secondary graphics card (204). As shown in FIG. 2B, the architecture of a typical Shader-based graphics card (204, 205) comprises a GPU (212) and video memory (213). The GPU comprises a geometry subsystem (which is transform bound) and a pixel subsystem (which is fill bound). The video memory (213) comprises texture memory (218), a frame buffer (216), a command buffer, and a vertex buffer. The stream of graphics (display list) data and commands, originating at the host CPU, describes the 3D scene in terms of polygon vertices and bit-mapped textures. As shown, this data stream is provided to the video memory (213) via the PCI Express bus (207 or 208). In the shader-based GPU, the texture memory (218) plays a central role, and is accessible to and from the chip input, the vertex and fragment shaders, the FB (216), and the blend & raster ops unit (217). As shown, the shader hardware (214, 215) is realized as a programmable parallel array of processing elements running shader source code written in a graphics-specific programming language. Notably, the Vertex Shader (214) specializes in vertex data processing, whereas the Fragment Shader (215) specializes in pixel data processing.


Particular Prior Art Examples of Object Division Method of Parallel Graphics Rendering


In FIG. 3A1, there is shown a parallel graphics system supporting the Object Division Method of Parallel Graphics Rendering, as illustrated in FIG. 1C, but with further emphasis on the Recomposition Stage, which is shown carried out using specialized apparatus. In FIG. 3A2, the basic object division recomposition process carried out by such specialized apparatus is schematically illustrated in the form of a flow chart. As described in FIG. 3A2, the first step of this pixel composition process involves accessing images (pixel data sets) from first and second frame buffers (FB1, FB2), each having a color value buffer and a depth value (Z) buffer. The second step involves performing a relatively simple process for each x,y pixel value in the frame buffers: advance to the next x,y location; for x,y, compare the depth values in the Z buffers and select the lower value, which corresponds to the pixel value closest to the viewer (along the specified viewing direction); and move the corresponding pixel value from the color buffer associated with the winning Z buffer to the final FB. Then determine whether or not all x,y values in the image have been processed as described above. If not, return to the beginning of the processing loop as shown, and continue to process all pixel values in the image until the composition process is completed.
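Expressed in code, the per-pixel composition process of FIG. 3A2 reduces to the following loop. This is a minimal C sketch, assuming a row-major buffer layout with one floating-point depth value and one packed 32-bit color value per pixel; the function and parameter names are illustrative only.

/* Minimal C sketch of the FIG. 3A2 depth-compare composition loop.
 * The buffer layout (row-major, one float depth and one packed 32-bit
 * color value per pixel) is assumed for illustration. */
void compose_frames(const unsigned *color1, const float *z1,   /* FB1 */
                    const unsigned *color2, const float *z2,   /* FB2 */
                    unsigned *final_color, int width, int height)
{
    for (int i = 0; i < width * height; i++) {
        /* the lower z value corresponds to the pixel closest to the
         * viewer along the viewing direction, so its color value wins */
        final_color[i] = (z1[i] <= z2[i]) ? color1[i] : color2[i];
    }
}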


In FIGS. 3B1 and 3B2, there is shown a prior art multiple GPU-based graphics subsystem having multiple graphics pipelines with multiple GPUs supporting the Object Division Method of Parallel Graphics Rendering, using dedicated/specialized hardware to perform the basic image recomposition process illustrated in FIG. 3A2. Examples of prior art parallel graphics systems based on this design include: the Chromium™ Parallel Graphics System developed by researchers and engineers at Stanford University, employing Binaryswap SPUs to carry out the image (re)composition process illustrated in FIG. 3A2; HP's PixelFlow (following development at the University of North Carolina at Chapel Hill), employing parallel pipeline hardware; and SGI's Origin 2000 supercomputer Shared Memory Compositor method (also known as “Direct Send”), on a distributed memory architecture.


As shown in FIG. 3B1, the application's rendering code (301), which is representative of a 3D scene to be viewed from a particular viewing direction, is decomposed into two streams of graphics (display list) data and commands (302). These streams of graphics data and commands (302) are distributed (303) to the multiple graphics processing pipelines for rendering (304). Each GPU in its pipeline participates in only a fraction of the overall computational workload. Each frame buffer (FB) holds a full 2D image (i.e. frame of pixel data) of a sub-scene. According to this prior art method of Object Division, the full image of the 3D scene must then be composed from the viewing direction, using these two full 2D images, and this compositing process involves testing each and every pixel location for the pixel that is closest to the eye of the viewer (305). Consequently, recomposition according to this prior art Object Division Method of Parallel Graphics Rendering is expensive due to the amount of pixel data processing required during recomposition. The recomposed final FB is ultimately sent to the display device (306) for display to the human viewer.


As shown in FIG. 3B2, the dedicated/specialized hardware-based recomposition stage/phase of the object division mode of the parallel graphics rendering process of FIG. 3B1 comprises multiple stages of frame buffers (FBs), wherein each graphics pipeline has at least one FB. In each FB, there is buffered image data comprising pixel color and depth (z) values. These pixel color and depth (z) values are processed according to the basic pixel processing algorithm of FIG. 3A2, so as to ultimately compose the final pixel data set (i.e. image), which is stored in the final frame buffer. The pixel data stored in the final frame buffer is then ultimately used to display the image on the screen of the display device using conventional video processing and refreshing techniques generally known in the art. Notably, the more graphics processing pipelines (GPUs) employed in the parallel graphics rendering platform, the more complex and expensive the dedicated hardware required to practice this prior art hardware-based recomposition technique becomes.


In FIGS. 3C1, 3C2 and 3C3, there is shown a prior art multiple GPU-based graphics subsystem having multiple graphics pipelines with multiple GPUs supporting the Object Division Method of Parallel Graphics Rendering, using a dedicated/specialized software solution to perform the basic image recomposition process illustrated in FIG. 3A2. An example of a prior art parallel graphics system based on this design is the Onyx® Parallel Graphics System developed by SGI, employing the pseudocode illustrated in FIGS. 3C2 and 3C3 to carry out the image (re)composition process illustrated in FIG. 3A2.


As shown in FIG. 3C1, the application's rendering code (301), which is representative of a 3D scene to be viewed from a particular viewing direction, is decomposed into two streams of graphics (display list) data and commands (302). These streams of graphics data and commands (302) are distributed (303) to the multiple graphics processing pipelines for rendering (304). Each GPU in its pipeline participates in only a fraction of the overall computational workload. Each frame buffer (FB) holds a full 2D image (i.e. frame of pixel data) of a sub-scene. According to this prior art method of Object Division, the full image of the 3D scene must then be composed from the viewing direction, using these two full 2D images, and this compositing process involves testing each and every pixel location for the pixel that is closest to the eye of the viewer (305). Consequently, recomposition according to this prior art Object Division Method of Parallel Graphics Rendering is expensive due to the amount of pixel data processing required during recomposition. The recomposed final FB is ultimately sent to the display device (306) for display to the human viewer.


In FIG. 3C2, the software-based recomposition stage/phase of the object division mode of the parallel graphics rendering process of FIG. 3C1 is schematically illustrated in greater detail. As shown, this prior art image (re)composition process involves using a dedicated/specialized computational platform to implement the basic pixel processing algorithm of FIG. 3A2. In general, this dedicated/specialized computational platform comprises a plurality of CPUs for accessing and composite-processing the pixel color and z depth values of the pixel data sets buffered in the frame buffers (FBs) of each graphics pipeline supported on the parallel graphics platform. In the FB of each graphics pipeline (i.e. GPU), there is buffered image data comprising pixel color and depth (z) values. In FIG. 3C2, there is shown an illustrative example of a dedicated software-based recomposition platform employing two CPUs, and a final frame buffer FB0, to support a dual GPU-based parallel graphics rendering platform. The pixel color and depth (z) values stored in FB1 and FB2 are processed according to the basic pixel processing algorithm of FIG. 3A2, so as to ultimately compose the final pixel data set (i.e. image), which is stored in the final frame buffer FB0. FIG. 3C3 shows the pseudocode that is executed by each CPU on the recomposition platform in order to carry out the pixel processing algorithm described in FIG. 3A2. The pixel data stored in the final frame buffer is then ultimately used to display the image on the screen of the display device using conventional video processing and refreshing techniques generally known in the art. Notably, the more graphics processing pipelines (GPUs) employed in the parallel graphics rendering platform, the more complex and expensive the software-based recomposition platform required to practice this prior art software-based recomposition technique becomes.
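Such a multi-CPU recomposition platform would typically partition the pixel processing across its processors. The following C sketch assumes a simple interleaved-scanline partition; the actual partition used by the FIG. 3C3 pseudocode may differ.

/* Hypothetical interleaved-scanline partition of the FIG. 3A2 algorithm
 * across ncpus processors (cf. the two-CPU platform of FIG. 3C2):
 * processor p composes every row r with r % ncpus == p, writing into
 * the final frame buffer fb0. */
void compose_rows(int p, int ncpus,
                  const unsigned *c1, const float *z1,   /* FB1 */
                  const unsigned *c2, const float *z2,   /* FB2 */
                  unsigned *fb0, int width, int height)
{
    for (int r = p; r < height; r += ncpus)
        for (int x = 0; x < width; x++) {
            int i = r * width + x;
            fb0[i] = (z1[i] <= z2[i]) ? c1[i] : c2[i];
        }
}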


In both of the prior art parallel graphics systems described in FIGS. 3B1-3B2 and 3C1-3C3, the image recomposition step requires the use of dedicated or otherwise specialized computational apparatus which, when taken together with the cost associated with the computational machinery within the multiple GPUs supporting the rendering phase of the parallel graphics process, has put the Object Division Method outside the limits of practicality and feasibility for use in connection with PC-based computing systems.


Thus, there is a great need in the art for a new and improved way of and means for practicing the object division method of parallel graphics rendering in computer graphics systems, while avoiding the shortcomings and drawbacks of such prior art methodologies and apparatus.


OBJECTS AND SUMMARY OF THE PRESENT INVENTION

Accordingly, a primary object of the present invention is to provide a new and improved method of and apparatus for supporting the object division method of parallel graphics rendering, while avoiding the shortcomings and drawbacks associated with prior art apparatus and methodologies.


Another object of the present invention is to provide such apparatus in the form of a multiple graphics processing unit (GPU) based parallel graphics system having multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein each GPU comprises video memory, a geometry processing subsystem and a pixel processing subsystem, wherein pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and wherein the video memory and the pixel processing subsystem in the primary GPU are used to carry out the image recomposition phase of the object division mode of parallel graphics rendering process.


Another object of the present invention is to provide a multiple GPU-based parallel graphics system having multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein each GPU comprises video memory, a geometry processing subsystem and a pixel processing subsystem, wherein pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and wherein the video memory and the pixel processing subsystem in the primary GPU are used to carry out the image recomposition phase of the object division mode of parallel graphics rendering process.


Another object of the present invention is to provide a multiple GPU-based parallel graphics system having multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein each GPU comprises video memory, a geometry processing subsystem and a pixel processing subsystem, wherein pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and wherein the video memory and both the geometry and pixel processing subsystems in the primary GPU are used to carry out the image recomposition phase of the object division mode of parallel graphics rendering process.


Another object of the present invention is to provide such a multiple GPU-based parallel graphics system having multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein the video memory of each GPU includes texture memory and a pixel frame buffer, wherein the geometry processing subsystem includes a vertex shading unit, wherein the pixel processing subsystem includes a fragment/pixel shading unit, wherein pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and wherein the texture memory and the fragment/pixel shading unit are used to carry out the image recomposition phase of the object division mode of parallel graphics rendering process.


Another object of the present invention is to provide such a multiple GPU-based parallel graphics system having multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein the video memory of each GPU includes texture memory and a pixel frame buffer, wherein the geometry processing subsystem includes a vertex shading unit, wherein the pixel processing subsystem includes a fragment/pixel shading unit, wherein pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and wherein the texture memory and the vertex shading unit are used to carry out the image recomposition phase of the object division mode of parallel graphics rendering process.


Another object of the present invention is to provide such a multiple GPU-based parallel graphics system having multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and wherein the texture memory and the vertex shading unit are used to carry out the image recomposition phase of the object division mode of parallel graphics rendering process.


Another object of the present invention is to provide such a multiple GPU-based parallel graphics system, wherein the recomposition stage of the object division mode of the parallel graphics rendering process can be carried out using conventional GPU-based graphics cards originally designed to support the image and time division modes of a parallel graphics rendering process.


Another object of the present invention is to provide such a multiple GPU-based parallel graphics system, wherein the pixel frame buffers within multiple GPUs of a parallel graphics pipeline can be composited without use of specialized and/or additional components of hardware or software.


Another object of the present invention is to provide such a multiple GPU-based parallel graphics system, having low design and manufacturing cost as well as relatively low architectural complexity that is highly suitable for PC-based computing systems, as well as video game consoles and systems widely used in the consumer entertainment industry.


Another object of the present invention is to provide a multiple GPU-based parallel graphics system, having multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein the object division mode can be implemented entirely in software, using the same computational resources provided within the multiple GPUs for the purpose of carrying out the rendering stage of the parallel graphics rendering process (i.e. involving geometry projection, ray tracing, shading and texture mapping), and at a cost of implementation that is comparable to the cost of implementing the image division and time division modes of a parallel graphics rendering process.


Another object of the present invention is to provide a multiple GPU-based parallel graphics system, having multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, which does not require compositing in the main, shared or distributed memory of the host system (e.g. involving the movement of pixel data from the frame buffers or FBs to main memory, processing the pixel data in the CPU of the host for composition, and moving the result out to the GPU for display), thereby avoiding the use of expensive procedures and resources of the system (e.g. buses, caches, memory, and CPU).


Another object of the present invention is to provide a novel method of operating a multiple GPU-based parallel graphics system, having multiple graphics processing pipelines (e.g. cards) with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein implementation of the pixel composition phase of the parallel graphics rendering process is carried out using the computational resources within the GPUs, thereby avoiding the need for dedicated or specialized pixel image compositing hardware and/or software based apparatus.


Another object of the present invention is to provide a novel method of converting a multiple GPU-based parallel graphics system supporting a parallel graphics rendering process having a time and/or image division mode of operation, into a multiple GPU-based parallel graphics system supporting a parallel graphics rendering process having an object division mode of operation.


Another object of the present invention is to provide a novel process of parallel graphics rendering having an object division mode of operation, which can be implemented on conventional as well as non-conventional multiple GPU-based graphics platforms.


Another object of the present invention is to provide a novel process of parallel graphics rendering having an object division mode of operation, which can be implemented on any special-purpose graphics system requiring image composition or comparable pixel compositing processes.


Another object of the present invention is to provide a novel parallel graphics rendering system supporting an object division mode of operation, which can be implemented on conventional multiple GPU-based platforms so as to replace the image division or time division parallelism supported by the original equipment manufacturer (OEM).


Another object of the present invention is to provide a novel parallel graphics rendering system supporting an object division mode of operation, wherein the vendors of conventional multiple GPU-based graphics platforms can easily incorporate object division modes of operation into their image division and time division modes of operation.


Another object of the present invention is to provide a novel method of parallel graphics rendering which enables the construction of low cost multiple GPU-based cards supporting an object division mode of parallel graphics rendering, with or without other time and/or image division parallelization modes.


Another object of the present invention is to provide a novel method of parallel graphics rendering that enables the construction of reduced-cost silicon chips having multiple GPUs that support an object division mode of parallel graphics rendering for diverse end-user applications.


Another object of the present invention is to provide a novel parallel graphics rendering system supporting an object division mode of operation, which can be embodied within an integrated graphics device (IGD) which is capable of running external GPU-based graphics cards, without the risk of the IGD getting disconnected by the BIOS of the host system when the external GPU-based graphics cards are operating, thereby improving the efficiency and performance of such systems.


Another object of the present invention is to provide a novel parallel graphics rendering system supporting an object division mode of operation, which can be embodied within an integrated graphics device (IGD) which is capable of driving multiple external GPU-based graphics cards.


Another object of the present invention is to provide a novel parallel graphics rendering system supporting an object division mode of operation, which can be embodied within an integrated graphics device (IGD) based chipset having two or more IGDs.


Another object of the present invention is to provide a novel parallel graphics rendering system supporting an object division mode of operation, which allows users to enjoy sharp videos and photos, smooth video playback, astonishing effects, and vibrant colors, as well as texture-rich 3D performance in next-generation games.


These and other objects of the present invention will become apparent hereinafter and in the claims to invention.





BRIEF DESCRIPTION OF THE DRAWINGS OF THE PRESENT INVENTION

For a more complete understanding of how to practice the Objects of the Present Invention, the following Detailed Description of the Illustrative Embodiments can be read in conjunction with the accompanying Drawings, briefly described below:



FIG. 1A is a schematic representation illustrating the prior art Image Division Method of Parallel Graphics Rendering on a computer graphics platform employing a pair of graphical processing units (GPUs);



FIG. 1B is a schematic representation illustrating the prior art Time Division Method of Parallel Graphics Rendering on a computer graphics platform employing a pair of graphical processing units (GPUs);



FIG. 1C is a schematic representation illustrating the prior art Object Division Method of Parallel Graphics Rendering on a computer graphics platform employing a pair of graphical processing units (GPUs);



FIG. 2A is a schematic representation of a prior art PC-based computing system employing a dual-GPU graphics subsystem having a chipset (203) (e.g. nForce 680 by Nvidia) with two PCI Express buses (207, 208), driving a pair of external graphics cards (204, 205), and supporting the Image Division Method of Parallel Graphics Rendering (e.g. SLI by Nvidia, or Crossfire by AMD, driving 2 or 4 graphics cards);



FIG. 2B is a schematic representation of each prior art graphics card (204, 205) employed in the PC-based computing system of FIG. 2A, shown comprising a GPU (212) and a Video Memory (213), wherein the stream of graphics commands and data, originating at the host CPU, describes the 3D scene in terms of vertices and textures, wherein the Shader hardware is a programmable parallel array of processing elements running shader source code written in a graphics-specific programming language, wherein the Vertex Shader (214) specializes in vertex processing, the Fragment Shader (215) specializes in pixel processing, and wherein the texture memory (218) is accessible from/to the chip input, the Shaders, the FB (216), and the blend & raster ops unit (217);


FIG. 3A1 is a schematic representation illustrating the prior art Object Division Method of Parallel Graphics Rendering on a computer graphics platform employing a pair of graphical processing units (GPUs), wherein emphasis is placed on the fact that the image recomposition stage is implemented using specialized/dedicated apparatus;


FIG. 3A2 is a flow chart illustrating the basic steps associated with the prior art image recomposition process carried out in most object division methods of parallel graphics rendering supported on multiple GPU-based graphics platforms;


FIG. 3B1 is a schematic representation illustrating the prior art Object Division Method of Parallel Graphics Rendering on a computer graphics platform employing multiple graphical processing units (GPUs), wherein the image recomposition stage is implemented using specialized/dedicated hardware-based recomposition apparatus comprising multiple stages of pixel composing units (indicated by COMPOSE);


FIG. 3B2 is a schematic representation of the prior art specialized/dedicated hardware-based recomposition apparatus used to carry out the recomposition stage of the object division mode of the parallel graphics rendering process supported on the multiple GPU-based graphics platform shown in FIG. 3B1;


FIG. 3C1 is a schematic representation illustrating the prior art Object Division Method of Parallel Graphics Rendering on a computer graphics platform employing multiple graphical processing units (GPUs), wherein the image recomposition stage is implemented using specialized/dedicated software-based recomposition apparatus comprising multiple CPUs programmed for pixel composition using a graphics programming language (e.g. Cg);


FIG. 3C2 is a schematic representation of the prior art specialized/dedicated software-based recomposition apparatus used to carry out the recomposition stage of the object division mode of the parallel graphics rendering process supported on the multiple GPU-based graphics platform shown in FIG. 3C1, wherein an illustrative example of two CPUs or processors (p0, p1) and three pixel Frame Buffers (FB0, FB1, FB2) provides the apparatus for carrying out the pixel composition process illustrated in FIG. 3A2;


FIG. 3C3 is prior art pseudocode for programming the processors to carry out the software-based recomposition stage of FIG. 3C2;



FIG. 4A is a schematic representation illustrating the Object Division Method of Parallel Graphics Rendering in accordance with the principles of the present invention, carried out on a parallel graphics platform employing multiple graphical processing units (GPUs), wherein the recomposition stage of the rendering process is carried out using computational resources (e.g. video memory and the geometry and/or pixel processing subsystems/engines) supplied by the primary GPU employed on the parallel graphics platform;



FIG. 4B is a schematic representation illustrating a multiple GPU-based parallel graphics system according to the present invention, having multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein each GPU comprises video memory, a geometry processing subsystem and a pixel processing subsystem, wherein pixel (color and z depth) data buffered in the video memory of each GPU is communicated (via an inter-GPU communication process) to the video memory of a primary GPU, and wherein the video memory and the geometry and/or pixel processing subsystems in the primary GPU are used to carry out the image recomposition phase of the object division mode of parallel graphics rendering process;



FIG. 4C is a time-line representation of the process of generating a frame of pixels for an image along a specified viewing direction during a particular parallel rendering cycle in accordance with the principles of the present invention, wherein the pixel recomposition step of the parallel rendering process is shown to reuse GPU-based computational resources during their idle time, without the need for the specialized or dedicated compositional apparatus required by prior art parallel graphics systems supporting an object division mode of parallel graphics rendering;



FIG. 5A is a schematic representation of a dual GPU-based parallel graphics system according to the present invention, having a pair of graphics processing pipelines with a pair of GPUs supporting a parallel graphics rendering process having an object division mode of operation, wherein each GPU comprises video memory, a geometry processing subsystem and a pixel processing subsystem, wherein pixel (color and z depth) data buffered in the video memory of each GPU is communicated (via an inter-GPU communication process supported over a pair of PCI Express buses) to the video memory of a primary GPU, and wherein the video memory and the geometry and/or pixel processing subsystems in the primary GPU are used to carry out the image recomposition phase of the object division mode of parallel graphics rendering process;



FIG. 5B is a schematic representation of the decomposing module and distributing module supported within the host CPU Program Space of the host computer system shown in FIG. 5A, wherein the decomposing module of the present invention comprises functional sub-blocks for OS-GPU I/F and Utilities, Division Control and State Monitoring, and Composition Management, whereas the Distributing Module comprises a functional sub-block for (Graphics Display List Data and Command) Distribution Management;



FIG. 6A is a schematic representation of the parallel graphics rendering process of the present invention having an object division mode, supported on the dual GPU-based parallel graphics system shown in FIG. 5A;



FIG. 6B is a graphical representation of Shader code (expressed in a graphics programming language, e.g. Cg) that is used within the primary GPU in order to carry out the pixel recomposition stage of the object division mode/method of the parallel graphics rendering process of the present invention, supported on the dual GPU-based parallel graphics system shown in FIG. 5A;



FIG. 7A is a schematic representation of an implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed using multiple GPU-based graphics cards;



FIG. 7B is a schematic representation of an implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed using multiple GPUs on a single card;



FIG. 7C is a schematic representation of an implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed using an external box equipped with multiple GPU-based graphics cards;



FIG. 8 is a schematic representation of an implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed using a multiple-GPU chip design;



FIG. 9A is a schematic representation of an implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed as an integrated graphics device (IGD) based system provided with a single graphics card;



FIG. 9B is a schematic representation of an implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed as a combined system comprising an IGD-based system provided with two graphics cards;



FIG. 9C is a schematic representation of an implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed as a system having dual or multiple IGDs each embodying the object division based parallel graphics rendering process of the present invention; and



FIG. 10 is a graphical representation of an implementation of the multiple GPU-based parallel graphics system of the present invention, wherein the multiple GPU-based parallel graphics system and process are incorporated within a video game console.





DETAILED DESCRIPTION OF THE ILLUSTRATIVE EMBODIMENTS OF THE PRESENT INVENTION

Referring to FIGS. 4A through 10 in the accompanying Drawings, the various illustrative embodiments of the multiple GPU-based parallel graphics system and process of the present invention will be described in great detail, wherein like elements are indicated using like reference numerals.


In accordance with the principles of the present invention, the pixel recomposition phase of the object division based graphics rendering process is carried out by means of GPU-based pixel processing resources, including video memory and the geometry processing subsystem and/or the pixel processing subsystem, as emphasized in FIG. 4A.



FIG. 4B illustrates the five major steps associated with the object division based parallel graphics rendering process of the present invention, namely: decompose, distribute, render, recompose, and display. As illustrated in FIG. 4B, the computational resources contained within one of the multiple GPUs on the platform, termed the primary GPU, and to which the display device (406) is attached, are reutilized for image/pixel recomposing, in addition to graphics rendering. In FIG. 4B, the general case of a multiple GPU-based graphics subsystem is considered, with n GPUs being shown. Pixel data contained in the frame buffers (FBs) associated with the secondary GPUs are moved to the primary GPU by way of an inter-GPU communication process (e.g. supported by multiple-lane PCI Express™ buses), and processed within the local FB of the primary GPU, to perform pixel image (re)composition, as will be explained in greater detail hereinafter. The pixel composition result is then sent to the display device and, alternatively, also returned to the secondary GPUs, if required by some applications as a basis for the next pixel frame.



FIG. 4C illustrates the time-line of one complete composited pixel frame, including the time slots associated with the different steps of object division rendering. The reuse of GPU resources for compositing not only eliminates the need for additional specialized apparatus, but also exploits a time slot in which, in prior art systems, the specialized apparatus works while the GPU resources sit idle during the recompose step. Thus, by virtue of the present invention, GPU resources are used “for free,” without sacrificing system performance.



FIG. 5A shows an illustrative example of a dual GPU-based graphics system of the present invention, supporting the object division mode of parallel graphics rendering employing GPU-based pixel recomposition. In the illustrative embodiment, the object division based GPU graphics system of the present invention is implemented by taking a conventional dual-GPU graphics platform (e.g. Nvidia's SLI™ dual-GPU graphics system) supporting image and/or time division modes of parallel graphics rendering, and converting it into the novel object division based GPU-based parallel graphics system of the present invention. This graphics platform conversion process, which is only one aspect of the present invention, is carried out using the novel software architecture and programming principles of the present invention (503), which in combination transform a multiple GPU-based graphics platform (capable of supporting time and/or image division modes of parallel graphics rendering) by controlling the primary GPU (on the primary graphics card (205)) in such a way that it operates and supports the pixel and other data processing operations that implement the compositing stage of the object division based parallel graphics rendering process of the present invention.


As shown in FIG. 5A, decomposition is accomplished by the Decomposing module (504), and the distribution of graphics display list data and command streams by the Distributing module (505). Typically, the graphics support environment on the host computer comprises: the application (501); the Standard Graphics Library (502, e.g. OpenGL or Direct3D); and the vendor's GPU driver (506). In the illustrative embodiment of the system, there are two graphics cards (204, 205), driven by the vendor's GPU driver, attached to the PC motherboard and connected by two PCI Express buses (207, 208) via the memory bridge (203) (also termed the “chipset”). One of these cards (205) is attached to the display device (206) and is therefore termed the “primary” card, and its GPU is termed the “primary GPU.” The primary card is magnified so as to show its internal architecture, which was described in part with reference to FIG. 2B.


As shown in FIG. 5B, the decomposing and distributing modules of the present invention comprise a number of functional sub-blocks, namely: within the Decomposing module (504), the OS-GPU Interface and Utilities (521), Division Control and State Monitoring (522), and Composition Management (523) sub-blocks; and within the Distributing module (505), the Distribution Management module (525).


The Decomposing module (504) primarily implements the decomposing step of the object division rendering process, but also interfaces with the OS, the vendor's GPU driver and the GPUs, and supervises the recomposition process in the GPUs. These steps are accomplished by means of the following functional blocks: OS-GPU Interface and Utilities (521); Division Control and State Monitoring (522); and Composition Management (523).


The OS-GPU Interface and Utilities Module (521) performs all the functions associated with interaction with the Operating System, the graphics library (e.g. OpenGL or DirectX), and interfacing with the GPUs. This functional block is responsible for intercepting the graphics commands from the standard graphics library, forwarding and creating graphics commands to the Vendor's GPU Driver, controlling registry and installation, and providing OS services and utilities.
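By way of illustration, such interception is commonly realized by exporting a wrapper with the same signature as a standard library entry point, which inspects the call before forwarding it to the vendor's driver. The following C sketch assumes that mechanism; the wrapper, the function-pointer resolution, and the route_draw_command helper are hypothetical and are not drawn from the disclosed implementation.

/* Hypothetical interception of an OpenGL entry point. The GL typedefs
 * are reproduced here to keep the sketch self-contained. */
typedef unsigned int GLenum;
typedef int          GLint;
typedef int          GLsizei;

/* pointer to the real entry point in the vendor's driver; in a real
 * interposer this would be resolved at load time (resolution omitted) */
static void (*real_glDrawArrays)(GLenum mode, GLint first, GLsizei count);

/* hypothetical helper: records the draw call so that Division Control
 * can assign it to a graphics pipeline for load balancing */
static void route_draw_command(GLenum mode, GLint first, GLsizei count)
{
    (void)mode; (void)first; (void)count;
}

/* exported wrapper: inspects the call, then forwards it to the driver */
void glDrawArrays(GLenum mode, GLint first, GLsizei count)
{
    route_draw_command(mode, first, count);
    if (real_glDrawArrays)
        real_glDrawArrays(mode, first, count);
}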


The Division Control and State Monitoring Module (522) controls the object division parameters and the data to be processed by each GPU for load balancing, data validity, etc., and also handles state validity across the system. The graphics libraries (e.g. OpenGL and DirectX) are state machines, and parallelization must preserve a cohesive state across the graphics system. This is done by continuous analysis of all incoming commands, whereby the state commands and some of the data are duplicated to all pipelines in order to preserve a valid state across the graphics pipeline. This function is exercised mainly in the object division scheme, as disclosed in detail in the inventors' previous pending patent application PCT/IL04/001069, incorporated herein by reference.
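The rule can be summarized as follows: commands that mutate graphics-library state are broadcast to every pipeline, while geometry-carrying commands go only to their assigned pipeline. A minimal C sketch, assuming a two-way command classification and a hypothetical send_to_gpu transport primitive:

/* Minimal sketch of the duplicate-state / distribute-geometry rule.
 * The CmdClass classification and send_to_gpu primitive are assumed
 * for illustration. */
typedef enum { CMD_STATE, CMD_GEOMETRY } CmdClass;

static void send_to_gpu(int gpu, const void *cmd)
{
    (void)gpu; (void)cmd;   /* hypothetical transport to one pipeline */
}

void distribute_command(CmdClass cls, const void *cmd,
                        int ngpus, int target_gpu)
{
    if (cls == CMD_STATE) {
        /* state commands are duplicated to all pipelines so that every
         * GPU's graphics-library state machine stays coherent */
        for (int gpu = 0; gpu < ngpus; gpu++)
            send_to_gpu(gpu, cmd);
    } else {
        /* geometry is sent only to its load-balanced target GPU */
        send_to_gpu(target_gpu, cmd);
    }
}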


The Composition Management Module (523) supervises the composition process in the GPUs, issuing the commands and shader code that handle the read-back of frame buffers, transfer data, and perform compositing, as will be described in detail hereinafter.


The Distributing Step/Phase (505) of the object division parallel graphics rendering process of the present invention is implemented by the Distribution Management Module (525), which directs the streams of commands and data to the different GPUs via the chipset outputs.


Referring now to FIGS. 4A, 5A, 6A, and 6B, the innovative pixel compositing phase of the multiple GPU-based parallel graphics rendering process of the present invention will now be described in great technical detail. In one illustrative embodiment of the present invention, the compositing phase/stage involves moving the pixel Depth and Color values from the frame buffer (FB) in the secondary GPU to the FB in the primary GPU (via inter-GPU communication), and then merging these pixel values with their counterparts at the primary GPU by means of the programmable Fragment Shader in the pixel processing subsystem (211).



FIG. 6A provides a flowchart for the compositing process. In the illustrative embodiment, a compositing process employing dual GPUs is described. However, it is understood that if more GPUs are involved, the flowchart process will repeat accordingly for each additional “secondary” GPU, until the final step, when the partially composited pixel data in the frame buffer (FB) of the last secondary GPU is finally composited with the pixel data within the frame buffer of the primary GPU.
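In other words, for n GPUs the process folds each secondary frame buffer into the primary GPU's FB in turn. A minimal C sketch of this control flow follows; the two helper routines are hypothetical placeholders for the read-back and shader-composite substeps detailed hereinafter.

/* Hedged sketch of FIG. 6A generalized beyond two GPUs: each secondary
 * GPU's frame buffer is composited into the primary GPU's FB in turn,
 * so the primary FB accumulates the partially composited result after
 * each pass. */
static void read_back_secondary(int gpu)  { (void)gpu; /* steps 606-607 */ }
static void composite_into_primary(void)  { /* steps 604-605, 608-610 */ }

void composite_all(int n_secondary_gpus)
{
    for (int gpu = 0; gpu < n_secondary_gpus; gpu++) {
        read_back_secondary(gpu);     /* move color + depth to primary */
        composite_into_primary();     /* depth-test merge in primary   */
    }
    /* the primary FB now holds the fully composited opaque image */
}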


As shown in FIG. 6A, the pixel frame generating pipeline includes the steps: decompose (402; see FIG. 4A), distribute (403), and render (404). Towards the end of the graphics pipeline, the recompose step (405) is carried out to produce the final FB, which is then displayed on the display device (406).


During the Decomposing step (402), the graphics command and data stream is decomposed into well load-balanced sub-streams by the Decomposing Module (504, FIG. 5A), while keeping the state consistency of the graphics libraries.


The Distributing step (403) is supervised by the Distributing module (505, FIG. 5A). The sub-streams are sent to the Vendor's GPU Driver (506), through the memory bridge (203), and delivered for rendering to the graphics cards, primary (205) and secondary (204), via separate PCI Express buses (207, 208).


Rendering (step 404) is done simultaneously (602, 603) in both GPUs, creating two partial FBs.


The compositing process (step 405 of FIG. 4) comprises the following substeps:


Step (606): The color FB is read back from the secondary GPU, and moved via memory bridge (203) to the primary GPU's Texture memory (218) as a texture tex1.


Step (607): The Z-buffer is read back from the secondary GPU, and moved via the memory bridge (203) to the primary GPU's Texture memory (218) as a texture dep1.


Step (604): The color FB of the primary GPU is copied to texture memory as texture tex2.


Step (605): The Z-buffer of the primary GPU is copied to texture memory as texture dep2.


Step (608): Shader code for recomposition (as shown in FIG. 6B) is downloaded and exercised on the four textures tex1, tex2, dep1, and dep2, as follows:


Step (609): The two depth textures are compared pixel by pixel for their depth values. Applying the rule that the closest pixel is the one to be transferred to the final FB, at each x,y location the two depth textures are compared for the lowest depth value; the lowest is chosen, and the color value at x,y of its corresponding color texture is moved to the x,y location in the final texture.


Step (610): The resulting texture is copied back to the primary color FB.


To complete rendering (step 404b), the following substeps are performed:


Step (611): All transparent objects of the scene, as well as overlays (such as score titles), are typically kept by applications as the very last data to be rendered. Therefore, once all opaque objects have been rendered in parallel at the separate GPUs and composed back into the primary GPU's FB, the additional and final phase of non-parallel rendering of transparent objects takes place in the primary GPU.


Step (612): The final FB is sent to the display device for display on its display screen.


In step 405, the detailed shader program is used to composite the two color textures based on a depth test between the two depth textures, as shown in FIG. 6B.
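The Cg shader of FIG. 6B is not reproduced here; the following C sketch models the per-pixel logic that such a depth-test composition shader implements over the four textures, with texture sampling modeled as array indexing and the shader's color and depth outputs modeled as out-parameters. All names are illustrative.

/* Hedged C model of the per-pixel logic of a FIG. 6B-style composition
 * shader. Texture sampling is modeled as array indexing; i is the
 * linear index of the x,y location. */
typedef struct { float r, g, b, a; } Color;

void composite_pixel(int i,
                     const Color *tex1, const float *dep1,  /* secondary */
                     const Color *tex2, const float *dep2,  /* primary   */
                     Color *out_color, float *out_depth)
{
    if (dep1[i] < dep2[i]) {           /* secondary pixel is closer   */
        *out_color = tex1[i];
        *out_depth = dep1[i];
    } else {                           /* primary pixel wins on ties  */
        *out_color = tex2[i];
        *out_depth = dep2[i];
    }
}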


While the above illustrative embodiment discloses the use of the Fragment Shader in the pixel processing subsystem/engine within the primary GPU to carry out the composition process in the dual GPU-based graphics platform of the present invention, it is understood that other computational resources within the GPU can be used in accordance with the scope and spirit of the present invention. In particular, in a second illustrative embodiment, the compositing phase/stage can involve moving the pixel Depth and Color values from the frame buffer (FB) in the secondary GPU to the FB in the primary GPU (via inter-GPU communication), and then merging these pixel values with their counterparts at the primary GPU by means of the programmable Vertex Shader in the geometry processing subsystem (210). And in yet another illustrative embodiment of the present invention, the compositing phase/stage can involve moving the pixel Depth and Color values from the frame buffer (FB) in the secondary GPU to the FB in the primary GPU (via inter-GPU communication), and then merging these pixel values with their counterparts at the primary GPU by means of both the programmable Vertex and Fragment Shaders in the geometry and pixel processing subsystems in the primary GPU. Such modifications will become readily apparent to those skilled in the art having the benefit of the present inventive disclosure.


As taught hereinabove, the GPU-based composition process associated with the object division parallel graphics rendering process of the present invention can be realized as a software method that (i) controls the computational machinery within the GPUs of the parallel graphics platform, and (ii) exploits the Shader (pixel) processing capabilities in the primary GPU, with no need for any external hardware. As this GPU exists in any dual or multiple GPU-based graphics system, the object division parallel graphics rendering process and platform of present invention can be implemented on a great variety of existing, as well as new graphics systems in a multitude of ways. Below are just some examples of possible system designs that can be constructed using the principles of the present invention.


In FIG. 7A, there is shown a first implementation of the dual GPU-based parallel graphics system (700) according to the present invention, constructed using multiple GPU-based graphics cards (701). In an illustrative embodiment, the host motherboard can be realized as an Asus A8N-SLI (702), and the graphics cards can be realized by a pair of Nvidia GeForce 6800 cards (701) together with an Nvidia nForce 680 chipset (not shown).


In FIG. 7B, there is shown an implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed using multiple GPUs on a single card.


In FIG. 7C, there is shown a second implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed using an external box equipped with multiple GPU-based graphics cards.


In FIG. 8, there is shown a third implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed using a multiple-GPU chip design.


In FIG. 9, there is shown a fourth implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed as an integrated graphics device (IGD) based system (901) provided with a single graphics card (902). As conventional IGD-based chipsets (e.g. the Intel 845GL) do not yet have programmable shaders, the external GPU on the graphics card will function as the primary GPU and perform the pixel composition process in accordance with the principles of the present invention.


In FIG. 9B, there is shown a fifth implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed as a combined system comprising an IGD-based system (911) provided with two graphics cards (912, 913). In the illustrative embodiment, the IGD of this dual-card system has a dual PCI Express bus (e.g. Intel's Bearlake chipset), so that two external GPUs can be used.


In FIG. 9C, there is shown a sixth implementation of the multiple GPU-based parallel graphics system according to the present invention, constructed as a system having dual or multiple IGDs, each embodying the object division based parallel graphics rendering process of the present invention. Such an IGD will include a programmable shader to practice the pixel composition process of the present invention.


In FIG. 10, there is shown a graphical representation of a seventh possible implementation of the multiple GPU-based parallel graphics system of the present invention, wherein the multiple GPU-based parallel graphics system and process are incorporated within a video game console. This illustrative example shows the Microsoft™ Xbox 360 game console embodying the parallel graphics rendering process and apparatus of the present invention.


While the illustrative embodiments of the present invention have been described in connection with various PC-based computing system applications, it is understood that the parallel graphics systems and rendering processes of the present invention can also be used in video game consoles and systems, mobile computing devices, e-commerce and POS displays, and the like.


It is understood that the parallel graphics rendering technology employed in computer graphics systems of the illustrative embodiments may be modified in a variety of ways which will become readily apparent to those skilled in the art having the benefit of the novel teachings disclosed herein. All such modifications and variations of the illustrative embodiments thereof shall be deemed to be within the scope and spirit of the present invention as defined by the Claims to Invention appended hereto.

Claims
  • 1. A method of recompositing partial pixel data within a graphics processing unit (GPU) of a multi-GPU graphics processing subsystem so as to produce images for display on a screen, wherein said partial pixel data is generated by a plurality of GPUs operating according to an object division mode of graphics parallelization, wherein said object division mode includes a rendering phase of operation and a recomposition phase of operation, wherein each GPU has one or more programmable shaders, and a frame buffer (FB) having depth buffers for buffering pixel depth values and color frame buffers for buffering pixel color values, and wherein at least one of said GPUs is a display-designated GPU that is connectable to said screen for displaying images produced by said multi-GPU graphics processing subsystem, and at least one of said GPUs is a non-display-designated GPU, said method comprising the sequence of steps of:
    (a) entering said rendering phase, and programming and configuring said one or more programmable shaders within said display-designated GPU and said one or more non-display-designated GPUs, to perform rendering operations that produce partial results;
    (b) passing geometric data and commands to said plurality of GPUs according to said object division mode of parallelization;
    (c) using said one or more programmable shaders within said display-designated GPU and said one or more non-display-designated GPUs, to generate partial results for said depth and color frame buffers according to said object division mode;
    (d) exiting said rendering phase, and entering said recomposition phase, and reprogramming and reconfiguring said one or more programmable shaders within said display-designated GPU, to perform recomposition operations that produce an image for display;
    (e) moving said partial results to said display-designated GPU;
    (f) using said one or more programmable shaders within said display-designated GPU to produce an image for display; and
    (g) displaying said image on said screen.
  • 2. The method of claim 1, wherein said one or more programmable shaders comprise programmable fragment shaders.
  • 3. The method of claim 1, wherein said one or more programmable shaders comprise programmable vertex shaders.
  • 4. The method of claim 2, further comprising the recomposition steps of said programmable shaders in said display-designated GPU:
    (i) comparing, on a pixel-by-pixel basis, the pixel depth (z) values of opaque rendered objects buffered in said depth buffers, and representative of partial results;
    (ii) at each x,y location in said color frame buffer of said display-designated GPU, selecting the pixel having the lowest depth value, representative of the pixel closest to the viewer along a viewing direction; and
    (iii) moving the associated color value of the selected pixel to the corresponding x,y location in said color frame buffer of said display-designated GPU, for composition with other selected pixels, so as to produce an image for subsequent display.
  • 5. A multi-GPU graphics processing subsystem for use in a computing system, said graphics processing subsystem comprising: a plurality of GPUs operating according to an object division mode of graphics parallelization,
    wherein said object division mode includes a rendering phase of operation and a recomposition phase of operation;
    wherein at least one of said GPUs is a display-designated GPU that is connectable to a screen for displaying images produced by said multi-GPU graphics processing subsystem, and at least one of said GPUs is a non-display-designated GPU;
    wherein each said GPU includes (i) one or more programmable shaders, and (ii) video memory including a frame buffer (FB) having depth buffers for buffering pixel depth values and color frame buffers for buffering pixel color values; and
    wherein, for images to be generated and displayed on said screen, geometric data and graphics commands are distributed to said plurality of GPUs according to said object division mode of parallelization, wherein the multi-GPU graphics processing subsystem supports the following data processing steps:
    (a) upon entering said rendering phase, programming and configuring said one or more programmable shaders within said display-designated GPU and said one or more non-display-designated GPUs, to perform rendering operations that produce partial results;
    (b) passing geometric data and commands to said plurality of GPUs according to said object division mode of parallelization;
    (c) using said one or more programmable shaders within said display-designated GPU and said one or more non-display-designated GPUs, to generate partial results for depth and color frame buffers according to the object division mode of parallelization;
    (d) exiting said rendering phase, and entering said recomposition phase, and reprogramming and reconfiguring said one or more programmable shaders within said display-designated GPU, to perform recomposition operations that produce an image for display; and
    (e) displaying said image on said screen.
  • 6. The subsystem of claim 5, wherein said one or more programmable shaders comprise programmable fragment shaders.
  • 7. The subsystem of claim 5, wherein said one or more programmable shaders comprise programmable vertex shaders.
  • 8. The subsystem of claim 5, wherein said programmable shaders in said display-designated GPU perform the following operations within said display-designated GPU:
    (i) comparing, on a pixel-by-pixel basis, the pixel depth (z) values of opaque rendered objects;
    (ii) at each x,y location, selecting the pixel having the lowest depth value, representative of the pixel closest to the viewer along the viewing direction;
    (iii) moving the associated color value of the selected pixel to the corresponding x,y location in said color frame buffer, for composition with other selected pixels to produce an image for subsequent display; and
    (iv) displaying said image pixel data on said screen.