This disclosure relates to display of graphics images.
Graphics processors are widely used to render two-dimensional (2D) and three-dimensional (3D) images for various applications, such as video games, graphics programs, computer-aided design (CAD) applications, simulation and visualization tools, and imaging. Display processors may be used to display the rendered output of the graphics processor for presentation to a user via a display device.
OpenGL® (Open Graphics Library) is a standard specification that defines an API (Application Programming Interface) that may be used when writing applications that produce 2D and 3D graphics. Other languages, such as Java, may define bindings to the OpenGL APIs through their own standard processes. The interface includes multiple function calls, or instructions, that can be used to draw scenes from simple primitives. Graphics processors, multi-media processors, and even general-purpose CPUs can then execute applications that are written using OpenGL function calls. OpenGL ES (embedded systems) is a variant of OpenGL that is designed for embedded devices, such as mobile wireless phones, digital multimedia players, personal digital assistants (PDAs), or video game consoles.
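Purely by way of illustration, the following sketch shows the kind of OpenGL ES (1.x) function calls an application might issue to draw a scene from a simple primitive; the vertex values are hypothetical, and a current EGL/OpenGL ES rendering context is assumed.

```cpp
// Minimal illustrative sketch (OpenGL ES 1.x): drawing one triangle from
// simple primitive data via API function calls. Vertex values are invented,
// and a valid rendering context is assumed to be current.
#include <GLES/gl.h>

static const GLfloat kTriangle[] = {
    -0.5f, -0.5f, 0.0f,   // vertex 0
     0.5f, -0.5f, 0.0f,   // vertex 1
     0.0f,  0.5f, 0.0f,   // vertex 2
};

void drawFrame() {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, kTriangle);   // bind vertex data
    glDrawArrays(GL_TRIANGLES, 0, 3);             // issue the draw call
    glDisableClientState(GL_VERTEX_ARRAY);
}
```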
Graphics applications, such as 3D graphics applications, may describe or define contents of a scene by invoking APIs, or instructions, that in turn use the underlying graphics hardware, such as one or more processors in a graphics device, to generate an image. The graphics hardware may undergo a series of state transitions that are exercised through these APIs. A full set of states for each API call, such as a draw call or instruction, may describe the process with which the image is rendered from one or more graphics primitives, such as one or more triangles, by the hardware.
In addition, binning-based, or partitioning-based, graphics hardware may often be implemented using a process in which the individual graphics primitives destined for rendering may be clustered into binning partitions, or bins, in order to divide up a scene of images displayed on a screen of a display device. The hardware may do so due to screen-size or resolution constraints, or due to memory limitations associated with rendering operations. Graphics primitives that may span across multiple binning partitions may be divided into multiple fragments by the hardware along the edges of the partitions before the primitive fragments are rendered. The hardware may render all primitive fragments in each partition separately.
Thus, an individual primitive that may span, for example, across two binning partitions may be divided into two fragments, and each of these two fragments may then be independently rendered. However, the graphics images generated by each of these fragments may then need to be re-combined within a frame of image data before being displayed on the screen of the display device. Hence, primitive graphics data spanning across different partitions may be separately processed and rendered, and then the rendered image data may be recombined to form the final image.
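The following sketch is one illustrative, simplified way such a split could be performed in software, by clipping a convex primitive against a vertical partition boundary into a left and a right fragment; actual binning hardware may use a different algorithm, and the names used here are hypothetical.

```cpp
// Illustrative sketch only: splitting a convex primitive along a vertical
// partition boundary into a "left" and a "right" fragment. Real binning
// hardware may implement this differently.
#include <vector>

struct Vec2 { float x, y; };

// Clip a convex polygon against the half-plane keepLeft ? (x <= bx) : (x >= bx).
static std::vector<Vec2> clipAgainstBoundary(const std::vector<Vec2>& poly,
                                             float bx, bool keepLeft) {
    std::vector<Vec2> out;
    const size_t n = poly.size();
    for (size_t i = 0; i < n; ++i) {
        const Vec2& a = poly[i];
        const Vec2& b = poly[(i + 1) % n];
        const bool aIn = keepLeft ? (a.x <= bx) : (a.x >= bx);
        const bool bIn = keepLeft ? (b.x <= bx) : (b.x >= bx);
        if (aIn) out.push_back(a);
        if (aIn != bIn) {                       // edge crosses the boundary
            const float t = (bx - a.x) / (b.x - a.x);
            out.push_back({bx, a.y + t * (b.y - a.y)});
        }
    }
    return out;
}

// A triangle spanning two partitions becomes two fragments, each of which can
// then be rendered independently and later recombined in the frame of image data.
void splitAcrossTwoPartitions(const std::vector<Vec2>& triangle, float boundaryX,
                              std::vector<Vec2>* leftFragment,
                              std::vector<Vec2>* rightFragment) {
    *leftFragment  = clipAgainstBoundary(triangle, boundaryX, /*keepLeft=*/true);
    *rightFragment = clipAgainstBoundary(triangle, boundaryX, /*keepLeft=*/false);
}
```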
In general, this disclosure relates to techniques for providing a visual representation of a graphical scene that includes a number of different graphical partitions, which may allow a user to identify portions of the graphics scene that exhibit reduced performance due to costs that may be associated with screen partitioning. A graphics device, such as a mobile device, may provide partitioning, or binning, information to an external computing device (e.g., personal computer) based upon the number and type of partitions that have been created by a graphics driver. The graphics device may also provide graphics instructions and state information to the computing device.
The computing device may display one or more graphics images in a graphical scene based upon the graphics instructions and the state information. The computing device may also display a graphical representation of partitions that overlay the scene based upon the received partitioning information, and may also provide analysis information regarding potential partitioning costs or performance overhead. An application developer may use this information to investigate alternate compositions of the scene to help reduce these costs and/or performance overhead.
In one aspect, a method comprises displaying one or more graphics images in a graphical scene, displaying a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene, and analyzing graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
In one aspect, a device comprises a display device and one or more processors. The one or more processors are configured to display one or more graphics images in a graphical scene on the display device, display a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene on the display device, and analyze graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
In one aspect, a computer-readable medium comprising computer-executable instructions for causing one or more processors to display one or more graphics images in a graphical scene, display a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene, and analyze graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
The techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in a processor, which may refer to one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP), or other equivalent integrated or discrete logic circuitry. Software comprising instructions to execute the techniques may be initially stored in a computer-readable medium and loaded and executed by a processor.
Accordingly, this disclosure also contemplates computer-readable media comprising instructions to cause a processor to perform any of a variety of techniques as described in this disclosure. In some cases, the computer-readable medium may form part of a computer program product, which may be sold to manufacturers and/or used in a device. The computer program product may include the computer-readable medium, and in some cases, may also include packaging materials.
The details of one or more aspects are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
In some cases, graphics device 2 may be capable of executing various applications, such as graphics applications, video applications, audio applications, and/or other multi-media applications. For example, graphics device 2 may be used for graphics applications, video game applications, video playback applications, digital camera applications, instant messaging applications, video teleconferencing applications, mobile applications, or video streaming applications.
Graphics device 2 may be capable of processing a variety of different data types and formats. For example, graphics device 2 may process still image data, moving image (video) data, or other multi-media data, as will be described in more detail below. The image data may include computer-generated graphics data. In the example of
In graphics device 2, graphics processing system 4 is coupled both to storage medium 8 and to display device 6. Storage medium 8 may include any permanent or volatile memory that is capable of storing instructions and/or data, such as, for example, synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), embedded dynamic random access memory (eDRAM), static random access memory (SRAM), or flash memory. Display device 6 may be any device capable of displaying image data for display purposes, such as an LCD (liquid crystal display), plasma display device, or other television (TV) display device.
Vertex processor 14 is capable of managing vertex information and processing vertex transformations. In one aspect, vertex processor 14 may comprise a digital signal processor (DSP). Graphics processor 12 may be a dedicated graphics rendering device utilized to render, manipulate, and display computerized graphics. Graphics processor 12 may implement various complex graphics-related algorithms. For example, the complex algorithms may correspond to representations of two-dimensional or three-dimensional computerized graphics. Graphics processor 12 may implement a number of so-called “primitive” graphics operations, such as forming points, lines, and triangles or other polygon surfaces, to create complex, three-dimensional images on a display, such as display device 6.
Graphics processor 12 may carry out instructions that are stored in storage medium 8. Storage medium 8 is capable of storing application instructions 21 for an application (such as a graphics or video application), as well as one or more graphics drivers 18. Application instructions 21 may be loaded from storage medium 8 into graphics processing system 4 for execution. For example, one or more of control processor 10, graphics processor 12, and display processor 16 may execute instructions 21. In one aspect, application instructions 21 may comprise one or more downloadable modules that are downloaded dynamically, over the air, into storage medium 8. In one aspect, application instructions 21 may comprise a call stream of binary instructions that are generated or compiled from application programming interface (API) instructions created by an application developer.
Graphics drivers 18 may also be loaded from storage medium 8 into graphics processing system 4 for execution. For example, one or more of control processor 10, graphics processor 12, and display processor 16 may execute certain instructions from graphics drivers 18. In one example aspect, graphics drivers 18 are loaded and executed by graphics processor 12. Graphics drivers 18 will be described in further detail below.
Storage medium 8 also includes graphics data mapping information 23. Graphics data mapping information 23 includes information to map one or more of application instructions 21 to graphics data that may be rendered during execution of application instructions 21. The graphics data, which may be stored in storage medium 8 and/or buffers 15, may include one or more primitives (e.g., polygons). Graphics data mapping information 23 may maintain a mapping of individual primitives that are to be rendered to individual instructions. After the primitives have been rendered, mapping information 23 allows a mapping from individual instructions back to original graphics data that was used to create one or more images that are ultimately displayed on display device 6. Mapping information 23 may, in some cases, be useful for debugging and/or performance analysis.
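By way of a hypothetical illustration (the structure and names below are not taken from the disclosure), such mapping information might record, for each draw instruction, the range of primitives it produced, so that a rendered primitive can later be traced back to the instruction that generated it.

```cpp
// Hypothetical sketch of a primitive-to-instruction mapping. Each entry
// records which range of primitives a given draw instruction produced, so a
// rendered primitive can be traced back to the instruction that emitted it.
#include <cstdint>
#include <optional>
#include <vector>

struct PrimitiveRange {
    uint32_t instructionIndex;   // index into the captured instruction stream
    uint32_t firstPrimitive;     // first primitive produced by the instruction
    uint32_t primitiveCount;     // number of primitives it produced
};

class GraphicsDataMapping {
public:
    void recordDrawCall(uint32_t instructionIndex,
                        uint32_t firstPrimitive, uint32_t primitiveCount) {
        ranges_.push_back({instructionIndex, firstPrimitive, primitiveCount});
    }

    // Map a primitive back to the instruction that created it, if known.
    std::optional<uint32_t> instructionForPrimitive(uint32_t primitive) const {
        for (const PrimitiveRange& r : ranges_) {
            if (primitive >= r.firstPrimitive &&
                primitive < r.firstPrimitive + r.primitiveCount) {
                return r.instructionIndex;
            }
        }
        return std::nullopt;
    }

private:
    std::vector<PrimitiveRange> ranges_;
};
```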
As also shown in
Application instructions 21 may, in certain cases, include instructions for a graphics application, such as a 3D graphics application. Application instructions 21 may comprise instructions that describe or define contents of a graphics scene that includes one or more graphics images. When application instructions 21 are loaded into and executed by graphics processing system 4, graphics processing system 4 may undergo a series of state transitions. One or more instructions within graphics drivers 18 may also be executed to render or display graphics images on display device 6 during execution of application instructions 21.
A full set of states for an instruction, such as a draw call, may describe a process with which an image is rendered by graphics processing system 4. However, an application developer who has written application instructions 21 may often have limited ability to interactively view or modify these states for purposes of debugging or experimenting with alternate methods of describing or rendering images in a defined scene. In addition, different hardware platforms may have different hardware designs and implementations of these states and/or state transitions.
In addition, binning-based graphics hardware, such as one or more of processors 10, 12, 14, and 16, may often be implemented using a process in which the individual primitives destined for rendering are clustered into rectangular-shaped binning partitions, or bins, in order to divide up a scene of images displayed on a screen of display device 6. The hardware may do so based on screen size or resolution constraints of display device 6, or based on memory limitations of storage medium 8 associated with rendering operations. Primitives that may span across multiple binning partitions may be divided into multiple fragments by one or more of processors 10, 12, 14, or 16 along the edges of the partitions before the primitive fragments are rendered. The primitive fragments in each partition may then be rendered separately. Binning partitions, in general, may be varied in number, depending on the hardware architecture, and may have various sizes and shapes. For example, the binning partitions may include multiple (e.g., four, eight) rectangular-shaped partitions.
Thus, an individual primitive that may span, for example, across two binning partitions may be divided into two fragments, and each of these two fragments may then be independently rendered. However, the graphics images generated by each of these fragments may then need to be re-combined within a frame of image data before being displayed on the screen of display device 6. Thus, dividing individual primitives that span across multiple binning partitions can introduce processing overhead and cause overall performance degradation.
In one aspect, an application developer may use application computing device 20, shown in
Application computing device 20 includes one or more processors 22, a display device 24, and a storage medium 26 (which may comprise memory). Processors 22 may include one or more of a control processor, a graphics processor, a vertex processor, and a display processor, according to one aspect. Storage medium 26 may include any permanent or volatile memory that is capable of storing instructions and/or data, such as, for example, synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), static random access memory (SRAM), or flash memory. Display device 24 may be any device capable of displaying image data for display purposes, such as an LCD (liquid crystal display), plasma display device, or other television (TV) display device.
Application computing device 20 is capable of capturing and analyzing graphics instructions 30, along with state and/or performance information 32, which is sent from graphics device 2. In one aspect, graphics drivers 18 are configured to send graphics instructions 30 and state/performance information 32 to application computing device 20. Graphics instructions 30 may include one or more of application instructions 21, and state/performance information 32 may be generated or captured during execution of graphics instructions 30 within graphics processing system 4.
State/performance information 32 includes information about the state and/or performance of graphics processing system 4 during instruction execution, and will be described in more detail below. State/performance information 32 may include graphics data (e.g., primitive and/or rasterized graphics data) that may be used with, or is otherwise associated with, graphics instructions 30. Graphics processing system 4 may execute graphics instructions 30 to display an image, or a scene of images, on display device 6. Application computing device 20 is capable of using graphics instructions 30, along with state/performance information 32, to re-create the graphics image or scene that is also shown on display device 6 of graphics device 2.
Graphics device 2 may also send mapping and/or partitioning information 33 to application computing device 20. In one aspect, graphics drivers 18 are configured to send mapping/partitioning information 33 to application computing device 20. Mapping/partitioning information 33 may include one or more portions of graphics data mapping information 23, which includes information to map graphics data to individual instructions within graphics instructions 30. For example, mapping/partitioning information 33 may include information to map one or more primitives (e.g., polygons) to individual instructions within graphics instructions 30.
Mapping/partitioning information 33 may also include partitioning information that is generated and provided by graphics device 2. This partitioning information, in some cases, may be generated and provided by one or more of processors 10, 12, 14, and 16, such as control processor 10. Partitioning information may include information that identifies the number, type, size, and/or shape of binning partitions, or bins, that may be used within graphics processing system 4 to render graphics data into one or more graphics images, and display such images on display device 6. As described previously, graphics device 2 may partition a screen space, or size, of display device 6 into partitions, based upon, for example, memory-size limitations of buffers 15 and/or storage medium 8 during rendering operations. The partitioning information provides information about the partitions that are created and used.
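As one hypothetical illustration only, partitioning information of this kind could be conveyed as a list of bin rectangles, where the number of bins follows from the bin-buffer memory available during rendering; the record layout and the sizing rule below are assumptions, not part of the disclosure.

```cpp
// Hypothetical illustration of partitioning information: a list of bin
// rectangles, with the number of bins derived from a bin-buffer memory limit.
// The record layout and sizing rule are assumptions for illustration only.
#include <algorithm>
#include <cstdint>
#include <vector>

struct BinPartition {
    uint32_t x, y;           // top-left corner in screen pixels
    uint32_t width, height;  // partition size in pixels
};

struct PartitioningInfo {
    std::vector<BinPartition> bins;
};

// Divide the screen into equally sized column-shaped bins so that the pixel
// data of one bin (at bytesPerPixel) fits within the available bin buffer.
// binBufferBytes is assumed to be nonzero.
PartitioningInfo makeColumnBins(uint32_t screenW, uint32_t screenH,
                                uint32_t bytesPerPixel, uint32_t binBufferBytes) {
    const uint64_t frameBytes =
        static_cast<uint64_t>(screenW) * screenH * bytesPerPixel;
    uint32_t binCount =
        static_cast<uint32_t>((frameBytes + binBufferBytes - 1) / binBufferBytes);
    if (binCount == 0) binCount = 1;

    PartitioningInfo info;
    const uint32_t binW = (screenW + binCount - 1) / binCount;
    for (uint32_t i = 0; i < binCount; ++i) {
        const uint32_t x = i * binW;
        if (x >= screenW) break;
        info.bins.push_back({x, 0u, std::min(binW, screenW - x), screenH});
    }
    return info;
}
```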
Simulation application 28 may be executed by one or more processors 22 of application computing device 20 to re-create the graphics image or scene upon receipt of graphics instructions 30 and state/performance information 32, and display the image, or scene of images, on display device 24. Simulation application 28 may comprise a software module that contains a number of application instructions. Simulation application 28 is stored in storage medium 26, and may be loaded and executed by processors 22. Simulation application 28 may be pre-loaded into storage medium 26, and may be customized to operate with graphics device 2.
In one aspect, simulation application 28 simulates the hardware operation of graphics device 2. Different versions of simulation application 28 may be stored in storage medium 26 and executed by processors 22 for different graphics devices having different hardware designs. In some cases, software libraries may also be stored within storage medium 26, which are used in conjunction with simulation application 28. In one aspect, simulation application 28 may be a generic application, and specific hardware or graphics device simulation functionality may be included within each separate library that may be linked with simulation application 28 during execution.
In one aspect, a visual representation of state/performance information 32 may be displayed to application developers on display device 24. In addition, a visual representation of graphics instructions 30 may also be displayed. Because, in many cases, graphics instructions 30 may comprise binary instructions, application computing device 20 may use instruction mapping information 31 to generate the visual representation of graphics instructions 30 on display device 24. Instruction mapping information 31 is stored within storage medium 26 and may be loaded into processors 22 in order to display a visual representation of graphics instructions 30.
In one aspect, instruction mapping information 31 may include mapping information, such as within a lookup table, to map graphics instructions 30 to corresponding API instructions that may have been previously compiled when generating graphics instructions 30. Application developers may write programs that use API instructions, but these API instructions are typically compiled into binary instructions, such as graphics instructions 30 (which are included within application instructions 21), for execution on graphics device 2. One or more instructions within graphics instructions 30 may be mapped to an individual API instruction. The mapped API instructions may then be displayed to an application developer on display device 24 to provide a visual representation of the graphics instructions 30 that are actually being executed.
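A minimal sketch of such a lookup, with invented opcode values and instruction names, might look as follows; a real table would be generated alongside the compiled call stream rather than hard-coded.

```cpp
// Hypothetical sketch of instruction mapping information: a lookup table that
// maps captured binary opcodes back to the API instructions they were
// compiled from. Opcode values and names are invented for illustration.
#include <cstdint>
#include <string>
#include <unordered_map>

class InstructionMapping {
public:
    InstructionMapping() {
        // Example entries only; a real table would be produced when the API
        // instructions are compiled into the binary call stream.
        table_[0x01] = "glDrawArrays";
        table_[0x02] = "glBindTexture";
        table_[0x03] = "glEnable";
    }

    // Return a displayable API-level name for a captured binary opcode.
    std::string apiNameFor(uint32_t opcode) const {
        auto it = table_.find(opcode);
        return it != table_.end() ? it->second : "<unknown instruction>";
    }

private:
    std::unordered_map<uint32_t, std::string> table_;
};
```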
In one aspect, a user, such as an application developer, may wish to change one or more of the graphics instructions 30 to determine, for example, the effects of such changes on performance. In this aspect, the user may change the visual representation of graphics instructions 30. Mapping information 31 may then be used to map these changes within the visual representation of graphics instructions 30 to binary instructions that can then be provided back to graphics device 2 within requested modifications 34, as will be described in more detail below.
As described above, the graphics image that is displayed on display device 24 of application computing device 20 may be a representation of an image that is displayed on graphics device 2. Because simulation application 28 may use graphics instructions 30 and state/performance information 32 to re-create an image or scene exactly as it is presented on graphics device 2, application developers that use application computing device 20 may be able to quickly identify potential performance issues or bottlenecks during execution of graphics instructions 30, and even prototype modifications to improve the overall performance of graphics instructions 30.
For example, an application developer may choose to make one or more requested modifications 34 to graphics instructions 30 and/or state/performance information 32 during execution of simulation application 28 on application computing device 20 and display of the re-created image on display device 24. Any such requested modifications 34 may be based upon observed performance issues, or bottlenecks, during execution of graphics instructions 30 or analysis of state/performance information 32. These requested modifications 34 may then be sent from application computing device 20 to graphics device 2, where they are processed by graphics processing system 4. In one aspect, one or more of graphics drivers 18 are executed within graphics processing system 4 to process requested modifications 34. Requested modifications 34, in some cases, may include modified instructions. In some cases, requested modifications may include modified state and/or performance information.
Upon processing of requested modifications 34, updated instructions and/or information 35 is sent back to application computing device 20, such as by one or more of graphics drivers 18. Updated instructions/information 35 may include updated graphics instructions for execution based upon requested modifications 34 that were processed by graphics device 2. Updated instructions/information 35 may also include updated state and/or performance information based upon the requested modifications 34 that were processed by graphics device 2.
The updated instructions/information 35 is processed by simulation application 28 to update the display of the re-created image information on display device 24, and also to provide a visual representation of updated instructions/information 35 to the application developer (which may include again using instruction mapping information 31). The application developer may then view the updated image information on display device 24, as well as the visual representation of updated instructions/information 35, to determine if the performance issues have been resolved or mitigated. The application developer may use an iterative process to debug graphics instructions 30 or prototype modifications to improve the overall performance of graphics instructions 30.
In one aspect, application computing device 20 uses mapping/partitioning information 33 to display a visual, graphical representation of partitions that overlay the graphics images displayed on display device 24. These partitions graphically divide the scene comprising these images on display device 24. For example, simulation application 28 may use partitioning module 27 to process mapping/partitioning information 33 to create the graphical representation of these partitions (e.g., multiple rectangular-shaped partitions) on a screen of display device 24.
Partitioning module 27 may be loaded from storage medium 26 and executed by processors 22. When executed, partitioning module 27 may also analyze graphics data, which may be included within state/performance information 32, for one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions. For example, partitioning module 27 may analyze one or more polygons that are used to create graphics images for display on display device 24, and determine which ones of these polygons may span across multiple partitions, as will be described in more detail below.
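One illustrative way such an analysis could be carried out is to test each polygon's bounding box against each partition rectangle and count the overlaps; a count greater than one indicates a polygon that spans multiple partitions. The names below are hypothetical, and the polygon is assumed to be non-empty.

```cpp
// Illustrative sketch: determine how many partitions a polygon touches by
// testing its bounding box against each partition rectangle. A polygon that
// overlaps more than one partition would be split before rendering.
#include <algorithm>
#include <vector>

struct Point { float x, y; };
struct Rect  { float x0, y0, x1, y1; };   // partition bounds in screen space

static Rect boundingBox(const std::vector<Point>& polygon) {
    Rect box{polygon[0].x, polygon[0].y, polygon[0].x, polygon[0].y};
    for (const Point& p : polygon) {
        box.x0 = std::min(box.x0, p.x);  box.y0 = std::min(box.y0, p.y);
        box.x1 = std::max(box.x1, p.x);  box.y1 = std::max(box.y1, p.y);
    }
    return box;
}

int partitionsSpanned(const std::vector<Point>& polygon,
                      const std::vector<Rect>& partitions) {
    const Rect box = boundingBox(polygon);
    int count = 0;
    for (const Rect& bin : partitions) {
        const bool overlaps = box.x0 < bin.x1 && box.x1 > bin.x0 &&
                              box.y0 < bin.y1 && box.y1 > bin.y0;
        if (overlaps) ++count;
    }
    return count;   // a result greater than 1 indicates a spanning polygon
}
```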
Storage medium 26 further includes a navigation module 29, which may also be executed by processors 22. Simulation application 28, during execution, may use navigation module 29 to display a navigation controller on display device 24. A user, such as an application developer, may interact with this navigation controller to view a modified perspective view of graphics images within a scene that is displayed on display device 24. Partitioning module 27 may then display a graphical representation of partitions that overlay the modified perspective view of the graphics images to graphically divide the modified scene. Partitioning module 27 may also then analyze one or more polygons that are used to create the graphics images in the modified perspective view to determine which ones of the polygons may span across multiple partitions.
As shown in
Control processor 10 may control one or more aspects of the flow of data or instruction execution through the pipeline, and may also provide geometry information for a graphics image to vertex processor 14. Vertex processor 14 may manage vertex transformation or geometry processing of the graphics image, which may be described or defined according to multiple vertices in primitive geometry form. Vertex processor 14 may provide its output to graphics processor 12, which may perform rendering or rasterization operations on the graphics image. Graphics processor 12 may provide its output to display processor 16, which prepares the graphics image, in pixel form, for display. Graphics processor 12 may also perform various operations on the pixel data, such as shading or scaling.
Often, graphics image data may be processed in this processing pipeline during execution of graphics instructions 30, which may be part of application instructions 21 (
In some cases, one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16 may have performance issues, or serve as potential bottlenecks within the processing pipeline, during the execution of graphics instructions 30. In these cases, overall performance within graphics processing system 4 may deteriorate, and the application developer may wish to make changes to graphics instructions 30 to improve performance. However, the developer may not necessarily know which ones of processors 10, 12, 14, or 16 may be the ones that have performance issues. These performance issues may include, for example, issues related to processor usage or utilization for one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16.
In particular, binning-based operations, in which primitive graphics data is divided up across multiple binning partitions prior to rendering, may often create certain performance issues. For example, if a polygon (such as triangle 146 shown in the example of
To assist in identifying performance bottlenecks and potential solutions, graphics driver 18A of graphics device 2 may capture, or collect, graphics instructions 30 from graphics processing system 4 and route them to application computing device 20, as shown in
Various forms of state data may be included within state/performance information 32. For example, the state data may include graphics data used during execution of, or otherwise associated with, graphics instructions 30. The state data may be related to a vertex array, such as position, color, coordinates, size, or weight data. State data may further include texture state data, point state data, line state data, polygon state data, culling state data, alpha test state data, blending state data, depth state data, stencil state data, or color state data. As described previously, state data may include both state information and actual data. In some cases, the state data may comprise data associated with one or more OpenGL tokens.
Various forms of performance data may also be included within state/performance information 32. In general, this performance data may include metrics or hardware counter data from one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16. The performance data may include frame rate or cycle data. The cycle data may include data for cycles used for profiling, command arrays, vertex and index data, or other operations. In various aspects, various forms of state and performance data may be included within state/performance information 32 that is collected from graphics processing system 4 by graphics driver 18A.
As described previously, application computing device 20 may display a representation of a graphics image according to received graphics instructions 30 and state/performance information 32. Application computing device 20 may also display a visual representation of state/performance information 32. By viewing and interacting with the re-created graphics image and/or the visual representation of the state/performance information 32, an application developer may be able to quickly identify and resolve performance issues within graphics processing system 4 of graphics device 2 during execution of graphics instructions 30. For example, the application developer may be able to identify which specific ones of processors 10, 12, 14, and/or 16 may have performance issues.
In addition, graphics driver 18A also provides mapping and/or partitioning information 33 to application computing device 20. As described previously in reference to
In an attempt to identify a workaround or resolution to any identified performance issues, the developer may initiate one or more requested modifications 34 on application computing device 20. For example, the developer may interact with the re-created image or the representation of state/performance information 32 to create the requested modifications 34. In some cases, the developer may even directly change the state/performance information 32, as described in more detail below, to generate the requested modifications 34. In certain cases, requested modifications 34 may include one or more requests to disable execution of one or more of graphics instructions 30 in graphics processing system 4 of graphics device 2, or requests to modify one or more of graphics instructions 30.
In some cases, the user may interact with a navigation controller displayed on display device 24 to request that a modified perspective view of a graphics scene be displayed. Navigation module 29 may manage the display of and interaction with this navigation controller. Any requests entered by the user via a user interface may be included with requested modifications 34. These requests may include, for example, requests to rotate one or more graphics images within the scene, requests to zoom in, requests to zoom out, or other similar requests to change a perspective view of images within the scene.
Requested modifications 34 are sent from application computing device 20 to graphics driver 18A, which handles the requests for graphics device 2 during operation. In many cases, the requested modifications 34 may include requests to modify state information, which may include data, within one or more of processors 10, 12, 14, or 16 within graphics processing system 4 during execution of graphics instructions 30. Graphics driver 18A may then implement the changes within graphics processing system 4 that are included within requested modifications 34. These changes may alter the flow of execution among processors 10, 12, 14, and/or 16 for execution of graphics instructions 30. In certain cases, one or more of graphics instructions 30 may be disabled during execution in graphics processing system 4 according to requested modifications 34.
Graphics driver 18A is capable of sending updated instructions and/or information 35 to application computing device 20 in response to the processing of requested modifications 34. Updated instructions/information 35 may include updated state information collected from graphics processing system 4 by graphics driver 18A, including performance information. Updated instructions/information 35 may include updated graphics instructions and/or graphics data.
Application computing device 20 may use updated instructions/information 35 to display an updated representation of the graphics image, as well as a visual representation of updated instructions/information 35. The application developer may then be capable of assessing whether the previously identified performance issues have been resolved or otherwise addressed. For example, the application developer may be able to analyze the updated image, as well as the visual representation of updated instructions/information 35 to determine if certain textures, polygons, or other features have been optimized, or if other performance parameters have been improved.
Updated instructions/information 35 may also include updated mapping and/or partitioning information, such as an updated mapping of graphics data to instructions that are also included within instructions/information 35. If an updated perspective view of a scene is displayed on display device 24 as a result of updated instructions/information 35, partitioning module 27 may display a graphical representation of partitions that overlay the modified perspective view and that graphically divide the modified scene. Partitioning module 27 may also analyze graphics data for the modified perspective view (which may also be included within updated instructions/information 35) to determine which portions of the graphics data are associated with multiple ones of the partitions. The partitions may be determined based upon rendering operations performed on the graphics data.
In such fashion, the application developer may be able to rapidly and effectively debug or analyze execution of graphics instructions 30 within an environment on application computing device 20 that simulates the operation of graphics processing system 4 on graphics device 2. The developer may iteratively interact with the displayed image and state/performance information on application computing device 20 to analyze multiple graphics images in a scene or multiple image frames to maximize execution performance of graphics instructions 30. Examples of such interaction and displayed information on application computing device 20 will be presented in more detail below.
As described previously, control processor 10 may control one or more aspects of the flow of data or instruction execution through the graphics processing pipeline, and may also provide geometry information to vertex processor 14. As shown in
Vertex processor 14 may then obtain the geometry information for a given primitive provided by control processor 10 and/or stored in buffers 15 for processing at 92. In certain cases, vertex processor 14 may manage vertex transformation of the geometry information. In certain cases, vertex processor 14 may perform lighting operations on the geometry information.
Vertex processor 14 may provide its output to graphics processor 12, which may perform rendering or rasterization operations on the data at 94. Graphics processor 12 may provide its output to display processor 16, which prepares one or more graphics images, in pixel form, for display. In some cases, graphics processor 12 may split graphics data for a geometry, such as one or more polygons, based upon determined binning partitions. As described previously, one or more of the processors within graphics device 2, such as graphics processor 12, may create multiple binning partitions that are associated with different screen areas of display 102 based upon certain factors, such as memory requirements or limitations. If a certain geometry (e.g., triangle) spans across multiple partitions, graphics processor 12 may split up the geometry along partition boundaries into fragments, and independently render the fragments. In some cases, graphics processor 12 may provide mapping/partitioning information 33 to application computing device 20 based upon the number, type, size, shape, etc., of the determined partitions.
Display processor 16 may perform various operations on the pixel data, including fragment processing to process various fragments of the data, at 98. In certain cases, this may include one or more of depth testing, stencil testing, blending, or texture mapping, as is known in the art. If graphics processor 12 previously rendered multiple geometry fragments, fragment processing 98 of display processor 16 may then combine the rendered fragments for storage into a frame buffer. When performing texture mapping, display processor 16 may incorporate texture storage and filtering information at 96. In some cases, display processor 16 may perform other operations on the rasterized data, such as shading or scaling operations.
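A simplified, illustrative sketch of this overall binning flow, in which the contents of each partition are rendered separately and then recombined into the frame buffer, is shown below; the rasterization step is reduced to a placeholder, and all names are hypothetical.

```cpp
// Simplified, illustrative flow of binning-based rendering: the fragments
// assigned to each partition are rendered separately into a bin-sized tile,
// and the per-bin results are then recombined into the frame buffer.
// renderBinContents() stands in for the actual rasterization work.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Bin {                       // one binning partition
    int x, y, width, height;       // its screen rectangle, in pixels
};

struct FrameBuffer {
    int width, height;
    std::vector<uint32_t> pixels;  // width * height packed color values
};

// Placeholder: rasterize every primitive fragment assigned to this bin into a
// bin-sized tile. Here it simply produces a cleared tile.
static std::vector<uint32_t> renderBinContents(const Bin& bin) {
    return std::vector<uint32_t>(
        static_cast<std::size_t>(bin.width) * bin.height, 0u);
}

// Recombine a rendered bin tile into its position in the full frame buffer;
// the bin is assumed to lie entirely within the frame.
static void resolveTile(const Bin& bin, const std::vector<uint32_t>& tile,
                        FrameBuffer* frame) {
    for (int row = 0; row < bin.height; ++row) {
        for (int col = 0; col < bin.width; ++col) {
            frame->pixels[(bin.y + row) * frame->width + (bin.x + col)] =
                tile[row * bin.width + col];
        }
    }
}

void renderFrameWithBinning(const std::vector<Bin>& bins, FrameBuffer* frame) {
    for (const Bin& bin : bins) {
        resolveTile(bin, renderBinContents(bin), frame);
    }
}
```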
Display processor 16 provides the output pixel information for storage into a frame buffer at 100. In some cases, the frame buffer may be included within buffers 15 (
As described previously, graphics instructions 30 may be executed by one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16. Application developers may typically not have much knowledge or control of which particular processors within graphics processing system 4 execute which ones of graphics instructions 30. In certain cases, one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16 may have performance issues, or serve as potential bottlenecks within the processing pipeline, during the execution of graphics instructions 30.
It may often be difficult for an application developer to pinpoint the location of a bottleneck, or how best to resolve or mitigate the effects of such a bottleneck. Thus, in one aspect, graphics instructions 30 and/or state information may be provided from graphics device 2 to an external computing device, such as application computing device 20. The state information may include data from one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16 with respect to various operations, such as those shown in
Graphics driver 18A, when executed, includes various functional blocks, which are shown in
Processor usage module 112 collects and maintains processor usage information for one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16. The processor usage information may include processor cycle and/or performance information. Cycle data may include data for clock cycles used for profiling, command arrays, vertex and index data, or other operations. Processor usage module 112 may then provide such processor usage information to application computing device 20 via transport interface module 110. In some cases, processor usage module 112 provides this information to device 20 as it receives the information, in an asynchronous fashion. In other cases, processor usage module 112 may provide the information upon receipt of a request from device 20.
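The per-processor usage record that such a module might maintain could, for example, resemble the following; the field names and the summary calculation are hypothetical and are not taken from any actual driver interface.

```cpp
// Hypothetical sketch of the per-processor usage data such a module might
// collect and report; the fields and names are illustrative only.
#include <cstdint>
#include <string>
#include <vector>

struct ProcessorUsageSample {
    std::string processor;      // e.g., "control", "vertex", "graphics", "display"
    uint64_t busyCycles;        // cycles spent executing work this frame
    uint64_t idleCycles;        // cycles spent idle this frame
    double utilizationPercent;  // busy / (busy + idle) * 100
};

// Fill in the utilization percentage for each collected sample.
std::vector<ProcessorUsageSample> summarizeFrameUsage(
        const std::vector<ProcessorUsageSample>& rawSamples) {
    std::vector<ProcessorUsageSample> out = rawSamples;
    for (ProcessorUsageSample& s : out) {
        const uint64_t total = s.busyCycles + s.idleCycles;
        s.utilizationPercent = total ? 100.0 * s.busyCycles / total : 0.0;
    }
    return out;
}
```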
Hardware counter module 114 collects and maintains various hardware counters during execution of instructions by one or more of control processor 10, graphics processor 12, vertex processor 14, or display processor 16. The counters may keep track of various state indicators and/or metrics with respect to instruction execution within graphics processing system 4. Hardware counter module 114 may provide information to device 20 asynchronously or upon request.
State/performance data module 116 collects and maintains other state and/or performance data for one or more of control processor 10, graphics processor 12, vertex processor 14, and display processor 16 in graphics processing system 4. For example, the state data may, in some cases, comprise graphics data. The state data may include data related to a vertex array, such as position, color, coordinates, size, or weight data. State data may further include texture state data, point state data, line state data, polygon state data, culling state data, alpha test state data, blending state data, depth state data, stencil state data, or color state data. Performance data may include various other metrics or cycle data. State/performance data module 116 may provide information to device 20 asynchronously or upon request.
Mapping/partitioning module 117 collects mapping and/or partitioning information 33 from one or more of control processor 10, graphics processor 12, vertex processor 14, and display processor 16, and may also collect information from graphics data mapping information 23 (
API trace module 118 manages a flow and/or trace of graphics instructions that are executed by graphics processing system 4 and transported to application computing device 20 via transport interface module 110. As described previously, graphics device 2 provides a copy of graphics instructions 30, which are executed by graphics processing system 4 in its processing pipeline, to device 20. API trace module 118 manages the capture and transport of these graphics instructions 30. API trace module 118 may also provide certain information used with instruction mapping information 31 (
Override module 120 allows graphics driver 18A to change, or override, the execution of certain instructions within graphics processing system 4. As described previously, application computing device 20 may send one or more requested modifications, such as modifications 34, to graphics device 2. In certain cases, requested modifications 34 may include one or more requests to disable execution of one or more of graphics instructions 30 in graphics processing system 4, or requests to modify one or more of graphics instructions 30. In some cases, requested modifications 34 may include requests to change state/performance information 32.
Override module 120 may accept and process requested modifications 34. For example, override module 120 may receive from device 20 any requests to modify one or more of graphics instructions 30, along with any requests to modify state/performance information 32, and send such requests to graphics processing system 4. One or more of control processor 10, graphics processor 12, vertex processor 14, and display processor 16 may then process these requests and generate updated instructions/information 35. Override module 120 may then send updated instructions/information 35 to application computing device 20 for processing, as described previously.
In such fashion, graphics driver 18A provides an interface between graphics device 2 and application computing device 20. Graphics driver 18A is capable of providing graphics instructions and state/performance information 32 to application computing device 20, and also receiving requested modifications 34 from application computing device 20. After processing such requested modifications 34, graphics driver 18A is subsequently able to provide updated instructions/information 35 back to application computing device 20.
In the example of
On the other hand, polygons 144 and 146 span across multiple partitions. Polygon 144 spans across all four partitions 132, 134, 136, and 138, while polygon 146 spans across two of the partitions 136 and 138. In order to render polygon 144, graphics processor 12 may split polygon 144 into four constituent fragments 144A, 144B, 144C, and 144D (shown in
After these fragments 144A, 144B, 144C, and 144D have been independently rendered, display processor 16 may need to combine the rendered images for each of these fragments in order to display an accurate graphical representation of polygon 144. These separate rendering and combining operations may cause performance overhead.
Similarly, in order to render polygon 146, graphics processor 12 may split polygon 146 into two constituent fragments 146A and 146B (shown in
The information shown in
When an application developer views the information displayed within window 130, the developer is able to obtain an idea of which polygons may be split by the hardware because they span across multiple partitions, and also where such partitions are located. The developer may be able to use this information to determine an optimized configuration or location of certain graphics data within a graphics application, such as an application that uses application instructions 21 (
Because the developer is presented with a representation of the partitions that overlay the graphics images within window 130, as these partitions are defined by graphics device 2, the developer may better understand how to define, configure, or locate polygons 144 and 146 such that they do not span across multiple partitions, or such that they span across only a minimal number of partitions. In some cases, the developer may determine to re-define a polygon as sub-polygons, such that they may not need to be combined by display processor 16 after rendering. For example, the developer may re-define polygon 146 in a modified version of application instructions 21 as two separate polygons 146A and 146B, as shown in
Within screen area 150, the eight partitions are partitions 152, 154, 156, 158, 160, 162, 164, and 166. Each of these partitions is rectangular in shape. If, for purposes of illustration, screen area 150 is substantially the same size in area as screen area 130 (
Application instructions 21 may again, in the example of
In one aspect, a graphical representation of partitions 152, 154, 156, 158, 160, 162, 164, and 166 may be displayed to an application developer on display device 24. Any graphical display of such partitions that overlay graphics images, such as representations of polygons 140, 142, 144, and 146, may be quite useful to the developer. Often, the developer will have little information or idea on the number, type, shape, size, etc., of the partitions that are created and used by any individual device, such as graphics device 2. By being able to view a graphical representation of such partitions overlaid upon graphics images in a scene, the developer obtains a better idea of which graphics images or primitive graphics data, for example, may span across multiple partitions, and may therefore have certain rendering performance overhead. As a result, the developer may be able to redefine, reconfigure, resize, or otherwise change the graphics data generated and manipulated by a graphics application, such as one that includes application instructions 21.
Application computing device 20 may further receive state and/or performance information 32 from graphics device 2 (174). State/performance information 32 is associated with execution of graphics instructions 30 on graphics device 2. State/performance information 32 may include state information that indicates one or more states of graphics device 2 as it renders a graphics image. The state information may include state information from one or more processors of graphics device 2 that execute graphics instructions 30, such as control processor 10, graphics processor 12, vertex processor 14, and/or display processor 16. In some cases, the state information may comprise graphics data, such as primitive polygon data that is used by graphics processor 12 to render graphics image data.
Application computing device 20 may display a representation of one or more graphics images based on graphics instructions 30 and the state/performance information 32 in a graphical scene (176). In such fashion, application computing device 20 is capable of displaying a representation of these graphics images within a simulated environment that simulates graphics device 2. The simulated environment may be provided via execution of simulation application 28 on processors 22 of application computing device 20.
Application computing device 20 may display a graphical representation of partitions that overlay the graphics images and that graphically divide the scene (178). For example, application computing device 20 may display a graphical representation of the partitions shown in
In addition, application computing device 20 may analyze graphics data for the displayed graphics images and determine which portions are associated with multiple partitions (180). For example, application computing device 20 may analyze graphics primitives, such as polygon data used to generate or render the displayed graphics images, and determine which polygons (e.g., triangles) span across multiple partitions.
The receiving of the graphics instructions (172), receiving of the state information (174), displaying the representation of the graphics image (176), displaying of the partitions (178), and the analyzing of the graphics data (180) may be repeated for multiple image frames of the one or more graphics images if there are more frames to process (182). In this fashion, application computing device 20 is capable of displaying both still and moving graphics images (including 3D images) on display device 24, and displaying a graphical representation of partitions that overlay the images and graphically divide the scene. As the graphics images change, or as alternate perspective views of the images are shown, the application developer can continuously ascertain the relationship between the graphics data associated with the images and the location of the partitions.
Application computing device 20 may receive mapping/partitioning information 33 from an external graphics device, such as graphics device 2 (190). Application computing device 20 may also display a perspective view of one or more graphics images on its display device 24 (191). For example, application computing device 20 may display a perspective view of graphics images based upon received graphics instructions 30 and/or state/performance information 32.
Application computing device 20 may display a graphics representation of partitions that overlay the graphics images on display device 24 based upon the received mapping/partitioning information 33 (192). Application computing device 20 may also analyze graphics data for the graphics images, such as graphics data included within state/performance information 32, to determine which portions of the graphics data are associated with multiple ones of the partitions. For example, the graphics data may comprise a plurality of graphics primitives, such as triangles. Application computing device 20 may determine which ones of the triangles span across multiple ones of the partitions (193). These triangles may comprise triangles that have been at least partially rendered in multiple partitions.
In one aspect, application computing device 20 displays a graphical representation of the triangles that span across multiple ones of the partitions on display device 24 in conjunction with displaying the graphical representation of the partitions. In some cases, application computing device 20 may display a graphical indication, such as a color, for each triangle that spans across multiple partitions (194).
For example, application computing device 20 may, in one aspect, display a “heat map” representation of the triangles on display device 24, where each triangle has an associated graphical indicator, such as a color. In addition to color, other forms of graphical indicators (e.g., dashed lines, blinking indicators, highlighted indicators) may be used in certain scenarios to distinguish triangles from one another. Triangles that do not span across multiple partitions may be displayed in one color (e.g., blue). Triangles that span across multiple partitions (e.g., two to three partitions) may be displayed in a second color (e.g., purple). Triangles that span across more than three partitions may be prominently displayed in a third color (e.g., red). Thus, in this example, an application developer can quickly determine which triangles span across multiple partitions, and which ones span across more partitions than others. The developer may be able to use this information to determine how to reconfigure, redefine, or otherwise restructure triangles that span across multiple partitions to reduce performance (e.g., rendering) overhead.
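The color assignment in this example could be expressed, for instance, as a simple threshold function over the number of partitions a triangle spans; the thresholds below mirror the example colors above, and the names are illustrative.

```cpp
// Illustrative mapping from the number of partitions a triangle spans to a
// "heat map" color, mirroring the example above: one partition is blue, two
// or three partitions are purple, and more than three partitions are red.
#include <cstdint>

enum class HeatColor : uint8_t { Blue, Purple, Red };

HeatColor heatColorForSpanCount(int partitionsSpanned) {
    if (partitionsSpanned <= 1) return HeatColor::Blue;    // no split needed
    if (partitionsSpanned <= 3) return HeatColor::Purple;  // moderate overhead
    return HeatColor::Red;                                 // heaviest overhead
}
```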
Application computing device 20 may use navigation module 29 (
Application computing device 20 may then display a modified perspective view of the graphics images in a modified graphics scene based upon the user input to the navigation controller. For example, the developer may interact with the navigation controller to rotate around a scene of images, to zoom in or zoom out of the scene, or to otherwise change a perspective view of the scene, which may then display a modified perspective view of images within the modified scene, including new images. The user input provided to the navigation controller may be sent back to graphics device 2 as requested modifications 34, and the display of the updated perspective view may be based upon the updated instructions/information 35 provided by graphics device 2 back to application computing device 20. In one aspect, requested modifications 34 may include at least one of a request to disable execution of one or more of graphics instructions 30 on graphics device 2, a request to modify one or more of graphics instructions 30 on graphics device 2, and a request to modify state/performance information 32 on graphics device 2.
In one aspect, application computing device 20 may also display a graphical representation of the partitions that overlay the modified perspective view of the graphics images and that graphically divide the modified scene. Application computing device 20 may analyze graphics data for the modified perspective view of the graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
The displaying of a perspective view of graphics image(s) (191), displaying of partitions that overlay the graphics image(s) (192), determining which primitive triangle(s) span across multiple partitions (193), displaying of a graphical indication for each determined triangle (194), and receiving user input via a navigation controller to modify a perspective view of a scene (195) may be repeated for multiple perspective views of the scene (196). As the graphics images change, or as alternate perspective views of the images are shown, the application developer can continuously ascertain the relationship between the graphics data associated with the images and the location of the partitions.
As described previously, graphics device 200 is capable of displaying 3D graphics image 202 (which is a cube in the example of
As shown in the example of
Graphics instructions area 208 includes a visual representation of one or more graphics instructions that have been received from graphics device 200. As described previously, the visual representation of such instructions need not reproduce the instructions in the form in which they were received. For example, if graphics device 200 sends binary graphics instructions, display device 201 may display a representation of such binary instructions in another form, such as higher-level application programming interface (API) instructions (e.g., OpenGL instructions). Mapping information (such as mapping information 31 shown in
State/performance information area 214 includes a visual representation of state and/or performance information that has been received from graphics device 200. The received graphics instructions and state/performance information may be used to display 3D graphics image 210 within display area 211. In one aspect, graphics device 200 may utilize a graphics driver that implements a state/performance data module (such as state/performance data module 116 shown in
Window 203 also includes one or more selectors 212A-212N. A user may select any of these selectors 212A-212N. Each selector 212A-212N may be associated with different functions, such as statistical and navigation functions, as will be described in more detail below. Window 203 further includes selectors 216A-216N and 218A-218N, each of which may be selected by a user. Each selector 216A-216N and 218A-218N may also be associated with different functions, such as metric functions, override functions, and/or texture functions, as will be described in more detail below.
A user, such as an application developer, may change information displayed within window 203. For example, the user may modify one or more of the instructions displayed within graphics instructions area 208, or any of the state/performance information within state/performance information area 214.
Any changes initiated by the user within window 203 may then be sent back to graphics device 200 as requested modifications. Graphics device 200 may then process these modifications, and provide updated instructions and/or information which may then be displayed within graphics instructions area 208 and/or state/performance information area 214. The updated instructions and/or information may also be used to display a modified version of 3D graphics image 210 within display area 211.
In one aspect, the state and/or performance information that may be displayed within area 214 may be analyzed by the computing device that includes display device 201 (such as application computing device 20, described above).
In one aspect, window 203 may display a report on the bottlenecks encountered in the call-stream of the graphics instructions received from graphics device 200, and may also display possible workarounds. In some cases, these possible workarounds may be presented as “what-if” scenarios to the user. For example, rendering a non-optimized triangle-list in a call-stream may be presented as one possible scenario, while pre-processing that list through a triangle-strip optimization framework may be presented as a second possible scenario. The user may select any of these possible workaround scenarios as requested modifications, and the requested modifications are then transmitted back to graphics device 200, where the performance may be measured. Graphics device 200 then sends updated instructions/information, which may be presented within graphics instructions area 208 and/or state/performance information area 214. The user can then view and compare the results for the different potential workarounds to identify an optimal solution. The user can use this process to quickly identify a series of steps that can be taken to remove bottlenecks from the application.
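As a simple illustration of why the triangle-strip scenario may be attractive (the figures below are arithmetic only and assume an ideal single strip, not measured results):

    #include <cstdio>

    int main() {
        const int triangles = 1000;
        const int listVertices  = 3 * triangles;  // a triangle list submits 3 vertices per triangle
        const int stripVertices = triangles + 2;  // an ideal single triangle strip reuses vertices
        std::printf("triangle list : %d vertices submitted\n", listVertices);
        std::printf("triangle strip: %d vertices submitted (ideal single strip)\n", stripVertices);
        return 0;
    }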
The user may iteratively continue to make adjustments within window 203 for purposes of experimentation, or trial-and-error debugging. The user may experiment with various different forms or combinations of graphics instructions and state/performance information to identify changes in the images or scenes that are displayed within display area 211. The user can use the simulation environment provided by the contents of window 203 to interactively view and modify the graphics instructions, which may be part of a call-stream, and the states provided by graphics device 200, without having to recompile any source code and re-execute the compiled code on graphics device 200.
In some cases, the user may manipulate one or more of buttons 212A-212N to manipulate a graphical navigation controller, such as a graphical camera, to modify a perspective view of graphics image 210. Such manipulation may be captured as requested modifications that are then sent back to graphics device 200. The updated instructions/information provided by graphics device 200 are then used to modify the perspective view of graphics image 210.
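A minimal sketch of how rotate and zoom input from such a controller could produce a modified viewpoint; the orbit-camera model and the function orbitEye are assumptions chosen for illustration, not the disclosed controller:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Orbit the eye around the vertical axis at a given distance (zoom) and angle (rotation).
    static Vec3 orbitEye(float yawRadians, float distance) {
        return {distance * std::sin(yawRadians), 0.0f, distance * std::cos(yawRadians)};
    }

    int main() {
        float yaw = 0.0f, distance = 5.0f;
        yaw += 0.25f;      // user rotates the scene via the navigation controller
        distance *= 0.8f;  // user zooms in
        Vec3 eye = orbitEye(yaw, distance);
        std::printf("new eye position: (%.2f, %.2f, %.2f)\n", eye.x, eye.y, eye.z);
        return 0;
    }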
In some cases, various texture and/or state information may be provided in area 214 of window 203 as modifiable entities. In addition, a user may even select, for example, a pixel of graphics image 210 within display area 211, such that one or more corresponding instructions within graphics instructions area 208 are identified. In this fashion, a user can effectively drill backwards to a rendering instruction or call that was used to render or create that pixel or other portions of graphics image 210. Because display device 201 may re-create image 210 in window 203 exactly as it is presented on graphics device 200, the user is able to quickly isolate issues in the application (which may be based on the various graphics instructions displayed in graphics instructions area 208), and modify any states within state/performance area 214 to prototype new effects.
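One possible way to associate a selected pixel with the call that rendered it is an identifier buffer; the following sketch assumes that approach purely for illustration (the disclosure does not specify the mechanism, and the buffer layout is hypothetical):

    #include <cstdio>
    #include <vector>

    int main() {
        const int width = 8, height = 8;
        std::vector<int> drawCallId(width * height, -1);  // -1 means no call rendered the pixel

        // Pretend draw call 3 covered a small rectangle of pixels.
        for (int y = 2; y < 5; ++y)
            for (int x = 1; x < 6; ++x)
                drawCallId[y * width + x] = 3;

        const int selX = 4, selY = 3;  // pixel selected by the developer
        std::printf("pixel (%d,%d) was last rendered by draw call %d\n",
                    selX, selY, drawCallId[selY * width + selX]);
        return 0;
    }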
In one aspect, display device 201 is also capable of displaying partitioning information, as well as polygon data that may span across multiple partitions. For example, the application developer may select a button, such as one of buttons 212A-212N, to cause display device 201 to display a graphical representation of partitions (e.g., rectangular-shaped partitions) that overlay image 210 and graphically divide the scene in display area 211. In some cases, when device 200 is part of graphics device 2, the displayed partitions may be based on received mapping/partitioning information 33 (described above).
For example, within graphics instructions area 208, various graphics instructions 242 are shown. Graphics instructions 242 may be a subset of graphics instructions that are provided by graphics device 200. For example, if graphics device 200 is part of graphics device 2, graphics instructions 242 may be a subset of graphics instructions 30. In some cases, mapping information (such as mapping information 31, described above) may be used to map the instructions received from graphics device 200 to the form in which graphics instructions 242 are displayed.
In the example shown, display device 201 displays a window 220 that, like window 203, includes graphics instructions area 208, display area 211, and state/performance information area 214.
Various selection buttons are shown below state/performance information area 214 in this example. These selection buttons may include metric buttons 234A-234N, each of which may be associated with a different metric related to the processing or display of graphics image 210, as well as a textures button 236 and an override button 238.
For example, if metric button 234A is associated with the number of frames per second, the application developer may select metric button 234A to view additional details on the number of frames per second (related to performance) for graphics image 210, or for selected portions of graphics image 210. The developer may, in some cases, select metric button 234A, or drag metric button 234A into state/performance information area 214. The detailed information on the number of frames per second may be displayed within state/performance information area 214. The developer also may drag metric button 234A into display area 211, or select a portion of graphics image 210 for application of metric button 234A. For example, the developer may select a portion of graphics image 210 after selecting metric button 234A, and then detailed information on the number of frames per second for that selected portion may be displayed within state/performance information area 214. In such fashion, the developer may view performance data for any number of different metric types based upon selection of one or more of metric buttons 234A-234N, and possibly selection of graphics image 210 (or a portion thereof).
In one aspect, metric data that may be displayed within window 220 may be provided by a graphics driver (e.g., graphics driver 18, described above).
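A minimal sketch of one such metric, frames per second, counted at an assumed instrumentation point (the loop and the sleep call merely stand in for rendering work; this is not the actual driver hook):

    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        using clock = std::chrono::steady_clock;
        auto windowStart = clock::now();
        int framesInWindow = 0;

        for (int frame = 0; frame < 300; ++frame) {
            std::this_thread::sleep_for(std::chrono::milliseconds(10));  // stand-in for rendering work
            ++framesInWindow;
            auto now = clock::now();
            if (now - windowStart >= std::chrono::seconds(1)) {
                std::printf("frames per second: %d\n", framesInWindow);
                framesInWindow = 0;
                windowStart = now;
            }
        }
        return 0;
    }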
The developer may, in some cases, also select textures button 236. Upon selection, various forms of texture information related to graphics image 210 may be displayed by display device 201. For example, texture information may be displayed within window 220, such as within state/performance information area 214. In some cases, the texture information may be displayed within an additional (e.g., pop-up) window (not shown). The developer may view the displayed texture information, but may also, in some cases, modify the texture information. In these cases, any modifications to the texture information may be propagated back to graphics device 200 as requested modifications. Upon receipt of updated instructions/information from graphics device 200, changes to graphics image 210 may be displayed within display area 211.
The developer may, in some cases, also select override button 238. After selection of override button 238, certain information, such as instruction and/or state information, may be displayed (e.g., within window 220 or another window), which may be modified, or overridden, by the developer. Any modifications or overrides may be included within one or more requested modifications that are sent to graphics device 200. In one aspect, graphics device 200 may implement a graphics driver, such as graphics driver 18A (described above), that processes these requested modifications and provides updated instructions/information in response.
In some cases, the developer may override one or more of graphics instructions 242 that are shown within graphics instructions area 208. In these cases, the developer may type or otherwise enter information within graphics instructions area 208 to modify or override one or more of graphics instructions 242. These modifications may then be sent to graphics device 200, which will provide updated instructions/information to update the display of graphics image 210 within display area 211. The developer may change, for example, parameters, ordering, type, etc., of graphics instructions 242 to override one or more functions that are provided by instructions 242. In one aspect, mapping information 31 (described above) may be used to map any overridden instructions back to the form in which they are executed on graphics device 200.
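As a hedged illustration of how such an instruction override might be recorded before being sent back to graphics device 200 (the structure InstructionOverride, its fields, and the example call strings are hypothetical):

    #include <cstdio>
    #include <string>
    #include <vector>

    struct InstructionOverride {
        int callIndex;               // position of the call in the displayed call-stream
        std::string originalCall;    // instruction as currently displayed
        std::string overriddenCall;  // instruction as edited by the developer
    };

    int main() {
        std::vector<InstructionOverride> requestedModifications = {
            {42, "glDrawElements(GL_TRIANGLES, ...)", "glDrawElements(GL_TRIANGLE_STRIP, ...)"},
        };
        for (const auto& mod : requestedModifications) {
            std::printf("override call %d: %s -> %s\n",
                        mod.callIndex, mod.originalCall.c_str(), mod.overriddenCall.c_str());
        }
        return 0;
    }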
In some cases, the developer may also select override button 238 to override one or more functions associated with the processing pipeline that is implemented by graphics device 200.
Window 220 further includes selection buttons 231 and 232. Selection button 231 is a partition button, and selection button 232 is a navigation button. The developer may select partition button 231 to view a graphical representation of partitions, such as rectangular-shaped partitions, that overlay graphics image 210 and graphically divide the scene displayed in display area 211. Upon user selection of partition button 231, the graphical partitions may be displayed in display area 211.
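A minimal sketch of where such a partition grid overlay could be drawn, assuming uniform rectangular partitions of fixed size (the screen and partition dimensions below are examples only):

    #include <cstdio>

    int main() {
        const int screenWidth = 1024, screenHeight = 768;
        const int binWidth = 256, binHeight = 256;  // assumed partition dimensions

        std::printf("vertical grid lines at x =");
        for (int x = binWidth; x < screenWidth; x += binWidth) std::printf(" %d", x);
        std::printf("\nhorizontal grid lines at y =");
        for (int y = binHeight; y < screenHeight; y += binHeight) std::printf(" %d", y);
        std::printf("\n");
        return 0;
    }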
Display area 211, or a separate display area or window, may also display information based upon an analysis of graphics data for graphics image 210 that determines which portions of the data are associated with multiple partitions. For example, display area 211, or a separate display area or window, may indicate which of the polygons used to render graphics image 210 span across multiple partitions, in conjunction with the graphical representation of the partitions. In some cases, a graphical indication, such as a color, may be displayed for each polygon (e.g., triangle) that spans across multiple partitions.
For example, in one aspect, a “heat map” may be displayed, where each triangle is displayed in a particular color. Triangles that do not span across multiple partitions may be displayed in one color (e.g., blue). Triangles that span across multiple partitions (e.g., two to three partitions) may be displayed in a second color (e.g., purple). Triangles that span across more than three partitions may be prominently displayed in a third color (e.g., red). Thus, in this example, an application developer can quickly determine which triangles span across multiple partitions, and which ones span across more partitions than others. The developer may be able to use this information to determine how to reconfigure, redefine, or otherwise restructure triangles that span across multiple partitions to reduce performance (e.g., rendering) overhead when generating graphics image 210.
The developer may also select navigation button 232 to navigate within display area 211, and even possibly to change a perspective view of graphics image 210 within display area 211. For example, upon selection of navigation button 232, a 3D graphical camera or navigation controller may be displayed. The developer may interact with the controller to navigate to any area within display area 211. The developer may also use the controller to change a perspective view of graphics image 210, such as by rotating graphics image 210 or zooming in/out.
In one aspect, any developer-initiated changes through selection of navigation button 232 and interaction with a graphical navigation controller may be propagated back to graphics device 200 as requested modifications (e.g., as part of requested modifications 84, described above). Graphics device 200 may then provide updated instructions/information, which may be used to display a modified perspective view of graphics image 210 within display area 211.
In one aspect, graphical partitions may be displayed and overlaid upon a modified perspective view of graphics image 210. In addition, graphics data contained within the updated instructions/information for the modified perspective view of graphics image 210 may be analyzed to determine which portions of the data are associated with multiple partitions.
As a result, the developer may effectively and efficiently determine how alternate perspectives, orientations, views, etc., for rendering and displaying graphics image 210 may affect the performance and state of graphics device 200. This may be very useful to the developer in optimizing graphics instructions 242, which are used to create and render graphics image 210 in the simulation environment displayed on display device 201 and, effectively, graphics image 202 that is displayed on graphics device 200. In one aspect, any changes in the position, perspective, orientation, etc., of graphics image 210, based upon developer-initiated selections and controls within window 220, may also be seen as changes to graphics image 202 that may be displayed on graphics device 200 during the testing process.
Through interaction with graphical window 220 within a graphical user interface, the application developer can attempt to identify performance issues and/or bottlenecks during execution of graphics instructions 242, which are a visual representation of graphics instructions that are executed by graphics device 200 to create graphics image 202. A representation of graphics image 202 (i.e., graphics image 210) is displayed within display area 211 based upon graphics instructions 242 and state/performance data received from graphics device 200. By viewing graphics instructions 242, graphics image 210, and the state/performance information, as well as the effects of user-initiated modifications to one or more of these, an application developer can interactively and dynamically engage in a trial-and-error, or debugging, process to optimize the execution of instructions on graphics device 200, and to eliminate or mitigate any performance issues (e.g., bottlenecks) during instruction execution.
In addition, the visual representation of a graphical scene that includes a number of different graphical partitions may allow a developer to identify portions of the graphics scene that exhibit reduced performance due to costs that may be associated with screen partitioning. The developer may review the partitioning and associated analysis information to investigate alternate compositions of the scene to help reduce these costs and/or related performance overhead.
The techniques described in this disclosure may be implemented within a general purpose microprocessor, digital signal processor (DSP), application specific integrated circuit (ASIC), field programmable gate array (FPGA), or other equivalent logic devices. Accordingly, the terms “processor” or “controller,” as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
The various components illustrated herein may be realized by hardware, software, firmware, or any combination thereof. In the figures, various components are depicted as separate units or modules. However, all or several of the various components described with reference to these figures may be integrated into combined units or modules within common hardware and/or software. Accordingly, the representation of features as components, units, or modules is intended to highlight particular functional features for ease of illustration, and does not necessarily require realization of such features by separate hardware or software components. In some cases, various units may be implemented as programmable processes performed by one or more processors.
Any features described herein as modules, devices, or components, including graphics device 100 and/or its constituent components, may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. In various aspects, such components may be formed at least in part as one or more integrated circuit devices, which may be referred to collectively as an integrated circuit device, such as an integrated circuit chip or chipset. Such circuitry may be provided in a single integrated circuit chip device or in multiple, interoperable integrated circuit chip devices, and may be used in any of a variety of image, display, audio, or other multi-media applications and devices. In some aspects, for example, such components may form part of a mobile device, such as a wireless communication device handset.
If implemented in software, the techniques may be realized at least in part by a computer-readable medium comprising code with instructions that, when executed by one or more processors, perform one or more of the methods described above. The computer-readable medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), embedded dynamic random access memory (eDRAM), static random access memory (SRAM), flash memory, or magnetic or optical data storage media.
The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by one or more processors. Any connection may be properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Combinations of the above should also be included within the scope of computer-readable media. Any software that is utilized may be executed by one or more processors, such as one or more DSP's, general purpose microprocessors, ASIC's, FPGA's, or other equivalent integrated or discrete logic circuitry.
Various aspects have been described herein. These and other aspects are within the scope of the following claims.
The present Application for Patent claims priority to Provisional Application No. 61/083,659 entitled PARTITIONING-BASED PERFORMANCE ANALYSIS FOR GRAPHICS IMAGING filed Jul. 25, 2008, and assigned to the assignee hereof and hereby expressly incorporated by reference herein. The present Application for Patent is related to the following co-pending U.S. patent applications: 61/083,656 filed Jul. 25, 2008, having Attorney Docket No. 080967P1, filed concurrently herewith, assigned to the assignee hereof, and expressly incorporated by reference herein; and 61/083,665 filed Jul. 25, 2008 having Attorney Docket No. 080971P1, filed concurrently herewith, assigned to the assignee hereof, and expressly incorporated by reference herein.
Number | Date | Country
61/083,659 | Jul. 25, 2008 | US
61/083,656 | Jul. 25, 2008 | US
61/083,665 | Jul. 25, 2008 | US