The present disclosure relates generally to processing systems and, more particularly, to one or more techniques for graphics processing.
Computing devices often utilize a graphics processing unit (GPU) to accelerate the rendering of graphical data for display. Such computing devices may include, for example, computer workstations, mobile phones such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles. GPUs execute a graphics processing pipeline that includes one or more processing stages that operate together to execute graphics processing commands and output a frame. A central processing unit (CPU) may control the operation of the GPU by issuing one or more graphics processing commands to the GPU. Modern day CPUs are typically capable of concurrently executing multiple applications, each of which may need to utilize the GPU during execution. A device that provides content for visual presentation on a display generally includes a GPU.
Typically, a GPU of a device is configured to perform the processes in a graphics processing pipeline. However, with the advent of wireless communication and smaller, handheld devices, there has developed an increased need for improved graphics processing.
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may be a graphics processing unit (GPU). The apparatus can generate multiple processing units. In some aspects, the multiple processing units can be in a graphics processing pipeline of the GPU. The apparatus can also group the multiple processing units into one or more processing unit clusters. In some aspects, each of the one or more processing unit clusters can include one or more context registers. Also, the apparatus can determine one or more context states of the one or more context registers in each of the one or more processing unit clusters. The apparatus can also implement one or more execution counters in the graphics processing pipeline, where each of the one or more execution counters includes an execution value. Moreover, the apparatus can execute one or more draw call functions at each of the one or more processing unit clusters, where each of the one or more draw call functions is executed by at least one of the multiple processing units. The apparatus can also increase the execution value of one of the one or more execution counters when one of the one or more processing unit clusters starts executing one of the one or more draw call functions. Further, the apparatus can decrease the execution value of one of the one or more execution counters when one of the one or more processing unit clusters finishes executing one of the one or more draw call functions.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
Various aspects of systems, apparatuses, computer program products, and methods are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of this disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of this disclosure is intended to cover any aspect of the systems, apparatuses, computer program products, and methods disclosed herein, whether implemented independently of, or combined with, other aspects of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. Any aspect disclosed herein may be embodied by one or more elements of a claim.
Although various aspects are described herein, many variations and permutations of these aspects fall within the scope of this disclosure. Although some potential benefits and advantages of aspects of this disclosure are mentioned, the scope of this disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of this disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description. The detailed description and drawings are merely illustrative of this disclosure rather than limiting, the scope of this disclosure being defined by the appended claims and equivalents thereof.
Several aspects are presented with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, and the like (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors (which may also be referred to as processing units). Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), general purpose GPUs (GPGPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems-on-chip (SOC), baseband processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software can be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The term application may refer to software. As described herein, one or more techniques may refer to an application, i.e., software, being configured to perform one or more functions. In such examples, the application may be stored on a memory, e.g., on-chip memory of a processor, system memory, or any other memory. Hardware described herein, such as a processor may be configured to execute the application. For example, the application may be described as including code that, when executed by the hardware, causes the hardware to perform one or more techniques described herein. As an example, the hardware may access the code from a memory and execute the code accessed from the memory to perform one or more techniques described herein. In some examples, components are identified in this disclosure. In such examples, the components may be hardware, software, or a combination thereof. The components may be separate components or sub-components of a single component.
Accordingly, in one or more examples described herein, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
In general, this disclosure describes techniques for having a graphics processing pipeline in a single device or multiple devices, improving the rendering of graphical content, and/or reducing the load of a processing unit, i.e., any processing unit configured to perform one or more techniques described herein, such as a GPU. For example, this disclosure describes techniques for graphics processing in any device that utilizes graphics processing. Other example benefits are described throughout this disclosure.
As used herein, instances of the term “content” may refer to “graphical content,” “image,” and vice versa. This is true regardless of whether the terms are being used as an adjective, noun, or other part of speech. In some examples, as used herein, the term “graphical content” may refer to content produced by one or more processes of a graphics processing pipeline. In some examples, as used herein, the term “graphical content” may refer to content produced by a processing unit configured to perform graphics processing. In some examples, as used herein, the term “graphical content” may refer to content produced by a graphics processing unit.
As used herein, instances of the term “content” may refer to graphical content or display content. In some examples, as used herein, the term “graphical content” may refer to content generated by a processing unit configured to perform graphics processing. For example, the term “graphical content” may refer to content generated by one or more processes of a graphics processing pipeline. In some examples, as used herein, the term “graphical content” may refer to content generated by a graphics processing unit. In some examples, as used herein, the term “display content” may refer to content generated by a processing unit configured to perform display processing. In some examples, as used herein, the term “display content” may refer to content generated by a display processing unit. Graphical content may be processed to become display content. For example, a graphics processing unit may output graphical content, such as a frame, to a buffer (which may be referred to as a framebuffer). A display processing unit may read the graphical content, such as one or more frames from the buffer, and perform one or more display processing techniques thereon to generate display content. For example, a display processing unit may be configured to perform composition on one or more rendered layers to generate a frame. As another example, a display processing unit may be configured to compose, blend, or otherwise combine two or more layers together into a single frame. A display processing unit may be configured to perform scaling, e.g., upscaling or downscaling, on a frame. In some examples, a frame may refer to a layer. In other examples, a frame may refer to two or more layers that have already been blended together to form the frame, i.e., the frame includes two or more layers, and the frame that includes two or more layers may subsequently be blended.
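For illustration only, the following C sketch shows one simplified way the composition step described above could be modeled, blending a top layer over a bottom layer to produce a single frame. The pixel format, the blend equation, and all names are assumptions introduced here for the example; they are not a description of any particular display processor.

```c
/* Simplified illustrative sketch: blend a top RGBA8 layer over a bottom
 * layer to produce one output frame. The format and blend equation are
 * assumptions for illustration only. */
#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } pixel_rgba8;

/* Source-over blend of one pixel, using the top layer's alpha. */
static pixel_rgba8 blend_over(pixel_rgba8 top, pixel_rgba8 bottom) {
    pixel_rgba8 out;
    int a = top.a;
    out.r = (uint8_t)((top.r * a + bottom.r * (255 - a)) / 255);
    out.g = (uint8_t)((top.g * a + bottom.g * (255 - a)) / 255);
    out.b = (uint8_t)((top.b * a + bottom.b * (255 - a)) / 255);
    out.a = (uint8_t)(a + bottom.a * (255 - a) / 255);
    return out;
}

/* Compose two rendered layers of equal size into a single frame. */
void compose_layers(const pixel_rgba8 *top, const pixel_rgba8 *bottom,
                    pixel_rgba8 *frame, size_t num_pixels) {
    for (size_t i = 0; i < num_pixels; ++i)
        frame[i] = blend_over(top[i], bottom[i]);
}
```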
The processing unit 120 may include an internal memory 121. The processing unit 120 may be configured to perform graphics processing, such as in a graphics processing pipeline 107. In some examples, the device 104 may include a display processor, such as the display processor 127, to perform one or more display processing techniques on one or more frames generated by the processing unit 120 before presentment by the one or more displays 131. The display processor 127 may be configured to perform display processing. For example, the display processor 127 may be configured to perform one or more display processing techniques on one or more frames generated by the processing unit 120. The one or more displays 131 may be configured to display or otherwise present frames processed by the display processor 127. In some examples, the one or more displays 131 may include one or more of: a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, a projection display device, an augmented reality display device, a virtual reality display device, a head-mounted display, or any other type of display device.
Memory external to the processing unit 120, such as system memory 124, may be accessible to the processing unit 120. For example, the processing unit 120 may be configured to read from and/or write to external memory, such as the system memory 124. The processing unit 120 may be communicatively coupled to the system memory 124 over a bus. In some examples, the processing unit 120 and the system memory 124 may be communicatively coupled to each other over the bus or over a different connection.
The internal memory 121 or the system memory 124 may include one or more volatile or non-volatile memories or storage devices. In some examples, the internal memory 121 or the system memory 124 may include RAM, SRAM, DRAM, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, magnetic data media or optical storage media, or any other type of memory.
The internal memory 121 or the system memory 124 may be a non-transitory storage medium according to some examples. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that internal memory 121 or the system memory 124 is non-movable or that its contents are static. As one example, the system memory 124 may be removed from the device 104 and moved to another device. As another example, the system memory 124 may not be removable from the device 104.
The processing unit 120 may be a central processing unit (CPU), a graphics processing unit (GPU), a general purpose GPU (GPGPU), or any other processing unit that may be configured to perform graphics processing. In some examples, the processing unit 120 may be integrated into a motherboard of the device 104. In some examples, the processing unit 120 may be present on a graphics card that is installed in a port in a motherboard of the device 104, or may be otherwise incorporated within a peripheral device configured to interoperate with the device 104. The processing unit 120 may include one or more processors, such as one or more microprocessors, GPUs, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), arithmetic logic units (ALUs), digital signal processors (DSPs), discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the processing unit 120 may store instructions for the software in a suitable, non-transitory computer-readable storage medium, e.g., internal memory 121, and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered to be one or more processors.
In some aspects, the content generation system 100 can include an optional communication interface 126. The communication interface 126 may include a receiver 128 and a transmitter 130. The receiver 128 may be configured to perform any receiving function described herein with respect to the device 104. Additionally, the receiver 128 may be configured to receive information, e.g., eye or head position information, rendering commands, or location information, from another device. The transmitter 130 may be configured to perform any transmitting function described herein with respect to the device 104. For example, the transmitter 130 may be configured to transmit information to another device, which may include a request for content. The receiver 128 and the transmitter 130 may be combined into a transceiver 132. In such examples, the transceiver 132 may be configured to perform any receiving function and/or transmitting function described herein with respect to the device 104.
Referring again to
As described herein, a device, such as the device 104, may refer to any device, apparatus, or system configured to perform one or more techniques described herein. For example, a device may be a server, a base station, user equipment, a client device, a station, an access point, a computer, e.g., a personal computer, a desktop computer, a laptop computer, a tablet computer, a computer workstation, or a mainframe computer, an end product, an apparatus, a phone, a smart phone, a server, a video game platform or console, a handheld device, e.g., a portable video game device or a personal digital assistant (PDA), a wearable computing device, e.g., a smart watch, an augmented reality device, or a virtual reality device, a non-wearable device, an augmented reality device, a virtual reality device, a display or display device, a television, a television set-top box, an intermediate network device, a digital media player, a video streaming device, a content streaming device, an in-car computer, any mobile device, any device configured to generate graphical content, or any device configured to perform one or more techniques described herein.
GPUs can process multiple types of data or data packets in a GPU pipeline. For instance, in some aspects of a GPU pipeline, the GPU can process two types of data or data packets, e.g., context register packets and draw call data. A context register packet can be a set of global state information, e.g., information regarding a global register, shading program, or constant data, which can regulate how a graphics context will be processed. For example, context register packets can include information regarding a Z test mode or color format. In some aspects of context register packets, there can be a bit that indicates which workload belongs to a context register. Additionally, there can be multiple functions or programming running at the same time and/or in parallel. For example, functions or programming can describe a certain operation, e.g., the color mode or color format. Accordingly, a context register can define multiple states of a GPU pipeline.
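As a hedged illustration of the kind of global state a context register packet might carry, the following C sketch defines a packet with a Z test mode, a color format, and a context bit. The field names, widths, and enumerations are assumptions introduced here for the example and do not reflect any particular GPU's register layout.

```c
/* Illustrative sketch only: one possible shape for a context register
 * packet carrying global state. Field names and widths are assumptions. */
#include <stdbool.h>
#include <stdint.h>

typedef enum { Z_TEST_DISABLED, Z_TEST_LESS, Z_TEST_LEQUAL, Z_TEST_ALWAYS } z_test_mode;
typedef enum { COLOR_RGBA8, COLOR_RGB565, COLOR_RGBA16F } color_format;

typedef struct {
    z_test_mode  z_mode;       /* how depth testing is performed          */
    color_format color_fmt;    /* render target color format              */
    uint32_t     shader_base;  /* base address of the shading program     */
    uint32_t     constant_ptr; /* reference to constant data              */
    bool         context_bit;  /* indicates which context the following
                                  draw call workload belongs to           */
} context_register_packet;
```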
Context states can be utilized to determine how an individual processing unit functions, e.g., a vertex fetcher (VFD), a vertex shader (VS), a shader processor, or a geometry processor, and/or in what mode the processing unit functions. In order to do so, GPUs and GPU pipelines can use context registers and programming data. In some aspects, a GPU can generate a workload, e.g., a vertex or pixel workload, in the pipeline based on the context register definition of a mode or state. Certain processing units, e.g., a VFD, can use these states to determine certain functions, e.g., how a vertex is assembled. As these modes or states can change, GPUs may need to change the corresponding context. In addition, the workload that corresponds to the mode or state may follow the changing mode or state.
In some aspects, a GPU can utilize a command processor (CP) or hardware accelerator to parse a command buffer into context register packets and/or draw call data packets. The CP can then send the context register packets or draw call data packets through separate paths to the processing units or blocks in the GPU. Additionally, the command buffer can alternate different states of context registers and draw calls. For example, a command buffer can be structured as follows: context register of context N, draw call(s) of context N, context register of context N+1, and draw call(s) of context N+1.
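The following C sketch illustrates, under stated assumptions, how a CP might walk such a command buffer and route context register packets and draw call packets down separate paths. The packet layout, the routing functions, and the main routine are hypothetical and are included only to make the two-path structure concrete.

```c
/* Illustrative sketch: a command processor routing packets from a command
 * buffer that alternates context register packets and draw call packets. */
#include <stddef.h>
#include <stdio.h>

typedef enum { PKT_CONTEXT_REGISTER, PKT_DRAW_CALL } packet_type;

typedef struct {
    packet_type type;
    int         context_id;   /* which context (N, N+1, ...) the packet belongs to */
} command_packet;

/* Hypothetical sinks for the two separate paths; stubbed out here. */
static void send_context_register_path(const command_packet *pkt) {
    printf("context register packet for context %d\n", pkt->context_id);
}
static void send_draw_call_path(const command_packet *pkt) {
    printf("draw call packet for context %d\n", pkt->context_id);
}

/* The CP walks the command buffer in order and routes each packet. */
static void cp_parse_command_buffer(const command_packet *buf, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        if (buf[i].type == PKT_CONTEXT_REGISTER)
            send_context_register_path(&buf[i]);
        else
            send_draw_call_path(&buf[i]);
    }
}

int main(void) {
    /* Command buffer structured as: context N, draw(s) of N, context N+1, draw(s) of N+1. */
    command_packet buf[] = {
        { PKT_CONTEXT_REGISTER, 0 }, { PKT_DRAW_CALL, 0 },
        { PKT_CONTEXT_REGISTER, 1 }, { PKT_DRAW_CALL, 1 },
    };
    cp_parse_command_buffer(buf, sizeof buf / sizeof buf[0]);
    return 0;
}
```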
In some aspects, for each GPU processing unit or block, a context register may need to be prepared before any draw call data can be processed. As context registers and draw calls can be serialized, e.g., because they can be within the same command buffer, it can be helpful to have an extra context register prepared before the next draw call. In some instances, draw calls of the next context can be fed through the GPU data pipeline in order to hide context register programming latency. Further, when a GPU is equipped with multiple sets of context registers, each processing unit can have sufficient context switching capacity to manage smooth context processing. In turn, this can enable the GPU to cover pipeline latency that can result from unpredictable memory access latency and/or extended processing pipeline latency.
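As a sketch of the dual context register idea described above, the following C code models a processing unit with two sets of context registers, where one slot can be programmed for the next context while the other slot serves the current draw call. All names and the ping-pong bookkeeping are assumptions introduced for illustration, not a definitive implementation.

```c
/* Illustrative sketch: a dual (ping-pong) set of context registers in one
 * processing unit. One slot is programmed ahead of time while the other
 * is used by the executing draw call, hiding programming latency. */
#include <stdbool.h>

typedef struct {
    int z_test_mode;    /* placeholder global state fields */
    int color_format;
} context_regs;

typedef struct {
    context_regs slot[2];    /* the two sets of context registers            */
    int          active;     /* slot used by the currently executing draw    */
    bool         next_ready; /* has the inactive slot been programmed yet?   */
} dual_context_bank;

/* Program the inactive slot ahead of the next draw call. */
void program_next_context(dual_context_bank *b, context_regs regs) {
    b->slot[b->active ^ 1] = regs;
    b->next_ready = true;
}

/* When the current draw call finishes, switch to the pre-programmed slot. */
bool switch_context(dual_context_bank *b) {
    if (!b->next_ready)
        return false;        /* the next context has not been programmed yet */
    b->active ^= 1;
    b->next_ready = false;
    return true;
}
```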
In some instances, GPUs with a dual set of context registers may experience some processing delays. For example, contexts with small payloads may result in delays, as well as continuous drops in the transition payload between GPU blocks. This can also cause downstream block starving, i.e., a burst of dead draw calls or a burst of pixel drops. In turn, this can result in upstream blocks performing much of the workload, while downstream blocks perform little of the workload. Furthermore, for GPUs with a binning architecture, a smaller global memory (GMEM) size may be desired in order to save costs and provide efficient memory access. However, this can cause the context payload to be reduced to even smaller amounts and make the aforementioned problems more severe. As a result, there may be a reduction in the utilization of more expensive resources, e.g., streaming processors (SPs) or arithmetic logic units (ALUs).
As mentioned above, in a GPU pipeline, there can be two parallel workflows, e.g., context register data and draw call data. In some aspects, a context register can indicate multiple states, e.g., a state of zero or one. When a GPU has a certain workload to be performed, a workload identification (ID) can be utilized to match the state ID. For example, a vertex processing workload can use a workload ID of zero, which can match a context ID of zero. Accordingly, GPUs can have a one-to-one ID matching between the workload ID and the context ID. In some instances, the GPU pipeline workload can be handled by a few context registers. For example, the entire GPU pipeline can be handled by two sets of context registers. In these instances, some workloads may use one context state, while other workloads may use another context state. For example, a VFD workload may use a context state of one, while a render backend (RB) workload may use a context state of zero. As such, in some aspects, the difference between the first and last context states may be a single context state, e.g., if the available context states are one and zero. In some aspects, even if a certain workload is small, the workload may still have to go through the entire GPU pipeline. In these aspects, when there are two context states available, delays or wasted cycles may be experienced.
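As a minimal sketch of the one-to-one matching described above, assuming that a workload ID is simply mapped onto one of the available context states, the following C helper alternates between the states; the mapping rule is an assumption for illustration, not a statement of how any particular GPU assigns context IDs.

```c
/* Illustrative assumption: a workload ID is matched to a context state by
 * alternating among the available states (two in a dual context scheme). */
static inline unsigned context_state_for_workload(unsigned workload_id,
                                                  unsigned num_context_states) {
    return workload_id % num_context_states;   /* e.g., 0, 1, 0, 1, ... for two states */
}
```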
As shown in
In some aspects, even if a workload is relatively small, the workload may still have to progress through each processing unit in the entire pipeline, e.g., CP 210 to UCHE 238. Further, each processing unit may need some time to process the workload request, no matter how small the workload. As a result, there can be a large latency from when a workload starts at the first processing unit, e.g., VFD 220, until it reaches the last processing unit, e.g., UCHE 238. Accordingly, in some instances, if a workload is small, and the GPU pipeline is long, there can be latency issues, such as inefficient utilization of processing units or wasted cycles.
As shown in
As shown in
In some aspects, the processing cycles for each workload may not be equal. For example, the execution time of workloads 310 and 311 may not be equal to the execution time of workload 312. As shown in
As mentioned above, context states with small workloads or payloads may still take a long time to flush through the pipeline. Even in cases where the programming takes little time, a CP, e.g., CP 322, may wait for the payload to flush through the pipeline, which can cause significant overhead. As long as a payload execution is smaller than the pipeline depth, latency issues can occur, such as delays or inefficient use of processing units. Generally, if more contexts can be allowed to run in parallel in a GPU pipeline, i.e., more than two contexts or workloads at the same time, it can help the GPU pipeline achieve a higher utilization. In turn, this can improve the overall GPU performance. However, increasing the number of context registers may not be cost efficient, as the costs associated with each context register are high. For instance, the number of context registers can be increased, e.g., to 4, 8, or 16, but doing so will increase the costs associated with running the GPU pipeline. Further, as mentioned herein, the processing for certain blocks, e.g., VFD 331 through ZPE 336, may take a long time compared to other blocks, so some other blocks, e.g., PI 337 through UCHE 340, may not be performing any work during this time.
In order to solve the aforementioned latency issues caused by using two sets of context registers, the present disclosure can group or separate the processing units or blocks, e.g., into processing unit clusters. By doing so, GPU pipelines according to the present disclosure can perform workloads at different processing unit clusters in parallel at the same time. For example, instead of performing workloads within a GPU pipeline in series, as in
In some aspects, the present disclosure can utilize multiple context registers with each of the processing unit clusters at the same time. Therefore, the multiple context registers at each of the processing unit clusters can process multiple workloads at the same time. Accordingly, the present disclosure can process more workloads at the same time, which allows the present disclosure to process workloads more efficiently while maintaining the costs associated with a GPU pipeline. In some instances, the number of workloads the present disclosure can process at one time may be equal to the number of processing unit clusters multiplied by the number of context registers in each processing unit cluster. For example, if the processing units are divided into five different processing unit clusters, and there are two sets of context registers associated with each cluster, then the present disclosure can process ten workloads at the same time.
As shown in
As shown in
In some aspects of the present disclosure, there may be no limit to the number of processing unit clusters or groups. As mentioned above, there may be a small cost associated with the number of cluster groups; however, these costs are minor compared to increasing the number of context registers. In some aspects, the present disclosure can group the processing units into clusters according to the workload boundaries of the GPU pipeline. This is similar to how the processing units 420-438 in
GPU pipeline 400 can also implement a number of execution counters or switches 480-484 that can count the number of workloads or context register packets per processing unit cluster. For example, these execution counters or switches 480-484 can act as a gate keeper logic function for each of the processing unit clusters 491-495. In some aspects, the execution counters 480-484 can be before or adjacent to the processing unit clusters 491-495. Accordingly, the number of execution counters 480-484 can be equal to the number of processing unit clusters 491-495. In some instances, the execution counters 480-484 can limit the number of workloads or context states within each processing unit cluster 491-495 to two workloads or context states. For example, the execution counters 480-484 can prevent additional workloads or context states from being added until the number of workloads or context states decreases to less than two. The execution counters 480-484 can each include an execution value, which can keep track of the number of workloads or context states within each processing unit cluster 491-495.
In some aspects, GPU pipeline 400 can include a programming end (prog_end) function for each processing unit cluster 491-495 that can record when the context programming is finished. For example, once the programming is finished for a workload, and the execution for the workload is started, an execution counter can increase its execution value by one. Once the execution or workload processing is finished, a draw call end (drawcall_end) function can decrease the execution value by one. In some aspects, the CP 410 can prevent any additional context register packet programming from being processed until the execution value of the execution counter is less than the number of context states per cluster, e.g., two. If the execution value is less than the number of context states per cluster, e.g., two, the CP 410 can accept a draw call packet and process it. As such, the present disclosure can keep track of how many workloads or contexts are being programmed or executed, which can be limited to the number of context states in a processing unit cluster. For example, when the execution value of an execution counter is zero or one, the present disclosure can have room for additional workload programming or execution. Accordingly, the execution counters 480-484 can be a gate keeper for programming or execution workload.
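The following C sketch models the gate keeper behavior described above: the execution value increases when a cluster starts executing a draw call (prog_end acknowledged) and decreases when it finishes (drawcall_end acknowledged), and the CP holds back further context register programming for a cluster while the execution value has reached the number of context states per cluster, e.g., two. The function and type names are assumptions introduced for the example.

```c
/* Illustrative sketch of a per-cluster execution counter acting as a
 * gate keeper for context programming and draw call execution. */
#include <stdbool.h>

#define CONTEXT_STATES_PER_CLUSTER 2   /* e.g., two context states per cluster */

typedef struct {
    int execution_value;   /* contexts currently programmed or executing in the cluster */
} execution_counter;

/* prog_end acknowledged: the cluster starts executing a draw call. */
void on_prog_end(execution_counter *c) {
    c->execution_value++;
}

/* drawcall_end acknowledged: the cluster finished executing the draw call. */
void on_drawcall_end(execution_counter *c) {
    c->execution_value--;
}

/* CP side: may another context register packet be programmed for this cluster? */
bool cp_may_program_context(const execution_counter *c) {
    return c->execution_value < CONTEXT_STATES_PER_CLUSTER;
}

/* Cluster side: may a draw call packet transition enter this cluster?
 * An execution value of zero means no context has finished programming yet. */
bool cluster_may_accept_draw_call(const execution_counter *c) {
    return c->execution_value > 0;
}
```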
In some aspects, the number of processing unit clusters can be equal to the number of execution counters in GPU pipeline 400. For example, as shown in
As shown in
In some aspects, the number of execution counters 480-484 can be equal to the number of processing unit clusters 491-495. Also, the number of context registers in each of the processing unit clusters 491-495 can be two. In further aspects, the number of context states 461-470, e.g., ten, can be equal to the number of context registers, e.g., two, multiplied by the number of processing unit clusters 491-495, e.g., five. As shown in
As mentioned above, CP 410 can generate a prog_end function and feed it through a programming path at the end of the context register, as well as generate a drawcall_end function and feed it through a draw call packet path at the end of a draw call, e.g., as a pair per context packet or state. In some aspects, this can ensure robust synchronization between a draw call packet path and a context register packet path, as well as finer grain context handling among GPU blocks or processing units. As mentioned above, the present disclosure can also split the GPU blocks or processing units into multiple clusters, where the processing units can form a cluster based on workload boundaries to allow for a maximum number of contexts in the GPU pipeline 400. Each processing unit cluster can manage a context register packet and a draw call packet for two workloads or context states. Also, each processing unit cluster can run a different workload or context state. For instance, cluster boundaries can be set where the data packet transition rate can increase or decrease, e.g., ZPE 430 can increase or decrease the number of pixels based on a Z comparison.
As mentioned previously, each processing unit cluster can include a gate keeper logic function or execution counter that can increase based on a prog_end function acknowledgement and/or decrease based on a drawcall_end function acknowledgement. In some aspects, if the execution value of the execution counter equals zero, then the cluster can prevent any draw call packet transition from entering the upper stream of the GPU pipeline 400. Also, if the execution counter equals two, then the CP 410 can prevent any additional context register packet programming until the execution counter decreases to less than two. Otherwise, the GPU pipeline 400 can accept the next draw call packet and process it. In one aspect, as an example of efficient implementation, the CP 410 can have a single shared memory pool to hold multiple context register packets, e.g., as long as the memory pool has available space. In further aspects, the memory pool can manage a ring buffer with multiple read or write pointers per processing unit cluster. As mentioned herein, the CP 410 can process more context register packets in advance of draw call packet execution. By doing so, the present disclosure can provide faster programming cycles and/or pipeline cycles for each processing unit cluster.
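For the shared memory pool mentioned above, the following C sketch shows one possible ring buffer arrangement in which the CP writes context register packets into a single pool and each processing unit cluster advances its own read pointer. The capacity, the single-writer/multiple-reader arrangement, and all names are assumptions introduced for illustration and are not a definitive implementation.

```c
/* Illustrative sketch: a shared context register packet pool managed as a
 * ring buffer, with one write pointer (the CP) and one read pointer per
 * processing unit cluster. Sizes and names are assumptions. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_CLUSTERS 5
#define POOL_ENTRIES 32   /* assumed capacity of the shared pool */

typedef struct {
    uint32_t packets[POOL_ENTRIES];   /* stored context register packets (handles) */
    size_t   write_ptr;               /* where the CP writes the next packet        */
    size_t   read_ptr[NUM_CLUSTERS];  /* per-cluster read position                  */
} context_packet_pool;

/* The writer must not lap the slowest reader. */
static size_t slowest_reader(const context_packet_pool *p) {
    size_t min = p->read_ptr[0];
    for (int i = 1; i < NUM_CLUSTERS; ++i)
        if (p->read_ptr[i] < min)
            min = p->read_ptr[i];
    return min;
}

bool pool_has_space(const context_packet_pool *p) {
    return p->write_ptr - slowest_reader(p) < POOL_ENTRIES;
}

/* CP side: store another context register packet if space is available. */
bool pool_write(context_packet_pool *p, uint32_t packet) {
    if (!pool_has_space(p))
        return false;                 /* CP stalls further context programming */
    p->packets[p->write_ptr % POOL_ENTRIES] = packet;
    p->write_ptr++;
    return true;
}

/* Cluster side: fetch the next pending packet for a given cluster. */
bool pool_read(context_packet_pool *p, int cluster, uint32_t *out) {
    if (p->read_ptr[cluster] == p->write_ptr)
        return false;                 /* nothing pending for this cluster */
    *out = p->packets[p->read_ptr[cluster] % POOL_ENTRIES];
    p->read_ptr[cluster]++;
    return true;
}
```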
As shown in
In some aspects, the GPU pipeline in
As mentioned above, the present disclosure can extend the dual context scheme for GPU pipelines, such as by having a finer grain dual context for multiple processing unit clusters 501-505. Additionally, by extending the dual context scheme for the entire GPU pipeline to a finer grain dual context for multiple processing unit clusters, the present disclosure can enable the execution of more contexts with little added cost. In some aspects, the present disclosure can be applied to context schemes that are different from dual context schemes, e.g., context schemes that include three or more context registers. The present disclosure can also improve the utilization and/or resource efficiency of processing units in a GPU pipeline.
At 608, the apparatus can execute one or more draw call functions at each of the one or more processing unit clusters, as described in connection with the examples in
In some aspects, a number of the one or more execution counters can be equal to a number of the one or more processing unit clusters, as described in connection with the examples in
In some aspects, the graphics processing pipeline can include a command processor and a system memory, as described in connection with the examples in
In one configuration, a method or apparatus for graphics processing is provided. The apparatus may be a GPU or some other processor that can perform graphics processing. In one aspect, the apparatus may be the processing unit 120 within the device 104, or may be some other hardware within device 104 or another device. The apparatus may include means for generating multiple processing units, where the multiple processing units are in a graphics processing pipeline of the GPU. The apparatus may also include means for grouping the multiple processing units into one or more processing unit clusters, where each of the one or more processing unit clusters includes one or more context registers. Also, the apparatus may include means for determining one or more context states of the one or more context registers in each of the one or more processing unit clusters. The apparatus may also include means for implementing one or more execution counters in the graphics processing pipeline, wherein each of the one or more execution counters includes an execution value. Additionally, the apparatus can include means for executing one or more draw call functions at each of the one or more processing unit clusters, where each of the one or more draw call functions is executed by at least one of the multiple processing units. The apparatus can also include means for increasing the execution value of one of the one or more execution counters when one of the one or more processing unit clusters starts executing one of the one or more draw call functions. Moreover, the apparatus can include means for decreasing the execution value of one of the one or more execution counters when one of the one or more processing unit clusters finishes executing one of the one or more draw call functions.
The subject matter described herein can be implemented to realize one or more benefits or advantages. For instance, the described graphics processing techniques can be used by GPUs or other graphics processors to enable more data or context execution within the GPU pipeline. This can also be accomplished at a low cost compared to other graphics processing techniques. Additionally, the graphics processing techniques herein can improve or speed up data processing or execution. Further, the graphics processing techniques herein can improve a GPU's resource or data utilization and resource efficiency.
In accordance with this disclosure, the term “or” may be interpreted as “and/or” where context does not dictate otherwise. Additionally, while phrases such as “one or more” or “at least one” or the like may have been used for some features disclosed herein but not others, the features for which such language was not used may be interpreted to have such a meaning implied where context does not dictate otherwise.
In one or more examples, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. For example, although the term “processing unit” has been used throughout this disclosure, such processing units may be implemented in hardware, software, firmware, or any combination thereof. If any function, processing unit, technique described herein, or other module is implemented in software, the function, processing unit, technique described herein, or other module may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media may include computer data storage media or communication media including any medium that facilitates transfer of a computer program from one place to another. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. A computer program product may include a computer-readable medium.
The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), arithmetic logic units (ALUs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in any hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.