This disclosure relates generally to consumers, and, more particularly, to multiple asynchronous consumers.
Computer hardware manufacturers develop hardware components for use in various components of computer platforms. For example, computer hardware manufacturers develop motherboards, chipsets for motherboards, central processing units (CPUs), hard disk drives (HDDs), solid state drives (SSDs), and other computer components. Additionally, computer hardware manufacturers develop processing elements, known as accelerators, to accelerate the processing of a workload. For example, an accelerator can be a CPU, a graphics processing unit (GPU), a vision processing unit (VPU), and/or a field programmable gate array (FPGA).
The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and in fixed relation to each other.
Descriptors “first,” “second,” “third,” etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.
Many computing hardware manufacturers develop processing elements, known as accelerators, to accelerate the processing of a workload. For example, an accelerator can be a CPU, a GPU, a VPU, and/or an FPGA. Moreover, accelerators, while capable of processing any type of workload, are designed to optimize particular types of workloads. For example, while CPUs and FPGAs can be designed to handle more general processing, GPUs can be designed to improve the processing of video, games, and/or other physics and mathematically based calculations, and VPUs can be designed to improve the processing of machine vision tasks.
Additionally, some accelerators are designed specifically to improve the processing of artificial intelligence (AI) applications. While a VPU is a specific type of AI accelerator, many different AI accelerators can be used. In fact, many AI accelerators can be implemented by application specific integrated circuits (ASICs). Such ASIC-based AI accelerators can be designed to improve the processing of tasks related to a particular type of AI, such as machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic including support vector machines (SVMs), neural networks (NNs), recurrent neural networks (RNNs), convolutional neural networks (CNNs), long short-term memory (LSTM), gated recurrent units (GRUs), etc.
Computer hardware manufacturers also develop heterogeneous systems that include more than one type of processing element. For example, computer hardware manufacturers may combine general purpose processing elements, such as CPUs, with general purpose accelerators, such as FPGAs, and/or more tailored accelerators, such as GPUs, VPUs, and/or other AI accelerators. Such heterogeneous systems can be implemented as systems on a chip (SoCs).
When a developer desires to execute a function, algorithm, program, application, and/or other code on a heterogeneous system, the developer and/or software generates a schedule (e.g., a graph) for the function, algorithm, program, application, and/or other code at compile time. Once a schedule is generated, the schedule is combined with the function, algorithm, program, application, and/or other code specification to generate an executable file (for either Ahead of Time or Just in Time paradigms). Moreover, the schedule combined with the function, algorithm, program, application, kernel, and/or other code may be represented as a graph including nodes, where the graph represents a workload and each node (e.g., a workload node) represents a particular task of that workload to be executed. Furthermore, the connections between the different nodes in the graph represent edges. The edges of the workload represent streams of data from one node to another. Each stream of data is identified as an input stream or an output stream.
In some examples, one node (e.g., a producer) may be connected via an edge to a different node (e.g., a consumer). In this manner, the producer node streams data (e.g., writes data) to a consumer node that consumes (e.g., reads) the data. In other examples, a producer node can have one or more consumer nodes, such that the producer node streams data via one or more edges to the one or more consumer nodes. A producer node generates the stream of data for a consumer node, or multiple consumer nodes, to read and operate on. A node can be identified as a producer or a consumer during the compilation of the graph. For example, a graph compiler receives a schedule (e.g., a graph) and assigns various workload nodes of the workload to various compute building blocks (CBBs) located within an accelerator. During the assignment of workload nodes, the graph compiler assigns a node that produces data to a CBB, and that CBB becomes a producer. Additionally, the graph compiler can assign a node that consumes the data of the workload to a CBB, and that CBB becomes a consumer. In some examples, the CBB to which a node is assigned may take on multiple roles simultaneously. For example, the CBB is the consumer of data produced by nodes in the graph connected via incoming edges, and the producer of data consumed by nodes in the graph connected via outgoing edges.
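By way of a hedged illustration (not taken from the disclosure itself), the producer and consumer roles implied by a graph's edges can be sketched in a few lines of Python; the node names and the adjacency-mapping representation are assumptions chosen only for illustration:

    # Hypothetical sketch: a workload graph as an adjacency mapping, where an
    # outgoing edge makes the source node a producer and each destination node
    # a consumer of that stream. Node names are illustrative assumptions.
    workload_graph = {
        "convolution": ["dsp", "rnn"],  # one producer streaming to two consumers
        "dsp": [],
        "rnn": [],
    }

    def roles(graph):
        producers = {node for node, outs in graph.items() if outs}
        consumers = {dst for outs in graph.values() for dst in outs}
        return producers, consumers

    producers, consumers = roles(workload_graph)
    print(producers)  # {'convolution'}
    print(consumers)  # the two consuming nodes: 'dsp' and 'rnn'

A node appearing in both sets would hold both roles simultaneously, consistent with the multi-role CBBs described above.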
The amount of data a producer node streams is a run-time variable. When a stream of data is a run-time variable, the consumer does not know ahead of time the amount of data in that stream. In this manner, the data in the stream may be data dependent, meaning that a consumer node will not know the amount of data it is to receive until the stream is complete.
In some applications where a graph configures more than one consumer node for a single producer node, the relative speed of execution of the consumer nodes and the producer node can be unknown. For example, a producer node can produce data substantially faster than a consumer node can consume (e.g., read) that data. Additionally, the consumer nodes may vary in speed of execution such that one consumer node can read data faster than a second consumer node, or vice versa. In this example, it can be difficult to configure/compile a graph to perform a workload with multiple consumer nodes because not all of the consumer nodes will execute synchronously.
Examples disclosed herein include methods and apparatus to seamlessly implement multi-consumer data streams. For example, methods and apparatus disclosed herein allow a plurality of different types of consumers to read data provided by a single producer by abstracting away data types, amounts of data, and the number of consumers. For example, examples disclosed herein utilize a cyclic buffer to which a producer writes data and from which one or more consumers read data. As used herein, “circular buffer,” “circular queue,” “ring buffer,” “cyclic buffer,” etc., are defined as a data structure that uses a single, fixed-size buffer as if the buffer were connected end-to-end. Cyclic buffers are utilized for buffering data streams. A data buffer is a region of physical memory storage used to temporarily store data while the data is being moved from one place to another (e.g., from a producer to one or more consumers).
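For illustration only, a minimal Python sketch of such a cyclic buffer follows; the class name, the slot layout, and the pointer handling are assumptions, not the disclosed implementation:

    # A minimal cyclic (ring) buffer: fixed-size storage used end-to-end, with
    # write and read pointers that wrap around. Illustrative sketch only.
    class CyclicBuffer:
        def __init__(self, num_slots):
            self.slots = [None] * num_slots  # single, fixed-size buffer
            self.write = 0                   # producer's write pointer
            self.read = 0                    # consumer's read pointer

        def produce(self, tile):
            self.slots[self.write] = tile
            self.write = (self.write + 1) % len(self.slots)  # wrap end-to-end

        def consume(self):
            tile = self.slots[self.read]
            self.read = (self.read + 1) % len(self.slots)    # wrap end-to-end
            return tile

Overflow and underflow checks are deliberately omitted from this sketch because, in the examples disclosed herein, access to the slots is gated by the credit scheme described below rather than by the buffer itself.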
Additionally, examples disclosed herein utilize a credit manager to assign credits to a producer and multiple consumers as a means to allow multi-consumer data streams between one producer and multiple consumers in an accelerator. For example, a credit manager communicates information between the producer and multiple consumers indicative of when a producer can write data to the buffer and when a consumer can read data from the buffer. In this manner, the producer and each one of the consumers are indifferent to the number of consumers the producer is to write to.
In examples disclosed herein, a “credit” is similar to a semaphore. A semaphore is a variable or abstract data type used to control access to a common resource (e.g., a cyclic buffer) by multiple processes (e.g., producers and consumers) in a concurrent system (e.g., a workload). In some examples, the credit manager generates a specific number of credits or adjusts the number of credits available based on availability in a buffer and the source of the credit (e.g., where the credit came from). In this manner, the credit manager eliminates the need for a producer to be configured to communicate directly with a plurality of consumers. Configuring the producer to communicate directly with a plurality of consumers is computationally intensive because the producer would need to know the type of each consumer, the speed at which each consumer can read data, the location of each consumer, etc.
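The analogy to a counting semaphore can be made concrete with a short, hedged sketch; the use of Python's threading.Semaphore and the slot count below are illustrative assumptions, not the disclosed mechanism:

    import threading

    # Treat producer credits like a counting semaphore over free buffer slots:
    # acquiring a credit guarantees a slot may be written. Illustrative only.
    NUM_SLOTS = 5
    producer_credits = threading.Semaphore(NUM_SLOTS)  # one credit per free slot

    def producer_write(buffer, tile):
        producer_credits.acquire()  # spend a credit; a free slot is guaranteed
        buffer.produce(tile)        # e.g., the CyclicBuffer sketched above
        # the spent credit is handed back to the credit manager, which later
        # re-issues it to the consumers (omitted in this sketch)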
In the example of
In
In the example of
In the illustrated example of
In examples disclosed herein, each of the host processor 106, the first accelerator 110a, the second accelerator 110b, and the third accelerator 110c is in communication with the other elements of the computing system 100 and/or the system memory 102. For example, the host processor 106, the first accelerator 110a, the second accelerator 110b, the third accelerator 110c, and/or the system memory 102 are in communication via the first communication bus 108. In some examples disclosed herein, the host processor 106, the first accelerator 110a, the second accelerator 110b, the third accelerator 110c, and/or the system memory 102 may be in communication via any suitable wired and/or wireless communication method. Additionally, in some examples disclosed herein, each of the host processor 106, the first accelerator 110a, the second accelerator 110b, the third accelerator 110c, and/or the system memory 102 may be in communication with any component exterior to the computing system 100 via any suitable wired and/or wireless communication method.
In the example of
In the example of
In the example of
In the example of
In the example of
In the example of
In examples disclosed herein, each of the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and the controller 122 is in communication with the other elements of the first accelerator 110a. For example, the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and the controller 122 are in communication via an example second communication bus 140. In some examples, the second communication bus 140 may be implemented by a computing fabric. In some examples disclosed herein, the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and the controller 122 may be in communication via any suitable wired and/or wireless communication method. Additionally, in some examples disclosed herein, each of the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and the controller 122 may be in communication with any component exterior to the first accelerator 110a via any suitable wired and/or wireless communication method.
As previously mentioned, any of the example first accelerator 110a, the example second accelerator 110b, and/or the example third accelerator 110c may include a variety of CBBs either generic and/or specific to the operation of the respective accelerators. For example, each of the first accelerator 110a, the second accelerator 110b, and the third accelerator 110c includes generic CBBs such as memory, an MMU, a controller, and respective schedulers for each of the CBBs. Additionally or alternatively, external CBBs not located in any of the first accelerator 110a, the example second accelerator 110b, and/or the example third accelerator 110c may be included and/or added. For example, a user of the computing system 100 may operate an external RNN engine utilizing any one of the first accelerator 110a, the second accelerator 110b, and/or the third accelerator 110c.
While, in the example of
While the heterogeneous system 104 of
In the example of
In the illustrated example of
In operation, the compiler 204 receives the input 202 and compiles the input 202 (e.g., workload) into one or more executable files to be executed by the accelerator 206. For example, the compiler 204 receives the input 202 and assigns various workload nodes of the input 202 (e.g., the workload) to various CBBs (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA 226) of the accelerator 206. Additionally, the compiler 204 allocates memory for one or more buffers 228 in the memory 222 of the accelerator 206.
In the example of
In the example of
Additionally, the configuration controller 208 is provided with buffer characteristic data from the executables of the compiler 204. In this manner, the configuration controller 208 initializes the buffers (e.g., the buffer 228) in memory to be the size specified in the executables. In some examples, the configuration controller 208 provides configuration control messages to one or more CBBs including the size and location of each buffer initialized by the configuration controller 208.
In the example of
In examples disclosed herein, in response to instructions received from the configuration controller 208 indicating to execute a certain workload node, the credit manager 210 provides corresponding credits to the CBB acting as the initial producer. Once the CBB acting as the initial producer completes the workload node, the credits are sent back to the point of origin as seen by the CBB (e.g., the credit manager 210). The credit manager 210, in response to obtaining the credits from the producer, transmits the credits to the CBB acting as the consumer. Such an order of producer and consumers is determined using the executable generated by the compiler 204 and provided to the configuration controller 208. In this manner, the CBBs communicate an indication of ability to operate via the credit manager 210, regardless of their heterogeneous nature. A producer CBB produces data that is utilized by another CBB whereas a consumer CBB consumes and/or otherwise processes data produced by another CBB. The credit manager 210 is discussed in further detail below in connection with
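A simplified, assumption-laden sketch of this hand-off order (credits originate at the credit manager, flow to the producer, and are forwarded to the consumers once returned) might look as follows; the class and method names are illustrative:

    # Illustrative sketch of the credit hand-off order described above.
    class CreditFlowSketch:
        def __init__(self, num_slots, num_consumers):
            self.producer_credits = num_slots            # issued to the producer first
            self.consumer_credits = [0] * num_consumers  # none until data is written

        def producer_returns_credit(self):
            # The producer wrote a slot and returned its credit to the manager,
            # which forwards one consumer credit per consumer for that slot.
            self.producer_credits -= 1
            for i in range(len(self.consumer_credits)):
                self.consumer_credits[i] += 1

Notably, in this sketch the producer never addresses any consumer directly; it only exchanges credits with the manager, mirroring the abstraction described above.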
In the example of
In the illustrated example of
In the illustrated example of
In
In the example of
In the example of
In the illustrated example of
In the example of
In the example of
In some examples, the communication processor 302 receives configuration information from a producing CBB. For example, during execution of a workload, a producing CBB determines the current slot of a buffer and provides a notification to the communication processor 302 for use in initializing the generation of a number of credits. In some examples, the communication processor 302 may communicate information between the credit generator 304, the counter 306, the source identifier 308, the duplicator 310, and/or the aggregator 312. For example, the communication processor 302 initiates the duplicator 310 or the aggregator 312 depending on the identification made by the source identifier 308. Additionally, the communication processor 302 receives information corresponding to a workload. For example, the communication processor 302 receives, via the CnC fabric 212, information determined by the compiler 204 and the configuration controller 208 indicative of the CBB initialized as the producer and the CBBs initialized as consumers. The example communication processor 302 of
In the example of
In the example of
In the example of
In the example
In the example of
In examples disclosed herein, the aggregator 312 waits to receive all the credits for a single space in a buffer because the space in the buffer cannot be reused until the data of that space has been consumed by all appropriate consumers. The consumption of data is determined by the workload, such that the workload decides which CBB must consume data in order to execute the workload in the intended manner. In this manner, the aggregator 312 queries the counter 306 to determine when to combine the multiple returned credits into the single producer credit. For example, the counter 306 may control a slot credits counter. The slot credits counter may be indicative of a number of credits corresponding to a slot in the buffer. If the slot credits counter equals the m number of consumers of the workload, the aggregator 312 may combine the credits to generate the single producer credit. Additionally, in some examples, when execution of a workload is complete, the producer may have extra credits not used. In this manner, the aggregator 312 zeros credits at the producer by removing the extra credits from the producer. The example aggregator 312 of
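A hedged sketch of this aggregation rule (m returned consumer credits for a slot combine into one producer credit) could be written as follows; the per-slot counter and the names are assumptions for illustration:

    from collections import defaultdict

    # Illustrative aggregator: a slot's data is only reusable once all m
    # consumers have returned their credit for that slot.
    class AggregatorSketch:
        def __init__(self, num_consumers):
            self.m = num_consumers
            self.slot_credits = defaultdict(int)  # returned credits per slot

        def on_consumer_return(self, slot):
            self.slot_credits[slot] += 1
            if self.slot_credits[slot] == self.m:  # every consumer read the slot
                self.slot_credits[slot] = 0
                return 1   # combined into a single producer credit
            return 0       # keep waiting for the remaining consumers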
While an example manner of implementing the credit manager of
Turning to
In
In
In examples disclosed herein, each buffer (e.g., the buffer 228 of
In
In examples disclosed herein, the first consumer 410 includes a first consumer credits counter 412 and the second consumer 414 includes a second consumer credits counter 416. The first and second consumer credits counters 412, 416 count credits provided by the credit manager 210. In some examples, the first and second consumer credits counters 412, 416 are internal digital logic devices included in the first and second consumers 410, 414, respectively. In other examples, the first and second consumer credits counters 412, 416 are external digital logic devices located in the credit manager 210 at the counter 306 and associated with the consumers 410, 414.
In
Turning to
The producer 402 has 2 credits because three slots (e.g., the first slot 408A, the fourth slot 408D, and the fifth slot 408E) are filled and only 2 slots are available to fill (e.g., write or produce to). The first consumer 410 has 1 credit because the first consumer 410 consumed the tiles in the fourth slot 408D and the fifth slot 408E. In this manner, there is only one more slot (e.g., the first slot 408A) for the first consumer 410 to read from. The second consumer 414 has 3 credits because, after the producer filled three slots, the credit manager 210 provided both the first consumer 410 and the second consumer 414 with 3 credits each in order to access and consume 3 tiles from the three slots (e.g., the first slot 408A, the fourth slot 408D, and the fifth slot 408E). In the illustrated example, the second consumer 414 has not consumed any tiles from the buffer 408. In this manner, the second consumer 414 may be slower than the first consumer 410, such that the second consumer 414 reads data at a lower rate than the first consumer 410.
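The bookkeeping in this snapshot can be checked with a few lines of arithmetic; this is only a verification of the numbers above, with illustrative variable names:

    # Snapshot check: five-slot buffer, three tiles produced, the first
    # consumer has read two tiles, the second consumer has read none.
    num_slots = 5
    produced = 3
    read_by_first = 2
    read_by_second = 0

    producer_credits = num_slots - produced              # free slots to write: 2
    first_consumer_credits = produced - read_by_first    # tiles left to read: 1
    second_consumer_credits = produced - read_by_second  # tiles left to read: 3
    assert (producer_credits, first_consumer_credits,
            second_consumer_credits) == (2, 1, 3)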
In the illustrated example of
Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the credit manager 210 of
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a packaged format, etc. Machine readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, etc. in order to make them directly readable and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement a program such as that described herein. In another example, the machine readable instructions may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, the disclosed machine readable instructions and/or corresponding program(s) are intended to encompass such machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
As mentioned above, the example processes of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C.
As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
The program of
The example producer 402 determines a buffer (block 504) (e.g., the buffer 228 of
In response to the producer 402 initializing the buffer current slot to equal first slot (block 506), the producer 402 provides a notification to the credit manager 210 (block 508) over the configuration controller 208 (
When the write pointer is initialized and the credit manager 210 has been notified, the producer 402 waits to receive credits from the credit manager 210 (block 510). For example, in response to the producer 402 notifying the credit manager 210, the credit manager 210 may generate n number of credits and provide them back to the producer 402. In some examples, the credit manager 210 receives the configuration control messages from the configuration controller 208 corresponding to the buffer size and location.
If the producer 402 does not receive credits from the credit manager 210 (e.g., block 510 returns a NO), the producer 402 waits until the credit manager 210 provides the credits. For example, the producer 402 cannot perform an assigned task until credits are given because the producer 402 does not have access to the buffer until a credit verifies the producer 402 does have access. If the producer 402 does receive credits from the credit manager 210 (e.g., block 510 returns a YES), the producer credits counter increments to equal the credits received (block 512). For example, the producer credits counter may increment by one until the producer credits counter equals n number of received credits.
The producer 402 determines if the data stream is ready to be written to the buffer (block 514). For example, if the producer 402 has not yet partitioned and packaged tiles for production or the producer credits counter has not received a correct number of credits (e.g., block 514 returns a NO), then control returns to block 512. If the example producer 402 has partitioned and packaged tiles of the data stream for production (e.g., block 514 returns a YES), then the producer 402 writes data to the current slot (block 516). For example, the producer 402 stores data into the current slot indicated by the write pointer and originally initialized by the producer 402.
In response to the producer 402 writing data into the current slot (block 516), the producer credits counter is decremented (block 518). For example, the producer 402 may decrement the producer credits counter and/or the credit manager 210 may decrement the producer credits counter. In this example, the producer 402 provides one credit back to the credit manager 210 (block 520). For example, the producer 402 utilizes a credit and the producer 402 passes the credit for use by a consumer.
The producer 402 determines if the producer 402 has any more credits to use (block 522). If the producer 402 determines there are additional credits (e.g., block 522 returns a YES), control returns to block 516. If the producer 402 determines the producer 402 does not have additional credits to use (e.g., block 522 returns a NO) but still has data to produce (e.g., block 524 returns a YES), the producer 402 waits to receive credits from the credit manager 210 (e.g., control returns to block 510). For example, the consumers may not have consumed tiles produced by the producer 402 and, therefore, there are no available slots in the buffer to write to. If the producer 402 does not have additional data to produce (e.g., block 524 returns a NO), then data producing is complete (block 526). For example, the data stream has been fully produced into the buffer and consumed by the consumers. The program of
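Under the assumption that credits are delivered over queue.Queue-style channels, the producer-side flow of blocks 510 through 526 might be sketched as follows; the function signature and channel objects are illustrative, not the disclosed machine readable instructions:

    # Illustrative producer loop (blocks 510-526).
    def run_producer(tiles, buffer, credits_in, credits_out):
        credit_count = 0
        for tile in tiles:
            while credit_count == 0:              # block 510: wait for credits
                credit_count += credits_in.get()  # block 512: increment counter
            buffer.produce(tile)                  # block 516: write current slot
            credit_count -= 1                     # block 518: decrement counter
            credits_out.put(("producer", 1))      # block 520: return one credit
        # blocks 524/526: no more data to produce; data producing is complete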
In the example program of
Additionally, the slot credits counter assists the aggregator 312 in determining when each consumer 410, 414 has read the tile stored in the slot. For example, if there are 3 consumers who are to read a tile from a slot in the buffer, the slot credits counter will increment up to 3, and when the slot credits counter equals 3, the aggregator 312 may combine the credits to generate a single producer 402 credit for that one slot.
The communication processor 302 notifies the credit generator 304 to generate credits for the producer 402 based on received buffer characteristics (block 606). The credit generator 304 generates corresponding credits. For example, the communication processor 302 receives information from the configuration controller 208 corresponding to buffer characteristics and additionally receives a notification that the producer 402 initialized a pointer.
In response to the credit generator 304 generating credits (block 606), the communication processor 302 packages the credits and sends the producer 402 credits, where the producer credits equal the number of slots in the buffer (block 608). For example, the credit generator 304 may specifically generate credits for the producer 402 (e.g., producer credits) because the buffer is initially empty and may be filled by the producer 402 when credits become available. Additionally, the credit generator 304 generates n number of credits for the producer 402, such that n equals a number of slots in the buffer available for the producer 402 to write to.
The credit manager 210 waits to receive a returned credit (block 610). For example, when the producer 402 writes to a slot in a buffer, a credit corresponding to that slot is returned to the credit manager 210. When the credit manager 210 does not receive a returned credit (e.g., block 610 returns a NO), the credit manager 210 waits until a credit is provided back. When the credit manager 210 receives a returned credit (e.g., block 610 returns a YES), the communication processor 302 provides the credit to the source identifier 308 to identify the source of the credit (block 612). For example, the source identifier 308 may analyze a package corresponding to the returned credit that includes a header. The header of the package may be indicative of where the package was sent from, such that the package was sent from a CBB assigned as a producer 402 or consumer 410, 414.
Further, the source identifier 308 determines if the source of the credit was from the producer 402 or at least one of the consumers 410, 414. If the source identifier 308 determines the source of the credit was from the producer 402 (e.g., block 612 returns a YES), the source identifier 308 initializes the duplicator 310 (
In response to the duplicator 310 multiplying credits for each of the m number of consumers 410, 414, the communication processor 302 packages the credits and sends a consumer credit to each of the m consumers 410, 414 (block 616). Control then returns to block 610, where the credit manager 210 waits for another returned credit.
In the example program of
In response to the counter 306 incrementing a counter assigned to the one of the consumers 410, 414 who returned the credit, the aggregator 312 queries the counter assigned to the one of the consumers 410, 414 to determine if the slot credits counter is greater than zero (block 620). If the counter 306 notifies the aggregator 312 that the slot credits counter is not greater than zero (e.g., block 620 returns a NO), control returns to block 610. If the counter 306 notifies the aggregator 312 that the slot credits counter is greater than zero (e.g., block 620 returns a YES), the aggregator 312 combines consumer credits into a single producer credit (block 622). For example, the aggregator 312 is informed by the counter 306, via the communication processor 302, that one or more credits have been returned by one or more consumers. In some examples, the aggregator 312 analyzes the returned credit to determine which slot one of the consumers 410, 414 used the credit to consume.
In response to the aggregator 312 combining consumer credits, the communication processor 302 packages the credit and sends the credit to the producer 402 (block 624). For example, the aggregator 312 passes the credit to the communication processor 302 for packaging and transmitting over the CnC fabric 212 to the intended CBB. In response to the communication processor 302 sending a credit to the producer 402, the counter 306 decrements the slot credits counter (block 626) and control returns to block 610.
At block 610, the credit manager 210 waits to receive a returned credit. When the credit manager 210 does not receive a returned credit after a threshold amount of time (e.g., block 610 returns a NO), the credit manager 210 checks for extra producer credits that are unused (block 628). For example, if the credit manager 210 is not receiving returned credits from the producer 402 or the consumers 410, 414, the data stream is fully consumed and has been executed by the consumers 410, 414. In some examples, a producer 402 may have unused credits left over from production, such as credits that were not needed to produce the last few tiles into the buffer. In this manner, the credit manager 210 zeros the producer credits (block 630). For example, the credit generator 304 removes credits from the producer 402 and the counter 306 decrements the producer credits counter (e.g., producer credits counter 404) until the producer credits counter equals zero.
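A compact sketch of the credit-manager loop of blocks 608 through 626, under the same queue-based assumptions as the producer sketch above and with the slot credits counter rule described earlier, might read (the zeroing of leftover credits at blocks 628-630 is omitted):

    # Illustrative credit-manager loop (blocks 608-626); names are assumptions.
    def run_credit_manager(returns, producer_q, consumer_qs, num_slots):
        producer_q.put(num_slots)             # block 608: initial producer credits
        slot_credits = 0
        while True:
            source, credit = returns.get()    # block 610: wait for a returned credit
            if source == "producer":          # block 612: identify the source
                for q in consumer_qs:         # blocks 614-616: duplicate the credit,
                    q.put(credit)             # one consumer credit per consumer
            else:
                slot_credits += 1             # block 618: count consumer returns
                if slot_credits == len(consumer_qs):  # all consumers read the slot
                    slot_credits = 0                  # block 626: decrement counter
                    producer_q.put(1)         # blocks 622-624: single producer credit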
The program of
The at least one of the consumers 410, 414 further determines an internal buffer (block 704). For example, the configuration controller 208 sends messages and control signals to the CBBs (e.g., any one of the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220) informing the CBBs of a configuration mode. In this manner, the CBB is configured to be a consumer 410, 414 with an internal buffer for storing data produced by a different CBB (e.g., a producer).
After the determination of the internal buffers (block 704) is complete, the consumers 410, 414 wait to receive consumer credits from the credit manager 210 (block 706). For example, the communication processor 302 of the credit manager 210 provides the consumers 410, 414 a credit after the producer 402 has used the credit for writing data in the buffer. If the consumers 410, 414 receive a credit from the credit manager 210 (e.g., block 706 returns a YES), the counter 306 increments the consumer credits counter (block 708). For example, the consumer credits counter is incremented by the number of credits the credit manager 210 passes to the consumers 410, 414.
In response to receiving a credit/credits from the credit manager 210, the consumers 410, 414 determine if they are ready to consume data (block 710). For example, the consumers 410, 414 can read data from a buffer when initialization is complete and when there are enough credits available for the consumers 410, 414 to access the data in the buffer. If the consumers 410, 414 are not ready to consume data (e.g., block 710 returns a NO), control returns to block 706.
If the consumers 410, 414 are ready to consume data from the buffer (e.g., block 710 returns a YES), the consumers 410, 414 read a tile from the next slot in the buffer (block 712). For example, a read pointer is initialized after the producer 402 writes data to a slot in the buffer. In some examples, the read pointer follows the write pointer in order of production. When the consumers 410, 414 read data from a slot, the read pointer moves to the next slot produced by the producer 402.
In response to reading a tile from the next slot in the buffer (block 712), the counter 306 decrements the consumer credits counter (block 714). For example, a credit is used each time the consumer consumes (e.g., reads) a tile from a slot in a buffer. Therefore, the consumer credits counter decrements and, concurrently, the consumers 410, 414 send a credit back to the credit manager 210 (block 716). The consumers 410, 414 check if there are additional credits available to use (block 718). If there are additional credits for the consumers 410, 414 to use (e.g., block 718 returns a YES), control returns to block 712. For example, the consumers 410, 414 continue to read data from the buffer.
If there are no additional credits for the consumers 410, 414 to use (e.g., block 718 returns a NO), the consumers 410, 414 determine if additional data is to be consumed (block 720). For example, if the consumers 410, 414 do not have enough data to execute a workload, then there is additional data to consume (e.g., block 720 returns a YES). In this manner, control returns to block 706, where the consumers 410, 414 wait for a credit. If the consumers 410, 414 have enough data to execute an executable compiled by the compiler 204, then there is no additional data to consume (e.g., block 720 returns a NO) and data consuming is complete (block 722). For example, the consumers 410, 414 have read the whole data stream produced by the producer 402.
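Completing the set, the consumer-side flow of blocks 706 through 722 can be sketched under the same queue-based assumptions; the tiles_expected parameter is an illustrative stand-in for however a consumer learns that the stream is complete:

    # Illustrative consumer loop (blocks 706-722).
    def run_consumer(buffer, credits_in, credits_out, tiles_expected):
        credit_count = 0
        consumed = 0
        while consumed < tiles_expected:          # block 720: more data to consume?
            while credit_count == 0:              # block 706: wait for a credit
                credit_count += credits_in.get()  # block 708: increment counter
            tile = buffer.consume()               # block 712: read the next slot
            credit_count -= 1                     # block 714: decrement counter
            credits_out.put(("consumer", 1))      # block 716: return the credit
            consumed += 1
        # block 722: data consuming is complete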
The program of
The processor platform 800 of the illustrated example includes a processor 810 and an accelerator 812. The processor 810 of the illustrated example is hardware. For example, the processor 810 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. Additionally, the accelerator 812 can be implemented by, for example, one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, FPGAs, VPUs, controllers, and/or other CBBs from any desired family or manufacturer. The accelerator 812 of the illustrated example is hardware. The hardware accelerator may be a semiconductor based (e.g., silicon based) device. In this example, the accelerator 812 implements the example credit manager 210, the example CnC fabric 212, the example convolution engine 214, the example MMU 216, the example RNN engine 218, the example DSP 220, the example memory 222, the example configuration controller 208, the example kernel bank 230, and/or the example data fabric 232. In this example, the processor 810 may implement the example credit manager 210 of
The processor 810 of the illustrated example includes a local memory 811 (e.g., a cache). The processor 810 of the illustrated example is in communication with a main memory including a volatile memory 814 and a non-volatile memory 816 via a bus 818. Moreover, the accelerator 812 of the illustrated example includes a local memory 813 (e.g., a cache). The accelerator 812 of the illustrated example is in communication with a main memory including the volatile memory 814 and the non-volatile memory 816 via the bus 818. The volatile memory 814 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 816 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 814, 816 is controlled by a memory controller.
The processor platform 800 of the illustrated example also includes an interface circuit 820. The interface circuit 820 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
In the illustrated example, one or more input devices 822 are connected to the interface circuit 820. The input device(s) 822 permit(s) a user to enter data and/or commands into the processor 810. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
One or more output devices 824 are also connected to the interface circuit 820 of the illustrated example. The output devices 824 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 820 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor.
The interface circuit 820 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 826. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
The processor platform 800 of the illustrated example also includes one or more mass storage devices 828 for storing software and/or data. Examples of such mass storage devices 828 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.
The machine executable instructions 832 of
Example methods, apparatus, systems, and articles of manufacture for multiple asynchronous consumers are disclosed herein. Further examples and combinations thereof include the following:
Example 1 includes an apparatus comprising a communication processor to receive configuration information from a producing compute building block, a credit generator to generate a number of credits for the producing compute building block corresponding to the configuration information, the configuration information including characteristics of a buffer, a source identifier to analyze a returned credit to determine whether the returned credit originates from the producing compute building block or a consuming compute building block, and a duplicator to, when the returned credit originates from the producing compute building block, multiply the returned credit by a first factor, the first factor indicative of a number of consuming compute building blocks identified in the configuration information.
Example 2 includes the apparatus of example 1, wherein the producing compute building block is to produce a stream of data for one or more consuming compute building blocks to operate on.
Example 3 includes the apparatus of example 1, further including an aggregator to, when the source identifier identifies the returned credit originates from the consuming compute building block, combine multiple returned credits from a number of consuming compute building blocks corresponding to the first factor into a single producer credit.
Example 4 includes the apparatus of example 3, wherein the aggregator is to query a counter to determine when to combine the multiple returned credits into the single producer credit, the counter is to increment each time a credit corresponding to a location in a memory is returned.
Example 5 includes the apparatus of example 4, wherein a producing compute building block cannot receive the single producer credit until each of the number of consuming compute building blocks corresponding to the first factor have returned a credit.
Example 6 includes the apparatus of example 1, wherein the communication processor is to send a credit to each of the number of consuming compute building blocks.
Example 7 includes the apparatus of example 1, wherein the producing compute building block is to determine a size of the buffer, the buffer to have a number of slots corresponding to a second factor for storing data produced by the producing compute building block.
Example 8 includes the apparatus of example 1, wherein the configuration information identifies the number of consuming compute building blocks per single producing compute building block.
Example 9 includes a non-transitory computer readable storage medium comprising instructions that, when executed, cause a processor to at least receive configuration information from a producing compute building block, generate a number of credits for the producing compute building block corresponding to the configuration information, the configuration information including characteristics of a buffer, analyze a returned credit to determine whether the returned credit originates from the producing compute building block or a consuming compute building block, and when the returned credit originates from the producing compute building block, multiply the returned credit by a first factor indicative of a number of consuming compute building blocks identified in the configuration information.
Example 10 includes the non-transitory computer readable storage medium as defined in example 9, wherein the instructions, when executed, cause the processor to produce a stream of data for one or more consuming compute building blocks to operate on.
Example 11 includes the non-transitory computer readable storage medium as defined in example 9, wherein the instructions, when executed, cause the processor to, when the returned credit originates from the consuming compute building block, combine multiple returned credits from the number of consuming compute building blocks corresponding to the first factor into a single producer credit.
Example 12 includes the non-transitory computer readable storage medium as defined in example 11, wherein the instructions, when executed, cause the processor to query a counter to determine when to combine the multiple returned credits into the single producer credit, the counter to increment each time a credit corresponding to a location in a memory is returned.
Example 13 includes the non-transitory computer readable storage medium as defined in example 12, wherein the instructions, when executed, cause the processor to not provide the producing compute building block the single producer credit until each of the number of consuming compute building blocks corresponding to the first factor have returned a credit.
Example 14 includes the non-transitory computer readable storage medium as defined in example 9, wherein the instructions, when executed, cause the processor to send a credit to each of the number of consuming compute building blocks.
Example 15 includes the non-transitory computer readable storage medium as defined in example 9, wherein the instructions, when executed, cause the processor to determine the number of consuming compute building blocks per single producing compute building block based on the configuration information.
Example 16 includes a method comprising receiving configuration information from a producing compute building block, generating a number of credits for the producing compute building block corresponding to the configuration information, the configuration information including characteristics of a buffer, analyzing a returned credit to determine whether the returned credit originates from the producing compute building block or a consuming compute building block, and when the returned credit originates from the producing compute building block, multiplying the returned credit by a first factor, the first factor indicative of a number of consuming compute building blocks identified in the configuration information.
Example 17 includes the method of example 16, further including combining multiple returned credits from the number of consuming compute building blocks corresponding to the first factor into a single producer credit when the returned credit originates from the consuming compute building block.
Example 18 includes the method of example 17, further including querying a counter to determine when to combine the multiple returned credits into the single producer credit, the counter to increment each time a credit corresponding to a location in a memory is returned.
Example 19 includes the method of example 18, further including waiting to provide the producing compute building block the single producer credit until each of the number of consuming compute building blocks have returned a credit.
Example 20 includes the method of example 16, further including sending a credit to each of the number of consuming compute building blocks corresponding to the first factor.
Example 21 includes an apparatus comprising means for communicating, the means for communicating to receive configuration information from a producing compute building block, means for generating, the means for generating to generate a number of credits for the producing compute building block corresponding to the configuration information, the configuration information including characteristics of a buffer, means for analyzing to analyze a returned credit to determine whether the returned credit originates from the producing compute building block or a consuming compute building block, and means for duplicating to, when the returned credit originates from the producing compute building block, multiply the returned credit by a first factor, the first factor indicative of a number of consuming compute building blocks identified in the configuration information.
Example 22 includes the apparatus of example 21, further including a means for aggregating, the means for aggregating to combine multiple returned credits from the number of consuming compute building blocks corresponding to the first factor into a single producer credit when the returned credit originates from the consuming compute building block.
Example 23 includes the apparatus of example 22, wherein the means for aggregating are to query a counter to determine when to combine the multiple returned credits into the single producer credit, the counter to increment each time a credit corresponding to a location in a memory is returned.
Example 24 includes the apparatus of example 23, wherein the means for communicating are to wait to provide the producing compute building block the single producer credit until each of the number of consuming compute building blocks have returned a credit.
Example 25 includes the apparatus of example 21, wherein the means for communicating are to send a credit to each of the number of consuming compute building blocks corresponding to the first factor.
From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that manage a credit system between one producing compute building block and multiple consuming compute building blocks. The disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by providing a credit manager to abstract away the number of consuming CBBs, removing and/or eliminating the logic typically required for a consuming CBB to communicate with a producing CBB during execution of a workload. As such, a configuration controller does not need to configure the producing CBB to communicate directly with a plurality of consuming CBBs. Such configuring of direct communication is computationally intensive because the producing CBB would need to know the type of each consuming CBB, the speed at which each consuming CBB can read data, the location of each consuming CBB, etc. Additionally, the credit manager facilitates multiple consuming CBBs for execution of a workload, regardless of the speed at which the multiple consuming CBBs operate. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer.
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.