In conventional multiprocessor systems, processors may exchange data with each other to facilitate multiprocessor communication. The data exchange may be performed using a first-in-first-out (FIFO) component. Additionally, the data exchange may be performed using a write FIFO for storing data output from one or more producer processors and a read FIFO for storing data to be read by one or more consumers.
In one aspect of the present disclosure, a computer implemented method is disclosed. The method includes receiving, at a state machine from a first processing element, a first read request for a first value from a first read first-in first-out (FIFO) memory component and a second value from a second read FIFO memory component. The method also includes determining, at the state machine, if the first FIFO memory component has data. The method further includes transferring, from the state machine, the first value from the first read FIFO memory component to a first register. The method still further includes determining, at the state machine, if the second FIFO memory component has data. The method still yet further includes transferring, from the state machine, the second value from the second FIFO memory component to a second register.
Another aspect of the present disclosure is directed to an apparatus including means for receiving, from a first processing element, a first read request for a first value from a first read FIFO memory component and a second value from a second read FIFO memory component. The apparatus also includes means for determining if the first FIFO memory component has data. The apparatus further includes means for transferring the first value from the first read FIFO memory component to a first register. The apparatus still further includes means for determining if the second FIFO memory component has data. The apparatus still yet further includes means for transferring the second value from the second FIFO memory component to a second register.
In another aspect of the present disclosure, a non-transitory computer-readable medium with non-transitory program code recorded thereon is disclosed. The program code includes program code to receive, from a first processing element, a first read request for a first value from a first read FIFO memory component and a second value from a second read FIFO memory component. The program code also includes program code to determine if the first FIFO memory component has data. The program code further includes program code to transfer the first value from the first read FIFO memory component to a first register. The program code still further includes program code to determine if the second FIFO memory component has data. The program code still yet further includes program code to transfer the second value from the second FIFO memory component to a second register.
Another aspect of the present disclosure is directed to a semiconductor chip having a first processing element, a state machine, a first read FIFO memory component, and a second read FIFO memory component. The first processing element is configured to submit a first read request to the state machine for a first value from the first read FIFO memory component and a second value from the second read FIFO memory component, and change from an active state to a second state. The state machine is configured to determine if the first FIFO memory component has data and transfer the first value from the first read FIFO memory component to a first register. The state machine is also configured to determine if the second FIFO memory component has data and transfer the second value from the second FIFO memory component to a second register. The first processing element is also configured to change to the active state, and process the first value and the second value to generate a third value.
Additional features and advantages of the disclosure will be described below. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.
For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.
One method for communication between processors in conventional parallel processing systems is for one processing element (e.g., a processor core and associated peripheral components) to write data to a location in a shared general-purpose memory, and another processing element to read that data from that memory. Typically, in such systems, processing elements have little or no direct communication with each other. Instead, processors exchange data by having a producer store the data in a shared memory and having the consumer copy the data from the shared memory into its own internal registers for processing.
That is, as an example, a consumer may be specified to gather data from multiple data producers to perform an operation on all the data inclusively. For example, the consumer may be specified to execute a function with four parameters (A, B, C, and D). The function may only be executed when the consumer has data for all four parameters. Furthermore, each parameter may be output from a different producer (e.g., write processor).
Conventional systems may use multiple direct data transports over one or more link layers from each producer to the consumer. Alternatively, conventional systems may specify a shared data memory region for each producer.
Multiple direct data transports suffer disadvantages in time, power, and queuing overhead to accommodate differing rates of production from each of the producers. Also, the size of the program memory and/or data memory for such approaches is relatively large and can be considered a disadvantage in small memory systems (e.g., those having substantially less than 1 GB of program memory and/or data memory).
Multiple shared memory regions may be undesirable due to the costs in time, power, and/or overhead of concurrent access by both the producer (for writing the data) and the consumer (for reading that data). In some cases, a mutual exclusion protocol may be used to safeguard critical data such as the read and write queue pointers.
Still, a shared data memory region protected by a mutual exclusion protocol may be undesirable due to reduced security in the data exchange. Furthermore, failure to abide by the mutual exclusion protocol may lead to defects which may be difficult to detect. Finally, in most cases, the mutual exclusion protocol does not scale well across several independent shared memory regions.
In parallel processing systems that may be scaled to include many processor cores, what is needed is a method for software running on one processing element to communicate data directly to software running on another processing element, while continuing to follow established programming models, so that, for example, in a typical programming language, the data transmission appears to take place as a simple assignment.
In one configuration, multiple read FIFO components and multiple write FIFO components are specified for a semiconductor chip. Each FIFO component may be a dedicated FIFO structure for a processor. Furthermore, each FIFO component includes multiple FIFO locations. The dedicated FIFO structure allows for improved data delivery to a consumer from multiple producers. To improve time and resource usage, a register component associated with the consumer may synchronize the data delivery from multiple read FIFO memory components to a consumer. The FIFO components may be referred to as FIFO memory components.
The multiple read FIFO components and the multiple write FIFO components may be used with a multiprocessor system as shown in
Each processing element 170 has direct access to some (or all) of the operand registers 284 of the other processing elements, such that each processing element 170 may read and write data directly into operand registers 284 used by instructions executed by the other processing element, thus allowing the processor core 290 of one processing element to directly manipulate the operands used by another processor core for opcode execution.
An “opcode” instruction is a machine language instruction that specifies an operation to be performed by the executing processor core 290. Besides the opcode itself, the instruction may specify the data to be processed in the form of operands. An address identifier of a register from which an operand is to be retrieved may be directly encoded as a fixed location associated with an instruction as defined in the instruction set (i.e., an instruction permanently mapped to a particular operand register), or may be a variable address location specified together with the instruction.
Each operand register 284 may be assigned a global memory address comprising an identifier of its associated processing element 170 and an identifier of the individual operand register 284. The originating processing element 170 of the read/write transaction does not need to take special actions or use a special protocol to read/write to another processing element's operand register, but rather may access another processing element's registers as it would any other memory location that is external to the originating processing element. Likewise, the processing core 290 of a processing element 170 that contains a register that is being read by or written to by another processing element does not need to take any action during the transaction between the operand register and the other processing element.
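As an illustration of this addressing scheme, the following C sketch composes and decomposes a global register address from a processing-element identifier and an operand-register index. The bit widths and helper names are assumptions made for illustration only; the disclosure does not prescribe a particular address layout.

#include <stdint.h>

/* Illustrative only: this sketch assumes a 32-bit global address with a
 * processing-element identifier in the upper bits and an operand-register
 * index in the lower REG_INDEX_BITS bits. */
#define REG_INDEX_BITS 8u
#define REG_INDEX_MASK ((1u << REG_INDEX_BITS) - 1u)

static inline uint32_t make_global_reg_addr(uint32_t element_id, uint32_t reg_index)
{
    return (element_id << REG_INDEX_BITS) | (reg_index & REG_INDEX_MASK);
}

static inline uint32_t addr_element_id(uint32_t addr) { return addr >> REG_INDEX_BITS; }
static inline uint32_t addr_reg_index(uint32_t addr)  { return addr & REG_INDEX_MASK; }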
Conventional processing elements commonly include two types of registers: those that are both internally and externally accessible, and those that are only internally accessible. The hardware registers 276 in
The internally accessible registers in conventional processing elements include instruction registers and operand registers, which are internal to the processor core itself. These registers are ordinarily for the exclusive use of the core for the execution of operations, with the instruction registers storing the instructions currently being executed, and the operand registers storing data fetched from hardware registers 276 or other memory as needed for the currently executed instructions. These internally accessible registers are directly connected to components of the instruction execution pipeline (e.g., an instruction decode component, an operand fetch component, an instruction execution component, etc.), such that there is no reason to assign them global addresses. Moreover, since these registers are used exclusively by the processor core, they are single “ported,” since data access is exclusive to the pipeline.
In comparison, the execution registers 280 of the processor core 290 in
As will be described further below, communication between processing elements 170 may be performed using packets, with each data transaction interface 272 connected to one or more busses, where each bus comprises at least one data line. Each packet may include a target register's address (i.e., the address of the recipient) and a data payload. The busses may be arranged into a network, such as the hierarchical network of busses illustrated in
For example, referring to
Other addressing schemes may also be used, and different addressing hierarchies may be used. Whereas a processor core 290 may directly access its own execution registers 280 using address lines and data lines, communications between processing elements through the data transaction interfaces 272 may be via a variety of different bus architectures. For example, communication between processing elements and other addressable components may be via a shared parallel bus-based network (e.g., busses comprising address lines and data lines, conveying addresses via the address lines and data via the data lines). As another example, communication between processing elements and other components may be via one or more shared serial busses.
Addressing between addressable elements/components may be packet-based, message-switched (e.g., a store-and-forward network without packets), circuit-switched (e.g., using matrix switches to establish a direct communications channel/circuit between communicating elements/components), direct (i.e., end-to-end communications without switching), or a combination thereof. In comparison to message-switched, circuit-switched, and direct addressing, a packet-based approach conveys a destination address in a packet header and a data payload in a packet body via the data line(s).
As an example of an architecture using more than one bus type and more than one protocol, inter-cluster communications may be packet-based via serial busses, whereas intra-cluster communications may be message-switched or circuit-switched using parallel busses between the intra-cluster router (L4) 160, the processing elements 170a to 170h within the cluster, and other intra-cluster components (e.g., cluster memory 162). In addition, within a cluster, processing elements 170a to 170h may be interconnected to shared resources within the cluster (e.g., cluster memory 162) via a shared bus or multiple processing-element-specific and/or shared-resource-specific busses using direct addressing (not illustrated).
The source of a packet is not limited only to a processor core 290 manipulating the operand registers 284 associated with another processor core 290, but may be any operational element, such as a memory controller 114, a data feeder 164 (discussed further below), an external host processor connected to the chip 100, a field programmable gate array, or any other element communicably connected to a processor chip 100 that is able to communicate in the packet format.
A data feeder 164 may execute programmed instructions which control where and when data is pushed to the individual processing elements 170. The data feeder 164 may also be used to push executable instructions to the program memory 274 of a processing element 170 for execution by that processing element's instruction pipeline.
In addition to any operational element being able to write directly to an operand register 284 of a processing element 170, each operational element may also read directly from an operand register 284 of a processing element 170, such as by sending a read transaction packet indicating the global address of the target register to be read, and the global address of the destination address to which the reply including the target register's contents is to be copied.
A data transaction interface 272 associated with each processing element may execute such read, write, and reply operations without necessitating action by the processor core 290 associated with an accessed register. Thus, if the destination address for a read transaction is an operand register 284 of the processing element 170 initiating the transaction, the reply may be placed in the destination register without further action by the processor core 290 initiating the read request. Three-way read transactions may also be undertaken, with a first processing element 170x initiating a read transaction of a register located in a second processing element 170y, with the destination address for the reply being a register located in a third processing element 170z.
Memory within a system including the processor chip 100 may also be hierarchical. Each processing element 170 may have a local program memory 274 containing instructions that will be fetched by the micro-sequencer 291 in accordance with a program counter 293. Processing elements 170 within a cluster 150 may also share a cluster memory 162, such as a shared memory serving a cluster 150 including eight processor cores 290. While a processor core 290 may experience no latency (or a latency of one or two cycles of the clock controlling timing of the instruction pipeline 292) when accessing its own execution registers 280, accessing global addresses external to a processing element 170 may experience a larger latency due to (among other things) the physical distance between processing elements 170. As a result of this additional latency, the time needed for a processor core to access an external main memory, a shared cluster memory 162, and the registers of other processing elements may be greater than the time needed for a core 290 to access its own program memory 274 and execution registers 280.
Data transactions external to a processing element 170 may be implemented with a packet-based protocol carried over a router-based or switch-based on-chip network. The chip 100 in
The superclusters 130a-130d may be interconnected via an inter-supercluster router (L2) 120 which routes transactions between superclusters and between a supercluster and the chip-level router (L1) 110. Each supercluster 130 may include an inter-cluster router (L3) 140 which routes transactions between each cluster 150 in the supercluster 130, and between a cluster 150 and the inter-supercluster router (L2). Each cluster 150 may include an intra-cluster router (L4) 160 which routes transactions between each processing element 170 in the cluster 150, and between a processing element 170 and the inter-cluster router (L3). The level 4 (L4) intra-cluster router 160 may also direct packets between processing elements 170 of the cluster and a cluster memory 162. Tiers may also include cross-connects (not illustrated) to route packets between elements in a same tier in the hierarchy. A processor core 290 may directly access its own operand registers 284 without use of a global address.
Memory of different tiers may be physically different types of memory. Operand registers 284 may be among the fastest types of memory in a computing system, whereas external general-purpose memory typically has a higher latency. To improve the speed with which transactions are performed, operand instructions may be pre-fetched from slower memory and stored in a faster program memory (e.g., program memory 274 in
Aspects of the present disclosure are directed to implementing one or more FIFO components via hardware to facilitate a data exchange between processors of a multiprocessor system. The FIFO components may be implemented in cluster memory 162 or in some other memory space. Further, control of the FIFO components may be implemented by memory controller 114. By configuring FIFO components in hardware, as described below, improved data flow between processing elements may be achieved.
As is known to those of skill in the art, a FIFO component may include multiple cells (0 to N−1) for storing data values, where N is the depth of the FIFO component. Each cell may be referred to as a FIFO location or a FIFO storage location. A processor may read from FIFO location 0. After a read from FIFO location 0, the data from FIFO location 1 is moved to FIFO location 0. Furthermore, a processor may read from and write to each FIFO location using read/write functions that are specified for the FIFO component. A register may be specified to synchronize the output of data, received from multiple read FIFO components, to a consumer. For example, a read FIFO component may stall a consumer to allow for synchronization of reads from the multiple read FIFO components. In this example, if a FIFO read is received at a read FIFO component when the read FIFO component is empty, the consumer may be stalled until a write occurs at the read FIFO component. Furthermore, data from the read FIFO component is popped (e.g., transmitted) to a specific register location of a register component once the data has arrived at the read FIFO component. Finally, the consumer receives an indication (e.g., interrupt) from a state machine associated with the read request when all of the requested data has arrived at the specific register locations. The state machine may be implemented in hardware or software; for example, the state machine may be implemented as an electronic circuit that monitors the state of other components (e.g., FIFOs and a consumer) and performs actions based on their state.
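The FIFO-location behavior described above may be modeled in software as in the following minimal C sketch, assuming an illustrative depth and 64-bit values; an actual FIFO component would typically be implemented in hardware.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define FIFO_DEPTH 16               /* N: illustrative depth */

typedef struct {
    uint64_t cell[FIFO_DEPTH];      /* FIFO locations 0 .. N-1 */
    unsigned count;                 /* number of occupied locations */
} fifo_t;

static bool fifo_empty(const fifo_t *f) { return f->count == 0; }
static bool fifo_full(const fifo_t *f)  { return f->count == FIFO_DEPTH; }

/* Write a value to the first free FIFO location. */
static bool fifo_push(fifo_t *f, uint64_t value)
{
    if (fifo_full(f))
        return false;
    f->cell[f->count++] = value;
    return true;
}

/* Read from location 0; the remaining values shift down one location,
 * matching the behavior described above. */
static bool fifo_pop(fifo_t *f, uint64_t *value)
{
    if (fifo_empty(f))
        return false;
    *value = f->cell[0];
    memmove(&f->cell[0], &f->cell[1], (f->count - 1) * sizeof f->cell[0]);
    f->count--;
    return true;
}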
In one example, multiple processors may be specified in a cluster (for example a first processing element 170a, a second processing element 170b, a third processing element 170c, a fourth processing element 170d, and a fifth processing element 170e), such that one processor is a consumer and the other processors are producers. Of course, aspects of the present disclosure are also contemplated for multiple consumers. Each producer 170a-170d may process data and write the result(s) of the processed data to a write FIFO component corresponding to a specific producer 170a-170d.
As shown in
Additionally, as shown in
After receiving the request, the state machine determines if read FIFO component A has the first value. If the first value is present, the state machine transfers the first value from read FIFO component A to a register of the consumer. Likewise, after receiving the request, the state machine determines if read FIFO component B has the second value. If the second value is present, the state machine transfers the second value from read FIFO component B to a register of the consumer.
Alternatively, if either read FIFO component A or read FIFO component B does not have the requested value (e.g., the first value or the second value), the state machine may, in some implementations, cause the consumer to change states. For example, the state machine may cause the consumer to transition from an active state to an idle state, a state with a de-gated clock, a reduced voltage state, or a powered down state. Where the consumer is already in a state other than the active state, the state machine may cause, for example, the consumer to change from one of the non-active states to another of the non-active states (e.g., change from the idle state to a state with a de-gated clock).
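The check-and-transfer behavior described above may be sketched, for the two-FIFO case, as follows. The helper functions (fifo_has_data, fifo_pop_value, write_register, set_consumer_state) are assumed stand-ins for the underlying hardware and are not taken from the disclosure.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical consumer power states corresponding to those described above. */
typedef enum { ACTIVE, IDLE, CLOCK_GATED, LOW_POWER, POWERED_DOWN } consumer_state_t;

/* Assumed stand-ins for the hardware; these names are illustrative only. */
bool fifo_has_data(int fifo_id);
uint64_t fifo_pop_value(int fifo_id);
void write_register(int reg_id, uint64_t value);
void set_consumer_state(consumer_state_t s);

/* Handle a read request naming two read FIFO components: transfer whichever
 * requested values are present, and place the consumer in a non-active state
 * if either value is still missing. */
void handle_read_request(int fifo_a, int fifo_b, int reg_a, int reg_b)
{
    bool have_a = fifo_has_data(fifo_a);
    bool have_b = fifo_has_data(fifo_b);

    if (have_a)
        write_register(reg_a, fifo_pop_value(fifo_a));   /* first value to first register */
    if (have_b)
        write_register(reg_b, fifo_pop_value(fifo_b));   /* second value to second register */

    if (!have_a || !have_b)
        set_consumer_state(IDLE);    /* or CLOCK_GATED, LOW_POWER, POWERED_DOWN */
}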
In one configuration, the consumer is in the active state when sending the read request. The active state refers to a state where a consumer is fully powered and processing instructions. After sending the read request, or in response to other activity, the consumer may transition to a state that is different from the active state. The state that is different from the active state may be referred to as a non-active state.
In one aspect of the present disclosure, the consumer transitions from the active state to an idle state. The idle state refers to a state where the processor remains powered but is not processing instructions. That is, when in the idle state, the consumer's clock may be running; still, the consumer does not retrieve or execute instructions. The consumer may be referred to as spinning when in the idle state. The consumer's power may not be reduced in the idle state.
In another aspect, the consumer may transition from the active state to a clock de-gated state. In the clock de-gated state, the clock signal to the consumer is de-gated (e.g., cut off), such that the consumer does not receive clock signals. As a result of the clock de-gating, the consumer's power use is reduced in comparison to the power used during the active state. That is, in the clock de-gated state, the clock is no longer in communication with the consumer. In still yet another aspect, the consumer may transition from the active state to a low power state or a fully powered down state. The low power state consumes less power in comparison to the active state.
In one configuration, the register component transmits all of the data values requested in a read request to a consumer when all of the requested data values are stored in the register component. Furthermore, as shown in
As previously discussed, the consumer waits for all of the data to be written to respective read FIFO components. That is, the consumer is stalled until data has arrived at each read FIFO component. Accordingly, the state machine may functionally provide for the synchronization of the producer and the consumer.
Aspects of the present disclosure are directed to hardware-based FIFOs. In one configuration, each FIFO component has a configurable depth of up to 128 64-bit words. Each FIFO component may act as a write FIFO (at the producer) or a read FIFO (at the consumer). Aspects of the present disclosure may use a customized transport protocol for sending data from a write FIFO component to a read FIFO component. In one configuration, multiple write FIFO components may send data packets to read FIFO components corresponding to the same consumer. Additionally, the same data may also be sent to multiple consumers and their corresponding read FIFO components. In one configuration, a header is associated with each data packet to address a specific read FIFO component. The number of headers may be implementation dependent.
Aspects of the present disclosure do not interpret the data exchange. Thus, aspects of the present disclosure may be specified for applications other than argument transport. That is, in some cases, rather than sending data to a read FIFO component (e.g., argument transport), multiple producers may send a command for the read FIFO component to perform an action (e.g., dispatch). Thus, aspects of the present disclosure may be specified for inter-processor synchronization systems, such as work unit dispatch specified for actor based systems. In one configuration, the dispatch is issued via hardware queues.
In one configuration, a state machine is specified to synchronize a data flow of requested data values to a consumer for a first operation. The consumer is configured to perform the first operation before proceeding to a second operation. The data flow may have one or more data items (e.g., scalars). As an example, a function may use four parameters (A, B, C, D) to produce a result (X) using an instruction or function (F). That is, a consumer may be tasked with executing a function (F) to produce the result (X) using the four correlated parameters (A, B, C, D).
Each parameter may be generated from a different producer. In conventional systems, a mailbox approach may be specified where each producer independently generates and sends a data packet with a specific parameter to the consumer. Thus, the different parameters may arrive at different times. In such an example, the consumer may be tasked with receiving and storing each particular parameter as it arrives, thus taking resources of the consumer away from other tasks as it handles incoming data before the data is ready to be operated on by the consumer. That is, in a conventional system the consumer has to pay attention to parameters A, B, and C before performing the specified function (since parameter D is still missing). As an example, in a conventional system, a software component of the consumer may receive a data packet with the A parameter before receiving the data packets with the B, C, and D parameters. Therefore, the consumer expends time receiving and storing the A parameter, and other received parameters, in a location and may stall until all of the parameters are received such that the function F can be executed. After receiving all of the parameters, the consumer retrieves the stored parameters from the location and executes the function F to output the result (X) to another consumer. The consumer may output the result X to a specific FIFO location of a FIFO component of that other consumer.
In one configuration, one or more read FIFO components are specified as hardware components for receiving one or more data values. Furthermore, a state machine may be specified to synchronize the output of the data values from a register component to a consumer when all of the requested data values have been transferred from the read FIFO components to the specific register locations of the register component. For example, as previously discussed, a consumer may be tasked with executing a function F using four parameters (A, B, C, D). In this example, the consumer may transmit a read command to request parameters A, B, C, and D from read FIFO components A-D. Each read FIFO component A-D may store a different parameter. Furthermore, after receiving the read command, a state machine may determine whether the requested parameter is stored in a FIFO location of each read FIFO component A-D. If a FIFO location of one or more read FIFO components A-D is empty, the consumer receives an indication the read FIFO component is empty and the consumer enters a non-active state.
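From the consumer's software perspective, this arrangement may make the operation appear as ordinary assignments followed by a call, consistent with the programming model noted earlier. The following C sketch illustrates that view; the multi_read helper and the body of F are hypothetical placeholders rather than interfaces defined by the disclosure.

#include <stdint.h>

/* Assumed helper issuing the multi-read command described above: it requests
 * one value from each of four read FIFO components and returns only after the
 * state machine has placed all four values in the consumer's registers. */
void multi_read(uint64_t *a, uint64_t *b, uint64_t *c, uint64_t *d);

/* The operation F on the four correlated parameters; the body is only a
 * placeholder for whatever the consumer actually computes. */
static uint64_t F(uint64_t a, uint64_t b, uint64_t c, uint64_t d)
{
    return a + b + c + d;            /* placeholder computation */
}

/* One consumption step: the synchronized multi-read, then the function F
 * producing the result X. */
uint64_t consume_once(void)
{
    uint64_t a, b, c, d;
    multi_read(&a, &b, &c, &d);      /* stalls until A, B, C, and D are all present */
    return F(a, b, c, d);            /* result X */
}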
The read FIFO components may be dedicated to receive specific parameters from different producers. In this example, a first read FIFO component is dedicated to receiving the A parameters from a first producer, a second read FIFO component is dedicated to receiving the B parameters from a second producer, a third read FIFO component is dedicated to receiving the C parameters from a third producer, and a fourth read FIFO component is dedicated to receiving the D parameters from a fourth producer. Each processor generates the parameters at a different rate such that each processor has an independent data flow. As previously discussed, each read FIFO component may receive parameters (e.g., data values) from packets transmitted from a write FIFO component corresponding to a specific producer. Alternatively, each read FIFO component may receive packets directly from a producer.
As an example, the first processor may generate parameters faster than the other processors. Therefore, the first read FIFO component may fill at a faster rate in comparison to the other read FIFO components. In conventional systems, a FIFO component takes a top value from a FIFO location and sends the value to corresponding registers of the consumer to be used in executing the function F. Still, according to aspects of the present disclosure, a state machine monitors each read FIFO component. When a data value has arrived at a read FIFO component, the state machine copies the data value from a FIFO location of the read FIFO component and places the data value in a specific register location. Furthermore, the state machine transmits an indication to the consumer after all of the data values of the requested parameters are stored in register locations. The indication may cause the consumer to transition from a non-active state to an active state. As discussed above, the non-active state may be an idle state, a clock de-gated state, a low power state, or a fully powered down state. Of course, aspects of the present disclosure are not limited to the aforementioned non-active states and are contemplated for other types of non-active states. As previously discussed, the consumer may enter a non-active state when a read FIFO component is empty.
In one configuration, the state machine joins independent data flows from different producers, such that the data arrival at the consumer is synchronized. As discussed above, each read FIFO component may be filled at a different rate based on the corresponding producer's data flow. Still, the state machine synchronizes the output to the consumer so that the different parameters do not arrive at the consumer at different times. That is, in one configuration, the state machine joins multiple read FIFO components that receive multiple data flows from multiple processors to synchronize the output of data values received from each read FIFO component. Specifically, in the present example, a read command is not cleared until each register location has received data from a read FIFO component.
Aspects of the present disclosure may reduce costs, such as time and energy, associated with a consumer having to store and retrieve different data elements (e.g., parameters) specified for performing a task. That is, by synchronizing the data arrival, a consumer may no longer be specified to store and retrieve different data elements (e.g., parameters) used for a specific task. In one configuration, the number of read FIFO components is based on the number of parameters (e.g., arguments) designated for a task.
According to aspects of the present disclosure, a header, such as a write header, is specified for communicating a data packet from a producer to a consumer. In one configuration, a data packet generated from a producer is combined with a write header to generate a write packet that addresses a specific FIFO location in a target FIFO component associated with a consumer. The size of the packet may be sixteen 64-bit elements, though the size is configurable for different implementations.
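One possible layout of such a write packet is sketched below in C. The sixteen 64-bit payload elements follow the example size given above, while the header field names and widths are assumptions made for illustration.

#include <stdint.h>

#define WRITE_PAYLOAD_WORDS 16       /* example packet size: sixteen 64-bit elements */

/* Illustrative write header addressing a specific read FIFO component at a
 * specific consumer; field names and widths are assumptions. */
typedef struct {
    uint32_t consumer_id;            /* processing element that owns the read FIFO */
    uint16_t read_fifo_id;           /* which read FIFO component at that consumer */
    uint16_t word_count;             /* number of valid 64-bit payload words */
} write_header_t;

typedef struct {
    write_header_t header;
    uint64_t payload[WRITE_PAYLOAD_WORDS];
} write_packet_t;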
As previously discussed, multiple processors may be specified in a cluster. The producers 170a-170d may process data and write the result(s) of the processed data to a corresponding write FIFO component. In one example, the first producer 170a may produce data values for a first parameter. The data values from the first producer 170a may be transmitted to a first write FIFO component corresponding to the first producer 170a. Furthermore, a second producer 170b, being unaware of the other producers 170a, 170c, and 170d, may produce data values for a second parameter. The data values from the second producer 170b may be transmitted to a second write FIFO component corresponding to the second producer 170b. Each producer 170a-170d may transmit data values to write FIFO components corresponding to each of the producers 170a-170d. Additionally, each producer may be associated with a specific write FIFO component based on software architecture and/or other considerations.
Furthermore, being unaware of the processing times of the producers 170a-170d, the consumer 170e may execute a read command to multiple read FIFO components. The read command may be referred to as a multi-read command. In this example, the read will be satisfied when all of the read FIFO components have received the data values specified in the read command. Furthermore, after executing the read command, if one or more of the read FIFO components is empty, the consumer 170e may be in a non-active state until all of the requested data values are available. In one configuration, when all of the requested data is available, a register component triggers the consumer 170e to transition from the non-active state to the active state.
The read command may indicate the register locations where the data values are to be stored. For example, the read command may command read FIFO component A to store data into register location A, read FIFO component B to store data into register location B, read FIFO component C to store data into register location C, and read FIFO component D to store data into register location D.
In one configuration, each FIFO component has an empty state, a non-empty state, and a full state. When all FIFO locations of a FIFO component have data, the FIFO component is in a full state, such that the FIFO component can no longer accept data. When one or more FIFO locations have data and one or more FIFO locations, including the N−1 FIFO location, do not have data, the FIFO component is in a non-empty state. Furthermore, when all of the FIFO locations are empty, the FIFO component is in an empty state. When receiving a read request, a read FIFO component may determine whether the read FIFO component is in an empty state. If the data value is stored in one of the FIFO locations, the data value is popped from the FIFO location and stored in the designated register location. When all of the designated register locations have the data values received from the different FIFO components, the current instruction is released so that the consumer may proceed to a subsequent instruction. Alternatively, if a read FIFO component is in an empty state, the consumer may enter a non-active state based on an empty state indication received from a read FIFO component.
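The state determination described above reduces to an occupancy check, as in the following C sketch; the type and function names are illustrative.

typedef enum { FIFO_EMPTY, FIFO_NON_EMPTY, FIFO_FULL } fifo_state_t;

/* count is the number of occupied FIFO locations; depth is N. */
static fifo_state_t fifo_state(unsigned count, unsigned depth)
{
    if (count == 0)
        return FIFO_EMPTY;           /* no locations hold data */
    if (count == depth)
        return FIFO_FULL;            /* every location holds data; no more writes accepted */
    return FIFO_NON_EMPTY;           /* some, but not all, locations hold data */
}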
It should be noted that aspects of the present disclosure are distinguishable from a conventional polling architecture that outputs an item upon request. That is, conventional systems may be referred to as active systems and aspects of the present disclosure are directed to reactive systems. Specifically, according to aspects of the present disclosure, a consumer may request data and the consumer waits until the data is available. In one configuration, the consumer is not cognizant of the producer. Furthermore, according to aspects of the present disclosure, a producer produces data and transmits data to a destination regardless of whether the data has been requested. Still, although the consumer is not cognizant of the producer, and vice versa, the production and consumption of data may be synchronized to mitigate data overflow.
In one configuration, a tuple space may be specified as an additional layer of synchronization between processors. For example, the tuple space may be specified between a read FIFO component and a consumer. The tuple space may be used by a consumer to indicate the data bandwidth of the consumer. That is, the consumer may indicate to the tuple space the amount of data that the consumer can handle. The tuple space may be implemented as described in U.S. patent application Ser. No. 15/157,982 filed on May 18, 2016, in the names of Lopez et al., the disclosure of which is expressly incorporated by reference herein in its entirety.
In one configuration, a write FIFO component is specified to coordinate production of a write packet. The write packet includes a write header and data generated by a producer. In this configuration, the output of a producer is accumulated in the write FIFO component. That is, the write FIFO component may be an intermediary between a producer and a read FIFO component. As previously discussed, the write FIFO component may be specified to reduce communication overhead between producers and read FIFO components. In another configuration, a producer bypasses the write FIFO component and the output of the producer is combined with a header and sent directly to a read FIFO component. The write FIFO component may be referred to as a source FIFO component and the read FIFO component may be referred to as a target FIFO component.
In one configuration, the write FIFO component is configured to know which read FIFO component should receive a specific write packet (e.g., data packet). Furthermore, the write FIFO component may be configured with a data threshold that specifies when to output the data accumulated in the write FIFO component. That is, the write FIFO component may be configured to output the accumulated data when a condition has been satisfied, such as when the number of non-empty FIFO locations is greater than or equal to a threshold. As previously discussed, the data that is output from the write FIFO component may be combined with a header to generate a write packet. Furthermore, the header may specify one of the multiple read FIFO components associated with a consumer.
In another configuration, a timer is specified in addition to, or as an alternative to, the threshold. That is, the write FIFO component may output the accumulated data when the number of non-empty FIFO locations is greater than a threshold and/or the write FIFO component may output the accumulated data when a time between write FIFO component outputs is greater than a time threshold. The timer may initialize to zero after the write FIFO component outputs data. Furthermore, in this configuration, after initializing to zero, an event is generated when the timer exceeds the time threshold. The write FIFO component may output the accumulated data in response to the event from the timer. In yet another configuration, the write FIFO component outputs the accumulated data in response to an explicit request.
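A combined flush condition along these lines might be expressed as in the following C sketch; the field names, the use of a cycle counter for the timer, and the comparison operators are assumptions for illustration.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative flush decision for a write FIFO component. */
typedef struct {
    unsigned occupied;               /* number of non-empty FIFO locations */
    unsigned fill_threshold;         /* flush when occupied >= fill_threshold */
    uint64_t cycles_since_flush;     /* reset to zero each time data is output */
    uint64_t time_threshold;         /* flush when cycles_since_flush > time_threshold */
} write_fifo_ctrl_t;

static bool should_flush(const write_fifo_ctrl_t *w, bool explicit_request)
{
    return explicit_request
        || w->occupied >= w->fill_threshold
        || w->cycles_since_flush > w->time_threshold;
}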
According to an aspect of the present disclosure, a number of FIFO components may be based on an amount of physical resources available. For example, the system may have sixty-four FIFO storage locations available. In one example, thirty-two FIFO locations may be allocated to a first FIFO component and another thirty-two FIFO locations may be allocated to a second FIFO component. In another example, the system may configure four FIFO components, each with sixteen locations.
In one configuration, a state machine is specified to determine whether a requested data value is stored in a FIFO location. Furthermore, the state machine may copy the data value to a register location and set a flag when the register location includes the data value. For example, after the consumer issues a read request, the state machine monitors multiple FIFO components to determine whether each FIFO component has a request data value. If a FIFO component includes a requested data value, the state machine pops the data value from the FIFO component to a specific register location. Furthermore, after the data value has been copied to the register location, a flag associated with the register location is set to indicate that the register location includes data. Additionally, the state machine continues to monitor the FIFO components while the consumer is in a non-active state. When a data value arrives at a FIFO component, while the consumer is in the non-active state, the state machine pops the data value from the FIFO component to a specific register location. Furthermore, once all the flags indicate that the register locations include data, the state machine transitions the consumer from the non-active state to an active state.
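The monitoring behavior described in this configuration may be sketched as a polling routine that maintains one flag per register location, as shown below in C. The helper functions and the choice of four parameters are illustrative assumptions; in practice the state machine may be a hardware circuit rather than software.

#include <stdbool.h>
#include <stdint.h>

#define NUM_PARAMS 4                         /* one read FIFO and one register location per parameter */

/* Assumed stand-ins for the hardware; these names are illustrative only. */
bool fifo_has_data(int fifo_id);
uint64_t fifo_pop_value(int fifo_id);
void write_register(int reg_id, uint64_t value);
void raise_consumer_interrupt(void);         /* wakes the consumer to the active state */

/* One "data present" flag per register location of the outstanding read request. */
static bool reg_flag[NUM_PARAMS];

/* One monitoring pass of the state machine: pop any newly arrived value to its
 * register location, set that location's flag, and interrupt the consumer once
 * every flag is set. Returns true when the read request has been satisfied. */
static bool state_machine_poll(void)
{
    bool all_set = true;
    for (int i = 0; i < NUM_PARAMS; i++) {
        if (!reg_flag[i] && fifo_has_data(i)) {
            write_register(i, fifo_pop_value(i));
            reg_flag[i] = true;
        }
        all_set = all_set && reg_flag[i];
    }
    if (all_set)
        raise_consumer_interrupt();          /* all register locations full: release the read */
    return all_set;
}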
Furthermore, as shown in
After receiving the write packet (time T5), a state machine, at time T6, determines that the requested data value is now stored in a FIFO location of the read FIFO component A. Thus, at time T6, the state machine may pop the data from the FIFO location. Furthermore, at time T7, the state machine copies the popped data to a corresponding register location of the consumer 170e. Furthermore, after the data is stored in the register location, the state machine sets a flag indicating that the register location includes data. In the present example, read FIFO component B is still empty; thus, the consumer remains in the non-active state as the register component has yet to receive the requested data value from the read FIFO component B.
Additionally, as shown in
After receiving the write packet (time T10), a state machine, at time T11, determines that the requested data value is now stored in a FIFO location of the read FIFO component B. Thus, at time T11, the state machine may pop the data from the FIFO location. Furthermore, at time T12, the state machine copies the popped data to a corresponding register location of the consumer 170e. After the data is stored in the register location, the state machine sets a flag indicating that the register location includes data. In the present example, after time T12, the register locations for the register component associated with the consumer 170e are full. Thus, at time T13, the consumer 170e enters an active state in response to an interrupt (e.g., indication) received from the state machine. The state machine transmits the interrupt when all of the flags of the register locations associated with the read request indicate that the register locations are full. Finally, after transmitting the interrupt, the state machine releases the read command so that the consumer 170e can proceed to execute a subsequent instruction.
The timing of
In the example of
As shown in
Alternatively, if the requested data is not stored in a FIFO location, the state machine may transmit an indication that data is not available (block 910). The indication may cause the consumer to transition to a different state (e.g., from an active state to an idle state, a clock de-gated state, a low power state, or a fully powered down state). While the consumer is in a non-active state, the read FIFO component may receive the requested data from a write FIFO component (block 912). In response to the received data, the state machine may read the received data from a FIFO location and transmit the data to a register location of a register component of the consumer (block 914). After the data is stored in the register location, the state machine may indicate (e.g., by setting a flag) that data is stored in the register location. The consumer may change states in response to receiving the indication that the data is stored in the register location (e.g., by changing from a non-active state to an active state). Furthermore, after the data is transmitted to a register location, the read FIFO component waits for a subsequent request from the consumer (block 902).
The state machine may perform the operations of
Alternatively, if the data is not available at the read FIFO components, in response to receiving the indication that data is not available at all FIFOs, the consumer may transition from one state to another state (block 1006). For example, the consumer may transition from an active state to a non-active state or from one non-active state to another non-active state. In some cases, one or more read FIFO components may have the requested data while one or more read FIFO components may be empty. In this case, the state machine may read the data from one or more read FIFO components to corresponding register locations. Furthermore, the consumer may enter the non-active state until the remaining data is available at the other read FIFO components. While in a non-active state, the consumer may transition to the active state when the data is available at the corresponding register locations (block 1008). That is, the state machine may cause the consumer to transition to the active state when the data has been received from the multiple read FIFO components. Furthermore, at block 1010, the consumer processes the data received from the register locations. After processing the data, the consumer may issue a subsequent request for data (block 1002).
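The consumer-side cycle described above (blocks 1002 through 1010) may be summarized by the following C sketch. Every helper name here is a hypothetical placeholder standing in for the hardware and runtime behavior described in this disclosure.

#include <stdbool.h>
#include <stdint.h>

#define NUM_PARAMS 4

/* Assumed stand-ins for the hardware and runtime; names are illustrative. */
void issue_multi_read(void);                 /* block 1002: request data from the read FIFO components */
bool all_registers_filled(void);             /* true once the state machine has filled every register location */
void enter_non_active_state(void);           /* block 1006: e.g., idle, clock de-gated, or low power */
void wait_for_wake_interrupt(void);          /* returns when the state machine raises its interrupt */
uint64_t read_register(int reg_id);
void process(const uint64_t *params, int n); /* block 1010: the consumer's operation on the data */

void consumer_loop(void)
{
    for (;;) {
        issue_multi_read();
        if (!all_registers_filled()) {       /* data not yet available at every read FIFO */
            enter_non_active_state();
            wait_for_wake_interrupt();       /* block 1008: wake when all values are in place */
        }
        uint64_t params[NUM_PARAMS];
        for (int i = 0; i < NUM_PARAMS; i++)
            params[i] = read_register(i);    /* collect the values placed by the state machine */
        process(params, NUM_PARAMS);         /* block 1010 */
    }
}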
In one configuration, a processor chip 100 or a processing element 170 includes means for receiving, means for entering, and/or means for submitting. In one aspect, the aforementioned means may be the cluster memory 162, data feeder 164, memory controller 114, and/or program memory 274 configured to perform the functions recited by the means for receiving, means for sending, and/or means for determining. In another aspect, the aforementioned means may be any module or any apparatus configured to perform the functions recited by the aforementioned means.
Embodiments of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk and/or other media.
The above aspects of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed aspects may be apparent to those of skill in the art. Persons having ordinary skill in the field of computers, microprocessor design, and network architectures should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art, that the disclosure may be practiced without some or all of the specific details and steps disclosed herein.
Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
Conjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is to be understood with the context as used in general to convey that an item, term, etc. may be either X, Y, Z, or a combination thereof. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.
As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.