This application is related to U.S. patent application Ser. Nos. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD, 10/683,929 entitled PIPELINE ACCELERATOR FOR IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD, 10/684,057 entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD, and 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD, which have a common filing date and owner and which are incorporated by reference.
A common computing architecture for processing relatively large amounts of data in a relatively short period of time includes multiple interconnected processors that share the processing burden. By sharing the processing burden, these multiple processors can often process the data more quickly than a single processor can for a given clock frequency. For example, each of the processors can process a respective portion of the data or execute a respective portion of a processing algorithm.
In general, the computing machine 10 effectively divides the processing of raw data among the master processor 12 and the coprocessors 14. The remote source (not shown in FIG. 1) loads the raw data into a raw-data FIFO (not shown) defined within the memory 26.
In an example of operation, the computing machine 10 processes the raw data by sequentially performing n+1 respective operations on the raw data, where these operations together compose a processing algorithm such as a Fast Fourier Transform (FFT). More specifically, the machine 10 forms a data-processing pipeline from the master processor 12 and the coprocessors 14. For a given frequency of the clock signal, such a pipeline often allows the machine 10 to process the raw data faster than a machine having only a single processor.
After retrieving the raw data from the raw-data FIFO (not shown) in the memory 26, the master processor 12 performs a first operation, such as a trigonometric function, on the raw data. This operation yields a first result, which the processor 12 stores in a first-result FIFO (not shown) defined within the memory 26. Typically, the processor 12 executes a program stored in the memory 22, and performs the above-described actions under the control of the program. The processor 12 may also use the memory 22 as working memory to temporarily store data that the processor generates at intermediate intervals of the first operation.
Next, after retrieving the first result from the first-result FIFO (not shown) in the memory 26, the coprocessor 14_1 performs a second operation, such as a logarithmic function, on the first result. This second operation yields a second result, which the coprocessor 14_1 stores in a second-result FIFO (not shown) defined within the memory 26. Typically, the coprocessor 14_1 executes a program stored in the memory 24_1, and performs the above-described actions under the control of the program. The coprocessor 14_1 may also use the memory 24_1 as working memory to temporarily store data that the coprocessor generates at intermediate intervals of the second operation.
Then, the coprocessors 14_2-14_n sequentially perform the third through nth operations on the second through (n−1)th results in a manner similar to that discussed above for the coprocessor 14_1.
The nth operation, which is performed by the coprocessor 14_n, yields the final result, i.e., the processed data. The coprocessor 14_n loads the processed data into a processed-data FIFO (not shown) defined within the memory 26, and the remote device (not shown in FIG. 1) retrieves the processed data from this FIFO.
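By way of illustration only, the following Python sketch models this FIFO-based division of labor; the names `stage`, `operations`, and `fifos` are assumptions for this sketch, not terms from the application. Each worker thread plays the role of the master processor 12 or a coprocessor 14, and a shared queue plays the role of each result FIFO defined within the memory 26.

```python
import math
import queue
import threading

def stage(op, inbox, outbox):
    """Repeatedly read a value from the input FIFO, apply this stage's
    operation, and write the result to the output FIFO."""
    while True:
        value = inbox.get()
        if value is None:              # sentinel: no more raw data
            outbox.put(None)
            return
        outbox.put(op(value))

# The n+1 operations, e.g., a trigonometric function followed by others.
operations = [math.sin, lambda r: math.log(abs(r) + 1.0), lambda r: r * r]

# One FIFO between each pair of adjacent stages (the raw-data FIFO first).
fifos = [queue.Queue() for _ in range(len(operations) + 1)]
workers = [threading.Thread(target=stage, args=(op, fifos[i], fifos[i + 1]))
           for i, op in enumerate(operations)]
for w in workers:
    w.start()

for x in (0.1, 0.2, 0.3):              # raw data from the remote source
    fifos[0].put(x)
fifos[0].put(None)

while (result := fifos[-1].get()) is not None:
    print(result)                      # processed data for the remote device
```

As in the text, each stage can accept a new value as soon as it hands its previous result to the next FIFO, which is what yields the approximately (n+1)-fold throughput gain discussed below.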
Because the master processor 12 and coprocessors 14 are simultaneously performing different operations of the processing algorithm, the computing machine 10 is often able to process the raw data faster than a computing machine having a single processor that sequentially performs the different operations. Specifically, the single processor cannot retrieve a new set of the raw data until it performs all n+1 operations on the previous set of raw data. But using the pipeline technique discussed above, the master processor 12 can retrieve a new set of raw data after performing only the first operation. Consequently, for a given clock frequency, this pipeline technique can increase the speed at which the machine 10 processes the raw data by a factor of approximately n+1 as compared to a single-processor machine (not shown in FIG. 1).
Alternatively, the computing machine 10 may process the raw data in parallel by simultaneously performing n+1 instances of a processing algorithm, such as an FFT, on the raw data. That is, if the algorithm includes n+1 sequential operations as described above in the previous example, then each of the master processor 12 and the coprocessors 14 sequentially performs all n+1 operations on respective sets of the raw data. Consequently, for a given clock frequency, this parallel-processing technique, like the above-described pipeline technique, can increase the speed at which the machine 10 processes the raw data by a factor of approximately n+1 as compared to a single-processor machine (not shown in FIG. 1).
Unfortunately, although the computing machine 10 can process data more quickly than a single-processor computing machine (not shown in FIG. 1) typically can, a processor often requires multiple clock cycles to execute a single instruction, and often must execute multiple instructions to process a single data value.
Consequently, the speed at which the computing machine 10 processes data is often significantly lower than the frequency of the clock that drives the master processor 12 and the coprocessors 14. For example, if the processor 12 is clocked at 1.0 Gigahertz (GHz) but requires an average of 2.5 clock cycles per data value, then the effective data-processing speed equals (1.0 GHz)/2.5=0.4 GHz. This effective data-processing speed is often characterized in units of operations per second. Therefore, in this example, for a clock speed of 1.0 GHz, the processor 12 would be rated with a data-processing speed of 0.4 Gigaoperations/second (Gops).
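To make this arithmetic explicit, a minimal Python sketch follows; the function name `effective_gops` is an assumption for this sketch.

```python
def effective_gops(clock_ghz: float, cycles_per_value: float) -> float:
    """Effective data-processing speed: the clock frequency divided by
    the average number of clock cycles needed per data value."""
    return clock_ghz / cycles_per_value

# The example from the text: a 1.0 GHz clock and 2.5 cycles per value
# yield an effective speed of 0.4 Gops.
print(effective_gops(1.0, 2.5))   # 0.4
```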
For example, the pipeline 30 can often solve the following equation faster than a processor can for a given clock frequency:
Y(x_k) = (5x_k + 3)·2^(x_k)
where x_k represents a sequence of raw data values. In this example, the operator circuit 32_1 is a multiplier that calculates 5x_k, the circuit 32_2 is an adder that calculates 5x_k + 3, and the circuit 32_3 (n = 3) is a multiplier that calculates (5x_k + 3)·2^(x_k).
During a first clock cycle (k = 1), the circuit 32_1 receives the data value x_1 and multiplies it by 5 to generate 5x_1.
During a second clock cycle (k = 2), the circuit 32_2 receives 5x_1 from the circuit 32_1 and adds 3 to generate 5x_1 + 3. Also during the second clock cycle, the circuit 32_1 generates 5x_2.
During a third clock cycle (k = 3), the circuit 32_3 receives 5x_1 + 3 from the circuit 32_2 and multiplies it by 2^(x_1) (effectively left shifting 5x_1 + 3 by x_1 bits) to generate the first result, (5x_1 + 3)·2^(x_1). Also during the third clock cycle, the circuit 32_1 generates 5x_3 and the circuit 32_2 generates 5x_2 + 3.
The pipeline 30 continues processing subsequent raw data values x_k in this manner until all of the raw data values are processed.
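The clock-by-clock behavior can be reproduced with a short simulation. The following Python sketch is illustrative only (the names `pending`, `r1`, and `r2` are assumptions); it also forwards each x_k alongside the partial result, since the third circuit needs x_k to perform its shift.

```python
# Stage 1 computes 5x, stage 2 adds 3, and stage 3 multiplies by 2**x
# (a left shift by x bits).  r1 and r2 model the registers between stages.
raw = [1, 2, 3, 4]                 # raw data values x_1 .. x_4
pending = list(raw)
r1 = r2 = None
results = []
cycle = 0
while pending or r1 or r2:
    cycle += 1
    if r2:                         # stage 3: (5x + 3) * 2**x
        s, x = r2
        results.append((cycle, s << x))
    r2 = (r1[0] + 3, r1[1]) if r1 else None   # stage 2: 5x + 3
    if pending:                    # stage 1: 5x
        x = pending.pop(0)
        r1 = (5 * x, x)
    else:
        r1 = None

# First result appears on cycle 3 (a latency of two cycles), then one
# result per cycle: [(3, 16), (4, 52), (5, 144), (6, 368)]
print(results)
```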
Consequently, after a delay of two clock cycles following receipt of the raw data value x_1 (this delay is often called the latency of the pipeline 30), the pipeline generates the result (5x_1 + 3)·2^(x_1), and thereafter generates one result, e.g., (5x_2 + 3)·2^(x_2), (5x_3 + 3)·2^(x_3), . . . , (5x_n + 3)·2^(x_n), each clock cycle.
Disregarding the latency, the pipeline 30 thus has a data-processing speed equal to the clock speed. In comparison, assuming that the master processor 12 and coprocessors 14 (FIG. 1) have data-processing speeds of 0.4 Gops as in the above example, the pipeline 30 can process data 2.5 times faster than the computing machine 10 for a given clock frequency.
Still referring to FIG. 2, a designer typically implements the pipeline 30 in a programmable-logic IC (PLIC), such as a field-programmable gate array (FPGA), because a PLIC allows the hardwired pipeline to be designed and modified with considerably less effort than does an application-specific IC (ASIC).
Unfortunately, the hardwired pipeline 30 typically cannot execute all algorithms, particularly those that entail significant decision making. A processor can typically execute a decision-making instruction (e.g., conditional instructions such as “if A, then go to B, else go to C”) approximately as fast as it can execute an operational instruction (e.g., “A+B”) of comparable length. But although the pipeline 30 may be able to make a relatively simple decision (e.g., “A>B?”), it typically cannot execute a relatively complex decision (e.g., “if A, then go to B, else go to C”). And although one may be able to design the pipeline 30 to execute such a complex decision, the size and complexity of the required circuitry often makes such a design impractical, particularly where an algorithm includes multiple different complex decisions.
Consequently, processors are typically used in applications that require significant decision making, and hardwired pipelines are typically limited to “number crunching” applications that entail little or no decision making.
Furthermore, as discussed below, it is typically much easier for one to design/modify a processor-based computing machine, such as the computing machine 10 of FIG. 1, than it is to design/modify a hardwired pipeline such as the pipeline 30 of FIG. 2, particularly where industry-standard communication interfaces are used.
Computing components, such as processors and their peripherals (e.g., memory), typically include industry-standard communication interfaces that facilitate the interconnection of the components to form a processor-based computing machine.
Typically, a standard communication interface includes two layers: a physical layer and a service layer.
The physical layer includes the circuitry and the corresponding circuit interconnections that form the interface and the operating parameters of this circuitry. For example, the physical layer includes the pins that connect the component to a bus, the buffers that latch data received from the pins, and the drivers that drive data onto the pins. The operating parameters include the acceptable voltage range of the data signals that the pins receive, the signal timing for writing and reading data, and the supported modes of operation (e.g., burst mode, page mode). Conventional physical layers include transistor-transistor logic (TTL) and RAMBUS.
The service layer includes the protocol by which a computing component transfers data. The protocol defines the format of the data and the manner in which the component sends and receives the formatted data. Conventional communication protocols include the file-transfer protocol (FTP) and the transmission-control protocol/internet protocol (TCP/IP).
Consequently, because manufacturers and others typically design computing components having industry-standard communication interfaces, one can typically design the interface of such a component and interconnect it to other computing components with relatively little effort. This allows one to devote most of one's time to designing the other portions of the computing machine, and to easily modify the machine by adding or removing components.
Designing a computing component that supports an industry-standard communication interface allows one to save design time by using an existing physical-layer design from a design library. This also ensures that one can easily interface the component to off-the-shelf computing components.
And designing a computing machine using computing components that support a common industry-standard communication interface allows the designer to interconnect the components with little time and effort. Because the components support a common interface, the designer can interconnect them via a system bus with little design effort. And because the supported interface is an industry standard, one can easily modify the machine. For example, one can add different components and peripherals to the machine as the system design evolves, or can easily add/design next-generation components as the technology evolves. Furthermore, because the components support a common industry-standard service layer, one can incorporate into the computing machine's software an existing software module that implements the corresponding protocol. Therefore, one can interface the components with little effort because the interface design is essentially already in place, and thus can focus on designing the portions (e.g., software) of the machine that cause the machine to perform the desired function(s).
But unfortunately, there are no known industry-standard communication interfaces for components, such as PLICs, used to form hardwired pipelines such as the pipeline 30 of FIG. 2.
Consequently, to design a pipeline having multiple PLICs, one typically spends a significant amount of time and exerts a significant effort designing and debugging the communication interface between the PLICs “from scratch.” Typically, such an ad hoc communication interface depends on the parameters of the data being transferred between the PLICs. Likewise, to design a pipeline that interfaces to a processor, one would have to spend a significant amount of time and exert a significant effort in designing and debugging the communication interface between the pipeline and the processor from scratch.
Similarly, to modify such a pipeline by adding a PLIC to it, one typically spends a significant amount of time and exerts a significant effort designing and debugging the communication interface between the added PLIC and the existing PLICs. Likewise, to modify a pipeline by adding a processor, or to modify a computing machine by adding a pipeline, one would have to spend a significant amount of time and exert a significant effort in designing and debugging the communication interface between the pipeline and processor.
Consequently, referring to FIGS. 1 and 2, the lack of an industry-standard communication interface significantly increases the time and effort needed to design and modify a hardwired-pipeline-based machine, such as one incorporating the pipeline 30, as compared to a processor-based machine such as the computing machine 10.
Therefore, a need has arisen for a new computing architecture that allows one to combine the decision-making ability of a processor-based machine with the number-crunching speed of a hardwired-pipeline-based machine.
In an embodiment of the invention, a computing machine includes a first buffer and a processor coupled to the buffer. The processor is operable to execute an application, a first data-transfer object, and a second data-transfer object, publish data under the control of the application, load the published data into the buffer under the control of the first data-transfer object, and retrieve the published data from the buffer under the control of the second data-transfer object.
According to another embodiment of the invention, the processor is operable to retrieve data and load the retrieved data into the buffer under the control of the first data-transfer object, unload the data from the buffer under the control of the second data-transfer object, and process the unloaded data under the control of the application.
Where the computing machine is a peer-vector machine that includes a hardwired pipeline accelerator coupled to the processor, the buffer and data-transfer objects facilitate the transfer of data—whether unidirectional or bidirectional—between the application and the accelerator.
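A minimal software sketch of this arrangement follows, assuming illustrative class names (`Buffer`, `WriterObject`, `ReaderObject`) that are not taken from the application; it shows one object loading published data into a shared buffer and a second object retrieving it.

```python
class Buffer:
    """A shared buffer, analogous to one defined within an interface memory."""
    def __init__(self):
        self._slot = None
    def load(self, data):
        self._slot = data
    def unload(self):
        data, self._slot = self._slot, None
        return data

class WriterObject:
    """First data-transfer object: loads published data into the buffer."""
    def __init__(self, buf):
        self.buf = buf
    def publish(self, data):
        self.buf.load(data)

class ReaderObject:
    """Second data-transfer object: retrieves the published data."""
    def __init__(self, buf):
        self.buf = buf
    def retrieve(self):
        return self.buf.unload()

buf = Buffer()
writer, reader = WriterObject(buf), ReaderObject(buf)
writer.publish({"payload": [1, 2, 3]})   # the application publishes data
print(reader.retrieve())                 # the second object retrieves it
```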
Still referring to FIG. 3, the peer-vector machine 40 includes a host processor 42, a pipeline accelerator 44, a processor memory 46, an interface memory 48, a pipeline bus 50, a firmware memory 52, and a port 54.
The host processor 42 includes a processing unit 62 and a message handler 64, and the processor memory 46 includes a processing-unit memory 66 and a handler memory 68, which respectively serve as both program and working memories for the processing unit 62 and the message handler 64. The processor memory 46 also includes an accelerator-configuration registry 70 and a message-configuration registry 72, which store respective configuration data that allow the host processor 42 to configure the functioning of the accelerator 44 and the structure of the messages that the message handler 64 sends and receives.
The pipeline accelerator 44 is disposed on at least one PLIC (not shown) and includes hardwired pipelines 74_1-74_n, which process respective data without executing program instructions. The firmware memory 52 stores the configuration firmware for the accelerator 44. If the accelerator 44 is disposed on multiple PLICs, these PLICs and their respective firmware memories may be disposed on multiple circuit boards, i.e., daughter cards (not shown). The accelerator 44 and daughter cards are discussed further in previously cited U.S. patent application Ser. Nos. 10/683,929 entitled PIPELINE ACCELERATOR FOR IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD and 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD. Alternatively, the accelerator 44 may be disposed on at least one ASIC, and thus may have internal interconnections that are unconfigurable. In this alternative, the machine 40 may omit the firmware memory 52. Furthermore, although the accelerator 44 is shown including multiple pipelines 74, it may include only a single pipeline. In addition, although not shown, the accelerator 44 may include one or more processors such as a digital-signal processor (DSP).
The general operation of the peer-vector machine 40 is discussed in previously cited U.S. patent application Ser. No. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD, and the functional topology and operation of the host processor 42 are discussed below in conjunction with FIG. 4.
Still referring to FIG. 3, the host processor 42 and the pipeline accelerator 44 transfer data to one another via the pipeline bus 50.
Furthermore, during initialization of the peer-vector machine 40 (FIG. 3), the host processor 42 sets the hard configuration of the pipeline accelerator 44 by loading configuration firmware into the firmware memory 52, as discussed below in conjunction with the configuration manager 84.
The operation of the host processor 42 of FIG. 3 is discussed below in conjunction with FIG. 4.
Data Processing
The data-processing application 80 includes a number of threads 100_1-100_n, which each perform a respective data-processing operation. For example, the thread 100_1 may perform an addition, and the thread 100_2 may perform a subtraction, or both the threads 100_1 and 100_2 may perform an addition.
Each thread 100 generates, i.e., publishes, data destined for the pipeline accelerator 44 (FIG. 3), and/or receives, i.e., subscribes to, data from the accelerator.
Still referring to FIG. 4, the threads 100 send data to, and receive data from, the pipeline accelerator 44 (FIG. 3) via the message handler 64.
Referring to FIG. 4, the message handler 64 includes data-transfer objects 86, a communication object 88, an input reader object 90, an output reader object 92, an input queue object 94, an output queue object 96, and an object factory 98, and the interface memory 48 defines buffers 106, each of which forms part of a respective data channel 104 between the data-processing application 80 and the pipeline accelerator 44 (FIG. 3).
During initialization of the host processor 42, the object factory 98 instantiates the data-transfer objects 86 and defines the buffers 106. Specifically, the object factory 98 downloads the configuration data from the registry 72 and generates the software code for each data-transfer object 86_xb that the data-processing application 80 may need. The identity of the data-transfer objects 86_xb that the application 80 may need is typically part of the configuration data; the application 80, however, need not use all of the data-transfer objects 86. Then, from the generated objects 86_xb, the object factory 98 respectively instantiates the data-transfer objects 86_xa. Typically, as discussed in the example below, the object factory 98 instantiates data-transfer objects 86_xa and 86_xb that access the same buffer 106 as multiple instances of the same software code. This reduces the amount of code that the object factory 98 would otherwise generate by approximately one half. Furthermore, the message handler 64 may determine which, if any, data-transfer objects 86 the application 80 does not need, and delete the instances of these unneeded data-transfer objects to save memory. Alternatively, the message handler 64 may make this determination before the object factory 98 generates the data-transfer objects 86, and cause the object factory to instantiate only the data-transfer objects that the application 80 needs. In addition, because the data-transfer objects 86 include the addresses of the interface memory 48 where the respective buffers 106 are located, the object factory 98 effectively defines the sizes and locations of the buffers when it instantiates the data-transfer objects.
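For illustration, the following Python sketch shows one way such a factory might work; the names `DataTransferObject`, `ObjectFactory`, and `build_channels` are assumptions for this sketch, not terms from the application. Both ends of each channel are instances of the same class, mirroring the code-halving observation above.

```python
class DataTransferObject:
    """Common software code for both ends of a channel; the paired
    86_xa and 86_xb instances share one buffer."""
    def __init__(self, buffer_, role):
        self.buffer = buffer_
        self.role = role                 # "a" (application side) or "b"
    def load(self, data):
        self.buffer.append(data)
    def unload(self):
        return self.buffer.pop(0)

class ObjectFactory:
    def __init__(self, registry):
        self.registry = registry         # configuration data: channel names
    def build_channels(self):
        channels = {}
        for name in self.registry:
            buffer_ = []                 # the factory defines the buffer
            channels[name] = (DataTransferObject(buffer_, "a"),
                              DataTransferObject(buffer_, "b"))
        return channels

factory = ObjectFactory(["channel_1", "channel_2"])
channels = factory.build_channels()
obj_a, obj_b = channels["channel_1"]
obj_a.load("published data")             # one instance loads the buffer
print(obj_b.unload())                    # its twin retrieves the data
```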
For example, the object factory 98 instantiates the data-transfer objects 86_1a and 86_1b in the following manner. First, the factory 98 downloads the configuration data from the registry 72 and generates the common software code for the data-transfer objects 86_1a and 86_1b. Next, the factory 98 instantiates the data-transfer objects 86_1a and 86_1b as respective instances of the common software code. That is, the message handler 64 effectively copies the common software code to two locations of the handler memory 68 or of other program memory (not shown), and executes the code at one location as the object 86_1a and the code at the other location as the object 86_1b.
Still referring to FIG. 4, the transfer of data between the data-processing application 80 and the pipeline accelerator 44 (FIG. 3) is discussed by way of the following examples.
An example of the data-processing application 80 sending data to the accelerator 44 is discussed in conjunction with the channel 104_1.
First, the thread 100_1 generates and publishes data to the data-transfer object 86_1a. The thread 100_1 may generate the data by operating on raw data that it receives from the accelerator 44 (further discussed below) or from another source (not shown), such as a sonar array or a database, via the port 54.
Then, the data-transfer object 86_1a loads the published data into the buffer 106_1.
Next, the data-transfer object 86_1b determines that the buffer 106_1 has been loaded with newly published data from the data-transfer object 86_1a. The output reader object 92 may periodically instruct the data-transfer object 86_1b to check the buffer 106_1 for newly published data. Alternatively, the output reader object 92 notifies the data-transfer object 86_1b when the buffer 106_1 has received newly published data. Specifically, the output queue object 96 generates and stores a unique identifier (not shown) in response to the data-transfer object 86_1a storing the published data in the buffer 106_1. In response to this identifier, the output reader object 92 notifies the data-transfer object 86_1b that the buffer 106_1 contains newly published data. Where multiple buffers 106 contain respective newly published data, then the output queue object 96 may record the order in which this data was published, and the output reader object 92 may notify the respective data-transfer objects 86_xb in the same order. Thus, the output reader object 92 and the output queue object 96 synchronize the data transfer by causing the first data published to be the first data that the respective data-transfer object 86_xb sends to the accelerator 44, the second data published to be the second data that the respective data-transfer object 86_xb sends to the accelerator, etc. In another alternative where multiple buffers 106 contain respective newly published data, the output reader and output queue objects 92 and 96 may implement a priority scheme other than, or in addition to, this first-in-first-out scheme. For example, suppose the thread 100_1 publishes first data, and subsequently the thread 100_2 publishes second data but also publishes to the output queue object 96 a priority flag associated with the second data. Because the second data has priority over the first data, the output reader object 92 notifies the data-transfer object 86_2b of the published second data in the buffer 106_2 before notifying the data-transfer object 86_1b of the published first data in the buffer 106_1.
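A compact way to realize this FIFO-plus-priority notification scheme is a priority queue keyed on (priority, arrival order); the sketch below is illustrative only, and the names `OutputQueue`, `note_publish`, and `next_buffer` are assumptions.

```python
import heapq
import itertools

class OutputQueue:
    """Records the order in which buffers receive published data; lower
    priority numbers are served first, and ties fall back to FIFO order."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()
    def note_publish(self, buffer_id, priority=1):
        heapq.heappush(self._heap, (priority, next(self._counter), buffer_id))
    def next_buffer(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = OutputQueue()
q.note_publish("buffer_106_1")              # first data, normal priority
q.note_publish("buffer_106_2", priority=0)  # second data, with priority flag
print(q.next_buffer())   # buffer_106_2: served first despite arriving later
print(q.next_buffer())   # buffer_106_1: FIFO order among equal priorities
```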
Then, the data-transfer object 86_1b retrieves the published data from the buffer 106_1 and formats the data in a predetermined manner. For example, the object 86_1b generates a message that includes the published data (i.e., the payload) and a header that, e.g., identifies the destination of the data within the accelerator 44. This message may have an industry-standard format such as the Rapid IO (input/output) format. Because the generation of such a message is conventional, it is not discussed further.
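The details of any particular standard are beyond this discussion, but the general header-plus-payload idea can be sketched as follows; the 8-byte header layout here (4-byte destination identifier plus 4-byte payload length) is a made-up illustration, not the Rapid IO format.

```python
import json
import struct

def format_message(destination_id: int, payload: bytes) -> bytes:
    """Prefix the payload with a fixed-size header identifying the
    destination within the accelerator (hypothetical layout:
    big-endian 4-byte destination id + 4-byte payload length)."""
    return struct.pack(">II", destination_id, len(payload)) + payload

def parse_message(message: bytes):
    """Recover the destination and payload from a formatted message."""
    destination_id, length = struct.unpack(">II", message[:8])
    return destination_id, message[8:8 + length]

message = format_message(7, json.dumps([1, 2, 3]).encode())
print(parse_message(message))   # (7, b'[1, 2, 3]')
```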
After the data-transfer object 86_1b formats the published data, it sends the formatted data to the communication object 88.
Next, the communication object 88 sends the formatted data to the pipeline accelerator 44 via the bus 50. The communication object 88 is designed to implement the communication protocol (e.g., Rapid IO, TCP/IP) used to transfer data between the host processor 42 and the accelerator 44. For example, the communication object 88 implements the handshaking and other transfer parameters (e.g., arbitrating the sending and receiving of messages on the bus 50) that the protocol requires. Alternatively, the data-transfer objects 86_xb can implement the communication protocol, and the communication object 88 can be omitted. However, this latter alternative is less efficient because it requires all of the data-transfer objects 86_xb to include additional code and functionality.
The pipeline accelerator 44 then receives the formatted data, recovers the data from the message (e.g., separates the data from the header if there is a header), directs the data to the proper destination within the accelerator, and processes the data.
Still referring to FIGS. 3 and 4, an example of the pipeline accelerator 44 sending data to the data-processing application 80 is discussed in conjunction with the channel 104_2.
First, the pipeline accelerator 44 generates and formats data. For example, the accelerator 44 generates a message that includes the data payload and a header that, e.g., identifies the destination threads 100_1 and 100_2, which are the threads that are to receive and process the data. As discussed above, this message may have an industry-standard format such as the Rapid IO (input/output) format.
Next, the accelerator 44 drives the formatted data onto the bus 50 in a conventional manner.
Then, the communication object 88 receives the formatted data from the bus 50 and provides the formatted data to the data-transfer object 86_2b. In one embodiment, the formatted data is in the form of a message, and the communication object 88 analyzes the message header (which, as discussed above, identifies the destination threads 100_1 and 100_2) and provides the message to the data-transfer object 86_2b in response to the header. In another embodiment, the communication object 88 provides the message to all of the data-transfer objects 86_xb, each of which analyzes the message header and processes the message only if its function is to provide data to the destination threads 100_1 and 100_2. Consequently, in this example, only the data-transfer object 86_2b processes the message.
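The two embodiments correspond to two familiar dispatch strategies, sketched below with illustrative names (`dispatch_by_header`, `broadcast`, and the handler table are assumptions): centralized routing by the communication object, and broadcast with per-object filtering.

```python
# A handler table standing in for the data-transfer objects 86_xb.
handlers = {
    "threads_100_1_and_100_2": lambda d: print("object 86_2b handles", d),
    "thread_100_3":            lambda d: print("object 86_3b handles", d),
}

def dispatch_by_header(message):
    """First embodiment: the communication object analyzes the header
    and routes the message to exactly one data-transfer object."""
    handlers[message["dest"]](message["data"])

def broadcast(message):
    """Second embodiment: every object sees the message, and each one
    checks the header itself and keeps only its own traffic."""
    for dest, handler in handlers.items():
        if dest == message["dest"]:
            handler(message["data"])

dispatch_by_header({"dest": "threads_100_1_and_100_2", "data": [4, 5]})
broadcast({"dest": "threads_100_1_and_100_2", "data": [4, 5]})
```

Centralized routing concentrates the header analysis in one place, whereas broadcasting duplicates that analysis in every object; which is preferable depends on how many objects exist and how often messages arrive.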
Next, the data-transfer object 86_2b loads the data received from the communication object 88 into the buffer 106_2. For example, if the data is contained within a message payload, the data-transfer object 86_2b recovers the data from the message (e.g., by stripping the header) and loads the recovered data into the buffer 106_2.
Then, the data-transfer object 86_2a determines that the buffer 106_2 has received new data from the data-transfer object 86_2b. The input reader object 90 may periodically instruct the data-transfer object 86_2a to check the buffer 106_2 for newly received data. Alternatively, the input reader object 90 notifies the data-transfer object 86_2a when the buffer 106_2 has received newly published data. Specifically, the input queue object 94 generates and stores a unique identifier (not shown) in response to the data-transfer object 86_2b storing the published data in the buffer 106_2. In response to this identifier, the input reader object 90 notifies the data-transfer object 86_2a that the buffer 106_2 contains newly published data. As discussed above in conjunction with the output reader and output queue objects 92 and 96, where multiple buffers 106 contain respective newly published data, then the input queue object 94 may record the order in which this data was published, and the input reader object 90 may notify the respective data-transfer objects 86_xa in the same order. Alternatively, where multiple buffers 106 contain respective newly published data, the input reader and input queue objects 90 and 94 may implement a priority scheme other than, or in addition to, this first-in-first-out scheme.
Next, the data-transfer object 86_2a transfers the data from the buffer 106_2 to the subscriber threads 100_1 and 100_2, which perform respective operations on the data.
Referring to FIG. 4, the threads 100 may also publish data to one another; for example, the thread 100_3 may publish data to the thread 100_4 in either of the following two ways.
In one embodiment, the thread 100_3 publishes the data directly to the thread 100_4 via the optional connection (dashed line) 102.
In another embodiment, the thread 100_3 publishes the data to the thread 100_4 via the channels 104_5 and 104_6. Specifically, the data-transfer object 86_5a loads the published data into the buffer 106_5. Next, the data-transfer object 86_5b retrieves the data from the buffer 106_5 and transfers the data to the communication object 88, which publishes the data to the data-transfer object 86_6b. Then, the data-transfer object 86_6b loads the data into the buffer 106_6. Next, the data-transfer object 86_6a transfers the data from the buffer 106_6 to the thread 100_4. Alternatively, because the data is not being transferred via the bus 50, one may modify the data-transfer object 86_5b such that it loads the data directly into the buffer 106_6, thus bypassing the communication object 88 and the data-transfer object 86_6b. But modifying the data-transfer object 86_5b so that it differs from the other data-transfer objects 86 may increase the complexity and reduce the modularity of the message handler 64.
Still referring to FIG. 4, the operation of the exception manager 82 is discussed next.
The exception manager 82 receives and logs exceptions that may occur during the initialization or operation of the pipeline accelerator 44 (FIG. 3).
The exception manager 82 may also handle exceptions that occur during the initialization or operation of the pipeline accelerator 44 (FIG. 3), as discussed below.
To log and/or handle accelerator exceptions, the exception manager 82 subscribes to data from one or more subscriber threads 100 (FIG. 4).
In one alternative, the exception manager 82 subscribes to the same data as the subscriber threads 100 (FIG. 4).
In another alternative, the exception manager 82 subscribes to data from dedicated channels (not shown), which may receive data from sections of the accelerator 44 (FIG. 3).
To determine whether an exception has occurred, the exception manager 82 compares the data to exception codes stored in a registry (not shown) within the memory 66 (FIG. 3). If the data matches one of the stored exception codes, then the exception manager 82 determines that the corresponding exception has occurred.
In another alternative, the exception manager 82 analyzes the data to determine if an exception has occurred. For example, the data may represent the result of an operation performed by the accelerator 44. The exception manager 82 determines whether the data contains an error, and, if so, determines that an exception has occurred and the identity of the exception.
After determining that an exception has occurred, the exception manager 82 logs, e.g., the corresponding exception code and the time of occurrence for later use, such as during a debug of the accelerator 44. The exception manager 82 may also determine and convey the identity of the exception to, e.g., the system designer, in a conventional manner.
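A minimal sketch of this detect-and-log step follows; the registry contents, the names `EXCEPTION_CODES` and `check_for_exception`, and the use of a wall-clock timestamp are all assumptions for illustration.

```python
import time

# A registry of exception codes, standing in for the registry in memory 66.
EXCEPTION_CODES = {0x01: "buffer overflow", 0x02: "section malfunction"}
exception_log = []

def check_for_exception(data: int):
    """Compare incoming data against the exception-code registry and log
    any match, with a timestamp, for later debugging."""
    if data in EXCEPTION_CODES:
        entry = (time.time(), data, EXCEPTION_CODES[data])
        exception_log.append(entry)
        return entry
    return None

print(check_for_exception(0x01))   # matches: logged as a buffer overflow
print(check_for_exception(0x7F))   # no matching code: returns None
```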
Alternatively, in addition to logging the exception, the exception manager 82 may implement an appropriate procedure for handling the exception. For example, the exception manager 82 may handle the exception by sending an exception-handling instruction to the accelerator 44, the data-processing application 80, or the configuration manager 84. The exception manager 82 may send the exception-handling instruction to the accelerator 44 either via the same respective channels 104_p (e.g., the channel 104_1 of FIG. 4) through which the application 80 sends data to the accelerator, or via other channels (not shown).
Still referring to FIG. 4, examples of the exception manager 82 handling an exception are discussed below.
When sent to the accelerator 44, the exception-handling instruction may change the soft configuration or the functioning of the accelerator. For example, as discussed above, if the exception is a buffer overflow, the instruction may change the accelerator's soft configuration (i.e., by changing the contents of a soft-configuration register) to increase the size of the buffer. Or, if a section of the accelerator 44 that performs a particular operation is malfunctioning, the instruction may change the accelerator's functioning by causing the accelerator to take the malfunctioning section "off line." In this latter case, the exception manager 82 may, via additional instructions, cause another section of the accelerator 44, or the data-processing application 80, to "take over" the operation from the disabled accelerator section as discussed below. Altering the soft configuration of the accelerator 44 is further discussed in previously cited U.S. patent application Ser. No. 10/683,929 entitled PIPELINE ACCELERATOR FOR IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD.
When sent to the data-processing application 80, the exception-handling instructions may cause the data-processing application to "take over" the operation of a disabled section of the accelerator 44 that has been taken off line. Although the processing unit 62 (FIG. 3) may perform the operation more slowly than the accelerator 44 would, taking over the operation allows the peer-vector machine 40 to continue processing data.
And when sent to the configuration manager 84, the exception-handling instruction may cause the configuration manager to change the hard configuration of the accelerator 44 so that the accelerator can continue to perform the operation of a malfunctioning section that has been taken off line. For example, if the accelerator 44 has an unused section, then the configuration manager 84 may configure this unused section to perform the operation of the malfunctioning section. If the accelerator 44 has no unused section, then the configuration manager 84 may reconfigure a section of the accelerator that currently performs a first operation to instead perform the second operation of, i.e., to take over for, the malfunctioning section. This technique may be useful where the first operation can be omitted but the second operation cannot, or where the data-processing application 80 is more suited to perform the first operation than it is the second operation. This ability to shift the performance of an operation from one section of the accelerator 44 to another section of the accelerator increases the flexibility, reliability, maintainability, and fault tolerance of the peer-vector machine 40 (FIG. 3).
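The reassignment logic can be sketched in a few lines; the section names, the operations, and the rule for choosing which operation to displace are illustrative assumptions.

```python
# Sections of the accelerator and the operation each currently performs
# (None marks an unused section).
sections = {"section_A": "FFT", "section_B": "filter", "section_C": None}

def take_over(malfunctioning: str) -> dict:
    """Take the malfunctioning section off line and move its operation to
    an unused section if one exists; otherwise displace an operation that
    can be omitted (here, 'filter' plays that role)."""
    operation = sections.pop(malfunctioning)     # take the section off line
    spare = next((s for s, op in sections.items() if op is None), None)
    if spare is not None:
        sections[spare] = operation              # use the unused section
    else:
        victim = next(s for s, op in sections.items() if op == "filter")
        sections[victim] = operation             # reconfigure: omit 'filter'
    return sections

print(take_over("section_A"))   # {'section_B': 'filter', 'section_C': 'FFT'}
```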
Referring to FIG. 4, the operation of the configuration manager 84 is discussed below.
During initialization of the peer-vector machine 40, the configuration manager 84 receives configuration data from the accelerator configuration registry 70, and loads configuration firmware identified by the configuration data. The configuration data are effectively instructions to the configuration manager 84 for loading the firmware. For example, if a section of the initialized accelerator 44 performs an FFT, then one designs the configuration data so that the firmware loaded by the manager 84 implements an FFT in this section of the accelerator. Consequently, one can modify the hard configuration of the accelerator 44 by merely generating or modifying the configuration data before initialization of the peer-vector machine 40. Because generating and modifying the configuration data is often easier than generating and modifying the firmware directly—particularly if the configuration data can instruct the configuration manager 84 to load existing firmware from a library—the configuration manager 84 typically reduces the complexity of designing and modifying the accelerator 44.
Before the configuration manager 84 loads the firmware identified by the configuration data, the configuration manager determines whether the accelerator 44 can support the configuration defined by the configuration data. For example, if the configuration data instructs the configuration manager 84 to load firmware for a particular PLIC (not shown) of the accelerator 44, then the configuration manager 84 confirms that the PLIC is present before loading the firmware. If the PLIC is not present, then the configuration manager 84 halts the initialization of the accelerator 44 and notifies an operator that the accelerator does not support the configuration.
After the configuration manager 84 confirms that the accelerator supports the defined configuration, the configuration manager loads the firmware into the accelerator 44, which sets its hard configuration with the firmware, e.g., by loading the firmware into the firmware memory 52. Typically, the configuration manager 84 sends the firmware to the accelerator 44 via one or more channels 104_t that are similar in generation, structure, and operation to the channels 104 of FIG. 4.
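The validate-then-load sequence might look as follows; the configuration-data layout, the firmware-library contents, and the names `CONFIG_DATA` and `initialize_accelerator` are assumptions for this sketch.

```python
# Hypothetical configuration data: which firmware image from a library
# to load into which PLIC of the accelerator.
CONFIG_DATA = [("plic_0", "fft_v2.bit"), ("plic_1", "fir_filter.bit")]
PRESENT_PLICS = {"plic_0", "plic_1"}          # hardware actually installed
FIRMWARE_LIBRARY = {"fft_v2.bit": b"...", "fir_filter.bit": b"..."}

def initialize_accelerator() -> None:
    """First confirm the accelerator supports the defined configuration,
    then load the firmware that sets its hard configuration."""
    for plic, _ in CONFIG_DATA:                # validation pass
        if plic not in PRESENT_PLICS:
            raise RuntimeError(f"{plic} absent: configuration unsupported")
    for plic, image in CONFIG_DATA:            # loading pass
        firmware = FIRMWARE_LIBRARY[image]
        print(f"loading {len(firmware)}-byte image {image} into {plic}")
        # here the firmware would be written into the firmware memory 52

initialize_accelerator()
```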
After the hard configuration of the accelerator 44 is set, the configuration manager 84 may modify the accelerator's hard configuration in response to an exception-handling instruction from the exception manager 82, as discussed above in conjunction with FIG. 4.
The configuration manager 84 may also reconfigure the data-processing application 80 in response to an exception-handling instruction from the exception manager 82, as discussed above in conjunction with FIG. 4.
Still referring to FIG. 4, other embodiments of the host processor 42 are contemplated.
The preceding discussion is presented to enable a person skilled in the art to make and use the invention. Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
This application claims priority to U.S. Provisional Application Ser. No. 60/422,503, filed on Oct. 31, 2002, which is incorporated by reference.