Pipeline configuration protocol and configuration unit communication

Information

  • Patent Grant
  • 8468329
  • Patent Number
    8,468,329
  • Date Filed
    Friday, June 8, 2012
  • Date Issued
    Tuesday, June 18, 2013
Abstract
In a method of synchronizing data processing of a processor arrangement, responsive to reaching, during execution of a program, a barrier included in a program sequence, the processor arrangement halts the program execution until it is determined that all instructions preceding the barrier in the program sequence have been successfully scheduled for execution.
Description

Example embodiments of the present invention include methods which permit efficient configuration and reconfiguration of one or more reconfigurable subassemblies by one or more configuration units (CT) at high frequencies. An efficient and synchronized network may be created to control multiple CTs.


A subassembly or cell may include conventional FPGA cells, bus systems, memories, peripherals and ALUs as well as arithmetic units of processors. A subassembly may be any type of configurable and reconfigurable element. For parallel computer systems, a subassembly may be a complete node of any function, e.g., the arithmetic, memory or data transmission functions.


The example method described here may be used in particular for integrated modules having a plurality of subassemblies in a one-dimensional or multidimensional arrangement, interconnected directly or through a bus system.


“Modules” may include systolic arrays, neural networks, multiprocessor systems, processors having multiple arithmetic units and logic cells, as well as known modules of the types FPGA, DPGA, XPUTER, etc.


For example, modules of an architecture whose arithmetic units and bus systems are freely configurable are used. An example architecture has been described in German Patent 4416881 as well as PACT02, PACT08, PACT10, PACT13. This architecture is referred to below as VPU. This architecture may include any desired arithmetic cells, logic cells (including memories) or communicative (IO) cells (“PAEs”), which may be arranged in a one-dimensional or multidimensional matrix “processing array” or “PA”. The matrix may have different cells of any design. The bus systems may also have a cellular structure. The matrix as a whole or parts thereof may be assigned a configuration unit (CT) which influences the interconnections and function of the PA.


A special property of VPUs is the automatic and deadlock-free reconfiguration at run time. Protocols and methods required for this have been described in PACT04, 05, 08, 10 and 13, the full content of which is included here through this reference. The publication numbers for these internal file numbers can be found in the addendum.







DESCRIPTION OF THE EXAMPLE EMBODIMENTS
Example Initial States of PAEs and Bus Protocol of the Configuration

Each PAE may be allocated states that may influence configurability. These states may be locally coded or may be managed through one or more switch groups, in particular the CT itself. A PAE may have at least two states:


“Not configured”—In this state, the PAE is inactive and is not processing any data and/or triggers. The PAE does not receive any data and/or triggers, nor does it generate any data and/or triggers. Only data and/or triggers relevant to the configuration may be received and/or processed. The PAE is completely neutral and may be configured. Registers for the data and/or triggers to be processed may be initialized, e.g., by the CT.


“Configured”—The function and interconnection of the PAE is configured. The PAE may process and generate data and/or triggers to be processed. Such states may also be present repeatedly, largely independently of one another, in independent parts of a PAE.


It will be appreciated that there may be a separation between data and/or triggers for processing on the one hand and data and/or triggers for configuration of one or more cells on the other hand.


During configuration, the CT may send, together with a valid configuration word (KW), a signal indicating the configuration word's validity (RDY). This signal may be omitted if validity is ensured by some other means, e.g., in the case of continuous transmission or by a code in the KW. In addition, the address of the PAE to be configured may be coded in a KW.


According to the criteria described below and in the patent applications referenced, a PAE may decide whether it can accept the KW and alter its configuration or whether data processing must not be interrupted or corrupted by a new configuration. Information regarding whether or not configurations are accepted may be relayed to the CT if the decision has not already been made there. The following protocol may be used: If a PAE accepts the configuration, it sends an acknowledgment ACK to the CT. If the configuration is rejected, a PAE will indicate this by sending REJ (reject) to the CT.
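
A rough sketch of this handshake, expressed in Python for illustration only, is given below; the class and method names (Pae, ConfigurationUnit, ConfigWord) are not part of the protocol described here and merely model a CT sending a KW and a PAE answering with ACK or REJ.

```python
# Minimal sketch of the KW -> ACK/REJ handshake described above.
# All names (Pae, ConfigurationUnit, ConfigWord) are illustrative only.

from dataclasses import dataclass

ACK, REJ = "ACK", "REJ"

@dataclass
class ConfigWord:
    pae_address: int          # address of the PAE to be configured
    payload: int              # configuration data

class Pae:
    def __init__(self, address):
        self.address = address
        self.state = "not configured"

    def receive(self, kw, rdy=True):
        """Accept a KW only if it is marked valid (RDY) and the PAE is free."""
        if not rdy:
            return None                   # no valid configuration word on the bus
        if self.state == "not configured":
            self.state = "configured"     # accept and configure
            return ACK
        return REJ                        # busy: data processing must not be disturbed

class ConfigurationUnit:
    def configure(self, pae, kw):
        return pae.receive(kw, rdy=True)  # CT records ACK/REJ (e.g., for the FILMO)

ct, pae = ConfigurationUnit(), Pae(address=7)
print(ct.configure(pae, ConfigWord(7, 0x1234)))   # ACK on the first attempt
print(ct.configure(pae, ConfigWord(7, 0x5678)))   # REJ while still configured
```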


Within the data processing elements (PAEs), a decision may be made by one or more of the elements regarding whether they can be reconfigured because the data processing is concluded or whether they are still processing data. In addition, no data is corrupted due to unconfigured PAEs.


Example Approach to Deadlock Freedom and Correctness of the Data: The FILMO Principle


Efficient management of a plurality of configurations, each of which may be composed of one or more KWs and possibly additional control commands, may be provided. The plurality of configurations may be configured overlappingly on the PA. When there is a great distance between the CT and the cell(s) to be configured, this may be a disadvantage in the transmission of configurations. It will be appreciated that no data or states are corrupted due to a reconfiguration. To ensure this, the following rules, which are called the FILMO principle, may be defined:

  • a) PAEs which are currently processing data are not reconfigured. A reconfiguration should take place only when data processing is completely concluded or it is certain that no further data processing is necessary. (Reconfiguration of PAEs, which are currently processing data or are waiting for outstanding data may lead to faulty calculation or loss of data.)
  • b) The status of a PAE should not change from “configured” to “not configured” during a FILMO run. In addition to the method described in PACT10, a special additional method which allows exceptions (explicit/implicit LOCK) is described below. A SubConf is a quantity of configuration words to be configured jointly into the cell array at a given time or for a given purpose. A situation may occur where two different SubConfs (A, D) are supposed to share the same resources, e.g., a PAE X. For example, SubConf A may chronologically precede SubConf D. SubConf A must therefore occupy the resources before SubConf D. If PAE X is still “configured” at the configuration time of SubConf A, but its status changes to “not configured” before the configuration of SubConf D, then a deadlock situation may occur if no special measures are taken. In such a deadlock, SubConf D occupies PAE X, so SubConf A can no longer configure PAE X, while SubConf D can no longer configure the remaining resources, which are already occupied by SubConf A. Neither SubConf A nor SubConf D can be executed. A deadlock would occur.
  • c) A SubConf should have either successfully configured or allocated all the PAEs belonging to it, or it should have received a reject (REJ), before the following SubConf is configured. However, this is true only if the two configurations share the same resources entirely or in part. If there is no resource conflict, the two SubConfs may be configured independently of one another. Even if PAEs reject a configuration (REJ) for a SubConf, the configuration of the following SubConfs is still performed. Since the status of PAEs does not change during a FILMO run (LOCK, according to section b), this ensures that no PAEs which would have been required by the preceding configuration can be configured by the following configuration. It will be appreciated that a deadlock could occur if a SubConf which is to be configured later were to allocate PAEs needed by a SubConf which is to be configured earlier, e.g., because no SubConf could be configured completely.
  • d) Within one SubConf, it may be necessary for certain PAEs to be configured or started in a certain sequence. For example, a PAE may be switched to a bus only after the bus has also been configured for the SubConf. Switching to a different bus may lead to processing of false data.
  • e) In the case of certain algorithms, the sequence in the configuration of SubConf may need to correspond exactly to the sequence of triggers arriving at the CT. For example, if the trigger which initiates the configuration of SubConf 1 arrives before the trigger which initiates the configuration of SubConf 3, then SubConf 1 must be configured completely before SubConf 3 may be configured. If the order of triggers were reversed, this could lead to a defective sequence of subgraphs, depending on the algorithm (see PACT13).


Methods which meet most or all of the requirements listed above are described in PACT05 and PACT10.


Management of the configurations, their timing and arrangement, and the design of the respective components, e.g., the configuration registers, may be used to provide the technique described here; possible improvements over the known related art are described below.


To ensure that requirement e) is met as needed, the triggers received, which pertain to the status of a SubConf and a cell and/or reconfigurability, may be stored in the correct sequence by way of a simple FIFO, e.g., a FIFO allocated to the CT. Each FIFO entry includes the triggers received in one clock cycle. All the triggers received in one clock cycle may be stored. If there are no triggers, no FIFO entry is generated. The CT may process the FIFO in the sequence in which the triggers were received. If one entry contains multiple triggers, the CT may first process each trigger individually, optionally either (i) prioritized or (ii) unprioritized, before processing the next FIFO entry. Since a trigger is usually sent to the CT only once per configuration, it may be sufficient to define the maximum depth of the FIFO relative to the quantity of all trigger lines wired to the CT. As an alternative method, a time stamp protocol as described in PACT18 may also be used.
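
The following sketch, offered for illustration only, models such a trigger FIFO: all triggers of one clock cycle form one entry, empty cycles produce no entry, and the CT drains the entries in arrival order. The names used are hypothetical.

```python
# Illustrative sketch of the per-clock-cycle trigger FIFO described above.
from collections import deque

class TriggerFifo:
    # The maximum depth may be chosen relative to the number of trigger lines
    # wired to the CT, since a trigger is usually sent only once per configuration.
    def __init__(self):
        self.entries = deque()

    def clock(self, triggers_this_cycle):
        """Store all triggers received in one clock cycle as a single entry."""
        if triggers_this_cycle:              # no entry is generated for empty cycles
            self.entries.append(list(triggers_this_cycle))

    def drain(self, handle_trigger):
        """Process entries in arrival order, each trigger individually."""
        while self.entries:
            for trigger in self.entries.popleft():
                handle_trigger(trigger)

fifo = TriggerFifo()
fifo.clock([3])        # cycle 1: trigger 3 arrives
fifo.clock([])         # cycle 2: nothing arrives, no entry
fifo.clock([1, 7])     # cycle 3: two triggers arrive in the same cycle
fifo.drain(lambda t: print("configure the SubConf referenced by trigger", t))
```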


Two basic types of FILMO are described in PACT10:


Separate FILMO: The FILMO may be designed as a separate memory and may be separated from the normal CT memory which caches the SubConf. Only KWs that could not be configured in the PA are copied to the FILMO.


Integrated FILMO: The FILMO may be integrated into the CT memory. KWs that could not be configured are managed by using flags and pointers.


Example methods, according to the present invention, may be applied to both types of FILMO or to one type.


2.2. Example Differential Reconfiguration


With many algorithms, it may be advisable to make only minimal changes in the configuration during operation, on the basis of certain events represented by triggers or by time tuning, without completely deleting the configuration of the PAEs. This may apply to the wiring of the bus systems or to certain constants. For example, if only one constant is to be changed, it may be advisable to be able to write a KW to the respective PAE without the PAE being in an “unconfigured” state, reducing the amount of configuration data to be transferred. This may be achieved with a “differential reconfiguration” configuration mode, where the KW contains the information “DIFFERENTIAL” either in encoded form or explicitly when the KW is written. “DIFFERENTIAL” indicates that the KW is to be sent to a PAE that has already been configured. The acceptance of the differential configuration and the acknowledgment may be inverted from the normal configuration; e.g., a configured PAE receives the KW and sends an ACK. An unconfigured PAE rejects the KW and sends REJ because the prerequisite for “DIFFERENTIAL” is a configured PAE.


There may be various approaches to performing a differential reconfiguration. The differential reconfiguration may be forced without regard for the data processing operation actually taking place in a cell. In that case, it is desirable to guarantee accurate synchronization with the data processing, which may be accomplished through appropriate design and layout of the program. To relieve the programmer of this job, however, differential reconfigurability may also be made to depend on other events, e.g., the existence of a certain state in another cell or in the cell that is to be partially reconfigured. It may be advantageous to store the configuration data, e.g., the differential configuration data, in or on the cell, e.g., in a dedicated register. The register contents may be called up, depending on a certain state, and entered into the cell. This may be accomplished, for example, by switching a multiplexer.


The wave reconfiguration methods described below may also be used. A differential configuration may be made dependent on the results (ACK/REJ) of a configuration performed previously in the normal manner. In this case, the differential configuration may be performed only after arrival of ACK for the previous nondifferential configuration.


A variant of synchronization of the differential configuration may be used, depending on how many different differential configurations are needed. The differential configuration is not prestored locally. Instead, on recognition of a certain state, e.g., the end of a data input, a signal may be generated by a first cell, stopping the cell which is to be differentially reconfigured. Such a signal may be a STOP signal. After or simultaneously with stopping data processing in the cell which is to be reconfigured differentially, a signal may be sent to the CT, requesting differential reconfiguration of the stopped cell. This request signal for differential reconfiguration may be generated and sent by the cell which also generates the STOP signal. The CT may then send the data needed for differential reconfiguration to the stopped cell and may trigger the differential reconfiguration. After differential reconfiguration, the STOP mode may be terminated, e.g., by the CT. It will be appreciated that cache techniques may also be used in the differential reconfiguration method.


3. Example Function of Triggers


Triggers may be used in VPU modules to transmit simple information. Examples are listed below. Triggers may be transmitted by any desired bus system (network), e.g., a configurable bus system. The source and target of a trigger may be programmed.


A plurality of triggers may be transmitted simultaneously within a module. In addition to direct transmission from a source to a target, transmission from one source to multiple destinations or from multiple sources to one destination may also be provided.


Trigger transmissions may include:

    • Status information from arithmetic units (ALUs), e.g.,
      • carry
      • division by zero
      • zero
      • negative
      • underflow/overflow
    • Results of comparisons
    • n-bit information (for small n)
    • Interrupt request generated internally or externally
    • Blocking and enable orders
    • Requests for configurations


Triggers may be generated by any cells and may be triggered in the individual cells by events. For example, the status register and/or the flag register may be used by ALUs or processors to generate triggers. Triggers may also be generated by a CT and/or an external unit arranged outside the cell array or the module.


Triggers may be received by any number of cells and may be analyzed in any manner. For example, triggers may be analyzed by a CT or an external unit arranged outside the cell array or the module.


Triggers may be used for synchronization and control of conditional executions and/or sequence controls in the array. Conditional executions and sequence controls may be implemented by sequencers.


3.1. Example Semantics of Triggers


Triggers may be used for actions within PAEs, for example:


STEP: Execute an operation within a PAE upon receipt of the trigger.


GO: Execute operations within a PAE upon receipt of the trigger. The execution is stopped by STOP.


STOP: Stop the execution started with GO; in this regard, see also the preceding discussion of the STOP signal.


LOCAL RESET: Stop the execution and transfer from the “allocated” or “configured” state to the “not configured” state.


WAVE: Stop the execution of operations and load a wave reconfiguration from the CT. In wave reconfiguration, one or more PAEs may be reconfigured after the end of a data packet has run through. Then, the processing of another data packet may take place, e.g., directly after reconfiguration, which may also be performed as a differential reconfiguration.


For example, a first audio data packet may be processed with first filter coefficients; after running through the first audio data packet, a partial reconfiguration may take place, and then a different audio data packet may be processed with a second set of filter coefficients. To do so, the new reconfiguration data, e.g., the second filter coefficients, may be deposited in or at the cell, and the reconfiguration may be prompted automatically on recognition of the end of the first data packet without requiring further intervention of a CT or another external control unit.


Recognition of the end of the first data packet, e.g., the time when the reconfiguration is to be performed, may be accomplished by generating a wave reconfiguration trigger. The trigger may be generated, for example, in a cell which recognizes a data end. Reconfiguration then may run from cell to cell with the trigger as the cells finish processing of the first data packet, comparable to a “wave” running through a soccer stadium.


For example, a single cell may generate the trigger and send it to a first cell to indicate to the first cell that the end of a first packet has been run through. This first cell to be reconfigured, addressed by the wave trigger generating cell, may also relay the wave trigger signal, simultaneously with the results derived from the last data of the first packet, to one or more subsequently processing cells. The wave trigger signal may also be sent or relayed to those cells which are not currently involved in processing the first data packet and/or do not receive any results derived from the last data. Then the first cell to be reconfigured, which is addressed by the wave trigger signal generating cell, is reconfigured and begins processing the data of the second data packet. During this period of time, the subsequent cells may still be processing the first data packet. It should be pointed out that the wave trigger signal generating cell may address not only individual cells, but also multiple cells which are to be reconfigured. This may result in an avalanche-like propagation of the wave reconfiguration.


Data processing may be continued as soon as the wave reconfiguration has been configured completely. In WAVE, it is possible to select whether data processing is continued immediately after complete configuration or whether there is a wait for arrival of a STEP or GO.


SELECT: Selects an input bus for relaying to the output. Example: Either a bus A or a bus B may be switched to an output. The setting of the multiplexer and thus the selection of the bus are selected by SELECT.


Triggers are used for the following actions within CTs, for example:


CONFIG: A configuration is to be configured by the CT into the PA.


PRELOAD: A configuration is to be preloaded by the CT into its local memory, so that it is already available when a subsequent CONFIG is received. It will be appreciated that this may result in more predictable caching.


CLEAR: A configuration is to be deleted by the CT from its memory.


Incoming triggers may reference a certain configuration. The corresponding method is described below.


Semantics need not be assigned to a trigger signal in the network. Instead, a trigger may represent only a state. How this state may be utilized by a respective receiving PAE may be configured in the respective receiving PAE. For example, a sending PAE may send only its status, and the receiving PAE generates the semantics valid for the received status. If several PAEs receive one trigger, different semantics may be used in each PAE, e.g., a different response may occur in each PAE. For example, a first PAE may be stopped, and a second PAE may be reconfigured. If multiple PAEs send one trigger, the event generating the trigger may be different in each PAE.
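
As an illustration of this receiver-side assignment of semantics, the sketch below (names and action strings are hypothetical) configures two PAEs with different responses to the same trigger line.

```python
# Sketch of receiver-side trigger semantics: the network carries only a state,
# and each receiving PAE is configured with the action it applies to that state.

class ReceivingPae:
    def __init__(self, name, semantics):
        self.name = name
        self.semantics = semantics           # configured per PAE: trigger -> action

    def on_trigger(self, trigger):
        action = self.semantics.get(trigger, "ignore")
        print(f"{self.name}: trigger {trigger} -> {action}")

# The same trigger line 5 causes different responses in different PAEs.
pae_a = ReceivingPae("PAE A", {5: "STOP"})
pae_b = ReceivingPae("PAE B", {5: "LOCAL RESET"})
for pae in (pae_a, pae_b):
    pae.on_trigger(5)
```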


It should be pointed out that a wave reconfiguration and/or a partial reconfiguration can also take place in bus systems and the like. A partial reconfiguration of a bus can take place, for example, in reconfiguration by sections.


3.2. Example System Status and Program Pointer


A system may include a module or an interlinked group of modules, depending on the implementation. For managing an array of PAEs, which is designed to include several modules in the case of a system, it may not be necessary to know the status or program pointer of each PAE. Several cases are differentiated below in order to explain this further:

    • PAEs as components not having a processor property. Such PAEs do not need their own program pointer. The status of an individual PAE may be irrelevant, because only certain PAEs have a usable status (see PACT01, where the status represented by a PAE is not a program counter but instead is a data counter). The status of a group of PAEs may be determined by the linking of the states of the individual relevant PAEs. The information within the network of triggers may represent the status.
    • PAEs as processors. These PAEs may have their own internal program pointer and status. Only the information of a PAE which is relevant for other PAEs may be exchanged by triggers.


The interaction among PAEs may yield a common status which may be analyzed, e.g., in the CT, to determine how a reconfiguration is to take place. The analysis may include the instantaneous configuration of the network of lines and/or buses used to transmit the triggers if the network is configurable.


The array of PAEs (PA) may have a global status. Information may be sent through certain triggers to the CT. The CT may control the program execution through reconfiguration based on these triggers. A program counter may be omitted.


4. Example (Re)Configuration


VPU modules may be configured or reconfigured on the basis of events. These events may be represented by triggers (CONFIG) transmitted to a CT. An incoming trigger may reference a certain configuration (SubConf) for certain PAEs. The referenced SubConf may be sent to one or more PAEs. Referencing may take place by using a conventional lookup system or any other address conversion or address generation procedure. For example, the address of the executing configuration (SubConf) may be calculated as follows on the basis of the number of an incoming trigger if the SubConfs have a fixed length:

offset+(trigger number*SubConf length).
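
A minimal sketch of this referencing, assuming fixed-length SubConfs and, as an alternative, a hypothetical lookup table, might look as follows.

```python
# Illustrative sketch of referencing a SubConf from an incoming trigger number.

def subconf_address(trigger_number, offset, subconf_length):
    """Fixed-length case: offset + (trigger number * SubConf length)."""
    return offset + trigger_number * subconf_length

# Alternative referencing via a lookup table (any other address conversion
# or address generation procedure may be used as well).
lookup = {0: 0x1000, 1: 0x1800, 2: 0x2400}

print(hex(subconf_address(2, offset=0x1000, subconf_length=0x200)))  # 0x1400
print(hex(lookup[2]))                                                # 0x2400
```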


VPU modules may have three configuration modes:


a) Global configuration: The entire VPU may be reconfigured if the entire VPU is in a configurable state, e.g., unconfigured.


b) Local configuration: A portion of the VPU may be reconfigured. The local portion of the VPU which is to be reconfigured may need to be in a configurable state, e.g., unconfigured.


c) Differential configuration: An existing configuration may be modified. PAEs to be reconfigured may need to be in a configured state, e.g., they must be configured.


A configuration may include a set of configuration words (KWs). Each configuration may be referenced by a reference number (ID), which may be unique.


A set of KWs identified by an ID is referred to below as a subconfiguration (SubConf). Multiple SubConfs, which may run simultaneously on different PAEs, may be configured in a VPU. These SubConfs may be different or identical.


A PAE may have one or more configuration registers, one configuration word (KW) describing one configuration register. A KW may be assigned the address of the PAE to be configured. Information indicating the type of configuration may also be assigned to a KW. This information may be implemented using various methods, e.g., flags or coding. Flags are described in detail below.


4.1. Example ModuleID


For some operations, it may be sufficient for the CT to know the allocation of a configuration word and of the respective PAE to a SubConf. For more complex operations in the processing array, the ID of the SubConf assigned to an operation may be stored in each PAE.


An ID stored in the PA is referred to below as moduleID to differentiate it from the IDs within the CTs. There are several reasons for introducing the moduleID, some of which are described here:

    • A PAE may be switched only to a bus which also belongs to the corresponding SubConf. If a PAE is switched to the wrong (different) bus, this may result in processing of incorrect data. This problem can be solved by configuring buses prior to PAEs, which leads to a rigid order of KWs within a SubConf. By introducing the moduleID, this preconfiguration can be avoided, because a PAE compares its stored moduleID with that of the buses assigned to it and switches to a bus only when the bus's moduleID matches that of the PAE. As long as the two moduleIDs are different, the bus connection is not established. As an alternative, a bus sharing management can also be implemented, as described in PACT07.
    • PAEs may be converted to the “unconfigured” state by a local reset signal. Local reset may originate from a PAE in the array and not from a CT, and therefore is “local”.
    • The signal may need to be connected between all PAEs of a SubConf. This procedure may become problematical when a SubConf that has not yet been completely configured is to be deleted, and therefore not all PAEs are connected to local reset. By using moduleID, the CT can broadcast a command to all PAEs. PAEs with the corresponding moduleID may change their status to “not configured”.
    • In many applications, a SubConf may be started only at a certain time, but it may already be configured in advance. By using the moduleID, the CT can broadcast a command to all PAEs. The PAEs with the corresponding moduleID then start the data processing.


The moduleID may also be identical to the ID stored in the CT.


The moduleID may be written into a configuration register in the respective PAE. Since IDs may have a considerable width, e.g., more than 10 bits in most cases, it may not be efficient to provide such a large register in each PAE.


Alternatively, the moduleID of the respective SubConf may be derived from the ID. The alternative moduleID may have a small width and may be unique. Since the number of all modules within a PA is typically comparatively small, a moduleID width of a few bits (e.g., 4 to 5 bits) may be sufficient. The ID and moduleID can be mapped bijectively onto one another. In other words, the moduleID may uniquely identify a configured module within an array at a certain point in time. The moduleID may be issued to a SubConf before configuration so that the SubConf is uniquely identifiable in the PA at the time of execution. A SubConf may be configured into the PA multiple times simultaneously (see macros, described below). A unique moduleID may be issued for each configured SubConf for unambiguous allocation.


The transformation of an ID to a moduleID may be accomplished with lookup tables or lists. Since there are numerous conventional mapping methods for this purpose, only one is explained in greater detail here:


A list whose length is 2^(moduleID width) contains all IDs currently configured in the array, one ID being allocated to each list entry. The entry “0” characterizes an unused moduleID. If a new ID is configured, it must be assigned to a free list entry, whose address yields the corresponding moduleID. The ID is entered into the list at the moduleID address. On deletion of an ID, the corresponding list entry is reset to “0”.


It will be appreciated that other mapping methods may be employed.
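
A minimal sketch of this list-based mapping, assuming a small hypothetical moduleID width and nonzero configuration IDs, is given below; the function names are illustrative.

```python
# Illustrative sketch of the ID -> moduleID mapping via a list of length
# 2^(moduleID width); a list entry of 0 marks an unused moduleID, so the
# (wide) configuration IDs are assumed to be nonzero.

MODULE_ID_WIDTH = 4                            # a few bits are typically sufficient
id_table = [0] * (2 ** MODULE_ID_WIDTH)        # 0 characterizes an unused moduleID

def allocate_module_id(config_id):
    """Assign a free moduleID to a configuration ID; the list index is the moduleID."""
    for module_id, entry in enumerate(id_table):
        if entry == 0:
            id_table[module_id] = config_id
            return module_id
    raise RuntimeError("no free moduleID")      # more SubConfs than the PA can hold

def release_module_id(module_id):
    id_table[module_id] = 0                     # deleting an ID frees its list entry

m = allocate_module_id(0x3FA7)                  # wide ID mapped to a small moduleID
print(m, hex(id_table[m]))
release_module_id(m)
```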


4.2. Example PAE States


Each KW may be provided with additional flags which may be used to check and control the status of a PAE:


CHECK: An unconfigured PAE is allocated and configured. If the status of the PAE is “not configured,” the PAE is configured with the KW. This procedure may be acknowledged with ACK.


If the PAE is in the “configured” or “allocated” state, the KW is not accepted. The rejection may be acknowledged with REJ.


After receipt of CHECK, a PAE may be switched to an “allocated” state. Any additional CHECK is rejected, but data processing is not started.


DIFFERENTIAL: The configuration registers of a PAE that has already been configured may be modified. If the status of the PAE is “configured” or “allocated,” then the PAE may be modified using the KW. This procedure may be acknowledged with ACK. If the PAE is in the “unconfigured” state, the KW is not accepted but is acknowledged by REJ (reject).


GO: Data processing may be started. GO may be sent individually or together with CHECK or DIFFERENTIAL.


WAVE: A configuration may be linked to the data processing. When the WAVE trigger is received, the configuration characterized with the WAVE flag may be loaded into the PAE. If WAVE configuration is performed before receipt of the trigger, the KWs characterized with the WAVE flag remain stored until receipt of the trigger and become active only with the trigger. If the WAVE trigger is received before the KW which has the WAVE flag, data processing is stopped until the KW is received.


At least CHECK or DIFFERENTIAL must be set for each KW transmitted. However, CHECK and DIFFERENTIAL are not allowed at the same time. CHECK and GO or DIFFERENTIAL and GO are allowed and will start data processing.


In addition, a flag which is not assigned to any KW and is set explicitly by the CT may also be implemented:


LOCK: It will be appreciated that a PAE may not always switch to the “not configured” state at will. If this were allowed, a cell could, for example, still be configured and involved in the processing of data while an attempt is being made to write a first configuration from the FILMO memory into the cell, and the cell could then terminate its activity during the same FILMO run. Without additional measures, a second, later configuration stored in the FILMO, which may actually be executed only after the first configuration, could then occupy this cell. This could result in deadlock situations. By temporarily restricting changes in the configurability of the cell through the LOCK command, such a deadlock can be avoided by preventing the cell from becoming configurable at an unwanted time. This locking of the cell against reconfiguration can take place either whenever the FILMO is run through, regardless of whether the cell is in fact accessed for the purpose of reconfiguration, or, alternatively, the cell may be locked against reconfiguration only for a certain phase, after the first unsuccessful access to the cell by a first configuration in the FILMO; this limits the lock to those cells which are to be accessed by an earlier configuration.


Thus, according to the FILMO principle, a change may be allowed in FILMO only during certain states. As discussed above, the FILMO state machine controls the transition to the “not configured” state through LOCK.


Depending on the implementation, the PAE may transmit its instantaneous status to a higher-level control unit (e.g., the respective CT) or store it locally.


Example Transition Tables


A simple implementation of a state machine for observing the FILMO protocol is possible without using WAVE or CHECK/DIFFERENTIAL. Only the GO flag is implemented here, a configuration being composed of KWs transmitted together with GO. The following states may be implemented:


Not configured: The PAE behaves completely neutrally, e.g., it does not accept any data or triggers, nor does it send any data or triggers. The PAE waits for a configuration. Differential configurations, if implemented, are rejected.


Configured: The PAE is configured and it processes data and triggers. Other configurations are rejected; differential configurations, if implemented, are accepted.


Wait for lock: The PAE receives a request for reconfiguration (e.g., through local reset or by setting a bit in a configuration register). Data processing may be stopped, and the PAE may wait for cancellation of LOCK to be able to change to the “not configured” state.
















Current PAE status      Event                   Next status

not configured          GO flag                 configured
configured              Local Reset Trigger     wait for lock
wait for lock           LOCK flag               not configured









A completed state machine according to the approach described here makes it possible to configure a PAE which requires several KWs. This is the case, for example, when a configuration which refers to several constants is to be transmitted, and these constants are also to be written into the PAE after or together with the actual configuration. An additional status is required for this purpose.


Allocated: The PAEs have been checked by CHECK and are ready for configuration. In the allocated state, the PAE is not yet processing any data. Other KWs marked as DIFFERENTIAL are accepted. KWs marked with CHECK are rejected.


An Example


A corresponding transition table is shown below; WAVE is not included:
















Current PAE status      Event                   Next status

not configured          CHECK flag              allocated
not configured          GO flag                 configured
allocated               GO flag                 configured
configured              Local Reset Trigger     wait for lock
wait for lock           LOCK flag               not configured










4.2.1. Example Implementation of GO


GO may be set immediately during the configuration of a PAE together with the KW in order to be able to start data processing immediately. Alternatively, GO may be sent to the respective PAEs after conclusion of the entire SubConf.


The GO flag may be implemented in various ways, including the examples described below:


a) Register


Each PAE may have a register which is set at the start of processing. The technical implementation is comparatively simple, but a configuration cycle may be required for each PAE. GO is transmitted together with the KW as a flag according to the previous description.


If it is important in which order the PAEs of different PACs belonging to one EnhSubConf are configured, an alternative approach may be used to ensure that this chronological dependence is maintained. Since there are also multiple CTs when there are multiple PACs, the CTs may notify one another regarding whether all PAEs which must be configured before the next ones have already accepted their GO from the same configuration.


One possibility of resolving the chronological dependencies and preventing unallowed GOs from being sent is to reassign the KWs. With reassignment, a correct order may be ensured by the FILMO. The FILMO then marks, e.g., by a flag for each configuration, whether all GOs of the current configuration have been accepted. If this is not the case, no additional GOs of this configuration are sent. Each new configuration may have an initial status indicating that all GOs have been accepted.


To increase the probability that some PAEs are no longer being configured during the configuration, the KWs of an at least partially sequential configuration can be re-sorted. The re-sorting permits configuring the KWs of the respective PAEs at a later point in time. Certain PAEs may be activated sooner, e.g., by rearranging the KWs of the respective configuration so that the respective PAEs are configured earlier. These approaches may be used if the order of the KWs is not already determined completely by time dependencies that must be maintained after re-sorting.


b) Wiring by Conductor


As is the case in use of the local reset signal, PAEs may be combined into groups which are to be started jointly. Within this group, all PAEs are connected to a line for distribution of GO. If one group is to be started, GO is signaled to a first PAE. The signalling may be accomplished by sending a signal or setting a register (see a)) of the first PAE. From the first PAE, GO may be relayed to the other PAEs. One configuration cycle may be necessary for starting. For relaying, a latency time may be needed to bridge great distances.


c) Broadcast


An alternative to a) and b) offers a high performance (only one configuration cycle) with a comparatively low complexity.


All modules may receive a moduleID which may be different from the SubConfID.


It will be appreciated that it may be desirable to keep the size of the moduleID as small as possible. A width of a few bits (3 to 5) may be sufficient. The use of moduleID is explained in greater detail below.


During configuration, the corresponding moduleID may be written to each PAE.


GO is then started by a broadcast, by sending the moduleID together with the GO command to the array. The command is received by all PAEs, but is executed only by the PAEs having the proper moduleID.


4.2.2. Locking the PAE Status


The status of a PAE may need to be prevented from changing from “configured” to “not configured” within a configuration or a FILMO run. Example: Two different SubConfs (A, D) share the same resources, in particular, a PAE X. In the FILMO, SubConf A precedes SubConf D in time. SubConf A must therefore occupy the resources before SubConf D. PAE X is “configured” at the configuration time of SubConf A, but it changes its status to “not configured” before the configuration of SubConf D. This may result in a deadlock situation: SubConf D occupies PAE X, so SubConf A can no longer configure PAE X, while SubConf D can no longer configure the remaining resources, which are already occupied by SubConf A. Neither SubConf A nor SubConf D can be executed. As mentioned previously, LOCK may ensure that the status of a PAE does not change in an inadmissible manner during a FILMO run. For the FILMO principle it is irrelevant how the status is locked. Several possible locking approaches are discussed below:


Basic LOCK


Before beginning the first configuration and with each new run of FILMO, the status of the PAEs is locked. After the end of each run, the status is released again. Thus, certain changes in status may be allowed only once per run.


Explicit LOCK


The lock signal is set only after the first REJ from the PA since the start of a FILMO run. This is possible because previously all the PAEs could be configured and thus already were in the “unconfigured” state. Only a PAE which generates a REJ could change its status from “configured” to “not configured” during the additional FILMO run. A deadlock could occur only after this time, namely when a first KW receives a REJ and a later one is configured. However, the transition from “configured” to “not configured” is prevented by immediately setting LOCK after a REJ. With this approach, during the first run phase, PAEs can still change their status, which means that they can change to the “unconfigured” state. If a PAE thus changes from “configured” to “not configured” during a run before a failed configuration attempt, then it can be configured in the same configuration phase.


Implicit LOCK


A more efficient extension of the explicit LOCK is the implicit handling of LOCK within a PAE.


In general, only PAEs which have rejected (REJ) a configuration may be affected by the lock status. Therefore, it is sufficient during a FILMO run to lock the status only within PAEs that have generated a REJ. All other PAEs may remain unaffected. LOCK is no longer generated by a higher-level instance (CT). Instead, after a FILMO run, the lock status in the respective PAEs may be canceled by a FREE signal. FREE can be broadcast to all PAEs directly after a FILMO run and can also be pipelined through the array.


Example Extended Transition Tables for Implicit LOCK:


A reject (REJ) generated by a PAE may be stored locally in each PAE (REJD=rejected). The information is deleted only on the return to “not configured.”














Current PAE status      Event                                            Next status

not configured          CHECK flag                                       allocated
not configured          GO flag                                          configured
allocated               GO flag                                          configured
configured              Local reset trigger and reject (REJD)            wait for free
configured              Local reset trigger and no reject (not REJD)     not configured
wait for free           FREE flag                                        not configured









It will be appreciated that the transition tables are given as examples and that other approaches may be employed.
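
A hedged sketch of the implicit-LOCK table above as a small state machine follows; the states, events, and the REJD bookkeeping mirror the table, while the class structure and names are illustrative only.

```python
# Illustrative sketch of the extended (implicit LOCK) transition table.

TRANSITIONS = {
    ("not configured", "CHECK"):                 "allocated",
    ("not configured", "GO"):                    "configured",
    ("allocated",      "GO"):                    "configured",
    ("configured",     "LOCAL RESET (REJD)"):    "wait for free",
    ("configured",     "LOCAL RESET (no REJD)"): "not configured",
    ("wait for free",  "FREE"):                  "not configured",
}

class PaeStateMachine:
    def __init__(self):
        self.state = "not configured"
        self.rejd = False                  # set when this PAE has answered with REJ

    def reject(self):
        self.rejd = True                   # a REJ was generated during the current run

    def event(self, name):
        if name == "LOCAL RESET":          # resolve against the locally stored REJD flag
            name = "LOCAL RESET (REJD)" if self.rejd else "LOCAL RESET (no REJD)"
        nxt = TRANSITIONS.get((self.state, name))
        if nxt is not None:
            self.state = nxt
            if nxt == "not configured":
                self.rejd = False          # REJD is cleared on return to "not configured"
        return self.state

pae = PaeStateMachine()
pae.event("CHECK"); pae.event("GO")        # allocated, then configured
pae.reject()                               # this PAE rejected a later CHECK
print(pae.event("LOCAL RESET"))            # wait for free (REJD is set)
print(pae.event("FREE"))                   # not configured
```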


4.2.3. Example Configuration of a PAE


An example configuration sequence is described again in this section from the standpoint of the CT. A PAE shall also be considered to include parts of a PAE if they manage the states described previously, independently of one another.


If a PAE is to be reconfigured, the first KW may need to set the CHECK flag to check the status of the PAE. A configuration for a PAE is constructed so that either (a) only one KW is configured:

















CHECK    DIFFERENTIAL    GO    KW

X        —               *     KW0










or (b) multiple KWs are configured, with CHECK being set with the first KW and DIFFERENTIAL being set with all additional KWs:













CHECK    DIFFERENTIAL    GO    KW

X        —               —     KW0
—        X               —     KW1
—        X               —     KW2
—        X               *     KWn





(X) set, (—) not set, GO is optional (*).






If CHECK is rejected (REJ), no subsequent KW with a DIFFERENTIAL flag is sent to the PAE. After CHECK is accepted (ACK), all additional CHECKs are rejected until the return to the state “not configured” and the PAE is allocated for the accepted SubConf. Within this SubConf, the next KWs may be configured exclusively with DIFFERENTIAL. It will be appreciated that this is allowed because it is known by CHECK that this SubConf has access rights to the PAE.
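
As an illustration of this sequence from the CT side, the sketch below assembles the flag/KW pairs for one PAE (CHECK on the first KW, DIFFERENTIAL on the rest, GO optionally on the last); the function name and the tuple format are hypothetical.

```python
# Illustrative sketch of building the KW sequence for one PAE according to
# the tables above. If the first KW (CHECK) were rejected, none of the
# DIFFERENTIAL KWs would actually be sent.

def build_pae_configuration(kws, start_immediately=False):
    """Return (flags, kw) pairs in the order they would be sent to the PAE."""
    sequence = []
    for i, kw in enumerate(kws):
        flags = {"CHECK"} if i == 0 else {"DIFFERENTIAL"}
        if start_immediately and i == len(kws) - 1:
            flags.add("GO")                # GO may be set with the last KW ...
        sequence.append((flags, kw))
    return sequence                        # ... or sent later, after the whole SubConf

for flags, kw in build_pae_configuration(["KW0", "KW1", "KW2"], start_immediately=True):
    print(sorted(flags), kw)
```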


4.2.4. Resetting to the Status “Not Configured”


With a specially designed trigger (e.g., local reset), local resetting of the “configured” state to “not configured” is initiated in the receiving PAEs. This occurs, at the latest, after a LOCK or FREE signal is received. Resetting may also be triggered by other sources, such as a configuration register.


Local reset can be relayed from the source generating the signal over all existing configurable bus connections, e.g., all trigger buses and all data buses, to each PAE connected to the buses. Each PAE receiving a local reset may in turn relay the signal over all the connected buses.


However, it may be desirable to prevent the local reset trigger from being relayed beyond the limit of a local group. Each cell may be independently configured. Each cell configuration may indicate whether and over which connected buses the local reset is to be relayed.


4.2.4.1. Deleting an Incompletely Configured SubConf


During configuration of a SubConf, it may be found that the SubConf is not needed. In this case, local reset may not change the status of all PAEs to “not configured,” because the bus has not yet been completely established. Two alternative approaches are proposed. In both approaches, the PAE which would have generated the local reset sends a trigger to the CT. Then the CT informs the PAEs as follows:


4.2.4.2. When Using ModuleID


If a possibility for storage of the moduleID is provided within each PAE, then each PAE can be requested to go to the status “not configured” with this specific ID. This may be accomplished with a simple broadcast in which the ID is also sent.


4.2.4.3. When Using the GO Signal


If a GO line is wired in exactly the order in which the PAEs are configured, a reset line may be assigned to the GO line. The reset line may set all the PAEs in the state “not configured.”


4.2.4.4. Explicit Reset by the Configuration Register


In each PAE, a bit or a code may be defined within the configuration register. When this bit or code is set by the CT, the PAE is reset in the state “not configured.”


4.3. Holding the Data in the PAEs


It may be advantageous to hold the data and states of a PAE beyond a reconfiguration. Data stored within a PAE may be preserved despite reconfiguration. Appropriate information in the KWs may define, for each relevant register, whether the register is reset by the reconfiguration.


Example:


For example, if a bit within a KW is logical 0, the current register value of the respective data register or status register may be retained. A logical 1 resets the value of the register. A corresponding KW may then have the following structure:














Input register      Output register     Status flags

A    B    C         H    L              equal/zero    overflow









Whether or not the data will be preserved may then be selected with each reconfiguration.


4.4. Setting Data in the PAEs


Data may be written into the registers of the PAEs during reconfiguration by the CT. The relevant registers may be addressed by KWs. A separate bit may indicate whether the data is to be treated as a constant or as a data word.

    • A constant may be retained until it is reset.
    • A data word may be valid for precisely a certain number of counts, e.g., precisely one count. After processing the data word, the data word written to the register by the CT may no longer exist.


      5. Example Extensions


The bus protocol may be extended by also pipelining the KWs and ACK/REJ signals through registers.


One KW or multiple KWs may be sent in each clock cycle. The FILMO principle may be maintained. Each KW written to the PA may be tracked in such a way that the delayed acknowledgment can subsequently be allocated to that KW. KWs depending on the acknowledgment may be re-sorted so that they are processed only after receipt of the acknowledgment.


Several alternative approaches are described below:


5.1. Example Lookup Tables (STATELUT)


Each PAE may send its status to a lookup table (STATELUT). The lookup table may be implemented locally in the CT. In sending a KW, the CT may check the status of the addressed PAE via a lookup in the STATELUT. The acknowledgment (ACK/REJ) may be generated by the STATELUT.


In a CT, the status of each individual PAE may be managed in a memory or a register set. For each PAE there is an entry indicating in which mode (“configured,” “not configured”) the PAE is. On the basis of this entry, the CT checks whether the PAE can be reconfigured. This status is checked internally by the CT, e.g., without checking back with the PAEs. Each PAE sends its status independently or after a request, depending on the implementation, to the internal STATELUT within the CT. When LOCK is set or there is no FREE signal, no changes in status are sent by the PAEs to the STATELUT and none are received by the STATELUT.
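
A hedged sketch of such a CT-local STATELUT follows; the class and method names are illustrative, and only the CHECK/DIFFERENTIAL decision paths described here are modeled.

```python
# Illustrative sketch of a STATELUT held in the CT.

class StateLut:
    def __init__(self):
        self.status = {}              # PAE address -> "not configured"/"allocated"/"configured"
        self.locked = False           # while LOCK is set, status updates are ignored

    def report(self, pae_address, status):
        """A PAE (or group) reports a status change, unless LOCK is set."""
        if not self.locked:
            self.status[pae_address] = status

    def answer_kw(self, pae_address, differential=False):
        """Generate ACK/REJ for a KW addressed to pae_address, purely from the table."""
        current = self.status.get(pae_address, "not configured")
        if differential:
            return "ACK" if current in ("configured", "allocated") else "REJ"
        if current == "not configured":
            self.status[pae_address] = "allocated"     # CHECK accepted
            return "ACK"
        return "REJ"

    def started(self, pae_address):
        self.status[pae_address] = "configured"        # entered when the PAE is started (GO)

lut = StateLut()
print(lut.answer_kw(12))                      # ACK, PAE 12 is now allocated
print(lut.answer_kw(12))                      # REJ, already allocated
print(lut.answer_kw(12, differential=True))   # ACK, differential is allowed
```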


The status of the PAEs may be monitored by a simple mechanism, with the mechanisms of status control and the known states that have already been described being implemented.


Setting the “Configured” Status


When writing a KW provided with a CHECK flag, the addressed PAE may be marked as “allocated” in the STATELUT.

    • When the PAE is started (GO), the PAE may be entered as “configured.”


      Resetting the “Configured” Status to “Not Configured”


Several methods may be used, depending on the application and implementation:

  • a) Each PAE may send a status signal to the table when the PAE's status changes from “configured” to “not configured.” This status signal may be sent pipelined.
  • b) A status signal (local reset) may be sent for a group of PAEs, indicating that the status for the entire group has changed from “configured” to “not configured”. All the PAEs belonging to the group may be selected according to a list, and the status for each individual PAE may be changed in the table. The status signal may need to be sent to the CT from the last PAE of a group removed by a local reset signal. Otherwise, there may be inconsistencies between the STATELUT and the actual status of the PAEs. For example, the STATELUT may list a PAE as “not configured” although it is in fact still in a “configured” state.
  • c) After receipt of a LOCK signal, possibly pipelined, each PAE whose status has changed since the last receipt of LOCK may send its status to the STATELUT. LOCK here receives the “TRANSFER STATUS” semantics. However, PAEs transmit their status only after this request, and otherwise the status change is locked, so the approach remains the same except for the inverted semantics.


To check the status of a PAE during configuration, the STATELUT may be queried when the address of the target PAE of a KW is sent. An ACK or REJ may be generated accordingly. A KW may be sent to a PAE only if no REJ has been generated or if the DIFFERENTIAL flag has been set.


This approach ensures the chronological order of KWs. Only valid KWs are sent to the PAEs. One disadvantage here is the complexity of the implementation of the STATELUT and the resending of the PAE states to the STATELUT. Bus bandwidth and running time may also be required for this approach.


5.2. Example Re-Sorting the KWs


The use of the CHECK flag for each first KW (KW1) sent to a PAE may be needed in the following approach.


The SubConf may be resorted as follows:

    • 1. First, KW1 of a first PAE may be written. In the time (DELAY) until the receipt of the acknowledgment (ACK/REJ), there follow exactly as many dummy cycles (NOPs) as cycles of the delay elapse.
    • 2. Then the KW1 of a second PAE may be written. During DELAY the remaining KWs of the first PAE may be written. Any remaining cycles are filled with dummy cycles. The configuration block from KW1 until the expiration of DELAY is referred to here as an “atom”.
    • 3. The same procedure may be followed with each additional PAE.
    • 4. If more KWs are written for a PAE than there are cycles during DELAY, the remaining portion may be distributed among the following atoms. As an alternative, the DELAY may also be actively lengthened, so a larger number of KWs may be written in the same atom.


Upon receipt of ACK for a KW1, all additional KWs for the corresponding PAE may be configured. If the PAE acknowledges the KW1 with REJ, no other KW pertaining to the PAE may be configured.


This procedure guarantees that the proper order will be maintained in configuration.
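
A hedged sketch of this atom construction follows, assuming for simplicity that every KW1 is acknowledged with ACK (after a REJ the remaining KWs of that PAE would be dropped instead); the names and the DELAY value are illustrative.

```python
# Illustrative sketch of atom-wise re-sorting: KW1 of the next PAE opens each
# atom, and the DELAY slots until its acknowledgment are filled with the
# remaining KWs of earlier PAEs (whose ACK has already arrived), padded with
# dummy cycles (NOPs).

DELAY = 4                                  # cycles between sending a KW and its ACK/REJ

def resort_into_atoms(per_pae_kws):
    """per_pae_kws: one list of KWs per PAE; the first entry of each list is KW1."""
    stream, pending = [], []               # pending: remaining KWs of earlier PAEs
    for kws in per_pae_kws + [[]]:         # one extra pass to flush the last remainders
        atom = kws[:1]                     # KW1 of the current PAE (empty on flush pass)
        fill = pending[:DELAY]
        pending = pending[DELAY:] + kws[1:]
        atom += fill + ["NOP"] * (DELAY - len(fill))
        stream += atom
    return stream

print(resort_into_atoms([["A.KW1", "A.KW2", "A.KW3"], ["B.KW1"], ["C.KW1", "C.KW2"]]))
```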


A disadvantage of this approach is that the optimum configuration speed may not be achieved. To maintain the proper order, the waiting time of an atom may have to be filled with dummy cycles (NOPs), so the bandwidth required and the size of a SubConf are increased by the NOPs.


This restriction on the configuration speed may be difficult to avoid. To minimize the amount of configuration data and the number of configuration cycles, the number of configuration registers may need to be minimized. At higher frequencies, DELAY necessarily becomes larger, so this collides with the requirement that DELAY be used appropriately by filling it up with KWs.


Therefore, this approach is most appropriate for use in serial transmission of configuration data. Due to the serialization of KWs, the data stream is long enough to fill up the waiting time.


5.3. Analyzing the ACK/REJ Acknowledgment with Latency (CHECK, ACK/REJ)


The CHECK signal may be sent to the addressed PAE with the KWs over one or more pipeline stages. The addressed PAE acknowledges (ACK/REJ) this to the CT, also pipelined.


In each cycle, a KW may be sent. The KW's acknowledgment (ACK/REJ) is received by the CT n cycles later and must be allocated to the KW. During this period of time, however, additional KWs have already been sent. This results in two problem areas:

    • Controlling the FILMO
    • Maintaining the sequence of KWs


      5.3.1. Controlling the FILMO


Within the FILMO, it must be noted which KWs have been accepted by a PAE (ACK) and which have been rejected (REJ). Rejected KWs may be sent again in a later FILMO run. In this later run, it may be more efficient to run through only the KWs that have been rejected.


The requests described here may be implemented as follows: Another memory (RELJMP) which has the same depth as the FILMO may be assigned to the FILMO. A first counter (ADR_CNT) points to the address in the FILMO of the KW currently being written into the PAE array. A second counter (ACK/REJ_CNT) points to the position in the FILMO of the KW whose acknowledgment (ACK/REJ) is currently returning from the array. A register (LASTREJ) stores the value of ACK/REJ_CNT which points to the address of the last KW whose configuration was acknowledged with REJ. A subtractor calculates the difference between ACK/REJ_CNT and LASTREJ. On occurrence of a REJ, this difference is written into the memory location having the address LASTREJ in the memory RELJMP.


RELJMP thus contains the relative jump width between a rejected KW and the following KW.

    • 1. A RELJMP entry of “0” (zero) is assigned to each accepted KW.
    • 2. A RELJMP entry of “>0” (greater than zero) is assigned to each rejected KW. The address of the next rejected KW in the FILMO is calculated by adding the RELJMP entry to the current address.
    • 3. A RELJMP entry of “0” (zero) is assigned to the last rejected KW, indicating the end.


The memory location of the first address of a SubConf is occupied by a NOP in the FILMO. The associated RELJMP contains the relative jump to the first KW to be processed.

    • 1. In the first run of the FILMO, the value is “1” (one).
    • 2. In a subsequent run, the value points to the first KW to be processed, so it is “>0” (greater than zero).
    • 3. If all KWs of the SubConf have been configured, the value is “0” (zero), by which the state machine determines that the configuration has been completely processed.


It will be appreciated that other approaches to coding various conditions may be employed.
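
The sketch below illustrates this RELJMP bookkeeping on a small example; the data layout and function names are illustrative, and the leading NOP entry of a SubConf is omitted for brevity.

```python
# Illustrative sketch of the RELJMP chain: rejected KWs are linked by relative
# jumps so that a later FILMO run visits only the rejected entries.

def record_acknowledgments(acks):
    """acks[i] is True (ACK) or False (REJ) for the KW at FILMO address i."""
    reljmp = [0] * len(acks)
    last_rej = None                        # corresponds to the LASTREJ register
    for addr, ok in enumerate(acks):
        if not ok:
            if last_rej is not None:
                reljmp[last_rej] = addr - last_rej   # difference ACK/REJ_CNT - LASTREJ
            last_rej = addr
    return reljmp                          # the last rejected KW keeps RELJMP 0 (end marker)

def next_run_addresses(reljmp, first_rejected):
    """Walk the chain of rejected KWs for the following FILMO run."""
    addr, visited = first_rejected, []
    while True:
        visited.append(addr)
        if reljmp[addr] == 0:
            return visited
        addr += reljmp[addr]

acks = [True, False, True, True, False, False, True]
rel = record_acknowledgments(acks)
print(rel)                                          # [0, 3, 0, 0, 1, 0, 0]
print(next_run_addresses(rel, first_rejected=1))    # [1, 4, 5]
```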


5.3.2. Observing the Sequence (BARRIER)


The method described in section 5.3 may not guarantee a certain configuration sequence. This method only ensures the FILMO requirements according to 2.1 a)-c).


In certain applications, it is relevant to observe the configuration sequence within a SubConf (2.1 d)) and to maintain the configuration sequence of the individual SubConfs themselves (2.1 e)).


Observing sequences may be accomplished by partitioning a SubConf into multiple blocks. A token (BARRIER) may be inserted between individual blocks, and it can be skipped only if none of the preceding KWs has been rejected (REJ).


If the configuration reaches a BARRIER, and REJ has occurred previously, the BARRIER must not be skipped. A distinction is made between at least two types of barriers:


a) Nonblocking: The configuration is continued with the following SubConf.


b) Blocking: The configuration is continued with additional runs of the current SubConf. BARRIER is not skipped until the current SubConf has been configured completely.


Optimizing Configuration Speed.


Considerations on optimization of the configuration speed:


It is not normally necessary to observe the sequence of the configuration of the individual KWs. However, the sequence of activation of the individual PAEs (GO) may need to be observed exactly. The speed of the configuration can be increased by re-sorting the KWs so that all the KWs in which the GO flag has not been set are pulled before the BARRIER. Likewise, all the KWs in which the CHECK flag has been set may need to be pulled before the BARRIER. If a PAE is configured with only one KW, the KW may need to be split into two words, the CHECK flag being set before the BARRIER and the GO flag after the BARRIER.


At the BARRIER it is known whether all CHECKs have been acknowledged with ACK. Since a reject (REJ) occurs only when the CHECK flag is set, all KWs behind the barrier may be executed in the correct order. The KWs behind the barrier may be run through only once, and the start of the individual PAEs occurs properly.


5.3.3. Garbage Collector


Two different implementations of a garbage collector (GC) are suggested for the approach described in 5.3.


a) A GC may be implemented as an algorithm or a simple state machine: At the beginning, two pointers point to the starting address of the FILMO: a first pointer (read pointer) points to the current KW to be read by the GC, and a second pointer (write pointer) points to the position to which the KW is to be written. Read pointer is incremented linearly. Each KW whose RelJmp is not equal to “0” (zero) is written to the write pointer address. RelJmp is set at “1” and write pointer is incremented.
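A minimal sketch of such a GC run is given below (in C). The entry type, the memory depth and the function name gc_run are assumptions made only for this sketch; it compacts all KWs whose RelJmp entry is not equal to "0" to the beginning of the memory.

#include <stdint.h>

#define FILMO_DEPTH 256

typedef struct { uint32_t kw; uint16_t reljmp; } filmo_entry_t;

static filmo_entry_t filmo[FILMO_DEPTH];

/* Compacts all still-unconfigured KWs (RelJmp != 0) to the start of the
   memory; returns the number of entries remaining after the run. */
uint16_t gc_run(uint16_t used)
{
    uint16_t rd, wr = 0;
    for (rd = 0; rd < used; rd++) {
        if (filmo[rd].reljmp != 0) {    /* still to be configured */
            filmo[wr] = filmo[rd];
            filmo[wr].reljmp = 1;       /* packed linearly again  */
            wr++;
        }
    }
    return wr;
}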


b) The GC may be integrated into the FILMO by adding a write pointer to the readout pointer of the FILMO. At the beginning of the FILMO run, the write pointer points to the first entry. Each KW that has been rejected with a REJ in configuration of a PAE is written to the memory location to which the write pointer points. Then write pointer is incremented. An additional FIFO-like memory (e.g., including a shift register) may be needed to temporarily store the KW sent to a PAE in the proper order until the ACK/REJ belonging to the KW is received by the FILMO again. Upon receipt of an ACK, the KW may be ignored. Upon receipt of REJ, the KW may be written to the memory location to which the write pointer is pointing (as described above). Here, the memory of the FILMO may be designed as a multiport memory. In this approach, there is a new memory structure at the end of each FILMO run, with the unconfigured KWs standing in linear order at the beginning of the memory. No additional GC runs may be necessary. Implementation of RelJmp and the respective logic may be completely omitted.


5.4. Prefetching of the ACK/REJ Acknowledgment with Latency


An alternative to 5.3 may be used. The disadvantage of this alternative approach is the comparatively long latency time, corresponding to three times the length of the pipeline.


The addresses and/or flags of the respective PAEs to be configured may be sent on a separate bus system before the actual configuration. The timing may be designed so that at the time the configuration word is to be written into a PAE, its ACK/REJ information is available. If acknowledged with ACK, the CONFIGURATION may be performed; in the case of a reject (REJ), the KWs are not sent to the PAE (ACK/REJ-PREFETCH). FILMO protocol, in particular LOCK, ensures that there will be no unallowed status change of the PAEs between ACK/REJ-PREFETCH and CONFIGURATION.


5.4.1. Structure of FILMO


FILMO may function as follows: KWs may be received in the correct order, either (i) from the memory of the CT or (ii) from the FILMO memory.


The PAE addresses of the KWs read out may be sent to the PAEs, pipelined through a first bus system. The complete KWs may be written to a FIFO-like memory having a fixed delay time (e.g., a shift register).


The respective PAE addressed may acknowledge this by sending ACK or REJ, depending on the PAE's status. The depth of the FIFO corresponds to the number of cycles that elapse between sending the PAE address to a PAE and receipt of the acknowledgment of the PAE. The cycle from sending the address to a PAE until the acknowledgment of the PAE is received is known as prefetch.


Due to the fixed delay in the FIFO-like memory, which is not identical to FILMO here, the acknowledgment of a PAE may be received at the CT exactly at the time when the KW belonging to the PAE appears at the output of the FIFO. Upon receipt of ACK, the KW may be sent to the PAE. Here, no acknowledgment is expected. The PAE status cannot have changed in an unallowed manner in the meantime, so acceptance is guaranteed.


Upon receipt of REJ, the KW is not sent to the PAE but instead may be written back into the FILMO memory. An additional pointer is available for this, which points to the first address at the beginning of linear readout of the FILMO memory. The counter may be incremented with each value written back to the memory. In this way, rejected KWs are automatically packed linearly, which corresponds to an integrated garbage collector run (see also 5.3).
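The prefetch loop described above can be illustrated by the following sketch (in C). The FIFO depth and the bus helpers send_pae_address(), poll_ack_rej() and send_kw() are assumptions standing in for the pipelined CT bus; they are not part of the protocol itself.

#include <stdint.h>

#define PREFETCH_DEPTH 8  /* cycles from sending the PAE address to the acknowledgment */

typedef struct { uint16_t pae_addr; uint32_t data; } kw_t;
enum { ACK, REJ };

extern void send_pae_address(uint16_t addr);  /* assumed bus helper */
extern int  poll_ack_rej(void);               /* assumed bus helper */
extern void send_kw(kw_t kw);                 /* assumed bus helper */

/* One FILMO run with ACK/REJ prefetch; returns the number of rejected KWs,
   which are packed linearly at the start of the memory (integrated GC). */
uint16_t filmo_run(kw_t *filmo, uint16_t count)
{
    kw_t     fifo[PREFETCH_DEPTH];   /* fixed-delay FIFO (shift register) */
    uint16_t wr_back = 0;            /* write pointer for rejected KWs    */

    for (uint32_t i = 0; i < (uint32_t)count + PREFETCH_DEPTH; i++) {
        if (i >= PREFETCH_DEPTH) {                 /* acknowledgment phase       */
            kw_t kw = fifo[i % PREFETCH_DEPTH];    /* KW belonging to this ACK/REJ */
            if (poll_ack_rej() == ACK)
                send_kw(kw);                       /* acceptance is guaranteed    */
            else
                filmo[wr_back++] = kw;             /* pack rejected KWs linearly  */
        }
        if (i < count) {                           /* prefetch phase             */
            send_pae_address(filmo[i].pae_addr);   /* test the PAE status        */
            fifo[i % PREFETCH_DEPTH] = filmo[i];   /* delay the complete KW      */
        }
    }
    return wr_back;
}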


5.4.2. Sending and Acknowledging Over a Register Pipeline


The approach described here may be used to ensure a uniform clock delay between messages sent and responses received if different numbers of registers are connected between one transmitter and multiple possible receivers of messages. One example of this would be if receivers are located at different distances from the transmitter. The message sent may reach nearby receivers sooner than more remote receivers.


To achieve the same transit time for all responses, the response is not sent back directly by the receiver. Instead the response is sent further, to the receiver at the greatest distance from the sender. This path must have the exact number of receivers so that the response will be received at the time when a response sent simultaneously with the first message would be received at this point. From here out, the return takes place exactly as if the response were generated in this receiver at the greatest distance from the sender.


It will be appreciated that it does not matter here whether the response is actually sent to the most remote receiver or whether it is sent to another chain having registers with the same time response.
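A minimal behavioral sketch of this latency equalization is given below (in C). The chain length and the function name response_latency are assumptions made only for this sketch; it merely shows that the total transit time becomes independent of the receiver's position.

#define CHAIN_LEN 8   /* register stages between the sender and the most remote receiver */

/* Cycles from sending the message until the response arrives back at the sender.
   The receiver does not answer directly; its response is first forwarded to the
   far end of the chain and only returns from there. */
unsigned response_latency(unsigned pos)
{
    unsigned to_receiver = pos;                    /* message reaches stage pos      */
    unsigned to_far_end  = (CHAIN_LEN - 1) - pos;  /* response forwarded outward     */
    unsigned back        = CHAIN_LEN - 1;          /* return path from the far end   */
    return to_receiver + to_far_end + back;        /* = 2*(CHAIN_LEN-1) for every pos */
}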


6. Hierarchical CT Protocol


As described in PACT10, VPU modules may be scalable by constructing a tree of CTs, the lowest CTs (low-level CTs) of the PAs being arranged on the leaves. A CT together with the PA assigned to the CT is known as a PAC. In general, any desired data or commands may be exchanged between CTs. Any technically appropriate protocol can be used for this purpose.


However, if the communication (inter-CT communication) causes SubConf to start on various low-level CTs within the CT tree (CTTREE), the requirements of the FILMO principle should be ensured to guarantee freedom from deadlock.


In general, two cases are to be distinguished:

  • 1. In the case of a low-level CT, the start of a SubConf may be requested. The SubConf may run only locally on the low-level CT and the PA assigned to the low-level CT. This case can be processed at any time within the CTTREE and does not require any special synchronization with other low-level CTs.
  • 2. In the case of a low-level CT, the start of a configuration may be requested. The SubConf may run on multiple low-level CTs and the PAs assigned to them. In this case, it is important to be sure that the configuration is called up "atomically," i.e., indivisibly, on all the CTs involved. This may be accomplished by ensuring that no other SubConf is started during call-up and start of a given SubConf. Such a protocol is known from PACT10. However, a protocol that is even more optimized is desirable.


The protocol described in PACT10 may be inefficient as soon as a pipelined transmission at higher frequencies is necessary. This is because bus communication is subject to a long latency time.


An alternative approach is described in the following sections.


A main function of inter-CT communication is to ensure that SubConfs involving multiple PACs are started without deadlock. Enhanced subconfigurations ("EnhSubConfs") are SubConfs that are not just executed locally on one PAC but instead may be distributed among multiple PACs. An EnhSubConf may include multiple SubConfs, each started by way of low-level CTs. A PAC may include a PAE group having at least one CT.


In order for multiple EnhSubConfs to be able to run on identical PACs without deadlock, a prioritization of their execution may be defined by a suitable mechanism (for example, within the CTTREE). If SubConfs are to be started from multiple different EnhSubConfs running on the same PACs, then these SubConfs may be started on the respective PACs in a chronological order corresponding to their respective priorities.


Example: Two EnhSubConfs are to be started, namely EnhSubConf-A on PACs 1, 3, 4, 6 and EnhSubConf-B on PACs 3, 4, 5, 6. It is important to ensure that EnhSubConf-A is always configured on PACs 3, 4 and 6 exclusively either before or after EnhSubConf-B. For example, if EnhSubConf-A is configured before EnhSubConf-B on PACs 3 and 4, and if EnhSubConf-A is to be configured on PAC 6 after EnhSubConf-B, a deadlock occurs because EnhSubConf-A could not be started on PAC 6, and EnhSubConf-B could not be started on PACs 3 and 4. Such a case is referred to below as crossed or a cross.


To prevent deadlock, it is sufficient to prevent EnhSubConfs from crossing. If there is an algorithmic dependence between two EnhSubConfs, e.g., if one EnhSubConf must be started after the other on the basis of the algorithm, this is normally resolved by having one EnhSubConf start the other.


Example Protocol


Inter-CT communication may distinguish two types of data:

  • a) a SubConf containing the configuration information,
  • b) an ID chain containing a list of IDs to be started, together with the information regarding on which PAC the SubConf referenced by the ID is to be started. One EnhSubConf may be translated to the individual SubConfs to be executed by an ID chain (a minimal data-structure sketch follows this list): IDEnhSubConf → ID chain {(PAC1: IDSubConf1), (PAC2: IDSubConf2), (PAC3: IDSubConf3), . . . , (PACn: IDSubConfn)}
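The following is a minimal data-structure sketch of such an ID chain (in C). The type names, field widths and MAX_PACS are assumptions made only for this sketch; the SPLIT flag described later in section 6 is included for completeness.

#include <stdint.h>
#include <stdbool.h>

#define MAX_PACS 16

typedef struct {
    uint16_t pac_id;       /* PAC on which the SubConf is to be started */
    uint16_t subconf_id;   /* ID of the SubConf to start there          */
} id_chain_entry_t;

typedef struct {
    uint16_t enh_subconf_id;            /* ID of the EnhSubConf             */
    bool     split;                     /* SPLIT flag (see section 6)       */
    uint8_t  count;                     /* number of (PAC, SubConf) pairs   */
    id_chain_entry_t entry[MAX_PACS];   /* the list itself                  */
} id_chain_t;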


Inter-CT communication may differentiate between the following transmission modes:


REQUEST: The start of an EnhSubConf may be requested by a low-level CT from the higher-level CT, or by a higher-level CT from another CT at an even higher level. This is repeated until reaching a CT which has stored the ID chain or reaching the root CT, which always has the ID chain in memory.


GRANT: A higher-level CT orders a lower-level CT to start a SubConf. This may be either a single SubConf or multiple SubConfs, depending on the ID chain.


GET: A CT requests a SubConf from a higher-level CT by sending the proper ID. If the higher-level CT has stored (cached) the SubConf, it sends this to the lower-level CT; otherwise, it requests the SubConf from an even higher-level CT and sends it to the lower-level CT after receipt. At the latest, the root CT will have stored the SubConf.


DOWNLOAD: Loading a SubConf into a lower-level CT.


REQUEST activates the CTTREE either until reaching the root CT, the highest CT in the CTTREE, or until a CT in the CTTREE has stored the ID chain. The ID chain may only be stored by a CT which contains all the CTs included in the list of the ID chain as leaves or branches. In principle, the root CT (e.g., CTR, as described in PACT10) has access to the ID chain in its memory. GRANT is then sent to all CTs listed in the ID chain. GRANT is sent "atomically." All the branches of a CT may receive GRANT either simultaneously or sequentially but without interruption by any other activity between one of the respective CTs and any other CT which could have an influence on the sequence of the starts of the SubConfs of different EnhSubConfs on the PACs. A low-level CT which receives a GRANT may configure the corresponding SubConf into the PA immediately. The configuration may occur without interruption. Alternatively, the SubConf may be written into FILMO or into a list which gives the configuration sequence. This sequence may be needed to prevent a deadlock. If the SubConf is not already stored in the low-level CT, the low-level CT may need to request the SubConf using GET from the higher-level CT. Local SubConfs (SubConfs that are not called up by an EnhSubConf but instead concern only the local PA) may be configured or loaded into FILMO between GET and the receipt of the SubConf (DOWNLOAD) if allowed or required by the algorithm. SubConfs of another EnhSubConf started by a GRANT received later may be started only after receipt of DOWNLOAD, as well as configuration and loading into FILMO.
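A minimal sketch of how a low-level CT may preserve the GRANT order is given below (in C). The queue size and the helpers subconf_cached(), request_get(), wait_download() and configure_or_filmo() are assumptions made only for this sketch.

#include <stdint.h>
#include <stdbool.h>

extern bool subconf_cached(uint16_t id);       /* assumed cache lookup            */
extern void request_get(uint16_t id);          /* GET to the higher-level CT      */
extern void wait_download(uint16_t id);        /* blocks until DOWNLOAD arrives   */
extern void configure_or_filmo(uint16_t id);   /* configure; rejected KWs to FILMO */

#define QUEUE_LEN 32
static uint16_t grant_queue[QUEUE_LEN];
static unsigned head, tail;   /* fewer than QUEUE_LEN GRANTs assumed outstanding */

/* GRANTs are entered strictly in order of arrival ... */
void on_grant(uint16_t subconf_id)
{
    grant_queue[tail % QUEUE_LEN] = subconf_id;
    tail++;
}

/* ... and configured strictly in that order, which prevents crossing. */
void process_grants(void)
{
    while (head != tail) {
        uint16_t id = grant_queue[head % QUEUE_LEN];
        head++;
        if (!subconf_cached(id)) {
            request_get(id);        /* GET the SubConf first              */
            wait_download(id);      /* local SubConfs may run meanwhile   */
        }
        configure_or_filmo(id);
    }
}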


Examples of the structure of SubConf have been described in patent applications PACT05 and PACT10.


The approach discussed here includes separate handling of call-up of SubConf by ID chains. An ID chain is a SubConf having the following property:


Individual SubConfs may be stored within the CTTREE, e.g., by caching them. A SubConf need not be reloaded completely, but instead may be sent directly to the lower-level CT from a CT which has cached the corresponding SubConf. In the case of an ID chain, all the lower-level CTs may need to be loaded from a central CT according to the protocol described previously. It may be efficient if the CT at the lowest level in the CTTREE, which still has all the PACs listed in the ID chain as leaves, has the ID chain in its cache. CTs at an even lower level should not store anything in their cache, because they are no longer located centrally above all the PACs of the ID chain. Higher-level CTs may lose efficiency because a longer communication link is necessary. If a request reaches a CT having a complete ID chain for the EnhSubConf requested, this CT may trigger GRANTs to the lower-level CTs involved. The information may be split out of the ID chain so that at least the part needed in the respective branches is transmitted. To prevent crossing in such splitting, it may be necessary to ensure that the next CT level will also trigger all GRANTs of its part of the EnhSubConf without being interrupted by GRANTs of other EnhSubConfs. One approach to implementing this is to transmit the respective parts of the ID chain "atomically." To control the caching of ID chains, it may be useful to mark a split ID chain with a "SPLIT" flag, for example, during the transmission.


An ID chain may be split when it is loaded onto a CT which is no longer located centrally within the hierarchy of the CTTREE over all the PACs referenced within the ID chain. In this case, the ID chain may no longer be managed and cached by a single CT within the hierarchy. Multiple CTs may process the portion of the ID chain containing the PACs which are leaves of the respective CT. A REQUEST may need to be relayed to a CT which manages all the respective PACs. It will be appreciated that the first and most efficient CT in terms of hierarchy (from the standpoint of the PACs) which can convert REQUEST to GRANT may be the first CT in ascending order, starting from the leaves, which has a complete, unsplit ID chain. Management of the list having allocations of PAC to ID does not require any further explanation. The list can be processed either by a program running within a CT or it may be created from a series of assembler instructions for controlling lower-level CTs.


A complete ID chain may then have the following structure:


IDEnhSubConf → ID chain {SPLIT, (PAC1: IDSubConf1), (PAC2: IDSubConf2), (PAC3: IDSubConf3), . . . , (PACn: IDSubConfn)}


6.1. Example Procedure for Precaching SubConfs


Within the CTTREE, SubConfs may be preloaded according to certain conditions, e.g., the SubConfs may be cached before they are actually needed. This method may greatly improve performance within the CTTREE.


A plurality of precache requests may be provided. These may include:


a) A load request for an additional SubConf may be programmed within a SubConf being processed on a low-level CT.


b) During data processing within the PA, a decision may be made as to which SubConf is to be preloaded. The CT assigned to the PA may be requested by a trigger. Accordingly, the trigger may be translated to the ID of a SubConf within the CT, to preload a SubConf. It may also be possible for the ID of a SubConf to be calculated in the PA or to be configured in advance in the PA. The message to the assigned CT may contain the ID directly.


The SubConf to be loaded may be cached without being started. The start may take place at the time when the SubConf would have been started without prior caching. The difference is that at the time of the start request, the SubConf is already stored in the low-level CT or one of the middle-level CTs and either may be configured immediately or may be loaded very rapidly onto the low-level CT and then started. This may eliminate a time-consuming run-through of the entire CTTREE.


A compiler, which generates the SubConf, makes it possible to decide which SubConf is to be cached next. Within the program sequence graphs, it may be possible to see which SubConfs could be executed next. These are then cached. The program execution decides in run time which of the cached SubConfs is in fact to be started.


A mechanism may also be provided which removes cached SubConfs to make room in the memory of the CT for other SubConfs. Like precaching, the deletion of certain SubConfs can be predicted by the compiler on the basis of program execution graphs.


Mechanisms for deletion of SubConfs as described in PACT10 (e.g., the one configured last, the one configured first, the one configured least often (see PACT10)) may be provided in the CTs in order to manage the memory of the CT accordingly. It will be appreciated that not only explicitly precached SubConfs can be deleted; any SubConf in a CT memory may generally be deleted. If the garbage collector has already removed a certain SubConf, the explicit deletion becomes invalid and may be ignored.


An explicit deletion can be brought about through a command which may be issued by any SubConf. The deletion may target a SubConf on any CT within the tree, on its own CT, or even the same SubConf (e.g., deletion of the SubConf in which the command stands, in which case correct termination must be ensured).


Another possibility of explicit deletion is to generate, on the basis of a certain status within the PAs, a trigger which is relayed to the CT and analyzed as a request for explicit deletion.


6.2. Interdependencies Among PAEs


For the case when the sequence in which PAEs of different PACs belonging to one EnhSubConf are configured is relevant, an alternative procedure may be provided to ensure that this chronological dependence is maintained. Since there may be multiple CTs in the case of multiple PACs, these CTs may exchange information to determine whether all PAEs which must be configured before the next PAE in each PAC have already accepted their GO from the same configuration. One possibility of breaking up the time dependencies and preventing unallowed GOs from being sent is to exchange the exclusive right to configuration among the CTs. The KWs may be recognized so that a correct order is ensured through the sequence of their configurations and the transfer of the configuration rights. Depending on how strong the dependencies are, it may be sufficient if both CTs configure their respective PA in parallel up to a synchronization point. The CTs may then wait for one another and continue configuring in parallel until the next synchronization point. Alternatively, if no synchronization point is available, the CTs may continue configuring in parallel until the end of the EnhSubConf.


7. Example SubConf Macros


It will be appreciated that caching of SubConf may be especially efficient if as many SubConfs as possible can be cached. Efficient use of caching may be particularly desirable with high-level language compilers, because compilers often generate recurring routines on an assembler level, e.g., on a SubConf level in VPU technology.


In order to maximize reuse of SubConf, special SubConf macros (SubConfM) having the following properties may be introduced:

    • no absolute PAE addresses are given; instead a SubConf is a prelaid-out macro which uses only relative addresses;
    • application-dependent constants are transferred as parameters.


With special SubConf macros, the absolute addresses are not calculated until the time when the SubConf is loaded into the PA. Parameters may be replaced by their actual values. To do so, a modified copy of the original special SubConf may be created so that either (i) this copy is stored in the memory of the CT (integrated FILMO) or (ii) it is written immediately to the PA, and only rejected KWs (REJ) are written into FILMO (separate FILMO). It will be appreciated that in case (ii), for performance reasons, the address adder in the hardware may sit directly on the interface port of the CT to the PA/FILMO. Likewise, hardware implementations of parameter transformation may also be employed, e.g., through a lookup table which is loaded before configuration.
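A minimal sketch of such an instantiation step is given below (in C). The field widths, the lookup-table size and all names are assumptions made only for this sketch; it roughly corresponds to the circuit of FIG. 14 described later.

#include <stdint.h>

typedef struct {
    uint8_t  x, y;        /* relative PAE address within the macro  */
    uint16_t data;        /* configuration data or lookup address   */
    uint8_t  use_lut;     /* 1: translate data through the LUT      */
} macro_kw_t;

typedef struct { uint8_t x_off, y_off; uint16_t lut[256]; } macro_ctx_t;

/* Produces the absolute KW actually written toward the PA or FILMO. */
macro_kw_t instantiate(const macro_kw_t *kw, const macro_ctx_t *ctx)
{
    macro_kw_t out = *kw;
    out.x = (uint8_t)(kw->x + ctx->x_off);        /* add X offset            */
    out.y = (uint8_t)(kw->y + ctx->y_off);        /* add Y offset            */
    if (kw->use_lut)
        out.data = ctx->lut[kw->data & 0xFF];     /* replace parameter value */
    return out;
}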


8. Re-Storing Cache Statistics


International Patent WO 99/44120 (PACT10) describes application-dependent cache statistics and control. This method permits an additional data-dependent optimization of cache performance because the data-dependent program performance is expressed directly in cache optimization.


One disadvantage of the known method is that cache performance is optimized only during run time. When the application is restarted, the statistics are lost. When a SubConf is removed from the cache, its statistics are also lost and are no longer available, even when called up again within the same application run.


In an example embodiment according to the present invention, on termination of an application or removal of a SubConf from the cache, the cache statistics may be sent first together with the respective ID to the next higher-level CT by way of the known inter-CT communication until the root CT receives the respective statistics. The statistics may be stored in a suitable memory, e.g., in a volatile memory, a nonvolatile memory or a bulk memory, depending on the application. The memory may be accessed by way of a host. The statistics may be stored so that they are allocated to the respective SubConf. The statistics may also be loaded again when reloading the SubConf. In a restart of SubConf, the statistics may also be loaded into the low-level CT.


The compiler may either compile neutral blank statistics or generate statistics which seem most suitable for a particular approach. These statistics preselected by the compiler may then be optimized in run time according to the approach described here. The preselected statistics may also be stored and made available in the optimized version the next time the application is called up.


If a SubConf is used by several applications or by different low-level CTs within one application (or if the SubConf is called up from different routines), then it may not be appropriate to keep cache statistics because the request performance and run performance in each case may produce different statistics. Depending on the application, either no statistics are used or a SubconfM may be used.


When using a SubConfM, the transfer of parameters may be extended so that cache statistics are transferred as parameters. If a SubConfM is terminated, the cache statistics may be written back to the SubConf (ORIGIN) which previously called up the SubConfM. In the termination of ORIGIN, the parameters may then be stored together with the cache statistics of ORIGIN. In a subsequent call-up, the statistics may again be transferred as parameters to the SubConfM.


Keeping and storing application-based cache statistics may also be suitable for microprocessors, DSPs, FPGAs and similar modules.


9. Structure of the Configuration Bus System


PACT07 describes an address- and pipeline-based data bus system structure. This bus system is suitable for transmitting configuration data.


In an example embodiment of the present invention, in order to transmit data and configurations over the same bus system, status signals indicating the type of data transmitted may be introduced. The bus system may be designed so that the CT can optionally read back configuration registers and data registers from a PAE addressed previously by the CT.


Global data as described in PACT07, as well as KWs, may be transmitted over the bus system. The CT may act as its own bus node. A status signal may be employed to characterize the transmission mode. For example, the following structure is possible with signals S0 and S1:
















S1  S0  Meaning
0   0   Write data
0   1   Read data
1   0   Write a KW and/or a PAE address
1   1   Return a KW or any register from the addressed PAE
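A minimal sketch of decoding this transmission mode is given below (in C); the enum names are assumptions made only for this sketch.

typedef enum {
    BUS_WRITE_DATA = 0,   /* S1=0 S0=0 */
    BUS_READ_DATA  = 1,   /* S1=0 S0=1 */
    BUS_WRITE_KW   = 2,   /* S1=1 S0=0: write a KW and/or a PAE address          */
    BUS_READ_KW    = 3    /* S1=1 S0=1: return a KW/register from the addressed PAE */
} bus_mode_t;

static bus_mode_t decode_mode(unsigned s1, unsigned s0)
{
    return (bus_mode_t)((s1 << 1) | s0);
}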









The REJ signal may be added to the bus protocol (ACK) according to PACT07 to signal rejects to the CT, as described in the FILMO protocol.


10. Example Procedure for Combining Individual Registers


Independent configuration registers may be used for a logical separation of configuration data. The logical separation may be needed for the differential configuration because logically separated configuration data is not usually known when carrying out a differential configuration. This may result in a large number of individual configuration registers, each individual register containing a comparatively small amount of information. In the following example, the 3-bit configuration values KW-A, B, C, D can be written or modified independently of one another:




















0000 0000 0000 0 KW-A
0000 0000 0000 0 KW-B
0000 0000 0000 0 KW-C
0000 0000 0000 0 KW-D









Such a register set may be inefficient, because only a fraction of the bandwidth of the CT bus is used.


The structure of configuration registers may be greatly optimized by assigning an enable to each configuration value, indicating whether the value is to be overwritten in the current configuration transfer.


Configuration values KW-A, B, C, D of the above example are combined in one configuration register. An enable is assigned to each value. For example, if EN-x is logical “0,” the KW-x is not changed in the instantaneous transfer; if EN-x is logical “1,” KW-x is overwritten by the instantaneous transfer.





















En-A | KW-A | En-B | KW-B | En-C | KW-C | En-D | KW-D
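A minimal sketch of applying such a combined transfer is given below (in C). The assumed layout, one enable bit followed by a 3-bit value per field (16 bits in total), and all names are assumptions made only for this sketch.

#include <stdint.h>

/* One transfer: | En-A | KW-A | En-B | KW-B | En-C | KW-C | En-D | KW-D | */
void apply_transfer(uint8_t kw[4], uint16_t transfer)
{
    for (int i = 0; i < 4; i++) {
        unsigned field = (transfer >> (12 - 4 * i)) & 0xF;  /* En-x plus KW-x       */
        if (field & 0x8)              /* En-x set: overwrite KW-x in this transfer  */
            kw[i] = field & 0x7;      /* En-x clear: the old value is kept          */
    }
}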










11. Wave Reconfiguration (WRC)


PACT13 describes a reconfiguration method ("wave reconfiguration" or "WRC") in which reconfiguration is synchronized directly and chronologically with the data stream. See, e.g., FIG. 24 in PACT13.


The proper functioning of wave reconfiguration may require that unconfigured PAEs can neither accept nor send data or triggers. This means that an unconfigured PAE behaves completely neutrally. This may be provided in VPU technology by using handshake signals (e.g., RDY/ACK) for trigger buses and data buses (see, e.g., U.S. Pat. No. 6,425,068). An unconfigured PAE then generates

    • no RDYs, so no data or triggers are sent,
    • no ACKs, so no data or triggers are received.


This mode of functioning is not only helpful for wave reconfiguration, but it is also one of the possible bases for run time reconfigurability of VPU technology.


An extension of this approach is explained below. Reconfiguration may be synchronized with ongoing data processing. Within data processing in the PA, it is possible to decide

  • i. which next SubConf becomes necessary in the reconfiguration;
  • ii. at what time the SubConf must become active, e.g., with which data packet (ChgPkt) the SubConf must be linked.


The decision as to which configuration is loaded may be made based on conditions and is represented by triggers (wave configuration preload=WCP).


Linking of the data packets to the KWs of a SubConf may be ensured by the data bus protocol (RDY/ACK) and the CT bus protocol (CHECK, ACK/REJ). An additional signal (wave configuration trigger=WCT) may indicate in which data packet (ChgPkt) reconfiguration is to be performed and optionally which new configuration is to be carried out or loaded. WCT can be implemented through simple additional lines or the trigger system of the VPU technology. Multiple WCTs may be used simultaneously in the PA, and each signal may control a different reconfiguration.


11.1. Example Procedure for Controlling the Wave Reconfiguration


It will be appreciated that a distinction may be made between two application-dependent WRCs:


A1) wave reconfiguration within one SubConf,


A2) wave reconfiguration of different SubConfs.


In terms of the hardware, a distinction may be made between two basic types of implementation:


I1) implementation in the CT and execution on request


I2) implementation through additional configuration registers (WRCReg) in the PAEs.


Example embodiments of the WRCRegs are described below. The WRCs may either be

  • a) preloaded by the CT at the time of the first configuration of the respective SubConf, or
  • b) preloaded by the CT during execution of a SubConf depending on incoming WCPs.


During data processing, the WRCRegs that are valid at that time may be selected by one or more WCTs.


The effects of wave reconfiguration on the FILMO principle are discussed below.


11.1.1. Performing WRC According to A1


Reconfiguration by WRC may be possible at any time within a SubConf (A1). First, the SubConf may be configured normally, so the FILMO principle is ensured. During program execution, WRCs may need to use only resources already allocated for the SubConf.


Case I1)


WRC may be performed by differential configuration of the respective PAEs. WCP may be sent to the CT. Depending on the WCP, there may be a jump to a token within the configured SubConf:


An example code is given below:


begin SubConf






    • main:
      • PAE 1, CHECK&GO
      • PAE 2, CHECK&GO
      • . . .
      • PAE n, CHECK&GO
      • set TriggerPort 1//WCT 1
      • set TriggerPort 2//WCT 2

    • scheduler:
      • on TriggerPort 1, do main1//jump depending on WCT
      • on TriggerPort 2, do main2//jump depending on WCT

    • wait:
      • wait for trigger

    • main1:
      • PAE 1, DIFFERENTIAL&GO
      • PAE 2, DIFFERENTIAL&GO
      • . . .
      • PAE n, DIFFERENTIAL&GO
      • wait for trigger

    • main2:
      • PAE 1, DIFFERENTIAL&GO
      • PAE 2, DIFFERENTIAL&GO
      • . . .
      • PAE n, DIFFERENTIAL&GO
      • wait for trigger

    • end SubConf





The interface (TrgIO) between CT and WCP may be configured by “set Triggerport.” According to the FILMO protocol, TrgIO behaves like a PAE with respect to the CT, e.g., TrgIO corresponds exactly to the CHECK, DIFFERENTIAL, GO protocol and responds with ACK or REJ for each trigger individually or for the group as a whole.


If a certain trigger has already been configured, it may respond with REJ.


If the trigger is ready for configuration, it responds with ACK.



FIG. 8 from PACT10 is to be extended accordingly by including this protocol.


Upon receipt of WCT, the respective PAE may start the corresponding configuration.


Case I2)


If the WRCRegs have already been written during the configuration, the WCP may be omitted because the complete SubConf has already been loaded into the respective PAE.


Alternatively, depending on certain WCPs, certain WRCs may be loaded by the CT into different WRCRegs defined in the WRC. This may be necessary when, starting from one SubConf, execution branches off into more different WRCs due to WCTs than are present as physical WRCRegs.


The trigger ports within the PAEs may be configured so that certain WRCRegs are selected due to certain incoming WCTs:


begin SubConf






    • main:
      • PAE1_TriggerPort 1
      • PAE1_TriggerPort 2
      • PAE1_WRCReg1
      • PAE1_WRCReg2
      • PAE1_BASE, CHECK&GO
      • . . .
      • PAE2_TriggerPort 1
      • PAE2_TriggerPort 2
      • PAE2_WRCReg1
      • PAE2_WRCReg2
      • PAE2_BASE, CHECK&GO
      • . . .
      • PAEn_TriggerPort 1
      • PAEn_TriggerPort 2
      • PAEn_WRCReg1
      • PAEn_WRCReg2
      • PAEn_BASE, CHECK&GO

    • endSubConf


11.1.2. Performing WRC According to A2


Case I1)





The CT performing a WRC between different SubConfs corresponds in principle to A1/I1. The trigger ports and the CT-internal sequencing may need to correspond to the FILMO principle. KWs rejected by the PAEs (REJ) may be written to FILMO. These principles have been described in PACT10.


All WCPs may be executed by the CT. It will be appreciated that this may guarantee a deadlock-free (re)configuration. Likewise, the time of reconfiguration, which may be marked by WCT, may be sent to the CT and may be handled atomically by the CT. For example, all PAEs affected by the reconfiguration may receive the reconfiguration request through WCT either simultaneously or at least without interruption by another reconfiguration request. It will be appreciated that this approach may guarantee freedom from deadlock.


Case I2)


If the WRCRegs are already written during the configuration, the WCP may be omitted because the complete SubConf is already loaded into the respective PAE.


Alternatively, depending on certain WCPs, WRCs determined by the CT may be loaded into different WRCRegs defined in the WRC. It will be appreciated that this approach may be necessary when, starting from a SubConf, execution branches off into more different WRCs due to WCTs than there are physical WRCRegs.


Sending several WCTs to different PAEs at different times may need to be prevented because this may result in deadlock. For example: WCT1 of a SubConf SA reaches PAE p1 in cycle t1, and WCT2 of a SubConf SB reaches PAE p2 at the same time. The PAEs are configured accordingly. At time t2, WCT1 reaches p2 and WCT2 reaches p1. A deadlock has occurred. It should also be pointed out that this example can also be applied in principle to A2-I1. It will be appreciated that this is why, in that case, the WCT may be sent through the trigger port of the CT and handled by the CT.


A deadlock may also be prevented by the fact that the WCTs generated by different PAEs (sources) are prioritized by a central instance (ARB). This ensures that exactly one WCT is sent to the respective PAEs in one cycle. Various approaches to prioritization may be used. Example prioritization approaches are listed below.

  • a) An arbiter may be used. For example, the round robin arbiter described in PACT10 is especially suitable. It will be appreciated that the exact chronological order of occurrence of WCTs may be lost.
  • b) If chronological order is to be preserved, the following example methods are suggested:
    • b1) A FIFO first stores the incoming WCTs in order of receipt. WCTs received simultaneously are stored together. If no WCT occurs at a given time, no entry is generated. An arbiter downstream from the FIFO selects one of the entries if there have been several at the same time.
    • b2) A method described in PACT18 permits time sorting of events on the basis of an associated time information (time stamp). The correct chronological order of WCTs may be ensured by analyzing this time stamp.


Suitable relaying of WCTs from ARB to the respective PAEs may ensure that prioritized WCTs are received by the PAEs in the correct order. An example approach to ensuring this order is for all triggers going from ARB to the respective PAEs to have exactly the same length and transit time. This may be ensured by suitable programming. This may also be ensured by a suitable layout through a router, e.g., by adjusting the wiring using registers to compensate for latency at the corresponding points. To ensure correct relaying, the procedure described in PACT18 may also be used for time synchronization of information.


No explicit prioritization of WCPs may be needed because the WCPs sent to the CT may be processed properly by the FILMO principle within the CT. It may be possible to ensure that the time sequence is maintained, e.g., by using the FILMO principle (see 2.1e).


11.1.3. Note for all Cases


The additional configuration registers of the PAEs for wave reconfiguration may be configured to behave according to the FILMO principle, i.e., the registers may support the states described and the sequences implemented and respond to protocols such as CHECK and ACK/REJ.


11.2. Example Reconfiguration Protocols and Structure of WRCReg


The wave reconfiguration procedure will now be described in greater detail. Three alternative reconfiguration protocols are described below.


Normal CT protocol: The CT may reconfigure each PAE individually only after receipt of a reconfiguration request. For example, the CT may receive a reconfiguration request for each PAE reached by ChgPkt. This approach may not be efficient because it entails a very high communication complexity, e.g., for pipelined bus systems.


Synchronized pipeline: This protocol may be much more efficient. The pipelined CT bus may be used as a buffer. The pipeline register assigned to a PAE may store the KWs of this PAE until the PAE can receive the KWs. Although the CT bus pipeline (CBP) is blocked, it can be filled completely with the KWs of the wave reconfiguration.


a) If the CBP runs in the same direction as the data pipeline, a few cycles of latency may be lost: after a PAE has received a KW, the KW of the directly following PAE is not received by its pipeline register until a few cycles later.


b) If the CBP runs opposite the data pipeline, the CBP can be filled completely with KWs which are already available at the specific PAEs. Thus, wave reconfiguration without any time lag may be possible.


Synchronized shadow register: (This protocol may be the most efficient). Immediately after selection of the SubConf (i) and before receipt of ChgPkt (ii), the CT may write new KWs into the shadow registers of all PAEs. The shadow registers may be implemented in any embodiment. The following possibilities are suggested in particular: a) a register stage connected upstream from the actual configuration register, b) a parallel register set which is selected by multiplexers, c) a FIFO stage upstream from the actual configuration registers. At the time when ChgPkt (ii) is received by a PAE, it copies the shadow register into the corresponding configuration register. In the optimum case, this copying may take place in such a way that no working cycle is lost. If no writing into the shadow register takes place (e.g., if it is empty) despite the receipt of ChgPkt, data processing may stop until the KW is received by the shadow register. If necessary, the reconfiguration request may be relayed together with ChgPkt from one PAE to the next within a pipeline.
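A minimal behavioral sketch of the synchronized shadow register is given below (in C). The structure and function names are assumptions made only for this sketch.

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t cfg;          /* active configuration register */
    uint32_t shadow;       /* preloaded by the CT           */
    bool     shadow_valid; /* set when the CT writes the KW */
} pae_cfg_t;

/* Called once per data packet. Returns false if processing must stall
   because ChgPkt arrived before the shadow register was written. */
bool on_packet(pae_cfg_t *p, bool chg_pkt)
{
    if (!chg_pkt)
        return true;                  /* normal packet, keep configuration    */
    if (!p->shadow_valid)
        return false;                 /* stall until the KW reaches the shadow */
    p->cfg = p->shadow;               /* reconfigure without losing a cycle    */
    p->shadow_valid = false;
    return true;
}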


12. Forms of Parallelism and Sequential Processing


Due to a sufficiently high reconfiguration performance, sequential computational models can be mapped in arrays. For example, the low-level CTs may represent a conventional code fetcher. The array may operate with microprogrammable networking as a VLIW-ALU. Different forms of parallelism may be mapped in arrays of computing elements. Examples may include:


Pipelining: Pipelines may be made up of series-connected PAEs. VPU-like protocols may allow simple control of the pipeline.


Instruction level parallelism: Parallel data paths may be constructed through parallel-connected PAEs. VPU-like protocols, e.g., the trigger signals, allow a simple control.


SMP, multitasking and multiuser: Independent tasks may be executed automatically in parallel in one PA. It will be appreciated that this parallel execution may be facilitated by the freedom from deadlock of the configuration methods.


With a sufficient number of PAEs, all the essential parts of conventional microprocessors may be configured on the PA. This may allow sequential processing of a task even without a CT. The CT need not become active again until the configured processor is to have a different functionality, e.g., in the ALU, or is to be replaced completely.


13. Exemplary Embodiments and Diagrams



FIGS. 1 through 3 show the structure of an example SubConf. CW-PAE indicates the number of a KW within a PAE having the address PAE (e.g., 2-3 is the second KW for the PAE having address 3). In addition, the flags are shown (C = check, D = differential, G = go), a set flag being indicated with the "*" symbol.



FIG. 1 illustrates the simplest linear structure of a SubConf. This structure has been described in PACT10. A PAE may be tested during the first configuration (C), then may be configured further (D) and finally is started (G) (see PAE having address 0). Simultaneous testing and starting are also possible (CG). This is illustrated for the PAE having address 1 (0101).



FIG. 2 illustrates a SubConf which has been re-sorted so that a barrier (0201) has been introduced. All PAEs must be tested before the barrier. The barrier then waits until receipt of all ACKs or REJs. If no REJ occurs, the barrier is skipped, the differential configurations are performed, and the PAEs are started. If a REJ occurs, the barrier is not skipped, and instead FILMO runs are executed until no more REJ occurs and then the barrier is skipped. Before the barrier, each PAE must be tested, and only thereafter can the PAEs be configured differentially and started. If testing and starting originally took place in the same cycle, the KW must now be separated (0101 → 0202, 0203).



FIG. 3 illustrates an example SubConf that has been re-sorted so that no barrier is necessary. Instead a latency period during which no further check can be performed is inserted between check and receipt of ACK/REJ. This may be accomplished by combining the KWs into atoms (0301). The first KW of an atom may perform a check (0302). The block may then be filled with differential KWs or optionally NOPs (0303) until the end of the latency period. The number of differential KWs depends on the latency period. For reasons of illustration, a latency period of three cycles has been selected as an example. ACK/REJ is received at 0304. At this point a decision may be made as to whether configuration is to be continued with the next KW, which may (but need not necessarily) contain a check (0305). Alternatively, the configuration may be terminated on the basis of a REJ to preserve the order.


It will be appreciated that in configuring a PAE X, a check may first be performed; then receipt of ACK/REJ is awaited. A PAE that has already been checked may be configured further during this period of time, or NOPs must be introduced. PAE X may then be configured further. Example: Check of PAE (0302), continuation of configuration (0306). At 0307, NOPs may need to be introduced after a check because no differential configurations are available. Points 0308 illustrate the splitting of configurations over multiple blocks (three in this case), with one check being omitted (0309).



FIG. 4 illustrates an example state machine for implementation of PAE states, according to an example embodiment of the present invention. The initial status is IDLE (0401). By configuring the check flag (0405), the state machine goes into the “allocated” state (0402). Configuring the LAST flag (0409, 0408) starts the PAE; the status is “configured” (0404). By local reset (0407) the PAE goes into the “unconfigured” state (0403). In this embodiment, the PAE returns to IDLE only after a query about its status by LOCK/FREE (0406).


Local reset and LAST can also be sent by the CT through a broadcast (see moduleID).
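A minimal sketch of the state machine of FIG. 4 is given below (in C). The enum and event names are assumptions made only for this sketch; the reference numerals in the comments correspond to FIG. 4.

typedef enum { IDLE, ALLOCATED, CONFIGURED, UNCONFIGURED } pae_state_t;
typedef enum { EV_CHECK, EV_LAST, EV_LOCAL_RESET, EV_LOCK_FREE } pae_event_t;

pae_state_t pae_next(pae_state_t s, pae_event_t ev)
{
    switch (s) {
    case IDLE:         return ev == EV_CHECK       ? ALLOCATED    : s;  /* 0405 */
    case ALLOCATED:    return ev == EV_LAST        ? CONFIGURED   : s;  /* 0409, 0408 */
    case CONFIGURED:   return ev == EV_LOCAL_RESET ? UNCONFIGURED : s;  /* 0407 */
    case UNCONFIGURED: return ev == EV_LOCK_FREE   ? IDLE         : s;  /* 0406 */
    }
    return s;
}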



FIGS. 5 through 9 show possible implementations of FILMO procedures, as described in section 5. It will be appreciated that only the relevant subassemblies which function as an interface with the PA are shown. Interfaces with the CT are not described here. These can be implemented as described in PACT10, with minor modifications, if any.



FIG. 5 illustrates the structure of a CT interface to the PA when using a STATELUT according to section 5.1, according to an example embodiment of the present invention. A CT (0501) having RAM and integrated FILMO (0502) is shown in abstracted form and is not detailed further here; the function of the CT is described in PACT10 and PACT05. The CT may inquire as to the status of the PA (0503) by setting the LOCK signal (0504). Each PAE whose status has changed since the last LOCK relays (0506) this change to the STATELUT (0505). This relaying may take place so that the STATELUT can allocate its status uniquely to each PAE. Several conventional approaches may be used for this purpose. For example, each PAE may send its address and status to the STATELUT, which then stores the status of each PAE under its address.


The CT may write KWs (0510) first into a register (0507). At the same time, a lookup may be performed under the address (#) of the PAE pertaining to the respective KW in the STATELUT (0505). If the status of the PAE is "not configured," the CT may receive an ACK (0509), otherwise a REJ. A simple protocol converter (0508) converts an ACK into a RDY in order to write the KW to the PA, and REJ is converted to notRDY to prevent writing to the PA.


It will be appreciated that relaying LOCK, RDY and KW to the PA and in the PA, like the acknowledgment of the status of the PAEs by the PA, may be pipelined, e.g., by running through registers.



FIG. 6 illustrates an example procedure for re-sorting KWs, according to an embodiment of the present invention. This procedure has a relatively low level of complexity. A CT (0601) having integrated FILMO (0602) is modified so that an acknowledgment (0605) (ACK/REJ) is expected only for the first KW (0604) of an atom sent to the PA (0603). The acknowledgment may be analyzed for the last KW of an atom. In the case of ACK, the configuration may be continued with the next atom, and REJ causes termination of configuration of the SubConf.



FIG. 7 illustrates an example FILMO (0701), according to an example embodiment of the present invention. The RELJMP memory (0702) may be assigned to FILMO, each entry in RELJMP being assigned to a FILMO entry. FILMO here is designed as an integrated FILMO, as described in PACT10. It will be appreciated that RELJMP may represent a concatenated list of KWs to be configured. It will also be appreciated that FILMO may contain CT commands and concatenation, as described in PACT10. The concatenated list in RELJMP may be generated as follows: The read pointer (0703) points to the KW which is being configured. The address of the KW rejected (REJ) most recently is stored in 0704. If the KW (0706) being configured is accepted by the PA (0707) (ACK, 0708), then the value stored in 0702 at the address to which 0703 points may be added to 0703. This results in a relative jump.


The KW being configured at the moment may be rejected (REJ, 0708). Then, the difference between 0703 and 0704 may be calculated by a subtractor (0705). The difference may be stored in RelJmp, e.g., at the address of the KW rejected last and stored in 0704. The current value of 0703 may be stored in 0704. Then the value stored in 0702 at the address to which 0703 points may be added to 0703. This yields a relative jump. Control may be assumed by a state machine (0709). The state machine may be implemented according to the sequence described here. The address for RelJmp may be determined by the state machine 0709, e.g., using a multiplexer (0710). Depending on the operation, the address may be selected from 0703 or 0704. To address 0701 and 0702 efficiently and differently at the same time, 0702 may be physically separated from 0701, so that there are two separate memories which can be addressed separately.



0711 illustrates the functioning of the relative addressing. The address pointing at an entry in RelJmp may be added to the content of RelJmp, yielding the address of the next entry.



FIG. 8 illustrates an example procedure for analyzing acknowledgments, a possible implementation of the method, according to an example embodiment of the present invention. Entries in FILMO (0801) may be managed linearly, so RelJmp may not be needed. FILMO 0801 is implemented as a separate FILMO. KWs (0803) written into the PA (0802) may be addressed by a read pointer (0804). All KWs may be written in the order of their configuration into a FIFO or a FIFO-like memory (0805). The FIFO may be implemented as a shift register. The depth of the memory is exactly equal to the number of cycles elapsing from sending a KW to the PA until receipt of the acknowledgment (RDY/ACK, 0806).

    • Upon receipt of a REJ, the rejected KW, which is assigned to the REJ and is at the output of the FIFO, may be written into 0801. REJ is used here as a write signal for FILMO (REJ->WR). The write address may be generated by a write pointer (0807), which may be incremented after the write access.
    • Upon receipt of an ACK, nothing happens, the configured KW assigned to the ACK is ignored and 0807 remains unchanged.


It will be appreciated that this procedure may result in a new linear sequence of rejected KWs in the FILMO. The FILMO may be implemented as a dual-ported RAM with separate read and write ports.



FIG. 9 illustrates an example procedure for pre-fetching, according to an example embodiment of the present invention. It will be appreciated that this procedure is a modification of the procedure described in 5.3.


The KW (0902) to be written into the PA (0901) may be addressed by a read pointer (0909) in FILMO (0910). The address and flags (0902a) of the PAE to be configured may be sent to the PA as a test. The KW having the address of the PAE to be configured may be written to a FIFO-like memory (0903). It will be appreciated that this FIFO may correspond to 0805. 0902a may be transmitted to the PA in a pipeline. Access is analyzed and acknowledged in the PAE addressed. Acknowledgment (RDY/ACK) may also be sent back pipelined (0904). 0903 delays exactly for as many cycles as have elapsed from sending 0902a to the PA until receipt of the acknowledgment (RDY/ACK, 0904).

    • If acknowledged with ACK, the complete KW (0905) (address+data) at the output of 0903 which is assigned to the respective acknowledgment may be pipelined to the PA (0906). No acknowledgment is expected for this, because it is already known that the addressed PAE will accept the KW.
    • In the case of REJ, the KW may be written back into the FILMO (0907). A write pointer (0908), which corresponds to 0807, may be used for this purpose. The pointer may be incremented in this process.



0904 may be converted here by a simple protocol converter (0911) (i) into a write signal (RDY) for the PA (0901) in the case of ACK and (ii) into a write signal (WR) for the FILMO in the case of REJ.


It will be appreciated that a new linear sequence of rejected KWs may be stored in the FILMO. The FILMO may be implemented as a dual-ported RAM with separate read and write ports.



FIG. 10 illustrates an example inter-CT protocol, according to an example embodiment of the present invention. Four levels of CT are shown: the root CT (1001), CTs of two intermediate levels (1002a-b and 1003a-d), the low-level CTs (1004a-h) and their FILMOs (1005a-h). In the PA assigned to 1004e, a trigger may be generated. The trigger cannot be translated to any local SubConf within 1004e. Instead, the trigger may be assigned to an EnhSubConf. CT 1004e may send a REQUEST for this EnhSubConf to CT 1003c; CT 1003c has not cached the ID chain. The EnhSubConf is partially also carried out on CT 1004g, which is not a leaf of CT 1003c. Thus, CT 1003c may relay the REQUEST to CT 1002b. The hatching indicates that CT 1002b might have cached the ID chain because CT 1004g is a leaf of CT 1002b. However, CT 1002b has neither accepted nor cached the ID chain and therefore may request it from CT 1001. CT 1001 may load the ID chain from the CTR, as described in PACT10. CT 1001 may send the ID chain to CT 1002b. This process is referred to below as GRANT. CT 1002b has cached the ID chain because all participating CTs are leaves of CT 1002b. Then CT 1002b may send GRANT to CT 1003c and CT 1003d as an atom, e.g., without interruption by another GRANT. The ID chain may be split here and sent to two different CTs, so none of the receivers may be a common arbiter of all leaves. The SPLIT flag may be set; the receivers and all lower-level CTs can no longer cache the ID chain. CT 1003c and CT 1003d again send GRANT to low-level CTs 1004f and 1004g as an atom. The low-level CTs store the incoming GRANT directly in a suitable list, indicating the order of SubConf to be configured. This list may be designed to be separate, or it may be formed by performing the configuration directly by optionally entering the rejected KWs into FILMO. Two example variants for the low-level CTs:

    • They have already cached the SubConf to be started, corresponding to the ID according to the ID chain. Here, the configuration is started immediately.
    • They have not yet cached the SubConf corresponding to the ID according to the ID chain. Here, they may need to request it first from the higher-level CTs. The request (GET) is illustrated in FIG. 11, where it is again assumed that none of the CTs from the intermediate level has cached the SubConf. Therefore, the respective SubConf may be loaded by the root CT from the CTR and sent to the low-level CTs (DOWNLOAD). This sequence is described in more detail in PACT10.


After receipt of a GRANT, the received GRANT may need to be executed before any other GRANT. For example, if GRANT A is received before GRANT B, then GRANT A may need to be configured before GRANT B. This may also be needed if the SubConf of GRANT A needs to be loaded first while the SubConf of GRANT B would be cached in the low-level CT and could be started immediately. The order of incoming GRANTs may need to be maintained, because otherwise a deadlock can occur among the EnhSubConf.


In an alternative embodiment of the procedure described here, CTs of the CTTREE may directly access configurations without including the higher-level CTs. The CTs may have a connection to any type of volatile memory, nonvolatile memory or bulk memory. For example, this memory may be an SRAM, DRAM, ROM, flash, CDROM, hard drive or server system, which may be connected via a network (WAN, LAN, Internet). It will be appreciated that a CT may directly access a memory for configuration data, bypassing the higher-level CTs. In such a case, the configuration may be synchronized within the CTTREE, including higher-level CTs, e.g., with EnhSubConf.



FIG. 12 illustrates three examples (FIGS. 12a-12c) of a configuration stack of 8 CTs (1201-1208), according to an example embodiment of the present invention. The configuration stack contains the list of SubConfs to be configured. The SubConfs may be configured in the same order as they are entered in the list. For example, a configuration stack may be formed by concatenation of individual SubConfs as described in PACT10 (FIGS. 26 through 28). Another alternative is a simple list of IDs pointing to SubConfs, as shown in FIG. 12. Lower-level entries may be configured first, and higher-level entries may be configured last. FIG. 12a illustrates two EnhSubConfs (1210, 1211) which are positioned correctly within the configuration stack of the individual CTs. The individual SubConfs of the EnhSubConfs are configured in the proper order without a deadlock. The order of GRANTs was preserved.


The example in FIG. 12b is also correct. Three EnhSubConfs are shown (1220, 1221, 1222). 1220 is a large EnhSubConf affecting all CTs. 1221 pertains only to CTs 1202-1206, and 1222 pertains only to CTs 1207 and 1208. All SubConfs are configured in the proper order without a deadlock. The GRANT for 1222 was processed completely before the GRANT for 1220, and the latter was processed before the GRANT for 1221.


The example in FIG. 12c illustrates several deadlock situations. In 1208, the order of GRANTs from 1230 and 1232 has been reversed, resulting in resources for 1230 being occupied in the PA allocated to 1208 and resources for 1232 being occupied in the PA allocated to 1208. These resources are always allocated in a fixed manner. This results in a deadlock, because no EnhSubConf can be executed or configured to the end.


Likewise, GRANTs of 1230 and 1231 are also chronologically reversed in CTs 1204 and 1205. This also may result in a deadlock for the same reasons.



FIG. 13a illustrates a performance-optimized version of inter-CT communication according to an example embodiment of the present invention. A download may be performed directly to the low-level CT. Here, mid-level CTs need not first receive, store and then relay the SubConfs. Instead, these CTs may "listen" (1301, 1302, 1303, LISTENER) and cache the SubConfs. An example schematic bus design is illustrated in FIG. 13b, according to an example embodiment of the present invention. A bypass (1304, 1305, 1306) may carry the download past the mid-level CTs. This bypass may be provided as a register.



FIG. 14 illustrates an example circuit providing simple configuration of SubConf macros, according to an example embodiment of the present invention. The example circuit may be provided between a CT and a PA. A KW may be transmitted by the CT over the bus (1401). The KW is broken down into its configuration data (1402) plus PAE addresses X (1403) and Y (1404). It will be appreciated that, in the case of multidimensional addressing, more addresses may be broken down. 1405 adds an X offset to the X address, and 1406 adds a Y offset to the Y address. The offsets may be different and may be stored in a register (1407). The parameterizable part of the data (1408) may be sent as an address to a lookup table (1409) where the actual values are stored. The values may be linked (1410) to the nonparameterizable data (1412). A multiplexer (1413) may be used to select whether a lookup is to be performed or whether the data should be used directly without lookup. The choice may be made using a bit (1411). All addresses and the data may be linked again and sent on a bus to the PA. Depending on implementation, the FILMO may be connected upstream or downstream from the circuit described here. Integrated FILMOs may be connected upstream, and separate FILMOs may be connected downstream. The CT may set the address offsets and the parameter translation in 1409 via bus 1415. 1409 may be implemented as a dual-ported RAM.


A corresponding KW may be structured as follows:

X address | Y address | Data | Address for 1409 | MUX = 1
X address | Y address | Data | Data             | MUX = 0

If MUX = 1, then a lookup may be performed in 1409. If MUX = 0, data may be relayed directly to 1414.
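
For illustration, the following sketch models the relocation and parameter-lookup path of FIG. 14 in software; a hardware implementation would of course perform these steps combinationally. All type and field names (kw_t, relocator_t, x_off, lut, etc.) are illustrative assumptions and not part of the described circuit.

```c
/* Hypothetical software model of the KW relocation/lookup path of FIG. 14. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  x;        /* PAE X address (1403) */
    uint8_t  y;        /* PAE Y address (1404) */
    uint16_t data;     /* configuration data (1402); used as LUT address if mux == 1 */
    uint8_t  mux;      /* 1: translate data through lookup table 1409; 0: use data directly */
} kw_t;

typedef struct {
    uint8_t  x_off, y_off;   /* address offsets (1407), set by the CT via bus 1415 */
    uint16_t lut[256];       /* parameter lookup table (1409), e.g. a dual-ported RAM */
} relocator_t;

/* Applies the offsets 1405/1406 and the optional lookup 1409/1413, then returns
 * the KW as it would be sent towards the PA. */
static kw_t relocate(const relocator_t *r, kw_t kw)
{
    kw.x += r->x_off;
    kw.y += r->y_off;
    if (kw.mux)                      /* MUX = 1: data field addresses 1409 */
        kw.data = r->lut[kw.data & 0xFF];
    return kw;                       /* MUX = 0: data relayed unchanged */
}

int main(void)
{
    relocator_t r = { .x_off = 4, .y_off = 2 };
    r.lut[7] = 0x1234;                                   /* parameter preloaded by the CT */
    kw_t in  = { .x = 1, .y = 1, .data = 7, .mux = 1 };
    kw_t out = relocate(&r, in);
    printf("PAE(%u,%u) <- 0x%04X\n",
           (unsigned)out.x, (unsigned)out.y, (unsigned)out.data);
    return 0;
}
```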







FIG. 15 illustrates the execution of an example graph, according to an example embodiment of the present invention. The next possible nodes (1 . . . 13) of the graph may be preloaded (prefetch), and preceding nodes and unused jumps may be deleted (delete). Within a loop, the nodes of the loop are not deleted (10, 11, 12), and corresponding nodes are removed only after termination. Nodes may be loaded only if they are not already present in the memory of the CT. Therefore, multiple processing of 11 need not result in multiple loading of 12 or 10; e.g., “delete 8, 9” is ignored in 11 if 8 and/or 9 has already been removed.
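
The prefetch/delete bookkeeping described above may be illustrated by the following minimal sketch, which assumes a simple flag per node marking whether its SubConf is currently held in the CT memory; all names, and the node numbering, follow the example of FIG. 15 only for illustration.

```c
/* Hypothetical sketch of the prefetch/delete bookkeeping of FIG. 15:
 * a node is loaded only if it is not already present in the CT memory,
 * and a delete request for a node that is already gone is ignored. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 32
static bool loaded[MAX_NODES];          /* SubConfs currently held in the CT memory */

static void prefetch(int node)
{
    if (!loaded[node]) {                /* load only if not already cached */
        loaded[node] = true;
        printf("load %d\n", node);
    }
}

static void delete_node(int node)
{
    if (loaded[node]) {                 /* e.g. "delete 8, 9" is ignored if already removed */
        loaded[node] = false;
        printf("delete %d\n", node);
    }
}

int main(void)
{
    prefetch(10); prefetch(11); prefetch(12);   /* loop body stays resident */
    delete_node(8); delete_node(9);             /* predecessors removed once */
    prefetch(12);                               /* re-entering 11: no second load of 12 */
    delete_node(8);                             /* ignored, 8 is already gone */
    return 0;
}
```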



FIG. 16 illustrates multiple instantiation of an example SubConf macro (1601), according to an example embodiment of the present invention. Various SubConfs (1602, 1603, 1604) call up 1601. Parameters for 1601 may be preloaded (1610) in a lookup table (1605) by the requesting SubConf. 1605 is implemented only once but is shown several times in FIG. 16 to represent the various contents.



1601 may be called up. The KWs may be transmitted to 1605, 1606 and 1607. These elements operate as follows: Based on a lookup, the corresponding content of 1605 is linked again (1606) to the KWs. The KW is sent to the PA (1608) after the multiplexer 1413 (1607) selects whether the original KW is valid or whether a lookup has been performed.
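
As a minimal illustration of this multiple instantiation, the following sketch assumes a single shared parameter table standing in for 1605, which each calling SubConf reloads before the KWs of the macro 1601 are issued; the names and values are purely illustrative.

```c
/* Hypothetical sketch of macro instantiation per FIG. 16: one SubConf macro (1601)
 * is reused by several callers, each preloading its own parameters into the
 * shared lookup table (1605) before the macro's KWs are issued. */
#include <stdint.h>
#include <stdio.h>

static uint16_t lut[4];                         /* shared parameter table (1605) */

static void issue_macro(const char *caller)     /* the KWs of 1601, parameterized via lut */
{
    printf("%s: KW0=%u KW1=%u\n", caller, (unsigned)lut[0], (unsigned)lut[1]);
}

int main(void)
{
    lut[0] = 100; lut[1] = 200;  issue_macro("SubConf 1602");   /* first instantiation  */
    lut[0] = 300; lut[1] = 400;  issue_macro("SubConf 1603");   /* same macro, new parameters */
    return 0;
}
```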



FIG. 17 shows the sequence of an example wave reconfiguration, according to an example embodiment of the present invention. Areas shown with simple hatching represent data-processing PAEs, with 1701 representing PAEs after reconfiguration and 1703 representing PAEs before reconfiguration. Areas shown with crosshatching (1702) indicate PAEs which are in the process of being reconfigured or are waiting for reconfiguration.



FIG. 17a shows the influence of wave reconfiguration on a simple sequential algorithm, according to an example embodiment of the present invention. Exactly those PAEs to which a new function has been allocated may be reconfigured. Since a PAE can receive a new function in each cycle, this may be performed efficiently, e.g., simultaneously.


One row of PAEs from the matrix of all PAEs of a VPU is shown as an example. The states in the cycles after cycle t are shown with a delay of one cycle each.



FIG. 17b illustrates the time effect of reconfiguration of large portions of a VPU, according to an example embodiment of the present invention. A number of PAEs of one VPU is shown as an example, indicating the states in the cycles after cycle t with a different delay of several cycles each.


Although at first only a small portion of the PAEs is reconfigured or is waiting for reconfiguration, this area becomes larger over time, until all the PAEs have been reconfigured. The increase in size of this area (1702) shows that, due to the time delay in reconfiguration, more and more PAEs are waiting for reconfiguration. This may result in lost computing performance.


A broader bus system may be used between the CT (in particular, the memory of the CT) and the PAEs, providing enough lines to reconfigure several PAEs at the same time within one cycle.

















Not configured | Wave trigger | W | C | D | Result
X              |              | X | X |   | Wave reconfiguration
X              |              | X |   | X | REJ
               | X            | X | X |   | REJ
               | X            | X |   | X | Differential wave reconfiguration
               |              |   |   |   | Normal configuration










FIG. 18 illustrates example configuration strategies for a reconfiguration procedure like the "synchronized shadow register", according to an example embodiment of the present invention. The CT (1801), as well as one of several PAEs (1804), are shown schematically, with only the configuration registers (1802, 1803) within the PAE and a unit for selecting the active configuration (1805) being illustrated. To simplify the drawings, additional functional units within the PAE have not been shown. Each CT has n SubConfs (1820). In the cases -I1, the corresponding KWs of one SubConf are loaded when a WCP occurs (1(n)); in the cases -I2, the KWs of m SubConfs out of the total number of n are loaded (m(n)). The different tie-ins of WCT (1806) and WCP (1807) are shown, as are the optional WCPs (1808), as described below.


In A1-I1, a next configuration may be selected within the same SubConf by a first trigger WCP. This configuration may use the same resources; alternatively, resources may be used that are already prereserved and are not occupied by any SubConf other than the one optionally generating the WCP. The configuration may be loaded by the CT (1801). In the example shown here, the configuration is not executed directly, but instead is loaded into one of several alternative registers (1802). By a second trigger WCT, one of the alternative registers is selected at the time of the required reconfiguration. This causes the configuration previously loaded on the basis of WCP to be executed.


It will be appreciated that a certain configuration may be determined and preloaded by WCP. The time of the actual change in function corresponding to the preloaded reconfiguration may be determined by WCT.


WCP and WCT may each be a vector, so that one of several configurations may be preloaded by WCP(v1). The configuration to be preloaded may be specified by the source of WCP. Accordingly, WCT(v2) may select one of several preloaded configurations. In this case, a number of configuration registers 1802 corresponding to the quantity of configurations selectable by v2 may be needed. The number of such registers may be fixedly predetermined so that v2 corresponds to the maximum number.


An example version having a register set 1803 with a plurality of configuration registers 1802 is shown in A1-I2. If the number of registers in 1803 is large enough that all possible following configurations can be preloaded directly, the WCP can be eliminated. In this case, only the time of the change of function as well as the change itself may need to be specified by WCT(v2).
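
For illustration, the following sketch models the shadow-register scheme in software, assuming a small array of shadow registers standing in for 1802/1803 and two functions standing in for the WCP and WCT events; names and sizes are illustrative assumptions, not the hardware implementation.

```c
/* Hypothetical model of the "synchronized shadow register" scheme of FIG. 18:
 * WCP(v1) preloads a configuration into one of several shadow registers (1802),
 * WCT(v2) later selects which preloaded register becomes active (1805). */
#include <stdint.h>
#include <stdio.h>

#define N_SHADOW 4

typedef struct {
    uint32_t shadow[N_SHADOW];   /* configuration registers 1802 / register set 1803 */
    uint32_t active;             /* configuration currently executed by the PAE      */
} pae_cfg_t;

/* WCP: the CT preloads configuration 'cfg' into shadow slot v1 (data processing continues). */
static void on_wcp(pae_cfg_t *p, unsigned v1, uint32_t cfg) { p->shadow[v1 % N_SHADOW] = cfg; }

/* WCT: at the reconfiguration time, slot v2 is switched in as the active configuration. */
static void on_wct(pae_cfg_t *p, unsigned v2) { p->active = p->shadow[v2 % N_SHADOW]; }

int main(void)
{
    pae_cfg_t pae = { .active = 0xA0A0 };
    on_wcp(&pae, 1, 0xBEEF);          /* preload the next configuration */
    /* ... data processing of the current configuration continues ...   */
    on_wct(&pae, 1);                  /* ChgPkt reached: activate the preloaded configuration */
    printf("active = 0x%X\n", (unsigned)pae.active);
    return 0;
}
```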


A2-I1 illustrates an example WRC in which the next configuration does not utilize the same resources, or in which its resources are not prereserved or are occupied by a SubConf other than the one optionally generating the WCP(v1). Freedom from deadlock of the configuration may be guaranteed by the FILMO-compliant response and by the configuration on WCP(v1). The CT may also start configurations by WCT(v2) (1806) through a FILMO-compliant atomic response to the receipt of triggers (ReconfReq) characterizing a reconfiguration time.


In A2-I2, all the following SubConfs may be preloaded into the configuration register set 1803 with the first loading of a SubConf. Alternatively, if the number of configuration registers is not sufficient, the following SubConfs may be reloaded by the CT, e.g., in response to a WCP(v1).


The triggers (ReconfReq, 1809), which may determine a reconfiguration time and trigger the actual reconfiguration, may first be isolated in time by way of a suitable prioritizer (1810). The triggers may then be sent as WCT(v2) to the PAEs so that exactly one WCT(v2) is active on a PAE at any time, and the order of incoming WCT(v2)s is the same for all the PAEs involved.


In the case of A2-I1 and A2-I2, an additional trigger system may be used. During processing of WCT by CT 1801, i.e., in processing by 1810, there may be a considerable delay until relaying to PAE 1804. However, the timing of ChgPkt may need to be rigorously observed, because otherwise the PAEs may process the following data incorrectly. Therefore, another trigger (1811, WCS = wave configuration stop) may be used. The WCS trigger only stops data processing of the PAEs until the new configuration has been activated by arrival of the WCT. WCS may be generated within the SubConf active at that time. ReconfReq and WCS may be identical, because if ReconfReq is generated within the currently active SubConf, this signal may indicate that ChgPkt has been reached.



FIG. 19 illustrates an alternative implementation of A1-I2 and A2-I2, according to an example embodiment of the present invention. A FIFO memory (1901) may be used to manage the KWs instead of using a register set. The order of SubConfs preselected by WCP may be fixed. On occurrence of WCT (or alternatively WCS, represented by 1902), only the next configuration can be loaded from the FIFO. The function of WCS, e.g., stopping ongoing data processing, may be exactly the same as that described in conjunction with FIG. 18.
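
A minimal sketch of the FIFO variant, assuming a simple ring buffer in place of the FIFO memory 1901, might look as follows; the function names are illustrative only.

```c
/* Hypothetical sketch of the FIFO variant of FIG. 19: the order of preloaded
 * configurations is fixed, and each WCT (or WCS) simply activates the next one. */
#include <stdint.h>
#include <stdio.h>

#define FIFO_DEPTH 8
typedef struct { uint32_t buf[FIFO_DEPTH]; unsigned head, tail; } cfg_fifo_t;

static void fifo_push(cfg_fifo_t *f, uint32_t cfg)   /* WCP: CT preloads in fixed order      */
{ f->buf[f->tail++ % FIFO_DEPTH] = cfg; }

static uint32_t fifo_pop(cfg_fifo_t *f)              /* WCT/WCS: activate next configuration */
{ return f->buf[f->head++ % FIFO_DEPTH]; }

int main(void)
{
    cfg_fifo_t fifo = {0};
    fifo_push(&fifo, 0x11); fifo_push(&fifo, 0x22);   /* order preselected by WCP */
    printf("first WCT activates 0x%X\n", (unsigned)fifo_pop(&fifo));
    printf("next  WCT activates 0x%X\n", (unsigned)fifo_pop(&fifo));
    return 0;
}
```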



FIG. 20 illustrates a section of a row of PAEs carrying out a reconfiguration method like the “synchronized pipeline” according to an example embodiment of the present invention. One CT (2001) may be allocated to multiple CT interface subassemblies (2004) of PAEs (2005). 2004 may be integrated into 2005 and is shown with an offset only to better illustrate the function of WAIT and WCT. It will be appreciated that signals for transmission of configuration data from 2004 to 2005 are not shown here.


The CT may be linked to the PAE interfaces 2004 by a pipelined bus system, 2002 representing the pipeline stages. 2002 may include a register (2003b) for the configuration data (CW) and another register (2003a) having an integrated decoder and logic. Register 2003a may decode the address transmitted in the CW and send a RDY signal to 2004 if the respective local PAE is addressed. Register 2003a may send a RDY signal to the next stage 2002 if the local PAE is not addressed. Accordingly, 2003a may receive the acknowledgment (GNT) from 2002 or 2004, e.g., as a RDY/ACK. This results in a pipelined bus which transmits the CW from the CT to the addressed PAE and its acknowledgment back to the CT.


When WCT is active at 2004, pending CWs which are characterized with WAVE as part of the description may be configured in 2004. Here, GNT may be acknowledged with ACK. If WCT is not active but CWs are pending for configuration, then GNT may not be acknowledged. The pipeline may be blocked until the configuration has been performed.


If 2005 is expecting a wave reconfiguration, characterized by an active WCT, and no CWs characterized with WAVE are already present at 2004, then 2004 may acknowledge with WAIT. This may put the PAE (2005) in a waiting, non-data-processing status until CWs characterized with WAVE have been configured in 2004. CWs that have not been transmitted with WAVE may be rejected with REJ during data processing.
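
The acknowledgment rules just described may be summarized by the following sketch of the decision made by a CT interface 2004; the enum values and the function are illustrative assumptions, not the bus implementation.

```c
/* Hypothetical decision logic of a CT interface (2004) in the synchronized
 * pipeline of FIG. 20, reduced to the acknowledgment rules described above. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { ACK, WAIT, REJ, BLOCK } reply_t;

/* wct_active : PAE expects a wave reconfiguration (WCT pending)
 * cw_pending : a configuration word is pending at 2004
 * cw_is_wave : the pending CW is characterized with WAVE        */
static reply_t interface_reply(bool wct_active, bool cw_pending, bool cw_is_wave)
{
    if (cw_pending && cw_is_wave)
        return wct_active ? ACK        /* configure; GNT acknowledged with ACK          */
                          : BLOCK;     /* hold the pipeline until the configuration occurs */
    if (wct_active && !cw_pending)
        return WAIT;                   /* PAE waits, not processing data, for WAVE CWs  */
    if (cw_pending && !cw_is_wave)
        return REJ;                    /* non-WAVE CW rejected during data processing   */
    return ACK;                        /* nothing to do for this PAE                    */
}

int main(void)
{
    printf("%d %d %d\n",
           interface_reply(true,  true,  true),    /* -> ACK  */
           interface_reply(true,  false, false),   /* -> WAIT */
           interface_reply(false, true,  false));  /* -> REJ  */
    return 0;
}
```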


It will be appreciated that optimization may be performed by special embodiments for particular applications. For example, incoming CWs characterized with WAVE and the associated reconfiguration may be stored temporarily by a register stage in 2004, preventing blocking of the pipeline if CWs sent by the CT are not accepted immediately by the addressed 2004. For further illustration, 2010 and 2011 may be used to indicate the direction of data processing.


If data processing proceeds in direction 2010, a rapid wave reconfiguration of the PAEs is possible as follows. The CT may send CWs characterized with WAVE into the pipeline so that the CWs of the most remote PAE are sent first. If CWs cannot be configured immediately, the most remote pipeline stage (2002) may be blocked. Then, the CT may send the CWs of the PAE which is then the most remote, and so forth, until the CWs are ultimately sent to the PAE next to the CT.


As soon as ChgPkt runs through the PAEs, the new CWs may be configured in each cycle. It will be appreciated that this approach may also be efficient if ChgPkt is running simultaneously with transmission of CWs from the CT through the PAEs. In this case, the respective CW required for configuration may also be pending at the respective PAE in each cycle.


If data processing proceeds in the opposite direction (2011), the pipeline may optionally be configured from the PAE most remote from the CT to the PAE next to the CT. If ChgPkt does not take place simultaneously with data transmission of the CWs, the method may remain optimal. On occurrence of ChgPkt, the CWs may be transmitted immediately from the pipeline to 2004.


However, if ChgPkt appears simultaneously with CWs of wave reconfiguration, this may result in waiting cycles. For example, PAE B is to be configured on occurrence of ChgPkt in cycle n. CWs are pending and are configured in 2004. In cycle n+1, ChgPkt (and thus WCT) are pending at PAE C. However, in the best case, CWs of PAE C are transmitted only to 2002 of PAE B in this cycle, because in the preceding cycle, 2002 of PAE B was still occupied with its CW. Only in cycle n+2 are the CWs of PAE C in 2002 and can be configured. A waiting cycle has occurred in cycle n+1.



FIG. 21 illustrates a general synchronization strategy for a wave reconfiguration, according to an example embodiment of the present invention. A first PAE 2101 may recognize the need for reconfiguration on the basis of a status that is occurring. This recognition may take place according to the usual methods, e.g., by comparison of data or states. Due to this recognition, 2101 sends a request (2103) to one or more PAEs (2102) to be reconfigured. This may be accomplished through a trigger, and it may stop the data processing. In addition, 2101 sends a signal (2105), which may also be the same as signal 2103, to a CT (2104) to request reconfiguration. CT 2104 may reconfigure 2102 (2106). After successful reconfiguration of all PAEs to be reconfigured, the CT may inform 2101 (2107) regarding the end of the procedure, e.g., by way of reconfiguration. Then 2101 may take back stop request 2103, and data processing may be continued. Here, 2108 and 2109 each symbolize data and trigger inputs and outputs.
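
For illustration, the sequence of signals 2103, 2105, 2106 and 2107 may be sketched as follows; the functions merely print the protocol steps and are not an implementation of the trigger system.

```c
/* Hypothetical sequence sketch of the synchronization strategy of FIG. 21:
 * PAE 2101 detects the reconfiguration condition, stops the PAEs 2102 (2103),
 * requests reconfiguration from the CT (2105), and releases the stop after
 * the CT signals completion (2107). */
#include <stdbool.h>
#include <stdio.h>

static bool stop_2103;                       /* stop request from 2101 to 2102 */

static void pae_2101_detects_condition(void)
{
    stop_2103 = true;                        /* 2103: halt data processing in 2102 */
    printf("2105: reconfiguration requested from CT 2104\n");
}

static void ct_2104_reconfigures(void)
{
    printf("2106: CT reconfigures PAEs 2102\n");
    printf("2107: CT signals end of procedure to 2101\n");
    stop_2103 = false;                       /* 2101 takes back the stop request */
}

int main(void)
{
    pae_2101_detects_condition();
    ct_2104_reconfigures();
    printf("data processing continues: %s\n", stop_2103 ? "no" : "yes");
    return 0;
}
```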



FIG. 22 illustrates an example approach for using routing measures to ensure a correctly timed relaying of WCT, according to an example embodiment of the present invention. Several WCTs may be generated for different PAEs (2201) by a central instance (2203). The WCTs may need to be coordinated with one another in time. The different distances to PAEs 2201 in the matrix may result in different transmit times or latency times. Timing coordination may be achieved in the present example through suitable use of pipeline stages (2202). These may be allocated using a router assigned to the compiler, as described in PACT13. The resulting latencies are indicated here as d1-d5. It can be seen here that the same latencies occur in the direction of data flow (2204) in each stage (column). For example, 2205 would not be necessary per se, because the distance of 2206 from 2203 is very small. However, one 2202 each must be inserted for 2207 and 2208 because of the transit time resulting from the longer distance, so 2205 may be needed to equalize the transmit time.
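
A minimal sketch of this latency equalization, assuming known raw transit times for the PAEs of one column, might determine the number of pipeline stages 2202 to insert as follows; the transit times used are purely illustrative.

```c
/* Hypothetical sketch of the routing measure of FIG. 22: for each data-flow
 * stage (column), enough pipeline registers (2202) are inserted into the
 * shorter WCT paths that every PAE of that column sees the same latency. */
#include <stdio.h>

int main(void)
{
    /* raw WCT transit times (in cycles) from the central instance 2203
     * to three PAEs of one column; values are purely illustrative */
    int transit[3] = { 1, 3, 3 };
    int n = 3, max = 0;

    for (int i = 0; i < n; i++)
        if (transit[i] > max) max = transit[i];

    for (int i = 0; i < n; i++)
        /* registers to insert so that transit[i] + inserted == max for all i */
        printf("PAE %d: insert %d pipeline stage(s)\n", i, max - transit[i]);
    return 0;
}
```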



FIG. 23 illustrates an example application of wave reconfiguration, according to an example embodiment of the present invention. This figure also illustrates optional utilization of PAE resources or reconfiguration time to perform a task, yielding an intelligent trade-off between cost and performance that can be adjusted by the compiler or the programmer.


A data stream is to be calculated (2301) in an array (2302) of PAEs (2304-2308). A CT (2303) assigned to the array is responsible for its reconfiguration. 2304 is responsible for recognition of the end state of data processing which makes reconfiguration necessary. This recognition is signaled to the CT. 2306 marks the beginning and 2309 the end of a branch represented by 2307a, 2307b or 2307ab. PAEs 2308 are not used. The various triggers are represented by 2309.


In FIG. 23a, one of the two branches 2307a, 2307b may be selected by 2305 and activated by a trigger simultaneously with the data received from 2306.


In FIG. 23b, branches 2307a and 2307b need not both be completely preconfigured; instead, the two possible branches may share the resources 2307ab by reconfiguration. 2305 again selects the branch necessary for data processing. Information may now be sent to 2303 and also to 2306 to stop data processing until reconfiguration of 2307ab has been completed, according to FIG. 21.



FIG. 24 illustrates an example implementation of a state machine for sequence control of the PAE, according to an example embodiment of the present invention. The following states may be implemented:


Not Configured (2401)


Allocated (2402)


Wait for lock (2403)


Configured (2404)


The following signals may trigger a change of status (a schematic sketch follows the signal list):


LOCK/FREE (2404, 2408)


CHECK (2405, 2407)


RECONFIG (2406, 2409)


GO (2410, 2411)
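
For illustration, the states and signals listed above may be sketched as a small transition function; the particular mapping of signals to transitions shown here is an assumption made for the example and is not taken from the figure.

```c
/* Hypothetical sketch of the sequence-control state machine of FIG. 24.
 * Only the states and signals listed above are modeled; the transitions
 * are assumed for illustration. */
#include <stdio.h>

typedef enum { NOT_CONFIGURED, ALLOCATED, WAIT_FOR_LOCK, CONFIGURED } pae_state_t;
typedef enum { SIG_LOCK, SIG_FREE, SIG_CHECK, SIG_RECONFIG, SIG_GO } pae_signal_t;

static pae_state_t step(pae_state_t s, pae_signal_t sig)
{
    switch (s) {
    case NOT_CONFIGURED: return (sig == SIG_CHECK)    ? ALLOCATED      : s;
    case ALLOCATED:      return (sig == SIG_LOCK)     ? WAIT_FOR_LOCK  :
                                (sig == SIG_RECONFIG) ? NOT_CONFIGURED : s;
    case WAIT_FOR_LOCK:  return (sig == SIG_GO)       ? CONFIGURED     :
                                (sig == SIG_FREE)     ? ALLOCATED      : s;
    case CONFIGURED:     return (sig == SIG_RECONFIG) ? NOT_CONFIGURED : s;
    }
    return s;
}

int main(void)
{
    pae_state_t s = NOT_CONFIGURED;
    s = step(s, SIG_CHECK);     /* -> ALLOCATED     */
    s = step(s, SIG_LOCK);      /* -> WAIT_FOR_LOCK */
    s = step(s, SIG_GO);        /* -> CONFIGURED    */
    printf("final state = %d\n", s);
    return 0;
}
```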



FIG. 25 illustrates an example high-level language compiler, according to an example embodiment of the present invention. This compiler has also been described in PACT13. The compiler may translate ordinary sequential high-level languages (C, Pascal, Java) to a VPU system. Sequential code (2511) may be separated from parallel code (2508) so that 2508 is processed directly in the array of PAEs.


There are four possible embodiments for 2511:


1. Within a sequencer of a PAE. (See PACT13, 2910)


2. By using a sequencer configured into the VPU. The compiler generates a sequencer optimized for the task, while directly generating the algorithm-specific sequencer code (see PACT13, 2801).


3. On an ordinary external processor. (See PACT13, 3103)


4. By rapid configuration by a CT. Here, the ratio between the number of PAEs within a PAC and the number of PACs may be selected so that one or more PACs can be set up as dedicated sequencers. The dedicated sequencer's op codes and command execution may be configured by the respective CT in each operating step. The respective CT may respond to the status of the sequencer to determine the following program sequence. The status may be transmitted by the trigger system. Which of these possibilities is selected may depend on the architecture of the VPU, the computer system and the algorithm.


This principle was described generally in PACT13. However, the example embodiment of the present invention may include extensions of the router and placer (2505).


The code (2501) may first be separated in a preprocessor (2502) into data flow code (2516) and ordinary sequential code (2517). The data flow code may be written in a special version of the respective programming language optimized for data flow. 2517 may be tested for parallelizable subalgorithms (2503) and the sequential subalgorithms may be sorted out (2518). Parallelizable subalgorithms may be placed and routed as macros on a provisional basis.


In an iterative procedure, the macros may be placed, routed and partitioned (2505) together with the data flow-optimized code (2513). Statistics (2506) evaluate the individual macros as well as their partitioning with regard to efficiency, reconfiguration time, and the complexity of the reconfiguration. Inefficient macros may be removed and sorted out as sequential code (2514).
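
A minimal sketch of this iterative evaluation, assuming a single scalar efficiency score per macro and a fixed threshold, might look as follows; the real evaluation also weighs reconfiguration time and complexity, and all names are illustrative.

```c
/* Hypothetical sketch of the evaluation step 2505/2506/2514 described above:
 * provisionally placed macros are scored, and macros that fall below an
 * efficiency threshold are demoted to sequential code. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { const char *name; double efficiency; bool parallel; } macro_t;

static void partition(macro_t *m, int n, double threshold)
{
    for (int i = 0; i < n; i++) {
        m[i].parallel = (m[i].efficiency >= threshold);      /* statistics 2506 */
        printf("%s -> %s\n", m[i].name,
               m[i].parallel ? "parallel code 2515" : "sequential code 2514");
    }
}

int main(void)
{
    macro_t macros[] = { {"fir", 0.9, false}, {"parser", 0.2, false} };
    partition(macros, 2, 0.5);
    return 0;
}
```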


The remaining parallel code (2515) may be compiled and assembled (2507) together with 2516. VPU object code may be output (2508).


Statistics regarding the efficiency of the code generated as well as individual macros (including those removed with 2514) may be output (2509). It will be appreciated that the programmer thus receives important information regarding optimization of the speed of the program.


Each macro of the remaining sequential code may be tested for its complexity and requirements (2520). The suitable sequencer in each case may be selected from a database (2519), which depends on the VPU architecture and the computer system. The selected sequencer may be output as VPU code (2521). A compiler (2510) may generate the assembler code of the respective macro for the respective sequencer selected by 2520. The assembler code may then be output (2511). 2510 and 2520 are closely linked together. Processing may proceed iteratively to find the most suitable sequencer having the fastest and most compact assembler code.


A linker (2522) may compile the assembler codes (2508, 2511, 2521) and generate executable object code (2523).


DEFINITION OF TERMS
Example



  • ACK/REJ: Acknowledgment protocol of a PAE to a (re)configuration attempt. ACK may indicate that the configuration has been accepted; REJ may indicate that the configuration has been rejected. The protocol may provide for waiting for receipt of either ACK or REJ and optionally inserting waiting cycles until the receipt.

  • CT: Unit for interactive configuration and reconfiguration of configurable elements. A CT may have a memory for temporary storage and/or caching of SubConfs. CTs that are not root CTs may also have a direct connection to a memory for SubConfs, which may not need to be loaded by a higher-level CT.

  • CTTREE: One-dimensional or multidimensional tree of CTs.

  • EnhSubConf: Configuration containing multiple SubConfs to be executed on different PACs.

  • Configuration: An executable algorithm

  • Configurable element: An element whose function may be determined by a configuration from a range of possible functions. For example, a configurable element may be designed as a logical function unit, arithmetic function unit, memory, peripheral interface or bus system; this includes in particular elements of known technologies such as FPGAs (e.g., CLBs), DPGAs, VPUs and other elements known under the term "reconfigurable computing." A configurable element may also be a complex combination of multiple different function units, e.g., an arithmetic unit with an integrated allocated bus system.

  • KW: Configuration word. One or more pieces of data intended for the configuration or part of a configuration of a configurable element.

  • Latency: Delay within a data transmission, which usually takes place in synchronous systems based on cycles. Latency may be measured in clock cycles.

  • PA: Processing array. This may include an arrangement of multiple PAEs, including PAEs of different designs.

  • PAC: A PA with an associated CT responsible for configuration and reconfiguration of the PA.

  • PAE: Processing array element, configurable element.

  • ReconfReq: Triggers based on a status which may require a reconfiguration.

  • Reconfiguration may include loading a new configuration. This loading may occur simultaneously or overlapping or in parallel with data processing, without interfering with or corrupting the ongoing data processing.

  • Root CT: Highest CT in the CTTREE. The Root CT may have a connection to the configuration memory. It may be the only CT so connected.

  • SubConf: Part of a configuration composed of multiple KWs.

  • WCT: The WCT may indicate the time at which a reconfiguration is to take place. A WCT may optionally select one of several possible configurations via transmission of additional information. A WCT may run in exact synchronization with the termination of the data processing underway, which may be terminated for the reconfiguration. If WCT is transmitted later for reasons of implementation, WCS may be used for synchronization of data processing.

  • WCP: A request for one or more alternative next configuration(s) from the CT for (re)configuration.

  • WCS: Stops the data processing until receipt of WCT. May need to be used if WCT does not indicate the exact time of a required reconfiguration.

  • Cell: Configurable element



REFERENCES



  • PACT01 4416881

  • PACT02 19781412.3 and U.S. Pat. No. 6,425,068

  • PACT04 19654842.2-53

  • PACT05 19654593.5-53

  • PACT07 19880128.9

  • PACT08 19880129.7

  • PACT10 19980312.9 and 19980309.9 and PCT/DE99/00504

  • PACT13 PCT/DE00/01869

  • PACT18 10110530.4


Claims
  • 1. A method of synchronizing data processing of a hardware processor arrangement on an integrated module that includes a plurality of data processing units, the method comprising: for each of at least one barrier included in a program sequence, during execution of the program: responsive to reaching the respective barrier, determining, by the processor arrangement, whether all instructions preceding the respective barrier have been successfully scheduled for execution; and continuing, by the processor arrangement, execution of the program beyond the respective barrier in accordance with the determination; wherein the processor arrangement is adapted for the continuing to include, if a result of the determination is that at least one of the instructions preceding the respective barrier has not been successfully scheduled for execution, initially stopping the program execution until all of the instructions preceding the respective barrier have been successfully scheduled for execution.
  • 2. A method of synchronizing data processing of a hardware processor arrangement on an integrated module that includes a plurality of data processing units, the method comprising: responsive to reaching, during execution of a program, a barrier included in the program sequence, halting, by the processor arrangement, the execution of the program until the processor arrangement determines that all instructions preceding the barrier in the program sequence have been successfully scheduled for execution.
  • 3. The method of claim 1, wherein the instructions preceding the respective barrier include instructions for at least one arithmetic logic unit.
  • 4. The method of claim 3, wherein the hardware processor arrangement includes multiple processors.
  • 5. The method of claim 4, wherein the barrier ensures that instructions are executed in correct order.
  • 6. The method of claim 5, wherein the barrier is implemented by a token.
  • 7. The method of claim 5, wherein the barrier ensures that all instructions behind the barrier are executed in order.
  • 8. The method of claim 1, wherein the hardware processor arrangement includes multiple processors.
  • 9. The method of claim 8, wherein the barrier ensures that instructions are executed in correct order.
  • 10. The method of claim 9, wherein the barrier ensures that all instructions behind the barrier are executed in correct order.
  • 11. The method of claim 8, wherein the barrier is formed by a token.
  • 12. The method of claim 1, wherein the barrier ensures that all instructions behind the barrier are executed in order.
  • 13. The method of claim 12, wherein the barrier ensures that all instructions behind the barrier are executed in correct order.
  • 14. The method of claim 12, wherein the barrier is formed by a token.
  • 15. The method of claim 1, wherein the barrier is formed by a token.
  • 16. The method of claim 2, wherein the instructions preceding the respective barrier include instructions for at least one arithmetic logic unit.
  • 17. The method of claim 16, wherein the hardware processor arrangement includes multiple processors.
  • 18. The method of claim 17, wherein the barrier ensures that instructions are executed in correct order.
  • 19. The method of claim 18, wherein the barrier is implemented by a token.
  • 20. The method of claim 17, wherein the barrier ensures that all instructions behind the barrier are executed in order.
  • 21. The method of claim 2, wherein the hardware processor arrangement includes multiple processors.
  • 22. The method of claim 21, wherein the barrier ensures that instructions are executed in correct order.
  • 23. The method of claim 22, wherein the barrier ensures that all instructions behind the barrier are executed in correct order.
  • 24. The method of claim 21, wherein the barrier is formed by a token.
  • 25. The method of claim 2, wherein the barrier ensures that all instructions behind the barrier are executed in order.
  • 26. The method of claim 25, wherein the barrier ensures that all instructions behind the barrier are executed in correct order.
  • 27. The method of claim 25, wherein the barrier is formed by a token.
  • 28. The method of claim 2, wherein the barrier is formed by a token.
Priority Claims (2)
Number Date Country Kind
100 28 397 Jun 2000 DE national
101 10 530 Mar 2001 DE national
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of and claims priority to U.S. patent application Ser. No. 11/122,500, filed on May 4, 2005 and issued as U.S. Pat. No. 8,301,872; which: (a) is a continuation of, and claims priority under 35 U.S.C. §120 to, U.S. patent application Ser. No. 10/297,959, filed Jun. 19, 2003 as a national stage entry of Int. Pat. App. No. PCT/EP01/06703, filed May 29, 2001, and issued as U.S. Pat. No. 7,003,660; and (b) is also a continuation-in-part application of, and claims priority under 35 U.S.C. §120 to, U.S. patent application Ser. No. 10/764,159, filed Jan. 23, 2004 and issued as U.S. Pat. No. 6,990,555; which latter application is a continuation application of U.S. patent application Ser. No. 10/191,926, filed Jul. 9, 2002 and issued as U.S. Pat. No. 6,687,788; which is a continuation application of U.S. patent application Ser. No. 09/623,052, filed Jan. 9, 2001, which issued as U.S. Pat. No. 6,480,937, and a national stage entry of Int. Pat. App. No. PCT/DE99/00504 filed on Feb. 25, 1999, the full content of which latter mentioned International patent application is incorporated herein by reference in its entirety.

Cadambi, et al., “Managing Pipeline-reconfigurable FPGAs,” ACM, 1998, pp. 55-64.
Callahan, et al., “The Garp Architecture and C Compiler,” Computer, Apr. 2000, pp. 62-69.
Cardoso, J.M.P., et al., “A novel algorithm combining temporal partitioning and sharing of functional units,” University of Algarve, Faro, Portugal, 2001 IEEE, pp. 1-10.
Cardoso, Joao M.P., and Markus Weinhardt, “XPP-VC: A C Compiler with Temporal Partitioning for the PACT-XPP Architecture,” Field-Programmable Logic and Applications. Reconfigurable Computing is Going Mainstream, 12th International Conference FPL 2002, Proceedings (Lecture Notes in Computer Science, vol. 2438) Springer-Verlag Berlin, Germany, 2002, pp. 864-874.
Cardoso, J.M.P., “Compilation of Java™ Algorithms onto Reconfigurable Computing Systems with Exploitation of Operation-Level Parallelism,” Ph.D. Thesis, Universidade Tecnica de Lisboa (UTL), Lisbon, Portugal Oct. 2000 (Table of Contents and English Abstract only).
Cardoso, J.M.P., et al., “Compilation and Temporal Partitioning for a Coarse-Grain Reconfigurable Architecture,” New Algorithms, Architectures and Applications for Reconfigurable Computing, Lysacht, P. & Rosentiel, W. eds., (2005) pp. 105-115.
Cardoso, J.M.P., et al., “Macro-Based Hardware Compilation of Java™ Bytecodes into a Dynamic Reconfigurable Computing System,” IEEE, Apr. 21, 1999, pp. 2-11.
Chaudhry, G.M. et al., “Separated caches and buses for multiprocessor system,” Circuits and Systems, 1993; Proceedings of the 36th Midwest Symposium on Detroit, MI, USA, Aug. 16-18, 1993, New York, NY IEEE, Aug. 16, 1993, pp. 1113-1116, XP010119918 ISBN: 0-7803-1760-2.
Chen et al., “A reconfigurable multiprocessor IC for rapid prototyping of algorithmic-specific high-speed DSP data paths,” IEEE Journal of Solid-State Circuits, vol. 27, No. 12, Dec. 1992, pp. 1895-1904.
Cherbaka, Mark F., “Verification and Configuration of a Run-time Reconfigurable Custom Computing Integrated Circuit for DSP Applications,” Thesis: Virginia Polytechnic Institute and State University, Jul. 8, 1996, 106 pages.
Clearspeed, CSX Processor Architecture, Whitepaper, PN-1110-0702, 2007, pp. 1-15, www.clearspeed.com.
Clearspeed, CSX Processor Architecture, Whitepaper, PN-1110-0306, 2006, pp. 1-14, www.clearspeed.com.
Coelho, F., “Compiling dynamic mappings with array copies,” Jul. 1997, 12 pages, http://delivery.acm.org/10.1145/270000/263786/p168-coelho.pdf.
Compton, K., et al., “Configurable Computing: A Survey of Systems and Software,” Northwestern University, Dept. of ECE, Technical Report, 1999, (XP-002315148), 39 pages.
Cong et al., “Structural Gate Decomposition for Depth-Optimal Technology Mapping in LUT-Based FPGA Designs,” Univ. of California, ACM Transactions on Design Automation of Electronic Systems, vol. 5, No. 2, Apr. 2000, pp. 193-225.
Cook, Jeffrey J., “The Amalgam Compiler Infrastructure,” Thesis at the University of Illinois at Urbana-Champaign (2004) Chapter 7 & Appendix G.
Cronquist, D., et al., “Architecture Design of Reconfigurable Pipelined Datapaths,” Department of Computer Science and Engineering, University of Washington, Seattle, WA, Proceedings of the 20th Anniversary Conference on Advanced Research in VSLI, 1999, pp. 1-15.
Culler, D.E; Singh, J.P., “Parallel Computer Architecture,” pp. 434-437, 1999, Morgan Kaufmann, San Francisco, CA USA, XP002477559.
Culler, D.E; Singh, J.P., “Parallel Computer Architecture,” p. 17, 1999, Morgan Kaufmann, San Francisco, CA USA, XP002477559.
DeHon, A., “DPGA Utilization and Application,” MIT Artificial Intelligence Laboratory, Proceedings of the Fourth International ACM Symposium on Field-Programmable Gate Arrays (FPGA 1996), IEEE Computer Society, pp. 1-7.
DeHon, Andre, “Reconfigurable Architectures for General-Purpose Computing,” Massachusetts Institute of Technology, Technical Report AITR-1586, Oct. 1996, XP002445054, Cambridge, MA, pp. 1-353.
Del Corso et al., “Microcomputer Buses and Links,” Academic Press Inc. Ltd., 1986, pp. 138-143, 277-285.
Diniz, P., et al., “Automatic Synthesis of Data Storage and Control Structures for FPGA-based Computing Engines,” 2000, IEEE, pp. 91-100.
Diniz, P., et al., “A behavioral synthesis estimation interface for configurable computing,” University of Southern California, Marina Del Rey, CA, 2001 IEEE, pp. 1-2.
Donandt, “Improving Response Time of Programmable Logic Controllers by use of a Boolean Coprocessor,” AEG Research Institute Berlin, IEEE, 1989, pp. 4-167-4-169.
Dutt, et al., “If Software is King for Systems-in-Silicon, What's New in Compilers?” IEEE, 1997, pp. 322-325.
Ebeling, C., et al., “Mapping Applications to the RaPiD Configurable Architecture,” Department of Computer Science and Engineering, University of Washington, Seattle, WA, FPGAs for Custom Computing Machines. 1997. Proceedings., The 5th Annual IEEE Symposium, Publication Date: Apr. 16-18, 1997, 10 pages.
Equator, Pixels to Packets, Enabling Multi-Format High Definition Video, Equator Technologies BSP-15 Product Brief, www.equator.com, 2001, 4 pages.
Fawcett, B.K., “Map, Place and Route: The Key to High-Density PLD Implementation,” Wescon Conference, IEEE Center (Nov. 7, 1995) pp. 292-297.
Ferrante, J., et al., “The Program Dependence Graph and its Use in Optimization ACM Transactions on Programming Languages and Systems,” Jul. 1987, USA, [online] Bd. 9, Nr., 3, pp. 319-349, XP002156651 ISSN: 0164-0935 ACM Digital Library.
Fineberg, S, et al., “Experimental Analysis of a Mixed-Mode Parallel Architecture Using Bitonic Sequence Sorting,” Journal of Parallel and Distributed Computing, vol. 11, No. 3, Mar. 1991, pp. 239-251.
FOLDOC, The Free On-Line Dictionary of Computing, “handshaking,” online Jan. 13, 1995, retrieved from Internet Jan. 23, 2011 at http://foldoc.org/handshake.
Fornaciari, et al., System-level power evaluation metrics, 1997 Proceedings of the 2nd Annual IEEE International Conference on Innovative Systems in Silicon, New York, NY, Oct. 1997, pp. 323-330.
Forstner, “Wer Zuerst Kommt, Mahlt Zuerst!: Teil 3: Einsatzgebiete und Anwendungbeispiele von FIFO-Speichern,” Elektronik, Aug. 2000, pp. 104-109.
Franklin, Manoj, et al., “A Fill-Unit Approach to Multiple Instruction Issue,” Proceedings of the Annual International Symposium on Microarchitecture, Nov. 1994, pp. 162-171.
Freescale Slide Presentation, An Introduction to Motorola's RCF (Reconfigurable Compute Fabric) Technology, Presented by Frank David, Launched by Freescale Semiconductor, Inc., 2004, 39 pages.
Galanis, M.D. et al., “Accelerating Applications by Mapping Critical Kernels on Coarse-Grain Reconfigurable Hardware in Hybrid Systems,” Proceedings of the 13th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, 2005, 2 pages.
Genius, D., et al., “A Case for Array Merging in Memory Hierarchies,” Proceedings of the 9th International Workshop on Compilers for Parallel Computers, CPC'01 (Jun. 2001), 10 pages.
Gokhale, M.B., et al., “Automatic Allocation of Arrays to Memories in FPGA processors with Multiple Memory Banks,” Field-Programmable Custom Computing Machines, 1999, IEEE, pp. 63-69.
Guccione et al., “JBits: Java based interface for reconfigurable computing,” Xilinx, Inc., San Jose, CA, 1999, 9 pages.
Guo, Z. et al., “A Compiler Intermediate Representation for Reconfigurable Fabrics,” University of California, Riverside, Dept. of Electrical Engineering, IEEE 2006, 4 pages.
Gwennap, Linley, “P6 Underscores Intel's Lead,” Microprocessor Report, vol. 9., No. 2, Feb. 16, 1995 (MicroDesign Resources), p. 1 and pp. 6-15.
Gwennap, Linley, “Intel's P6 Bus Designed for Multiprocessing,” Microprocessor Report, vol. 9, No. 7 (MicroDesign Resources), May 30, 1995, p. 1 and pp. 6-10.
Hammes, Jeff, et al., “Cameron: High Level Language Compilation for Reconfigurable Systems,” Department of Computer Science, Colorado State University, Conference on Parallel Architectures and Compilation Techniques, Oct. 12-16, 1999, 9 pages.
Hartenstein, R. et al., “A new FPGA architecture for word-oriented datapaths,” Proc. FPL'94, Springer LNCS, Sep. 1994, pp. 144-155.
Hartenstein, R., “Coarse grain reconfigurable architectures,” Design Automation Conference, 2001, Proceedings of the ASP-DAC 2001 Asia and South Pacific, Jan. 30-Feb. 2, 2001, IEEE Jan. 30, 2001, pp. 564-569.
Hartenstein et al., “Parallelizing Compilation for a Novel Data-Parallel Architecture,” 1995, PCAT-94, Parallel Computing: Technology and Practice, 13 pp.
Hartenstein et al., “A Two-Level Co-Design Framework for Xputer-based Data-driven Reconfigurable Accelerators,” 1997, Proceedings of the Thirtieth Annual Hawaii International Conference on System Sciences, 10 pp.
Hastie et al., “The implementation of hardware subroutines on field programmable gate arrays,” Custom Integrated Circuits Conference, 1990, Proceedings of the IEEE 1990, May 16, 1990, pp. 31.3.1-31.4.3 (3 pages).
Hauck, “The Roles of FPGAs in Reprogrammable Systems,” IEEE, Apr. 1998, pp. 615-638.
Hauser, J.R., et al., “Garp: A MIPS Processor with a Reconfigurable Coprocessor,” University of California, Berkeley, IEEE, Apr. 1997, pp. 12-23.
Hauser, John Reid, (Dissertation) “Augmenting a Microprocessor with Reconfigurable Hardware,” University of California, Berkeley, Fall 2000, 255 pages. (submitted in 3 PDFs, Parts 1-3).
Hauser, John R., “The Garp Architecture,” University of California at Berkeley, Computer Science Division, Oct. 1997, pp. 1-55.
Hedge, S.J., “3D WASP Devices for On-line Signal and Data Processing,” 1994, International Conference on Wafer Scale Integration, pp. 11-21.
Hendrich, N., et al., “Silicon Compilation and Rapid Prototyping of Microprogrammed VLSI-Circuits with MIMOLA and SOLO 1400,” Microprocessing & Microprogramming (Sep. 1992) vol. 35(1-5), pp. 287-294.
Huang, Libo et al., “A New Architecture for Multiple-Precision Floating-Point Multiply-Add Fused Unit Design,” School of Computer National University of Defense Technology, China, IEEE 2007, 8 pages.
Hwang, K., “Advanced Computer Architecture—Parallelism, Scalability, Programmability,” 1993, McGraw-Hill, Inc., pp. 348-355.
Hwang, K., “Computer Architecture and Parallel Processing,” Data Flow Computers and VLSI Computations, XP-002418655, 1985 McGraw-Hill, Chapter 10, pp. 732-807.
Hwang, L., et al., “Min-cut Replication in Partitioned Networks,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, [online] Bd. 14, Nr. 1, Jan. 1995, pp. 96-106, XP00053228 USA ISSN: 0278-0070 IEEE Xplore.
IBM Technical Disclosure Bulletin, IBM Corp., New York, XP000424878, Bd. 36, Nr. 11, Nov. 1, 1993, pp. 335-336.
“IEEE Standard Test Access Port and Boundary-Scan Architecture,” IEEE Std. 1149.1-1990, 1993, pp. 1-127.
IMEC, “ADRES multimedia processor & 3MF multimedia platform,” Transferable IP, IMEC Technology Description, (Applicants believe the date to be Oct. 2005), 3 pages.
Intel, “Pentium Pro Family Developer's Manual, vol. 3: Operating System Writer's Guide,” Intel Corporation, Dec. 1995, [submitted in 4 PDF files: Part I, Part II, Part III and Part IV], 458 pages.
Intel, Intel MXP5800/MXP5400 Digital Media Processors, Architecture Overview, Jun. 2004, Revision 2.4, pp. 1-24.
Inside DSP, “Ambric Discloses Massively Parallel Architecture,” Aug. 23, 2006, http://www.insidedsp.com/Articles/tabid/64/articleType/ArticleView/articleId/155/Default.aspx, 2 pages.
Iseli, C., et al. “A C++ Compiler for FPGA Custom Execution Units Synthesis,” IEEE, 1995, pp. 173-179.
Isshiki, Tsuyoshi, et al., “Bit-Serial Pipeline Synthesis for Multi-FPGA Systems with C++ Design Capture,” 1996 IEEE, pp. 38-47.
Jacob, J., et al., “Memory Interfacing and Instruction Specification for Reconfigurable Processors,” ACM Feb. 1999, pp. 145-154.
Janssen et al.., “A Specification Invariant Technique for Regularity Improvement between Flow-Graph Clusters,” Mar. 1996, 6 pages, http://delivery.acm.org/10.1145/790000/787534/74230138.pdf.
Jantsch, Axel et al., “A Case Study on Hardware/Software Partitioning,” Royal Institute of Technology, Kista, Sweden, Apr. 10, 1994, IEEE, pp. 111-118.
Jantsch, Axel et al., “Hardware/Software Partitioning and Minimizing Memory Interface Traffic,” Electronic System Design Laboratory, Royal Institute of Technology, ESDLab, Electrum 229, S-16440 Kista, Sweden (Apr. 1994), pp. 226-231.
Jo, Manhwee et al., “Implementation of Floating-Point Operations for 3D Graphics on a Coarse-Grained Reconfigurable Architecture,” Design Automation Laboratory, School of EE/CS, Seoul National University, Korea, IEEE 2007, pp. 127-130.
John, L., et al., “A Dynamically Reconfigurable Interconnect for Array Processors,” vol. 6, No. 1, Mar. 1998, IEEE, pp. 150-157.
Kanter, David, “NVIDIA's GT200: Inside a Parallel Processor,” http://www.realworldtech.com/page.cfm?ArticleID=RWT090989195242&p=1, Sep. 8, 2008, 27 pages.
Kastrup, B., “Automatic Hardware Synthesis for a Hybrid Reconfigurable CPU Featuring Philips CPLDs,” Proceedings of the PACT Workshop on Reconfigurable Computing, 1998, pp. 5-10.
Kaul, M., et al., “An automated temporal partitioning and loop fission approach of FPGA based reconfigurable synthesis of DSP applications,” University of Cincinnati, Cincinnati, OH, ACM 1999, pp. 616-622.
Kean, T.A., “Configurable Logic: A Dynamically Programmable Cellular Architecture and its VLSI Implementation,” University of Edinburgh (Dissertation) 1988, pp. 1-286. [in two PDFs, Pt.1 and Pt.2.].
Kean, T., et al., “A Fast Constant Coefficient Multiplier for the XC6200,” Xilinx, Inc., Lecture Notes in Computer Science, vol. 1142, Proceedings of the 6th International Workshop of Field-Programmable Logic, 1996, 7 pages.
Kim et al., “A Reconfigurable Multifunction Computing Cache Architecture,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems vol. 9, Issue 4, Aug. 2001 pp. 509-523.
Knittel, Gunter, “A PCI-compatible FPGA-Coprocessor for 2D/3D Image Processing,” University of Turgingen, Germany, 1996 IEEE, pp. 136-145.
Koch, A., et al., “Practical Experiences with the SPARXIL Co-Processor,” 1998, IEEE, pp. 394-398.
Koch, Andreas et al., “High-Level-Language Compilation for Reconfigurable Computers,” Proceedings of European Workshop on Reconfigurable Communication-Centric SOCS (Jun. 2005) 8 pages.
Koren et al., “A data-driven VLSI array for arbitrary algorithms,” IEEE Computer Society, Long Beach, CA vol. 21, No. 10, Oct. 1, 1988, pp. 30-34.
Kung, “Deadlock Avoidance for Systolic Communication,” 1988 Conference Proceedings of the 15th Annual International Symposium on Computer Architecture, May 30, 1998, pp. 252-260.
Lange, H. et al., “Memory access schemes for configurable processors,” Field-Programmable Logic and Applications, International Workshop, FPL, Aug. 27, 2000, pp. 615-625, XP02283963.
Larsen, S., et al., “Increasing and Detecting Memory Address Congruence,” Proceedings of the 2002 IEEE International Conference on Parallel Architectures and Compilation Techniques (PACT'02), pp. 1-12 (Sep. 2002).
Lee et al., “A new distribution network based on controlled switching elements and its applications,” IEEE/ACT Trans. of Networking, vol. 3, No. 1, pp. 70-81, Feb. 1995.
Lee, Jong-eun, et al., “Reconfigurable ALU Array Architecture with Conditional Execution,” International Soc. Design Conference (ISOOC) [online] Oct. 25, 2004, Seoul, Korea, 5 pages.
Lee, R. B., et al., “Multimedia extensions for general-purpose processors,” IEEE Workshop on Signal Processing Systems, SIPS 97—Design and Implementation (1997), pp. 9-23.
Lee, Ming-Hau et al., “Design and Implementation of the MorphoSys Reconfigurable Computing Processors,” The Journal of VLSI Signal Processing, Kluwer Academic Publishers, BO, vol. 24, No. 2-3, Mar. 2, 2000, pp. 1-29.
Li et al., “Hardware-Software Co-Design of Embedded Reconfigurable Architectures,” Los Angeles, CA, 2000 ACM, pp. 507-512.
Li, Zhiyuan, et al., “Configuration prefetching techniques for partial reconfigurable coprocessor with relocation and defragmentation,” International Symposium on Field Programmable Gate Arrays, Feb. 1, 2002, pp. 187-195.
Ling, X., “WASMII: An MPLD with Data-Driven Control on a Virtual Hardware,” Journal of Supercomputing, Kluwer Acdemic Publishers, Dordrecht, Netherlands, 1995, pp. 253-276.
Ling et al., “WASMII: A Multifunction Programmable Logic Device (MPLD) with Data Driven Control,” The Transactions of the Institute of Electronics, Information and Communication Engineers, Apr. 25, 1994, vol. J77-D-1, Nr. 4, pp. 309-317.
Mano, M.M., “Digital Design,” by Prentice Hall, Inc., Englewood Cliffs, New Jersey 07632, 1984, pp. 119-125, 154-161.
Margolus, N., “An FPGA architecture for DRAM-based systolic computations,” Boston University Center for Computational Science and MIT Artificial Intelligence Laboratory, IEEE 1997, pp. 2-11.
Marshall et al., “A Reconfigurable Arithmetic Array for Multimedia Applications,” FPGA '99 Proceedings of the 1999 ACM/SIGDA Seventh International Symposium on Field Programmable Gate Arrays, 10 pages.
Maxfield,C., “Logic that Mutates While-U-Wait,” EDN (Bur. Ed) (USA), EDN (European Edition), Nov. 7, 1996, Cahners Publishing, USA, pp. 137-140, 142.
Mei, Bingfeng, “A Coarse-Grained Reconfigurable Architecture Template and Its Compilation Techniques,” Katholeike Universiteit Leuven, PhD Thesis, Jan. 2005, IMEC vzw, Universitair Micro-Electronica Centrum, Belgium, pp. 1-195 (and Table of Contents).
Mei, Bingfeng et al., “Design and Optimization of Dynamically Reconfigurable Embedded Systems,” IMEC vzw, 2003, Belgium, 7 pages, http://www.imec.be/reconfigurable/pdf/ICERSA—01—design.pdf.
Mei, Bingfeng et al., “Adres: An Architecture with Tightly Coupled VLIW Processor and Coarse-Grained Reconfigurable Matrix,” Proc. Field-Programmable Logic and Applications (FPL 03), Springer, 2003, pp. 61-70.
Melvin, Stephen et al., “Hardware Support for Large Atomic Units in Dynamically Scheduled Machines,” Computer Science Division, University of California, Berkeley, IEEE (1988), pp. 60-63.
Microsoft Press Computer Dictionary, Second Edition, 1994, Microsoft Press, ISBN 1-55615-597-2, p. 10.
Miller, M.J., et al., “High-Speed FIFOs Contend with Widely Differing Data Rates: Dual-port RAM Buffer and Dual-pointer System Provide Rapid, High-density Data Storage and Reduce Overhead,” Computer Design, Sep. 1, 1985, pp. 83-86.
Mirsky, E. DeHon, “MATRIX: A Reconfigurable Computing Architecture with Configurable Instruction Distribution and Deployable Resources,” Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines, 1996, pp. 157-166.
Miyamori, T., et al., “REMARC: Reconfigurable Multimedia Array Coprocessor,” Computer Systems Laboratory, Stanford University, IEICE Transactions on Information and Systems E Series D, 1999; (abstract): Proceedings of the 1998 ACM/SIGDA sixth international symposium on Field programmable gate arrays, p. 261, Feb. 22-25, 1998, Monterey, California, United States, pp. 1-12.
Moraes, F., et al., “A Physical Synthesis Design Flow Based on Virtual Components,” XV Conference on Design of Circuits and Integrated Systems (Nov. 2000) 6 pages.
Muchnick, S., “Advanced Compiler Design and Implementation,” (Morgan Kaufmann 1997), Table of Contents, 11 pages.
Murphy, C., “Virtual Hardware Using Dynamic Reconfigurable Field Programmable Gate Arrays,” Engineering Development Centre, Liverpool John Moores University, UK, GERI Annual Research Symposium 2005, 8 pages.
Myers, G. “Advances in Computer Architecture,” Wiley-Interscience Publication, 2nd ed., John Wiley & Sons, Inc., 1978, pp. 463-494.
Nageldinger, U., “Design-Space Exploration for Coarse Grained Reconfigurable Architectures,” (Dissertation) Universitaet Kaiserslautern, 2000, Chapter 2, pp. 19-45.
Neumann, T., et al., “A Generic Library for Adaptive Computing Environments,” Field Programmable Logic and Applications, 11th International Conference, FPL 2001, Proceedings (Lecture Notes in Computer Science, vol. 2147) (2001) pp. 503-512.
Newton, Harry, “Newton's Telecom Dictionary,” Ninteenth Edition, 2003, CMP Books, p. 40.
Nilsson, et al., “The Scalable Tree Protocol—A Cache Coherence Approaches for Large-Scale Multiprocessors,” IEEE, pp. 498-506, Dec. 1992.
Norman, R.S., “Hyperchip Business Summary, The Opportunity,” Jan. 31, 2000, pp. 1-3.
Ohmsha, “Information Processing Handbook,” edited by the Information Processing Society of Japan, pp. 376, Dec. 21, 1998.
Olukotun, K., “The Case for a Single-Chip Microprocessor,” ACM Sigplan Notices, ACM, Association for Computing Machinery, New York, vol. 31, No. 9, Sep. 1996 pp. 2-11.
Olukotun et al., “Rationale, Designa nd Performance of the Hydra Multiprocessor,” Computer Systems Laboratory, Stanford University, CA, 1994, pp. 1-19.
Ozawa, Motokazu et al., “A Cascade ALU Architecture for Asynchronous Super-Scalar Processors,” IEICE Transactions on Electronics, Electronics Society, Tokyo, Japan, vol. E84-C, No. 2, Feb. 2001, pp. 229-237.
PACT Corporation, “The XPP Communication System,” Technical Report 15 (2000), pp. 1-16.
Page, Ian, “Reconfigurable processor architectures,” Oxford University Computing Laboratory, Oxford, UK, 1996 Elsevier Science B.V., pp. 185-196.
Parhami, B., “Parallel Counters for Signed Binary Signals,” Signals, Systems and Computers, 1989, Twenty-Third Asilomar Conference, vol. 1, pp. 513-516.
PCI Local Bus Specification, Production Version, Revision 2.1, Portland, OR, Jun. 1, 1995, pp. 1-281.
Piotrowski, A., “IEC-BUS, Die Funktionsweise des IEC-Bus unde seine Anwendung in Geräten und Systemen,” 1987, Franzis-Verlag GmbH, München, pp. 20-25. [English Abstract Provided].
Pirsch, P. et al., “VLSI implementations of image and video multimedia processing systems,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 7, Nov. 1998, pp. 878-891.
Pistorius et al., “Generation of Very Large Circuits to Benchmark the Partitioning of FPGAs,” Monterey, CA, ACM 1999, pp. 67-73.
Price et al., “Debug of Reconfigurable Systems,” Xilinx, Inc., San Jose, CA, Proceedings of SPIE, 2000, pp. 181-187.
Quenot, G.M., et al., “A Reconfigurable Compute Engine for Real-Time Vision Automata Prototyping,” Laboratoire Systeme de Perception, DGA/Etablissement Technique Central de l'Armement, France, 1994 IEEE, pp. 91-100.
Ramanathan et al., “Reconfigurable Filter Coprocessor Architecture for DSP Applications,” Journal of VLSI Signal Processing, 2000, vol. 26, pp. 333-359.
Rehmouni et al.., “Formulation and evaluation of scheduling techniques for control flow graphs,” Dec. 1995, 6 pages, http://delivery.acm.org/10.1145/230000/224352/p386-rahmouni.pdf.
Razdan et al., A High-Performance Microarchitecture with Hardware-Programmable Functional Units, Micro-27, Proceedings of the 27th Annual International Symposium on Microarchitecture, IEEE Computer Society and Association for Computing Machinery, Nov. 30-Dec. 2, 1994, pp. 172-180.
Rotenberg, Eric., et al., “Trace Cache: a Low Latency Approach to High Bandwidth Instruction Fetching,” Proceedings of the 29th Annual International Symposium on Microarchitecture, Paris, France, IEEE (1996), 12 pages.
Ryo, A., “Auszug aus Handbuch der Informationsverarbeitung,” ed. Information Processing Society of Japan, Information Processing Handbook, New Edition, Software Information Center, Ohmsha, Dec. 1998, 4 pages. [Translation provided].
Saleeba, Z.M.G., “A Self-Reconfiguring Computer System,” Department of Computer Science, Monash University (Dissertation) 1998, pp. 1-306.
Saleeba, M. “A Self-Contained Dynamically Reconfigurable Processor Architecture,” Sixteenth Australian Computer Science Conference, ASCS-16, QLD, Australia, Feb. 1993, pp. 59-70.
Salefski, B. et al., “Re-configurable computing in wireless,” Annual ACM IEEE Design Automation Conference: Proceedings of the 38th conference on Design automation (2001) pp. 178-183.
Schmidt, H. et al., “Behavioral synthesis for FGPA-based computing,” Carnegie Mellon University, Pittsburgh, PA, 1994 IEEE, pp. 125-132.
Schmidt, U. et al., “Datawave: A Single-Chip Multiprocessor for Video Applications,” IEEE Micro, vol. 11, No. 3, May/Jun. 1991, pp. 22-25, 88-94.
Schmit, et al., “Hidden Markov Modeling and Fuzzy Controllers in FPGAs, FPGAs for Custom Computing Machines,” 1995; Proceedings, IEEE Symposium in Napa Valley, CA, Apr. 1995, pp. 214-221.
Schönfeld, M., et al., “The LISA Design Environment for the Synthesis of Array Processors Including Memories for the Data Transfer and Fault Tolerance by Reconfiguration and Coding Techniques,” J. VLSI Signal Processing Systems for Signal, Image, and Video Technology, (Oct. 1, 1995) vol. 11(1/2), pp. 51-74.
Shanley, Tom. Pentium Pro and Pentium II System Architecture, MindShare, Inc., Addition Wesley, 1998, Second Edition, pp. 11-17; Chapter 7; Chapter 10; pp. 209-211, and p. 394.
Shin, D., et al., “C-based Interactive RTL Design Methodology,” Technical Report CECS-03-42 (Dec. 2003) pp. 1-16.
Shirazi, et al., “Quantitative analysis of floating point arithmetic on FPGA based custom computing machines,” IEEE Symposium on FPGAs for Custom Computing Machines, IEEE Computer Society Press, Apr. 19-21, 1995, pp. 155-162.
Short, Kenneth L., Microprocessors and Programmed Logic, Prentice Hall, Inc., New Jersey 1981, p. 34.
Shoup, Richard, “Programmable Cellular Logic Arrays,” Dissertation, Computer Science Department, Carnegie-Mellon University, Mar. 1970, 193 pages.
Siemers, C., “Rechenfabrik Ansaetze Fuer Extern Parallele Prozessoren,” Verlag Heinze Heise GmbH., Hannover, DE No. 15, Jul. 16, 2001, pp. 170-179.
Siemers et al., “The .>S<puter: A Novel Micoarchitecture Model for Execution inside Superscalar and VLIW Processors Using Reconfigurable Hardware,” Australian Computer Science Communications, vol. 20, No. 4, Computer Architecture, Proceedings of the 3rd Australian Computer Architecture Conference, Perth, John Morris, Ed., Feb. 2-3, 1998, pp. 169-178.
Simunic, et al., Source Code Optimization and Profiling of Energy Consumation in Embedded Systems, Proceedings of the 13th International Symposium on System Synthesis, Sep. 2000, pp. 193-198.
Singh, H. et al., “MorphoSys: An Integrated Reconfigurable System for Data-Parallel Computation-Intensive Applications,” University of California, Irvine, CA. and Federal University of Rio de Janeiro, Brazil, 2000, IEEE Transactions on Computers, pp. 1-35; also published in IEEE Transactions on Computers, vol. 49, No. 5, May 2000, pp. 465-481.
Sinha et al., “System-dependence-graph-based slicing of programs with arbitrary interprocedural control flow,” May 1999, 10 pages, http://delivery.acm.org/10.1145/310000/203675/p432-sinha.pdf.
Skokan, Z.E., “Programmable logic machine (a programmable cell array),” IEEE Journal of Solid-State Circuits, vol. 18, Issue 5, Oct. 1983, pp. 572-578.
Sondervan, J., “Retiming and logic synthesis,” Electronic Engineering (Jan. 1993) vol. 65(793), pp. 33, 35-36.
Soni, M., “VLSI Implementation of a Wormhole Run-time Reconfigurable Processor,” Jun. 2001, (Masters Thesis)Virginia Polytechnic Institute and State University, 88 pages.
Stallings, William, “Data & Computer Communications,” Sixth Edition, Jun. 2000, Prentice-Hall, Inc., ISBN 0-084370-9, pp. 195-196.
Sueyoshi, T, “Present Status and Problems of the Reconfigurable Computing Systems.Toward the Computer Evolution,” Department of Artificial Intelligence, Kyushi Institute of Technology, Fukuoka, Japan; Institute of Electronics, Information and Communication Engineers, vol. 96, No. 426, IEICE Technical Report (1996), pp. 111-119 [English Abstract Only].
Sundararajan et al., “Testing FPGA Devices Using JBits,” Proc. MAPLD 2001, Maryland, USA, Katz (ed.), NASA, CA, 8 pages.
Sutton et al., “A Multiprocessor DSP System Using PADDI-2,” U.C. Berkeley, 1998 ACM, pp. 62-65.
Tau, E., et al., “A First Generation DPGA Implementation,” FPD'95, pp. 138-143.
Tenca, A.F., et al., “A Variable Long-Precision Arithmetic Unit Design for Reconfigurable Coprocessor Architectures,” University of California, Los Angeles, 1998, pp. 216-225.
The XPP White Paper, Release 2.1, PACT—A Technical Perspective, Mar. 27, 2002, pp. 1-27.
TMS320C54X DSP: CPU and Peripherals, Texas Instruments, 1996, 25 pages.
TMS320C54x DSP: Mnemonic Instruction Set, Texas Instruments, 1996, 342 pages.
Translation of DE 101 39 170, filed Aug. 16, 2001, by examiner in related case using Google Translate, 10 pages.
Tsutsui, A., et al., “YARDS: FPGA/MPU Hybrid Architecture for Telecommunication Data Processing,” NTT Optical Network Systems Laboratories, Japan, 1997 ACM, pp. 93-99.
Vasell et al., “The Function Processor: A Data-Driven Processor Array for Irregular Computations,” Chalmers University of Technology, Sweden, 1992, pp. 1-21.
Venkatachalam et al., “A highly flexible, distributed multiprocessor architecture for network processing,” Computer Networks, The International Journal of Computer and Telecommunications Networking, vol. 41, No. 5, Apr. 5, 2003, pp. 563-568.
Villasenor, et al., “Configurable Computing Solutions for Automatic Target Recognition,” IEEE, 1996 pp. 70-79.
Villasenor, et al., “Configurable Computing,” Scientific American, vol. 276, No. 6, Jun. 1997, pp. 66-71.
Villasenor, et al., “Express Letters Video Communications Using Rapidly Reconfigurable Hardware,” IEEE Transactions on Circuits and Systems for Video Technology, IEEE, Inc., NY, Dec. 1995, pp. 565-567.
Wada, et al., “A Performance Evaluation of Tree-based Coherent Distributed Shared Memory,” Proceedings of the Pacific RIM Conference on Communications, Comput and Signal Processing, Victoria, May 19-21, 1993, pp. 390-393.
Waingold, E., et al., “Baring it all to software: Raw machines,” IEEE Computer, Sep. 1997, at 86-93.
Webster's Ninth New Collegiate Dictionary, Merriam-Webster, Inc., 1990, p. 332 (definition of “dedicated”).
Weinhardt, M., “Compilation Methods for Structure-programmable Computers,” dissertation, ISBN 3-89722-011-3, 1997. [Table of Contents and English Abstract Provided].
Weinhardt, Markus et al., “Pipeline Vectorization for Reconfigurable Systems,” 1999, IEEE, pp. 52-62.
Weinhardt, Markus et al., “Pipeline Vectorization,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 20, No. 2, Feb. 2001, pp. 234-248.
Weinhardt, Markus et al., “Memory Access Optimization Reconfigurable Systems”, IEEE Proceedings Computers and Digital Techniques, 48(3) (May 2001) pp. 1-16.
Wittig, et al., “OneChip: An FPGA Processor with Reconfigurable Logic,” IEEE, 1996, pp. 126-135.
Wolfe, M. et al., “High Performance Compilers for Parallel Computing,” (Addison-Wesley 1996) Table of Contents, 11 pages.
Wu, et al., “A New Cache Directory Scheme,” IEEE, pp. 466-472, Jun. 1996.
Xilinx, “Logic Cell Array Families: XC4000, XC4000A and XC4000H,” 1994, product description, pp. 2-7, 2-9, 2-14, 2-15, 8-16, and 9-14.
Xilinx, “The Programmable Logic Data Book,” 1994, Section 2, pp. 1-231, Section 8, pp. 1, 23-25, 29, 45-52, 169-172.
Xilinx, “Spartan and SpartanXL Families Field Programmable Gate Arrays,” Jan. 1999, Xilinx, pp. 4-3 through 4-70.
Xilinx, “XC6200 Field Programmable Gate Arrays,” Apr. 24, 1997, Xilinx product description, pp. 1-73.
Xilinx, “XC3000 Series Field Programmable Gate Arrays,” Nov. 6, 1998, Xilinx product description, pp. 1-76.
Xilinx, “XC4000E and XC4000X Series Field Programmable Gate Arrays,” May 14, 1999, Xilinx product description, pp. 1-68.
Xilinx, “Virtex-E 1.8 V Extended Memory Field Programmable Gate Arrays,” (v1.5) Jul. 17, 2002, Xilinx Production Product Specification, pp. 1-118.
Xilinx, “Virtex-E 1.8 V Extended Memory Field Programmable Gate Arrays,” (v2.2) Sep. 10, 2002, Xilinx Production Product Specification, pp. 1-52.
Xilinx, “Virtex-II and Virtex-II Pro X FPGA User Guide,” Mar. 28, 2007, Xilinx user guide, pp. 1-559.
Xilinx, “Virtex-II and Virtex-II Pro X FPGA Platform FPGAs: Complete Data Sheet,” (v4.6) Mar. 5, 2007, pp. 1-302.
Xilinx, “Virtex-II Platform FPGAs: Complete Data Sheet,” (v3.5) Nov. 5, 2007, pp. 1-226.
Xilinx, White Paper 370: (Virtex-6 and Spartan-6 FPGA Families) “Reducing Switching Power with Intelligent Clock Gating,” Frederic Rivoallon, May 3, 2010, pp. 1-5.
Xilinx, White Paper 298: (Spartan-6 and Virtex-6 Devices) “Power Consumption at 40 and 50 nm,” Matt Klein, Apr. 13, 2009, pp. 1-21.
Xu, H. et al., “Parallel QR Factorization on a Block Data Flow Architecture,” Conference Proceeding Article, Mar. 1, 1992, pp. 332-336.
Ye, Z.A. et al., “A C-Compiler for a Processor With a Reconfigurable Functional Unit,” FPGA 2000 ACM/SIGNA International Symposium on Field Programmable Gate Arrays, Monterey, CA Feb. 9-11, 2000, pp. 95-100.
Yeung, A. et al., “A data-driven architecture for rapid prototyping of high throughput DSP algorithms,” Dept. of Electrical Engineering and Computer Sciences, Univ. of California, Berkeley, USA, Proceedings VLSI Signal Processing Workshop, IEEE Press, pp. 225-234, Napa, Oct. 1992.
Yeung, A. et al., “A reconfigurable data-driven multiprocessor architecture for rapid prototyping of high throughput DSP algorithms,” Dept. of Electrical Engineering and Computer Sciences, Univ. of California, Berkeley, USA, pp. 169-178, IEEE 1993.
Zhang, et al., “Architectural Evaluation of Flexible Digital Signal Processing for Wireless Receivers, Signals, Systems and Computers,” 2000; Conference Record of the Thirty-Fourth Asilomar Conference, Bd. 1, Oct. 29, 2000, pp. 78-83.
Zhang, et al., “A 1-V Heterogeneous Reconfigurable DSP IC for Wireless Baseband Digital Signal Processing,” IEEE Journal of Solid-State Circuits, vol. 35, No. 11, Nov. 2000, pp. 1697-1704.
Zhang et al., “Abstract: Low-Power Heterogeneous Reconfigurable Digital Signal Processors with Energy-Efficient Interconnect Network,” U.C. Berkeley (2004), pp. 1-120.
Zima, H. et al., “Supercompilers for parallel and vector computers,” (Addison-Wesley 1991) Table of Contents, 5 pages.
Zucker, Daniel F., “A Comparison of Hardware Prefetching Techniques for Multimedia Benchmarks,” Technical Report: CSL-TR-95-683, Dec. 1995, 26 pages.
Xilinx, Inc.'s and Avnet, Inc.'s Disclosure Pursuant to P.R. 4-2; PACT XPP Technologies, AG. V. Xilinx, Inc. and Avnet, Inc., Case No. 2:07-cv-00563-TJW-CE, U.S. District Court for the Eastern District of Texas, Dec. 28, 2007, 4 pages.
Xilinx, Inc.'s and Avnet, Inc.'s Disclosure Pursuant to P.R. 4-1; PACT XPP Technologies, AG. V. Xilinx, Inc. and Avnet, Inc., Case No. 2:07-cv-00563-TJW-CE, U.S. District Court for the Eastern District of Texas, Dec. 28, 2007, 9 pages.
Defendant's Claim Construction Chart for P.R. 4-2 Constructions and Extrinsic Evidence for Terms Proposed by Defendants, PACT XPP Technologies, AG. V. Xilinx, Inc. and Avnet, Inc., Case No. 2:07-cv-00563-TJW-CE, U.S. District Court for the Eastern District of Texas, Dec. 28, 2007, pp. 1-19.
PACT's P.R. 4-1 List of Claim Terms for Construction, PACT XPP Technologies, AG. V. Xilinx, Inc. and Avnet, Inc., Case No. 2:07-cv-00563-TJW-CE, U.S. District Court for the Eastern District of Texas, Dec. 28, 2007, pp. 1-7.
PACT's P.R. 4-2 Preliminary Claim Constructions and Extrinsic Evidence, PACT XPP Technologies, AG. V. Xilinx, Inc. and Avnet, Inc., Case No. 2:07-cv-00563-TJW-CE, U.S. District Court for the Eastern District of Texas, Dec. 28, 2007, pp. 1-16, and Exhibits re Extrinsic Evidence Parts in seven (7) separate additional PDF files (Parts 1-7).
Agarwal, A., et al., “APRIL: A Processor Architecture for Multiprocessing,” Laboratory for Computer Science, MIT, Cambridge, MA, IEEE 1990, pp. 104-114.
Almasi and Gottlieb, Highly Parallel Computing, The Benjamin/Cummings Publishing Company, Inc., Redwood City, CA, 1989, 3 pages (Fig. 4.1).
Advanced RISC Machines Ltd (ARM), “AMBA—Advanced Microcontroller Bus Architecture Specification,” (Document No. ARM IHI 0001C), Sep. 1995, 72 pages.
Alfke, Peter; New, Bernie, Xilinx Application Note, “Additional XC3000 Data,” XAPP 024.000, 1994, pp. 8-11 through 8-20.
Alfke, Peter; New, Bernie, Xilinx Application Note, “Adders, Subtracters and Accumulators in XC3000,” XAPP 022.000, 1994, pp. 8-98 through 8-104.
Alfke, Peter, Xilinx Application Note, “Megabit FIFO in Two Chips: One LCA Device and One DRAM,” XAPP 030.000, 1994, pp. 8-148 through 8-150.
Alfke, Peter, Xilinx Application Note, “Dynamic Reconfiguration,” XAPP 093, Nov. 10, 1997, pp. 13-45 through 13-46 through 8-150.
Alfke, Peter; New, Bernie, Xilinx Application Note, “Implementing State Machines in LCA Devices,” XAPP 027.001, 1994, pp. 8-169 through 8-172.
Algotronix, Ltd., CAL64K Preliminary Data Sheet, Apr. 1989, pp. 1-24.
Algotronix, Ltd., CAL4096 Datasheet, 1992, pp. 1-53.
Algotronix, Ltd., CHS2x4 User Manual, “CHA2x4 Custom Computer,” 1991, pp. 1-38.
Allaire, Bill; Fischer, Bud, Xilinx Application Note, “Block Adaptive Filter,” XAPP 055, Aug. 15, 1996 (Version 1.0), pp. 1-10.
Altera Application Note (73), “Implementing FIR Filters in FLEX Devices,” Altera Corporation, Feb. 1998, ver. 1.01, pp. 1-23.
Athanas, P. (Thesis), “An adaptive machine architecture and compiler for dynamic processor reconfiguration,” Brown University 1992, pp. 1-157.
Berkeley Design Technology, Inc., Buyer's Guide to DSP Processors, 1995, Fremont, CA., pp. 673-698.
Bittner, R. et al., “Colt: An Experiment in Wormhole Run-Time Reconfiguration,” Bradley Department of Electrical and Computer Engineering, Blacksburg, VA, SPIE—International Society for Optical Engineering, vol. 2914/187, Nov. 1996, Boston, MA, pp. 187-194.
Camilleri, Nick; Lockhard, Chris, Xilinx Application Note, “Improving XC4000 Design Performance,” XAPP 043.000, 1994, pp. 8-21 through 8-35.
Cartier, Lois, Xilinx Application Note, “System Design with New XC4000EX I/O Features,” Feb. 21, 1996, pp. 1-8.
Chen, D., (Thesis) “Programmable arithmetic devices for high speed digital signal processing,” U. California Berkeley 1992, pp. 1-175.
Churcher, S., et al., “The XC6200 FastMap TM Processor Interface,” Xilinx, Inc., Aug. 1995, pp. 1-8.
Cowie, Beth, Xilinx Application Note, “High Performance, Low Area, Interpolator Design for the XC6200,” XAPP 081, May 7, 1997 (Version 1.0), pp. 1-10.
Duncan, Ann, Xilinx Application Note, “A32x16 Reconfigurable Correlator for the XC6200,” XAPP 084, Jul. 25, 1997 (Version 1.0), pp. 1-14.
Ebeling, C., et al., “RaPiD—Reconfigurable Pipelined Datapath,” Dept. of Computer Science and Engineering, U. Washington, 1996, pp. 126-135.
Epstein, D., “IBM Extends DSP Performance with Mfast—Powerful Chip Uses Mesh Architecture to Accelerate Graphics, Video,” 1995 MicroDesign Resources, vol. 9, No. 16, Dec. 4, 1995, pp. 231-236.
Fawcett, B., “New SRAM-Based FPGA Architectures Address New Applications,” Xilinx, Inc. San Jose, CA, Nov. 1995, pp. 231-236.
Goslin, G; Newgard, B, Xilinx Application Note, “16-Tap, 8-Bit FIR Filter Applications Guide,” Nov. 21, 1994, pp. 1-5.
Iwanczuk, Roman, Xilinx Application Note, “Using the XC4000 RAM Capability,” XAPP 031.000, 1994, pp. 8-127 through 8-138.
Knapp, Steven, “Using Programmable Logic to Accelerate DSP Functions,” Xilinx, Inc., 1995, pp. 1-8.
New, Bernie, Xilinx Application Note, “Accelerating Loadable Counters in SC4000,” XAPP 023.001, 1994, pp. 8-82 through 8-85.
New, Bernie, Xilinx Application Note, “Boundary Scan Emulator for XC3000,” XAPP 007.001, 1994, pp. 8-53 through 8-59.
New, Bernie, Xilinx Application Note, “Ultra-Fast Synchronous Counters,” XAPP 014.001, 1994, pp. 8-78 through 8-81.
New, Bernie, Xilinx Application Note, “Using the Dedicated Carry Logic in XC4000,” XAPP 013.001, 1994, pp. 8-105 through 8-115.
New, Bernie, Xilinx Application Note, “Complex Digital Waveform Generator,” XAPP 008.002, 1994, pp. 8-163 through 8-164.
New, Bernie, Xilinx Application Note, “Bus-Structured Serial Input-Output Device,” XAPP 010.001, 1994, pp. 8-181 through 8-182.
Ridgeway, David, Xilinx Application Note, “Designing Complex 2-Dimensional Convolution Filters,” XAPP 037.000, 1994, pp. 8-175 through 8-177.
Rowson, J., et al., “Second-generation compilers optimize semicustom circuits,” Electronic Design, Feb. 19, 1987, pp. 92-96.
Schewel, J., “A Hardware/Software Co-Design System using Configurable Computing Technology,” Virtual Computer Corporation, Reseda, CA, IEEE 1998, pp. 620-625.
Segers, Dennis, Xilinx Memorandum, “MIKE—Product Description and MRD,” Jun. 8, 1994, pp. 1-29.
Texas Instruments, “TMS320C8x System-Level Synopsis,” Sep. 1995, 75 pages.
Texas Instruments, “TMS320C80 Digital Signal Processor,” Data Sheet, Digital Signal Processing Solutions 1997, 171 pages.
Texas Instruments, “TMS320C80 (MVP) Parallel Processor,” User's Guide, Digital Signal Processing Products 1995, 73 pages.
Trainor, D.W., et al., “Implementation of the 2D DCT Using a Xilinx XC6264 FPGA,” 1997, IEEE Workshop of Signal Processing Systems SiPS 97, pp. 541-550.
Trimberger, S, (Ed.) et al., “Field-Programmable Gate Array Technology,” 1994, Kluwer Academic Press, pp. 1-258 (and the Title Page, Table of Contents, and Preface) [274 pages total].
Trimberger, S., “A Reprogrammable Gate Array and Applications,” IEEE 1993, Proceedings of the IEEE, vol. 81, No. 7, Jul. 1993, pp. 1030-1041.
Trimberger, S., et al., “A Time-Multiplexed FPGA,” Xilinx, Inc., 1997 IEEE, pp. 22-28.
Ujvari, Dan, Xilinx Application Note, “Digital Mixer in an XC7272,” XAPP 035.002, 1994, p. 1.
Veendrick, H., et al., “A 1.5 GIPS video signal processor (VSP),” Philips Research Laboratories, The Netherlands, IEEE 1994 Custom Integrated Circuits Conference, pp. 95-98.
Wilkie, Bill, Xilinx Application Note, “Interfacing XC6200 to Microprocessors (TMS320C50 Example),” XAPP 064, Oct. 9, 1996 (Version 1.1), pp. 1-9.
Wilkie, Bill, Xilinx Application Note, “Interfacing XC6200 to Microprocessors (MC68020 Example),” XAPP 063, Oct. 9, 1996 (Version 1.1), pp. 1-8.
XCELL, Issue 18, Third Quarter 1995, “Introducing three new FPGA Families!”; “Introducing the XC6200 FPGA Architecture: The First FPGA Architecture Optimized for Coprocessing in Embedded System Applications,” 40 pages.
Xilinx Application Note, Advanced Product Specification, “XC6200 Field Programmable Gate Arrays,” Jun. 1, 1996 (Version 1.0), pp. 4-253—4-286.
Xilinx Application Note, “A Fast Constant Coefficient Multiplier for the XC6200,” XAPP 082, Aug. 24, 1997 (Version 1.0), pp. 1-5.
Xilinx Technical Data, “XC5200 Logic Cell Array Family,” Preliminary (v1.0), Apr. 1995, pp. 1-43.
Xilinx Data Book, “The Programmable Logic Data Book,” 1996, 909 pages.
Xilinx, Series 6000 User's Guide, Jun. 26, 1997, 223 pages.
Yeung, K., (Thesis) “A Data-Driven Multiprocessor Architecture for High Throughput Digital Signal Processing,” Electronics Research Laboratory, U. California Berkeley, Jul. 10, 1995, pp. 1-153.
Yeung, L., et al., “A 2.4GOPS Data-Driven Reconfigurable Multiprocessor IC for DSP,” Dept. of EECS, U. California Berkeley, 1995 IEEE International Solid State Circuits Conference, pp. 108-110.
ZILOG Preliminary Product Specification, “Z86C95 CMOS Z8 Digital Signal Processor,” 1992, pp. 1-82.
ZILOG Preliminary Product Specification, “Z89120 Z89920 (ROMless) 16-Bit Mixed Signal Processor,” 1992, pp. 1-82.
Defendants' Invalidity Contentions in PACT XPP Technologies, AG v. Xilinx, Inc., et al., (E.D. Texas Dec. 28, 2007) (No. 2:07cv563)., including Exhibits A through K in separate PDF files.
Microsoft Press Computer Dictionary, Third Edition, Redmond, WA, 1997, 3 pages.
Microsoft Press Computer Dictionary, Second Edition, Redmond, WA, 1994, 3 pages.
A Dictionary of Computing, Fourth Edition, Oxford University Press, 1997, 4 pages.
Communications Standard Dictionary, Third Edition, Martin Welk (Ed.), Chapman & Hall, 1996, 3 pages.
Dictionary of Communications Technology, Terms Definitions and Abbreviations, Second Edition, Gilbert Held (Ed.), John Wiley & Sons, England, 1995, 5 pages.
The Random House College Dictionary, Revised Edition, Random House, Inc., 1984, 14 pages.
The Random House College Dictionary, Revised Edition, Random House, Inc., 1984, 7 pages.
Random House Webster's College Dictionary with CD-ROM, Random House, 2001, 7 pages.
Random House Webster's College Dictionary with CD-ROM, Random House, 2001, 4 pages.
Random House Personal Computer Dictionary, Second Edition, Philip E. Margolis (Ed.), Random House, New York, 1996, 5 pages.
The IEEE Standard Dictionary of Electrical and Electronics Terms, Sixth Edition, 1996, 36 pages.
The IEEE Standard Dictionary of Electrical and Electronics Terms, Sixth Edition, 1996, 8 pages.
McGraw-Hill Electronics Dictionary, Sixth Edition, Neil Sclater et al. (Ed.), McGraw-Hill, 1997, 3 pages.
Modem Dictionary of Electronics, Sixth Edition, Rudolf Graf (Ed.), Newnes (Butterwoth-Heinemann), 1997, 5 pages.
The American Heritage Dictionary, Fourth Edition, Dell (Houghton-Mifflin), 2001, 5 pages.
The American Heritage Dictionary, Second College Edition, Houghton Mifflin, 1982, 23 pages.
The American Heritage Dictionary, Second College Edition, Houghton Mifflin, 1982, 8 pages.
The American Heritage Dictionary, Third Edition, Dell Publishing (Bantam Doubleday Dell Publishing Group, Inc.), 1994, 4 pages.
The American Heritage Dictionary, Fourth Edition, Dell/Houghton Mifflin 2001, 5 pages.
Webster's New Collegiate Dictionary, Merriam Co., 1981, 5 pages.
Webster's New Collegiate Dictionary, Merriam Co., 1981, 4 pages.
The Oxford American Dictionary and Language Guide, Oxford University Press, 1999, 5 pages.
The Oxford Duden German Dictionary, Edited by the Dudenredaktion and the German Section of the Oxford University Press, W. Scholze-Stubenrecht et al. (Eds), Clarendon Press, Oxford, 1990, 7 pages.
Oxford Dictionary of Computing, Oxford University Press, 2008, 4 pages.
Modern Dictionary of Electronics, Sixth Edition Revised and Updated, Rudolf F. Graf (Ed.), Butterworth-Heinemann, 1997, 7 pages.
Modern Dictionary of Electronics, Sixth Edition Revised and Updated, Rudolf F. Graf (Ed.), Butterworth-Heinemann, 1997, 5 pages.
Garner's Modern American Usage, Bryan A. Garner (Ed.), Oxford University Press, 2003, 3 pages.
The New Fowler's Modern English Usage, R.W. Burchfield (Ed.) , Oxford University Press, 2000, 3 pages.
Wikipedia, the free encyclopedia, “Granularity,” at http://en.wikipedia.org/wiki/Granularity, Jun. 18, 2010, 4 pages.
Wordsmyth, The Premier Educational Dictionary—Thesaurus, at http://www.wordsmyth.net, “communication,” Jun. 18, 2010, 1 page.
Yahoo! Education, “affect,” at http://education.yahoo.com/reference/dictionary/entry/affect, Jun. 18, 2010, 2 pages.
mPulse Living Language, “high-level,” at http://www.macmillandictionary.com/dictionary/american/high-level Jun. 18, 2010, 1 page.
MSN Encarta, “regroup,” at http://encarta.msn.com/encnet/features/dictionary/DictionaryResults.aspx?lextype=3&search=regroup, Jun. 17, 2010, 2 pages.
MSN Encarta, “synchronize,” at http://encarta.msn.com/encnet/features/dictionary/DictionaryResults.aspx?lextype=3&search=synchronize, Jun. 17, 2010, 2 pages.
MSN Encarta, “pattern,” at http://encarta.msn.com/encnet/features/dictionary/DictionaryResults.aspx?lextype=3&search=pattern, Jun. 17, 2010, 2 pages.
MSN Encarta, “dimension,” at http://encarta.msn.com/encnet/features/dictionary/DictionaryResults.aspx?lextype=3&search=dimension, Jun. 17, 2010, 2 pages.
MSN Encarta, “communication,” at http://encarta.msn.com/encnet/features/dictionary/DictionaryResults.aspx?lextype=3&search=communication, Jun. 17, 2010, 2 pages.
MSN Encarta, “arrangement,” at http.//encarta.msn.com/encnet/features/dictionary/DictionaryResults.aspx?lextype=3&search=arrangement, Jun. 17, 2010, 2 pages.
MSN Encarta, “vector,”at http://encarta.msn.com/encnet/features/dictionary/DictionaryResults.aspx?lextype=3&search=vector, Jul. 30, 2010, 2 pages.
Dictionary.com, “address,” at http://dictionary.reference.com/browse/address Jun. 18, 2010, 4 pages.
P.R . 4-3 Joint Claim Constructions Statement, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Jul. 19, 2010, pp. 1-50.
Order Granting Joint Motion for Leave to File an Amended Joint Claim Construction and Prehearing Statement and Joint Motion to File an Amended Joint Claim Construction and Prehearing Statement Pursuant to Local Patent Rule 4-3, and Exhibit A: P.R. 4-3 Amended Joint Claim Constructions Statement, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Aug. 2, 2010, 72 pages.
P.R. 4-3 Amended Joint Claim Constructions Statement, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Aug. 3, 2010, pp. 1-65.
Exhibit A—P.R. 4-3 Amended Joint Claim Constructions Statement, PACT XPP Technologies, AG v. Xilinx, Inc. aAnd Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Aug. 2, 2010, pp. 1-66.
PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv00563-CE, Nov. 1, 2010, pp. 1-55.
Declaration of Harry L. (Nick) Tredennick in Support of PACT's Claim Constructions, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-87.
Transcript of Harry (Nick) L. Tredennick III, Ph.D., Oct. 11, 2010, vol. 1, Exhibit 16 of PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-3.
Agreed and Disputed Terms, Exhibit 17 of PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-16.
Oral Videotaped Deposition—Joseph McAlexander dated Oct. 12, 2010, vol. 1, Exhibit 18 of PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-17.
Expert Report of Joe McAlexander Re Claim Construction dated Sep. 27, 2010, Exhibit 19 of PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avner, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-112.
Documents from File History of U.S. Appl. No. 09/290,342, (filed Apr. 12, 1999), Exhibit 20 of PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-37.
Amendment from File History of U.S. Appl. No. 10/156,397, (filed May 28, 2002), Exhibit 25 of PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-12.
Documents from File History U.S. Appl. No. 09/329,132, (filed Jun. 9, 1999), Exhibit 27 of PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-36.
Amendment from File History of U.S. Appl. No. 10/791,501, (filed Mar. 1, 2004), Exhibit 39 of PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-9.
Amendment from File History of U.S. Appl. No. 10/265,846, (filed Oct. 7, 2002), Exhibit 40 of PACT's Opening Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Nov. 1, 2010, pp. 1-12.
Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-55.
Declaration of Aaron Taggart in Support of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief (Exhibit A), PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-5.
Oral Videotaped Deposition Joseph McAlexander (Oct. 12, 2010), Exhibit 1 of of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-9.
Expert Report of Joe McAlexander re Claim Construction, Exhibit 2 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-137.
Documents from File History of U.S. Appl. No. 09/290,342, (filed Apr. 12, 1999), Exhibit 6 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv00563-CE, Dec. 6, 2010, pp. 1-181.
Transcript of Harry (Nick) L. Tredennick III, Ph.D., Oct. 11, 2010, vol. 1, Exhibit 7 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-28.
Amendment, Response from File History of U.S. Appl. No. 10/156,397, (filed May 28, 2002), Exhibit 15 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-137.
Application from File History of U.S. Appl. No. 08/544,435, (filed Nov. 17, 1995), Exhibit 20 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-102.
Documents from File History of U.S. Appl. No. 09/329,132, (filed Jun. 9, 1999), Exhibit 24 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-13.
Documents from File History of U.S. Appl. No. 10/791,501, (filed Mar. 1, 2004), Exhibit 25 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-14.
Amendment from File History of U.S. Appl. No. 11/246,617, (filed Oct. 7, 2005), Exhibit 26 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-9.
Documents from File History of U.S. Appl. No. 08/947,254, (filed Oct. 8, 1997), Exhibit 27 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-38.
Documents from File History of U.S. Appl. No. 08/947,254, (filed Oct. 8, 1997), specifically, German priority application specification [English translation provided], Exhibit 33 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, 54 pages [including English translation].
Documents from File History of U.S. Appl. No. 09/335,974, (filed Jun. 18, 1999), Exhibit 28 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-32.
Documents from File History of U.S. Patent Reexamination Control No. 90/010,450 (filed Mar. 27, 2009), Exhibit 30 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-71.
Documents from File History of U.S. Appl. No. 10/265,846, (filed Oct. 7, 2002), Exhibit 32 of Defendants Xilinx, Inc. and Avnet, Inc.'s Responsive Claim Construction Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Dec. 6, 2010, pp. 1-23.
PACT's Claim Construction Reply Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Jan. 7, 2011, pp. 1-20.
Defendants Xilinx, Inc. and Avnet, Inc.'s Claim Construction Surreply Brief, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Jan. 18, 2011, 142 pages.
Markman Hearing Minutes and Attorney Sign-In Sheet, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Feb. 22, 2011, 3 pages; and court transcript, 245 pages.
Memorandum Opinion and Order, PACT XPP Technologies, AG v. Xilinx, Inc. and Avnet, Inc. et al., E.D. Texas, 2:07-cv-00563-CE, Jun. 17, 2011, pp. 1-71.
Atmel Corporation, Atmel 5K-50K Gates Coprocessor FPGA and FreeRAM, (www.atmel.com), Apr. 2002, pp. 1-68.
Glaskowsky, Peter N., “PACT Debuts Extreme Processor; Reconfigurable ALU Array Is Very Powerful—and Very Complex,” The Insider's Guide to Microprocessor Hardware, MicroDesign Resources—Microprocessor Report, Oct. 9, 2000 (www.MPRonline.com), 6 pages.
Glaskowsky, Peter N., “Analysts' Choice Nominees Named; Our Picks for 2002's Most Important Products and Technologies,” The Insider's Guide to Microprocessor Hardware, MicroDesign Resources—Microprocessor Report, Dec. 9, 2002 (www.MPRonline.com), 4 pages.
Lattice Semiconductor Corporation, “ispLSI 2000E, 2000VE and 2000 VL Family Architectural Description,” Oct. 2001, pp. 1-88.
Olukotun, K. et al., “Rationale, Design and Performance of the Hydra Multiprocessor,” Computer Systems Laboratory, Stanford University, CA, Nov. 1994, pp. 1-19.
PACT Corporate Backgrounder, PACT company release, Oct. 2008, 4 pages.
Page, Ian, “Reconfigurable processor architectures,” Oxford University Computing Laboratory, Oxford UK, Elsevier Science B.V., Microprocessors and Microsystems 20 (1996) pp. 185-196.
Singh, Hartej et al., “Morpho-Sys: A Reconfigurable Architecture for Multimedia Applications,” Univ. of California, Irvine, CA and Federal University of Rio de Janeiro, Brazil, IEEE Transactions on Computers, 1998 at http://www.eng.uci.edu/morphosys/docs/sbcci98.html, 10 pages.
Theodoridis, G. et al., “Chapter 2—A Survey of Coarse-Grain Reconfigurable Architectures and CAD Tools, Basic Definitions, Critical Design Issues and Existing Coarse-grain Reconfigurable Systems,” from S. Vassiliadis and D. Soudris (eds.), Fine- and Coarse-Grained Reconfigurable Computing, Springer 2007, pp. 89-149.
Weinhardt, Markus et al., “Using Function Folding to Improve Silicon Efficiency of Reconfigurable Arithmetic Arrays,” PACT XPP Technologies AG, Munich, Germany, IEEE 2004, pp. 239-245.
Xilinx, XC6200 Field Programmable Gate Arrays, Advance Product Specification, Jun. 1, 1996 (Version 1.0), pp. 4-255 through 4-286.
Xilinx, Virtex-II Platform FPGA User Guide, UG002 (V2.1), Mar. 28, 2007, pp. 1-502 [Parts 1-3].
Xilinx, XC4000E and XC4000X Series Field Programmable Gate Arrays, Product Specification (Version 1.6), May 14, 1999, pp. 1-107.
Shreekant et al., “The Internet Streaming SIMD Extensions,” Intel Corp., Intel Technology Journal Q2, 1999, pp. 1-8.
Bondalapati et al., “Reconfigurable Computing: Architectures, Models, and Algorithms,” Apr. 2000, Current Science, vol. 78, No. 7, pp. 828-837.
Ganesan et al., “An Integrated Temporal Partitioning and Partial Reconfiguration Technique for Design Latency Improvement,” 2000, Proceedings of the Conference on Design, Automation and Test in Europe, pp. 320-325.
Hartenstein et al., “Using the KressArray for Reconfigurable Computing,” Nov. 1998, SPIE Conference on Configurable Computing: Technology and Applications, pp. 150-161.
Hauck, S., “Configuration Prefetch for Single Context Reconfigurable Coprocessors,” 1998, Proceedings of the 1998 ACM/SIGDA Sixth International Symposium on Field Programmable Gate Arrays, pp. 65-74.
Li et al., “Configuration Caching Management Techniques for Reconfigurable Computing,” 2000, Proceedings of the 2000 IEEE Symposium on Field-Programmable Custom Computing Machines, pp. 22-36.
Related Publications (1)
Number: 20120311301 A1; Date: Dec. 2012; Country: US
Continuations (3)
Parent 11122500 (May 2005, US); Child 13491894 (US)
Parent 10191926 (Jul. 2002, US); Child 10297959 (US)
Parent 09623052 (US); Child 10191926 (US)
Continuations in Part (2)
Parent 10764159 (US); Child 11122500 (US)
Parent 10297959 (Jun. 2003, US); Child 10764159 (US)