The present invention may be applied to programmable arithmetic and/or logic hardware modules (Virtual Processing Units—VPUs) which can be reprogrammed during operation. For example, the present invention may be applied to VPUs having a plurality of arithmetic and/or logic units whose interconnection can also be programmed and reprogrammed during operation. Such logical hardware modules are available from several manufacturers under the generic name FPGA (Field-Programmable Gate Array). Furthermore, several patents have been published which describe special arithmetic hardware modules having automatic data synchronization and improved arithmetic data processing.
All the above-described hardware modules may have a two-dimensional or multidimensional arrangement of logical and/or arithmetic units (Processing Array Elements—PAEs) which can be interconnected via bus systems.
The above-described hardware modules may either have the units listed below, or these units may be programmed or added (including externally):
1. at least one configuration unit (CT) for loading configuration data;
2. PAEs;
3. at least one interface unit for one or more memory(ies) and/or peripheral device(s).
An object of the present invention is to provide a programming method which allows the above-described hardware modules to be efficiently programmed with conventional high-level programming languages, making automatic, full, and efficient use of the parallelism of the above-described hardware modules obtained by the plurality of units to the maximum possible degree.
Hardware modules of the type mentioned above may be programmed using popular data flow languages. This can create two basic problems:
1. A programmer must become accustomed to programming in data flow languages; multilevel sequential tasks can generally be described only in a complex manner;
2. Large applications and sequential descriptions can be mapped to the desired target technology (synthesized) with the existing translation programs (synthesis tools) only to a certain extent.
In general, applications are partitioned into multiple subapplications, which are then synthesized to the target technology individually (
Existing synthesis tools are capable of mapping program loops onto hardware modules only to a certain extent (
Contrary to FOR loops, WHILE loops (0203) have no constant abort value. Instead, a condition is evaluated to determine when the loop aborts. Therefore, normally (when the condition is not constant), it is not known at the time of the synthesis when the loop will be aborted. Due to this dynamic behavior, synthesis tools cannot map such loops onto the hardware, e.g., transfer them to a target module, in a fixed manner.
Using conventional synthesis tools, recursions basically cannot be mapped onto hardware if the recursion depth is not known at the time of the synthesis. Mapping may be possible if the recursion depth is known, e.g., constant. When recursion is used, new resources are allocated with each new recursion level. This would mean that new hardware has to be made available with each recursion level, which, however, is dynamically impossible.
Even simple basic structures can be mapped by synthesis tools only when the target module is large enough to offer sufficient resources.
Simple time dependencies (0301) are not partitioned into multiple subapplications by conventional synthesis tools and can therefore be transferred onto a target module as a whole.
Conditional executions (0302) and loops over conditions (0303) can also be mapped only if sufficient resources exist on the target module.
The method described in German Patent 44 16 881 allows conditions to be recognized within the hardware structures of the above-mentioned modules at runtime and makes it possible to dynamically respond to such conditions so that the function of the hardware is modified according to the condition received, which is basically accomplished by configuring a new structure.
The method according to the present invention may include the partitioning of graphs (applications) into time-independent subgraphs (subapplications).
The term “time independence” is defined so that the data which are transmitted between two subapplications are separated by a memory of any design (including a simple register). This is possible, in particular, at the points of a graph where there is a clear interface with a limited and minimum amount of signals between the two subgraphs.
Furthermore, points in the graph having the following features, for example, may be particularly suitable:
1. There are few signals or variables between the nodes;
2. A small amount of data is transmitted via the signals or variables;
3. There is no feedback, e.g., no signals or variables are transmitted in the direction opposite to the others.
In the case of large graphs, time independence may be achieved by introducing specific, clearly defined interfaces that are as simple as possible to store data in a buffer (see S1, S2 and S3 in
Loops often have a strong time independence with respect to the rest of the algorithm, since they may work over a long period on a limited number of variables that are (mostly) local in the loop and may require a transfer of operands or of the result only when entering or leaving the loop.
With time independence, after a subapplication has been completely executed, the subsequent subapplication can be loaded without any further dependencies or influences occurring. When the data is stored in the above-named memory, a status signal trigger, as described in German Patent Application No. 197 04 782.9, filed on Feb. 8, 1997, can be generated, which may request the higher-level load unit to load the next subapplication. When simple registers are used as memories, the trigger may be generated when data is written into the register. When memories are used, in particular memories operating by the FIFO principle, triggers may be generated depending on multiple conditions. For example, the following conditions, individually or in combination, can generate a trigger:
In the following, a subapplication may also be referred to as a software module in order to improve understandability from the point of view of conventional programming. For the same reason, signals may also be called variables. These variables may differ from conventional variables in one important aspect: a status signal (Ready) which shows whether a given variable has a legal value may be assigned to each variable. If a signal has a legal (calculated) value, the status signal may be Ready; if the signal has no legal value (calculation not yet completed), the status signal may be Not_Ready. This principle is described in detail in German Patent Application No. 196 51 075.9.
In summary, the following functions may be assigned to the triggers:
1. Control of data processing, based on the status of individual processing array elements (PAEs);
2. Control of reconfiguration of PAEs (time sequence of the subapplications).
In particular, the abort criteria of loops (WHILE) and recursions, as well as conditional jumps in subapplications, may be implemented by triggers.
In case 1, the triggers are exchanged between PAEs; in case 2, the triggers are transmitted by the PAEs to the CT. The transition between case 1 and case 2 may depend on the number of subapplications running at the time in the matrix of PAEs. In other words, triggers may be sent to the subapplications currently being executed on the PAEs. If a subapplication is not configured, the triggers are sent to the CT. If this subapplication were also configured, the respective triggers would be sent directly to the respective PAEs.
This results in automatic scaling of the computing performance with increasing PAE array size, e.g., with cascading of a plurality of PAE matrices. No additional reconfiguration time is needed; instead, the triggers are sent directly to the PAEs, which are then already configured.
Example Wave Reconfiguration
A plurality of software modules may be overlapped using appropriate hardware architecture (see FIGS. 10/11). A plurality of software modules may be pre-configured in the PAEs at the same time. Switching between configurations may be performed with minimum expenditure in time, so only one configuration is activated at one time for each PAE.
In a collection of PAEs into which a software module A and a module B are preconfigured, one part of this collection can be activated using a part of A while another part of this collection is activated at the same time using a part of B. The separation of the two parts is given exactly by the PAE in which the switch-over state between A and B occurs. This means that, from a certain point in time, B is activated in all PAEs in which A was activated for execution prior to this time, while in all other PAEs A is still activated after this time. With increasing time, B is activated in more and more PAEs.
Switch-over may take place on the basis of specific data, states which result from the computation of the data, or on the basis of any other events which are generated externally, e.g., by the CT.
As a result, after a data packet has been processed, switch-over to another configuration may take place. At the same time/alternatively, a signal (RECONFIG-TRIGGER) can be sent to the CT, which causes new configurations to be pre-loaded by the CT. Pre-loading can take place onto other PAEs, which are dependent on or independent of the current data processing. By isolating the active configuration from the configurations which are now available for reconfiguration (see FIGS. 10/11), new configurations can be loaded even into PAEs that are currently operating (active), in particular also the PAE which generated the RECONFIG-TRIGGER. This allows a configuration to overlap with the data processing.
In
In the next cycle, the data packet runs to PAE2 and a new data packet appears in PAE1. F is also active in PAE2. Together with the data packet, an event (↑1) appears in PAE1. The event may occur whenever the PAE receives any external event (e.g., a status flag or a trigger) or it is generated within the PAE by the computation performed.
In
In
FIGS. 13g to 13j show that, when running a wave reconfiguration, not all PAEs need to operate according to the same pattern. The way a PAE is configured by a wave configuration depends mainly on its own configuration. It should be mentioned here that PAE4 to PAE6 are configured so that they respond to events differently from the other PAEs. For example, in
In
It is not absolutely necessary that a reconfiguration having taken place once take place throughout the entire flow. For example, reconfiguration with activation of A in response to event (↑2) could take place only locally in PAEs 1 to 3 and PAE7, while configuration H continues to remain activated in all the other PAEs.
In other words:
In PAEs which continue to keep H activated even after (↑2), the receipt of event (↑3) may, of course, have a completely different effect: (i) it may, for example, activate C instead of loading G; (ii) alternatively, (↑3) might not have any effect at all on these PAEs.
Example Processor Model
The example graphs shown in the following figures always have one software module as a graph node. It will be appreciated that a plurality of software modules may be mapped onto one target hardware module. This means that, although all software modules are time independent of one another, reconfiguration is performed and/or a data storage device is inserted only in those software modules which are marked with a vertical line and Δt. This point is referred to as reconfiguration time.
The reconfiguration time depends on certain data or the states resulting from the processing of certain data.
It will be appreciated that:
1. Large software modules can be partitioned at suitable points and broken down into small software modules which are time independent of one another, and fit into the PAE array in an optimum manner.
2. In the case of small software modules, which can be mapped together onto a target module, time independence is not needed. This saves configuration steps and speeds up data processing.
3. The reconfiguration times may be positioned according to the resources of the target modules. This makes it possible to scale the graph length in any desired manner.
4. Software modules may be configured with superimposition.
5. The reconfiguration of software modules may be controlled through the data itself or through the result of data processing.
6. The data generated by the software modules may be stored; the chronologically subsequent software modules read the data from this memory and in turn store their results in a memory or output the end result to the peripheral devices.
Example Use of Status Information in the Processor Model
In order to determine the states within a graph, the status registers of the individual cells (PAEs) may be made available to all the other arithmetic units via a freely routable and segmentable status bus system (0802) which exists in addition to the data bus (0801) (
The network of the status signals (0802) may represent a freely and specifically distributed status register of a single conventional processor (or of multiple processors of a Symmetric Multiprocessing (SMP) computer). The status of each individual Arithmetic Logic Unit (ALU) (e.g., each individual processor) and, in particular, each individual piece of status information may be available to the ALU or ALUs (processors) that need the information. There is no additional program runtime or communication runtime (except for the signal runtimes) for exchange of information between the ALUs (processors).
In conclusion, it should be noted that, depending on the task, both the data flow chart and the control flow chart may be treated according to the above-described method.
Example Virtual Machine Model
According to the previous sections, the principles of data processing using VPU hardware modules are mainly data flow oriented. However, in order to execute sequential programs with a reasonable performance, a sequential data processing model must be available for which the sequencers in the individual PAEs are often insufficient.
However, the architecture of VPUs basically allows sequencers of any desired complexity to be formed from individual PAEs. This means:
Thus, a virtual machine corresponding in particular to the sequential requirements of an algorithm may be implemented on VPUs.
An advantage of the VPU architecture is that an algorithm can be broken down by a compiler so that the data flow portions are extracted. The algorithm may be represented by an “optimum” data flow, in that an adjusted data flow is configured AND the sequential portions of the algorithm are represented by an “optimum” sequencer, by configuring an adjusted sequencer. A plurality of sequencers and data flows may be accommodated on one VPU at the same time.
As a result of the large number of PAEs, there may be a large number of local states within a VPU during operation. When changing tasks or calling a subprogram (interrupts), these states may need to be saved (see PUSH/POP for standard processors). This, however, may be difficult in practice due to the large number of states.
In order to reduce the states to a manageable number, a distinction must be made between two types of state:
In the case of DATA-STATES, handling can be further simplified depending on the algorithm. Two basic strategies are explained in detail below:
1. Concomitant Run of the Status Information
All the relevant status information that is needed at a later time may be transferred from one software module to the next as normally implemented in pipelines. The status information is then implicitly stored, together with the data, in a memory, so that the states are also available when the data is called. Therefore, no explicit handling of the status information takes place, in particular using PUSH and POP, which considerably speeds up processing depending on the algorithm, as well as results in simplified programming. The status information can be either stored with the respective data packet or, only in the event of an interrupt, saved and specifically marked.
2. Saving the Reentry Address
When large amounts of data stored in a memory are processed, it may be advantageous to pass the address of at least one of the operands of the data packet just processed together with the data packet through the PAEs. In this case the address is not modified, but is available when the data packet is written into a RAM as a pointer to the operand processed last.
This pointer can either be stored with the respective data packet or, only in the event of an interrupt, be saved and specifically marked. In particular, if all pointers to the operands are computed using one address (or a group of addresses), it may be advantageous to save only that one address (or group of addresses).
Example “ULIW”-“UCISC” Model
The concept of VPU architecture may be extended. The virtual machine model may be used as a basis. The processing array of PAEs (PA) may be considered as an arithmetic unit with a configurable architecture. The CT(s) may represent a load unit (LOAD-UNIT) for opcodes. The interface units may take over the bus interface and/or the register set.
This arrangement allows two basic modes of operation which can be used mixed during operation:
1. A group of one or more PAEs may be configured to execute a complex command or command sequence, and then the data associated with this command (which may be a single data word) is processed. Then this group is reconfigured to process the next command. The size and arrangement of the group may change. According to the partitioning technologies described previously, it is the compiler's responsibility to create optimum groups to the greatest possible extent. Groups are "loaded" as commands onto the module by the CT; therefore, the method is comparable to the known Very Long Instruction Word (VLIW), except that considerably more arithmetic units are managed AND the interconnection structure between the arithmetic units can also be covered by the instruction word (Ultra Large Instruction Word="ULIW"). This allows a very high Instruction Level Parallelism (ILP) to be achieved. (See also
2. A group of PAEs (which can also be one PAE) may be configured to execute a frequently used command sequence. The data, which can also in this case be a single data word, is sent to the group as needed and received by the group. This group may remain in place without being reconfigured for one or more subsequent commands. This arrangement is comparable with a special arithmetic unit in a processor according to the related art (e.g., multimedia extension (MMX)), which is provided for special tasks and is only used as needed. With this method, special commands can be generated according to the Complex Instruction Set Computer (CISC) principle with the advantage that these commands can be configured to be application-specific (Ultra-CISC=UCISC).
Extension of the RDY/ACK Protocol
German Patent Application No. 196 51 075.9, filed on Dec. 9, 1996, describes a RDY/ACK standard protocol for synchronization procedures of German Patent 44 16 881 with respect to a typical data flow application. The disadvantage of the protocol is that only data can be transmitted and receipt acknowledged. Although the reverse case, with data being requested and transmission acknowledged (hereinafter referred to as REQ/ACK), can be implemented electrically with the same two-wire protocol, it is not detected semantically.
This is particularly true when REQ/ACK and RDY/ACK are used in mixed operation.
Therefore, a clear distinction is made between the protocols:
RDY: data is available at the transmitter for the receiver;
REQ: data is requested by the receiver from the transmitter;
ACK: general acknowledgment for receipt or for a completed transmission.
It will be appreciated that a distinction could also be made between ACK for a RDY and an ACK for a REQ, but the semantics of the ACK is usually implicit in the protocols.
Example Memory Model
Memories (one or more) may be integrated in VPUs and addressed as in the case of a PAE. In the following, a memory model shall be described which represents at the same time an interface to external peripherals and/or external memories:
A memory within a VPU with PAE-like bus functions may represent various memory modes:
1. Standard memory (random access)
2. Cache (as an extension of the standard memory)
3. Lookup table
4. FIFO
5. Last-In-First-Out (LIFO) (stack).
A controllable interface, which writes into or reads from memory areas either one word or one block at a time, may be associated with the memory.
The following usage options may result:
The interface can be used, but it is not absolutely necessary if, for example, the data is used only locally in the VPU and the free memory in an internal memory is sufficient.
Example Stack Model
A simple stack processor may be designed by using the REQ/ACK protocol and the internal memory in the LIFO mode. In this mode, temporary data is written by the PAEs to the stack and loaded from the stack as needed. The necessary compiler technologies are sufficiently known. The stack may be as large as needed due to the variable stack depth, which is achieved through a data exchange of the internal memory with an external memory.
Example Accumulator Model
Each PAE can represent an arithmetic unit according to the accumulator principle. As described in German Patent Application No. 196 51 075.9, the output register may be looped back to the input of the PAE. This yields a structure which may operate like a related art accumulator. Simple accumulator processors can be designed in connection with the sequencer according to
Example Register Model
A simple register processor can be designed by using the REQ/ACK protocol and the internal memory in the standard memory mode. The register addresses are generated by one group of PAEs, while another group of PAEs is responsible for processing the data.
Example Memory Architecture
The example memory has two interfaces: a first interface which connects the memory to the array, and a second one which connects the memory with an IO unit. In order to improve the access time, the memory may be designed as a dual-ported RAM, which allows read and write accesses to take place independently of one another.
The first interface may be a conventional PAE interface (PAEI), which may guarantee access to the bus system of the array and may ensure synchronization and trigger processing. Triggers can be used to display different states of the memory or to force actions in the memory, for example,
1. Empty/full: when used as a FIFO, the FIFO status “full,” “almost full,” “empty,” or “almost empty” is displayed;
2. Stack overrun/underrun: when used as a stack, stack overrun and underrun may be signaled;
3. Cache hit/miss: in the cache mode, whether an address has been found in the cache may be displayed;
4. Cache flush: writing the cache into the external RAM is forced by a trigger.
A configurable state machine, which may control the different operating modes, may be associated with the PAE interface. A counter may be associated with the state machine. The counter may generate the addresses in FIFO and LIFO modes. The addresses are supplied to the memory via a multiplexer, so that additional addresses generated in the array may be supplied to the memory.
The second interface may be used to connect an IO unit (IOI). The IO unit may be designed as a configurable controller having an external interface. The controller may read or write data one word or one block at a time from and into the memory. The data is exchanged with the IO unit. The controller also supports different cache functions using an additional TAG memory.
IOI and PAEI may be synchronized with one another, so that no collision of the two interfaces can occur. Synchronization differs depending on the mode of operation; for example, while in standard memory or stack mode either the IOI or the PAEI may access the entire memory at any time, synchronization is row by row in the FIFO mode, e.g., while the IOI accesses a row x, the PAEI can access any row other than x at the same time.
The IO unit may be configured according to the peripheral requirements, for example:
1. Synchronous Dynamic RAM (SDRAM) controller
2. Rambus Dynamic RAM (RDRAM) controller
3. Digital Signal Processor (DSP) bus controller
4. Peripheral Component Interconnect (PCI) controller
5. serial controller (e.g., Next-Generation-Input-Output (NGIO))
6. special purpose controller (Small Computer Systems Interface (SCSI), Ethernet, Universal Serial Bus (USB), etc.).
A VPU may have any desired memory elements having any desired IO units. Different IO units may be implemented in a single VPU.
Example Memory Modes of Operation:
1. Standard Memory
1.1 Internal/Local
Data and addresses are exchanged with the memory via the PAEI. The addressable memory size is limited by the size of the memory.
1.2 External/Memory Mapped Window
Data and addresses may be exchanged with the memory via the PAEI. A base address in the external memory may be specified in the IOI controller. The controller may read data from the external memory address one block at a time and write it into the memory, the internal and external addresses being incremented (or decremented) with each read or write operation, until the entire internal memory has been transmitted or a predefined limit has been reached. The array works with the local data until the data is written again into the external memory by the controller. The write operation takes place similarly to the read operation described previously.
Read and write by the controller may be initiated
a) by a trigger or
b) by access of the array to an address that is not locally stored. If the array accesses such an address, the internal memory may initially be written to the external one, and then the memory block containing the desired address is reloaded.
This mode of operation may be particularly relevant for the implementation of a register set for a register processor. In this case, the push/pop of the register set with the external memory can be implemented using a trigger for a change in task or a context switchover.
1.3 External/Lookup Table
The lookup table function is a simplification of the external/memory mapped window mode of operation. In this case, the data may be read once or a number of times via a CT call or a trigger from the external RAM into the internal RAM. The array reads data from the internal memory, but writes no data into the internal memory. The base address in the external memory is stored in the controller either by the CT or by the array and can be modified at runtime. Loading from the external memory is initiated either by the CT or by a trigger from the array and can also be done at runtime.
1.4 External/Cached
In this mode, the array optionally accesses the memory. The memory operates as a cache memory for the external memory according to the related art. The cache can be emptied (e.g., the cache can be fully written into the external memory) through a trigger from the array or through the CT.
2. FIFO
The FIFO mode is normally used when data streams are sent from the outside to the VPU. Then the FIFO is used to isolate the external data processing from the data processing within the VPU so that either the write operation to the FIFO takes place from the outside and the read operation is performed by the VPU or vice versa. The states of the FIFO are signaled by triggers to the array or, if needed, also to the outside. The FIFO itself is implemented according to the related art with different read and write pointers.
3. Stack/Internal
An internal stack may be formed by an address register. The register is (a) incremented or (b) decremented, depending on the mode, with each write access to the memory by the array. Conversely, in the case of read accesses from the array, the register is (a) decremented or (b) incremented. The address register makes the addresses available for each access. The stack may be limited by the size of the memory. Errors such as overrun or underrun may be indicated by triggers.
4. Stack/External
If the internal memory is too small for forming a stack, it may be transferred into the external memory. For this purpose, an address counter for the external stack address may be available in the controller. If a certain number of records is exceeded in the internal stack, records may be written onto the external stack one block at a time. The stack may be written outward from the end, e.g., from the oldest record, with a number of the newest records not being written to the external memory but remaining internal. The external address counter (ERC) may be modified one row at a time.
After space has been created in the internal stack, the remaining content of the stack may need to be moved to the beginning of the stack; the internal stack address may be adjusted accordingly.
A more efficient version is configuring the stack as a ring memory, as described in German Patent Application No. 196 54 846.2, filed on Dec. 27, 1996. An internal address counter may be modified by adding or removing stack entries. As soon as the internal address counter (IAC) exceeds the top end of the memory, it may point to the lowermost address. If the IAC falls below the lowermost address, it may point to the uppermost address. An additional counter (FC) may indicate the fill status of the memory, e.g., the counter may be incremented with each word written and decremented with each word read. Using the FC, it may be ascertained when the memory is full or empty. This technology is known from FIFOs. Thus, if a block is written into the external memory, adjusting the FC is sufficient for updating the stack. An external address counter (EAC) may be configured to always point to the oldest record in the internal memory and is therefore at the end of the stack opposite the IAC. The EAC may be modified if
(a) data is written to the external stack; then the EAC runs toward the IAC;
(b) data is read from the external stack; then the EAC moves away from the IAC.
It will be appreciated that it may be ensured by monitoring the FC that the IAC and the EAC do not collide.
The ERC may be modified according to the external stack operation, e.g., buildup or reduction.
Example MMU
A Memory Management Unit (MMU) can be associated with the external memory interface. The MMU may perform two functions:
1. Translate the internal addresses into external addresses in order to support modern operating systems;
2. Monitor accesses to the external addresses, e.g., generate an error signal as a trigger if the external stack overruns or underruns.
Example Compiler
In an example embodiment according to the present invention, the VPU technology programming may include separating sequential codes and breaking them down into the largest possible number of small and independent subalgorithms, while the subalgorithms of the data flow code may be mapped directly onto the VPU.
Separation Between VPU Code and Standard Code
C++ is used in the following as a representative of all related art languages and compilers (Pascal, Java, Fortran, etc.); a special extension (VC=VPU C), which contains the language constructs and types that can be mapped onto VPU technology particularly well, may be defined. VC may be used by programmers only within methods or functions that use no other constructs or types. These methods and functions can be mapped directly onto the VPU and run particularly efficiently. The compiler extracts the VC in the pre-processor and forwards it directly to the VC back-end processing (VCBP).
Extraction of the Parallelizable Compiler Code
In the following step, the compiler analyzes the remaining C++ code and extracts the portions (MC=mappable C) which can be readily parallelized and mapped onto the VPU technology without the use of sequencers. Each individual MC may be placed into a virtual array and routed. Then the space requirement and the expected performance are analyzed. For this purpose, the VCBP may be called, and the individual MCs may be partitioned together with the VCs, which are mapped in each case.
The MCs whose VPU implementations achieve the highest increase in performance are accepted and the others are forwarded to the next compiler stage as C++.
Example Optimizing Sequencer Generator
This compiler stage may be implemented in different ways depending on the architecture of the VPU system:
1. VPU without a sequencer or external processor
All remaining C++ codes may be compiled for the external processor.
2. VPU only with sequencer
2.1. Sequencer in the PAEs
All remaining C++ codes may be compiled for the sequencer of the PAEs.
2.2 Configurable sequencer in the array
The remaining C++ code is analyzed for each independent software module. The best-suited sequencer version is selected from a database and stored as VC code (SVC). This step is mostly iterative, e.g., a sequencer version may be selected, and the code may be compiled, analyzed, and compared to the compiled code of other sequencer versions. Finally, the object code (SVCO) of the C++ code may be generated for the selected SVC.
2.3 Both 2.1 and 2.2 are used
The mode of operation corresponds to that of 2.2. Special static sequencer models are available in the database for the sequencers in the PAEs.
3. VPU with sequencer and external processor
This mode of operation also corresponds to 2.2. Special static sequencer models are available in the database for the external processor.
Example Linker
The linker connects the individual software modules (VC, MC, SVC, and SVCO) to form an executable program. For this purpose, the linker may use the VCBP in order to place and route the individual software modules and to determine the time partitioning. The linker may also add the communication structures between the individual software modules and, if needed, additional registers and memories. Structures for storing the internal states of the array and sequencers for the case of a reconfiguration may be added, e.g., on the basis of an analysis of the control structures and dependencies of the individual software modules.
Notes on the Processor Models
It will be appreciated that the machine models used may be combined within a VPU in any desired manner. It is also possible to switch from one model to another within an algorithm depending on which model is best.
If an additional memory is added to a register processor from which the operands are read and into which the results are written, a load/store processor may be created. A plurality of different memories may be assigned by treating the individual operands and the result separately.
These memories then may operate more or less as load/store units and represent a type of cache for the external memory. The addresses may be computed by the PAEs which are separate from the data processing.
Pointer Reordering
High-level languages such as C/C++ often use pointers, which are poorly handled by pipelines. If a pointer is not computed until immediately before the data structure to which it points is used, the pipeline often cannot be filled rapidly enough and the processing is inefficient, especially in VPUs.
It may be useful not to use any pointers in programming VPUs; however, this may be impossible.
The problem may be solved by having the pointer structures re-sorted by the compiler so that the pointer addresses are computed as early as possible before they are used. At the same time, there should be as little direct dependence as possible between a pointer and the data to which it points.
Extensions of the PAEs
German Patents 196 51 075.9 and 196 54 846.2 describe possible configuration characteristics of cells (PAEs).
According to German Patent 196 51 075.9, a set of configuration registers (0904) containing a configuration may be associated with a PAE (0903) (
These related patents may be extended, e.g.,
a) to provide a method to speed up the reconfiguration of PAEs and isolate it in time from the higher-level load unit,
b) to design the method so that the possibility of simultaneously sequencing over more than one configuration is provided, and
c) to simultaneously hold in one PAE a plurality of configurations, one of which is always activated, with rapid switching between different configurations.
Isolation of the Configuration Register
The configuration register may be isolated from the higher-level load unit (CT) (
The configuration register to be selected by multiplexer 1002 may be determined by different sources:
1. Any status signal or a group of any status signals supplied via a bus system 0802 in
2. The status signal of the PAE which is configured by the configuration registers 1001 and multiplexer 1002 may be used for the selection (
3. A signal 1003 generated by the higher-level CT may be used for the selection, as shown in
Optionally, the incoming signals 1003 may be stored for a certain period of time using a register and may be optionally called as needed.
By using a plurality of registers, the CT may be isolated in time. The CT may “pre-load” a plurality of configurations without a direct time-dependency existing.
If the selected/activated register in register set 1001 has not yet been loaded, the configuration of the PAE is delayed until the CT has loaded the register. In order to determine whether a register holds valid information, a "valid bit" 1004, which is set by the CT, may be inserted in each register. If 1004 is not set in a selected register, the CT may be requested, via a signal, to configure the register as rapidly as possible.
The procedure described in
(a) the status of the status signal of the PAE which is configured by register set 1001 and 1002, as shown in
(b) any desired status signal supplied via bus system 0802, as shown in
(c) a combination of (a) and (b).
Register set 1001 may also be designed as a memory, with a command being addressed by instruction decoder 1101 instead of multiplexer 1002. Addressing here depends on the command itself and on a status register. In this respect, the structure corresponds to that of a “von Neumann” machine with the difference
(a) of universal applicability, e.g., non-use of the sequencer (as in
(b) that the status signal does not need to be generated by the arithmetic unit (PAE) associated with the sequencer, but may come from any other arithmetic unit (e.g.,
It will be appreciated that it may be useful if the sequencer can execute jumps, in particular also conditional jumps within the register set 1001.
In this procedure, the sequencer is not fixedly implemented, but may be emulated by a PAE or a group of PAEs. The internal memories may reload programs from the external memories.
In order to store local data (e.g., for iterative computations and as a register for a sequencer), the PAE may be provided with an additional register set whose individual registers are either determined by the configuration, connected to the ALU, or written into by the ALU; or they may be freely used by the command set of an implemented sequencer (register mode). One of the registers may also be used as an accumulator (accumulator mode). If the PAE is used as a full-featured machine, it may be advantageous to use one of the registers as an address counter for external data addresses.
In order to manage stacks and accumulators outside the PAE (e.g., in the memories according to the present invention), the previously described RDY/ACK and REQ/ACK synchronization models are used.
Conventional PAEs, such as those described in German Patent Application No. 196 51 075.9, may be ill-suited for processing bit-wise operations, since the integrated ALU may not particularly support bit operations, e.g., it may lack a suitably narrow design (1, 2, 4 bits wide). Efficient processing of individual bits or signals may be guaranteed by replacing the ALU core with an FPGA core (LC), which executes logical operations according to its configuration. The LC can be freely configured in its function and internal interconnections. Conventional LCs can be used. For certain operations it may be advantageous to assign a memory to the LC internally. The interface modules between the LC and the bus system of the array are adjusted only slightly to the LC, but are basically preserved. However, in order to configure the time response of the LC in a more flexible manner, it may be useful if the registers in the interface modules are configured so that they can be turned off.
FIG. 4a illustrates some basic characteristics of an example method according to the present invention. The Type A software modules may be combined into a group and, at the end, have a conditional jump either to B1 or to B2. At position 0401, a reconfiguration point may be inserted. It may be useful to treat each branch of the conditional jump as a separate group (case 1). However, if both B branches (B1 and B2), together with A, fit onto the target module (case 2), it may be more convenient to insert only one reconfiguration point at position 0402, since this reduces the number of configurations and increases the processing speed. Both branches (B1 and B2) jump to C at position 0402.
The configuration of cells on the target module is illustrated schematically in
Both cases of conditional jump (case 1, case 2) are shown.
The model of
If a sufficiently large sequencer (A) is implemented in 0501, a principle which is very similar to typical processors can be implemented with this model. In this case, the data may go to
1. sequencer A, which decodes it as commands and responds to it according to the “von Neumann” principle;
2. sequencer A, where it is treated as data and forwarded to a fixedly configured arithmetic unit C for computation.
Graph B selectively makes available a special arithmetic unit and/or special opcodes for certain functions and is alternatively used to speed up C. For example, B1 can be an optimized algorithm for performing matrix multiplications, while B2 represents a FIR filter, and B3 a pattern recognition. The appropriate, e.g., corresponding graph B is called according to an opcode which is decoded by the collection 0501.
FIG. 5b schematically shows the mapping onto the individual cells. The cells may form a pipeline-type arithmetic unit, as illustrated in 0502.
While larger memories may be introduced at the reconfiguration points of
FIG. 6a shows different loops. Loops may basically be handled in three different ways:
1. Hardware approach: Loops may be mapped onto the target hardware completely rolled out (0601a/b). As explained previously, this may be possible only for a few types of loops;
2. Data flow approach: Loops may be formed over a plurality of cells within the data flow (0602a/b). The end of the loop may be looped back to the beginning of the loop.
3. Sequencer approach: A sequencer having a minimum command set may execute the loop (0603a/b). The cells of the target modules may be configured so that they contain the corresponding sequencers (see
The execution of the loops may sometimes be optimized by breaking them down in a suitable manner:
1. Using conventional optimizing methods, often the body of the loop, e.g., the part to be executed repeatedly, can be optimized by removing certain operations from the loop and placing them before or after the loop (0604a/b). Thus, the number of commands to be sequenced is substantially reduced. The removed operations are only executed once before or after the execution of the loop.
2. Another optimization option is dividing the loops into a plurality of smaller or shorter loops. This division is performed so that a plurality of parallel or sequential loops (0605a/b) are obtained.
The data and states of the results may be stored in memories (1411 and 1412). At the same time, the address of operands 1404 may be stored as a pointer 1413. Address 1404 may pass through registers 1414 for time synchronization.
In the ULIW model, each subgraph may be loaded separately by the CT, see German Patent Application No. 198 07 782.2. Subgraphs may be managed by the mechanisms of German Patent Application No. 198 07 782.2. These may include intelligent configuring, execute/start, and deletion of subapplications.
At point 1503 a fetch instruction may cause subapplication A to be loaded or configured, while subapplication K is being executed. Thus,
a) subapplication A may be already configured in the PAEs at the time subapplication K is completely executed if the PAEs have more than one configuration register;
b) subapplication A may be already loaded into the CT at the time subapplication K is completely executed if the PAEs only have one configuration register.
1504 starts the execution of subapplication K.
This means that, at runtime, the next required program parts may be loaded independently while the current program parts are running. This may yield a much more efficient handling of the program codes than the usual cache mechanisms.
Another particular feature of subapplications A is shown. In principle, both possible branches (C, K) of the comparison could be preconfigured. Assuming that the number of free configuration registers available is insufficient for this, the more probable of the two branches is configured (1506). This also saves configuration time. When the non-configured branch is executed, the program execution may be interrupted (since the configuration is not yet loaded into the configuration registers) until the branch is configured.
In principle, unconfigured subapplications may also be executed (1505); in this case they may need to be loaded prior to execution as described previously.
A FETCH command may be initiated by a trigger via its own ID. This allows subapplications to be pre-loaded depending on the status of the array.
The ULIW model differs from the VLIW model in that it also includes data routing. The ULIW model also forms larger instruction words.
The above-described partitioning procedure may also be used by compilers for existing standard processors according to the RISC/CISC principle. If a unit described in German Patent Application No. 198 07 782.2 is used for controlling the command cache, it can be substantially optimized and sped up.
For this purpose, “normal” programs may be partitioned into subapplications in an appropriate manner. According to German Patent Application No. 198 07 782.2, references to possible subsequent subapplications are inserted (1501, 1502). Thus a CT may pre-load the subapplications into the cache before they are needed. In the case of a jump, only the subapplication to which the jump was made needs to be executed; the other(s) may be overwritten later by new subapplications. In addition to intelligent pre-loading, the procedure has the additional advantage that the size of the subapplications is already known at the time of loading. Thus, optimum bursts can be executed by the CT when accessing the memories, which in turn may considerably speed up memory access.
An array of PAEs may operate as a register processor in this embodiment (
It may be advantageous to use a separate PAE for reading the data. In this case, PAE 1705 would only write and PAE 1707 would only read. An additional PAE (1708, shown in broken lines underneath PAE 1706) may be added for generating the read addresses.
It is not necessary to use separate PAEs for generating addresses. Often the registers are implicit and, configured as constants, may be transmitted by the data processing PAEs.
The use of accumulator processors for a register processor is shown as an example. PAEs without accumulators can also be used for creating register processors. The architecture shown in
When used as a load/store unit, an external RAM (1709) may need to be connected downstream, so that RAM 1704 represents only a temporary section of external RAM 1709, similar to a cache.
Also, when 1704 is used as a register bank, it may in some cases be advantageous for an external memory to be connected downstream. In this case, PUSH/POP operations according to the related art, which write the content of the registers into a memory or read it from there, may be performed.
There is no basic difference between the load/store unit (1802) and the register bank (1804) and their activation.
FIG. 19a shows a memory according to the present invention in the "register/cache" mode. In the memory (1901), words of a usually larger and slower external memory (1902) may be stored.
The data exchange between 1901, 1902, and the PAEs (not shown) connected via a bus (1903) may take place as follows, distinction being made between two modes of operation:
A) The data read or transmitted by the PAEs from main memory 1902 is buffered in 1901 using a cache technique. Any conventional cache technique can be used.
B) The data of certain addresses is transmitted between 1902 and 1901 via a load/store unit. Certain addresses may be predefined both in 1902 and in 1901, different addresses being normally used for 1902 and 1901. The individual addresses may be generated by a constant or by computations in PAEs. In this operating mode memory 1901 may operate as a register bank.
The addresses between 1901 and 1902 may be assigned in any desired manner, which only depends on the respective algorithms of the two operating modes.
The corresponding machine is shown in
A unit (2004) which controls the write and read pointers of the FIFO as a function of the bus operations of 2003 and 2002 may be provided to control the FIFO.
The current data may be held in internal memory 2101; the most recent record (2107) may be located at the very top in 2101. Old records are transferred to external memory 2102. If the stack continues to grow, the space in internal memory 2101 is no longer sufficient. When a certain amount of data is reached, which may be represented by a (freely selectable) address in 2101 or a (freely selectable) value in a record counter, part of 2101 is written as a block to the more recent end (2103) of the stack in 2102. This part is the oldest and thus the least current data (2104). Subsequently, the remaining data in 2101 may be shifted so that the data in 2101 copied to 2102 is overwritten with the remaining data (2105), and thus sufficient free memory (2106) may be created for new stack inputs.
If the stack decreases, starting at a certain (freely selectable) point, the data in 2101 may be shifted so that free memory is created after the oldest and least current data. A memory block is copied from 2102 into the freed memory, and is then deleted in 2102.
Thus, 2101 and 2102 may represent a single stack, the current records being located in 2101 and the older and less current records being transferred to 2102. The method represents a quasi-cache for stacks. The data blocks may be transmitted by block operations; therefore, the data transfer between 2101 and 2102 can be performed in the rapid burst operating modes of modern memories (SDRAM, RAMBUS, etc.).
In the example illustrated in
Internal stack 2101 may be designed as a type of ring memory. The data at one end of the ring may be transmitted between the PAEs and 2101, and at the other end of the ring between 2101 and 2102. This has the advantage that data can be easily shifted between 2101 and 2102 without having any effect on the internal addresses in 2101. Only the position pointers of the bottom and top data and the fill status counter have to be adjusted. The data transfer between 2101 and 2102 may be triggered by the conventional ring memory flags "almost full"/"full" and "almost empty"/"empty."
Example hardware is shown as a block diagram in
The connection between the PAEs and 2101 may be implemented by bus system 2113.
A conditional jump chooses one of the two graphs. The special characteristic is that 2302 now needs to know the internal status of 2301 for execution and, vice versa, 2301 must know the status of 2302.
This may be implemented by storing the status just once, namely in the registers of the PAEs of the higher-performance data flow graph (2301).
If a jump is performed in 2302, the sequencer may read the states of the respective registers (2303) using the bus system of German Patent Application No. 197 04 742.4. The sequencer performs its operations and writes all the modified states back (2304) into the registers (again via the bus system of German Patent Application No. 197 04 742.4). Finally, it should be mentioned that the above-mentioned graphs need not necessarily be narrow loops (2305). The method is generally applicable to any subalgorithm which is executed multiple times within a program run (reentrant) and is run either sequentially or in parallel (data flow type). The states may be transferred between the sequential and the parallel portions.
Wave reconfiguration offers considerable advantages regarding the speed of reconfiguration, in particular for simple sequential operations. With wave reconfiguration, the sequencer may also be designed as an external microprocessor. A processor may be connected to the array via the data channels and the processor may exchange local, temporary data with the array via bus systems. All sequential portions of an algorithm that cannot be mapped into the array of PAEs may be run on the processor.
The example system may have three bus systems:
1. Data bus which regulates the exchange of processed data between the VPU and the processor;
2. Register bus which enables access to the VPU registers and thus guarantees the data exchange (2302, 2304) between 2302 and 2301;
3. Configuration data bus, which configures the VPU array.
Single-hatched areas represent data processing PAEs, 2401 showing PAEs after reconfiguration and 2403 showing PAEs before reconfiguration. Cross-hatched areas (2402) show PAEs which are being reconfigured or are waiting for reconfiguration.
FIG. 24a illustrates the effect of wave reconfiguration on a simple sequential algorithm. Those PAEs that have been assigned a new task may be reconfigured. This may be performed efficiently, e.g., simultaneously, because a PAE receives a new task in each cycle.
A row of PAEs from the matrix of all PAEs of a VPU is shown as an example. The states in the cycles after cycle t are given with a one-cycle delay.
FIG. 24b shows the effect over time of the reconfiguration of large portions. A number of PAEs of a VPU is shown as an example. The states in the cycles after cycle t are given with different delays of a plurality of cycles.
While initially only a small portion of the PAEs are being reconfigured or are waiting for reconfiguration, this area becomes larger over time until all PAEs are reconfigured. The enlarging of the area means that, due to the time delay of the reconfiguration, more and more PAEs will be waiting for reconfiguration (2402), resulting in loss of computing capacity.
A wider bus system may be used between the CT (in particular the memory of the CT) and the PAEs, which may provide sufficient lines for reconfiguring multiple PAEs at the same time within one cycle.
wait <trg#>
Wait for the receipt of a certain trigger f(trg#) from the array, which indicates which next configuration should be loaded.
lookup <trg#>
Returns the address of the subprogram called by a trigger received.
jmp <adr>
Jump to address.
call <adr>
Jump to address. Return jump address may be stored on the stack.
jmp <cond><adr>
Conditional jump to address.
call <cond><adr>
Conditional jump to address. Return jump address is stored on the stack.
ret
Return jump to the return jump address stored on the stack.
mov <target><source>
Transfers a data word from source to target. Source and target may each be a peripheral address or in a memory.
The commands may be similar to those described in German Patent Application No. 198 07 782.2, e.g., in the description of the CT. The implementation of 2602 may need only very simple commands for data management. A complete microcontroller may be omitted.
The command set may include a “pabm” command for configuring the PAEs. Two commands (pabmr, pabmm) are available, which have the following structure:
The commands may copy an associated block of PAE addresses and PAE data from the memory to the PAE array. <count> indicates the size of the data block to be copied. The data block may either be directly appended to the opcode (a) or referenced by specifying the first memory address <memref> (b).
Each pa_adrn-pa_dtan row represents a configuration for a PAE. pa_adrn specifies the address and pa_dtan specifies the configuration word of the PAE.
An example of the RDY/ACK-REJ protocol is described in German Patent Application No. 198 07 782.2. If the configuration data is accepted by a PAE, the PAE acknowledges the transmitted data with an ACK. However, if a PAE cannot accept the configuration data because it is not in a reconfigurable state, it returns a REJ. Thus the configuration of the subalgorithm fails.
The location of the pa_adrn-pa_dtan row rejected with REJ is stored. The commands may be called again at a later time (as described in German Patent Application No. 198 07 782.2, FILMO). If the command had previously been executed completely, e.g., no REJ occurred, it performs no further configuration on such a call but terminates immediately. If a REJ occurred, the command jumps directly to the location of the rejected pa_adrn-pa_dtan row. Depending on the command, the location is stored in different ways:
pabmr: the address is stored in the register named <regno>;
pabmm: the address is stored directly in the command at the memory location <offset>.
The commands can be implemented via DMA structures as memory/IO transfers according to the related art. The DMAs are extended by a logic for monitoring the incoming ACK/REJ. The start address is determined by <regno> or <offset>. The last address of the data block is computed via the address of the command plus its opcode length minus one plus the number of pa_adrn-pa_dtan rows.
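The resume behavior described above might be sketched in C as follows; send_config and its return codes are invented stand-ins for the configuration-bus transfer with the RDY/ACK-REJ protocol, and the resume index models the location storage in <regno> (pabmr) or at <offset> within the command (pabmm):

```c
#include <stddef.h>
#include <stdint.h>

typedef enum { CFG_ACK, CFG_REJ } cfg_result_t;

/* One pa_adr/pa_dta row: the address of a PAE and its configuration word. */
typedef struct { uint16_t pa_adr; uint32_t pa_dta; } pa_row_t;

/* Hypothetical primitive: transfer one configuration word to a PAE;
 * returns CFG_ACK if accepted, CFG_REJ if the PAE is not reconfigurable. */
extern cfg_result_t send_config(uint16_t pa_adr, uint32_t pa_dta);

/* pabm-style block configuration. Returns 1 when the whole block was
 * configured; returns 0 on a REJ, storing the rejected row in *resume so
 * that a later (FILMO-style) call continues exactly there. */
int pabm(const pa_row_t *block, size_t count, size_t *resume)
{
    for (size_t i = *resume; i < count; i++) {    /* skip completed rows   */
        if (send_config(block[i].pa_adr, block[i].pa_dta) == CFG_REJ) {
            *resume = i;                          /* remember rejected row */
            return 0;                             /* retry at a later time */
        }
    }
    *resume = count;   /* fully executed: a renewed call terminates at once */
    return 1;
}
```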
It is also useful to extend the circuit described in German Patent Application No. 198 07 782.2, by the above-mentioned commands.
A unit (2704), which receives and acknowledges triggers from the associated PAEs and transmits triggers to the PAEs when appropriate, is connected to 2703. Incoming triggers either cause an interrupt in sequencer 2706 or are queried by the WAIT command. Optionally, an interface (2705) to a data bus of the associated PAEs is connected to 2703 in order to be able to send data to the PAEs; for example, the assembler code of a sequencer implemented in the PAEs is transmitted via 2705. When required, the interface contains a converter for adjusting the different bus widths. Units 2701 through 2706 are connected via a multiplexer/demultiplexer (2707) to a bus system (2708), which is several times wider and leads to the memory (2711). 2707 is activated by the lower-value addresses of the address/stack register; the higher-value addresses lead directly to the RAM (2711). Bus system 2708 leads to an interface (2709), which is controlled by the PA commands and leads to the configuration bus of the PAEs. 2708 is designed to be wide enough to send as many configuration bits as possible per cycle to the PAEs via 2709. An additional interface (2710) connects the bus to a higher-level CT, which exchanges configuration data and control data with 2602. Examples of interfaces 2710 and 2709 are described in German Patent Application No. 198 07 782.2.
2706 may have a reduced, minimal command set that is optimized for the task, consisting mainly of PA commands, jumps, interrupts, and lookup commands. Furthermore, the optimized wide bus system 2708, which is converted to a narrower bus system via 2707, is of particular importance for the reconfiguration speed of the unit.
a illustrates a special version of the example configuration unit shown in
Multiple opcodes may use a common set of complex configurations to form an opcode group (2805). The different opcodes of a group differ from one another by the special versions of the complex configurations. Differentiation elements (2807), which either contain additional configuration words or overwrite configuration words occurring in 2801, may be used for this purpose.
If no differentiation is required, a complex configuration may be called directly by an opcode (2806). A program (2804) may be composed of a sequence of opcodes having the respective parameters.
A complex function may be loaded once into the array and then reconfigured repeatedly with different parameters or differentiations; only the variable portions of the configuration are reconfigured. Different opcode groups use different complex configurations (2805a, . . . , 2805n).
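The relationship between complex configurations (2801), differentiations (2807), and opcodes (2805, 2806) might be modeled as in the following C sketch; the structures and the reload rule are illustrative assumptions, not a hardware format:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct { uint16_t pa_adr; uint32_t word; } cfg_word_t;

/* 2801: a complex configuration shared by an opcode group. */
typedef struct { const cfg_word_t *words; size_t n; } complex_cfg_t;

/* 2807: a differentiation - additional configuration words, or words
 * overwriting those already present in the complex configuration. */
typedef struct { const cfg_word_t *words; size_t n; } differentiation_t;

/* 2805/2806: an opcode referencing the complex configuration of its
 * group plus an optional differentiation. */
typedef struct {
    const complex_cfg_t     *base;
    const differentiation_t *diff;  /* NULL for a direct call (2806) */
} opcode_t;

extern void configure(const cfg_word_t *w, size_t n); /* hypothetical
                                                         config-bus write */

/* The complex configuration is loaded once; afterwards only the variable
 * (differentiated) portion is reconfigured. */
void run_opcode(const opcode_t *op, const complex_cfg_t **loaded)
{
    if (*loaded != op->base) {
        configure(op->base->words, op->base->n);
        *loaded = op->base;
    }
    if (op->diff)
        configure(op->diff->words, op->diff->n);
}
```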
The different levels (complex configuration, differentiation, opcode, program) are run in different levels of CTs (see CT hierarchies in German Patent Application No. 198 07 782.2). The different levels are illustrated in 2810, with 1 representing the lowest level and N the highest. CTs with hierarchies of any desired depth can be constructed as described in, for example, German Patent Application No. 198 07 782.2.
A distinction may be made in the complex configurations 2801 between two types of code:
1. Configuration words which map an algorithm onto the array of PAEs. The algorithm may be designed as a sequencer. Configuration may take place via interface 2709. Configuration words may be defined by the hardware.
2. Algorithm-specific codes, which depend on the possible configuration of a sequencer or an algorithm. These codes may be defined by the programmer or the compiler and are used to activate an algorithm. If, for example, a Z80 microprocessor is configured as a sequencer in the PAEs, these codes represent the opcode of the Z80 microprocessor. Algorithm-specific codes may be transmitted to the array of PAEs via 2705.
The CT may selectively access a plurality of configuration registers (2913) via an interface unit (2911) using a bus system (2912). 2910 either selects a certain configuration via a multiplexer (2914) or sequences through a plurality of configuration words, which then represent commands for the sequencer.
Since the VPU technology operates mainly in a pipelined manner, it is advantageous to additionally provide either groups 2901 and 2903, or groups 2902 and 2904, or both, with FIFOs. This can prevent pipelines from being stalled by simple delays (e.g., during synchronization).
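For illustration, a minimal software model of such a decoupling FIFO; depth and word width are arbitrary choices here:

```c
#include <stdint.h>

#define FIFO_DEPTH 8u                    /* example depth */

typedef struct {
    uint32_t buf[FIFO_DEPTH];
    unsigned rd, wr, fill;
} fifo_t;

/* Producer side: absorbs short delays so that the upstream pipeline
 * does not stall while the consumer is still synchronizing. */
static int fifo_put(fifo_t *f, uint32_t v)
{
    if (f->fill == FIFO_DEPTH) return 0; /* full: producer must wait  */
    f->buf[f->wr] = v;
    f->wr = (f->wr + 1u) % FIFO_DEPTH;
    f->fill++;
    return 1;
}

static int fifo_get(fifo_t *f, uint32_t *v)
{
    if (f->fill == 0) return 0;          /* empty: consumer must wait */
    *v = f->buf[f->rd];
    f->rd = (f->rd + 1u) % FIFO_DEPTH;
    f->fill--;
    return 1;
}
```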
2920 is an optional bus access via which one of the memories of a CT (see
The addresses may be
a) generated for the CT memory by the circuit of
b) generated directly by 2910 for the internal memory.
For all bus systems, the following connection models to a processor are available; the choice between them depends on the programming model and on the trade-off between price and performance.
1. Register Model
In the register model, the respective bus is addressed via a register, which is directly integrated in the register set of the processor and is addressed by the assembler as a register or a group of registers. This model is most efficient when a few registers suffice for the data exchange.
2. IO Model
The respective bus is located in the IO area of the processor. This is usually the simplest and most cost-effective version.
3. Shared Memory Model
Processor and the respective bus share a memory area in the data memory. This is an effective version for large amounts of data.
4. Shared Memory-DMA Model
Processor and bus share the same memory as in the previous model. There is a fast DMA to further increase speed (see
In order to increase the transmission speed, the respective memories may be physically separable from the other memories (a plurality of memory banks), so that processor and VPU can access their memories independently.
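From the software side, the four models might be distinguished roughly as follows; this C sketch assumes a memory-mapped VPU, and the register address, the shared bank, and dma_copy are invented for the example:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Models 1 and 2: the bus appears as a processor register or as a
 * location in the IO area; assumed address for the example. */
#define VPU_REG (*(volatile uint32_t *)0x4000F000u)

/* Model 3: processor and bus share an area in the data memory. */
static uint32_t shared_area[1024];

/* Model 4: as model 3, but a fast DMA moves the block. */
extern void dma_copy(void *dst, const void *src, size_t n);

void exchange_examples(const uint32_t *data, size_t n)
{
    VPU_REG = data[0];                               /* models 1/2: word-wise */
    memcpy(shared_area, data, n * sizeof data[0]);   /* model 3: CPU copies   */
    dma_copy(shared_area, data, n * sizeof data[0]); /* model 4: DMA copies   */
}
```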
In
c/d correspond to
It will be appreciated that a compromise yielding maximum flexibility at a reasonable size is to evaluate the trigger and RDY/ACK signals by a unit according to 3301 and to control all fixed processes within the PAE by a fixedly implemented unit according to 2910.
Possible designs of the logic cells include:
The selection of the functions and of the interconnection can either be flexibly programmable via SRAM cells, or fixed using read-only ROMs or semistatic Flash ROMs.
In order to speed up sequential algorithms, which are difficult to parallelize, speculative design may be utilized.
There are three design options for sequential code 3711:
1. Within a sequencer of a PAE (2910).
2. Via a sequencer configured in the VPU. To do so, the compiler may generate a sequencer optimized for the task, as well as the algorithm-specific sequencer code (see 2801) directly.
3. On a conventional external processor (3103).
The option selected depends on the architecture of the VPU, of the computer system, and of the algorithm.
The code (3701) may initially be separated by a pre-processor (3702) into data flow code (3716), written in a special version of the respective programming language and optimized for the data flow, and common sequential code (3717). 3717 is checked for parallelizable subalgorithms (3703), and the sequential subalgorithms are separated out (3718). The parallelizable subalgorithms are provisionally placed as macros and routed.
In an iterative process, the macros are placed together with the data flow-optimized code (3713), routed, and partitioned (3705). A statistical unit (3706) evaluates the individual macros and their partitioning with regard to efficiency, with the time and the resources used for reconfiguration being factored into the efficiency evaluation. Inefficient macros are removed and separated out as sequential code (3714).
The remaining parallel code (3715) is compiled and assembled (3707) together with 3716, and VPU object code is output (3708).
Statistics concerning the efficiency of the code generated and of the individual macros (including those removed with 3714) are output (3709); thus, the programmer receives essential information on the speed optimization of the program.
Each macro of the remaining sequential code is checked for complexity and requirements (3720). The appropriate sequencer is selected from a database (3719), which depends on the VPU architecture and the computer system, and is output as VPU code (3721). A compiler (3710) generates the assembler code of the respective macro for the sequencer selected by 3720 and outputs it (3711). 3710 and 3720 are closely linked. The process may take place iteratively in order to find the most suitable sequencer with the smallest and fastest assembler code.
A linker (3722) combines the assembler codes (3708, 3711, 3721) and generates the executable object code (3723).
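In outline, the iterative evaluation of the macros (3705, 3706) with separation into sequential (3714) and remaining parallel code (3715) might look as follows; the types and the efficiency threshold are invented for this C sketch:

```c
typedef struct macro macro_t;            /* opaque: a placed, routed macro */

extern void   place_route_partition(macro_t *m);   /* 3705                 */
extern double eval_efficiency(const macro_t *m);   /* 3706: factors in the
                                            reconfiguration time/resources */
extern void   emit_parallel(macro_t *m);           /* kept: -> 3715        */
extern void   emit_sequential(macro_t *m);         /* removed: -> 3714     */

void iterate_macros(macro_t **macros, int n, double threshold)
{
    for (int i = 0; i < n; i++) {
        place_route_partition(macros[i]);
        if (eval_efficiency(macros[i]) >= threshold)
            emit_parallel(macros[i]);
        else
            emit_sequential(macros[i]);  /* handled later as sequential code */
    }
}
```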
Either the PC or a stack pointer (3807) supplied by the PAE array is supplied to an adder (3808) via multiplexer 3806. Here, an offset which is stored in register 3809 and written via 3803 is added to or subtracted from the value. 3808 allows the program to be shifted within memory 2711, which enables garbage-collector functions to clean up the memory (see German Patent Application No. 198 07 782.2). The address shift caused by the garbage collector is compensated for by adjusting the offset in 3809.
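A minimal sketch of the address computation around 3806 through 3809, assuming a signed offset and a one-bit multiplexer select:

```c
#include <stdint.h>

/* 3806 selects the PC or the stack pointer supplied by the PAE array;
 * 3808 adds (or, for a negative value, subtracts) the offset held in
 * register 3809, which was written via 3803. After a garbage-collector
 * run, only this offset has to be rewritten. */
uint16_t effective_address(uint16_t pc, uint16_t stack_ptr,
                           int use_stack, int16_t offset)
{
    uint16_t base = use_stack ? stack_ptr : pc;  /* multiplexer 3806 */
    return (uint16_t)(base + offset);            /* adder 3808       */
}
```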
a is a variant of
Thus, the count can be rapidly adjusted to block-by-block changes. (Of course, it is also possible to modify the counter with each written or read word in a block operation.) For cache operations, a conventional cache controller (3911) is available, which is associated with a tag memory (3912). Depending on the mode of operation, the value of 3911 or 3906 is sent out (3914) via a multiplexer (3913) as an address. The data is sent out via bus 3915, and data is exchanged with the array via bus 3916.
Programming Examples to Illustrate the Subalgorithms
A software module may be declared in the following way, for example:
The following memory types are available, for example, as additional transfer modes to the output:
fifo <fifoname>, where the data is transmitted to a memory operating by the FIFO principle. <fifoname> is a global reference to a specific memory operating by the FIFO principle. terminate@ is extended by the "fifofull" parameter, e.g., signal, which shows that the memory is full.

stack <stackname>, where the data is transmitted to a memory operating by the stack principle. <stackname> is a global reference to a specific memory operating in the stack mode.
terminate@ differentiates the programming by the method according to the present invention from conventional sequential programming. The command defines the abort criterion of the software module. The result variables res1 and res2 are not evaluated by terminate@ with their actual values, but only the validity of the variables (e.g., their status signal) is checked. For this purpose, the two signals res1 and res2 are gated with one another logically via an AND, OR, or XOR operation. If both variables are valid, the software module is terminated with the value 1. This means that a signal having value 1 is forwarded to the higher-level load unit, whereupon the higher-level load unit loads the next software module.
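The validity gating of terminate@ can be expressed compactly; in this C sketch the status signal is modeled as a boolean flag, and the AND gating is shown (OR and XOR would be analogous):

```c
#include <stdbool.h>

typedef struct { int value; bool valid; } var_t;  /* data word + status */

/* terminate@(res1 & res2): only the validity of the result variables is
 * evaluated, never their actual values. The returned 1 is forwarded to
 * the higher-level load unit, which then loads the next software module. */
bool terminate_at(const var_t *res1, const var_t *res2)
{
    return res1->valid && res2->valid;
}
```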
fifo <fifoname1> (res1, 256).
register is defined via input data in this example. <regname1> is the same here as in example1. This causes the register, which receives the output data in example1, to provide the input data for example2.
fifo defines a FIFO memory with a depth of 256 for the output data res1. The full flag (fifofull) of the FIFO memory is used as an abort criterion in terminate@.
define defines an interface for data (register, memory, etc.). The required resources and the name of the interface are specified with the definition. Since each of the resources is only available once, they must be specified unambiguously. Thus the definition is global, e.g., the name is valid for the entire program.
call calls a software module as a subprogram.
signal defines a signal as an output signal without a buffer being used.
The software module main is terminated by terminate@ (example2) as soon as subprogram example2 is terminated.
In principle, due to the global declaration “define . . . ” the input/output signals thus defined do not need to be included in the interface declaration of the software modules.
| Number | Date | Country | Kind |
|---|---|---|---|
| 199 26 538 | Jun 1999 | DE | national |
| 100 00 423 | Jan 2000 | DE | national |
| 100 18 119 | Apr 2000 | DE | national |
| Filing Document | Filing Date | Country | Kind | 371(c) Date |
|---|---|---|---|---|
| PCT/DE00/01869 | 6/13/2000 | WO | 00 | 5/29/2002 |
| Publishing Document | Publishing Date | Country | Kind |
|---|---|---|---|
| WO00/77652 | 12/21/2000 | WO | A |
| Number | Name | Date | Kind |
|---|---|---|---|
| 2067477 | Cooper | Jan 1937 | A |
| 3242998 | Gubbins | Mar 1966 | A |
| 3564506 | Bee et al. | Feb 1971 | A |
| 3681578 | Stevens | Aug 1972 | A |
| 3753008 | Guarnaschelli | Aug 1973 | A |
| 3757608 | Willner | Sep 1973 | A |
| 3855577 | Vandierendonck | Dec 1974 | A |
| 4151611 | Sugawara et al. | Apr 1979 | A |
| 4233667 | Devine et al. | Nov 1980 | A |
| 4414547 | Knapp et al. | Nov 1983 | A |
| 4498134 | Hansen et al. | Feb 1985 | A |
| 4498172 | Bhavsar | Feb 1985 | A |
| 4566102 | Hefner | Jan 1986 | A |
| 4571736 | Agrawal et al. | Feb 1986 | A |
| 4590583 | Miller | May 1986 | A |
| 4591979 | Iwashita | May 1986 | A |
| 4594682 | Drimak | Jun 1986 | A |
| 4623997 | Tulpule | Nov 1986 | A |
| 4663706 | Allen et al. | May 1987 | A |
| 4667190 | Fant et al. | May 1987 | A |
| 4682284 | Schrofer | Jul 1987 | A |
| 4706216 | Carter | Nov 1987 | A |
| 4720778 | Hall et al. | Jan 1988 | A |
| 4720780 | Dolecek | Jan 1988 | A |
| 4739474 | Holsztynski | Apr 1988 | A |
| 4761755 | Ardini et al. | Aug 1988 | A |
| 4791603 | Henry | Dec 1988 | A |
| 4811214 | Nosenchuck et al. | Mar 1989 | A |
| 4852043 | Guest | Jul 1989 | A |
| 4852048 | Morton | Jul 1989 | A |
| 4860201 | Miranker et al. | Aug 1989 | A |
| 4870302 | Freeman | Sep 1989 | A |
| 4873666 | Lefebvre et al. | Oct 1989 | A |
| 4882687 | Gordon | Nov 1989 | A |
| 4884231 | Mor et al. | Nov 1989 | A |
| 4891810 | de Corlieu et al. | Jan 1990 | A |
| 4901268 | Judd | Feb 1990 | A |
| 4910665 | Mattheyses et al. | Mar 1990 | A |
| 4918440 | Furtek et al. | Apr 1990 | A |
| 4939641 | Schwartz et al. | Jul 1990 | A |
| 4959781 | Rubinstein et al. | Sep 1990 | A |
| 4967340 | Dawes | Oct 1990 | A |
| 4972314 | Getzinger et al. | Nov 1990 | A |
| 4992933 | Taylor | Feb 1991 | A |
| 5010401 | Murakami et al. | Apr 1991 | A |
| 5014193 | Garner et al. | May 1991 | A |
| 5015884 | Agrawal et al. | May 1991 | A |
| 5021947 | Campbell et al. | Jun 1991 | A |
| 5023775 | Poret | Jun 1991 | A |
| 5034914 | Osterlund | Jul 1991 | A |
| 5036473 | Butts et al. | Jul 1991 | A |
| 5036493 | Nielsen | Jul 1991 | A |
| 5041924 | Blackborow et al. | Aug 1991 | A |
| 5043978 | Nagler et al. | Aug 1991 | A |
| 5047924 | Matsubara et al. | Sep 1991 | A |
| 5055997 | Sluijter et al. | Oct 1991 | A |
| 5065308 | Evans | Nov 1991 | A |
| 5072178 | Matsumoto | Dec 1991 | A |
| 5081375 | Pickett et al. | Jan 1992 | A |
| 5099447 | Myszewski | Mar 1992 | A |
| 5103311 | Sluijter et al. | Apr 1992 | A |
| 5109503 | Cruickshank et al. | Apr 1992 | A |
| 5113498 | Evan et al. | May 1992 | A |
| 5115510 | Okamoto et al. | May 1992 | A |
| 5119290 | Loo et al. | Jun 1992 | A |
| 5123109 | Hillis | Jun 1992 | A |
| 5125801 | Nabity et al. | Jun 1992 | A |
| 5128559 | Steele | Jul 1992 | A |
| 5142469 | Weisenborn | Aug 1992 | A |
| 5144166 | Camarota et al. | Sep 1992 | A |
| 5193202 | Jackson et al. | Mar 1993 | A |
| 5203005 | Horst | Apr 1993 | A |
| 5204935 | Mihara et al. | Apr 1993 | A |
| 5208491 | Ebeling et al. | May 1993 | A |
| 5212716 | Ferraiolo et al. | May 1993 | A |
| 5212777 | Gove et al. | May 1993 | A |
| 5218302 | Loewe et al. | Jun 1993 | A |
| 5226122 | Thayer et al. | Jul 1993 | A |
| RE34363 | Freeman | Aug 1993 | E |
| 5233539 | Agrawal et al. | Aug 1993 | A |
| 5237686 | Asano et al. | Aug 1993 | A |
| 5243238 | Kean | Sep 1993 | A |
| 5247689 | Ewert | Sep 1993 | A |
| RE34444 | Kaplinsky | Nov 1993 | E |
| 5274593 | Proebsting | Dec 1993 | A |
| 5276836 | Fukumaru et al. | Jan 1994 | A |
| 5287472 | Horst | Feb 1994 | A |
| 5287511 | Robinson et al. | Feb 1994 | A |
| 5287532 | Hunt | Feb 1994 | A |
| 5294119 | Vincent et al. | Mar 1994 | A |
| 5301284 | Estes et al. | Apr 1994 | A |
| 5301344 | Kolchinsky | Apr 1994 | A |
| 5303172 | Magar et al. | Apr 1994 | A |
| 5311079 | Ditlow et al. | May 1994 | A |
| 5327125 | Iwase et al. | Jul 1994 | A |
| 5336950 | Popli et al. | Aug 1994 | A |
| 5343406 | Freeman et al. | Aug 1994 | A |
| 5347639 | Rechtschaffen et al. | Sep 1994 | A |
| 5349193 | Mott et al. | Sep 1994 | A |
| 5353432 | Richek et al. | Oct 1994 | A |
| 5355508 | Kan | Oct 1994 | A |
| 5361373 | Gilson | Nov 1994 | A |
| 5365125 | Goetting et al. | Nov 1994 | A |
| 5379444 | Mumme | Jan 1995 | A |
| 5386154 | Goetting et al. | Jan 1995 | A |
| 5386518 | Reagle et al. | Jan 1995 | A |
| 5392437 | Matter et al. | Feb 1995 | A |
| 5408643 | Katayose | Apr 1995 | A |
| 5410723 | Schmidt et al. | Apr 1995 | A |
| 5412795 | Larson | May 1995 | A |
| 5418952 | Morley et al. | May 1995 | A |
| 5418953 | Hunt et al. | May 1995 | A |
| 5421019 | Holsztynski et al. | May 1995 | A |
| 5422823 | Agrawal et al. | Jun 1995 | A |
| 5425036 | Liu et al. | Jun 1995 | A |
| 5426378 | Ong | Jun 1995 | A |
| 5428526 | Flood et al. | Jun 1995 | A |
| 5430687 | Hung et al. | Jul 1995 | A |
| 5435000 | Boothroyd et al. | Jul 1995 | A |
| 5440245 | Galbraith et al. | Aug 1995 | A |
| 5440538 | Olsen et al. | Aug 1995 | A |
| 5442790 | Nosenchuck | Aug 1995 | A |
| 5444394 | Watson et al. | Aug 1995 | A |
| 5448186 | Kawata | Sep 1995 | A |
| 5450022 | New | Sep 1995 | A |
| 5455525 | Ho et al. | Oct 1995 | A |
| 5457644 | McCollum | Oct 1995 | A |
| 5465375 | Thepaut et al. | Nov 1995 | A |
| 5469003 | Kean | Nov 1995 | A |
| 5473266 | Ahanin et al. | Dec 1995 | A |
| 5473267 | Stansfield | Dec 1995 | A |
| 5475583 | Bock et al. | Dec 1995 | A |
| 5475803 | Stearns et al. | Dec 1995 | A |
| 5475856 | Kogge | Dec 1995 | A |
| 5477525 | Okabe | Dec 1995 | A |
| 5483620 | Pechanek et al. | Jan 1996 | A |
| 5485103 | Pedersen et al. | Jan 1996 | A |
| 5485104 | Agrawal et al. | Jan 1996 | A |
| 5489857 | Agrawal et al. | Feb 1996 | A |
| 5491353 | Kean | Feb 1996 | A |
| 5493239 | Zlotnick | Feb 1996 | A |
| 5497498 | Taylor | Mar 1996 | A |
| 5504439 | Tavana | Apr 1996 | A |
| 5506998 | Kato et al. | Apr 1996 | A |
| 5510730 | El Gamal et al. | Apr 1996 | A |
| 5511173 | Yamaura et al. | Apr 1996 | A |
| 5513366 | Agarwal et al. | Apr 1996 | A |
| 5521837 | Frankle et al. | May 1996 | A |
| 5522083 | Gove et al. | May 1996 | A |
| 5525971 | Flynn | Jun 1996 | A |
| 5530873 | Takano | Jun 1996 | A |
| 5530946 | Bouvier et al. | Jun 1996 | A |
| 5532693 | Winters et al. | Jul 1996 | A |
| 5532957 | Malhi | Jul 1996 | A |
| 5535406 | Kolchinsky | Jul 1996 | A |
| 5537057 | Leong et al. | Jul 1996 | A |
| 5537580 | Giomi et al. | Jul 1996 | A |
| 5537601 | Kimura et al. | Jul 1996 | A |
| 5541530 | Cliff et al. | Jul 1996 | A |
| 5544336 | Kato et al. | Aug 1996 | A |
| 5548773 | Kemeny et al. | Aug 1996 | A |
| 5550782 | Cliff et al. | Aug 1996 | A |
| 5555434 | Carlstedt | Sep 1996 | A |
| 5559450 | Ngai et al. | Sep 1996 | A |
| 5561738 | Kinerk et al. | Oct 1996 | A |
| 5568624 | Sites et al. | Oct 1996 | A |
| 5570040 | Lytle et al. | Oct 1996 | A |
| 5572710 | Asano et al. | Nov 1996 | A |
| 5574930 | Halverson, Jr. et al. | Nov 1996 | A |
| 5581731 | King et al. | Dec 1996 | A |
| 5581734 | DiBrino et al. | Dec 1996 | A |
| 5583450 | Trimberger et al. | Dec 1996 | A |
| 5584013 | Cheong et al. | Dec 1996 | A |
| 5586044 | Agrawal et al. | Dec 1996 | A |
| 5587921 | Agrawal et al. | Dec 1996 | A |
| 5588152 | Dapp et al. | Dec 1996 | A |
| 5590345 | Barker et al. | Dec 1996 | A |
| 5590348 | Phillips et al. | Dec 1996 | A |
| 5596742 | Agarwal et al. | Jan 1997 | A |
| 5600265 | El Gamal Abbas et al. | Feb 1997 | A |
| 5600597 | Kean et al. | Feb 1997 | A |
| 5600845 | Gilson | Feb 1997 | A |
| 5606698 | Powell | Feb 1997 | A |
| 5608342 | Trimberger | Mar 1997 | A |
| 5611049 | Pitts | Mar 1997 | A |
| 5617547 | Feeney et al. | Apr 1997 | A |
| 5617577 | Barker et al. | Apr 1997 | A |
| 5619720 | Garde et al. | Apr 1997 | A |
| 5625806 | Kromer | Apr 1997 | A |
| 5625836 | Barker et al. | Apr 1997 | A |
| 5627992 | Baror | May 1997 | A |
| 5634131 | Matter et al. | May 1997 | A |
| 5635851 | Tavana | Jun 1997 | A |
| 5642058 | Trimberger et al. | Jun 1997 | A |
| 5646544 | Iadanza | Jul 1997 | A |
| 5646545 | Trimberger et al. | Jul 1997 | A |
| 5649176 | Selvidge et al. | Jul 1997 | A |
| 5649179 | Steenstra et al. | Jul 1997 | A |
| 5652529 | Gould et al. | Jul 1997 | A |
| 5652894 | Hu et al. | Jul 1997 | A |
| 5655069 | Ogawara et al. | Aug 1997 | A |
| 5655124 | Lin | Aug 1997 | A |
| 5656950 | Duong et al. | Aug 1997 | A |
| 5657330 | Matsumoto | Aug 1997 | A |
| 5659785 | Pechanek et al. | Aug 1997 | A |
| 5659797 | Zandveld et al. | Aug 1997 | A |
| 5675262 | Duong et al. | Oct 1997 | A |
| 5675743 | Mavity | Oct 1997 | A |
| 5675757 | Davidson et al. | Oct 1997 | A |
| 5675777 | Glickman | Oct 1997 | A |
| 5680583 | Kuijsten | Oct 1997 | A |
| 5682491 | Pechanek et al. | Oct 1997 | A |
| 5687325 | Chang | Nov 1997 | A |
| 5694602 | Smith | Dec 1997 | A |
| 5696791 | Yeung | Dec 1997 | A |
| 5696976 | Nizar et al. | Dec 1997 | A |
| 5701091 | Kean | Dec 1997 | A |
| 5705938 | Kean | Jan 1998 | A |
| 5706482 | Matsushima et al. | Jan 1998 | A |
| 5713037 | Wilkinson et al. | Jan 1998 | A |
| 5717890 | Ichida et al. | Feb 1998 | A |
| 5717943 | Barker et al. | Feb 1998 | A |
| 5732209 | Vigil et al. | Mar 1998 | A |
| 5734869 | Chen | Mar 1998 | A |
| 5734921 | Dapp et al. | Mar 1998 | A |
| 5737516 | Circello et al. | Apr 1998 | A |
| 5737565 | Mayfield | Apr 1998 | A |
| 5742180 | Detton et al. | Apr 1998 | A |
| 5745734 | Craft et al. | Apr 1998 | A |
| 5748872 | Norman | May 1998 | A |
| 5748979 | Trimberger | May 1998 | A |
| 5752035 | Trimberger | May 1998 | A |
| 5754459 | Telikepalli | May 1998 | A |
| 5754820 | Yamagami | May 1998 | A |
| 5754827 | Barbier et al. | May 1998 | A |
| 5754871 | Wilkinson et al. | May 1998 | A |
| 5760602 | Tan | Jun 1998 | A |
| 5761484 | Agarwal et al. | Jun 1998 | A |
| 5773994 | Jones | Jun 1998 | A |
| 5778439 | Trimberger et al. | Jul 1998 | A |
| 5781756 | Hung | Jul 1998 | A |
| 5784636 | Rupp | Jul 1998 | A |
| 5794059 | Barker et al. | Aug 1998 | A |
| 5794062 | Baxter | Aug 1998 | A |
| 5801715 | Norman | Sep 1998 | A |
| 5801958 | Dangelo et al. | Sep 1998 | A |
| 5802290 | Casselman | Sep 1998 | A |
| 5804986 | Jones | Sep 1998 | A |
| 5815004 | Trimberger et al. | Sep 1998 | A |
| 5815715 | Kucukcakar | Sep 1998 | A |
| 5815726 | Cliff | Sep 1998 | A |
| 5821774 | Veytsman et al. | Oct 1998 | A |
| 5828229 | Cliff et al. | Oct 1998 | A |
| 5828858 | Athanas et al. | Oct 1998 | A |
| 5831448 | Kean | Nov 1998 | A |
| 5832288 | Wong | Nov 1998 | A |
| 5838165 | Chatter | Nov 1998 | A |
| 5841973 | Kessler et al. | Nov 1998 | A |
| 5844422 | Trimberger et al. | Dec 1998 | A |
| 5844888 | Markkula, Jr. et al. | Dec 1998 | A |
| 5848238 | Shimomura et al. | Dec 1998 | A |
| 5854918 | Baxter | Dec 1998 | A |
| 5857097 | Henzinger et al. | Jan 1999 | A |
| 5857109 | Taylor | Jan 1999 | A |
| 5859544 | Norman | Jan 1999 | A |
| 5860119 | Dockser | Jan 1999 | A |
| 5862403 | Kanai et al. | Jan 1999 | A |
| 5865239 | Carr | Feb 1999 | A |
| 5867691 | Shiraishi | Feb 1999 | A |
| 5867723 | Peters, Jr. et al. | Feb 1999 | A |
| 5870620 | Kadosumi et al. | Feb 1999 | A |
| 5884075 | Hester et al. | Mar 1999 | A |
| 5887162 | Williams et al. | Mar 1999 | A |
| 5887165 | Martel et al. | Mar 1999 | A |
| 5889533 | Lee | Mar 1999 | A |
| 5889982 | Rodgers et al. | Mar 1999 | A |
| 5892370 | Eaton et al. | Apr 1999 | A |
| 5892961 | Trimberger | Apr 1999 | A |
| 5892962 | Cloutier | Apr 1999 | A |
| 5894565 | Furtek et al. | Apr 1999 | A |
| 5898602 | Rothman et al. | Apr 1999 | A |
| 5901279 | Davis, III | May 1999 | A |
| 5915099 | Takata et al. | Jun 1999 | A |
| 5915123 | Mirsky et al. | Jun 1999 | A |
| 5924119 | Sindhu et al. | Jul 1999 | A |
| 5926638 | Inoue | Jul 1999 | A |
| 5927423 | Wada et al. | Jul 1999 | A |
| 5933023 | Young | Aug 1999 | A |
| 5933642 | Baxter et al. | Aug 1999 | A |
| 5936424 | Young et al. | Aug 1999 | A |
| 5943242 | Vorbach et al. | Aug 1999 | A |
| 5956518 | DeHon et al. | Sep 1999 | A |
| 5960193 | Guttag et al. | Sep 1999 | A |
| 5960200 | Eager et al. | Sep 1999 | A |
| 5966143 | Breternitz, Jr. | Oct 1999 | A |
| 5966534 | Cooke et al. | Oct 1999 | A |
| 5970254 | Cooke et al. | Oct 1999 | A |
| 5978260 | Trimberger et al. | Nov 1999 | A |
| 5978583 | Ekanadham et al. | Nov 1999 | A |
| 5996048 | Cherabuddi et al. | Nov 1999 | A |
| 5996083 | Gupta et al. | Nov 1999 | A |
| 5999990 | Sharrit et al. | Dec 1999 | A |
| 6003143 | Kim et al. | Dec 1999 | A |
| 6011407 | New | Jan 2000 | A |
| 6014509 | Furtek et al. | Jan 2000 | A |
| 6020758 | Patel et al. | Feb 2000 | A |
| 6020760 | Sample et al. | Feb 2000 | A |
| 6021490 | Vorbach et al. | Feb 2000 | A |
| 6023564 | Trimberger | Feb 2000 | A |
| 6023742 | Ebeling et al. | Feb 2000 | A |
| 6026481 | New et al. | Feb 2000 | A |
| 6034538 | Abramovici | Mar 2000 | A |
| 6035371 | Magloire | Mar 2000 | A |
| 6038650 | Vorbach et al. | Mar 2000 | A |
| 6038656 | Cummings et al. | Mar 2000 | A |
| 6044030 | Zheng et al. | Mar 2000 | A |
| 6045585 | Blainey | Apr 2000 | A |
| 6047115 | Mohan et al. | Apr 2000 | A |
| 6049222 | Lawman | Apr 2000 | A |
| 6049866 | Earl | Apr 2000 | A |
| 6052773 | DeHon et al. | Apr 2000 | A |
| 6054873 | Laramie | Apr 2000 | A |
| 6055619 | North et al. | Apr 2000 | A |
| 6058266 | Megiddo et al. | May 2000 | A |
| 6058469 | Baxter | May 2000 | A |
| 6064819 | Franssen et al. | May 2000 | A |
| 6076157 | Borkenhagen et al. | Jun 2000 | A |
| 6077315 | Greenbaum et al. | Jun 2000 | A |
| 6078736 | Guccione | Jun 2000 | A |
| 6081903 | Vorbach et al. | Jun 2000 | A |
| 6084429 | Trimberger | Jul 2000 | A |
| 6085317 | Smith | Jul 2000 | A |
| 6086628 | Dave et al. | Jul 2000 | A |
| 6088795 | Vorbach et al. | Jul 2000 | A |
| 6092174 | Roussakov | Jul 2000 | A |
| 6096091 | Hartmann | Aug 2000 | A |
| 6105105 | Trimberger et al. | Aug 2000 | A |
| 6105106 | Manning | Aug 2000 | A |
| 6108760 | Mirsky et al. | Aug 2000 | A |
| 6118724 | Higginbottom | Sep 2000 | A |
| 6119181 | Vorbach et al. | Sep 2000 | A |
| 6122719 | Mirsky et al. | Sep 2000 | A |
| 6125408 | McGee et al. | Sep 2000 | A |
| 6127908 | Bozler et al. | Oct 2000 | A |
| 6128720 | Pechanek et al. | Oct 2000 | A |
| 6134166 | Lytle et al. | Oct 2000 | A |
| 6137307 | Iwanczuk et al. | Oct 2000 | A |
| 6144220 | Young | Nov 2000 | A |
| 6145072 | Shams et al. | Nov 2000 | A |
| 6150837 | Beal et al. | Nov 2000 | A |
| 6150839 | New et al. | Nov 2000 | A |
| 6154048 | Iwanczuk et al. | Nov 2000 | A |
| 6154049 | New | Nov 2000 | A |
| 6157214 | Marshall | Dec 2000 | A |
| 6170051 | Dowling | Jan 2001 | B1 |
| 6172520 | Lawman et al. | Jan 2001 | B1 |
| 6173419 | Barnett | Jan 2001 | B1 |
| 6173434 | Wirthlin et al. | Jan 2001 | B1 |
| 6178494 | Casselman | Jan 2001 | B1 |
| 6185256 | Saito et al. | Feb 2001 | B1 |
| 6185731 | Maeda et al. | Feb 2001 | B1 |
| 6188240 | Nakaya | Feb 2001 | B1 |
| 6188650 | Hamada et al. | Feb 2001 | B1 |
| 6198304 | Sasaki | Mar 2001 | B1 |
| 6201406 | Iwanczuk et al. | Mar 2001 | B1 |
| 6202182 | Abramovici et al. | Mar 2001 | B1 |
| 6204687 | Schultz et al. | Mar 2001 | B1 |
| 6211697 | Lien et al. | Apr 2001 | B1 |
| 6212544 | Borkenhagen et al. | Apr 2001 | B1 |
| 6212650 | Guccione | Apr 2001 | B1 |
| 6215326 | Jefferson et al. | Apr 2001 | B1 |
| 6216223 | Revilla et al. | Apr 2001 | B1 |
| 6219833 | Solomon et al. | Apr 2001 | B1 |
| RE37195 | Kean | May 2001 | E |
| 6230307 | Davis et al. | May 2001 | B1 |
| 6240502 | Panwar et al. | May 2001 | B1 |
| 6243808 | Wang | Jun 2001 | B1 |
| 6247147 | Beenstra | Jun 2001 | B1 |
| 6252792 | Marshall et al. | Jun 2001 | B1 |
| 6256724 | Hocevar et al. | Jul 2001 | B1 |
| 6260114 | Schug | Jul 2001 | B1 |
| 6260179 | Ohsawa et al. | Jul 2001 | B1 |
| 6262908 | Marshall et al. | Jul 2001 | B1 |
| 6263430 | Trimberger et al. | Jul 2001 | B1 |
| 6266760 | DeHon et al. | Jul 2001 | B1 |
| 6279077 | Nasserbakht et al. | Aug 2001 | B1 |
| 6282627 | Wong et al. | Aug 2001 | B1 |
| 6282701 | Wygodny et al. | Aug 2001 | B1 |
| 6285624 | Chen | Sep 2001 | B1 |
| 6286134 | Click, Jr. et al. | Sep 2001 | B1 |
| 6288566 | Hanrahan et al. | Sep 2001 | B1 |
| 6289440 | Casselman | Sep 2001 | B1 |
| 6298043 | Mauger et al. | Oct 2001 | B1 |
| 6298396 | Loyer et al. | Oct 2001 | B1 |
| 6298472 | Phillips et al. | Oct 2001 | B1 |
| 6301706 | Maslennikov et al. | Oct 2001 | B1 |
| 6311200 | Hanrahan et al. | Oct 2001 | B1 |
| 6311265 | Beckerle et al. | Oct 2001 | B1 |
| 6321298 | Hubis | Nov 2001 | B1 |
| 6321366 | Tseng et al. | Nov 2001 | B1 |
| 6321373 | Ekanadham et al. | Nov 2001 | B1 |
| 6338106 | Vorbach et al. | Jan 2002 | B1 |
| 6339840 | Kothari et al. | Jan 2002 | B1 |
| 6341318 | Dakhil | Jan 2002 | B1 |
| 6347346 | Taylor | Feb 2002 | B1 |
| 6349346 | Hanrahan et al. | Feb 2002 | B1 |
| 6353841 | Marshall et al. | Mar 2002 | B1 |
| 6362650 | New et al. | Mar 2002 | B1 |
| 6370596 | Dakhil | Apr 2002 | B1 |
| 6373779 | Pang et al. | Apr 2002 | B1 |
| 6374286 | Gee | Apr 2002 | B1 |
| 6378068 | Foster et al. | Apr 2002 | B1 |
| 6381624 | Colon-Bonet et al. | Apr 2002 | B1 |
| 6389379 | Lin et al. | May 2002 | B1 |
| 6389579 | Phillips et al. | May 2002 | B1 |
| 6392912 | Hanrahan et al. | May 2002 | B1 |
| 6398383 | Huang | Jun 2002 | B1 |
| 6400601 | Sudo et al. | Jun 2002 | B1 |
| 6404224 | Azegami et al. | Jun 2002 | B1 |
| 6405185 | Pechanek et al. | Jun 2002 | B1 |
| 6405299 | Vorbach et al. | Jun 2002 | B1 |
| 6421808 | McGeer et al. | Jul 2002 | B1 |
| 6421809 | Wuytack et al. | Jul 2002 | B1 |
| 6421817 | Mohan et al. | Jul 2002 | B1 |
| 6425054 | Nguyen | Jul 2002 | B1 |
| 6425068 | Vorbach et al. | Jul 2002 | B1 |
| 6426649 | Fu et al. | Jul 2002 | B1 |
| 6427156 | Chapman et al. | Jul 2002 | B1 |
| 6430309 | Pressman et al. | Aug 2002 | B1 |
| 6434642 | Camilleri et al. | Aug 2002 | B1 |
| 6434672 | Gaither | Aug 2002 | B1 |
| 6434695 | Esfahani et al. | Aug 2002 | B1 |
| 6434699 | Jones et al. | Aug 2002 | B1 |
| 6437441 | Yamamoto | Aug 2002 | B1 |
| 6438747 | Schreiber et al. | Aug 2002 | B1 |
| 6449283 | Chao et al. | Sep 2002 | B1 |
| 6457116 | Mirsky et al. | Sep 2002 | B1 |
| 6476634 | Bilski | Nov 2002 | B1 |
| 6477643 | Vorbach et al. | Nov 2002 | B1 |
| 6480937 | Vorbach et al. | Nov 2002 | B1 |
| 6480954 | Trimberger et al. | Nov 2002 | B2 |
| 6483343 | Faith et al. | Nov 2002 | B1 |
| 6487709 | Keller et al. | Nov 2002 | B1 |
| 6490695 | Zagorski et al. | Dec 2002 | B1 |
| 6496902 | Faanes et al. | Dec 2002 | B1 |
| 6496971 | Lesea et al. | Dec 2002 | B1 |
| 6504398 | Lien et al. | Jan 2003 | B1 |
| 6507898 | Gibson et al. | Jan 2003 | B1 |
| 6507947 | Schreiber et al. | Jan 2003 | B1 |
| 6512804 | Johnson et al. | Jan 2003 | B1 |
| 6513077 | Vorbach et al. | Jan 2003 | B2 |
| 6516382 | Manning | Feb 2003 | B2 |
| 6518787 | Allegrucci et al. | Feb 2003 | B1 |
| 6519674 | Lam et al. | Feb 2003 | B1 |
| 6523107 | Stansfield et al. | Feb 2003 | B1 |
| 6525678 | Veenstra et al. | Feb 2003 | B1 |
| 6526520 | Vorbach et al. | Feb 2003 | B1 |
| 6538468 | Moore | Mar 2003 | B1 |
| 6538470 | Langhammer et al. | Mar 2003 | B1 |
| 6539415 | Mercs | Mar 2003 | B1 |
| 6539438 | Ledzius et al. | Mar 2003 | B1 |
| 6539477 | Seawright | Mar 2003 | B1 |
| 6542394 | Marshall et al. | Apr 2003 | B2 |
| 6542844 | Hanna | Apr 2003 | B1 |
| 6542998 | Vorbach et al. | Apr 2003 | B1 |
| 6553395 | Marshall et al. | Apr 2003 | B2 |
| 6553479 | Mirsky et al. | Apr 2003 | B2 |
| 6567834 | Marshall et al. | May 2003 | B1 |
| 6571381 | Vorbach et al. | May 2003 | B1 |
| 6587939 | Takano | Jul 2003 | B1 |
| 6598128 | Yoshioka et al. | Jul 2003 | B1 |
| 6606704 | Adiletta et al. | Aug 2003 | B1 |
| 6624819 | Lewis | Sep 2003 | B1 |
| 6631487 | Abramovici et al. | Oct 2003 | B1 |
| 6633181 | Rupp | Oct 2003 | B1 |
| 6657457 | Hanrahan et al. | Dec 2003 | B1 |
| 6658564 | Smith et al. | Dec 2003 | B1 |
| 6665758 | Frazier et al. | Dec 2003 | B1 |
| 6665865 | Ruf | Dec 2003 | B1 |
| 6668237 | Guccione et al. | Dec 2003 | B1 |
| 6681388 | Sato et al. | Jan 2004 | B1 |
| 6687788 | Vorbach et al. | Feb 2004 | B2 |
| 6697979 | Vorbach et al. | Feb 2004 | B1 |
| 6704816 | Burke | Mar 2004 | B1 |
| 6708325 | Cooke et al. | Mar 2004 | B2 |
| 6717436 | Kress et al. | Apr 2004 | B2 |
| 6721830 | Vorbach et al. | Apr 2004 | B2 |
| 6725334 | Barroso et al. | Apr 2004 | B2 |
| 6728871 | Vorbach et al. | Apr 2004 | B1 |
| 6745317 | Mirsky et al. | Jun 2004 | B1 |
| 6748440 | Lisitsa et al. | Jun 2004 | B1 |
| 6751722 | Mirsky et al. | Jun 2004 | B2 |
| 6754805 | Juan | Jun 2004 | B1 |
| 6757847 | Farkash et al. | Jun 2004 | B1 |
| 6757892 | Gokhale et al. | Jun 2004 | B1 |
| 6782445 | Olgiati et al. | Aug 2004 | B1 |
| 6785826 | Durham et al. | Aug 2004 | B1 |
| 6802026 | Patterson et al. | Oct 2004 | B1 |
| 6803787 | Wicker, Jr. | Oct 2004 | B1 |
| 6820188 | Stansfield et al. | Nov 2004 | B2 |
| 6829697 | Davis et al. | Dec 2004 | B1 |
| 6836842 | Guccione et al. | Dec 2004 | B1 |
| 6847370 | Baldwin et al. | Jan 2005 | B2 |
| 6868476 | Rosenbluth | Mar 2005 | B2 |
| 6871341 | Shyr | Mar 2005 | B1 |
| 6874108 | Abramovici et al. | Mar 2005 | B1 |
| 6886092 | Douglass et al. | Apr 2005 | B1 |
| 6901502 | Yano et al. | May 2005 | B2 |
| 6928523 | Yamada | Aug 2005 | B2 |
| 6961924 | Bates et al. | Nov 2005 | B2 |
| 6975138 | Pani et al. | Dec 2005 | B2 |
| 6977649 | Baldwin et al. | Dec 2005 | B1 |
| 7000161 | Allen et al. | Feb 2006 | B1 |
| 7007096 | Lisitsa et al. | Feb 2006 | B1 |
| 7010687 | Ichimura | Mar 2006 | B2 |
| 7028107 | Vorbach et al. | Apr 2006 | B2 |
| 7036114 | McWilliams et al. | Apr 2006 | B2 |
| 7038952 | Zack et al. | May 2006 | B1 |
| 7043416 | Lin | May 2006 | B1 |
| 7155708 | Hammes et al. | Dec 2006 | B2 |
| 7164422 | Wholey et al. | Jan 2007 | B1 |
| 7210129 | May et al. | Apr 2007 | B2 |
| 7216204 | Rosenbluth | May 2007 | B2 |
| 7237087 | Vorbach et al. | Jun 2007 | B2 |
| 7249351 | Songer et al. | Jul 2007 | B1 |
| 7254649 | Subramanian et al. | Aug 2007 | B2 |
| 7340596 | Crosland et al. | Mar 2008 | B1 |
| 7346644 | Langhammer et al. | Mar 2008 | B1 |
| 7350178 | Crosland et al. | Mar 2008 | B1 |
| 7382156 | Pani et al. | Jun 2008 | B2 |
| 7595659 | Vorbach et al. | Sep 2009 | B2 |
| 7650448 | Vorbach et al. | Jan 2010 | B2 |
| 7657877 | Vorbach et al. | Feb 2010 | B2 |
| 7759968 | Hussein et al. | Jul 2010 | B1 |
| 20010001860 | Beiu | May 2001 | A1 |
| 20010003834 | Shimonishi | Jun 2001 | A1 |
| 20010010074 | Nishihara et al. | Jul 2001 | A1 |
| 20010018733 | Fujii et al. | Aug 2001 | A1 |
| 20010032305 | Barry | Oct 2001 | A1 |
| 20020010853 | Trimberger et al. | Jan 2002 | A1 |
| 20020013861 | Adiletta et al. | Jan 2002 | A1 |
| 20020038414 | Taylor et al. | Mar 2002 | A1 |
| 20020045952 | Blemel | Apr 2002 | A1 |
| 20020073282 | Chauvel et al. | Jun 2002 | A1 |
| 20020083308 | Pereira et al. | Jun 2002 | A1 |
| 20020099759 | Gootherts | Jul 2002 | A1 |
| 20020103839 | Ozawa | Aug 2002 | A1 |
| 20020124238 | Metzgen | Sep 2002 | A1 |
| 20020138716 | Master et al. | Sep 2002 | A1 |
| 20020143505 | Drusinsky | Oct 2002 | A1 |
| 20020144229 | Hanrahan | Oct 2002 | A1 |
| 20020152060 | Tseng | Oct 2002 | A1 |
| 20020156962 | Chopra et al. | Oct 2002 | A1 |
| 20020162097 | Meribout | Oct 2002 | A1 |
| 20020165886 | Lam | Nov 2002 | A1 |
| 20030001615 | Sueyoshi et al. | Jan 2003 | A1 |
| 20030014743 | Cooke et al. | Jan 2003 | A1 |
| 20030046607 | Vorbach | Mar 2003 | A1 |
| 20030052711 | Taylor et al. | Mar 2003 | A1 |
| 20030055861 | Lai et al. | Mar 2003 | A1 |
| 20030056062 | Prabhu | Mar 2003 | A1 |
| 20030056085 | Vorbach | Mar 2003 | A1 |
| 20030056091 | Greenberg | Mar 2003 | A1 |
| 20030056202 | Vorbach | Mar 2003 | A1 |
| 20030061542 | Bates et al. | Mar 2003 | A1 |
| 20030062922 | Douglass et al. | Apr 2003 | A1 |
| 20030070059 | Dally et al. | Apr 2003 | A1 |
| 20030086300 | Noyes et al. | May 2003 | A1 |
| 20030093662 | Vorbach et al. | May 2003 | A1 |
| 20030097513 | Vorbach et al. | May 2003 | A1 |
| 20030123579 | Safavi et al. | Jul 2003 | A1 |
| 20030135686 | Vorbach et al. | Jul 2003 | A1 |
| 20030154349 | Berg et al. | Aug 2003 | A1 |
| 20030192032 | Andrade et al. | Oct 2003 | A1 |
| 20040015899 | May et al. | Jan 2004 | A1 |
| 20040025005 | Vorbach et al. | Feb 2004 | A1 |
| 20040039880 | Pentkovski et al. | Feb 2004 | A1 |
| 20040078548 | Claydon et al. | Apr 2004 | A1 |
| 20040088689 | Hammes | May 2004 | A1 |
| 20040088691 | Hammes et al. | May 2004 | A1 |
| 20040168099 | Vorbach et al. | Aug 2004 | A1 |
| 20040199688 | Vorbach et al. | Oct 2004 | A1 |
| 20050066213 | Vorbach et al. | Mar 2005 | A1 |
| 20050091468 | Morita et al. | Apr 2005 | A1 |
| 20050144210 | Simkins et al. | Jun 2005 | A1 |
| 20050144212 | Simkins et al. | Jun 2005 | A1 |
| 20050144215 | Simkins et al. | Jun 2005 | A1 |
| 20060036988 | Allen et al. | Feb 2006 | A1 |
| 20060230094 | Simkins et al. | Oct 2006 | A1 |
| 20060230096 | Thendean et al. | Oct 2006 | A1 |
| 20070083730 | Vorbach et al. | Apr 2007 | A1 |
| 20080313383 | Morita et al. | Dec 2008 | A1 |
| 20090085603 | Paul et al. | Apr 2009 | A1 |
| 20100306602 | Kamiya et al. | Dec 2010 | A1 |
| Number | Date | Country |
|---|---|---|
| 42 21 278 | Jan 1994 | DE |
| 44 16 881 | Nov 1994 | DE |
| 38 55 673 | Nov 1996 | DE |
| 196 51 075 | Jun 1998 | DE |
| 196 54 593 | Jul 1998 | DE |
| 196 54 595 | Jul 1998 | DE |
| 196 54 846 | Jul 1998 | DE |
| 197 04 044 | Aug 1998 | DE |
| 197 04 728 | Aug 1998 | DE |
| 197 04 742 | Sep 1998 | DE |
| 198 22 776 | Mar 1999 | DE |
| 198 07 872 | Aug 1999 | DE |
| 198 61 088 | Feb 2000 | DE |
| 199 26 538 | Dec 2000 | DE |
| 100 28 397 | Dec 2001 | DE |
| 100 36 627 | Feb 2002 | DE |
| 101 29 237 | Apr 2002 | DE |
| 102 04 044 | Aug 2003 | DE |
| 0 208 457 | Jan 1987 | EP |
| 0 221 360 | May 1987 | EP |
| 0 398 552 | Nov 1990 | EP |
| 0 428 327 | May 1991 | EP |
| 0 463 721 | Jan 1992 | EP |
| 0 477 809 | Apr 1992 | EP |
| 0 485 690 | May 1992 | EP |
| 0 497 029 | Aug 1992 | EP |
| 0 539 595 | May 1993 | EP |
| 0 638 867 | Aug 1994 | EP |
| 0 628 917 | Dec 1994 | EP |
| 0 678 985 | Oct 1995 | EP |
| 0 686 915 | Dec 1995 | EP |
| 0 707 269 | Apr 1996 | EP |
| 0 735 685 | Oct 1996 | EP |
| 0 835 685 | Oct 1996 | EP |
| 0 746 106 | Dec 1996 | EP |
| 0 748 051 | Dec 1996 | EP |
| 0 726 532 | Jul 1998 | EP |
| 0 926 594 | Jun 1999 | EP |
| 1 102 674 | Jul 1999 | EP |
| 1 061 439 | Dec 2000 | EP |
| 1 115 204 | Jul 2001 | EP |
| 1 146 432 | Oct 2001 | EP |
| 0 696 001 | Dec 2001 | EP |
| 1 669 885 | Jun 2006 | EP |
| 2 752 466 | Feb 1998 | FR |
| 2 304 438 | Mar 1997 | GB |
| 58-58672 | Apr 1983 | JP |
| 58-058672 | Apr 1983 | JP |
| 10-44571 | Feb 1989 | JP |
| 1-229378 | Sep 1989 | JP |
| 2-130023 | May 1990 | JP |
| 2-226423 | Sep 1990 | JP |
| 5-265705 | Oct 1993 | JP |
| 5-276007 | Oct 1993 | JP |
| 05-509184 | Dec 1993 | JP |
| 6-266605 | Sep 1994 | JP |
| 7-086921 | Mar 1995 | JP |
| 7-154242 | Jun 1995 | JP |
| 8-148989 | Jun 1995 | JP |
| 7-182160 | Jul 1995 | JP |
| 7-182167 | Jul 1995 | JP |
| 8-44581 | Feb 1996 | JP |
| 08069447 | Mar 1996 | JP |
| 8-101761 | Apr 1996 | JP |
| 8-102492 | Apr 1996 | JP |
| 8-106443 | Apr 1996 | JP |
| 8-221164 | Aug 1996 | JP |
| 8-250685 | Sep 1996 | JP |
| 9-27745 | Jan 1997 | JP |
| 9-237284 | Sep 1997 | JP |
| 9-294069 | Nov 1997 | JP |
| 11-046187 | Feb 1999 | JP |
| 11-184718 | Jul 1999 | JP |
| 11-307725 | Nov 1999 | JP |
| 2000-076066 | Mar 2000 | JP |
| 2000-181566 | Jun 2000 | JP |
| 2000-201066 | Jul 2000 | JP |
| 2000-311156 | Nov 2000 | JP |
| 2001-500682 | Jan 2001 | JP |
| 2001-167066 | Jun 2001 | JP |
| 2001-510650 | Jul 2001 | JP |
| 2001-236221 | Aug 2001 | JP |
| 2002-0033457 | Jan 2002 | JP |
| 3-961028 | Aug 2007 | JP |
| WO9004835 | May 1990 | WO |
| WO9011648 | Oct 1990 | WO |
| WO9201987 | Feb 1992 | WO |
| WO9311503 | Jun 1993 | WO |
| WO9406077 | Mar 1994 | WO |
| WO9408399 | Apr 1994 | WO |
| WO9500161 | Jan 1995 | WO |
| WO9526001 | Sep 1995 | WO |
| WO9810517 | Mar 1998 | WO |
| WO9826356 | Jun 1998 | WO |
| WO9828697 | Jul 1998 | WO |
| WO9829952 | Jul 1998 | WO |
| WO9831102 | Jul 1998 | WO |
| WO9835294 | Aug 1998 | WO |
| WO9835299 | Aug 1998 | WO |
| WO9900731 | Jan 1999 | WO |
| WO9900739 | Jan 1999 | WO |
| WO9912111 | Mar 1999 | WO |
| WO9932975 | Jul 1999 | WO |
| WO9940522 | Aug 1999 | WO |
| WO9944120 | Sep 1999 | WO |
| WO9944147 | Sep 1999 | WO |
| WO0017771 | Mar 2000 | WO |
| WO0038087 | Jun 2000 | WO |
| WO0045282 | Aug 2000 | WO |
| WO0049496 | Aug 2000 | WO |
| WO0077652 | Dec 2000 | WO |
| WO0155917 | Aug 2001 | WO |
| WO0213000 | Feb 2002 | WO |
| WO0221010 | Mar 2002 | WO |
| WO0229600 | Apr 2002 | WO |
| WO0250665 | Jun 2002 | WO |
| WO02071196 | Sep 2002 | WO |
| WO02071248 | Sep 2002 | WO |
| WO02071249 | Sep 2002 | WO |
| WO02103532 | Dec 2002 | WO |
| WO03017095 | Feb 2003 | WO |
| WO03023616 | Mar 2003 | WO |
| WO03025781 | Mar 2003 | WO |
| WO03032975 | Apr 2003 | WO |
| WO03036507 | May 2003 | WO |
| WO03091875 | Nov 2003 | WO |
| WO2004053718 | Jun 2004 | WO |
| WO2004114128 | Dec 2004 | WO |
| WO2005045692 | May 2005 | WO |
| WO2007030395 | Mar 2007 | WO |