Method and system for creating and programming an adaptive computing engine

Information

  • Patent Grant
  • Patent Number
    7,865,847
  • Date Filed
    Friday, January 25, 2008
  • Date Issued
    Tuesday, January 4, 2011
Abstract
A system and corresponding method for creating an adaptive computing engine (ACE) includes algorithmic elements and ACE building blocks, and creates a design for heterogeneous nodes to provide appropriate hardware circuit functions that implement the algorithmic elements. Creating the design includes selecting an initial set of the ACE building blocks. The system and corresponding method also optimizes the design by selecting a different set of the ACE building blocks that meets predetermined performance standards for the efficiency of the ACE when performance of the ACE is simulated. The ACE building blocks preferably belong to one of a plurality of building block types. Preferably, the system and method include a profiler for providing code to simulate a hardware design that implements the algorithmic elements, for identifying hot spots in the code, and for creating the design based thereon.
Description
FIELD OF INVENTION

The present invention relates to adaptive computing machines, and more particularly to creating and programming an adaptive computing engine.


BACKGROUND OF THE INVENTION

The electronics industry has become increasingly driven to meet the demands of high-volume consumer applications, which comprise a majority of the embedded systems market. Examples of consumer applications where embedded systems are employed include handheld devices, such as cell phones, personal digital assistants (PDAs), global positioning system (GPS) receivers, digital cameras, etc. By their nature, these devices are required to be small, low-power, light-weight, and feature-rich. Thus, embedded systems face challenges in producing performance with minimal delay, minimal power consumption, and minimal cost. As the numbers and types of consumer applications where embedded systems are employed increase, these challenges become even more pressing.


Each of these applications typically comprises a plurality of algorithms that perform the specific functions of the application. An algorithm typically includes multiple smaller elements, called algorithmic elements, which when performed produce a work product. An example of an algorithm is the QCELP (QUALCOMM Code Excited Linear Prediction) voice compression/decompression algorithm, which is used in cell phones to compress and decompress voice in order to save wireless spectrum.


Conventional hardware architectures typically provide a specific hardware accelerator for only one or two algorithmic elements. This has generally sufficed in the past, since most hardware acceleration has been performed in the realm of infrastructure base stations. There, many channels are processed (typically 64 or more), so one or two hardware accelerators that speed up those algorithmic elements can be justified. The best current practice is to place a digital signal processing (DSP) IC alongside the specific hardware acceleration circuitry and then to array many of these together in order to process the workload. Since any gain in performance or power dissipation is multiplied by the number of channels (64), this approach is currently favored.


For example, in a base station implementation of the QCELP algorithm, accelerating the pitch computation results in a 20% performance/power saving per channel. Saving 20% of the processing across all 64 channels yields a significantly large overall performance/power saving.
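

The arithmetic behind this claim can be checked with the short sketch below, which simply multiplies the per-channel saving by the channel count. The 20% figure and the 64-channel count come from the text; the 26 MHz per-channel load is an assumption borrowed from the vocoder discussion later in this document and is illustrative only.

#include <stdio.h>

int main(void) {
    const int channels = 64;                 /* channels per base station (from the text) */
    const double saving_per_channel = 0.20;  /* pitch-computation saving per channel (from the text) */
    const double load_per_channel = 26.0;    /* assumed per-channel DSP load in MHz (illustrative) */

    double total_load = channels * load_per_channel;       /* aggregate processing load */
    double total_saving = total_load * saving_per_channel; /* load removed by one accelerator */

    printf("aggregate load: %.0f MHz, saved by the accelerator: %.0f MHz\n",
           total_load, total_saving);
    return 0;
}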


The shortcomings of this approach are revealed when attempts are made to accelerate an algorithmic element in a mobile terminal. There, typically only a single channel is processed, and for significant performance and power savings to be realized, many algorithmic elements must be accelerated. The problem, however, is that the size of the silicon is bounded by cost constraints, and a designer cannot justify adding specific acceleration circuitry for every algorithmic element. The QCELP algorithm itself, however, consists of many individual algorithmic elements; the 17 most frequently used are:


1. Pitch Search Recursive Convolution


2. Pitch Search Autocorrelation of Exx


3. Pitch Search Correlation of Exy


4. Pitch Search Autocorrelation of Eyy


5. Pitch Search Pitch Lag and Minimum Error


6. Pitch Search Sinc Interpolation of Exy


7. Pitch Search Interpolation of Eyy


8. Codebook Search Recursive convolution


9. Codebook Search Autocorrelation of Eyy


10. Codebook Search Correlation of Exy


11. Codebook Search Codebook index and Minimum Error


12. Pole Filter


13. Zero Filter


14. Pole 1 Tap Filter


15. Cosine


16. Line Spectral Pair Zero search


17. Divider


For example, in a mobile terminal implementation of the QCELP algorithm, if the pitch computation is accelerated, the performance/power dissipation is reduced by 20% for an increased cost of silicon area. By itself, the gain for the cost is not economically justifiable. However, if, for the cost in silicon area of a single accelerator, there were an IC that could adapt itself over time to become the accelerator for each of the 17 algorithmic elements, a saving on the order of 80% could be obtained for the cost of a single adaptable accelerator.


Normal design approaches for embedded systems tend to fall into one of three categories: an ASIC (application specific integrated circuit) approach; a microprocessor/DSP (digital signal processor) approach; and an FPGA (field programmable gate array) approach. Unfortunately, each of these approaches has drawbacks. In the ASIC approach, the design tools have limited ability to describe the algorithm of the system. Also, the hardware is fixed, and the algorithms are frozen in hardware. For the microprocessor/DSP approach, the general-purpose hardware is fixed and inefficient. The algorithms may be changed, but they have to be artificially partitioned and constrained to match the hardware. With the FPGA approach, use of the same design tools as for the ASIC approach results in the same problem of limited ability to describe the algorithm. Further, FPGAs consume significant power and are too difficult to reconfigure to meet changing product requirements as future generations are produced.


An alternative is to attempt to overcome the disadvantages of each of these approaches while utilizing their advantages. Accordingly, what is desired is a system with which more efficient consumer applications can be created and programmed than is possible with conventional approaches.


SUMMARY OF THE INVENTION

A system for creating an adaptive computing engine (ACE) is disclosed. The system comprises a plurality of algorithmic elements capable of being configured into an adaptive computing engine, and means for mapping the operations of the plurality of algorithmic elements to non-homogeneous nodes by using computational and data analysis. The system and corresponding method also include means for utilizing the mapped algorithmic elements to provide the appropriate hardware function. A system and method in accordance with the present invention provide the ability to bring into existence efficient hardware accelerators for a particular algorithmic element and then to reuse the same silicon area to bring into existence a new hardware accelerator for the next algorithmic element.


With the ability to optimize operations of an ACE in accordance with the present invention, an algorithm is allowed to run on the most efficient hardware for the minimum amount of time required. Further, more adaptability is achieved for a wireless system to perform the task at hand during run time. Thus, algorithms are no longer required to be altered to fit predetermined hardware existing on a processor, and the optimum hardware required by an algorithm comes into existence for the minimum time that the algorithm needs to run.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram illustrating a preferred apparatus in accordance with the present invention.



FIG. 2 illustrates a simple flow chart of providing an ACE in accordance with the present invention.



FIG. 3 is a flow chart which illustrates the operation of the profiler in accordance with the present invention.



FIG. 4 is a flow chart which illustrates optimizing the mixture of composite blocks.



FIGS. 5A-5F illustrate dataflow graphs for QCELP operations.



FIG. 6 illustrates an integrated environment in accordance with the present invention.





DETAILED DESCRIPTION OF THE INVENTION

The present invention provides a method and system for optimizing operations of an ACE. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. Thus, the present invention is not intended to be limited to the embodiments shown but is to be accorded the widest scope consistent with the principles and features described herein.


An approach that is dynamic both in terms of the hardware resources and algorithms is emerging and is referred to as an adaptive computing engine (ACE) approach. ACEs can be reconfigured upwards of hundreds of thousands of times a second while consuming very little power. The ability to reconfigure the logical functions inside the ACE at high speed and “on-the-fly”, i.e., while the device remains in operation, describes the dynamic hardware resource feature of the ACE. Similarly, the ACE operates with dynamic algorithms, which refers to algorithms with constituent parts that have temporal elements and thus are only resident in hardware for short portions of time as required.


While the advantages of on-the-fly adaptation in ACE approaches are easily demonstrated, a need exists for a tool that supports optimizing of the ACE architecture for a particular problem space. FIG. 1 is a block diagram illustrating a preferred apparatus 100 in accordance with the present invention. The apparatus 100, referred to herein as an adaptive computing machine (“ACE”) 100, is preferably embodied as an integrated circuit, or as a portion of an integrated circuit having other, additional components. In the preferred embodiment, and as discussed in greater detail below, the ACE 100 includes a controller 120, one or more reconfigurable matrices 150, such as matrices 150A through 150N as illustrated, a matrix interconnection network 110, and preferably also includes a memory 140.


In a significant departure from the prior art, the ACE 100 does not utilize traditional (and typically separate) data and instruction busses for signaling and other transmission between and among the reconfigurable matrices 150, the controller 120, and the memory 140, or for other input/output (“I/O”) functionality. Rather, data, control and configuration information are transmitted between and among these elements, utilizing the matrix interconnection network 110, which may be configured and reconfigured, in real-time, to provide any given connection between and among the reconfigurable matrices 150, the controller 120 and the memory 140, as discussed in greater detail below.


The memory 140 may be implemented in any desired or preferred way as known in the art, and may be included within the ACE 100 or incorporated within another IC or portion of an IC. In the preferred embodiment, the memory 140 is included within the ACE 100, and preferably is a low power consumption random access memory (RAM), but also may be any other form of memory, such as flash, DRAM, SRAM, MRAM, ROM, EPROM or EEPROM. In the preferred embodiment, the memory 140 preferably includes direct memory access (DMA) engines, not separately illustrated.


The controller 120 is preferably implemented as a reduced instruction set (“RISC”) processor, controller or other device or IC capable of performing the two types of functionality discussed below. The first control functionality, referred to as “kernel” control, is illustrated as kernel controller (“KARC”) 125, and the second control functionality, referred to as “matrix” control, is illustrated as matrix controller (“MARC”) 130. The control functions of the KARC 125 and MARC 130 are explained in greater detail below, with reference to the configurability and reconfigurability of the various matrices 150, and with reference to the preferred form of combined data, configuration and control information referred to herein as a “silverware” module.


The matrix interconnection network 110 of FIG. 1, and its subset interconnection networks collectively and generally referred to as “interconnect”, “interconnection(s)” or “interconnection network(s)”, may be implemented as known in the art, such as utilizing the interconnection networks or switching fabrics of FPGAs, albeit in a considerably more limited, less “rich” fashion, to reduce capacitance and increase speed of operation. In the preferred embodiment, the various interconnection networks are implemented as described, for example, in U.S. Pat. Nos. 5,218,240, 5,336,950, 5,245,277 and 5,144,166. These various interconnection networks provide selectable (or switchable) connections between and among the controller 120, the memory 140, and the various matrices 150, providing the physical basis for the configuration and reconfiguration referred to herein, in response to and under the control of configuration signaling generally referred to herein as “configuration information”. In addition, the various interconnection networks, including the network 110 and the interconnection networks within each of the matrices (not shown), provide selectable or switchable data, input, output, control and configuration paths between and among the controller 120, the memory 140, the various matrices 150, and the computational units (not shown) and computational elements (not shown) within the matrices 150, in lieu of any form of traditional or separate input/output busses, data busses, and instruction busses.


The various matrices 150 are reconfigurable and heterogeneous, namely, in general, and depending upon the desired configuration: reconfigurable matrix 150A is generally different from reconfigurable matrices 150B through 150N; reconfigurable matrix 150B is generally different from reconfigurable matrices 150A and 150C through 150N; reconfigurable matrix 150C is generally different from reconfigurable matrices 150A, 150B and 150D through 150N, and so on. The various reconfigurable matrices 150 each generally contain a different or varied mix of computation units, which in turn generally contain a different or varied mix of fixed, application specific computational elements, which may be connected, configured and reconfigured in various ways to perform varied functions, through the interconnection networks. In addition to varied internal configurations and reconfigurations, the various matrices 150 may be connected, configured and reconfigured at a higher level, with respect to each of the other matrices 150, through the matrix interconnection network 110.



FIG. 2 illustrates a simple flow chart of providing an ACE in accordance with the present invention. First, a plurality of algorithmic elements are provided, via step 202. Next, the algorithmic elements are mapped onto non-homogeneous, i.e., heterogeneous, nodes by using data and computational analyses, via step 204. Finally, the mapped algorithmic elements within the node are utilized to provide the appropriate hardware function, via step 206. In a preferred embodiment, the algorithmic elements within a node are segmented to further optimize performance. The segmentation can be either spatial, that is, ensuring that elements are close to each other, or temporal, that is, the elements come into existence at different points in time.


The data and computational analysis of the algorithmic mapping step 204 is provided through the use of a profiler. The operation of the profiler is described in more detail herein below in conjunction with the accompanying figure. FIG. 3 is a flow chart which illustrates the operation of the profiler in accordance with the present invention.


First, code is provided to simulate the device, via step 302. From the design code, hot spots are identified, via step 304. Hot spots are those operations which utilize high power and/or require a high amount of movement of data (data movement). The identification of hot spots, and in particular the identification of data movement, is important in optimizing the performance of the implemented hardware device. A simple example of the operation of the profiler is described below.


A code that is to be profiled is shown below:


line 1:  for (i = 0 to 1023) {                    // do this loop 1024 times
line 2:      x[i] = get data from producer A      // fill up an array of 1024
line 3:  }
line 4:
line 5:  for ( ;; ) {                             // do this loop forever
line 6:      sum = 0                              // initialize variable sum
line 7:      temp = get data from producer B      // get a new value
line 8:      for (j = 0 to 1023) {                // do this loop 1024 times
line 9:          sum = sum + x[j] * temp          // perform multiply accumulate
line 10:     }
line 11:     send sum to consumer C               // send sum
line 12: }


This illustrates three streams of data: producer A on line 2, producer B on line 7, and a consumer of data on line 11. The producers or consumers may be variables, pointers, arrays, or physical devices such as Analog to Digital Converters (ADCs) or Digital to Analog Converters (DACs). Traditional profilers would identify line 8 as a computational hot spot, an area of the code which consumes a large amount of computation. The loop at line 8 performs a multiply followed by an accumulation 1024 times, which, on some hardware architectures, may take many clock cycles. What this invention identifies, and what existing profilers do not, is not only the computational hot spots, but also the memory hot spots and the data movement hot spots. Line 7 and line 11 are identified as data movement hot spots, since data is input from producer B on line 7 and the sum is sent to consumer C on line 11. Also identified by the profiler, as a secondary data movement hot spot, is line 2, where 1024 values from producer A are moved into the array x. Finally, line 9 is identified as a data movement hot spot, since an element of the array x is multiplied by the temp value, added to the variable sum, and the result is placed back into the variable sum. The profiler will also identify the array x on line 9 as a memory hot spot, followed, secondarily, by the array x on line 2.
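

As a concrete illustration of this classification, the following minimal sketch tallies, for each of the lines discussed above, the arithmetic operations, stream accesses, and array bytes touched in one pass of the outer loop, and flags the three kinds of hot spot. It is only a sketch, not the profiler of the present invention: the struct layout, the thresholds, the 16-bit sample size, and the attribution of the compute work to line 9 (the body of the loop at line 8) are all assumptions.

#include <stdio.h>

typedef struct {
    int  line;           /* source line in the profiled fragment above */
    long ops;            /* arithmetic operations per outer iteration */
    int  touches_stream; /* 1 if the line reads a producer or writes a consumer */
    long array_bytes;    /* bytes of array storage referenced by the line */
} line_stats_t;

int main(void) {
    /* per-outer-iteration tallies for the fragment above, assuming 16-bit samples */
    line_stats_t stats[] = {
        {  2,    0, 1, 2048 },  /* fill array x from producer A       */
        {  7,    0, 1,    0 },  /* read one new value from producer B */
        {  9, 1024, 0, 2048 },  /* multiply-accumulate sweep over x   */
        { 11,    0, 1,    0 },  /* send sum to consumer C             */
    };
    const long ops_threshold = 512;      /* assumed cut-off for a computational hot spot */
    const long memory_threshold = 1024;  /* assumed cut-off for a memory hot spot */

    for (size_t i = 0; i < sizeof stats / sizeof stats[0]; i++) {
        if (stats[i].ops > ops_threshold)
            printf("line %d: computational hot spot\n", stats[i].line);
        if (stats[i].touches_stream || stats[i].array_bytes > 0)
            printf("line %d: data movement hot spot\n", stats[i].line);
        if (stats[i].array_bytes >= memory_threshold)
            printf("line %d: memory hot spot\n", stats[i].line);
    }
    return 0;
}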


With this information from the profiler, the ACE can instantiate the following hardware circuitry to accelerate the performance, as well as lower the power dissipation, of this algorithmic fragment (algorithmic element) by putting the building block elements together. Data movements are accelerated by constructing, from the low-level ACE building blocks, DMA (Direct Memory Access) hardware to perform the data movement on lines 2, 7, 9, and 11. A specific Multiply Accumulate hardware accelerator to perform the computation on line 9 is constructed from the lower-level ACE building blocks. Finally, the information from the profiler on the memory hot spots on lines 2 and 9 allows the ACE either to build a memory array of exactly 1024 elements from the low-level ACE building blocks or to ensure that the smallest possible memory which can fit 1024 elements is used. Optimal sizing of memory is mandatory to ensure low power dissipation. In addition, the profiling information on the memory hot spot on line 9 is used to ensure that the ACE keeps the circuitry for the multiply accumulate physically local to the array x, minimizing the physical distance, which is directly proportional to the effective capacitance: the greater the distance between where data is kept and where data is processed, the greater the capacitance, which is one of the prime factors that dictates power consumption.
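

A rough sketch of how the profiler output could be turned into a set of building block requests is shown below. The mapping (one DMA engine per data movement hot spot, one multiply-accumulate unit for line 9, and one memory sized to just fit the 1024-element array and placed next to that unit) follows the paragraph above, but the function name and the power-of-two sizing rule are assumptions rather than the ACE's actual configuration interface.

#include <stdio.h>

/* smallest memory, in elements, that is instantiated to hold n values; assumed
   here to round up to a power of two (the text only requires the smallest
   memory that can fit 1024 elements) */
static unsigned memory_size_for(unsigned n) {
    unsigned size = 1;
    while (size < n)
        size <<= 1;
    return size;
}

int main(void) {
    /* the data movement hot spots on lines 2, 7, 9 and 11 each request a DMA engine */
    const int dma_lines[] = { 2, 7, 9, 11 };
    for (size_t i = 0; i < sizeof dma_lines / sizeof dma_lines[0]; i++)
        printf("line %d -> DMA building block\n", dma_lines[i]);

    /* the computational hot spot on line 9 requests a multiply-accumulate unit */
    printf("line 9 -> multiply-accumulate building block\n");

    /* the memory hot spots on lines 2 and 9 share one array, sized to just fit
       1024 elements and placed adjacent to the multiply-accumulate circuitry */
    printf("array x -> memory building block, %u elements, adjacent to the MAC\n",
           memory_size_for(1024));
    return 0;
}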


The resources needed for implementing the algorithmic elements specify the types of composite blocks needed for a given problem, the number of each of the types that are needed, and the number of composite blocks per minimatrix. The composite blocks and their types are preferably stored in a database. By way of example, one type of composite block may be labeled linear composite blocks and include multipliers, adders, double adders, multiply double accumulators, radix 2, DCT, FIR, IIR, FFT, square root, and divides. A second type may include Taylor series approximation, CORDIC, sines, cosines, and polynomial evaluations. A third type may be labeled FSM (finite state machine) blocks, while a fourth type may be termed FPGA blocks. Bit processing blocks may form a fifth type, and memory blocks may form a sixth type of composite block.
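

One way to picture the catalogue described above is as a small typed record per composite block, a minimal sketch of which follows. The enumeration mirrors the six block types named in the text; the record fields (how many of each type the problem needs and how many fit per minimatrix) and the example values are assumptions about how such a database entry might look.

#include <stdio.h>

typedef enum {
    BLOCK_LINEAR,        /* multipliers, adders, MACs, DCT, FIR, IIR, FFT, square root, divide */
    BLOCK_APPROXIMATION, /* Taylor series, CORDIC, sines, cosines, polynomial evaluations */
    BLOCK_FSM,           /* finite state machine blocks */
    BLOCK_FPGA,          /* fine-grained programmable logic blocks */
    BLOCK_BIT,           /* bit processing blocks */
    BLOCK_MEMORY         /* memory blocks */
} composite_block_type_t;

typedef struct {
    composite_block_type_t type;
    int count_needed;    /* how many of this type the problem requires */
    int per_minimatrix;  /* how many of this type fit in one minimatrix */
} composite_block_entry_t;

int main(void) {
    /* an illustrative entry for the multiply-accumulate block of the QCELP fragment */
    composite_block_entry_t mac_entry = { BLOCK_LINEAR, 1, 4 };
    printf("type=%d count=%d per_minimatrix=%d\n",
           mac_entry.type, mac_entry.count_needed, mac_entry.per_minimatrix);
    return 0;
}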



FIG. 4 is a flow chart which illustrates optimizing the mixture of composite blocks. First, a mixture of composite blocks is chosen, via step 402. Given a certain mixture of composite blocks, composite block types, and interconnect density, a simulator/resource estimator/scheduler is invoked to provide performance metrics, via step 404. In essence, the performance metrics determine the efficiency of the architecture to meet the desired goal. Thus, the operations by the designated hardware resources are simulated to identify the metrics of the combination of composite blocks. The metrics produced by the simulation are then reviewed to determine whether they meet the chosen performance metrics, via step 406. When the chosen performance metrics are not met, the combination of resources provided by the composite blocks is adjusted until the resulting metrics are deemed good enough, via step 408. By way of example, computation power efficiency (CPE) refers to the ratio of the number of gates actively working in a clock cycle to the total number of gates in the device. A particular percentage for CPE can be chosen as a performance metric that needs to be met by the combination of composite blocks.
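

The loop of FIG. 4 can be pictured in a few lines of code: propose a mixture, estimate its computation power efficiency, and adjust until the target is met. The sketch below illustrates only the control flow of steps 402 through 408; the simulate_cpe() gate model, the per-block gate costs, and the adjustment rule (removing idle linear blocks) are placeholders for the simulator/resource estimator/scheduler described above, not a description of it.

#include <stdio.h>

typedef struct {
    int linear_blocks;   /* linear composite blocks in the proposed mixture */
    int memory_blocks;   /* memory composite blocks in the proposed mixture */
} mixture_t;

/* crude stand-in for the simulator/resource estimator: assumes a fixed number
   of gates actively working each cycle and charges a per-block gate cost, so
   fewer idle blocks means a higher CPE (active gates / total gates) */
static double simulate_cpe(mixture_t m) {
    const double active_gates = 60000.0;  /* assumed workload */
    double total_gates = 20000.0 * m.linear_blocks + 10000.0 * m.memory_blocks;
    return total_gates > active_gates ? active_gates / total_gates : 1.0;
}

int main(void) {
    mixture_t m = { 8, 8 };          /* step 402: choose an initial mixture */
    const double target_cpe = 0.60;  /* chosen performance metric (step 406) */

    double cpe = simulate_cpe(m);    /* step 404: simulate and estimate */
    while (cpe < target_cpe && m.linear_blocks > 1) {
        m.linear_blocks--;           /* step 408: adjust the combination of resources */
        cpe = simulate_cpe(m);
    }
    printf("final mixture: %d linear, %d memory blocks, CPE = %.2f\n",
           m.linear_blocks, m.memory_blocks, cpe);
    return 0;
}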


Once the chosen performance metrics are met, the information about which composite blocks were combined to achieve the particular design code is stored in a database, via step 410. In this manner, subsequent utilization of that design code to optimize an ACE is realized by accessing the saved data. For purposes of this discussion, these combinations are referred to as dataflow graphs. FIGS. 5A-5F illustrate dataflow graphs for QCELP operations.
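

The reuse step can be pictured as a keyed lookup, as in the sketch below: once a combination of composite blocks meets the metrics, it is stored under its design code and retrieved instead of re-running the loop of FIG. 4. The in-memory table, the key strings, and the stored fields are hypothetical stand-ins for the database and dataflow graphs mentioned above.

#include <stdio.h>
#include <string.h>

typedef struct {
    const char *design_code;  /* hypothetical key, e.g. "qcelp_pitch_search" */
    int linear_blocks;        /* block combination that met the metrics */
    int memory_blocks;
    double cpe;               /* metric achieved when this graph was accepted */
} dataflow_graph_t;

static const dataflow_graph_t saved_graphs[] = {
    { "qcelp_pitch_search",    1, 8, 0.60 },
    { "qcelp_codebook_search", 2, 6, 0.65 },
};

/* return the saved graph for a design code, or NULL to fall back to the FIG. 4 loop */
static const dataflow_graph_t *lookup(const char *design_code) {
    for (size_t i = 0; i < sizeof saved_graphs / sizeof saved_graphs[0]; i++)
        if (strcmp(saved_graphs[i].design_code, design_code) == 0)
            return &saved_graphs[i];
    return NULL;
}

int main(void) {
    const dataflow_graph_t *g = lookup("qcelp_pitch_search");
    if (g)
        printf("reusing saved graph: %d linear, %d memory blocks (CPE %.2f)\n",
               g->linear_blocks, g->memory_blocks, g->cpe);
    return 0;
}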


To implement the flow chart of FIG. 4, an integrated environment is provided to allow a user to make the appropriate tradeoffs between power, performance, and data movement. FIG. 6 illustrates an integrated environment 600 in accordance with the present invention. A legacy code of a typical design on one chart 602 is provided alongside the corresponding ACE architecture on the other chart 604. Power, performance, and data movement readings are provided at the bottom of each of the charts, illustrated at 606 and 608, respectively. In a preferred embodiment, it would be possible to drag and drop code from the legacy chart 602 onto one of the mini-matrices of the ACE chart 604. In a preferred embodiment, there would be immediate feedback; that is, as a piece of code was dropped on the ACE chart 604, the power, performance, and data movement readings would change to reflect the change. Accordingly, an ACE which is optimized for a particular performance can be provided through this process.


As mentioned before, the ACE can be segmented spatially and temporally to ensure that a particular task is performed in the optimum manner. By adapting the architecture over and over, a slice of ACE material builds and dismantles the equivalent of hundreds or thousands of ASIC chips, each optimized to a specific task. Since each of these ACE “architectures” is optimized so explicitly, conventional silicon cannot attempt its recreation: conventional ASIC chips would be far too large, and microprocessors/DSPs far too customized. Further, the ACE allows software algorithms to build and then embed themselves into the most efficient hardware possible for their application. This constant conversion of “software” into “hardware” allows algorithms to operate faster and more efficiently than with conventional chip technology. ACE technology also extends conventional DSP functionality by adding a greater degree of freedom to such applications as wireless designs that so far have been attempted by changing software.


Adapting the ACE chip architecture as necessary brings many new system features within reach of a single ACE-based platform. For example, with an ACE approach, a wireless handset can be adapted to become a handwriting or voice recognition system or to perform on-the-fly cryptography. The performance of these and many other functions at hardware speeds may be readily recognized as a user benefit, while greatly lowering power consumption within battery-driven products.


In a preferred embodiment, the hardware resources of an ACE are optimized to provide the necessary resources for those parts of the design that most need those resources to achieve efficient and effective performance. By way of example, the operations of a vocoder, such as the QCELP, provide a design portion of a cellular communication device that benefits from the optimizing of an ACE. As a vector quantizer-based speech codec, a QCELP coding speech compression engine has eight inner loops/algorithms that consume most of the power. These eight algorithms include code book search, pitch search, line spectral pairs (LSP) computation, recursive convolution, and four different filters. The QCELP engine thus provides an analyzer/compressor and synthesizer/decompressor with variable compression ranging from 13 to 4 kilobits/second (kbit/s).


With the analyzer operating on a typical DSP requiring about 26 MHz of computational power, while the synthesizer needs only about half that performance, 90 percent of the power and performance is dissipated by 10 percent of the code. For purposes of this disclosure, a small portion of code that requires a large portion of the power and performance dissipated is referred to as a hot spot in the code. The optimization of an ACE in accordance with the present invention preferably occurs such that a small piece of silicon is time-sliced to make it appear as an ASIC solution in handling the hot spots of coding. Thus, for the example QCELP vocoder, when data comes into the QCELP speech codec every 20 milliseconds, each inner loop is applied 50 times a second. By optimizing the ACE, the hardware required to run each inner loop algorithm 400 times a second is brought into existence.
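

One way to read the 400-per-second figure is as the aggregate rate across the eight inner loops: a 20 millisecond frame gives 50 frames per second, and eight inner loops per frame gives 400 inner-loop executions per second. The sketch below only performs that arithmetic; the aggregate interpretation is an assumption about the text, not a statement of it.

#include <stdio.h>

int main(void) {
    const double frame_period_ms = 20.0;  /* QCELP frame period (from the text) */
    const int inner_loops = 8;            /* code book search, pitch search, LSP, filters, ... */

    double frames_per_second = 1000.0 / frame_period_ms;            /* 50 frames/s */
    double loop_runs_per_second = frames_per_second * inner_loops;  /* 400 runs/s in aggregate */

    printf("%.0f frames/s x %d inner loops = %.0f inner-loop runs/s\n",
           frames_per_second, inner_loops, loop_runs_per_second);
    return 0;
}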


With the ability to optimize operations of an ACE in accordance with the present invention, an algorithm is allowed to run on the most efficient hardware for the minimum amount of time required. Further, more adaptability is achieved for a wireless system to perform the task at hand during run time. Thus, algorithms are no longer required to be altered to fit predetermined hardware existing on a processor, and the optimum hardware required by an algorithm comes into existence for the minimum time that the algorithm needs to run.


Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments and those variations would be within the spirit and scope of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.

Claims
  • 1. A system for creating a configuration for an adaptive computing engine (ACE) having a plurality of heterogeneous nodes having hardware elements, the system comprising: an algorithmic element stored in a database; composite ACE building block types; means for creating a configuration for at least some of the heterogeneous nodes of the ACE to provide appropriate hardware circuit functions that implement the algorithmic element, by selecting an initial set of the composite ACE building block types required to implement the algorithmic element, and configuring the hardware elements on at least one of the heterogeneous nodes according to the selected initial set of composite ACE building block types; and means for optimizing the configuration for the heterogeneous nodes by selecting a different set of the composite ACE building block types required to implement the algorithmic element that meets predetermined performance standards for efficiency of the ACE when performance of the ACE is simulated.
  • 2. The system of claim 1, wherein the composite ACE building block types belong to at least one of a plurality of composite ACE building block types including linear computation block types, finite state machine block types, field programmable gate array block types, processor block types, and memory block types.
  • 3. The system of claim 2, wherein the linear computation block types are selected from the group consisting of: multipliers, adders, double adders, multiply double accumulators, radix 2, discrete cosine transform (DCT), finite impulse response (FIR), infinite impulse response (IIR), Fast Fourier Transform (FFT), square root, and divides.
  • 4. The system of claim 1, wherein the means for creating a design further includes a profiler that comprises: means for providing code to simulate a hardware design that implements the algorithmic elements; and means for identifying one or more hot spots in the code, wherein the identified hot spots are those areas of code requiring high power and/or high data movement; and wherein the means for creating a design selects the initial set of the composite ACE building block types based on the identified hot spots.
  • 5. The system of claim 4, wherein each hot spot comprises a computational hot spot or a data movement hot spot.
  • 6. The system of claim 5, wherein the means for creating a design uses each data movement hot spot to restrict high data movements to a minimum physical distance in the ACE.
  • 7. The system of claim 1, wherein the hardware elements of the heterogeneous nodes are coupled with each other by a configurable and reconfigurable interconnection network.
  • 8. The system of claim 7, wherein the heterogeneous nodes are configurable and reconfigurable to provide appropriate hardware circuit functions for implementing the algorithmic elements by configuring and reconfiguring the reconfigurable interconnection network coupling the hardware elements.
  • 9. The system of claim 1, wherein the composite ACE building block types are stored in a database.
  • 10. The system of claim 1, wherein the different set of composite ACE building block types that meets the predetermined performance standards for efficiency of the ACE when performance of the ACE is simulated is stored in a database.
  • 11. The system of claim 1, wherein the ACE is reconfigured to perform a different algorithmic element via selecting another set of composite ACE building block types and configuring the hardware elements on at least one of the heterogeneous nodes according to the another set of composite ACE building block types.
  • 12. A method for creating a configuration for an adaptive computing engine (ACE) having a plurality of heterogeneous nodes, each heterogeneous node including a plurality of hardware elements, the method comprising: providing an algorithmic element; providing composite ACE building block types; creating a configuration for at least some of the heterogeneous nodes of the ACE to provide appropriate hardware circuit functions that implement the algorithmic elements, comprising: selecting an initial set of the composite ACE building block types required to implement the algorithmic element; configuring the hardware elements on at least one of the heterogeneous nodes according to the selected initial set of composite ACE building block types; and optimizing the design for the heterogeneous nodes, comprising: simulating performance of the ACE; and selecting a different set of the composite building block types required to perform the algorithmic element that meets predetermined performance standards for efficiency of the ACE when performance of the ACE is simulated by a computer.
  • 13. The method of claim 12, wherein the composite building block types belong to at least one of a plurality of composite ACE building block types including linear computation block types, finite state machine block types, field programmable gate array block types, processor block types, and memory block types.
  • 14. The method of claim 13, wherein the linear computation block types are selected from the group consisting of: multipliers, adders, double adders, multiply double accumulators, radix 2, discrete cosine transform (DCT), finite impulse response (FIR), infinite impulse response (IIR), Fast Fourier Transform (FFT), square root, and divides.
  • 15. The method of claim 12, wherein the creating a design further includes profiling using a profiler, wherein the profiling comprises: providing code to simulate a hardware design that implements the algorithmic elements; and identifying one or more hot spots in the code, wherein the identified hot spots are those areas of code requiring high power and/or high data movement; and wherein the creating a design further comprises selecting the initial set of the composite ACE building block types based on the identified hot spots.
  • 16. The method of claim 15, wherein each hot spot comprises a computational hot spot or a data movement hot spot.
  • 17. The method of claim 16, wherein the creating a design uses each data movement hot spot to restrict high data movements to a minimum physical distance in the ACE.
  • 18. The method of claim 12, wherein the hardware elements of the heterogeneous nodes of the ACE are coupled with each other by a configurable and reconfigurable interconnection network.
  • 19. The method of claim 18, wherein the heterogeneous nodes of the ACE are configurable and reconfigurable to provide appropriate hardware circuit functions for implementing the algorithmic elements by configuring and reconfiguring the interconnection network and the hardware elements.
  • 20. The method of claim 12, wherein the composite ACE building block types are stored in a database.
  • 21. The method of claim 12, further comprising storing in a database the different set of composite ACE building block types that meets the predetermined performance standards for efficiency of the ACE when performance of the ACE is simulated.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 10/437,800, filed May 13, 2003, which claims priority to U.S. Provisional application Ser. No. 60/378,088, filed May 13, 2002. The disclosures of each of the aforementioned applications are hereby incorporated by reference in their entirety as if set forth in full herein for all purposes.

US Referenced Citations (539)
Number Name Date Kind
3409175 Byrne Nov 1968 A
3666143 Weston May 1972 A
3938639 Birrell Feb 1976 A
3949903 Benasutti et al. Apr 1976 A
3960298 Birrell Jun 1976 A
3967062 Dobias Jun 1976 A
3991911 Shannon et al. Nov 1976 A
3995441 McMillin Dec 1976 A
4076145 Zygiel Feb 1978 A
4143793 McMillin et al. Mar 1979 A
4172669 Edelbach Oct 1979 A
4174872 Fessler Nov 1979 A
4181242 Zygiel et al. Jan 1980 A
RE30301 Zygiel Jun 1980 E
4218014 Tracy Aug 1980 A
4222972 Caldwell Sep 1980 A
4237536 Enelow et al. Dec 1980 A
4252253 Shannon Feb 1981 A
4302775 Widergren et al. Nov 1981 A
4333587 Fessler et al. Jun 1982 A
4354613 Desai et al. Oct 1982 A
4377246 McMillin et al. Mar 1983 A
4380046 Frosch et al. Apr 1983 A
4393468 New Jul 1983 A
4413752 McMillin et al. Nov 1983 A
4458584 Annese et al. Jul 1984 A
4466342 Basile et al. Aug 1984 A
4475448 Shoaf et al. Oct 1984 A
4509690 Austin et al. Apr 1985 A
4520950 Jeans Jun 1985 A
4549675 Austin Oct 1985 A
4553573 McGarrah Nov 1985 A
4560089 McMillin et al. Dec 1985 A
4577782 Fessler Mar 1986 A
4578799 Scholl et al. Mar 1986 A
RE32179 Sedam et al. Jun 1986 E
4633386 Terepin et al. Dec 1986 A
4649512 Nukiyama Mar 1987 A
4658988 Hassell Apr 1987 A
4694416 Wheeler et al. Sep 1987 A
4711374 Gaunt et al. Dec 1987 A
4713755 Worley, Jr. et al. Dec 1987 A
4719056 Scott Jan 1988 A
4726494 Scott Feb 1988 A
4747516 Baker May 1988 A
4748585 Chiarulli et al. May 1988 A
4758985 Carter Jul 1988 A
4760525 Webb Jul 1988 A
4760544 Lamb Jul 1988 A
4765513 McMillin et al. Aug 1988 A
4766548 Cedrone et al. Aug 1988 A
4781309 Vogel Nov 1988 A
4800492 Johnson et al. Jan 1989 A
4811214 Nosenchuck et al. Mar 1989 A
4824075 Holzboog Apr 1989 A
4827426 Patton et al. May 1989 A
4850269 Hancock et al. Jul 1989 A
4856684 Gerstung Aug 1989 A
4870302 Freeman Sep 1989 A
4901887 Burton Feb 1990 A
4905231 Leung et al. Feb 1990 A
4921315 Metcalfe et al. May 1990 A
4930666 Rudick Jun 1990 A
4932564 Austin et al. Jun 1990 A
4936488 Austin Jun 1990 A
4937019 Scott Jun 1990 A
4960261 Scott et al. Oct 1990 A
4961533 Teller et al. Oct 1990 A
4967340 Dawes Oct 1990 A
4974643 Bennett et al. Dec 1990 A
4982876 Scott Jan 1991 A
4993604 Gaunt et al. Feb 1991 A
5007560 Sassak Apr 1991 A
5021947 Campbell et al. Jun 1991 A
5040106 Maag Aug 1991 A
5044171 Farkas Sep 1991 A
5090015 Dabbish et al. Feb 1992 A
5099418 Pian et al. Mar 1992 A
5129549 Austin Jul 1992 A
5139708 Scott Aug 1992 A
5144166 Camarota et al. Sep 1992 A
5156301 Hassell et al. Oct 1992 A
5156871 Goulet et al. Oct 1992 A
5165023 Gifford Nov 1992 A
5165575 Scott Nov 1992 A
5190083 Gupta et al. Mar 1993 A
5190189 Zimmer et al. Mar 1993 A
5193151 Jain Mar 1993 A
5193718 Hassell et al. Mar 1993 A
5202993 Tarsy et al. Apr 1993 A
5203474 Haynes Apr 1993 A
5218240 Camarota et al. Jun 1993 A
5240144 Feldman Aug 1993 A
5245227 Furtek et al. Sep 1993 A
5261099 Bigo et al. Nov 1993 A
5263509 Cherry et al. Nov 1993 A
5269442 Vogel Dec 1993 A
5280711 Motta et al. Jan 1994 A
5297400 Benton et al. Mar 1994 A
5301100 Wagner Apr 1994 A
5303846 Shannon Apr 1994 A
5335276 Thompson et al. Aug 1994 A
5336950 Popli et al. Aug 1994 A
5339428 Burmeister et al. Aug 1994 A
5343716 Swanson et al. Sep 1994 A
5361362 Benkeser et al. Nov 1994 A
5367651 Smith et al. Nov 1994 A
5367687 Tarsy et al. Nov 1994 A
5368198 Goulet Nov 1994 A
5379343 Grube et al. Jan 1995 A
5381546 Servi et al. Jan 1995 A
5381550 Jourdenais et al. Jan 1995 A
5388062 Knutson Feb 1995 A
5388212 Grube et al. Feb 1995 A
5392960 Kendt et al. Feb 1995 A
5428754 Baldwin Jun 1995 A
5437395 Bull et al. Aug 1995 A
5450557 Kopp et al. Sep 1995 A
5454406 Rejret et al. Oct 1995 A
5465368 Davidson et al. Nov 1995 A
5475856 Kogge Dec 1995 A
5479055 Eccles Dec 1995 A
5490165 Blakeney, II et al. Feb 1996 A
5491823 Ruttenberg Feb 1996 A
5504891 Motoyama et al. Apr 1996 A
5507009 Grube et al. Apr 1996 A
5515519 Yoshioka et al. May 1996 A
5517600 Shimokawa May 1996 A
5519694 Brewer et al. May 1996 A
5522070 Sumimoto May 1996 A
5530964 Alpert et al. Jun 1996 A
5534796 Edwards Jul 1996 A
5542265 Rutland Aug 1996 A
5553755 Bonewald et al. Sep 1996 A
5555417 Odnert et al. Sep 1996 A
5560028 Sachs et al. Sep 1996 A
5560038 Haddock Sep 1996 A
5570587 Kim Nov 1996 A
5572572 Kawan et al. Nov 1996 A
5590353 Sakakibara et al. Dec 1996 A
5594657 Cantone et al. Jan 1997 A
5600810 Ohkami Feb 1997 A
5600844 Shaw et al. Feb 1997 A
5602833 Zehavi Feb 1997 A
5603043 Taylor et al. Feb 1997 A
5607083 Vogel et al. Mar 1997 A
5608643 Wichter et al. Mar 1997 A
5611867 Cooper et al. Mar 1997 A
5623545 Childs et al. Apr 1997 A
5625669 McGregor et al. Apr 1997 A
5626407 Westcott May 1997 A
5630206 Urban et al. May 1997 A
5635940 Hickman et al. Jun 1997 A
5646544 Iadanza Jul 1997 A
5646545 Trimberger et al. Jul 1997 A
5647512 Assis Mascarenhas de Oliveira et al. Jul 1997 A
5667110 McCann et al. Sep 1997 A
5684793 Kiema et al. Nov 1997 A
5684980 Casselman Nov 1997 A
5687236 Moskowitz et al. Nov 1997 A
5694613 Suzuki Dec 1997 A
5694794 Jerg et al. Dec 1997 A
5699328 Ishizaki et al. Dec 1997 A
5701398 Glier et al. Dec 1997 A
5701482 Harrison et al. Dec 1997 A
5704053 Santhanam Dec 1997 A
5706191 Bassett et al. Jan 1998 A
5706976 Purkey Jan 1998 A
5712996 Schepers Jan 1998 A
5720002 Wang Feb 1998 A
5721693 Song Feb 1998 A
5721854 Ebcioglu et al. Feb 1998 A
5729754 Estes Mar 1998 A
5732563 Bethuy et al. Mar 1998 A
5734808 Takeda Mar 1998 A
5737631 Trimberger Apr 1998 A
5742180 DeHon et al. Apr 1998 A
5742821 Prasanna Apr 1998 A
5745366 Higham et al. Apr 1998 A
RE35780 Hassell et al. May 1998 E
5751295 Becklund et al. May 1998 A
5754227 Fukuoka May 1998 A
5758261 Wiedeman May 1998 A
5768561 Wise Jun 1998 A
5771362 Bartkowiak et al. Jun 1998 A
5778439 Trimberger et al. Jul 1998 A
5784636 Rupp Jul 1998 A
5784699 McMahon et al. Jul 1998 A
5787237 Reilly Jul 1998 A
5790817 Asghar et al. Aug 1998 A
5791517 Avital Aug 1998 A
5791523 Oh Aug 1998 A
5794062 Baxter Aug 1998 A
5794067 Kadowaki Aug 1998 A
5802055 Krein et al. Sep 1998 A
5802278 Isfeld et al. Sep 1998 A
5812851 Levy et al. Sep 1998 A
5818603 Motoyama Oct 1998 A
5819255 Celis et al. Oct 1998 A
5822308 Weigand et al. Oct 1998 A
5822313 Malek et al. Oct 1998 A
5822360 Lee et al. Oct 1998 A
5828858 Athanas et al. Oct 1998 A
5829085 Jerg et al. Nov 1998 A
5835753 Witt Nov 1998 A
5838165 Chatter Nov 1998 A
5838894 Horst Nov 1998 A
5845815 Vogel Dec 1998 A
5854929 Van Pract et al. Dec 1998 A
5860021 Klingman Jan 1999 A
5862961 Motta et al. Jan 1999 A
5870427 Tiedemann, Jr. et al. Feb 1999 A
5873045 Lee et al. Feb 1999 A
5881106 Cartier Mar 1999 A
5884284 Peters et al. Mar 1999 A
5886537 Macias et al. Mar 1999 A
5887174 Simons et al. Mar 1999 A
5889816 Agrawal et al. Mar 1999 A
5889989 Robertazzi et al. Mar 1999 A
5890014 Long Mar 1999 A
5892900 Ginter et al. Apr 1999 A
5892950 Rigori et al. Apr 1999 A
5892961 Trimberger Apr 1999 A
5892962 Cloutier Apr 1999 A
5894473 Dent Apr 1999 A
5901884 Goulet et al. May 1999 A
5903886 Heimlich et al. May 1999 A
5907285 Toms et al. May 1999 A
5907580 Cummings May 1999 A
5910733 Bertolet et al. Jun 1999 A
5912572 Graf, III Jun 1999 A
5913172 McCabe et al. Jun 1999 A
5917852 Butterfield et al. Jun 1999 A
5920801 Thomas et al. Jul 1999 A
5931918 Row et al. Aug 1999 A
5933642 Greenbaum et al. Aug 1999 A
5940438 Poon et al. Aug 1999 A
5949415 Lin et al. Sep 1999 A
5950011 Albrecht et al. Sep 1999 A
5950131 Vilmur Sep 1999 A
5951674 Moreno Sep 1999 A
5953322 Kimball Sep 1999 A
5956518 DeHon et al. Sep 1999 A
5956967 Kim Sep 1999 A
5959811 Richardson Sep 1999 A
5959881 Trimberger et al. Sep 1999 A
5963048 Harrison et al. Oct 1999 A
5966534 Cooke et al. Oct 1999 A
5970254 Cooke et al. Oct 1999 A
5987105 Jenkins et al. Nov 1999 A
5987611 Freund Nov 1999 A
5991302 Berl et al. Nov 1999 A
5991308 Fuhrmann et al. Nov 1999 A
5993739 Lyon Nov 1999 A
5999734 Willis et al. Dec 1999 A
6005943 Cohen et al. Dec 1999 A
6006249 Leong Dec 1999 A
6016395 Mohamed Jan 2000 A
6018783 Chiang Jan 2000 A
6021186 Suzuki et al. Feb 2000 A
6021492 May Feb 2000 A
6023742 Ebeling et al. Feb 2000 A
6023755 Casselman Feb 2000 A
6028610 Deering Feb 2000 A
6036166 Olson Mar 2000 A
6039219 Bach et al. Mar 2000 A
6041322 Meng et al. Mar 2000 A
6041970 Vogel Mar 2000 A
6046603 New Apr 2000 A
6047115 Mohan et al. Apr 2000 A
6052600 Fette et al. Apr 2000 A
6055314 Spies et al. Apr 2000 A
6056194 Kolls May 2000 A
6059840 Click, Jr. May 2000 A
6061580 Altschul et al. May 2000 A
6073132 Gehman Jun 2000 A
6076174 Freund Jun 2000 A
6078736 Guccione Jun 2000 A
6085740 Ivri et al. Jul 2000 A
6088043 Kelleher et al. Jul 2000 A
6091263 New et al. Jul 2000 A
6091765 Pietzold, III et al. Jul 2000 A
6094065 Tavana et al. Jul 2000 A
6094726 Gonion et al. Jul 2000 A
6111893 Volftsun et al. Aug 2000 A
6111935 Hughes-Hartogs Aug 2000 A
6115751 Tam et al. Sep 2000 A
6119178 Martin et al. Sep 2000 A
6120551 Law et al. Sep 2000 A
6122670 Bennett et al. Sep 2000 A
6128307 Brown Oct 2000 A
6134605 Hudson et al. Oct 2000 A
6134629 L'Ecuyer Oct 2000 A
6138693 Matz Oct 2000 A
6141283 Bogin et al. Oct 2000 A
6150838 Wittig et al. Nov 2000 A
6154492 Araki et al. Nov 2000 A
6154494 Sugahara et al. Nov 2000 A
6157997 Oowaki et al. Dec 2000 A
6158031 Mack et al. Dec 2000 A
6173389 Pechanek et al. Jan 2001 B1
6175854 Bretscher Jan 2001 B1
6175892 Sazzad et al. Jan 2001 B1
6181981 Varga et al. Jan 2001 B1
6185418 MacLellan et al. Feb 2001 B1
6192070 Poon et al. Feb 2001 B1
6192255 Lewis et al. Feb 2001 B1
6192388 Cajolet Feb 2001 B1
6195788 Leaver et al. Feb 2001 B1
6198924 Ishii et al. Mar 2001 B1
6199181 Rechef et al. Mar 2001 B1
6202130 Scales, III et al. Mar 2001 B1
6202189 Hinedi et al. Mar 2001 B1
6216252 Dangelo et al. Apr 2001 B1
6219697 Lawande et al. Apr 2001 B1
6219756 Kasamizugami Apr 2001 B1
6219780 Lipasti Apr 2001 B1
6223222 Fijolek et al. Apr 2001 B1
6226387 Tewfik et al. May 2001 B1
6230307 Davis et al. May 2001 B1
6237029 Master et al. May 2001 B1
6246883 Lee Jun 2001 B1
6247125 Noel-Baron et al. Jun 2001 B1
6249251 Chang et al. Jun 2001 B1
6258725 Lee et al. Jul 2001 B1
6263057 Silverman Jul 2001 B1
6266760 DeHon et al. Jul 2001 B1
6272579 Lentz et al. Aug 2001 B1
6272616 Fernando et al. Aug 2001 B1
6281703 Furuta et al. Aug 2001 B1
6282627 Wong et al. Aug 2001 B1
6286134 Click, Jr. et al. Sep 2001 B1
6289375 Knight et al. Sep 2001 B1
6289434 Roy Sep 2001 B1
6289488 Dave et al. Sep 2001 B1
6292822 Hardwick Sep 2001 B1
6292827 Raz Sep 2001 B1
6292830 Taylor et al. Sep 2001 B1
6292938 Sarkar et al. Sep 2001 B1
6301653 Mohamed et al. Oct 2001 B1
6305014 Roediger et al. Oct 2001 B1
6311149 Ryan et al. Oct 2001 B1
6321985 Kolls Nov 2001 B1
6326806 Fallside et al. Dec 2001 B1
6346824 New Feb 2002 B1
6347346 Taylor Feb 2002 B1
6349394 Brock et al. Feb 2002 B1
6353841 Marshall et al. Mar 2002 B1
6356994 Barry et al. Mar 2002 B1
6359248 Mardi Mar 2002 B1
6360256 Lim Mar 2002 B1
6360259 Bradley Mar 2002 B1
6360263 Kurtzberg et al. Mar 2002 B1
6363411 Dugan et al. Mar 2002 B1
6366999 Drabenstott et al. Apr 2002 B1
6377983 Cohen et al. Apr 2002 B1
6378072 Collins et al. Apr 2002 B1
6381293 Lee et al. Apr 2002 B1
6381735 Hunt Apr 2002 B1
6385751 Wolf May 2002 B1
6405214 Meade, II Jun 2002 B1
6408039 Ito Jun 2002 B1
6410941 Taylor et al. Jun 2002 B1
6411612 Halford et al. Jun 2002 B1
6421372 Bierly et al. Jul 2002 B1
6421809 Wuytack et al. Jul 2002 B1
6426649 Fu et al. Jul 2002 B1
6430624 Jamtgaard et al. Aug 2002 B1
6433578 Wasson Aug 2002 B1
6434590 Blelloch et al. Aug 2002 B1
6438737 Morelli et al. Aug 2002 B1
6446258 McKinsey et al. Sep 2002 B1
6449747 Wuytack et al. Sep 2002 B2
6456996 Crawford, Jr. et al. Sep 2002 B1
6459883 Subramanian et al. Oct 2002 B2
6467009 Winegarden et al. Oct 2002 B1
6469540 Nakaya Oct 2002 B2
6473609 Schwartz et al. Oct 2002 B1
6483343 Faith et al. Nov 2002 B1
6507947 Schreiber et al. Jan 2003 B1
6510138 Pannell Jan 2003 B1
6510510 Garde Jan 2003 B1
6526570 Click, Jr. et al. Feb 2003 B1
6538470 Langhammer et al. Mar 2003 B1
6556044 Langhammer et al. Apr 2003 B2
6563891 Eriksson et al. May 2003 B1
6570877 Kloth et al. May 2003 B1
6577678 Scheuermann Jun 2003 B2
6587684 Hsu et al. Jul 2003 B1
6590415 Agrawal et al. Jul 2003 B2
6601086 Howard et al. Jul 2003 B1
6601158 Abbott et al. Jul 2003 B1
6604085 Kolls Aug 2003 B1
6604189 Zemlyak et al. Aug 2003 B1
6606529 Crowder, Jr. et al. Aug 2003 B1
6611906 McAllister et al. Aug 2003 B1
6615333 Hoogerbrugge et al. Sep 2003 B1
6618434 Heidari-Bateni et al. Sep 2003 B2
6618777 Greenfield Sep 2003 B1
6640304 Ginter et al. Oct 2003 B2
6647429 Semal Nov 2003 B1
6653859 Sihlbom et al. Nov 2003 B2
6675265 Barroso et al. Jan 2004 B2
6675284 Warren Jan 2004 B1
6684319 Mohamed et al. Jan 2004 B1
6691148 Zinky et al. Feb 2004 B1
6694380 Wolrich et al. Feb 2004 B1
6711617 Bantz et al. Mar 2004 B1
6718182 Kung Apr 2004 B1
6718541 Ostanevich et al. Apr 2004 B2
6721286 Williams et al. Apr 2004 B1
6721884 De Oliveira Kastrup Pereira et al. Apr 2004 B1
6732354 Ebeling et al. May 2004 B2
6735621 Yoakum et al. May 2004 B1
6738744 Kirovski et al. May 2004 B2
6748360 Pitman et al. Jun 2004 B2
6751723 Kundu et al. Jun 2004 B1
6754470 Hendrickson et al. Jun 2004 B2
6760587 Holtzman et al. Jul 2004 B2
6760833 Dowling Jul 2004 B1
6766165 Sharma et al. Jul 2004 B2
6778212 Deng et al. Aug 2004 B1
6785341 Walton et al. Aug 2004 B2
6807590 Carlson et al. Oct 2004 B1
6819140 Yamanaka et al. Nov 2004 B2
6823448 Roth et al. Nov 2004 B2
6829633 Gelfer et al. Dec 2004 B2
6832250 Coons et al. Dec 2004 B1
6836839 Master et al. Dec 2004 B2
6859434 Segal et al. Feb 2005 B2
6865664 Budrovic et al. Mar 2005 B2
6871236 Fishman et al. Mar 2005 B2
6883074 Lee et al. Apr 2005 B2
6883084 Donohoe Apr 2005 B1
6894996 Lee May 2005 B2
6901440 Bimm et al. May 2005 B1
6907598 Fraser Jun 2005 B2
6912515 Jackson et al. Jun 2005 B2
6941336 Mar Sep 2005 B1
6980515 Schunk et al. Dec 2005 B1
6985517 Matsumoto et al. Jan 2006 B2
6986021 Master et al. Jan 2006 B2
6986142 Ehlig et al. Jan 2006 B1
6988139 Jervis et al. Jan 2006 B1
7032229 Flores et al. Apr 2006 B1
7044741 Leem May 2006 B2
7082456 Mani-Meitav et al. Jul 2006 B2
7124064 Thurston Oct 2006 B1
7139910 Ainsworth et al. Nov 2006 B1
7142731 Toi Nov 2006 B1
7249242 Ramchandran Jul 2007 B2
20010003191 Kovacs et al. Jun 2001 A1
20010023482 Wray Sep 2001 A1
20010029515 Mirsky Oct 2001 A1
20010034795 Moulton et al. Oct 2001 A1
20010039654 Miyamoto Nov 2001 A1
20010048713 Medlock et al. Dec 2001 A1
20010048714 Jha Dec 2001 A1
20010050948 Ramberg et al. Dec 2001 A1
20020010848 Kamano et al. Jan 2002 A1
20020013799 Blaker Jan 2002 A1
20020013937 Ostanevich et al. Jan 2002 A1
20020015435 Rieken Feb 2002 A1
20020015439 Kohli et al. Feb 2002 A1
20020023210 Tuomenoksa et al. Feb 2002 A1
20020024942 Tsuneki et al. Feb 2002 A1
20020024993 Subramanian et al. Feb 2002 A1
20020031166 Subramanian et al. Mar 2002 A1
20020032551 Zakiya Mar 2002 A1
20020035623 Lawande et al. Mar 2002 A1
20020041581 Aramaki Apr 2002 A1
20020042875 Shukla Apr 2002 A1
20020042907 Yamanaka et al. Apr 2002 A1
20020061741 Leung et al. May 2002 A1
20020069282 Reisman Jun 2002 A1
20020072830 Hunt Jun 2002 A1
20020078337 Moreau et al. Jun 2002 A1
20020083305 Renard et al. Jun 2002 A1
20020083423 Ostanevich et al. Jun 2002 A1
20020087829 Snyder et al. Jul 2002 A1
20020089348 Langhammer Jul 2002 A1
20020101909 Chen et al. Aug 2002 A1
20020107905 Roe et al. Aug 2002 A1
20020107962 Richter et al. Aug 2002 A1
20020119803 Bitterlich et al. Aug 2002 A1
20020120672 Butt et al. Aug 2002 A1
20020133688 Lee et al. Sep 2002 A1
20020138716 Master et al. Sep 2002 A1
20020141489 Imaizumi Oct 2002 A1
20020147845 Sanchez-Herrero et al. Oct 2002 A1
20020159503 Ramachandran Oct 2002 A1
20020162026 Neuman et al. Oct 2002 A1
20020168018 Scheuermann Nov 2002 A1
20020181559 Heidari-Bateni et al. Dec 2002 A1
20020184275 Dutta et al. Dec 2002 A1
20020184291 Hogenauer Dec 2002 A1
20020184498 Qi Dec 2002 A1
20020191790 Anand et al. Dec 2002 A1
20030007606 Suder et al. Jan 2003 A1
20030012270 Zhou et al. Jan 2003 A1
20030018446 Makowski et al. Jan 2003 A1
20030018700 Giroti et al. Jan 2003 A1
20030023830 Hogenauer Jan 2003 A1
20030026242 Jokinen et al. Feb 2003 A1
20030030004 Dixon et al. Feb 2003 A1
20030046421 Horvitz et al. Mar 2003 A1
20030061260 Rajkumar Mar 2003 A1
20030061311 Lo Mar 2003 A1
20030063656 Rao et al. Apr 2003 A1
20030074473 Pham et al. Apr 2003 A1
20030076815 Miller et al. Apr 2003 A1
20030099223 Chang et al. May 2003 A1
20030102889 Master et al. Jun 2003 A1
20030105949 Master et al. Jun 2003 A1
20030110485 Lu et al. Jun 2003 A1
20030131162 Secatch et al. Jul 2003 A1
20030142818 Raghunathan et al. Jul 2003 A1
20030154357 Master et al. Aug 2003 A1
20030163723 Kozuch et al. Aug 2003 A1
20030172138 McCormack et al. Sep 2003 A1
20030172139 Srinivasan et al. Sep 2003 A1
20030200538 Ebeling et al. Oct 2003 A1
20030212684 Meyer et al. Nov 2003 A1
20030229864 Watkins Dec 2003 A1
20040006584 Vandeweerd Jan 2004 A1
20040010645 Scheuermann et al. Jan 2004 A1
20040015970 Scheuermann Jan 2004 A1
20040025159 Scheuermann et al. Feb 2004 A1
20040057505 Valio Mar 2004 A1
20040062300 McDonough et al. Apr 2004 A1
20040081248 Parolari Apr 2004 A1
20040093479 Ramchandran May 2004 A1
20040133745 Ramchandran Jul 2004 A1
20040168044 Ramchandran Aug 2004 A1
20050044344 Stevens Feb 2005 A1
20050166038 Wang et al. Jul 2005 A1
20050166073 Lee Jul 2005 A1
20050198199 Dowling Sep 2005 A1
20060031660 Master et al. Feb 2006 A1
Foreign Referenced Citations (52)
Number Date Country
100 18 374 Oct 2001 DE
1 189 358 Jul 1981 EP
0 301 169 Feb 1989 EP
0 166 586 Jan 1991 EP
0 236 633 May 1991 EP
0 478 624 Apr 1992 EP
0 479 102 Apr 1992 EP
0 661 831 Jul 1995 EP
0 668 659 Aug 1995 EP
0 690 588 Jan 1996 EP
0 691 754 Jan 1996 EP
0 768 602 Apr 1997 EP
0 817 003 Jan 1998 EP
0 821 495 Jan 1998 EP
0 866 210 Sep 1998 EP
0 923 247 Jun 1999 EP
0 926 596 Jun 1999 EP
1 056 217 Nov 2000 EP
1 061 437 Dec 2000 EP
1 061 443 Dec 2000 EP
1 126 368 Aug 2001 EP
1 150 506 Oct 2001 EP
2 067 800 Jul 1981 GB
2 237 908 May 1991 GB
62-249456 Oct 1987 JP
63-147258 Jun 1988 JP
4-51546 Feb 1992 JP
7-064789 Mar 1995 JP
7066718 Mar 1995 JP
10233676 Sep 1998 JP
10254696 Sep 1998 JP
11296345 Oct 1999 JP
2000315731 Nov 2000 JP
2001-053703 Feb 2001 JP
WO 8905029 Jun 1989 WO
WO 8911443 Nov 1989 WO
WO 9100238 Jan 1991 WO
WO 9313603 Jul 1993 WO
WO 9511855 May 1995 WO
WO 9633558 Oct 1996 WO
WO 9832071 Jul 1998 WO
WO 9903776 Jan 1999 WO
WO 9921094 Apr 1999 WO
WO 9926860 Jun 1999 WO
WO 9965818 Dec 1999 WO
WO 0019311 Apr 2000 WO
WO 0065855 Nov 2000 WO
WO 0069073 Nov 2000 WO
WO 0111281 Feb 2001 WO
WO 0122235 Mar 2001 WO
WO 0176129 Oct 2001 WO
WO 0212978 Feb 2002 WO
Related Publications (1)
Number Date Country
20080134108 A1 Jun 2008 US
Provisional Applications (1)
Number Date Country
60378088 May 2002 US
Continuations (1)
Number Date Country
Parent 10437800 May 2003 US
Child 12011340 US