Apparatus and Methods for Parallelizing Integrated Circuit Computer-Aided Design Software

Information

  • Patent Application
  • Publication Number
    20100070979
  • Date Filed
    August 21, 2009
  • Date Published
    March 18, 2010
Abstract
A system for parallelizing software in computer-aided design (CAD) software for logic design includes a computer. The computer is configured to identify dependencies among a set of tasks. The computer is also configured to perform the set of tasks in parallel such that a solution of a problem is identical to a solution produced by performing the set of tasks serially.
Description
TECHNICAL FIELD

Generally, the disclosed concepts relate to apparatus and methods for parallelizing software and algorithms. More specifically, the concepts relate to apparatus and methods for parallelizing computer-aided design (CAD) software for integrated circuits (ICs), such as programmable logic devices (PLDs).


BACKGROUND

Traditionally, processors (such as the Pentium series from Intel, the Athlon series from AMD, etc.) have become faster by supporting ever-increasing clock speeds. As processors got faster in this way, a particular piece of software running on these processors automatically sped up proportionally (because the time to execute a single instruction of code is roughly inversely proportional to the processor clock speed).


New generations of processors being released today, however, are not using clocks that are significantly faster than they were two years ago (about 3 GHz). Instead, these processor chips now include more than one processor inside them (e.g., Pentium D processors are “dual core,” meaning they have two mini-processors in one chip). This property enables the computer to simultaneously run several “threads” of execution.


Any software that is serial (meaning that it has one task to perform at a time) does not speed up with the availability of additional processors in these chips. In order to leverage the additional processing power, serial software needs to be parallelized, meaning it has to have multiple tasks that are ready to be executed in order to keep all the processors busy. Unfortunately, this parallelization can almost never be done automatically, as it entails modifying the software code. The modifications themselves are also fairly tricky, as many of the assumptions that underlie serial software break down in parallel software. A need therefore exists for parallelizing software, such as CAD software.


SUMMARY

The disclosed novel concepts relate to apparatus and methods for parallelizing software, such as CAD software and algorithms. One aspect of the inventive concepts relates to running a dependency checker by a processor, such as a dedicated processor, configured as the master in a master-workers or master-slave configuration.





BRIEF DESCRIPTION OF THE DRAWINGS

The appended drawings illustrate only exemplary embodiments of the invention and therefore should not be considered or construed as limiting its scope. Persons of ordinary skill in the art who have the benefit of the description of the invention appreciate that the disclosed inventive concepts lend themselves to other equally effective embodiments. In the drawings, the same numeral designators used in more than one drawing denote the same, similar, or equivalent functionality, components, or blocks.



FIG. 1 shows a technique for parallelization used in exemplary embodiments by using multiple threads.



FIG. 2 illustrates another technique for parallelization used in exemplary embodiments by using multiple processors.



FIG. 3 depicts a general block diagram of a PLD that may be designed or used by using illustrative embodiments of the invention.



FIG. 4 shows a floor-plan of a PLD that one may design or implement by using the inventive concepts.



FIG. 5 illustrates various software modules that PLD CAD software according to illustrative embodiments of the invention uses.



FIG. 6 shows a simplified block diagram of a parallelization technique.



FIG. 7 depicts an example of an initial configuration of a device floorplan.



FIG. 8 shows the device floorplan of FIG. 7 after the acceptance of the moving of a resource.



FIG. 9 illustrates a proposal for moves of resources in a device floorplan.



FIG. 10 shows a parallelization technique according to an exemplary embodiment.



FIG. 11 depicts an example of a serial analysis algorithm.



FIG. 12 shows an example of the parallelization of an analysis algorithm.



FIG. 13 illustrates a block diagram of a system for processing information using the disclosed concepts.



FIG. 14 depicts an example of a technique for increasing the size of a set of independent tasks according to an exemplary embodiment.



FIG. 15 shows a simplified flowchart for a technique for increasing the size of a set of independent tasks according to an exemplary embodiment.



FIG. 16 illustrates an example of a set of tasks that may be used by an exemplary embodiment.



FIG. 17 depicts another example of a set of tasks that may be used by an exemplary embodiment.



FIG. 18 shows a simplified flowchart for a technique for increasing efficiency according to an exemplary embodiment.



FIG. 19 illustrates a simplified flowchart for another technique for increasing efficiency according to an exemplary embodiment.



FIG. 20 depicts a simplified flowchart corresponding to an exemplary embodiment that employs thread timers.



FIG. 21 shows a simplified flowchart corresponding to another exemplary embodiment that employs thread timers.



FIG. 22 illustrates a simplified flowchart corresponding to an exemplary embodiment that performs dependency analysis during task execution.



FIG. 23 depicts a simplified diagram corresponding to an exemplary embodiment for speculative task execution.





DETAILED DESCRIPTION

The inventive concepts contemplate apparatus and associated methods for parallelizing software, such as CAD algorithms or software, or CAD software for FPGAs. The disclosed concepts seek to run software or algorithms in parallel, for example, by using threading or multiple processors, so as to improve the speed of execution.


Generally speaking, the inventive concepts contemplate various ways of running software in a parallel fashion or executing algorithms in parallel. FIGS. 1 and 2 show two examples of techniques that may be used. Persons of ordinary skill in the art who have the benefit of the description of the invention understand that other techniques and examples may be used, as desired.



FIG. 1 shows a technique for parallelization used in exemplary embodiments by using multiple threads. The arrangement shown in FIG. 1 includes a set of tasks 13, a scheduler 10, and a set of threads 16. The set of tasks 13 makes up the various tasks that the CAD software or algorithm seeks to execute or run. Generally, set 13 may include any desired number of tasks, say, N tasks, whereas the set of threads 16 may include any desired or suitable number of threads, say, K threads (note that K and N may or may not be equal).


Scheduler 10 accepts tasks from set 13 and schedules them for execution on one or more computers (i.e., determines the task(s) to be executed), via one or more threads. More specifically, scheduler 10 assigns the tasks in set 13 to the threads in set 16. For example, scheduler 10 may assign task 1 to thread 1, task 2 to thread 2, and so on. The assignment to the threads will then result in execution of the corresponding assigned tasks. Note that scheduler 10 may run in a thread separate from the threads executing the set of tasks, or may run in one or more threads that are executing task(s) in the set of tasks, or other configurations, as persons of ordinary skill in the art who have the benefit of the description of the invention understand.
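
As a rough illustration of the arrangement of FIG. 1, the following Python sketch shows a scheduler handing tasks from a set to a pool of worker threads by way of a shared queue. The sketch is illustrative only; the task payloads, thread count, and function names are assumptions made for the example and are not part of the disclosure.

```python
import threading
import queue

def run_scheduler(tasks, num_threads=4):
    """Assign each task in 'tasks' to one of 'num_threads' worker threads."""
    work = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            task = work.get()
            if task is None:          # sentinel: no more tasks for this worker
                work.task_done()
                return
            outcome = task()          # execute the assigned task
            with lock:
                results.append(outcome)
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()

    # The scheduler accepts tasks from the set and assigns them to the threads.
    for task in tasks:
        work.put(task)
    for _ in threads:
        work.put(None)                # one sentinel per worker

    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    # N hypothetical tasks; K = 4 threads (N and K need not be equal).
    tasks = [lambda i=i: i * i for i in range(10)]
    print(sorted(run_scheduler(tasks)))
```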



FIG. 2 illustrates another technique for parallelization used in exemplary embodiments by using multiple processors. The arrangement in FIG. 2 includes a set of tasks 13, a scheduler 10, and a set of processors or computers or similar appropriate apparatus, labeled as 19. As an example, the set of processors 19 may constitute a parallel computer, a massively parallel computer, etc., as persons of ordinary skill in the art who have the benefit of the description of the invention understand. Similar to FIG. 1, scheduler 10 may run in a thread separate from the threads executing the set of tasks, or may run in one or more threads that are executing task(s) in the set of tasks, or other configurations, as persons of ordinary skill in the art who have the benefit of the description of the invention understand.


The set of tasks 13 represents the various tasks that the CAD software or algorithm seeks to execute or run. Generally, set 13 may include any desired number of tasks, say, N tasks, whereas the set of processors 19 may include any desired or suitable number of processors, say, M processors (note that N and M may or may not be equal).


Scheduler 10 accepts tasks from set 13 and schedules them for execution by one or more computers (i.e., determines the task(s) to be executed), via one or more threads. More specifically, scheduler 10 assigns tasks in set 13 to the processors in set 19. For example, scheduler 10 may assign task 1 to processor 1, task 2 to processor 2, and so on. The assignment of the tasks to the processors will then result in execution of the corresponding assigned tasks. Each task may run or execute in a thread, as desired. A task executed in one thread may be independent or mostly or substantially independent (e.g., relatively few dependencies) of one or more tasks in one or more other threads.


The scheduler in either FIG. 1 or FIG. 2 may consider one or more of the following factors in determining tasks to be executed or in scheduling tasks for execution: (i) processing time for each task in the set of tasks; (ii) any dependency that each task or one or more tasks in the set of tasks might have; and (iii) the number of threads or processors available for executing or performing the set of tasks. As another example, the scheduler may schedule the tasks speculatively (e.g., a speculative task schedule). The speculative task schedule may guide task execution, as desired. Persons of ordinary skill in the art who have the benefit of the description of the invention understand that other configurations of schedulers may also be used, as desired.


The disclosed concepts contemplate communication between the set of threads executing tasks. The communication may include information about the activities of the threads, their status, etc., as persons of ordinary skill in the art who have the benefit of the description of the invention understand. In task execution using multiple threads, the scheduler or task generator might consider a variety of factors, as persons of ordinary skill in the art who have the benefit of the description of the invention understand. For example, in one embodiment, one might consider task dependencies and tasks executed since the last communication from one or more threads in the set of threads.


One may apply the inventive concepts to a variety of CAD software, algorithms, and applications, as desired. One particular area of application constitutes CAD software for designing and using PLDs (e.g., implementing a user's design by using the PLD's resources). The following description provides details of such PLDs and the software parallelization techniques.



FIG. 3 depicts a general block diagram of a PLD that may be designed or used via illustrative embodiments of the invention. One may use the disclosed concepts for parallelizing software in CAD software for designing PLD 103 or using its resources to implement a desired circuit or system.


PLD 103 includes configuration circuitry 130, configuration memory (CRAM) 133, control circuitry 136, programmable logic 106, programmable interconnect 109, and I/O circuitry 112. In addition, PLD 103 may include test/debug circuitry 115, one or more processors 118, one or more communication circuitry 121, one or more memories 124, one or more controllers 127, and initialization circuit 139, as desired.


Note that the figure shows a simplified block diagram of PLD 103. Thus, PLD 103 may include other blocks and circuitry, as persons of ordinary skill in the art understand. Examples of such circuitry include clock generation and distribution circuits, redundancy circuits, and the like. Furthermore, PLD 103 may include analog circuitry, other digital circuitry, and/or mixed-mode circuitry, as desired.


Programmable logic 106 includes blocks of configurable or programmable logic circuitry, such as look-up tables (LUTs), product-term logic, multiplexers (MUXs), logic gates, registers, memory, and the like. Programmable interconnect 109 couples to programmable logic 106 and provides configurable interconnects (coupling mechanisms) between various blocks within programmable logic 106 and other circuitry within or outside PLD 103.


Control circuitry 136 controls various operations within PLD 103. Under the supervision of control circuitry 136, PLD configuration circuitry 130 uses configuration data (which it obtains from an external source, such as a storage device, a host, etc.) to program or configure the functionality of PLD 103. Configuration data are typically used to store information in CRAM 133. The contents of CRAM 133 determine the functionality of various blocks of PLD 103, such as programmable logic 106 and programmable interconnect 109. Initialization circuit 139 may cause the performance of various functions at reset or power-up of PLD 103.


I/O circuitry 112 may constitute a wide variety of I/O devices or circuits, as persons of ordinary skill in the art who have the benefit of the description of the invention understand. I/O circuitry 112 may couple to various parts of PLD 103, for example, programmable logic 106 and programmable interconnect 109. I/O circuitry 112 provides a mechanism and circuitry for various blocks within PLD 103 to communicate with external circuitry or devices.


Test/debug circuitry 115 facilitates the testing and troubleshooting of various blocks and circuits within PLD 103. Test/debug circuitry 115 may include a variety of blocks or circuits known to persons of ordinary skill in the art who have the benefit of the description of the invention. For example, test/debug circuitry 115 may include circuits for performing tests after PLD 103 powers up or resets, as desired. Test/debug circuitry 115 may also include coding and parity circuits, as desired.


PLD 103 may include one or more processors 118. Processor 118 may couple to other blocks and circuits within PLD 103. Processor 118 may receive data and information from circuits within or external to PLD 103 and process the information in a wide variety of ways, as persons skilled in the art with the benefit of the description of the invention appreciate. One or more of processor(s) 118 may constitute a digital signal processor (DSP). DSPs allow performing a wide variety of signal processing tasks, such as compression, decompression, audio processing, video processing, filtering, and the like, as desired.


PLD 103 may also include one or more communication circuits 121. Communication circuit(s) 121 may facilitate data and information exchange between various circuits within PLD 103 and circuits external to PLD 103, as persons of ordinary skill in the art who have the benefit of the description of the invention understand.


PLD 103 may further include one or more memories 124 and one or more controller(s) 127. Memory 124 allows the storage of various data and information (such as user-data, intermediate results, calculation results, etc.) within PLD 103. Memory 124 may have a granular or block form, as desired. Controller 127 allows interfacing to, and controlling the operation and various functions of, circuitry outside the PLD. For example, controller 127 may constitute a memory controller that interfaces to and controls an external synchronous dynamic random access memory (SDRAM), as desired.


As noted, PLD 103 includes a number of blocks of programmable resources. Implementing a design using those resources often entails placement of those blocks (described below) within PLD 103's floorplan. FIG. 4 shows a floor-plan of a PLD that one may design or implement by using the inventive concepts.


PLD 103 includes programmable logic 106 arranged as a two-dimensional array. Programmable interconnect 109, arranged as horizontal interconnect and vertical interconnect, couples the blocks of programmable logic 106 to one another. One may place the blocks in a particular manner so as to implement a user's design, as persons of ordinary skill in the art who have the benefit of the description of the invention understand.


In illustrative embodiments, PLD 103 has a hierarchical architecture. In other words, each block of programmable logic 106 may in turn include smaller or more granular programmable logic blocks or circuits. For example, in one embodiment, programmable logic 106 may constitute blocks of configurable logic named logic array block (LAB), and each LAB may include logic elements (LEs) or other circuitry, as desired.


Persons of ordinary skill in the art who have the benefit of the description of the invention understand, however, that a wide variety of other arrangements, with varying terminology and topology, are possible, and fall within the scope of the inventive concepts. Furthermore, although FIG. 4 shows blocks of programmable logic 106, one may use PLDs with other or additional blocks (e.g., memory, processors, other blocks in FIG. 3, etc.) in their floorplans and take advantage of the inventive concepts, as persons of ordinary skill in the art who have the benefit of the description of the invention understand.


Regardless of the particular arrangement or design, however, one may use the inventive concepts in CAD software (e.g., in one or more areas, steps, or blocks of CAD software, described below in detail) or programs to exploit the PLD's resources and implement a desired circuit or system. Implementing a user's design in a PLD, such as PLD 103, entails a number of steps or processes, as detailed below.



FIG. 5 illustrates various software modules that PLD CAD software according to illustrative embodiments of the invention uses. The modules include design-entry module 203, synthesis module 206, place-and-route module 209, and verification module 212. The following description provides a simplified explanation of the operation of each module.


The CAD techniques may have a variety of applications, as persons of ordinary skill in the art who have the benefit of the description of the invention understand. Examples include analysis of design area, timing performance, power requirements, congestion, and routability, as desired. In exemplary embodiments, the analysis may be performed as at least one task in the set of tasks in FIGS. 1 and 2, as desired. Results from the at least one task or from multiple tasks may constitute a single complete analysis. One may make the analysis independent of other tasks in the set of tasks by performing the analysis on a stale tentative solution, as desired, and as persons of ordinary skill in the art who have the benefit of the description of the invention understand.


Referring to FIG. 5, design-entry module 203 allows the editing of various design description files using graphical or textual descriptions of a circuit or its behavior, such as schematics, hardware description languages (HDL), or waveforms, as desired. The user may generate the design files by using design-entry module 203 or by using a variety of electronic design automation (EDA) or CAD tools (such as industry-standard EDA tools), as desired. The user may enter the design in a graphic format, a waveform-based format, a schematic format, in a text or binary format, or as a combination of those formats, as desired.


Synthesis module 206 accepts the output of design-entry module 203. Based on the user-provided design, synthesis module 206 generates appropriate logic circuitry that realizes the user-provided design. One or more PLDs (not shown explicitly) implement the synthesized overall design or system. Synthesis module 206 may also generate any glue logic that allows integration and proper operation and interfacing of various modules in the user's designs. For example, synthesis module 206 provides appropriate hardware so that an output of one block properly interfaces with an input of another block. Synthesis module 206 may provide appropriate hardware so as to meet the specifications of each of the modules in the overall design or system.


Furthermore, synthesis module 206 may include algorithms and routines for optimizing the synthesized design. Through optimization, synthesis module 206 seeks to more efficiently use the resources of the one or more PLDs that implement the overall design or system. Synthesis module 206 provides its output to place-and-route module 209.


Place-and-route module 209 uses the designer's timing specifications to perform optimal logic mapping and placement. The logic mapping and placement determine the use of routing resources within the PLD(s). In other words, by use of particular programmable interconnects with the PLD(s) for certain parts of the design, place-and-route module 209 helps optimize the performance of the overall design or system. By proper use of PLD routing resources, place-and-route module 209 helps to meet the critical timing paths of the overall design or system.


Place-and-route module 209 optimizes the critical timing paths to help provide timing closure faster in a manner known to persons of ordinary skill in the art with the benefit of the description of the invention. As a result, the overall design or system can achieve faster performance (i.e., operate at a higher clock rate or have higher throughput).


Verification module 212 performs simulation and verification of the design. The simulation and verification seek in part to verify that the design complies with the user's prescribed specifications. The simulation and verification also aim at detecting and correcting any design problems before prototyping the design. Thus, verification module 212 helps the user to reduce the overall cost and time-to-market of the overall design or system.


Verification module 212 may support and perform a variety of verification and simulation options, as desired. The options may include functional verification, test-bench generation, static timing analysis, timing simulation, hardware/software simulation, in-system verification, board-level timing analysis, signal integrity analysis and electro-magnetic compatibility (EMC), formal netlist verification, and the like, as persons of ordinary skill in the art who have the benefit of the description of the invention understand.


Note that one may perform other or additional verification techniques as desired and as persons of ordinary skill in the art who have the benefit of the description of the invention understand. Verification of the design may also be performed at other phases in the flow, as appropriate, and as desired.


A large number (probably the majority) of conventional commercial CAD algorithms are serial in nature. In other words, they carry out the various tasks in a serial, rather than parallel, fashion. This is not surprising, first because processor clock speeds have been speeding up regularly until now, and second because it is generally much more difficult to develop robust parallel software.


With the trends described above, it is now much more important to modify existing algorithms to leverage the new parallel processing power that will be available to users. This is particularly true because CAD algorithms are generally among the slowest types of software in use. Typical run-times of a full weekend are quite common. Unless parallelization techniques are used, the serial algorithms will likely not speed up sufficiently to meet the more complex problems they will be used to solve in the future.


Generally, two approaches are commonly used when parallelizing a serial CAD algorithm. In the first approach, one discards the serial algorithm and uses instead an algorithm that has more inherent parallelism. This option has several disadvantages.


First, it forces the designer to start from scratch, discarding existing code and developing new parallel code. Given that many person-years of effort have been invested into optimizing existing algorithms, discarding them makes it difficult to reach the same level of quality in the new algorithms until many years later. The approach also restricts the choice of algorithms available to the designer—some serial algorithms are better suited to certain problems, and being forced to use a parallel algorithm can hurt the quality of the software tool.


In addition, parallel algorithms are relatively difficult to make deterministic. Deterministic algorithms give the same result when run multiple times with the same input. Parallel programs or algorithms, however, are executing multiple sets of instructions simultaneously and, depending on the access given by each processor to these sets, the results can be different each time the algorithm is run. This property makes it hard for a user to reproduce a result they get with the algorithm, as well as for the vendor to debug any issues the user encounters.


Finally, for users who are still using a single processor to run the algorithm, forcing a change to a parallel algorithm, with the potential loss of quality and the other shortcomings described above, would make those users dissatisfied. In addition, parallel algorithms generally incur overhead that could result in the program becoming significantly slower for these users. The software tool vendor would therefore need to maintain both sets of algorithms for at least a short period of time, leading to higher maintenance costs.


As the second option, one might run the serial algorithm on each available processor with different settings, and take the best result at the end. This conventional approach, although easier to implement than the first one, has several limitations as well.


First, it does not involve speeding up the algorithm—it merely runs more copies of the algorithm to improve the results. Any user who wants the fastest possible run-time for the algorithm is not going to get what they want with this approach. Second, it does not scale well as more processors are made available because the ability to get better results from multiple runs of the same algorithm quickly diminishes as more and more copies are run. Clearly, both of these approaches have important limitations. The inventive concepts, however, provide techniques that overcome those limitations.


More specifically, the inventive method takes advantage of the fact that many serial CAD algorithms spend most of their execution time performing a particular action or set of actions on different portions of the input problem. This action is repeated many times (often millions of times), which results in relatively long run-times for these algorithms. The property that makes these algorithms serial is often the fact that each action is performed with knowledge of the results of each previous action (i.e., dependence on previous actions). This property in turn means that only one action can be performed at any time, which limits the algorithm to serial execution.


Often, however, a given set of contiguous actions affects independent portions of the input problem, thereby removing the need for them all to be performed serially. This property holds especially for input problems that are relatively large. For example, in a problem that includes many actions, actions #10 through #20 may be independent of one another. In other words, performing any one of those actions does not depend on the result(s) of performing the other actions.


In such a situation, the algorithm could perform all of those 11 actions in parallel. In exemplary embodiments, the inventive techniques use local independence to create parallel execution. For example, if action #21 is then dependent on two of the previous actions (say, #13 and #17), the algorithm must finish actions #13 and #17 before it can proceed with #21 (otherwise the results will not be deterministic). Otherwise, the algorithm can perform the actions in parallel. This local independence is what this method uses to create parallelism and, hence, improved performance.


The inventive technique uses a queue or set of actions (ordered or unordered, as desired), where the queue is loaded with actions that are independent of each other. As persons of ordinary skill in the art who have the benefit of the description of the invention understand, rather than using a queue, one may generally use any structure or concept that can accommodate a set of actions, ordered or unordered, as desired. This queue is loaded serially to ensure that the actions are all independent. In one variant of the invention, the queue is loaded in the same order as the serial algorithm would perform actions. This action ensures that the results of the parallel version of the algorithm are similar or identical to those of the serial version.



FIG. 6 shows a simplified block diagram of this technique. A set of tasks 13 are input to scheduler 10. Scheduler 10 provides tasks to queue 250 so as to provide local independence, as described above. The tasks are output from queue 250 and executed in a parallel manner (as long as local independence exists).
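
The following sketch, offered purely as an illustration, shows one way a scheduler might load a queue such as queue 250 serially with mutually independent tasks and then let worker threads drain the queue in parallel. The representation of a task as a set of touched resources, and all function names, are assumptions made for the example.

```python
import concurrent.futures

def load_independent_batch(pending):
    """Serially load a batch of tasks that touch disjoint resources.

    'pending' is an ordered list of (resources, action) pairs; loading stops at
    the first task that conflicts with one already in the batch, which preserves
    the serial algorithm's ordering and hence its results.
    """
    batch, used = [], set()
    while pending:
        resources, action = pending[0]
        if used & resources:           # dependency on a task already loaded
            break
        used |= resources
        batch.append(pending.pop(0))
    return batch

def run_parallel(pending, workers=4):
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        while pending:
            batch = load_independent_batch(pending)
            # Tasks within a batch are independent, so they may run in any order.
            results.extend(pool.map(lambda t: t[1](), batch))
    return results

if __name__ == "__main__":
    tasks = [({"A", "B"}, lambda: "swap A,B"),
             ({"C", "D"}, lambda: "swap C,D"),
             ({"A", "D"}, lambda: "swap A,D")]   # conflicts with the first two
    print(run_parallel(tasks))
```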


In another variant of the invention, actions can be chosen in ways that maximize the number of independent actions that the queue holds. Once this queue is loaded, all available processors can process the actions in any arbitrary or desired order they choose because the independence of the actions in the queue is guaranteed. Once all the actions in the queue are finished, the queue is loaded again and the process repeated.


To illustrate the technique in more detail, a placement example is provided to show how it can be used to parallelize a placement algorithm. A placement algorithm takes as input a netlist representation of a circuit, and a floorplan representation of a device. In the Quartus II software (available from Altera Corporation, the assignee of this application), for example, the netlist represents the blocks in a user's logic circuit (e.g., logic array blocks, or LABs; RAM blocks; multiplier blocks, etc). The floorplan represents the blocks available in a PLD or similar device.


A serial placement algorithm may operate as follows. First, create an initial legal placement as quickly as possible, or relatively quickly, with little or no regard to quality; as a result, every block in the netlist is assigned a location in the floorplan. Second, randomly pick a block in the netlist and try to move it to a random location, swapping any block that is already there with the source block. Third, evaluate whether this change to the placement is good or desirable; if so, commit the change, and otherwise discard it. The evaluation is often done with several metrics, and, generally, the metrics try to keep blocks that are connected or coupled to each other placed near each other. Finally, go back to the second step and repeat until a given number of moves has been done (for example, this number might be 1,000 times the number of blocks in the netlist).
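
A minimal sketch of such a serial, move-based placement loop appears below. The cost metric, netlist representation, and acceptance rule are simplified placeholders chosen for illustration; they are not the actual algorithm of any particular CAD tool.

```python
import random

def serial_place(netlist, width, height, moves_per_block=1000, seed=0):
    """Simplified serial placement: random swaps kept when wirelength does not worsen.

    'netlist' maps each block to the blocks it connects to.
    """
    rng = random.Random(seed)
    blocks = list(netlist)
    # Step 1: quick initial legal placement -- every block gets a location.
    cells = [(x, y) for x in range(width) for y in range(height)]
    rng.shuffle(cells)
    loc = {b: cells[i] for i, b in enumerate(blocks)}
    at = {v: k for k, v in loc.items()}

    def cost(b):
        x, y = loc[b]
        return sum(abs(x - loc[o][0]) + abs(y - loc[o][1]) for o in netlist[b])

    for _ in range(moves_per_block * len(blocks)):
        # Step 2: pick a random block and a random target location.
        b = rng.choice(blocks)
        target = rng.choice(cells)
        other = at.get(target)            # block already at the target, if any
        before = cost(b) + (cost(other) if other else 0)
        old = loc[b]
        loc[b], at[target] = target, b
        if other:
            loc[other], at[old] = old, other
        else:
            del at[old]
        # Step 3: evaluate; commit if the metric does not worsen, else undo.
        after = cost(b) + (cost(other) if other else 0)
        if after > before:                # discard the change
            loc[b], at[old] = old, b
            if other:
                loc[other], at[target] = target, other
            else:
                del at[target]
    return loc

if __name__ == "__main__":
    nl = {"b1": ["b2"], "b2": ["b1", "b3"], "b3": ["b2"]}
    print(serial_place(nl, width=4, height=4, moves_per_block=10))
```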


The placement algorithm above is serial in nature because the decision to commit a change in the third step affects all future iterations (i.e., moves) of the algorithm. For example, assume the floorplan shown in FIG. 7. Assume block #6 is at X=3 and Y=4 in the floorplan, and the first move of the algorithm attempts to swap it with block #20, which is at X=30 and Y=40.


Further, assume that the second move of the algorithm is going to move block #21 (which happens to be connected or coupled to block #20) from X=30, Y=4 to X=1, Y=1. FIG. 8 shows what the locations and connectivity would be if the first move was accepted.


If the first move of the algorithm accepts the move, the second move (which is attempting to move block #21 to (1,1)) is more likely to be accepted since block #21's new location (1,1) will be closer to the block it is connected or coupled to (i.e., block #20, which has a current location of (3,4)). If the first move was not accepted (leaving the situation in FIG. 7), however, moving block #21 to (1,1) will not seem like a good move because its connected or coupled block (i.e., block #20) is at (30,40), and the current location for block #21 (i.e., 30,4) is closer than (1,1) would be.


This example shows the problem that an algorithm like the above serial algorithm would face if it were running in parallel. For example, if moves #1 and #2 are running at the same time, whether move #2 is accepted or not depends on whether move #1 finishes before move #2 is evaluated.


Unless changes are made to the algorithm, running it in parallel could result in blocks chasing the last location that their connected or coupled blocks resided at, potentially reducing the quality of the final placement drastically. It would also make the results non-deterministic, as it is generally impossible to predict how long a given move will take to complete even for different runs of the same circuit.


To apply the inventive technique to solve these issues, one could make a queue of independent moves, as noted above. When the first move from the example above is placed into the queue, the second move would no longer be allowed into the queue (because that move depends on the first one through the connection or coupling between block #21 and block #20). The queue loading could be stopped and the moves processed, or the queue could be loaded with other independent moves before processing the moves, as described above. In either case, the larger the queue is, the greater the speedup will be from having multiple processors. For example, a queue that always has no more than two moves in it would see a benefit from using two processors (but not four or more).
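
The independence test implied by this example might be sketched as follows: a proposed move is treated as dependent if it touches, or is connected to, any block already in the queue. The data structures and names below are hypothetical.

```python
def move_is_independent(move, queued_blocks, netlist):
    """Return True if 'move' neither touches nor connects to any queued block.

    'move' is a tuple of the blocks it displaces; 'netlist' maps a block to the
    blocks it is connected or coupled to.
    """
    for block in move:
        if block in queued_blocks:
            return False                      # move-generation dependency
        if any(n in queued_blocks for n in netlist.get(block, ())):
            return False                      # evaluation dependency
    return True

def load_move_queue(proposed_moves, netlist):
    """Serially load moves until the first dependent one is reached."""
    moves, queued_blocks = [], set()
    for move in proposed_moves:
        if not move_is_independent(move, queued_blocks, netlist):
            break                             # stop loading; process the queue
        moves.append(move)
        queued_blocks.update(move)
    return moves

if __name__ == "__main__":
    netlist = {"#20": ["#21"], "#21": ["#20"]}
    moves = [("#6", "#20"),    # move #1: swap block #6 with block #20
             ("#21",)]         # move #2: depends on #1 via the #20-#21 connection
    print(load_move_queue(moves, netlist))    # only move #1 is loaded
```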


Note that the above technique uses serial loading of the queue. If the time it takes to propose a move is relatively small, the serial loading does not pose a problem. For instance, an algorithm where the loading takes 5% of serial runtime and the evaluation takes 95% runtime could theoretically be sped up by a factor of 1.9 on a two-processor machine. If the serial portion is higher, however, this benefit may drop off dramatically. For example, if merely half the algorithm is parallel, then the speedup on a two-processor system would be limited to a factor of 1.33.
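
The speedup figures quoted above follow from Amdahl's law, speedup = 1 / (s + (1 - s)/p), where s is the serial fraction and p the number of processors. The short check below (a hypothetical helper, not part of the disclosure) reproduces the 1.9x and 1.33x numbers.

```python
def amdahl_speedup(serial_fraction, processors):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

if __name__ == "__main__":
    print(round(amdahl_speedup(0.05, 2), 2))   # 5% serial loading -> ~1.9x
    print(round(amdahl_speedup(0.50, 2), 2))   # 50% serial -> ~1.33x
```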


By using a relatively more sophisticated queue, however, it is possible to alleviate this problem. Returning to the placement example above, we note that there are two sources of dependence between moves: (1) it may be impossible to propose an independent move (move-generation dependency); and (2) it may be impossible to evaluate a move independently (evaluation dependency).


These two instances are treated similarly or identically, but they are quite different. For example, consider two proposed moves for a single block. Obviously, one cannot even propose the second move until the first one has been either committed or rejected, as one does not know where the block will be after the first move.


On the other hand, consider two blocks that one wishes to move closer together. One could easily propose a move for both blocks at the same time. One would not be able to evaluate them independently (because, depending on which block is moved first, the second move might not be good or desirable or advantageous). Note, though, that one would be able to proceed and propose other moves even before the moves for the blocks have both been evaluated. From a parallel viewpoint, doing so could be advantageous, as it enables one to keep generating work for all the processors in far more circumstances than one could when any kind of dependency causes a stall.


The following describes an example of the application of this improvement. Consider the placement in FIG. 9, with several moves being proposed regarding blocks 303-318. Using the original inventive algorithm described above, one would propose the first move, then stop after proposing the second move because they are related to connected or coupled blocks, and hence the decision to accept or reject move #2 will depend on the result of move #1 (in other words, move #1 would move block 303, and move #2 would move block 306, which is coupled to block 303).


One, however, could then evaluate moves #2 and #3 (moving block 309) in parallel, then move #4 (moving block 312), #5 (moving block 315) and #6 (moving block 303), and finally move #7 (moving block 318). Note that the placement has stopped three times, and that in the four “sets” of moves, half the sets had a single block moving. Thus, for half the time, one processor on a dual-core machine (as an example) would be sitting idle.


If instead one stops when moves could no longer be proposed, however, the situation improves. For instance, one may propose moves #1 through #5 without stopping. Note that one would stop at move #6 because it targets a block (i.e., block 303) that may already be moving as a result of another move. One may resume as soon as move #1 has been accepted or rejected, and proceed to propose move #7. In other words, one may resume when one or more dependencies on one or more earlier move(s) have been resolved.


Now, at any given time, there are always at least two moves that can be evaluated in parallel (move #3 in parallel with #1; move #4 with #3; move #5 with moves #4 and move #2; move #6 with move #3; moves #4, #5, and #7 with moves #3, #5, and #6). Persons of ordinary skill in the art who have the benefit of the description of the invention appreciate how, using this technique, one would also have a much greater chance of ensuring that one could generate 4 or 8 or even more moves at a time, thus being able to take advantage of machines with more than two processors, as desired.


To implement this algorithm, the inventive concepts use a more sophisticated or “smart” or improved or enhanced queue. More specifically, instead of keeping all its moves in order and allowing processors to work on the next one that's available, such a queue keeps track of the last move or all moves, as desired, which should be accepted or rejected before each move can be evaluated. For instance, move #2 would list move #1, and move #6 would list #2 (but not moves #3, #4, or #5). A processor that finishes evaluating move #2, for example, would be able to start work on move #6 even if moves #3, #4, and #5 have not yet been completed.
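
Such a dependency-aware queue might be sketched as follows; each entry records the earlier moves that must be accepted or rejected before it can be evaluated. The class and method names are illustrative assumptions.

```python
class DependencyQueue:
    """Queue whose entries record which earlier moves they must wait for."""

    def __init__(self):
        self._entries = {}        # move_id -> (move, set of prerequisite ids)
        self._resolved = set()    # ids of moves already accepted or rejected

    def add(self, move_id, move, prerequisites=()):
        self._entries[move_id] = (move, set(prerequisites))

    def next_ready(self):
        """Return (id, move) for an entry whose prerequisites are all resolved."""
        for move_id, (move, prereqs) in sorted(self._entries.items()):
            if prereqs <= self._resolved:
                del self._entries[move_id]
                return move_id, move
        return None

    def mark_resolved(self, move_id):
        self._resolved.add(move_id)

if __name__ == "__main__":
    q = DependencyQueue()
    q.add(1, "move block 303")
    q.add(2, "move block 306", prerequisites=[1])
    q.add(6, "move block 303 again", prerequisites=[2])
    print(q.next_ready())        # (1, ...) -- no prerequisites, available at once
    q.mark_resolved(1)
    print(q.next_ready())        # (2, ...) becomes available once #1 resolves
```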


One may use this technique in a variety of situations. For example, one may substitute such a queue for queue 250 in FIG. 6, as desired. Alternatively, one may use other arrangements, as desired, and as persons of ordinary skill in the art who have the benefit of the description of the invention understand.


If even the speedup allowed by the enhanced or improved queues is not enough, it is also possible to have different threads choose which portions of the input problem they wish to work on in parallel. Note that doing so will still maintain deterministic results. Using the placement example above, this approach would mean that not only do we evaluate the moves in parallel, we also generate them in parallel. The technique operates as described below and as shown in FIG. 10.


As described above, at 350 every action is given a numerical ID. Multiple threads, however, may at 355 make a decision as to which part of the input problem they choose to examine (e.g., which blocks each thread proposes to move). The respective thread, however, does not actually perform the action.


The thread then adds the action to a submission queue at 360. This queue accepts actions in any order, but will emit them in order of their ID numbers. For instance, if action #1 and #3 are added, the queue will appear to have one action in it (#1) until action #2 is also added.
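
A submission queue that accepts actions in any order but emits them strictly in order of their ID numbers behaves like a simple reorder buffer, as the sketch below illustrates (the names are hypothetical, and a production version would add locking for concurrent submitters).

```python
import heapq

class SubmissionQueue:
    """Accepts (id, action) in any order; emits them strictly in id order."""

    def __init__(self, first_id=1):
        self._heap = []
        self._next = first_id      # id of the next action allowed to leave

    def submit(self, action_id, action):
        heapq.heappush(self._heap, (action_id, action))

    def pop_ready(self):
        """Yield actions whose turn has come; stop at the first gap in ids."""
        while self._heap and self._heap[0][0] == self._next:
            yield heapq.heappop(self._heap)
            self._next += 1

if __name__ == "__main__":
    q = SubmissionQueue()
    q.submit(1, "action #1")
    q.submit(3, "action #3")
    print(list(q.pop_ready()))     # only #1 appears; #3 waits for #2
    q.submit(2, "action #2")
    print(list(q.pop_ready()))     # now #2 and #3 are emitted in order
```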


As actions are removed from the queue, at 365 one performs the dependency analysis, as described above. If an action is found to be dependent on a previous action, one processes it as described above. The action itself, however, may be invalid. For example, one may be proposing a move for a block that may no longer be in the location that was anticipated. Note that if this situation had arisen with the earlier versions of the technique described above, one would simply have stopped generating new actions. Given that with the improved technique one may have multiple threads generating actions in parallel, that would be a relatively more serious limitation.


Once this relatively more serious kind of dependency is found, a thread is simply asked at 370 to re-generate the action, preferably as soon as possible. For example, "as soon as possible" might be when it is determined whether or not the targeted block has actually moved. If it has not, one may simply evaluate the move; if it has, however, one proposes or considers a new move from scratch and evaluates that move instead.


The benefit of this technique is that, because no parts of the algorithm are serial (except the dependency checker, which one assumes is relatively fast), one expects to be able to accelerate the entire program as much as is theoretically possible, given its inherent dependencies. Note that the algorithm introduces almost no new dependencies of its own.


As the above paragraphs describe, the inventive concepts seek to take advantage of independence among moves or generally tasks to be performed. Note, therefore, that increasing the size of the set of independent tasks is desirable because doing so allows more parallel threads or processors to be used, thereby reducing run-time for the algorithm. One additional way to increase the size of the set of independent tasks is for the scheduler to postpone or discard dependent tasks if further or additional or other tasks continue to be independent.


For example, referring to FIG. 14, assume pairs of blocks are being processed (e.g., for moves) as tasks for an algorithm. An independent set of tasks cannot list the same block twice because doing so would constitute a dependency. In other words, the later task would have to know the outcome of the earlier or previous task as concerns that particular block before it can run. Now, assume, as FIG. 14 shows, that a set of tasks AB, CD, EF, and GH were loaded into the set, where each letter (e.g., A) denotes a block, and the pair of letters is the pair of blocks being considered in a task (for example, a move or some other operation). Further assume that the tasks have the numbers shown in FIG. 14. For example, task #1 corresponds to operation on blocks AB, and so on.


Now, suppose that the next desired pairs of blocks were AD, RS, and TV (corresponding to tasks #5, #6, and #7, respectively). The scheduler (for example, as shown in FIGS. 1 and 2, and described above) may postpone task #5 (blocks AD), which depends on task #1 (blocks AB) and task #2 (blocks CD) finishing, in order to allow task #6 (blocks RS) and task #7 (blocks TV) to proceed. (Note that task #6 (blocks RS) and task #7 (blocks TV) are not dependent on any other tasks or blocks.) In this case, task #5 (blocks AD) would be the first task loaded into the next set of independent tasks. Alternatively, task #5 might be discarded altogether if doing so does not degrade the quality of the results of the algorithm. In either case, note that this technique increases the size of the set of independent tasks, which in turn increases the parallelism and speed of execution. FIG. 15 shows a simplified flowchart of this technique according to an exemplary embodiment of the invention.
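
The postpone-or-discard policy of FIG. 14 might look roughly like the following sketch; the encoding of a task as a pair of blocks and the function names are assumptions made for illustration.

```python
def load_set_with_postponing(candidates):
    """Build a set of independent block-pair tasks, postponing conflicting ones.

    'candidates' is an ordered list of block pairs such as ("A", "B"). Returns
    (independent_set, postponed), where postponed tasks are carried over to the
    front of the next set instead of stalling this one.
    """
    independent, postponed, used = [], [], set()
    for pair in candidates:
        if used & set(pair):           # pair re-uses a block already in the set
            postponed.append(pair)     # e.g., task #5 (A, D) waits for #1 and #2
        else:
            independent.append(pair)
            used.update(pair)
    return independent, postponed

if __name__ == "__main__":
    current = [("A", "B"), ("C", "D"), ("E", "F"), ("G", "H"),
               ("A", "D"), ("R", "S"), ("T", "V")]
    batch, carry_over = load_set_with_postponing(current)
    print(batch)        # AB, CD, EF, GH, RS, TV may proceed in parallel
    print(carry_over)   # AD is postponed to the next independent set
```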


This postponing or discarding technique can be applied to the enhanced inventive scheme described above, including FIG. 10, which uses a set of tasks with dependencies indicating which other tasks should be performed before a given task may be performed. Since that scheme tracks task-execution dependencies in the set of tasks, task-generation dependencies may potentially prevent the scheduler from adding additional tasks to the set of tasks. Thus, one may apply the postponing or discarding techniques described above to task-generation dependencies.


Note that another technique for increasing the size of a set of independent tasks, even with a scheduler that stops on the first task-generation dependency, is to change task-generation dependencies into task-execution dependencies. This technique has already been applied in the example described above and shown in FIG. 14. In that example, a task simply specifies the pair of blocks to be considered. That property results in relatively few task-generation dependencies relative to examples where each task specifies not only a pair of blocks to consider, but also the changes that should be made to those blocks, if those changes rely on previous tasks being completed.


By keeping the number of task-generation dependencies relatively low by specifying less about tasks during task generation, larger sets of tasks can be generated. Note that, to ensure proper task execution, task-execution dependencies should be determined.


Another technique according to the inventive concepts seeks to increase efficiency when tasks are generated in parallel. As described above, one may generate and use an enhanced set of tasks that contains tasks that have some amount of dependency among them. The consumers of the set (i.e., the one or more threads performing those tasks) can be aware of the dependencies in the set, and can perform tasks when the tasks on which they are dependent have been performed.


In such a scheme, it is possible that a given set of tasks will include some tasks that have no or few dependencies, while some other tasks may form a relatively long chain of dependent tasks. As an example, consider a situation where a set contains six tasks, each of which has a one-second duration (i.e., takes one second to execute). Of those six tasks, assume that one task is independent, and five tasks are dependent on each other, such that each of the five tasks can be executed only after the previous one is finished. FIG. 16 illustrates such a set of tasks, with six tasks labeled as #1 through #6.


Given those assumptions, running the six tasks sequentially would take six seconds. Running them in parallel on a two-processor machine would take three seconds if one incorrectly ignored the task dependencies. Accounting for the dependencies, the tasks would take five seconds to run, which is not much faster than the sequential execution of the tasks. In addition, no number of additional processors would reduce the amount of time it takes to run the six tasks because of the chain of five dependent tasks.


As another example, suppose that the set of tasks had six tasks, of which three tasks were in one dependent chain, and the other three were in another dependent chain. FIG. 17 shows such an example, with tasks #1 through #6. In this situation, a two-processor system would provide maximum speedup by parallel execution, and would execute the six tasks in three seconds. Note that in the first example above, it is better to add an independent task, or one that is in a shorter chain of dependencies, than to add more tasks to the one long chain of five tasks.


These examples illustrate that the scheduler, which is loading the set of tasks, can benefit from considering one or more of the following factors: (i) the processing time each task takes, (ii) the set of dependencies for each task, and/or (iii) the number of threads or processors available for execution. Note that even a fully independent set of tasks can still benefit from a scheduler that considers (i) and (iii) above.


In an exemplary embodiment of the invention, a scheduler that considers one or more of the above factors may try to speculate on a tentative assignment of tasks to threads as the set of tasks is being generated. Such a scheme assumes that a thread or processor will start executing a task when the task(s) upon which it depends are complete. The speculative assignment of tasks would enable evaluation as to whether adding a new task to the set would create a situation where some threads or processors are under-utilized during the time it takes to execute all tasks. FIG. 18 illustrates a simplified flowchart for a technique according to such an embodiment.
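
One way to realize such speculative task-to-thread assignment is a simple list-scheduling estimate, as sketched below. The task durations, dependency lists, and names are hypothetical; a scheduler could use the estimated finish times to judge whether adding a task to the set would leave threads or processors under-utilized.

```python
def speculative_schedule(tasks, num_threads):
    """Tentatively assign tasks to threads and estimate the overall finish time.

    'tasks' maps task id -> (duration_seconds, list of prerequisite ids).
    A task is assumed to start on the least-loaded thread once all of its
    prerequisites have finished.
    """
    thread_free = [0.0] * num_threads      # time at which each thread is free
    finish = {}                            # task id -> estimated finish time
    assignment = {}
    for tid in sorted(tasks):              # generation order
        duration, prereqs = tasks[tid]
        ready = max((finish[p] for p in prereqs), default=0.0)
        thread = min(range(num_threads), key=lambda t: thread_free[t])
        start = max(ready, thread_free[thread])
        finish[tid] = start + duration
        thread_free[thread] = finish[tid]
        assignment[tid] = thread
    return assignment, max(finish.values(), default=0.0)

if __name__ == "__main__":
    # One independent task plus a chain of five dependent one-second tasks
    # (the FIG. 16 example): the estimated makespan stays at 5 seconds on 2 threads.
    example = {1: (1.0, []), 2: (1.0, []), 3: (1.0, [2]), 4: (1.0, [3]),
               5: (1.0, [4]), 6: (1.0, [5])}
    print(speculative_schedule(example, num_threads=2))
```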


Note that in such a scheme the processing time assumed for tasks need not perfectly correspond with the actual task execution time. That is, this speculation need not be perfect; rather, it is used to facilitate good utilization of the threads or processors. Note that some embodiments according to the disclosed concepts may use the speculative task-to-thread scheduling when executing tasks, while other embodiments may allow dynamic scheduling as the threads or processors are performing tasks, as desired.


As a further enhancement to this technique, the dependency checker may be run by a dedicated processor configured as the master in a master-workers configuration. This processor would be inactive until a task had been completed by a worker thread.


Alternatively, the dependency checker could be run by a “supervisor” processor, which is configured to also perform tasks like the other workers when there are no tasks that are ready to have their dependencies checked. This has the advantage that the processor would always be doing useful work, and furthermore, it would never have to ask the host operating system to re-activate its thread, which can be very time-consuming.


As yet another alternative, all of the processors could be configured to become the supervisor of the algorithm when tasks have been completed and need to have their dependencies checked. In addition to the two advantages listed above, this has two further advantages.


First, a processor that has just completed a task will typically be able to begin the dependency check immediately, rather than waiting for a specific thread to complete its own task. Second, if a task is checked by the same processor on which it was speculatively executed, its data is likely to still be present in the processor's cache, which can significantly improve performance.


As a further enhancement to this technique, a priority queue can be used to hold tasks that have been speculatively executed but not yet had their dependencies checked by a supervisor. This allows any processor to speculatively complete its task at any time, while ensuring that the dependency checks will be carried out in a deterministic order. An example of such a configuration is shown in FIG. 23, with four processors (C0-C3), each of which can become a supervisor if it inserts the lowest-priority task into the queue (in this case, M0). The processor to insert this task (in this case, C0) will then run the dependency checker on all tasks already present in the queue (in this case, M1-M3, in addition to M0).


As a further enhancement to this technique, the supervisor can also be responsible for assigning new tasks to the processors. This enables the algorithm to begin and end under the control of a single entity.


As a further enhancement to this technique, the supervisor can be used to execute portions of the algorithm which are not thread-safe. Referring to FIG. 23, assume that task M7 (which is yet to be assigned to a processor by the supervisor) will not be able to be safely executed while any other tasks are in progress. In this case, the current supervisor can insert M7 directly into the priority queue, bypassing the workers, and suspend the assignment of all future tasks (M8 and higher).


After a future supervisor has dependency checked tasks M0 through M6, it will encounter M7 and instead of checking its dependencies, will simply execute the task itself before resuming the assignment of further tasks. In this manner, infrequent thread-unsafe tasks may be safely executed along with more frequent thread-safe tasks with minimal loss of performance.


Another technique according to the inventive concepts relates to increasing efficiency when tasks are generated in parallel. As described above, one may perform tasks in parallel without regard to dependencies. In such a scheme, a dependency-checker receives the tasks and, based on the dependencies related to the tasks, automatically instructs one or more of the threads or processors to re-perform or re-generate one or more tasks that were incorrectly performed because of a dependency violation.


As a further enhancement to this technique, one may have the thread(s) responsible for task generation consider (or configure or adapt them to consider): (i) dependencies, and (ii) the work or tasks done or performed by other threads, in order to avoid re-generation of the task list or scheduling, which helps to increase the efficiency of the algorithm. FIG. 19 shows a simplified flowchart for such a technique according to an exemplary embodiment of the invention.


One way to achieve this technique in a placement problem (see, for example, FIGS. 7-9) is to mark each block that gets moved, so that other threads or processors generating moves avoid moving that block (since such a move would likely be an invalid move). When the threads (at least two of the threads) are all synchronized, the mark on that block would be cleared. The cleared mark would signal to the threads or processors that they may move that block, if desired.
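
Such block marking might be sketched as follows; a shared set guarded by a lock stands in for whatever synchronization mechanism the threads actually use, and all names are illustrative.

```python
import threading

class MovedBlockMarks:
    """Shared marks on blocks that are currently being moved by some thread."""

    def __init__(self):
        self._marked = set()
        self._lock = threading.Lock()

    def try_mark(self, block):
        """Mark 'block' for moving; refuse if another thread already did."""
        with self._lock:
            if block in self._marked:
                return False          # another thread's move targets this block
            self._marked.add(block)
            return True

    def clear_all(self):
        """Called when the threads synchronize; blocks become movable again."""
        with self._lock:
            self._marked.clear()

if __name__ == "__main__":
    marks = MovedBlockMarks()
    print(marks.try_mark("block 303"))   # True: this thread may move it
    print(marks.try_mark("block 303"))   # False: avoid a likely-invalid move
    marks.clear_all()                    # at synchronization, marks are cleared
    print(marks.try_mark("block 303"))   # True again
```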


Another possible enhancement entails having the threads perform periodic re-synchronization, so that they receive updated state information from other threads. The updated state information can be used to avoid re-generation by ensuring a task is performed considering the effects produced by other threads that already completed the pre-requisite tasks.


As a further enhancement to this scheme, one may guide the threads (or processors), so that the tasks being performed in separate threads are completely independent, or highly independent. Doing so would again increase or tend to increase the efficiency of the computations.


One way to achieve complete independence is to partition the problem space (e.g., the blocks involved in a placement problem), so that task(s) performed by each thread (or processor) are independent of the task(s) performed by other threads. One way to do this is to find independent parts of the problem to assign to the various threads (or processors). Some problems have more intrinsic dependencies which might make doing so relatively difficult. In such cases, it can be advantageous to establish boundaries to partition the problem or process being optimized.


The boundaries act to prevent or make it unlikely that changes in one part of the entity being optimized affect another part of the entity being optimized. That way, the two parts of the entity can be optimized independently or mostly or substantially independently.


Another way to increase independence is to apply the techniques described above to speculatively assign tasks to threads (or processors) or groups of possible tasks to threads (or processors) in order to partition the tasks in a way that minimizes inter-thread dependencies (temporal invariance), or to minimize the number of inter-thread dependencies that may be violated (temporal considerations). This partitioning of tasks before execution can be strictly obeyed or it can be used as hints or clues to guide task execution. Note that the goal here is not to necessarily eliminate all inter-thread dependencies, but to reduce the number of re-generations performed because of dependency violations between resynchronization points.


Yet another application of task re-generation according to the inventive concepts is to employ task re-generation to recover from deadlocks among some of the tasks. Deadlocks may occur, for example, when one thread awaits completion of a task by another thread, and vice-versa. Instead of having threads wait for a resolution indefinitely, timers such as thread timers can be used to prevent deadlocks.


In the event of a potential deadlock, thread timers expire. The algorithm can identify which tasks were being processed when the deadlock occurred. Those tasks can be aborted shortly after the expiration of the thread timers. FIG. 20 shows a simplified flowchart corresponding to an exemplary embodiment that employs this technique.
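
The thread-timer idea might be sketched as shown below: a worker waits on its prerequisite with a timeout, and on expiry the task is aborted and handed back for re-generation. The timeout value and names are assumptions made for the example.

```python
import threading

def run_with_timer(task, prerequisite_done, timeout_s=1.0):
    """Run 'task' after its prerequisite completes, or abort on timer expiry.

    'prerequisite_done' is a threading.Event set by whichever thread finishes
    the prerequisite task. Returns the task result, or None if the task was
    aborted so that the scheduler can re-generate it (as on a dependency
    violation).
    """
    if not prerequisite_done.wait(timeout=timeout_s):
        return None                  # potential deadlock: abort this task
    return task()

if __name__ == "__main__":
    done = threading.Event()
    results = []

    worker = threading.Thread(
        target=lambda: results.append(run_with_timer(lambda: "task ran", done)))
    worker.start()
    # The prerequisite never completes, so the timer expires and the task is
    # aborted instead of waiting indefinitely.
    worker.join()
    print(results)                   # [None] -> the task will be re-generated
```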


Alternatively, the threads can be allowed to continue, and the results of those tasks can be discarded once they are complete (since the results are probably corrupted). In that event, the aborted or incorrectly executed tasks can be re-generated by treating the tasks as being part of a dependency violation. FIG. 21 illustrates a simplified flowchart that corresponds to an illustrative embodiment that employs this technique.
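A minimal sketch of timer-based deadlock recovery appears below; the half-second timeout, the queue-based re-generation, and names such as run_task are illustrative assumptions rather than elements of the application. A worker that cannot acquire a contended resource before its timer expires abandons the attempt, discards any partial result, and returns the task to the queue, where it is re-generated as if a dependency had been violated.

```python
import queue
import threading

task_queue = queue.Queue()
resource_lock = threading.Lock()          # shared resource the tasks contend for

def run_task(task):
    """Placeholder for the real task body (e.g., evaluating a placement move)."""
    print("completed", task)

def worker():
    while True:
        task = task_queue.get()
        if task is None:                  # shutdown signal
            task_queue.task_done()
            return
        # Acquire with a timeout instead of waiting forever; an expired timer
        # is treated as a potential deadlock.
        if resource_lock.acquire(timeout=0.5):
            try:
                run_task(task)
            finally:
                resource_lock.release()
        else:
            # Abort the attempt: discard any partial result and re-generate
            # the task by returning it to the queue.
            task_queue.put(task)
        task_queue.task_done()

for t in ("move_block_a", "move_block_b"):
    task_queue.put(t)
workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
task_queue.join()                         # wait until every task has completed
for w in workers:
    task_queue.put(None)                  # stop the workers
for w in workers:
    w.join()
```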


Another aspect of the inventive concepts relates to performing dependency analysis during task execution, as opposed to before execution or after execution but before committing to the results. An exemplary embodiment according to this technique performs tasks in parallel without performing a comprehensive dependency analysis ahead of time (i.e., before execution of the tasks). Instead of monitoring for dependencies upon committing to the results, the dependency analysis can be performed by the threads or processors performing the tasks.


Put another way, before each thread performs a set of tasks, or before it performs each task, it can try to synchronize itself with other threads by determining that the tasks upon which its future tasks depend have completed. If it also synchronizes with the current state of those other threads, it can make use of up-to-date dependency data to perform those future tasks correctly. FIG. 22 illustrates a simplified flowchart that corresponds to an illustrative embodiment that employs this technique. Note that dependency analysis is being performed during task execution, rather than before execution of the tasks commences.
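The following sketch illustrates, under assumed names such as Task and prerequisites that are not drawn from the application, how a thread might perform this dependency check during execution: each task waits on a completion event for every prerequisite before touching the shared state, so the state it reads already reflects the prerequisites' effects.

```python
import threading

class Task:
    def __init__(self, name, prerequisites=()):
        self.name = name
        self.prerequisites = list(prerequisites)
        self.done = threading.Event()

    def run(self, shared_state):
        # Dependency analysis happens here, during execution: wait until every
        # prerequisite has completed so up-to-date state can be used.
        for prereq in self.prerequisites:
            prereq.done.wait()
        shared_state[self.name] = "done"   # stand-in for the real task body
        self.done.set()

a = Task("analyze_net")
b = Task("move_block", prerequisites=[a])
state = {}
# Start b first on purpose; it still waits for a to complete before running.
threads = [threading.Thread(target=t.run, args=(state,)) for t in (b, a)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state)
```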


Beyond PLD CAD applications, there are other approaches, specific to particular algorithms, that can be used to take advantage of parallel processing power without significantly affecting algorithm design flexibility. One example is parallel analysis.


More specifically, optimization algorithms often rely on analysis engines to determine how much effort should be applied (and where that effort should be applied) to achieve various design goals. These analysis engines often take a snapshot of the current state and return the results of the analysis for that state. A serial algorithm, shown in FIG. 11, will wait for that analysis and proceed when it is done (e.g., optimization phase 403B awaits results of analysis phase 406, which in turn receives its input from optimization phase 403A). Consequently, it has the disadvantages described above.


To make the algorithms parallel, one can have additional processors constantly taking snapshots of the state and performing the analysis. This approach has one disadvantage: the analysis results will be stale, since the state used for the analysis will no longer be current when the analysis results are made available. On the other hand, the parallelism provides for increased efficiency and reduced resource demands. FIG. 12 shows how this process works.


In the technique shown in FIG. 12, one may perform analysis and optimization in parallel. For example, optimization phase or engine 403A may operate in parallel or concurrently with analysis phase or engine 406A. Similarly, optimization phase or engine 403B may operate in parallel or concurrently with analysis phase or engine 406B. In this scenario, the analysis phase is performed on a previous optimization state. The results of the analysis phase are fed back to the optimization phase after the state of the optimization has potentially changed.


Note that the input to each analysis step is from a different optimization state than the state that uses its output. For example, assume the optimization step is placement (where, say, thousands of moves are being made to blocks), and the analysis step is timing analysis, which provides input to the placement phase regarding which connections are most timing-critical. This technique provides the advantage that analysis and optimization are performed concurrently or in parallel, albeit potentially (but not necessarily) at the cost of a less optimal solution.
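A minimal sketch of this parallel-analysis arrangement appears below; the use of Python threads, the toy "criticality" metric, and the update intervals are assumptions made solely for illustration. An analysis thread repeatedly snapshots the optimization state and publishes results, while the optimizer consumes whatever results are currently available, even though they may describe a slightly earlier state.

```python
import threading
import time

state = {"positions": [5.0, 3.0, 8.0]}              # stand-in optimization state
latest_analysis = {"criticality": [1.0, 1.0, 1.0]}  # last published analysis results
lock = threading.Lock()
stop = threading.Event()

def analyzer():
    """Repeatedly snapshot the current state and publish (possibly stale) results."""
    while not stop.is_set():
        with lock:
            snapshot = list(state["positions"])      # snapshot of the optimization state
        result = [abs(p) for p in snapshot]          # stand-in for timing analysis
        with lock:
            latest_analysis["criticality"] = result  # the state may have moved on by now
        time.sleep(0.001)

def optimizer(steps=200):
    """Consume whatever analysis results are available, even if slightly stale."""
    for _ in range(steps):
        with lock:
            crit = list(latest_analysis["criticality"])
            worst = crit.index(max(crit))            # most critical element so far
            state["positions"][worst] *= 0.9         # "optimize" that element
        time.sleep(0.0005)

analysis_thread = threading.Thread(target=analyzer)
analysis_thread.start()
optimizer()
stop.set()
analysis_thread.join()
print(state["positions"])                            # all elements driven toward zero
```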


Examples of analysis that this technique may be applied to include timing analysis (determining the timing performance of each path in a circuit); congestion analysis (determining which areas of a chip are likely to face routing congestion based on the placement of the design); and design analysis (determining which portions of the design would benefit from, or require, more focus for optimization). Note that the examples listed are illustrative, and that one may apply the techniques to other applications or situations, as persons of ordinary skill in the art who have the benefit of the description of the invention understand.


As noted above, one may run or execute algorithms or software according to the invention on computer systems or processors. FIG. 13 shows a block diagram of an exemplary system for processing information according to the invention.


System 1000 includes a computer device 1005, an input device 1010, a video/display device 1015, and a storage/output device 1020, although one may include more than one of each of those devices, as desired.


The computer device 1005 couples to the input device 1010, the video/display device 1015, and the storage/output device 1020. The system 1000 may include more than one computer device 1005, for example, a set of associated computer devices or systems, as desired.


The system 1000 operates in association with input from a user. The user input typically causes the system 1000 to perform specific desired information-processing tasks, including circuit simulation. The system 1000 in part uses the computer device 1005 to perform those tasks. The computer device 1005 includes information-processing circuitry, such as a central-processing unit (CPU), although one may use more than one CPU or information-processing circuitry, as persons skilled in the art would understand.


The input device 1010 receives input from the user and makes that input available to the computer device 1005 for processing. The user input may include data, instructions, or both, as desired. The input device 1010 may constitute an alphanumeric input device (e.g., a keyboard), a pointing device (e.g., a mouse, roller-ball, light pen, touch-sensitive apparatus, for example, a touch-sensitive display, or tablet), or both. The user operates the alphanumeric keyboard to provide text, such as ASCII characters, to the computer device 1005. Similarly, the user operates the pointing device to provide cursor position or control information to the computer device 1005.


The video/display device 1015 displays visual images to the user. The visual images may include information about the operation of the computer device 1005, such as graphs, pictures, images, and text. The video/display device may constitute a computer monitor or display, a projection device, and the like, as persons of ordinary skill in the art would understand. If a system uses a touch-sensitive display, the display may also operate to provide user input to the computer device 1005.


The storage/output device 1020 allows the computer device 1005 to store information for additional processing or later retrieval (e.g., softcopy), to present information in various forms (e.g., hardcopy), or both. As an example, the storage/output device 1020 may constitute a magnetic, optical, or magneto-optical drive capable of storing information on a desired medium and in a desired format. As another example, the storage/output device 1020 may constitute a printer, plotter, or other output device to generate printed or plotted expressions of the information from the computer device 1005.


The computer-readable medium 1025 interrelates structurally and functionally to the computer device 1005. The computer-readable medium 1025 stores, encodes, records, and/or embodies functional descriptive material. By way of illustration, the functional descriptive material may include computer programs, computer code, computer applications, and/or information structures (e.g., data structures or file systems). When stored, encoded, recorded, and/or embodied by the computer-readable medium 1025, the functional descriptive material imparts functionality. The functional descriptive material interrelates to the computer-readable medium 1025.


Information structures within the functional descriptive material define structural and functional interrelations between the information structures and the computer-readable medium 1025 and/or other aspects of the system 1000. These interrelations permit the realization of the information structures' functionality. Moreover, within such functional descriptive material, computer programs define structural and functional interrelations between the computer programs and the computer-readable medium 1025 and other aspects of the system 1000. These interrelations permit the realization of the computer programs' functionality.


By way of illustration, the computer device 1005 reads, accesses, or copies functional descriptive material into a computer memory (not shown explicitly in the figure) of the computer device 1005. The computer device 1005 performs operations in response to the material present in the computer memory. The computer device 1005 may perform the operations of processing a computer application that causes the computer device 1005 to perform additional operations. Accordingly, the functional descriptive material exhibits a functional interrelation with the way the computer device 1005 executes processes and performs operations.


Furthermore, the computer-readable medium 1025 constitutes an apparatus from which the computer device 1005 may access computer information, programs, code, and/or applications. The computer device 1005 may process the information, programs, code, and/or applications that cause the computer device 1005 to perform additional operations.


Note that one may implement the computer-readable medium 1025 in a variety of ways, as persons of ordinary skill in the art would understand. For example, memory within the computer device 1005 may constitute a computer-readable medium 1025, as desired. Alternatively, the computer-readable medium 1025 may include a set of associated, interrelated, coupled (e.g., through conductors, fibers, etc.), or networked computer-readable media, for example, when the computer device 1005 receives the functional descriptive material from a network of computer devices or information-processing systems. Note that the computer device 1005 may receive the functional descriptive material from the computer-readable medium 1025, the network, or both, as desired.


Note that one may apply the inventive concepts effectively to various ICs that include programmable or configurable circuitry, known by other names in the art, as desired, and as persons skilled in the art with the benefit of the description of the invention understand. Such circuitry includes, for example, devices known as complex programmable logic devices (CPLDs), programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and structured application-specific ICs, or structured ASICs.


Referring to the figures, persons of ordinary skill in the art will note that the various blocks shown may depict mainly the conceptual functions and signal flow. The actual circuit implementation may or may not contain separately identifiable hardware for the various functional blocks and may or may not use the particular circuitry shown. For example, one may combine the functionality of various blocks into one circuit block, as desired. Furthermore, one may realize the functionality of a single block in several circuit blocks, as desired. The choice of circuit implementation depends on various factors, such as particular design and performance specifications for a given implementation, as persons of ordinary skill in the art who have the benefit of the description of the invention understand. Other modifications and alternative embodiments of the invention in addition to those described here will be apparent to persons of ordinary skill in the art who have the benefit of the description of the invention. Accordingly, this description teaches those skilled in the art the manner of carrying out the invention and is to be construed as illustrative only.


The forms of the invention shown and described should be taken as the presently preferred or illustrative embodiments. Persons skilled in the art may make various changes in the shape, size and arrangement of parts without departing from the scope of the invention described in this document. For example, persons skilled in the art may substitute equivalent elements for the elements illustrated and described here. Moreover, persons skilled in the art who have the benefit of this description of the invention may use certain features of the invention independently of the use of other features, without departing from the scope of the invention.

Claims
  • 1. (canceled)
  • 2. A system for parallelizing software in computer-aided design (CAD) software for logic design, the system comprising: a computer, configured to: use a first processor to perform dependency checking among a set of tasks; and use a set of processors, comprising at least a second processor, to perform the set of tasks.
  • 3. The system according to claim 2, wherein the first processor comprises a dedicated processor.
  • 4. The system according to claim 3, wherein the first processor and the set of processors are configured in a master-workers configuration.
  • 5. The system according to claim 4, wherein the first processor is inactive until a task has been completed by a worker thread.
  • 6. The system according to claim 2, wherein the first processor comprises a supervisor.
  • 7. The system according to claim 6, wherein the first processor is configured to perform tasks when no tasks exist that are ready to have their dependencies checked.
  • 8. The system according to claim 2, wherein any of the processors may be configured as a supervisor when tasks have been speculatively completed and are ready for checking dependencies.
  • 9. The system according to claim 8, wherein the computer is further configured to use a priority queue to hold tasks in the set of tasks that have been speculatively executed but have not had their dependencies checked by a supervisor.
  • 10. The system according to claim 6, wherein the supervisor assigns tasks in the set of tasks to the set of processors.
  • 11. The system according to claim 8, wherein the supervisor is used to execute thread-unsafe instructions.
  • 12. A computer-readable medium, comprising: a computer application adapted for processing by a computer to parallelize computer-aided design (CAD) software, the computer application configured to cause the computer to: use a first processor to perform dependency checking among a set of tasks; and use a set of processors, comprising at least a second processor, to perform the set of tasks.
  • 13. The computer-readable medium according to claim 12, wherein the computer application is configured to cause the computer to use the first processor as a dedicated processor.
  • 14. The computer-readable medium according to claim 13, wherein the computer application is configured to cause the computer to use the first processor and the set of processors in a master-workers configuration.
  • 15. The computer-readable medium according to claim 12, wherein the computer application is configured to cause the computer to use the first processor as a supervisor.
  • 16. The computer-readable medium according to claim 12, wherein the computer application is configured to cause the computer to use any of the processors as a supervisor when tasks have been speculatively completed and are ready for checking dependencies.
  • 17. A method of parallelizing computer-aided design (CAD) software, the method comprising: using a first processor to perform dependency checking among a set of tasks; and using a set of processors, comprising at least a second processor, to perform the set of tasks.
  • 18. The method according to claim 17, wherein the first processor comprises a dedicated processor.
  • 19. The method according to claim 18, further comprising using the first processor and the set of processors in a master-workers configuration.
  • 20. The method according to claim 17, wherein the first processor comprises a supervisor.
  • 21. The method according to claim 17, further comprising using any of the processors as a supervisor when tasks have been speculatively completed and are ready for checking dependencies.
CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application claims priority to Provisional U.S. Patent Application Ser. No. 61/091,631, filed on Aug. 25, 2008, titled “Apparatus and Methods for Parallelizing Integrated Circuit Computer-Aided Design Software,” Attorney Docket No. ALTR055CIP2P1. This patent application is a continuation-in-part of U.S. patent application Ser. No. 12/037,329, filed on Feb. 26, 2008, titled “Apparatus and Methods for Parallelizing Integrated Circuit Computer-Aided Design Software,” Attorney Docket No. ALTR055CIP, which claims priority to Provisional U.S. Patent Application Ser. No. 60/892,537, filed on Mar. 1, 2007, titled “Apparatus and Methods for Parallelizing Integrated Circuit Computer-Aided Design Software,” Attorney Docket No. ALTR-055CPZ1; said application Ser. No. 12/037,329 is also a continuation-in-part of U.S. patent application Ser. No. 11/392,215, filed on Mar. 29, 2006, titled “Apparatus and Methods for Parallelizing Integrated Circuit Computer-Aided Design Software,” Attorney Docket No. ALTR055, which claims priority to Provisional U.S. Patent Application Ser. No. 60/772,747, filed on Feb. 13, 2006, titled “Apparatus and Methods for Parallelizing Software,” Attorney Docket No. ALTR:055PZ1. The present patent application incorporates by reference the foregoing patent applications.

Provisional Applications (3)
Number Date Country
61091631 Aug 2008 US
60892537 Mar 2007 US
60772747 Feb 2006 US
Continuation in Parts (2)
Number Date Country
Parent 12037329 Feb 2008 US
Child 12545224 US
Parent 11392215 Mar 2006 US
Child 12037329 US