This application is related to commonly assigned U.S. application Ser. No. 10/582,204, filed by Xiaofeng Guo, Jinquan Dai, and Long Li with an effective filing date of Jan. 26, 2006 and entitled “Scheduling Multithreaded Programming Instructions Based on Dependency Graph,” and is related to commonly assigned U.S. application Ser. No. 10/582,427, filed by Xiaofeng Guo, Jinquan Dai, Long Li, and Zhiyuan Lv with an effective filing date of Nov. 17, 2005 and entitled “Latency Hiding of Traces Using Block Coloring,” and is related to commonly assigned U.S. application Ser. No. 11/662,217, filed by Xiaofeng Guo, Jinquan Dai, and Long Li with an effective filing date of Dec. 24, 2005 (the PCT application designating the U.S. was filed on this date) and entitled “Automatic Critical Section Ordering for Parallel Program.”
1. Field
This disclosure relates generally to compiling technologies in a computing system, and more specifically but not exclusively, to a method and apparatus for merging critical sections when compiling a computer program.
2. Description
Multithreading and multiprocessing are common programming techniques often used to maximize the efficiency of computer programs by providing a tool to permit concurrency or multitasking. Threads allow a computer program to be divided into multiple distinct sequences of programming instructions, where each sequence is treated as a single task and may be processed concurrently with the other sequences. One application that may use the multithreaded programming technique is a packet-switched network application that processes network packets concurrently in a high-speed packet-switched system.
To maintain and organize the different packets, a new thread may be created for each incoming packet. In a single processor environment, the processor may divide its time between different threads. In a multiprocessor environment, different threads may be processed on different processors. For example, the Intel® IXA™ network processors (IXPs) have multiple microengines (MEs) processing network packets in parallel where each ME supports multiple threads.
In such a parallel programming paradigm, accesses to shared resources, including shared memory, global variables, shared pipes, and so on, are typically protected by critical sections to ensure mutual exclusiveness and synchronization between threads. Normally, critical sections are created by using a signal mechanism in a multiprocessor system. A signal may be used to permit entry into, or to indicate exit from, a critical section. For instance, in an Intel® IXP™, packets are distributed to a chain of threads in order (i.e., an earlier thread in the chain processes an earlier packet). Each thread waits for a signal from the previous thread before entering the critical section. After the signal is received, the thread executes the critical section code exclusively. Once this thread is done, it leaves the critical section and sends the signal to the next thread.
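By way of a non-limiting illustration, the following Python sketch simulates the signal-passing discipline described above. It is an assumption-laden stand-in: real IXP MEs use hardware inter-thread signals, whereas the `threading.Event` objects here merely model the permit-enter/indicate-exit behavior.

```python
import threading

NUM_THREADS = 4

# One "signal" per thread; signals[i] permits thread i to enter the
# critical section. The first thread's signal starts set.
signals = [threading.Event() for _ in range(NUM_THREADS)]
signals[0].set()

shared_counter = 0  # the shared resource protected by the critical section

def worker(tid):
    global shared_counter
    signals[tid].wait()                # wait for the signal from the previous thread
    # ---- critical section: executed by exactly one thread at a time ----
    shared_counter += 1
    # ---- end critical section ----
    # Pass the signal on; the wrap-around to thread 0 models the ring of threads.
    signals[(tid + 1) % NUM_THREADS].set()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)  # always NUM_THREADS: the accesses were serialized in order
```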
Due to hardware cost, the number of signals is limited by the scale of the processing element. For example, each thread has only 16 signals in Intel® IXP™ MEs. Excessive use of critical sections may therefore adversely impact the performance of a program, so it is desirable to use critical sections efficiently.
The features and advantages of the disclosed subject matter will become apparent from the following detailed description of the subject matter.
According to embodiments of the subject matter disclosed in this application, critical sections used by multiple threads in a program to access shared resources may be minimized. A trace-based instruction level dependence graph may be constructed based on the result of the critical section minimization. The dependence graph so constructed may be summarized. Additionally, critical sections in the program may be selected and merged with each other based on the summarized dependence graph to reduce the number of signals/tokens used to create critical sections. Furthermore, latency-sensitive optimizations may be applied to hide resource access latency.
Reference in the specification to “one embodiment” or “an embodiment” of the disclosed subject matter means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosed subject matter. Thus, appearances of the phrase “in one embodiment” in various places throughout the specification are not necessarily all referring to the same embodiment.
The memory 113 may be a dynamic random access memory (“DRAM”) device, a static random access memory (“SRAM”) device, read-only memory (“ROM”), a synchronous DRAM (“SDRAM”) device, a Double Data Rate (“DDR”) SDRAM device, and/or other memory device. The memory 113 may store instructions and code represented by data signals that may be executed by the processor 101. According to an embodiment of the computing system 100, a compiler may be stored in the memory 113 and implemented by the processor 101 in the computing system 100. The compiler may construct an instruction-level dependence graph and summarize the dependence graph so constructed. According to an embodiment of the subject matter disclosed in this application, the summarized dependence graph may be used to merge critical sections to save signals needed for critical section creations and to reduce the number of overall instructions in an execution path of a program.
A cache 102 may reside inside processor 101 to store data stored in memory 113. The cache 102 speeds access to memory by the processor 101 by taking advantage of its locality of access. In an alternative embodiment of the computing system 100, the cache 102 may reside external to the processor 101. In another embodiment, the cache 102 may include multiple levels, such as level 1 cache (L1 cache), level 2 cache (L2 cache), level 3 cache, and so on, with one or more levels (e.g., L1 cache) residing inside the processor 101 and others residing outside the processor 101. A bridge memory controller 111 directs data signals between the processor 101, the memory 113, and other components in the computing system 100 and bridges the data signals between the CPU bus 110, the memory 113, and a first IO (Input/Output) bus 120.
The first IO bus 120 may be a single bus or a combination of multiple buses. The first IO bus 120 provides communication links between components in the computing system 100. A network controller 121 may be coupled to the first IO bus 120. The network controller 121 may link the computing system 100 to a network of computers (not shown) and support communication among the computers. A display device controller 122 may be coupled to the first IO bus 120. The display device controller 122 allows coupling of a display device (not shown) to the computing system 100 and acts as an interface between the display device and the computing system 100.
A second IO bus 130 may be a single bus or a combination of multiple buses. The second IO bus 130 may provide communication links between components in the computing system 100. A data storage device 131 is coupled to the second IO bus 130. The data storage device 131 may be a hard disk drive, a floppy disk drive, a compact disc (“CD”) ROM device, a flash memory device, or other mass storage device. An input interface 132 may be coupled to the second IO bus 130. The input interface 132 may be, for example, a keyboard and/or mouse controller or other input interface. The input interface 132 may be a dedicated device or can reside in another device such as a bus controller or other controller. The input interface 132 allows coupling of an input device to the computing system 100 and transmits data signals from an input device to the computing system 100. An audio controller 133 may be coupled to the second IO bus 130. The audio controller 133 operates to coordinate the recording and playing of sounds by a device such as an audio codec, which is also coupled to the IO bus 130. A bus bridge 123 couples the first IO bus 120 and the second IO bus 130. The bus bridge 123 operates to buffer and bridge data signals between the first IO bus 120 and the second IO bus 130.
When a program is executed in the computing system 100, it may be executed in multiple threads. In one embodiment, all of the threads may be running on processor 101. In another embodiment, threads may be distributed to and run on multiple processors or processing cores. Threads communicate with other threads through shared resources such as global memory, registers, or signals. In many instances, the shared resource may be accessed by only one thread at a time. Such exclusive access of the shared resource by one thread at a time may be implemented by using a critical section. A conventional method to implement a critical section is to use a signal mechanism. A thread may enter a critical section after receiving a signal, and may exit the critical section by passing a signal to the next thread to notify it that the thread is done.
Typically it takes time to access a shared resource. This time is referred to as resource access latency, which is measured from the instant when a resource access (e.g., a memory access) is initiated to the instant when the accessed data becomes available. If resource access latency is included in a critical section, the processor or processing core executing the thread that has entered this critical section will be idle during the latency period, resulting in inefficient use of computing power. One way to improve the efficiency of a computing system running multiple threads is to hide resource access latency, that is, to overlap resource access latency in one thread with resource access latency and/or other computations in other threads.
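As a hedged illustration of latency hiding, the following Python sketch issues a simulated resource access, overlaps independent computation with the outstanding latency, and waits only at the point the data is needed. The `memory_read` function and its delay are illustrative stand-ins, not an actual memory interface.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def memory_read(addr):
    """Stand-in for a long-latency resource access (illustrative only)."""
    time.sleep(0.01)          # simulated resource access latency
    return addr * 2

with ThreadPoolExecutor(max_workers=1) as pool:
    # Split-phase access: initiate the read, then overlap independent
    # computation with the outstanding latency instead of blocking.
    future = pool.submit(memory_read, 42)             # initiate the access
    independent = sum(i * i for i in range(100_000))  # work not using the data
    value = future.result()   # wait only when the data is actually needed
print(independent, value)
```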
When the wait instruction 351 is moved outside of the critical section 311, the critical section 311 may be shortened.
When resource access latency and other unnecessary instructions are moved out of critical sections, merging the shortened critical sections becomes a more effective way to reduce the number of signals used by critical sections than merging un-shortened critical sections.
The compiler 400 may include a front end unit 420. According to an embodiment of the compiler 400, the front end unit 420 operates to parse source code and convert it to an abstract syntax tree. The compiler 400 may also include an intermediate language (“IL”) unit 430. The IL unit 430 transforms the abstract syntax tree into a common intermediate form such as an intermediate representation. It should be appreciated that the IL unit 430 may transform the abstract syntax tree into one or more common intermediate forms.
The compiler 400 may include an optimizer unit 440. The optimizer unit 440 may utilize one or more optimization procedures to optimize the intermediate representation of the code. According to an embodiment of the compiler 400, the optimizer unit 440 may perform peephole, local, loop, global, interprocedural, and/or other optimizations. According to an embodiment of the compiler 400, the optimizer unit 440 includes a critical section merging apparatus 441. The critical section merging apparatus 441 may minimize critical sections and construct a trace-based instruction level dependence graph based on the result of the critical section minimization. Additionally, the critical section merging apparatus 441 may summarize the dependence graph so constructed. Moreover, the critical section merging apparatus 441 may merge critical sections based on the summarized dependence graph. Finally, the critical section merging apparatus 441 may apply latency-sensitive optimizations to hide resource access latency.
The compiler 400 may include a register allocator unit 450. The register allocator unit 450 identifies data in the intermediate representation that may be stored in registers in the processor rather than in memory. Additionally, the compiler 400 may include a code generator 460. The code generator 460 converts the intermediate representation into machine or assembly code.
The critical section merging apparatus 500 may include a minimization unit 520. The minimization unit 520 may perform critical section minimization, employing any method or combination of methods to minimize each critical section by identifying instructions that could be executed outside of the critical section and moving those instructions out of the critical section. The commonly assigned U.S. patent application Ser. No. 10/582,204 entitled “Scheduling Multithreaded Programming Instructions Based on Dependency Graph,” filed by Xiaofeng Guo, Jinquan Dai, and Long Li with an effective filing date of Jan. 26, 2006, and the commonly assigned U.S. patent application Ser. No. 10/582,427 entitled “Latency Hiding of Traces Using Block Coloring,” filed by Xiaofeng Guo, Jinquan Dai, Long Li, and Zhiyuan Lv with an effective filing date of Nov. 17, 2005, describe methods for shortening critical sections and thus minimizing their length. These two U.S. patent applications are incorporated by reference herein.
As mentioned above, using multi-threading technology is one approach to shortening critical sections. It is estimated that if all memory access latency can be hidden, and the computations outside of critical sections can be used to hide that latency, then by using multi-threading technology on a single processor the execution speed of the program may be sped up by:

$$\mathrm{Speedup} = \frac{C_c + \sum_{i=1}^{C_m} L_i}{C_c} \qquad (1)$$

where $C_c$ denotes the cycles for computation, $C_m$ denotes the number of memory accesses, and $L_i$ denotes the latency of the $i$-th memory access. When multiple processors or processing cores are used for the multiple threads, the execution speed of the program may be sped up by:

$$\mathrm{Speedup} = \frac{C_c + \sum_{i=1}^{C_m} L_i}{C_{cs}} \qquad (2)$$

where $C_{cs}$ denotes the computations in the largest critical section. It may be noted from Equation (2) that the critical section size is one of the most important parameters in evaluating the performance of a multi-processor system.
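By way of a worked example of Equations (1) and (2), using assumed, illustrative cycle counts rather than measured ones, the following Python snippet computes both speedups:

```python
# Worked example of Equations (1) and (2); all numbers below are
# illustrative assumptions, not measurements.
C_c = 1000                     # cycles of computation
L = [100, 150, 120]            # latencies of the C_m = 3 memory accesses
C_cs = 200                     # computation in the largest critical section

total = C_c + sum(L)           # execution time with latencies fully exposed

speedup_single = total / C_c   # Eq. (1): all latency hidden, one processor
speedup_multi = total / C_cs   # Eq. (2): largest critical section dominates

print(f"single-processor speedup: {speedup_single:.2f}")  # 1.37
print(f"multi-processor speedup:  {speedup_multi:.2f}")   # 6.85
```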
The critical section merging apparatus 500 may include a dependence unit 530. The dependence unit 530 generates an instruction dependence graph of the instructions in the code. According to an embodiment of the critical section merging apparatus 500, the dependence unit 530 generates the instruction dependence graph by constructing a control flow graph of the code, computing the flow dependence and output dependence of instructions by using a forward and disjunctive data flow, computing the anti-dependence of the instructions by using a backward and disjunctive data flow, and combining the flow and output dependences with the anti-dependences. It should be appreciated that other techniques may be used to generate the instruction dependence graph.
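As a simplified, non-authoritative sketch of the dependences involved, the following Python code computes flow, anti-, and output dependences over a straight-line trace by direct def/use scanning. The actual dependence unit 530 operates over a control flow graph with forward and backward disjunctive data flows; this linear scan is an assumption made for brevity.

```python
# Simplified sketch: instruction-level dependences over a straight-line
# trace. Each instruction is a (defs, uses) pair of register/resource names.
def build_dependences(trace):
    edges = set()
    for j, (defs_j, uses_j) in enumerate(trace):
        for i in range(j):
            defs_i, uses_i = trace[i]
            if defs_i & uses_j:
                edges.add((i, j, "flow"))    # def then use
            if uses_i & defs_j:
                edges.add((i, j, "anti"))    # use then redefinition
            if defs_i & defs_j:
                edges.add((i, j, "output"))  # def then redefinition
    return edges

trace = [
    ({"r1"}, set()),     # 0: r1 = load mem
    ({"r2"}, {"r1"}),    # 1: r2 = r1 + 1
    ({"r1"}, set()),     # 2: r1 = load mem2
]
print(sorted(build_dependences(trace)))
# [(0, 1, 'flow'), (0, 2, 'output'), (1, 2, 'anti')]
```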
The critical section merging apparatus 500 may include a graph summary unit 540. The graph summary unit 540 generates a summarized graph reflecting only the instructions that protect and release the critical sections. According to an embodiment of the critical section merging apparatus 500, the graph summary unit 540 generates the summarized graph by building a transitive closure of the instruction dependence graph generated by the dependence unit 530, adding an edge from a node n to a node m if there is a path from node n to node m in the instruction dependence graph, where n and m represent instructions that start or release a critical section, or instructions of resource accesses. It should be appreciated that other techniques may be used to generate the summarized dependence graph.
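The following sketch illustrates the summarization step as described above: it builds the transitive closure of a dependence graph and keeps an edge from node n to node m among the nodes of interest (critical-section start/release and resource-access instructions) whenever a path existed in the full graph. The function and variable names are illustrative only.

```python
# Sketch of graph summarization: transitive closure, then restriction
# to the "interesting" nodes (critical-section start/release and
# resource-access instructions).
def summarize(nodes, edges, interesting):
    succ = {n: set() for n in nodes}
    for a, b in edges:
        succ[a].add(b)
    reach = {n: set() for n in nodes}

    def dfs(n, start):                 # transitive closure by DFS
        for m in succ[n]:
            if m not in reach[start]:
                reach[start].add(m)
                dfs(m, start)

    for n in nodes:
        dfs(n, n)
    # Keep an edge n -> m whenever a path n -> ... -> m existed.
    return {(n, m) for n in interesting for m in interesting
            if m in reach[n]}

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
print(sorted(summarize(nodes, edges, {0, 3})))   # [(0, 3)]
```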
The critical section merging apparatus 500 may include a merger unit 550. The merger unit 550 merges critical sections based on the summarized dependence graph generated by the graph summary unit 540. After the summarized dependence graph is created, the merger unit 550 may select certain critical sections to merge based on the rule below:
[Rule] Merge CS1 and CS2 if and only if:
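Because the specific conditions of the rule do not appear above, the following sketch assumes a simplified stand-in condition: that CS1 and CS2 may merge when the summarized graph shows no third node that must execute strictly between CS1's exit and CS2's entry. This assumed condition is an illustration only, not the actual rule of this disclosure.

```python
# ASSUMED merge condition (a placeholder, not this disclosure's rule):
# CS1 and CS2 may merge if no other summarized node must execute
# strictly between CS1's exit and CS2's entry. Because the summarized
# graph is transitively closed, path checks reduce to edge checks.
def can_merge(summary_edges, cs1_exit, cs2_enter, all_nodes):
    between = {k for k in all_nodes
               if (cs1_exit, k) in summary_edges
               and (k, cs2_enter) in summary_edges}
    return not between

edges = {("exit1", "enter2")}   # direct dependence, nothing in between
print(can_merge(edges, "exit1", "enter2", {"exit1", "enter2", "access"}))  # True
```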
The critical section merging apparatus 500 may include a general optimization unit 560. The general optimization unit 560 applies general optimization methods such as code motion, code scheduling, and copy optimizations to hide resource access latency. The commonly assigned U.S. patent application Ser. No. 10/582,427 entitled “Latency Hiding of Traces Using Block Coloring,” filed by Xiaofeng Guo, Jinquan Dai, Long Li, and Zhiyuan Lv with an effective filing date of Nov. 17, 2005, describes several approaches to optimizing code so that resource access latencies may be hidden. This patent application is incorporated by reference herein.
It should be appreciated that any other techniques used for constructing an instruction-level dependence graph and summarizing the dependence graph so constructed may be used at block 620 and block 630. The commonly assigned PCT patent application No. PCT/CN2005/002307 entitled “Automatic Critical Section Ordering for Parallel Program,” filed by Jinquan Dai, Long Li, and Xiaofeng Guo on Dec. 24, 2005 describes approaches to constructing an instruction dependence graph and summarizing the graph so constructed. This patent application is incorporated by reference herein.
At block 640, critical sections may be selected to merge based on the rule described above.
At block 650, optimizations may be applied to hide resource access latency. Any optimization approach may be applied, for example, those mentioned above.
Although an example embodiment of the disclosed subject matter is described with reference to block and flow diagrams, persons of ordinary skill in the art will readily appreciate that many other methods of implementing the disclosed subject matter may alternatively be used.
In the preceding description, various aspects of the disclosed subject matter have been described. For purposes of explanation, specific numbers, systems and configurations were set forth in order to provide a thorough understanding of the subject matter. However, it is apparent to one skilled in the art having the benefit of this disclosure that the subject matter may be practiced without the specific details. In other instances, well-known features, components, or modules were omitted, simplified, combined, or split in order not to obscure the disclosed subject matter.
Various embodiments of the disclosed subject matter may be implemented in hardware, firmware, software, or a combination thereof, and may be described by reference to or in conjunction with program code, such as instructions, functions, procedures, data structures, logic, application programs, or design representations or formats for simulation, emulation, and fabrication of a design, which, when accessed by a machine, results in the machine performing tasks, defining abstract data types or low-level hardware contexts, or producing a result.
For simulations, program code may represent hardware using a hardware description language or another functional description language which essentially provides a model of how designed hardware is expected to perform. Program code may be assembly or machine language, or data that may be compiled and/or interpreted. Furthermore, it is common in the art to speak of software, in one form or another, as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of program code by a processing system causes a processor to perform an action or produce a result.
Program code may be stored in, for example, volatile and/or non-volatile memory, such as storage devices and/or an associated machine readable or machine accessible medium including solid-state memory, hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks, digital versatile discs (DVDs), etc., as well as more exotic mediums such as machine-accessible biological state preserving storage. A machine readable medium may include any mechanism for storing, transmitting, or receiving information in a form readable by a machine. Program code may be transmitted in the form of packets, serial data, parallel data, propagated signals, etc., and may be used in a compressed or encrypted format.
Program code may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, set top boxes, cellular telephones and pagers, and other electronic devices, each including a processor, volatile and/or non-volatile memory readable by the processor, at least one input device and/or one or more output devices. Program code may be applied to the data entered using the input device to perform the described embodiments and to generate output information. The output information may be applied to one or more output devices. One of ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multiprocessor or multiple-core processor systems, minicomputers, mainframe computers, as well as pervasive or miniature computers or processors that may be embedded into virtually any device. Embodiments of the disclosed subject matter can also be practiced in distributed computing environments where tasks may be performed by remote processing devices that are linked through a communications network.
Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally and/or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter. Program code may be used by or in conjunction with embedded controllers.
While the disclosed subject matter has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as other embodiments of the subject matter, which are apparent to persons skilled in the art to which the disclosed subject matter pertains are deemed to lie within the scope of the disclosed subject matter.